Impressions on Gagnon-Bartsch and Bushong (2022)
Title: Learning with Misattribution of Reference Dependence
Authors: Gagnon-Bartsch, Bushong
Year published: 2022
Journal: Journal of Economic Theory
This paper characterizes a particular type of mistaken belief updating called "misattribution of reference dependence".
- Utility in this model has two components: an absolute component and a gain-loss component relative to expectations.
- The "expectations" part is crucial: if a person forms the "wrong" expectation, then experienced utility is affected
- The same outcome can be perceived differently depending on the expectation, and this is exacerbated by the agent's loss aversion
- Utility is experienced conditional on these expectations, so wrong expectations have far-reaching effects into the future
- Loss aversion makes the updating of beliefs volatile but convergent to something different from the truth
The model
- An agent consumes \(x_t\) at time \(t\). \(x_t\in\mathbb{R}\) and also \(x_t=\theta+\epsilon_t\).
- Agents also have loss aversion. Therefore, the utility function is piecewise-linear:
$$ \begin{eqnarray*}
u(x_t\vert \hat{\theta})=x_t+\eta \cdot n(x_t,\hat{\theta})
\end{eqnarray*}$$ where
\(\hat{\theta}\) is some expectation (conditional or unconditional) about \(x_t\), \(\eta>0\) is the weight given to sensations of gain and loss relative to absolute outcomes and \(n(x_t,\hat{\theta})\) is
$$
\begin{eqnarray*}
n(x_t,\hat{\theta})=\begin{cases}
x_t-\hat{\theta} & \text{ if } x_t\geq \hat{\theta}\\
\lambda(x_t-\hat{\theta}) & \text{ if } x_t< \hat{\theta}
\end{cases}
\end{eqnarray*}
$$
where \(\lambda\geq 1\) captures loss aversion.
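As a quick sanity check, the piecewise-linear utility can be sketched in Python (a minimal illustration; the parameter values \(\eta=1\) and \(\lambda=2\) are arbitrary, not from the paper):

```python
def gain_loss(x, theta_hat, lam=2.0):
    """The sensation n(x, theta_hat): losses loom larger by a factor lam >= 1."""
    diff = x - theta_hat
    return diff if diff >= 0 else lam * diff

def utility(x, theta_hat, eta=1.0, lam=2.0):
    """u(x | theta_hat) = x + eta * n(x, theta_hat)."""
    return x + eta * gain_loss(x, theta_hat, lam)

# The same |x - theta_hat| = 1 feels different on each side of the reference point:
print(utility(11, 10))  # gain: 11 + 1*1      = 12.0
print(utility(9, 10))   # loss:  9 + 1*2*(-1) = 7.0
```

The asymmetry is the whole point: a gain of 1 relative to expectations adds 1 to the gain-loss term, while a loss of 1 subtracts \(\lambda\).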
- The learning part goes as follows:
- Agents know that \(x_t=\theta+\epsilon_t\). They do not know \(\theta\) but have a prior on it: \(\theta\sim N(\theta_0,\rho^2)\). They also know that \(\epsilon_t\sim N(0,\sigma^2)\) for known \(\sigma^2\)
- Updating depends on the type of agent
- For all agents, \(\hat{\theta}_{t}=(1-\alpha_t)\cdot \hat{\theta}_{t-1} +\alpha_t \cdot x_t^{perceived}\), where \(\alpha_t\) is the standard Bayesian weight on the period-\(t\) observation
- For a rational agent, \(x_t^{perceived}=x_t\) and \(\eta\) is correctly used by the agent.
- Thus, the rational agent uses his utility function \(u(x_t\vert\hat{\theta}_{t-1})=x_t+\eta \cdot n(x_t,\hat{\theta}_{t-1})\) correctly
- For a misguided agent, \(x_t^{perceived}=\hat{x}_t\) and he uses \(\hat{\eta}\in[0,\eta)\) instead of \(\eta\)
- Thus, the misguided agent uses the wrong parameters in his utility function \(u(x_t\vert\hat{\theta}_{t-1})\). In particular, \(\hat{x}_t\) is obtained by equating the true utility of \(x_t\) with the perceived utility of \(\hat{x}_t\):
$$
\begin{eqnarray*}
u(x_t\vert\hat{\theta}_{t-1})&=&\hat{u}(\hat{x}_t\vert\hat{\theta}_{t-1})\\
x_t + \eta \cdot n(x_t,\hat{\theta}_{t-1}) &=& \hat{x}_t + \hat{\eta} \cdot n(\hat{x}_t,\hat{\theta}_{t-1}) \\
\end{eqnarray*}
$$
In particular, if \(x_t\geq \hat{\theta}_{t-1}\) we have
$$
\begin{eqnarray*}
x_t + \eta \cdot (x_t -\hat{\theta}_{t-1}) &=& \hat{x}_t + \hat{\eta} \cdot (\hat{x}_t-\hat{\theta}_{t-1}) \\
\hat{x}_{t}&=&x_t+ \left(\frac{\eta-\hat{\eta}}{1+\hat{\eta}}\right)\cdot (x_t-\hat{\theta}_{t-1})\\
\hat{x}_{t}&=&x_t+ \kappa^G\cdot (x_t-\hat{\theta}_{t-1})\\
\end{eqnarray*}
$$
if \(x_t< \hat{\theta}_{t-1}\) we have
$$
\begin{eqnarray*}
x_t + \eta \cdot \lambda \cdot (x_t -\hat{\theta}_{t-1}) &=& \hat{x}_t + \hat{\eta} \cdot \lambda \cdot (\hat{x}_t-\hat{\theta}_{t-1}) \\
\hat{x}_{t}&=&x_t+ \left(\frac{\lambda\cdot(\eta-\hat{\eta})}{1+\lambda\cdot \hat{\eta}}\right)\cdot (x_t-\hat{\theta}_{t-1})\\
\hat{x}_{t}&=&x_t+ \kappa^L\cdot (x_t-\hat{\theta}_{t-1})\\
\end{eqnarray*}
$$
- In short, if there is no discrepancy between \(\eta\) and \(\hat{\eta}\), then \(\hat{x}_t=x_t\) and both types of agents update their beliefs about \(\theta\) correctly. With \(\hat{\eta}<\eta\), the agent over-infers from surprises, and more strongly from losses, since \(\kappa^L>\kappa^G\) whenever \(\lambda>1\)
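The whole learning loop is easy to simulate. A sketch in Python, under the assumption that \(\alpha_t\) takes the standard conjugate-normal form \(\alpha_t=\rho^2/(\sigma^2+t\rho^2)\); the specific parameter values are illustrative, not from the paper:

```python
import random

def perceived(x, theta_hat, eta=1.0, eta_hat=0.0, lam=2.0):
    """Misattributor's perceived outcome: x_hat = x + kappa * (x - theta_hat)."""
    if x >= theta_hat:
        kappa = (eta - eta_hat) / (1 + eta_hat)              # kappa^G
    else:
        kappa = lam * (eta - eta_hat) / (1 + lam * eta_hat)  # kappa^L
    return x + kappa * (x - theta_hat)

def simulate(theta=0.0, theta0=0.0, rho2=1.0, sigma2=1.0, T=5000,
             misguided=False, seed=0):
    rng = random.Random(seed)
    theta_hat = theta0
    for t in range(1, T + 1):
        x = theta + rng.gauss(0.0, sigma2 ** 0.5)
        alpha = rho2 / (sigma2 + t * rho2)  # Bayesian weight on observation t
        obs = perceived(x, theta_hat) if misguided else x
        theta_hat = (1 - alpha) * theta_hat + alpha * obs
    return theta_hat

rational = simulate(misguided=False)  # settles near the true theta = 0
biased = simulate(misguided=True)     # settles strictly below theta
```

With these (hypothetical) parameters the misguided belief converges below the true \(\theta\): since \(\kappa^L>\kappa^G\), negative surprises are over-inferred, matching the note above that beliefs converge to something different from the truth.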
Comments
- It is actually interesting that they find a negative covariance between beliefs in two consecutive periods before some time threshold.
- But in the end the driver of the main result is just the form of the utility function and the wrong \(\hat{\eta}\)
- So, wrong initial expectations basically create a serial dependence of wrongness
- This is where the overweighting of the most recent observation occurs.
Interesting things to look at later
- Ambiguity (again!)
- Collect the different styles of biased learning among different papers
- Try this out in a beauty-contest model. Is it possible to find an alternative reason for overweighting? The mechanism here is not about public versus private information, since all information is private. Would it worsen with the introduction of coordination?