Impressions on Frankel and Kamenica (2019)
Title: Quantifying Information and Uncertainty (link to the paper)
Authors: Frankel, Kamenica
Year published: 2019
Journal: American Economic Review
This paper characterizes the desirable features of "the value of information" when value is understood instrumentally: information is valuable insofar as it informs our decisions.
- Information is valuable to the extent that it is able to change our minds
- Information is not measured by the amount of data that actually passes through us (bits and bytes, etc.). Instead, it is construed as a distance-like function \(d\) between two belief vectors \(p\) and \(q\).
- I say "distance-like" rather than "metric" because later they say that "metrics on the space \(\Delta(\Omega)\) will never satisfy one of the nice properties"
- The nice properties of measures of information are
- null information: \(d(q,q)=0 \ \forall \ q\)
- positivity: \(d(p,q)\geq 0 \ \forall\ (p,q)\)
- order-invariance: the expected sum of information generated by two signals should be the same no matter which order the signals are introduced
- A measure of information is valid iff it satisfies all three properties above (a quick numerical check of the first two is sketched just after this list)
- But only when the signals cause a shift in beliefs from \(q\) to \(p\) can we start to talk about the value of information, and the cost of uncertainty
- For sellers of information whose payment depends on "how much the information changes the buyer's mind", a payment structure that satisfies the nice properties above, order-invariance in particular, keeps the seller from profitably withholding information
- Though of course there are restrictions on the types of actions the seller may take in the first place
- He can present alternative signals so long as they are mean-preserving (but how is this ensured?)
- He can delay earlier signals but he must eventually present all signals received
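As a quick sanity check on the first two properties, here is a minimal Python sketch (my own, not the paper's) using Kullback-Leibler divergence as one natural candidate for \(d\); order-invariance involves expectations over signal realizations and is not checked here.

```python
import numpy as np

def kl_divergence(p, q):
    """A candidate measure of information d(p, q): KL divergence of the
    posterior p from the prior q, both probability vectors over Omega."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0                      # convention: 0 * log 0 = 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

rng = np.random.default_rng(0)
for _ in range(5):
    q = rng.dirichlet(np.ones(3))     # random prior over 3 states
    p = rng.dirichlet(np.ones(3))     # random posterior

    assert abs(kl_divergence(q, q)) < 1e-12   # null information: d(q, q) = 0
    assert kl_divergence(p, q) >= 0           # positivity: d(p, q) >= 0
```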
The model
- There is a finite set of possible states of the world: \(\Omega=\{1,\cdots,n\}\) with typical element \(\omega\)
- Given a utility function \(u(a,\omega)\), there is an optimal action \(a\in\mathcal{A}\) (the set of all possible actions) for each state of the world.
- Agents do not know the actual state of the world but they have a belief (prior) about it.
- There are many possible probability distributions over \(\Omega\): The set of all probability distributions is \(\Delta(\Omega)\) with typical elements \(p\) and \(q\)
- These probability distributions represent priors and posteriors held by some agent
- An agent can move from a prior \(q\) to a posterior \(p\) by observing a signal \(\pi\), which is a finite partition of the space \(\Omega\times[0,1]\)
- The signals themselves do not play an important role at the moment, but we will come back to them later
- The only important thing for now is that observing a signal leads to a change in beliefs from \(q\) to \(p\) (a toy Bayesian-updating sketch follows this list)
- Updating beliefs matters because agents maximize expected utility, which depends on both their action and the state of the world
- How useful information is depends on how much it can change an agent's mind.
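To make the updating step concrete, here is a small sketch of Bayesian updating (my own construction; the paper's signals are partitions of \(\Omega\times[0,1]\), whereas here a signal is represented by a made-up likelihood matrix whose row \(\omega\) gives the probability of each realization in state \(\omega\)). It also checks that the posteriors average back to the prior, i.e. the mean-preserving property mentioned earlier.

```python
import numpy as np

def posterior(prior, likelihood, s):
    """Bayes' rule: likelihood[w, s] = P(signal realization s | state w)."""
    joint = prior * likelihood[:, s]          # P(state w and realization s)
    return joint / joint.sum()                # P(state w | realization s)

q = np.array([0.5, 0.3, 0.2])                 # prior over 3 states
likelihood = np.array([[0.9, 0.1],            # a binary signal that is
                       [0.4, 0.6],            # informative about state 0
                       [0.4, 0.6]])

for s in (0, 1):
    print(f"realization {s}: posterior p =", np.round(posterior(q, likelihood, s), 3))

# Posteriors average back to the prior (beliefs are a martingale),
# which is the "mean-preserving" restriction mentioned above.
marginal = q @ likelihood                     # P(realization s)
avg_posterior = sum(marginal[s] * posterior(q, likelihood, s) for s in (0, 1))
assert np.allclose(avg_posterior, q)
```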
The value of information for a decision problem \(\mathcal{D}=(\mathcal{A},u)\) is defined for any pair of beliefs \((p,q)\in\Delta(\Omega)\times \Delta(\Omega)\). Writing \(a^*(r)=\arg\max_{a\in\mathcal{A}}\mathbb{E}_r\left[u(a,\omega)\right]\) for the action that is optimal under belief \(r\),
$$v_{\mathcal{D}}(p,q)=\underbrace{\mathbb{E}_p\left[u(a^*(p),\omega)\right]}_{\text{expected payoff (under }p\text{) of the action optimal under the new belief }p}- \underbrace{\mathbb{E}_p\left[u(a^*(q),\omega)\right]}_{\text{expected payoff (under }p\text{) of the action optimal under the old belief }q}$$
- The information here is implicit in the formulation in that it takes some signal \(\pi\) to update the belief from \(q\) to \(p\).
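A toy numerical version of \(v_{\mathcal{D}}(p,q)\), again my own sketch: a made-up two-state, two-action matching problem where the payoff is 1 if the action matches the state and 0 otherwise, with both expectations taken under the new belief \(p\).

```python
import numpy as np

# Decision problem D = (A, u): u[a, w] is the payoff of action a in state w.
u = np.array([[1.0, 0.0],     # action 0 is right in state 0
              [0.0, 1.0]])    # action 1 is right in state 1

def a_star(belief):
    """Optimal action under a belief: argmax_a E_belief[u(a, w)]."""
    return int(np.argmax(u @ belief))

def value_of_information(p, q):
    """v_D(p, q): expected gain, under p, from acting on p rather than on q."""
    return float(u[a_star(p)] @ p - u[a_star(q)] @ p)

q = np.array([0.6, 0.4])      # old belief: state 0 looks likelier
p = np.array([0.2, 0.8])      # new belief: state 1 now looks likelier

print(value_of_information(p, q))   # 0.8 - 0.2 = 0.6: the shift changed the optimal action
print(value_of_information(q, q))   # 0.0: no change in beliefs, no instrumental value
```

If the belief shift is too small to change the optimal action, \(v_{\mathcal{D}}\) is zero, which is the "information is valuable only if it changes our minds" point from above.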
A cognate quantity for the same decision problem \(\mathcal{D}\) is the cost of uncertainty, which is defined by
$$C_{\mathcal{D}}(q) =\mathbb{E}_q \left[\max_a u(a,\omega)\right] - \max_a \mathbb{E}_q \left[u(a,\omega)\right]$$
- \(\mathbb{E}_q \left[\max_a u(a,\omega)\right]\) is interpreted as the expected payoff to the decision-maker if she were to learn the true state of the world before taking the action
- \(\max_a \mathbb{E}_q \left[u(a,\omega)\right]\) is interpreted as the expected payoff to the decision-maker from the action that is optimal given belief \(q\)
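Continuing the same made-up two-state problem, a sketch of \(C_{\mathcal{D}}(q)\) that makes the two terms explicit. The difference is where the max sits: inside the expectation (choose the best action state by state, then average over states) versus outside (average first, then commit to a single action).

```python
import numpy as np

u = np.array([[1.0, 0.0],     # u[a, w]: payoff of action a in state w
              [0.0, 1.0]])

def cost_of_uncertainty(q):
    q = np.asarray(q, dtype=float)
    # E_q[max_a u(a, w)]: best action chosen state by state, then averaged over states.
    learn_state_first = float(q @ u.max(axis=0))
    # max_a E_q[u(a, w)]: one action chosen against the average over states.
    act_on_belief_only = float((u @ q).max())
    return learn_state_first - act_on_belief_only

print(cost_of_uncertainty([0.6, 0.4]))   # 1.0 - 0.6 = 0.4
print(cost_of_uncertainty([1.0, 0.0]))   # 0.0: no uncertainty left, so no cost
```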
Some thoughts
- The notation \(\mathbb{E}_q \left[\max_a u(a,\omega)\right]\) confused me at first (or maybe I'm just dumb): is the state of the world \(\omega\) already assumed to be the correct one in the parts involving the utility function? I didn't see how the expression indicates that the true state was learned. I think the answer is that the max sits inside the expectation: for each realized \(\omega\), the decision-maker takes the action that is best in that state, and only then do we average over \(\omega\) under \(q\), which is exactly what "learning the state before acting" means (the toy computation above makes the two orderings explicit).
- This paper reminds me of the Artzner et al. (1999) paper on coherent measures of risk
Interesting things to look at later
- Do this but with ambiguity