I am an Economics graduate student at Harvard University. I use experiments and machine learning to draw insights into human behavior. You can reach me via email at cshubatt[at]g.harvard.edu.
This paper develops a theory of how tradeoffs govern the difficulty of comparisons. In our model, options are easier to compare when they involve less pronounced tradeoffs -- when they are 1) more similar feature-by-feature and 2) closer to dominance. These postulates yield tractable measures of comparison complexity in three domains: multiattribute, lottery, and intertemporal choice. Our model rationalizes multiple behavioral regularities, such as context effects, preference reversals, and apparent probability weighting and hyperbolic discounting. In choice data spanning all three domains, our model predicts errors, inconsistency, and cognitive uncertainty. Manipulating tradeoffs reverses classic behavioral regularities, in line with model predictions.
We develop interpretable, quantitative indices of the objective and subjective complexity of lottery choice problems that can be computed for any standard dataset. These indices capture the predicted error rate in identifying the lottery with the highest expected value. The most important complexity feature is the state-by-state dissimilarity of the lotteries in the set (“tradeoff complexity”). Using our complexity indices, we study behavioral responses to complexity out-of-sample across one million decisions in 11,000 unique binary choice problems. Complexity predicts strong attenuation of decisions to problem fundamentals, which can generate systematic biases in revealed preference measures, such as spurious risk aversion. These effects are large: complexity explains a greater fraction of estimated choice errors than proximity to indifference does. Accounting for complexity-driven attenuation in structural estimations substantially improves model fit, while complexity aversion explains a smaller fraction of the data.
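As a rough illustration of the idea behind tradeoff complexity, the sketch below computes one simple state-by-state dissimilarity measure for two lotteries defined on a common state space: the probability-weighted mean absolute payoff gap across states. The function name and exact formula are illustrative assumptions, not the paper's definition of the index.

```python
def dissimilarity(probs, payoffs_a, payoffs_b):
    """Illustrative state-by-state dissimilarity between two lotteries
    sharing a common state space: the probability-weighted mean absolute
    payoff gap across states. (A stand-in, not the paper's exact index.)"""
    assert abs(sum(probs) - 1.0) < 1e-9, "state probabilities must sum to 1"
    return sum(p * abs(a - b) for p, a, b in zip(probs, payoffs_a, payoffs_b))

# Two lotteries over three equally likely states; a larger value means
# more pronounced state-by-state tradeoffs, i.e. a harder comparison.
d = dissimilarity([1/3, 1/3, 1/3], [10, 0, 5], [4, 6, 5])
```

If one lottery state-wise dominated the other, every payoff gap would have the same sign, and a measure of this kind could be compared against the expected-value gap to quantify distance from dominance.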