Ep73 “The Dangers of Group Think on Decision Making” with Adi Sunderam

By Stanford Graduate School of Business


All Else Equal Podcast: Bayesian Updating & Decision-Making – A Detailed Summary

Key Concepts: Bayesian Updating, Prior Beliefs, Dogmatic Prior, Model-Based Decision Making, Confirmation Bias, Echo Chambers, Monty Hall Problem, Rationality vs. Intuition, Selective Exposure, Interpretation of Data, Model Selection.

I. Introduction: The Challenge of Bayesian Updating

The podcast episode, featuring Jules van Binsbergen (University of Pennsylvania) and Jonathan Berk (Stanford University), centers on the difficulties humans face in applying Bayesian updating – the statistically optimal method for incorporating new information into existing beliefs. Both professors agree that while mathematically sound, Bayesian updating is profoundly counterintuitive and often misapplied in real-world decision-making. Berk notes his reliance on explicitly working through the equations to avoid intuitive errors. Van Binsbergen highlights the existence of well-known examples, like the Monty Hall problem, where the correct Bayesian solution clashes with common sense.
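
For readers who want the mechanics, here is a minimal numerical sketch of Bayes’ rule in Python (the medical-testing numbers are invented for illustration, not taken from the episode). It shows why intuition goes wrong: most people guess the answer is near 95%, but the low prior dominates.

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E).
# Illustrative numbers: a condition with 1% prevalence and a test
# with a 95% true-positive rate and a 5% false-positive rate.
prior = 0.01
p_pos_given_cond = 0.95     # P(positive | condition)
p_pos_given_healthy = 0.05  # P(positive | healthy)

p_pos = p_pos_given_cond * prior + p_pos_given_healthy * (1 - prior)
posterior = p_pos_given_cond * prior / p_pos
print(f"P(condition | positive test) = {posterior:.3f}")  # ~0.161, not ~0.95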

II. The Monty Hall Problem: A Classic Illustration

The Monty Hall problem is presented as a prime example of Bayesian updating’s unintuitive nature. The scenario involves a contestant choosing one of three doors, one concealing a prize. After the contestant’s initial choice, the host reveals a losing door that wasn’t selected by the contestant. The contestant is then offered the chance to switch doors. The correct Bayesian strategy – switching doors – is often misunderstood, with many believing the odds become 50/50. Berk explains the logic: the host’s action provides information. The initial pick is correct only one-third of the time, and because the host always reveals a losing door, the remaining two-thirds of the probability concentrates on the other unchosen door, so switching wins two-thirds of the time.
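
To check this without working through the algebra, here is a minimal Monte Carlo sketch (my own illustration, not from the episode); the simulated switch strategy wins about two-thirds of the time.

import random

def monty_hall(switch: bool, trials: int = 100_000) -> float:
    """Simulate the game and return the contestant's win rate."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)   # door hiding the prize
        pick = random.randrange(3)    # contestant's initial choice
        # Host opens a losing door the contestant did not pick.
        opened = next(d for d in range(3) if d != pick and d != prize)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == prize)
    return wins / trials

print(f"switch: {monty_hall(True):.3f}")   # ~0.667
print(f"stay:   {monty_hall(False):.3f}")  # ~0.333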

III. Beyond Simple Mistakes: The Rise of Dogmatic Beliefs & Echo Chambers

The discussion shifts from typical Bayesian errors to a more troubling phenomenon: the persistence of demonstrably false beliefs (e.g., conspiracy theories such as the claim that Kennedy was killed by the CIA). Berk argues that standard cognitive-bias explanations don’t fully account for this behavior. Van Binsbergen proposes reframing the question: instead of trying to disprove entrenched beliefs, ask what evidence would change the person’s mind. The inability to identify any such evidence defines a “dogmatic prior” – a belief impervious to data.
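
The term has a precise Bayesian meaning: a prior of exactly 1 (or 0) is a fixed point of Bayes’ rule, so no likelihood, however lopsided, can move it. A minimal sketch (my own illustration, not from the episode):

def posterior(prior: float, lik_h: float, lik_not_h: float) -> float:
    """One step of Bayes' rule for a binary hypothesis."""
    return lik_h * prior / (lik_h * prior + lik_not_h * (1 - prior))

# Evidence 999x more likely if the hypothesis is false...
print(posterior(0.90, 0.001, 0.999))  # ~0.009: a non-dogmatic prior moves
print(posterior(1.00, 0.001, 0.999))  # 1.0: a dogmatic prior never moves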

IV. Introducing the Model: Sharing Models to Interpret Data

The podcast introduces the research paper “Sharing Models to Interpret Data” by Adi Sunderam (Harvard Business School) and Josh Schwartzstein. Sunderam explains the core innovation: the authors propose that individuals operate with a limited set of “models” or explanations for events. They are more inclined to interpret new information in ways that fit these pre-existing models, rather than considering a broader range of possibilities. This isn’t necessarily irrational; it’s a computational shortcut. The problem arises when the true explanation lies outside this limited set.

V. The Mechanics of Model-Based Interpretation

Sunderam elaborates on the model. Individuals don’t necessarily lack the capacity for Bayesian updating, but they restrict the scope of possibilities considered. They prioritize explanations that “fit well” with existing beliefs, even if those explanations are unlikely. Critically, they underestimate the flexibility of alternative interpretations offered by others. This leads to a situation where, even with new data, individuals can reinforce incorrect beliefs by finding increasingly convoluted explanations that support their pre-existing worldview.
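
A simplified sketch of this mechanism (my own toy version, not the paper’s formal model; the hypotheses and success rates are invented): an agent who applies Bayes’ rule correctly, but only over the explanations in her model set, typically becomes near-certain of a wrong story when the true one (“luck”) is never considered.

import random

random.seed(0)
# True data-generating process: pure luck, success probability 0.5.
signals = [random.random() < 0.5 for _ in range(1_000)]

RATES = {"skill": 0.65, "luck": 0.5, "anti-skill": 0.35}  # P(success | h)

def bayes(model_set, data):
    # Exact Bayesian updating, restricted to the hypotheses considered.
    post = {h: 1.0 / len(model_set) for h in model_set}
    for s in data:
        post = {h: p * (RATES[h] if s else 1.0 - RATES[h]) for h, p in post.items()}
        z = sum(post.values())
        post = {h: p / z for h, p in post.items()}
    return post

# Full model set: posterior mass piles onto the true explanation, "luck".
print(bayes(["skill", "luck", "anti-skill"], signals))
# "Luck" omitted: the agent typically ends up near-certain of one of the
# remaining (wrong) stories; which one depends on the sample.
print(bayes(["skill", "anti-skill"], signals))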

VI. Real-World Examples: COVID-19 & Financial Markets

The podcast provides concrete examples. The early resistance to the possibility of a lab-leak origin for COVID-19 is cited: as evidence accumulated that was hard to square with a natural origin (e.g., a specific splice in the virus), proponents offered increasingly far-fetched explanations rather than entertaining the lab-leak hypothesis. In finance, the persistence of beliefs despite contradictory evidence (e.g., attributing stock market gains to skill rather than luck) is also discussed. Sunderam points out that even in finance, where sophisticated models exist, individuals often prioritize narratives that confirm their existing biases.

VII. Sociological Implications: Echo Chambers & Groupthink

Van Binsbergen extends the discussion to the sociological level, suggesting that echo chambers on social media exacerbate this phenomenon. Within these groups, certain explanations are simply not permissible, leading to collective “super-creativity” in devising increasingly elaborate justifications for pre-determined conclusions. He argues this is a waste of intellectual resources, even among intelligent individuals.

VIII. The Role of Interpretation & Confirmation Bias

Sunderam emphasizes that people are actively engaged in interpreting data, not simply ignoring it. The issue isn’t a lack of data analysis, but a selective interpretation that prioritizes confirming existing beliefs. He draws a parallel to the COVID-19 inflation debate, where “inflation doves” consistently found reasons to dismiss evidence of rising prices.

IX. Differentiating the Model from Traditional Irrationality

The discussion clarifies the distinction between the proposed model and traditional views of irrationality. The model doesn’t assume people are simply bad at Bayesian updating; it posits that they are updating within a constrained set of possibilities. Experiments demonstrating that providing a model improves belief updating (even if the model is later revealed to be flawed) support this view.

X. Potential Interventions & Mitigation Strategies

The professors explore potential interventions. Sunderam suggests that fostering a culture of considering multiple models, rather than framing debates as “for” or “against,” could be beneficial. He draws an analogy to the legal system, where prosecution and defense present opposing arguments. Van Binsbergen highlights the importance of recognizing selection effects, particularly on social media, where algorithms curate information to reinforce existing beliefs. He argues that awareness of this bias is crucial for more rational decision-making.

XI. The Power of Narrative & the Difficulty of Changing Minds

Sunderam acknowledges the difficulty of changing deeply held beliefs. Once someone has committed to an interpretation, it’s hard to revert to a state of open-mindedness. He suggests that asking individuals what evidence would change their minds is a more productive approach than simply presenting contradictory information.

Notable Quotes:

  • Jonathan Berk: “Bayesian updating is perhaps one of the least intuitive branches of mathematics and one of the most difficult to understand.”
  • Jules van Binsbergen: “Once people are willing to commit to [a piece of information that would change their mind], it's much easier to have the debate after.”
  • Adi Sunderam: “The truth need not win in the long run. Because after the fact, you can always come up with a story that sounds better than the truth.”

Conclusion:

The podcast offers a nuanced perspective on the challenges of rational decision-making. It moves beyond simple explanations of cognitive biases to propose a model where individuals operate within a limited set of pre-defined explanations, selectively interpreting data to fit their existing beliefs. This model has significant implications for understanding phenomena like conspiracy theories, echo chambers, and the persistence of false beliefs, and suggests that interventions focused on broadening the scope of considered possibilities and recognizing the power of narrative may be crucial for improving decision-making in a complex world.
