3 Practical Lessons From an AI Investing Expert

By The Motley Fool

Key Concepts

  • The Evolution of AI & the Shift from Truth to Sense-Making: AI has progressed from rule-based systems focused on truth to modern LLMs prioritizing coherent output, even at the expense of factual accuracy.
  • Bounded Rationality & the Limits of Human & Artificial Intelligence: Both humans and AI operate with cognitive limitations, relying on heuristics and patterns rather than perfect rationality.
  • Trust & Risk Assessment in AI: Trust in AI should be proportional to the frequency and severity of errors, varying significantly based on application.
  • The Potential for AI Governance & Disempowerment: Concerns exist about AI becoming a gatekeeper, potentially disempowering humans through passive acceptance and algorithmic bias.
  • The Importance of Human Agency & Critical Consumption: Individuals must be aware of AI’s limitations and consume it responsibly to avoid cognitive decline and maintain agency.

Early Influences & the Foundations of AI (Part 1)

Vasant Dhar’s journey began with an unconventional upbringing in 1950s Kashmir, India, and Ethiopia, fostering resilience and adaptability. A formative experience of being mistakenly advanced a grade instilled a lifelong capacity for rapid learning. His academic path led him to the University of Pittsburgh in the 1980s, where he encountered Herbert Simon’s concept of “bounded rationality.” This challenged the economic assumption of perfect rationality, proposing that humans operate with limited cognitive resources and rely on heuristics. Early AI focused on representing knowledge and utilizing these heuristics, exemplified by the INTERNIST project – a 1970s medical diagnosis system developed through ten years of interaction with a medical expert. AI evolved from rule-based systems to machine learning (prediction-focused) and then to deep learning (direct perception), shifting from explicit knowledge representation to learning from data. Dhar’s “DR’s Conjecture” posits that patterns often emerge before the reasons for them become apparent, mirroring the process of scientific discovery.

Pattern Recognition & Early Applications (Part 1)

Dhar applied genetic algorithms to real-world problems, demonstrating the power of pattern recognition. At Morgan Stanley, he developed trading strategies, discovering that volatility impacted trade profitability by a factor of three – a pattern human traders hadn’t explicitly identified. Analyzing AC Nielsen data (tracking 50,000 households), he uncovered patterns like older women in the Northeast shopping on Thursdays (coupon day). His current project involves building an AI bot based on Aswath Damodaran’s valuation methodology, highlighting the challenges of replicating human reasoning. The success of Roger Federer in tennis was used as an analogy for the power of compounding small edges, mirroring a systematic investing approach (Federer won 80% of matches but only 54% of points). The genetic algorithm involves evolving solutions through a population of queries, evaluating performance, and iteratively refining them.
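The evolutionary loop described above (maintain a population of candidates, score each, keep the fittest, and breed mutated variants) can be sketched as a toy genetic algorithm. This is an illustrative example only, not Dhar's actual system: the fitness function here is a made-up bit-matching task standing in for trading-strategy performance.

```python
import random

random.seed(0)

# Toy stand-in for "strategy quality": how closely a candidate bit-string
# matches a hidden target pattern. Real applications would score candidates
# against market data instead.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(candidate):
    """Score a candidate by the number of positions matching the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """Flip each bit with a small probability to explore nearby variants."""
    return [1 - b if random.random() < rate else b for b in candidate]

def crossover(a, b):
    """Combine two parents at a random single point."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size=30, generations=50):
    """Evolve a population: evaluate, select the fittest, refine iteratively."""
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]  # keep the fittest half
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
```

The key design point mirrors the interview: no candidate is designed top-down; good solutions emerge from repeated evaluation and recombination, with patterns surfacing before anyone can articulate why they work.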

The Erosion of Truth & the Rise of “Sense-Making” (Part 2)

The conversation shifted to the evolving relationship between truth and AI. Modern generative AI, like LLMs, prioritizes “making sense” – predicting the next word – over adhering to truth, leading to “hallucinations” where the AI fabricates information. This contrasts with earlier AI’s focus on axiomatic truth and sound rules of inference. Truth is now imposed onto LLMs through human feedback, a process Dhar deems “fairly arbitrary.” The quality of training data is crucial; exposing an LLM to misinformation (e.g., moon landing hoax claims) could lead it to adopt and propagate falsehoods. Dhar stated, “Truth has become a casualty and an afterthought.”
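The "making sense over truth" point can be made concrete with a deliberately tiny sketch. A next-word predictor, reduced here to a toy bigram model (an assumption for illustration; real LLMs are vastly more sophisticated), generates fluent continuations from whatever its training text contains, with no mechanism for checking whether the result is true.

```python
import random
from collections import defaultdict

random.seed(1)

# Toy corpus: the model has no notion of truth, only of which words
# tend to follow which. A hoax claim in the training data is treated
# exactly like a factual one.
corpus = "the moon landing was staged the moon landing was televised".split()

# Build bigram statistics: for each word, the words observed to follow it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=4):
    """Produce a fluent continuation by sampling a likely next word."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

sentence = generate("the")
```

The model outputs either "the moon landing was staged" or "the moon landing was televised" with equal confidence: both "make sense" given the data, which is exactly why training-data quality matters and why truth must be imposed afterward through human feedback.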

Assessing Trust & the Cost of Errors (Part 2)

Dhar introduced a framework for assessing trust in AI, drawing parallels from algorithmic trading. He argues trust should be proportional to the frequency of errors and the severity of their consequences. He noted his trading algorithms have a win rate of 50-53%, meaning they are incorrect almost half the time. He contrasted this with applications impacting health or safety, requiring much higher reliability. He cited tragic examples of teenagers committing suicide after forming attachments to AI companions, emphasizing the lack of “duty of care” in such applications and the potential for high-cost errors.
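The trust framework above reduces to a simple expected-cost comparison: weight how often a system errs by how much each error costs. The numbers below are illustrative assumptions, not figures from the interview, chosen to contrast a frequently-wrong-but-low-stakes trading algorithm with a rarely-wrong-but-high-stakes safety application.

```python
def expected_error_cost(error_rate, cost_per_error):
    """Expected cost per decision: error frequency times error severity."""
    return error_rate * cost_per_error

# A trading algorithm wrong ~47% of the time (53% win rate), where each
# individual loss is small and recoverable.
trading_risk = expected_error_cost(error_rate=0.47, cost_per_error=1.0)

# A hypothetical health/safety application that errs rarely, but where
# each failure is catastrophic.
safety_risk = expected_error_cost(error_rate=0.001, cost_per_error=10_000.0)
```

Despite erring almost half the time, the trading system carries far less expected cost per decision than the rarely-failing safety system, which is why appropriate trust in AI varies so sharply by application.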

Governance, Disempowerment & Responsible Consumption (Part 2)

The discussion pivoted to the question: “Will we govern AI or will AI govern us?” Dhar expressed concern about a “Huxleyan world” where humans are gradually disempowered, with AI becoming a “gatekeeper” in areas like job applications. He stressed this could occur through passive acceptance, not malicious intent. He identified governments, academics, big tech, and individual users as crucial stakeholders in preventing this outcome. He advocated for individual awareness and responsible consumption of AI, warning against using it as a “crutch” that leads to “cognitive decline,” referencing his own experience with navigation apps diminishing his spatial reasoning skills. He echoed Jonathan Haidt’s observation about the harm caused by social media, suggesting AI’s potential for harm is even greater. While acknowledging the debate around censorship (citing Ezra Klein and Sam Altman), he ultimately placed the onus on the consumer to be discerning.

The Value of Human Authorship (Part 2)

Dhar’s book aims to provide a framework for navigating the AI landscape, emphasizing its potential to amplify human skills if used correctly. He chose to write the book himself, rather than using ChatGPT, highlighting the importance of personal expression, enjoyment, and the sense of accomplishment derived from independent creation. He concluded, “Right now, the machines don't write as well as us, for now,” but also emphasized the intrinsic value of authorship.


Conclusion:

The interview highlights a critical juncture in the development of AI. While AI has made remarkable progress in pattern recognition and “sense-making,” its detachment from truth and the potential for disempowerment raise significant concerns. A proactive approach to governance, coupled with individual awareness and responsible consumption, is crucial to harnessing AI’s potential while mitigating its risks. The emphasis on human agency, critical thinking, and the intrinsic value of human creation underscores the importance of maintaining a balanced relationship with this rapidly evolving technology.
