How Big Data and AI Are Lying to Business Owners - Tom Wheelwright and C. Thi Nguyen
By The Rich Dad Channel
The Flawed Foundation of Big Data: A Deep Dive into C. Thi Nguyen’s “The Score”
Key Concepts:
- Value Capture: The phenomenon where individuals become dominated by simple metrics that fail to capture the fullness of their values.
- Value-Laden Technology: The idea that technology isn’t neutral; it inherently embodies specific values and biases.
- Objectivity Laundering: The process of presenting data as objective by obscuring the subjective decisions made during its collection and analysis.
- Mechanical Collection: Data gathered through standardized, automated processes, prioritizing portability over nuanced understanding.
- Portability vs. Context Sensitivity: The trade-off between data’s ability to be easily shared and its loss of rich, contextual detail.
- Qualitative vs. Quantitative Knowing: The distinction between descriptive, nuanced understanding and standardized, measurable data.
- Discount Rate (in Finance): A crucial, yet subjective, element in cost-benefit analysis determining the present value of future returns.
I. The Illusion of Data’s Omniscience
The discussion begins by establishing the pervasive influence of big data in modern life, where it drives AI, advertising, and business decisions. Dr. C. Thi Nguyen, however, challenges the assumption that “big data is the answer,” arguing that it has inherent limitations. The core premise is that while powerful, big data is fundamentally flawed by design, creating a gap between reality and what is easily measurable. This flaw has profound implications for both business success and personal fulfillment. The conversation highlights that the largest datasets are those collected cheaply and easily at scale, so they inherently prioritize quantifiable aspects over more complex, qualitative experiences.
II. The Philosophical Roots of the Problem
Dr. Nguyen’s background in philosophy, specifically game theory and the philosophy of games, provides a unique lens for analyzing the issue. He explains his two areas of specialization: how knowledge is collectively constructed (particularly through data in bureaucratic settings) and the nature of games as systems of scoring. He identified a fundamental tension: scoring systems in games can be liberating and encourage exploration, while scoring systems in institutions often stifle creativity and miss crucial information. This led to his book, The Score, which explores why scoring systems have such divergent effects in different contexts. His administrative experience quantifying student learning outcomes further solidified his concerns about applying rigid metrics to complex human endeavors.
III. Portability and the Loss of Context
A central argument revolves around the concept of “engineered portability.” Data, as used by institutions, is deliberately designed to be context-insensitive, allowing for easy aggregation and comparison across diverse settings. This process, however, necessitates “weeding out” high-context, subtle understandings. The analogy of 2+2=4 is used to illustrate this: a universally understood truth, but one that lacks the nuance required for complex real-world accounting decisions (“what do you want 2+2 to be?”). Theodore Porter’s work on quantification culture is cited, highlighting the distinction between qualitative (rich, context-sensitive) and quantitative (portable, standardized) ways of knowing. The trade-off is clear: ease of aggregation comes at the cost of depth and accuracy.
IV. Mechanical Collection and the Rise of Algorithms
The discussion delves into the nature of data collection itself, distinguishing between mechanical and non-mechanical methods. Lorraine Daston’s work on the history of rules is referenced, explaining the shift from “principles” (allowing for exceptions and judgment) to “algorithms” (rigid, rule-based procedures). The example of a recipe versus a skilled cook illustrates this point: a recipe provides mechanical instructions, while a cook adapts based on sensory input and experience. This mechanical nature extends to data collection, prioritizing what can be easily and consistently measured, even if it doesn’t reflect the full picture. The example of screen time versus a child’s creative engagement demonstrates this perfectly: screen time is easily quantifiable, but doesn’t capture the quality of the activity.
V. AI, Bias, and the Outsourcing of Judgment
The conversation pivots to the implications for artificial intelligence. Dr. Nguyen worries that AI will exacerbate the existing problems by focusing attention on easily measurable targets while neglecting more meaningful aspects of life. He introduces the concepts of “value capture” (outsourcing one’s values to simplistic metrics) and “value-laden technology” (the biases inherently embedded in technological design). The example of social media algorithms illustrates how these systems can centralize attention and shape our perceptions, narrowing our exposure to diverse perspectives. Shannon Vallor’s work on “moral deskilling” is cited, warning that over-reliance on AI can erode our own capacity for ethical judgment. The core worry is that we are outsourcing critical decisions about what matters to us to opaque, automated systems.
VI. Fake Objectivity and the Illusion of Neutrality
A crucial point is made about “objectivity laundering” – the process of presenting data as objective by concealing the subjective decisions made during its collection and analysis. The example of double-entry bookkeeping, as analyzed by Mary Poovey, is used to illustrate this: even seemingly objective systems rely on initial, subjective valuations. The discount rate in cost-benefit analysis is presented as a prime example of a seemingly technical parameter that is, in reality, a subjective judgment with significant consequences. This concept is further reinforced with the analogy of compounding interest, where the rate of return is a critical, often unstated, factor. The danger lies in accepting data as inherently neutral when it is, in fact, shaped by human choices and biases.
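The discount-rate point can be made concrete with a little arithmetic. The sketch below is a hypothetical illustration (the function name and figures are not from the conversation); it shows how the same future payoff looks very different depending on which discount rate the analyst quietly chooses:

```python
def present_value(future_cash_flow: float, discount_rate: float, years: int) -> float:
    """Discount a single future cash flow back to today's dollars."""
    return future_cash_flow / (1 + discount_rate) ** years

# The same $1,000,000 benefit, 30 years out, under two "technical" rate choices:
pv_low = present_value(1_000_000, 0.02, 30)   # ~ $552,000 at a 2% rate
pv_high = present_value(1_000_000, 0.07, 30)  # ~ $131,000 at a 7% rate
```

A cost-benefit analysis that looks like neutral arithmetic can thus flip from “worth it” to “not worth it” on that single, subjective parameter: here the 2% analysis values the identical future benefit at more than four times what the 7% analysis does.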
VII. The Good and the Bad: A Balanced Perspective
Despite his criticisms, Dr. Nguyen acknowledges the immense power of big data, particularly in fields like medical science where the outcomes that matter (life or death) are readily measurable. However, he cautions against allowing easily measurable metrics to overshadow more meaningful, but less quantifiable, aspects of life. He uses the example of fly fishing, where the pursuit of enjoyment can be undermined by an obsessive focus on catch numbers. The key takeaway is to be mindful of the limitations of data and to resist the temptation to optimize solely for what is easily measured, potentially sacrificing richness and fulfillment in the process.
Conclusion:
Dr. Nguyen’s analysis provides a critical perspective on the limitations of big data, urging listeners to be aware of the inherent biases and trade-offs involved in its collection and interpretation. The conversation emphasizes the importance of preserving qualitative understanding, resisting the allure of “fake objectivity,” and safeguarding our own capacity for judgment in an increasingly data-driven world. The ultimate message is that while data can be a powerful tool, it should not be allowed to dictate our values or diminish the richness of human experience. The call to action is to be conscious consumers of data, recognizing its limitations and prioritizing what truly matters.