L'algoritmo è uguale per tutti? | Ernesto Belisario | TEDxLink Campus University

By TEDx Talks

Key Concepts

  • La legge è uguale per tutti: The foundational principle of justice, implying impartiality and equal treatment under the law.
  • Algoritmo: A set of rules or instructions followed by a computer to solve a problem or perform a task.
  • Intelligenza Artificiale (IA): The simulation of human intelligence processes by machines, especially computer systems.
  • Bias algoritmico: Prejudices or unfair inclinations embedded within an algorithm, often stemming from biased training data.
  • Scatola nera (Black box): A system whose internal workings are opaque or unknown, making it difficult to understand how it arrives at its outputs.
  • Allucinazioni dell'IA: Instances where generative AI systems produce fabricated or incorrect information, often presented as factual.
  • Common Law vs. Civil Law: Two major legal systems. Common law relies heavily on judicial precedent, while civil law systems are primarily based on codified statutes.
  • Judge Analytics: The analysis of judicial decision-making patterns, often using statistical data.
  • Errori giudiziari: Miscarriages of justice resulting from mistakes made during legal proceedings.

The Algorithm and Equality in Justice

The presentation challenges the long-standing principle of "La legge è uguale per tutti" (The law is equal for all) in light of the growing use of Artificial Intelligence (AI) in the justice system. Ernesto Belisario, an attorney, poses the critical question: "Is the algorithm the same for everyone?"

The Myth of Algorithmic Infallibility

A common misconception is that algorithms are infallible and objective. This is illustrated by a 2014 case in Florida involving Brenda, an 18-year-old Black woman who stole an $80 bicycle and returned it, and Vernon, a 41-year-old white man with prior armed-robbery convictions who stole an $80 toolbox. A predictive algorithm used by the justice system flagged Brenda as high risk for recidivism, while Vernon was assessed as low risk. Two years later, Brenda had committed no further crimes, while Vernon had been incarcerated again for a serious offense. The case demonstrates that algorithms are not neutral: they are shaped by the data they are trained on.

Algorithmic Bias and the "Black Box" Problem

AI systems, like the justice system's blindfolded Lady Justice, are intended to be impartial. However, algorithms can be biased because they are trained on historical data that may contain societal prejudices. If the data reflects a higher incidence of crime among certain ethnic groups, the algorithm may incorrectly infer a higher propensity for crime within those groups.
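This mechanism can be made concrete with a minimal sketch using entirely hypothetical data: a naive risk model that simply learns group-level re-offense rates from historical records will reproduce whatever bias those records contain, scoring two otherwise identical individuals differently based only on group membership.

```python
# Minimal illustrative sketch with invented data (not any real system):
# if group "A" is over-policed, more of its minor offenses end up in the
# historical record, and a frequency-based model inherits that skew.

# Historical records as (group, re_offended) pairs.
history = ([("A", 1)] * 30 + [("A", 0)] * 70 +
           [("B", 1)] * 10 + [("B", 0)] * 90)

def learned_risk(group):
    """Risk score = observed re-offense frequency for the group."""
    outcomes = [r for g, r in history if g == group]
    return sum(outcomes) / len(outcomes)

# Same behavior, different score: the disparity comes from the data.
print(learned_risk("A"))  # 0.3
print(learned_risk("B"))  # 0.1
```

Nothing in the model is "prejudiced" in intent; the unequal scores are a faithful summary of an unequal record, which is exactly why auditing training data matters.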

The "black box" nature of many AI systems is a significant concern. Judges and legal professionals often receive outputs from AI without understanding the training data, the rules applied, or the choices made by the algorithm. This lack of transparency, coupled with human "laziness" or the tendency to accept seemingly reliable outputs, can lead to uncritical adoption of AI-generated information.

Research from Harvard indicates that AI models like ChatGPT tend to respond from the perspective of wealthy, white, Western males. This is attributed to the overrepresentation of such perspectives in internet data, which is a primary source for AI training. This disparity in representation can lead to AI outputs that are not equitable or representative of diverse populations.

AI Hallucinations and Their Consequences

Generative AI models are prone to "hallucinations," in which they fabricate information. This is particularly dangerous in legal contexts: an AI might invent a non-existent precedent to support a lawyer's argument, as in a case where an attorney submitted 25 fabricated precedents and was fined £1,000 per precedent. A database compiled by Damien Charlotin has documented over 600 such AI hallucinations in legal proceedings worldwide over the past two years. Italy has also seen AI hallucinations in court, including a recent TAR Lombardia ruling in which an attorney cited non-existent precedents and faced disciplinary action. Reports suggest that some Italian magistrates have likewise used AI to draft judgments without verifying precedents, leading to disciplinary proceedings.

Legal Frameworks and Human Oversight

Article 15 of Italy's AI law explicitly states that the magistrate retains full decision-making authority over the interpretation and application of the law, the evaluation of facts and evidence, and the adoption of measures. AI may serve as a tool, but human oversight and verification are mandatory. The presentation argues that if humans act merely as "clickers" who prompt and copy-paste AI outputs without critical evaluation, human judgment loses its value, and AI could automate those tasks more effectively on its own. How AI is adopted in the justice system thus becomes a test case for the evolving role of humans in any process AI touches.

Common Law vs. Civil Law and AI

A distinction is drawn between Common Law systems (e.g., US) where precedent is binding, and Civil Law systems (e.g., Italy, France) where precedent is not strictly binding. In Common Law, AI's ability to quickly identify precedents could be impactful. In Civil Law systems, where jurisprudence evolves to meet societal needs, the reliance on AI for precedent might be less direct, but the potential for AI to influence interpretation remains.

France's Ban on Judge Analytics

France has banned "Judge Analytics," which involves collecting and publishing statistical data on judges' performance, particularly concerning asylum requests. The concern is that such data, akin to online ratings, could discourage independent judicial thinking and dissent from prevailing legal interpretations, thereby impacting judicial autonomy and the advancement of law. The severe penalties for violating this ban are seen as necessary to protect the democratic nature of the legal system.

Human Fallibility and the Need for AI Support

The presentation acknowledges that human justice is also fallible. Statistics from the US show that thousands of innocent individuals have served lengthy prison sentences due to judicial errors. Contributing factors include false testimony, defense attorney mistakes, and even the "hungry judge effect," whereby judges are more likely to grant bail requests earlier in the day than just before lunch. In Italy, an estimated 1,000 individuals are unjustly imprisoned each year, costing the state roughly €30 million annually in compensation.

Therefore, AI can serve as a valuable support tool for humans in the justice system. However, effective utilization requires training. This training should not only highlight AI's potential for errors (hallucinations) but also address the tendency for humans to use AI outputs to confirm pre-existing beliefs rather than challenge them. Studies show that many judges use AI only when it aligns with their initial thoughts and disregard it when it contradicts them, thereby reinforcing their own biases.

Conclusion: The Future of AI in Justice

Currently, algorithms are not equal for everyone. However, the potential exists for them to become so. The justice system cannot, and should not, dispense with human judgment, defense, and administration. Nevertheless, humans must embrace AI to make justice less arbitrary, less unjust, and truly equal. The greatest danger of AI is not using it, but using it without critical understanding and oversight. The future of justice lies in a synergistic relationship between human intellect and AI capabilities, where each complements and corrects the other.
