AI risks no one is talking about (but really should) by May Brooks-Kempler
By Canadian Institute for Cybersecurity (CIC)
Key Concepts
AI risks, data leakage, need-to-know principle, copyright ownership, plagiarism, AI bias, AI-powered fraud, shadow AI, workforce displacement, critical thinking, strategic thinking, ethics, lifelong learning, human factor in cybersecurity, soft skills, DLP (Data Leak Prevention).
AI Risks That No One Seems to Be Talking About
Introduction
The presentation focuses on AI risks, emphasizing that while AI offers immense potential, it also presents significant risks that are often overlooked. The speaker, May Brooks-Kempler, highlights the importance of understanding both the benefits and the potential downsides of AI, particularly in the context of cybersecurity.
AI's Promise and Peril
AI can significantly enhance cybersecurity by detecting and mitigating incidents faster and streamlining operations. However, AI is a double-edged sword, used by both defenders and attackers. Guardrails in AI models are not always effective, and the rapid adoption of AI often means security is an afterthought.
Data Leakage: Beyond Data Breaches
Data leakage, unlike a data breach, often goes unnoticed. It involves the unintentional exposure of sensitive information.
- External AI Tools: Employees using external AI tools, with or without permission, can inadvertently leak data.
- Third-Party Suppliers: Third-party suppliers using public AI tools without proper protection can also cause data leakage.
- Internal Data Leakage: Internal AI tools often lack need-to-know protection, leading to unauthorized access to sensitive data.
- Need to Know Principle: Emphasizes that access to data should be based on whether an individual needs the information to perform their job, regardless of their privileges or clearance level.
- Example: Mergers and Acquisitions (M&A): Early disclosure of M&A deals can be detrimental. AI tools might expose sensitive M&A data to individuals who lack the context to understand its confidentiality, leading to leaks.
- Example: Marketing Projects: AI-generated marketing plans can be based on existing plans, potentially leading to competitors launching similar campaigns simultaneously.
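The need-to-know principle described above can be sketched as a simple access check in which authorization depends on project assignment, not clearance alone. This is a minimal illustration; the record fields, role names, and clearance scale are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    """A piece of data tagged with the project it belongs to."""
    project: str
    content: str

@dataclass(frozen=True)
class User:
    """A user with a clearance level and a set of assigned projects."""
    name: str
    clearance: int          # e.g. 0 = none, 3 = top tier
    projects: frozenset

def can_access(user: User, record: Record, required_clearance: int = 1) -> bool:
    # Clearance alone is not enough: the user must also be assigned
    # to the project the record belongs to (need to know).
    return (user.clearance >= required_clearance
            and record.project in user.projects)

deal = Record(project="ma-deal-x", content="draft term sheet")
exec_no_need = User("senior exec", clearance=3, projects=frozenset({"ops"}))
analyst = User("m&a analyst", clearance=1, projects=frozenset({"ma-deal-x"}))

print(can_access(exec_no_need, deal))  # False: high clearance, no need to know
print(can_access(analyst, deal))       # True: lower clearance, but assigned
```

This mirrors the M&A example: an internal AI tool that answers from the full corpus effectively bypasses the project check, surfacing deal data to highly privileged users who have no need to know.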
Copyrights, Plagiarism, and Trust
The ownership of AI-generated content is a complex issue. It raises questions about copyright and plagiarism, especially in academic settings. AI can also fabricate or conflate information (hallucinate), raising concerns about the reliability of AI-generated content.
- Example: The speaker asked an LLM to provide a list of relevant articles for her literature review, but the LLM initially provided links to non-existent articles.
AI-Powered Fraud
AI facilitates personalized scams, phishing attacks, and social engineering attacks. Traditional methods of identifying fraudulent emails (e.g., grammatical errors) are becoming less reliable.
- Example: The speaker received a WhatsApp message from her son's compromised account, requesting a code sent to her email. She verified his identity by asking a personal question.
- Example: Ferrari Scam: Fraudsters used an AI-generated avatar of Ferrari's CEO in a Zoom call to manipulate the CFO into wiring a large sum of money. The CFO detected the scam because the language used by the avatar did not match the CEO's usual speech patterns.
Shadow AI
Shadow AI refers to the use of AI tools and applications without the knowledge or approval of IT management. This poses a significant threat because it can lead to data leakage, bias, and non-compliance with organizational policies.
- Example: A junior associate in a law firm used an AI companion connected to their personal Gmail account during a confidential M&A conversation, potentially exposing sensitive information.
- There's An AI For That: The website "There's An AI For That" catalogs the vast array of available AI tools, making it easy for employees to find and adopt AI tools without considering the associated risks.
The Workforce and AI
While AI will replace some low-level roles, it cannot replace critical thinking, strategic thinking, ethics, and creativity. Individuals should focus on developing these skills to remain relevant in the workforce.
Reality Check: Using AI Responsibly
- Personal Level: Consider the potential consequences if shared data falls into the wrong hands.
- Academic Level: Use AI responsibly for research and review, but never copy and paste AI-generated content without thorough verification.
- Organizational Level: Implement policies, training, and tools to review and mitigate AI-related risks.
Conclusion
The speaker encourages the audience to stay curious, question everything, and embrace lifelong learning. AI is a powerful tool that can enhance abilities, but it must be used responsibly and with careful consideration of the potential risks.
Notable Quotes
- "Every tool that we use, the bad people are also using. The difference is we have to adhere to certain ethical and legal requirements and restrictions, they don't."
- "When you see something, think before you trust. When you hear something, think before you trust."
- "Technology is not the enemy. And AI is a tool, but it's not a teammate, we use it."
- "For me, a day without learning is a day wasted."
Technical Terms and Concepts
- AI (Artificial Intelligence): The simulation of human intelligence processes by computer systems.
- LLM (Large Language Model): A type of AI model that uses deep learning algorithms and vast amounts of data to understand, summarize, generate, and predict new content.
- Data Breach: An incident where sensitive, protected, or confidential data has been viewed, stolen, or used by an unauthorized individual.
- Data Leakage: The unintentional exposure of sensitive information.
- DLP (Data Leak Prevention, more commonly expanded as Data Loss Prevention): A system designed to detect and prevent sensitive data from leaving an organization's control.
- Need to Know Principle: A security principle that limits access to information to only those individuals who require it to perform their job duties.
- Least Privilege Principle: A security principle that grants users only the minimum level of access necessary to perform their job functions.
- Shadow IT: IT systems or devices built and used inside an organization without explicit organizational approval.
- Shadow AI: The use of AI tools and applications without the knowledge or approval of IT management.
- CISSP (Certified Information Systems Security Professional): A globally recognized certification for information security professionals.
- HCISPP (HealthCare Information Security and Privacy Practitioner): A certification for professionals in healthcare information security and privacy.
Logical Connections
The presentation begins by establishing the dual nature of AI, highlighting its potential benefits and risks. It then delves into specific risks, such as data leakage, copyright issues, and AI-powered fraud, providing real-world examples to illustrate these points. The discussion then shifts to the impact of AI on the workforce and the importance of developing critical thinking and other soft skills. Finally, the presentation concludes with practical advice on using AI responsibly and encourages lifelong learning.
Synthesis/Conclusion
The main takeaways from the presentation are that AI presents both significant opportunities and risks. Organizations and individuals must be aware of these risks and take proactive steps to mitigate them. This includes implementing robust data protection measures, promoting responsible AI usage, and fostering a culture of lifelong learning. The human element remains crucial in cybersecurity, and individuals should focus on developing critical thinking, strategic thinking, and ethical awareness to remain relevant in the age of AI.