Is AI just hype in 2025? Or are software engineers actually doomed?

By David Bombal

Tags: AI, Technology, Education

Key Concepts

  • Large Language Models (LLMs)
  • Retrieval Augmented Generation (RAG)
  • Artificial General Intelligence (AGI)
  • Hype Cycle in AI
  • Machine Learning Fundamentals (Learning Rate, Loss Function)
  • Convolutional Neural Networks (CNNs)
  • Transformers
  • Langchain
  • Unsloth
  • Google Colab
  • GPUs, TPUs, NPUs
  • Sora
  • Cybersecurity Automation with AI
  • AI and Privacy
  • Microsoft Recall

State of AI in 2025

  • Shift from Scaling to System Building: The focus has moved from simply increasing the size of language models to building smarter systems that integrate LLMs with other tools and data sources.
  • Retrieval Augmented Generation (RAG): Grounding an LLM's output in external data retrieved at query time to improve accuracy (a minimal sketch follows this list).
  • Combining LLMs: Integrating multiple models for different tasks, such as pairing an image-generation model with a text-writing model.
  • Tool Integration: Allowing LLMs to use external tools (e.g., accessing weather data).
  • AGI Skepticism: The speaker expresses doubt about claims of imminent AGI, suggesting that LLMs are more likely to be used as components within larger systems.
  • Specific Training: LLMs excel at tasks they are specifically trained for, but should not be assumed to be general problem solvers.
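To make the RAG idea concrete, here is a minimal, dependency-free sketch: a retriever picks the most relevant document, and the result is prepended to the prompt. The `call_llm` function, the toy documents, and the keyword-overlap retriever are all illustrative stand-ins; production systems retrieve with embedding-based vector search.

```python
# Minimal RAG sketch. `call_llm`, the toy documents, and the keyword-overlap
# retriever are illustrative stand-ins, not a real system.
documents = [
    "Items can be returned within 30 days with a receipt.",
    "Standard shipping takes 3 to 5 business days.",
]

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: swap in a real LLM API call here.
    return "[model would answer using]\n" + prompt

def retrieve(query: str) -> str:
    # Score each document by crude keyword overlap with the query.
    words = query.lower().split()
    return max(documents, key=lambda doc: sum(w in doc.lower() for w in words))

def answer(query: str) -> str:
    # The retrieved context is prepended to the prompt: that is the whole trick.
    context = retrieve(query)
    return call_llm(f"Answer using only this context:\n{context}\n\nQuestion: {query}")

print(answer("How long does shipping take"))
```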

Hype Cycle and Real-World Applications

  • AI Hype: Many companies are adding "AI" to their products simply to compete, even if the AI features are not always effective.
  • Metrics Matter: It's important to evaluate the actual performance of AI-powered products rather than relying on marketing claims.
  • Phishing Detection: Despite the existence of LLMs, phishing detection remains a challenge, with many false positives (the toy example after this list shows why accuracy alone can hide them).
  • Cybersecurity Automation: AI and LLMs are becoming increasingly important for cybersecurity automation.
  • Protein Folding: AI has been transformative in areas like protein folding, demonstrating its potential to solve previously unsolvable problems.
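As a toy illustration of why metrics matter for something like phishing detection, the snippet below (standard scikit-learn, with invented labels) shows how a classifier can report high accuracy while still flagging legitimate messages.

```python
# Toy illustration: high accuracy can coexist with annoying false positives.
from sklearn.metrics import classification_report, confusion_matrix

# Invented labels: 1 = phishing, 0 = legitimate (90 legit, 10 phishing).
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 85 + [1] * 5 + [1] * 7 + [0] * 3  # 5 false positives, 3 misses

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"Accuracy looks fine: {(tn + tp) / len(y_true):.0%}")  # 92%
print(f"But {fp} legitimate messages were wrongly flagged")
print(classification_report(y_true, y_pred, target_names=["legit", "phish"]))
```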

Learning AI in 2025: A Step-by-Step Guide

  1. Python Proficiency: Learn Python as it is the primary language for machine learning. Standard Python is sufficient for most tasks.
  2. Introductory Machine Learning Course: Take a quick introductory machine learning course, such as Andrew Ng's Coursera course, to understand fundamental concepts like learning rates and loss functions.
  3. Hands-On Experimentation: Start playing around with pre-built libraries and examples, particularly using Google Colab.
  4. Start Training: Jump right in and start training models, even if you feel out of your depth initially.
  5. Data Exploration: Examine the structure of data used to train LLMs to understand how to shape your own data.
  6. Iterative Learning: Learn by seeing what has been done before and adapting it to your own tasks.
  7. Interpret Results: Learn how to interpret the results of your models to improve their performance.
  8. Progressive Learning: Start with standard machine learning techniques (e.g., support vector machines, small neural networks) before moving on to more complex topics like convolutional neural networks and transformers (a baseline SVM example follows this list).
  9. Utilize Libraries: Explore libraries like Langchain for building working systems and Unsloth for fine-tuning LLMs (hedged sketches of both appear below).
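A concrete version of step 8: train a support vector machine on scikit-learn's bundled digits dataset and read the per-class report rather than a single accuracy number (step 7 in practice). Everything here is standard scikit-learn; the hyperparameters are conventional starting values, not tuned.

```python
# Step 8 made concrete: a baseline SVM on scikit-learn's bundled digits dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# 8x8 grayscale digit images, already flattened into 64-value rows.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# RBF-kernel SVM with conventional starting hyperparameters (not tuned).
clf = SVC(kernel="rbf", gamma=0.001)
clf.fit(X_train, y_train)

# Step 7 in practice: read per-class precision/recall, not just overall accuracy.
print(classification_report(y_test, clf.predict(X_test)))
```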
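For step 9, a minimal Langchain pipeline might look like the sketch below. Langchain's API evolves quickly, so treat the import paths and the `prompt | llm | parser` chain as one recent pattern rather than the definitive one; it also assumes an `OPENAI_API_KEY` in the environment, and the model name is an assumption.

```python
# A minimal Langchain pipeline sketch. Assumes `pip install langchain-openai`
# and an OPENAI_API_KEY environment variable; import paths may differ by version.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
llm = ChatOpenAI(model="gpt-4o-mini")     # assumed model name; any chat model works
chain = prompt | llm | StrOutputParser()  # prompt -> model -> plain string

print(chain.invoke({"text": "RAG grounds LLM answers in external data."}))
```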
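Fine-tuning with Unsloth typically follows the pattern in its published Colab notebooks; the sketch below assumes that pattern. The model id, LoRA settings, and trainer arguments are taken from those examples and may shift between releases, and `train_ds` is a stand-in one-row dataset.

```python
# Hedged sketch of LoRA fine-tuning with Unsloth, following the pattern in its
# published Colab notebooks; argument names may shift between releases.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import Dataset

# Stand-in dataset: a single "text" column of formatted training examples.
train_ds = Dataset.from_dict(
    {"text": ["### Instruction:\nSay hello.\n\n### Response:\nHello!"]}
)

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # assumed model id from Unsloth examples
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit quantization so it fits a free Colab GPU
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_ds,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        max_steps=60,  # a tiny experimental run, not a real training budget
        output_dir="outputs",
    ),
)
trainer.train()
```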

Addressing Gatekeeping and Misconceptions

  • No PhD Required: You don't need a PhD or advanced math skills to get started with AI.
  • Democratization of AI: Libraries are designed to make AI easier to use, encouraging experimentation.
  • Focus on Fundamentals: Understanding the fundamentals of machine learning is more important than jumping straight into complex models.
  • Learning from Data: Examining the data used to train LLMs can provide valuable insights (see the sketch below).
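One low-effort way to act on this: pull a public instruction-tuning dataset from the Hugging Face Hub and inspect a record. The dataset name below ("tatsu-lab/alpaca") is one well-known example chosen for illustration; any instruction dataset on the Hub works similarly.

```python
# Inspect how LLM training data is shaped (also step 5 in the guide above).
from datasets import load_dataset

ds = load_dataset("tatsu-lab/alpaca", split="train")
print(ds)     # column names (instruction / input / output) and row count
print(ds[0])  # one record, showing how a single training example is structured
```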

AI and Job Displacement

  • Streamlining vs. Replacement: Companies will try to streamline processes with AI, but complete job replacement is unlikely.
  • Marginal Efficiency Gains: AI may lead to marginal efficiency gains rather than wholesale job losses.
  • Human Element: Humans are still needed for tasks requiring empathy, critical thinking, and problem-solving.
  • Code Ownership: Developers need to understand the code they are working with, making it difficult to rely solely on AI-generated code.

AI and Privacy Concerns

  • Privacy Risks: Concerns about companies collecting and using personal data to train AI models.
  • Microsoft Recall: The Microsoft Recall feature raised privacy concerns due to its potential to capture and store desktop screenshots.
  • Terms and Conditions: Changes to terms and conditions that allow companies to use user-generated content for AI training.
  • On-Device Processing: Processing data locally on devices can help mitigate privacy risks.

Technical Terms Explained

  • Large Language Models (LLMs): Deep learning models trained on vast amounts of text data, capable of generating human-like text.
  • Retrieval Augmented Generation (RAG): A technique that enhances LLMs by providing them with relevant external data during text generation.
  • Artificial General Intelligence (AGI): A hypothetical level of AI that can perform any intellectual task that a human being can.
  • Convolutional Neural Networks (CNNs): A type of deep learning model commonly used for image recognition and processing.
  • Transformers: A type of neural network architecture that has revolutionized natural language processing and is used in many LLMs.
  • Langchain: A framework for building applications powered by LLMs.
  • Unsloth: A library for fine-tuning large language models efficiently.
  • Google Colab: A free cloud-based platform for running Python notebooks, often used for machine learning.
  • GPUs (Graphics Processing Units): Specialized processors designed for parallel processing, commonly used for training AI models.
  • TPUs (Tensor Processing Units): Custom hardware accelerators developed by Google specifically for machine learning tasks.
  • NPUs (Neural Processing Units): Specialized hardware accelerators for neural networks, often found in mobile devices.

Notable Quotes

  • "Phishing detection despite the fact that large language models exist doesn't seem to work very well constantly flagging the wrong things we've got so much more to do in this area."
  • "Just knowing how to train a deep network but no knowledge of who could use it or how it might apply to something like cyber security is going to be limiting."
  • "I think that the fact that we've seen this move over to trying to build and use LLMs as tools in as part of a larger system is kind of an admission in a way that they're not just going to solve everything on their own."
  • "I kind of feel like that's a bit gatekeeping right it's a bit people saying this is super hard look how smart I am yeah exactly I I don't like that approach right."

Synthesis/Conclusion

The video provides a balanced perspective on the current state of AI, acknowledging both the hype and the technology's real potential. It emphasizes understanding the fundamentals of machine learning and encourages hands-on experimentation, while expressing skepticism about claims of imminent AGI and highlighting the privacy concerns associated with AI. For individuals looking to learn AI in 2025, it offers practical advice: build Python proficiency, take an introductory course, and keep learning iteratively. Ultimately, the video suggests that AI will be a transformative force, but that its impact will be more nuanced and gradual than some predictions suggest.
