Summiz Summary

DEF CON 32 - On Your Ocean's 11 Team, I'm the AI Guy (technically Girl) - Harriet Farlow



Talk Summary

☀️ Quick Takes

Is this Talk Clickbait?

Our analysis suggests the talk is not clickbait: it delivers on the title's premise by discussing the AI specialist's role on a heist-style team, in keeping with the Ocean's 11 theme.

1-Sentence-Summary

Harriet Farlow examines the vulnerabilities and challenges of AI security in casinos, illustrating how adversarial machine learning and simple perturbations can manipulate AI systems, and emphasizing the need for strategic integration and robust security measures when adopting AI technologies.

Favorite Quote from the Author

AI security is real.

💨 tl;dr

Harriet Farlow dives into AI security in casinos, highlighting vulnerabilities, trust issues, and the need for better governance. She discusses the growing use of AI for facial recognition, warns about the risk of these systems being hacked, and stresses the importance of human oversight.

💡 Key Ideas

  • Harriet Farlow, an AI expert, discusses AI security in casinos, emphasizing vulnerabilities and historical hacks.
  • The integration of AI in casinos is growing, mainly for facial recognition and person detection; however, trust issues persist.
  • AI adoption in casinos is complicated by reliance on third-party providers and compliance challenges, particularly with money laundering concerns.
  • Hacking AI systems requires specialized knowledge; adversarial machine learning is a key method to disrupt AI functionality.
  • Techniques like adversarial stickers and Distributed Adversarial Regions (DARS) can effectively deceive detection systems, including casino security.
  • Most casinos primarily use AI for computer vision tasks, with little implementation for other AI applications like chatbots.
  • There’s a significant overlap between open-source and proprietary AI models, so vulnerabilities found in one often apply to the other.
  • Many organizations, including casinos, are ill-equipped to secure their AI models, with a troubling gap between AI usage and security awareness.
  • The talk highlights the metaphor of casinos as a model for surveillance society, stressing the need for better governance of AI technologies.

🎓 Lessons Learnt

  • Embrace Your Unique Background: Your diverse experiences can provide valuable insights and perspectives in the tech field, especially in AI security.

  • Trust and Security Are Paramount: In high-stakes environments like casinos, establishing trust and prioritizing security when adopting AI technologies is crucial.

  • Understand AI Context and Terminology: Knowing the specific context and language around AI is vital for effective communication and implementation.

  • Continuous Learning is Essential: The tech landscape, particularly in AI, is ever-evolving, necessitating ongoing education and adaptation.

  • Leverage Networking for Support: Engaging with peers and experts can provide crucial help and collaboration opportunities, especially during challenging projects like a PhD.

  • Adversarial Machine Learning is Growing: Understanding how to hack and defend against AI systems is increasingly important, with unique attack surfaces for different AI models.

  • Human Oversight Remains Critical: Despite advancements in AI, human input and oversight are still necessary for effective identification and application in various systems.

  • Security in AI is Often Overlooked: Organizations frequently focus on AI functionality without adequately addressing security implications, which can lead to vulnerabilities.

  • Context Matters for Model Performance: Different AI models react differently to inputs and potential disruptions, affecting their reliability and security.

🌚 Conclusion

To navigate the complex landscape of AI in high-stakes environments like casinos, organizations must prioritize security, embrace continuous learning, and ensure human involvement to mitigate risks and enhance trust.


In-Depth

Worried about missing something? This section includes all the Key Ideas and Lessons Learnt from the Talk. We've ensured nothing is skipped or missed.

All Key Ideas

Harriet Farlow's Talk on AI Security in Casinos

  • Harriet Farlow introduces herself as an AI expert from Australia, sharing the quirky Australian drinking tradition known as the 'shoey.'
  • The talk, titled 'On Your Ocean's 11 Team, I'm the AI Guy,' focuses on AI security, particularly in the context of casinos.
  • Harriet has a diverse background in physics, data science, and machine learning security, and she recently started her own company, Mileva Security Labs.
  • The presentation aims to explore high-profile casino hacks with a specific focus on the vulnerabilities of AI systems used by casinos.
  • The talk highlights the growing adoption of AI by organizations such as casinos; these AI systems are often insecure and tend to attract attention only after financial losses or reputational damage occur.

Challenges and Uses of AI in Casinos

  • Casinos must earn customer trust even though customers statistically lose money, a trust challenge that parallels AI adoption across organizations.
  • Casinos are often linked to money laundering, with recent controversies in Australia highlighting compliance failures.
  • AI is used in casinos for various purposes, particularly facial recognition and person detection.
  • The landscape of AI integration in casinos involves reliance on third-party providers and consultants due to cost constraints.
  • Contrary to expectations, AI is not significantly used for card counting in casinos; card counting is considered advantage play.

Insights on AI and Hacking

  • Perceptions of AI vary: some people think of AGI, while others see it as advanced algorithms or simply cutting-edge technology.
  • Hacking an AI system requires understanding the specific technology rather than a general AGI goal.
  • Hacking AI has historical roots; for example, Ron Harris stole $100,000 from a casino by predicting winning numbers through knowledge of the machine's algorithm.
  • Card counting in blackjack can reduce the casino's statistical advantage, highlighting algorithm hacking in gambling.
  • Adversarial machine learning involves crafting examples that deceive machine learning models and disrupt their intended function (a minimal sketch of the idea follows this list).
  • The field of adversarial machine learning is expanding, moving from academia to real-world applications, like misleading autonomous vehicles.
  • Different AI models, like convolutional neural networks and Transformers, have distinct attack surfaces that can be exploited through various adversarial methods.
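The talk summary stays at the conceptual level, but a minimal sketch of the "crafted example" idea, using a single-step gradient attack (FGSM) against a pretrained image classifier, could look like the following. The model choice (torchvision's ResNet-50), the epsilon budget, and the omission of ImageNet normalization are illustrative assumptions, not details from the talk.

```python
# Minimal FGSM-style sketch: craft a small perturbation that raises the model's loss
# on the true label. ResNet-50 and epsilon = 8/255 are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def fgsm_example(image: torch.Tensor, true_label: int, epsilon: float = 8 / 255) -> torch.Tensor:
    """Return a perturbed copy of `image` (1x3xHxW, values in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Step in the direction that increases the loss, bounded to +/- epsilon per pixel.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
```

The same one-step recipe generalizes, but different architectures (CNNs vs. Transformers) expose different gradients and therefore different attack surfaces, as the talk notes.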

Techniques for Deception in Detection Systems

  • Adversarially patterned glasses and jumpers can hide a person from detection models.
  • Marines successfully deceived a detection model by moving in ways it did not recognize as human.
  • Adversarial stickers can trick models into misidentifying objects.
  • Distributed adversarial regions (DARS) can manipulate images without altering physical objects.
  • DARS could be applied in military environments for camouflage.
  • The goal is to bypass casino security cameras using AI and DARS techniques (a hedged sketch of the region-restriction idea follows this list).
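The summary does not describe how DARS work internally, so the following is only a loose sketch of the general idea of confining an adversarial perturbation to a mask of image regions rather than the object itself. The mask layout, PGD-style update, step size, and iteration count are all assumptions made for illustration, not the talk's actual method.

```python
# Sketch of a region-restricted attack: perturb only where `mask` == 1, loosely in the
# spirit of adversarial stickers / distributed adversarial regions.
import torch
import torch.nn.functional as F

def masked_attack(model, image, true_label, mask, steps=50, alpha=2 / 255, epsilon=16 / 255):
    """Iteratively perturb the masked regions of `image` to push the model off `true_label`."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(image + delta * mask), torch.tensor([true_label]))
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # gradient ascent on the loss
            delta.clamp_(-epsilon, epsilon)      # keep each pixel change small
            delta.grad.zero_()
    return (image + delta * mask).clamp(0, 1).detach()
```

A physical sticker or DARS attack would additionally need to survive printing, camera angles, and lighting changes, which this sketch does not model.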

AI Applications in Casinos

  • Casinos in Vegas are increasingly adopting AI, particularly for facial recognition and person detection, to enhance security and design.
  • AI is used in casino design to capture faces through strategic layouts, aiding in identifying criminals and problem gamblers.
  • The current reliance on AI in casinos is primarily a computer vision problem, with limited use for other AI types like chatbots.
  • A significant similarity exists between open-source models and proprietary models used by companies, often showing 95-99% likeness.
  • The concept of model convergence suggests that training on similar data produces similar AI models, allowing open-source models to stand in for proprietary ones (a sketch of one way such agreement could be measured follows this list).
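One plausible way to quantify the reported 95-99% similarity is to measure how often a local open-source model's top-1 prediction agrees with a black-box model's answer on the same inputs. The sketch below assumes torchvision's ResNet-50 as the open-source model; `query_blackbox_top1` is a hypothetical placeholder for a proprietary API, not a real endpoint.

```python
# Sketch: estimate how often a local open-source model agrees with a black-box model.
# `query_blackbox_top1` is a hypothetical placeholder for a proprietary API call.
import torch
from torchvision import models

local_model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def agreement_rate(images: torch.Tensor, query_blackbox_top1) -> float:
    """Fraction of images where the local top-1 label matches the black-box answer."""
    with torch.no_grad():
        local_top1 = local_model(images).argmax(dim=1)
    remote_top1 = torch.tensor([query_blackbox_top1(img) for img in images])
    return (local_top1 == remote_top1).float().mean().item()
```

High agreement is the intuition behind developing an attack against an open model and expecting it to affect a proprietary one.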

Adversarial Attacks in Machine Learning

  • Implementing adversarial attacks on target models involves using optimization algorithms to identify regions that can be perturbed to cause misclassification.
  • There are over 100 different kinds of adversarial machine learning attacks, and creating them is relatively easy since it mainly involves optimization.
  • Most computer vision systems rely on convolutional neural networks, making it simple to create attacks that disrupt their functionality.
  • Facial recognition systems compress facial geometry into an embedding space, allowing for the identification of individuals based on clusters and relationships.
  • The attack tested resulted in a 40.4% reduction in confidence across various models, indicating significant disruption potential in the right environments (a sketch of how such a confidence-drop evaluation might be computed follows this list).
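The 40.4% figure comes from the talk; the sketch below only illustrates how an average true-class confidence drop across several models might be computed from paired clean and perturbed images. Function and variable names are hypothetical.

```python
# Sketch: average relative drop in true-class confidence across several models,
# comparing clean images with their perturbed counterparts.
import torch
import torch.nn.functional as F

def mean_confidence_drop(model_list, clean, perturbed, labels) -> float:
    """Average relative drop in softmax confidence for the true class, over all models."""
    idx = torch.arange(len(labels))
    drops = []
    with torch.no_grad():
        for model in model_list:
            p_clean = F.softmax(model(clean), dim=1)[idx, labels]
            p_adv = F.softmax(model(perturbed), dim=1)[idx, labels]
            drops.append(((p_clean - p_adv) / p_clean).mean())
    return torch.stack(drops).mean().item()
```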

AI Security Challenges

  • Most open-source AI models are vulnerable to simple attacks that can significantly reduce their confidence in recognizing objects.
  • AI security discussions highlight that many organizations lack the maturity to implement advanced security measures.
  • Perturbations can disrupt model classifications without changing the actual objects being analyzed, affecting areas like computer vision, natural language processing, and signal processing.
  • The challenge of hacking AI models is different from traditional cybersecurity, as it involves more nuanced evaluations rather than binary outcomes.
  • Research indicates that adding noise or perturbations to language models can alter their outputs, even if that information wasn't initially encoded.

AI Security Challenges and Vulnerabilities

  • Reducing a model's classification confidence should be viewed as one step in a larger kill chain; many current attacks remain complex and academic rather than practical.
  • Adversarial machine learning attacks can disrupt models effectively, suggesting a significant vulnerability in AI security.
  • There is a reliance on insecure AI models in many organizations, highlighting the need for improved AI security measures.
  • While 94% of organizations can describe how they use AI, only 8% can articulate how they secure it, indicating a major gap in AI security awareness.
  • Casinos serve as a metaphor for a surveillance society, emphasizing the challenges of governing and implementing AI technologies effectively.
  • The reliance on third-party AI solutions creates challenges due to a lack of regulation on their security and robustness.

All Lessons Learnt

Lessons from Harriet's Journey in AI Security

  • Embrace your unique background and experiences.
  • AI security is an increasingly relevant field.
  • Engage with your audience to gauge their background.
  • Context matters in your presentation topic selection.
  • Continuous learning is essential in tech fields.

Key Considerations for AI Adoption in Casinos

  • Trust is crucial for organizations adopting AI.
  • Security is essential in high-stakes environments.
  • AI applications in casinos can be strategic but require careful consideration.
  • Card counting isn’t a primary use case for AI in casinos.
  • Understanding AI terminology matters.

Lessons on AI Hacking

  • Understand the specific technology to hack AI systems.
  • Historical context helps in understanding AI hacking.
  • Card counting is an example of algorithm hacking.
  • Adversarial machine learning is a growing field.
  • Different AI models have different attack surfaces.

Key Strategies for Success

  • Be adaptable in problem-solving. When faced with detection models, thinking outside the box can lead to successful deception, such as moving in ways the model does not associate with people.
  • Understand the limitations of technology. Adversarial attacks can be obvious, so it’s crucial to find less detectable methods, such as using distributed adversarial regions instead of visibly altering the object itself.
  • Leverage networking for support. While working on a PhD, reaching out to people can lead to valuable help and collaboration, especially if you're polite and approachable.
  • Balance multiple commitments wisely. Juggling a startup and a PhD can be tough, so managing time effectively and setting priorities is key to progress in both areas.

AI Insights

  • AI Implementation in Casinos: Casinos should consider integrating AI technologies like facial recognition during their redesign process to enhance security and monitor gambling behaviors effectively.
  • Model Similarity in AI: Companies often produce AI models that are significantly similar (95-99%) to open-source models, indicating that leveraging existing models can save time and resources in development.
  • Human Dependency in AI Systems: Despite advancements in AI, there remains a reliance on human input for identifying known individuals in systems, highlighting the importance of human oversight in AI applications.

Adversarial Attacks and Model Security

  • Creating adversarial attacks is straightforward. By using simple optimization algorithms, you can easily build adversarial machine learning attacks without needing pre-built libraries (see the sketch after this list).
  • Facial recognition systems can be disrupted with clever alterations. Small changes, like adding specific patterns or jewelry, can significantly impact how models recognize individuals.
  • Model security challenges are real and impactful. Many computer vision models are vulnerable, and it doesn’t take much to create effective attacks that reduce their confidence levels significantly.
  • Context matters in model performance. Different models, like those used in facial recognition compared to military platforms, rely on different aspects of their input, making some easier to disrupt than others.
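As a hedged illustration of "simple optimization without pre-built libraries", the sketch below uses plain random search: it only queries the model and keeps perturbations that lower the true-class confidence. The query budget, noise scale, and epsilon bound are arbitrary illustrative choices, not values from the talk.

```python
# Sketch: a gradient-free attack built from plain random search -- no attack library,
# only repeated queries to the model. All parameters are illustrative.
import torch
import torch.nn.functional as F

def random_search_attack(model, image, true_label, queries=500, sigma=0.05, epsilon=16 / 255):
    """Greedily keep random perturbations that lower the model's true-class confidence."""
    def confidence(delta):
        with torch.no_grad():
            probs = F.softmax(model((image + delta).clamp(0, 1)), dim=1)
        return probs[0, true_label].item()

    best_delta = torch.zeros_like(image)
    best_conf = confidence(best_delta)
    for _ in range(queries):
        candidate = (best_delta + sigma * torch.randn_like(image)).clamp(-epsilon, epsilon)
        cand_conf = confidence(candidate)
        if cand_conf < best_conf:        # keep the change only if confidence drops
            best_delta, best_conf = candidate, cand_conf
    return (image + best_delta).clamp(0, 1)
```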

Lessons Learnt

  • Open source models are vulnerable to simple attacks.
  • Contextual changes can disrupt various AI applications.
  • Security is often overlooked in model evaluation.
  • Hacking AI systems isn't binary.

Lessons on AI Security

  • Organizations need to prioritize AI security.
  • Cybersecurity lessons should be applied to AI security.
  • Understanding AI usage is common, but securing AI is not.
  • Human oversight is still critical in AI environments.
  • Questioning third-party AI security is necessary.
