Skip to main content

AI agents are revolutionizing organizational productivity, but they are also becoming a new entry point for attackers. Misconfigured, insufficiently tested, or simply misunderstood, these tools can expose sensitive data and compromise your entire information system.

In this webinar replay, discover:

  • The main threats to AI / LLM applications.
  • Demonstrations of attacks on AI agents.
  • Best practices for protecting yourself against these new threats.
  • Yogosha’s AI / LLM Security Checklist, for testing your systems and AI agents.

 

Webinar Summary

The Main Threats to AI/LLM Applications

In this webinar, Pedro Paniago, a world-renowned security researcher, highlighted 10 major vulnerabilities, based on the OWASP Top 10 for LLM applications standard.

Demonstration of Attacks on AI Agents

Subsequently, he specifically showed through several demos how he carried out several attacks on businesses and banks:

  • Stored XSS Triggered by a Prompt Injection.
  • Manipulation of Interest Rates via Prompt Injection.
  • Excessive Exposure of Sensitive Information in the HTTP Response.
  • Exfiltration of private data via prompt injection leading to full RAG data exfiltration.
  • Exfiltration of confidential information due to excessive agency leading to command IDORs in a chatbot.

With these attacks, he could have:

  • Executed JavaScript code in client sessions.
  • Obtained confidential company information such as passwords and employee salary slips.
  • Manipulated interest rates and caused financial losses for the bank.

How to Protect Your Organization?

To deal with these threats, Pedro Paniago shared several important practices to implement on your AI and LLM bots, apps, and agents:

  • Limit Agent Access to Your Tools: Apply the principle of least privilege by granting your AI agents only the access and permissions strictly necessary to accomplish their tasks.
  • Implement System Prompt Hardening: Use a solid and well-locked system prompt to prevent injections and manipulations. Good prompt engineering is crucial for limiting the attack surface, particularly through prompt isolation, role alignment, keyword detection, and output definition.
  • Apply Rate Limiting: Limit the frequency of requests to AI agents to reduce the risk of abuse, avoid brute-force attacks, and mitigate the non-deterministic nature of LLMs.
  • Limit Prompt Size: A prompt that is too long can confuse the model and make it forget initial instructions. Impose a strict limit on prompt size, both on the front-end and back-end, to prevent complex manipulations.
  • Clean Data Before Indexing: Before indexing data in a RAG system, ensure that all sensitive, private, or confidential information is removed to prevent it from being exfiltrated.
  • Apply Input/Output Controls: Use “guardrail” tools like NeMo Guardrails, LLM Guard, or Guardrails AI to monitor and filter prompts and responses in real-time.

Yogosha’s AI / LLM Security Checklist

Finally, this webinar concludes with Shoaib, Offensive Security Specialist at Yogosha, who quickly presents Yogosha’s AI / LLM Security Checklist, for thorough security testing of your AI systems and agents with experts from our community like Pedro.

About Yogosha

Yogosha is the editor of an Offensive Security Platform designed to industrialize and continuously manage all your offensive security operations, such as pentests and bug bounty programs.