Forget amusing chatbot tricks! Prompt injection has leveled up, becoming a serious security nightmare now that AI systems are embedded in critical CI/CD pipelines and given privileged access. Picture this: AI agents, trusted with your system's sensitive tools, suddenly running unauthorized commands or siphoning off credentials because of a sneaky prompt.
This isn't hypothetical; Aikido Security uncovered critical vulnerabilities in GitHub Actions, even affecting giants like Google, enabling command execution and credential theft. This talk explores why this new class of threat is super dangerous and way harder to solve than it seems, drawing parallels to how we once underestimated SQL injection.
Watch on YouTube
Top comments (0)