Your fancy AI coding assistant? It might just be Prompt-Jacking you! This talk dives into a new supply chain risk where seemingly helpful AI tools, like Cursor, can be turned against you through sneaky hidden text, leading to stuff like arbitrary code execution or data leaks. Yikes!
Get ready to learn how malicious prompts can quietly infect your entire codebase and agentic AIs, spreading like a digital virus. But don't despair! The session promises to reveal practical strategies to defend yourself and keep your AI-powered development safe and sound.
Watch on YouTube
Top comments (0)