Ever wondered how to truly mess with AI? Kasimir Schulz & Kenneth Yeung are spilling the beans on their year-long quest to break Large Language Models (LLMs), from both sides of the fence. This talk is all about the real deal in AI security, not just your grandma's image classification!
They'll reveal insane tricks like exploiting control tokens (they even hacked Google's Gemini!), getting an LLM to pop a shell from a simple image, and how sneaky insider threats can plant backdoors. Get ready to learn how to permanently jailbreak an LLM in mere minutes with just a CPU, letting you make any AI say exactly what you want!
Watch on YouTube
Top comments (0)