AI's big leap into full automation brings a truckload of privacy and security headaches, largely because those supposed "guardrails" are easier to bypass than you'd think. It turns out large language models are pretty good at memorizing sensitive data, and engineers can't just cross their fingers and hope model providers solve the privacy problem for them.
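One practical takeaway: if you can't trust the provider to protect sensitive data, keep it from leaving your infrastructure in the first place. Here's a minimal sketch of that idea in Python; the `scrub_pii` helper and its regex patterns are illustrative assumptions, not anything from the talk, and a real system would use a proper PII detector rather than two regexes.

```python
import re

# Hypothetical patterns for two common PII types; a real deployment
# would use a dedicated detector (e.g. NER-based), not regexes alone.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace recognizable PII with placeholder tokens before the
    prompt is ever sent to a third-party model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Email jane.doe@example.com about case 123-45-6789."
print(scrub_pii(prompt))
# -> "Email [EMAIL] about case [SSN]."
```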
Forget quick fixes and one-and-done red-teaming. The talk debunks several AI safety myths and urges a proactive approach instead: build a strong security culture, threat-model iteratively, consider local LLMs, and diversify your AI providers to get a grip on these escalating risks (a simple version of that last idea is sketched below).
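To make the diversification point concrete, here is a small Python sketch of a provider-agnostic fallback layer. The `local_llm` and `hosted_llm` functions are stand-ins I've invented for this example; in practice each would wrap a real client (a locally hosted model plus one or two hosted APIs) behind the same signature, and sensitive workloads could list the local model first.

```python
from typing import Callable

# A provider is anything that maps a prompt string to a completion string.
Provider = Callable[[str], str]

def local_llm(prompt: str) -> str:
    # Stand-in for a locally hosted model; simulate an outage here.
    raise RuntimeError("local model offline")

def hosted_llm(prompt: str) -> str:
    # Stand-in for a hosted API client.
    return f"(hosted) answer to: {prompt}"

def complete(prompt: str, providers: list[Provider]) -> str:
    """Try each provider in order so no single vendor becomes a hard
    dependency or a single point of failure."""
    errors: list[Exception] = []
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # sketch only; narrow this in real code
            errors.append(exc)
    raise RuntimeError(f"all providers failed: {errors}")

print(complete("Summarize the incident report.", [local_llm, hosted_llm]))
```

The point isn't the fallback logic itself but the seam it creates: once every model sits behind one interface, swapping or adding providers is a config change rather than a rewrite.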
Watch on YouTube