Forget "safety myths" in AI! As AI takes the wheel, privacy expert Katharine Jarmul warns that current "guardrails" are flimsy and easily bypassed—meaning senior engineers can't just trust model providers to magically keep things secure. Your AI models are actually super good at "memorizing" sensitive data, which is a huge leak risk.
It's time to ditch one-and-done red-teaming and stop hoping the next model version will fix everything. Instead, we need iterative security testing, locally hosted LLMs, diversified providers, and a culture where security issues are caught before they blow up.
Watch on YouTube