Forget what you heard: AI's "safety" guardrails are flimsier than you think, especially as automation ramps up. Privacy expert Katharine Jarmul busts myths, revealing why trusting model providers for security is a big nope and how models often "memorize" your sensitive data.
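Want to see what "memorize" means in practice? Here's a minimal sketch of a canary probe, a standard memorization test: plant a unique secret string in the training data, then sample completions of its prefix and see how often the model spits the secret back. Everything below (the canary values, the `toy_model` stand-in) is hypothetical illustration, not anything from the episode:

```python
# Canary-extraction probe: does the model complete a planted secret from its prefix?
# Hypothetical setup -- `toy_model` stands in for whatever model client you actually use.

CANARY_PREFIX = "The backup passphrase is"
CANARY_SECRET = "jade-otter-9174"  # unique string planted in the training data

def leak_rate(complete, prefix: str, secret: str, n_samples: int = 20) -> float:
    """Sample completions of `prefix`; return the fraction that reproduce `secret`."""
    leaks = sum(secret in complete(prefix) for _ in range(n_samples))
    return leaks / n_samples

if __name__ == "__main__":
    # Worst-case stub: a model that fully memorized the canary. Swap in a real call.
    def toy_model(prompt: str) -> str:
        return prompt + " " + CANARY_SECRET

    rate = leak_rate(toy_model, CANARY_PREFIX, CANARY_SECRET)
    print(f"canary leak rate: {rate:.0%}")  # well above chance => memorization
```

A leak rate meaningfully above random chance means your "private" training data is one well-crafted prompt away from walking out the door.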
Seriously, things like one-off red-teaming or hoping the next AI update will magically fix security issues just won't cut it. Engineers need to get proactive: understand the architecture, embrace threat modeling, and maybe even look into running LLMs locally so sensitive data never leaves infrastructure they control.
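If the local-LLM route sounds appealing, it's cheap to try. Here's a minimal sketch assuming an Ollama server on its default port with a `llama3` model already pulled (both are my assumptions, not something the episode prescribes):

```python
import requests

# Minimal local-LLM call: the prompt never leaves your machine.
# Assumes an Ollama server on localhost:11434 with `llama3` already pulled.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local(prompt: str, model: str = "llama3") -> str:
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    # Sensitive context stays on localhost instead of going to a third-party API.
    print(ask_local("Summarize this incident report without repeating any IDs: ..."))
```

The same idea applies to llama.cpp, vLLM, or any other self-hosted runtime: the security win isn't the specific tool, it's keeping the data path inside infrastructure you control.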
Watch on YouTube