As AI makes the jump to full automation, privacy and security concerns are escalating with it. The "guardrails" built into today's models are surprisingly easy to bypass, and senior engineers can't simply cross their fingers and hope model providers sort out the privacy issues on their own.
It's time to bust some myths: models really do memorize sensitive data, and neither one-off red-teaming nor waiting for the next update will cut it. What's needed instead is a proactive, interdisciplinary approach to risk, one that builds a culture of psychological safety and, where it makes sense, embraces local LLMs and diversifies across providers to lock things down.
Watch on YouTube