Your AI Agents Are Overprivileged: The Case for Fine-Grained Authorization
Generative AI is booming, but most systems still use coarse-grained access controls that let agents roam free over sensitive data. In this talk from NDC Copenhagen, Ashish Jha argues that without document- and context-level permissions, enterprises risk data leaks, multi-tenant bleed, and compliance nightmares.
You’ll see how Fine-Grained Authorization (FGA) tools like OpenFGA, integrated with frameworks such as LangChain, lock down RAG and agentic AI pipelines: enforcing strict per-document and per-tenant data boundaries, auditing every access decision, and scaling to billions of authorization checks without slowing your apps. Whether you’re building internal copilots or customer-facing bots, this is your blueprint for winning the security battle.
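The core idea behind FGA-protected RAG is to store explicit relationship tuples (user, relation, object) and filter retrieved chunks against them before anything reaches the LLM. Here is a minimal toy sketch of that pattern; the in-memory tuple store, `Doc` class, and `filter_results` helper are illustrative inventions for this post, not OpenFGA's actual API (a real deployment would call an OpenFGA server's check endpoint instead).

```python
from dataclasses import dataclass

# Relationship tuples: (user, relation, object) — the core FGA data model.
# In a real system these would live in an OpenFGA store, not in memory.
TUPLES = {
    ("user:alice", "viewer", "doc:q3-forecast"),
    ("user:alice", "viewer", "doc:handbook"),
    ("user:bob", "viewer", "doc:handbook"),
}

def check(user: str, relation: str, obj: str) -> bool:
    """Allow access only if an explicit tuple grants it (deny by default)."""
    return (user, relation, obj) in TUPLES

@dataclass
class Doc:
    id: str
    text: str

def filter_results(user: str, retrieved: list[Doc]) -> list[Doc]:
    """Drop retrieved chunks the user cannot view before they reach the LLM."""
    return [d for d in retrieved if check(user, "viewer", f"doc:{d.id}")]

# Two chunks come back from vector search; authorization trims them per user.
retrieved = [Doc("q3-forecast", "Revenue up 12%"), Doc("handbook", "PTO policy")]
print([d.id for d in filter_results("user:bob", retrieved)])  # ['handbook']
```

Filtering post-retrieval like this keeps the vector index shared across tenants while guaranteeing each user only ever sees documents an explicit tuple grants them.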
Watch on YouTube