Your AI Agents Are Overprivileged: The Case for Fine-Grained Authorization
Generative AI is everywhere, but that magic can backfire if your agents have carte blanche access to sensitive data. Traditional access controls just don't cut it when you need context-aware, document-level permissions at enterprise scale.
In this NDC Copenhagen talk, Ashish Jha shows how Fine-Grained Authorization (FGA) plus tools like OpenFGA and LangChain can lock down your RAG and agentic AI systems. You’ll learn to isolate tenants, prevent data leaks, and audit every request—while still handling billions of access checks without breaking a sweat.
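One common way to apply FGA to a RAG pipeline, as discussed in the talk, is to run an authorization check on every retrieved chunk before it reaches the model. The sketch below illustrates that pattern in plain Python with an in-memory relationship-tuple store; the `TupleStore` class and `filter_retrieved` helper are illustrative stand-ins, not the actual OpenFGA or LangChain APIs, which delegate these checks to a dedicated authorization service.

```python
# Sketch of FGA-style document-level filtering for RAG retrieval.
# Assumption: a production system would delegate check() to an
# authorization service such as OpenFGA rather than an in-memory set.
from dataclasses import dataclass, field


@dataclass
class TupleStore:
    """Holds (user, relation, object) relationship tuples."""
    tuples: set = field(default_factory=set)

    def write(self, user: str, relation: str, obj: str) -> None:
        self.tuples.add((user, relation, obj))

    def check(self, user: str, relation: str, obj: str) -> bool:
        # Direct-match check only; real FGA engines also resolve
        # relation rewrites (e.g. editors are implicitly viewers).
        return (user, relation, obj) in self.tuples


def filter_retrieved(store: TupleStore, user: str, docs: list) -> list:
    """Drop retrieved chunks the user is not authorized to view."""
    return [d for d in docs if store.check(user, "viewer", f"doc:{d['id']}")]


store = TupleStore()
store.write("user:anne", "viewer", "doc:roadmap")

retrieved = [
    {"id": "roadmap", "text": "Q3 launch plan"},
    {"id": "payroll", "text": "confidential salary data"},
]

# Only doc:roadmap survives the per-chunk authorization check,
# so the payroll chunk never reaches the model's context window.
print(filter_retrieved(store, "user:anne", retrieved))
```

Because the check runs per document and per user, the same pipeline can serve multiple tenants from one index: each tenant's tuples only grant access to that tenant's documents, which is the isolation property the talk highlights.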
Watch on YouTube