Your AI Agents Are Overprivileged: The Case for Fine-Grained Authorization
Generative AI is booming, but most agents still rely on coarse access controls that can’t enforce contextual, document-level permissions. This leaves sensitive data exposed and enterprises at risk.
In his NDC Copenhagen talk, Ashish Jha makes the case for Fine-Grained Authorization (FGA) as the way to lock down RAG and agentic AI systems. He walks through real-world examples using OpenFGA and LangChain, showing how to achieve multi-tenant isolation, prevent data leaks, provide audit trails, and scale to billions of access checks without slowing down. It's a must-win security challenge for any AI builder.
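To make the idea concrete, here is a minimal sketch (not taken from the talk) of one common FGA pattern for RAG: post-retrieval filtering, where every chunk a vector store returns is checked against OpenFGA before it reaches the LLM. The server URL, store and model IDs, the `viewer` relation, and the `doc_id` metadata key are all assumptions for illustration; the SDK calls follow OpenFGA's documented Python client.

```python
import asyncio

from langchain_core.documents import Document
from openfga_sdk import ClientConfiguration
from openfga_sdk.client import OpenFgaClient
from openfga_sdk.client.models import ClientCheckRequest

# Assumed deployment details -- replace with your own OpenFGA store.
FGA_CONFIG = ClientConfiguration(
    api_url="http://localhost:8080",         # assumption: local OpenFGA server
    store_id="YOUR_STORE_ID",                # assumption: placeholder ID
    authorization_model_id="YOUR_MODEL_ID",  # assumption: placeholder ID
)


async def filter_authorized(user_id: str, docs: list[Document]) -> list[Document]:
    """Return only the retrieved chunks this user is allowed to view.

    Assumes each chunk's metadata carries a 'doc_id' and the FGA model
    defines a 'viewer' relation on the 'document' type.
    """
    async with OpenFgaClient(FGA_CONFIG) as fga:
        allowed: list[Document] = []
        for doc in docs:
            resp = await fga.check(
                ClientCheckRequest(
                    user=f"user:{user_id}",
                    relation="viewer",
                    object=f"document:{doc.metadata['doc_id']}",
                )
            )
            if resp.allowed:
                allowed.append(doc)
        return allowed


async def main() -> None:
    # Stand-in for vector-store output; in a real pipeline these come
    # from retriever.invoke(query) before the LLM ever sees them.
    candidates = [
        Document(page_content="Q3 revenue...", metadata={"doc_id": "finance-q3"}),
        Document(page_content="Public FAQ...", metadata={"doc_id": "faq"}),
    ]
    safe = await filter_authorized("anne", candidates)
    print([d.metadata["doc_id"] for d in safe])


if __name__ == "__main__":
    asyncio.run(main())
```

One check per chunk is the simplest version of the pattern; at the scale the talk discusses, OpenFGA's batch-check and list-objects APIs are the usual routes to filtering large candidate sets without a round trip per document.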
Watch on YouTube