Generative AI adoption is exploding, but most AI agents are running around with too much access, leaving sensitive data at risk. In this talk recorded at NDC Copenhagen, Ashish Jha argues that conventional access control just can’t enforce the nuanced, document-level permissions needed for RAG and agentic AI at scale.
Fine-Grained Authorization (FGA) is the answer: precise data boundaries enforced with tools like OpenFGA and LangChain. From multi-tenant isolation and leak prevention to full audit trails, Jha shows how to handle billions of access decisions without slowing down. Whether you're building internal copilots or customer-facing bots, this is the security upgrade you can't skip.
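To make the pattern concrete, here is a minimal Python sketch (not from the talk) of document-level filtering in a RAG pipeline: documents coming back from a retriever are passed through OpenFGA `check` calls before they ever reach the prompt. The store id, model id, the `viewer` relation, and the `doc_id` metadata key are illustrative assumptions, and the exact `openfga_sdk` imports and configuration parameters may vary by SDK version.

```python
import asyncio

from langchain_core.documents import Document
from openfga_sdk import ClientConfiguration
from openfga_sdk.client import OpenFgaClient
from openfga_sdk.client.models import ClientCheckRequest


async def filter_by_permission(
    user: str, docs: list[Document], fga_client: OpenFgaClient
) -> list[Document]:
    """Keep only the documents the user may view.

    Assumes each retrieved Document stores its FGA object id in
    metadata["doc_id"] (e.g. "document:roadmap") and that the
    authorization model defines a "viewer" relation on "document".
    """
    allowed: list[Document] = []
    for doc in docs:
        response = await fga_client.check(
            ClientCheckRequest(
                user=user,                      # e.g. "user:anne"
                relation="viewer",              # relation from the FGA model
                object=doc.metadata["doc_id"],  # e.g. "document:roadmap"
            )
        )
        if response.allowed:
            allowed.append(doc)
    return allowed


async def main() -> None:
    # Placeholder connection details for your own OpenFGA deployment.
    configuration = ClientConfiguration(
        api_url="http://localhost:8080",
        store_id="YOUR_STORE_ID",
        authorization_model_id="YOUR_MODEL_ID",
    )
    # Stand-ins for documents returned by a vector-store retriever.
    retrieved = [
        Document(page_content="Q3 roadmap...", metadata={"doc_id": "document:roadmap"}),
        Document(page_content="Payroll data...", metadata={"doc_id": "document:payroll"}),
    ]
    async with OpenFgaClient(configuration) as fga_client:
        visible = await filter_by_permission("user:anne", retrieved, fga_client)
    # Only the permitted documents would be passed on to the LLM prompt.
    for doc in visible:
        print(doc.metadata["doc_id"])


if __name__ == "__main__":
    asyncio.run(main())
```

Filtering after retrieval keeps the vector store unchanged; an alternative is to pre-filter by pushing the allowed object ids into the retriever's metadata filter, which avoids fetching documents the user can never see.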
Watch on YouTube