Why LinkedIn Treats AI Agents as Infrastructure
LinkedIn’s AI dream team (shout-out to Karthik Ramgopal and Prince Valluri) joined the InfoQ Podcast to bust the “one-off hermit dev” myth and explain why they treat AI agents as core infrastructure. They walk through their platform engineering playbook, from defining intent with specs and sandboxes to using the Model Context Protocol (MCP) to keep background and foreground agents in sync, secure, and observable.
Along the way, they dig into RAG-based context solutions, the human-in-the-loop review process, and their top tips for scaling AI safely—think compliance, auditing, and real-world best practices. If you’re an engineering leader or architect, this episode is a blueprint for making AI agents rock-solid in production.
Watch on YouTube