Systems Thinking & Complexity Theory for AI Architects
Nimisha Asthagiri ditches the PoC treadmill and digs into why today's autonomous, self-learning multi-agent systems breed chaos if you don't watch your feedback loops. She walks you through hands-on frameworks like Cynefin and the Iceberg model, plus causal flow diagrams, to spot and govern the reinforcing loops behind nasty side effects, from algorithmic addiction and burnout to outright ethical misalignment.
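If you want a feel for what a reinforcing loop looks like before watching, here is a minimal sketch (my own toy illustration, not code from the talk): a recommender amplifies whatever already gets engagement, so engagement compounds until a balancing limit, finite user attention, saturates it. Every name and parameter below is hypothetical.

```python
# Toy systems-dynamics model (hypothetical, not from the talk): a reinforcing
# engagement loop damped by a balancing limit on user attention.
def simulate_engagement_loop(steps: int = 20, gain: float = 0.4,
                             attention_cap: float = 100.0) -> list[float]:
    engagement = 1.0
    history = []
    for _ in range(steps):
        # Reinforcing link: more engagement -> more exposure -> more engagement.
        boost = gain * engagement
        # Balancing link: growth slows as engagement nears the attention cap.
        engagement += boost * (1.0 - engagement / attention_cap)
        history.append(round(engagement, 2))
    return history

if __name__ == "__main__":
    # Early steps grow almost exponentially; later steps flatten at the cap.
    print(simulate_engagement_loop())
```

Drop the balancing term and the same loop grows without bound: that runaway dynamic is exactly what the talk says to look for in a causal flow diagram.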
Along the way you get real-world case studies (social media gone wrong, a rogue meeting scheduler), a tour of multi-agent design patterns (RAG, chain-of-thought, reflection), a comparison of orchestrated vs. decentralized topologies, and explainability tools like LIME and SHAP. By the end, you'll be sketching your own CFD, drawing crisp architectural boundaries for human-in-the-loop governance, and thinking twice before unleashing your next AI agent on the world.
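And for the explainability tools it name-drops, here is a generic SHAP usage sketch: the standard library pattern, not material from the session, with a placeholder model and dataset I picked for the demo.

```python
# Minimal SHAP walkthrough on a placeholder model.
# Requires: pip install shap scikit-learn
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor

# Placeholder data and model, chosen only for illustration.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.Explainer(model)       # auto-dispatches to TreeExplainer here
shap_values = explainer(X.iloc[:200])   # per-feature attributions for 200 rows

# Global view: which features drive predictions, and in which direction.
shap.plots.beeswarm(shap_values)
```

LIME follows a similar shape (fit a local surrogate around one prediction), but per-instance rather than via game-theoretic attributions across the dataset.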
Watch on YouTube