Systems thinking meets AI mayhem
Nimisha Asthagiri from Thoughtworks cuts through the hype around multi-agent AI by showing how proofs of concept (PoCs) built without a strategy for unintended consequences can spin out of control. Using frameworks like Cynefin and the Iceberg Model (plus real-world mishaps from social media algorithms to overzealous meeting schedulers), she teaches you to spot and govern the sneaky reinforcing loops that drive algorithmic addiction, burnout, and ethical slip-ups.
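The reinforcing-loop dynamic is easy to see in code. Here's a minimal sketch (not from the talk; every name and number is illustrative) of the engagement loop behind "algorithmic addiction": a recommender feeds users more of whatever already hooks them, which raises engagement, which strengthens the same recommendations.

```python
# Illustrative sketch of a reinforcing (R-type) loop: more engagement ->
# more similar recommendations -> more engagement. All values are made up.

def simulate_engagement_loop(steps: int = 10,
                             engagement: float = 1.0,
                             gain: float = 0.3) -> list[float]:
    """Each step, the system recommends more of whatever drove engagement,
    and engagement grows in proportion — the loop only amplifies itself."""
    history = [engagement]
    for _ in range(steps):
        recommendations = engagement          # more engagement -> more similar content
        engagement += gain * recommendations  # more similar content -> more engagement
        history.append(engagement)
    return history

if __name__ == "__main__":
    for step, value in enumerate(simulate_engagement_loop()):
        print(f"step {step:2d}: engagement = {value:7.2f}")  # exponential growth
```

Add a balancing term (a cap, a cost, a governance check) and the curve flattens, which is exactly the kind of intervention these systems-thinking frameworks are meant to surface.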
From causal diagrams to governance hacks
You’ll get a quick primer on Causal Loop Diagrams, agent design patterns (RAG, Chain-of-Thought, Reflection), and the big orchestration vs. decentralization trade-off. It all wraps up with tools like LIME and SHAP for explainability, plus tips on human-in-the-loop checkpoints and governance agents so your next AI system plays nice with real people.
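To make the checkpoint idea concrete, here's a minimal sketch (all names hypothetical, not from the talk) of a human-in-the-loop gate: a governance wrapper that intercepts an agent's proposed action and requires explicit approval before anything irreversible actually runs.

```python
# Hypothetical human-in-the-loop checkpoint: reversible actions run
# unattended, irreversible ones wait for a human. Names are illustrative.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    description: str                 # human-readable summary of the action
    reversible: bool                 # reversible actions may run unattended
    execute: Callable[[], None]      # the effect, only run once approved

def governed_run(action: ProposedAction,
                 approve: Callable[[str], bool]) -> bool:
    """Run the action only if it is reversible or a human approves it."""
    if action.reversible or approve(action.description):
        action.execute()
        return True
    print(f"Blocked: {action.description}")
    return False

if __name__ == "__main__":
    # Stand-in for the "overzealous meeting scheduler" mishap:
    # mass-sending invites is irreversible enough to warrant a checkpoint.
    action = ProposedAction(
        description="Send 30 meeting invites for Friday 7am",
        reversible=False,
        execute=lambda: print("Invites sent."),
    )
    governed_run(
        action,
        approve=lambda desc: input(f"Allow '{desc}'? [y/N] ").lower() == "y",
    )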
Watch on YouTube