Systems Thinking & Complexity Theory for AI Architects throws a spotlight on the hidden dangers of spinning up AI PoCs without a clear plan for their ripple effects. Thoughtworks’ Nimisha Asthagiri cuts through the hype with frameworks like Cynefin and the Iceberg Model, teaching you to spot and tame those nasty reinforcing loops that lead to burnout, algorithmic addiction, and skewed ethics.
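To make the reinforcing-loop idea concrete, here is a minimal sketch (not from the talk; the gain and starting values are illustrative assumptions) of the social-media dynamic where engagement feeds recommendations, which feed more engagement:

```python
# Minimal sketch of a reinforcing (R) feedback loop: more engagement
# -> more recommendations -> more engagement. Numbers are illustrative.

def simulate_reinforcing_loop(steps: int, gain: float = 0.5, start: float = 1.0):
    """Each step, engagement feeds back into itself via the gain."""
    engagement = start
    history = [engagement]
    for _ in range(steps):
        engagement += gain * engagement  # the reinforcing loop at work
        history.append(engagement)
    return history

history = simulate_reinforcing_loop(steps=10)
# Without a balancing loop (a limit, a guardrail), growth is unbounded --
# the structural root of burnout and algorithmic addiction patterns.
print(f"start={history[0]:.1f}, after 10 steps={history[-1]:.1f}")
```

The point of the sketch: the runaway behavior lives in the loop structure, not in any single step, which is exactly what Causal Loop Diagrams are meant to surface.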
Using social-media nightmares and meeting-scheduler bots as case studies, she walks you through Causal Loop Diagrams, multi-agent design patterns, and the orchestration vs. decentralization tradeoff. You’ll also pick up practical tips—think LIME, SHAP, human-in-the-loop guardrails—to keep your AI agents honest, explainable, and aligned with real-world needs.
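One of those guardrails can be sketched in a few lines. This is a hypothetical illustration (the names, thresholds, and scheduler-bot scenario are assumptions, not code from the talk) of a human-in-the-loop pattern: low-confidence actions are escalated to a reviewer instead of auto-executed:

```python
# Sketch of a human-in-the-loop guardrail: an agent's proposed action
# is auto-executed only above a confidence threshold; otherwise a
# human reviewer decides. All names/thresholds are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    description: str
    confidence: float  # model's self-reported confidence in [0, 1]

def guarded_execute(action: ProposedAction,
                    execute: Callable[[ProposedAction], str],
                    ask_human: Callable[[ProposedAction], bool],
                    threshold: float = 0.9) -> str:
    """Auto-execute only when confident; otherwise defer to a human."""
    if action.confidence >= threshold:
        return execute(action)
    if ask_human(action):  # escalated; reviewer approves or rejects
        return execute(action)
    return f"rejected: {action.description}"

# Example: a meeting-scheduler bot proposes a change at low confidence.
result = guarded_execute(
    ProposedAction("reschedule standup to 9am", confidence=0.55),
    execute=lambda a: f"executed: {a.description}",
    ask_human=lambda a: False,  # reviewer declines in this run
)
print(result)
```

The design choice is the threshold: it is the balancing loop that keeps the agent's autonomy from becoming its own reinforcing loop.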
Watch on YouTube