Systems Thinking for Responsible AI
Nimisha Asthagiri from Thoughtworks cuts through the PoC hype to show why you need systems thinking and complexity theory (think the Cynefin framework, the Iceberg model, and causal loop diagrams) to spot and govern the vicious reinforcing loops that lead to algorithmic addiction, burnout, or ethical drift. She uses real-world case studies, from social media misfires to autonomous meeting schedulers, to highlight new AI risks like alignment faking and the importance of defining agents (not just microservices). A sketch of such a reinforcing loop follows below.
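To make the reinforcing-loop idea concrete, here is a minimal sketch (not from the talk) of the dynamic a causal loop diagram captures: an engagement-optimizing recommender amplifying the very behavior it measures. All names and parameter values are illustrative assumptions.

```python
def simulate_engagement_loop(steps: int = 10,
                             engagement: float = 1.0,
                             gain: float = 0.15) -> list[float]:
    """Illustrative reinforcing (positive) feedback loop: each step the
    recommender tunes itself to observed engagement, which in turn raises
    engagement. Left ungoverned, the loop compounds without limit."""
    history = [engagement]
    for _ in range(steps):
        recommendation_strength = gain * engagement  # model adapts to behavior
        engagement += recommendation_strength        # behavior adapts to model
        history.append(engagement)
    return history

if __name__ == "__main__":
    for step, value in enumerate(simulate_engagement_loop()):
        print(f"step {step:2d}: engagement = {value:.2f}")  # grows geometrically
```

The point of the diagrammatic tools is to surface exactly this structure, so a governing (balancing) loop, such as a usage cap or a well-being metric, can be designed in before the vicious cycle takes hold.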
Along the way you’ll pick up multi-agent design patterns (RAG, chain-of-thought, reflection), agent topology trade-offs (orchestration vs. decentralization), and practical tools for explainability (LIME, SHAP; see the sketch below). She wraps up with human-in-the-loop boundaries, governance agents, and a dash of Q&A on the ethics of influencing behavior versus actually solving problems.
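As a taste of the explainability tooling mentioned above, here is a minimal sketch (not from the talk) of per-prediction feature attribution with SHAP, assuming scikit-learn and the `shap` package are installed; the diabetes dataset and random forest are illustrative stand-ins for whatever model you need to explain.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# TreeExplainer attributes a single prediction to per-feature contributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:1])  # shape: (1, n_features)

# Rank features by the magnitude of their contribution to this prediction.
ranked = sorted(zip(data.feature_names, shap_values[0]),
                key=lambda pair: abs(pair[1]), reverse=True)
for name, value in ranked[:5]:
    print(f"{name}: {value:+.3f}")
```

This kind of output is what lets a human-in-the-loop reviewer ask "why did the agent decide that?" rather than taking the model's word for it.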
Watch on YouTube