Dehumanizing Agents: Why Explainability Is Crucial in the LLM Era
Lucía Conde-Moreno opens by reminding us that because AI learns from our flawed, biased data, it too can be biased and confidently wrong. Traditional explainable-AI techniques helped crack open the black box of earlier machine-learning models, but LLMs, with their polished prose and penchant for inventing facts, break those older methods.
In her NDC Copenhagen talk, she traces a journey from classic ML to cutting-edge generative AI, laying out both established and novel explainability approaches from her own research. Expect insights into real-world risks, clever workarounds for tricky challenges, and practical tips on weaving transparent explanations into LLM workflows, even when you're stuck with opaque third-party services.
Watch on YouTube