Ever tried letting AI fix your security bugs? Spyros Gasteratos recounts the hilarious, and often frustrating, journey of getting LLMs to auto-remediate vulnerable code. Turns out the AI's first brilliant idea was to just delete the problem function; other attempts produced epic 200-line refactors for simple SQL injections, or blithely dismissed serious threats as "false positives."
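For context on why a 200-line refactor is overkill: a "simple SQL injection" usually needs only a one-or-two-line fix. A minimal illustrative sketch (hypothetical functions, using Python's `sqlite3`) contrasting the vulnerable pattern with the parameterized fix:

```python
import sqlite3

# In-memory database with one row, just for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_vulnerable(name: str):
    # Vulnerable: attacker-controlled input is interpolated into the SQL string.
    return conn.execute(f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_fixed(name: str):
    # The fix: a parameterized query; the driver handles escaping.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # injection succeeds: returns all rows
print(find_user_fixed(payload))       # injection fails: returns no rows
```

The entire remediation is swapping string interpolation for a placeholder, which is why a sprawling AI-generated refactor misses the point.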
Forget simple prompting; they learned the hard way that practical AI code-fixing demands serious muscle. We're talking constraint-based planning, real developer feedback, and even multi-agent AI systems that literally argue amongst themselves before touching a single line of code. It's a wild ride from good intentions to humbling reality, showing what it really takes for AI to earn developers' trust.
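The "agents arguing before touching code" idea can be sketched as a propose-and-critique loop. This is a toy illustration (all names and constraints hypothetical, with stubs standing in for LLM calls): a proposer offers candidate patches and a critic rejects those that violate constraints, echoing the failure modes above.

```python
def proposer(finding: str) -> list[dict]:
    # Stub standing in for an LLM: candidate patches it might propose
    # for a SQL injection finding, including the bad ideas from above.
    return [
        {"action": "delete_function", "lines_changed": 12},
        {"action": "refactor_module", "lines_changed": 200},
        {"action": "parameterize_query", "lines_changed": 2},
    ]

def critic(patch: dict) -> bool:
    # Constraint-based review: never delete code, and keep the diff small.
    if patch["action"] == "delete_function":
        return False
    return patch["lines_changed"] <= 10

def remediate(finding: str):
    # Only a patch that survives the critic's objections gets applied.
    accepted = [p for p in proposer(finding) if critic(p)]
    return accepted[0] if accepted else None

print(remediate("SQL injection in get_user()"))
```

The design point: the critic's constraints encode what a human reviewer would reject, so bad patches are argued away before any code changes.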
Watch on YouTube