TL;DR
Generative AI is flipping traditional code reuse on its head: instead of pulling in a library, you can whip up fresh snippets with a prompt. Yet studies warn that models trained on vulnerable open-source code tend to reproduce those same security bugs. Stranger still, developers often trust AI-generated code more than their own or their colleagues', and that misplaced confidence, combined with sharply increased code velocity, risks shipping more hidden flaws than ever.
On top of insecure output, GenAI tools carry their own baggage: jailbreak exploits, data-poisoning attacks, malicious agents, runaway recursive learning loops (models trained on their own generated output), and potential IP infringement. Niels Tanis draws on real academic findings to map out these dangers and dishes out strategies for keeping your AI-supercharged projects both powerful and safe.
Watch on YouTube