Blog
Application Security

What happens when AI codes from vulnerable code

Eric Dupré
January 9, 2026
5 min read

We often hear that AI coding assistants improve software quality. But what happens when the code they learn from is already insecure?

To find out, I ran a simple experiment. I built a small application and intentionally left in six well-known vulnerabilities (SQL injection, XSS, open redirect, and more). Then I used an AI coding assistant in “vibe coding” mode to add a new feature, without giving it any security instructions, to see whether it would fix the issues… or simply repeat them like a parrot.
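To make the setup concrete, here is a minimal sketch of the kind of flaw planted in the test app, using SQL injection as the example. The article does not show its actual code, so the function names and schema below are hypothetical; the point is the contrast between string-concatenated SQL and a parameterized query.

```python
import sqlite3

# Hypothetical illustration of a planted SQL injection flaw (not the
# article's actual application code).

def find_user_vulnerable(conn, username):
    # Vulnerable: user input is concatenated straight into the SQL string,
    # so a crafted input can rewrite the query.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Fixed: a parameterized query; the driver treats the input as data only.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
leaked = find_user_vulnerable(conn, payload)  # the OR clause matches every row
safe = find_user_safe(conn, payload)          # no user is literally named "x' OR '1'='1"
```

An assistant adding a feature next to `find_user_vulnerable` faces exactly the question in the experiment: imitate the concatenation pattern it sees, or switch to the parameterized form.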

You will find the results and the prompts I used in the video below:

