
What happens when AI codes from vulnerable code

Eric Dupré
January 9, 2026 · 5 min read

We often hear that AI coding assistants improve software quality. But what happens when the code they learn from is already insecure?

To find out, I ran a simple experiment. I built a small application and intentionally seeded it with six well-known vulnerabilities (SQL injection, XSS, open redirect, and more). Then I used an AI coding assistant in "vibe coding" mode to add a new feature, without giving it any security instructions, to see whether it would fix the issues or simply repeat them like a parrot.
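The post doesn't publish the demo app's source, but to make the setup concrete, here is a minimal, hypothetical sketch of the kind of seeded flaw involved: a SQL injection via string concatenation, next to the parameterized version an assistant would ideally produce. Names and schema are illustrative, not taken from the actual experiment.

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # UNSAFE: user input is concatenated into the SQL string.
    # A payload like "x' OR '1'='1" rewrites the WHERE clause
    # and returns every row in the table.
    query = "SELECT id, name FROM users WHERE name = '%s'" % username
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # SAFE: a parameterized query keeps the input as data, not SQL.
    query = "SELECT id, name FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()

# Tiny in-memory database to demonstrate the difference.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_vulnerable(conn, payload)))  # 2 rows: injection leaks the whole table
print(len(find_user_safe(conn, payload)))        # 0 rows: payload treated as a literal name
```

An assistant "vibe coding" a new feature on top of code like `find_user_vulnerable` faces exactly the question of the experiment: does it follow the concatenation pattern it sees, or reach for the parameterized form?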

You will find the results and the prompts I used in the video below:

