We often hear that AI coding assistants improve software quality. But what happens when the code they learn from is already insecure?
To find out, I ran a simple experiment. I built a small application and deliberately planted six well-known vulnerabilities in it (SQL injection, XSS, open redirect, and more). Then I asked an AI coding assistant, in "vibe coding" mode and with no security instructions whatsoever, to add a new feature, to see whether it would fix the issues… or simply repeat them like a parrot.
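To make the setup concrete, here is a minimal, hypothetical sketch of the kind of flaw I mean (this is not the actual code from my experiment): a small Flask endpoint that builds a SQL query by string concatenation, the textbook SQL injection pattern an assistant working in the same file might happily imitate.

```python
# Hypothetical illustration only -- not the experiment's real code.
import sqlite3

from flask import Flask, request

app = Flask(__name__)


@app.route("/user")
def get_user():
    name = request.args.get("name", "")
    conn = sqlite3.connect("app.db")  # assumed local SQLite database
    # Vulnerable: attacker-controlled input is spliced into the SQL string.
    # For example, ?name=' OR '1'='1 makes the WHERE clause always true
    # and dumps every row in the table.
    rows = conn.execute(
        "SELECT id, name FROM users WHERE name = '" + name + "'"
    ).fetchall()
    conn.close()
    return {"users": rows}

# The safe version uses a parameterized query instead:
#   conn.execute("SELECT id, name FROM users WHERE name = ?", (name,))
```

The interesting question is whether the assistant, seeing code like this in context, reaches for the parameterized form on its own or copies the concatenation pattern into the new feature.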
You will find the results and the prompts I used in the video below:
