7 Security Vulnerabilities AI Puts in Every Project
AI code assistants ship fast — and ship flawed. Here are 7 security vulnerabilities AI puts in your project, with before/after code examples for each.
Rod
Founder & Developer
AI vulnerabilities aren't hypothetical. We scanned 200+ repos built with Cursor, Copilot, and ChatGPT over the past three months and found the same seven issues appearing in almost every one. Veracode's 2025 State of Software Security report found that 45% of AI-generated code contains at least one security flaw. Our data suggests that's conservative.
The reason this happens isn't that AI is careless. It's that AI optimizes for code that works, not code that's secure. Those are different goals.
Here are the seven vulnerabilities. Each one includes the bad pattern AI generates, why it's dangerous, and the fix.
1. Hardcoded Secrets
This is the most common AI vulnerability we see. You ask an AI to "add Stripe integration" or "connect to my database" and it generates working code with a placeholder that looks like this:
// BAD: AI fills in working credentials from context or uses obvious placeholders
const stripe = new Stripe("sk_live_abcdef123456", {
apiVersion: "2024-12-18",
});

The problem compounds when you accept the suggestion, it works, and you commit without noticing the key is hardcoded. Bots scan every public push to GitHub within seconds. GitGuardian's 2025 report found an average response time of 4 minutes from a key being pushed to it being used maliciously.
// GOOD: Environment variable, never the raw value
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!, {
apiVersion: "2024-12-18",
});

Every API client initialization in AI-generated code deserves a second look.
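One habit that makes the env-var pattern stick is validating required secrets at startup. A minimal sketch (requireEnv and the variable name are illustrative, not from any library):

```typescript
// Sketch: read a required secret from the environment, failing fast at
// boot if it's missing instead of surfacing later as a confusing 401.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Hypothetical usage: the key lives in a gitignored .env, never in source.
// const stripe = new Stripe(requireEnv("STRIPE_SECRET_KEY"), { ... });
```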
2. SQL Injection via String Concatenation
AI knows about parameterized queries. It also generates string concatenation constantly, because that's what it learned from millions of StackOverflow answers.
// BAD: User input goes straight into the query string
const query = `SELECT * FROM users WHERE email = '${userEmail}'`;
const result = await db.query(query);

An attacker sends userEmail = "' OR '1'='1" and gets your entire users table. This is a textbook SQL injection attack — and AI generates it constantly.
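You can see the breakage without touching a database: print what the concatenated string becomes when the attacker controls the input.

```typescript
// Demonstration: the payload's leading quote closes the string literal,
// and the OR clause makes the WHERE condition true for every row.
const userEmail = "' OR '1'='1";
const query = `SELECT * FROM users WHERE email = '${userEmail}'`;
console.log(query);
// → SELECT * FROM users WHERE email = '' OR '1'='1'
```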
// GOOD: Parameterized query — the database driver handles escaping
const result = await db.query(
"SELECT * FROM users WHERE email = $1",
[userEmail]
);

If you're using Supabase or Prisma, their query builders handle this correctly. The danger zone is any time AI generates raw SQL strings.
3. Missing Authentication on API Routes
AI generates the happy path. "Create an endpoint that returns user data" produces a working endpoint that returns user data — and doesn't check who's asking.
// BAD: Returns any user's data to anyone who asks
export async function GET(req: Request) {
const { userId } = await req.json();
const user = await db.users.findById(userId);
return Response.json(user);
}

This exposes every user's data to unauthenticated requests. Anyone who finds the endpoint URL can enumerate your entire user base.
// GOOD: Auth check before any data access
export async function GET(req: Request) {
const session = await getServerSession(authOptions);
if (!session) {
return Response.json({ error: "Unauthorized" }, { status: 401 });
}
// Only return data for the authenticated user
const user = await db.users.findById(session.user.id);
return Response.json(user);
}

Every API route needs an auth check as its first line. AI doesn't add this unless you explicitly ask for it — and sometimes not even then.
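One way to make the check hard to forget is a wrapper that every route passes through. This is a sketch, not next-auth's API; getSession here is a placeholder you'd back with your real session lookup (such as getServerSession):

```typescript
// Sketch: centralize the auth check so individual routes can't skip it.
type Session = { user: { id: string } };
type Handler = (req: Request, session: Session) => Promise<Response>;

function withAuth(
  handler: Handler,
  getSession: (req: Request) => Promise<Session | null>
) {
  return async (req: Request): Promise<Response> => {
    const session = await getSession(req);
    if (!session) {
      // No session: reject before any data access happens
      return Response.json({ error: "Unauthorized" }, { status: 401 });
    }
    return handler(req, session);
  };
}
```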
4. Path Traversal
When AI generates file upload or file serving code, it often trusts the filename from the request. This lets an attacker read files they shouldn't.
// BAD: User controls the file path
import fs from "fs";
import path from "path";
export async function GET(req: Request) {
const { filename } = await req.json();
const filePath = path.join("/uploads", filename);
const content = fs.readFileSync(filePath, "utf-8");
return new Response(content);
}

Attacker sends filename = "../../etc/passwd" and gets your server's password file. Or "../../.env" and gets all your secrets.
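Node's path module makes the attack easy to see: path.resolve collapses the ../ segments, which is exactly what a prefix check catches.

```typescript
import path from "path";

// Demonstration: the attacker-supplied filename resolves to a path
// completely outside the uploads directory.
const uploadsDir = path.resolve("/uploads");
const attack = path.resolve(uploadsDir, "../../etc/passwd");
console.log(attack); // → /etc/passwd
console.log(attack.startsWith(uploadsDir)); // → false, so a prefix check rejects it
```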
// GOOD: Sanitize the path and confirm it stays in the allowed directory
import fs from "fs";
import path from "path";
export async function GET(req: Request) {
// GET requests have no body, so read the filename from the query string
const { searchParams } = new URL(req.url);
const filename = searchParams.get("filename") ?? "";
const uploadsDir = path.resolve("/uploads");
const requestedPath = path.resolve(uploadsDir, filename);
// Block any path that escapes the uploads directory
// (the trailing separator also rejects sibling dirs like /uploads-evil)
if (!requestedPath.startsWith(uploadsDir + path.sep)) {
return Response.json({ error: "Invalid path" }, { status: 400 });
}
const content = fs.readFileSync(requestedPath, "utf-8");
return new Response(content);
}

5. Cross-Site Scripting (XSS) via dangerouslySetInnerHTML
AI generates dangerouslySetInnerHTML often — it's the quick solution to rendering HTML content from a database or API. The name tells you it's dangerous. AI doesn't care.
// BAD: Renders raw HTML from the database directly into the DOM
function PostContent({ html }: { html: string }) {
return <div dangerouslySetInnerHTML={{ __html: html }} />;
}

If an attacker can store HTML in your database (through a comment, a post, a profile bio), they can store a <script> tag too. That tag runs in every victim's browser.
// GOOD: Sanitize HTML before rendering it
import DOMPurify from "isomorphic-dompurify";
function PostContent({ html }: { html: string }) {
const clean = DOMPurify.sanitize(html);
return <div dangerouslySetInnerHTML={{ __html: clean }} />;
}

Better option: store Markdown, render it with a safe Markdown renderer. Avoid raw HTML storage when possible.
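If the content is plain user text rather than rich HTML, entity escaping alone neutralizes a stored payload. A minimal sketch (a maintained sanitizer is still the safer default when some HTML must survive):

```typescript
// Sketch: escape the characters that let text break out into markup.
// Order matters: & must be escaped first or it would double-escape.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

const stored = `<script>alert(document.cookie)</script>`;
console.log(escapeHtml(stored));
// → &lt;script&gt;alert(document.cookie)&lt;/script&gt;
```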
6. CORS Misconfiguration — * Origin
AI sets Access-Control-Allow-Origin: * because it makes everything work immediately. It also makes everything work for attackers from any origin.
// BAD: Any website on the internet can make requests to your API
export async function middleware(request: NextRequest) {
const response = NextResponse.next();
response.headers.set("Access-Control-Allow-Origin", "*");
response.headers.set("Access-Control-Allow-Credentials", "true");
return response;
}

Wildcard origin with Allow-Credentials: true is particularly bad. Browsers refuse to honor the literal * alongside credentials, but servers that reflect whatever Origin the request sends recreate the same exposure and let untrusted sites make credential-carrying cross-origin requests.
// GOOD: Restrict CORS to your own domain
const ALLOWED_ORIGINS = [
"https://yourdomain.com",
process.env.NODE_ENV === "development" ? "http://localhost:3000" : "",
].filter(Boolean);
export async function middleware(request: NextRequest) {
const origin = request.headers.get("origin") ?? "";
const response = NextResponse.next();
if (ALLOWED_ORIGINS.includes(origin)) {
response.headers.set("Access-Control-Allow-Origin", origin);
}
return response;
}

7. Vulnerable Dependencies
This one isn't AI generating bad code — it's AI recommending outdated packages. When you ask "how do I parse JWT tokens in Node.js," AI often suggests jsonwebtoken version patterns from its training data, which may be months or years old.
Vulnerabilities are discovered in packages all the time. The package.json you froze six months ago does not have the same risk profile as the packages you would install today.
# Check what you actually have
npm audit
# Or let Data Hogo scan dependencies against the OSV database
# for a more complete picture than npm audit alone

The npm advisory database has entries for thousands of packages. Running npm audit takes 10 seconds. Not running it and shipping a critical CVE in a dependency takes considerably longer to clean up.
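When npm audit flags a transitive dependency you don't install directly, npm's overrides field in package.json (supported since npm 8.3) can force a patched version while you wait for the direct dependency to update. The package name and version below are placeholders:

```json
{
  "overrides": {
    "some-vulnerable-package": "^2.0.1"
  }
}
```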
How to Catch These Before They Ship
Reading this list is useful. Catching these in your actual codebase is better.
Most of these vulnerabilities are invisible during development — the code works fine, the tests pass, and nothing breaks until it does. That's what makes them dangerous.
Data Hogo scans for all seven of these patterns across your repo with 250+ detection rules. It catches hardcoded secrets, injection patterns, missing auth checks, path traversal, XSS sinks, CORS misconfigurations, and vulnerable dependencies in a single scan. You get a finding for each one with the exact file and line, and a plain-English explanation of what's wrong.
Scan your repo free — see what AI left behind →
For a deeper look at how these vulnerabilities appear in specific frameworks, read the common vulnerabilities in AI-generated code breakdown. If you're using Cursor specifically, the Cursor code security scan guide covers tool-specific patterns.
Frequently Asked Questions
Does AI-generated code have security vulnerabilities?
Yes. Multiple studies have found that between 40% and 45% of AI-generated code contains at least one security vulnerability. The most common issues are hardcoded secrets, SQL injection via string concatenation, missing authentication checks, and path traversal vulnerabilities. AI models optimize for code that runs correctly, not code that runs securely.
Is Copilot code secure?
GitHub Copilot generates functional code but does not guarantee secure code. A 2023 Stanford study found that developers using an AI assistant wrote more vulnerable code than those who didn't, partly because the AI autocompletes patterns that work without security considerations. GitHub's own documentation recommends reviewing all Copilot suggestions for security.
What is the most common vulnerability in AI-generated code?
Hardcoded secrets (API keys, passwords, tokens placed directly in source code) and SQL injection via string concatenation are the most common. Both happen because AI models learn from public code that contains these anti-patterns, and then reproduce them in new code.
How do I check if AI-generated code has security issues?
Run a static analysis scan on your repo. Data Hogo scans for the 7 vulnerability types described in this post — secrets, injection, broken auth, path traversal, XSS, CORS misconfiguration, and dependency vulnerabilities — across your full codebase with 250+ detection rules.
Can AI fix its own security vulnerabilities?
Partially. If you point out the specific vulnerability and explain what's wrong, AI assistants can generate a corrected version. The problem is knowing what to point out in the first place — which requires either a security review or an automated scan first.
The uncomfortable truth: AI makes you ship faster, but it doesn't make you ship more securely. The seven patterns above appear in projects from experienced developers who know better — because they accepted a suggestion, it worked, and they moved on. A quick scan before you deploy costs nothing. What comes back might surprise you.
Related Posts
Why AI Writes Insecure Code: The Vibe Coding Security Problem
The root cause of vibe coding security problems. Why AI coding tools write insecure code — training data, optimization targets, and context limitations explained.
The Real Cost of Insecure AI-Generated Code (With Numbers)
What does insecure AI code actually cost? Data breaches, downtime, legal liability, and reputation damage — with real dollar amounts and case studies.
The Vibe Coder's Complete Security Guide (2026)
Ship secure code without a security background. The complete vibe coder security guide — covering the 5 risks that actually matter and how to fix them fast.