The Vibe Coder's Complete Security Guide (2026)
Ship secure code without a security background. The complete vibe coder security guide — covering the 5 risks that actually matter and how to fix them fast.
Rod
Founder & Developer
Vibe coding is real and it's here — AI assistants writing most of your codebase while you guide architecture and product decisions. The workflow is genuinely productive. The security problem is real too.
Veracode's 2025 research found that 45% of repositories have at least one security vulnerability. That number doesn't go down because you used an AI to write the code. If anything, it goes up — AI tools optimize for code that works, not code that's secure.
This guide is for vibe coders who want to ship production apps that don't get breached. No security background required. No jargon. Just the five things that matter most and how to fix them.
Why AI Tools Skip Security
Cursor, Claude Code, GitHub Copilot — all of them have the same structural problem. They're trained to generate code that solves the stated problem. Security isn't the stated problem. It's a constraint that nobody wrote into the prompt.
When you ask an AI to "build a user authentication system with Supabase," it will. It'll generate auth routes, session handling, and probably a working login flow. What it won't automatically do: enable Row Level Security on your user table, validate every input before it hits the database, or set the httpOnly flag on session cookies.
The code works. The security is missing.
The fix isn't to prompt better (though that helps). The fix is to add a security review step that doesn't rely on the AI checking its own work.
Risk 1: Secrets in Your Repository
This is the most common critical finding when we scan vibe-coded repos. An API key, a database connection string, or a Stripe secret key — committed to source control, sometimes pushed to a public GitHub repo.
It usually happens like this: you're iterating fast, you hardcode a value to test something, and it ends up in a commit. The AI helped you write the code. It didn't warn you about the secret.
How to find it:
# Scan your git history for secrets (catches things already committed)
trufflehog git file://. --only-verified
# Check your current files for patterns
gitleaks detect --source . --verbose
How to fix it:
Move everything to environment variables. In Next.js:
// BAD: hardcoded secret in source
const stripe = new Stripe("sk_live_abc123...");
// GOOD: loaded from environment variable
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);
Then add a .gitignore entry for .env files and a pre-commit hook that blocks future commits containing secret patterns.
If a secret is already in your git history, you need to rotate it immediately — the history is permanent, even if you delete the file. Then use git filter-repo to remove it from history before your next push.
See the complete exposed API key fix guide for the full rotation and cleanup process.
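For the pre-commit hook mentioned above, here is a minimal sketch in plain sh. The pattern list is illustrative only; a real hook would run a dedicated scanner such as gitleaks protect --staged instead of hand-rolled grep.

```shell
#!/bin/sh
# Hypothetical .git/hooks/pre-commit sketch (make it executable with chmod +x).
# The patterns below are illustrative; a real hook would delegate to
# `gitleaks protect --staged` or trufflehog.
secret_pattern='sk_live_[0-9a-zA-Z]+|AKIA[0-9A-Z]{16}|BEGIN [A-Z ]*PRIVATE KEY'

check_for_secrets() {
  # Exit status 0 means something secret-shaped was found on stdin.
  grep -Eq "$secret_pattern"
}

# Block the commit if the staged diff matches any pattern.
if git diff --cached 2>/dev/null | check_for_secrets; then
  echo "Blocked: staged changes appear to contain a secret. Move it to an env var." >&2
  exit 1
fi
```

The hook only catches the obvious patterns, which is the point: it stops the fast-iteration mistake described above before it reaches git history.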
Risk 2: Missing Database Rules (Supabase RLS)
If you're building on Supabase, this one will get you. Row Level Security (RLS) is the database-level permission system that controls which users can read or modify which rows. Without it, your API is the only thing standing between any user and every other user's data.
AI tools generate Supabase table definitions without enabling RLS by default. The code works fine in development — you don't notice the missing permissions because you're the only user. In production, with real users, anyone can query any row.
How to check your current state:
-- Run this in Supabase SQL editor to see which tables have RLS disabled
SELECT tablename, rowsecurity
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY tablename;
Tables where rowsecurity = false are exposed to anyone who can reach your Supabase URL.
How to fix it:
-- Enable RLS on a table
ALTER TABLE public.orders ENABLE ROW LEVEL SECURITY;
-- Add a policy: users can only see their own orders
CREATE POLICY "Users see own orders"
ON public.orders
FOR SELECT
USING (auth.uid() = user_id);
The Supabase RLS security checklist has the full set of patterns for every operation type (SELECT, INSERT, UPDATE, DELETE).
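One detail worth calling out: SELECT policies use USING, but write-side policies also need WITH CHECK, which validates the row being written. A sketch for the same hypothetical orders table (table and column names follow the example above):

```sql
-- INSERT: WITH CHECK validates the new row,
-- so users can only create orders attributed to themselves.
CREATE POLICY "Users insert own orders"
ON public.orders
FOR INSERT
WITH CHECK (auth.uid() = user_id);

-- UPDATE needs both clauses: USING picks which rows may be
-- targeted, WITH CHECK constrains the new values.
CREATE POLICY "Users update own orders"
ON public.orders
FOR UPDATE
USING (auth.uid() = user_id)
WITH CHECK (auth.uid() = user_id);
```

Without the WITH CHECK clause, a user could insert or rewrite rows that point at someone else's user_id.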
Risk 3: No Input Validation on API Routes
AI-generated API routes typically validate that required fields are present. They don't validate that the values are what they claim to be. That's the gap where injection attacks, mass assignment, and privilege escalation live.
A user submitting {"role": "admin"} shouldn't be able to make themselves an admin. A search query containing ; DROP TABLE users shouldn't execute. These protections require explicit input validation — the kind AI tools skip by default.
The pattern that fixes this:
import { z } from "zod";
// BAD: trusting the request body directly
export async function POST(req: Request) {
const { email, role } = await req.json();
await db.users.update({ email, role }); // mass assignment risk
}
// GOOD: validate and whitelist with Zod
const updateUserSchema = z.object({
email: z.string().email(),
// role is NOT in the schema — users can't change their own role
});
export async function POST(req: Request) {
const body = await req.json();
const parsed = updateUserSchema.safeParse(body);
if (!parsed.success) {
return Response.json({ error: "Invalid input" }, { status: 400 });
}
await db.users.update({ email: parsed.data.email });
}Add Zod to your project and validate every API route that accepts user input. It takes 10 minutes per route and removes an entire class of vulnerabilities.
Risk 4: Missing Security Headers
Security headers are HTTP response headers that tell the browser how to behave. Content Security Policy prevents XSS attacks. HSTS forces HTTPS. X-Frame-Options blocks clickjacking. Most vibe-coded apps don't set any of them.
AI tools generate Next.js apps without security headers because the app works without them. They're optional from the framework's perspective. They're not optional from a security perspective.
Add them to next.config.ts:
const securityHeaders = [
{ key: "X-Frame-Options", value: "SAMEORIGIN" },
{ key: "X-Content-Type-Options", value: "nosniff" },
{ key: "Referrer-Policy", value: "strict-origin-when-cross-origin" },
{
key: "Strict-Transport-Security",
value: "max-age=63072000; includeSubDomains; preload",
},
{
key: "Content-Security-Policy",
value: "default-src 'self'; script-src 'self' 'unsafe-inline';",
},
];
const nextConfig = {
async headers() {
return [
{
source: "/(.*)",
headers: securityHeaders,
},
];
},
};
export default nextConfig;
The complete Next.js security headers guide covers every header, what it does, and how to tune the CSP for common third-party services.
Check your current headers for free: Data Hogo's header scanner tests your deployed URL and shows exactly what's missing.
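You can also spot-check from the terminal: curl the deployed URL and filter the response down to the headers configured above. A small sh helper, where your-app.example is a placeholder:

```shell
# check_security_headers: filter a raw HTTP header dump down to the
# security headers this guide configures.
check_security_headers() {
  grep -iE '^(strict-transport-security|content-security-policy|x-frame-options|x-content-type-options|referrer-policy):'
}

# Usage against a deployed app (placeholder URL):
#   curl -sI https://your-app.example | check_security_headers
```

Any header missing from the output is one the browser never sees, no matter what your config file says locally.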
Risk 5: Vulnerable Dependencies
npm packages get CVEs. A package your AI suggested three months ago might have a known vulnerability today. Most vibe-coded projects never run a dependency audit after the initial setup.
# Check your current dependency vulnerabilities
npm audit
# Fix the ones that have automatic remediation
npm audit fix
# See what's left and needs manual review
npm audit --audit-level=high
npm audit is a start, but it only checks packages against npm's advisory database. The dependency security guide explains why OSV and the GitHub Advisory Database catch additional CVEs that npm audit misses.
The practical step: run npm audit before every production deployment. Treat high and critical findings as blockers. Informational and low findings can go in the backlog.
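One way to make that a blocker rather than a habit is to wire the audit into the deploy flow itself. A minimal sketch for package.json (npm runs predeploy automatically before deploy and aborts on a non-zero exit; the vercel command is a placeholder for whatever your deploy step is):

```json
{
  "scripts": {
    "predeploy": "npm audit --audit-level=high",
    "deploy": "vercel deploy --prod"
  }
}
```

With this in place, a high or critical finding fails the predeploy script and the deploy never runs.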
The Vibe Coder Security Checklist
Before you go live, run through this:
[ ] No secrets in source code — all keys in environment variables
[ ] .env in .gitignore — verify with: grep -F ".env" .gitignore
[ ] RLS enabled on all Supabase tables — check with the SQL query above
[ ] Input validation on every API route that accepts user data
[ ] Security headers configured in next.config.ts
[ ] npm audit shows zero high/critical findings
[ ] Only the Supabase anon key in client-side code — the service role key stays server-only
[ ] No debug mode or verbose logging enabled in production
Five items on this list take under an hour to fix. The RLS piece takes the most time if you have many tables. Start there — it's the highest-risk item for data exposure.
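The anon-key/service-role item comes down to which environment variables reach the browser: Next.js only ships variables prefixed NEXT_PUBLIC_ to client bundles. A hypothetical fail-fast helper for the server-only key (this is not a Supabase API, just a guard; names and messages are illustrative):

```typescript
// Hypothetical helper: call this wherever the Supabase admin client is
// created, so a leak into client code fails loudly instead of silently.
function getServiceRoleKey(env: Record<string, string | undefined>): string {
  // `window` only exists in browser bundles; the service key must not.
  const isBrowser = typeof (globalThis as { window?: unknown }).window !== "undefined";
  if (isBrowser) {
    throw new Error("Service role key requested from client-side code");
  }
  const key = env.SUPABASE_SERVICE_ROLE_KEY;
  if (!key) throw new Error("SUPABASE_SERVICE_ROLE_KEY is not set");
  return key;
}

// In a server-only module you would call: getServiceRoleKey(process.env)
```

Note the variable name has no NEXT_PUBLIC_ prefix, which is what keeps it out of the client bundle in the first place.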
Run a Scan, See Exactly What Needs Fixing
The checklist above covers the most common issues. But every codebase is different. An automated scan finds the specific instances in your specific code, not just the pattern category.
Data Hogo scans your GitHub repo and deployed URL, covers all five risk categories above plus configuration issues, and returns findings in plain English with specific file locations and fix instructions.
Scan your repo free — takes under 5 minutes →
Frequently Asked Questions
Is vibe coding safe for production apps?
Vibe coding — using AI assistants to write most of your code — can be safe for production if you add a security review step. The risk is that AI tools optimize for working code, not secure code. They'll generate functional authentication, database queries, and API integrations that have real vulnerabilities. Running a security scan before going live catches most of these. The underlying code quality depends heavily on how specific your prompts are.
What are the most common security mistakes in AI-generated code?
The most frequent issues we see when scanning AI-generated repos: hardcoded API keys in source files, missing Row Level Security on Supabase tables, no input validation on API routes, missing security headers on deployed apps, and vulnerable dependencies that AI tools suggest without checking their CVE status. These five categories appear in the majority of first-time scans.
How do I check if my vibe-coded app has security issues?
Connect your GitHub repo to Data Hogo and run a free scan. It checks secrets, dependencies, code patterns, configuration, security headers, and database rules — the full surface area. Results come back in minutes with plain-English explanations of each finding and specific steps to fix them. No security background required to understand the output.
Does Cursor or Claude Code automatically write secure code?
No. AI coding assistants like Cursor and Claude Code prioritize generating code that compiles and passes basic functionality tests. They're not trained to catch their own security mistakes. They'll write SQL injection vulnerabilities, skip CORS validation, and suggest npm packages with known CVEs without warning. You need a separate security check — the AI won't flag its own issues.
What's the minimum security checklist for a vibe-coded production app?
Five things cover 80% of the risk: (1) No secrets in your repo — use environment variables and scan git history. (2) Enable RLS on all Supabase tables. (3) Add input validation to every API route that accepts user data. (4) Set security headers in your Next.js config. (5) Run npm audit and update dependencies with known CVEs before launch.
Security for vibe-coded apps isn't about becoming a security expert. It's about catching the specific, predictable mistakes that AI tools make by default. The five risks in this guide appear in the majority of AI-generated repos we've scanned. Fix them, and you're significantly ahead of the baseline.
The scan takes five minutes. The fixes take an afternoon. Ship something you can stand behind.
Related Posts
Why AI Writes Insecure Code: The Vibe Coding Security Problem
The root cause of vibe coding security problems. Why AI coding tools write insecure code — training data, optimization targets, and context limitations explained.
7 Security Vulnerabilities AI Puts in Every Project
AI code assistants ship fast — and ship flawed. Here are 7 security vulnerabilities AI puts in your project, with before/after code examples for each.
The Real Cost of Insecure AI-Generated Code (With Numbers)
What does insecure AI code actually cost? Data breaches, downtime, legal liability, and reputation damage — with real dollar amounts and case studies.