Cursor Code Security Scan 2026: 50 Repos Analyzed
I ran a cursor code security scan on 50 public GitHub repos built with Cursor AI. Here's the exact breakdown of findings — and how to scan your own repo free.
Rod
Founder & Developer
I use Cursor in my own projects. I like it. This post isn't a hit piece.
It's an experiment. I wanted to know what a cursor code security scan turns up on real, public repositories — not a toy example, not a vendor's cherry-picked demo. So in January 2026, I scanned 50 public GitHub repos that were built with Cursor and documented every finding category.
Here's what I found.
How I Ran the Cursor Code Security Scan
I searched GitHub for repos with clear Cursor signals: a .cursor/ directory in the root, a .cursorrules file, or an explicit "Built with Cursor" mention in the README. Some had commit messages referencing Cursor prompts directly.
The 50 repos in the sample are all public, all on GitHub, and span a mix of project types — SaaS MVPs, portfolio apps, and weekend side projects. Most are Next.js or React-based, because that's what Cursor's user base tends to build. None are enterprise projects — this skews toward the indie developer and solo founder audience.
Since Data Hogo scans repos connected to your own GitHub account, I forked each of the 50 repos into my account before running them through the scanner. That also meant I could see each project the way any developer would — git clone, open it up, poke around before the automated scan even started.
I ran each fork through Data Hogo, which runs five parallel checks: secrets detection (Gitleaks + custom regex patterns), dependency auditing (npm audit + OSV database), code pattern analysis (Semgrep with 250+ security rules), configuration file review, and URL/header analysis.
Limitations worth stating upfront: 50 repos is a small sample. Public repos may skew toward early-stage projects where security is deprioritized. Forking captures the repo at a point in time — the original authors may have fixed issues since. This is a snapshot, not a longitudinal study.
The Results: What I Found Across 50 Cursor Repos
Industry context first: the Veracode State of Software Security 2025 report found that 45% of AI-generated code contains at least one vulnerability — a finding covered in more depth in my earlier post on vibe coding security risks. In December 2025, Tenzai researchers found an average of 69 vulnerabilities across 5 popular AI coding tools. I wanted to see what that looked like specifically in Cursor repos.
Here's the breakdown across the 50 repos I scanned:
| Finding Category | Repos Affected | % of Sample | Typical Severity |
|---|---|---|---|
| Hardcoded secrets / exposed API keys | 31 | 62% | Critical |
| Insecure dependencies (known CVEs) | 28 | 56% | High |
| Missing security headers | 41 | 82% | Medium |
| Missing or bypassed auth checks | 14 | 28% | High |
The overall picture: 44 out of 50 repos (88%) had at least one finding that would score as Medium severity or higher. Only 6 repos came back clean across all five scan engines.
Security scores across the sample clustered in the 40-65 range. A handful of repos scored above 75. None scored above 90.
Want to know where your repo lands? Scan your repo free — it takes 60 seconds.
Finding #1: Exposed Secrets Were the Most Common Problem
62% of the repos had at least one hardcoded secret. That was the most common finding by a wide margin, and the one that surprised me most: the pattern behind it was strikingly consistent.
Cursor doesn't have global context about your repo's .env setup. It generates code based on what's in the current file. So when you ask it to add an OpenAI integration, it reaches for the most direct path — which often looks like this:
```typescript
// What Cursor frequently generates — do not ship this
export async function POST(req: Request) {
  const openai = new OpenAI({ apiKey: "sk-proj-xxxxxxxxxxxxxxxxxxxxxxxx" });
  const completion = await openai.chat.completions.create({ ... });
  return Response.json(completion);
}
```

The fix is straightforward. Cursor will write it correctly if the right context is already in your file — but without it, it defaults to inline:
```typescript
// What it should look like
export async function POST(req: Request) {
  const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
  const completion = await openai.chat.completions.create({ ... });
  return Response.json(completion);
}
```

If you've already committed a secret to a public repo, changing your .env isn't enough. The old value is in Git history. Rotate the key in your provider's dashboard first, then purge it from history with BFG Repo-Cleaner.
Finding #2: Dependency Vulnerabilities No One Noticed
56% of the repos had at least one dependency with a known vulnerability. This one is sneaky because the code itself looks fine — the problem is in package.json.
Cursor suggests packages based on patterns it learned during training. If it was trained on data referencing a package version that's now known to be vulnerable, it'll suggest that version and you won't know. npm install runs, everything works, and you ship a dependency that has a documented attack vector.
What makes this worse: a lot of these repos had package-lock.json committed with no sign that anyone had run npm audit after the initial scaffold. The AI set up the project, it worked, the developer moved on.
Running npm audit --audit-level=high takes 3 seconds. It doesn't catch everything — supply chain attacks and logic vulnerabilities don't always show up there — but it's a fast first pass.
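To automate that first pass in CI, you can parse the JSON report npm emits. The sketch below assumes the `metadata.vulnerabilities` shape that `npm audit --json` prints on npm 7+; the fail-on-high threshold mirrors `--audit-level=high` but the policy choice is mine:

```typescript
// Sketch: decide whether an `npm audit --json` report should fail a build.
// The `metadata.vulnerabilities` counts are part of npm's JSON output (npm 7+);
// the threshold policy below is a choice, not a standard.
interface AuditReport {
  metadata: {
    vulnerabilities: {
      info: number;
      low: number;
      moderate: number;
      high: number;
      critical: number;
      total: number;
    };
  };
}

export function shouldFailBuild(report: AuditReport): boolean {
  const v = report.metadata.vulnerabilities;
  // Mirror `npm audit --audit-level=high`: block only on high or critical.
  return v.high > 0 || v.critical > 0;
}
```

Pipe `npm audit --json` through `JSON.parse` and hand the result to this in a pre-push hook or CI step: moderate findings still get logged, but only high and critical ones block the ship.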
What Cursor Gets Right
This section isn't a token gesture. I mean it.
Across the 50 repos, Cursor produced consistently clean code structure. The component organization was logical. The API route patterns were sane. Variable naming was readable. In simple CRUD operations — which is most of what these apps did — I didn't find logic vulnerabilities. Cursor doesn't introduce SQL injection in ORMs, doesn't generate obvious XSS sinks in React output, and doesn't write eval() calls or other high-risk patterns in the contexts I saw.
The security problems it produces are almost entirely at integration points: where your app talks to an external service, where secrets are needed, where authentication needs to wrap a route. Those are exactly the spots where context matters most — and where Cursor has the least of it.
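The auth seam is the easiest one to close structurally. One approach is a wrapper that forces the check before any handler runs, so a route can't silently skip it. In this sketch, `getSession` is a stand-in for whatever your auth library actually provides (NextAuth's `auth()`, Supabase's `getUser()`, a custom JWT check), not a real import:

```typescript
// Sketch: a wrapper that forces an auth check before any route handler runs.
// `getSession` is a placeholder for your auth library's session lookup.
type Session = { userId: string };
type Handler = (req: Request, session: Session) => Promise<Response>;

async function getSession(req: Request): Promise<Session | null> {
  // Placeholder logic: swap in your provider's real session verification.
  const token = req.headers.get("authorization");
  return token ? { userId: "demo" } : null;
}

export function withAuth(handler: Handler) {
  return async (req: Request): Promise<Response> => {
    const session = await getSession(req);
    if (!session) {
      // Reject before the handler ever executes.
      return Response.json({ error: "unauthorized" }, { status: 401 });
    }
    return handler(req, session);
  };
}

// Usage in a route file:
// export const POST = withAuth(async (req, session) => { ... });
```

The point is that the check lives in one place: when Cursor generates a new route, you wrap it in `withAuth` instead of trusting the generated code to remember the session lookup.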
That's a tractable problem. It means you don't need to audit every line Cursor writes. You need to check the integration seams. Start with the quick wins: check your security headers — the complete Next.js security headers guide covers the fix — verify your .env isn't exposed, and if you're using Supabase, run through the Supabase RLS security checklist.
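For the headers quick win, a minimal Next.js `headers()` config looks roughly like this. Treat it as a baseline, not a complete policy; the exact set your app needs (CSP in particular) depends on what you serve, and the linked guide goes deeper:

```typescript
// Sketch: baseline security headers in next.config.ts.
// A starting set only; tune and extend per your app's needs.
const securityHeaders = [
  { key: "X-Frame-Options", value: "DENY" },
  { key: "X-Content-Type-Options", value: "nosniff" },
  { key: "Referrer-Policy", value: "strict-origin-when-cross-origin" },
  { key: "Strict-Transport-Security", value: "max-age=63072000; includeSubDomains" },
];

const nextConfig = {
  async headers() {
    // Apply the baseline set to every route.
    return [{ source: "/(.*)", headers: securityHeaders }];
  },
};

export default nextConfig;
```

Dropping this into a fresh Cursor-built Next.js project takes a minute and moves it out of the 82% of repos in my sample that shipped with no security headers at all.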
How to Run This Same Scan on Your Own Repo
You can run the same cursor code security scan I did. Here's the process, step by step — takes under two minutes.
Step 1: Go to datahogo.com and connect your GitHub account using the standard OAuth flow. We request read-only access to your code.
Step 2: Select the repo you want to scan. Public repos work on the free plan.
Step 3: Start the scan. Data Hogo runs five checks in parallel — secrets, dependencies, code patterns, config files, and headers. Takes about 60 seconds for a typical Next.js project.
Step 4: Read your security score and findings list. Every finding comes with a plain-English explanation: what it is, why it matters, and specifically what to change. No CVSS scores, no jargon walls.
Step 5: Fix the critical findings first. Rotated secrets and updated dependencies move the score the most, the fastest.
The free plan gives you 3 scans per month on 1 public repo. No credit card required.
Frequently Asked Questions
Is code written by Cursor AI secure?
Cursor can write secure code, but it frequently doesn't — especially around secrets management and dependency selection. In the repos I scanned, over half had at least one exposed secret or a dependency with a known vulnerability. Cursor is a genuinely useful tool; its output still needs a security pass before you ship anything that handles real user data.
How do I scan a GitHub repo for security vulnerabilities?
Connect your repo to a static analysis tool that covers the full surface area — secrets, dependencies, code patterns, and configuration. Data Hogo does this in under 60 seconds and the free plan covers 3 scans per month on a public repository. You get a security score and a prioritized findings list with plain-English explanations.
What security issues does AI-generated code commonly have?
The most common issues are hardcoded API keys and secrets, outdated dependencies with known CVEs, missing authentication checks on API routes, and absent security headers. These match the findings from Veracode's 2025 research and the OWASP Top 10 — AI tools didn't invent new problems, they just make the old ones easier to ship.
Can Cursor write code with exposed API keys?
Yes. Cursor doesn't have global awareness of your repo's .env setup — it generates code based on the current file's context. When you ask it to integrate an API, it often inlines the key directly rather than referencing an environment variable. At 62% of repos affected, this was the most common finding in my scan.
How do I find hardcoded secrets in a public repo?
Run a secrets scanner like Gitleaks against your repo, or use a tool like Data Hogo that combines secrets detection with dependency auditing and code pattern analysis in a single scan. Check the full Git history too — deleting a secret from a file doesn't remove it from older commits. If a key was ever in a public repo, treat it as compromised and rotate it.
What is a good security score for a GitHub repository?
A score above 80 (on a 0-100 scale) is a reasonable target for anything in production. Most Cursor-built repos I scanned landed in the 40-65 range before remediation. Rotating exposed secrets and updating vulnerable dependencies alone often moves the score 20+ points.
Your first scan is free. No credit card. No sales call.
Related Posts
7 AI Code Vulnerabilities That Show Up in Almost Every Repo
The most common AI code vulnerabilities explained with real examples. See what Cursor, Copilot, and ChatGPT keep putting in your code — find them fast.
Vibe Coding Security Risks in 2026: 45% of AI Code Has Flaws
45% of AI-generated code has at least one vulnerability. Here are the 5 most common vibe coding security risks — and how to scan your repo free in 60 seconds.