AI API Key in Frontend
Your OpenAI, Anthropic, or other AI API key is exposed in client-side code, where anyone can steal it and rack up charges on your account.
How It Works
API keys for AI providers are essentially credit cards with a spending limit. If you bundle OPENAI_API_KEY or ANTHROPIC_API_KEY into your frontend JavaScript, anyone who opens DevTools can copy it, use it to make API calls, and drain your balance. In Next.js, any env var without the NEXT_PUBLIC_ prefix stays server-side — but developers often add the prefix by mistake or call AI APIs directly from client components.
// BAD: calling OpenAI directly from a React component
// NEXT_PUBLIC_ prefix exposes this to the browser
const client = new OpenAI({ apiKey: process.env.NEXT_PUBLIC_OPENAI_API_KEY });

// GOOD: proxy all AI calls through your own API route
// app/api/ai/route.ts — server-side only, key never reaches the browser
import { OpenAI } from 'openai';
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY }); // no NEXT_PUBLIC_
export async function POST(req: Request) {
  // add auth check here before calling the AI
  const { prompt } = await req.json();
  const completion = await client.chat.completions.create({
    model: 'gpt-4o-mini', // model name is illustrative
    messages: [{ role: 'user', content: prompt }],
  });
  return Response.json({ text: completion.choices[0].message.content });
}
Real-World Example
This is one of the most common findings in vibe-coded apps. Developers scaffold a Next.js app, add NEXT_PUBLIC_OPENAI_API_KEY to their .env, and ship it. Keys get scraped from public GitHub repos within minutes by automated bots. OpenAI's key scanner catches some of these, but not all.
How to Prevent It
- Never use NEXT_PUBLIC_ prefix for AI API keys — they become visible in the browser bundle
- Always proxy AI API calls through a server-side route (/api/ai) where you control auth and rate limiting
- Keep AI API keys in .gitignore'd .env files, and rotate them immediately if one is ever exposed
- Set spending limits and usage alerts on your AI provider dashboard
- Use Gitleaks or Data Hogo to scan your repo for exposed keys before every deploy
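The pre-deploy scan in the last point can be approximated with a few lines of TypeScript. This is a minimal sketch, not Gitleaks' or Data Hogo's actual ruleset: the regexes below are illustrative key shapes, and findLeaks / KEY_PATTERNS are names invented for this example.

```typescript
// Illustrative secret-scan sketch: flags lines that look like they contain
// an AI provider secret key or a browser-exposed NEXT_PUBLIC_ AI variable.
const KEY_PATTERNS: RegExp[] = [
  /sk-[A-Za-z0-9_-]{20,}/,                         // OpenAI-style secret key
  /sk-ant-[A-Za-z0-9_-]{20,}/,                     // Anthropic-style secret key
  /NEXT_PUBLIC_\w*(OPENAI|ANTHROPIC|API_KEY)\w*/,  // key exposed to the browser bundle
];

function findLeaks(source: string): string[] {
  const hits: string[] = [];
  for (const line of source.split('\n')) {
    // a line is reported once even if it matches several patterns
    if (KEY_PATTERNS.some((p) => p.test(line))) hits.push(line.trim());
  }
  return hits;
}
```

Running this over your source tree in CI and failing the build when findLeaks returns anything gives you a cheap last line of defense; a dedicated scanner will catch far more patterns and entropy-based leaks.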
Data Hogo detects this vulnerability automatically.
Related Vulnerabilities
Prompt Injection (high)
User input is concatenated directly into an LLM prompt, letting attackers override your instructions and make the AI do things you never intended.
PII Leakage to AI Models (high)
Your app sends personally identifiable information — emails, names, passwords, phone numbers — to external AI APIs, exposing user data to third-party model providers.
AI Response Without Validation (medium)
LLM output is rendered or executed directly without checking whether it matches the expected format or contains harmful content.
No AI Output Sanitization (medium)
LLM-generated HTML or code is rendered directly in the UI without sanitization, opening the door to stored XSS attacks.