AI-Generated Code Execution
Your app uses eval() or Function() to execute code that was generated by an LLM, giving attackers a path to arbitrary code execution via prompt injection.
How It Works
Some apps ask an AI to generate JavaScript that gets executed directly with eval() or new Function(). If an attacker can influence the prompt (via their own input or indirect injection), they can craft a prompt that makes the AI output malicious code — which your app then executes with full server privileges. This is Remote Code Execution via a very roundabout path.
// BAD: executing AI-generated code with eval()
const aiCode = await askAI(`Write a JS function to filter: ${userFilter}`);
// If userFilter was injected, aiCode could be malicious
const result = eval(aiCode); // Full code execution// GOOD: never execute AI output as code — use structured outputs instead
// Ask the AI for data/config, not executable code
const filterConfig = await askAI(`Return JSON filter config for: ${sanitizedFilter}`);
const config = JSON.parse(filterConfig); // parse as data
const result = applyFilter(data, config); // run your own trusted functionReal-World Example
AI coding assistants that generate and auto-run code locally are particularly at risk. A 2024 proof-of-concept showed that by embedding malicious instructions in a code comment within a file being analyzed, an attacker could get a coding AI to generate and execute a reverse shell.
How to Prevent It
- Never use eval() or new Function() on AI-generated output under any circumstances
- Redesign to use structured data outputs (JSON configs, parameters) instead of executable code from AI
- If you absolutely must run AI-generated code, use an isolated sandbox (Docker, WebAssembly, vm2) with no network/filesystem access
- Validate all AI-generated code against an allowlist of safe operations before any execution
- Treat AI-generated code with the same distrust as user input — because with prompt injection, it is user input
Affected Technologies
Data Hogo detects this vulnerability automatically.
Scan Your Repo FreeRelated Vulnerabilities
Prompt Injection
highUser input is concatenated directly into an LLM prompt, letting attackers override your instructions and make the AI do things you never intended.
PII Leakage to AI Models
highYour app sends personally identifiable information — emails, names, passwords, phone numbers — to external AI APIs, exposing user data to third-party model providers.
AI Response Without Validation
mediumLLM output is rendered or executed directly without checking whether it matches the expected format or contains harmful content.
AI API Key in Frontend
criticalYour OpenAI, Anthropic, or other AI API key is exposed in client-side code, where anyone can steal it and rack up charges on your account.