AI & LLM Security

Prompt injection, PII leakage to AI models, unvalidated AI responses, API keys in frontend, output sanitization, and AI rate limiting.

9 vulnerabilities

Prompt Injection

high

User input is concatenated directly into an LLM prompt, letting attackers override your instructions and make the AI do things you never intended.

CWE-77OWASP LLM01:2025

PII Leakage to AI Models

high

Your app sends personally identifiable information — emails, names, passwords, phone numbers — to external AI APIs, exposing user data to third-party model providers.

CWE-359OWASP LLM02:2025

AI Response Without Validation

medium

LLM output is rendered or executed directly without checking whether it matches the expected format or contains harmful content.

CWE-116OWASP LLM02:2025

AI API Key in Frontend

critical

Your OpenAI, Anthropic, or other AI API key is exposed in client-side code, where anyone can steal it and rack up charges on your account.

CWE-312OWASP LLM09:2025

No AI Output Sanitization

medium

LLM-generated HTML or code is rendered directly in the UI without sanitization, opening the door to stored XSS attacks.

CWE-79OWASP LLM02:2025

Excessive AI Context

medium

Your app sends entire database records, config files, or secrets as context to an AI model, exposing far more data than the task requires.

CWE-359OWASP LLM02:2025

AI Model Fallback Insecure

low

When the primary AI model fails, your app silently falls back to a weaker or unvalidated model that bypasses your safety configurations.

CWE-636

No AI Rate Limiting

medium

Your app makes AI API calls with no per-user limits, letting a single user (or bot) trigger thousands of requests and drain your API budget in minutes.

CWE-770OWASP LLM04:2025

AI-Generated Code Execution

critical

Your app uses eval() or Function() to execute code that was generated by an LLM, giving attackers a path to arbitrary code execution via prompt injection.

CWE-95OWASP LLM02:2025