AI & LLM Security
Prompt injection, PII leakage to AI models, unvalidated AI responses, API keys in frontend, output sanitization, and AI rate limiting.
9 vulnerabilities
Prompt Injection
highUser input is concatenated directly into an LLM prompt, letting attackers override your instructions and make the AI do things you never intended.
PII Leakage to AI Models
highYour app sends personally identifiable information — emails, names, passwords, phone numbers — to external AI APIs, exposing user data to third-party model providers.
AI Response Without Validation
mediumLLM output is rendered or executed directly without checking whether it matches the expected format or contains harmful content.
AI API Key in Frontend
criticalYour OpenAI, Anthropic, or other AI API key is exposed in client-side code, where anyone can steal it and rack up charges on your account.
No AI Output Sanitization
mediumLLM-generated HTML or code is rendered directly in the UI without sanitization, opening the door to stored XSS attacks.
Excessive AI Context
mediumYour app sends entire database records, config files, or secrets as context to an AI model, exposing far more data than the task requires.
AI Model Fallback Insecure
lowWhen the primary AI model fails, your app silently falls back to a weaker or unvalidated model that bypasses your safety configurations.
No AI Rate Limiting
mediumYour app makes AI API calls with no per-user limits, letting a single user (or bot) trigger thousands of requests and drain your API budget in minutes.
AI-Generated Code Execution
criticalYour app uses eval() or Function() to execute code that was generated by an LLM, giving attackers a path to arbitrary code execution via prompt injection.