AI Response Without Validation
LLM output is rendered or executed directly without checking whether it matches the expected format or contains harmful content.
How It Works
LLMs are non-deterministic — they can return unexpected formats, hallucinated data, or even adversarially crafted content if the model was compromised or the prompt was injected. Rendering that output directly into a UI or database without validation can lead to XSS, data corruption, or logic errors in your app.
// BAD: AI response rendered directly into the DOM
const result = await openai.chat.completions.create({ messages });
const aiText = result.choices[0].message.content;
document.getElementById('output').innerHTML = aiText; // XSS if AI returns <script>

// GOOD: sanitize AI output before rendering
import DOMPurify from 'dompurify';
const result = await openai.chat.completions.create({ messages });
const aiText = result.choices[0].message.content ?? '';
// Sanitize before touching the DOM
document.getElementById('output').innerHTML = DOMPurify.sanitize(aiText);
Real-World Example
Researchers demonstrated in 2024 that indirect prompt injection through web content could cause AI browser assistants to exfiltrate data: malicious instructions embedded in pages the AI was summarizing shaped its output, which was then rendered without sanitization.
How to Prevent It
- Always sanitize AI-generated HTML with DOMPurify or equivalent before rendering
- Validate that AI output matches your expected schema (use Zod if you expect JSON)
- Use innerText instead of innerHTML when you only need to display text
- If the AI response doesn't match expected patterns, fail gracefully rather than rendering it
- Never execute AI-generated code without a sandbox — see CWE-95
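The schema-validation and fail-gracefully advice above can be sketched in TypeScript. This example assumes the model was asked to return JSON of the shape `{"label": string, "score": number}`; the field names are illustrative, not a fixed API, and in a real project a library like Zod would replace the hand-rolled checks.

```typescript
// Expected shape of the AI response (illustrative assumption).
interface Classification {
  label: string;
  score: number;
}

// Parse and validate raw model output. Returns null on any mismatch
// so the caller can fail gracefully instead of rendering bad data.
function parseAiResponse(raw: string): Classification | null {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    return null; // not valid JSON at all
  }
  if (typeof data !== 'object' || data === null) return null;
  const obj = data as Record<string, unknown>;
  if (typeof obj.label !== 'string') return null;
  if (typeof obj.score !== 'number' || obj.score < 0 || obj.score > 1) return null;
  return { label: obj.label, score: obj.score };
}
```

A caller would render the result only when `parseAiResponse` returns a non-null value, and show a fallback message (or retry the request) when it returns null.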
Related Vulnerabilities
Prompt Injection
High: User input is concatenated directly into an LLM prompt, letting attackers override your instructions and make the AI do things you never intended.
PII Leakage to AI Models
High: Your app sends personally identifiable information — emails, names, passwords, phone numbers — to external AI APIs, exposing user data to third-party model providers.
AI API Key in Frontend
Critical: Your OpenAI, Anthropic, or other AI API key is exposed in client-side code, where anyone can steal it and rack up charges on your account.
No AI Output Sanitization
Medium: LLM-generated HTML or code is rendered directly in the UI without sanitization, opening the door to stored XSS attacks.