
OWASP A09 Logging and Monitoring Guide

OWASP A09 is why breaches go undetected for 204 days on average. Learn what to log, what never to log, and how to fix the silent failures in your app.

Rod

Founder & Developer

The average time to detect a security breach is 204 days, according to IBM's Cost of a Data Breach Report. Think about that for a second. An attacker can live inside your application for six months before anyone notices. The reason this happens isn't sophisticated evasion — it's OWASP A09:2021 Security Logging and Monitoring Failures. No logs, no alerts, no way to know something is wrong.

Most security categories in the OWASP Top 10 are about preventing attacks. A09 is different. It's about what happens after something goes wrong. When your logging fails, every other security control fails too — because there's no record that anyone tried anything, no signal that something unusual is happening, and no way to reconstruct what an attacker actually did.

This guide covers the specific logging failures that show up in real codebases — the ones we see when we scan repos at Data Hogo — and how to fix them.


What OWASP A09 Logging and Monitoring Failures Look Like in Practice

This is the OWASP category that developers think about least, usually not until something has already gone wrong. Here's what it actually covers.

No Logging of Failed Authentication

Your app probably logs something when things break. Does it log when someone fails to log in? Most apps don't.

A single failed login is normal — a user mistyped their password. Fifty failed logins against the same account in two minutes is a credential stuffing attack. Without a log entry for every failure, you can't tell the difference. The attack happens silently, and if it succeeds, the successful login looks completely normal.

This is insufficient security logging — the most common A09 finding we see.

Logging Sensitive Data

This one cuts the other way: you're logging too much. Specifically, you're logging data that should never touch a log file.

The most common offender is logging the entire request body during debugging and never removing it:

// BAD: Logs password in plaintext — anyone with log access can steal credentials
app.post('/login', (req, res) => {
  console.log('Login attempt:', req.body); // { email: "user@example.com", password: "hunter2" }
  // ...
});

We've found this pattern in production codebases that were actively running. The developer added the log during development and pushed it. It's been sitting there for months, writing every user's password to stdout.

The console.log of sensitive data problem is broader than passwords. Full user objects contain tokens. Request headers contain Authorization values. Environment variables dumped to logs contain API keys. Any of these in your log file — which might be stored in plaintext, indexed by your logging provider, and accessible to multiple team members — is a real exposure.
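One defensive layer is a redaction pass that scrubs known-sensitive keys before anything is written. Here's a minimal sketch in plain JavaScript; the key list is illustrative, and note that Pino also ships a built-in `redact` option if you'd rather not hand-roll this:

```javascript
// Sketch: recursively replace values of known-sensitive keys before logging.
// The SENSITIVE_KEYS list is an example — extend it for your stack.
const SENSITIVE_KEYS = ['password', 'token', 'authorization', 'secret', 'apikey'];

function redact(obj) {
  if (obj === null || typeof obj !== 'object') return obj;
  const out = Array.isArray(obj) ? [] : {};
  for (const [key, value] of Object.entries(obj)) {
    out[key] = SENSITIVE_KEYS.includes(key.toLowerCase())
      ? '[REDACTED]'
      : redact(value); // recurse into nested objects and arrays
  }
  return out;
}

const entry = redact({
  event: 'login_attempt',
  email: 'user@example.com',
  password: 'hunter2',
  headers: { Authorization: 'Bearer abc123' },
});
// entry.password and entry.headers.Authorization are now '[REDACTED]'
```

A pass like this is a safety net, not a substitute for logging only what you need in the first place.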

Log Injection

This one most developers have never heard of. Log injection happens when you build log strings by concatenating user-supplied input, and an attacker includes newlines in their input to forge fake log entries.

// BAD: Attacker controls the 'username' value
// If username = "alice\nINFO 2026-02-28T10:00:00Z event=login_success user=admin"
// ...the log file now contains a fake "admin logged in" entry
logger.info('User logged in: ' + username);

This sounds abstract, but it has real consequences. An attacker can use log injection to hide their tracks by creating noise in the log file, to forge "success" entries that make intrusion detection systems ignore their activity, or to inject malicious content into log files that are processed by other tools. See the full log injection vulnerability entry for examples.
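If you're stuck with string-based logging for now, a stopgap is to strip control characters from user input before it touches the log line. A minimal sketch; structured logging, covered below, is the real fix:

```javascript
// Sketch: neutralize newlines and other control characters in user input
// before concatenating it into a log string. Fallback only — prefer
// structured logging where user input is a field, not part of the format.
function sanitizeForLog(input) {
  // Replace C0 control chars (including \n, \r, \t) and DEL with a space
  return String(input).replace(/[\x00-\x1f\x7f]/g, ' ');
}

const username = 'alice\nINFO 2026-02-28T10:00:00Z event=login_success user=admin';
console.log('User logged in: ' + sanitizeForLog(username));
// The forged "second entry" is collapsed into the single, real log line
```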

console.log in Production Without Aggregation

Using console.log in production isn't inherently wrong if you're writing to structured output that gets captured. But in practice, it usually means one of two things: either it's unstructured text that goes to stdout and gets dropped, or it's a debugging statement that was never meant to run in production and contains something sensitive.

Using console.log in production without centralized aggregation means your logs aren't searchable, can't be alerted on, and often aren't retained. When something goes wrong, you have no history to look back at.

No Error Monitoring

Your app throws exceptions. Every app does. The question is whether you know about them when they happen.

Without error monitoring, unhandled exceptions fail silently, caught exceptions get swallowed without logging, and you only find out something is broken when a user emails you. From a security perspective, this matters because errors often signal attacks. SQL injection attempts trigger database errors. Path traversal attempts trigger file system errors. Broken authentication triggers exceptions in your JWT library.

If you're not watching errors in production, you're not watching attacks.
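The cheapest starting point in an Express app is an error-handling middleware that logs every unhandled exception. A sketch, with a stand-in `logger` where your real Pino instance would go:

```javascript
// Sketch: Express-style error middleware that logs every unhandled error.
// `logger` is a stand-in here; in a real app this would be your Pino instance.
const logger = { error: (obj) => console.error(JSON.stringify(obj)) };

function errorLogger(err, req, res, next) {
  logger.error({
    event: 'unhandled_error',
    message: err.message,
    stack: err.stack,       // keep the stack in your logs...
    method: req.method,
    path: req.path,
    ip: req.ip,
  });
  res.status(500).json({ error: 'Internal server error' }); // ...but never send it to the client
}

// Register AFTER all routes — Express recognizes error middleware
// by its four-argument (err, req, res, next) signature:
// app.use(errorLogger);
```

This doesn't replace a service like Sentry, but it guarantees that exceptions leave a trace instead of vanishing.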

Environment Variables in Logs

This one is specific enough that it has its own category: environment variables in logs. It happens when you dump process.env for debugging, or when a library you're using logs its configuration at startup, and your environment includes credentials.

// BAD: Dumps ALL environment variables at startup — including secrets
console.log('Starting with config:', process.env);

If this line runs in production, every API key, database URL, and secret token in your environment is now in your log file.
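If you need startup config in your logs, invert the approach: log an explicit allowlist of keys you know are safe, never the whole environment. A sketch, with illustrative key names:

```javascript
// Sketch: log only an allowlist of known-safe config keys.
// The key names are examples — list only values you know are not secrets.
const SAFE_CONFIG_KEYS = ['NODE_ENV', 'PORT', 'LOG_LEVEL'];

function safeConfig(env) {
  return Object.fromEntries(
    SAFE_CONFIG_KEYS.filter((k) => k in env).map((k) => [k, env[k]])
  );
}

// GOOD: only the allowlisted, non-secret keys can ever reach the log
console.log('Starting with config:', safeConfig(process.env));
```

An allowlist fails safe: a new secret added to the environment is invisible to the log by default, instead of exposed by default.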


What Good Security Logging Looks Like

Now the fixes. The core principle is this: use structured logging, log events not strings, and never let user input touch your log format directly.

Switch from console.log to a Structured Logger

The difference between console.log and a structured logger isn't just aesthetics. Structured logs are queryable, filterable, and machine-readable. Your monitoring system can alert on event: "login_failed" with a count. It can't do that with "Login failed for bob@example.com".

// BAD: Unstructured, not queryable, and logs the password
console.log('Login failed for', email, password);
// GOOD: Structured, queryable, sensitive fields redacted
import pino from 'pino';
 
const logger = pino({ level: 'info' });
 
function redactEmail(email: string): string {
  // Keep domain for debugging, redact local part
  const [, domain] = email.split('@');
  return `***@${domain}`;
}
 
logger.warn({
  event: 'login_failed',
  email: redactEmail(email), // avoid logging the full address where possible
  ip: req.ip,
  userAgent: req.headers['user-agent'],
  timestamp: new Date().toISOString(),
});

Pino is a strong default for Node.js — it's among the fastest structured loggers available and its output is natively JSON. Winston is also solid if you need more transport flexibility.

Log Events, Not Object Dumps

The temptation is to log the whole object. Don't.

// BAD: Dumps entire user object — includes tokens, hashed passwords, internal IDs
console.log('Authenticated user:', user);
// GOOD: Log only the fields you need for debugging and monitoring
logger.info({
  event: 'login_success',
  userId: user.id,
  ip: req.ip,
  // Nothing else. Not user.email, not user.token, not user.passwordHash
});

The rule is simple: log the ID of a thing, not the thing itself. If you need more context later, you can query the database by ID.

Fix Log Injection with Structured Fields

Structured logging also solves log injection. When user input is a field value instead of part of the log string, it can't escape into the log format.

// BAD: String concatenation — username can contain newlines and forge entries
logger.info('User logged in: ' + username);
 
// GOOD: Structured field — username is just a value, can't affect log format
logger.info({ event: 'login_success', username: username });

When username is a field in a JSON object, an attacker can make it "alice\nINFO: ..." and it'll be stored as the string value alice\nINFO: ... — not as a second log entry. The JSON structure prevents injection.
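You can verify this escaping directly — serializing the payload turns the newline into the two characters `\n`, so the entry stays a single line:

```javascript
// Demonstration: JSON serialization escapes the newline, so the attacker's
// payload stays one field value instead of becoming a second log entry.
const username = 'alice\nINFO 2026-02-28T10:00:00Z event=login_success user=admin';
const entry = JSON.stringify({ event: 'login_success', username });

console.log(entry.includes('\\n'));    // true — newline stored as the escape sequence \n
console.log(entry.split('\n').length); // 1 — the serialized entry is a single line
```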

The Auth Event Logging Pattern

Here's what a complete authentication logging implementation looks like:

// auth-logger.ts — log every security-relevant auth event
import pino from 'pino';
 
const logger = pino({ level: 'info' });
 
export function logAuthEvent(event: {
  type: 'login_success' | 'login_failed' | 'logout' | 'password_reset' | 'mfa_failed';
  userId?: string;  // undefined on login_failed (user may not exist)
  ip: string;
  userAgent?: string;
}) {
  // Failures get 'warn' so they stand out; successes and logouts log at 'info'
  const level = event.type.endsWith('_failed') ? 'warn' : 'info';
  logger[level]({
    ...event,
    timestamp: new Date().toISOString(),
  });
}
 
// In your login handler:
if (!passwordMatches) {
  logAuthEvent({ type: 'login_failed', ip: req.ip, userAgent: req.headers['user-agent'] });
  return res.status(401).json({ error: 'Invalid credentials' });
}
 
logAuthEvent({ type: 'login_success', userId: user.id, ip: req.ip });

Every failed login should generate a log entry. This is the minimum bar for OWASP A09 compliance, and it's the data you need to detect brute force attacks and credential stuffing.


What to Log — and What Never to Log

Log These Events

  • Authentication: login success, login failure, logout, password reset request, MFA failure
  • Authorization: 403 responses, permission denied, role escalation attempts
  • Input validation: request rejected due to invalid input (but not the invalid input itself)
  • Admin actions: user created/deleted, settings changed, roles modified
  • Server errors: unhandled exceptions, 500 responses, database connection failures
  • Rate limiting: requests blocked by the rate limiter, with IP and endpoint

Never Log These

  • Passwords (plaintext or hashed): plaintext is obvious; hashed passwords can be cracked offline if stolen.
  • Session tokens / JWTs: a token in a log file is a token that can be replayed.
  • API keys and secrets: logs are often stored in less-secured systems than your secrets manager.
  • Full credit card numbers: a PCI-DSS violation; log the last 4 digits only.
  • Full request bodies on auth endpoints: they contain passwords; log that a request happened, not its contents.
  • process.env dumps: they contain all your secrets.
  • PII beyond what's necessary: see PII in logs; GDPR and CCPA have opinions about this.

Real-World Consequences: When Logging Fails at Scale

The Equifax breach in 2017 is the case study for A09 failures. Attackers exploited a known Apache Struts vulnerability (CVE-2017-5638) and remained in Equifax's network for 76 days before detection. During that time, they exfiltrated the personal records of 147 million people.

The reason it took 76 days wasn't the sophistication of the attack — it was that monitoring tools were not configured correctly, an expired SSL certificate meant traffic inspection wasn't working, and the alerts that did fire were drowned in noise and weren't acted on.

The Target breach in 2013 is a different failure mode: the monitoring worked. Security tools detected the intrusion and fired alerts. Those alerts were reviewed by the security team — and ignored, because alert fatigue had made the team desensitized to them. The breach wasn't a technical monitoring failure, it was an operational one.

Both examples point to the same gap: logging and monitoring isn't a checkbox, it's an operational practice. You need logs, you need alerts that fire on meaningful thresholds, and you need someone who acts on those alerts.


Prevention: Building Security Logging That Actually Works

1. Use a Structured Logger from Day One

Install Pino or Winston on day one, not after an incident. The structured logging habit prevents both the "log sensitive data" problem and the "no queryable logs" problem simultaneously.

2. Ship Logs to a Centralized System

stdout isn't a log system. Ship your logs to a service that can alert, aggregate, and retain them. Options that work well for indie devs and small teams:

  • Sentry — best for error monitoring, free tier is generous. The no error monitoring problem is solved in about 10 minutes with Sentry.
  • Datadog — more complete observability, more expensive. Worth it at scale.
  • Logtail / Better Stack — affordable structured log aggregation. Good for small teams.
  • Railway / Vercel built-in logs — not sufficient alone (no alerting, limited retention), but a good starting point.

3. Set Alert Rules on Auth Events

Once you're logging auth events, set alerts. The minimum set:

  • More than 10 login failures from the same IP in 5 minutes
  • More than 5 login failures against the same account in 10 minutes
  • Any login from a country your users don't normally come from
  • Admin action performed outside business hours (optional, but catches a lot)

4. Audit Your Existing Logs for Sensitive Data

Run a search over your existing log output before you go further. Look for patterns that suggest credential exposure:

// Patterns to search for in your log output or log files:
// - "password"
// - "token"
// - "Authorization:"
// - "SECRET" / "KEY"
// - "process.env"

If you find any of these as values (not keys), you have a security logging failure that's actively running in production right now.
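One quick way to run this audit is a small script that flags log lines matching suspicious patterns. A sketch — the regexes are illustrative and should be extended for your stack:

```javascript
// Sketch: flag log lines that look like they contain leaked credentials.
// The patterns are examples — tune them to your log format.
const SECRET_PATTERNS = [
  /"password"\s*:\s*"[^"]+"/i,      // a password key with a non-empty value
  /authorization:\s*bearer\s+\S+/i, // a bearer token from a header dump
  /\b[A-Za-z0-9_]*(SECRET|API_KEY)[A-Za-z0-9_]*=\S+/, // env-style KEY=value
];

function findSuspiciousLines(logText) {
  return logText
    .split('\n')
    .filter((line) => SECRET_PATTERNS.some((re) => re.test(line)));
}
```

Point it at a day's worth of production log output; an empty result is the goal.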

5. Never Log User Input Directly

Any time you're about to log something that came from a user, ask yourself: is this a field in a structured log entry, or is this being concatenated into a string? If it's the latter, fix it. If you genuinely need to log the user input for debugging (never in production), redact it or truncate it first.


How Data Hogo Detects A09 Failures

When we scan a repo at Data Hogo, the A09 checks look for specific code patterns:

  • console.log calls on routes that handle authentication or sensitive data
  • String concatenation in log calls that includes variables (log injection risk)
  • process.env passed to any logging function
  • Missing error handler middleware in Express/Next.js apps
  • req.body or req.headers logged on auth endpoints

We also check for the absence of structured logging libraries in package.json — a project with no Pino, Winston, or similar is almost certainly relying on console.log in production.

These checks map directly to the security logging failures, log injection, console.log sensitive data, PII in logs, env vars in logs, and no error monitoring entries in our vulnerability encyclopedia.

See what logging failures Data Hogo finds in your repo — scan free.


OWASP A09 and the Broader Security Picture

A09 doesn't exist in isolation. It's the category that determines whether you know about the other nine. A broken access control issue (A01) is detectable if you're logging 403 responses. An injection attack (A03) triggers database errors. Insecure deserialization (A08) causes exceptions.

Without logs, all of these attacks can succeed and go undetected. That's why IBM's 204-day number exists. The attackers aren't invisible — they're just operating in an environment where no one is watching.

The fix is not complicated. Structured logging with Pino takes 20 minutes to set up. Shipping logs to Sentry takes another 10. Writing a few alert rules takes an afternoon. None of this requires a security background — it requires treating observability as a first-class concern from the start of a project.

If you're curious where your app stands right now, the OWASP A09 checks in our security encyclopedia are a good starting point. Or skip straight to a scan and see the actual findings in your codebase.

Scan your repo free — find your A09 gaps in under 60 seconds →


Frequently Asked Questions

What is OWASP A09 Security Logging and Monitoring Failures?

OWASP A09:2021 covers the failure to log, monitor, or alert on security-relevant events in an application. Without sufficient logging, attackers can operate undetected for months. It includes missing logs for failed authentication, no alerting on suspicious activity, logging sensitive data like passwords, and having no error monitoring in production.

What should I log for security purposes?

Log authentication events (logins, failures, logouts, password resets), authorization failures (403s, permission denials), input validation failures, admin actions, and all server errors. For each event, capture a timestamp, user ID or IP address, event type, and outcome. Never log passwords, tokens, API keys, or other credentials.

Is console.log safe in production?

No. console.log in production is a security risk for two reasons: first, developers often log full objects that contain tokens, passwords, or PII; second, console output is unstructured, so it can't be queried, alerted on, or aggregated. Use a structured logger like Pino or Winston instead, and ship logs to a centralized system like Sentry or Datadog.

What is log injection and how do I prevent it?

Log injection is when an attacker supplies input containing newlines or control characters that get written directly into your log file, creating fake log entries. For example, a username of admin\nINFO: user=admin logged in successfully would forge a successful login entry in your log. Prevent it by never concatenating user input into log strings — use structured logging with separate fields for each value instead.

How long does it take to detect a security breach on average?

According to IBM's Cost of a Data Breach Report, the average time to identify a breach is 204 days. This number drops significantly when organizations have security monitoring and alerting in place. The biggest contributor to that 204-day window is the absence of meaningful logs and alerts — exactly what OWASP A09 addresses.

OWASP · security · logging · monitoring · Node.js · vibe-coding · application-security