16 Billion Passwords Leaked: What Developers Need to Do Right Now
16 billion credentials just hit the dark web. Most login endpoints have no rate limiting. Here's the exact attack chain targeting your /login route — and the fixes to stop it.
Rod
Founder & Developer
When I saw "16 billion passwords leaked" trending, my first move wasn't to read the article. It was to open our scan database and look at how many apps have zero rate limiting on their login endpoint.
The answer was not reassuring.
We're talking about the largest aggregated credential dump ever recorded — a collection pulling from RockYou, LinkedIn, Adobe, Dropbox, and dozens of other past breaches, cross-referenced and deduplicated into roughly 16 billion unique email/password pairs. It's sitting on dark web forums right now. Attackers are running automated bots against login endpoints as you read this. And most apps — including plenty built by good developers who care about security — are completely unprepared for what happens next.
What credential stuffing actually looks like
This isn't brute force. Brute force means guessing passwords randomly — password123, abc123, qwerty. Easy to block, easy to detect.
Credential stuffing is different. The attacker already has your user's real password. They got it from a different breach — a site your user signed up for five years ago and forgot about. They're not guessing. They're just checking whether your user reused that password on your app.
The attack chain is straightforward:
- Download the credential dump (or a filtered slice targeting a specific email domain)
- Load it into a tool like OpenBullet or a custom script
- Point it at your `/login` endpoint (or `/api/auth/login`, or `/api/auth/callback` — bots try common patterns)
- Send thousands of login requests per hour, rotating IPs to avoid simple blocks
- Collect the ones that return a 200 instead of a 401
A 0.1% success rate sounds like nothing. On 16 billion credentials, it means 16 million valid logins. Against your app's 10,000 users, even a 0.5% hit rate means 50 compromised accounts. If those accounts have saved payment methods, API keys, or stored data — that's your liability, not the attacker's.
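The arithmetic is worth making concrete. A quick sketch (the hit rates are illustrative, not measured):

```typescript
// Expected account takeovers from a credential-stuffing run.
// hitRate is the fraction of tested credentials that still work,
// i.e. the password-reuse rate among your users.
function expectedTakeovers(accounts: number, hitRate: number): number {
  return Math.round(accounts * hitRate)
}

// The numbers from the text:
console.log(expectedTakeovers(10_000, 0.005)) // 0.5% of 10,000 users → 50 accounts
console.log(expectedTakeovers(16_000_000_000, 0.001)) // 0.1% of the full dump → 16,000,000
```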
The kicker: bots don't care how big your app is. They're automated. They hit everything.
Why your app is a target even if you have 50 users
I hear this a lot: "We're too small to be a target." That logic made sense in 2010 when attacks were manual. It doesn't apply anymore.
Modern credential stuffing infrastructure is fully automated. The bot doesn't know your Monthly Recurring Revenue. It doesn't care about your Alexa rank. It sees an exposed /login route and it tries. That's the whole job.
What makes a small app worth hitting:
- Stored payment methods. If you save cards via Stripe, a compromised account has real monetary value.
- API access. If your app generates API keys, those keys can be resold or used for abuse.
- Email addresses. A working login confirms the email is active — useful for phishing campaigns.
- Trust. Your app might connect to other services (GitHub, Slack, cloud storage). One login can cascade.
We've scanned repos from indie hackers with fewer than 100 users. Many of them had no rate limiting at all on their auth endpoint. Not "weak rate limiting." None. Every credential in that 16-billion dump could be tested against their login with no friction.
If you want to see where your app sits on this, scan your repo with Data Hogo — missing rate limiting and lockout logic are consistently in our top 10 findings across all scans.
The 5 things to implement this week
These aren't optional anymore. A 16 billion credential dump in the wild means every one of these is actively relevant right now.
Fix 1 — Rate limit your login endpoint
This is the minimum viable defense. Limit login attempts per IP per time window. Your general API might allow 60 requests per minute — your login endpoint should allow 10, maximum.
// middleware.ts
import { NextRequest, NextResponse } from 'next/server'

// In-memory store for demo — use Redis or Upstash in production
const loginAttempts = new Map<string, { count: number; resetAt: number }>()

export function middleware(req: NextRequest) {
  if (!req.nextUrl.pathname.startsWith('/api/auth/login')) {
    return NextResponse.next()
  }

  // x-forwarded-for can be a comma-separated proxy chain — take the client IP
  const ip = (req.headers.get('x-forwarded-for') ?? '127.0.0.1').split(',')[0].trim()
  const now = Date.now()
  const windowMs = 60 * 1000 // 1 minute
  const limit = 10 // max 10 login attempts per minute per IP

  const record = loginAttempts.get(ip)
  if (!record || record.resetAt < now) {
    loginAttempts.set(ip, { count: 1, resetAt: now + windowMs })
    return NextResponse.next()
  }

  if (record.count >= limit) {
    // Return 429 — Too Many Requests
    return NextResponse.json(
      { error: 'Too many login attempts. Try again later.' },
      { status: 429 }
    )
  }

  record.count++
  return NextResponse.next()
}

For production, replace the in-memory map with Redis or Upstash Rate Limit. In-memory doesn't survive restarts and doesn't work across multiple serverless instances.
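If you're curious what those hosted limiters do under the hood, here is a minimal sliding-window log in plain TypeScript. It's a sketch of the algorithm only (it shares the single-instance limitation of the middleware above — Redis-backed limiters run the same logic atomically across instances):

```typescript
// Sliding-window log: record the timestamp of each attempt and count
// only the ones inside the window. Smoother than a fixed window, which
// permits a burst of up to 2x the limit across a window boundary.
const attemptLog = new Map<string, number[]>()

function allowLogin(
  ip: string,
  limit = 10,
  windowMs = 60_000,
  now = Date.now() // injectable for testing
): boolean {
  const cutoff = now - windowMs
  // Drop attempts that have aged out of the window
  const recent = (attemptLog.get(ip) ?? []).filter((t) => t > cutoff)
  if (recent.length >= limit) {
    attemptLog.set(ip, recent)
    return false // caller responds with 429
  }
  recent.push(now)
  attemptLog.set(ip, recent)
  return true
}
```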
Fix 2 — Account lockout after N failures
Rate limiting by IP is good. Combining it with per-account lockout is better. Bots rotate IPs — they can't rotate your user's email address.
// lib/auth/lockout.ts
import { createClient } from '@/lib/supabase/server'

const MAX_FAILURES = 5
const LOCKOUT_MINUTES = 15

export async function checkAccountLockout(email: string): Promise<boolean> {
  const supabase = await createClient()
  const { data } = await supabase
    .from('profiles')
    .select('failed_login_attempts, locked_until')
    .eq('email', email)
    .single()

  if (!data) return false

  // Account is locked
  if (data.locked_until && new Date(data.locked_until) > new Date()) {
    return true
  }
  return false
}

export async function recordFailedLogin(email: string): Promise<void> {
  const supabase = await createClient()

  // Note: this read-then-increment isn't atomic. Under concurrent failures,
  // prefer a single SQL UPDATE that increments failed_login_attempts in place.
  const { data } = await supabase
    .from('profiles')
    .select('failed_login_attempts')
    .eq('email', email)
    .single()

  const attempts = (data?.failed_login_attempts ?? 0) + 1
  const lockedUntil =
    attempts >= MAX_FAILURES
      ? new Date(Date.now() + LOCKOUT_MINUTES * 60 * 1000).toISOString()
      : null

  await supabase
    .from('profiles')
    .update({ failed_login_attempts: attempts, locked_until: lockedUntil })
    .eq('email', email)
}

export async function resetFailedLogins(email: string): Promise<void> {
  const supabase = await createClient()
  await supabase
    .from('profiles')
    .update({ failed_login_attempts: 0, locked_until: null })
    .eq('email', email)
}

This requires adding failed_login_attempts (integer, default 0) and locked_until (timestamptz, nullable) columns to your profiles table. Small migration, big impact.
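The migration itself is small. A sketch for Postgres/Supabase — the column names match the code above; adjust the table name if your schema differs:

```sql
alter table profiles
  add column if not exists failed_login_attempts integer not null default 0,
  add column if not exists locked_until timestamptz;
```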
Fix 3 — Breach detection via the HIBP k-Anonymity API
When a user logs in with a password that appears in a known breach dataset, you should know — and prompt them to change it. The Have I Been Pwned API lets you check this without ever sending the actual password to a third-party server.
The trick is called k-Anonymity: you hash the password with SHA-1, send only the first 5 characters of the hash, get back a list of all matching hashes, and compare the rest locally. Troy Hunt's server never sees the full hash.
// lib/auth/breach-check.ts
export async function isPasswordBreached(password: string): Promise<boolean> {
  // Hash the password with SHA-1 — this is HIBP's index format, not storage
  const msgBuffer = new TextEncoder().encode(password)
  const hashBuffer = await crypto.subtle.digest('SHA-1', msgBuffer)
  const hashHex = Array.from(new Uint8Array(hashBuffer))
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('')
    .toUpperCase()

  // Split: first 5 chars go to the API, rest stays local
  const prefix = hashHex.slice(0, 5)
  const suffix = hashHex.slice(5)

  const res = await fetch(`https://api.pwnedpasswords.com/range/${prefix}`, {
    headers: { 'Add-Padding': 'true' }, // Pads responses so their size doesn't reveal how many real matches exist
  })

  if (!res.ok) {
    // GOOD: Fail open — don't block login if HIBP is down
    console.warn('HIBP check failed, skipping breach detection')
    return false
  }

  const text = await res.text()
  // Each line is "HASH_SUFFIX:COUNT" — check if our suffix is in the list
  return text
    .split('\n')
    .some((line) => line.split(':')[0] === suffix)
}

Call this during login (not just registration). If the password is breached, don't block the login — instead, show a warning and prompt the user to update their password on next action. Blocking silently causes support tickets. Transparent prompts cause password resets.
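You can make that warning more concrete: each line in the HIBP range response also carries a count of how often that password appears in breach data, which you can surface to the user ("seen 42 times in breaches"). A small parser sketch against the documented `SUFFIX:COUNT` line format:

```typescript
// Parse an HIBP range response and return the breach count for a
// given hash suffix, or 0 if the suffix isn't present.
// (Padded entries, when Add-Padding is on, arrive with a count of 0.)
function breachCount(rangeResponse: string, suffix: string): number {
  for (const line of rangeResponse.split('\n')) {
    const [candidate, count] = line.trim().split(':')
    if (candidate === suffix) return parseInt(count, 10)
  }
  return 0
}
```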
Fix 4 — MFA on suspicious login patterns
You don't need to gate all logins behind MFA. You need to trigger it when something looks off: new device, new country, login at 3am from an IP that's never touched your app before.
The signal doesn't have to be perfect. It just has to introduce friction that bots can't pass automatically.
// lib/auth/anomaly.ts
import { createClient } from '@/lib/supabase/server'

interface LoginContext {
  userId: string
  ipAddress: string
  userAgent: string
}

export async function isAnomalousLogin(ctx: LoginContext): Promise<boolean> {
  const supabase = await createClient()

  // Check the last 30 days of successful logins for this user
  const { data: history } = await supabase
    .from('login_history')
    .select('ip_address, user_agent')
    .eq('user_id', ctx.userId)
    .eq('success', true)
    .gte('created_at', new Date(Date.now() - 30 * 24 * 60 * 60 * 1000).toISOString())

  if (!history || history.length === 0) {
    // First login ever — definitely prompt MFA
    return true
  }

  const knownIps = new Set(history.map((h) => h.ip_address))
  const knownAgents = new Set(history.map((h) => h.user_agent))

  // Flag if both IP and user agent are new
  const newIp = !knownIps.has(ctx.ipAddress)
  const newDevice = !knownAgents.has(ctx.userAgent)
  return newIp && newDevice
}

When isAnomalousLogin returns true, return a response that tells the client to prompt for an OTP or email verification code before completing the session. Don't let the session fully establish until the second factor clears.
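One hypothetical shape for that handshake — the `mfa_required` status and the field names here are invented for illustration, not part of any library:

```typescript
type LoginStatus = 'ok' | 'mfa_required' | 'invalid_credentials'

// Decide what the login route returns once the password check is done.
// An anomalous login never receives a full session directly — the client
// must complete a second factor first.
function loginStatus(passwordValid: boolean, anomalous: boolean): LoginStatus {
  if (!passwordValid) return 'invalid_credentials'
  return anomalous ? 'mfa_required' : 'ok'
}
```

The client treats `mfa_required` as a half-open state: show the OTP prompt, and only on a successful verification call does the server mint the real session cookie.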
Fix 5 — Alert on failed login spikes
You can implement every defense above and still want to know when you're actively under attack. A spike in failed logins is a signal — catch it and respond.
// lib/auth/monitoring.ts
const SPIKE_THRESHOLD = 200 // failed attempts
const SPIKE_WINDOW_MS = 5 * 60 * 1000 // in 5 minutes

// Track in Redis or Upstash — this is a simplified in-memory version
const failureLog: number[] = []

export function recordAuthFailure(): void {
  const now = Date.now()
  failureLog.push(now)

  // Purge entries outside the window
  const windowStart = now - SPIKE_WINDOW_MS
  while (failureLog.length > 0 && failureLog[0] < windowStart) {
    failureLog.shift()
  }

  if (failureLog.length >= SPIKE_THRESHOLD) {
    // BAD: silently ignoring this
    // GOOD: alert your on-call channel immediately
    // (in production, debounce this so you don't re-alert on every failure)
    triggerAlert(failureLog.length)
  }
}

async function triggerAlert(count: number): Promise<void> {
  // Send to Slack, PagerDuty, email — wherever your team responds
  await fetch(process.env.SLACK_WEBHOOK_URL!, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      text: `Security alert: ${count} failed login attempts in the last 5 minutes. Possible credential stuffing attack.`,
    }),
  })
}

When this fires, you're not just logging — you're starting a response. Pull the IP list. Consider temporarily enabling CAPTCHA. Check if any accounts were successfully accessed in the same window.
The uncomfortable truth about AI-generated auth code
Here's something I've watched happen across hundreds of repos we've scanned: AI assistants write great happy-path auth code. The login flow works. Sessions get created. Tokens get validated.
But rate limiting? Lockout? Breach detection? They don't appear unless you explicitly ask for them.
This isn't a criticism of AI tools — it's a structural problem. When you prompt "add login to my Next.js app," the model generates code that satisfies the prompt. A working login. What it doesn't do is reason about threat models, or add defenses against attack patterns that aren't part of the functional requirement.
Veracode's 2025 State of Software Security report found that 45% of AI-generated code contains security vulnerabilities. In our own scans of AI-assisted repos, missing rate limiting on auth endpoints shows up in roughly 6 out of 10 projects.
That number would be higher if we only looked at repos that were scaffolded with a single AI prompt and shipped without a dedicated security review — which describes a large portion of what gets deployed in 2026.
The fix isn't to stop using AI for auth. It's to know which parts AI skips by default and add them yourself. The five fixes above are exactly those parts.
If you want to check where your current auth setup stands, the authentication security guide we published covers OWASP's authentication failure patterns in detail. And if you want an automated scan that checks your actual codebase for missing lockout and rate limiting logic, that's what Data Hogo is for — it flags these gaps the same way it flagged them in the repos we analyzed for this post.
TL;DR
- 16 billion leaked credentials means every login endpoint on the internet is being tested right now, automatically
- Credential stuffing is not brute force — attackers use real passwords from real breaches, not guesses. A 0.5% success rate on 10,000 users is 50 compromised accounts
- Bots don't care about your app size — they're automated and indiscriminate. 50 users with stored payment methods is a worthwhile target
- 5 fixes to implement this week: rate limit logins by IP, lock accounts after 5 failures, check passwords against Have I Been Pwned on login, trigger MFA on new device/IP, alert on failed login spikes
- AI-generated auth code skips all of this by default — the happy path works, the defenses don't appear unless you ask
- Quick test: run `curl -X POST https://yourapp.com/api/auth/login -H 'Content-Type: application/json' -d '{"email":"test@test.com","password":"wrong"}'` 20 times in 10 seconds. If you never get a 429, you have no rate limiting
FAQ
What is credential stuffing and how does it work?
Credential stuffing is an automated attack where attackers take a list of username/password pairs leaked from previous breaches and systematically try them against your login endpoint using bots. Unlike brute force (which guesses random passwords), stuffing uses real passwords that real users actually chose — which is why it works so well against apps that don't rate limit login attempts.
How do I protect my app's login endpoint from credential stuffing?
The five most effective defenses are: rate limiting login attempts per IP, account lockout after N failures, breach detection via the Have I Been Pwned API, MFA prompts on suspicious login patterns, and alerting on failed login spikes. Most AI-generated auth code skips all five.
What is the 16 billion password leak?
It's an aggregated credential dump combining dozens of previous breaches — RockYou, LinkedIn, Adobe, and many others — resulting in roughly 16 billion unique email/password pairs. The danger is not the size alone but the cross-referencing: attackers can now match accounts across services with much higher success rates.
Does credential stuffing only affect large companies?
No. Bots are automated and indiscriminate. They hit every exposed login endpoint on the internet, regardless of the app's size. Even a 50-user SaaS is a target if its accounts contain stored payment methods or API access with real value.
What is the Have I Been Pwned API and how does it protect users?
Have I Been Pwned (HIBP) is a free service that lets you check if a password appears in known breach datasets without sending the actual password to their servers. You hash the password with SHA-1, send only the first 5 characters, and compare the response locally. It's called k-Anonymity and it's safe to use in production.