Amazon Web Services (AWS) resolved a major outage Monday after nearly two hours of global disruption that left numerous popular services scrambling. The incident, which primarily struck the US-EAST-1 region starting at 07:55 UTC on October 20, 2025, created a domino effect across the digital landscape. Slack went down. Atlassian crashed. Snapchat users stared at loading screens. Just another day in our wonderfully fragile digital ecosystem.
Technical details revealed the culprit: DNS resolution failures for the DynamoDB API. Not exactly dinner conversation material, but critical nonetheless. ThousandEyes data showed backend infrastructure problems without corresponding network events, a fancy way of saying AWS broke something important without the internet itself being at fault. Services dependent on US-EAST-1 endpoints failed to connect, timed out, or returned errors. The digital equivalent of showing up to a restaurant that's mysteriously closed despite all the lights being on.
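For the curious, here's a minimal sketch of how that failure mode surfaces to a client. It's Python with boto3; the tight timeouts are illustrative choices, not AWS guidance. When the endpoint's DNS lookup fails, requests never reach DynamoDB at all: clients see connection errors at the transport layer, not tidy API responses.

```python
import socket

import boto3
import botocore.exceptions
from botocore.config import Config

# The regional DynamoDB API endpoint whose DNS lookups failed.
ENDPOINT = "dynamodb.us-east-1.amazonaws.com"


def endpoint_resolves(hostname: str) -> bool:
    """Return True if the hostname resolves; this is the step that broke."""
    try:
        socket.getaddrinfo(hostname, 443)
        return True
    except socket.gaierror:
        return False


def probe_dynamodb() -> None:
    # Tight timeouts and few retries (illustrative values) so a dead
    # endpoint fails fast instead of hanging request threads.
    client = boto3.client(
        "dynamodb",
        region_name="us-east-1",
        config=Config(
            connect_timeout=2,
            read_timeout=2,
            retries={"max_attempts": 2, "mode": "standard"},
        ),
    )
    try:
        client.list_tables(Limit=1)
        print("DynamoDB reachable")
    except botocore.exceptions.EndpointConnectionError as exc:
        # How the incident looked to many clients: the endpoint name
        # would not resolve, so requests never reached DynamoDB.
        print(f"Endpoint unreachable: {exc}")


if __name__ == "__main__":
    if not endpoint_resolves(ENDPOINT):
        print(f"DNS resolution failed for {ENDPOINT}")
    probe_dynamodb()
```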
The internet’s favorite magic trick: everything disappears when Virginia’s DNS hiccups.
The AWS Health Dashboard confirmed the mess. API errors and connectivity issues spread like wildfire. Global features dependent on AWS crumbled. For businesses relying on cloud services, workflow disruptions became the morning’s unwelcome surprise. Customer service teams everywhere prepared their “we’re experiencing technical difficulties” templates.
Recovery efforts began at 09:22 UTC, with AWS engineers presumably downing extra coffee as they performed infrastructure adjustments. Services largely returned to normal by 09:35 UTC. That’s nearly two hours of digital chaos. Two hours of lost productivity. Two hours of explaining to clients why everything was broken.
AWS continued investigating even after services recovered, likely to prevent a repeat performance. The incident highlighted the precarious nature of our cloud-dependent business ecosystem. One DNS hiccup in Virginia, and suddenly companies worldwide can’t function.
For affected businesses, the outage served as yet another reminder of the need for resilience strategies. Because when AWS sneezes, apparently half the internet catches a cold. At least until 09:35 UTC.
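What does a resilience strategy actually look like in code? One common pattern is a read path that fails over to another region when the primary endpoint won't connect. The sketch below assumes a DynamoDB Global Table replicated to a second region; the table name, key shape, and region list are hypothetical.

```python
import boto3
import botocore.exceptions
from botocore.config import Config

# Assumes a Global Table replicated to a second region; names are illustrative.
REGIONS = ["us-east-1", "us-west-2"]  # primary first, fallback second
TABLE = "orders"

# Fast-fail settings so an unreachable region costs seconds, not minutes.
_FAST_FAIL = Config(
    connect_timeout=2,
    read_timeout=2,
    retries={"max_attempts": 1, "mode": "standard"},
)


def get_item_with_fallback(key: dict) -> dict | None:
    """Read from the first region that answers; fail over on connection errors."""
    last_error = None
    for region in REGIONS:
        client = boto3.client("dynamodb", region_name=region, config=_FAST_FAIL)
        try:
            resp = client.get_item(TableName=TABLE, Key=key)
            return resp.get("Item")
        except botocore.exceptions.ConnectionError as exc:
            # Covers unreachable endpoints (DNS failure, connect timeout);
            # try the next replica instead of waiting out the outage.
            last_error = exc
    raise last_error


# Usage: item = get_item_with_fallback({"order_id": {"S": "1234"}})
```

This only shields reads, and only if your data is already replicated; writes and single-region state need more careful planning. But it's the difference between a degraded morning and a dead one.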