Fully Modulated
Fully Modulated is your backstage pass to the stories and signals that shaped radio, TV, and wireless communication. Join Tyler, a broadcast engineer, as he uncovers the wild moments, quirky legends, and technical breakthroughs that keep the world connected. From vintage radio hacks to the real drama behind today’s digital waves, each episode blends deep research, humor, and storytelling for anyone curious about how media magic happens. Independent, insightful, and made for every fan who loves a good broadcast mystery.
AWS Outage: Why Broadcasters Need Multi-Cloud Like Backup Internet
The October 2025 AWS outage that took down major internet services for hours proves why broadcasters need multi-cloud redundancy and hybrid infrastructure to protect both revenue and emergency alerting capabilities. When automation goes down, commercials don't air, programming stops, listeners tune out, and advertising revenue disappears. Tyler breaks down why radio stations need backup systems using an analogy everyone understands: redundant internet connections.
Just like businesses maintain Spectrum as primary internet with Lumen as backup, broadcasters should run primary automation on AWS with backup on Azure or Google Cloud. When one cloud provider experiences DNS failures and outages, backup systems on different providers automatically take over, keeping programming running and revenue flowing.
This opinion episode covers why IP-based studio-to-transmitter links over single internet connections create vulnerability, why stations need backup STL paths through different providers or microwave links, and why broadcast automation distributed across multiple cloud providers prevents total station failure during provider outages. Tyler also discusses why on-premise EAS hardware should remain the foundation for emergency alerting even as CAP and IPAWS expand internet-based capabilities.
A technical analysis from Dave's Garage reveals why backup systems that depend on failed infrastructure don't actually work, and why true redundancy requires genuine independence between providers, just like backup internet connections use different backbone infrastructure.
Topics covered: AWS outage impact on broadcasting, multi-cloud redundancy for radio automation, hybrid on-premise and cloud infrastructure, IP STL backup strategies, EAS reliability considerations, broadcast revenue protection during cloud failures, redundant internet connection analogy for multi-cloud architecture, automation system failover, protecting morning drive advertising revenue, and why radio infrastructure diversity matters.
Send me a text message with your thoughts, questions, or feedback
Visible Wireless by Verizon
Same Verizon coverage, way cheaper bills. No contracts or hidden fees. $20 off for both of us.
If you enjoyed the show, be sure to follow Fully Modulated and leave a rating and review on Apple Podcasts or your favorite podcast app—it really helps more people discover the show.
Fully Modulated is not affiliated with or endorsed by any station, media company, or network. All opinions are solely my own.
FULL TRANSCRIPT
Hey, welcome back to Fully Modulated. I'm Tyler, and today we need to talk about something that's been on my mind, especially after what just happened with Amazon Web Services. Before we dive in, I need to be clear about something. This episode is an opinion piece. These are my views as someone who's been working in broadcast engineering since 2018 and in radio since 2014, and who now works as a broadcast network engineer. What I'm about to say comes from experience, but it's still my perspective on a very complex issue.
So, here's what happened. On Monday, October 20th, 2025, Amazon Web Services went down. And I don't mean a little hiccup. I mean a massive, widespread outage that took down huge chunks of the internet. Snapchat, Ring doorbells, Coinbase, Roblox, Signal, even parts of Amazon's own shopping site and Prime Video. British government websites went offline. The outage started around 3:11 a.m. Eastern Time in AWS's US-EAST-1 region in northern Virginia, and it was caused by DNS resolution failures. Think of DNS as the phone book of the internet. When that fails, nothing can find anything else.
Dave from Dave's Garage, a retired Microsoft systems engineer, put out a fantastic breakdown of the technical details. He explained that DNS isn't just a lookup, it's a critical dependency for every microservice. When name resolution fails, clients don't degrade gracefully. They thrash, retry, back off, and then hammer the system again in exponential waves, creating what Dave called a denial of service snowball. And while all that chaos was happening in the cloud, broadcast was steady. Reliable. On the air.
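To make that retry-storm behavior concrete, here's a minimal sketch. This is my own illustration, not code from Dave's breakdown; the resolver function, attempt counts, and timings are placeholders. It shows the capped exponential backoff with jitter that keeps clients from hammering a recovering service in synchronized waves.

```python
import random
import time

def resolve_with_backoff(resolve, host, max_attempts=6, base_delay=0.5, cap=30.0):
    """Retry a failing lookup with capped exponential backoff plus full jitter.

    Without the jitter, thousands of clients retry on the same schedule and
    hit the recovering service in synchronized waves -- the "denial of
    service snowball" described above.
    """
    for attempt in range(max_attempts):
        try:
            return resolve(host)
        except OSError:
            # Sleep a random amount up to the capped exponential delay.
            time.sleep(random.uniform(0, min(cap, base_delay * 2 ** attempt)))
    raise RuntimeError(f"{host}: still failing after {max_attempts} attempts")

# Example usage with the standard library resolver:
#   import socket
#   addr = resolve_with_backoff(socket.gethostbyname, "example.com")
```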
Here's what really matters to us. When major chunks of the internet went down, millions of people lost access to their digital services. And you know what kept working? Radio. Broadcast radio. AM, FM, over-the-air television. Because when the internet fails, RF doesn't care. Broadcast signals kept going out, delivering news, information, and emergency alerts without missing a beat.
Now let's talk about what this means for broadcasting. There's been this push to move everything to the internet and to the cloud. IP-based studio-to-transmitter links over public internet from providers like Spectrum, Lumen, Verizon. Cloud automation systems running on AWS or Azure. Even discussions about moving the Emergency Alert System to cloud platforms. Monday's outage shows us exactly why we need to think about redundancy and diversity.
Let me use an analogy everyone understands. Whether it's at your business or even at home, you might have redundant internet connections. Spectrum as your primary and Lumen as backup. If Spectrum goes down, Lumen keeps you online. You don't even notice the outage. That same concept applies to broadcast infrastructure, and it's critical because the stakes are higher. When your automation goes down, you're not just offline. You're losing revenue. Those ads aren't playing. Your sponsors aren't getting their spots aired. Your listeners are tuning out. Dead air drives audiences away fast.
So if your automation system runs entirely on AWS, and AWS goes down like it did Monday, what happens without a backup? Your station goes silent. Your morning show can't access voice tracks. Commercials stop. Your playlist stops. Within minutes, listeners switch stations. Your morning drive audience, your most valuable daypart, they're gone. And those advertisers who paid for morning drive spots? They're not getting what they paid for. That's lost revenue and damaged client relationships.
This is where multi-cloud architecture becomes essential. Run your primary automation on AWS but have backup automation on Azure or Google Cloud. Just like having Spectrum and Lumen for internet. When AWS fails, Azure takes over automatically. Your music keeps playing. Your ads keep running. Your revenue is protected. You wouldn't rely on a single internet connection for critical business operations, so why rely on a single cloud provider?
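As a rough illustration of that failover idea, here's a minimal watchdog sketch. The health-check URLs and provider names are hypothetical placeholders, not any automation vendor's actual API; a real system would wire the result into whatever routes your playout audio.

```python
import urllib.request

# Hypothetical health-check endpoints for automation running on two providers.
PROVIDERS = [
    ("aws-primary", "https://automation-aws.example.com/health"),
    ("azure-backup", "https://automation-azure.example.com/health"),
]

def healthy(url, timeout=5):
    """Return True if the automation instance answers its health check."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_active_provider():
    """Route playout to the first provider that passes its health check."""
    for name, url in PROVIDERS:
        if healthy(url):
            return name
    return None  # Both clouds down: fall back to a local emergency playlist.

# Run this from cron or a supervisor every 30 seconds or so, and have the
# routing layer (audio switcher, stream encoder source, etc.) follow the result.
```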
Or take a hybrid approach. On-premise automation as your primary, running on local servers in your rack, with cloud backup for disaster recovery. That way, internet problems don't affect normal operations. The cloud is there if your building burns down or you have a catastrophic local failure. But day to day, you're independent and not vulnerable to cloud provider outages.
Dave made an excellent point about multi-region failover. It's not just about data replication, it's about understanding dependencies. If your authentication tokens are stored only in US-EAST-1, or if your feature flags live in a table there, then even if you spin up compute in another region, the very first API call tries to go back to the thing that's already failed. You have to architect for true independence. Your backup systems can't depend on the same infrastructure as your primary. It's like making sure your Lumen backup internet doesn't route through Spectrum's infrastructure. They need to be truly independent.
The same applies to your STL. If you're running an IP-based STL over Spectrum, that's great technology. Cost-effective, flexible, high-quality audio. But what happens when that connection fails? If that's your only link from studio to transmitter, you're off the air. Keep that IP STL as primary but maintain a microwave STL as backup. Or have two IP STLs through completely different providers with different backbone infrastructure. When one fails, the other takes over automatically.
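Here's a small sketch of how you might verify both STL paths independently by forcing traffic out each circuit. The addresses are placeholders from documentation IP ranges, and a real deployment would drive an audio switcher or the codec's own backup logic rather than just printing a status.

```python
import socket

# Hypothetical addresses: the STL codec at the transmitter site, reached over
# two different last-mile providers by binding to each circuit's source IP.
TX_SITE_CODEC = ("203.0.113.10", 9000)
PATHS = {"spectrum-primary": "192.0.2.2", "lumen-backup": "198.51.100.2"}

def path_is_up(source_ip, timeout=3):
    """Try a TCP connection to the codec while forcing traffic out one circuit."""
    try:
        with socket.create_connection(TX_SITE_CODEC, timeout=timeout,
                                      source_address=(source_ip, 0)):
            return True
    except OSError:
        return False

for name, src in PATHS.items():
    print(name, "up" if path_is_up(src) else "DOWN")
```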
Now let's talk about EAS, because it's got unique considerations. The FCC has been exploring software-based EAS options, and some alerts already arrive via CAP through IPAWS over the internet. But traditional EAS equipment, the dedicated hardware sitting in your rack monitoring local and state primary sources over RF, works independently of the internet. My opinion is that it should remain your foundation.
When severe weather hits, when there's a tornado warning, when seconds count, you need that hardware to work regardless of what's happening with any cloud provider. CAP and IPAWS are valuable additions that extend reach and provide more detailed information. But they should supplement traditional broadcast EAS, not replace it. During Monday's AWS outage, if your EAS system was hosted entirely in AWS, you couldn't reach it. And if there was an actual emergency during that time, you might not have been able to relay alerts to your community. For a life-safety system, that's unacceptable. So even as the industry explores cloud options, on-premise hardware with true independence should remain the foundation.
But automation, traffic systems, content delivery? Those can absolutely use multi-cloud approaches because they're business critical, not life-safety critical in the same way. And multi-cloud gives you the reliability you need.
Let me paint a complete picture. It's a weekday afternoon. Severe weather is moving through. The National Weather Service issues a tornado warning. At the same time, there's a major AWS outage. If you've architected your infrastructure right, here's what happens. Your on-premise EAS hardware gets the alert, processes it, and relays it. Check. Your primary automation runs on AWS and is affected, but backup automation on Azure takes over seamlessly. Programming continues. Ads keep running. Your listeners hear the EAS alert, then they hear your station continuing to broadcast normally. You haven't lost your audience. You haven't lost revenue. And you've served your community. That's proper infrastructure design.
Contrast that with putting everything in one place. Automation on AWS. Traffic on AWS. Remote access on AWS. IP STL as the only link over a single internet connection. When AWS goes down, everything fails at once. No automation. No programming. Dead air. Listeners tune out. Revenue stops. It's like having only one internet connection at your business. When it goes down, you're done.
Here's what really drives this home. It wasn't a small provider that failed. It was AWS, one of the largest and most reliable cloud providers in the world. Massive infrastructure, professional staff, built-in redundancy. And they still had a catastrophic failure that lasted hours. If AWS can fail like that, anyone can. Everything fails eventually. The question is whether you've architected your systems to survive those failures.
Dave talked about what he called internet monoculture. We used to spread risk across many independent providers. Now we've concentrated it. AWS, Azure, Google Cloud run huge portions of the internet. When one fails, the blast radius is enormous. The answer for broadcasters is diversity. Multiple providers, just like multiple internet connections. On-premise infrastructure for critical systems. Use cloud for what makes sense, but don't make any single cloud provider your only option.
So here's the solution. For your STL, if you're using IP, have a backup using different infrastructure. Microwave or a second IP path through a completely different provider with different backbone routing. For automation and critical systems, don't put everything on one provider. Run primary on AWS and backup on Azure, or keep automation on-premise with cloud backup. For EAS, maintain on-premise hardware as your primary foundation that operates completely independently, even as you potentially add internet-based capabilities through CAP and IPAWS.
And test your redundancy regularly. Turn off your primary automation and make sure backup takes over without dead air. Simulate an AWS outage and verify Azure handles the load. Disable your primary STL and confirm backup kicks in. Train your staff on manual procedures. Because systems will fail. The question is whether you're ready.
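If you want to script those drills, a minimal sketch might look like this, assuming hypothetical stub hooks for your own switcher and silence sensor (the stubs here just print and return a canned answer so the harness runs as-is):

```python
import time
from datetime import datetime

# Hypothetical hooks -- replace with your own switcher / automation controls.
def disable_primary_path():      print("operator: take primary path offline")
def restore_primary_path():      print("operator: restore primary path")
def backup_is_carrying_audio():  return True  # e.g. poll a silence sensor at the TX site

def run_failover_drill(max_dead_air_seconds=10):
    """Time how long the backup path takes to pick up after the primary drops."""
    disable_primary_path()
    start = time.monotonic()
    try:
        while not backup_is_carrying_audio():
            if time.monotonic() - start > max_dead_air_seconds:
                return {"time": datetime.now().isoformat(), "passed": False,
                        "recovery_seconds": None}
            time.sleep(1)
        return {"time": datetime.now().isoformat(), "passed": True,
                "recovery_seconds": round(time.monotonic() - start, 1)}
    finally:
        restore_primary_path()

if __name__ == "__main__":
    print(run_failover_drill())
```

Logging each drill's result over time gives you evidence that the failover actually works, instead of assuming it does.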
Monday's AWS outage should be a wake-up call. We need to build real redundancy that protects both our emergency alerting capabilities and our business operations. Losing automation during a cloud outage means losing listeners and revenue. That's just as serious from a business perspective as failing to relay an emergency alert is from a public safety perspective. And the solution is the same concept you already understand from having backup internet connections. Apply that thinking to cloud providers, to STLs, to all your critical infrastructure.
To the listeners, this is why your local radio station matters. When major internet outages happen, when cloud services fail, stations that have built their infrastructure right will be there. Broadcasting. With your music, your talk shows, your news, your weather. Because we've built our systems to survive failures and serve you no matter what.
Shout out to Dave's Garage for his excellent technical breakdown. I'll have the link in the episode description.
That's it for today's episode of Fully Modulated. If you've got thoughts on this, if you're using multi-cloud approaches, or if you've got your own take on building redundant infrastructure, I'd love to hear from you. And if you want to support the show, head over to fullymodulated.com and become a modulator for just three bucks a month. Thanks for listening, and I'll catch you on the next one.
Podcasts we love
Check out these other fine podcasts recommended by us, not an algorithm.
The Why Files: Operation Podcast
Sightings – REVERB | QCODE
This Week In Radio Tech (TWiRT) – guysfromqueens
The Ezra Klein Show – New York Times Opinion
Alive with Steve Burns – Lemonada Media
Friends Who Pretend – Chris Bryant
99% Invisible – Roman Mars
Hard Fork – The New York Times
Tetragrammaton with Rick Rubin – Rick Rubin
The 404 Media Podcast – 404 Media
The Daily – The New York Times
Honestly with Bari Weiss – The Free Press
Search Engine – PJ Vogt
Pod Save America – Crooked Media
Danny Jones Podcast – Danny Jones | QCODE
Darknet Diaries – Jack Rhysider
Soul Boom – Rainn Wilson