How to Build Facilities That Survive Cascading Disruptions

When a natural disaster strikes, it rarely stays contained. Its effects ripple across regions, overwhelming shared infrastructure. In recent years, a series of hurricanes, earthquakes, and heatwaves has exposed the fragility of our interconnected systems. When Hurricane Harvey barrelled into the Texas Gulf Coast in 2017, freight lorries were stranded, refineries were forced offline, and nearly a tenth of America’s trucking capacity disappeared within days. Five years earlier, Superstorm Sandy plunged swathes of New Jersey and New York into darkness, bringing transport to a standstill. In 2011, Japan’s Tōhoku earthquake and tsunami disrupted car and electronics factories around the world. Even heatwaves, such as California’s record-breaking temperatures in 2020, triggered cascading blackouts across entire regions.
Each disaster started locally but quickly disrupted connected systems far beyond the initial zone of impact. Yet most firms still plan logistics as if facilities operate in isolation. Traditional models for locating factories, warehouses, or data centres assume either that sites fail independently or that, like dominoes, they collapse all at once. Both assumptions are deceptively simple, and both are inaccurate.
It is this gap between how disruptions actually occur and how companies plan for them that ISB Professor Vishwakant Malladi, along with Kumar Muthuraman, examines in their study. They propose a model that identifies how facilities fail together, helping firms choose sites that are both cost-effective and resilient in the long run.
Beyond the false comfort of independence
At first glance, deciding where to build warehouses seems straightforward: minimise servicing costs without spreading resources too thin. Introduce the risk of disruption, however, and things turn thorny.
Classical approaches dodge this complexity by choosing between two extremes. The independent model treats each site as if it could only fail on its own, making calculations simple but overly optimistic. The extreme-dependence model, on the other hand, assumes that if one site fails, all do: a cautious stance, but often needlessly conservative.
Neither captures the ground reality. Real crises sit in the messy middle: a hurricane might wipe out several ports while sparing others, or a cyberattack might disable only sites sharing the same software. Modelling these partial, correlated disruptions is challenging, especially when data is scarce, and possible scenarios multiply fast.
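The gap between the two extremes is easy to see with a toy calculation. The sketch below uses an assumed, illustrative marginal failure probability of 0.1 for each of two facilities; it is not drawn from the study's data.

```python
# Two classical disruption models for two facilities, each with an
# assumed marginal failure probability of 0.1 (illustrative only).
p = 0.1

# Independent model: sites fail on their own, so the joint failure
# probability is the product of the marginals -- optimistic.
p_both_independent = p * p

# Extreme-dependence model: if one site fails, all do, so the joint
# failure probability equals the marginal -- conservative.
p_both_extreme = p

# Probability that at least one site survives (service continues):
survive_independent = 1 - p_both_independent
survive_extreme = 1 - p_both_extreme

print(round(survive_independent, 4), round(survive_extreme, 4))
```

Real, partially correlated disruptions land somewhere between these two numbers, which is precisely the region the classical models cannot represent.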
Hidden hands behind failures
The researchers tackle this complexity using a partially subordinated Markov chain, where each facility is either operational or down. Failures are influenced by hidden factors called subordinators—shared vulnerabilities such as a hurricane belt, a common supplier, or an IT backbone. Facilities may face multiple exposures, mirroring real-world interdependencies.
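The subordinator idea can be sketched in a short simulation. The exposure map, facility names, and activation probabilities below are all hypothetical, and the code is a simplified illustration of shared-factor failures, not the authors' exact model or estimation procedure.

```python
import random

# Hypothetical exposure map: each facility lists the hidden
# subordinators (shared vulnerabilities) it depends on.
EXPOSURES = {
    "warehouse_A": {"hurricane_belt", "it_backbone"},
    "warehouse_B": {"hurricane_belt"},
    "warehouse_C": {"it_backbone"},
    "warehouse_D": set(),  # no shared exposures
}

# Assumed per-period activation probabilities for each hidden factor.
ACTIVATION = {"hurricane_belt": 0.05, "it_backbone": 0.02}

def simulate_step(rng: random.Random) -> dict:
    """Return each facility's status (True = operational) for one period.

    A facility goes down if any subordinator it is exposed to activates,
    so facilities sharing a subordinator fail together.
    """
    active = {s for s, p in ACTIVATION.items() if rng.random() < p}
    return {f: not (deps & active) for f, deps in EXPOSURES.items()}

# Estimate how often A and B fail together versus what the independent
# model (product of marginal failure rates) would predict.
rng = random.Random(42)
n = 100_000
both_down = a_down = b_down = 0
for _ in range(n):
    status = simulate_step(rng)
    a = not status["warehouse_A"]
    b = not status["warehouse_B"]
    a_down += a
    b_down += b
    both_down += a and b

joint_rate = both_down / n
independence_prediction = (a_down / n) * (b_down / n)
print(joint_rate, independence_prediction)
```

Because A and B share exposure to the hurricane belt, their joint failure rate comes out far higher than the product of their marginal rates, which is exactly the kind of partial correlation the two classical extremes miss.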