The Day the Lights Went Out; We're All On the Grid Together

August 16, 2003
Albert-László Barabási

Once power is fully restored, it will take little time to find the culprit: most likely, it will be a malfunctioning switch or fuse, a snapped power line or some other local failure. Somebody will be fired, promotions and raises denied, and lawmakers will draw up legislation guaranteeing that this problem will not occur again.

Something will be inevitably missed, however, during all this finger-pointing: this week's blackout has little to do with faulty equipment, negligence or bad design. President Bush's call to upgrade the power grid will do little to eliminate power failures. The magnitude of the blackout is rooted in an often ignored aspect of our globalized world: vulnerability due to interconnectivity.

In the early days of electricity, all power was produced locally. First each neighborhood, later each city, had its own power plant. Local generators had to satisfy the peak demands of hot summer nights, when everything from air-conditioners to televisions runs at full power. That meant the generators sat idle most of the time outside peak hours.

That extra capacity was shared as utilities learned to decrease costs by connecting their facilities and helping each other out during peak-demand periods. The current power grid linked up formerly isolated systems with enough wire to stretch to the moon and back. It requires only a computer keystroke to redirect power produced in New York to the Midwest.

With thousands of generators and hundreds of thousands of miles of lines, the network became so interconnected that even on a normal day a single perturbation can be detected thousands of miles away. This created a whole new set of problems and vulnerabilities, the effects of which have been felt by millions in the past two days.

Because electricity cannot be stored, when a line goes down, its power must be shifted to other lines. Most of the time the neighboring lines have no difficulty carrying the extra load. If they do, however, they will also tip and redistribute their increased load to their neighbors.

This occasionally leads to a cascading failure -- a series of lines becomes overburdened and malfunctions in a short period of time. This is exactly what happened in August 1996 when, because of unusually warm weather, a 1,300-megawatt power line in Oregon sagged, hit a tree and went dead. Power was redistributed automatically but the other lines also failed, causing a blackout in 11 Western states and two Canadian provinces.
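The mechanism described in the last two paragraphs can be sketched as a toy model. The ring topology, the uniform loads and the capacities below are my own simplifying assumptions for illustration, not a model of the actual grid:

```python
def cascade(n_lines, load, capacity, first_failure=0):
    """Toy cascading-failure model: n_lines transmission lines on a ring.

    When a line fails, its load is shed equally onto its still-working
    ring neighbours; any neighbour pushed past `capacity` fails in turn.
    Returns the number of lines that end up failing.
    """
    loads = [float(load)] * n_lines
    failed = [False] * n_lines
    queue = [first_failure]
    while queue:
        i = queue.pop()
        if failed[i]:
            continue
        failed[i] = True
        shed, loads[i] = loads[i], 0.0
        neighbours = [j for j in ((i - 1) % n_lines, (i + 1) % n_lines)
                      if not failed[j]]
        for j in neighbours:
            loads[j] += shed / len(neighbours)
            if loads[j] > capacity:
                queue.append(j)
    return sum(failed)

# Lines running close to capacity collapse system-wide; ample headroom
# confines the blackout to the single broken line.
print(cascade(10, load=1.0, capacity=1.4))  # the cascade engulfs the whole ring
print(cascade(10, load=1.0, capacity=2.0))  # the failure stays local
```

The point of the toy model is the threshold behavior: the same initial failure is either invisible or catastrophic depending on how much spare capacity the neighboring lines carry.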

Cascading failures are common in most complex networks. They take place on the Internet, where traffic is rerouted to bypass malfunctioning routers, occasionally overwhelming routers that are not equipped to handle the extra traffic, much as a denial-of-service attack would. We witnessed one in 1997, when the International Monetary Fund pressured the central banks of several Pacific nations to limit their credit. That started a cascading monetary failure that left behind scores of failed banks and corporations around the world.

Cascading failures are occasionally our ally, however. The American effort to dry up the money supply of terrorist organizations is aimed at crippling terrorist networks. And doctors and researchers hope to induce cascading failures to kill cancer cells.

The effect of power blackouts, economic crises and terrorism can easily be limited or even eliminated if we are willing to cut the links. Strictly local energy production would guarantee that each blackout would also be strictly local.

But severing the ties would also cripple the network. Shutting down international trade would surely eliminate the impact of the Japanese central bank on the American economy, but it would also guarantee a global economic meltdown. Closing our borders would reduce the chance of terrorist attacks, but it would also risk the American dream of diversity and openness.

The events of the past few days -- unwanted side effects of our network society -- are just one of the periodic reminders that we live in a globalized world. While celebrating that everybody on earth is only six handshakes from us, we need to accept that so are their problems and vulnerabilities.

Most failures emerge and evaporate locally, largely unnoticed by the rest of the world. A few, however, percolate through our dense technological and social networks, hitting us from the most unexpected directions. Unless we are willing to cut the connections, the only way to change the world is to improve all nodes and links.

Originally published in The New York Times (2003)


Figure 1. How hard is it to distinguish random from scale-free networks? To show how different the predictions of the two modeling paradigms are, the scale-free and the random network models, I show the degree distribution of four systems: the Internet at the router level; the protein-protein interaction network of yeast; an email network; and a citation network, together with the best Poisson distribution fit expected for a random network. It takes no sophisticated statistical tools to notice that the Poisson does not fit.
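A minimal numerical illustration of why a Poisson fit fails on heavy-tailed data. The synthetic sample below, its exponent and its cutoff are hypothetical stand-ins for an empirical degree distribution, not the four datasets in the figure:

```python
import math
import random

def poisson_sf(k, lam):
    # P(K >= k) for a Poisson(lam) variable, summed in log space;
    # the sum is truncated after 200 terms, which decay very fast
    return sum(math.exp(-lam + j * math.log(lam) - math.lgamma(j + 1))
               for j in range(k, k + 200))

random.seed(42)
# hypothetical heavy-tailed sample, roughly p(k) ~ k**-3 with k >= 2,
# generated by inverse-transform sampling of a Pareto variable
degrees = [int(2 * (1.0 - random.random()) ** -0.5) for _ in range(10_000)]

lam = sum(degrees) / len(degrees)  # the Poisson MLE is the sample mean
kmax = max(degrees)
tail = poisson_sf(kmax, lam)
print(f"largest hub: {kmax}, Poisson fit gives it probability {tail:.3g}")
```

The best-fitting Poisson assigns an essentially zero probability to the largest hub the sample actually contains, which is exactly the mismatch visible in the figure.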
Box 3: All we need is love

If you have difficulty understanding the need for the super-weak, weakest, weak, strong and strongest classification, you are not alone. It took me several days to get it. So let me explain it in simple terms.

Assume that we want to find the word Love in the following string: "Love". You could of course simply match the string and call it mission accomplished. That, however, would not offer statistical significance for your match.

BC insist that we must use a rigorous algorithm to decide if there is Love in Love. And they propose one that works like this: take the original string of letters and break it into all possible sub-strings:

They call the match super-strong if at least 90% of these sub-strings match Love. In this case we do have Love in the list, but it is only one of the 14 possible sub-strings, so Love is not super-strong.

They call the match super-weak if at least 50% of the strings match the search string. Love is obviously not super-weak either.

In the end, Clauset's algorithm arrives at the inevitable conclusion: There is no Love in Love.

The rest of us: Love is all you need
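The thresholding logic described in the box can be sketched in a few lines. The function names and the contiguous-substring enumeration are my assumptions; the box itself counts 14 sub-strings, so BC's enumeration presumably differs slightly, but the conclusion is the same either way:

```python
def substrings(s):
    # every contiguous sub-string of s; for "Love" this yields 10 entries
    return [s[i:j] for i in range(len(s)) for j in range(i + 1, len(s) + 1)]

def classify(text, word):
    # BC-style thresholds as described in the box: a match is
    # "super-strong" if at least 90% of the sub-strings equal the
    # search word, "super-weak" if at least 50% do
    subs = substrings(text)
    frac = sum(sub == word for sub in subs) / len(subs)
    if frac >= 0.9:
        return "super-strong"
    if frac >= 0.5:
        return "super-weak"
    return "no match"

print(classify("Love", "Love"))  # the exact match is one sub-string among ten
```

Because the exact match is always a small minority among all sub-strings of itself, no string can ever pass these thresholds against its own content.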

Figure 3. Differentiating model systems. Curious why the method adopted by BC cannot distinguish the Erdős-Rényi and the scale-free model, we generated the degree distribution of both models for N=5,000 nodes, the same size BC use for their test. We implemented the scale-free model described in Appendix E of Ref [1], a version of the original scale-free model (their choice is problematic, by the way, but let us not dwell on that now). The plot shows three different realizations of each network, allowing us to see the fluctuations between realizations, which are small at this size. The differences between the two models are impossible to miss: the largest nodes in any of the Erdős-Rényi networks have degree less than 20, while the scale-free model generates hubs with hundreds of links. Even a poorly constructed statistical test could tell the difference. Yet 38% of the time the method used by BC does not identify the scale-free model as even 'weak scale-free,' while 51% of the time it classifies the ER model as 'weak scale-free.'
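The gap the caption describes is easy to reproduce. The generators below are my own simplified stand-ins, not the Appendix E implementation of Ref [1], and the seeds are arbitrary:

```python
import random

def er_max_degree(n, n_edges, rng):
    # Erdős-Rényi-style graph: n_edges distinct random node pairs
    deg = [0] * n
    seen = set()
    while len(seen) < n_edges:
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j and (min(i, j), max(i, j)) not in seen:
            seen.add((min(i, j), max(i, j)))
            deg[i] += 1
            deg[j] += 1
    return max(deg)

def ba_max_degree(n, m, rng):
    # textbook preferential attachment via the repeated-targets trick:
    # a node appears in `repeated` once per link, so sampling from it
    # picks targets with probability proportional to degree
    deg = [0] * n
    repeated = list(range(m))  # seed nodes
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(repeated))
        for t in chosen:
            deg[t] += 1
            deg[new] += 1
            repeated += [t, new]
    return max(deg)

n, m = 5000, 2
er_max = er_max_degree(n, m * (n - m), random.Random(1))
ba_max = ba_max_degree(n, m, random.Random(1))
print(f"largest ER degree: {er_max}, largest scale-free hub: {ba_max}")
```

With matched node and link counts, the largest hub of the preferential-attachment network dwarfs the largest node of the random graph, which is the difference the figure makes visible.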

