Half of U.S. Coal runs out in 30 years, not 250

Preface. The USGS surveyed U.S. coal in 1974 and announced that America had 250 years of coal left. In 2007, the National Research Council wrote a report suggesting 100 years was more likely, due to “a combination of increased rates of production…transportation issues, recoverability, and location”, and that the USGS ought to re-survey the U.S. to find out how much economically recoverable coal actually remained.

Not until 2015 was a new survey done, of the Powder River Basin (PRB) in Wyoming and Montana, which supplies 45% of U.S. coal. The USGS found that at best 40 years of coal were left (35 years as of 2020). Here’s how the USGS calculated this, in billions of short tons (BST); a quick arithmetic sketch follows the list:

  • 1,156 BST of original resources (mostly coal that isn’t economically or technologically obtainable).
  • 1,148 BST remaining after subtracting previously mined coal.
  • 179 BST after subtracting coal under environmental, societal, and technological restrictions.
  • 162 BST after subtracting coal that is too deep or too thin, has high stripping ratios, or faces mining technology limitations.
  • 25 BST, about 2% of the original resource estimate, after subtracting coal that is more expensive to produce than the market value of coal. These are the reserves.
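
Here is the same funnel as that quick arithmetic sketch (a minimal restatement of the bullet list above; the step labels are my paraphrase):

```python
# USGS 2015 Powder River Basin funnel, in billions of short tons (BST),
# using the figures quoted in this post.
steps = [
    ("Original resources", 1156),
    ("After subtracting previously mined coal", 1148),
    ("After environmental/societal/technological limits", 179),
    ("After too deep, too thin, high stripping ratios", 162),
    ("After subtracting coal priced above market (reserves)", 25),
]
original = steps[0][1]
for label, bst in steps:
    print(f"{label:<55} {bst:>5} BST  ({bst / original:.1%} of original)")

# At the basin's ~0.4 BST/year production rate, even 25 BST lasts only
# ~60 years; at the 16 BST reserve figure in the AP story below,
# 16 / 0.4 = 40 years.
```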

You would think this would be huge news, but the only major news outlets it appeared in were U.S. News & World Report and the Pittsburgh Post-Gazette.

Then in 2017, the Little Snake River and Red Desert coal fields were reassessed. Originally there were 19.37 BST of resources; now only about 1% of that original resource remains as reserves, 167 million short tons that are economically and technologically obtainable (Shaffer 2017).

Two other basins with a lot of coal, the Appalachian and Illinois Basins, have not been reassessed, nor have the Raton and Piceance Basins in the Rocky Mountain Province.

Lignite has such low energy density that the lignite basins, the Williston Basin in the Northern Great Plains Province and those of the Gulf Coast Province, are not worth evaluating (USGS 2017b).

The QUALITY needs to be considered too. Tad Patzek, former chairman of the Department of Petroleum and Geosystems Engineering at the University of Texas at Austin, found that in energy-content terms the global coal peak may already have occurred, around 2011 (Patzek et al. 2010). There is still a lot of coal left, but to the extent production depends on diesel trucks, other petroleum inputs, and plentiful water, it is likely to decline as oil declines. Also, as overburden increases, the “stripping ratio”, the tons of earth that must be removed to get at a ton of coal, rises, so mining will take more and more energy at a time when energy is declining. Many thick coal seams curve deeper into the earth, making them more energy-intensive to mine.


***

Matthew Brown. Feb. 23, 2016. Amid coal market struggles, less fuel worth mining in US. Associated Press.

This AP article is based on a USGS report that “presents the final results of the first assessment of both coal resources and reserves for all significant coal beds in the entire Powder River Basin, northeastern Wyoming and southeastern Montana. The basin covers about 19,500 square miles, and contains the largest resources of low-sulfur, low-ash, subbituminous coal in the United States. It is the single most important coal basin in the United States. In 2012, almost 420 million short tons were produced from this basin, which was about 42 percent of the total coal production in the United States.”

(AP) — Vast coal seams dozens of feet thick that lie beneath the rolling hills of the Northern Plains once appeared almost limitless, fueling boasts that domestic reserves were sufficient to power the U.S. for centuries.

But an exhaustive government analysis says that at current prices and mining rates the country’s largest coal reserves, located along the Montana-Wyoming border, will be tapped out in just a few decades.

The finding by the U.S. Geological Survey upends conventional wisdom on the lifespan for the nation’s top coal-producing region, the Powder River Basin.

“You’re looking at a forty-year life span, maximum, for Powder River coal,” said USGS geologist Jon Haacke, one of the authors of the analysis.

Claims that the U.S. had reserves sufficient to last as long as 250 years came from greatly inflated estimates of how much coal could be mined, Haacke added. They were based on data put out by the U.S. Energy Department, last updated comprehensively in the 1990s.

USGS study leader James Luppens said the Energy Department estimates were in “desperate need of revision.” But there are no immediate plans to do so or to incorporate the new findings, said Lance Harris, a supervisor with the Energy Department’s coal team.

For decades, the agency has made little distinction between coal reserves that reasonably could be mined and those that could not.

The perception of coal’s abundance began to shift in 2008, when the USGS team released initial data that called into question the longevity of U.S. supplies.

Yet assertions that America was the “Saudi Arabia of coal” persisted, including in 2010 by President Barack Obama and continuing in recent months by industry supporters. The Department of Energy states on its website that based on current mining rates, “estimated recoverable coal reserves would last about 261 years.”

Leslie Glustrom, an environmental activist from Boulder, Colorado, who has urged the Energy Department to change how it tallies up the nation’s untapped resources, said she believes the end for the Powder River Basin is coming even more rapidly than the USGS study suggests. And she said it has little to do with a “war on coal” that Republicans frequently accuse the Obama administration of waging.

“This is not a political problem. It’s a geologic problem,” Glustrom said.

It’s been four decades since its low-sulfur content first made Powder River Basin coal the fuel of choice among electric utilities that needed to cut their sulfur dioxide pollution. Sprawling strip mines in the region have since removed more than 11 billion tons of coal, the equivalent of 95 million loaded rail cars.

To gauge how much coal remains, USGS researchers since 2004 have analyzed the geology from minerals removed by 30,000 holes drilled deep into the earth. The data revealed almost 1.1 trillion tons of coal buried across the 20,000-square-mile Powder River Basin. Of that, only 162 billion tons is within coal seams considered thick enough and close enough to the surface to make extracting them worthwhile.

The amount drops even more drastically when the coal’s quality is factored in and compared against current prices. When the USGS data was first compiled, in 2013, Powder River Basin coal was selling for $10.90 a ton, resulting in about 23 billion tons being designated as economically recoverable.

With coal prices down to $9.55 a ton, the reserve estimate has plummeted to just 16 billion tons, Haacke said. That’s equivalent to 40 years at the current production pace of 400 million tons annually from the basin’s 16 mines in Wyoming and Montana.

Meanwhile, mining costs have trended up. That’s been driven by an increase in the “stripping ratio” — how many tons of earth must be removed to mine a ton of coal as the region’s thick coal seams curve gradually deeper into the earth.

“It became two to one, then three to one, then three-and-a-half to one,” Haacke said of the stripping ratio. “That becomes a dirt-moving operation rather than a coal-moving operation.”

Luppens, James A., et al. 2015. Coal Geology and Assessment of Coal Resources and Reserves in the Powder River Basin, Wyoming and Montana. USGS.

This report presents the final results of the first assessment of both coal resources and reserves for all significant coal beds in the entire Powder River Basin, northeastern Wyoming and southeastern Montana. The basin covers about 19,500 square miles, exclusive of the part of the basin within the Crow and Northern Cheyenne Indian Reservations in Montana. The Powder River Basin, which contains the largest resources of low-sulfur, low-ash, subbituminous coal in the United States, is the single most important coal basin in the United States. The U.S. Geological Survey used a geology-based assessment methodology to estimate an original coal resource of about 1.16 trillion short tons for 47 coal beds in the Powder River Basin; in-place (remaining) resources are about 1.15 trillion short tons. This is the first time that all beds were mapped individually over the entire basin. A total of 162 billion short tons of recoverable coal resources (coal reserve base) are estimated at a 10:1 stripping ratio or less. An estimated 25 billion short tons of that coal reserve base met the definition of reserves, which are resources that can be economically produced at or below the current sales price at the time of the evaluation. The total underground coal resource in coal beds 10–20 feet thick is estimated at 304 billion short tons.

This report is groundbreaking as it provides the first published maps of the individual coal beds for the entire PRB.

Prior resource assessments relied on net coal thickness maps for only selected beds. Although net thickness maps are sufficient for estimating in-place (remaining) resources, the mapping of all individual beds is necessary for conducting economic studies to determine the coal reserve base for the Powder River Basin. The coal reserve base includes those resources that are currently (October 2014) economic (reserves), but also may encompass those parts of a resource that have a reasonable potential for becoming economically available. Thus, the coal reserve base provides a more realistic estimate of the portion of in-place resources that are potentially recoverable, which is important from a national energy standpoint. A key to the success of this current assessment was incorporating as much data as practical from the recent, extensive coal bed methane development in the basin. The interpretation of these new data proved critical to the development of a comprehensive geologic model needed for estimating coal resources and reserves in the Powder River Basin. A total of 29,928 drill holes were used for this assessment.

There is often confusion regarding the use of the terms coal resources and coal reserves as they relate to assessments. Although the two terms have been used interchangeably, there are significant differences between the definitions. Coal resources include those in-place tonnage estimates determined by summing the volumes for identified resources and hypothetical resources, using coal zones of a minimum thickness and within certain depth limits (commonly 0–2,000 feet [ft] deep) (Pierce and Dennen, 2009). Coal reserves are a subset of coal resources and are considered economically minable at the time of classification (Wood and others, 1983).

The cumulative results from the four PRB assessment areas are 24.5 BST of coal reserves and a total recoverable coal resource (coal reserve base) of 162 BST in coal beds greater than 5 ft in thickness and less than a 10:1 stripping ratio.

So far, 11 billion tons of coal, enough to fill 95 million rail cars, have been removed. Yes, there’s a lot of coal down there: 1.1 trillion tons, but only 162 billion tons are in seams thick enough and close enough to the surface to justify mining. Remember, money is an abstract concept that can’t move your car even an inch if stuffed into the gas tank. No matter what the price of coal, if it takes more energy to mine and transport than the energy contained within the coal, it’s an energy sink and the mine will be shut down.

References

NRC. 2007. Coal: Research and Development to Support National Energy Policy. National Research Council.

Patzek, T., et al. 2010. A global coal production forecast with multi-Hubbert cycle analysis. Energy 35: 3109–3122.

Shaffer, B. N., et al. 2017. Assessment of coal resources and reserves in the Little Snake River Coal Field and Red Desert Assessment Area, Greater Green River Basin, Wyoming. Fact Sheet 2019-3053. United States Geological Survey.

Singh, S. 2021. China power crunch spreads, shutting factories and dimming growth outlook. Reuters. https://www.reuters.com/world/china/chinas-power-crunch-begins-weigh-economic-outlook-2021-09-27/

USGS. 2017b. Assessing U.S. coal resources and reserves. Fact sheet 2017-3067. United States Geological Survey.

Xu, M. 2022. Analysis: Quantity over quality – China faces power supply risk despite coal output surge. Reuters. https://www.reuters.com/markets/commodities/quantity-over-quality-china-faces-power-supply-risk-despite-coal-output-surge-2022-06-21/


Were other humans the first victims of the 6th mass extinction?

Preface. This article makes a good case that we did indeed wipe out other hominids. “…Yet the extinction of Neanderthals, at least, took a long time—thousands of years. While Neanderthals lost the war, to hold on so long they must have fought and won many battles against us, suggesting a level of intelligence close to our own.”

I seriously doubt we’ll drive ourselves extinct, though the carrying capacity of the earth is at best 1 billion (pre-fossil fuels), or less given topsoil erosion, deforestation, pollution, climate change, etc.


***

Longrich, N. 2019. Were other humans the first victims of the sixth mass extinction? The Conversation.

Nine human species walked the Earth 300,000 years ago. Now there is just one. The Neanderthals, Homo neanderthalensis, were stocky hunters adapted to Europe’s cold steppes. The related Denisovans inhabited Asia, while the more primitive Homo erectus lived in Indonesia, and Homo rhodesiensis in central Africa.

Several short, small-brained species survived alongside them: Homo naledi in South Africa, Homo luzonensis in the Philippines, Homo floresiensis (“hobbits”) in Indonesia, and the mysterious Red Deer Cave People in China. Given how quickly we’re discovering new species, more are likely waiting to be found.

By 10,000 years ago, they were all gone. The disappearance of these other species resembles a mass extinction. But there’s no obvious environmental catastrophe—volcanic eruptions, climate change, asteroid impact—driving it. Instead, the extinctions’ timing suggests they were caused by the spread of a new species, evolving 260,000-350,000 years ago in Southern Africa: Homo sapiens.

The spread of modern humans out of Africa has caused a sixth mass extinction, an event spanning more than 40,000 years, extending from the disappearance of Ice Age mammals to the destruction of rainforests by civilisation today. But were other humans the first casualties?

We are a uniquely dangerous species. We hunted woolly mammoths, ground sloths and moas to extinction. We destroyed plains and forests for farming, modifying over half the planet’s land area. We altered the planet’s climate. But we are most dangerous to other human populations, because we compete for resources and land.

History is full of examples of people warring, displacing and wiping out other groups over territory, from Rome’s destruction of Carthage, to the American conquest of the West and the British colonization of Australia. There have also been recent genocides and ethnic cleansing in Bosnia, Rwanda, Iraq, Darfur and Myanmar. Like language or tool use, a capacity for and tendency to engage in genocide is arguably an intrinsic, instinctive part of human nature. There’s little reason to think that early Homo sapiens were less territorial, less violent, less intolerant—less human.

Optimists have painted early hunter-gatherers as peaceful, noble savages, and have argued that our culture, not our nature, creates violence. But field studies, historical accounts, and archaeology all show that war in primitive cultures was intense, pervasive and lethal. Neolithic weapons such as clubs, spears, axes and bows, combined with guerrilla tactics like raids and ambushes, were devastatingly effective. Violence was the leading cause of death among men in these societies, and wars saw higher casualty levels per person than World Wars I and II.

Old bones and artifacts show this violence is ancient. The 9,000-year-old Kennewick Man, from North America, has a spear point embedded in his pelvis. The 10,000-year-old Nataruk site in Kenya documents the brutal massacre of at least 27 men, women, and children.

It’s unlikely that the other human species were much more peaceful. The existence of cooperative violence in male chimps suggests that war predates the evolution of humans. Neanderthal skeletons show patterns of trauma consistent with warfare. But sophisticated weapons likely gave Homo sapiens a military advantage. The arsenal of early Homo sapiens probably included projectile weapons like javelins and spear-throwers, throwing sticks and clubs.

Complex tools and culture would also have helped us efficiently harvest a wider range of animals and plants, feeding larger tribes, and giving our species a strategic advantage in numbers.

The ultimate weapon

But cave paintings, carvings, and musical instruments hint at something far more dangerous: a sophisticated capacity for abstract thought and communication. The ability to cooperate, plan, strategize, manipulate and deceive may have been our ultimate weapon.

The incompleteness of the fossil record makes it hard to test these ideas. But in Europe, the only place with a relatively complete archaeological record, fossils show that within a few thousand years of our arrival, Neanderthals vanished. Traces of Neanderthal DNA in some Eurasian people prove we didn’t just replace them after they went extinct. We met, and we mated.

Elsewhere, DNA tells of other encounters with archaic humans. East Asian, Polynesian and Australian groups have DNA from Denisovans. DNA from another species, possibly Homo erectus, occurs in many Asian people. African genomes show traces of DNA from yet another archaic species. The fact that we interbred with these other species proves that they disappeared only after encountering us.

But why would our ancestors wipe out their relatives, causing a mass extinction—or, perhaps more accurately, a mass genocide?

The answer lies in population growth. Humans reproduce exponentially, like all species. Unchecked, we historically doubled our numbers every 25 years. And once humans became cooperative hunters, we had no predators. Without predation controlling our numbers, and little family planning beyond delayed marriage and infanticide, populations grew to exploit the available resources.

Further growth, or food shortages caused by drought, harsh winters or overharvesting resources would inevitably lead tribes into conflict over food and foraging territory. Warfare became a check on population growth, perhaps the most important one.

Our elimination of other species probably wasn’t a planned, coordinated effort of the sort practiced by civilizations, but a war of attrition. The end result, however, was just as final. Raid by raid, ambush by ambush, valley by valley, modern humans would have worn down their enemies and taken their land.

Yet the extinction of Neanderthals, at least, took a long time—thousands of years. This was partly because early Homo sapiens lacked the advantages of later conquering civilizations: large numbers, supported by farming, and epidemic diseases like smallpox, flu, and measles that devastated their opponents. But while Neanderthals lost the war, to hold on so long they must have fought and won many battles against us, suggesting a level of intelligence close to our own.

Today we look up at the stars and wonder if we’re alone in the universe. In fantasy and science fiction, we wonder what it might be like to meet other intelligent species, like us, but not us. It’s profoundly sad to think that we once did, and now, because of it, they’re gone.


Movie review of Michael Moore’s “Planet of the Humans”

Preface. This documentary was made by Jeff Gibbs, a writer and environmentalist, with Michael Moore as executive producer. The movie is worth watching, an entertaining and quick way to understand why rebuildable “renewables” are neither green nor a solution for replacing fossil fuels.

I watched the movie and then read 20 criticisms of it. None were any good; it is as if the reviewers had watched an entirely different movie. Most call it names, even bullshit, rather than offering legitimate criticism of what was actually wrong, and they attack it for things it never said. A lot of howling can be heard, like an ox that’s been gored. McKibben is super angry about his portrayal. Here is Gibbs’ response to Bill McKibben.

All of the dozens of critiques zero in on something trivially incorrect, like some remark that solar panels only last 10 years. I do wish the filmmakers had left out questionable bits, but none of the attacks on this movie address the main points:

  • Renewables aren’t replacing natural gas and coal plants, because fossil plants are needed as backup; not enough energy storage exists, especially not batteries.
  • Renewables require stunning amounts of fossil fuels to generate the high heat to smelt metal ores. Nothing I have ever written or could write is as effective in conveying the ginormous amount of fossil fuels needed to construct renewable contraptions as the film’s sequence of dozens of metals being smelted (I wish the movie had also shown the fossil fuels needed to mine the ore, transport it to the smelter, crush it, fabricate it into pieces, ship and truck the pieces to the assembly factory, truck the result to its final destination, and so on).
  • Electricity in Germany and elsewhere is a tiny fraction of OVERALL energy use.

The only legitimate criticism, if it is ever offered, would need to come from scientists, who understand that you can’t rant, rave, and call a film names; you have to state what was wrong and cite peer-reviewed evidence to back it up. You can’t cherry-pick some random fact that makes wind or solar look good as a rebuttal.

The Guardian is more reasonable, but accuses the film of not offering a solution, and asks what about nuclear power. It’s not fair to say a 100-minute film should have covered nuclear and dozens of other topics.

So far the best reviews, with many points I didn’t mention, are by Robert Bryce (here), Richard Heinberg (here), McClennen at Salon (here), and episode 24, “Banana Town”, of the delightful podcast Crazy Town (here).

I’ve been writing since 2001 about peak oil, the coming energy crisis, and the other deaths by a thousand cuts that will eventually lead to the collapse of the world’s fossil-fueled civilization, which sadly means going back to the wood-based energy and infrastructure of past societies. The film sure got it right that burning biomass and making biofuels are quite destructive.

And finally, William Rees, professor at the University of British Columbia, wrote me to point out that even if renewables were ‘the answer’, even if we could contrive a cheap plentiful substitute for fossil fuels — it would be a catastrophe. We would simply use the energy bounty to completely dismember the Earth.


***

Michael Moore. 2020. Planet of the Humans. Youtube.com

Gibbs starts out by asking, “Why are we still addicted to fossil fuels? So I began to follow the green energy movement.”

He went to a solar fair that ran on solar power until it rained; then biodiesel generators were turned on, which didn’t work, so they plugged into the electric grid. Other “green” events later in the movie that claim to run on solar power are actually using diesel generators and the grid.

Famous, rich, powerful people support greenness. Obama gave hope that the green movement would ramp up. Al Gore shared ideas with Obama, Sir Richard Branson invested in renewables, and so did Vinod Khosla, major banks, and investment groups; Bloomberg gave $50 million to the Sierra Club to fight coal.

Then he shows how “green” technology may not be. GM introduced a new line of electric vehicles, the Volt, in Lansing, Michigan. Gibbs points out that much of the electricity in this region is produced with coal, which isn’t very green. Electric cars need rare earth metals, whose ores often contain radioactive material that has to be disposed of somehow, along with many other minerals that require massive amounts of energy to mine, smelt, and fabricate.

Then he shows a huge field of solar panels that the owner said could power 10 homes at best. Critics of the movie dismiss this, saying the latest solar panels are far more powerful, but even if panels were five times better, so that this large an area powered just 50 homes when the sun is shining, it’s not hard to imagine the millions of acres of panels required to power a city.

He interviews an environmental health and safety employee near a mountain, loved for its beauty and hiking, where a wind plant might be installed. The employee points out that the turbines will still require a backup fossil power plant idling 100% of the time on standby, to step in when the wind dies and ramp down when it surges, using more energy on standby than if it were just kept running. The forested mountain also protects the watershed, but not any longer if it is deforested for turbines.

The point that renewables don’t replace fossil fuels was then made more strongly by Richard York of the University of Oregon, whose article in Nature Climate Change, “Do alternative energy sources displace fossil fuels?”, showed that green energy did not replace fossil fuels. I just looked at the article; it’s much worse than that: “each unit of electricity generated by non-fossil-fuel sources displaced less than one-tenth of a unit of fossil-fuel-generated electricity.” So renewables add to energy generation, but aren’t replacing fossil generation.
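
To see what a displacement factor below one-tenth means in practice, here is a minimal sketch; the quantities are my own illustrative numbers, not York’s data:

```python
# Illustrative only: with a displacement factor of 0.1, each new unit
# of non-fossil electricity removes just 0.1 unit of fossil generation.
fossil = 1000.0            # baseline fossil generation, arbitrary units
added_nonfossil = 100.0    # newly added renewable/nuclear generation
displacement_factor = 0.1  # York found it was below one-tenth

fossil_after = fossil - displacement_factor * added_nonfossil
total_after = fossil_after + added_nonfossil

print(fossil_after)  # 990.0  -> fossil generation barely falls
print(total_after)   # 1090.0 -> total generation mostly just grows
```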

On top of that, fossil fuels were used to mine the materials for wind and solar, crush the ore, smelt the metal out, fabricate it into pieces, transport each piece to the assembly factory, and deliver the wind turbine or solar panel to its destination, and they are used for ongoing maintenance. So we aren’t making a transition to something else, or kicking our addiction to fossil fuels at all. We’re just expanding the amount of electrical energy produced a tiny bit.

Coal plants certainly aren’t being replaced by solar and wind, but by much larger natural gas plants fueled by the largest expansion of fossil fuel production in American history. The Sierra Club’s “Beyond Coal” campaign may have helped get many coal plants closed, but it did not reduce consumption of fossil fuels.

Gibbs asks if we are so desperate to find a green solution that we don’t look closely enough at them.  At U.C. Berkeley he’s shown how solar panels are made.  First, quartz is dynamited out of mountains, then coal melts the silicon out of quartz at 1800 F.  That is decidedly not green. 

Even solar companies admitted they weren’t entirely green, since making solar panels requires mining, and the panels only produce maximum power a few hours a day when the sun is up. As with wind, natural gas plants have to back up solar most of the time, according to Philip Moeller, a Federal Energy Regulatory commissioner. This is not efficient, and it causes wear and tear on fossil and nuclear plants, which weren’t designed to cycle like this, shortening their lifespans and increasing maintenance costs.

Without battery storage, fossil plants have to provide baseload power and balancing power. The world uses 546,000,000 giga-BTU; all the batteries in the world can store 51 giga-BTU, according to the International Energy Agency (IEA). Then they degrade. Many critics castigate this claim without a citation to prove it false. If anything, the problem is far worse than what the film portrayed. In my book “When Trucks Stop Running”, I show that the only battery for which there are enough materials on earth to store even half a day of global electricity generation is the sodium-sulfur (NaS) battery. Using data from the Department of Energy’s energy storage handbook (DOE/EPRI 2013), I calculated that NaS batteries capable of storing 24 hours of U.S. electricity generation would cost $40.77 trillion, cover 923 square miles, and weigh in at a husky 450 million tons. And after 15 years you would need to replace it all.
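
Working backward from those totals gives a feel for the scale. A minimal sketch, where the only number I add is a round ~4,000 TWh/year for U.S. electricity generation in the early 2010s (with tons treated as metric):

```python
# Back out the per-kWh figures implied by the book's NaS battery totals.
US_GENERATION_KWH_PER_YEAR = 4_000e9        # ~4,000 TWh/year, rounded
day_kwh = US_GENERATION_KWH_PER_YEAR / 365  # ~1.1e10 kWh for 24 hours

cost_usd = 40.77e12     # $40.77 trillion (the book's total)
mass_kg = 450e6 * 1000  # 450 million tons, treated as metric

print(f"storage needed: {day_kwh:.2e} kWh")
print(f"implied cost:   ${cost_usd / day_kwh:,.0f} per kWh stored")
print(f"implied mass:   {mass_kg / day_kwh:.0f} kg per kWh stored")
```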

Concentrated solar power (CSP) plants exist only in deserts. They need to burn natural gas for hours to run their turbines before the sun comes up and after it goes down. They were built with fossil fuels, and in my research I found that they cost about $1 billion each. The sun is renewable, but the solar arrays are not. You use more fossil fuels to build these facilities than the energy they’ll ever produce. Gibbs points out that if you were to criticize a CSP plant you’d be called evil, yet it is the evil Koch brothers who make almost every component of the glass, steel, and other parts, using some of the most toxic industrial processes ever invented.

The Ivanpah CSP plant takes up over 5 square miles of beautiful desert that was destroyed to build it. Only a few years later, things began falling apart.

You’ll hear that Germany has 35%, even 50%, renewable power, but Germany is still Europe’s largest consumer of coal, and those figures are at best the highest days of electricity generation, not overall power use. Electricity is only 20% of energy consumption; fossil fuels power German manufacturing, transportation, heating, and other non-electric needs. In addition, Germany has just built a large liquefied natural gas terminal to import US gas.

Elon Musk promised that his Tesla factory in Sparks, Nevada, would run off of solar, wind, and geothermal, but that is not true; the factory is connected to the electric grid. In fact, no factory anywhere in the world runs entirely off 100% renewable energy.

Then comes a dizzying film sequence depicting dozens of mining operations for the minerals and metals needed to make wind and solar, plus the coal and other fossil fuels required, and equipment and vehicles running on diesel. So NOT green.

So why are bankers, industrialists and environmental leaders only focused on green technology? 

Gibbs asks Sheldon Solomon at Skidmore College whether this is about denying death: the right has religion and endless fossil fuels, while the left says no worries, we have solar and wind. Yes, he confirms. We know we’re here, and we don’t like that we’re animals, so we envelop ourselves in the protective beliefs of religion, culture, and so on. Hearing points of view that contradict your comfortable illusions creates anxiety.

The McNeil biomass power plant, the biggest source of renewable power in Vermont, burns trees. Burning trees emits a great deal of CO2 and toxic metals, not clean and green at 30 cords per hour, 400,000 tons of wood a year. It took a lot of fossil fuels to cut the trees down, chip them, and truck them in; this biomass plant simply couldn’t exist without fossil fuels. Worse, old tires, creosote, and other wastes are added to the fuel, since green wood doesn’t burn well.

Environmental groups have touted for years that forests are renewable and will grow back. Sure, if you wait a century. If all trees were cut down and burned, they would power America for only a year.

Many universities have decided to go “green” by burning biomass. At a North Carolina college, Bruce Nilles, the director of the Sierra Club’s “Beyond Coal” campaign, proudly announced one such project. “Out of bed with coal companies, and into bed with logging companies?” Gibbs asks. Bill McKibben spoke with great favor and fervor at a college in Vermont that planned to burn wood.

To create 40 million gallons of ethanol, a project in Michigan proposed using a million tons of green wood, which would consume more natural gas, as fertilizer to grow replacement trees, than the energy in the ethanol produced.

Wood chips from America are being exported all over the world. Burning wood is by far the largest source of “green energy” in the world. Plenty of environmentalists realize this, but leaders have promoted it at times, calling it sustainable and renewable. When Gibbs asked Sierra Club, 350.org, and other leaders directly, they all dodged the question. Only one, Vandana Shiva of India, rejected biomass outright.

Gibbs then addresses the profit motive. Businesses are making a lot of money hiding under the cover of “green” energy: Bloomberg; Jeremy Grantham, who sells forests; Richard Branson, who ran an airplane on rainforest-destroying coconut oil; Vinod Khosla, who made ethanol from wood chips; and too many more to list. Several environmental leaders and groups were shown promoting “green” funds that actually put only a very small amount of money into green projects and much more into non-green investments.

How is 350.org funded? McKibben says they don’t get funds from large entities. The film never accuses him of taking such money, yet McKibben issued an angry rebuttal denying an accusation the film didn’t make.

We must accept that infinite growth on a finite planet isn’t possible, and we must take control away from billionaires; they are not our friends.

Many of those interviewed brought up population as the main issue, along with the need to consume less. If we don’t, we crash; this happens to species all the time. Between population growth and energy consumption, fossil fuels allowed humanity’s impact to expand to 100 times what it was only 100 years ago. Steven Running, an ecologist, talked about the limits we’re reaching: fisheries declining, farmland declining, groundwater and rivers vanishing, and numerous others. It is not just CO2 destroying the planet, it’s us and everything we’re doing.

To learn more from the filmmakers, see the discussion at: “Planet of the Humans” Earth Day Live Stream w/ Michael Moore, Jeff Gibbs & Ozzie Zehner

Afternote: Here are some articles that rebut many of the criticisms with peer-reviewed evidence instead of random information about this-or-that and straw-man arguments about things the film never said. Also, to expect a 100-minute film to cover EVERYTHING is absurd.

Fossil-fueled industrial heat hard to impossible to replace with renewables

Why solar power can’t save us from the coming energy crisis

48 Reasons why wind power can not replace fossil fuels

Utility scale energy storage has a long way to go to make renewables possible

Pumped Hydro Storage (PHS)

Who Killed the Electric Car and more importantly, the Electric Truck?

More posts about electric cars (topics include self-driving, lithium shortages, etc).

CSP Barriers and Obstacles

NREL. April 2012. Geothermal power and interconnection. The Economics of Getting to Market.

Nuclear power is too expensive and 37 reactors likely to shut down because of that

A Nuclear spent fuel fire at Peach Bottom in Pennsylvania could force 18 million people to evacuate

Peak Uranium by Ugo Bardi from Extracted: How the Quest for Mineral Wealth Is Plundering the Planet

Peak soil: Industrial agriculture destroys ecosystems and civilizations. Biofuels make it worse.

Wood, the fuel of preindustrial societies, is half of EU renewable energy

And finally, my book “When Trucks Stop Running” makes the case that civilization ends when trucks stop; EVs simply don’t matter. Here’s what would happen if trucks stopped (see the links at the end for why trucks can’t be electrified, and read my book on why trucks can’t run on electricity, batteries, hydrogen, biofuels, natural gas, liquefied coal, etc.):

What would happen if trucks stopped running?


How sand transformed civilization

Preface. No wonder we’re reaching peak sand. We use more of this natural resource than of any other except water. Civilization consumes nearly 50 billion tons of sand & gravel a year, enough to build a concrete wall 88 feet (27 m) high and 88 feet wide right around the equator.  
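
A back-of-the-envelope check shows the tonnage and the wall are consistent. This is a minimal sketch with my own assumptions: concrete at roughly 2,400 kg/m³ and a typical mix of about 75% aggregate (sand and gravel) by mass:

```python
# Plausibility check of the equator-wall comparison above.
EQUATOR_M = 40_075_000  # circumference of the equator, meters
SIDE_M = 88 * 0.3048    # 88 feet in meters (~26.8 m)

wall_volume_m3 = SIDE_M * SIDE_M * EQUATOR_M  # ~2.9e10 m^3
concrete_t = wall_volume_m3 * 2_400 / 1_000   # tonnes of concrete
aggregate_t = concrete_t * 0.75               # sand-and-gravel share

print(f"concrete in the wall:   {concrete_t / 1e9:.0f} billion tonnes")   # ~69
print(f"sand and gravel needed: {aggregate_t / 1e9:.0f} billion tonnes")  # ~52
```

That lands close to the ~50 billion tons of sand and gravel consumed per year.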


***

Vince Beiser. 2018. The World in a Grain. The Story of Sand and How It Transformed Civilization. Riverhead Books.

Riverbeds and beaches around the world are being stripped bare of their precious grains. Farmlands and forests are being torn up. And people are being imprisoned, tortured, and murdered. All over sand.

In 1950, some 746 million people—less than one-third of the world’s population—lived in cities. Today, the number is almost 4 billion.

The overwhelming bulk of it goes to make concrete, by far the world’s most important building material. In a typical year, according to the United Nations Environment Programme, the world uses enough concrete to build a wall 88 feet high and 88 feet wide right around the equator.    

There is such intense need for certain types of construction sand that places like Dubai, which sits on the edge of an enormous desert in the Arabian Peninsula, are importing sand from Australia.

Sand mining tears up wildlife habitat, fouls rivers, and destroys farmland.

Thieves in Jamaica made off with 1,300 feet of white sand from one of the island’s finest beaches in 2008. Smaller-scale beach-sand looting is ongoing in Morocco, Algeria, Russia, and many other places around the world.

The damage being done to beaches is only one facet, and not even the most dangerous one, of the damage being done by sand mining around the world. Sand miners have completely obliterated at least two dozen Indonesian islands since 2005. Hauled off boatload by boatload, the sediment forming those islands ended up mostly in Singapore, which needs titanic amounts of sand to continue its program of artificially adding territory by reclaiming land from the sea.

The city-state has created an extra 50 square miles in the past 40 years and is still adding more, making it by far the world’s largest sand importer. The demand has denuded beaches and riverbeds in neighboring countries to such an extent that Indonesia, Malaysia, Vietnam, and Cambodia have all restricted or completely banned exports of sand to Singapore.

Sand miners are increasingly turning to the seafloor, vacuuming up millions of tons with dredges the size of aircraft carriers. One-third of all aggregate used in construction in London and southern England comes from beneath the United Kingdom’s offshore waters. Japan relies on sea sand even more heavily, pulling up around 40 million cubic meters from the ocean floor each year. That’s enough to fill up the Houston Astrodome thirty-three times.

Hauling all those grains from the seafloor tears up the habitat of bottom-dwelling creatures and organisms. The churned-up sediment clouds the water, suffocating fish and blocking the sunlight that sustains underwater vegetation.

The dredging ships dump grains too small to be useful, creating further waterborne dust plumes that can affect aquatic life far from the original site.

Dredging of ocean sand has also damaged coral reefs in Florida and many other places, and threatens important mangrove forests, sea grass beds, and endangered species such as freshwater dolphins and the Royal Turtle. One round of dredging may not be significant, but the cumulative effect of several can be. Large-scale ocean sand mining is new enough that there hasn’t been a lot of research on it, meaning that no one knows for sure what the long-term environmental impacts will be. We’re sure to find out in the coming years, however, given how fast the practice is expanding.

What is sand?

The average grain of sand is a tad larger than the width of a human hair. Those grains can be made by glaciers grinding up stones, by oceans degrading seashells and corals (many Caribbean beaches are made of decomposed shells), even by volcanic lava chilling and shattering upon contact with air or water.

Nearly 70% of all sand grains on Earth are quartz. These are the ones that matter most to us.

Silicon and oxygen are the most abundant elements in the Earth’s crust, so it’s no surprise that quartz is one of the most common minerals on Earth. It is found abundantly in the granite and other rocks that form the world’s mountains and other geologic features.

Most of the quartz grains we use were formed by erosion. Wind, rain, freeze-thaw cycles, microorganisms, and other forces eat away at mountains and other rock formations, breaking grains off their exposed surfaces. Rain then washes those grains downhill, sweeping them into rivers that carry countless tons of them far and wide. This waterborne sand accumulates in riverbeds, on riverbanks, and on the beaches where the rivers meet the sea. Over the centuries, rivers periodically overflow their banks and shift their courses, leaving behind huge deposits of sand.

Quartz is tremendously hard, which is why quartz grains survive this long, bruising journey intact while other mineral grains disintegrate.

Over millions of years, sands are often buried under newer layers of sediment, uplifted into new mountains, then eroded and transported once again.

Quartz always comes mixed with bits of other materials: iron, feldspar, whatever other minerals prevail in the local geology. (Pure quartz is transparent.)

A certain amount of those other substances needs to be filtered out before the sand can be used to make concrete, glass, or other products.

Sand is deployed on its own to make other construction materials like mortar, plaster, and roofing components.

Marine sands—the naval wing of the army, found on the ocean floor—are of similar composition, making them useful for artificial land building, such as Dubai’s famous palm-tree-shaped man-made islands. These underwater grains can also be used for concrete, but that requires washing the salt off them—an expensive step most contractors would rather avoid.

Silica sands are purer—at least 95% silica. These are the sands you need to make glass. Silica sands are also used to help make molds for metal foundries, add luster to paint, and filter the water in swimming pools, among many other tasks. Some of the unique properties of industrial sands suit them for highly specific jobs. The silica sands of western Wisconsin, for instance, have a particular shape and structure that make them ideal for use in fracking for oil and gas.

Then there are small amounts of extremely high-purity quartz, a tiny, elite group possessed of rare attributes that enable them to perform extraordinary feats. These particles are made into the high-tech equipment essential for manufacturing computer chips. Some are also used to create the sparkling sand traps of exclusive golf courses or to line Persian Gulf horse-racing tracks.

Underwater sands are easier to mine, since there’s no intervening earth, known as overburden, to scrape away. They also come largely cleansed of dust-sized particles. On land, sand is usually quarried from open pits. Sometimes that requires using explosives and crushing machines to break apart sandstone.

Harvesting sand

Raw sand needs to be washed and run through a series of screens to sort it by size.

In the United States, some 4,100 companies and government agencies harvest aggregate from about 6,300 locations in all fifty states.

The harm done by sand mining

Colossal amounts of more ordinary construction sand are dredged up from riverbeds or dug from nearby floodplains. In central California, floodplain sand mining has diverted river waters into dead-end detours and deep pits that have proven fatal traps for salmon.

Dredging sand from riverbeds, as from seabeds, can destroy habitat and muddy waters to a lethal degree for anything living in the water. Kenyan officials shut down all river sand mines in one western province in 2013 because of the environmental damage they were causing. In Sri Lanka, sand extraction has left some riverbeds so deeply lowered that seawater intrudes into them, damaging drinking water supplies.

India’s Supreme Court warned in 2011 that “the alarming rate of unrestricted sand mining” was disrupting riparian ecosystems all over the country, with fatal consequences for fish and other aquatic organisms and “disaster” for many bird species.

In Vietnam, researchers with the World Wildlife Fund believe sand mining on the Mekong River is a key reason the 15,000-square-mile Mekong Delta—home to 20 million people and source of half of all the country’s food and much of the rice that feeds the rest of Southeast Asia—is gradually disappearing. The ocean is overtaking the equivalent of one and a half football fields of this crucial region’s land every day. Already, thousands of acres of rice farms have been lost.

For centuries, the delta has been replenished by sediment carried down from the mountains of Central Asia by the Mekong River. But in recent years, in each of the several countries along its course, miners have begun pulling huge quantities of sand from the riverbed to use for the construction of Southeast Asia’s surging cities. Nearly 50 million tons of sand are being extracted annually. “The sediment flow has been halved,” says Marc Goichot, a researcher with the World Wildlife Fund’s Greater Mekong Programme. That means that while natural erosion of the delta continues, its natural replenishment does not. At this rate, nearly half the Mekong delta will be wiped out by the end of this century.

Sand extraction from rivers has also caused untold millions of dollars’ worth of damage to infrastructure around the world. The stirred-up sediment clogs water supply equipment, and all the earth removed from riverbanks leaves the foundations of bridges exposed and unsupported. A 1998 study of aggregate mining in the San Benito River on California’s central coast put the resulting infrastructure damage at some $11 million—costs borne by taxpayers. In many countries, sand miners have dug up so much ground that they have dangerously exposed the foundations of bridges and hillside buildings, putting them at risk of collapse.

Fisherfolk from Cambodia to Sierra Leone are losing their livelihoods as sand mining decimates the populations of fish and other aquatic creatures they rely on. In some places, mining has made riverbanks collapse, taking out agricultural land and causing floods that have displaced whole families. In Vietnam in 2017 alone, so much soil slid into heavily mined rivers, taking with it the crops and homes of hundreds of families, that the government shut down sand extraction completely in two provinces.

And in Houston, Texas, government officials say that sand mining in the nearby San Jacinto River—much of it illegal—seriously exacerbated flooding damage during 2017’s Hurricane Harvey.  It seems that sand miners stripped away so much vegetation along the river banks that huge amounts of silt were left exposed, and were then washed into the river by Harvey’s rains. That silt then piled up in riparian bottlenecks and at the bottom of Lake Houston, the city’s principal source of drinking water, causing them to overflow into nearby neighborhoods.

River-bottom sand also plays an important role in local water supplies. It acts like a sponge, catching the water as it flows past and percolating it down into underground aquifers. But when that sand has been stripped away, instead of being drawn underground, the water just keeps on moving to the sea, leaving aquifers to shrink. As a result, there are parts of Italy and southern India where river sand mining has drastically depleted local drinking water supplies. Elsewhere, the lack of water is killing crops.

In 2015, New York state authorities slapped a $700,000 fine on a Long Island contractor who had illegally gouged thousands of tons of sand from a 4.5-acre patch of land near the town of Holtsville and then refilled the pit with toxic waste.

In Morocco, fully half the sand used for construction is estimated to be mined illegally; whole stretches of beach in that country are disappearing.

India is a vast country of more than 1 billion people. It hides hundreds, most likely thousands, of illegal sand mining operations. Corruption and violence will stymie many of even the best-intentioned attempts to crack down on them. And it’s not just India.

There is large-scale illegal sand extraction going on in dozens of countries. One way or another, sand is mined in almost every country on Earth. India is only the most extreme manifestation of a slow-building crisis that affects the whole world.

Concrete is the skeleton of the modern world, the scaffold on which so much else is built. It gives us the power to dam enormous rivers, erect buildings of Olympian height, and travel to all but the remotest corners of the world with an ease that would astonish our ancestors. Measured by the number of lives it touches, concrete is easily the most important man-made material ever invented.

Cement is not the same thing as concrete. Cement is an ingredient of concrete. It’s the glue that binds the gravel and sand together. Cements (there are many forms) are typically made by crushing up clay, lime, and other minerals, firing them in a kiln at temperatures up to 2,700 degrees, then milling the result into a silky-fine gray powder. Mix that powder with water and you get a paste. The paste doesn’t simply dry, like mud; it “cures,” meaning the powder’s molecules bond together via a process called hydration, its chemical components gripping each other ever tighter, making the resulting substance extremely strong. Reinforced with a platoon of sand, that paste thickens into mortar, the stuff used to hold bricks together.

Concrete is made by adding “aggregate”—sand and gravel—to the mix of cement and water. Typical concrete is about 75% aggregate, 15% water, and 10% cement.
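
As a minimal sketch of that split (the proportions are the book’s; the helper function is mine, for illustration):

```python
# Split a concrete mass into rough ingredient masses using the book's
# typical proportions: ~75% aggregate, 15% water, 10% cement by mass.
def concrete_components(total_tons: float) -> dict:
    return {
        "aggregate (sand + gravel)": total_tons * 0.75,
        "water": total_tons * 0.15,
        "cement": total_tons * 0.10,
    }

# Example: the one million tons of concrete anchoring the Golden Gate
# Bridge (mentioned below) imply roughly 750,000 tons of aggregate.
print(concrete_components(1_000_000))
```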

Roman engineers developed sophisticated techniques to improve on basic concrete. Concrete shrinks as it hardens, which can cause it to crack. Water seeping into the cracks expands when it freezes, widening those cracks and further weakening the concrete. Adding horsehair helped with shrinkage, the Romans found, and putting a bit of blood or animal fat in the mix helped the concrete withstand the effects of freezing water.

Today, there are hundreds of formulas for making cement tailored to specific weather conditions, project types, and other variables.

95% of the roughly 83 million tons of cement manufactured in America is Portland cement.

On its own, concrete is basically artificial stone. Reinforced with iron or steel, though, it becomes a building material unlike anything found in nature, one that combines the strengths of both metal and stone. That’s what makes it so useful for so many purposes.

By 1906 there were very few reinforced concrete buildings in California. That was largely thanks to bitter opposition from powerful building trade unions, especially on Ransome’s home turf of San Francisco. Bricklayers, stonemasons, and others, correctly seeing in concrete a mortal threat to their professions, denounced it as unproven and unsafe. Just a few months before the quake, a group of bricklayers and steelworkers in Los Angeles tried to convince the city council to forbid the construction of any more concrete buildings within municipal limits. The tradesmen also made a case against concrete on the grounds that it was plain ugly.

Concrete made possible the Panama Canal, begun in 1903, which reshaped an entire nation’s landscape and the world’s shipping routes. It was used to make bunkers for millions of troops in World War I.

One million tons of it were deployed to anchor San Francisco’s Golden Gate Bridge.

Every mile of the US interstate highway is made with some 15,000 tons of concrete. Throw in the medians, overpasses, ramps, and road base, and all told, an estimated 1.5 billion tons of gravel and sand went into making the national highway system. That’s more than enough concrete to build a sidewalk reaching to the moon and back—twice.  
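
A rough cross-check of the per-mile figure against the system total; the ~47,000-mile length of the Interstate system is my added assumption:

```python
# Sanity-check the book's highway figures.
INTERSTATE_MILES = 47_000  # approximate system length (my assumption)
TONS_PER_MILE = 15_000     # concrete per mile (from the book)

surface_tons = INTERSTATE_MILES * TONS_PER_MILE
print(f"{surface_tons / 1e9:.2f} billion tons in the driving surface alone")
# ~0.7 billion tons; medians, overpasses, ramps, and road base plausibly
# take the sand-and-gravel total to the book's ~1.5 billion tons.
```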

Modern asphalt pavement is often more than 90% sand and gravel.

One advantage asphalt had over wood was that it didn’t soak up urine from the endless parade of horses that were the primary form of transport at the time. And unlike brick or stone, asphalt had no gaps between blocks for manure to get stuck in, a serious health hazard.

These days, asphalt producers like to boast that 93% of all 2.2 million miles of America’s paved roads are surfaced with their product. They don’t mention that it’s often just an overlay on top of a concrete base.

Both asphalt and concrete are basically just gravel and sand stuck together. The difference is the binding agent. In concrete, it’s cement. In asphalt pavement, it’s bitumens.

The basic trade-off is that in general, asphalt is cheaper to lay down and to maintain, and provides a smoother, quieter ride. Concrete, on the other hand, lasts longer and doesn’t need as much repairing in the first place. The choice often comes down to how much money a given government agency has handy.

Both types of pavement began creeping over city streets in the late 1800s, but outside of urban areas at that time, there was almost nothing but dirt to travel on. Roads just weren’t that important. For most of American history, if you wanted to move lots of people or large quantities of goods any significant distance, you did it via water. Rivers, lakes, canals, and seacoasts carried trade and travelers between settlements. Then along came the railroads in the mid-1800s. Trains connected existing centers and made it easier for people to settle further inland.

Roads, such as they were, were for local travel and hauling small loads via horse, wagon, or foot.

By 1912, there were nearly a million cars on American roads—10 percent of them Model T’s. They jostled for space with the new trucks that farmers were investing in to haul their produce, and which businesses were turning to as an alternative to railroads. At the time, there were still 21 million horses hauling people and cargo, but it was clear automobiles were becoming ever more important.

One of the central difficulties in building those first highways was getting the armies of sand to where they were needed. Each mile of paved road required around 2,000 tons of sand and 3,000 tons of gravel. Hauling all that aggregate out to the rural areas where most of the new highways were being built was no small feat; after all, at the time there were hardly any trucks, and no existing roads on which to transport the aggregate from the mines to the new roadbeds. Builders had to rely on horses and wagons, or build special rail lines to bring trains to the roadbeds. Locomotives would haul in carloads of rock, sand, and cement to be mixed on-site.

Roads became a major industry unto themselves. Hundreds of thousands of men worked building them (including chain-ganged prisoners forced to break rocks for roads). More jobs were created in the gas stations, repair shops, restaurants, hotels, and motels that grew up alongside the new highways. Hundreds of other businesses grew fat supplying the raw materials to the road makers—cement, asphalt, gravel, and of course, sand.

Eleven million tons of sand and gravel were needed to build California’s Shasta Dam. Henry J. Kaiser figured supplying them would be simple, since he already owned a sizable aggregate mine near the dam site north of Redding; all he had to do was load it up on trains and pay for the transport. But the local railroad quoted a price Kaiser thought too high. So he came up with an audacious work-around: he built a conveyor belt nearly ten miles long, the longest the world had ever seen, to carry a thousand tons of sand and rock per hour up and down rugged hills and across several creeks to the dam site. Kaiser also parlayed his expertise with aggregate into a prize gig as one of the main contractors building the Hoover Dam.

The road network is also far more resilient than rail lines. Trucks can drive around bomb craters, after all, but trains can’t get past damaged track. Trucks now carry 70% of all US freight, seven times more than trains.

In addition to all the grains embedded in the 11 inches of concrete on the roads’ surface, a further 21 inches of aggregates were needed for the underlying road base.

Consumption of sand and gravel in the US hit a record high of nearly 700 million tons in 1958, a figure almost twice the 1950 total. By then, according to a federal Bureau of Mines report, so much had already been used that “sources of aggregate were limited in some states” and “nearly depleted in other areas.” Entire new types of monster dump trucks, capable of carrying huge loads off-road, were designed to meet the need to move all that aggregate.

Figuring out exactly how to build those roads took some doing. The Bureau of Public Roads set up a testing center near Chicago where researchers experimented with different types and proportions of sand, gravel, cement, and other ingredients to figure out how much of a beating from heavily loaded trucks each paving mixture could stand up to and for how long. They built a series of looping test tracks composed of various asphalt and concrete mixes, and then set a company of soldiers to drive trucks over them—19 hours a day, every day for two years. The bureau used the data to set pavement design standards.

Whatever else you can say about suburbs, their low density and dependence on cars make them an especially sand-intensive form of settlement. Think of all the sand that goes into those wide roads and all those low-slung, spread-out houses, each with its own driveway. Every one of those houses contains hundreds of tons of sand and gravel, from its asphalt driveway to its concrete foundation to its stuccoed walls to the grains on its roof shingles.

The open spaces of suburbia also made possible an explosive proliferation of swimming pools, which require large amounts of sand in the form of concrete.

American sand and gravel production grew in step with the spread of suburbs.

Glass

Glass can be shaped and molded into almost any form, from twenty-ton slabs to strands thinner than a human hair, from delicate crystal to bulletproof shields. It makes fiber-optic cables and beer bottles, microscope lenses and fiberglass kayaks, the skins of skyscrapers and the teeny camera lenses on your cell phone.

Glass is the thing that lets us see everything. Without it, we’d have no photographs, films, or television, “no understanding of the world of bacteria and viruses, no antibiotics and no revolution in molecular biology from the discovery of DNA,” write historians Alan Macfarlane and Gerry Martin.

A more refined breed of grain is required than the common construction sand used for concrete. Glass sand belongs to a category called industrial, or silica, sand. The best silica sands are also relatively uniform in size: grains that are too big won’t melt as easily, and ones that are too small will be blown away by air currents in the furnaces.

Construction sand grains retain their form when made into concrete; they are cemented together with countless legions of their fellow grains and their big brothers, gravel pieces, perpetually working together. The grains that become glass, however, are actually transmuted, losing their individual bodies as they are fused together to form a completely different substance.

Getting them to do that, however, is not easy. It takes temperatures topping 1,600 degrees Celsius to melt silica grains. But mixing sand with additives known as flux, such as soda (aka sodium carbonate), lowers that melting point dramatically. Throw in a little calcium, in the form of powdered limestone or seashell fragments, melt it all together, and when the mixture cools, you have basic glass.
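(For scale, a typical soda-lime recipe, the everyday glass of bottles and windows, runs roughly 70–75% silica, 12–16% soda, and 5–12% lime by weight, though exact formulations vary by maker and product.)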

Glassmaking developed into such a profitable art in Venice that in 1291 the city-state’s rulers ordered all of the city’s glassmakers to move to the island of Murano. There they were treated like aristocrats—but not allowed to leave, lest they take their coveted craft secrets to rival nations.

“The invention of spectacles increased the intellectual life of professional workers by fifteen years or more,” write Macfarlane and Martin. Eyeglasses likely abetted the surge of knowledge in Europe from the fourteenth century on. “Much of the later work of great writers such as Petrarch would not have been completed without spectacles. The active life of skilled craftsmen, often engaged in very detailed close work, was also almost doubled,” Macfarlane and Martin maintain. The ability to read into one’s old age became even more important once the printing press came into widespread use from the middle of the fifteenth century.

To manufacture glass profitably, glassmakers need easy access to high-quality sand, cheap energy to run the furnaces, and a transportation network to get the product to market.

In the single year following the introduction of the bottle-making machine, silica sand production in the United States leapt from 1.1 million tons to 4.4 million tons. Clawing all those grains from the earth inflicted considerable damage on the environment. Starting in 1890, sand miners completely dismantled the Hoosier Slide, a 200-foot-tall Indiana dune near Michigan City that was once a tourist attraction, hauling its grains away in wheelbarrows to sell to glassmakers.

Lake Michigan shoreline dunes, some as high as 300 feet, were also mined out of existence until public outcry forced the state government to protect them in the 1970s and 1980s.

Elsewhere in Indiana, the Gary Evening Post complained in 1913 that “sand sucker” boats were “stealing the bottom” of Lake Michigan to sell to glassmakers. At the time, no permit or payment was required; anyone was free to dredge as much sand as they liked. (Indiana sand also provided fill for the site of the 1893 Chicago World’s Fair and for reclaiming the land on which Chicago’s famous Lincoln Park was built.)

Owens’s machine quickly and completely wiped out jobs for another class of workers: children. The unions suddenly became crusaders for eliminating child labor—partly because their low pay dragged down wages for everyone, at a time when workingmen’s livelihoods were already in jeopardy. But more important, kids simply were no longer needed in the factories. The dangerous, repetitive tasks that had been given to children were now better handled by machines. In 1880, nearly one-quarter of all glass industry workers were children; by 1919, fewer than 2 percent were.

The irony of all this was that Owens himself didn’t see much wrong with child labor. He always insisted his own early career was a fine one for any stouthearted lad. In a 1922 magazine interview, he expounded: “One of the greatest evils of modern life is the growing habit of regarding work as an affliction. When I was a youngster I wanted to work. . . . A great deal of the trouble today is with the mothers. Too many boys are being brought up by sentimental women. The first fifteen or twenty years of their lives are spent in playing. . . . When they finally start to work, they are so useless and so helpless that it is positively pathetic. The young man who has begun to work when he was a boy has them handicapped. . . . The hard work I did as a boy never injured me.” He added: “I went through all the jobs the boys performed, and I enjoyed every bit of the experience.”

Before 1900, beer and whiskey were distributed in kegs to taverns; if you wanted some to take home, you had to supply your own jug. Milk was stored in metal cans delivered by milk wagons; it was served in pitchers. There was no such thing as a baby bottle. Glass is a near-perfect material for packaging food and beverages. It is nonporous and impermeable, and almost nothing reacts with it chemically, which means a bottle will not interact with whatever is inside it. It won’t rust or leach BPAs or impart a plasticky taste; the liquid inside will retain its aroma and flavor for a very long time. So the sudden availability of cheap high-quality bottles was a colossal gift to makers of soft drinks, beer, medicines, and other bottled consumables.

Owens’s mass-manufactured bottles hit the market at the same time that automobiles were taking over the country and paved roads were spreading. Both developments made it easier than ever to distribute products like bottled drinks far and wide. Trucks loaded with products packaged in sand rolled smoothly from shop to shop on roads made of sand.

By 1916, Owens and his partners had a good enough model to launch a new company selling sheet glass. Its impact was as profound as the bottle machine’s, turning windows for houses and cars, as well as glass tableware, from luxury items into everyday basics.

Glass-skinned skyscrapers took over city skylines. Plate glass production worldwide mushroomed twenty-five-fold between 1980 and 2010. Today, more than 11 billion square yards of flat glass are consumed every year—more than enough to glaze over the entire city of Houston six times.

Owens-Illinois employees in the 1930s developed a threadlike form of glass that is flexible, strong, lightweight, waterproof, and heat resistant, which they dubbed Fiberglas. (Yes, with one s. Later, other companies brought their own versions to market and the stuff became known generically as fiberglass.) Others had spun glass into threads before, but the new process allowed for the creation of strands as thin as four microns around and thousands of feet long. As is true of all glass products, it owes its existence to sand. To make fiberglass, silica is melted down along with other substances—boron, calcium oxide, magnesia—to make it more workable and give it other properties desired for specific products, such as greater tensile strength. This molten glass is extruded through a metal sleeve set with tiny holes, and the streams are caught on high-speed winders that spin them into filaments. Once cooled and coated with chemical resin, these strands can be used in all kinds of ways, from pipe insulation to kayaks; fiberglass even insulated the Alaskan oil pipeline. Highly efficient insulation made with fiberglass also helped make possible the movement of millions of people into America’s South and Southwest, areas too unpleasantly hot in summer for most folks to consider without a reliable way to keep the heat out. Sand in the form of fiberglass made it easier for people to move to the sand-strewn deserts of Arizona and Nevada.

(Ceramics, incidentally, are also largely composed of sand; ground silica provides the skeleton to which the clay and other additives are attached.)

Glass has long since lost its premier position as the world’s beverage container material of choice; plastic bottles and metal cans now make up 80 percent of the market.

The industry’s center of gravity today is China, now both the world’s largest producer and consumer of glass, churning out and gobbling up more than half of all the world’s flat glass. It thoroughly dominates glass manufacture.

Computer Chips

Spruce Pine, it turns out, is the source of the purest natural quartz ever found on Earth. This ultra-elite corps of silicon dioxide particles plays a key role in manufacturing the silicon used to make computer chips. In fact, there’s an excellent chance the chip that makes your laptop or cell phone work was made using quartz from this obscure Appalachian backwater. “It’s a billion-dollar industry here,” said Glover with a hooting laugh. “Can’t tell by driving through here. You’d never know it.”

Mica used to be prized for wood- and coal-burning stove windows and for electrical insulation in vacuum tube electronics. It’s now used mostly as a specialty additive in cosmetics and things like caulks, sealants, and drywall joint compound.

Step one is to take high-purity silica sand, the kind used for glass. (Lump quartz is also sometimes used.) That quartz is then blasted in a powerful electric furnace, creating a chemical reaction that separates out much of the oxygen. That leaves you with what is called silicon metal, which is about 99 percent pure silicon. But that’s not nearly good enough for high-tech uses. Silicon for solar panels has to be 99.999999 percent pure—six 9s after the decimal. Computer chips are even more demanding: their silicon needs to be 99.99999999999 percent pure—eleven 9s. Getting from silicon metal to those levels takes further rounds of chemical refining; the ultra-pure result is known as polysilicon.
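Those strings of 9s are easier to grasp as impurity fractions. Here is a quick back-of-envelope conversion (my own illustration, not from the book; the atoms-per-gram figures assume pure silicon and Avogadro’s number, not any particular wafer spec):

```python
# Convert "nines of purity" into impurity fractions and impurity atoms per gram.
# Illustrative arithmetic only; real specs are set per impurity element.

AVOGADRO = 6.022e23       # atoms per mole
SI_MOLAR_MASS = 28.09     # grams of silicon per mole

def impurity_fraction(percent_pure: float) -> float:
    """Turn a purity percentage (e.g. 99.999999) into an impurity fraction."""
    return (100.0 - percent_pure) / 100.0

for label, pct in [("silicon metal", 99.0),
                   ("solar grade", 99.999999),
                   ("chip grade", 99.99999999999)]:
    frac = impurity_fraction(pct)
    atoms_per_gram = frac * AVOGADRO / SI_MOLAR_MASS
    print(f"{label}: impurity fraction ~{frac:.0e}, "
          f"~{atoms_per_gram:.0e} stray atoms per gram")
```

Even at eleven 9s, a gram of chip-grade silicon still holds on the order of a billion stray atoms, which is one reason the crucibles the melt touches matter so much.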

The next step is to melt down the polysilicon. But you can’t just throw this exquisitely refined material in a cook pot. If the molten silicon comes into contact with even the tiniest amount of the wrong substance, it causes a ruinous chemical reaction. You need crucibles made from the one substance that has both the strength to withstand the heat required to melt polysilicon, and a molecular composition that won’t infect it. That substance is pure quartz. This is where Spruce Pine quartz comes in. It’s the world’s primary source of the raw material needed to make the fused-quartz crucibles in which computer-chip-grade polysilicon is melted. A fire in 2008 at one of the main quartz facilities in Spruce Pine for a time all but shut off the supply of high-purity quartz to the world market, sending shivers through the industry.

A 2017 study by the US Geological Survey warned that unless something is done, as much as two-thirds of Southern California’s beaches may be completely eroded by 2100.

Massive coastal development—marinas, jetties, ports—blocks the flow of ocean-borne sand.

River dams also cut off the flow of sand that used to feed beaches.

Southern California’s beaches have lost as much as four-fifths of the sediment that rivers used to bring them, thanks to dams.

Louisiana loses an estimated sixteen square miles of wetlands every year—a crucial natural defense against hurricanes—because levees and canals on the Mississippi have reduced the flow of sediment that used to replenish them. Egypt’s Aswan Dam has done a similar number on the shore of the Nile Delta. China’s colossal Three Gorges Dam project is expected to have an even greater impact.

Sand mining makes the problem worse. Dams combined with upriver sand mining are decimating the supply of replenishing sediment to Vietnam’s Mekong Delta, home to 20 million people and source of half that country’s food supply.

Illegal beach sand mining has been reported all over the world. In Morocco and Algeria, illegal miners have stripped entire beaches for construction sand, leaving behind rocky moonscapes. Thieves in Hungary made off with hundreds of tons of sand from an artificial river beach in 2007. Five miles of beach was stripped down to its clay foundation in Russian-occupied Crimea in 2016. Smugglers in Malaysia, Indonesia, and Cambodia pile beach sand onto small barges in the night and sell them in Singapore. Beaches have been torn up by sand miners in India and elsewhere.

Government officials in Puerto Rico have had to restrict beach sand mining because so many grains were being taken to build tourist hotels that the very beaches those tourists came for were disappearing.

Add rising seas to shrinking beaches and you have a serious problem worldwide.

Beach nourishment, also known as beach replenishment, has become a major industry. More than $7 billion has been spent in the United States in recent decades on artificially rebuilding hundreds of miles of beach nationwide. Almost all of the costs are covered by taxpayers, and much of the work is overseen by the federal US Army Corps of Engineers. Florida accounted for about a quarter of the total.

Eastman Aggregate would dump a million tons of new sand on Broward’s beaches over the course of several months. The grains are mined from an inland quarry a couple of hours’ drive away. Trucks haul that sand down the highway, squeeze their way in between the villas and hotels, and dump it on the shore. Excavators load the freshly delivered sand into hulking yellow dump trucks, which ferry it to the edge of the renourishment zone. Small bulldozers then push the grains into place, extending an evenly proportioned beach out into the surf.

Hauling and placing sand with trucks is both considerably slower and far more expensive than the more common method, which is to dredge sand from the sea bottom and blast it onto the shore through floating pipes. The problem is that over the four decades since beach nourishment began in earnest, Broward County has used up all the sea sand it is legally and technically able to lay its hands on. Nearly 12 million cubic yards of underwater grains have been stripped off the ocean bottom and thrown onto Broward’s shores. There are still some pockets of sand on the seabed, but dredging them is forbidden because it could damage the coral reefs they sit next to.

The same goes for Miami-Dade County to the south.

There is lots of sand left off the coasts of three other Florida counties farther north. They haven’t worked their beaches quite as hard as the tourist meccas to the south, and the continental shelf up there extends further out before dropping into the deep ocean, giving them a larger area to dredge from. Miami-Dade has asked for help, but the northern counties have so far refused to share. They don’t want to find themselves in Miami’s position thirty years from now.

Even Olympic beach volleyball players get imported grains. To make sure their bare feet come into contact only with sand of just the right size and shape, it was brought in from Hainan Island for the 2008 Beijing Games, and from a quarry in Belgium for the 2004 Athens Games.

This particular beach is only expected to last about six years before it needs more upkeep.

In Broward County, they make no bones about it. “Beaches are a form of infrastructure,” said Sharp. “You pave your potholes, we pave our beaches with sand.”

For most of human history, beaches weren’t places to relax, but to work. The sandy shores were where fishermen launched their boats and cleaned their catch, where small traders unloaded their cargo. Coastal people built their homes a safe distance from the unpredictable weather and waves of the shoreline, often facing away from the sea for added protection. “When Europeans and Americans first settled the coasts, they largely ignored, indeed avoided, what are today’s most coveted stretches of shore,” writes historian John R. Gillis in The Human Shore, an account of our changing relationship with our coasts. “The beach was used for landing but not for settlement. Its featureless barrenness was not only inhospitable but repulsive.”

“1820s-era England is responsible for a turning point in the history of seaside resorts, as this was when the first major bathing establishments were constructed for the specific purpose of bathing, relaxation, and play,” writes University of Florida scholar Tatyana Ressetar in her master’s thesis.

The popularity of beaches grew through the late 1800s among the burgeoning middle class, with their newfound leisure time, and as railroads made the shores accessible to lower-class city dwellers who previously had no way to reach them.

The rich began building private seaside mansions, and the middle class copied them on a smaller scale, until by the 1930s there were seaside towns all over Europe and North America. The rise of the automobile and post–World War II prosperity brought unprecedented numbers to the beach, more and more of whom chose to retire there as time went on.

A century ago, Hawaii’s Waikiki Beach was a narrow ribbon of sand fringed by marsh; it was beefed up to its current expansive size with grains barged in from other Hawaiian islands, and at one point in the 1930s with sand shipped from California. Today it still requires regular renourishing.

Many of Spain’s Canary Island beaches were just rocky coastlines until developers dumped tons of sand imported from the Caribbean and Morocco on them.

The glamorization of the sandy beach gave rise to cities like Miami Beach and Fort Lauderdale. Roads built of sand made it possible for people to drive to them. Concrete made it possible to build whole cities in the middle of nowhere to house them all. Later, concrete built the vast theme parks—Walt Disney World, Universal Studios—which attracted even more people. Sand abetting sand abetting sand.

Washington subsidizes local governments and homeowners who build in imperiled coastal areas to the tune of billions of dollars in the form of insurance guarantees, disaster bailouts, and other protections. Taxpayer-funded beach nourishment also has the perverse effect of shoring up property values, a recent study found.

NOTE: to read further, be sure to buy the book; I left a lot out of the above.


Far out power #1: human fat, playgrounds, solar wind towers, perpetual motion, thermal depolymerization

Preface. Plans for hydrogen, wind, solar, wave and all the other re-buildable contraptions, which use fossil fuels in every single step of their short 15-25 year life cycles and hence are non-renewable, are just as silly as the ideas below. Yet these schemes, with their negative energy return and their inability to make themselves without fossil fuels, are written about in respectable scientific journals, unlike the proposals that follow.

I’ve been writing about this since 2001; now Michael Moore has produced a film, “Planet of the Humans”, that explains it as well.

Alice Friedemann www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Liposuction fat

Auckland, New Zealand, adventurer Peter Bethune plans to break the round-the-world powerboat speed record in a boat powered by biodiesel fuel partly manufactured from human fat. The lean Mr. Bethune had about three ounces of fat extracted from his body in a liposuction procedure, and he is seeking volunteers to donate more (Schouten 2005).

Mr. Bethune thinks the use of human fat as an energy source has some potential. “There’s an interesting business model: link a biodiesel plant with the cosmetic surgeons,” he says. “In Auckland we produce about 330 pounds of fat per week from liposuction, which would make about 40 gallons of fuel. If it is going to be chucked out, why not?” (Schouten 2005)

In a related stunt, the Yes Men pulled a prank at an Exxon conference, presenting a fake new fuel, Vivoleum, supposedly made from humans killed by climate change, and handing out hundreds of candles made of human hair that smelled like dead people (Yes Men 2015).

Playground power

The only place I could find this actually existing is in Ghana, Africa, where Empower Playgrounds provides merry-go-rounds to schools that generate and store electricity as they are spun around (Brownlee 2013).

Perpetual motion

Perpetual motion violates the laws of physics and thermodynamics; even the patent office got wise and won’t accept applications (Wikipedia, Park 2000).

Thermal depolymerization

Garbage and landfill waste can be turned into biogas. But as energy declines there will be less and less garbage, not only because there won’t be fuel to haul it to landfills, but because people will be burning anything they can get their hands on to cook and heat with.

Solar Wind Towers (Slav 2019)

More than 30 years ago a giant tower was built in Manzanares, Spain, to produce electricity in a way that at the time must have seemed even more eccentric than it does now: by harnessing the power of air movement. The Manzanares tower was, sadly, toppled by a storm, and in the decades since, several other firms have tried to replicate the idea, but none has succeeded. Why?

The idea behind the so-called solar wind towers is pretty straightforward. The more popular version is the solar updraft tower, which works as follows:

On the ground, around the hollow tower, there is a solar energy collector—a transparent surface suspended a little above ground—which heats the air underneath.

Since hot air is lighter than cold air, the heated air is drawn into the hollow tower, also called a solar chimney, and rises up it to escape through the top. In the process, it spins a number of wind turbines located around the base of the tower. The main benefit over other renewable technologies? Doing away with the intermittency of PV solar, since the air beneath the collector can stay warm even when the sun is not shining.
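The physics also shows why these towers have to be enormous. A back-of-envelope sketch (my own framing, not from the article): the chimney’s ideal conversion efficiency is roughly g·H/(cp·T), well under 1% even for a 200-meter tower, so useful output demands huge collectors and extreme heights. The Manzanares figures below (about a 195 m tower and 46,000 m² collector, with guessed collector and turbine efficiencies) are approximations:

```python
# Rough output estimate for a solar updraft tower, using the standard
# ideal-chimney efficiency eta = g*H / (cp * T_ambient). All inputs approximate.

G = 9.81      # gravity, m/s^2
CP = 1005.0   # specific heat of air, J/(kg*K)

def updraft_power_watts(tower_height_m, collector_area_m2,
                        insolation_w_m2=1000.0, t_ambient_k=293.0,
                        collector_eff=0.3, turbine_eff=0.8):
    """Crude electrical output estimate for a solar updraft tower."""
    eta_chimney = G * tower_height_m / (CP * t_ambient_k)  # ~0.65% at 195 m
    return (insolation_w_m2 * collector_area_m2
            * collector_eff * eta_chimney * turbine_eff)

# Manzanares-like prototype: ~195 m tower, ~46,000 m^2 collector
print(f"~{updraft_power_watts(195, 46_000) / 1e3:.0f} kW")
# -> on the order of the ~50 kW the real prototype peaked at
```

Height enters linearly, so the only routes to utility-scale output are a vastly larger collector or a tower approaching a kilometer tall, which is exactly the cost and engineering problem described next.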

But building one is simply too expensive, and investors are wary of the engineering problems that come with the extreme height required (the taller, the better).

References

Brownlee, J. 2013. A Merry-Go-Round That Turns The Power Of Play Into Electricity. fastcompany.

Park, R. 2000. Perpetual Motion: still going around. Washington Post.

Schouten, H. 2005. Earthrace biofuel promoter to power boat using human fat. calorielab.com

Slav, I. 2019. The fatal flaw in a perfect energy solution. oilprice.com


How a pandemic or bioweapon could take civilization down

Preface. I just listened to a 3.5-hour podcast on pandemics and bioweapons with the best up-to-date coverage I know of, and it’s more interesting to listen to than reading a book or article. Just one of many scary problems: synthetic biology and CRISPR tools are on their way to being accessible to the public within 20 years or less (Cross 2018, Sharma et al 2020). That would make it possible for just one person to assemble a virus like the bird flu (H5N1) and let it loose.

2021-4-3 Engineering the Apocalypse by Rob Reid & Sam Harris

Rob Reid’s podcast has suggestions for what we could do, such as creating universal flu and coronavirus vaccines, as well as vaccines for other viruses we know of. There are many ways to monitor the rise of a pandemic, through testing, air sampling and more.

I would guess there are many possible motivations: perhaps someone suicidal or crazy, like the mass shooters. Or a nation; North Korea comes to mind, though a nation at war that has developed both bioweapons and a vaccine against its engineered virus might inoculate its own population before unleashing the virus on the world. A deep ecologist protecting biodiversity and climate. Or a billionaire with a New Zealand bunker who wants to carry on with his non-negotiable way of life by killing billions to delay limits to growth and the end of fossil fuel production.

I think it is more likely that civilization will fail from energy shortages, now that peak oil is upon us, ending the precision machine tools, supply chains, and technology that could create a bioweapon or a vaccine.

And who knows what Russia has and might use? In 1973 the Soviet Union decided it would be much cheaper to develop bioweapons than nuclear missiles, and their Biopreparat program successfully weaponized smallpox, bubonic plague, anthrax, Venezuelan equine encephalitis, tularemia, influenza, brucellosis, Marburg virus, Machupo virus, Veepox (a hybrid of Venezuelan equine encephalitis and smallpox), and Ebolapox (a hybrid of Ebola and smallpox).

Below is an article about whether a pandemic could bring civilization down. The main way this would happen is if the death rate is so high that essential workers would stay home.

Alice Friedemann   www.energyskeptic.com  author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, 2021, Springer; “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

MacKenzie, D. April 5, 2008. Will a pandemic bring down civilization? NewScientist.

For years we have been warned that a pandemic is coming. It could be flu, it could be something else. We know that lots of people will die. As terrible as this will be, on an ever more crowded planet, you can’t help wondering whether the survivors might be better off in some ways. Wouldn’t it be easier to rebuild modern society into something more sustainable if, perish the thought, there were fewer of us?

Yet would life ever return to something resembling normal after a devastating pandemic? Virologists sometimes talk about their nightmare scenarios – a plague like ebola or smallpox – as “civilization ending”. Surely they are exaggerating. Aren’t they?

Many people dismiss any talk of collapse as akin to the street-corner prophet warning that the end is nigh. In the past couple of centuries, humanity has innovated its way past so many predicted plagues, famines and wars – from Malthus to Dr Strangelove – that anyone who takes such ideas seriously tends to be labeled a doom-monger.

There is a widespread belief that our society has achieved a scale, complexity and level of innovation that make it immune from collapse. “It’s an argument so ingrained both in our subconscious and in public discourse that it has assumed the status of objective reality,” writes biologist and geographer Jared Diamond of the University of California, Los Angeles, author of the 2005 book Collapse. “We think we are different.”

Ever more vulnerable

A growing number of researchers, however, are coming to the conclusion that far from becoming ever more resilient, our society is becoming ever more vulnerable. In a severe pandemic, the disease might only be the start of our problems.

No scientific study has looked at whether a pandemic with a high mortality rate could cause social collapse – at least none that has been made public. The vast majority of plans for weathering a pandemic fail even to acknowledge that crucial systems might collapse, let alone take that possibility into account.

There have been many pandemics before, of course. In 1348, the Black Death killed about a third of Europe’s population. Its impact was huge, but European civilization did not collapse. After the Roman empire was hit by a plague with a similar death rate around AD 170, however, the empire tipped into a downward spiral towards collapse. Why the difference? In a word: complexity.

In the 14th century, Europe was a feudal hierarchy in which more than 80% of the population were peasant farmers. Each death removed a food producer, but also a consumer, so there was little net effect. “In a hierarchy, no one is so vital that they can’t be easily replaced,” says Yaneer Bar-Yam, head of the New England Complex Systems Institute in Cambridge, Massachusetts. “Monarchs died, but life went on.”

Individuals matter

The Roman empire was also a hierarchy, but with a difference: it had a huge urban population – not equaled in Europe until modern times – which depended on peasants for grain, taxes and soldiers. “Population decline affected agriculture, which affected the empire’s ability to pay for the military, which made the empire less able to keep invaders out,” says anthropologist and historian Joseph Tainter at Utah State University in Logan. “Invaders in turn further weakened peasants and agriculture.”

A high-mortality pandemic could trigger a similar result now, Tainter says. “Fewer consumers mean the economy would contract, meaning fewer jobs, meaning even fewer consumers. Loss of personnel in key industries would hurt too.”

Bar-Yam thinks the loss of key people would be crucial. “Losing pieces indiscriminately from a highly complex system is very dangerous,” he says. “One of the most profound results of complex systems research is that when systems are highly complex, individuals matter.”

The same conclusion has emerged from a completely different source: tabletop “simulations” in which political and economic leaders work through what would happen as a hypothetical flu pandemic plays out. “One of the big ‘Aha!’ moments is always when company leaders realize how much they need key people,” says Paula Scalingi, who runs pandemic simulations for the Pacific Northwest economic region of the US. “People are the critical infrastructure.”

Vital hubs

Especially vital are “hubs” – the people whose actions link all the rest. Take truck drivers. When a strike blocked petrol deliveries from the UK’s oil refineries for 10 days in 2000, nearly a third of motorists ran out of fuel, some train and bus services were cancelled, shops began to run out of food, hospitals were reduced to running minimal services, hazardous waste piled up, and bodies went unburied. Afterwards, a study by Alan McKinnon of Heriot-Watt University in Edinburgh, UK, predicted huge economic losses and a rapid deterioration in living conditions if all road haulage in the UK shut down for just a week.

What would happen in a pandemic when many truckers are sick, dead or too scared to work? Even if a pandemic is relatively mild, many might have to stay home to care for sick family or look after children whose schools are closed. Even a small impact on road haulage would quickly have severe knock-on effects.

One reason is just-in-time delivery. Over the past few decades, people who use or sell commodities from coal to aspirin have stopped keeping large stocks, because to do so is expensive. They rely instead on frequent small deliveries.

Cities typically have only three days’ worth of food, and the old saying about civilizations being just three or four meals away from anarchy is taken seriously by security agencies such as MI5 in the UK.

How long would your stocks last if shops emptied and your water supply dried up? Even if everyone were willing, US officials warn that many people might not be able to afford to stockpile enough food.

Two-day supply

Hospitals rely on daily deliveries of drugs, blood and gases. “Hospital pandemic plans fixate on having enough ventilators,” says public health specialist Michael Osterholm at the University of Minnesota in Minneapolis, who has been calling for broader preparation for a pandemic. “But they’ll run out of oxygen to put through them first. No hospital has more than a two-day supply.” Equally critical is chlorine for water purification plants.

It’s not only absentee truck drivers that could cripple the transport system; new drivers can be drafted in and trained fairly quickly, after all. Trucks need fuel, too. What if staff at the refineries that produce it don’t show up for work?

Some models suggest absenteeism sparked by a 1918-type pandemic could cut the workforce by half at the peak of a pandemic wave.

Critical infrastructure

All the companies that provide the critical infrastructure of modern society – energy, transport, food, water, telecoms – face similar problems if key workers fail to turn up. According to US industry sources, one electricity supplier in Texas is teaching its employees “virus avoidance techniques” in the hope that they will then “experience a lower rate of flu onset and mortality” than the general population.

The fact is that the best way for people to avoid the virus will be to stay home. But if everyone does this – or if too many people try to stockpile supplies after a crisis begins – the impact of even a relatively minor pandemic could quickly multiply.

Planners for pandemics tend to overlook the fact that modern societies are becoming ever more tightly connected, which means any disturbance can cascade rapidly through many sectors. For instance, many businesses have contingency plans that count on some people working online from home. Models show there won’t be enough bandwidth to meet demand.

And what if the power goes off? This is where complex inter-dependencies could prove disastrous. Refineries make diesel fuel not only for trucks but also for the trains that deliver coal to electricity generators, which now usually have only 20 days’ reserve supply, Osterholm notes. Coal-fired plants supply 30% of the UK’s electricity, 50% of the US’s and 85% of Australia’s.

Powerless

The coal mines need electricity to keep working. Pumping oil through pipelines and water through mains also requires electricity. Making electricity depends largely on coal; getting coal depends on electricity; they all need refineries and key people; the people need transport, food and clean water. If one part of the system starts to fail, the whole lot could go. Hydro and nuclear power are less vulnerable to disruptions in supply, but they still depend on highly trained staff.

With no electricity, shops will be unable to keep food refrigerated even if they get deliveries. Their tills won’t work either. Many consumers won’t be able to cook what food they do have. With no chlorine, water-borne diseases could strike just as it becomes hard to boil water. Communications could start to break down as radio and TV broadcasters, phone systems and the internet fall victim to power cuts and absent staff. This could cripple the global financial system, right down to local cash machines, and will greatly complicate attempts to maintain order and get systems up and running again.

Even if we manage to struggle through the first few weeks of a pandemic, long-term problems could build up without essential maintenance and supplies. Many of these problems could take years to work their way through the system. For instance, with no fuel and markets in disarray, how do farmers get the next harvest in and distributed?

Closing borders

As a plague takes hold, some countries may be tempted to close their borders. But quarantine is not an option any more. “These days, no country is self-sufficient for everything,” says Lay. “The worst mistake governments could make is to isolate themselves.” The port of Singapore, a crucial shipping hub, plans to close in a pandemic only as a last resort, he says. Yet action like this might not be enough to prevent international trade being paralysed as other ports close for fear of contagion or for lack of workers, as ships’ crews sicken and exporters’ assembly lines grind to a halt without their own staff, power, transport or fuel and supplies.

Osterholm warns that most medical equipment and 85% of US pharmaceuticals are made abroad, and this is just the start. Consider food packaging. Milk might be delivered to dairies if the cows get milked and there is fuel for the trucks and power for refrigeration, but it will be of little use if milk carton factories have ground to a halt or the cartons are an ocean away.

“No one in pandemic planning thinks enough about supply chains,” says Osterholm. “They are long and thin, and they can break.” When Toronto was hit by SARS in 2003, the major surgical mask manufacturers sent everything they had, he says. “If it had gone on much longer they would have run out.”

The trend is for supply chains to get ever longer, to take advantage of economies of scale and the availability of cheap labour. Big factories produce goods more cheaply than small ones, and they can do so even more cheaply in countries where labor is cheap.

Flawed assumptions

Disaster planners usually focus on single-point events: industrial accidents, hurricanes or even a nuclear attack. But a pandemic happens everywhere at the same time, rendering many such plans useless.

Another flawed assumption is how serious a pandemic could be. Many national plans are based on mortality rates from the mild 1957 and 1968 pandemics. “No government pandemic plans consider the possibility that the death rate might be higher than in 1918,” says Tim Sly of Ryerson University in Toronto, Canada.

Death rate

The 1918 scenario assumes around 3% of those who fall ill will die. Yet of all the people known to have caught H5N1 bird flu so far, 63% have died. “It seems negligent to assume that H5N1, if it goes pandemic, will necessarily become less deadly,” says Sly. And flu is far from the only viral threat we face.

The ultimate question is this: what if a pandemic does have huge knock-on effects? What if many key people die, and many global balancing acts are disrupted? Could we get things up and running again? “Much would depend on the extent of the population decline,” says Tainter. “Possibilities range from little effect to a mild recession to a major depression to a collapse.”

References

Cross R (2018) Synthetic biology could enable bioweapons development. A new National Academies report names and classifies the kinds of biological weapons that could emerge from techniques like CRISPR gene editing and DNA synthesis. Chemical & Engineering News 96.

Sharma A et al (2020) Next generation agents (synthetic agents): Emerging threats and challenges in detection, protection, and decontamination. Handbook on Biological Warfare Preparedness.


Fall of Indus valley & Akkadian civilizations from climate change

Preface. Any civilization or region that survives energy decline must then survive climate change for many centuries. As for the kind of shifting wind systems that collapsed the Akkadian empire, similar disruption is already happening:

“Greenhouse gases are increasingly disrupting the jet stream, a powerful river of winds that steers weather systems in the Northern Hemisphere. That’s causing more frequent summer droughts, floods and wildfires, a new study says. The findings suggest that summers like 2018, when the jet stream drove extreme weather on an unprecedented scale across the Northern Hemisphere, will be 50% more frequent by the end of the century if emissions of carbon dioxide and other climate pollutants from industry, agriculture and the burning of fossil fuels continue at a high rate” (Berwyn 2018).

Alice Friedemann www.energyskeptic.com  author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, April 2021, Springer, “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Malik N (2020) Uncovering transitions in paleoclimate time series and the climate driven demise of an ancient civilization. Chaos: An Interdisciplinary Journal of Nonlinear Science.

There are several theories about why the Indus Valley Civilization declined—including invasion by nomadic Indo-Aryans and earthquakes—but climate change appears to be the most likely scenario. Shifting monsoon patterns led to the demise of the Indus Valley Civilization, a Bronze Age civilization contemporary to Mesopotamia and ancient Egypt.

Bressan, D (2019) Climate Change Caused the World’s First Empire To Collapse. Forbes

The Akkadian Empire was the first ancient empire of Mesopotamia, centered around the lost city of Akkad. It is sometimes regarded as the first empire in history, as it developed a central government and an elaborate bureaucracy to rule over a vast area comprising modern Iraq, Syria, parts of Iran and central Turkey. Established around 4,600 years ago, it abruptly collapsed two centuries later as settlements were suddenly abandoned. New research published in the journal Geology argues that shifting wind systems contributed to the demise of the empire.

The region of the Middle East is characterized by strong northwesterly winds known locally as shamals. This weather effect occurs one or more times a year. The resulting wind typically creates large sandstorms that impact the climate of the area. To reconstruct the temperature and rainfall patterns of the area around the ancient metropolis of Tell-Leilan, the researchers sampled 4,600- to 3,000-year-old fossil Porites corals, deposited by an ancient tsunami on the northeastern coast of Oman.

The genus Porites builds a stony skeleton from the mineral aragonite (CaCO3). By studying the chemical and isotopic signatures of the carbon and oxygen that the living coral incorporated, researchers can reconstruct sea-surface temperatures, and from them the precipitation and evaporation balance of a region located near the sea.
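(How that inference works, hedged because the article doesn’t give the study’s calibration: coral skeletal oxygen-isotope ratios vary roughly linearly with water temperature, δ18O_coral ≈ a + b·SST + δ18O_seawater, with a slope b near −0.2 ‰ per °C in common calibrations. Once the temperature signal is accounted for, the leftover δ18O variation tracks the isotopic makeup of the seawater itself, which is set by evaporation and rainfall; in effect, the coral doubles as a rain gauge.)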

The fossil evidence shows a prolonged winter shamal season accompanied by frequent shamal days lasting from 4,500 to 4,100 years ago, coinciding with the collapse of the Akkadian empire 4,400 years ago. The impact of the dust storms and lack of rainfall would have caused major agricultural problems, possibly leading to famine and social instability. Weakened from the inside, the Akkadian Empire became an easy target for the many opportunistic tribes living nearby. Hostile invasions, helped by the shifting climate, finally brought an end to the first empire in history.

The collapse of the Akkadian Empire also coincides with the proposed onset of the Meghalayan Age, an age marked by mega-droughts on a global scale that crushed a number of civilizations worldwide.

References

Berwyn, B. 2018. Global Warming Is Messing with the Jet Stream. That Means More Extreme Weather. A new study links the buildup of greenhouse gas emissions to more frequent heat waves, floods and droughts in the Northern Hemisphere. insideclimatenews.org


Nuclear Power problems

Preface. There are half a dozen articles below. Although safety and the disposal of nuclear waste ought to be reason enough not to build more plants, what actually stops them today is high cost: it can take a decade to get a permit, and construction can cost $8.5–$20 billion (O’Grady 2008), up to 8 times more than an equivalent $2.5 billion natural gas power plant that can be built in just a few years. No banker in their right mind is going to lend the money, especially with so many delays and cost overruns, as you’ll see in the articles below.

The Nuclear Regulatory Commission asked the operators of 60 US nuclear power plant sites to model their current flood risk, including the likely effects of climate change on their mostly 50+ year old plants. Ninety percent of these reactor sites need to be modified, with 54 having at least one flood risk exceeding their design, 53 not built to withstand the current risk from intense precipitation, 25 in jeopardy based on current flood projections for streams and rivers, and 19 that were not designed for their maximum storm surge (Flavelle and Lin 2019).

In addition, existing U.S. nuclear power plants are old and in decline. By 2030, nuclear power might supply just 10% of U.S. electricity, half of the 20% it supplies now, because 38 reactors producing a third of U.S. nuclear power are past their 40-year life span, and another 33 reactors producing another third are over 30 years old. Although some will have their licenses extended, the reactors that produce half of U.S. nuclear power are at risk of closing because of economics, breakdowns, unreliability, long outages, safety, and expensive post-Fukushima retrofits. Cooper predicted 37 were at risk, and since 2013 many of them have closed, as well as reactors he didn’t predict (Cooper 2013).

If CO2 reduction is the goal, nuclear power produces more carbon emissions than renewables (Sovacool et al 2020).

And if we are dumb enough to try to build more, we’ll smack into the brick wall of Peak Uranium.

And as my books “Life After Fossil Fuels: A Reality Check on Alternative Energy” and “When Trucks Stop Running: Energy and the Future of Transportation” explain, if transportation and manufacturing can’t be electrified, and the electric grid will fail once natural gas is so scarce it’s mainly devoted to making fertilizer (which keeps 4 billion of us alive), why would you waste energy, time, and money building futile nuclear power plants?

Alice Friedemann  www.energyskeptic.com  Author of Life After Fossil Fuels: A Reality Check on Alternative Energy; When Trucks Stop Running: Energy and the Future of Transportation”, Barriers to Making Algal Biofuels, & “Crunch! Whole Grain Artisan Chips and Crackers”.  Women in ecology  Podcasts: WGBH, Jore, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 &278, Peak Prosperity,  Index of best energyskeptic posts

***

2023 Brumfiel G et al. Russia is draining a massive Ukrainian reservoir, endangering a nuclear plant. NPR.

https://www.npr.org/2023/02/10/1155761686/russia-is-draining-a-massive-ukrainian-reservoir-endangering-a-nuclear-plant

Russia appears to be draining an enormous reservoir in Ukraine, imperiling drinking water, agricultural production, and safety at Europe’s largest nuclear plant; the water level at the reservoir has plummeted to its lowest point in three decades. At stake are drinking water for hundreds of thousands of residents, irrigation for nearly half a million acres of farmland, and the cooling system at the Zaporizhzhia Nuclear Power Plant. The reservoir’s current level is approximately 14 meters; if it falls below 13.2 meters, the plant’s cooling system, which relies on water from the reservoir, would be in peril.

The Kakhovka Reservoir is a massive, man-made lake roughly the size of the Great Salt Lake in Utah. It is the final body of water in a network of reservoirs along Ukraine’s Dnipro River. Since the 1950s, it has been used to provide drinking and irrigation water to parts of Ukraine’s southern districts of Kherson and Zaporizhzhia. A lengthy canal leading from the reservoir also supplies Russian-occupied Crimea. A network of canals leading from the reservoir irrigates roughly 200,000 hectares (494,000 acres) of farmland that is used to grow sunflowers, grain and vegetables.


2021 Nuclear power’s economic failure. The Ecologist

https://theecologist.org/2021/dec/13/nuclear-powers-economic-failure

This article lists many catastrophic cost overruns of nuclear power projects around the world, here are a few of them:

  • The V.C. Summer project in South Carolina (two AP1000 reactors) was abandoned after the expenditure of at least US$9 billion, leading Westinghouse to file for bankruptcy in 2017. Criminal investigations and prosecutions related to the V.C. Summer project are ongoing ‒ and bailout programs to prolong the operation of ageing reactors in the US are also mired in corruption.
  • The only remaining reactor construction project in the US is the Vogtle project in Georgia (two AP1000 reactors). The current cost estimate of US$27-30+ billion is twice the estimate when construction began (US$14-15.5 billion). Costs continue to increase and the Vogtle project only survives because of multi-billion-dollar taxpayer bailouts. The project is six years behind schedule.
  • In 2021, TVA abandoned the unfinished Bellefonte nuclear plant in Alabama, 47 years after construction began and following the expenditure of an estimated US$5.8 billion.
  • The only current reactor construction project in France is one EPR reactor under construction at Flamanville. The current cost estimate of €19.1 billion is 5.8 times greater than the original estimate. The Flamanville reactor is 10 years behind schedule.
  • The only current reactor construction project in the UK comprises two EPR reactors under construction at Hinkley Point. In the late 2000s, the estimated construction cost for one EPR reactor in the UK was £2 billion. The current cost estimate for two EPR reactors at Hinkley Point is £22-23 billion, over five times greater than the initial estimate.
  • One EPR reactor (Olkiluoto-3) is under construction in Finland. The current cost estimate of about €11 billion is 3.7 times greater than the original estimate. Olkiluoto-3 is 13 years behind schedule.

2020 Nuclear Safety

The International Atomic Energy Agency is supposed to keep track of all the nuclear incidents in the world, but if you go to their incident report page, you’ll notice that the Turkey Point reactor issues in the March 22, 2016 article below aren’t mentioned, and the British newspaper The Guardian also says that their list is incomplete. Wikipedia is very much out of date, but has some fairly long lists of nuclear problems. The NRDC has a good deal of information, for instance their article “What if the Fukushima nuclear fallout crisis had happened here?”, where you can see how hard your home would be hit if the nearest nuclear reactor had a similar level of disaster.

Deign J (2020) MIT Study Lays Bare Why Nuclear Costs Keep Rising. Greentechmedia

The main reason for spiraling nuclear plant construction bills is soft costs, the indirect expenses related to activities such as engineering design, purchasing, planning, scheduling and — ironically — estimating and cost control. These indirect expenses accounted for 72% of the increase seen in reactor construction costs between 1976 and 1987, a period in which the amount of money needed for containment buildings rose by almost 118%.

The research is sobering reading for those who contend that the more times a reactor model is built, the less it will cost. The MIT study found that in 3 out of 4 reactor designs, the first to be built was the cheapest. Productivity keeps going down, often due to delays that add to costs as workers sit idle.

Some argue that Small Modular Reactors (SMRs) will be cheaper, but they aren’t likely to be commercial for a decade or more, plus the costs are uncertain.

Delbert C (2020) France’s Revolutionary Nuclear Reactor Is a Leaky, Expensive Mess. With a bloated budget, endless delays, and shoddy construction, EPR looks like a big mistake. Popular Mechanics.

France’s EPR (European Pressurized Reactor) has seen its construction timeline stretch from 13 to 17 years. It’s already 10 years past its due date and four times over budget (from $3.9 billion to $14.6 billion). It’s not even a major technological leap, just an iteration on a previous design.

Dujmovic, J. 2019. Think fossil fuels are bad? Nuclear energy is even worse. MarketWatch

Not long ago, I wrote about nuclear plants and the large number of “incidents” (many of which go under the radar) that occur every year, despite upgrades, updates, technological advancements and research that’s put in nuclear energy.

Researchers from the Swiss Federal Institute of Technology have come up with an unsettling discovery. Using the most complete and up-to-date list of nuclear accidents to predict the likelihood of another nuclear cataclysm, they concluded that there is a 50% chance of a Chernobyl-scale event (or larger) occurring in the next 27 years, and a 50% chance of an event similar to Three Mile Island within just 10 years. (The Three Mile Island Unit 2 reactor, near Middletown, Pa., partially melted down on March 28, 1979, the most serious commercial nuclear-plant accident in the U.S.)
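The study’s statistical machinery isn’t reproduced here, but estimates of this shape typically come from treating accidents as a Poisson process: given a rate of λ events per year, the chance of at least one event within t years is 1 − e^(−λt). A minimal sketch of that arithmetic (my framing, not the researchers’ code):

```python
import math

# Poisson-process view of accident risk:
# P(at least one event within t years) = 1 - exp(-rate * t).
# An assumed framing for illustration, not the Swiss study's actual model.

def p_at_least_one(rate_per_year: float, years: float) -> float:
    return 1.0 - math.exp(-rate_per_year * years)

# The rate implied by "a 50% chance within 27 years":
rate = math.log(2) / 27   # ~0.026 events/year, i.e. one per ~39 years on average
print(f"implied rate: one event per {1 / rate:.0f} years")
print(f"check: P(event within 27 years) = {p_at_least_one(rate, 27):.2f}")
```

So “50% in 27 years” is the same statement as “one such event every four decades or so, on average”; the probability never reaches certainty, it just keeps compounding.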

Then there’s the problem of nuclear waste. Just in the U.S., commercial nuclear-power plants have generated 80,000 metric tons of useless but highly dangerous and radioactive spent nuclear fuel — enough to fill a football field about 20 meters (65 feet) deep.  Over the next few decades, the amount of waste will increase to 140,000 metric tons, but there is still no disposal site in the U.S. or a clear plan on how to store this highly dangerous material.

Nuclear waste will remain dangerous — deadly to humans and toxic to nature — for hundreds of thousands of years.

Digging deep wells and tunnels in which it can be stored is simply kicking a very dangerous can down the road — a can that can break open and contaminate the environment because of earthquakes, human error and acts of terrorism.

Let’s also not forget that the majority of developed countries have felt the need to use seas and oceans as nuclear dumping sites. Although the practice was prohibited in 1994, the damage was already done. The amount of nuclear waste in the world’s seas greatly exceeds what’s currently stored in the U.S. And that’s just documented waste, so the true amount may be much higher.

Some may be comforted by the fact that 2011 data suggest the damage to the environment was minimal, but let’s not forget that these containers will eventually decay and their contents will spill and mix with water, polluting marine life and changing the biosphere. Finally, all of this contamination comes back to us in the form of food we eat, water we drink and air we breathe.

Ambellas S (2017) Overwhelmed Massachusetts nuclear power plant spikes with radiation. The Pilgrim Nuclear Power Plant has spiked with radiation to near alert levels alarming officials. infowars.com

Alvarez L (2016) Nuclear Plant Leak Threatens Drinking Water Wells in Florida. New York Times.

April 2014 ASPO newsletter

“Nuclear power is probably the biggest asset we have in the fight against climate change…But I’m a business guy and I’m a pragmatist, and there’s no future for nuclear in the United States. There’s certainly no future for new nuclear… [Very few know] how close the system came to collapsing in January because everyone wants to go to natural gas and there wasn’t enough natural gas in the system.  The purpose of having old coal plants, to be frank, is keeping the lights on for the next three, five, 10 years…I’m not anti-utilities, I’m not anti-nuclear, I’m not anti-coal, I’m just anti-bullshit.” — David Crane, CEO of NRG Inc., the U.S.’ largest independent power generator

Matthew Wald. 8 Jun 2012. Court Forces a Rethinking of Nuclear Fuel. New York Times.

The Nuclear Regulatory Commission acted hastily in concluding that spent fuel can be stored safely at nuclear plants for the next century or so in the absence of a permanent repository, and it must consider what will happen if a repository is never established, a federal appeals court ruled on Friday. The commission made its flawed decision so that the operating licenses of dozens of power reactors (and 4 new ones) could be extended.

The three-judge panel unanimously decided that the commission was wrong to assume nuclear fuel would be safe for many decades without analyzing actual reactor storage pools individually across the nation. Nor did it adequately analyze the risk that cooling water might leak from the pools or that the fuel could ignite.

22 May 2012. Severe Nuclear Reactor Accidents Likely Every 10 to 20 Years, European Study Suggests. ScienceDaily

Catastrophic nuclear accidents such as the core meltdowns in Chernobyl and Fukushima are more likely to happen than previously assumed. Based on the operating hours of all civil nuclear reactors and the number of nuclear meltdowns that have occurred, scientists at the Max Planck Institute for Chemistry have calculated that such events may occur once every 10 to 20 years, some 200 times more often than estimated in the past. The researchers also determined that 50% of the radioactive caesium-137 would be spread more than 1,000 kilometres from the reactor, and 25% more than 2,000 kilometres. Their results show that Western Europe is likely to be contaminated about once in 50 years by more than 40 kilobecquerels of caesium-137 per square meter, the level at which the International Atomic Energy Agency defines an area as contaminated. In view of their findings, the researchers call for an in-depth analysis and reassessment of the risks associated with nuclear power plants. Currently, there are 440 nuclear reactors in operation, and 60 more are planned.
Citizens in the densely populated southwestern part of Germany run the worldwide highest risk of radioactive contamination. If a single nuclear meltdown were to occur in Western Europe, around 28 million people on average would be affected by contamination of more than 40 kilobecquerels per square meter. This figure is even higher in southern Asia, due to the dense populations. A major nuclear accident there would affect around 34 million people, while in the eastern USA and in East Asia this would be 14 to 21 million people.
Reference: J. Lelieveld, et al. Global risk of radioactive fallout after major nuclear reactor accidents. Atmospheric Chemistry and Physics, 2012; 12 (9): 4245
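The Max Planck arithmetic can be roughly reconstructed as follows. The inputs below are approximations of the study’s published figures (about 14,500 cumulative reactor-years of civil operation through 2011, and four core meltdowns: one at Chernobyl and three at Fukushima), not an exact reproduction of its method.

```python
# Rough reconstruction of the Max Planck back-of-envelope estimate
# (inputs are approximations of the study's published figures).
reactor_years = 14_500    # cumulative civil reactor operating years through 2011
core_meltdowns = 4        # Chernobyl (1) + Fukushima (3)
reactors_running = 440    # reactors in operation worldwide

years_per_meltdown = reactor_years / core_meltdowns     # ~3,625 reactor-years
interval = years_per_meltdown / reactors_running        # ~8 years
interval_rounded = 5_000 / reactors_running             # ~11 years with the study's
                                                        # conservative rounding
print(f"one meltdown per ~{years_per_meltdown:,.0f} reactor-years")
print(f"expected interval: ~{interval:.0f} years "
      f"(~{interval_rounded:.0f} years with conservative rounding)")
```

With further conservative assumptions, the study lands in the “once every 10 to 20 years” range quoted above.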

Smith, Rebecca. 4 Feb 2012. Worn Pipes Shut California Reactors. Wall Street Journal.

The two reactors at the San Onofre nuclear-power station near San Clemente, Calif., will remain shut down this weekend while federal safety officials investigate why critical, and relatively new, equipment is showing signs of premature wear. Components in nuclear plants are subjected to extreme heat, pressure, radiation and chemical exposure, all of which can take a toll on materials. Commission inspectors say they also have found problems with hundreds of steam tubes at the plant’s other reactor. Experts say the closures may signal a broader problem for the nuclear industry, which has been trying to reassure Americans that its aging reactors are safe in the wake of last year’s disaster at the Fukushima Daiichi plant in Japan. According to a commission spokesman, two pipes had lost 35% of their wall thickness in just two years of service, and most of the worn tubes, about 800, had lost 10% to 20% of wall thickness. The pipes are about three-quarters of an inch in diameter.

Munson, R. 2008. From Edison to Enron: The Business of Power and What It Means for the Future of Electricity. Praeger.

Cost overruns on reactors nearly drove some power companies into bankruptcy.   In 1984 the Department of Energy calculated more than 75% of reactors cost at least double the estimated price.

Utility WPPSS in Washington state defaulted, scaring investors, who once thought there’d be over a thousand reactors running by 2000 with electricity too cheap to meter.  In fact, only 82 plants existed in 2000 and power prices soared 60% between 1969 and 1984 due to the cost overruns.

Nuclear executives tried to blame their problems on too much regulation and environmentalists, but regulations only came after reactors began to break down. Intense radiation and high temperatures caused pipes, valves, tubes, fuel rods, and cooling systems to crack, corrode, bend, and malfunction. Only then did Congress replace the Atomic Energy Commission with the Nuclear Regulatory Commission to regulate nuclear power facilities.

Munson lists quite a few problems, but search on “Nuclear Reactor Hazards: Ongoing Dangers of Operating Nuclear Technology in the 21st Century” to get a really good understanding of the magnitude of failures despite regulation. Indeed, even the Wall Street Journal was forced to admit at one point that reactor troubles “tell the story of projects crippled by too little regulation, rather than too much.”

Some of this stemmed from nuclear engineers seeing uranium as just a complicated way to boil water. But a reactor is not simple: there are over 40,000 valves, the fuel rods reach temperatures over 4,800 F, and it isn’t easy to contain the nuclear reactions.

Management was poor as well, with Forbes magazine calling the U.S. nuclear program “the largest managerial disaster in business history, a disaster on a monumental scale.”



Fossil-fueled industrial heat hard to impossible to replace with renewables

Preface. Cement, steel, glass, bricks, ceramics, chemicals, and much more require fossil-fueled high heat (up to 3200 F) to manufacture. Except for the electric-arc furnace used to recycle existing steel, there are no renewable ways to make cement, other metals, and other high-heat products, and industries aren’t working on this either.

Alice Friedemann  www.energyskeptic.com  Women in ecology.  Author of 2021 “Life After Fossil Fuels: A Reality Check on Alternative Energy” (best price here); 2015 “When Trucks Stop Running: Energy and the Future of Transportation”; “Barriers to Making Algal Biofuels”; and “Crunch! Whole Grain Artisan Chips and Crackers”.  Podcasts: Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity

***

Roberts, D. 2019. This climate problem is bigger than cars and much harder to solve. Low-carbon options for heavy industry like steel and cement are scarce and expensive. Vox

Climate activists are fond of saying that we have all the solutions we need to the climate crisis; all we lack is the political will. This is incorrect. There are some uses of fossil fuels that we do not yet know how to decarbonize.

Take, for instance, industrial heat: the extremely high-temperature heat used to make steel and cement.

Heavy industry is responsible for around 22% of global CO2 emissions, with 42% of that — about 10% of global emissions — from combustion to produce large amounts of high-temperature heat for industrial products like cement, steel, and petrochemicals.

To put that in perspective, industrial heat’s 10% is greater than the CO2 emissions of all the world’s cars (6%) and planes (2%) combined. Yet, consider how much you hear about electric vehicles. Consider how much you hear about flying shame. Now consider how much you hear about … industrial heat.
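As a quick sanity check, here is the arithmetic behind those shares, using only the numbers quoted in the paragraphs above:

```python
# Quick check of the emission shares quoted above.
heavy_industry = 0.22    # heavy industry's share of global CO2 emissions
heat_fraction = 0.42     # share of that from combustion for high-temperature heat

industrial_heat = heavy_industry * heat_fraction
print(f"industrial heat: {industrial_heat:.1%} of global CO2")  # ~9.2%, i.e. "about 10%"
print(f"cars + planes:   {0.06 + 0.02:.0%}")                    # 8%, less than industrial heat
```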

Not much, I’m guessing. But the fact is, today, virtually all of that combustion is fossil-fueled, and there are very few viable low-carbon alternatives. For all kinds of reasons, industrial heat is going to be one of the toughest nuts to crack, carbon-wise. And we haven’t even gotten started.

A cement factory at dusk.

Some light has been cast into this blind spot with the release of two new reports by Julio Friedmann, a researcher at the Center for Global Energy Policy (CGEP) at Columbia University (among many items on a long résumé).

The first report, co-authored with CGEP’s Zhiyuan Fan and Ke Tang, is about the current state of industrial heat technology: “Low-Carbon Heat Solutions for Heavy Industry: Sources, Options, and Costs Today.”

The second, co-authored with a group of scholars for the Innovation for Cool Earth Forum (ICEF), is a roadmap for decarbonizing industrial heat, including a set of policy recommendations.

There’s a lot in these reports, but I’m going to guess your patience for industrial heat is limited, so I’ve boiled it down to three sections. First, I’ll offer a quick overview of why industrial heat is so infernally difficult to decarbonize; second, a review of the options available for decarbonizing it; and third, some recommendations for how to move forward.

Why industrial heat is such a vexing carbon dilemma

There’s a reason you don’t hear much about industrial heat: Consumers don’t buy it. It is a market dominated entirely by large, little-known industrial firms that operate outside the public eye. So unlike electricity, or cars, there is little prospect of moving the market through popular consumer demand. Policymakers will have to do this on their own. And it won’t be easy.

The biggest industrial emitters are cement, steel, and the chemical industries; also making a notable contribution are refining, fertilizer, and glass. As a group, these industries have three notable features.

First, almost all of them are globally traded commodities. Their prices are not set domestically. They compete with optimized supply chains around the world, with razor-thin margins. Domestic policies that raise their prices risk “carbon leakage” (i.e., companies simply moving overseas to find cheaper labor and operating environments).

What’s more, some of these industries, especially cement and steel, are especially prized by national governments for their jobs and their national security implications. Politicians are leery of any policy that might push those industries away. “As one indication, most cement, steel, aluminum, and petrochemicals have received environmental waivers or been politically exempted from carbon limits,” says the CGEP report, “even in countries with stringent carbon targets.”

Furnace at an aluminum foundry.

Second, they involve facilities and equipment meant to last between 20 and 50 years. Blast furnaces sometimes make it to 60. These are large, long-term capital investments, with relatively low stock turnover. “Few industrial facilities show signs of imminent closure, especially in developing countries,” the CGEP report says, “making deployment of replacement facilities and technologies problematic.” At the very least, solutions that can work with existing equipment will have a head start.

Third, their operational requirements are both stringent and varied. They all have in common that they require large amounts of high-temperature heat and high “heat flux,” the ability to deliver large amounts of heat steadily, reliably, and continuously. Downtime in these industries is incredibly expensive.

At the same time, the specific requirements and processes at work in these industries vary widely. To take one example, steel and iron are made using blast furnaces that burn coke (a form of “cooked” coal with high-carbon content). “Coke also provides carbon as a reductant, acts as structural support to hold the ore burden, and provides porosity for rising hot gas and sinking molten iron,” the CGEP report says. “Because of these multiple roles, directly replacing coke combustion with an alternative source of process heat is not practical.”

A cement kiln works somewhat differently, as do the reactors that power chemical conversions, as does a glassblower. The variety of specific operational characteristics makes across-the-board substitution for industrial heat difficult.

Each of these industries is going to require its own solution. And it’s going to have to be a solution that doesn’t raise their costs much or at least takes steps to protect them from international competition.

The options to date are not much to speak of.

The options for decarbonizing industrial heat are scarce

What are the alternatives that might provide high heat and high heat flux with less or no carbon emissions? The report is not sanguine: “The pathway toward net-zero carbon emission for industry is not clear, and only a few options appear viable today.”

Alternatives can be broken down into five basic categories:

  1. Biomass: Either biodiesel or woodchips can be combusted directly.
  2. Electricity: “Resistive” electricity can be used to, say, power an electric arc furnace.
  3. Hydrogen: This is sometimes counted as a subcategory of electricity, since its cleanest form is made with electricity. It is produced through steam reforming of methane (SMR), yielding carbon-intensive “grey” hydrogen; SMR with carbon capture and storage, yielding “blue” hydrogen; or electrolysis, pulling hydrogen directly out of water, yielding low-carbon “green” hydrogen.
  4. Nuclear: Nuclear power plants, either conventional reactors or new third-generation reactors, give off heat that can be carried as steam.
  5. Carbon capture and storage (CCS): Rather than decarbonizing the processes themselves, their CO2 emissions could be captured and buried, either the CO2 directly from the heat source (“heat CCS”) or the CO2 from the entire facility (“full facility CCS”).

All of these options have their difficulties and drawbacks. None of them is anywhere close to cost parity with existing processes.

Some are limited by the intensity of the heat they can produce. Here’s a breakdown:

industrial heat temperature requirements

Some options are limited by the specific requirements of particular industrial processes. Cement kilns work better with energy-dense internal fuel; resistive electricity on the outer surface doesn’t work as well.

But the biggest limitations are costs, where the news is somewhat disheartening, for two reasons.

First, even the most promising and viable options substantially raise operational costs. And second, the options that are currently the least expensive are not exactly the ones environmentalists might prefer.

There’s a lot in the report on the methodology of comparing costs across the technologies, but the main thing to keep in mind is that these cost estimates are provisional. They involve various contestable assumptions, and real performance data is often not available. So it’s all to be taken with a grain of salt, pending further research. That said, here’s a rough-and-ready cost comparison:

cost comparison of industrial heat options

You might notice that most of the blue bars, the low-carbon options, are way over on the expensive right. The only ones that are reasonably affordable are nuclear and blue hydrogen.

Hydrogen is the most promising alternative

In terms of ability to generate high-temperature heat, availability, and suitability to multiple purposes, hydrogen is probably the leading candidate among industrial-heat alternatives. Unfortunately, the cost equation on hydrogen is not good: the cleaner it is, the more expensive it is.

The cheapest way to produce hydrogen, the way around 95 percent of it is now produced, is steam methane reforming (SMR), which reacts steam with methane in the presence of a catalyst at high temperatures and pressures. It is an extremely carbon-intensive process, thus “grey hydrogen.”

The carbon emissions from SMR can be captured and buried via CCS (though they rarely are today). As the chart above indicates, this kind of “blue hydrogen” is the cheapest low(er) carbon alternative for high-temperature industrial heat.

“Green hydrogen” is made via electrolysis, using electricity to separate hydrogen from water. If it is made with carbon-free energy, it too is carbon-free. There are a few different forms of electrolysis, which we don’t need to get into. The main thing to know is that they are expensive — the least expensive is more than twice as expensive as blue hydrogen.

hydrogen costs

Here’s a simplified cost chart, to make these comparisons clearer:

industrial heat costs

Note: These numbers reflect “what is actionable today within existing facilities.”

For now, to a first approximation, all the available low-carbon alternatives substantially raise costs of industrial-heat processes against the baseline.

And here’s the real kicker: in most cases, it is cheaper to capture and bury CO2 from these processes than it is to switch out systems for low-carbon alternatives.

CCS is often cheaper than low-carbon alternatives

Take cement production. It requires temperatures of at least 1,450°C, so the only viable options are hydrogen, biomass, resistive electric, or CCS. Here’s how much they would increase cement (“clinker”) production costs:

cement production costs

As you can see, every low-carbon alternative raises costs more than 50 percent above baseline. The only ones that don’t raise it more than 100 percent are CCS (of the heat source only), blue hydrogen, or resistive electric in places with extremely cheap and plentiful carbon-free energy.

The alternative that climate hawks would most prefer, the carbon-free option that would work best for most applications, is green hydrogen. But that currently raises costs between 400 and 800 percent. Ouch.

The situation is much the same for steel:

steel costs

And so on down the line, from chemicals to glass to ceramics: in almost every case, the cheapest near-term decarbonization solution is just to capture and bury the carbon emissions.

Of course, that’s just on average. The actual costs will depend on geography — whether there are suitable burial sites for CO2, whether natural gas is cheap, whether there’s a lot of hydro or wind nearby — but there’s no getting around the simple truth about today’s industrial-heat alternatives: What’s green isn’t very feasible, and what’s feasible isn’t very green.

Here’s a qualitative chart that tries to get at that relationship.

industrial heat feasibility

What’s most feasible is on the right. What’s most expensive is up top. There isn’t much in that lower-right feasible/cheap quadrant except blue hydrogen, for now.

The report emphasizes that these initial technology rankings are “temporary at best” and “highly speculative, uncertain, and contingent.” Much more needs to be understood about the costs and feasibility of these options. Their relative attractiveness may change quickly with technology development.

As this list makes clear, there is a lot that needs to be done before “we have all the solutions we need” in heavy industrial sectors. And there are other sectors that remain difficult to decarbonize as well (shipping, heavy freight, airplanes).

A final note about electrification

The only technology solution with a potential path down the cost curve to the point of being competitive with (properly priced) fossil fuels is electrification.

The charts above reveal two things about electrification of industrial heat. One, resistive electricity is the only low-carbon industrial-heat option competitive with CCS or blue hydrogen, and that’s only where clean electricity is extremely cheap and plentiful. And two, the only truly carbon-free, unlimited, all-purpose alternative available is green hydrogen, which requires plentiful renewable energy.

Both argue for the absolute imperative of making clean electricity cheaper.

At current prices and with current technologies, an all or mostly renewable grid would have difficulty with industrial heat, which requires enormous, intensive amounts of energy, reliably and continuously supplied. Some industrial applications could shift their demand around in time to accommodate renewables or make their processes intermittent, but most can’t. They need controllable, dispatchable power.

An electric arc furnace.

Building a renewable-based grid that could handle heavy industry would require much cheaper and more energy-dense storage, more and better transmission, smarter meters and appliances, and better demand response, but above all, it would require extremely cheap and abundant carbon-free electricity.


A 1-year blackout could kill 90% of Americans

Preface. What follows is my summary of Dr. Pry’s 30 pages of testimony at a 2015 U.S. House of Representatives hearing.

One of the ways that an electromagnetic pulse (EMP) could be generated is by a solar flare. During the Carrington event of 1859, one of the most violent solar storms of the past 200 years, the telegraph network collapsed in large parts of northern Europe and North America. According to estimates, the associated flare released only a hundredth of the energy of a superflare. Today, in addition to the infrastructure on the Earth’s surface, especially satellites would be at risk.

It turns out that superflares may be far more common than expected. Stars similar to the Sun produce a gigantic outburst of radiation on average about once every 100 years per star. These superflares release more energy than a trillion hydrogen bombs and make all previously recorded solar flares pale in comparison. This estimate is based on an inventory of 56,450 sun-like stars, which shows that previous studies significantly underestimated the eruptive potential of these stars. In data from NASA’s Kepler space telescope, superflaring sun-like stars turn up ten to a hundred times more frequently than previously assumed. The Sun, too, is likely capable of similarly violent eruptions. Vasilyev V et al (2024) Sun-like stars produce superflares roughly once per century. Science. DOI: 10.1126/science.adl5441

In addition to electromagnetic pulses from a solar event, nuclear weapon, or purpose-built equipment, cyberattacks can also bring down the power grid. A result is often the destruction of one or more transformers, each weighing 100 to 400 tons and taking one to five years to replace. The U.S. doesn’t make them, yet it would be crazy to buy one from the largest producer, China, since they could build in cyberattack back doors or construct it with shoddy counterfeit components that might lead to failure (DOE 2014, Steidler 2020). And like Russia, China is fully capable of launching a cyberattack to shut down the electric grid and other infrastructure (Crawford 2014).

From 2019 to the present, Russia’s SVR intelligence agency stole sensitive communications and plans during a cyberattack across hundreds of networks in the United States, and it’s feared that they got their hands on the technical blueprints for how the U.S. would restore power after a major blackout, leaving back doors so they can return to snooping whenever they like. This would enable them to know which systems to target to keep the power from coming back on after a blackstart (2021 U.S. officials are reportedly privately worried Russia stole blueprints for U.S. blackout restoration).

2020 Update: Peter Pry, the executive director of the EMP Task Force on National and Homeland Security, issued a report showing that China has super-EMP weapons, knows how to protect itself against an EMP attack, and could conduct a first strike. China also has the most active ballistic missile development program in the world, using stolen U.S. technology to develop at least three types of high-tech weapons to attack the electric grid and key technologies, which could inflict a surprise “Pearl Harbor”: a deadly blackout of the entire U.S. China has built a network of satellites, high-speed missiles, and super-electromagnetic pulse weapons that could melt down our electric grid, fry critical communications, and even take out the ability of our aircraft carrier groups to respond. Conca J (2020) China Has ‘First-Strike’ Capability To Melt U.S. Power Grid With Electromagnetic Pulse Weapon. Forbes.


Alice Friedemann   www.energyskeptic.com  author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, 2021, Springer; “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Testimony of Dr. Peter Vincent Pry  at the U.S. House of Representatives Serial No. 114-42 on May 13, 2015. The EMP Threat:  the state of preparedness against the threat of an electromagnetic pulse (EMP) event.  House of Representatives. 94 pages.

“The EMP Commission estimates that a nationwide blackout lasting one year could kill up to 9 of 10 Americans through starvation, disease, and societal collapse” 

A natural electromagnetic pulse (EMP) from a geomagnetic super-storm, like the 1859 Carrington Event or the 1921 Railroad Storm, or a nuclear EMP attack, could cause a year-long blackout and collapse all the other critical infrastructures (communications, transportation, banking and finance, food and water) necessary to sustain modern society and the lives of 310 million Americans.

Seven days after the commencement of the blackout, emergency generators at nuclear reactors would run out of fuel. The reactors and the nuclear fuel rods in cooling ponds would melt down and catch fire, as happened in the nuclear disaster at Fukushima, Japan. The 104 U.S. nuclear reactors, located mostly in the populous eastern half of the United States, could cover vast swaths of the nation with dangerous plumes of radioactivity (see Richard Stone’s May 24, 2016 Science article: “Spent fuel fire on U.S. soil could dwarf impact of Fukushima”).

Nuclear EMP is like super-lightning. The electromagnetic shockwave unique to nuclear weapons, called E1 EMP, travels at the speed of light, potentially injecting into electrical systems thousands of volts in a nanosecond–literally a million times faster than lightning, and much more powerful. Russian open source military writings describe their Super-EMP Warhead as generating 200,000 volts/meter, which means that the target receives 200,000 volts for every meter of its length. So, for example, if the cord on a PC is two meters long, it receives 400,000 volts. An automobile 4 meters long could receive 800,000 volts (unless it is parked underground).
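The arithmetic behind those examples is simple proportionality: induced voltage is roughly field strength times conductor length. Here is a minimal sketch of that calculation; it is of course an idealization, since real coupling depends on conductor orientation, geometry, and the pulse’s frequency content.

```python
# Induced voltage ~ field strength x exposed conductor length (idealized).
FIELD_V_PER_M = 200_000  # claimed Super-EMP field strength, volts per meter

for item, length_m in [("PC power cord", 2), ("automobile", 4)]:
    volts = FIELD_V_PER_M * length_m
    print(f"{item} ({length_m} m): ~{volts:,} volts")
```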

No other threat can cause such broad and deep damage to all the critical infrastructures as a nuclear EMP attack. A nuclear EMP attack would collapse the electric grid, blackout and directly damage transportation systems, industry and manufacturing, satellite navigation, telecommunications systems and computers, banking and finance, and the infrastructures for food and water. Jetliners carry about 500,000 passengers on over 1,000 aircraft in the skies over the U.S. at any given moment. Many, most or virtually all of these would crash, depending upon the strength of the EMP field.

Cars, trucks, trains and traffic control systems would be damaged. In the best case, even if only a few percent of ground transportation vehicles are rendered inoperable, massive traffic jams would result. In the worst case, virtually all vehicles of all kinds would be rendered inoperable. In any case, all vehicles would stop operating when they run out of gasoline. The blackout would render gas stations inoperable and paralyze the infrastructure for synthesizing and delivering petroleum products and fuels of all kinds.

Industry and manufacturing would be paralyzed by collapse of the electric grid. Damage to SCADAS and safety control systems would likely result in widespread industrial accidents, including gas line explosions, chemical spills, fires at refineries and chemical plants producing toxic clouds.

Cell phones, personal computers, the internet, and the modern electronic economy that supports personal and big business cash, credit, debit, stock market and other transactions and record keeping would cease operations. The Congressional EMP Commission warns that society could revert to a barter economy.

Worst of all, about 72 hours after the commencement of blackout, when emergency generators at the big regional food warehouses cease to operate, the nation’s food supply will begin to spoil. Supermarkets are resupplied by these large regional food warehouses that are, in effect, the national larder, collectively having enough food to sustain the lives of 310 million Americans for about one month, at normal rates of consumption. The Congressional EMP Commission warns that as a consequence of the collapse of the electric grid and other critical infrastructures, “It is possible for the functional outages to become mutually reinforcing until at some point the degradation of infrastructure could have irreversible effects on the country’s ability to support its population.”

The New “Lightning War” / Blitzkrieg.

A new Lightning War launched by our adversaries would attack the electric grid and other critical infrastructures all at once with a coordinated assault of cyber-war, sabotage, and EMP attacks, perhaps at the same time as a space weather geomagnetic storm, or severe weather such as a hurricane or blizzard.

U.S. emergency planners tend to think of EMP, cyber, sabotage, severe weather, and geo-storms  as unrelated threats.  However, foreign adversaries (i.e. Iran, North Korea, China, Russia) in their military doctrines and military operations appear to be planning an offensive “all hazards” strategy that would throw at the U.S. electric grid and civilian critical infrastructures every possible threat simultaneously. Such an assault is potentially more decisive than Nazi Germany’s Blitzkrieg (“Lightning War”) strategy that nearly conquered the western democracies during World War II.

Catastrophe from a geomagnetic super-storm may well happen sooner rather than later–and perhaps in combination with a nuclear EMP attack.

Paul Stockton, President Obama’s former Assistant Secretary of Defense for Homeland Defense, on June 30, 2014, at the Electric Infrastructure Security Summit in London, warned an international audience that an adversary might coordinate nuclear EMP attack with an impending or ongoing geomagnetic storm to confuse the victim and maximize damage. Stockton notes that, historically, generals have often coordinated their military operations with the weather. For example, during World War II, General Dwight Eisenhower deliberately launched the D-Day invasion following a storm in the English Channel, correctly calculating that this daring act would surprise Nazi Germany.

Future military planners of the New Lightning War may well coordinate a nuclear EMP attack and other operations aimed at the electric grid and critical infrastructures with the ultimate space weather threat–a geomagnetic storm.

“China and Russia have considered limited nuclear attack options that, unlike their Cold War plans, employ EMP as the primary or sole means of attack,” according to the Congressional EMP Commission, “Indeed, as recently as May 1999, during the NATO bombing of the former Yugoslavia, high-ranking members of the Russian Duma, meeting with a U.S. congressional delegation to discuss the Balkans conflict, raised the specter of a Russian EMP attack that would paralyze the United States.”

Russia has made many nuclear threats against the U.S. since 1999, which are reported in the western press only rarely. On December 15, 2011, Pravda, the official mouthpiece of the Kremlin, gave this advice to the United States in “A Nightmare Scenario for America”: No missile defense could prevent…EMP…No one seriously believes that U.S. troops overseas are defending “freedom” or defending their country…. Perhaps they ought to close the bases, dismantle NATO and bring the troops home where they belong before they have nothing to come home to and no way to get there. On June 1, 2014, Russia Today, a Russian television news show, also broadcast to the West in English, predicted that the United States and Russia would be in a nuclear war by 2016.

Iran, the world’s leading sponsor of international terrorism, openly writes about making a nuclear EMP attack to eliminate the United States. Iran has practiced missile launches that appear to be training and testing warhead fusing for a high-altitude EMP attack–including missile launching for an EMP attack from a freighter. An EMP attack launched from a freighter could be performed anonymously, leaving no fingerprints, to foil deterrence and escape retaliation.

“What is different now is that some potential sources of EMP threats are difficult to deter–they can be terrorist groups that have no state identity, have only one or a few weapons, and are motivated to attack the U.S. without regard for their own safety,” cautions the EMP Commission in its 2004 report, “Rogue states, such as North Korea and Iran, may also be developing the capability to pose an EMP threat to the United States, and may also be unpredictable and difficult to deter.”

On April 16, 2013, North Korea simulated a nuclear EMP attack against the United States, orbiting its KSM-3 satellite over the U.S. at the optimum trajectory and altitude to place a peak EMP field over Washington and New York and blackout the Eastern Grid, which generates 75 percent of U.S. electricity. On the very same day, as described below, parties unknown executed a highly professional commando-style sniper attack on the Metcalf transformer substation, a key component of the Western Grid. A few months later, in July 2013, the North Korean freighter Chon Chong Gang transited the Gulf of Mexico carrying nuclear-capable SA-2 missiles on their launchers in its hold. The missiles had no warheads, but the event demonstrated North Korea’s capability to execute a ship-launched nuclear EMP attack from U.S. coastal waters anonymously, to escape U.S. retaliation. The missiles, hidden under bags of sugar, were only discovered because the freighter tried to return to North Korea through the Panama Canal, where inspectors searched it.

What does all this signify? Connect these dots: North Korea’s apparent practice EMP attack with its KSM-3 satellite; the simultaneous “dry run” sabotage attack at Metcalf; North Korea’s possible practice for a ship-launched EMP attack a few months later; and the cyber-attacks from various sources that were happening all the while, and are happening every day. Together these suggest the possibility that in 2013 North Korea may have exercised against the United States an all-out combined arms operation aimed at U.S. critical infrastructures: the New Lightning War.

How does an EMP damage the electric grid?

EHV (extra-high-voltage) transformers are the technological foundation of our modern electronic civilization, as they make it possible to transmit electric power over great distances.

An event that damages hundreds–or as few as 9–of the 2,000 EHV transformers in the United States could plunge the nation into a protracted blackout lasting months or even years. 

Transformers are typically as large as a house, weigh hundreds of tons, cost millions of dollars, and cannot be mass produced; each must be custom-made by hand. Making a single EHV transformer takes about 18 months. Annual worldwide production of EHV transformers is about 200 per year. Unfortunately, although Nikola Tesla invented the EHV transformer and the electric grid in the U.S., EHV transformers are no longer manufactured in the United States. Because of their great size and cost, U.S. electric utilities have very few spare EHV transformers. The U.S. must import EHV transformers made in Germany or South Korea, the only two nations in the world that make them for export.

SCADAS (supervisory control and data acquisition systems) are basically small computers that run the electric grid and all the critical infrastructures. SCADAS regulate the flow of electric current through EHV transformers, the flow of natural gas or water through pipelines, the flow of data through communications and financial systems, and operate everything from traffic lights to the refrigerators in regional food warehouses. SCADAS are ubiquitous in the civilian critical infrastructures, number in the millions, and are as indispensable as EHV transformers to running our modern electronic civilization. An event that damages large numbers of SCADAS would put that civilization at risk.

Nuclear weapon  EMP–The Worst Threat

A high-altitude nuclear EMP attack is the greatest single threat that could be posed to EHV transformers, SCADAS and other components of the national electric grid and other critical infrastructures. Nuclear EMP includes a high-frequency electromagnetic shockwave called E1 EMP that can potentially damage or destroy virtually any electronic system having a dimension of 18 inches or greater. Consequently, a high-altitude nuclear EMP event could cause broad damage of electronics and critical infrastructures across continental North America, while also causing deep damage to industrial and personal property, including to automobiles and personal computers.

E1 EMP is unique to nuclear weapons.

Nuclear EMP can also produce E2 EMP, comparable to lightning.

Nuclear EMP can also produce E3 EMP comparable to or greater than a geomagnetic superstorm. Even a relatively low-yield nuclear weapon, like the 10-kiloton Hiroshima bomb, can generate an E3 EMP field powerful enough to damage EHV transformers.

Nuclear EMP Attacks by Missile, Aircraft and Balloon

A nuclear weapon detonated at an altitude of 200 kilometers (124 miles) over the geographic center of the United States would create an EMP field potentially damaging to electronics over all the 48 contiguous States. The Congressional EMP Commission concluded that virtually any nuclear weapon, even a crude first generation atomic bomb having a low yield, could potentially inflict an EMP catastrophe.

The EMP Commission also found that Russia, China, and probably North Korea have nuclear weapons specially designed to generate extraordinarily powerful EMP fields— called by the Russians Super-EMP weapons–and this design information may be widely proliferated: “Certain types of relatively low-yield nuclear weapons can be employed to generate potentially catastrophic EMP effects over wide geographic areas, and designs for variants of such weapons may have been illicitly trafficked for a quarter-century.”

A sophisticated long-range missile is not required

Any short-range missile or other delivery vehicle that can deliver a nuclear weapon to an altitude of 30 kilometers (18.5 miles) or higher can make a potentially catastrophic EMP attack on the United States. Although a nuclear weapon detonated at 30 km could not cover the entire continental U.S. with an EMP field, the field would still cover a very large multi-state region, and be more intense. Lowering the height-of-burst (HOB) for an EMP attack decreases the field radius but increases field strength.
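The radius-versus-intensity tradeoff follows from line-of-sight geometry: a burst at height h illuminates the ground roughly out to the horizon distance sqrt(2 * R_earth * h). Here is a minimal sketch of that geometry; it is an idealization, since actual EMP field contours are not uniform circles.

```python
# Line-of-sight footprint of a high-altitude burst, from simple geometry:
# ground radius ~ sqrt(2 * R_earth * h) for burst height h << R_earth.
import math

R_EARTH_KM = 6_371

def footprint_radius_km(burst_height_km: float) -> float:
    return math.sqrt(2 * R_EARTH_KM * burst_height_km)

for h in (30, 200):
    print(f"HOB {h:>3} km -> ground radius ~{footprint_radius_km(h):,.0f} km")
# 30 km  -> ~620 km   (a large multi-state region)
# 200 km -> ~1,600 km (roughly the whole contiguous U.S.)
```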

An EMP attack at 30 kilometers HOB anywhere over the eastern half of the U.S. would cause cascading failures far beyond the EMP field and collapse the Eastern Grid, which generates 75% of U.S. electricity. The nation could not survive without the Eastern Grid.

A Scud missile launched from a freighter could perform such an EMP attack. Over 30 nations have Scuds, as do some terrorist groups and private collectors. Scuds are available for sale on the world and black markets.

Any aircraft capable of flying Mach 1 could probably do a zoom climb to 30 kilometers altitude to make an EMP attack, if the pilot is willing to commit suicide.

Even a meteorological balloon could be used to loft a nuclear weapon 30 km high to make an EMP attack. During the period of atmospheric nuclear testing in the 1950s and early 1960s, more nuclear weapons were tested at altitude by balloon than by bombers or missiles.

Geomagnetic Storms

In contrast, natural EMP from a geomagnetic super-storm generates only E3 EMP, whose wavelengths are so long that it takes conductors over 1 kilometer in length, such as power lines, telephone lines, pipelines, and railroad tracks, to do harm, so it can’t hurt small targets like autos or personal computers. However, a protracted nationwide blackout resulting from such a storm would stop everything within a few days: personal computers cannot run for long on batteries, nor can automobiles run without gasoline.

Natural EMP from geomagnetic storms, caused when a coronal mass ejection from the Sun collides with the Earth’s magnetosphere, poses a significant threat to the electric grid and the 18 critical infrastructures, which all depend directly or indirectly upon electricity. Normal geomagnetic storms occur every year, causing problems with communications and electric grids for nations located at high northern latitudes, such as Norway, Sweden, Finland and Canada. The 1989 Hydro-Quebec Storm blacked out the eastern half of Canada in 92 seconds, melted an EHV transformer at the Salem, New Jersey nuclear power plant, and caused billions of dollars in economic losses.

In 1921, a geomagnetic storm 10 times more powerful than the 1989 Hydro-Quebec Storm, the Railroad Storm, afflicted the whole of North America. It did not have catastrophic consequences because electrification of the U.S. and Canada was still in its infancy. The National Academy of Sciences estimates that if the 1921 Railroad Storm recurred today, it would cause a catastrophic nationwide blackout lasting 4 to 10 years and costing trillions of dollars.

The Carrington Event. The most powerful geomagnetic storm ever recorded is the 1859 Carrington Event, estimated to be ten times more powerful than the 1921 Railroad Storm and classed as a geomagnetic superstorm. Natural EMP from the Carrington Event penetrated miles deep into the Atlantic Ocean and destroyed the just-laid intercontinental telegraph cable. The Carrington Event was a worldwide phenomenon, causing fires in telegraph stations and forest fires from telegraph lines bursting into flames on several continents. Fortunately, in the horse-and-buggy days of 1859, civilization did not depend upon electrical systems.

Recently scientists have found that “storms like the Carrington Event are not as rare as scientists thought and could happen every few decades, seriously damaging modern communication and navigation systems around the globe” (Eisenstat. 2019. Extreme solar storms may be more frequent than previously thought. phys.org).

Recurrence of a Carrington Event today would collapse electric grids and critical infrastructures all over the planet, putting at risk billions of lives. Scientists estimate that geomagnetic superstorms occur about every 100-150 years. The Earth is probably overdue to encounter another Carrington Event.

NASA warned that on July 22, 2012, a powerful solar flare narrowly missed the Earth; it would have generated a geomagnetic super-storm like the 1859 Carrington Event and collapsed electric grids and life-sustaining critical infrastructures worldwide.

The National Intelligence Council (NIC), which speaks for the entire U.S. Intelligence Community, published a major unclassified report in December 2012, Global Trends 2030, warning that a geomagnetic super-storm, such as a recurrence of the 1859 Carrington Event, is one of only eight “Black Swans” that could by or before 2030 change the course of global civilization. The NIC concurs with the consensus view that another Carrington Event could recur at any time, possibly before 2030, and that if it did, electric grids and critical infrastructures that support modern civilization could collapse worldwide.

NASA estimates that the likelihood of a geomagnetic super-storm is 12 percent per decade. This virtually guarantees that Earth will experience a natural EMP catastrophe in our lifetimes or those of our children.
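Here is a minimal sketch of what a 12-percent-per-decade probability compounds to over longer horizons, assuming the risk is independent from decade to decade (a simplifying assumption):

```python
# Probability of at least one super-storm over n decades,
# assuming 12% per decade and decade-to-decade independence.
P_PER_DECADE = 0.12

for decades in (1, 3, 5, 10):
    p = 1 - (1 - P_PER_DECADE) ** decades
    print(f"P(>=1 super-storm in {decades * 10:>3} years) = {p:.0%}")
# 10 yr: 12%, 30 yr: 32%, 50 yr: 47%, 100 yr: 72%
```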

Non-Nuclear EMP Radio-Frequency Weapons (RFWs)

RFWs are non-nuclear weapons that use a variety of means, including explosively driven generators, to emit an electromagnetic pulse similar to the E1 EMP from a nuclear weapon, except less energetic and of much shorter radius. The range of RF Weapons is rarely more than one kilometer.

RF Weapons can be built relatively inexpensively using commercially available parts and design information available on the internet. In 2000, the Terrorism Panel of the House Armed Services Committee conducted an experiment, hiring an electrical engineer and some students to try building an RFW on a modest budget, using design information from the internet and parts purchased at Radio Shack. They built two RF Weapons in one year, both successfully tested at the U.S. Army proving grounds at Aberdeen. One was built into a Volkswagen bus, designed to be driven down Wall Street to disrupt stock market computers and information systems and bring on a financial crisis. The other was designed to fit in the crate for a Xerox machine so it could be shipped to the Pentagon, sit in the mailroom, and burn out Defense Department computers.

EMP simulators that can be carried and operated by one man, and used as an RF Weapon, are available commercially. For example, one U.S. company advertises for sale an “EMP Suitcase” that looks exactly like a metal suitcase, can be carried and operated by one man, and generates 100,000 volts/meter over a short distance. The EMP Suitcase is not intended to be used as a weapon, but as an aid for designing factories that use heavy duty electronic equipment that emit electromagnetic transients, so the factory does not self-destruct.

But a terrorist, criminal, or madman, armed with the EMP Suitcase, could potentially destroy electric grid SCADAS or an EHV transformer and blackout a city. Thanks to RF Weapons, we have arrived at a place where the technological pillars of civilization for a major metropolitan area could be toppled by a single individual. The EMP Suitcase can be purchased without a license by anyone.

Terrorists armed with RF Weapons might use unclassified computer models to duplicate the U.S. FERC study and figure out which nine crucial transformer substations need to be attacked in order to blackout the entire national grid for weeks or months. RFWs would offer significant operational advantages over assault rifles and bombs. Something like the EMP Suitcase could be put in the trunk of a car, parked and left outside the fence of an EHV transformer or SCADA colony, or hidden in nearby brush or a garbage can, while the bad guys make a leisurely getaway. If the EMP fields are strong enough, it would be just as effective as, and far less conspicuous than, dropping a big bomb to destroy the whole transformer substation. Maximum effect could be achieved by penetrating the security fence and hiding the RF Weapon somewhere even closer to the target.

Some documented examples of successful attacks using Radio Frequency Weapons, and accidents involving electromagnetic transients, are described in the Department of Defense Pocket Guide for Security Procedures and Protocols for Mitigating Radio Frequency Threats (Technical Support Working Group, Directed Energy Technical Office, Dahlgren Naval Surface Warfare Center):

  • “In the Netherlands, an individual disrupted a local bank’s computer network because he was turned down for a loan. He constructed a Radio Frequency Weapon the size of a briefcase, which he learned how to build from the Internet. Bank officials did not even realize that they had been attacked or what had happened until long after the event.”
  • “In St. Petersburg, Russia, a criminal robbed a jewelry store by defeating the alarm system with a repetitive RF generator. Its manufacture was no more complicated than assembling a home microwave oven.”
  • “In Kzlyar, Dagestan, Russia, Chechen rebel commander Salman Raduyev disabled police radio communications using RF transmitters during a raid.”
  • “In Russia, Chechen rebels used a Radio Frequency Weapon to defeat a Russian security system and gain access to a controlled area.”
  • “Radio Frequency Weapons were used in separate incidents against the U.S. Embassy in Moscow to falsely set off alarms and to induce a fire in a sensitive area.”
  • “March 21-26, 2001, there was a mass failure of keyless remote entry devices on thousands of vehicles in the Bremerton, Washington, area…The failures ended abruptly as federal investigators had nearly isolated the source. The Federal Communications Commission (FCC) concluded that a U.S. Navy presence in the area probably caused the incident, although the Navy disagreed.”
  • “In 1999, a Robinson R-44 news helicopter nearly crashed when it flew by a high frequency broadcast antenna.”
  • “In the late 1980s, a large explosion occurred at a 36-inch diameter natural gas pipeline in the Netherlands. A SCADA system, located about one mile from the naval port of Den Helder, was affected by a naval radar. The RF energy from the radar caused the SCADA system to open and close a large gas flow-control valve at the radar scan frequency, resulting in pressure waves that traveled down the pipe and eventually caused the pipeline to explode.”
  • “In June 1999 in Bellingham, Washington, RF energy from a radar induced a SCADA malfunction that caused a gas pipeline to rupture and explode.”
  • “In 1967, the USS Forrestal was located at Yankee Station off Vietnam. An A4 Skyhawk launched a Zuni rocket across the deck. The subsequent fire took 13 hours to extinguish. 134 people died in the worst U.S. Navy accident since World War II. EMI [Electro-Magnetic Interference, Pry] was identified as the probable cause of the Zuni launch.”
  • North Korea used a Radio Frequency Weapon, purchased from Russia, to attack airliners and impose an “electromagnetic blockade” on air traffic to Seoul, South Korea’s capital. The repeated RFW attacks also disrupted communications and the operation of automobiles in several South Korean cities in December 2010; March 9, 2011; and April-May 2012, as reported in “Massive GPS Jamming Attack By North Korea” (GPSWORLD.COM, May 8, 2012).

Protecting the electric grid and other critical infrastructures from nuclear EMP attack will also protect them from the lesser threat posed by Radio Frequency Weapons.

Sabotage–Kinetic Attacks

Kinetic attacks are a serious threat to the electric grid and are clearly part of the game plan for terrorists and rogue states. Sabotage of the electric grid is perhaps the easiest operation for a terrorist group to execute and would be perhaps the most cost-effective means, requiring only high-powered rifles, for a very small number of bad actors to wage asymmetric warfare–perhaps against all 310 million Americans.  Terrorists have figured out that the electric grid is a major societal vulnerability.

Terror Blackout in Mexico. On the morning of October 27, 2013, the Knights Templar, a terrorist drug cartel in Mexico, attacked a big part of the Mexican grid, using small arms and bombs to blast electric substations. They blacked out the entire Mexican state of Michoacán, plunging 420,000 people into the dark and isolating them from help from the Federales. The Knights went into towns and villages and publicly executed local leaders opposed to the drug trade. Ironically, that evening in the United States, National Geographic aired a television docudrama, “American Blackout”, that accurately portrayed the catastrophic consequences of a cyber-attack that blacks out the U.S. grid for 10 days. The North American Electric Reliability Corporation and some utilities criticized “American Blackout” for being alarmist and unrealistic, apparently unaware that life had already anticipated art just across the porous border in Mexico, and months earlier still in the United States itself, with the Metcalf attack described below.

Terror Blackout of Yemen. On June 9, 2014, while world media attention was focused on the terror group Islamic State in Iraq and Syria (ISIS) overrunning northern Iraq, Al Qaeda in the Arabian Peninsula (AQAP) used mortars and rockets to destroy electric transmission towers to blackout all of Yemen, a nation of 16 cities and 24 million people. AQAP’s operation against the Yemen electric grid is the first time in history that terrorists have sunk an entire nation into blackout. The blackout went virtually unreported by the world press.

The Metcalf Attack (San Jose, California). On April 16, 2013, terrorists or professional saboteurs apparently practiced making an attack on the Metcalf transformer substation outside San Jose, California, which services a 450 megawatt power plant providing electricity to Silicon Valley and the San Francisco area. NERC and the utility Pacific Gas and Electric (PG&E), which owns Metcalf, claimed that the incident was merely an act of vandalism and discouraged press interest. Consequently, the national press paid nearly no attention to the Metcalf affair for nine months. Jon Wellinghoff, Chairman of the U.S. Federal Energy Regulatory Commission, conducted an independent investigation of Metcalf. He brought in the best of the best of U.S. special forces, the instructors who train the U.S. Navy SEALS. They concluded that the attack on Metcalf was a highly professional military operation, comparable to what the SEALS themselves would do when attacking a power grid.

Footprints suggested that a team of perhaps as many as six men executed the Metcalf operation. They knew about an underground communications tunnel at Metcalf and knew how to access it by removing a manhole cover (which required at least two men). They cut communications cables and the 911 cable to isolate the site. They had pre-surveyed firing positions. They used AK-47s, the favorite assault rifle of terrorists and rogue states. They knew precisely where to shoot to maximize damage to the 17 transformers at Metcalf. They escaped into the night just as the police arrived and have not been apprehended or even identified. They left no fingerprints anywhere, not even on the expended shell casings.

The Metcalf assailants only damaged but did not destroy the transformers–apparently deliberately. The Navy SEALs and U.S. FERC Chairman Wellinghoff concluded that the Metcalf operation was a “dry run,” like a military exercise: practice for a larger and more ambitious attack on the grid to be executed in the future. Military exercises never try to destroy the enemy, and they keep a low profile so that the potential victim is not moved to reinforce his defenses. For example, Russian strategic bomber exercises send only a few aircraft to probe U.S. air defenses in Alaska, and never actually launch nuclear-armed cruise missiles. They want to probe and test our air defenses–not scare us into strengthening those defenses.

Chairman Wellinghoff was aware of an internal study by U.S. FERC that concluded saboteurs could blackout the national electric grid for weeks or months by destroying just nine crucial transformer substations.

Much to his credit, Jon Wellinghoff became so alarmed by his knowledge of U.S. grid vulnerability, and the apparent NERC cover-up of the Metcalf affair, that he resigned his chairmanship to warn the American people in a story published by the Wall Street Journal in February 2014. The Metcalf story sparked a firestorm of interest in the press and investigations by Congress. Consequently, NERC passed, on an emergency basis, a new standard for immediately upgrading physical security for the national electric grid. PG&E promised to spend over $100 million over the next three years to upgrade physical security.

Months later, amid growing fears that ISIS might somehow act on its threats to attack America, on August 27, 2014, parties unknown again broke into the Metcalf transformer substation and evaded PG&E security guards and the police. PG&E claimed that the second Metcalf affair was, again, merely vandalism. Yet after NERC’s emergency new physical security standards and PG&E’s alleged massive investment in improved security, Metcalf should have been the Rock of Gibraltar of the North American electric grid. If terrorists or anyone else is planning an attack on the U.S. electric grid, Metcalf would be the perfect place to test the supposedly strengthened security of the national grid.

Does stolen equipment prove that Metcalf-2 was a burglary? In the world of spies and saboteurs, a mock burglary is a commonplace device for covering up an intelligence operation, quelling fears, and keeping the victim unprepared.

If PG&E is telling the truth, and the second successful operation against Metcalf was merely the work of vandals, then this is an engraved invitation to ISIS or Al Qaeda or rogue states to attack the U.S. electric grid. It means that all of PG&E’s and NERC’s vaunted security improvements cannot protect Metcalf from the stupidest of criminals, let alone from terrorists.

About one month later, on September 23, 2014, another investigation of PG&E security at transformer substations, including Metcalf, reported that the transformer substations are still not secure. Indeed, at one site a gate was left wide open. Former CIA Director R. James Woolsey, after reviewing the investigation results, concluded, “Overall, it looks like there is essentially no security.”

Why isn’t anything being done?

In the U.S. Congress, bipartisan bills with strong support, such as the GRID Act and the SHIELD Act, that would protect the electric grid from nuclear and natural EMP, have been stalled for a half-decade, blocked by corruption and lobbying by powerful utilities.

The U.S. Federal Energy Regulatory Commission has published interagency reports acknowledging that nuclear EMP attack is an existential threat against which the electric grid must be protected. But U.S. FERC claims to lack legal authority to require the North American Electric Reliability Corporation and the electric utilities to protect the grid. “Given the national security dimensions to this threat, there may be a need to act quickly to act in a manner where action is mandatory rather than voluntary and to protect certain information from public disclosure,” said Joseph McClelland, Director of FERC’s Office of Energy Projects, testifying in May 2011 before the Senate Energy and Natural Resources Committee. “The commission’s legal authority is inadequate for such action.” Others think U.S. FERC has sufficient legal authority to protect the grid, but lacks the will to do so because of an incestuous relationship with NERC.

NERC and the electric power industry deny that it is their responsibility to protect the grid from nuclear EMP attack. NERC President and CEO Gerry Cauley argued in his May 2011 testimony before the Senate Energy and Natural Resources Committee that this is not industry’s job but the Department of Defense’s. Mark Lauby, NERC’s reliability manager, is quoted by Peter Behr in his E&E News article (August 26, 2011) as saying that “…the terrorist scenario–foreseen as the launch of a crude nuclear weapon on a version of a SCUD missile from a ship off the U.S. coast–is the government’s responsibility, not industry’s.”

But DOD can protect the grid only by waging preventive wars against countries like Iran, North Korea, China and Russia, or by vast expansion and improvement of missile defenses costing tens of billions of dollars–none of which may stop the EMP threat.

The Department of Defense has no legal authority to EMP harden the privately owned electric grid. Such protection is supposed to be the job of NERC and the utilities.

Preventive wars would make an EMP attack more likely, perhaps inevitable. It is not worth spending thousands of lives and trillions of dollars on wars just so NERC and the utilities can avoid a small increase in electric bills for EMP hardening the grid. U.S. FERC estimates that EMP hardening would cost the average ratepayer an increase in their electric bill of 20 cents annually.
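
For scale, a back-of-the-envelope aggregate (the 150 million customer count is an illustrative assumption, not a U.S. FERC figure):

\[ 150 \times 10^{6}\ \text{ratepayers} \times \$0.20/\text{year} \approx \$30\ \text{million per year} \]

industry-wide, a trivial sum next to the roughly $6 billion cost of the one-day 2003 Northeast blackout cited below.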

NERC “Operational Procedures” Non-Solution. The North American Electric Reliability Corporation (NERC), the lobby for the electric power industry that is also supposed to set industry standards for grid security, claims it can protect the grid from geomagnetic super-storms by “operational procedures.” Operational procedures would rely on satellite early warning of an impending Carrington Event to allow grid operators to shift around electric loads, perhaps deliberately browning out or blacking out part or all of the grid in order to save it. NERC estimates operational procedures would cost the electric utilities almost nothing, about $200,000 annually.

But there is no command and control system for coordinating operational procedures among the 3,000 independent electric utilities in the United States.  Operational procedures routinely fail to prevent blackouts from normal terrestrial weather, like snowstorms and hurricanes. There is no credible basis for thinking that operational procedures alone would be able to cope with a geomagnetic super-storm–a threat unprecedented in the experience of NERC and the electric power industry.

The ACE satellite NERC proposes to use is aged and sometimes gives false warnings that are not a reliable basis for implementing operational procedures. While coronal mass ejections can be seen approaching Earth typically about three days before impact, the Carrington Event reached Earth in only 11 hours, and the ACE satellite cannot warn whether a geo-storm will hit the Earth until 20 to 30 minutes before impact. Quite recently, on September 19-20, 2014, the National Oceanic and Atmospheric Administration and NERC demonstrated again that they are unable to ascertain until shortly before impact whether a coronal mass ejection (CME) will cause a threatening geomagnetic storm on Earth.
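
The 20-to-30-minute figure follows from simple geometry. ACE orbits the Sun–Earth L1 point, roughly 1.5 million km sunward of Earth, so a fast CME crosses that final leg quickly. As a worked check (the 1,000 km/s speed is an assumed round number for a fast CME):

\[ t = \frac{d}{v} = \frac{1.5 \times 10^{6}\ \text{km}}{1{,}000\ \text{km/s}} = 1{,}500\ \text{s} = 25\ \text{minutes} \]

A Carrington-class ejection moving twice as fast would cut the warning to about 12 minutes.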

Ironically, on September 8-10, 2014, a week before this CME, a security conference on threats to the national electric grid met in San Francisco, where executives from the electric power industry credited themselves with building robust resilience into the power grid and congratulated their industry on exemplary performance in coping with and recovering from blackouts caused by hurricanes and other natural disasters. The thousands of Americans left homeless by Hurricanes Katrina and Sandy, and the hundreds of businesses lost or impoverished in New Orleans and New York City, would no doubt disagree.

The U.S. Government Accountability Office (GAO), if it had jurisdiction to grade electric grid reliability during hurricanes, would almost certainly give the utilities a failing grade. Ever since Hurricane Andrew in 1992, the U.S. GAO has found serious fault with efforts by the Federal Emergency Management Agency, the Department of Homeland Security, and the Department of Defense to rescue and recover the American people from every major hurricane. Blackout of the electric grid, of course, seriously impedes the capability of FEMA, DHS, and DOD to do anything.

Since the utilities regulate themselves through the North American Electric Reliability Corporation, their uncritical view of their own performance reinforces a “do nothing” attitude in the electric power industry.

For example, after the Great Northeast Blackout of 2003, it took NERC a decade to propose a new “vegetation management plan” to protect the national grid from tree branches. NERC has been even more resistant and slow to respond to other much more serious threats, including cyber-attack, sabotage, and natural EMP from geomagnetic storms.

Most alarming, NERC and the utilities do not appear to know their jobs, and are already in panic and despair over the challenges posed by severe weather, cyber threats, and geomagnetic storms. Peter Behr, in an article published in EnergyWire (September 12, 2014), reports that at an electric grid security summit, Gary Leidich, Board Chairman of the Western Electricity Coordinating Council–which oversees reliability and security for the Western Grid–appeared overwhelmed, as if he wanted to escape his job, lamenting: “Who is really responsible for reliability? And who has the authority to do something about it?”

“The biggest cyber threat is from an electromagnetic pulse, which in the military doctrines of our potential adversaries would be part of an all-out cyber war,” writes former Speaker of the House Newt Gingrich in his article “The Gathering Cyber Storm” (CNN, August 12, 2013). Gingrich warns that NERC “should lead, follow or get out of the way of those who are trying to protect our nation from a cyber catastrophe. Otherwise, the Congress that certified it as the electric reliability organization can also decertify it.”

Much to their credit, a few in the electric power industry understand the necessity of protecting the grid from nuclear EMP attack, have broken ranks with NERC, and are trying to meet the crisis. John Houston of CenterPoint Energy in Texas; Terry Boston of PJM, the largest grid in North America (located in the Midwest); and Con Ed in New York–all are trying to protect their grids from nuclear EMP.

State Governors and State Legislatures need to come to the rescue. States have a duty to their citizens to fill the gap in homeland security and public safety when the federal government and the utilities fail. State governments and their Public Utility Commissions have the legal authority and the moral obligation to, where necessary, compel the utilities to secure the grid against all hazards, and an obligation to help, oversee, and ensure that grid security is being done right by those utilities that act voluntarily. Failing to protect the grid from nuclear EMP attack is failing to protect the nation from all hazards.

Regulatory Malfeasance

As noted repeatedly elsewhere, Washington’s process for regulating the electric power industry has never worked well; in fact, it has always been broken. The electric power industry is the only civilian critical infrastructure that is allowed to regulate itself.

The North American Electric Reliability Corporation is the industry’s former trade association, and it continues to act as an industry lobby. NERC is not a U.S. government agency, and it does not represent the interests of the people. Under its charter, NERC answers to its “stakeholders”–the electric utilities that pay for NERC, including the high salaries of its executives and staff.

The U.S. Federal Energy Regulatory Commission, the U.S. government agency that is supposed to partner with NERC in protecting the national electric grid, has publicly testified before Congress that U.S. FERC lacks regulatory power to compel NERC and the electric power industry to protect the grid from natural and nuclear EMP and other threats. Consider the contrast in regulatory authority between the U.S. FERC and, as examples, the U.S. Federal Aviation Administration (FAA), the U.S. Department of Transportation (DOT), or the U.S. Food and Drug Administration (FDA):

  • FAA has regulatory power to compel the airline industry to ground aircraft considered unsafe, to change aircraft operating procedures considered unsafe, and to make repairs or improvements to aircraft in order to protect the lives of airline passengers.
  • DOT has regulatory power to compel the automobile industry to install safety glass, seatbelts, and airbags on cars in order to protect the lives of the driving public.
  • FDA has power to regulate the quality of food and drugs, and can ban under criminal penalty the sale of products deemed by the FDA to be unsafe to the public.

Unlike the FAA, DOT, FDA or any other U.S. government regulatory agency, the Federal Energy Regulatory Commission does not have legal authority to compel the industry it is supposed to regulate to act in the public interest. For example, U.S. FERC lacks legal power to direct NERC and the electric utilities to install blocking devices, surge arrestors, faraday cages or other protective devices to save the grid, and the lives of millions of Americans, from a natural or nuclear EMP catastrophe. Or so the FERC has testified to the Congress.

Congress has responded to this dilemma by introducing bipartisan bills, the SHIELD Act and the GRID Act, to empower U.S. FERC to protect the grid from an EMP catastrophe. Lobbying by NERC has stalled both bills for years. Currently, U.S. FERC only has the power to ask NERC to propose a standard to protect the grid. NERC standards are approved, or rejected, by the electric power industry. Historically, NERC typically takes years to develop standards that will pass industry approval. For example, NERC took until 2012 to propose a “vegetation management” standard to protect the grid from tree branches, after ruminating for a decade over the tree-branch-induced Great Northeast Blackout of 2003, which plunged 50 million Americans into the dark. Once NERC proposes a standard to U.S. FERC, FERC cannot modify the standard, but must accept or reject it as proposed. If U.S. FERC rejects the proposed standard, NERC goes back to the drawing board, and the process starts all over again. The NERC-FERC arrangement is a formula for thwarting effective U.S. government regulation of the electric power industry. Fortunately, Governors, State Legislatures and their Public Utility Commissions have legal power to compel utilities to protect the grid from natural and nuclear EMP and other threats.

Critics argue that the U.S. Federal Energy Regulatory Commission is corrupt–because of a too-cozy relationship with NERC and a revolving door between FERC and the electric power industry–and cannot be trusted to secure the grid, even if given legal powers to do so. U.S. FERC’s approval of NERC’s hollow standard for geomagnetic storms appears proof positive that Washington is too corrupt to be trusted.

NERC’s Hollow GMD Protection Standard

Observers serving on NERC’s Geo-Magnetic Disturbance (GMD) Task Force, which developed the NERC standard for grid protection against geomagnetic storms, have denounced the NERC GMD Standard and published papers exposing not merely that the Standard is inadequate, but that it is hollow–a pretended or fake Standard. These experts opposed to the NERC GMD Standard include the foremost authorities on geomagnetic storms and electric grid vulnerability in the Free World. See:

  • John G. Kappenman and Dr. William A. Radasky, Examination of NERC GMD Standards and Validation of Ground Models and Geo-Electric Fields Proposed in this NERC GMD Standard, Storm Analysis Consultants and Metatech Corporation, July 30, 2014 (Executive Summary appended to this chapter).
  • EIS Council Comments on Benchmark GMD Event for NERC GMD Task Force Consideration, Electric Infrastructure Security Council, May 21, 2014.
  • Thomas Popik and William Harris for The Foundation for Resilient Societies, Reliability Standard for Geomagnetic Disturbance Operations, Docket No. RM14-1-000, critiques submitted to U.S. FERC on March 24, July 21, and August 18, 2014.

Kappenman and Radasky, who served on the Congressional EMP Commission and are among the world’s foremost scientific and technical experts on geomagnetic storms and grid vulnerability, warn that NERC’s GMD Standard consistently underestimates the threat from geostorms: “When comparing…actual geo-electric fields with NERC model derived geo-electric fields, the comparisons show a systematic under-prediction in all cases of the geo-electric field by the NERC model.”

The Foundation for Resilient Societies, which includes on its Board of Advisors a brain trust of world-class scientific experts–including Dr. William Graham, who served as President Reagan’s Science Advisor, Deputy Administrator of NASA, and Chairman of the Congressional EMP Commission–concludes from its participation on the NERC GMD Task Force that NERC “cooked the books” to produce a hollow GMD Standard: “The electric utility industry clearly recognized in this instance how to design a so-called ‘reliability standard’ that, though foreseeably ineffective in a severe solar storm, would avert financial liability to the electric utility industry even while civil society and its courts might collapse from longer-term outages. In this instance and others, a key feature of the NERC standard-setting process was to progressively water down requirements until the proposed standard obviously benefitted the ballot participants and therefore could pass. In the process, any remaining public benefit was diluted beyond perceptibility…”

The several Foundation critiques identify numerous profound and obvious holes in what it describes as NERC’s “hollow” GMD Standard, and rightly castigate U.S. FERC for approving what is, in reality, a papier-mâché GMD Standard that would not protect the grid from a geomagnetic super-storm:

  • “FERC erred by approving a standard that exempts transmission networks with no transformers with a high side (wye-grounded) voltage at or above 200 kV when actual data and lessons learned from past operating incidents show significant adverse impacts of solar storms on equipment operating below 200 kV.”
  • “The exclusion of networks operating at 200kV and below is inconsistent with the prior bright-line definition of the Bulk Electric System” as defined by U.S. FERC.
  • “FERC erred by approving a standard that does not require instrumentation of electric utility networks during solar storm conditions when installation of GIC [Ground Induced Current–Pry] monitors would be cost-effective and in the public interest.”
  • “FERC erred by approving a standard that does not require utilities to perform the most rudimentary planning for solar storms, i.e., mathematical comparison of megawatt capacity of assets at risk during solar storms to power reserves.”
  • “FERC erred by concluding that sixteen Reliability Coordinators could directly communicate with up to 1,500 Transmission and Generator Operators during severe GMD events with a warning time of as little as 15 minutes and that Balancing Authorities and Generator Operators should not take action on their own because of possible lack of GIC data.”
  • “FERC erred by assuming that there would be reliable and prompt two-way communications between Reliability Coordinators and Generator Operators immediately before and during severe solar storms.”

The Foundation is also critical of U.S. FERC for approving a NERC GMD Standard that lacks transparency and accountability. The utilities are allowed to assess their own vulnerability to geomagnetic storms, to devise their own preparations, to invest as much or as little as they like in those preparations, and all without public scrutiny or review of utility plans by independent experts.

Dr. William Radasky, who holds the Lord Kelvin Medal for setting standards for protecting European electronics from natural and nuclear EMP, and John Kappenman, who helped design the ACE satellite upon which industry relies for early warning of geomagnetic storms, conclude that the NERC GMD Standard so badly underestimates the threat that “its resulting directives are not valid and need to be corrected.”

Kappenman and Radasky write: “These enormous model errors also call into question many of the foundation findings of the NERC GMD draft standard. The flawed geoelectric field model was used to develop the peak geo-electric field levels of the Benchmark model proposed in the standard. Since this model understates the actual geo-electric field intensity for small storms by a factor of 2 to 5, it would also understate the maximum geo-electric field by similar or perhaps even larger levels. Therefore, the flaw is entirely integrated into the NERC Draft Standard and its resulting directives are not valid and need to be corrected.” The excellent Kappenman-Radasky critique of the NERC GMD Standard represents the consensus view of all the independent observers who participated in the NERC GMD Task Force, including the author. The Kappenman-Radasky critique warns NERC and U.S. FERC that “Nature cannot be fooled!”

Perhaps most revelatory of U.S. FERC’s untrustworthiness: by approving a NERC GMD Standard that grossly underestimates the threat from geo-storms, U.S. FERC abandoned its own much more realistic estimate of the geo-storm threat. It is incomprehensible why U.S. FERC would ignore the findings of its own excellent interagency study–one of the most in-depth and meticulous studies of the EMP threat ever performed–which was coordinated with Oak Ridge National Laboratory, the Department of Defense, and the White House.

U.S. FERC’s preference for NERC’s “junk science” over U.S. FERC’s own excellent scientific assessment of the geo-storm threat can only be explained as incompetence or corruption or both.

What do we know about a nuclear EMP?

A high-altitude nuclear electromagnetic pulse attack is the most severe threat to the electric grid and other critical infrastructures, far more damaging than a geomagnetic super-storm, the worst case of severe weather, sabotage by kinetic attacks, or cyber-attack.  Not one major U.S. Government study dissents from the consensus that nuclear EMP attack would be catastrophic, and that protection is achievable and necessary.

There is more empirical data on nuclear EMP and its effects on electronic systems and infrastructures than almost any other threat, except severe weather. In addition to the 1962 STARFISH PRIME high-altitude nuclear test that generated EMP that damaged electronic systems in Hawaii and elsewhere, the Department of Defense has decades of atmospheric and underground nuclear test data relevant to EMP. And defense scientists have for over 50 years studied EMP effects on electronics in simulators. Most recently, the Congressional EMP Commission made its threat assessment by testing a wide range of modern electronics crucial to critical infrastructures in EMP simulators.

There is a scientific and strategic consensus behind the Congressional EMP Commission’s assessment that a nuclear EMP attack would have catastrophic consequences for the United States, but that “correction is feasible and well within the Nation’s means and resources to accomplish.” Every major U.S. Government study to examine the EMP threat and solutions concurs with the EMP Commission, including the Congressional Strategic Posture Commission (2009), the U.S. Department of Energy and North American Electric Reliability Corporation (2010), and the U.S. Federal Energy Regulatory Commission interagency report, coordinated with the White House, Department of Defense, and Oak Ridge National Laboratory (2010).

Russian Nuclear EMP Tests. STARFISH PRIME is not the only high-altitude nuclear EMP test. The Soviet Union (1961-1962) conducted a series of high-altitude nuclear EMP tests over what was then its own territory–not once but seven times–using a variety of warheads of different designs. The EMP fields from six tests covered Kazakhstan, an industrialized area larger than Western Europe. In 1994, during a thaw in the Cold War, Russia shared the results from one of its nuclear EMP tests, which used the warhead design least efficient for EMP–it collapsed the Kazakhstan electric grid, damaging transformers, generators and all other critical components. During the Kazakhstan high-altitude EMP experiments, the USSR tested some low-yield warheads, at least one probably an Enhanced Radiation Warhead emitting large quantities of gamma rays, which generate the E1 EMP electromagnetic shockwave. It is possible that the USSR developed its Super-EMP Warhead early in the Cold War as a secret super-weapon. The Soviets apparently quickly repaired the damage to Kazakhstan’s electric grid and other critical infrastructures, thereby proving definitively that with smart planning and good preparedness it is possible to survive and recover from an EMP catastrophe.

Other threats: Cyber-attack

Cyber-attacks–the use of computer viruses and hacking to invade and manipulate information systems and SCADAS–are almost universally described by U.S. political and military leaders as the greatest threat facing the United States. Every day, literally thousands of cyber-attacks are made on U.S. civilian and military systems, most of them designed to steal information. Joint Chiefs Chairman General Martin Dempsey warned on June 27, 2013, that the United States must be prepared for the revolutionary threat represented by cyber warfare (Claudette Roulo, DoD News, American Forces Press Service): “One thing is clear. Cyber has escalated from an issue of moderate concern to one of the most serious threats to our national security,” cautioned Chairman Dempsey. “We now live in a world of weaponized bits and bytes, where an entire country can be disrupted by the click of a mouse.”

Cyber Hype? Skeptics claim that the catastrophic scenarios envisioned for cyber warfare are grossly exaggerated, in part to justify costly cyber programs wanted by both the Pentagon and industry at a time of scarce defense dollars. Many of the skeptical arguments about the limitations of hacking and computer viruses are technically correct. However, it is not widely understood that foreign military doctrines define “information warfare” and “cyber warfare” as encompassing kinetic attacks and EMP attack–which is an existential threat to the United States.

Thomas Rid’s book Cyber War Will Not Take Place (Oxford University Press, 2013) exemplifies the viewpoint of a growing minority of highly talented cyber security experts and scholars who think there is a conspiracy of governments and industry to hype the cyber threat. Rid’s bottom line is that hackers and computer bugs are capable of causing inconvenience–not apocalypse. Cyber-attacks can deny services, damage computers selectively but probably not wholesale, and steal information, according to Rid. He does not rule out that future hackers and viruses could collapse the electric grid, concluding such a feat would be, not impossible, but nearly so.

In a 2012 BBC interview, Rid chastised then Secretary of Defense Leon Panetta for claiming that Iran’s Shamoon Virus, used against the U.S. banking system and Saudi Arabia’s ARAMCO, could foreshadow a “Cyber Pearl Harbor” and for threatening military retaliation against Iran. Rid told the BBC that the world has, “Never seen a cyber-attack kill a single human being or destroy a building.”

Cyber security expert Bruce Schneier claims, “The threat of cyberwar has been hugely hyped” to keep cyber security programs growing at the Pentagon’s Cyber Command and the Department of Homeland Security, and to keep new funding streams flowing to Lockheed Martin, Raytheon, CenturyLink, and AT&T, who are all part of the new cyber defense industry. The Brookings Institution’s Peter Singer wrote in November 2012: “Zero. That is the number of people who have been hurt or killed by cyber terrorism.” Ronald J. Deibert, author of Black Code: Inside the Battle for Cyberspace, a lab director and professor at the University of Toronto, accuses RAND and the U.S. Air Force of exaggerating the threat from cyber warfare.

Peter Sommer of the London School of Economics and Ian Brown of Oxford University, in Reducing Systemic Cybersecurity Risk, a study for Europe’s Organization for Economic Cooperation and Development, are far more worried about natural EMP from the Sun than computer viruses: “a catastrophic cyber incident, such as a solar flare that could knock out satellites, base stations and net hardware” makes computer viruses and hacking “trivial in comparison.”

The now-declassified Aurora experiment is the empirical basis for the claim that a computer virus might be able to collapse the national electric grid. In Aurora, a virus was inserted into the SCADAS running a generator, causing the generator to malfunction and eventually destroy itself. However, using a computer virus to destroy a single generator does not prove it is possible or likely that an adversary could destroy all or most of the generators in the United States. Aurora took a protracted time to burn out a generator–and no intervention by technicians attempting to save the generator was allowed, as would happen in a nationwide attack, if one could be engineered. Nor is there a single documented case of even a local blackout being caused in the United States by a computer virus or hacking–which surely would have happened by now if vandals, terrorists, or rogue states could easily attack U.S. critical infrastructures by hacking.

Even the Stuxnet Worm, the most successful computer virus so far–reportedly engineered jointly by the U.S. and Israel, according to White House sources, to attack Iran’s nuclear weapons program–proved a disappointment. Stuxnet succeeded in damaging only 10 percent of Iran’s centrifuges for enriching uranium, and did not stop or even significantly delay Tehran’s march toward the bomb. During the recently concluded Gaza War between Israel and Hamas, a major cyber campaign using computer bugs and hacking was launched against Israel by Hamas, the Syrian Electronic Army, Iran, and sympathetic hackers worldwide. The Gaza War was a Cyber World War against Israel.

The Institute for National Security Studies, at Tel Aviv University, in “The Iranian Cyber Offensive during Operation Protective Edge” (August 26, 2014) reports that the cyber-attacks caused inconvenience and in the worst case some alarm, over a false report that the Dimona nuclear reactor was leaking radiation: “…the focus of the cyber offensive…was the civilian internet. Iranian elements participated in what the C4I officer described as an attack unprecedented in its proportions and the quality of its targets….The attackers had some success when they managed to spread a false message via the IDF’s official Twitter account saying that the Dimona reactor had been hit by rocket fire and that there was a risk of a radioactive leak.” However, the combined hacking efforts of Hamas, SEA, Iran and hackers worldwide did not blackout Israel or significantly impede Israel’s war effort.

But tomorrow is always another day. Cyber warriors are right to worry that perhaps someday someone will develop the cyber bug version of an atomic bomb. Perhaps such a computer virus already exists in a foreign laboratory, awaiting use in a future surprise attack. On July 6, 2014, reports surfaced that Russian intelligence services allegedly infected 1,000 power plants in Western Europe and the United States with a new computer virus called Dragonfly. No one knows what Dragonfly is supposed to do. Some analysts think it was just probing the defenses of western electric grids. Others think Dragonfly may have inserted logic bombs into SCADAS that can disrupt the operation of electric power plants in a future crisis.

Cyber warfare is an existential threat to the United States, not because of computer viruses and hacking alone, but as envisioned in the military doctrines of potential adversaries, whose plans for an all-out Cyber Warfare Operation include the full spectrum of military capabilities–including EMP attack. In 2011, a U.S. Army War College study, In The Dark: Planning for a Catastrophic Critical Infrastructure Event, warned U.S. Cyber Command that U.S. doctrine should not focus on computer viruses to the exclusion of EMP attack and the full spectrum of other threats, as planned by potential adversaries.

Reinforcing the above, a Russian technical article on cyber warfare by Maxim Shepovalenko (Military-Industrial Courier July 3, 2013), notes that a cyber-attack can collapse “the system of state and military control…its military and economic infrastructure” because of “electromagnetic weapons…an electromagnetic pulse acts on an object through wire leads on infrastructure, including telephone lines, cables, external power supply and output of information.” Cyber warriors who think narrowly in terms of computer hacking and viruses invariably propose anti-hacking and anti-viruses as solutions. Such a solution will result in an endless virus versus anti-virus software arms race that may ultimately prove unaffordable and futile.

The worst-case cyber scenario envisions a computer virus infecting the SCADAS that regulate the flow of electricity into EHV transformers, damaging the transformers with overvoltage, and causing a protracted national blackout. But if the transformers are protected with surge arrestors against the worst threat–nuclear EMP attack–they would be unharmed by the worst possible system-generated overvoltage that any computer virus could produce. This EMP hardware solution would provide a permanent and relatively inexpensive fix to the extremely expensive and apparently endless virus-versus-anti-virus software arms race that is ongoing in the new cyber defense industry.
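
A crude scale comparison illustrates why hardware rated for nuclear EMP bounds every lesser overvoltage. Taking the roughly 50 kV/m peak E1 field cited in the open EMP literature, a first-order upper bound for the voltage coupled onto a conductor of effective length L is simply E times L (a deliberately pessimistic sketch that ignores line orientation, coupling efficiency, and frequency response; the 100 m length is an assumption for illustration):

\[ V \approx E \times L = 50\ \text{kV/m} \times 100\ \text{m} = 5\ \text{MV} \]

Arrestors sized with margin against transients of that order sit far above any switching-surge overvoltage possible on even the highest-voltage transmission systems, which operate at hundreds of kilovolts.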

 

Other threats: Severe Weather

Hurricanes, snow storms, heat waves and other severe weather pose a growing threat to the increasingly overtaxed, aged, and fragile national electric grid. So far, the largest and most protracted blackouts in the United States have been caused by severe weather. For example:

  • Hurricane Katrina (August 29, 2005), the worst natural disaster in U.S. history, blacked out New Orleans and much of Louisiana, with the blackout seriously impeding rescue and recovery efforts. Lawlessness swept the city. Electric power was not restored to parts of New Orleans for months, making some neighborhoods a criminal no-man’s-land too dangerous to live in. New Orleans has still not fully recovered its pre-Katrina population. Economic losses to the Gulf States region totaled $108 billion.
  • Hurricane Sandy (October 29, 2012) caused blackouts in parts of New York and New Jersey that in some places lasted weeks. Again, as in Katrina, the blackout gave rise to lawlessness and seriously impeded rescue and recovery. Thousands were rendered homeless in whole or in part because of the protracted blackout in some neighborhoods. Partial and temporary blackouts were experienced in 24 states. Total economic losses were $68 billion.
  • A heat wave on August 14, 2003, caused a power line to sag into a tree branch–a seemingly minor incident that began a series of cascading failures resulting in the Great Northeast Blackout of 2003. Some 50 million Americans were without electric power, including New York City. Although the grid largely recovered after a day, disruption of the nation’s financial capital was costly, with estimated economic losses of about $6 billion.
  • On September 18, 2014, a heat wave caused rolling brownouts and blackouts in northern California so severe that some radio commentators speculated that a terrorist attack on the grid might be underway.

 

What to do: All Hazards Strategy–EMP Protection is Key

Most of the general public and State governments are unaware of the EMP threat and that political gridlock in Washington has prevented the Federal government from implementing any of the several cost-effective plans for protecting the national electric grid.

All Hazards Protection: Most state governments are unaware that they can protect the grid within their State to shield their citizens from the catastrophic consequences of a national blackout, and that if they protect the grid from the worst threat, nuclear EMP, that will also help to protect the grid from other hazards such as a geomagnetic storm EMP, cyber-attack, sabotage, and severe weather.

States Should EMP Harden Their Grids. All states should prepare themselves for all hazards in this age of the Electronic Blitzkrieg. State governments and their Public Utility Commissions should exercise aggressive oversight to ensure that the transformer substations and electric grids in their states are safe and secure. The record of NERC and the electric utilities indicates they cannot be trusted to provide for the security of the grid. State governments can protect their grids from sabotage through the “all hazards” strategy that protects against the worst threat–nuclear EMP attack. For example, faraday cages to protect EHV transformers and SCADAS colonies from EMP would also screen these vital assets from view, so they could not be accurately targeted by high-powered rifles–accurate targeting being necessary to destroy them by small-arms fire. The faraday cages could be made of heavy metal or otherwise fortified for more robust protection against more powerful weapons, like rocket-propelled grenades.

Surge arrestors to protect EHV transformers and SCADAS from nuclear EMP would also protect the national grid from collapse due to sabotage. The U.S. FERC scenario in which terrorists succeed in collapsing the whole national grid by destroying merely nine transformer substations works only because of cascading overvoltage. When the nine key substations are destroyed, megawatts of electric power get suddenly dumped onto other transformers, which in turn become overloaded and fail, dumping yet more megawatts onto the grid. Cascading failures of more and more transformers ultimately cause a protracted national blackout, as the toy model below illustrates. This worst-case scenario for sabotage could not happen if the transformers and SCADAS were protected against nuclear EMP–which is a more severe threat than any possible system-generated overvoltage.
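
To make the cascade mechanism concrete, here is a minimal toy model in Python. All numbers are illustrative assumptions (not engineering data, and not the actual U.S. FERC study): each substation carries a load, failed substations dump their load evenly on the survivors, and any survivor pushed past its limit fails in the next round.

    # Toy model of cascading overload on a grid of identical substations.
    # Illustrative assumptions only: real grids have meshed topology,
    # protective relays, and operator intervention, none of which are modeled.

    def cascade(loads, capacities, destroyed):
        """Return the set of failed substations after the cascade settles."""
        failed = set(destroyed)
        while True:
            survivors = [i for i in range(len(loads)) if i not in failed]
            if not survivors:
                return failed  # total collapse
            shed = sum(loads[i] for i in failed)   # stranded megawatts
            extra = shed / len(survivors)          # dumped evenly on survivors
            newly_failed = {i for i in survivors
                            if loads[i] + extra > capacities[i]}
            if not newly_failed:
                return failed  # cascade has stopped
            failed |= newly_failed

    # 20 substations, each carrying 80 MW against a 100 MW limit.
    loads = [80.0] * 20
    capacities = [100.0] * 20
    for k in (3, 5):
        print(k, "destroyed ->", len(cascade(loads, capacities, range(k))), "failed")
    # 3 destroyed -> 3 failed   (the survivors absorb the dumped load)
    # 5 destroyed -> 20 failed  (past the threshold, the whole grid collapses)

The threshold behavior is the point of the U.S. FERC scenario: below some small number of destroyed substations the grid absorbs the loss; above it, the overload propagates everywhere unless protective hardware caps the overvoltage each surviving transformer can see.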

Critics rightly argue that NERC’s proposed operational procedures (satellite warnings) are a non-solution, designed as an excuse to avoid the expense of the only real solution–physically hardening the electric grid to withstand EMP.

NERC rejects the recommendation of the Congressional EMP Commission to physically protect the national electric grid from nuclear EMP attack by installing blocking devices, surge arrestors, faraday cages and other proven technologies. These measures would also protect the grid from the worst natural EMP from a geomagnetic super-storm like another Carrington Event. The estimated one-time cost–$2 billion–is what the United States gives away every year in foreign aid to Pakistan.

Yet Washington remains gridlocked between lobbying by NERC and the wealthy electric power industry on the one hand, and the recommendations of the Congressional EMP Commission and other independent scientific and strategic experts on the other hand. The States should not wait for Washington to act, but should act now to protect themselves.

While gridlock in Washington has prevented the Federal Government from protecting the national electric power infrastructure, threats to the grid–and to the survival of the American people–from EMP and other hazards are looming ever larger. Grid vulnerability to EMP and other threats is now a clear and present danger.

The Congressional EMP Commission warned that an “all hazards” strategy should be pursued to protect the electric grid and other critical infrastructures, which means trying to find common solutions that protect against more than one threat–ideally all threats. The “all hazards” strategy is the most practical and most cost-effective solution to protecting the electric grid and other critical infrastructures. Electric grid operation and vulnerability is critically dependent upon two key technologies: Extra-High Voltage (EHV) transformers and Supervisory Control and Data Acquisition Systems (SCADAS).

The Congressional EMP Commission recommended protecting the electric grid and other critical infrastructures against nuclear EMP as the best basis for an “all hazards” strategy. Nuclear EMP may not be as likely as other threats, but it is by far the worst, the most severe, threat.

The EMP Commission found that if the electric grid can be protected and quickly recovered from nuclear EMP, the other critical infrastructures can also be recovered, with good planning, quickly enough to prevent mass starvation and restore society to normalcy. If EHV transformers, SCADAS and other critical components are protected from the worst threat–nuclear EMP–then they will survive, or damage will be greatly mitigated, from all lesser threats, including natural EMP from geomagnetic storms, severe weather, sabotage, and cyber-attack.

The “all hazards” strategy recommended by the EMP Commission is not only the most cost-effective strategy–it is a necessary strategy.

New York and Massachusetts Protect Their Grids.

New York Governor Andrew Cuomo and Massachusetts Governor Deval Patrick would not agree that NERC’s performance during Hurricane Sandy was exemplary. Under the leadership of Governor Patrick, Massachusetts is spending $500 million to upgrade the security of its electric grid from severe weather. New York is spending a billion dollars to protect its grid from severe weather.

The biggest impediment to recovering an electric grid from hurricanes is not fallen electric poles and downed power lines. When part of the grid physically collapses, the resulting overvoltage can damage all kinds of transformers, including EHV transformers, as well as SCADAS and other vital grid components. Video footage shown on national television during Hurricane Sandy showed spectacular explosions and fires erupting from transformers and other vital grid components, caused by overvoltage.

If the grid is hardened to survive a nuclear EMP attack by installation of surge arrestors, it would easily survive overvoltage induced by hurricanes and other severe weather. This would cost a lot less than burying power lines underground and other measures being undertaken by New York and Massachusetts to fortify their grids against hurricanes–all of which will be futile if transformers and SCADAS are not protected against overvoltage.

Unfortunately, both States are probably spending a lot more than they have to by focusing on severe weather, instead of an “all hazards” strategy to protect their electric grids.

According to a senior executive of New York’s Consolidated Edison, briefing at the Electric Infrastructure Security Summit in London on July 1, 2014, Con Ed is taking some modest steps to protect part of the New York electric grid from nuclear EMP attack. This good news has not been reported anywhere in the press. I asked the Con Ed executive why New York is silent about beginning to protect its grid from nuclear EMP, since loudly advertising this prudent step could have a deterrent effect on potential adversaries planning an EMP attack. The Con Ed executive could offer no explanation.

New York City, because of its symbolism as the financial and cultural capital of the Free World, and perhaps because of its large Jewish population, has been the repeated target of terrorist attacks with weapons of mass destruction. A nuclear EMP attack centered over New York City, with the warhead detonated at an altitude of 30 kilometers, would cover all of the northeastern United States with an EMP field, including Massachusetts (see the estimate below). A practitioner of the New Lightning War may be more likely to exploit a hurricane, blizzard, or heat wave than a geomagnetic storm when launching a coordinated cyber, sabotage, and EMP attack. Terrestrial bad weather is more commonplace than bad space weather.
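
The coverage claim is consistent with standard line-of-sight (tangent-radius) geometry for a high-altitude burst; this estimates only where the EMP field reaches, not its strength. With Earth radius R_E of about 6,371 km and burst altitude h = 30 km:

\[ R \approx \sqrt{2 R_E h} = \sqrt{2 \times 6371 \times 30}\ \text{km} \approx 620\ \text{km} \]

A 620 km radius centered on New York City reaches past Boston and Washington, D.C., covering Massachusetts and the rest of the Northeast.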

PETER VINCENT PRY is Executive Director of the EMP Task Force on National and Homeland Security, a Congressional Advisory Board dedicated to achieving protection of the United States from electromagnetic pulse (EMP), cyber-attack, mass destruction terrorism and other threats to civilian critical infrastructures on an accelerated basis. Dr. Pry also is Director of the United States Nuclear Strategy Forum, an advisory board to Congress on policies to counter Weapons of Mass Destruction. Dr. Pry served on the staffs of the Congressional Commission on the Strategic Posture of the United States (2008-2009); the Commission on the New Strategic Posture of the United States (2006-2008); and the Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack (2001-2008). Dr. Pry served as Professional Staff on the House Armed Services Committee (HASC) of the U.S. Congress, with portfolios in nuclear strategy, WMD, Russia, China, NATO, the Middle East, Intelligence, and Terrorism (1995-2001). While serving on the HASC, Dr. Pry was chief advisor to the Vice Chairman of the House Armed Services Committee and the Vice Chairman of the House Homeland Security Committee, and to the Chairman of the Terrorism Panel. Dr. Pry played a key role: running hearings in Congress that warned terrorists and rogue states could pose an EMP threat, establishing the Congressional EMP Commission, helping the Commission develop plans to protect the United States from EMP, and working closely with senior scientists who first discovered the nuclear EMP phenomenon. Dr. Pry was an Intelligence Officer with the Central Intelligence Agency responsible for analyzing Soviet and Russian nuclear strategy, operational plans, military doctrine, threat perceptions, and developing U.S. paradigms for strategic warning (1985-1995). He also served as a Verification Analyst at the U.S. Arms Control and Disarmament Agency responsible for assessing Soviet compliance with strategic and military arms control treaties (1984-1985). Dr. Pry has written numerous books on national security issues, including Apocalypse Unknown: The Struggle To Protect America From An Electromagnetic Pulse Catastrophe; Electric Armageddon: Civil-Military Preparedness For An Electromagnetic Pulse Catastrophe; War Scare: Russia and America on the Nuclear Brink; Nuclear Wars: Exchanges and Outcomes; The Strategic Nuclear Balance: And Why It Matters; and Israel’s Nuclear Arsenal. Dr. Pry often appears on TV and radio as an expert on national security issues. The BBC made his book War Scare into a two-hour TV documentary Soviet War Scare 1983 and his book Electric Armageddon was the basis for another TV documentary Electronic Armageddon made by the National Geographic.

