Far out power #1: human fat, playgrounds, solar wind towers, perpetual motion, thermal depolymerization

Preface. Plans for hydrogen, wind, solar, wave, and all the other re-buildable contraptions that use fossil fuels in every step of their short 15-25 year life cycles, and hence are non-renewable, are just as silly as the ideas below. Yet these schemes, with negative energy return and unable to reproduce themselves without fossil fuels, are written about in respectable scientific journals, unlike the proposals below.

I’ve been writing about this since 2001; now Michael Moore has produced a film, “Planet of the Humans,” that explains it as well.

Alice Friedemann www.energyskeptic.com author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer), “Barriers to Making Algal Biofuels,” and “Crunch! Whole Grain Artisan Chips and Crackers.” Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report


Liposuction fat

Auckland, New Zealand, adventurer Peter Bethune plans to break the round-the-world powerboat speed record in a boat powered by biodiesel fuel partly manufactured from human fat. The lean Mr. Bethune had about three ounces of fat extracted from his body yesterday in a liposuction procedure, and he is seeking volunteers to donate more (Schouten 2005).

Mr. Bethune thinks the use of human fat as an energy source has some potential. “There’s an interesting business model: link a biodiesel plant with the cosmetic surgeons,” he says. “In Auckland we produce about 330 pounds of fat per week from liposuction, which would make about 40 gallons of fuel. If it is going to be chucked out, why not?” (Schouten 2005)

At an Exxon conference, the Yes Men pulled a prank: they gave a presentation on a new fuel, Vivoleum, supposedly made from humans killed by climate change, and handed out hundreds of candles made of human hair that smelled like dead people (Yes Men 2015).
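Bethune’s numbers roughly check out. A back-of-envelope sketch, assuming a fat density of about 0.92 kg/L and a transesterification volume yield of roughly 90% (neither figure is from the article):

```python
# Sanity check of the Schouten (2005) figures: does 330 lb of liposuction
# fat per week plausibly yield ~40 gallons of biodiesel?
# Assumed values (not from the article): fat density ~0.92 kg/L, and a
# ~90% oil-to-biodiesel volume yield from transesterification.
LB_TO_KG = 0.4536
L_PER_GALLON = 3.785

fat_kg = 330 * LB_TO_KG              # ~150 kg of fat per week
fat_litres = fat_kg / 0.92           # ~163 L of oil equivalent
biodiesel_litres = fat_litres * 0.9  # ~146 L after conversion losses
biodiesel_gallons = biodiesel_litres / L_PER_GALLON

print(f"{biodiesel_gallons:.0f} gallons")  # ~39 gallons, close to the quoted 40
```

Under these assumptions the weekly yield comes out near 39 gallons, consistent with the 40 gallons quoted.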

Playground power

The only place I could find this actually existing is Ghana, where Empower Playgrounds provides merry-go-rounds to schools that generate and store electricity as they are spun (Brownlee 2013).

Perpetual motion

Perpetual motion violates the laws of physics and thermodynamics; even the patent office got wise and won’t accept applications for such machines (Wikipedia; Park 2000).

Thermal depolymerization

Garbage in landfills can be turned into biogas. But as energy declines there will be less and less garbage, not only because there won’t be the fuel to haul it to a landfill, but because people will be burning anything they can get their hands on to cook and heat with.

Solar Wind Towers (Slav 2019)

More than 30 years ago a giant tower was built in Manzanares, Spain, to produce electricity in a way that at the time must have seemed even more eccentric than it does now: by harnessing the power of air movement. The Manzanares tower was, sadly, toppled by a storm. Since then, several other firms have tried to replicate the idea, but none has succeeded. Why?

The idea behind the so-called solar wind towers is pretty straightforward. The more popular version is the solar updraft tower, which works as follows:

On the ground, around the hollow tower, there is a solar energy collector—a transparent surface suspended a little above ground—which heats the air underneath.

As the air heats up it becomes lighter than the cold air around it, so it is drawn into the hollow tower, also called a solar chimney, and rises up it to escape through the top. In the process it drives a number of wind turbines located around the base of the tower. The main benefit over other renewable technologies? Doing away with the intermittency of PV solar, since the air beneath the collector can stay warm even when the sun is not shining.

But the cost of building one is simply too high, and investors are wary of the engineering problems that come with the very tall towers required (the taller the better).


Brownlee, J. 2013. A Merry-Go-Round That Turns The Power Of Play Into Electricity. fastcompany.

Park, R. 2000. Perpetual Motion: still going around. Washington Post.

Schouten, H. 2005. Earthrace biofuel promoter to power boat using human fat. calorielab.com

Slav, I. 2019. The fatal flaw in a perfect energy solution. oilprice.com


How a pandemic could bring down civilization

Preface. Some excerpts from the article below:

“The fact is that the best way for people to avoid the virus will be to stay home. But if everyone does this – or if too many people try to stockpile supplies after a crisis begins – the impact of even a relatively minor pandemic could quickly multiply.

Especially vital are “hubs” – the people whose actions link all the rest. Take truck drivers. When a strike blocked petrol deliveries from the UK’s oil refineries for 10 days in 2000, nearly a third of motorists ran out of fuel, some train and bus services were cancelled, shops began to run out of food, hospitals were reduced to running minimal services, hazardous waste piled up, and bodies went unburied. Afterwards, a study by Alan McKinnon of Heriot-Watt University in Edinburgh, UK, predicted huge economic losses and a rapid deterioration in living conditions if all road haulage in the UK shut down for just a week.

What would happen in a pandemic when many truckers are sick, dead or too scared to work?  Even a small impact on road haulage would quickly have severe knock-on effects [because of] just-in-time delivery. Over the past few decades, people who use or sell commodities from coal to aspirin have stopped keeping large stocks, because to do so is expensive. They rely instead on frequent small deliveries.

Cities typically have only three days’ worth of food, and the old saying about civilizations being just three or four meals away from anarchy is taken seriously by security agencies such as MI5 in the UK.

Interdependencies: Coal mines need electricity to keep working. Pumping oil through pipelines and water through mains also requires electricity. Making electricity depends largely on coal; getting coal depends on electricity; they all need refineries and key people; the people need transport, food and clean water. If one part of the system starts to fail, the whole lot could go. Hydro and nuclear power are less vulnerable to disruptions in supply, but they still depend on highly trained staff. With no electricity, shops will be unable to keep food refrigerated even if they get deliveries. Their tills won’t work either. Many consumers won’t be able to cook what food they do have. With no chlorine, water-borne diseases could strike just as it becomes hard to boil water. Communications could start to break down as radio and TV broadcasters, phone systems and the internet fall victim to power cuts and absent staff. This could cripple the global financial system, right down to local cash machines, and will greatly complicate attempts to maintain order and get systems up and running again.”

The covid-19 pandemic has revealed the fragility of depending on just-in-time deliveries, which keep unused items from sitting in warehouses or stores. In the pandemic, this made it hard to restock empty shelves quickly enough. Even our lifestyles are “just-in-time”: we pick up a day or two of groceries at a local store rather than stocking up, for reasons such as small refrigerators or eating out a lot. Our highly efficient systems have no slack, redundancy, resilience, or spare capacity. They work perfectly well in perfect times, but are quite fragile in a pandemic.



MacKenzie, D. April 5, 2008. Will a pandemic bring down civilization? NewScientist.

For years we have been warned that a pandemic is coming. It could be flu, it could be something else. We know that lots of people will die. As terrible as this will be, on an ever more crowded planet, you can’t help wondering whether the survivors might be better off in some ways. Wouldn’t it be easier to rebuild modern society into something more sustainable if, perish the thought, there were fewer of us?

Yet would life ever return to something resembling normal after a devastating pandemic? Virologists sometimes talk about their nightmare scenarios – a plague like ebola or smallpox – as “civilization ending”. Surely they are exaggerating. Aren’t they?

Many people dismiss any talk of collapse as akin to the street-corner prophet warning that the end is nigh. In the past couple of centuries, humanity has innovated its way past so many predicted plagues, famines and wars – from Malthus to Dr Strangelove – that anyone who takes such ideas seriously tends to be labeled a doom-monger.

There is a widespread belief that our society has achieved a scale, complexity and level of innovation that make it immune from collapse. “It’s an argument so ingrained both in our subconscious and in public discourse that it has assumed the status of objective reality,” writes biologist and geographer Jared Diamond of the University of California, Los Angeles, author of the 2005 book Collapse. “We think we are different.”
Ever more vulnerable

A growing number of researchers, however, are coming to the conclusion that far from becoming ever more resilient, our society is becoming ever more vulnerable. In a severe pandemic, the disease might only be the start of our problems.

No scientific study has looked at whether a pandemic with a high mortality rate could cause social collapse – at least none that has been made public. The vast majority of plans for weathering a pandemic fail even to acknowledge that crucial systems might collapse, let alone take it into account.

There have been many pandemics before, of course. In 1348, the Black Death killed about a third of Europe’s population. Its impact was huge, but European civilization did not collapse. After the Roman empire was hit by a plague with a similar death rate around AD 170, however, the empire tipped into a downward spiral towards collapse. Why the difference? In a word: complexity.

In the 14th century, Europe was a feudal hierarchy in which more than 80% of the population were peasant farmers. Each death removed a food producer, but also a consumer, so there was little net effect. “In a hierarchy, no one is so vital that they can’t be easily replaced,” says Yaneer Bar-Yam, head of the New England Complex Systems Institute in Cambridge, Massachusetts. “Monarchs died, but life went on.”

Individuals matter

The Roman empire was also a hierarchy, but with a difference: it had a huge urban population – not equaled in Europe until modern times – which depended on peasants for grain, taxes and soldiers. “Population decline affected agriculture, which affected the empire’s ability to pay for the military, which made the empire less able to keep invaders out,” says anthropologist and historian Joseph Tainter at Utah State University in Logan. “Invaders in turn further weakened peasants and agriculture.”

A high-mortality pandemic could trigger a similar result now, Tainter says. “Fewer consumers mean the economy would contract, meaning fewer jobs, meaning even fewer consumers. Loss of personnel in key industries would hurt too.”

Bar-Yam thinks the loss of key people would be crucial. “Losing pieces indiscriminately from a highly complex system is very dangerous,” he says. “One of the most profound results of complex systems research is that when systems are highly complex, individuals matter.”

The same conclusion has emerged from a completely different source: tabletop “simulations” in which political and economic leaders work through what would happen as a hypothetical flu pandemic plays out. “One of the big ‘Aha!’ moments is always when company leaders realize how much they need key people,” says Paula Scalingi, who runs pandemic simulations for the Pacific Northwest economic region of the US. “People are the critical infrastructure.”
Vital hubs

Especially vital are “hubs” – the people whose actions link all the rest. Take truck drivers. When a strike blocked petrol deliveries from the UK’s oil refineries for 10 days in 2000, nearly a third of motorists ran out of fuel, some train and bus services were cancelled, shops began to run out of food, hospitals were reduced to running minimal services, hazardous waste piled up, and bodies went unburied. Afterwards, a study by Alan McKinnon of Heriot-Watt University in Edinburgh, UK, predicted huge economic losses and a rapid deterioration in living conditions if all road haulage in the UK shut down for just a week.

What would happen in a pandemic when many truckers are sick, dead or too scared to work? Even if a pandemic is relatively mild, many might have to stay home to care for sick family or look after children whose schools are closed. Even a small impact on road haulage would quickly have severe knock-on effects.

One reason is just-in-time delivery. Over the past few decades, people who use or sell commodities from coal to aspirin have stopped keeping large stocks, because to do so is expensive. They rely instead on frequent small deliveries.

Cities typically have only three days’ worth of food, and the old saying about civilizations being just three or four meals away from anarchy is taken seriously by security agencies such as MI5 in the UK.

In the US, plans for dealing with a pandemic call for people to keep three weeks’ worth of food and water stockpiled. Some planners think everyone should have at least 10 weeks’ worth. [My comment: Here in earthquake prone California, the recommendation is 3 DAYS. On the anniversary of the 1906 earthquake in 2012, the newspapers polled people on how many days of food and water they have stored, and less than half had 3 days worth].

How long would your stocks last if shops emptied and your water supply dried up? Even if everyone were willing, US officials warn that many people might not be able to afford to stockpile enough food.

Two-day supply

Hospitals rely on daily deliveries of drugs, blood and gases. “Hospital pandemic plans fixate on having enough ventilators,” says public health specialist Michael Osterholm at the University of Minnesota in Minneapolis, who has been calling for broader preparation for a pandemic. “But they’ll run out of oxygen to put through them first. No hospital has more than a two-day supply.” Equally critical is chlorine for water purification plants.

It’s not only absentee truck drivers that could cripple the transport system; new drivers can be drafted in and trained fairly quickly, after all. Trucks need fuel, too. What if staff at the refineries that produce it don’t show up for work?

Some models suggest absenteeism sparked by a 1918-type pandemic could cut the workforce by half at the peak of a pandemic wave.
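A minimal epidemic sketch (not the models the article cites) shows how an outbreak translates into peak simultaneous absenteeism: workers absent are those currently infected plus, under an assumption used here, one caregiver kept home per sick person. The parameters are illustrative, loosely 1918-like:

```python
# Minimal SIR epidemic model (illustrative only, not the cited models):
# how much of the workforce is out sick at the peak of a wave?
# Parameters are loosely 1918-like: R0 ~ 2, ~5-day infectious period.
def sir_peak(r0=2.0, infectious_days=5.0, days=300, dt=0.1):
    gamma = 1.0 / infectious_days
    beta = r0 * gamma
    s, i, r = 0.999, 0.001, 0.0   # susceptible, infected, recovered fractions
    peak_i = i
    for _ in range(int(days / dt)):  # simple Euler integration
        new_inf = beta * s * i * dt
        rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - rec, r + rec
        peak_i = max(peak_i, i)
    return peak_i

peak = sir_peak()
# If each sick person also keeps one caregiver home (an assumption),
# absenteeism at the peak is roughly double the infected fraction.
print(f"peak infected: {peak:.0%}, rough absenteeism: {2 * peak:.0%}")
```

Infection alone peaks on the order of 15% of the population; caregiving, school closures and fear (which this toy model omits) are what push the cited estimates toward half the workforce.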

Critical infrastructure

All the companies that provide the critical infrastructure of modern society – energy, transport, food, water, telecoms – face similar problems if key workers fail to turn up. According to US industry sources, one electricity supplier in Texas is teaching its employees “virus avoidance techniques” in the hope that they will then “experience a lower rate of flu onset and mortality” than the general population.

The fact is that the best way for people to avoid the virus will be to stay home. But if everyone does this – or if too many people try to stockpile supplies after a crisis begins – the impact of even a relatively minor pandemic could quickly multiply.

Planners for pandemics tend to overlook the fact that modern societies are becoming ever more tightly connected, which means any disturbance can cascade rapidly through many sectors. For instance, many businesses have contingency plans that count on some people working online from home. Models show there won’t be enough bandwidth to meet demand.

And what if the power goes off? This is where complex interdependencies could prove disastrous. Refineries make diesel fuel not only for trucks but also for the trains that deliver coal to electricity generators, which now usually have only 20 days’ reserve supply, Osterholm notes. Coal-fired plants supply 30% of the UK’s electricity, 50% of the US’s and 85% of Australia’s.


The coal mines need electricity to keep working. Pumping oil through pipelines and water through mains also requires electricity. Making electricity depends largely on coal; getting coal depends on electricity; they all need refineries and key people; the people need transport, food and clean water. If one part of the system starts to fail, the whole lot could go. Hydro and nuclear power are less vulnerable to disruptions in supply, but they still depend on highly trained staff.

With no electricity, shops will be unable to keep food refrigerated even if they get deliveries. Their tills won’t work either. Many consumers won’t be able to cook what food they do have. With no chlorine, water-borne diseases could strike just as it becomes hard to boil water. Communications could start to break down as radio and TV broadcasters, phone systems and the internet fall victim to power cuts and absent staff. This could cripple the global financial system, right down to local cash machines, and will greatly complicate attempts to maintain order and get systems up and running again.

Even if we manage to struggle through the first few weeks of a pandemic, long-term problems could build up without essential maintenance and supplies. Many of these problems could take years to work their way through the system. For instance, with no fuel and markets in disarray, how do farmers get the next harvest in and distributed?
Closing borders

As a plague takes hold, some countries may be tempted to close their borders. But quarantine is not an option any more. “These days, no country is self-sufficient for everything,” says Lay. “The worst mistake governments could make is to isolate themselves.” The port of Singapore, a crucial shipping hub, plans to close in a pandemic only as a last resort, he says. Yet action like this might not be enough to prevent international trade being paralysed as other ports close for fear of contagion or for lack of workers, as ships’ crews sicken and exporters’ assembly lines grind to a halt without their own staff, power, transport or fuel and supplies.

Osterholm warns that most medical equipment and 85% of US pharmaceuticals are made abroad, and this is just the start. Consider food packaging. Milk might be delivered to dairies if the cows get milked and there is fuel for the trucks and power for refrigeration, but it will be of little use if milk carton factories have ground to a halt or the cartons are an ocean away.

“No one in pandemic planning thinks enough about supply chains,” says Osterholm. “They are long and thin, and they can break.” When Toronto was hit by SARS in 2003, the major surgical mask manufacturers sent everything they had, he says. “If it had gone on much longer they would have run out.”

The trend is for supply chains to get ever longer, to take advantage of economies of scale and the availability of cheap labor. Big factories produce goods more cheaply than small ones, and they can do so even more cheaply in countries where labor is cheap.
Flawed assumptions

Disaster planners usually focus on single-point events: industrial accidents, hurricanes or even a nuclear attack. But a pandemic happens everywhere at the same time, rendering many such plans useless.

The main flawed assumption concerns how serious a pandemic could be. Many national plans are based on mortality rates from the mild 1957 and 1968 pandemics. “No government pandemic plans consider the possibility that the death rate might be higher than in 1918,” says Tim Sly of Ryerson University in Toronto, Canada.
Death rate

The 1918 scenario assumes that around 3% of those who fall ill die. Of all the people known to have caught H5N1 bird flu so far, 63% have died. “It seems negligent to assume that H5N1, if it goes pandemic, will necessarily become less deadly,” says Sly. And flu is far from the only viral threat we face.

The ultimate question is this: what if a pandemic does have huge knock-on effects? What if many key people die, and many global balancing acts are disrupted? Could we get things up and running again? “Much would depend on the extent of the population decline,” says Tainter. “Possibilities range from little effect to a mild recession to a major depression to a collapse.”


Fall of Akkadian empire due to climate change

Preface. Any civilization or region that survives energy decline must then survive climate change for many centuries. As for the shifting wind systems that collapsed the Akkadian empire, similar disruption is already happening:

“Greenhouse gases are increasingly disrupting the jet stream, a powerful river of winds that steers weather systems in the Northern Hemisphere. That’s causing more frequent summer droughts, floods and wildfires, a new study says. The findings suggest that summers like 2018, when the jet stream drove extreme weather on an unprecedented scale across the Northern Hemisphere, will be 50% more frequent by the end of the century if emissions of carbon dioxide and other climate pollutants from industry, agriculture and the burning of fossil fuels continue at a high rate” (Berwyn 2018).



Bressan, D. 2019. Climate Change Caused the World’s First Empire To Collapse. Forbes

The Akkadian Empire was the first ancient empire of Mesopotamia, centered around the lost city of Akkad. The reign of Akkad is sometimes regarded as the first empire in history, as it developed a central government and elaborate bureaucracy to rule over a vast area comprising modern Iraq, Syria, parts of Iran and central Turkey. Established around 4,600 years ago, it abruptly collapsed two centuries later as settlements were suddenly abandoned. New research published in the journal Geology argues that shifting wind systems contributed to the demise of the empire.

The region of the Middle East is characterized by strong northwesterly winds known locally as shamals. This weather effect occurs one or more times a year. The resulting wind typically creates large sandstorms that impact the climate of the area. To reconstruct the temperature and rainfall patterns of the area around the ancient metropolis of Tell-Leilan, the researchers sampled 4,600- to 3,000-year-old fossil Porites corals, deposited by an ancient tsunami on the northeastern coast of Oman.

The genus Porites builds a stony skeleton using the mineral aragonite (CaCO3). Studying the chemical and isotopic signatures of the carbon and oxygen used by the living coral, it is possible to reconstruct the sea-surface temperature conditions and so the precipitation and evaporation balance of a region located near the sea.
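The paleothermometry described above can be sketched with a simple calibration. Coral aragonite δ18O falls roughly 0.2 per mil for each degree Celsius of warming, once the seawater δ18O (which tracks the evaporation/precipitation balance) is accounted for. The constants below are typical Porites values assumed for illustration, not figures from the Geology paper:

```python
# Sketch of the coral delta-18O paleothermometer: aragonite delta-18O
# decreases ~0.2 per mil per deg C of warming, after correcting for
# seawater delta-18O. SLOPE is a typical Porites calibration value,
# assumed here rather than taken from the paper.
SLOPE = -0.20  # per mil per deg C

def sst_anomaly(d18o_coral, d18o_seawater):
    """Temperature anomaly (deg C) implied by a coral delta-18O shift,
    relative to a reference coral grown under baseline conditions."""
    return (d18o_coral - d18o_seawater) / SLOPE

# Example: a +0.3 per mil enrichment with unchanged seawater composition
# implies ~1.5 deg C of cooling (or drier, more evaporative conditions).
print(f"{sst_anomaly(0.3, 0.0):+.1f} deg C")
```

The ambiguity in the comment is the real interpretive problem: an enriched signal can mean cooler water or a drier, more evaporative climate, which is why the researchers pair the isotopes with other chemical tracers.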

The fossil evidence shows that there was a prolonged winter shamal season, accompanied by frequent shamal days, lasting from 4,500 to 4,100 years ago, coinciding with the collapse of the Akkadian empire 4,400 years ago. The impact of the dust storms and lack of rainfall would have caused major agricultural problems, possibly leading to famine and social instability. Weakened from the inside, the Akkadian Empire became an easy target for the many opportunistic tribes living nearby. Hostile invasions, helped by the shifting climate, finally brought an end to the first empire in history.

The collapse of the Akkadian Empire also coincides with the proposed onset of the Meghalayan Age, an age marked by mega-droughts on a global scale that crushed a number of civilizations worldwide.


Berwyn, B. 2018. Global Warming Is Messing with the Jet Stream. That Means More Extreme Weather. A new study links the buildup of greenhouse gas emissions to more frequent heat waves, floods and droughts in the Northern Hemisphere. insideclimatenews.org


Nuclear reactor issues

Preface.  There are half a dozen articles below. Although safety and disposal of nuclear waste ought to be the main reasons why no more plants should be built, what will really stop them is because it takes years to get permits and $8.5–$20 billion in capital must be raised for a new 3400 MW nuclear power plant (O’Grady 2008). This is almost impossible when a much cheaper and much safer 3400 MW natural gas plant can be built for $2.5 billion in half the time or less.

U.S. nuclear power plants are old and in decline. By 2030, nuclear power might supply just 10% of U.S. electricity, half of the 20% it supplies now, because 38 reactors producing a third of nuclear power are past their 40-year life span, and another 33 reactors producing another third are over 30 years old. Although some will have their licenses extended, 37 reactors that produce half of nuclear power are at risk of closing because of economics, breakdowns, unreliability, long outages, safety, and expensive post-Fukushima retrofits (Cooper 2013).

If you’ve read the nuclear reactor hazards paper or my summary of it, then you understand why there will continually be accidents like Fukushima and Chernobyl.  That makes investors and governments fearful of spending billions of dollars to build nuclear plants.

Nor, as oil declines, will people be willing to spend precious oil on a nuclear power plant that could take up to 10 years to build, when that oil will be more needed for tractors to plant and harvest food and for trucks to deliver the food to cities (electric power can’t do that; tractors and trucks have to run on oil).

And if we are dumb enough to try, we’ll smack into the brick wall of Peak Uranium.


Nuclear Safety in the news

The International Atomic Energy Agency is supposed to keep track of all the nuclear incidents in the world, but if you go to their incident report page, you’ll notice that the Turkey Point reactor issues in the March 22, 2016 article aren’t mentioned, and the British newspaper The Guardian also says that their list is incomplete. Wikipedia is very much out of date, but has some fairly long lists of nuclear problems. The NRDC has a good deal of information, for instance an article called “What if the Fukushima nuclear fallout crisis had happened here?” where you can see how hard hit your home would be if the nearest nuclear reactor suffered a similar level of disaster.

Dujmovic, J. 2019. Think fossil fuels are bad? Nuclear energy is even worse. MarketWatch

Not long ago, I wrote about nuclear plants and the large number of “incidents” (many of which go under the radar) that occur every year, despite the upgrades, updates, technological advancements and research that are put into nuclear energy.

Researchers from the Swiss Federal Institute of Technology have come up with an unsettling discovery. Using the most complete and up-to-date list of nuclear accidents to predict the likelihood of another nuclear cataclysm, they concluded that there is a 50% chance of a Chernobyl-like event (or larger) occurring in the next 27 years, and a 50% chance of an event similar to Three Mile Island within only 10 years. (The Three Mile Island Unit 2 reactor, near Middletown, Pa., partially melted down on March 28, 1979. This was the most serious commercial nuclear-plant accident in the U.S.)
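A “50% chance within 27 years” claim maps naturally onto a simple Poisson model of rare accidents, which is one way (a sketch, not necessarily the researchers’ method) to see what arrival rate such a probability implies:

```python
from math import log, exp

# If rare accidents arrive at a constant rate lam (events/year), the
# probability of at least one within t years is 1 - exp(-lam * t).
# Working backwards from the quoted "50% within 27 years":
def implied_rate(p, t_years):
    """Rate (events/year) implied by probability p of >=1 event in t years."""
    return -log(1 - p) / t_years

def prob_within(rate, t_years):
    return 1 - exp(-rate * t_years)

lam = implied_rate(0.5, 27)  # implied rate of Chernobyl-scale events
print(f"implied rate: one event every {1 / lam:.0f} years on average")
print(f"chance within 10 years: {prob_within(lam, 10):.0%}")
```

Under this model the headline figure corresponds to an average of one Chernobyl-scale event roughly every 39 years, with about a one-in-four chance of one in the next decade.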

Then there’s the problem of nuclear waste. Just in the U.S., commercial nuclear-power plants have generated 80,000 metric tons of useless but highly dangerous and radioactive spent nuclear fuel — enough to fill a football field about 20 meters (65 feet) deep.  Over the next few decades, the amount of waste will increase to 140,000 metric tons, but there is still no disposal site in the U.S. or a clear plan on how to store this highly dangerous material.

Nuclear waste will remain dangerous — deadly to humans and toxic to nature — for hundreds of thousands of years.

Digging deep wells and tunnels in which it can be stored is simply kicking a very dangerous can down the road — one that could break open and contaminate the environment through earthquakes, human error or acts of terrorism.

Let’s also not forget that the majority of developed countries have felt the need to use seas and oceans as nuclear-dumping sites. Although the practice was prohibited in 1994, the damage was already done. The current amount of nuclear waste in world seas greatly exceeds what’s currently stored in the U.S. And that’s just documented waste, so the exact number may be much higher.

Some may be comforted by the fact that 2011 data suggest the damage to the environment was minimal, but let’s not forget that these containers will eventually decay and their contents will spill and mix with water, polluting marine life and changing the biosphere. Finally, all of this contamination comes back to us in the form of food we eat, water we drink and air we breathe.

Ambellas, S. January 6, 2017. Overwhelmed Massachusetts nuclear power plant spikes with radiation. The Pilgrim Nuclear Power Plant has spiked with radiation to near alert levels, alarming officials. infowars.com

Alvarez, L. March 22, 2016. Nuclear Plant Leak Threatens Drinking Water Wells in Florida. New York Times.

Turkey Point, in Florida, is the culprit; it is also mentioned as one of the 37 plants at risk of closing in Cooper’s article.

April 2014 ASPO newsletter

“Nuclear power is probably the biggest asset we have in the fight against climate change…But I’m a business guy and I’m a pragmatist, and there’s no future for nuclear in the United States. There’s certainly no future for new nuclear… [Very few know] how close the system came to collapsing in January because everyone wants to go to natural gas and there wasn’t enough natural gas in the system.  The purpose of having old coal plants, to be frank, is keeping the lights on for the next three, five, 10 years…I’m not anti-utilities, I’m not anti-nuclear, I’m not anti-coal, I’m just anti-bullshit.” — David Crane, CEO of NRG Inc., the U.S.’ largest independent power generator

Wald, M. June 8, 2012. Court Forces a Rethinking of Nuclear Fuel. New York Times.

The Nuclear Regulatory Commission acted hastily in concluding that spent fuel can be stored safely at nuclear plants for the next century or so in the absence of a permanent repository, and it must consider what will happen if none is ever established, a federal appeals court ruled on Friday. The commission had made its decision so that the operating licenses of dozens of power reactors (and 4 new ones) could be extended.

The three-judge panel unanimously decided that the commission was wrong to assume nuclear fuel would be safe for many decades without analyzing the actual reactor storage pools individually across the nation. Nor did it adequately analyze the risk that cooling water might leak from the pools or that the fuel could ignite.

22 May 2012. Severe Nuclear Reactor Accidents Likely Every 10 to 20 Years, European Study Suggests. ScienceDaily

Catastrophic nuclear accidents such as the core meltdowns in Chernobyl and Fukushima are more likely to happen than previously assumed. Based on the operating hours of all civil nuclear reactors and the number of nuclear meltdowns that have occurred, scientists at the Max Planck Institute for Chemistry have calculated that such events may occur once every 10 to 20 years — some 200 times more often than estimated in the past. The researchers also determined that 50% of the radioactive caesium-137 would be spread over an area of more than 1,000 kilometres away from the nuclear reactor, and 25% would go more than 2,000 kilometres. Their results show that Western Europe is likely to be contaminated about once in 50 years by more than 40 kilobecquerel of caesium-137 per square meter. According to the International Atomic Energy Agency, an area is defined as being contaminated with radiation from this amount onwards. In view of their findings, the researchers call for an in-depth analysis and reassessment of the risks associated with nuclear power plants.  Currently, there are 440 nuclear reactors in operation, and 60 more are planned.
Citizens in the densely populated southwestern part of Germany run the worldwide highest risk of radioactive contamination. If a single nuclear meltdown were to occur in Western Europe, around 28 million people on average would be affected by contamination of more than 40 kilobecquerels per square meter. This figure is even higher in southern Asia, due to the dense populations. A major nuclear accident there would affect around 34 million people, while in the eastern USA and in East Asia this would be 14 to 21 million people.
Reference: J. Lelieveld, et al. Global risk of radioactive fallout after major nuclear reactor accidents. Atmospheric Chemistry and Physics, 2012; 12 (9): 4245
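The Max Planck estimate is essentially historical meltdown frequency divided across the world reactor fleet. The sketch below redoes that arithmetic with illustrative inputs (the cumulative reactor-years figure and the meltdown count are my assumptions, not the study’s exact numbers):

```python
# Back-of-envelope version of the Max Planck frequency estimate.
# Inputs are illustrative assumptions, not the study's exact figures.
cumulative_reactor_years = 14_500   # total civil reactor operating experience (assumed)
core_meltdowns = 4                  # e.g. Chernobyl (1) + Fukushima (3) (assumed count)
reactors_operating = 440            # reactors in operation, per the text

# Empirical rate: one meltdown per this many reactor-years of operation
reactor_years_per_meltdown = cumulative_reactor_years / core_meltdowns

# With 440 reactors running simultaneously, expected calendar years between events
expected_interval_years = reactor_years_per_meltdown / reactors_operating

print(f"one meltdown per {reactor_years_per_meltdown:.0f} reactor-years")
print(f"expected interval: about {expected_interval_years:.1f} calendar years")
```

With these assumed inputs the interval comes out to roughly a decade, the same order of magnitude as the study’s once-every-10-to-20-years result.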

Smith, Rebecca. 4 Feb 2012. Worn Pipes Shut California Reactors. Wall Street Journal.

The two reactors at the San Onofre nuclear-power station near San Clemente, Calif., will remain shut down this weekend while federal safety officials investigate why critical—and relatively new—equipment is showing signs of premature wear.  Components in nuclear plants are subjected to extreme heat, pressure, radiation and chemical exposure, all of which can take a toll on materials.  Commission inspectors say they have also found problems with hundreds of steam tubes at the plant’s other reactor.

Experts say the closures may signal a broader problem for the nuclear industry, which has been trying to reassure Americans that its aging reactors are safe in the wake of last year’s disaster at the Fukushima Daiichi plant in Japan.  Two pipes had lost 35% of their wall thickness in just two years of service, Mr. Dricks said; most—about 800—had lost 10% to 20% of wall thickness.  The pipes are about three-quarters of an inch in diameter.
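To see how alarming those wear figures are, here is a rough linear extrapolation from the numbers in the article. The linear-wear assumption and the 50% retirement threshold are mine, for illustration only; actual NRC tube-plugging criteria differ:

```python
# Rough linear extrapolation of the San Onofre tube-wear figures.
# The linear-wear model and the 50% retirement threshold are assumptions.
worst_loss, typical_loss = 0.35, 0.15   # fraction of wall thickness lost
service_years = 2.0                     # in just two years of service

worst_rate = worst_loss / service_years      # 17.5% of wall per year
typical_rate = typical_loss / service_years  # 7.5% of wall per year

retire_at = 0.50  # assumed wall-loss fraction at which a tube must be retired
years_to_retire_worst = retire_at / worst_rate  # under 3 years from new

print(f"worst tubes wear {worst_rate:.1%} of wall per year")
print(f"they would hit {retire_at:.0%} loss about {years_to_retire_worst:.1f} years after installation")
```

In other words, if wear continued at the observed rate, the worst tubes would be unusable within about three years of entering service.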

Munson, R. 2008. From Edison to Enron: The Business of Power and What It Means for the Future of Electricity. Praeger.

Cost overruns on reactors nearly drove some power companies into bankruptcy.  In 1984 the Department of Energy calculated that more than 75% of reactors had cost at least double their estimated price.

Utility WPPSS in Washington state defaulted, scaring investors, who had once expected over a thousand reactors to be running by 2000, producing electricity too cheap to meter.  In fact, only 82 plants existed in 2000, and power prices soared 60% between 1969 and 1984 due to the cost overruns.

Nuclear executives tried to blame their problems on too much regulation and environmentalists, but regulations came only after reactors began to break down.  Intense radiation and high temperatures caused pipes, valves, tubes, fuel rods, and cooling systems to crack, corrode, bend, and malfunction.  Only then did the public demand that the Atomic Energy Commission (since replaced by the Nuclear Regulatory Commission) seriously regulate nuclear power facilities.

Munson lists quite a few problems, but you should search on “Nuclear Reactor Hazards: Ongoing Dangers of Operating Nuclear Technology in the 21st Century” to get a good understanding of the magnitude of failures despite regulation.  Indeed, even the Wall Street Journal was forced to admit at one point that reactor troubles “tell the story of projects crippled by too little regulation, rather than too much.”

Some of this stemmed from nuclear engineers seeing uranium as just a complicated way to boil water.  But a reactor is not simple: a plant has over 40,000 valves, the fuel rods reach temperatures over 4,800 F, and the nuclear reactions are not easy to contain.

Management was poor as well, with Forbes magazine calling the U.S. nuclear program “the largest managerial disaster in business history, a disaster on a monumental scale.”


Cooper, M. 2013. Renaissance in reverse: Competition pushes aging U.S. Nuclear reactors to the brink of economic abandonment. South Royalton: Vermont Law School.

O’Grady, E. 2008. Luminant seeks new reactor. London: Reuters.


Fossil-fueled industrial heat hard to impossible to replace with renewables

Preface. Cement, steel, glass, bricks, ceramics, chemicals, and much more depend on fossil-fueled high heat (up to 3200 F) to make. Except for the electric-arc furnace to recycle existing steel, there aren’t any renewable ways to make cement, other metals, and other high-heat products, and industries aren’t working on this either.



Roberts, D. 2019. This climate problem is bigger than cars and much harder to solve. Low-carbon options for heavy industry like steel and cement are scarce and expensive. Vox

Climate activists are fond of saying that we have all the solutions we need to the climate crisis; all we lack is the political will. This is incorrect. There are some uses of fossil fuels that we do not yet know how to decarbonize.

Take, for instance, industrial heat: the extremely high-temperature heat used to make steel and cement.

Heavy industry is responsible for around 22% of global CO2 emissions, with 42% of that — about 10% of global emissions — from combustion to produce large amounts of high-temperature heat for industrial products like cement, steel, and petrochemicals.

To put that in perspective, industrial heat’s 10% is greater than the CO2 emissions of all the world’s cars (6%) and planes (2%) combined. Yet, consider how much you hear about electric vehicles. Consider how much you hear about flying shame. Now consider how much you hear about … industrial heat.
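The arithmetic behind that comparison is simple enough to check directly:

```python
# Checking the emissions arithmetic quoted above.
heavy_industry = 0.22        # heavy industry's share of global CO2 emissions
heat_within_industry = 0.42  # share of that from combustion for process heat

industrial_heat = heavy_industry * heat_within_industry  # ~0.092, "about 10%"
cars, planes = 0.06, 0.02

print(f"industrial heat: {industrial_heat:.1%} of global CO2")
print(f"cars + planes:   {cars + planes:.1%}")
assert industrial_heat > cars + planes  # heat exceeds cars and planes combined
```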

Not much, I’m guessing. But the fact is, today, virtually all of that combustion is fossil-fueled, and there are very few viable low-carbon alternatives. For all kinds of reasons, industrial heat is going to be one of the toughest nuts to crack, carbon-wise. And we haven’t even gotten started.

A cement factory at dusk.

Some light has been cast into this blind spot with the release of two new reports by Julio Friedmann, a researcher at the Center for Global Energy Policy (CGEP) at Columbia University (among many items on a long résumé).

The first report, co-authored with CGEP’s Zhiyuan Fan and Ke Tang, is about the current state of industrial heat technology: “Low-Carbon Heat Solutions for Heavy Industry: Sources, Options, and Costs Today.”

The second, co-authored with a group of scholars for the Innovation for Cool Earth Forum (ICEF), is a roadmap for decarbonizing industrial heat, including a set of policy recommendations.

There’s a lot in these reports, but I’m going to guess your patience for industrial heat is limited, so I’ve boiled it down to three sections. First, I’ll offer a quick overview of why industrial heat is so infernally difficult to decarbonize; second, a review of the options available for decarbonizing it; and third, some recommendations for how to move forward.

Why industrial heat is such a vexing carbon dilemma

There’s a reason you don’t hear much about industrial heat: Consumers don’t buy it. It is a market dominated entirely by large, little-known industrial firms that operate outside the public eye. So unlike electricity, or cars, there is little prospect of moving the market through popular consumer demand. Policymakers will have to do this on their own. And it won’t be easy.

The biggest industrial emitters are cement, steel, and the chemical industries; also making a notable contribution are refining, fertilizer, and glass. As a group, these industries have three notable features.

First, almost all of them are globally traded commodities. Their prices are not set domestically. They compete with optimized supply chains around the world, with razor-thin margins. Domestic policies that raise their prices risk “carbon leakage” (i.e., companies simply moving overseas to find cheaper labor and operating environments).

What’s more, some of these industries, especially cement and steel, are especially prized by national governments for their jobs and their national security implications. Politicians are leery of any policy that might push those industries away. “As one indication, most cement, steel, aluminum, and petrochemicals have received environmental waivers or been politically exempted from carbon limits,” says the CGEP report, “even in countries with stringent carbon targets.”

Furnace at an aluminum foundry.

Second, they involve facilities and equipment meant to last between 20 and 50 years. Blast furnaces sometimes make it to 60. These are large, long-term capital investments, with relatively low stock turnover. “Few industrial facilities show signs of imminent closure, especially in developing countries,” the CGEP report says, “making deployment of replacement facilities and technologies problematic.” At the very least, solutions that can work with existing equipment will have a head start.

Third, their operational requirements are both stringent and varied. They all have in common that they require large amounts of high-temperature heat and high “heat flux,” the ability to deliver large amounts of heat steadily, reliably, and continuously. Downtime in these industries is incredibly expensive.

At the same time, the specific requirements and processes at work in these industries vary widely. To take one example, steel and iron are made using blast furnaces that burn coke (a form of “cooked” coal with high-carbon content). “Coke also provides carbon as a reductant, acts as structural support to hold the ore burden, and provides porosity for rising hot gas and sinking molten iron,” the CGEP report says. “Because of these multiple roles, directly replacing coke combustion with an alternative source of process heat is not practical.”

A cement kiln works somewhat differently, as do the reactors that power chemical conversions, as does a glassblower. The variety of specific operational characteristics makes across-the-board substitution for industrial heat difficult.

Each of these industries is going to require its own solution. And it’s going to have to be a solution that doesn’t raise their costs much or at least takes steps to protect them from international competition.

The options to date are not much to speak of.

The options for decarbonizing industrial heat are scarce

What are the alternatives that might provide high heat and high heat flux with less or no carbon emissions? The report is not sanguine: “The pathway toward net-zero carbon emission for industry is not clear, and only a few options appear viable today.”

Alternatives can be broken down into five basic categories:

  1. Biomass: Either biodiesel or woodchips can be combusted directly.
  2. Electricity: “Resistive” electricity can be used to, say, power an electric arc furnace.
  3. Hydrogen: This can be produced three ways: steam methane reforming (SMR) yields carbon-intensive “grey” hydrogen; SMR with carbon capture and storage yields “blue” hydrogen; and electrolysis, pulling hydrogen directly out of water, yields low-carbon “green” hydrogen (effectively a subcategory of electricity, since electrolysis is powered by electricity).
  4. Nuclear: Nuclear power plants, either conventional reactors or new third-generation reactors, give off heat that can be carried as steam.
  5. Carbon capture and storage (CCS): Rather than decarbonizing the processes themselves, their CO2 emissions could be captured and buried, either the CO2 directly from the heat source (“heat CCS”) or the CO2 from the entire facility (“full facility CCS”).

All of these options have their difficulties and drawbacks. None of them is anywhere close to cost parity with existing processes.

Some are limited by the intensity of the heat they can produce. Here’s a breakdown:

[Chart: industrial heat temperature requirements]

Some options are limited by the specific requirements of particular industrial processes. Cement kilns work better with energy-dense internal fuel; resistive electricity on the outer surface doesn’t work as well.

But the biggest limitations are costs, where the news is somewhat disheartening, for two reasons.

First, even the most promising and viable options substantially raise operational costs. And second, the options that are currently the least expensive are not exactly the ones environmentalists might prefer.

There’s a lot in the report on the methodology of comparing costs across the technologies, but the main thing to keep in mind is that these cost estimates are provisional. They involve various contestable assumptions, and real performance data is often not available. So it’s all to be taken with a grain of salt, pending further research. That said, here’s a rough-and-ready cost comparison:

[Chart: cost comparison of industrial heat options]

You might notice that most of the blue bars, the low-carbon options, are way over on the expensive right. The only ones that are reasonably affordable are nuclear and blue hydrogen.

Hydrogen is the most promising alternative

In terms of ability to generate high-temperature heat, availability, and suitability to multiple purposes, hydrogen is probably the leading candidate among industrial-heat alternatives. Unfortunately, the cost equation on hydrogen is not good: the cleaner it is, the more expensive it is.

The cheapest way to produce hydrogen, the way around 95 percent of it is now produced, is steam methane reforming (SMR), which reacts steam with methane in the presence of a catalyst at high temperatures and pressures. It is an extremely carbon-intensive process, thus “grey hydrogen.”
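For reference, the overall chemistry of SMR (the textbook reforming step followed by the water-gas shift) is:

```
CH4 + H2O  →  CO  + 3 H2     (steam reforming, ~700–1,000 °C, nickel catalyst)
CO  + H2O  →  CO2 +   H2     (water-gas shift)
```

Net: CH4 + 2 H2O → CO2 + 4 H2. That CO2, vented to the atmosphere, is what makes the product “grey.”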

The carbon emissions from SMR can be captured and buried via CCS (though they rarely are today). As the chart above indicates, this kind of “blue hydrogen” is the cheapest low(er) carbon alternative for high-temperature industrial heat.

“Green hydrogen” is made via electrolysis, using electricity to separate hydrogen from water. If it is made with carbon-free energy, it too is carbon-free. There are a few different forms of electrolysis, which we don’t need to get into. The main thing to know is that they are expensive — the least expensive is more than twice as expensive as blue hydrogen.

[Chart: hydrogen costs]

Here’s a simplified cost chart, to make these comparisons clearer:

[Chart: industrial heat costs]

Note: These numbers reflect “what is actionable today within existing facilities.”

For now, to a first approximation, all the available low-carbon alternatives substantially raise costs of industrial-heat processes against the baseline.

And here’s the real kicker: in most cases, it is cheaper to capture and bury CO2 from these processes than it is to switch out systems for low-carbon alternatives.

CCS is often cheaper than low-carbon alternatives

Take cement production. It requires temperatures of at least 1,450°C, so the only viable options are hydrogen, biomass, resistive electric, or CCS. Here’s how much they would increase cement (“clinker”) production costs:

[Chart: cement production costs]

As you can see, every low-carbon alternative raises costs more than 50 percent above baseline. The only ones that don’t raise it more than 100 percent are CCS (of the heat source only), blue hydrogen, or resistive electric in places with extremely cheap and plentiful carbon-free energy.

The alternative that climate hawks would most prefer, the carbon-free option that would work best for most applications, is green hydrogen. But that currently raises costs between 400 and 800 percent. Ouch.
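To make those percentage increases concrete, here is a small cost-index sketch. The baseline is an arbitrary 100; the green-hydrogen increases come from the text, while the other fractions are my assumptions within the “more than 50 percent, less than 100 percent” band quoted above:

```python
# Illustrative cost index for cement ("clinker") production.
# Baseline = 100 (arbitrary units). Only the green-hydrogen increases are
# from the text; the CCS and blue-hydrogen fractions are assumed examples
# within the 50-100% band described above.
baseline = 100.0
increase = {                       # fractional increase over baseline
    "heat-only CCS":         0.55,
    "blue hydrogen":         0.70,
    "green hydrogen (low)":  4.00,
    "green hydrogen (high)": 8.00,
}
for option, frac in increase.items():
    print(f"{option:22s} cost index = {baseline * (1 + frac):6.0f}")
```

Even in the cheap case, green hydrogen quintuples the cost of clinker; in the expensive case it multiplies it ninefold.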

The situation is much the same for steel:

[Chart: steel costs]

And so on down the line, from chemicals to glass to ceramics: in almost every case, the cheapest near-term decarbonization solution is just to capture and bury the carbon emissions.

Of course, that’s just on average. The actual costs will depend on geography — whether there are suitable burial sites for CO2, whether natural gas is cheap, whether there’s a lot of hydro or wind nearby — but there’s no getting around the simple truth about today’s industrial-heat alternatives: What’s green isn’t very feasible, and what’s feasible isn’t very green.

Here’s a qualitative chart that tries to get at that relationship.

[Chart: industrial heat feasibility]

What’s most feasible is on the right. What’s most expensive is up top. There isn’t much in that lower-right feasible/cheap quadrant except blue hydrogen, for now.

The report emphasizes that these initial technology rankings are “temporary at best” and “highly speculative, uncertain, and contingent.” Much more needs to be understood about the costs and feasibility of these options. Their relative attractiveness may change quickly with technology development.

As this list makes clear, there is a lot that needs to be done before “we have all the solutions we need” in heavy industrial sectors. And there are other sectors that remain difficult to decarbonize as well (shipping, heavy freight, airplanes).

A final note about electrification

The only technology solution with a potential path down the cost curve to the point of being competitive with (properly priced) fossil fuels is electrification.

The charts above reveal two things about electrification of industrial heat. One, resistive electricity is the only low-carbon industrial-heat option competitive with CCS or blue hydrogen, and that’s only where clean electricity is extremely cheap and plentiful. And two, the only truly carbon-free, unlimited, all-purpose alternative available is green hydrogen, which requires plentiful renewable energy.

Both argue for the absolute imperative of making clean electricity cheaper.

At current prices and with current technologies, an all or mostly renewable grid would have difficulty with industrial heat, which requires enormous, intensive amounts of energy, reliably and continuously supplied. Some industrial applications could shift their demand around in time to accommodate renewables or make their processes intermittent, but most can’t. They need controllable, dispatchable power.

An electric arc furnace.

Building a renewable-based grid that could handle heavy industry would require much cheaper and more energy-dense storage, more and better transmission, smarter meters and appliances, and better demand response, but above all, it would require extremely cheap and abundant carbon-free electricity.


A nationwide blackout lasting 1 year could kill up to 90% of Americans

Preface. What follows is the testimony of Dr. Pry at a 2015 U.S. House of Representatives session. I have cut, shortened, and rearranged the order of the 30-page original document.  The electric grid is coming down one way or another in the future.  Without natural gas, wind and solar can’t be balanced (no energy storage is in sight), and the electric grid is already falling apart.  Electricity-generating contraptions like wind turbines and solar panels need fossil fuels at every single step of their life cycle to exist; they can’t outlast the brief oil age.



Testimony of Dr. Peter Vincent Pry at the U.S. House of Representatives, Serial No. 114-42, May 13, 2015. The EMP Threat: the state of preparedness against the threat of an electromagnetic pulse (EMP) event. 94 pages.

“The EMP Commission estimates that a nationwide blackout lasting one year could kill up to 9 of 10 Americans through starvation, disease, and societal collapse” 

A natural electromagnetic pulse (EMP) from a geomagnetic super-storm, like the 1859 Carrington Event or the 1921 Railroad Storm, or a nuclear EMP attack, could cause a year-long blackout and collapse all the other critical infrastructures–communications, transportation, banking and finance, food and water–necessary to sustain modern society and the lives of 310 million Americans.

Seven days after the commencement of blackout, emergency generators at nuclear reactors would run out of fuel. The reactors and nuclear fuel rods in cooling ponds would meltdown and catch fire, as happened in the nuclear disaster at Fukushima, Japan. The 104 U.S. nuclear reactors, located mostly among the populous eastern half of the United States, could cover vast swaths of the nation with dangerous plumes of radioactivity (see Richard Stone’s May 24, 2016 Science article: “Spent fuel fire on U.S. soil could dwarf impact of Fukushima” ).

Nuclear EMP is like super-lightning. The electromagnetic shockwave unique to nuclear weapons, called E1 EMP, travels at the speed of light, potentially injecting into electrical systems thousands of volts in a nanosecond–literally a million times faster than lightning, and much more powerful. Russian open source military writings describe their Super-EMP Warhead as generating 200,000 volts/meter, which means that the target receives 200,000 volts for every meter of its length. So, for example, if the cord on a PC is two meters long, it receives 400,000 volts. An automobile 4 meters long could receive 800,000 volts (unless it is parked underground).
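The testimony’s arithmetic is a simple linear coupling rule: induced volts equal field strength times conductor length. Real EMP coupling depends on geometry, orientation, and shielding, so treat this as a reproduction of the claim above, not a physics model:

```python
# The testimony's simplified coupling rule: volts = field strength x length.
# Real E1 coupling depends on geometry, orientation, and shielding; this
# just reproduces the arithmetic in the paragraph above.
FIELD_V_PER_M = 200_000  # claimed Super-EMP E1 field strength, volts per meter

def coupled_volts(conductor_length_m: float) -> float:
    """Volts induced on a conductor under the linear rule used in the testimony."""
    return FIELD_V_PER_M * conductor_length_m

print(coupled_volts(2.0))  # 2 m PC cord     -> 400000.0
print(coupled_volts(4.0))  # 4 m automobile  -> 800000.0
```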

No other threat can cause such broad and deep damage to all the critical infrastructures as a nuclear EMP attack. A nuclear EMP attack would collapse the electric grid, blackout and directly damage transportation systems, industry and manufacturing, satellite navigation, telecommunications systems and computers, banking and finance, and the infrastructures for food and water. Jetliners carry about 500,000 passengers on over 1,000 aircraft in the skies over the U.S. at any given moment. Many, most or virtually all of these would crash, depending upon the strength of the EMP field.

Cars, trucks, trains and traffic control systems would be damaged. In the best case, even if only a few percent of ground transportation vehicles are rendered inoperable, massive traffic jams would result. In the worst case, virtually all vehicles of all kinds would be rendered inoperable. In any case, all vehicles would stop operating when they run out of gasoline. The blackout would render gas stations inoperable and paralyze the infrastructure for synthesizing and delivering petroleum products and fuels of all kinds.

Industry and manufacturing would be paralyzed by collapse of the electric grid. Damage to SCADAS and safety control systems would likely result in widespread industrial accidents, including gas line explosions, chemical spills, fires at refineries and chemical plants producing toxic clouds.

Cell phones, personal computers, the internet, and the modern electronic economy that supports personal and big business cash, credit, debit, stock market and other transactions and record keeping would cease operations. The Congressional EMP Commission warns that society could revert to a barter economy.

Worst of all, about 72 hours after the commencement of blackout, when emergency generators at the big regional food warehouses cease to operate, the nation’s food supply will begin to spoil. Supermarkets are resupplied by these large regional food warehouses that are, in effect, the national larder, collectively having enough food to sustain the lives of 310 million Americans for about one month, at normal rates of consumption. The Congressional EMP Commission warns that as a consequence of the collapse of the electric grid and other critical infrastructures, “It is possible for the functional outages to become mutually reinforcing until at some point the degradation of infrastructure could have irreversible effects on the country’s ability to support its population.”

The New “Lightning War” / Blitzkrieg.

A new Lightning War launched by our adversaries would attack the electric grid and other critical infrastructures all at once with a coordinated assault of cyber-war, sabotage, and EMP attacks, perhaps at the same time as a space weather geomagnetic storm, or severe weather such as a hurricane or blizzard.

U.S. emergency planners tend to think of EMP, cyber, sabotage, severe weather, and geo-storms  as unrelated threats.  However, foreign adversaries (i.e. Iran, North Korea, China, Russia) in their military doctrines and military operations appear to be planning an offensive “all hazards” strategy that would throw at the U.S. electric grid and civilian critical infrastructures every possible threat simultaneously. Such an assault is potentially more decisive than Nazi Germany’s Blitzkrieg (“Lightning War”) strategy that nearly conquered the western democracies during World War II.

Catastrophe from a geomagnetic super-storm may well happen sooner rather than later–and perhaps in combination with a nuclear EMP attack.

Paul Stockton, President Obama’s former Assistant Secretary of Defense for Homeland Defense, on June 30, 2014, at the Electric Infrastructure Security Summit in London, warned an international audience that an adversary might coordinate nuclear EMP attack with an impending or ongoing geomagnetic storm to confuse the victim and maximize damage. Stockton notes that, historically, generals have often coordinated their military operations with the weather. For example, during World War II, General Dwight Eisenhower deliberately launched the D-Day invasion following a storm in the English Channel, correctly calculating that this daring act would surprise Nazi Germany.

Future military planners of the New Lightning War may well coordinate a nuclear EMP attack and other operations aimed at the electric grid and critical infrastructures with the ultimate space weather threat–a geomagnetic storm.

“China and Russia have considered limited nuclear attack options that, unlike their Cold War plans, employ EMP as the primary or sole means of attack,” according to the Congressional EMP Commission, “Indeed, as recently as May 1999, during the NATO bombing of the former Yugoslavia, high-ranking members of the Russian Duma, meeting with a U.S. congressional delegation to discuss the Balkans conflict, raised the specter of a Russian EMP attack that would paralyze the United States.”

Russia has made many nuclear threats against the U.S. since 1999, which are reported in the western press only rarely. On December 15, 2011, Pravda, the official mouthpiece of the Kremlin, gave this advice to the United States in “A Nightmare Scenario for America”: No missile defense could prevent…EMP…No one seriously believes that U.S. troops overseas are defending “freedom” or defending their country…. Perhaps they ought to close the bases, dismantle NATO and bring the troops home where they belong before they have nothing to come home to and no way to get there. On June 1, 2014, Russia Today, a Russian television news show, also broadcast to the West in English, predicted that the United States and Russia would be in a nuclear war by 2016.

Iran, the world’s leading sponsor of international terrorism, openly writes about making a nuclear EMP attack to eliminate the United States. Iran has practiced missile launches that appear to be training and testing warhead fusing for a high-altitude EMP attack–including missile launching for an EMP attack from a freighter. An EMP attack launched from a freighter could be performed anonymously, leaving no fingerprints, to foil deterrence and escape retaliation.

“What is different now is that some potential sources of EMP threats are difficult to deter–they can be terrorist groups that have no state identity, have only one or a few weapons, and are motivated to attack the U.S. without regard for their own safety,” cautions the EMP Commission in its 2004 report, “Rogue states, such as North Korea and Iran, may also be developing the capability to pose an EMP threat to the United States, and may also be unpredictable and difficult to deter.”

On April 16, 2013, North Korea simulated a nuclear EMP attack against the United States, orbiting its KSM-3 satellite over the U.S. at the optimum trajectory and altitude to place a peak EMP field over Washington and New York and black out the Eastern Grid, which generates 75 percent of U.S. electricity. On the very same day, parties unknown executed a highly professional commando-style sniper attack on the Metcalf transformer substation, a key component of the Western Grid. A few months later, in July 2013, the North Korean freighter Chon Chong Gang transited the Gulf of Mexico carrying nuclear-capable SA-2 missiles in its hold on their launchers. The missiles had no warheads, but the event demonstrated North Korea’s capability to execute a ship-launched nuclear EMP attack from U.S. coastal waters anonymously, to escape U.S. retaliation. The missiles were discovered, hidden under bags of sugar, only because the freighter tried returning to North Korea through the Panama Canal and past inspectors.

What does all this signify? Connect these dots: North Korea’s apparent practice EMP attack with its KSM-3 satellite; the simultaneous “dry run” sabotage attack at Metcalf; North Korea’s possible practice for a ship-launched EMP attack a few months later; and the cyber-attacks from various sources that happen every day. Together these suggest the possibility that in 2013 North Korea may have exercised against the United States an all-out combined-arms operation aimed at U.S. critical infrastructures–the New Lightning War.

How does an EMP damage the electric grid?

EHV Transformers are the technological foundation of our modern electronic civilization as they make it possible to transmit electric power over great distances.

An event that damages hundreds–or as few as 9–of the 2,000 EHV transformers in the United States could plunge the nation into a protracted blackout lasting months or even years. 

Transformers are typically as large as a house, weigh hundreds of tons, cost millions of dollars, and cannot be mass-produced but must be custom-made by hand. Making a single EHV transformer takes about 18 months. Annual worldwide production of EHV transformers is about 200 per year.  Unfortunately, although the EHV transformer and the electric grid were pioneered in the United States, EHV transformers are no longer manufactured here. Because of their great size and cost, U.S. electric utilities have very few spare EHV transformers. The U.S. must import EHV transformers made in Germany or South Korea, the only two nations in the world that make them for export.
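The production figures above imply grim replacement timelines. The sketch below uses the numbers from the paragraph; the count of damaged units and the U.S. share of world output are hypotheticals for illustration:

```python
# Replacement-time arithmetic from the figures above. The damaged-unit count
# and the U.S. share of world output are hypothetical assumptions.
us_ehv_transformers   = 2_000   # EHV transformers installed in the U.S.
world_output_per_year = 200     # annual worldwide production
lead_time_years       = 1.5     # ~18 months to build the first unit

damaged = 300  # hypothetical: "hundreds" of transformers lost in an attack

# Best case: the U.S. somehow claims the entire world's output.
years_all_output = damaged / world_output_per_year              # 1.5 years
# More plausible: the U.S. secures a quarter of world output (assumption).
years_quarter_share = damaged / (world_output_per_year * 0.25)  # 6 years

print(f"entire world output:     {lead_time_years + years_all_output:.1f}+ years")
print(f"quarter of world output: {lead_time_years + years_quarter_share:.1f}+ years")
```

Even monopolizing global production, rebuilding after losing a few hundred units would take years, which is why the testimony speaks of blackouts lasting months to years.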

SCADAS (Supervisory Control And Data Acquisition systems) are basically small computers that run the electric grid and all the critical infrastructures. SCADAS regulate the flow of electric current through EHV transformers, the flow of natural gas or water through pipelines, and the flow of data through communications and financial systems, and operate everything from traffic lights to the refrigerators in regional food warehouses. SCADAS are ubiquitous in the civilian critical infrastructures, number in the millions, and are as indispensable as EHV transformers to running our modern electronic civilization. An event that damages large numbers of SCADAS would put that civilization at risk.

Nuclear Weapon EMP–The Worst Threat

A high-altitude nuclear EMP attack is the greatest single threat to EHV transformers, SCADAS and the other components of the national electric grid and critical infrastructures. Nuclear EMP includes a high-frequency electromagnetic shockwave called E1 EMP that can potentially damage or destroy virtually any electronic system having a dimension of 18 inches or greater. Consequently, a high-altitude nuclear EMP event could cause broad damage to electronics and critical infrastructures across continental North America, while also causing deep damage to industrial and personal property, including automobiles and personal computers.

E1 EMP is unique to nuclear weapons.

Nuclear EMP can also produce E2 EMP, comparable to lightning.

Nuclear EMP can also produce E3 EMP comparable to or greater than a geomagnetic superstorm. Even a relatively low-yield nuclear weapon, like the 10-kiloton Hiroshima bomb, can generate an E3 EMP field powerful enough to damage EHV transformers.

Nuclear EMP Attacks by Missile, Aircraft and Balloon

A nuclear weapon detonated at an altitude of 200 kilometers (124 miles) over the geographic center of the United States would create an EMP field potentially damaging to electronics over all the 48 contiguous States. The Congressional EMP Commission concluded that virtually any nuclear weapon, even a crude first generation atomic bomb having a low yield, could potentially inflict an EMP catastrophe.

The EMP Commission also found that Russia, China, and probably North Korea have nuclear weapons specially designed to generate extraordinarily powerful EMP fields— called by the Russians Super-EMP weapons–and this design information may be widely proliferated: “Certain types of relatively low-yield nuclear weapons can be employed to generate potentially catastrophic EMP effects over wide geographic areas, and designs for variants of such weapons may have been illicitly trafficked for a quarter-century.”

A sophisticated long-range missile is not required

Any short-range missile or other delivery vehicle that can deliver a nuclear weapon to an altitude of 30 kilometers (18.5 miles) or higher can make a potentially catastrophic EMP attack on the United States. Although a nuclear weapon detonated at 30 km could not cover the entire continental U.S. with an EMP field, the field would still cover a very large multi-state region, and would be more intense. Lowering the height-of-burst (HOB) for an EMP attack decreases the field radius, but increases field strength.
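The trade-off between height-of-burst and footprint follows from simple line-of-sight geometry: the E1 field reaches roughly the patch of ground visible from the burst point. A minimal sketch, assuming a spherical Earth of radius 6,371 km (this models coverage geometry only, not field strength):

```python
import math

EARTH_RADIUS_KM = 6371.0

def emp_horizon_radius_km(burst_altitude_km: float) -> float:
    """Great-circle radius of the ground area with line-of-sight
    to a burst at the given altitude (tangent-line geometry)."""
    central_angle = math.acos(EARTH_RADIUS_KM / (EARTH_RADIUS_KM + burst_altitude_km))
    return EARTH_RADIUS_KM * central_angle

# Lowering the height-of-burst shrinks the footprint:
print(emp_horizon_radius_km(200))  # ~1,570 km: reaches across the 48 contiguous states
print(emp_horizon_radius_km(30))   # ~615 km: still a large multi-state region
```

The same geometry explains why a 200 km detonation over the geographic center of the country blankets the contiguous states, while a 30 km ship-launched burst still covers a region hundreds of kilometers across.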

An EMP attack at 30 kilometers HOB anywhere over the eastern half of the U.S. would cause cascading failures far beyond the EMP field and collapse the Eastern Grid, which generates 75% of U.S. electricity. The nation could not survive without the Eastern Grid.

A Scud missile launched from a freighter could perform such an EMP attack. Over 30 nations have Scuds, as do some terrorist groups and private collectors. Scuds are available for sale on the world and black markets.

Any aircraft capable of flying Mach 1 could probably do a zoom climb to 30 kilometers altitude to make an EMP attack, if the pilot is willing to commit suicide.

Even a meteorological balloon could be used to loft a nuclear weapon 30 km high to make an EMP attack. During the period of atmospheric nuclear testing in the 1950s and early 1960s, more nuclear weapons were tested at altitude by balloon than by bombers or missiles.

Geomagnetic Storms

In contrast, natural EMP from a geomagnetic super-storm generates only E3 EMP, whose wavelengths are so long that it takes conductors over 1 kilometer in length–power lines, telephone lines, pipelines, railroad tracks–to collect enough energy to do harm, so it cannot hurt small targets like automobiles or personal computers. However, a protracted nationwide blackout resulting from such a storm would stop everything within a few days. Personal computers cannot run for long on batteries, nor can automobiles run without gasoline.

Natural EMP from geomagnetic storms, caused when a coronal mass ejection from the Sun collides with the Earth's magnetosphere, poses a significant threat to the electric grid and the 18 critical infrastructures, which all depend directly or indirectly upon electricity. Geomagnetic storms occur every year, causing problems with communications and electric grids for nations at high northern latitudes, such as Norway, Sweden, Finland and Canada. The 1989 Hydro-Quebec Storm blacked out the eastern half of Canada in 92 seconds, melted an EHV transformer at the Salem, New Jersey nuclear power plant, and caused billions of dollars in economic losses.

In 1921 a geomagnetic storm 10 times more powerful than the 1989 Hydro-Quebec Storm, the Railroad Storm, afflicted the whole of North America. It did not have catastrophic consequences because electrification of the U.S. and Canada was still in its infancy. The National Academy of Sciences estimates that if the 1921 Railroad Storm recurs today, it would cause a catastrophic nationwide blackout lasting 4-10 years and costing trillions of dollars.

The Carrington Event.   The most powerful geomagnetic storm ever recorded is the 1859 Carrington Event, estimated to be ten times more powerful than the 1921 Railroad Storm and classed as a geomagnetic superstorm. Natural EMP from the Carrington Event penetrated miles deep into the Atlantic Ocean and destroyed the just-laid intercontinental telegraph cable. The Carrington Event was a worldwide phenomenon, causing fires in telegraph stations, and forest fires from telegraph lines bursting into flames, on several continents. Fortunately, in the horse-and-buggy days of 1859, civilization did not depend upon electrical systems.

Recently scientists have found that "storms like the Carrington Event are not as rare as scientists thought and could happen every few decades, seriously damaging modern communication and navigation systems around the globe" (Eisenstat 2019. Extreme solar storms may be more frequent than previously thought. phys.org).

Recurrence of a Carrington Event today would collapse electric grids and critical infrastructures all over the planet, putting at risk billions of lives. Scientists estimate that geomagnetic superstorms occur about every 100-150 years. The Earth is probably overdue to encounter another Carrington Event.

On July 22, 2012, NASA warned that a powerful solar flare had narrowly missed the Earth; had it hit, it would have generated a geomagnetic super-storm like the 1859 Carrington Event and collapsed electric grids and life-sustaining critical infrastructures worldwide.

The National Intelligence Council (NIC), which speaks for the entire U.S. Intelligence Community, published a major unclassified report in December 2012, Global Trends 2030, warning that a geomagnetic super-storm, such as a recurrence of the 1859 Carrington Event, is one of only eight "Black Swans" that could change the course of global civilization by or before 2030. The NIC concurs with the consensus view that another Carrington Event could recur at any time, possibly before 2030, and that, if it did, electric grids and critical infrastructures that support modern civilization could collapse worldwide.

NASA estimates that the likelihood of a geomagnetic super-storm is 12 percent per decade–odds that compound to better than even over a human lifetime. Earth is likely to experience a natural EMP catastrophe in our lifetimes or those of our children.
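The per-decade estimate compounds over time. Treating each decade as an independent 12 percent trial (a simplifying assumption), the cumulative odds can be sketched as:

```python
def cumulative_probability(p_per_decade: float, decades: int) -> float:
    """Chance of at least one superstorm in the given span,
    treating each decade as an independent trial."""
    return 1.0 - (1.0 - p_per_decade) ** decades

# NASA's ~12%/decade estimate over an 80-year lifetime:
print(round(cumulative_probability(0.12, 8), 2))  # 0.64 -- roughly two chances in three
```

So over eight decades the chance of at least one superstorm is about 64 percent: not a certainty, but far more likely than not.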

Non-Nuclear EMP Radio-Frequency Weapons (RFWs)

RFWs are non-nuclear weapons that use a variety of means, including explosively driven generators, to emit an electromagnetic pulse similar to the E1 EMP from a nuclear weapon, except less energetic and of much shorter radius. The range of RF Weapons is rarely more than one kilometer.

RF Weapons can be built relatively inexpensively using commercially available parts and design information available on the internet. In 2000 the Terrorism Panel of the House Armed Services Committee conducted an experiment, hiring an electrical engineer and some students to try building an RFW on a modest budget, using design information from the internet and parts purchased at Radio Shack. They built two RF Weapons in one year, both successfully tested at the U.S. Army proving grounds at Aberdeen. One was built into a Volkswagen bus, designed to be driven down Wall Street to disrupt stock market computers and information systems and bring on a financial crisis. The other was designed to fit in the crate for a Xerox machine so it could be shipped to the Pentagon, sit in the mailroom, and burn out Defense Department computers.

EMP simulators that can be carried and operated by one man, and used as an RF Weapon, are available commercially. For example, one U.S. company advertises for sale an “EMP Suitcase” that looks exactly like a metal suitcase, can be carried and operated by one man, and generates 100,000 volts/meter over a short distance. The EMP Suitcase is not intended to be used as a weapon, but as an aid for designing factories that use heavy duty electronic equipment that emit electromagnetic transients, so the factory does not self-destruct.

But a terrorist, criminal, or madman, armed with the EMP Suitcase, could potentially destroy electric grid SCADAS or an EHV transformer and blackout a city. Thanks to RF Weapons, we have arrived at a place where the technological pillars of civilization for a major metropolitan area could be toppled by a single individual. The EMP Suitcase can be purchased without a license by anyone.

Terrorists armed with RF Weapons might use unclassified computer models to duplicate the U.S. FERC study and figure out which nine crucial transformer substations need to be attacked in order to blackout the entire national grid for weeks or months. RFWs would offer significant operational advantages over assault rifles and bombs. Something like the EMP Suitcase could be put in the trunk of a car, parked and left outside the fence of an EHV transformer or SCADA colony, or hidden in nearby brush or a garbage can, while the bad guys make a leisurely getaway. If the EMP fields are strong enough, it would be just as effective as, and far less conspicuous than, dropping a big bomb to destroy the whole transformer substation. Maximum effect could be achieved by penetrating the security fence and hiding the RF Weapon somewhere even closer to the target.

Some documented examples of successful attacks using Radio Frequency Weapons, and accidents involving electromagnetic transients, are described in the Department of Defense Pocket Guide for Security Procedures and Protocols for Mitigating Radio Frequency Threats (Technical Support Working Group, Directed Energy Technical Office, Dahlgren Naval Surface Warfare Center):

  • “In the Netherlands, an individual disrupted a local bank’s computer network because he was turned down for a loan. He constructed a Radio Frequency Weapon the size of a briefcase, which he learned how to build from the Internet. Bank officials did not even realize that they had been attacked or what had happened until long after the event.”
  • “In St. Petersburg, Russia, a criminal robbed a jewelry store by defeating the alarm system with a repetitive RF generator. Its manufacture was no more complicated than assembling a home microwave oven.”
  • “In Kzlyar, Dagestan, Russia, Chechen rebel commander Salman Raduyev disabled police radio communications using RF transmitters during a raid.”
  • “In Russia, Chechen rebels used a Radio Frequency Weapon to defeat a Russian security system and gain access to a controlled area.”
  • “Radio Frequency Weapons were used in separate incidents against the U.S. Embassy in Moscow to falsely set off alarms and to induce a fire in a sensitive area.”
  • “March 21-26, 2001, there was a mass failure of keyless remote entry devices on thousands of vehicles in the Bremerton, Washington, area…The failures ended abruptly as federal investigators had nearly isolated the source. The Federal Communications Commission (FCC) concluded that a U.S. Navy presence in the area probably caused the incident, although the Navy disagreed.”
  • “In 1999, a Robinson R-44 news helicopter nearly crashed when it flew by a high frequency broadcast antenna.”
  • “In the late 1980s, a large explosion occurred at a 36-inch diameter natural gas pipeline in the Netherlands. A SCADA system, located about one mile from the naval port of Den Helder, was affected by a naval radar. The RF energy from the radar caused the SCADA system to open and close a large gas flow-control valve at the radar scan frequency, resulting in pressure waves that traveled down the pipe and eventually caused the pipeline to explode.”
  • “In June 1999 in Bellingham, Washington, RF energy from a radar induced a SCADA malfunction that caused a gas pipeline to rupture and explode.”
  • “In 1967, the USS Forrestal was located at Yankee Station off Vietnam. An A4 Skyhawk launched a Zuni rocket across the deck. The subsequent fire took 13 hours to extinguish. 134 people died in the worst U.S. Navy accident since World War II. EMI [Electro-Magnetic Interference, Pry] was identified as the probable cause of the Zuni launch.”
  • North Korea used a Radio Frequency Weapon, purchased from Russia, to attack airliners and impose an "electromagnetic blockade" on air traffic to Seoul, South Korea's capital. The repeated RFW attacks also disrupted communications and the operation of automobiles in several South Korean cities in December 2010; March 9, 2011; and April-May 2012, as reported in "Massive GPS Jamming Attack By North Korea" (GPSWORLD.COM, May 8, 2012).

Protecting the electric grid and other critical infrastructures from nuclear EMP attack will also protect them from the lesser threat posed by Radio Frequency Weapons.

Sabotage–Kinetic Attacks

Kinetic attacks are a serious threat to the electric grid and are clearly part of the game plan for terrorists and rogue states. Sabotage of the electric grid is perhaps the easiest operation for a terrorist group to execute and would be perhaps the most cost-effective means, requiring only high-powered rifles, for a very small number of bad actors to wage asymmetric warfare–perhaps against all 310 million Americans.  Terrorists have figured out that the electric grid is a major societal vulnerability.

Terror Blackout in Mexico. On the morning of October 27, 2013, the Knights Templars, a terrorist drug cartel in Mexico, attacked a big part of the Mexican grid, using small arms and bombs to blast electric substations. They blacked out the entire Mexican state of Michoacan, plunging 420,000 people into the dark and isolating them from help from the Federales. The Knights went into towns and villages and publicly executed local leaders opposed to the drug trade. Ironically, that evening in the United States, the National Geographic channel aired a television docudrama, "American Blackout", that accurately portrayed the catastrophic consequences of a cyber-attack that blacks out the U.S. grid for 10 days. The North American Electric Reliability Corporation and some utilities criticized "American Blackout" for being alarmist and unrealistic, apparently unaware that life had already anticipated art that very day across the porous border in Mexico, and months earlier, at Metcalf, in the United States.

Terror Blackout of Yemen. On June 9, 2014, while world media attention was focused on the terror group Islamic State in Iraq and Syria (ISIS) overrunning northern Iraq, Al Qaeda in the Arabian Peninsula (AQAP) used mortars and rockets to destroy electric transmission towers and black out all of Yemen, a nation of 16 cities and 24 million people. AQAP's operation against the Yemeni electric grid is the first time in history that terrorists have plunged an entire nation into blackout. Yet the blackout went virtually unreported by the world press.

The Metcalf Attack (San Jose, California).  On April 16, 2013, apparent terrorists or professional saboteurs practiced an attack on the Metcalf transformer substation outside San Jose, California, which services a 450 megawatt power plant providing electricity to Silicon Valley and the San Francisco area. The utility Pacific Gas and Electric (PG&E), which owns Metcalf, and NERC claimed that the incident was merely an act of vandalism, and discouraged press interest. Consequently, the national press paid almost no attention to the Metcalf affair for nine months. Jon Wellinghoff, Chairman of the U.S. Federal Energy Regulatory Commission, conducted an independent investigation of Metcalf. He brought in the best of the best of U.S. special forces–the instructors who train the U.S. Navy SEALS. They concluded that the attack on Metcalf was a highly professional military operation, comparable to what the SEALS themselves would do when attacking a power grid.

Footprints suggested that a team of perhaps as many as six men executed the Metcalf operation. They knew about an underground communications tunnel at Metcalf and knew how to access it by removing a manhole cover (which required at least two men). They cut communications cables and the 911 cable to isolate the site. They had pre-surveyed firing positions. They used AK-47s, the favorite assault rifle of terrorists and rogue states. They knew precisely where to shoot to maximize damage to the 17 transformers at Metcalf. They escaped into the night just as the police arrived and have not been apprehended or even identified. They left no fingerprints anywhere, not even on the expended shell casings.

The Metcalf assailants only damaged but did not destroy the transformers–apparently deliberately. The Navy SEALS and U.S. FERC Chairman Wellinghoff concluded that the Metcalf operation was a “dry run”, like a military exercise, practice for a larger and more ambitious attack on the grid to be executed in the future.   Military exercises never try to destroy the enemy, and try to keep a low profile so that the potential victim is not moved to reinforce his defenses. For example, Russian strategic bomber exercises only send a few aircraft to probe U.S. air defenses in Alaska, and never actually launch nuclear-armed cruise missiles. They want to probe and test our air defenses–not scare us into strengthening those defenses.

Chairman Wellinghoff was aware of an internal study by U.S. FERC that concluded saboteurs could blackout the national electric grid for weeks or months by destroying just nine crucial transformer substations.

Much to his credit, Jon Wellinghoff became so alarmed by his knowledge of U.S. grid vulnerability, and the apparent NERC cover-up of the Metcalf affair, that he resigned his chairmanship to warn the American people in a story published by the Wall Street Journal in February 2014. The Metcalf story sparked a firestorm of interest in the press and investigations by Congress. Consequently, NERC passed, on an emergency basis, a new standard for immediately upgrading physical security for the national electric grid. PG&E promised to spend over $100 million over the next three years to upgrade physical security.

Two months later, amid growing fears that ISIS may somehow act on its threats to attack America, on August 27, 2014, parties unknown again broke into the Metcalf transformer substation and evaded PG&E security guards and the police. PG&E claims that the second Metcalf affair was, again, mere vandalism. Yet after NERC's emergency new physical security standards and PG&E's alleged massive investment in improved security, Metcalf should have been the Rock of Gibraltar of the North American electric grid. If terrorists or anyone else are planning an attack on the U.S. electric grid, Metcalf would be the perfect place to test the supposedly strengthened security of the national grid.

Does stolen equipment prove that Metcalf-2 was a burglary? In the world of spies and saboteurs, mock burglary is a commonplace device for covering-up an intelligence operation, and hopefully quelling fears and keeping the victim unprepared.

If PG&E is telling the truth, and the second successful operation against Metcalf is merely by vandals–this is an engraved invitation by ISIS or Al Qaeda or rogue states to attack the U.S. electric grid. It means that all of PG&E and NERC’s vaunted security improvements cannot protect Metcalf from the stupidest of criminals, let alone from terrorists.

About one month later, on September 23, 2014, another investigation of PG&E security at transformer substations, including Metcalf, reported that the transformer substations are still not secure. Indeed, at one site a gate was left wide open. Former CIA Director R. James Woolsey, after reviewing the investigation results, concluded, “Overall, it looks like there is essentially no security.”

Why isn’t anything being done?

In the U.S. Congress, bipartisan bills with strong support, such as the GRID Act and the SHIELD Act, that would protect the electric grid from nuclear and natural EMP, have been stalled for a half-decade, blocked by corruption and lobbying by powerful utilities.

The U.S. Federal Energy Regulatory Commission has published interagency reports acknowledging that nuclear EMP attack is an existential threat against which the electric grid must be protected. But U.S. FERC claims to lack legal authority to require the North American Electric Reliability Corporation and the electric utilities to protect the grid. "Given the national security dimensions to this threat, there may be a need to act quickly to act in a manner where action is mandatory rather than voluntary and to protect certain information from public disclosure," said Joseph McClelland, Director of FERC's Office of Energy Projects, testifying in May 2011 before the Senate Energy and Natural Resources Committee. "The commission's legal authority is inadequate for such action." Others think U.S. FERC has sufficient legal authority to protect the grid, but lacks the will to do so because of an incestuous relationship with NERC.

NERC and the electric power industry deny that it is their responsibility to protect the grid from nuclear EMP attack. NERC thinks it is not their job, but the job of the Department of Defense, to protect the United States from nuclear EMP attack, so argued NERC President and CEO, Gerry Cauley, in his May 2011 testimony before the Senate Energy and Natural Resources Committee. Mark Lauby, NERC’s reliability manager, is quoted by Peter Behr in his EENEWS article (August 26, 2011) that “…the terrorist scenario–foreseen as the launch of a crude nuclear weapon on a version of a SCUD missile from a ship off the U.S. coast–is the government’s responsibility, not industry’s.”

But DOD can protect the grid only by waging preventive wars against countries like Iran, North Korea, China and Russia, or by vast expansion and improvement of missile defenses costing tens of billions of dollars–none of which may stop the EMP threat.

The Department of Defense has no legal authority to EMP harden the privately owned electric grid. Such protection is supposed to be the job of NERC and the utilities.

Preventive wars would make an EMP attack more likely, perhaps inevitable. It is not worth spending thousands of lives and trillions of dollars on wars, just so NERC and the utilities can avoid a small increase in electric bills for EMP hardening the grid. U.S. FERC estimates EMP hardening would cost the average ratepayer an increase in their electric bill of 20 cents annually.

NERC "Operational Procedures" Non-Solution. The North American Electric Reliability Corporation (NERC), the lobby for the electric power industry that is also supposed to set industry standards for grid security, claims it can protect the grid from geomagnetic super-storms through "operational procedures." Operational procedures would rely on satellite early warning of an impending Carrington Event to let grid operators shift electric loads around, perhaps deliberately browning out or blacking out part or all of the grid in order to save it. NERC estimates operational procedures would cost the electric utilities almost nothing, about $200,000 annually.

But there is no command and control system for coordinating operational procedures among the 3,000 independent electric utilities in the United States.  Operational procedures routinely fail to prevent blackouts from normal terrestrial weather, like snowstorms and hurricanes. There is no credible basis for thinking that operational procedures alone would be able to cope with a geomagnetic super-storm–a threat unprecedented in the experience of NERC and the electric power industry.

The ACE satellite NERC proposes to use is aged and sometimes gives false warnings that are not a reliable basis for implementing operational procedures. While coronal mass ejections can typically be seen approaching Earth about three days before impact, the Carrington Event reached Earth in only 11 hours, and the ACE satellite cannot warn whether a geo-storm will hit the Earth until 20 to 30 minutes before impact. Quite recently, on September 19-20, 2014, the National Oceanic and Atmospheric Administration and NERC demonstrated again that they are unable to ascertain, until shortly before impact, whether a coronal mass ejection (CME) will cause a threatening geomagnetic storm on Earth.
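The short warning window follows from orbital geometry. ACE sits at the Sun-Earth L1 point, roughly 1.5 million km upstream of Earth, and only measures a CME's magnetic orientation–the property that decides whether it is dangerous–as the cloud sweeps past. The remaining warning time is just distance divided by CME speed. A rough sketch (the speeds are illustrative assumptions):

```python
L1_DISTANCE_KM = 1.5e6  # approximate Sun-Earth L1 standoff distance

def warning_minutes(cme_speed_km_s: float) -> float:
    """Minutes between a CME passing the L1 monitor and reaching Earth."""
    return L1_DISTANCE_KM / cme_speed_km_s / 60.0

print(warning_minutes(800))   # ~31 min for a typical fast CME
print(warning_minutes(2000))  # ~12 min for a Carrington-class CME
```

So even with a perfectly functioning satellite, grid operators would have tens of minutes, not days, of actionable warning–and the faster and more dangerous the CME, the shorter the warning.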

Ironically, on September 8-10, 2014, a week before this CME, there was a security conference on threats to the national electric grid meeting in San Francisco, where executives from the electric power industry credited themselves with building robust resilience into the electric power grid. They even congratulated themselves and their industry with exemplary performance coping with and recovering from blackouts caused by hurricanes and other natural disasters. The thousands of Americans left homeless due to Hurricanes Katrina and Sandy, the hundreds of businesses lost or impoverished in New Orleans and New York City, would no doubt disagree.

The U.S. Government Accountability Office (GAO), if it had jurisdiction to grade electric grid reliability during hurricanes, would almost certainly give the utilities a failing grade. Ever since Hurricane Andrew in 1992, the U.S. GAO has found serious fault with efforts by the Federal Emergency Management Agency, the Department of Homeland Security, and the Department of Defense to rescue and recover the American people from every major hurricane. Blackout of the electric grid, of course, seriously impedes the capability of FEMA, DHS, and DOD to do anything.

Since the utilities regulate themselves through the North American Electric Reliability Corporation, their uncritical view of their own performance reinforces a “do nothing” attitude in the electric power industry.

For example, after the Great Northeast Blackout of 2003, it took NERC a decade to propose a new “vegetation management plan” to protect the national grid from tree branches. NERC has been even more resistant and slow to respond to other much more serious threats, including cyber-attack, sabotage, and natural EMP from geomagnetic storms.

Most alarming, NERC and the utilities do not appear to know their jobs, and are already in panic and despair over the challenges posed by severe weather, cyber threats, and geomagnetic storms. Peter Behr in an article published in Energy Wire (September 12, 2014) reports that at an electric grid security summit, Gary Leidich, Board Chairman of the Western Electricity Coordinating Council–which oversees reliability and security for the Western Grid–appears overwhelmed, as if he wants to escape his job, crying: “Who is really responsible for reliability? And who has the authority to do something about it?”

"The biggest cyber threat is from an electromagnetic pulse, which in the military doctrines of our potential adversaries would be part of an all-out cyber war," writes former Speaker of the House Newt Gingrich in his article "The Gathering Cyber Storm" (CNN, August 12, 2013). Gingrich warns that NERC "should lead, follow or get out of the way of those who are trying to protect our nation from a cyber catastrophe. Otherwise, the Congress that certified it as the electric reliability organization can also decertify it."

Much to their credit, a few in the electric power industry understand the necessity of protecting the grid from nuclear EMP attack, have broken ranks with NERC, and are trying to meet the crisis. John Houston of Centerpoint Energy in Texas; Terry Boston of PJM, the largest grid in North America (located in the Midwest); and Con Ed in New York–all are trying to protect their grids from nuclear EMP.

State Governors and State Legislatures need to come to the rescue. States have a duty to their citizens to fill the gap in homeland security and public safety when the federal government and the utilities fail. State governments and their Public Utility Commissions have the legal authority and the moral obligation to, where necessary, compel the utilities to secure the grid against all hazards, and an obligation to help, oversee and ensure that grid security is being done right by those utilities that act voluntarily. Failing to protect the grid from nuclear EMP attack is failing to protect the nation from all hazards.

Regulatory Malfeasance

As noted repeatedly elsewhere, Washington’s process for regulating the electric power industry has never worked well, in fact has always been broken. The electric power industry is the only civilian critical infrastructure that is allowed to regulate itself.

The North American Electric Reliability Corporation is the industry’s former trade association, which continues to act as an industry lobby. NERC is not a U.S. government agency. It does not represent the interests of the people. NERC in its charter answers to its “stakeholders”–the electric utilities that pay for NERC, including NERC’s highly salaried executives and staff.

The U.S. Federal Energy Regulatory Commission, the U.S. government agency that is supposed to partner with NERC in protecting the national electric grid, has publicly testified before Congress that U.S. FERC lacks regulatory power to compel NERC and the electric power industry to protect the grid from natural and nuclear EMP and other threats. Consider the contrast in regulatory authority between the U.S. FERC and, as examples, the U.S. Federal Aviation Administration (FAA), the U.S. Department of Transportation (DOT), or the U.S. Food and Drug Administration (FDA):

  • FAA has regulatory power to compel the airlines industry to ground aircraft considered unsafe, to change aircraft operating procedures considered unsafe, and to make repairs or improvements to aircraft in order to protect the lives of airline passengers.
  • DOT has regulatory power to compel the automobile industry to install on cars safety glass, seatbelts, and airbags in order to protect the lives of the driving public.
  • FDA has power to regulate the quality of food and drugs, and can ban under criminal penalty the sale of products deemed by the FDA to be unsafe to the public.

Unlike the FAA, DOT, FDA or any other U.S. government regulatory agency, the Federal Energy Regulatory Commission does not have legal authority to compel the industry it is supposed to regulate to act in the public interest. For example, U.S. FERC lacks legal power to direct NERC and the electric utilities to install blocking devices, surge arrestors, faraday cages or other protective devices to save the grid, and the lives of millions of Americans, from a natural or nuclear EMP catastrophe. Or so the FERC has testified to the Congress.

Congress has responded to this dilemma by introducing bipartisan bills, the SHIELD Act and the GRID Act, to empower U.S. FERC to protect the grid from an EMP catastrophe. Lobbying by NERC has stalled both bills for years. Currently, U.S. FERC only has the power to ask NERC to propose a standard to protect the grid. NERC standards are approved, or rejected, by the electric power industry. Historically, NERC typically takes years to develop standards to protect the grid that will pass industry approval. For example, NERC took a decade to propose a “vegetation management” standard to protect the grid from tree branches, finally doing so in 2012–ten years after the tree-branch-induced Great Northeast Blackout of 2003 plunged 50 million Americans into the dark. Once NERC proposes a standard to U.S. FERC, FERC cannot modify the standard, but must accept or reject it. If U.S. FERC rejects the proposed standard, NERC goes back to the drawing board, and the process starts all over again. The NERC-FERC arrangement is a formula for thwarting effective U.S. government regulation of the electric power industry. Fortunately, Governors, State Legislatures and their Public Utility Commissions have legal power to compel utilities to protect the grid from natural and nuclear EMP and other threats.

Critics argue that the U.S. Federal Energy Regulatory Commission is corrupt–because of a too-cozy relationship with NERC and a revolving door between FERC and the electric power industry–and cannot be trusted to secure the grid, even if given legal powers to do so. U.S. FERC’s approval of NERC’s hollow standard for geomagnetic storms appears proof positive that Washington is too corrupt to be trusted.

NERC’s Hollow GMD Protection Standard

Observers serving on NERC’s Geo-Magnetic Disturbance Task Force, which developed the NERC standard for grid protection against geomagnetic storms, have denounced the NERC GMD Standard and published papers exposing not merely that the Standard is inadequate, but that it is hollow–a pretended or fake Standard. These experts opposed to the NERC GMD Standard include the foremost authorities on geomagnetic storms and electric grid vulnerability in the Free World. See:

  • John G. Kappenman and Dr. William A. Radasky, Examination of NERC GMD Standards and Validation of Ground Models and Geo-Electric Fields Proposed in this NERC GMD Standard, Storm Analysis Consultants and Metatech Corporation, July 30, 2014 (Executive Summary appended to this chapter).
  • EIS Council Comments on Benchmark GMD Event for NERC GMD Task Force Consideration, Electric Infrastructure Security Council, May 21, 2014.
  • Thomas Popik and William Harris for The Foundation for Resilient Societies, Reliability Standard for Geomagnetic Disturbance Operations, Docket No. RM14-1-000, critiques submitted to U.S. FERC on March 24, July 21, and August 18, 2014.

Kappenman and Radasky, who served on the Congressional EMP Commission and are among the world’s foremost scientific and technical experts on geomagnetic storms and grid vulnerability, warn that NERC’s GMD Standard consistently underestimates the threat from geostorms: “When comparing…actual geo-electric fields with NERC model derived geo-electric fields, the comparisons show a systematic under-prediction in all cases of the geo-electric field by the NERC model.”

The Foundation for Resilient Societies, which includes on its Board of Advisors a brain trust of world-class scientific experts–including Dr. William Graham, who served as President Reagan’s Science Advisor, director of NASA, and Chairman of the Congressional EMP Commission–concludes from its participation on the NERC GMD Task Force that NERC “cooked the books” to produce a hollow GMD Standard: “The electric utility industry clearly recognized in this instance how to design a so-called ‘reliability standard’ that, though foreseeably ineffective in a severe solar storm, would avert financial liability to the electric utility industry even while civil society and its courts might collapse from longer-term outages. In this instance and others, a key feature of the NERC standard-setting process was to progressively water down requirements until the proposed standard obviously benefitted the ballot participants and therefore could pass. In the process, any remaining public benefit was diluted beyond perceptibility…”

The several Foundation critiques identify numerous profound and obvious holes in what they describe as NERC’s “hollow” GMD Standard, and rightly castigate U.S. FERC for approving what is, in reality, a papier-mâché GMD Standard that would not protect the grid from a geomagnetic super-storm:

  • “FERC erred by approving a standard that exempts transmission networks with no transformers with a high side (wye-grounded) voltage at or above 200 kV when actual data and lessons learned from past operating incidents show significant adverse impacts of solar storms on equipment operating below 200 kV.”
  • “The exclusion of networks operating at 200kV and below is inconsistent with the prior bright-line definition of the Bulk Electric System” as defined by U.S. FERC.
  • “FERC erred by approving a standard that does not require instrumentation of electric utility networks during solar storm conditions when installation of GIC [Ground Induced Current–Pry] monitors would be cost-effective and in the public interest.”
  • “FERC erred by approving a standard that does not require utilities to perform the most rudimentary planning for solar storms, i.e., mathematical comparison of megawatt capacity of assets at risk during solar storms to power reserves.”
  • “FERC erred by concluding that sixteen Reliability Coordinators could directly communicate with up to 1,500 Transmission and Generator Operators during severe GMD events with a warning time of as little as 15 minutes and that Balancing Authorities and Generator Operators should not take action on their own because of possible lack of GIC data.”
  • “FERC erred by assuming that there would be reliable and prompt two-way communications between Reliability Coordinators and Generator Operators immediately before and during severe solar storms.”

The Foundation is also critical of U.S. FERC for approving a NERC GMD Standard that lacks transparency and accountability. The utilities are allowed to assess their own vulnerability to geomagnetic storms, to devise their own preparations, to invest as much or as little as they like in those preparations, and all without public scrutiny or review of utility plans by independent experts.

Dr. William Radasky, who holds the Lord Kelvin Medal for setting standards for protecting European electronics from natural and nuclear EMP, and John Kappenman, who helped design the ACE satellite upon which industry relies for early warning of geomagnetic storms, conclude that the NERC GMD Standard so badly underestimates the threat that “its resulting directives are not valid and need to be corrected.”

Kappenman and Radasky: “These enormous model errors also call into question many of the foundation findings of the NERC GMD draft standard. The flawed geo-electric field model was used to develop the peak geo-electric field levels of the Benchmark model proposed in the standard. Since this model understates the actual geo-electric field intensity for small storms by a factor of 2 to 5, it would also understate the maximum geo-electric field by similar or perhaps even larger levels. Therefore, the flaw is entirely integrated into the NERC Draft Standard and its resulting directives are not valid and need to be corrected.”

The excellent Kappenman-Radasky critique of the NERC GMD Standard represents the consensus view of all the independent observers who participated in the NERC GMD Task Force, including the author. The Kappenman-Radasky critique warns NERC and U.S. FERC that “Nature cannot be fooled!”
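Why a modeling error of this size matters: ground-induced current (GIC) in a transformer scales roughly linearly with the driving geo-electric field, so a factor-of-2-to-5 underprediction of the field carries straight through to the predicted currents and thermal stress. A minimal sketch of that error propagation, where the field value and the lumped network constant are hypothetical numbers chosen purely for illustration:

```python
# Hypothetical illustration: GIC scales roughly linearly with the
# geo-electric field E (V/km), so a factor-k underprediction of E
# becomes a factor-k underprediction of the induced current.

def gic_amps(e_field_v_per_km, network_constant=25.0):
    """Induced current (amps) for a given geo-electric field.

    network_constant (A per V/km) is a made-up lumped value standing
    in for line resistances and network topology.
    """
    return network_constant * e_field_v_per_km

actual_field = 10.0                     # V/km, hypothetical storm value
for under_factor in (2, 5):             # the range Kappenman-Radasky report
    modeled_field = actual_field / under_factor
    print(under_factor,
          gic_amps(actual_field),       # what the grid would actually see
          gic_amps(modeled_field))      # what the flawed model predicts
```

The point is not the particular numbers but the linearity: any protection threshold set from the modeled field is wrong by the same factor as the field itself.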

Perhaps most revelatory of U.S. FERC’s untrustworthiness: by approving the NERC GMD Standard that grossly underestimates the threat from geo-storms, U.S. FERC abandoned its own much more realistic estimate of the geo-storm threat. It is incomprehensible why U.S. FERC would ignore the findings of its own excellent interagency study, one of the most in-depth and meticulous studies of the EMP threat ever performed, which was coordinated with Oak Ridge National Laboratory, the Department of Defense, and the White House.

U.S. FERC’s preference for NERC’s “junk science” over U.S. FERC’s own excellent scientific assessment of the geo-storm threat can only be explained as incompetence or corruption or both.

What do we know about a nuclear EMP?

A high-altitude nuclear electromagnetic pulse attack is the most severe threat to the electric grid and other critical infrastructures, far more damaging than a geomagnetic super-storm, the worst case of severe weather, sabotage by kinetic attacks, or cyber-attack.  Not one major U.S. Government study dissents from the consensus that nuclear EMP attack would be catastrophic, and that protection is achievable and necessary.

There is more empirical data on nuclear EMP and its effects on electronic systems and infrastructures than almost any other threat, except severe weather. In addition to the 1962 STARFISH PRIME high-altitude nuclear test that generated EMP that damaged electronic systems in Hawaii and elsewhere, the Department of Defense has decades of atmospheric and underground nuclear test data relevant to EMP. And defense scientists have for over 50 years studied EMP effects on electronics in simulators. Most recently, the Congressional EMP Commission made its threat assessment by testing a wide range of modern electronics crucial to critical infrastructures in EMP simulators.

There is a scientific and strategic consensus behind the Congressional EMP Commission’s assessment that a nuclear EMP attack would have catastrophic consequences for the United States, but that “correction is feasible and well within the Nation’s means and resources to accomplish.” Every major U.S. Government study to examine the EMP threat and solutions concurs with the EMP Commission, including the Congressional Strategic Posture Commission (2009), the U.S. Department of Energy and North American Electric Reliability Corporation (2010), and the U.S. Federal Energy Regulatory Commission interagency report, coordinated with the White House, Department of Defense, and Oak Ridge National Laboratory (2010).

Russian Nuclear EMP Tests.  STARFISH PRIME is not the only high-altitude nuclear EMP test. The Soviet Union (1961-1962) conducted a series of high-altitude nuclear EMP tests over what was then its own territory–not once but seven times–using a variety of warheads of different designs. The EMP fields from six tests covered Kazakhstan, an industrialized area larger than Western Europe.

In 1994, during a thaw in the Cold War, Russia shared the results from one of its nuclear EMP tests, which used its least efficient warhead design for EMP–it collapsed the Kazakhstan electric grid, damaging transformers, generators and all other critical components. During the Kazakhstan high-altitude EMP experiments the USSR tested some low-yield warheads, at least one probably an Enhanced Radiation Warhead that emitted large quantities of gamma rays, which generate the E1 EMP electromagnetic shockwave. It is possible that the USSR developed its Super-EMP Warhead early in the Cold War as a secret super-weapon.

The Soviets apparently quickly repaired the damage to Kazakhstan’s electric grid and other critical infrastructures, thereby proving that with smart planning and good preparedness it is possible to survive and recover from an EMP catastrophe.

Other threats: Cyber-attack

Cyber-attack–the use of computer viruses and hacking to invade and manipulate information systems and SCADAS–is almost universally described by U.S. political and military leaders as the greatest threat facing the United States. Every day, literally thousands of cyber-attacks are made on U.S. civilian and military systems, most of them designed to steal information. Joint Chiefs Chairman General Martin Dempsey warned on June 27, 2013, that the United States must be prepared for the revolutionary threat represented by cyber warfare (Claudette Roulo, DoD News, American Forces Press Service): “One thing is clear. Cyber has escalated from an issue of moderate concern to one of the most serious threats to our national security,” cautioned Chairman Dempsey. “We now live in a world of weaponized bits and bytes, where an entire country can be disrupted by the click of a mouse.”

Cyber Hype? Skeptics claim that the catastrophic scenarios envisioned for cyber warfare are grossly exaggerated, in part to justify costly cyber programs wanted by both the Pentagon and industry at a time of scarce defense dollars. Many of the skeptical arguments about the limitations of hacking and computer viruses are technically correct. However, it is not widely understood that foreign military doctrines define “information warfare” and “cyber warfare” as encompassing kinetic attacks and EMP attack–which is an existential threat to the United States.

Thomas Rid’s book Cyber War Will Not Take Place (Oxford University Press, 2013) exemplifies the viewpoint of a growing minority of highly talented cyber security experts and scholars who think there is a conspiracy of governments and industry to hype the cyber threat. Rid’s bottom line is that hackers and computer bugs are capable of causing inconvenience–not apocalypse. Cyber-attacks can deny services, damage computers selectively but probably not wholesale, and steal information, according to Rid. He does not rule out that future hackers and viruses could collapse the electric grid, concluding such a feat would be, not impossible, but nearly so.

In a 2012 BBC interview, Rid chastised then Secretary of Defense Leon Panetta for claiming that Iran’s Shamoon Virus, used against the U.S. banking system and Saudi Arabia’s ARAMCO, could foreshadow a “Cyber Pearl Harbor” and for threatening military retaliation against Iran. Rid told the BBC that the world has, “Never seen a cyber-attack kill a single human being or destroy a building.”

Cyber security expert Bruce Schneier claims, “The threat of cyberwar has been hugely hyped” to keep growing cyber security programs at the Pentagon’s Cyber Command, the Department of Homeland Security, and new funding streams to Lockheed Martin, Raytheon, Century Link, and AT&T, who are all part of the new cyber defense industry. The Brookings Institution’s Peter Singer wrote in November 2012, “Zero. That is the number of people who have been hurt or killed by cyber terrorism.” Ronald J. Deibert, author of Black Code: Inside the Battle for Cyberspace, a lab director and professor at the University of Toronto, accuses RAND and the U.S. Air Force of exaggerating the threat from cyber warfare.

Peter Sommer of the London School of Economics and Ian Brown of Oxford University, in Reducing Systemic Cybersecurity Risk, a study for Europe’s Organization for Economic Cooperation and Development, are far more worried about natural EMP from the Sun than computer viruses: “a catastrophic cyber incident, such as a solar flare that could knock out satellites, base stations and net hardware” makes computer viruses and hacking “trivial in comparison.”

The now-declassified Aurora experiment is the empirical basis for the claim that a computer virus might be able to collapse the national electric grid. In Aurora, a virus was inserted into the SCADAS running a generator, causing the generator to malfunction and eventually destroy itself. However, using a computer virus to destroy a single generator does not prove it is possible or likely that an adversary could destroy all or most of the generators in the United States. Aurora took a protracted time to burn out a generator–and no intervention by technicians attempting to save the generator was allowed, as would happen in a real nationwide attack, if one could be engineered. Nor is there a single documented case of even a local blackout in the United States caused by a computer virus or hacking–which surely would have happened by now if vandals, terrorists, or rogue states could easily attack U.S. critical infrastructures by hacking.

Even the Stuxnet Worm, the most successful computer virus so far–reportedly engineered jointly by the U.S. and Israel, according to White House sources, to attack Iran’s nuclear weapons program–proved a disappointment. Stuxnet succeeded in damaging only 10 percent of Iran’s centrifuges for enriching uranium, and did not stop or even significantly delay Tehran’s march toward the bomb. During the recently concluded Gaza War between Israel and Hamas, a major cyber campaign using computer bugs and hacking was launched against Israel by Hamas, the Syrian Electronic Army, Iran, and sympathetic hackers worldwide. The Gaza War was a Cyber World War against Israel.

The Institute for National Security Studies, at Tel Aviv University, in “The Iranian Cyber Offensive during Operation Protective Edge” (August 26, 2014) reports that the cyber-attacks caused inconvenience and in the worst case some alarm, over a false report that the Dimona nuclear reactor was leaking radiation: “…the focus of the cyber offensive…was the civilian internet. Iranian elements participated in what the C4I officer described as an attack unprecedented in its proportions and the quality of its targets….The attackers had some success when they managed to spread a false message via the IDF’s official Twitter account saying that the Dimona reactor had been hit by rocket fire and that there was a risk of a radioactive leak.” However, the combined hacking efforts of Hamas, SEA, Iran and hackers worldwide did not blackout Israel or significantly impede Israel’s war effort.

But tomorrow is always another day. Cyber warriors are right to worry that perhaps someday someone will develop the cyber bug version of an atomic bomb. Perhaps such a computer virus already exists in a foreign laboratory, awaiting use in a future surprise attack. On July 6, 2014, reports surfaced that Russian intelligence services allegedly infected 1,000 power plants in Western Europe and the United States with a new computer virus called Dragonfly. No one knows what Dragonfly is supposed to do. Some analysts think it was just probing the defenses of western electric grids. Others think Dragonfly may have inserted logic bombs into SCADAS that can disrupt the operation of electric power plants in a future crisis.

Cyber warfare is an existential threat to the United States, not because of computer viruses and hacking alone, but as envisioned in the military doctrines of potential adversaries, whose plans for an all-out Cyber Warfare Operation include the full spectrum of military capabilities–including EMP attack. In 2011, a U.S. Army War College study, In The Dark: Planning for a Catastrophic Critical Infrastructure Event, warned U.S. Cyber Command that U.S. doctrine should not overly focus on computer viruses to the exclusion of EMP attack and the full spectrum of other threats planned by potential adversaries.

Reinforcing the above, a Russian technical article on cyber warfare by Maxim Shepovalenko (Military-Industrial Courier July 3, 2013), notes that a cyber-attack can collapse “the system of state and military control…its military and economic infrastructure” because of “electromagnetic weapons…an electromagnetic pulse acts on an object through wire leads on infrastructure, including telephone lines, cables, external power supply and output of information.” Cyber warriors who think narrowly in terms of computer hacking and viruses invariably propose anti-hacking and anti-viruses as solutions. Such a solution will result in an endless virus versus anti-virus software arms race that may ultimately prove unaffordable and futile.

The worst-case cyber scenario envisions a computer virus infecting the SCADAS that regulate the flow of electricity into EHV transformers, damaging the transformers with overvoltage, and causing a protracted national blackout. But if the transformers are protected with surge arrestors against the worst threat–nuclear EMP attack–they would be unharmed by the worst possible system-generated overvoltage that any computer virus might produce. This EMP hardware solution would provide a permanent and relatively inexpensive fix to the extremely expensive and apparently endless virus-versus-anti-virus software arms race that is ongoing in the new cyber defense industry.


Other threats: Severe Weather

Hurricanes, snow storms, heat waves and other severe weather pose a growing threat to the increasingly overtaxed, aged and fragile national electric grid. So far, the largest and most protracted blackouts in the United States have been caused by severe weather.  For example:

  • Hurricane Katrina (August 29, 2005), the worst natural disaster in U.S. history, blacked out New Orleans and much of Louisiana; the blackout seriously impeded rescue and recovery efforts. Lawlessness swept the city. Electric power was not restored to parts of New Orleans for months, making some neighborhoods a criminal no man’s land too dangerous to live in. New Orleans has still not fully recovered its pre-Katrina population. Economic losses to the Gulf States region totaled $108 billion.
  • Hurricane Sandy, on October 29, 2012, caused blackouts in parts of New York and New Jersey that in some places lasted weeks. Again, as in Katrina, the blackout gave rise to lawlessness and seriously impeded rescue and recovery. Thousands were rendered homeless, in whole or in part because of the protracted blackout in some neighborhoods. Partial and temporary blackouts were experienced in 24 States. Total economic losses were $68 billion.
  • A heat wave on August 14, 2003, caused a power line to sag into a tree branch, a seemingly minor incident that began a series of cascading failures resulting in the Great Northeast Blackout of 2003. Some 50 million Americans were without electric power–including New York City. Although the grid largely recovered after a day, disruption of the nation’s financial capital was costly, with estimated economic losses of about $6 billion.
  • On September 18, 2014, a heat wave caused rolling brownouts and blackouts in northern California so severe that some radio commentators speculated that a terrorist attack on the grid might be underway.


What to do: All Hazards Strategy–EMP Protection is Key

Most of the general public and State governments are unaware of the EMP threat and that political gridlock in Washington has prevented the Federal government from implementing any of the several cost-effective plans for protecting the national electric grid.

All Hazards Protection: Most state governments are unaware that they can protect the grid within their State to shield their citizens from the catastrophic consequences of a national blackout, and that protecting the grid from the worst threat–nuclear EMP–will also help to protect it from other hazards such as geomagnetic storm EMP, cyber-attack, sabotage, and severe weather.

States Should EMP Harden Their Grids.  All states should prepare themselves for all hazards in this age of the Electronic Blitzkrieg.  State governments and their Public Utility Commissions should exercise aggressive oversight to ensure that the transformer substations and electric grids in their States are safe and secure. The record of NERC and the electric utilities indicates they cannot be trusted to provide for the security of the grid. State governments can protect their grid from sabotage by the “all hazards” strategy that protects against the worst threat–nuclear EMP attack. For example, Faraday cages to protect EHV transformers and SCADAS from EMP would also screen these vital assets from view, so they could not be accurately targeted by high-powered rifles, as is necessary to destroy them by small-arms fire. The Faraday cages could be made of heavy metal or otherwise fortified for more robust protection against more powerful weapons, like rocket-propelled grenades.

Surge arrestors to protect EHV transformers and SCADAS from nuclear EMP would also protect the national grid from collapse due to sabotage. The U.S. FERC scenario in which terrorists succeed in collapsing the whole national grid by destroying merely nine transformer substations works only because of cascading overvoltage. When the nine key substations are destroyed, megawatts of electric power are suddenly dumped onto other transformers, which in turn become overloaded and fail, dumping yet more megawatts onto the grid. Cascading failures of more and more transformers ultimately cause a protracted national blackout. This worst-case scenario for sabotage could not happen if the transformers and SCADAS are protected against nuclear EMP–which is a more severe threat than any possible system-generated overvoltage.
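The cascade mechanism can be illustrated with a toy model: when a substation fails, its load is redistributed among the survivors, which may push them past their own ratings and fail in turn. All capacities, loads, and the equal-redistribution rule here are hypothetical simplifications, chosen only to show why a few initial failures can take down an entire stressed network:

```python
# Toy cascading-failure model (hypothetical numbers throughout):
# each transformer carries a load and has a rating; when one fails,
# its load is split evenly among the survivors.

def cascade(loads, ratings, initially_failed):
    """Return the set of transformers that ultimately fail."""
    failed = set(initially_failed)
    while True:
        alive = [t for t in loads if t not in failed]
        if not alive:
            return failed                     # total blackout
        shed = sum(loads[t] for t in failed)  # megawatts to redistribute
        per_unit = shed / len(alive)
        newly = {t for t in alive
                 if loads[t] + per_unit > ratings[t]}
        if not newly:
            return failed                     # cascade has stopped
        failed |= newly                       # overloads trigger more failures

# Nine hypothetical substations, each already near its rating:
# destroying just one overloads the rest and the network collapses.
loads   = {t: 900 for t in range(9)}    # MW carried
ratings = {t: 1000 for t in range(9)}   # MW capacity
print(len(cascade(loads, ratings, {0})))  # 9: all substations lost
```

With lightly loaded substations the same initial failure goes nowhere; the cascade depends on the grid already running close to its limits, which is the condition the sabotage scenario exploits.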

Critics rightly argue that NERC’s proposed operational procedures (satellite warnings) are a non-solution, designed as an excuse to avoid the expense of the only real solution–physically hardening the electric grid to withstand EMP.

NERC rejects the recommendation of the Congressional EMP Commission to physically protect the national electric grid from nuclear EMP attack by installing blocking devices, surge arrestors, Faraday cages and other proven technologies. These measures would also protect the grid from the worst natural EMP from a geomagnetic super-storm like another Carrington Event. The estimated one-time cost–$2 billion–is what the United States gives away every year in foreign aid to Pakistan.

Yet Washington remains gridlocked between lobbying by NERC and the wealthy electric power industry on the one hand, and the recommendations of the Congressional EMP Commission and other independent scientific and strategic experts on the other hand. The States should not wait for Washington to act, but should act now to protect themselves.

While gridlock in Washington has prevented the Federal Government from protecting the national electric power infrastructure, threats to the grid–and to the survival of the American people–from EMP and other hazards are looming ever larger. Grid vulnerability to EMP and other threats is now a clear and present danger.

The Congressional EMP Commission warned that an “all hazards” strategy should be pursued to protect the electric grid and other critical infrastructures, which means trying to find common solutions that protect against more than one threat–ideally all threats. The “all hazards” strategy is the most practical and most cost-effective solution to protecting the electric grid and other critical infrastructures. Electric grid operation and vulnerability is critically dependent upon two key technologies: Extra-High Voltage (EHV) transformers and Supervisory Control and Data Acquisition Systems (SCADAS).

The Congressional EMP Commission recommended protecting the electric grid and other critical infrastructures against nuclear EMP as the best basis for an “all hazards” strategy. Nuclear EMP may not be as likely as other threats, but it is by far the worst, the most severe, threat.

The EMP Commission found that if the electric grid can be protected and quickly recovered from nuclear EMP, the other critical infrastructures can also be recovered, with good planning, quickly enough to prevent mass starvation and restore society to normalcy. If EHV transformers, SCADAS and other critical components are protected from the worst threat–nuclear EMP–then they will survive, or damage will be greatly mitigated, from all lesser threats, including natural EMP from geomagnetic storms, severe weather, sabotage, and cyber-attack.

The “all hazards” strategy recommended by the EMP Commission is not only the most cost-effective strategy–it is a necessary strategy.

New York and Massachusetts Protect Their Grids.

New York Governor Andrew Cuomo and Massachusetts Governor Deval Patrick would not agree that NERC’s performance during Hurricane Sandy was exemplary. Under the leadership of Governor Patrick, Massachusetts is spending $500 million to upgrade the security of its electric grid from severe weather. New York is spending a billion dollars to protect its grid from severe weather.

The biggest impediment to recovering an electric grid from hurricanes is not fallen electric poles and downed power lines. When part of the grid physically collapses, the resulting overvoltage can damage all kinds of transformers, including EHV transformers, as well as SCADAS and other vital grid components. Video footage shown on national television during Hurricane Sandy showed spectacular explosions and fires erupting from transformers and other vital grid components caused by overvoltage.

If the grid is hardened to survive a nuclear EMP attack by installation of surge arrestors, it would easily survive overvoltage induced by hurricanes and other severe weather. This would cost a lot less than burying power lines underground and other measures being undertaken by New York and Massachusetts to fortify their grids against hurricanes–all of which will be futile if transformers and SCADAS are not protected against overvoltage.

Unfortunately, both States are probably spending a lot more than they have to by focusing on severe weather, instead of an “all hazards” strategy to protect their electric grids.

According to a senior executive of New York’s Consolidated Edison, briefing at the Electric Infrastructure Security Summit in London on July 1, 2014, Con Ed is taking some modest steps to protect part of the New York electric grid from nuclear EMP attack. This good news has not been reported anywhere in the press. I asked the Con Ed executive why New York is silent about beginning to protect its grid from nuclear EMP; loudly advertising this prudent step could have a deterrent effect on potential adversaries planning an EMP attack. The Con Ed executive could offer no explanation.

New York City because of its symbolism as the financial and cultural capital of the Free World, and perhaps because of its large Jewish population, has been the repeated target of terrorist attacks with weapons of mass destruction. A nuclear EMP attack centered over New York City, the warhead detonated at an altitude of 30 kilometers, would cover all the northeastern United States with an EMP field, including Massachusetts.  A practitioner of the New Lightning War may be more likely to exploit a hurricane, blizzard, or heat wave than a geomagnetic storm, when launching a coordinated cyber, sabotage, and EMP attack. Terrestrial bad weather is more commonplace than bad space weather.
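The claim that a single 30-kilometer burst covers the whole Northeast follows from line-of-sight geometry: the EMP field extends roughly to the horizon as seen from the detonation point. A quick sanity check using the standard mean Earth radius and the usual tangent-distance formula (the burst altitude is the only input; this is a geometric sketch, not an EMP field-strength model):

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def emp_horizon_km(burst_altitude_km):
    """Great-circle ground radius (km) of line-of-sight coverage
    for a burst at the given altitude: R * arccos(R / (R + h))."""
    r = EARTH_RADIUS_KM
    return r * math.acos(r / (r + burst_altitude_km))

# A burst at 30 km altitude covers a ground radius of roughly 600 km
# (about 617 km by this formula) -- from New York City that easily
# reaches Boston (~300 km away) and most of the northeastern U.S.
print(round(emp_horizon_km(30)))
```

Raising the burst altitude widens the footprint rapidly: the same formula gives a radius of well over 2,000 km for a burst at 400 km, which is why high-altitude detonations are described as continent-scale threats.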

PETER VINCENT PRY is Executive Director of the EMP Task Force on National and Homeland Security, a Congressional Advisory Board dedicated to achieving protection of the United States from electromagnetic pulse (EMP), cyber-attack, mass destruction terrorism and other threats to civilian critical infrastructures on an accelerated basis. Dr. Pry also is Director of the United States Nuclear Strategy Forum, an advisory board to Congress on policies to counter Weapons of Mass Destruction. Dr. Pry served on the staffs of the Congressional Commission on the Strategic Posture of the United States (2008-2009); the Commission on the New Strategic Posture of the United States (2006-2008); and the Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack (2001-2008). Dr. Pry served as Professional Staff on the House Armed Services Committee (HASC) of the U.S. Congress, with portfolios in nuclear strategy, WMD, Russia, China, NATO, the Middle East, Intelligence, and Terrorism (1995-2001). While serving on the HASC, Dr. Pry was chief advisor to the Vice Chairman of the House Armed Services Committee and the Vice Chairman of the House Homeland Security Committee, and to the Chairman of the Terrorism Panel. Dr. Pry played a key role: running hearings in Congress that warned terrorists and rogue states could pose an EMP threat, establishing the Congressional EMP Commission, helping the Commission develop plans to protect the United States from EMP, and working closely with senior scientists who first discovered the nuclear EMP phenomenon. Dr. Pry was an Intelligence Officer with the Central Intelligence Agency responsible for analyzing Soviet and Russian nuclear strategy, operational plans, military doctrine, threat perceptions, and developing U.S. paradigms for strategic warning (1985-1995). He also served as a Verification Analyst at the U.S. 
Arms Control and Disarmament Agency responsible for assessing Soviet compliance with strategic and military arms control treaties (1984-1985). Dr. Pry has written numerous books on national security issues, including Apocalypse Unknown: The Struggle To Protect America From An Electromagnetic Pulse Catastrophe; Electric Armageddon: Civil-Military Preparedness For An Electromagnetic Pulse Catastrophe; War Scare: Russia and America on the Nuclear Brink; Nuclear Wars: Exchanges and Outcomes; The Strategic Nuclear Balance: And Why It Matters; and Israel’s Nuclear Arsenal. Dr. Pry often appears on TV and radio as an expert on national security issues. The BBC made his book War Scare into a two-hour TV documentary Soviet War Scare 1983 and his book Electric Armageddon was the basis for another TV documentary Electronic Armageddon made by the National Geographic.


Invasion of feral hogs yet another hazard for the future

Preface. The Decline category used to be Death By A Thousand Cuts. Feral hogs are yet another cut for anyone who survives peak oil. Not only will climate change drastically cut food production, feral hogs will too, and they may also spread disease to livestock.


Robbins, J. 2019. Feral Pigs Roam the South. Now Even Northern States Aren’t Safe. The swine have established themselves in Canada and are encroaching on border states like Montana and North Dakota. New York Times.

Feral pigs are widely considered to be the most destructive invasive species in the United States. They can do remarkable damage to the ecosystem, wrecking crops and hunting animals like birds and amphibians to near extinction.  They have wrecked military planes on runways.

The swine are also reservoirs for at least 32 diseases, including bovine tuberculosis, brucellosis and leptospirosis. Outbreaks of E. coli in spinach and lettuce have been blamed on feral hogs defecating in farm fields.

There are reports that people have contracted hepatitis and brucellosis from butchering the animals after hunting.

If an animal disease like African swine fever or hoof-and-mouth gets into these animals, it will be almost impossible to stop. It will shut down our livestock industry.

In the United States, their stronghold is the South — about half of the nation’s six million feral pigs live in Texas. But in the past 30 years, the hogs have expanded their range to 38 states from 17.

Many experts thought the pigs couldn’t thrive in cold climates. But they burrow into the snow in winter, creating so-called pigloos — a tunnel or cave with a foot or two of snow on top for insulation. Many have developed thick coats of fur.

The damage in the United States is estimated to be $1.5 billion annually, but likely closer to $2.5 billion.  Feral pigs don’t browse the landscape; they dig out plants by the root, and lots of them. Big hogs can chew up acres of crops in a single night, destroying pastures, tearing out fences, digging up irrigation systems, polluting water supplies.

They are very smart and can be very big — a Georgia pig called Hogzilla is believed to have weighed at least 800 pounds — and populations grow rapidly.

Pigs will literally eat anything.  They eat ground-nesting birds — eggs and young and adults. They eat frogs. They eat salamanders. They are huge on insect larvae. I’ve heard of them taking adult white-tailed deer.  A recent study found that mammal and bird communities are 26 percent less diverse in forests where feral pigs are present. Sea turtles are an especially egregious example.


Energy Slaves: every American has somewhere between 200 and 8,000 energy slaves

Source: https://www.homesthatfit.com/how-precious-is-energy-ask-your-slaves/

Preface.  To give you an idea of what energy slaves are, consider what it would take to use human power to provide these:

How much energy does it take to toast a single slice of bread? Olympic Cyclist Vs. Toaster: Can He Power It?

Human Power Shower – Bang goes the theory – BBC One

And as the article below points out, if you tried to power your TV by bicycle, that “free” electricity would not be free at all. Since food costs money, electricity produced by cycling may even end up costing more than grid power: you would pay for it all the same, just in food rather than in utility bills.

Slav, I. 2019. Why Powering A City With Bicycles Is Impossible. oilprice.com

Many people have taken a crack at estimating how much muscle power the energy contained within oil represents.  Although there are different results, they all show how powerful oil is and how angry our descendants will be that we wasted it driving around in 4,000 pound cars for pleasure as they’re sawing wood and heaving hundred-pound sacks of grain onto horse-drawn wagons.

As I was writing my book “When Trucks Stop Running”, I came up with this (but didn’t include it since there are too many numbers):   A class 8 truck can carry 50,000 pounds of goods 500 miles in one day. This would take 1,250 people carrying 40-pound backpacks walking 16 miles a day for 31 days. If the people ate 2,000 kcal of raw food a day, they’d burn 77.5 million kcal (and even more energy if the food is cooked). The truck needs 31 times less energy: at 7 mpg, that’s 71 gallons of diesel containing 35,000 kcal per gallon. Trucks carried over 13.182 billion tons of goods, equal to 329 million people each carrying 40 pound packs.
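
For readers who want to check it, that arithmetic can be reproduced in a few lines of Python (purely illustrative; all figures are the ones quoted above):

```python
# Energy for 500 miles of freight: human porters vs. one class 8 truck.
porters = 50_000 / 40                  # 50,000 lb of goods / 40 lb per backpack
days = 500 / 16                        # 500 miles at 16 miles per day -> ~31 days
human_kcal = porters * 2_000 * 31      # 2,000 kcal of food per person per day
truck_kcal = (500 / 7) * 35_000        # 71 gallons of diesel at 35,000 kcal/gallon
print(porters)                         # 1250.0
print(round(human_kcal / 1e6, 1))      # 77.5 (million kcal)
print(round(human_kcal / truck_kcal))  # 31 -- the truck needs ~31 times less energy
```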

Wiki definition: An Energy Slave is that quantity of energy (ability to do work) which, when used to construct and drive non-human infrastructure (machines, roads, power grids, fuel, draft animals, wind-driven pumps, etc.) replaces a unit of human labor (actual work). An energy slave does the work of a person, through the consumption of energy in the non-human infrastructure.

Below are nine estimates of how many energy slaves we have.


Nate Hagens & D. J. White. 2017. GDP, Jobs, and Fossil Largesse. Resilience.org

[excerpts, be sure to read the entire article at the link above because it explains not just energy slaves but the whole situation we’re in, the psychology, economics, and more]

One barrel of oil contains about 1700 kWh of work potential.46 Compared to an average human work day, in which 0.6 kWh is generated, one barrel of oil, currently costing under $50 to global citizens, contains about 10.5 years of human labor equivalence (4.5 years after conversion losses).47
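
Hagens’ figures check out with simple arithmetic; here is a sketch in Python (the 270 working days per year is my assumption, chosen because it reproduces the quoted 10.5 years):

```python
# One barrel of oil vs. one human work day, using the figures quoted above.
barrel_kwh = 1700          # work potential in a barrel of oil
human_kwh_per_day = 0.6    # output of an average human work day
work_days = barrel_kwh / human_kwh_per_day
print(round(work_days))           # 2833 human work days per barrel
print(round(work_days / 270, 1))  # 10.5 years, at ~270 working days/year
```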

GDP – what nations aspire to – is a measure of finished goods and services generated in an economy. It is strongly correlated with energy use and, given that almost 90% of our primary energy use is fossil fuels, with their combustion.48

Outside of nuclear and hydro, ~99.5% of ‘labor’ in human economies is done by oil, coal and natural gas (measured by joules of output). Due to this cheap embodied labor residing in fossil carbon compounds, the average human being in 2016 enjoys 14x the goods and services of the average human being in the year 1800 (the average American: 49x).

The common political mantra that higher GDP creates social benefits by lifting all boats has become suspect since the 2008 recession and ‘recovery.’ For the first time in the history of the USA, we now have more bartenders and waitresses than manufacturing jobs.50 In order to maximize dollar profits, it often makes more sense for corporations to mechanize and hire ‘fossil slaves’ than to hire ‘real workers.’ Real income peaked in the USA around 1970 for the bottom 50% of wage earners.51

…at 2016 USA wage rates, moving from $20 per barrel (the long-run average cost for oil), to $150 per barrel, the army of energy slaves declines from 22,000 per barrel to under 3,000 – meaning the economy shrinks and therefore much more work needs to be accomplished via efficiency improvements, real humans, or making do with less.

Thing is, the energy slaves will soon be going away forever. In the last 30 years we’ve burned a third of all fossil energy that has been used since it was discovered thousands of years ago.58 Since your authors have been alive, humans have used more energy than in the entire 300,000 year history of homo sapiens.59 We are just now passing through the all-time peak of liquid hydrocarbon availability, which is the chief driver of our economies.

we’re heading back into times – either gradually or suddenly, but inexorably – in which human labor makes up an increasing percentage of the total energy we have available. One day human (and perhaps animal) labor will again be the majority of the work done in human societies – just like it is in an anthill.

And this will happen in the context of a more used-up natural world. Rather than being able to catch dinner by throwing a hook in the nearby ocean, the nearest healthy schools of fish may be ten thousand miles away in Antarctica, and hard to get to without dirt-cheap energy slaves to make giant refrigerated ships to pursue and move them around for us. The copper mines will be mostly used up. The inorganic phosphate deposits we used to make fertilizer, mostly gone. And so on.

Or rather than “gone,” let’s use the more accurate term energetically remote. That is, there will still be loads of “stuff” underground, but it won’t be the very pure ores of yesteryear. It’ll be stuff that requires digging up a huge amount of rock for a tiny amount of whatever we’re after. Because we always use the best stuff first. Yet we’ll be going after worse and worse ore with fewer and fewer slaves.

So, right as our energy slaves are about to start going away forever, leaving 7-10 billion humans without the things they have come to take for granted, our nations have decided the answer is to make more babies! Yep, to raise GDP you need more demand for toys, diapers, teachers, etc… more jobs, because more jobs means more transactions which means more GDP! More GDP means “growth” so growth is good! China has just reversed its 1-child policy, which prevented massive starvation and slowed the horrendous assault on China’s environment.60 Many other nations, such as Japan, Germany, and Sweden, are now offering bonuses for getting pregnant. In Denmark advertising firms are encouraging couples to have more babies for the good of the economy via sexy commercials.61

Paradoxically, as traditional drivers of GDP growth – development of virgin land, credit expansion, low cost fossil fuels, and groundbreaking innovation – wane in their impact, there may be renewed incentives proposed not to shrink our population as ecology would advise, but instead to grow it! Currently we are having (as a species) over 120 million babies per year.62 This works out to over 335,000 human babies born every day – compared to a total extant population of all the other Great Apes (bonobos, chimpanzees and gorillas) of about ~200,000!63 Since ‘demand’ is considered a quasi-magical force in current economic theory, babies are considered to be good for business (yet children brought into the world now for GDP reasons will face some real challenges in their lives).

Ugo Bardi. 2017. Why do we need jobs if we can have slaves working for us? Cassandra’s legacy.  Below are excerpts (paraphrased) from this post you can find here.

Energy slaves do what human slaves and domestic animals once did, only faster and cheaper. Today instead of 500 human slaves, every American has 500 fossil-fueled energy slaves working 24/7 for them, every day of the year. These energy slaves don’t complain, don’t sleep, and don’t need to be fed, but they do leave behind a lot of waste (climate change). But they’re invisible, we think about them as much as we do the nitrogen we breathe in — 78% of air.

The slaves have also made shipping nearly free, so any actual human labor we need can be hired in the cheapest places on earth (under essentially slave labor conditions), and shipped to us by planes, trains, ships and trucks for next to nothing. The average dinner travels over 1400 miles to get to your plate in USA.

But the energy slaves will soon be going away forever.  We are just now passing through the all-time peak of liquid hydrocarbon availability. Every year from now on, we’ll have fewer fossil energy slaves, until we’re back to the times centuries ago when biomass, and human and animal muscle made up the energy we had available.

Worse yet, here are just a few of the problems that will arise in a used-up natural world:

  • Rather than being able to catch dinner by throwing a hook in the nearby ocean, the nearest healthy schools of fish may be 10,000 miles away in Antarctica
  • The copper mines will be mostly used up.
  • The inorganic phosphate deposits essential to grow food that we put in fertilizer (not to mention the natural gas) will be mostly gone.

Sure, not gone entirely, but it may as well be, since these resources will be so energetically remote: the pure ores of yesteryear will be mostly used up, because we used the best stuff first. And we’ll have far fewer energy slaves to help us get at the poor remainders.

Our financial systems require growth, yet in the future we won’t be able to grow, we’ll be shrinking. Think about it, money is just bits and bytes or paper, a claim on future energy and resources as gold once was.  Each year we need growth in a household/city/state/nation/world to service and pay off monetary loans that were created previously. No serious government or institutional body has plans for anything other than continued growth into the future. Growth requires resource access and affordability but starts first with population.

MAY 14, 1957. Rear Admiral Hyman G. Rickover, U.S. Navy. Energy Resources and Our Future. Scientific Assembly of the Minnesota State Medical Association

With high energy consumption goes a high standard of living. Thus the enormous fossil energy which we in this country control feeds machines which make each of us master of an army of mechanical slaves. Man’s muscle power is rated at 35 watts continuously, or one-twentieth horsepower. Machines therefore furnish every American industrial worker with energy equivalent to that of 244 men, while at least 2,000 men push his automobile along the road, and his family is supplied with 33 faithful household helpers. Each locomotive engineer controls energy equivalent to that of 100,000 men; each jet pilot of 700,000 men. Truly, the humblest American enjoys the services of more slaves than were once owned by the richest nobles, and lives better than most ancient kings. In retrospect, and despite wars, revolutions, and disasters, the hundred years just gone by may well seem like a Golden Age.

R. Buckminster Fuller  et al., “Document 1: Inventory of World Resources, Human Trends and Needs,” in World Design Science Decade 1965–1975 (Carbondale, Ill.: Southern Illinois University, 1965–1967), pages 29–30.

“Energy Slave” was first used by R. Buckminster Fuller in the caption of an illustration for the cover of the February 1940 issue of Fortune Magazine, entitled “World Energy”. Alfred Ubbelohde also arrived at the term, apparently independently, in his 1955 book, “Man and Energy”, but the term did not come to be widely used until the 1960s, and is generally credited to Fuller.

The enormous gulf between high-energy and low-energy societies was dramatized by Buckminster Fuller when he proposed the unit of an “energy-slave,” based on the average output of a hard-working man doing 150,000 foot-pounds of work per day and working 250 days per year. In low-energy societies, the nonhuman energy slaves are typically horses, oxen, windmills, and riverboats. Using Fuller’s unit, the average American at the end of the century had more than 8,000 energy-slaves at his or her disposal. Moreover, Fuller pointed out, “energy-slaves, although doing only the foot-pounds of humans, are enormously more effective because they can work under conditions intolerable to man, e.g., 5,000° F, no sleep, ten-thousandths of an inch tolerance, one million times magnification, 400,000 pounds per square inch pressure, 186,000 miles per second alacrity and so forth.”
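
Fuller’s unit converts to SI easily; this sketch shows just how small one energy-slave-day is (1 foot-pound = 1.3558 joules):

```python
# Fuller's energy slave: 150,000 foot-pounds/day, 250 working days/year.
J_PER_FT_LB = 1.3558
day_j = 150_000 * J_PER_FT_LB    # joules per slave-day
year_j = day_j * 250             # joules per slave-year
print(round(day_j))              # 203370 J/day, i.e. only ~0.06 kWh
print(round(year_j / 1e6, 1))    # 50.8 MJ per slave-year
```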

Walter Youngquist. Geodestinies: The Inevitable Control of Earth Resources over Nations. Page 22.

The measure ‘energy slave’ comes from estimating how much manpower equivalent you can get from burning a material, in terms of energy. A person power (PP), or 1 energy slave, equals 0.25 horsepower = 186 watts = 635 Btu/hr. Therefore, 300 energy slaves = 1/10 lb uranium, 4,700 lbs natural gas, 5,150 lbs coal and 8,000 lbs petroleum.
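
Youngquist’s unit chain is easy to verify with standard conversion factors (1 hp = 745.7 W; 1 W = 3.412 Btu/hr):

```python
# Checking: 1 person-power = 0.25 hp = 186 W = ~635 Btu/hr.
watts = 0.25 * 745.7
btu_per_hr = watts * 3.412
print(round(watts))       # 186
print(round(btu_per_hr))  # 636 -- matching the ~635 quoted above
```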

Tad Patzek. 2012. Powering the World: Offshore Oil & Gas Production.

In one day an average U.S. resident consumes 4.2 gallons of oil equivalent, or 1/10 of a barrel:

  • An average U.S. resident develops 100 W of power, averaged over a 24-hour day
  • Let’s assume that he/she can work for 8 hours/day at 200 W on average
  • Then, 4.2 gallons of petroleum is equivalent to 0.1 × 6.1 × 10⁹ / 200 / 3600 / 8 ≈ 106 days of labor

We would have to work hard for over 100 days to make up for what we consume as hydrocarbons in 1 day. One year of gorging on hydrocarbons is equal to 1 century of hard human labor.
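
Patzek’s slide arithmetic can be replayed directly (figures as quoted: 6.1 × 10⁹ J per barrel, a human working 8 hours a day at 200 W):

```python
# 0.1 barrel/day of oil equivalent vs. hard human labor.
barrel_j = 6.1e9              # joules in one barrel of oil
daily_oil_j = 0.1 * barrel_j  # 4.2 gallons = 1/10 barrel
human_day_j = 200 * 3600 * 8  # 200 W for 8 hours
print(round(daily_oil_j / human_day_j))  # 106 days of labor per day of consumption
```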

In France (which has a lower level of energy use than the United States) each citizen has 500 energy slaves, according to “How much of a slave master am I?”

Alexander, Samuel. January 15, 2012. Peak Oil, Energy Descent, and the Fate of Consumerism. University of Melbourne – Office for Environmental Programs.

Energy Slaves as the Invisible Foundation of Consumer Lifestyles

We could begin by noting, rather bluntly, that the world currently consumes around 89 million barrels of oil per day. This mind-boggling figure, which aggregates conventional and non-conventional oil, becomes all the more astonishing when we bear in mind the incredible energy density of oil. David Hughes, one of Canada’s premier energy analysts, has recently done the math (Nikiforuk). He concludes that there are approximately six gigajoules (six billion joules) in one barrel of oil, or about 1,700 kilowatt-hours. Multiply that by today’s oil consumption of 89 million barrels per day and this represents the consumption of the equivalent of about 14,000 years of fossilized sunshine every day (Hughes). These figures may not mean very much to those readers unfamiliar with thinking in terms of energy, so it can be helpful to convert them into terms of human labor, which can prove more comprehensible. Hughes has done this calculation also (Nikiforuk), and concludes that a healthy human being pedaling quickly on a bicycle can produce enough energy to light a 100-watt bulb (or 360,000 joules an hour). If this person works eight hours a day, five days a week, Hughes calculates that it would take roughly 8.6 years of human labor to produce the energy stored in one barrel of oil. Let us pause for a moment and reflect on this astounding conclusion. One barrel of oil is the equivalent of 8.6 years of human labor, and the world today consumes 89 million barrels of oil, every day.

This type of analysis gave rise to the notion of ‘energy slaves’, a term coined by American energy philosopher, Buckminster Fuller, in 1944 (Armaroli). The purpose behind the energy slave concept is to understand how much human labor would be required, hypothetically, to sustain a certain action, lifestyle, or culture in the absence of the highly concentrated fossil-fuel energies available today. For example, it would take 11 energy slaves pedaling madly simply to power an ordinary toaster. When this concept is applied to modern consumer societies as a whole, the results are eye opening, to say the least. Given that the average North American currently consumes over 24 barrels of oil per year, the average inhabitant in that region of the world would require at least 204 energy slaves to sustain their lifestyles (Nikiforuk). The average Australian would require 130 energy slaves; the average Western European around 110 energy slaves. As if these figures were not confronting enough, in the absence of oil the global economy in its entirety would need approximately 66 billion energy slaves to sustain itself in its current form.
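
Hughes’ cyclist numbers can be rechecked the same way (a sketch; the 48.5 working weeks per year is my assumption, picked because it lands on the quoted 8.6 years):

```python
# One barrel of oil in bicycle-generator terms, per the figures quoted above.
barrel_j = 6e9                  # ~6 GJ per barrel
joules_per_hour = 360_000       # cyclist lighting a 100-watt bulb
hours = barrel_j / joules_per_hour
years = hours / (8 * 5 * 48.5)  # 8 hours/day, 5 days/week
print(round(hours))             # 16667 pedaling hours per barrel
print(round(years, 1))          # 8.6 years of labor per barrel
print(round(24 * years))        # 206 energy slaves at 24 barrels/year
```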

Whatever way one looks at this analysis, these are astonishing figures, representing a spectacular amount of energy that obviously far exceeds what was at the disposal of monarchs, aristocrats, and slave owners in previous eras. When we also bear in mind how cheap oil has been in the past, this analysis gets more remarkable still. Even at today’s price of around $100 per barrel, which historically is extremely high, this is really a very cheap form of energy. One only needs to imagine offering someone $100 for 8.6 years of labor to realize that even today’s so-called expensive oil is still amazingly cheap. Western-style consumer lifestyles, it can be seen, being so energy intensive, are utterly dependent on a cheap and abundant supply of energy, and in ways that are not always obvious. We often fail to see how central energy is to our lives because it is invisible; only its consequences are visible.

Armaroli, Nicola, et al. 2011. Energy for a Sustainable World. Ch. 3.

Hughes, David. 2010. Peak Energy and its Implications for the City of Edmonton. City of Edmonton, Canada.

Nikiforuk, Andrew. May 5, 2011. “You and Your Slaves.” The Tyee. http://thetyee.ca/Opinion/2011/05/05/EnergySlaves/

Ron Patterson, author of the Peak Oil Barrel blog, wrote this in August of 2002:

Fossil fuel brought on the advent of the energy slave. With fossil fuels, we could move hundreds of times as much freight, hundreds of times faster. We invented the cotton gin and spinning machines. We invented the loom with the flying shuttle, and were able to produce hundreds of times as much fabric in a fraction of the time. The sewing machine was invented and we could manufacture clothes in a fraction of the time it once took. Even shoe manufacturing was a beneficiary of the energy slave. We could manufacture shoes in a fraction of the time it once took us, and the last cobbler eventually died.

Everything we possess, including the roof over our head, was produced with the aid of energy slaves. But nowhere have these energy slaves made more impact than in food production. Huge tractors pull sixteen pan plows where a horse once pulled one, and very slowly at that. Grain, once threshed by hand, is now threshed by machine. Automatic cotton and corn pickers can do in one day what it once took a hundred farm hands weeks to do. Ammonium nitrate fertilizer and genetically modified seeds have enabled us to push food production even higher. Now one farmer can produce the food a hundred farmers once produced, and on one quarter of the land. And, not least important, we now have refrigeration, with which we can preserve the food we grow until the next crop comes in. Food can also be preserved by pressure canning, an art that has likewise been taken over by large companies with the aid of energy slaves.

But when the energy slaves die, all that will be gone. And there is no going back because we do not possess the skills that our ancestors had. But that is not the half of it, you haven’t heard nothing yet. Our numbers are six times greater than they were in the pre-petroleum days. We are, for the most part, crowded into cities far away from any source of food that we may glean from nature.

Sheila Newman, author of the outstanding We Can Do Better blog, wrote in 2003:

The hypothesis I have made here is that without fossil energy ‘slaves’ we will go back to human slavery. At first glance this may seem fanciful, but I think that response reflects an implicit assumption held by many people concerned about fossil fuel depletion. They believe there will be apocalypse and die-off; they are worried, and believe that civilization will dissolve through resource war and economic collapse. They think people will be enslaved either by shifting, marauding gangs in unstable fragments of nation states, or by whatever societies might emerge when Peak Oil is something receding into the past, perhaps 40 years from now.

Before the commercial harnessing of fossil fuel, both human population growth and imperial expansion depended exclusively on the output of human slaves, beasts of burden, biomass and other natural sources of energy available, such as wind and water power.  In many parts of the world today commercial energy consumption per capita is still just a few dozen kilograms per year.   Compared with the speed that humans are able to proliferate and expand their influence under the aegis of fossil fuels, former, natural processes of human use of the biosphere were necessarily low energy and very slow. In the advanced nations the elements of coercion, abduction and bondage necessary to pre-fossil fuel societies, particularly with regard to human slavery, are subjectively and objectively seen as cruel.

Modern democracy, which by definition depends on the freeing of  human slaves, and which many have accepted as the ‘inevitable’ or even ‘final’ ethical evolution  of man, that enlightened  ape, in fact probably depends less on spontaneous moral perfectionism than on large flows of cheap fossil energy and ubiquitous fossil energy slaves.  The corollary of this is that when we no longer have oil-based economies  we will probably no longer have democracy.  The slide towards plutocracy, totalitarianism, imperialism and fascist organisational frameworks, which some see as consequences or as part of the global imposition of economic rationalism, may well be a sign that democracy is already eroding.

Human institutions must react and respond to changing economic circumstances, and therefore to changing regimes of energy-economic conditions. Access or lack of access to commercially useful fossil fuels most certainly affected the date and type, speed and nature of the Industrial Revolution and urbanization experienced by various nations and economies. Beginning with coal in England in the late 17th century, production increases of even this single source of fossil energy soon led to fast industrialization and urbanization, and almost straight-line, upward growth of population, with lesser but similar effects through Western Europe later. European anglophone colonization was a direct consequence, with permanent immigration from non-English speaking Europe more of a secondary phenomenon. Results of this include the cultural, economic and constitutional frameworks of the USA, Canada, Australia, New Zealand and other countries still existing today. In direct contrast, the slower-growing, less fossil-energy-dependent Arab and Muslim empires retained many features of feudalism, including human slavery.

In 1492 Christopher Columbus left a cruel feudal, pre-fossil fuel society that depended on wood, wind, water  and the bio-energy of horses and human slaves for another cruel feudal pre-fossil fuel society with a different, and even less energy-intensive technology mix.

The Europe that produced Columbus was plague-ridden and in the grip of the “Little Ice-Age”. Population growth was very low, perhaps no more than 0.05% per year, and there were many fallbacks, that is years of population decline. Technologically well-matched Christians and Moors had been entwined for centuries, in North Africa and the Near East, in medieval battle using horses, some armour, low-technology artillery and by today’s standards simple biological and chemical weapons.

Things were very different in Central America. The Spanish quickly enslaved the Aztecs, who lacked guns, horses or sophisticated armour and were decimated by European diseases. The economist Kondratiev estimated that perhaps about 50,000 tons of gold and 450,000 tons of silver were transferred from conquered South America to Europe in the period 1550-1600, starting the oldest and most powerful of his ‘economic waves’. Despite, or even because of this, Spanish colonial society and the metropole, or European Spanish society, did not surge ahead. Attempts to draw migrants from European Spain to its colonies were weak, hesitant and very ineffective. No rush to migrate occurred. In the South American colonies, however, pre-fossil energy feudal society merged bloodily with pre-fossil fuel, non-European feudal societies and produced another, similar feudal society. Hispaniola is mainly remembered for its cruelty, slavery and genocide, carried out in major part in the name of religion and with the help of priests. As Indian slaves succumbed to syphilis, smallpox, manhunts and the other ravages of this manifestation of medieval Christian messianism, they were replaced by African slaves.

Hispanic America then became locked in a time-warp, populated by near-serfs in semi-feudal agricultural societies, ideologically bolstered by a mix of primitive Catholicism, animism and non Christian myths and beliefs. Attempts to explain this failure to develop democracy and a competitive economy like that of the US in a period of under 200 years following colonisation include Spanish colonialism’s genocide, prevalence of tropical diseases, the climate factor, and the medieval, non-growth, or anti-growth ethic in which there is no work ethic, nor incitement to accumulate wealth by members of society outside the nobility and the church. Often overlooked, however, is that the Spanish motherland at that time also lacked a fossil fuel economy, and the technology, population pressure, and growth ethic which all stem from explosive economic and demographic growth.

Continental Europe left Hispanic America behind. From the 14th century plague and other sicknesses receded and Muslim settlers were progressively driven back from the heartlands of Western Europe. Glaciers made their final retreat, revealing excellent new soils; agricultural and artisanal technologies improved. From 1550 to 1820 the populations of Spain and other western European countries grew between 50% and 80%. Medieval Christian society faced political challenge from religious reform such as the Protestant movement, from scientific inquiry dating from the Renaissance, and from the collectivist, communal, pre-communist ideas embodied by the French Revolution. As a direct consequence, previous socio-religious reins on, and limits to, ‘progress’, including the pursuit of wealth and personal consumption, were modified, but not to the degree they were across the Channel and the Atlantic.

The English Protestants who displaced the Indians of North America from about 1620 came from a different world to medieval Spanish colonists in South America, and they were not government sponsored. Their desire to establish a haven of religious freedom, to practice their fundamentalist beliefs, helped commit the pilgrims to permanent settlement. Having no way back they were condemned to succeed where others had failed.

The real starter motor for North American colonialism was the fossil ‘forest’ of English coal.   In Malthus’s Britain, coal brought forth not only hydrocarbon energy slaves, but fueled a population explosion in the 19th century, to which colonial pillage brought a steady crop of African slaves. Happily for this experiment, however, virtually limitless amounts of land had suddenly become available to soak up demographic oversupply, and this ‘window of opportunity’ soon created endless tales of opening up new territory and building infrastructures where nothing had previously existed.  Capital, however, remained scarce, while land and labor were in extravagant abundance.

Access to the Americas, with dispossession and genocide of indigenous peoples took the land potentially available to Europeans from about 24 acres to 120 acres per capita (an approximate 5-fold increase). The European ‘footprint’ therefore expanded radically and indelibly – unlike its earlier, shadow overlay in South America. The California, Alaska and other smaller gold rushes (1847-1900)  provided nearly instant wealth and attracted yet more people to North America.  Mineral wealth and new technology, like steam-power and electricity were to make for a brilliant 19th century, beneath the soot, tar and grime-daubed cities of the emerging American empire.

Meanwhile, across the Atlantic, Islam had been pushed back out of Europe mostly to arid and relatively unproductive territories, where, to this day, in several of its states, it still keeps slaves.

The rise in fossil energy slaves coincided with the abolition of human slavery in the US under Abraham Lincoln. Alongside the use of beasts of burden, reliance on coerced human energy had required the institution of a social class entirely denied the rights of ordinary citizens and treated like domestic animals. As more efficient fuel sources came to provide fossil slaves instead of live ones, humans had energy enough to spare to become, comparatively, more generous and gentle. Although a social hierarchy was maintained, human rights and material circumstances improved.

By the time of the discovery of oil in North America in 1850, and its first commercial production in 1859, coal had already contributed to a huge population explosion. Energy slaves were fueling population growth of around 3% per annum. Combined with the economic growth enabled by coal and now oil, this rapid population growth created fast-rising demand for jobs, which in turn assisted the slavery abolition movement of 1810-1863 (the end of the civil war), when all slaves in the US became free by law. Emancipation of the slaves was one purpose of the civil war. Another was the establishment of democracy, according to the definition attributed to Abraham Lincoln: “… government of the people, by the people, for the people, shall not perish from the earth”.

Despite two major depressions, in the 1890s and the 1930s, the US generally remained on a steeply rising prosperity curve, driven by and favoring further population growth. The curve of increasing fossil energy consumption tracks perfectly those for economic output, population growth, urbanization and any other index of growth or progress one might choose.

In the thirty years after the Second World War, the peoples of the first world became richer and freer than they had ever been before, even as they engaged in many and bloody liberation wars – which somehow failed to liberate – in what was to be called the Third World. The cheap resource supplier nations could not help but notice that they remained outside the charmed circle of fossil-powered wealth. The Third World contained, and contains, the Islamic resource of petroleum. From the Bandung Conference of 1955 onward, the Third World, including its Muslim nations, demanded more benefit from this and other mineral and agricultural commodities, hitherto supplied to the first world under economic frameworks dating from colonial times.

So the coal-based industrial revolution that had given rise to the British Empire had been refueled in the 20th century with pilgrim oil and Muslim oil, and had then roared forward again, despite ‘peripheral’ opposition.  Arguably it reached its apotheosis in the Space Race between the two putative master races of the US and the Soviet Union, when it seemed that men, accompanied by beautiful and fertile women, would almost certainly soon travel to the stars and colonize their planets. This was a necessary escape clause for a species that was fast outgrowing its petri-dish. It was an attempt – a rather pathetic attempt – to identify a new and limitless horizon for the old notion of endless territorial expansion. It was perhaps a ‘shadow in the gene’, the same instinct that makes all animals expand into available territory, that  had brought the first waves of humans from Africa to Europe, and from Asia to Australasia during the last ice-age, and had continued with modern migration from Europe to the Americas and Australia.

Unfortunately or not, fossil fuel civilization began to falter in the second half of the twentieth century. One sign that it was coming to an end was that former colonial populations, many of them Muslim, sought sovereignty over natural resources. The third world revolt heralded by the formation of OPEC hit the first world hard, giving rise to the 1973 oil shock, before finally being contained. The different reactions of continental Western European societies and of English-speaking settler societies (e.g. the USA, Canada and Australia) to the Oil Shock of the 1970s provide an outline of how these societies will react when the Age of Oil disappears.

In 1971 the USA’s domestic oil production peaked. Soon after, in 1973, the world’s major known oil reserves were commandeered by ‘indigenous upstarts’ through OPEC. The fortunes of Islam were turning once again. The fossil-energy-rich Future Eaters of the Americas and Oceania briefly glimpsed their apocalypse and blinked, but the countries of the first world rallied: first with ‘petrodollar recycling’, conferences on International Solidarity and the recognition of Palestine, before reverting to type with Gulf War 1 of 1991 and preparation for Gulf War 2 of 2003. The First World has proven its inability to contemplate the Age of Oil’s demise. Brave words of the 1973-75 period, to the effect that ‘We can consume less – we are not addicted’, have disappeared. Since the liberation of Kuwait and its restoration to full sovereignty and oil output, syrupy speeches have given way to smart missiles and dumb propaganda.

In 1999 the average North American was using the equivalent in petroleum energy of 174 human slaves working an eight hour day every single day. The situation was however always a little different in Western Europe, where fossil fuel had never seemed so assured, and where fossil fuel civilization was a recent overlay, rather than the inventor of that society.  France, notably, was fossil energy poor.  After the first world war, both France and England had gone out of their way to locate and secure oil assets overseas in their colonies.  Unlike the US, Canada, and Australia, the French and other non-English speaking Western Europeans, cut off from their imported oil supplies, had struggled through oil shortages during World War Two.  Oil sources in Romania  came under attack early.  In 1942 Hitler’s army made for the Baku oilfields in southern Russia.  During the last days of that war, as General Patton pursued the German forces across France, a carpet of US oil-bearing pipeline had unrolled magically behind him, at the rate of up to 50 miles a day, constructed by teams from Texas.

The fate of the European allied forces had finally depended, worryingly, on the late entry of US oil-powered forces. But would the US always remain a friend? What would the friendship cost? And would the US always have oil to spare? Then there had been a short dress rehearsal when the Suez Canal crisis (1956) threatened supply and raised the prospect of war in the Middle East. Unlike the UK, continental European countries had few colonial stakes in oil-bearing territories, most of which had become independent. In September 1973 Algeria, France’s last former colony, nationalized French oil companies. Israel and Palestine were quite some distance from the US and Australia, but frighteningly close to Western Europe, and the threat of nuclear war during Yom Kippur was all too present in 1973.

France and other EU countries still pursue a defensive oil economics policy, acquiring assets but selling the bulk overseas, whilst attempting to maximise self-sufficiency at home by developing and extending nuclear, hydro and even renewable energy capacity. Increasingly in Europe – and to this the English can agree – national opinion seeks zero net immigration from outside the EEC. Constant migrant inflow to the US, by contrast, is a cornerstone of US economic policy. In Europe, demographic policy since 1974 and demographic trajectories indicate fast-falling total populations by 2050. Of course, relative to Peak Oil, this will not be enough and will come too late – but it is not the perfect opposite of sanity, which is the US position.

We see here then, the formation of at least two different first world geopolitical blocs on the question of energy resources.  The result will be a continuing divergence in policies on oil economics and energy policy, industrial and economic goals, population targets, social security policy, and systems of land development and urban planning.

Geopolitically, in the Euro-Asiatic region, new ‘land bridges’ are forming as the shrunken Russian Federation retreats from inherited, untenable frontiers in the southern Muslim republics of the former USSR. Due uniquely to oil, the US has leaped into this region – establishing its first beachhead in Afghanistan – but forces are gathering to dispute this foothold. Across the narrow Mediterranean, Europe is bonding, forming economic and then strategic alliances with Muslim countries, as shown every day by European stances on the Israel-Palestine war or ‘conflict’. No greater contrast exists than that between European public opinion regarding Israel and that of the USA. Cheap energy, air travel and mass communications have enabled people from everywhere on earth to pour into the US, or to understand its way of life, full of wonders like Disneyland, self-flushing toilets, and cappuccinos in giant disposable plastic cups. America itself, apart from its ruling elites, is now an ethnic mix, and in world terms a demographic superpower – only numerically outclassed by China and India. The difference is that America flatly rejects any limit on personal energy consumption; its lifestyle, in the words of G.W. Bush, “is non-negotiable”, while the Euro-Asians can not only negotiate but strategically ally with their land-bridge neighbors, rivals, migrants and clients.

As a direct consequence the USA can only be intransigent, strident and aggressive – while Europeans, perhaps with the memory of failed colonization in South America and other places, have no such certainty, and must retreat to renouncement, to conciliation, to intrigue, to manipulation of and alliances with third world armed forces against the US. Naturally US oil policy has become a basis for war and conquest. Naturally European enthusiasm for a ‘remake’ of Gulf War 1 has been weak – lamentable to America, to a retreating America that becomes more alien and hostile each day as it beats the war drum, promising yet more conquest, and human blood to assuage its oil hunger. This has been noted by the Muslim world – by a culture at least as old as that of Europe.

Understandably, the oil-rich Islamic nations of the Middle East deeply resent the controlling economic hand of the non-muslim world, and America’s leading role as both banker and war-monger.  Just like the USA, their demographic explosions are artificially maintained by oil income, pending a catastrophic decline which will almost  certainly come after Peak Oil. In the short-term, those glorious days of petrodollar recycling of the 1970s must seem like something from the Persian Nights tales of legend, to populations of which 50% were born after 1970, many of which have recently known increasing poverty and humiliation. America’s ‘uncontrollable’ demographic growth is through the floodgate of immigration, which daily adds to the imperative to seek more energy by dominating geoeconomics, and Islam’s fertility has spawned populations of young men and women with nowhere to go and nothing to lose.

September the 11th 2001 was the day that Islam focused the world’s attention on the dissatisfaction of its peoples, when suicide hijackers flew US planes into the World Trade Center buildings and the Pentagon in an epic that most people had only expected to see in old Japanese movies involving King Kong and a giant turtle.

As for the outcome – be it international war, unexpected alliances, civil war or the implosion of states – we can be fairly sure that if war does break out between the US and the Islamic oil empire, it will not be the final conflict, since many other nations and tribes wait in the wings, preparing to seize power or to scavenge. Further down the track, the question of how to go on living without abundant fossil energy will arise for all those who survive in the short term. It seems inevitable that human slavery – which has never actually disappeared from the world – will at the very least expand its tragic role in the panoply of ‘energy solutions’ left to be tested by the first world in the decades after the decline of the petroleum interval.


Antonio Turiel: Explaining Peak Oil the Easy Way


Preface. Turiel writes an excellent blog “The oil crash” at http://crashoil.blogspot.com. Turiel explains eloquently why the amount of oil will decline even though there are vast pools of oil left underground.

If you read my book “When Trucks Stop Running”, you’ll realize the real issue is peak diesel, since civilization depends on heavy-duty trucks for hauling goods, and on tractors, harvesters, logging, mining and construction equipment (locomotives and ships are also dependent on diesel). U.S. shale “fracked” oil is too light to make diesel, so the U.S. imports 7 mmb/d of foreign crude oil which, mixed with fracked oil, can produce diesel. Since only about 12% of a barrel of crude yields diesel, there’s no avoiding making gasoline, jet fuel, and other products — so it is gasoline that’s filling up storage, yet we have to keep producing gasoline to get diesel.
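The refinery arithmetic behind this can be sketched in a few lines. This is my own illustration, not a calculation from the book: the 12% diesel yield comes from the paragraph above, while the 45% gasoline yield is an assumed, roughly typical figure for a U.S. refinery.

```python
# Rough sketch: if diesel is ~12% of each barrel of crude, how much crude
# must be refined to meet a given diesel demand, and how much gasoline is
# unavoidably co-produced along the way? Yields are assumptions.

DIESEL_YIELD = 0.12    # fraction of a barrel that becomes diesel (from the text)
GASOLINE_YIELD = 0.45  # assumed fraction that becomes gasoline

def crude_needed(diesel_barrels, diesel_yield=DIESEL_YIELD):
    """Barrels of crude that must be refined to meet a diesel demand."""
    return diesel_barrels / diesel_yield

def gasoline_coproduced(diesel_barrels):
    """Barrels of gasoline made as a byproduct of meeting that demand."""
    return crude_needed(diesel_barrels) * GASOLINE_YIELD

# Meeting 1 barrel of diesel demand forces ~8.33 barrels of crude through
# the refinery and ~3.75 barrels of gasoline out the other end.
print(round(crude_needed(1.0), 2), round(gasoline_coproduced(1.0), 2))  # → 8.33 3.75
```

The point of the sketch is the coupling: diesel cannot be made without co-producing several barrels of gasoline per barrel of diesel, which is why gasoline fills storage while refineries keep running for diesel.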

Covid-19 may cause peak oil to come sooner rather than delay it. It’s likely that U.S. fracked oil production will decline by half due to covid, and conventional, offshore, and deep-sea wells will too. Once closed (shut in) because there is no more storage, wells are likely to produce half as much or less after being reopened. Production beyond that requires a great deal of time and money.



What is Peak Oil?

Hubbert’s curve shows the theoretical extraction from a single oil well or all oil wells in the World. Source: Wikimedia Commons.
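The bell-shaped curve in the figure is the derivative of a logistic (S-shaped) cumulative-extraction curve. A minimal sketch, with illustrative numbers rather than real-world reserve estimates:

```python
import math

# Hubbert's model: cumulative extraction Q(t) follows a logistic,
# so annual production P(t) = dQ/dt is a symmetric bell curve.
# URR, K and T_PEAK below are illustrative, not estimates.

URR = 2000.0   # ultimately recoverable resource (e.g. billions of barrels)
K = 0.05       # steepness of the logistic
T_PEAK = 1970  # year of peak production

def cumulative(t):
    """Cumulative extraction Q(t): the S-shaped logistic."""
    return URR / (1.0 + math.exp(-K * (t - T_PEAK)))

def production(t):
    """Annual production P(t) = dQ/dt: the bell-shaped Hubbert curve."""
    q = cumulative(t)
    return K * q * (1.0 - q / URR)

# Production peaks exactly when half the resource has been extracted,
# and the curve is symmetric about that peak year.
peak_year = max(range(1900, 2041), key=production)
```

Note the model's key implication, which the rest of the post elaborates: extraction declines not when the resource is gone, but as soon as the easy half has been taken.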

A recurring topic in these discussions is the confusion between resources and production. Let me explain. In the world today there exist enormous, I might even say vast, resources of liquid hydrocarbons (a more appropriate name than oil, because it involves many diverse substances which are not completely equivalent).

If we take a look at just conventional crude oil, there is enough to supply current consumption for about 30 years.

If we include the extra-heavy oils (bitumen), there is enough to cover current consumption for a century.
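The “30 years” and “a century” figures are simple reserves-to-production (R/P) arithmetic. A sketch with assumed round numbers (world consumption of roughly 35 billion barrels a year; the reserve totals are back-calculated for illustration, not data from this post):

```python
# Naive reserves-to-production (R/P) arithmetic behind "X years of supply"
# claims. All numbers are illustrative assumptions, not measurements.

WORLD_CONSUMPTION = 35e9  # barrels per year, roughly current world demand

def years_of_supply(reserves_barrels, annual_consumption=WORLD_CONSUMPTION):
    """Reserves divided by a flat annual consumption. This is the whole
    calculation the 'no urgent problem' intuition rests on; it says
    nothing about the RATE at which the oil can actually be extracted."""
    return reserves_barrels / annual_consumption

conventional = 1.05e12     # assumed conventional crude remaining
with_extra_heavy = 3.5e12  # assumed total including extra-heavy bitumen

print(round(years_of_supply(conventional)))      # → 30
print(round(years_of_supply(with_extra_heavy)))  # → 100
```

As the rest of the post argues, this naive division is exactly what misleads: it treats the resource as a stock that can be drawn down at a constant rate, ignoring flow constraints entirely.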

Intuitive conclusion: there is no problem with oil, or at least not an urgent one. Right?

This is how the mass media sees it, and that is the reason why I decided to create The Oil Crash blog.

What is the reality?

The reality is that it does not matter how much there may be in the subsoil; what matters is the rate at which you can extract it. In other words, what is the production going to be? I will give you an example.

Imagine you are thirsty and want to drink some water. I tell you that there is a glass of water available.

Possibility 1: There is a glass full of water. You take it and you drink it. This is what the world has done with oil since the 19th century up to the first oil crisis in 1970.

Possibility 2: The water is spread over a smooth surface. You take a straw and, with a lot more effort than in the previous case, you manage to drink the water. This is what the world has done with oil from 1970 to 2000.

Possibility 3: The water is mixed with sand. You have to heat the sand in a watertight container, condense the water that evaporates, and let it run into the glass through a still, then wait for it to cool down. The process is not perfect: part of the water never evaporates from the sand, and part is lost as vapor at the end of the still or evaporates from the glass because it is still quite hot, so you recover about 2/3 of the original glass. Moreover, the process is really slow and generates a lot of heat, so you feel even thirstier and the process never quenches your thirst. This is what the world has done with oil from 2000 until 2010.

Possibility 4: There is no liquid water, but I tell you that you can condense it in the air. It is a very slow and inefficient process, but you are thirsty and you have to find water from somewhere. The problem is that you have to keep drinking but the relative humidity of the room keeps going down. There is still a lot of water in the air, but it is extracted more and more slowly each time. You could build a mega-machine in order to “dry out all the air” in one go, but you don’t actually have the resources to do that, so in fact you have to make do with what you have. This is what is happening with oil since 2010.

Obviously, in the real world we have a mixture of the four possibilities, from simple oilfield extraction (possibility 1) to ludicrously expensive extraction (possibility 4). As time goes by, however, the simple oilfields are running dry and we are left with the more complicated ones.

From time to time you come across economists who tell you: OK, that’s true, but with more investment and advances in technology those possibility-4 oilfields will become profitable and quick to exploit. That is a lie. The problem is not the economic profitability of the oilfields but the energetic profitability (in other words, how much energy is gained for every unit of energy expended in obtaining it). If you spend more energy than you manage to extract, forget it: the extraction will not be energetically profitable, and therefore not economically profitable either, for obvious reasons. If you gain only a little energy, the extraction will still not be economically profitable, because there are other costs. For an oilfield to be worth extracting, in practice you have to gain much more energy than you spend. As for technology, thermodynamics sets limits on the yield of these processes, limits which cannot be exceeded, and we are already far too close to them. There are no big improvements to hope for (there will be improvements, undoubtedly, but they will not be big).
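This “energetic profitability” is usually written as EROI (energy returned on energy invested). A small sketch of why being too close to the limit matters: the fraction of gross energy left over for society is 1 − 1/EROI, which collapses non-linearly as EROI approaches 1 (the so-called net energy cliff). The EROI values below are illustrative.

```python
# The "net energy cliff": the share of extracted energy actually available
# to society after paying the energy cost of extraction itself.

def net_fraction(eroi):
    """Net energy delivered per unit of gross energy extracted."""
    if eroi <= 0:
        raise ValueError("EROI must be positive")
    return 1.0 - 1.0 / eroi

for eroi in (100, 20, 10, 5, 2, 1.2):
    print(f"EROI {eroi:>5}: {net_fraction(eroi):.0%} of gross energy is net")

# Falling from EROI 100 to 20 barely matters (99% vs 95% net); below
# about 5 every further drop removes a large slice of net energy, and
# at EROI 1 extraction yields nothing at all.
```

This is why an economist’s appeal to price and investment misses the point: no oil price makes an EROI-1 field deliver net energy.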

This is the situation we are in now. The liquid hydrocarbon deposits we have left are of bad quality, and it is difficult to extract the oil from them. For this reason, oil production will not now increase and will probably fall sharply in the next few years. Oil is not going to run out in 30 years, nor 100, nor 200: it will be here for many more centuries. What will happen is that we will have less of it available with each passing year. When I give a talk I always say that the situation is like that of a person whose wages keep being reduced. In the beginning they earn £2,000 a month and they’re fine, unworried. The next year their wage is lowered to £1,800 a month and things are still fine. The year after it is lowered to £1,600, and the person starts to get annoyed. The following year it is lowered to £1,400, then £1,200, then £1,100, then £1,000, then £900, then £850, then £800, then £775… They never stop receiving their wages, but with what they are earning, life gets more and more difficult. This is our situation: our energy wage is going to get smaller and smaller with each year, and we will have to learn to live with less and less.


Shale gas is only good for plastics, not transportation fuels

Preface. The oil industry is making more plastic because electric cars have cut gasoline use, and because shale “fracked” gas is so light, plastic is about the only use for it. It is not a transportation fuel that can save us from the coming peak oil energy crisis.

But the plastics boom may be stopped abruptly in its tracks. The fracking industry may not last as long as many believe, for geological reasons (Miller 2019). Fracking may also not survive the pandemic financial crash, since most companies are in debt.



Gardiner, B. 2020. A Surge of New Plastic Is About to Hit the Planet as major oil companies ramp up their production. wired.com

Petrochemicals, the category that includes plastic, now account for 14 percent of oil use and are expected to drive half of oil demand growth between now and 2050, the International Energy Agency (IEA) says. The World Economic Forum predicts plastic production will double in the next 20 years.

And because the American fracking boom is unearthing, along with natural gas, large amounts of the plastic feedstock ethane, the United States is a big growth area for plastic production. With natural gas prices low, many fracking operations are losing money, so producers have been eager to find a use for the ethane they get as a byproduct of drilling.

“They’re looking for a way to monetize it,” Feit said. “You can think of plastic as a kind of subsidy for fracking.”

Shell is building a $6 billion ethane cracking plant—a facility that turns ethane into ethylene, a building block for many kinds of plastic—in Monaca, Pennsylvania, 25 miles northwest of Pittsburgh. It is expected to produce up to 1.6 million tons of plastic annually after it opens in the early 2020s. Pennsylvania granted the Shell plant a tax break valued at $1.6 billion—one of the biggest in state history—and officials in Ohio and West Virginia are wooing firms eager to build more ethane crackers, storage facilities, and pipelines.

Since 2010, companies have invested more than $200 billion in 333 plastic and other chemical projects in the US, including expansions of existing facilities, new plants, and associated infrastructure such as pipelines.

“If you aren’t going to use plastics, what are you going to use instead?” Alternatives like steel, glass, and aluminum have negative impacts of their own, including carbon footprints that can be greater than plastic’s. Plastic makes cars lighter and therefore more efficient, insulates homes, reduces waste by extending food’s life, and keeps medical supplies sanitary, among many other uses.

Alter, L. 2019. Oil industry is spending billions on increasing plastics production. Treehugger.com

The increase in the production of petrochemicals, spurred by the abundance of shale gas as feedstock and the demand for ethane – a key component in plastics – has prompted energy companies to continue investing billions of dollars in the petrochemical sector. “The global petrochemical sector continues to expand exponentially as developing nations’ demand for petrochemical/chemical products continues to increase,” says Petroleum Economist.

Consultants note that oil producers are pivoting to plastics, away from gas or diesel, and that demand for petrochemical feedstocks will increase by 50%. Petrochemical manufacturers are building 11 new ethylene plants on the Gulf Coast, with capacity for polyethylene growing by 30 percent. The director of the trade association says, “You are going to see over $200 billion in investment in the Gulf Coast specifically related to petrochemical manufacturing.”

Harbors are being dredged, methanol complexes are being built, giant warehouses for pellet storage are under construction. “The Port of Greater Baton Rouge had its fortunes boosted recently with the announcement that ExxonMobil will spend $469 million to add a polypropylene manufacturing unit to its vast greater Baton Rouge petrochemical complex.”

Another $9.4 billion manufacturing complex on the Mississippi River will produce MDI or methylene diphenyl diisocyanate, which goes into our favorite products: polyurethane, spray-foam insulation, furniture, and textiles.


Miller, A. 2019. David Hughes’ Shale Reality check 2019. Postcarbon.org
