Carbon capture could require 25% of all global energy

Preface. This is clearly a pipe dream. Surely the authors know this, since they say that the energy needed to run direct air capture machines in 2100 is up to 300 exajoules each year. That’s more than half of global energy consumption today. It’s equivalent to the current annual energy demand of China, the US, the EU and Japan combined, and equal to the global supply of energy from coal and gas in 2018.
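A quick scale check of those numbers (a sketch; the 300 EJ figure comes from the study, while the ~580 EJ estimate for 2018 global primary energy is my own assumption):

```python
# Rough scale check of the study's upper-bound DAC energy demand.
EJ_TO_TWH = 1e18 / 3.6e15   # 1 EJ ≈ 277.8 TWh

dac_demand_ej = 300           # upper-bound DAC demand in 2100 (from the study)
global_primary_2018_ej = 580  # approximate 2018 global primary energy (assumption)

share = dac_demand_ej / global_primary_2018_ej
print(f"{dac_demand_ej} EJ ≈ {dac_demand_ej * EJ_TO_TWH:,.0f} TWh/yr, "
      f"about {share:.0%} of 2018 global primary energy")
```

At roughly 52%, that is consistent with the “more than half of global energy consumption today” claim.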

That’s a showstopper. This CO2 chomper isn’t going anywhere. It simply requires too much energy, too many raw materials, and an impossibly rapid large-scale deployment of 30% growth a year to be of any use.

Reaching 30 GtCO2/yr of capture – a scale similar to current global emissions – would mean building some 30,000 large-scale DAC factories. For comparison, there are fewer than 10,000 coal-fired power stations in the world today.
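The implied scale per plant follows directly from the figures above (a sketch of the arithmetic):

```python
# Implied capture per plant, using the figures above.
total_capture_gtco2 = 30      # GtCO2/yr, roughly current global emissions
n_facilities = 30_000

per_facility_mtco2 = total_capture_gtco2 * 1000 / n_facilities
print(f"Each DAC plant would have to capture ~{per_facility_mtco2:.0f} MtCO2/yr")
```

One million tonnes of CO2 per plant per year, every year, for 30,000 plants.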

The cement and steel used to build DACCS facilities would themselves require a great deal of energy and generate CO2 emissions, which must be subtracted from whatever the facilities sequester.

The sorbents that capture the CO2 are another weak link – they are between the research and demonstration stages, far from commercial, and subject to degradation, which would drive up operational and maintenance costs. Their manufacture also releases chemical pollutants that must be managed, adding still more energy use. And some sorbents require high-temperature heat and fossil fuel inputs, possibly pushing the “quarter of global energy” figure even higher.

As far as I can tell, sorbent-based capture – far from commercial and very expensive to produce – is only being proposed because there isn’t enough geological storage capacity for the CO2.

By the time all of the many technical barriers were overcome, global oil production would probably be declining, rendering a DACCS facility moot. A decline of 4–8% a year in global oil production would cut CO2 emissions far more than DACCS ever could: compounded over two decades, those rates leave only about a fifth to two-fifths of the oil, and the emissions, we once had.
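Compound decline adds up faster than intuition suggests. A sketch of what 4–8% annual decline rates leave after two decades:

```python
def remaining(decline_rate: float, years: int) -> float:
    """Fraction of initial production left after compounding decline."""
    return (1 - decline_rate) ** years

for rate in (0.04, 0.08):
    left = remaining(rate, 20)
    print(f"{rate:.0%}/yr decline leaves {left:.0%} of today's output after 20 years")
```

At 4%/yr roughly 44% remains after 20 years; at 8%/yr roughly 19%.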

Alice Friedemann  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report


Evans, S. 2019. Direct CO2 capture machines could use ‘a quarter of global energy’ in 2100. Carbon Brief.

This article is a summary of: Realmonte, G. et al. 2019. An inter-model assessment of the role of direct air capture in deep mitigation pathways, Nature Communications.

Machines that suck CO2 directly from the air using direct air capture (DAC) could cut the cost of meeting global climate goals, a new study finds, but they would need as much as a quarter of global energy supplies in 2100 to limit warming to 1.5C or 2C above pre-industrial levels.

But the study also highlights the “clear risks” of assuming that DAC will be available at scale, with global temperature goals being breached by up to 0.8C if the technology then fails to deliver.

This means policymakers should not see DAC as a “panacea” that can replace immediate efforts to cut emissions, one of the study authors tells Carbon Brief, adding: “The risks of that are too high.”

DAC should be seen as a “backstop for challenging abatement” where cutting emissions is too complex or too costly, says the chief executive of a startup developing the technology. He tells Carbon Brief that his firm nevertheless “continuously push back on the ‘magic bullet’ headlines”.

Negative emissions

The 2015 Paris Agreement set a goal of limiting human-caused warming to “well below” 2C and an ambition of staying below 1.5C. Meeting this ambition will require the use of “negative emissions technologies” to remove excess CO2 from the atmosphere, according to the Intergovernmental Panel on Climate Change (IPCC).

This catch-all term covers a wide range of approaches, including planting trees, restoring peatlands and other “natural climate solutions”. Model pathways rely most heavily on bioenergy with carbon capture and storage (BECCS), where biomass, such as wood pellets, is burned to generate electricity and the resulting CO2 is captured and stored. The significant potential role for BECCS raises a number of concerns, with land areas up to five times the size of India devoted to growing the biomass needed in some model pathways.

Another alternative is direct air capture, where machines are used to suck CO2 out of the atmosphere. If the CO2 is then buried underground, the process is sometimes referred to as direct air carbon capture and storage (DACCS).

Today’s new study explores how DAC could help meet global climate goals with “lower costs”, using two different integrated assessment models (IAMs). Study author Dr Ajay Gambhir, senior research fellow at the Grantham Institute for Climate Change at Imperial College London, explains to Carbon Brief:

“This is the first inter-model comparison…[and] has the most detailed representation of DAC so far used in IAMs. It includes two DAC technologies, with different energy inputs and cost assumptions, and a range of energy inputs including waste heat. The study uses an extensive sensitivity analysis [to test the impact of varying our assumptions]. It also includes initial analysis of the broader impacts of DAC technology development, in terms of material, land and water use.”

The two DAC technologies included in the study are based on different ways to adsorb CO2 from the air, which are being developed by a number of startup companies around the world.

One, typically used in larger industrial-scale facilities such as those being piloted by Canadian firm Carbon Engineering, uses a solution of hydroxide to capture CO2. This mixture must then be heated to high temperatures to release the CO2 so it can be stored and the hydroxide reused. The process uses existing technology and is currently thought to have the lower cost of the two alternatives.

The second technology uses amine adsorbents in small, modular reactors such as those being developed by Swiss firm Climeworks. Costs are currently higher, but the potential for savings is thought to be greater, the paper suggests. This is due to the modular design that could be made on an industrial production line, along with lower temperatures needed to release CO2 for storage, meaning waste heat could be used.

Delayed cuts

Overall, despite “huge uncertainty” around the cost of DAC, the study suggests its use could allow early cuts in global greenhouse gas emissions to be somewhat delayed, “significantly reducing climate policy costs” to meet stringent temperature limits.

Using DAC means that global emissions in 2030 could remain at higher levels, the study says, with much larger use of negative emissions later in the century.  

The use of DAC in some of the modelled pathways delays the need to cut emissions in certain areas. The paper explains: “DACCS allows a reduction in near term mitigation effort in some energy-intensive sectors that are difficult to decarbonise, such as transport and industry.”

Steve Oldham, chief executive of DAC startup Carbon Engineering, says he sees this as the key purpose of CO2 removal technologies, which he likens to other “essential infrastructure” such as waste disposal or sewage treatment.

Oldham tells Carbon Brief that while standard approaches to cutting CO2 remain essential for the majority of global emissions, the challenge and cost may prove too great in some sectors. He says:

“DAC and other negative emissions technologies are the right solution once the cost and feasibility becomes too great…I see us as the backstop for challenging abatement.”

Comparing costs

Even though DAC may be relatively expensive, the model pathways in today’s study still see it as much cheaper than cutting emissions from these hard-to-tackle sectors. This means the models deploy large amounts of DAC, even if its costs are at the high end of current estimates.

It also means the models see pathways to meeting climate goals that include DAC as having lower costs overall (“reduce[d]… by between 60 to more than 90%”). Gambhir tells Carbon Brief:

“Deploying DAC means less of a steep mitigation pathway in the near-term, and lowers policy costs, according to the modelled scenarios we use in this study.”

He adds:

“Large-scale deployment of DAC in below-2C scenarios will require a lot of heat and electricity and a major manufacturing effort for production of CO2 sorbent. Although DAC will use less resources such as water and land than other NETs [such as BECCS], a proper full life-cycle assessment needs to be carried out to understand all resource implications.”

Deployment risk

There are also questions as to whether this new technology could be rolled out at the speed and scale envisaged, with expansion at up to 30% each year and deployment reaching 30 GtCO2/yr towards the end of the century. This is a “huge pace and scale”, Gambhir says, with the rate of deployment being a “key sensitivity” in the study results.

Prof Jennifer Wilcox, professor of chemical engineering at Worcester Polytechnic Institute, who was not involved with the research, says that this rate of scale-up warrants caution. She tells Carbon Brief:

“Is the rate of scale-up even feasible? Typical rules of thumb are increase by an order of magnitude per decade [growth of around 25-30% per year]. [Solar] PV scale-up was higher than this, but mostly due to government incentives…rather than technological advances.”
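The “order of magnitude per decade” rule of thumb Wilcox cites maps onto an annual growth rate like so (a quick sketch of the arithmetic):

```python
# "An order of magnitude per decade" expressed as an annual growth rate,
# versus the study's 30%/yr deployment assumption.
rule_of_thumb = 10 ** (1 / 10) - 1   # annual rate that compounds to 10x in 10 years
decade_factor = 1.30 ** 10           # what 30%/yr compounds to over a decade

print(f"10x per decade ≈ {rule_of_thumb:.1%} per year")
print(f"30% per year ≈ {decade_factor:.1f}x per decade")
```

So 30%/yr is slightly faster than the rule of thumb: roughly 13.8x per decade rather than 10x.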

If DAC were to be carried out using small modular systems, then as many as 30m might be needed by 2100, the paper says. It compares this number to the 73m light vehicles that are built each year.

The study argues that expanding DAC at such a rapid rate is comparable to the speed with which newer electricity generation technologies such as nuclear, wind and solar have been deployed.

The modelled rate of DAC growth is “breathtaking” but “not in contradiction with the historical experience”, says Dr Nico Bauer of the Potsdam Institute for Climate Impact Research (PIK), who was not involved in the study. This rapid scale-up is also far from the only barrier to DAC adoption.

The paper explains: “[P]olicy instruments and financial incentives supporting negative emission technologies are almost absent at the global scale, though essential to make NET deployment attractive.”

Carbon Engineering’s Oldham agrees that there is a need for policy to recognise negative emissions as unique and different from standard mitigation. But he tells Carbon Brief that he remains “very very confident” in his company’s ability to scale up rapidly.

(Today’s study includes consideration of the space available to store CO2 underground, finding this not to be a limiting factor for DAC deployment.)

Breaching limits

The paper says that the challenges to scale-up and deployment on a huge scale bring significant risks, if DAC does not deliver as anticipated in the models. Committing to ramping up DAC rather than cutting emissions could mean locking the energy system into fossil fuels, the authors warn.

This could risk breaching the Paris temperature limits, the study explains:

“The risk of assuming that DACCS can be deployed at scale, and finding it to be subsequently unavailable, leads to a global temperature overshoot of up to 0.8C.”

Gambhir says the risks of such an approach are “too high”:

“Inappropriate interpretations [of our findings] would be that DAC is a panacea and that we should ease near-term mitigation efforts because we can use it later in the century.”

Bauer agrees:

“Policymakers should not make the mistake to believe that carbon removals could ever neutralise all future emissions that could be produced from fossil fuels that are still underground. Even under pessimistic assumptions about fossil fuel availability, carbon removal cannot and will not fix the problem. There is simply too much low-cost fossil carbon that we could burn.”

Nonetheless, Prof Massimo Tavoni, one of the paper’s authors and the director of the European Institute on Economics and the Environment (EIEE), tells Carbon Brief that “it is still important to show the potential of DAC – which the models certainly highlight – but also the many challenges of deploying at the scale required”.

The global carbon cycle poses one final – and underappreciated – challenge to the large-scale use of negative emissions technologies such as DAC: ocean rebound. This is because the amount of CO2 in the world’s oceans and atmosphere is in a dynamic and constantly shifting equilibrium.

This equilibrium means that, at present, oceans absorb a significant proportion of human-caused CO2 emissions each year, reducing the amount staying in the atmosphere. If DAC is used to turn global emissions net-negative, as in today’s study, then that equilibrium will also go into reverse.

As a result, the paper says as much as a fifth of the CO2 removed using DAC or other negative emissions technologies could be offset by the oceans releasing CO2 back into the atmosphere, reducing their supposed efficacy.
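Put another way, the ocean rebound grosses up the capture requirement. A sketch using the paper’s “up to a fifth” figure:

```python
OCEAN_REBOUND = 0.2   # up to one fifth of removals offset, per the paper

def gross_capture_needed(net_removal_gt: float) -> float:
    """GtCO2 that must be captured for a given net atmospheric reduction."""
    return net_removal_gt / (1 - OCEAN_REBOUND)

print(f"Net 30 GtCO2/yr removal requires capturing "
      f"~{gross_capture_needed(30):.1f} GtCO2/yr")
```

So to net-remove the 30 GtCO2/yr discussed above, around 37.5 GtCO2/yr would have to be captured.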


Himalayan glaciers that supply water to a billion people are melting fast

Preface. The Himalayan glaciers that supply water to a billion people are melting fast: roughly 30% of their ice has already been lost since 1975.

Adding to the crisis are the 400 dams under construction or planned for Himalayan rivers in India, Pakistan, Nepal, and Bhutan to generate electricity and for water storage.  The dams’ reservoirs and transmission lines will destroy biodiversity, thousands of houses, towns, villages, fields, 660 square miles of forests, and even parts of the highest highway of the world, the Karakoram highway. The dam projects are at risk of collapse from earthquakes in this seismically active region and of breach from flood bursts from glacial lakes upstream. Dams also threaten to intensify flooding downstream during intense downpours when reservoirs overflow (IR 2008, Amrith 2018).

Since the water flows to 16 nations, clearly these dams could cause turmoil and even war if river flows are cut off from downstream countries.  Three of these nations, India, Pakistan, and China, have nuclear weapons.

It’s already happening. After a terrorist attack that killed 40 Indian police officers in Kashmir, India decided to retaliate by cutting off some of the river water that flows on to Pakistan, “adding an extra source of conflict between two nuclear-armed neighbors”. Pakistan is one of the most water-stressed countries in the world, with seriously depleted underground aquifers and shrinking storage behind its two largest dams due to silt (Johnson 2019).


Wu, K. 2019. Declassified spy images show Earth’s ‘Third Pole’ is melting fast.  Accelerating ice melt in the Himalayas may imperil up to a billion people in South Asia who rely on glacier runoff for drinking water and more.

According to a study published today in the journal Science Advances, rising temperatures in the Himalayas have melted nearly 30% of the region’s total ice mass since 1975.

These disappearing glaciers imperil the water supply of up to a billion people throughout Asia.

Once nicknamed Earth’s ‘Third Pole’ for its impressive cache of snow and ice, the Himalayas may now have a bleak future ahead. Four decades of satellite data, including recently declassified Cold War-era spy film, suggest these glaciers are currently receding twice as fast as they were at the end of the 20th century.

Several billion tons of ice are sloughing off the Himalayas each year without being replaced by snow. That spells serious trouble for the peoples of South Asia, who depend on seasonal Himalayan runoff for agriculture, hydropower, drinking water, and more. Melting glaciers could also prompt destructive floods and threaten local ecosystems, generating a ripple effect that may extend well beyond the boundaries of the mountain’s warming peaks.

The study’s sobering findings come as the result of a massive compilation of data across time and space. While previous studies have documented the trajectories of individual glaciers in the Himalayas, the new findings track 650 glaciers that span a staggering 1,250-mile-wide range across Nepal, Bhutan, India, and China. They also draw on some 40 years of satellite imagery, which the scientists stitched together to reconstruct a digital, three-dimensional portrait of the glaciers’ changing surfaces—almost like an ultra-enhanced panorama.

When a team of climatologists analyzed the time series, they found a stark surge in glacier shrinkage. Between 1975 and 2000, an average of about 10 inches of ice were shed from the glaciers each year. Post-Y2K, however, the net loss doubled to around 20 inches per year—a finding in keeping with accelerated rates of warming around the globe.

While previous studies have had difficulty disentangling the relative contributions of rising temperatures, ice-melting pollutants, and reduced rainfall to the boost in glacier melt, the new analysis suggests the latter two simply aren’t enough to explain the alarming drop in ice mass in recent years.


Amrith, S. S. 2018. The race to dam the Himalayas. Hundreds of big projects are planned for the rivers that plunge from the roof of the world. New York Times.

IR. 2008. Mountains of concrete: Dam building in the Himalayas. International Rivers.

Johnson, K. 2019. Are India and Pakistan on the verge of a water war? Foreign Policy.


Billionaire apocalypse bunkers

Vivos Bunker
Vivos built a 575-bunker compound in South Dakota that’s almost the size of Manhattan.


There are many reasons why people might want a bunker, but peak oil, peak phosphorus, peak everything really… and limits to growth were not among the reasons given. James Howard Kunstler’s book, “The Long Emergency: Surviving the End of Oil, Climate Change, and Other Converging Catastrophes of the 21st Century”, could have been titled “The Permanent Emergency”. Once oil begins to decline globally in earnest, within 20 years there’ll be 10% or less of the oil left (unless wars end oil sooner and faster than that), and oil and other fossil fuels are what allowed humans to expand from about 1 billion to 7.8 billion today.

So, when people emerge from their bunkers, they’d better know how to farm and be living on arable land with adequate rainfall.

You can’t run from the “end of the world” to a bunker. It’s just a very fancy tombstone.

Most of what follows comes from the website “The Backup Plan For Humanity. Secure your space in a Vivos underground shelter to survive virtually any catastrophe”. It’s fun to poke around on and has many photos not shown below.


Bendix, A. 2019. 45 unreal photos of ‘billionaire bunkers’ that could shelter the superrich during an apocalypse.

A lot has changed since December 21, 2012, when around 10% of people falsely believed the world would end. Billionaires aren’t the only ones prepping for doomsday, but they could be the most prepared if the world gets hit by an asteroid or nuclear missile. In recent years, companies have built “billionaire bunkers” that cater to the apocalyptic fears of the superrich.

The effects of climate change have become more frequent and severe, threatening vulnerable areas with floods, hurricanes, and extreme heat. Machines have become more intelligent, leading some to worry about a technological overthrow of society. And the possibility of global nuclear warfare looms even larger, with North Korea continuing to advance its nuclear weapons program.

Predicting one of these unlikely doomsday scenarios may be impossible, but planning for them isn’t if you’re a member of the 1%. Take a look at the “billionaire bunkers” that could house the super rich during an apocalypse.

Amid growing threats to the safety of our planet, a small group of elites — namely, Silicon Valley execs and New York City financiers — have started to discuss their doomsday plans over dinner.

In some cases, these conversations have prompted wealthy individuals to purchase underground bunkers to shelter themselves during a disaster.

“Billionaire bunkers” don’t need to be built from scratch.

A few companies now manufacture luxury doomsday shelters that cater to super rich clientele. The Vivos Group, a company based in Del Mar, California, is building a “global underground shelter network” for high-end clients.

Their fanciest compound, known as Europa One, is located beneath a 400-foot-tall mountain in the village of Rothenstein, Germany.

The shelter was once a storage space for Soviet military equipment during the Cold War, according to the company’s website.

In exchange for purchasing a bunker, residents are provided with a full-time staff and security team.

This property is designed to withstand a close-range nuclear blast, airline crash, earthquake, flood, or military attack.

A typical living quarters has two floors. On the lower level (shown below), there are multiple bedrooms, a pool table, and a movie theater.

Each family is allotted 2,500 square feet, but has the option to extend their residence to 5,000 square feet.

This sample movie theater can fit a family of five.

The bunker includes communal spaces, such as a pub for tossing back a few while the world comes to an end.

Or a chapel for sending prayers to the rest of humanity.

When doomsday arrives, the company envisions residents arriving in Germany by car or plane. From there, Vivos will transport them via helicopter to their sheltered homes.

The full underground structure stretches nearly 230,000 square feet.

There are only 34 private living quarters, so space is limited.

But the price will likely preclude most people from buying. Private apartments start at $2.5 million and fully furnished, semi-private suites start at around $40,000 a person.

If billionaires can’t find space at Europa One, there’s also xPoint, a compound in South Dakota that’s almost the size of Manhattan.

xPoint was originally built by Army engineers.

The compound’s location near the Black Hills of South Dakota makes it relatively safe from flooding and nuclear targets, according to Vivos.

xPoint comes with its own electrical and water systems, so residents can survive for at least a year without having to go outside.

The entire compound consists of 575 bunkers, each with enough space for 10 to 24 people.

Each bunker is around 2,200 square feet.

The bunkers start at $35,000, but residents will also have to pay $1,000 in annual rent. That’ll likely require some savings when it’s unsafe to go outdoors.

The company has yet another shelter in Indiana, which can house just 80 people.

The Vivos website likens the shelter to “a very comfortable 4-Star hotel.”

The communal living room has 12-foot-high ceilings.

Residents aren’t expected to bring anything other than clothing and medication.

Vivos provides the rest, including laundry facilities, food, toiletries, and linens.

There’s even exercise equipment and pet kennels.

The shelter is co-owned by its members, which makes it slightly more affordable than the company’s other models.

Vivos claims on its website that the shelter is safe from tsunamis, earthquakes, and nuclear attacks.

In a statement, the company said interest in its shelters has “skyrocketed over the past few years.” The website says that “few” spaces remain across its network of bunkers.

Vivos members aren’t all elite one-percenters, the company said, “but rather well-educated, average people with a keen awareness of the current global events.”

The Survival Condo Project, on the other hand, caters exclusively to the superrich.

The company’s 15-story facility, fashioned from a retired missile silo, cost $20 million to build.

In an interview with the New Yorker, the company’s CEO, Larry Hall, said his facility represented “true relaxation for the ultra-wealthy.”

Source: The New Yorker

The facility only has room for about a dozen families, or 75 people in total.

The facility is somewhere north of Wichita, Kansas, but its exact location is secret.

A single unit is relatively small — around 1,820 square feet.

As of last year, units were advertised for $3 million each. The company also sells half-floor units for around $1.5 million.

All floors are connected by a high-speed elevator.

Homeowners can venture outside, but there are SWAT team-style trucks available to pick them up within 400 miles.

Under a crisis scenario, residents have to secure permission from the company’s board of directors before leaving the premises.

But conditions inside are far from unbearable. The facility comes with a gym, game center, dog park, classroom, and 75-foot swimming pool.

There’s even a rock wall. Doomsday has never sounded so luxurious.


How safe are utility-scale energy storage batteries?

Preface.  Airplanes can be forced to make an emergency landing if even a small external battery pack, like the kind used to charge cell phones, catches on fire (Mogg 2019).

If a small battery pack can force an airplane to land, imagine the conflagration a utility-scale storage battery might cause.

A lithium-ion battery designed to store just one day of U.S. electricity generation (11 TWh) to balance solar and wind power would be huge.  Using data from the Department of Energy (DOE/EPRI 2013) energy storage handbook, I calculated that the cost of a utility-scale lithium ion battery capable of storing 24 hours of electricity generation in the United States would cost $11.9 trillion dollars, take up 345 square miles, and weigh 74 million tons.

And at least 6 weeks of energy storage is needed to keep the grid up during times when there’s no sun or wind. This storage would have to come mainly from batteries, because there are very few places to put Compressed Air Energy Storage (CAES), which also has a very low energy density, Pumped Hydro energy storage (PHS), or Concentrated Solar Power with Thermal Energy Storage. Currently natural gas is the main form of energy storage, always available to quickly step in when the wind dies and sun goes down, as well as to provide power around the clock with help from coal, nuclear, and hydropower.
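A back-of-envelope check on the one-day battery above and the six-week requirement (a sketch; the totals come from the text, while the per-kWh values and six-week scaling are my own arithmetic):

```python
# Figures for a battery storing one day of U.S. electricity generation.
storage_twh = 11        # one day of U.S. generation
cost_usd = 11.9e12      # $11.9 trillion
weight_tons = 74e6      # 74 million tons

kwh = storage_twh * 1e9
print(f"Implied cost:   ${cost_usd / kwh:,.0f}/kWh")
print(f"Implied weight: {weight_tons * 1000 / kwh:.1f} kg/kWh")

# Linear scaling to six weeks (42 days) of storage
print(f"Six weeks of storage: ~${cost_usd * 42 / 1e12:,.0f} trillion")
```

The implied ~$1,082/kWh and ~6.7 kg/kWh are at least the right order of magnitude for the 2013-era lithium-ion systems in the DOE/EPRI handbook, and six weeks of such storage would run to roughly $500 trillion.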

Storing large amounts of energy, whether in large rechargeable batteries or smaller disposable ones, can be inherently dangerous. The causes of lithium battery failure include puncture, overcharge, overheating, short circuit, internal cell failure and manufacturing deficiencies. Nearly all of the utility-scale batteries now on the grid or in development are massive versions of the same lithium-ion technology that powers cellphones and laptops. If the batteries get too hot, a fire can start and trigger a phenomenon known as thermal runaway, in which the fire feeds on itself and is nearly impossible to stop until it consumes all the available fuel.

[Photo: the 2 MW Arizona battery that exploded]

Already a 2-megawatt battery (above) installed by the Arizona Public Service electric company exploded in April 2019, sending eight firefighters and a policeman to the hospital (Cooper 2019), and at least 23 South Korean lithium-ion facilities caught fire in a series of incidents dating back to August 2017 (Deign 2019).

Below are excerpts from an 82-page Department of Energy document. Clearly, containing utility-scale battery fires will be difficult:

“Thermal damage to ESS chemistries and components presents unique challenges to the fire service community, building owners, and insurers. As evidenced in full-scale testing of EV battery fires, fire suppression required more water than anticipated, and significantly more in some cases. Additionally, confirming that the fire was completely extinguished was difficult due to the containment housings of EV batteries that can mask continued thermal reaction within undamaged cells. In one of the tests performed by Exponent, Inc., one battery reignited after being involved in a full-scale fire test some 22 hours post-extinguishment; in another case, an EV experienced a subsequent re-ignition 3 weeks post-crash testing.”

Alice Friedemann, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer), Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report


USDOE. December 2014. Energy Storage Safety Strategic Plan. U.S. Department of Energy.

Energy storage is emerging as an integral component of a resilient and efficient grid through a diverse array of potential applications. The evolution of the grid that is currently underway will result in a greater need for services best provided by energy storage, including energy management, backup power, load leveling, frequency regulation, voltage support, and grid stabilization. The increase in demand for specialized services will further drive energy storage research to produce systems with greater efficiency at a lower cost, which will lead to an influx of energy storage deployment across the country. To enable the success of these increased deployments of a wide variety of storage technologies, safety must be instilled within the energy storage community at every level and in a way that meets the need of every stakeholder. In 2013, the U.S. Department of Energy released the Grid Energy Storage Strategy, which identified four challenges related to the widespread deployment of energy storage. The second of these challenges, the validation of energy storage safety and reliability, has recently garnered significant attention from the energy storage community at large. This focus on safety must be immediately ensured to enable the success of the burgeoning energy storage industry, whereby community confidence that human life and property not …

The safe application and use of energy storage technology knows no bounds. An energy storage system (ESS) will react to an external event, such as a seismic occurrence, regardless of its location in relation to the meter or the grid. Similarly, an incident triggered by an ESS, such as a fire, is ‘blind’ as to the location of the ESS in relation to the meter.

Most of the current validation techniques that have been developed to address energy storage safety concerns have been motivated by the electric vehicle community, and are primarily focused on Li-ion chemistry and derived via empirical testing of systems. Additionally, techniques for Pb-acid batteries have been established, but must be revised to incorporate chemistry changes within the new technologies. Moving forward, all validation techniques must be expanded to encompass grid-scale energy storage systems, be relevant to the internal chemistries of each new storage system and have technical bases rooted in a fundamental-scientific understanding of the mechanistic responses of the materials.


Grid energy storage systems are “enabling technologies”; they do not generate electricity, but they do enable critical advances to modernize the electric grid. For example, numerous studies have determined that the deployment of variable generation resources will impact the stability of the grid unless storage is included. Additionally, energy storage has been demonstrated to provide key grid support functions through frequency regulation. The diversity in performance needs and deployment environments drives the need for a wide array of storage technologies.

Often, energy storage technologies are categorized as being high-power or high-energy. This division greatly benefits the end user of energy storage systems because it allows for the selection of a technology that fits an application’s requirements, thus reducing cost and maximizing value. Frequency regulation requires very rapid response, i.e. high power, but does not necessarily require high energy. By contrast, load-shifting requires very high energy, but is more flexible in its power needs. Uninterruptible power and variable generation integration are applications whose needs for high power versus high energy fall somewhere between the aforementioned extremes. Figure 1 shows the current energy storage techniques deployed onto the North American grid. This variety in storage technologies increases the complexity of developing a single set of protocols for evaluating and improving the safety of grid storage technologies, and drives the need for understanding across length scales, from fundamental materials processes through full-scale system integration.
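The high-power versus high-energy division can be pictured as a simple classification by discharge duration (energy divided by power). The hour thresholds below are illustrative assumptions, not values from the DOE report:

```python
# Toy classifier for the report's high-power vs. high-energy division.
# Thresholds (0.5 h and 4 h) are illustrative assumptions.
def categorize(power_mw: float, energy_mwh: float) -> str:
    hours = energy_mwh / power_mw   # discharge duration at rated power
    if hours <= 0.5:
        return "high-power"         # e.g. frequency regulation
    if hours >= 4.0:
        return "high-energy"        # e.g. load shifting
    return "intermediate"           # e.g. variable generation integration

print(categorize(power_mw=20, energy_mwh=5))   # frequency regulation -> high-power
print(categorize(power_mw=10, energy_mwh=60))  # load shifting -> high-energy
```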

Figure 1. Percentage of battery energy storage systems deployed (by total megawatts): lithium ion 41.79%; lead acid 28.20%; flow 14.38%; sodium sulfur 8.17%; lithium iron phosphate 4.84%; other 2.62%.

The variety of deployment environments and application spaces compounds the complexity of the approaches needed to validate the safety of energy storage systems. The difference in deployment environment impacts the safety concerns, needs, risk, and challenges that affect stakeholders. For example, an energy storage system deployed in a remote location will have very different potential impacts on its environment and first responder needs than a system deployed in a room in an office suite, or on the top floor of a building in a city center. The closer the systems are to residences, schools, and hospitals, the higher the impact of any potential incident regardless of system size.

Pumped hydro is one of the oldest and most mature energy storage technologies and represents 95% of installed storage capacity. Other storage technologies, such as batteries and flywheels, make up the remaining 5% of the installed storage base; they are much earlier in their deployment cycle and have likely not reached the full extent of their deployed capacity.

Though flywheels are relative newcomers to the grid energy storage arena, they have been used as energy storage devices for millennia, with the earliest known flywheel dating from Mesopotamia around 3100 BC. Grid-scale flywheels operate by spinning a rotor up to tens of thousands of RPM, storing energy in a combination of rotational kinetic energy and elastic energy from deformation of the rotor. These systems typically have large rotational masses that, in the case of a catastrophic radial failure, need a robust enclosure to contain the debris. However, if the mass of the debris particles can be reduced through engineering design, the strength, size and cost of the containment system can be significantly reduced.
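The stored energy follows from the standard rotational kinetic energy formula E = ½Iω². The rotor mass, radius, and speed below are hypothetical values chosen for illustration, not figures from the DOE report:

```python
import math

# Rotational kinetic energy of a hypothetical grid-scale flywheel rotor.
# All rotor parameters are illustrative assumptions.
mass_kg = 1000.0    # rotor mass
radius_m = 0.5      # rotor radius (modeled as a solid disk)
rpm = 20000         # "tens of thousands of RPM"

inertia = 0.5 * mass_kg * radius_m**2   # I = 1/2 m r^2 for a solid disk
omega = rpm * 2 * math.pi / 60          # angular speed in rad/s
energy_j = 0.5 * inertia * omega**2     # E = 1/2 I omega^2

print(f"stored energy: {energy_j / 3.6e6:.0f} kWh")  # ~ 76 kWh
```

The ω² term is why containment matters: doubling the speed quadruples the stored energy, and all of it is released mechanically in a radial failure.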

As electrochemical technologies, battery systems used in grid storage can be further categorized as redox flow batteries, hybrid flow batteries, and secondary batteries without a flowing electrolyte. For the purposes of this document, vanadium redox flow batteries and zinc bromine flow batteries are considered for the first two categories, and lead-acid, lithium ion, sodium nickel chloride and sodium sulfur technologies in the latter category. As will be discussed in detail in this document, there are a number of safety concerns specific to batteries that should be addressed, e.g. release of the stored energy during an incident, cascading failure of battery cells, and fires.

A reactive approach to energy storage safety is no longer viable. The number and types of energy storage deployments have reached a tipping point, with dramatic growth anticipated in the next few years fueled in large part by major new policy-related storage initiatives in California, Hawaii, and New York. The new storage technologies likely to be deployed in response to these and other initiatives are maturing too rapidly to justify moving ahead without a unified, scientifically based set of safety validation techniques and protocols. A compounding challenge is that startup companies with limited resources and experience in deployment are developing many of these new storage technologies. Standardization of safety processes will greatly enhance the viability, and reduce the cost, of new technologies and of the startup companies themselves. The modular nature of ESS means that no single entity is clearly responsible for ESS safety; instead, each participant in the energy storage community has a role and a responsibility. The following sections outline the gaps in addressing the need for validated grid energy storage system safety.

To date, the most extensive energy storage safety and abuse R&D efforts have been done for Electric Vehicle (EV) battery technologies. These efforts have been limited to lithium ion, lead-acid and nickel metal hydride chemistries and, with the exception of grid-scale lead-acid systems, are restricted to smaller size battery packs applicable to vehicles.

The increased scale, complexity, and diversity in technologies being proposed for grid- scale storage necessitates a comprehensive strategy for adequately addressing safety in grid storage systems. The technologies deployed onto the grid fall into the categories of electro-chemical, electromechanical, and thermal, and are themselves within different categories of systems, including CAES, flywheels, pumped hydro and SMES. This presents a significant area of effort to be coordinated and tackled in the coming years, as a number of gap areas currently exist in codes and standards around safety in the field. R&D efforts must be coordinated to begin to address the challenges.

An energy storage system can be categorized primarily by its power, energy and technology platform. For grid-scale systems, the power/energy spectrum spans from smaller kW/kWh to large MW/MWh systems. Smaller kW/kWh systems can be deployed for residential and community storage applications, while larger MW/MWh systems are envisioned for electric utility transmission and distribution networks to provide grid-level services. This is in contrast to electric vehicles, for which the U.S. Advanced Battery Consortium (USABC) goals are both clearly defined and narrow in scope, with an energy goal of 40 kWh. While in practice some EV packs are as large as 90 kWh, the range of energy is still small compared with grid storage applications. This research is critical to the ability of first responders to understand the risks posed by ESS technologies, and allows for the development of safe strategies to minimize risk and mitigate events.

Furthermore, the diversity of battery technologies and stationary storage systems is not generally present in the EV community. Therefore, the testing protocols and procedures used historically and currently for storage systems for transportation are insufficient to adequately address this wide range of storage systems technologies for stationary applications. Table 1 summarizes the high level contrast between this range of technologies and sizes of storage in the more established area of EV. The magnitude of effort that must be taken on to encompass the needs of safety in stationary storage is considerable because most research and development to improve safety and efforts to develop safety validation techniques are in the EV space. Notably, the size of EV batteries ranges by a factor of two; by contrast, stationary storage scales across many orders of magnitude. Likewise, the range of technologies and uses in stationary storage are much more varied than in EV. Therefore, while the EV safety efforts pave the way in developing R&D programs around safety and developing codes and standards, they are highly insufficient to address many of the significant challenges in approaching safe development, installation, commissioning, use and maintenance of stationary storage systems.

An additional complexity of grid storage systems is that the storage system can either be built on-site or pre-assembled, typically in shipping containers. These pre-assembled systems allow for factory testing of the fully integrated system, but are exposed to potential damage during shipping. For the systems built on site, the assembly is done in the field; much of the safety testing and qualification could potentially be done by local inspectors, who may or may not be as aware of the specifics of the storage system. Therefore, the safety validation of each type of system must be approached differently and each specific challenge must be addressed.

Batteries and flywheels are currently the primary focus for enhanced grid-scale safety. For these systems, the associated failure modes at grid-scale power and energy requirements have not been well characterized, and there is much larger uncertainty around the risks and consequences of failures. This uncertainty around system safety can lead to barriers to adoption and market success, such as difficulty assessing the value of and risk to these assets, and determining the possible consequences to health and the environment. To address these barriers, concerted efforts are needed in the following areas:

• Materials science R&D – research into all device components
• Engineering controls and system design
• Modeling
• System testing and analysis
• Commissioning and field system safety research

It is a notable challenge within the areas outlined above to develop understanding and confidence in relating results at one scale to expected outcomes at another, to predict the interplay between components, and to protect against unexpected outcomes when more than one failure mode is present at the same time in a system. Extensive research, modeling and validation are required to address these challenges. It is also necessary to pool the analysis approaches of failure mode and effects analysis (FMEA) and to use a safety basis in both research and commissioning to build a robust safety program. Finally, identifying, responding to, and mitigating any observed safety events are critical to validating the safety of storage.

A holistic view with regard to setting standards that ensure thorough safety validation techniques is the desired end goal; the first step is to study, at the R&D level, failure from the cell to the system level, and from the electrochemistry and kinetics of the materials to module-scale behavior. Detailed hazards analysis must be conducted for entire systems in order to identify failure points caused by abuse conditions and the potential for cascading events, which may result in large-scale damage and/or fire. While treating the storage system as a “black box” is helpful in setting practical standards for installation, understanding the system at the basic materials and chemistry levels, and how issues can initiate failure at the cell and system level, is critical to ensuring overall system safety.

For batteries, understanding the fundamental electrochemistry and materials changes under selected operating conditions helps guide cell-level safety. Knowledge of cell-level failure modes and how they propagate to battery packs guides the cell chemistry, cell design and integration. Each system has different levels of risk associated with its basic electrochemistry that must be understood; the trade-off between electrochemical performance and safety must be managed. There are some commonalities in safety issues between storage technologies. For example, breaching of a Na/S (NAS) or Na/NiCl2 (Zebra) battery could result in exposure of molten material and heat transfer to adjacent cells. Evolution of H2 from lead-acid cells, or of H2 and solvent vapor from lithium-ion batteries, during overcharge abuse could result in a flammable/combustible gas mixture. Thermal runaway in lithium-ion (Li-ion) cells could transfer heat to adjacent cells and propagate the failure through a battery.
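The cell-to-cell propagation concern can be illustrated with a toy one-dimensional model in which any cell above its trigger temperature goes into runaway and dumps part of its heat into its neighbors. All numbers (trigger temperature, heat released, transfer fraction) are illustrative assumptions, not measured values:

```python
# Toy 1-D sketch of cell-to-cell thermal runaway propagation in a pack.
# All parameters are illustrative assumptions, not measured values.
TRIGGER_C = 180.0       # assumed runaway onset temperature
RUNAWAY_HEAT_C = 400.0  # assumed temperature rise of a failing cell
TRANSFER = 0.5          # assumed fraction of that rise passed to each neighbor

def propagate(temps):
    """Return (final temperatures, indices of failed cells) once the cascade settles."""
    temps = list(temps)
    failed = set()
    changed = True
    while changed:
        changed = False
        for i, t in enumerate(temps):
            if t >= TRIGGER_C and i not in failed:
                failed.add(i)
                temps[i] += RUNAWAY_HEAT_C
                for j in (i - 1, i + 1):            # heat adjacent cells
                    if 0 <= j < len(temps):
                        temps[j] += TRANSFER * RUNAWAY_HEAT_C
                changed = True
    return temps, failed

# One overheated cell in a 6-cell string drags every neighbor past the trigger.
final, failed = propagate([25, 25, 200, 25, 25, 25])
print(sorted(failed))  # [0, 1, 2, 3, 4, 5]
```

Even this crude model shows the key property: once the heat a failing cell passes to its neighbor exceeds the margin to the trigger temperature, one defect consumes the whole string.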

Moreover, while physical hazards are often considered, health and environmental safety issues also need to be evaluated to gain a complete understanding of the potential hazards associated with a battery failure. These may include the toxicity of gas species evolved from a cell during abuse or exposure to abnormal environments, the toxicity of electrolyte released during a cell breach or spill in a vanadium redox flow battery (VRB), and the environmental impact of runoff from water used to extinguish a battery fire containing heavy metals. Flywheels present an entirely different set of considerations, including mechanical containment testing and modeling, vacuum loss testing, and material fatigue testing under stress.

The topic of Li-ion battery safety is rapidly gaining attention as the number of battery incidents increases. Recent incidents, such as a cell phone runaway during a regional flight in Australia and a United Parcel Service plane crash near Dubai, reinforce the potential consequences of Li-ion battery runaway events. The sheer size of grid storage needs and the operational demands make it increasingly difficult to find materials with the necessary properties, especially the required thermal behavior to ensure fail-proof operation. The main failure modes for these battery systems are either latent (manufacturing defects, operational heating, etc.) or abusive (mechanical, electrical, or thermal).

Any of these failures can increase the internal temperature of the cell, leading to electrolyte decomposition, venting, and possible ignition. While significant strides are being made, major challenges remain in combating solvent flammability, the most significant area needing improvement to address the safety of Li-ion cells, and it is therefore discussed here in greater detail. To mitigate thermal instability of the electrolyte, a number of different approaches have been developed with varied outcomes and moderate success. Conventional electrolytes typically vent flammable gas when overheated due to overcharging, internal shorting, manufacturing defects, physical damage, or other failure mechanisms. The prospects of employing Li-ion cells in these applications depend on substantially reducing flammability, which requires materials development (including new lithium salts) to improve thermal properties. One approach is to use fire retardants (FR) in the electrolyte as additives to improve thermal stability. Most of these additives have a history of use as FR in the plastics industry. Broadly, these additives can be grouped into two categories: those containing phosphorus and those containing fluorine. A concerted community effort is needed to provide hazard assessment, classification, and mitigation when an ESS fails, whether through internal or external mechanical, thermal, or electrical stimulus.

Electrolyte Safety R&D. Combustion is a complex chemical reaction in which fuel and an oxidizer, in the presence of heat, react and burn. Fuel (the substance that burns), an oxidizer (the substance that supplies the oxygen), and heat (the energy that drives the process) must all converge for combustion to occur. In the combustion process a sequence of chemical reactions occurs, leading to fire. A variety of oxidizing, hydrogen and fuel radicals are produced that keep the fire going until at least one of the three constituents is exhausted.

Electrolytes. Despite several studies on the issue of flammability, complete elimination of fire in Li-ion cells has yet to be achieved. One possible reason is the low flash point (FP) (<38.7 °C) of the solvents. Published data show that polyphosphazene polymers and ionic liquids used as electrolytes are nonflammable. However, the high FP of these chemicals is generally accompanied by increased viscosity, limiting low-temperature operation and degrading cell performance at sub-ambient temperatures. These materials may also have other problems, such as poor wetting of the electrodes and separator materials, excluding them from use in cells despite being nonflammable. Ideally, solvents would have no FP while simultaneously exhibiting ideal electrolyte behavior and remaining liquid at temperatures down to -50 ºC or below for use in Li-ion cells. A number of critical electrochemical and thermal properties that FR additives must meet simultaneously are given below. Tradeoffs between properties are possible, but when it comes to safety there can be no tradeoffs.

• High voltage stability
• Conductivity comparable to traditional electrolytes
• Lower flame propagation rate, or no fire at all
• Lower self-heating rate
• Stable against both electrodes
• Able to wet the electrodes and separator materials
• Higher onset temperature for exothermic peaks, with reduced overall heat production
• No miscibility problems with co-solvents
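The point that the criteria must all hold simultaneously can be sketched as a screening filter. The property names and threshold values below are illustrative assumptions; the DOE report states the criteria only qualitatively:

```python
# Sketch: screening a candidate electrolyte against all criteria at once.
# Property names and numeric thresholds are illustrative assumptions only.
CRITERIA = {
    "voltage_stability_v": lambda v: v >= 4.5,                 # high-voltage stability
    "conductivity_ms_cm":  lambda c: c >= 8.0,                 # comparable conductivity
    "flash_point_c":       lambda fp: fp is None or fp > 100,  # ideally no flash point
    "self_heating_rate":   lambda r: r < 1.0,                  # lower self-heating
    "wets_electrodes":     lambda w: w,                        # wets electrodes/separator
}

def passes_all(candidate: dict) -> bool:
    """No tradeoffs: every criterion must hold simultaneously."""
    return all(check(candidate[name]) for name, check in CRITERIA.items())

# A conventional carbonate-like electrolyte fails on flash point and self-heating.
conventional = {"voltage_stability_v": 4.4, "conductivity_ms_cm": 10.0,
                "flash_point_c": 35, "self_heating_rate": 2.0,
                "wets_electrodes": True}
print(passes_all(conventional))  # False
```

The `all(...)` call is the whole argument in miniature: a candidate that excels on five properties but fails one (say, viscosity-driven low-temperature performance) is still excluded.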

The higher energy density of Li-ion cells makes for a more volatile device, and while significant efforts have been put toward safety, significant research is still needed. To improve the safety of Li-ion batteries, electrolyte flammability needs significant advances, or further mitigation is needed in areas that will contain the effects of failures and provide graceful failures with safer outcomes in operation.

Electrodes, separators, current collectors, casings, cell format headers and vent ports. While electrolytes are by far the most critical component in Li-ion battery safety, research has been pursued into safety considerations around the other components of the cell. These factors can become more critical as research continues into wider ranges of chemistries for stationary storage.

Capacitors. Electrostatic capacitors are a major failure point in power electronics. They predominantly fail because of the strong focus on low-cost devices and loose control over manufacturing. In response, they are used at a highly de-rated level, and often with redundant design. When they fail, they often show slow degradation with decreasing resistivity, eventually leading to shorting. Arcs can occur, and cascading failures can lead to higher-consequence failures elsewhere in a system. The added complexity of redundant design is itself a safety risk. While there is a niche market for high-reliability capacitors, they are not economically viable for most applications, including grid storage; these devices are made with precious metals and higher-quality ceramic processing that leads to fewer oxygen vacancies in the device.

Polymer capacitors can have a safety advantage in that they are self-healing and therefore fail gracefully; however, they perform poorly at elevated temperatures and are flammable.

Currently, the low cost and low reliability of capacitors make them one of the most common components to fail in devices, affecting the power electronics and providing a possible trigger for a cascading failure. While improved reliability has been achieved in some capacitors, such devices are cost-prohibitive due to their manufacturing and testing. Development of improved capacitors at reasonable cost, or designs that prevent cascading failures in the event of capacitor failure, should be addressed.

Pumps, tubing and tanks. Components specific to flow battery and hybrid flow battery technologies have not been researched in the context of battery safety. These include components such as pumps, tubing and storage tanks. Research from other areas that use similar components can be a starting point, but these components demonstrate how much broader the range of hardware is than current R&D in battery safety covers.

Manufacturing defects. The design of components and tests depends on understanding the range of purity in materials and the conformity of engineering. Defects are a large contributor to shorts in batteries, for example. Understanding the reproducibility among parts, and the influence of defects on failure, is critical to understanding and designing safer storage systems.

The science of fault detection within large battery systems is still in its infancy; most analysis and monitoring of large battery systems focuses on issues such as state of health and state of charge, and only limited work has been performed on fault detection. Offer et al. first …

Software Analytics. In this day and age of information technology, any comprehensive research, development, and deployment strategy for energy storage should be rounded out with an appropriate complement of software analytics. Software is on a par with hardware in importance, not only for engineering controls, but for performance monitoring; anomaly detection, diagnosis, and tracking; degradation and failure prediction; maintenance; health management; and operations optimization. Ultimately, it will become an important factor in improving overall system and system-of-systems safety. As with any new, potentially high consequence technology, improving safety will be an ongoing process. By analogy with airline safety, energy storage projects which use cutting-edge technologies would benefit from “black boxes” to record precursors to catastrophic failures. The black boxes would be located off-site and store minutes to months of data depending on the time scale of the phenomena being sensed. They would be required for large-scale installations, recommended for medium-scale installations, and optional for small installations. Evolving standards for what and how much should be recorded will be based on the results from research as well as experience.
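The “black box” idea reduces, at its core, to a fixed-size ring buffer that always retains the most recent sensor samples so the precursors to a failure survive the failure itself. The sketch below uses Python's `collections.deque`; the sample fields and capacity are illustrative assumptions, not a design from the DOE document:

```python
from collections import deque

# Minimal "black box" recorder sketch: a fixed-size ring buffer holding the
# most recent samples, to be dumped (e.g. off-site) when a fault is detected.
# Sample fields and the capacity value are illustrative assumptions.
class BlackBox:
    def __init__(self, capacity: int):
        # deque(maxlen=...) silently discards the oldest sample when full
        self.buffer = deque(maxlen=capacity)

    def record(self, timestamp, cell_temp_c, pack_voltage_v):
        self.buffer.append((timestamp, cell_temp_c, pack_voltage_v))

    def dump(self):
        """Return the retained precursor data for off-site analysis."""
        return list(self.buffer)

box = BlackBox(capacity=3)
for t in range(5):
    box.record(t, 25 + t, 48.0)
print(box.dump())  # only the last 3 samples survive
```

In a real installation the capacity would be sized by the time scale of the phenomena being sensed, as the document notes, from minutes to months of data.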

Since some energy storage technologies are still early in their development and deployment, there should be an emphasis on developing safety cases. Safety cases should cover the full range of safety events that could reasonably be anticipated, and would therefore highlight the areas in which software analytics are required to ensure the safety of each system. Each case would tell a story of an initiating event, an assessment of its probability over time, the likely subsequent events, and the likely final outcome or outcomes. The development of safety cases need not be onerous, but they should demonstrate to everyone involved that serious thought has been given to safety.

Table 2. Common Tests to Assess Risk from Electrical, Mechanical, and Environmental Conditions
Electrical: test of current flow; abnormal charging test (overcharging and charging time); forced discharge test.
Mechanical: crush test; impact test; shock test; vibration test.
Environmental: heating test; temperature cycling test; low-pressure altitude test.
Tests under development: failure propagation; internal short circuit (non-impact test); ignition/flammability; IR absorption diagnostics; separator testing.

The established tests for electrical, mechanical and environmental conditions are therefore tailored to identifying and quantifying the consequence and likelihood of failure in lead-acid and lithium-ion technologies, with typical analyses that include burning characteristics, off-gassing, smoke particulates, and environmental runoff from fire suppression efforts. Even for the most studied abuse case, lithium-ion technologies, some tests have been identified as very crude or ineffective, with limited technical merit. For example, the puncture test, used to replicate failure under an internal short, is widely believed to lack the ability to accurately mimic this particular failure mode. These tests are even less likely to reproduce potential field failures when applied to technologies for which they were not originally designed. The above testing relates exclusively to the cell/pack/module level and does not take into consideration the balance of the storage system. Other tests on Li-ion systems are targeted at invoking and quantifying specific events; for example, impact testing and overcharging tests probe the potential for thermal runaway, which occurs during anode and cathode decomposition reactions. Other failure modes addressed by current validation techniques include electrolyte flammability, thermal stability of materials (including the separators, electrolyte components and active materials), and cell-to-cell failure.

Gap areas and opportunities. An energy storage system deployed on the grid, whether at the residential scale (<10 kW) or at bulk generation scale on the order of MW, is susceptible to failures similar to those described above for Li-ion. However, given the multiple chemistries and application spaces, there is a significant gap in our ability to understand and quantify potential failures under real-world conditions; to ensure safety as grid storage systems are deployed, it is critical to understand their potential failure modes within each deployment environment. Furthermore, grid-scale systems include at the very least power electronics, transformers, switchgear, heating and cooling systems, and housing structures or enclosures. The size and variety of technologies necessitate a rethinking of safety work as it is adapted from current validation techniques in the electrified vehicle space.

To address the component- and system-level safety concerns for all the technologies being developed for stationary energy storage, further efforts will be required to: understand these systems at the level of fundamental materials science; develop appropriate engineering controls, fire protection and suppression methods, and system design; complete validation testing and analysis; and establish models based on real-world operation. System-level safety must also address several additional factors, including the relevant codes, standards and regulations (CSR), the needs of first responders, and risks and consequences not covered by current CSR. The wide range of chemistries and operating conditions required for grid-scale storage presents a significant challenge for safety R&D. The longer life requirements and wider range of uses for storage require a better understanding of degradation and end-of-life failures under normal operating and abuse conditions. The size of batteries also necessitates a stronger reliance on modeling. Multi-scale models for understanding thermal runaway and fire propagation, whether originating in the chemistry, the electronics, or external to the system, have not been developed. Gap areas for stationary energy storage currently extend from materials research and modeling through system-life considerations such as operation and maintenance.

Engineering controls and system design. Currently, the monitoring needs of batteries, the effectiveness of means to separate battery cells and modules, and the various fire suppression systems and techniques have not been studied extensively. Individual companies and installations have relied on past experience in designing these systems. For example, Na battery installations have focused on mitigating the potential impact of the high operating temperature; Pb-acid batteries have focused on controlling failures associated with hydrogen build-up; while non-electrochemical technologies such as flywheels have focused on mechanical concerns such as run-out, high temperature, or change in chamber pressure. Detailed testing and modeling are required to fully understand the needs in system monitoring and containment of failure propagation. Rigorous design of safety features that adequately address potential failures is also still needed in most technology areas. Current efforts have widely focused on monitoring cell- and module-level voltages in addition to the thermal environment; however, the tolerances for safe operation are not known for these systems. Further development efforts are needed to help manufacturers and installers understand the appropriate level of monitoring in order to safely operate a system and prevent failures resulting from internal short circuits, latent manufacturing defects or abused batteries from propagating to the full system.

Modeling. The size and cost of grid-scale storage systems make it prohibitive to test full-scale systems, so modeling can play a critical role in improving safety.

Fire suppression. Large-scale energy storage systems can mitigate the risk of loss by isolating parts of a system in separate transportation containers, or by using materials or assemblies to section off batteries. Most current systems have automated and manually triggered fire suppression systems within the enclosure, but there is limited knowledge of whether such suppression systems will actually be effective in the event of a fire.

The interactions between fire suppressants and system chemistries must be fully understood to determine the effectiveness of fire suppression. Key variables include the volume of suppressant required, the rate of suppressant release, and the distribution of suppressants. Basic assumptions about electrochemical safety have not been elucidated; for example, it is not even clear whether a battery fire is of higher consequence than other types of fires, and if so, at what scale this becomes a concern.

The National Fire Protection Association (NFPA) has provided a questionnaire regarding suppressants for vehicle batteries. Tactics for suppression of fires involving electric-drive vehicle (EDV) batteries:

  a. How effective is water as a suppressant for large battery fires?
  b. Are there projectile hazards?
  c. How long must suppression efforts be conducted to place the fire under control and then fully extinguish it?
  d. What level of resources will be needed to support these fire suppression efforts?
  e. Is there a need for extended suppression efforts?
  f. What are the indicators for instances where the fire service should allow a large battery pack to burn rather than attempt suppression?

NFPA 13, Standard for the Installation of Sprinkler Systems, does not contain specific sprinkler installation recommendations or protection requirements for Li-ion batteries. Reports and literature on suppressants universally recommend the use of water. However, the quantity of water needed for a battery fire is large: 275 to 2,639 gallons for a 40 kWh EV-sized Li-ion battery pack. This is more than is recommended for internal combustion engine (ICE) vehicle fires.

Summary. Science-based safety validation techniques for an entire energy storage system are critical as deployments of energy storage systems expand. These techniques are currently based on industry knowledge and experience with energy storage for vehicles, as well as experience with grid-scale Pb-acid batteries. Now they must be broadened to encompass grid-scale systems. The major hurdle to this expansion is encompassing both the much broader range of scales of stationary storage systems and the much broader range of technologies. Furthermore, the larger scale of stationary storage relative to EV storage necessitates considering a wider range of concerns beyond the storage device itself, including areas such as power electronics and fire suppression. The work required to develop validation is significant. As progress is made in understanding validation through experiment and modeling, these evidence-based results can feed into codes, standards, and regulations, and can inform manufacturers and customers of stationary storage solutions to improve the safety of deployed systems.

Currently, fire departments do not categorize ESS as stand-alone infrastructure capable of causing safety incidents independent of the systems that they support. Instead, fire departments categorize grid ESS as back-up power systems such as uninterruptible power supplies (UPS) for commercial, utility, communications and defense settings, or as PV battery-backed systems for on, or off-grid residential applications. This categorization results in limited awareness of ESS and their potential risks, and thus the optimal responses to incidents. This categorization of energy storage systems as merely back-up power systems also results in the treatment of ESS as peripheral to the risk management tools.

The energy storage industry is rapidly expanding due to market pressures. This expansion is surpassing both the updating of current CSR and the development of new CSR needed to determine what is and is not safe.

No general, technology-independent standard for ESS integration into a utility or a stand-alone grid has yet been developed.

Incident responses with standard equipment are tailored to the specific needs of the incident type and location: two “pumper” engines and a “ladder” truck with two to four personnel each, plus a Battalion Chief to act as Incident Commander, for a total of 9 to 13 personnel responding to an injury/accident; or five engines, two trucks, and two Battalion Chiefs, for a total of 17 to 30 personnel, for a structure fire. Each additional “alarm” struck sends another two to three “pumper” engines and a “ladder” truck. In all of these cases, the incident response personnel typically arrive on scene with only standard equipment, guided by various NFPA standards for equipment on each apparatus, personal protective equipment (PPE), and other rescue tools. In responding to an ESS incident, the fire service seldom brings equipment specialized for electrical incidents.

A number of unique challenges must be considered in developing responses to any energy storage incident. In particular, difficulties securing energized electrical components can present significant safety challenges for fire service personnel. Typically, the primary tasks are to isolate power to the affected areas, contain spills, access and rescue possible victims, and limit access to the hazard area. The highest priority is given to actions that support locating endangered persons and removing them to safety with the least possible risk to responders; rescue continues until it is either accomplished or it is determined that there are no survivors or that the risk to responders is too great. Industrial fires can be quite dangerous depending on structure occupancy, i.e., the contents, processes, and personnel inside. Water may be used from a safe distance on larger fires that have extended beyond the original equipment or area of origin, or that threaten nearby exposures; however, the determination of a “safe” distance has been little researched by the fire service scientific community.

Fire suppression and protection systems. Each ESS installation is guided by application of existing CSR that may not reflect the unique and varied chemistries in use. Fire-suppressant selection should be based on the efficacy of specific materials and the quantities needed on site, based on appropriate and representative testing conducted in consultation with risk managers, fire protection engineers, and others, as well as on alignment with existing codes and standards. For example, non-halogenated inert-gas discharge systems may not be adequate for thermally unstable oxide chemistries, which release oxygen as they heat and can therefore sustain combustion even in oxygen-deficient atmospheres. Ventilation requirements imposed by some Authorities Having Jurisdiction (AHJs) may work against the efficacy of these gaseous suppression agents. Similarly, water-based sprinkler systems may not prove effective at dissipating heat in large-scale commodity storage of similar chemistries. Additional research is therefore needed to provide data on which to base proper agent selection for the occupancy and commodity, and to establish standards that reflect the variety of chemistries and their combustion profiles.

Current commodity classification systems used in fire sprinkler design (NFPA 13, Standard for the Installation of Sprinkler Systems) do not have a classification for lithium or flow batteries. This is problematic, as the fire hazard may be significantly higher depending on the chemicals involved, and will likely result in ineffective or inaccurate fire sprinkler coverage. Additionally, thermal decomposition of electrolytes may produce flammable gases that present explosion risks.

Verification and control of stored energy. Severe energy storage system damage resulting from fire, earthquake, or significant mechanical damage may require complete discharge, or neutralization of the chemistry, to facilitate safe handling of components. Though the deployment of PV currently exceeds that of ESS, there is still a lack of a clear response procedure to de-energize distributed PV generation in the field. Fire fighters typically rely on the local utility to secure supply-side power to facilities.

In the case of small residential or commercial PV, the utility is not able to assist because the system is on the owner’s side of the meter, which presents a problem for securing a 600 Vdc rooftop array. Identifying the PV integrators responsible for installation may not be possible, and other installers may be hesitant to assume liability for a system they did not install. This leaves a vacuum for the safe, complete overhaul of a damaged structure with PV. Similarly, ESS faces the complication of unclear resources for assistance and the inability of many first responders to knowledgeably verify that the ESS is discharged or de-energized.

Post-incident response and recovery. Thermal damage to ESS chemistries and components presents unique challenges to the fire service community, building owners, and insurers. As evidenced in full-scale testing of EV battery fires, fire suppression required more water than anticipated, and significantly more in some cases. Additionally, confirming that the fire was completely extinguished was difficult because the containment housings of EV batteries can mask continued thermal reaction within undamaged cells. In one test performed by Exponent, Inc., a battery reignited some 22 hours after being extinguished in a full-scale fire test; in another case, an EV re-ignited 3 weeks after crash testing.

Governmental approvals and permits related to the siting, construction, development, operation, and grid integration of energy storage facilities can pose significant hurdles to the timely and cost effective implementation of any energy storage technology. The process for obtaining those approvals and permits can be difficult to navigate, particularly for newer technologies for which the environmental, health, and safety impacts may not be well documented or understood either by the agencies or the public.


Cooper, J. 2019.  Arizona fire highlights challenges for energy storage. Associated Press.

Deign, J. 2019.  The Safety Question Persists as Energy Storage Prepares for Huge Growth. Recent battery plant blazes and a hydrogen station blast have again raised questions about the safety of energy storage technologies.

DOE/EPRI. 2013. Electricity storage handbook in collaboration with NRECA. USA: Sandia National Laboratories and Electric Power Research Institute.

Mogg, T. 2019. Battery pack suspected cause of recent Virgin Atlantic aircraft fire.

Posted in Safety

Global oil discoveries far from breaking even with consumption

[Figure: oil-discoveries-rystad-2013-2018.jpg]

Preface. According to Bloomberg (2016), oil discoveries in 2015 were the lowest since 1947, with just 2.7 billion barrels of conventional oil found globally (though Rystad calculated this differently, at 5.6 billion, more than twice as much). Since the world burned 36.5 billion barrels of oil in 2019, we’re not even close to breaking even.

Rystad Energy (2019), in “Global discoveries on the rise as majors take a bigger bite,” estimates barrels of oil equivalent (BOE), which include both conventional oil and gas. Since oil is the master resource that makes gas, transportation, and all other goods and activities possible, I’ve taken the second number of the gas:oil ratio as the percent of oil in the BOE to come up with how much conventional oil was found. It falls far short of the 36.5 billion barrels we’re consuming. The pantry is emptying out, perhaps moving the peak oil date closer as oil consumption continues to grow at 1% a year while we put nothing at all back on the shelves. Peak demand? Ha! Not until we’re forced to cut back by oil shortages.

Year  Gas:Oil  Total discovered   Oil discovered       Shortfall vs 36.5 Gb/yr consumed
2013  50:50    17.4 billion BOE   8.7 billion barrels  27.8 billion barrels
2014  54:46    16.0 billion BOE   7.4 billion barrels  29.1 billion barrels
2015  61:39    14.4 billion BOE   5.6 billion barrels  30.9 billion barrels
2016  57:43     8.4 billion BOE   3.6 billion barrels  32.9 billion barrels
2017  40:60    10.3 billion BOE   6.2 billion barrels  30.3 billion barrels
2018  46:54     9.1 billion BOE   4.9 billion barrels  31.6 billion barrels
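The arithmetic behind the table above can be checked directly. This is a minimal sketch: the 36.5 billion barrels/year consumption figure and the convention of taking the second number of the gas:oil split as the oil share both come from the preface; everything else is plain multiplication and subtraction.

```python
# Recompute oil found and the annual shortfall from the Rystad figures.
CONSUMPTION = 36.5  # billion barrels of oil burned per year (from the preface)

# (year, gas share %, oil share %, total discoveries in billion BOE)
discoveries = [
    (2013, 50, 50, 17.4),
    (2014, 54, 46, 16.0),
    (2015, 61, 39, 14.4),
    (2016, 57, 43, 8.4),
    (2017, 40, 60, 10.3),
    (2018, 46, 54, 9.1),
]

for year, gas, oil, total_boe in discoveries:
    oil_found = total_boe * oil / 100      # billion barrels of conventional oil
    shortfall = CONSUMPTION - oil_found    # consumption not replaced by discovery
    print(f"{year}: {oil_found:.1f} Gb oil found, shortfall {shortfall:.1f} Gb")
```

Every year the oil found is a small fraction of the oil burned, which is the point of the table.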

This doesn’t include fracked oil, but the IEA expects that to peak sometime between now and 2023.

What it means is enjoy life while it’s still good, and stock your pantry while you’re at it.

Alice Friedemann, author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report


Holter, M. August 29, 2016. Oil Discoveries at 70-Year Low Signal Supply Shortfall Ahead. Bloomberg.


2016 figure only shows exploration results to August. Discoveries were just 230 million barrels in 1947 but skyrocketed the next year when Ghawar was discovered in Saudi Arabia, and it is still the world's largest oil field, though recently it was learned that Ghawar is in decline at 3.5% a year. Source: Wood Mackenzie

Explorers in 2015 discovered only about a tenth as much oil as they have annually on average since 1960. This year, they’ll probably find even less, spurring new fears about their ability to meet future demand.

With oil prices down by more than half since the price collapse two years ago, drillers have cut their exploration budgets to the bone. The result: Just 2.7 billion barrels of new supply was discovered in 2015, the smallest amount since 1947, according to figures from Edinburgh-based consulting firm Wood Mackenzie Ltd. This year, drillers found just 736 million barrels of conventional crude as of the end of last month.

That’s a concern for the industry at a time when the U.S. Energy Information Administration estimates that global oil demand will grow from 94.8 million barrels a day this year to 105.3 million barrels in 2026. While the U.S. shale boom could potentially make up the difference, prices locked in below $50 a barrel have undercut any substantial growth there. Ten years from now this will have significant potential to push oil prices up: given current levels of investment across the industry and decline rates at existing fields, a “significant” supply gap may open up by 2040.

Oil companies will need to invest about $1 trillion a year to continue to meet demand, said Ben Van Beurden, the CEO of Royal Dutch Shell Plc, during a panel discussion at the Norway meeting. He sees demand rising by 1 million to 1.5 million barrels a day, with about 5 percent of supply lost to natural declines every year.

New discoveries from conventional drilling, meanwhile, are “at rock bottom,” said Nils-Henrik Bjurstroem, a senior project manager at Oslo-based consultants Rystad Energy AS. “There will definitely be a strong impact on oil and gas supply, and especially oil.”

Global inventories have been buoyed by full-throttle output from Russia and OPEC, which have flooded the world with oil despite depressed prices as they defend market share. But years of under-investment will be felt as soon as 2025, Bjurstroem said. Producers will replace little more than one in 20 of the barrels consumed this year, he said.

There were 209 wells drilled through August this year, down from 680 in 2015 and 1,167 in 2014, according to Wood Mackenzie. That compares with an annual average of 1,500 in data going back to 1960.

Overall, the proportion of new oil that the industry has added to offset the amount it pumps has dropped from 30 percent in 2013 to a reserve-replacement ratio of just 6 percent this year in terms of conventional resources, which excludes shale oil and gas, Bjurstroem predicted. Exxon Mobil Corp. said in February that it failed to replace at least 100 percent of its production by adding resources with new finds or acquisitions for the first time in 22 years.

“That’s a scary thing because, seriously, there is no exploration going on today,” Per Wullf, CEO of offshore drilling company Seadrill Ltd., said by phone.

Posted in How Much Left, Peak Oil

Scientists on where to be in the 21st century based on sustainability

Preface. The article below is based on Hall & Day’s book “America’s Most Sustainable Cities and Regions: Surviving the 21st Century Megatrends”.


Alice Friedemann, author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report


Day, J. W., et al. October 2013. Sustainability and place: How emerging mega-trends of the 21st century will affect humans and nature at the landscape level. Ecological Engineering.

Five scientists have written a peer-reviewed article about where the best and worst places in America will be in the future, based on how sustainable a region is when you take into account climate change, energy reserves, population, sea-level rise, increasingly strong hurricanes, and other factors. Three of the scientists, John W. Day, David Pimentel, and Charles Hall, are “rock stars” of ecology.

Below are some excerpts from this 16 page paper that I found of interest (select the title above to see the full original paper).

Best places to be

[Figure] The greener the better, unless there are too many people (circles indicate large cities). Modified from U.S. EPA (2013)

[Figure: map of under-performing regions]
Move to an Under-performing Region (and away from a Mega-region): Areas rich in natural resources often have high poverty rates, perhaps due to “the resource curse,” a concept usually applied internationally to countries rich in fossil fuels, agriculture, forestry, and fisheries but financially poor, with stratified social classes. We believe this concept can also be applied to states. You can see above that most under-performing counties are rural. These are regions that have not kept pace with national trends over the last 3 decades in terms of population, employment, and wages. Note that, with the exception of the Great Lakes mega-region, the under-performing regions are outside of the 11 mega-regions. These under-performing areas generally have high natural resources and agricultural production.

Worst Places to Be


Several areas of the U.S. will have compromised sustainability in the 21st century. These include the southern Great Plains, the Southwest, the southern half of California, the Gulf and Atlantic coasts, especially southern Louisiana and Southern Florida, and areas of dense population such as south Florida and the Northeast.

[Figure: map of megaregions to avoid]

[My comment: You should also consider how long the forests in your area will last, since people will burn them to cook and heat their homes, and eventually use them to make furniture, homes, floors, spoons, and hundreds of other objects, as shown in the Foxfire series. There were 92 million people in 1920, just 29% of the population we have now. To zero in on the details, see this account of what happened in Vermont.]

[Figure: extent of virgin forest in 1620, 1850, and 1920]

Avoid the large megaregions.

Future Trends

The trends of energy scarcity, climate change, population, and many other factors are likely to reduce the sustainability of the landscape humans depend on, in some places more than others, since materials and energy are both limited and unevenly distributed.

Industrial agriculture is very energy intensive, accounting for 19% of total energy use in the U.S.: 14% for agricultural production, food processing, and packaging, and 5% for transportation and preparation.

  • Each American uses 528 gallons/year in oil equivalents to supply their food, or 169 billion gallons for 320 million Americans.
  • About 33% of the energy required to produce 2.5 acres of crops is invested in machine operation.
  • On average, nearly 10 calories of energy are used to make 1 calorie of edible food.
  • Cropland provides 99.7% of the global human food supply (measured in calories) with less than 1% coming from the sea.
  • Global per capita use is 0.50 acre of cropland and 1.25 acres of pasture land
  • The U.S. and Europe use 1.25 acres of cropland and 2 acres of pasture land
  • Crop-land now occupies 17% of the total land area in the U.S., but little additional land is available or even suitable for future agricultural expansion.
  • As the U.S. population increases, climate impacts grow, and energy resources decrease, there will be less cropland area per capita.
  • A significant portion of food produced in the U.S. is irrigated and located in areas where water shortages will increase.
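The per-capita oil figure in the first bullet can be verified with a one-line calculation, using the article’s own numbers (528 oil-equivalent gallons per person, 320 million Americans):

```python
# Check the national food-energy figure: per-capita oil use times population.
gallons_per_person = 528          # oil-equivalent gallons per person per year
population = 320_000_000          # U.S. population figure used in the article

total_gallons = gallons_per_person * population
print(f"{total_gallons / 1e9:.0f} billion gallons per year")
```

The product is about 169 billion gallons, matching the bullet.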

Agricultural land

  • 1950:   1,250,000,000 acres
  • 2000:      943,000,000 acres – down 24.6% from 1950

Cropland acres by state (cropland is unequally distributed):

  • 508,000,000 acres: N & S Dakota, Nebraska, Kansas, Oklahoma, Texas, New Mexico, Colorado, Montana, Wyoming
  • 135,700,000 acres: Ohio, Indiana, Illinois, Wisconsin, Minnesota, Iowa, Missouri
  • 27,800,000 acres: California (50% of vegetables, fruits, and nuts in the USA)

Crops need a lot of water. Some use 265 to 530 gallons of water per 2.2 pounds (1 kg) of crop produced (dry matter). Corn needs 10 million liters per hectare; soybeans need 6 million L/ha for a yield of 3.0 t/ha; wheat requires only about 2.4 million L/ha for a yield of 2.7 t/ha. Under semiarid conditions, yields of non-irrigated crops such as corn are low (1.0 t/ha to 2.5 t/ha) even when ample amounts of fertilizer are applied. Approximately 40% of water use in the United States goes to irrigation alone. Reducing irrigation dependence in the U.S. would save significant amounts of energy, but would probably require that crop production shift from the dry and arid western regions to the more agriculturally suitable eastern U.S.
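The per-hectare figures above can be restated as liters of water per kilogram of grain, using only the water volumes and yields the text actually gives (corn is omitted because no irrigated-corn yield is stated):

```python
# Convert irrigation water per hectare into liters per kg of crop produced.
TONNE = 1000  # kg per tonne

crops = {
    # crop: (liters of irrigation water per hectare, yield in tonnes/hectare)
    "soybeans": (6_000_000, 3.0),
    "wheat":    (2_400_000, 2.7),
}
for crop, (liters_per_ha, yield_t_ha) in crops.items():
    liters_per_kg = liters_per_ha / (yield_t_ha * TONNE)
    print(f"{crop}: {liters_per_kg:,.0f} liters of water per kg")
```

Soybeans work out to about 2,000 L/kg and wheat to about 890 L/kg, which is the same order of magnitude as the 265–530 gallons (roughly 1,000–2,000 liters) per kilogram quoted at the start of the paragraph.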

Why Cities will be Bad Places to be

The cities most dependent on cheap energy will be the most affected (especially the Southwest and southern great plains)

U.S. population increased steadily from 3.9 million in 1790 to nearly 310 million in 2010 (or almost 8,000% in just 220 years, an exponential growth rate of almost 2%). Life also became progressively more urbanized and by 2010, 259 million people or 83% lived in urban areas compared to 56 million in rural areas.
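The two growth figures in parentheses can be checked against the endpoint populations; this sketch just recomputes the total increase and the implied compound annual rate:

```python
# Verify the population growth claim: 3.9 million (1790) to 310 million (2010).
p0, p1, years = 3.9e6, 310e6, 220

increase_pct = (p1 / p0 - 1) * 100            # total growth over the period
annual_rate = (p1 / p0) ** (1 / years) - 1    # implied exponential growth rate
print(f"total increase: {increase_pct:,.0f}%  annual rate: {annual_rate:.2%}")
```

The result is roughly a 7,850% increase at about 2.0% per year, consistent with “almost 8,000%” and “almost 2%.”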

The maintenance of large urban megaregions requires enormous and continuous inputs of energy and materials. Modern industrial society and modern cities are inherently unsustainable.

Some have argued that large urban areas are more energy efficient than rural areas (Dodman, 2009). But Fragkias (2013) examined the relation between city size and greenhouse gas emissions and found that emissions scale proportionally with urban population size for U.S. cities, and that larger cities are not more emissions-efficient than smaller ones. In a review of energy and material flows through the world’s 25 largest urban areas, Decker (2000) also concluded that large urban areas are only weakly dependent on their local environment for energy and material inputs, but are constrained by their local areas for supplying water and absorbing wastes. Rees contends that if cities are to be sustainable in the future, they must rebalance production and consumption, abandon growth, and re-localize. The trajectory of megatrends of the 21st century will make this difficult for all large urban regions in the U.S. and impossible for some.

By 2025, it is estimated that 165 million people, or about half the population, will live in 4 megaregions; the Northeast, Great Lakes, Southern California, and San Francisco Bay regions. An additional 45 million will live in south Florida and the Houston-Dallas region. The supply lines that support these megaregions with food, energy, and other materials stretch for long distances across the landscape. Areas dependent on longer, energy intensive supply lines are vulnerable to the rising costs of energy for transportation.

The economies of urban areas, especially the currently most economically successful ones based on the human, financial, and information service sectors, are strongly dependent on the spending of discretionary income, which is predicted to decrease substantially over the 21st century.

Best cities to live in

But many cities have lost population, especially those that were based in the manufacturing sector of the economy during the 20th century. Detroit and Flint, Michigan, are often cited as examples but there are many others. Between 1950 and 2000, St. Louis lost 59% of its population. Pittsburgh, Buffalo, Detroit, and Cleveland lost more than 45% each. It is possible that many of the rust belt cities that have experienced population decreases will be more sustainable than more “successful” cities in the northeast and other areas. They now have a lower population density and tend to exist in rich agricultural regions. Indeed, abandoned land is being used for food production in a number of depopulating cities.

Worst cities to live in

By contrast, the northeast is the most densely populated region of the country. The population is expected to reach almost 60 million by 2025. The states that make up the region have about 34 million acres of farmland or about 0.2 ha per person. By contrast, it takes about 1.2 ha per capita to provide the food consumed in the U.S. If agriculture becomes more local and less productive as some predict due to increasing energy costs then it will be a challenge to maintain the current food supply to the northeast.
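The northeast farmland arithmetic can be made explicit. A minimal sketch using the paragraph’s own numbers (34 million acres, roughly 60 million people, 1.2 ha per capita needed); the acres-to-hectares constant is standard:

```python
# Northeast farmland per person vs. the land needed to supply the U.S. diet.
ACRES_PER_HECTARE = 2.471

farmland_acres = 34e6   # northeast farmland (from the article)
population = 60e6       # projected northeast population by 2025

ha_per_person = farmland_acres / ACRES_PER_HECTARE / population
needed = 1.2            # hectares per capita to provide the food consumed in the U.S.
print(f"{ha_per_person:.2f} ha/person available vs {needed} ha needed")
```

This gives about 0.23 ha per person, matching the article’s ~0.2 ha figure and implying a roughly five-fold gap between local farmland and the land footprint of the current diet.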

The least sustainable region will likely be the southwestern part of the country from the southern plains to California. Climate change is already impacting this region and it is projected to get hotter and drier. Winter precipitation is predicted to be more rain and less snow. These trends will lead to less water for direct human consumption and for agriculture. This is critical since practically all agriculture in the region is irrigated. The Southwest has the lowest level of ecosystem services of any region in the U.S. California is the most populous state in the nation with most people living in the southern half of the state, the area with highest water stress. The Los Angeles metro area is the second largest in the nation. But population density is low over much of the rest of the region and is concentrated in large urban areas such as Las Vegas, Phoenix, and Albuquerque. California is one of the most important food producing states in the nation but this will be threatened by water scarcity and increasing energy costs. Much of the region is strongly dependent on tourism and spending discretionary income, especially Las Vegas, so future economic health will likely be compromised in coming decades. Many cities and regions whose economy is dependent on tourism will have compromised sustainability.

Energy scarcity

World oil production peaked in 2005 and has been on a plateau since then.

400 giant fields discovered before 1960 provide 80% of world oil.

Shale oil and gas have very high depletion rates, and production from unconventional reserves such as the Canadian and Venezuelan tar sands is extremely unlikely to be scaled up sufficiently to offset conventional decline rates.

Society depends on the surplus energy provided from the energy extraction sector for the material and energy throughput that allows for economic growth and productivity. As energy becomes more expensive to extract and produce, more money and energy that might otherwise be spent in other sectors of the economy must be spent in the energy sector, decreasing real growth (my comment: that means fewer jobs and increasing poverty)
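The surplus-energy argument can be made concrete with the standard net-energy relation, net = gross × (1 − 1/EROI), where EROI is energy return on energy invested. The EROI values below are illustrative, not taken from the article:

```python
# As EROI falls, the net energy left over for the rest of the economy shrinks,
# even if gross extraction stays constant. EROI values here are illustrative.
def net_energy(gross, eroi):
    """Energy delivered to society after paying the energy cost of extraction."""
    return gross * (1 - 1 / eroi)

for eroi in (100, 30, 10, 5, 2):
    print(f"EROI {eroi:>3}: {net_energy(100, eroi):.0f} units net per 100 gross")
```

At an EROI of 100 nearly all gross energy is surplus; at an EROI of 2, half of every barrel’s energy is consumed just getting the next barrel, which is the squeeze on real growth the paragraph describes.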

The transition to a less oil reliant, more sustainable society in the U.S. is many decades away.

Since so much of the economy depends upon the widespread availability of cheap oil for the production and distribution of goods, the onset of peak oil and the decline in net energy available to society has profound implications for overall societal well being (my comment: this is the understatement of the year – what this means is extreme social unrest from hunger and lack of oil or natural gas to heat homes and cook with, etc)

Just as the first half of the oil age consisted of constantly increasing production, the second half of the oil age will consist of a continual rate of depletion that cannot be offset by new discoveries or low EROI alternatives.

Descriptions of regions in the article

  • Most negatively affected areas: Southwest including much of California & Southern Great Plains. All of these regions will be drier with less water at the same time population is growing.
  • Decreased fresh water availability: southern Great Plains, Southwest (Lake Mead has a 50% chance of drying up within 2 decades)
  • Eastern half of U.S.: abundant natural resources but avoid megaregions
  • Poor soil: southwest
  • Severest climate change impacts: Southwest
  • Driest, hottest, most extreme droughts and floods: Southwest
  • Most tree deaths, super forest fires, loss of species, dust: Southwest
  • Snow melting too fast: West Coast – fewer crops, especially California which grows 1/3 of America’s food
  • Flooding: Mississippi basin due to more intense storms in the future
  • Rising sea level: coastal zones
  • stronger hurricanes: Gulf and Atlantic coasts from warming surface waters of the oceans. Hurricanes are also expected to become more frequent.
  • Hurricane surge: Gulf and Atlantic Coasts with New Orleans the worst threatened
  • Mississippi delta: resources of the river can be used to rebuild and restore the rich natural systems of the area
  • Energy scarcity will affect everyone everywhere
  • Less rain: great plains
  • Ogallala aquifer depletion: great plains (energy scarcity will add to the cost and difficulties of pumping the water up)

Good areas

  • High rainfall and primary production: eastern states
  • High ecosystem services: river valleys and coastal areas
  • Estuaries, swamps, floodplains
  • Warmer, moist climates: higher primary productivity than colder climates

There’s a lot more; I especially liked the attack on current economic paradigms (i.e., growth forever) on pages 6–9.

Also, many of the referenced papers in the article are good reads with important details not covered fully in this paper.

I personally think cities might be good places for a few years into the crisis, as governments concentrate resources and supply lines where population densities are highest. Gas stations out in the rural areas will be the first to close, throwing some places into sudden self-reliance. But at some point the whole system snaps like an erupting volcano, from oil shocks, rusting oil and gas infrastructure falling apart (especially refineries), natural disasters, black swans like (cyber) warfare, the electric grid down for a year or more, nuclear winter from a nuclear war anywhere in the world, electromagnetic pulses from solar flares or a nuclear explosion, hunger and consequent social unrest, and the other factors in the Decline, Collapse, and “A Fast Crash?” categories. That will make cities the worst places to be. Best to move to an under-performing area now, since it takes years to become part of a community and learn the necessary skills.


Microbes a key factor in climate change

Preface. The IPCC, like economists, assumes our economy and burning of fossil fuels will grow exponentially until 2100 and beyond, with no limits to growth. But conventional oil peaked and has stayed on a plateau since 2005, so clearly peak global oil production is in sight. As is peak soil, aquifer depletion, biodiversity destruction, and deforestation to name just a few existential threats besides climate change.

The lack of attention to microbes in the IPCC model further weakens their predictions about the trajectory of climate change. As this article notes, diatoms are our friends, they “perform 25–45% of total primary production in the oceans, owing to their prevalence in open-ocean regions when total phytoplankton biomass is maximal. Diatoms have relatively high sinking speeds compared with other phytoplankton groups, and they account for ~40% of particulate carbon export to depth”.

Diatoms didn’t appear until 40 million years ago, and sequester so much carbon that they caused the poles to form ice caps. So certainly scientists should study whether their numbers are decreasing or increasing. But also the IPCC needs to include diatoms and other microbes in their models. It’s a big deal that they haven’t, since microorganisms support the existence of all higher life forms.

Alice Friedemann, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

* * *

University of New South Wales. 2019. Leaving microbes out of climate change conversation has major consequences, experts warn. Science Daily.

Original article: Cavicchioli, R., et al. 2019. Scientists’ warning to humanity: microorganisms and climate change. Nature Reviews Microbiology.

More than 30 microbiologists from 9 countries have issued a warning to humanity — they are calling for the world to stop ignoring an ‘unseen majority’ in Earth’s biodiversity and ecosystems when addressing climate change.

The researchers are hoping to raise awareness both for how microbes can influence climate change and how they will be impacted by it — calling for including microbes in climate change research, increasing the use of research involving innovative technologies, and improving education in classrooms.

“Micro-organisms, which include bacteria and viruses, are the lifeforms that you don’t see on the conservation websites,” says Professor Cavicchioli. “They support the existence of all higher lifeforms and are critically important in regulating climate change. “However, they are rarely the focus of climate change studies and not considered in policy development.”

Professor Cavicchioli calls microbes the ‘unseen majority’ of lifeforms on earth, playing critical functions in animal and human health, agriculture, the global food web and industry.

For example, the Census of Marine Life estimates that 90% of the ocean’s total biomass is microbial. In our oceans, marine lifeforms called phytoplankton take light energy from the sun and remove carbon dioxide from the atmosphere, much as plants do. The tiny phytoplankton form the beginning of the ocean food web, feeding krill populations that then feed fish, sea birds and large mammals such as whales.

Marine phytoplankton perform half of the global photosynthetic CO2 fixation and half of the oxygen production despite amounting to only ~1% of global plant biomass. In comparison with terrestrial plants, marine phytoplankton are distributed over a larger surface area, are exposed to less seasonal variation and have markedly faster turnover rates than trees (days versus decades). Therefore, phytoplankton respond rapidly on a global scale to climate variations.
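The arithmetic behind that paragraph is worth making explicit: annual production is roughly standing stock divided by turnover time, so a tiny, fast-cycling biomass can rival a huge, slow-turnover one. A crude sketch with assumed round numbers (not figures from the article):

```python
# Crude illustration (assumed round numbers, not figures from the article):
# annual production ~ standing stock / turnover time, so a tiny fast-cycling
# biomass can rival a huge slow-turnover one.

def annual_production(standing_stock_pg_c, turnover_years):
    """Very rough production, in Pg (billion tonnes) of carbon per year."""
    return standing_stock_pg_c / turnover_years

phyto  = annual_production(1.0, 5 / 365)   # ~1 Pg C stock, ~5-day turnover (assumed)
plants = annual_production(450.0, 10.0)    # ~450 Pg C stock, ~decade turnover (assumed)

# A ~450x smaller standing stock yields production of the same order of
# magnitude -- which is also why phytoplankton respond to climate shifts
# in days rather than decades.
print(f"phytoplankton ~{phyto:.0f} Pg C/yr vs land plants ~{plants:.0f} Pg C/yr")
```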

Sea ice algae thrive in sea ice ‘houses’. If global warming trends continue, the melting sea ice has a downstream effect on the sea ice algae, which means a diminished ocean food web.

“Climate change is literally starving ocean life,” says Professor Cavicchioli.

Beyond the ocean, microbes are also critical to terrestrial environments, agriculture and disease.

“In terrestrial environments, microbes release a range of important greenhouse gases to the atmosphere (carbon dioxide, methane and nitrous oxide), and climate change is causing these emissions to increase,” Professor Cavicchioli says.

“Farming ruminant animals releases vast quantities of methane from the microbes living in their rumen — so decisions about global farming practices need to consider these consequences.

“And lastly, climate change worsens the impact of pathogenic microbes on animals (including humans) and plants — that’s because climate change is stressing native life, making it easier for pathogens to cause disease.

“Climate change also expands the number and geographic range of vectors (such as mosquitos) that carry pathogens. The end result is the increased spread of disease, and serious threats to global food supplies.”

Greater commitment to microbe-based research needed

In their statement, the scientists call on researchers, institutions and governments to commit to greater microbial recognition to mitigate climate change.

“The statement emphasizes the need to investigate microbial responses to climate change and to include microbe-based research during the development of policy and management decisions,” says Professor Cavicchioli.

Additionally, climate change research that links biological processes to global geophysical and climate processes should have a much bigger focus on microbial processes.

“This goes to the heart of climate change, so if micro-organisms aren’t considered effectively it means models cannot be generated properly and predictions could be inaccurate,” says Professor Cavicchioli.

“Decisions that are made now impact on humans and other forms of life, so if you don’t take into account the microbial world, you’re missing a very big component of the equation.”

Professor Cavicchioli says that microbiologists are also working on developing resources that will be made available for teachers to educate students on the importance of microbes.

“If that literacy is there, that means people will have a much better capacity to engage with things to do with microbiology and understand the ramifications and importance of microbes.”


Tesla pickup truck claims defy the laws of physics by John Engle

In 2017 scientists questioned whether the Tesla Semi could meet Musk’s claims (see post “Given the laws of physics, can the Tesla Semi really go 500 miles, and what will the price be?”). I suspect this is why Tesla has delayed the release another year, to 2020.

Below is another article that questions the claims Tesla is making about the Tesla pick-up truck.


* * *

Engle, J. 2019. Tesla pickup truck claims defy the laws of physics.


Tesla has been teasing its pickup truck for months, with the first glimpse offered at the March reveal of the Model Y SUV.

Elon Musk has claimed the Tesla pickup will outmatch the Ford F-150 as a truck while matching the Porsche 911 in sports car performance.

Tesla plans to unveil the pickup this summer, but the technological capabilities it would need to demonstrate to meet Musk’s promises are years from reality.

Tesla is known to make outlandish promises, but the Tesla Pickup may be the wildest yet. When reality sets in, Tesla may find a rude awakening.

For months, hype has been building around Tesla’s (TSLA) long-promised pickup truck. A partial image was teased at the unveiling event for the company’s Model Y SUV in March, and CEO Elon Musk has made numerous claims about its capabilities in the months since.

Much of the excitement around the Tesla Pickup stems from the out-of-this-world specs Musk has repeatedly promised. Yet, upon closer examination, most of the promised performance specifications appear to be physically impossible to achieve.

Pickup trucks are big business, especially in the United States, so it makes sense that Tesla would want to cash in on that lucrative market. However, Tesla’s appeal to science fiction over actual science suggests the project is not as mature as Musk claims.

Investors expecting the Tesla Pickup to contribute to the company’s strained bottom line anytime soon should think again.

Performance Anxiety

Musk has commented on the question of performance multiple times. At Tesla’s annual meeting this month, he declared that the pickup would have sports car performance capabilities on a par with the Porsche 911, while on a recent episode of the “Ride the Lightning” podcast he compared it favorably to the Ford (F) F-150:

“It’s going to be a truck that is more capable than other trucks. The goal is to be a better truck than a F-150 in terms of truck-like functionality and be a better sports car than a standard 911. That’s the aspiration.”

That sounds great in theory. After all, who wouldn’t want to drive a vehicle with the rugged capabilities of an F-150 and the handling of a 911? Unfortunately, when something sounds too good to be true, it almost always is. That certainly appears to be the case here.

CNBC recently interviewed Brett Smith, the director of propulsion technologies and energy infrastructure at the Center for Automotive Research, who threw some cold water on Musk’s big ambitions:

“While ‘you’re never going to get a vehicle that can do everything well,’ says Smith, ‘I have no doubt that they can build a pickup truck that has much better handling than the current F-150.’”

“But it will likely prove too difficult to build the typical features of a pickup truck, including the ability to haul heavy cargo and handle off-road driving, into ‘a light-weight, high-performance sports car’ that could match the Porsche 911, he says…‘The physics don’t work there.’”

The tradeoff between robust trucking capabilities and sports car performance is well understood in the automotive industry. Tesla has shown no magical ability to defy the basic physics and engineering constraints that render Musk’s promises essentially impossible to attain with present technology.

Hauling Fantasy

Musk’s claims about the Tesla Pickup have gone even farther into the realm of science fiction from time to time. Taking to Twitter in March, he scoffed at the Dodge Ram, declaring its 12,750 pounds of towing capacity to be puny:

  • Dodge Ram owner@RamLover69: you would still be physically fatigued if you tried to haul 12,000 lbs of steel beams like I do every day with my Dodge Ram, 2019s motor trend truck of the year
  • Replying to @RamLover69: 12,000lbs!? How puny. Do you construct Children’s Toys?

This was not the first time Musk has commented on the towing capacity of the Tesla Pickup Truck. Indeed, in June 2018, he proclaimed it would be able to haul an astonishing 300,000 pounds:

  • Elon Musk @elonmusk 26 Jun 2018: The Tesla truck will have dual motor all-wheel drive w crazy torque & a suspension that dynamically adjusts for load. Those will be standard.
  • Psycho Hippie @psycho_hippie 26 Jun 2018: 30,000 lb towing capacity
  • Elon Musk replying to @psycho_hippie: 300,000 lb towing capacity

Even if taken as mere braggadocio, Musk’s claim is so far beyond the pale as to merit a hearty scoff or two. Indeed, this claim literally defies the laws of battery engineering and physics. Even the Tesla Semi, an electric big rig truck, is supposedly only going to be rated to haul 80,000 pounds. The notion that the Tesla Pickup would have a large enough battery – and powerful enough drive-train – to rate a safe hauling capacity more than 20 times greater than that of its top fossil fuel-powered rivals borders on the obscene.
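A back-of-envelope check makes the point concrete. Using assumed figures (not Tesla-published specs) for rolling resistance and pack specific energy, the energy needed just to roll a given gross weight 500 miles — ignoring aerodynamic drag entirely — already implies an absurd battery mass at the 300,000-pound figure:

```python
# Hedged sanity check: energy to overcome rolling resistance alone when moving
# a given gross weight 500 miles, and the battery mass that implies.
# All figures below are assumptions for illustration, not Tesla specifications,
# and aerodynamic drag (a large extra cost at highway speed) is ignored.

G = 9.81              # gravitational acceleration, m/s^2
C_RR = 0.007          # assumed truck-tyre rolling-resistance coefficient
LB_TO_KG = 0.4536
MILE_M = 1609.34
PACK_WH_PER_KG = 180  # assumed usable pack-level specific energy, Wh/kg
DRIVETRAIN_EFF = 0.85 # assumed battery-to-wheels efficiency

def rolling_energy_kwh(gross_lb, miles=500):
    """kWh needed to overcome rolling resistance over the trip."""
    mass_kg = gross_lb * LB_TO_KG
    joules = C_RR * mass_kg * G * miles * MILE_M / DRIVETRAIN_EFF
    return joules / 3.6e6

for gross_lb in (80_000, 300_000):  # the Semi's rated combined weight vs Musk's tweet
    kwh = rolling_energy_kwh(gross_lb)
    pack_tonnes = kwh * 1000 / PACK_WH_PER_KG / 1000
    print(f"{gross_lb:7,} lb gross: ~{kwh:,.0f} kWh rolling losses alone, "
          f"~{pack_tonnes:.1f} t of battery")
```

Even before drag, grades, or a safety margin, the 300,000-pound scenario implies a pack weighing several times as much as an entire F-150.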

Of course, Musk is no stranger to claiming specifications for vehicles that cannot be supported by physics. The Semi, for example, is supposed to be able to travel 500 miles fully laden. Yet, as Martin Daum, the head of Daimler’s trucking division, pointed out, such a capability is well beyond the limits of current battery technology:

“If Tesla really delivers on this promise, we’ll obviously buy two trucks — one to take apart and one to test because if that happens, something has passed us by. But for now, the same laws of physics apply in Germany and in California.”

Musk’s claims appear even more suspect when one considers the hauling capacity of Tesla’s current vehicles. The Model X crossover is Tesla’s most robust vehicle, yet it faces substantial battery drain when used as a hauler. Even with the optional towing package, the Model X can lose more than 60% of its range when hauling weights approaching 5,000 pounds.

Admittedly, an electric pickup truck would likely see lower levels of degradation due to a more optimized design, but the battery consumption would still be substantial – especially if the promises of 911-level handling are to be taken seriously. The Tesla Pickup’s planned 400 to 500 mile range is not likely to hold up terribly well when used as a heavy-duty hauler.
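The underlying relationship is simple: range scales inversely with per-mile energy consumption. A minimal sketch with illustrative numbers (not Tesla specs):

```python
# Minimal sketch (illustrative numbers, not Tesla specs) of why range
# collapses under load: range scales inversely with per-mile consumption.

def towing_range(base_range_mi, base_wh_per_mi, towing_wh_per_mi):
    """Range when per-mile consumption rises under load."""
    return base_range_mi * base_wh_per_mi / towing_wh_per_mi

# Assumed: a 500-mile-rated pickup at 450 Wh/mi unladen; heavy towing pushes
# consumption to 1,200 Wh/mi, roughly in line with the >60% range loss
# reported for a loaded Model X.
print(towing_range(500, 450, 1200))  # 187.5 miles
```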

Investor’s Eye View

Musk is well known for his propensity to dabble in wishful thinking where vehicle performance is concerned, but the claims he has made about the Tesla Pickup are beyond the pale even for him. There is simply no way it will be able to do the things he claims.

As a consequence, it appears quite likely that the Tesla Pickup is far from reaching the serious prototype and basic testing stage, let alone the all-terrain and all-weather testing that will be critical to the development of a genuinely robust electric pickup truck.

Tesla fans who believe the current hype, and Tesla investors expecting the imminent arrival of a lucrative industry-leading pickup truck, are bound for a rude awakening.


America loves the idea of family farms. That’s unfortunate. By Sarah Taber

Preface. As declining fossil fuels force more and more people back into farming, eventually 75 to 90% of the population, it would be much better for this to happen on family farms than on gigantic mega-farms whose workers are slaves in all but name. This essay offers an alternative: collaborative, worker-owned farming that has already been proven to work.


* * *

Taber, S. 2019. America loves the idea of family farms. That’s unfortunate.

Family farms are central to our nation’s identity. Most Americans, even those who have never been on a farm, have strong feelings about the idea of family farms — so much that they’re the one thing that all U.S. politicians agree on. Each election, candidates across the ideological spectrum roll out plans to save family farms — or give speeches about them, at least. From Little House on the Prairie to modern farmer’s markets, family farms are also the core of most Americans’ vision of what sustainable, just farming is supposed to look like.

But as someone who’s worked in agriculture for 20 years and researched the history of farming, I think we need to understand something: family farming’s difficulties aren’t a modern problem born of modern agribusiness. The model has never worked very well. It’s simply precarious, and it always has been. Idealizing family farms burdens real farmers with overwhelming guilt and blame when farms go under. It’s crushing.

I wish we talked more openly about this. If we truly understood how rare it is for family farms to happen at all, never mind last multiple generations, I hope we could be less hard on ourselves. Deep down we all know that the razor-thin margins put families in impossible positions all the time, but we still treat it like it’s the ideal. We blame these troubles on agribusiness — but we don’t look deeper. We should. If we’re serious about building food systems that are sustainable and robust in the long term, we need to learn from how farming’s been done for most of human history: collaboratively.

Farming has almost always existed on a larger social scale—very extended families up to whole villages. We tend to think of medieval peasants as forebears of today’s family farms, but they’re not. Medieval villages worked much more like a single unit with little truly private infrastructure—draft animals, plows, and even land were operated at the community level.

Family farming as we know it—nuclear families that own their land, pass it on to heirs, raise some or all of their food, and produce some cash crops—is vanishingly rare in human history.

It’s easy to see how Anglo-Americans could mistake it for normal. Our cultural heritage is one of the few places where this fluke of a farming practice has made multiple appearances. Family farming was a key part of the political economy in ancient Rome, late medieval England, and colonial America. But we keep forgetting something very important about those golden ages of family farming. They all happened after, and only after, horrific depopulation events.

Rome emptied newly conquered lands by selling the original inhabitants into slavery. In England, the Black Death killed so many nobles and serfs that surviving peasants seized their own land and became yeomen — free small farmers who neither answered to a master nor commanded their own servants. Colonial Americans, seeking to recreate English yeoman farming, began a campaign of genocide against indigenous people that has lasted for centuries, and created one of the greatest transfers of land and wealth in history.

Family farming isn’t just difficult. It’s so brittle that it only makes a viable livelihood for farmers when land is nearly valueless for sheer lack of people. In areas where family farming has persisted for more than a couple generations it’s largely thanks to extensive, modern technocratic government interventions like grants, guaranteed loans, subsidized crop insurance, free training, tax breaks, suppression of farmworker wages, and more. Family farms’ dependence on the state is well understood within the industry, but it’s heresy to talk about it openly lest taxpayers catch on. I think it’s time to open up, because I don’t think a practice that needs that much life support can truly be considered “sustainable.” After seeing what I’ve seen from 20 years in the industry, continuing to present it as such feels to me like a type of con game — because there is a better way.

America’s history is filled with examples of collaborative farming. It’s just less publicized than single-family homesteading. African-American farmers have a long and determined history of collaborative farming, a brace against the viciousness of slavery and Jim Crow. Native peoples that farmed usually did so as a whole community rather than on a single-family basis. In the early days of the reservation system, some reservations grew their food on one large farm run by the entire nation or tribe. These were so successful that colonial governments panicked, broke them up, and forced indigenous farmers to farm as individual single-family homesteads. This was done with the express goal of impoverishing them — which says a lot about the realities of family farming, security, and financial independence. It also says a lot about how long those grim realities have been understood. Indigenous groups today run modern, innovative, community-level land operations, including over half the farms in Arizona, and Tanka’s work restoring prairies, bison, and traditional foodways in the Dakotas as the settler-built wheat economy dries up.

One collaborative tradition that’s been very public about how their community-size farms function is the Hutterites, a religious group of about 460 communities in the U.S. and Canada numbering 75-150 people apiece. Despite the harsh prairies where they live, and farming about half as many acres per capita as neighboring family farmers, Hutterites are thriving and expanding while neighboring family farms are throwing in the towel. Their approach — essentially farming as a large employee-owned company with diverse crops and livestock — has valuable lessons.

Outsiders often chalk up the success of the Hutterites, who forgo most private property, to “free labor” or “not having to pay taxes.” Neither of these are accurate. Hutterite farms thrive due to farming as a larger community rather than as individual families. Family farms can achieve economies of scale by specializing in one thing, like expanding a dairy herd or crop acreage. But with only one or two family members running a farm, there simply isn’t enough bandwidth to run more than one or two operations, no matter how much labor-saving technology is involved. The community at a Hutterite farm allows them to actually pull off what sustainability advocates talk about, but family farms consistently struggle with: diversifying.

To understand why this structure is useful, take the experience of a colleague whose family runs a wheat farm in the Great Plains. He’s trying to make extra cash by grazing cattle on their crop when it’s young. This can enhance the soil and future yields if done right, and his family agreed to it, but they couldn’t help build the necessary fence or pay for another laborer to help him. The property remains fenceless, without additional income, and without the soil health boosts from carefully managed grazing. Community-size farms like Hutterite operations have larger, more flexible labor pools that don’t get stuck in these catch-22 situations.

Stories like this abound in farm country. America’s farmland is filled with opportunities to sustainably grow more food from the same acres and earn extra cash, thwarted by the limited attention solo operations can give. We treat this plight as natural and inevitable. We treat it as something to solve by collective action on a national level — government policies that help family farms. We don’t talk about how readily these things can be solved by collective action at the local level.

Collaboration doesn’t just make better use of the land — it can also do a lot for farmers’ quality of life. Hutterites, thanks to farming on a community scale, get four weeks of vacation per year; new mothers get a few months’ maternity leave and a full-time helper of their choosing — something few American women in any vocation can count on.

We don’t have to commit to the Hutterite lifestyle to benefit from the advantages of collaborative farming. Big, diverse, employee-owned farms work, and they can turn farming into a job that anyone can train for and get — you don’t have to be born into it.

Many of today’s new farmers who weren’t born into farming are young and woefully undercapitalized, stuck in a high-labor/low-revenues cycle with little chance for improvement. Others begin farming as a second career, with plenty of capital but a time horizon of perhaps 20 years — rather than the 40 it often takes to make planting orchards, significant investments in land, and other improvements worth it. These new farmers are absolutely trying to do the right thing, but solo farming simply doesn’t give them the resources or time horizon to “think like a cathedral builder.” Good farming is a relay race. We have to build human systems that work like a relay team.

Finally, and perhaps most important, collaborative farming can be a powerful tool for decolonization. Hutterite communities are powerhouses, raising most of the eggs, hogs, or turkeys in some states — and they’re also largely self-sufficient. This has allowed them to build their own culture to suit their own values. They have enough scale to build their own crop processing, so they can work directly with retailers and customers on their own terms instead of going through middlemen. They build their own knowledge instead of relying on “free” agribusiness advice as many family farms do. In other words, they’re powerful. Imagine what groups like this, with determined inclusivity from top leadership down through rank-and-file, could do to right the balance of power in the United States.

Solo farming does work for a few. I don’t want to discount their accomplishments — but I also don’t think we can give them their due without acknowledging the uphill battle they’re in. I think it’s important to be honest about family farming’s challenges and proactive about handling them. One of the best ways to do that is to pool efforts. Our culture puts so much emphasis on one “right” way of farming — solo family operations — that we ignore valuable lessons from people who’ve done it differently for hundreds or thousands of years. It’s time for us to open up and look at other ways of doing things.


Bodhi Paul Chefurka: Carrying capacity, overshoot and sustainability

Preface. This is a post written by Bodhi Paul Chefurka on his blog in 2013. I don’t understand his ultimate sustainable carrying capacity based on hunter-gatherers. Why would agriculture go away? But the rest of the article is spot on.



Ever since the writing of Thomas Malthus in the early 1800s, and especially since Paul Ehrlich’s publication of “The Population Bomb”  in 1968, there has been a lot of learned skull-scratching over what the sustainable human population of Planet Earth might “really” be over the long haul.

This question is intrinsically tied to the issue of ecological overshoot so ably described by William R. Catton Jr. in his 1980 book “Overshoot: The Ecological Basis of Revolutionary Change”.  How much have we already pushed our population and consumption levels above the long-term carrying capacity of the planet?

In this article I outline my current thoughts on carrying capacity and overshoot, and present five estimates for the size of a sustainable human population.

Carrying Capacity

“Carrying capacity” is a well-known ecological term that has an obvious and fairly intuitive meaning: “The maximum population size of a species that the environment can sustain indefinitely, given the food, habitat, water and other necessities available in the environment.”

Unfortunately that definition becomes more nebulous and controversial the closer you look at it, especially when we are talking about the planetary carrying capacity for human beings. Ecologists will claim that our numbers have already well surpassed the planet’s carrying capacity, while others (notably economists and politicians…) claim we are nowhere near it yet!

This confusion may arise because we tend to conflate two very different understandings of the phrase “carrying capacity”.  For this discussion I will call these the “subjective” and “objective” views of carrying capacity.

The subjective view is carrying capacity as seen by a member of the species in question. Rather than coming from a rational, analytical assessment of the overall situation, it is an experiential judgement.  As such it tends to be limited to the population of one’s own species, as well as having a short time horizon – the current situation counts a lot more than some future possibility.  The main thing that matters in this view is how many of one’s own species will be able to survive to reproduce. As long as that number continues to rise, we assume all is well – that we have not yet reached the carrying capacity of our environment.

From this subjective point of view humanity has not even reached, let alone surpassed the Earth’s overall carrying capacity – after all, our population is still growing.  It’s tempting to ascribe this view mainly to neoclassical economists and politicians, but truthfully most of us tend to see things this way.  In fact, all species, including humans, have this orientation, whether it is conscious or not.

Species tend to keep growing until outside factors such as disease, predators, food or other resource scarcity – or climate change – intervene.  These factors define the “objective” carrying capacity of the environment.  This objective view of carrying capacity is the view of an observer who adopts a position outside the species in question. It’s the typical viewpoint of an ecologist looking at the reindeer on St. Matthew Island, or at the impact of humanity on other species and its own resource base.

This is the view that is usually assumed by ecologists when they use the naked phrase “carrying capacity”, and it is an assessment that can only be arrived at through analysis and deductive reasoning.  It’s the view I hold, and its implications for our future are anything but comforting.

When a species bumps up against the limits posed by the environment’s objective carrying capacity, its population begins to decline. Humanity is now at the uncomfortable point when objective observers have detected our overshoot condition, but the population as a whole has not recognized it yet. As we push harder against the limits of the planet’s objective carrying capacity, things are beginning to go wrong.  More and more ordinary people are recognizing the problem as its symptoms become more obvious to casual onlookers. The problem is, of course, that we’ve already been above the planet’s carrying capacity for quite a while.

One typical rejoinder to this line of argument is that humans have “expanded our carrying capacity” through technological innovation.  “Look at the Green Revolution!  Malthus was just plain wrong.  There are no limits to human ingenuity!”  When we say things like this, we are of course speaking from a subjective viewpoint. From this experiential, human-centric point of view, we have indeed made it possible for our environment to support ever more of us. This is the only view that matters at the biological, evolutionary level, so it is hardly surprising that most of our fellow species-members are content with it.

The problem with that view is that every objective indicator of overshoot is flashing red.  From the climate change and ocean acidification that flows from our smokestacks and tailpipes, through the deforestation and desertification that accompany our expansion of human agriculture and living space, to the extinctions of non-human species happening in the natural world, the planet is urgently signalling an overload condition.

Humans have an underlying urge towards growth, an immense intellectual capacity for innovation, and a biological inability to step outside our chauvinistic, anthropocentric perspective.  This combination has made it inevitable that we would land ourselves and the rest of the biosphere in the current insoluble global ecological predicament.


When a population surpasses its carrying capacity it enters a condition known as overshoot.  Because the carrying capacity is defined as the maximum population that an environment can maintain indefinitely, overshoot must by definition be temporary.  Populations always decline to (or below) the carrying capacity.  How long they stay in overshoot depends on how many stored resources there are to support their inflated numbers.  Resources may be food, but they may also be any resource that helps maintain their numbers.  For humans one of the primary resources is energy, whether it is tapped as flows (sunlight, wind, biomass) or stocks (coal, oil, gas, uranium etc.).  A species usually enters overshoot when it taps a particularly rich but exhaustible stock of a resource.  Like fossil fuels, for instance…
Population growth in the animal kingdom tends to follow a logistic curve.  This is an S-shaped curve that starts off low when the species is first introduced to an ecosystem, at some later point rises very fast as the population becomes established, and then finally levels off as the population saturates its niche. 
Humans have been pushing the envelope of our logistic curve for much of our history. Our population rose very slowly over the last couple of hundred thousand years, as we gradually developed the skills we needed in order to deal with our varied and changeable environment, particularly language, writing and arithmetic. As we developed and disseminated those skills our ability to modify our environment grew, and so did our growth rate.
If we had not discovered the stored energy resource of fossil fuels, our logistic growth curve would probably have flattened out some time ago, and we would be well on our way to achieving a balance with the energy flows in the world around us, much as all other species do.  Our numbers would have settled down to oscillate around a much lower level than today, as hunter-gatherer populations probably did tens of thousands of years ago.
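
For readers who like to see the shape of that curve, here is a minimal numerical sketch of logistic growth. The growth rate r and carrying capacity K are purely illustrative choices, not estimates for any real population.

```python
# Discrete logistic growth: each step the population grows at rate r,
# throttled by how close it already is to the carrying capacity K.
def logistic_series(n0, r, K, steps):
    """Return the population at each step of a logistic growth process."""
    series = [n0]
    n = n0
    for _ in range(steps):
        n = n + r * n * (1 - n / K)  # growth slows as n approaches K
        series.append(n)
    return series

# Illustrative values only: start at 1% of carrying capacity.
pop = logistic_series(n0=0.01, r=0.25, K=1.0, steps=60)

# The curve starts slow, rises steeply, then flattens out near K.
print(round(pop[0], 3), round(pop[30], 3), round(pop[60], 3))
```

The S-shape falls out of the single feedback term (1 - n/K): growth is nearly exponential while the niche is empty and grinds to a halt as it saturates.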

Unfortunately, our discovery of the energy potential of coal created what mathematicians and systems theorists call a “bifurcation point” or what is better known in some cases as a tipping point. This is a point at which a system diverges from one path onto another because of some influence on events.  The unfortunate fact of the matter is that bifurcation points are generally irreversible.  Once past such a point, the system can’t go back to a point before it.

Given the impact that fossil fuels had on the development of world civilization, their discovery was clearly such a fork in the road.  Rather than flattening out politely as other species’ growth curves tend to do, ours kept on rising.  And rising, and rising. 

What is a sustainable population level?

Now we come to the heart of the matter.  Okay, we all accept that the human race is in overshoot.  But how deep into overshoot are we?  What is the carrying capacity of our planet?  The answers to these questions, after all, define a sustainable population.

Not surprisingly, the answers are quite hard to tease out.  Various numbers have been put forward, each with its set of stated and unstated assumptions – not the least of which is the assumed standard of living (or consumption profile) of the average person.  For those familiar with Ehrlich and Holdren’s I=PAT equation, if “I” represents the environmental impact of a sustainable population, then for any population value “P” there is a corresponding value for “AT”, the level of Affluence and Technology that can be sustained for that population level.  In other words, the higher our standard of living climbs, the lower our population level must fall in order to be sustainable. This is discussed further in an earlier article on Thermodynamic Footprints.

To get some feel for the enormous range of uncertainty in sustainability estimates we’ll look at five assessments, each of which leads to a very different outcome.  We’ll start with the most optimistic one, and work our way down the scale.

The Ecological Footprint Assessment

The concept of the Ecological Footprint was developed in 1992 by William Rees and Mathis Wackernagel at the University of British Columbia in Canada.

The ecological footprint is a measure of human demand on the Earth’s ecosystems. It is a standardized measure of demand for natural capital that may be contrasted with the planet’s ecological capacity to regenerate. It represents the amount of biologically productive land and sea area necessary to supply the resources a human population consumes, and to assimilate associated waste. As it is usually published, the value is an estimate of how many planet Earths it would take to support humanity with everyone following their current lifestyle.

It has a number of fairly glaring flaws that cause it to be hyper-optimistic. The ecological footprint accounts mainly for renewable resources; it includes only a theoretical, and underestimated, factor for non-renewable resources.  It does not take into account the unfolding effects of climate change, ocean acidification or biodiversity loss (i.e. species extinctions).  It is intuitively clear that no number of “extra planets” would compensate for such degradation.

Still, the estimate as of the end of 2012 is that our overall ecological footprint is about “1.7 planets”.  In other words, there is at least 1.7 times too much human activity for the long-term health of this single, lonely planet.  To put it yet another way, we are 70% into overshoot.

It would probably be fair to say that by this accounting method the sustainable population would be (7 / 1.7) or about four billion people at our current average level of affluence.  As you will see, other assessments make this estimate seem like a happy fantasy.

The Fossil Fuel Assessment

The main accelerant of human activity over the last 150 to 200 years has been fossil fuel.  Before 1800 there was very little fossil fuel in general use, with most energy being derived from wood, wind, water, animal and human power. The following graph demonstrates the precipitous rise in fossil fuel use since then, and especially since 1950.

This information was the basis for my earlier Thermodynamic Footprint analysis.  That article investigated the influence of technological energy (87% of which comes from fossil fuels) on human planetary impact, in terms of how much it multiplies the effect of each “naked ape”. The following graph illustrates the multiplier at different points in history:

Fossil fuels have powered the increase in all aspects of civilization, including population growth.  The “Green Revolution” in agriculture that was kicked off by Nobel laureate Norman Borlaug in the late 1940s was largely a fossil fuel phenomenon, relying on mechanization, powered irrigation and synthetic fertilizers derived from fossil fuels. This enormous increase in food production supported a swift rise in population numbers, in a classic ecological feedback loop: more food (supply) => more people (demand) => more food => more people etc…

Over the core decades of the Green Revolution from 1950 to 1980 the world population almost doubled, from fewer than 2.5 billion to over 4.5 billion.  The average population growth over those three decades was 2% per year.  Compare that to 0.5% from 1800 to 1900; 1.0% from 1900 to 1950; and 1.5% from 1980 until now:

This analysis makes it tempting to conclude that a sustainable population might look similar to the situation in 1800, before the Green Revolution, and before the global adoption of fossil fuels: about 1 billion people living on about 5% of today’s global average energy consumption.

It’s tempting (largely because it seems vaguely achievable), but unfortunately that number may still be too high.  Even in 1800 the signs of human overshoot were clear, if not well recognized:  there was already widespread deforestation through Europe and the Middle East; and desertification had set into the previously lush agricultural zones of North Africa and the Middle East.

Not to mention that if we did start over with “just” one billion people, an annual growth rate of a mere 0.5% would put the population back over seven billion in just 400 years.  Unless the growth rate can be kept down very close to zero, such a situation is decidedly unsustainable.
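
The arithmetic behind that 400-year figure is ordinary compound growth; this quick sketch just checks it using the rate and timespan stated above.

```python
# Compound growth: population after a number of years at a fixed annual rate.
def grow(population, rate, years):
    return population * (1 + rate) ** years

# Starting over at 1 billion, growing at a "mere" 0.5% per year:
final = grow(1e9, 0.005, 400)
print(f"{final / 1e9:.2f} billion")  # back over 7 billion within 400 years
```

At 0.5% per year the population multiplies by about 7.4 over four centuries, which is why only a growth rate held very close to zero is compatible with staying at any given level.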

The Population Density Assessment

There is another way to approach the question.  If we assume that the human species was sustainable at some point in the past, what point might we choose and what conditions contributed to our apparent sustainability at that time?

I use a very strict definition of sustainability.  It reads something like this: “Sustainability is the ability of a species to survive in perpetuity without damaging the planetary ecosystem in the process.”  This principle applies only to a species’ own actions, rather than uncontrollable external forces like Milankovitch cycles, asteroid impacts, plate tectonics, etc.

In order to find a population that I was fairly confident met my definition of sustainability, I had to look well back in history – in fact back into Paleolithic times.  The sustainability conditions I chose were: a very low population density and very low energy use, with both maintained over multiple thousands of years. I also assumed the populace would each use about as much energy as a typical hunter-gatherer: about twice the daily amount of energy a person obtains from the food they eat.

There are about 150 million square kilometers, or 60 million square miles of land on Planet Earth.  However, two thirds of that area is covered by snow, mountains or deserts, or has little or no topsoil.  This leaves about 50 million square kilometers (20 million square miles) that is habitable by humans without high levels of technology.

A typical population density for a non-energy-assisted society of hunter-forager-gardeners is between 1 person per square mile and 1 person per square kilometer. Because humans living this way had settled the entire planet by the time agriculture was invented 10,000 years ago, this number pegs a reasonable upper boundary for a sustainable world population in the range of 20 to 50 million people.

I settled on the average of these two numbers, 35 million people.  That was because it matches known hunter-forager population densities, and because those densities were maintained with virtually zero population growth (less than 0.01% per year) during the 67,000 years from the time of the Toba super-volcano eruption in 75,000 BC until 8,000 BC (Agriculture Day on Planet Earth).

If we were to spread our current population of 7 billion evenly over 50 million square kilometers, we would have an average density of 140 per square kilometer.  Based just on that number, and without even considering our modern energy-driven activities, our current population is somewhere between 140 and 350 times too big to be sustainable – call it 200 times, or 20,000% into overshoot, based on our raw population numbers alone.

As I said above, we also need to take the population’s standard of living into account. Our use of technological energy gives each of us the average planetary impact of about 20 hunter-foragers.  What would the sustainable population be if each person kept their current lifestyle, which is given as an average current Thermodynamic Footprint (TF) of 20?

We can find the sustainable world population number for any level of human activity by using the I = PAT equation mentioned above.

  • We decided above that the maximum hunter-forager population we could accept as sustainable would be 35 million people, each with a Thermodynamic Footprint of 1.
  • First, we set I (the allowable total impact for our sustainable population) to 35, representing those 35 million hunter-foragers.
  • Next, we set AT to be the TF representing the desired average lifestyle for our population.  In this case that number is 20.
  • We can now solve the equation for P.  Using simple algebra, we know that I = P x AT is equivalent to P = I / AT.  Using that form of the equation we substitute in our values, and we find that P = 35 / 20.  In this case P = 1.75.
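
The steps above reduce to a single division. Here is the same calculation as a sketch, using the article's own numbers (population in millions, footprints in hunter-forager equivalents):

```python
# I = P x AT, solved for P.  Units: population in millions of people,
# footprint in hunter-forager equivalents (TF = 1 for a hunter-forager).
def sustainable_population(impact_budget, footprint):
    """P = I / AT: sustainable population (millions) at a given footprint."""
    return impact_budget / footprint

I = 35  # allowable total impact: 35 million hunter-foragers at TF = 1
print(sustainable_population(I, 20))  # today's average lifestyle, TF = 20
print(sustainable_population(I, 78))  # American lifestyle, TF = 78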

This number tells us that if we want to keep the average level of per-capita consumption we enjoy in today’s world, we would enter an overshoot situation above a global population of about 1.75 million people. By this measure our current population of 7 billion is about 4,000 times too big and active for long-term sustainability. In other words, by this measure we are now 400,000% into overshoot.

Using the same technique we can calculate that achieving a sustainable population with an American lifestyle (TF = 78) would permit a world population of only about 450,000 people – clearly not enough to sustain a modern global civilization.

For the sake of comparison, it is estimated that the historical world population just after the dawn of agriculture in 8,000 BC was about five million, and in Year 1 was about 200 million.  We crossed the upper threshold of planetary sustainability in about 2000 BC, and have been in deepening overshoot for the last 4,000 years.

The Ecological Assessments

As a species, human beings share much in common with other large mammals.  We breathe, eat, move around to find food and mates, socialize, reproduce and die like all other mammalian species.  Our intellect and culture, those qualities that make us uniquely human, are recent additions to our essential primate nature, at least in evolutionary terms.

Consequently it makes sense to compare our species’ performance to that of other, similar species – species that we know for sure are sustainable.  I was fortunate to find the work of American marine biologist Dr. Charles W. Fowler, who has a deep interest in sustainability and the ecological conundrum posed by human beings.  The following two assessments are drawn from Dr. Fowler’s work.

First assessment

In 2003, Dr. Fowler and Larry Hobbs co-wrote a paper titled “Is humanity sustainable?” that was published by the Royal Society.  In it, they compared a variety of ecological measures across 31 species including humans. The measures included biomass consumption, energy consumption, CO2 production, geographical range size, and population size.

It should come as no great surprise that in most of the comparisons humans had far greater impact than other species, even to a 99% confidence level.  The only measure in which we matched other species was in the consumption of biomass (i.e. food).

When it came to population size, Fowler and Hobbs found that there are over two orders of magnitude more humans than one would expect based on a comparison to other species – 190 times more, in fact.  Similarly, our CO2 emissions outdid other species by a factor of 215.

Based on this research, Dr. Fowler concluded that there are about 200 times too many humans on the planet.  That works out to an estimate for a sustainable population of 35 million people.

This is the same as the upper bound established above by examining hunter-gatherer population densities.  The similarity of the results is not too surprising, since the hunter-gatherers of 50,000 years ago were about as close to “naked apes” as humans have been in recent history.

Second assessment

In 2008, five years after the publication cited above, Dr. Fowler wrote another paper entitled “Maximizing biodiversity, information and sustainability.”  In this paper he examined the sustainability question from the point of view of maximizing biodiversity.  In other words, what is the largest human population that would not reduce planetary biodiversity?

This is, of course, a very stringent test, and one that we probably failed early in our history by extirpating mega-fauna in the wake of our migrations across a number of continents.

In this paper, Dr. Fowler compared 96 different species, and again analyzed them in terms of population, CO2 emissions and consumption patterns.

This time, when the strict test of biodiversity retention was applied, the results were truly shocking, even to me.  According to this measure, humans have overpopulated the Earth by almost 700 times.  In order to preserve maximum biodiversity on Earth, the human population may be no more than 10 million people – each with the consumption of a Paleolithic hunter-forager.



As you can see, the estimates for a sustainable human population vary widely – by a factor of 400 from the highest to the lowest.

The Ecological Footprint doesn’t really seem intended as a measure of sustainability.  Its main value is to give people with no exposure to ecology some sense that we are indeed over-exploiting our planet.  (It also has the psychological advantage of feeling achievable with just a little work.)  As a measure of sustainability, it is not helpful.

As I said above, the number suggested by the Thermodynamic Footprint or Fossil Fuel analysis isn’t very helpful either – even a population of one billion people without fossil fuels had already gone into overshoot.

That leaves us with three estimates: two at 35 million, and one of 10 million.

I think the lowest estimate (Fowler 2008, maximizing biodiversity), though interesting, is out of the running in this case, because human intelligence and problem-solving ability makes our destructive impact on biodiversity a foregone conclusion. We drove other species to extinction 40,000 years ago, when our total population was estimated to be under 1 million.

That leaves the central number of 35 million people, confirmed by two analyses using different data and assumptions.  My conclusion is that this is probably the largest human population that could realistically be considered sustainable.

So, what can we do with this information?  It’s obvious that we will not (and probably cannot) voluntarily reduce our population by 99.5%.  Even an involuntary reduction of this magnitude would involve enormous suffering and a very uncertain outcome.  In fact, it’s close enough to zero that if Mother Nature blinked, we’d be gone.

In fact, the analysis suggests that Homo sapiens is an inherently unsustainable species.  This outcome seems virtually guaranteed by our neocortex, by the very intelligence that has enabled our rise to unprecedented dominance over our planet’s biosphere.  Is intelligence an evolutionary blind alley?  From the singular perspective of our own species, it quite probably is. If we are to find some greater meaning or deeper future for intelligence in the universe, we may be forced to look beyond ourselves and adopt a cosmic, rather than a human, perspective.


How do we get out of this jam?

How might we get from where we are today to a sustainable world population of 35 million or so?  We should probably discard the notion of “managing” such a population decline.  If we can’t get our population to simply stop growing, an outright reduction of over 99% is simply not in the cards.  People seem virtually incapable of making these kinds of decisions in large social groups.  We can decide to stop reproducing, but only as individuals or (perhaps) small groups. Without the essential broad social support, such personal choices will make precious little difference to the final outcome.  Politicians will by and large not even propose an idea like “managed population decline” – not if they want to gain or remain in power, at any rate.  China’s brave experiment with one-child families notwithstanding, any global population decline will be purely involuntary.


A world population decline would (will) be triggered and fed by our civilization’s encounter with limits.  These limits may show up in any area: accelerating climate change, weather extremes, shrinking food supplies, fresh water depletion, shrinking energy supplies, pandemic diseases, breakdowns in the social fabric due to excessive complexity, supply chain breakdowns, electrical grid failures, a breakdown of the international financial system, international hostilities – the list of candidates is endless, and their interactions are far too complex to predict.

In 2007, shortly after I grasped the concept and implications of Peak Oil, I wrote my first web article on population decline: Population: The Elephant in the Room.  In it I sketched out the picture of a monolithic population collapse: a straight-line decline from today’s seven billion people to just one billion by the end of this century.
As time has passed I’ve become less confident in this particular dystopian vision.  It now seems to me that human beings may be just a bit tougher than that.  We would fight like demons to stop the slide, though we would potentially do a lot more damage to the environment in the process.  We would try with all our might to cling to civilization and rebuild our former glory.  Different physical, environmental and social situations around the world would result in a great diversity in regional outcomes.  To put it plainly, a simple “slide to oblivion” is not in the cards for any species that could recover from the giant Toba volcanic eruption in just 75,000 years.

Or Tumble?

Still, there are those physical limits I mentioned above.  They are looming ever closer, and it seems a foregone conclusion that we will begin to encounter them for real within the next decade or two. In order to draw a slightly more realistic picture of what might happen at that point, I created the following thought experiment on involuntary population decline. It’s based on the idea that our population will not simply crash, but will oscillate (tumble) down a series of stair-steps: first dropping as we puncture the limits to growth; then falling below them; then partially recovering; only to fall again; partially recover; fall; recover… 

I started the scenario with a world population of 8 billion people in 2030. I assumed each full cycle of decline and partial recovery would take six generations, or 200 years.  It would take three generations (100 years) to complete each decline and then three more in recovery, for a total cycle time of 200 years. I assumed each decline would take out 60% of the existing population over its hundred years, while each subsequent rise would add back only half of the lost population. 

In ten full cycles – 2,000 years – we would be back to a sustainable population of about 40-50 million. The biggest drop would be in the first 100 years, from 2030 to 2130, when we would lose a net 50 million or so people per year. Even that is only a loss of about 0.9% per year; compared to our net growth today of 1.1%, that’s easily within the realm of the conceivable, and not necessarily catastrophic – at least to begin with.
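
The scenario is easy to reproduce in a few lines. One caveat: “add back only half of the lost population” is ambiguous, so this sketch interprets each recovery as lifting the trough by 50% – the reading that lands near the 40-50 million endpoint described above. The decline and recovery fractions are the scenario's assumptions, not predictions.

```python
# Stair-step ("tumble") population scenario: each 100-year decline removes
# 60% of the population, and each 100-year recovery lifts the trough by 50%.
def tumble(start_pop, cycles, decline=0.60, recovery=0.50):
    pop = start_pop
    troughs = []
    for _ in range(cycles):
        pop *= (1 - decline)   # 100-year decline
        troughs.append(pop)
        pop *= (1 + recovery)  # 100-year partial recovery
    return pop, troughs

final, troughs = tumble(8e9, cycles=10)
first_century_loss = (8e9 - troughs[0]) / 100  # net people lost per year
print(f"after ten cycles: {final / 1e6:.0f} million")
print(f"first-century loss: {first_century_loss / 1e6:.0f} million/yr")
```

Under these assumptions the population after ten full cycles comes out just under 50 million, and the net loss over the first century works out to roughly 48 million people per year.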

As a scenario it seems a lot more likely than a single monolithic crash from here to under a billion people.  Here’s what it looks like:

It’s important to remember that this scenario is not a prediction. It’s an attempt to portray a potential path down the population hill that seems a bit more probable than a simple, “Crash! Everybody dies.”

It’s also important to remember that the decline will probably not happen anything like this, either. With climate change getting ready to push humanity down the stairs, and the strong possibility that the overall global temperature will rise by 5 or 6 degrees Celsius even before the end of that first decline cycle, our prospects do not look even this “good” from where I stand.

Rest assured, I’m not trying to present 35 million people as some kind of “population target”. It’s just part of my attempt to frame what we’re doing to the planet, in terms of what some of us see as the planetary ecosphere’s level of tolerance for our abuse. 

The other potential implicit in this analysis is that if we did drop from 8 to under 1 billion, we could then enter a population free-fall. As a result, we might keep falling until we hit the bottom of Olduvai Gorge again. My numbers are an attempt to define how many people might stagger away from such a crash landing.  Some people seem to believe that such an event could be manageable.  I don’t share that belief for a moment. These calculations are my way of getting that message out.

I figure if I’m going to draw a line in the sand, I’m going to do it on behalf of all life, not just our way of life.

What can we do? 

To be absolutely clear, after ten years of investigating what I affectionately call “The Global Clusterfuck”, I do not think it can be prevented, mitigated or managed in any way.  If and when it happens, it will follow its own dynamic, and the force of events could easily make the Japanese and Andaman tsunamis seem like pleasant days at the beach.

The most effective preparations that we can make will all be done by individuals and small groups.  It will be up to each of us to decide what our skills, resources and motivations call us to do.  It will be different for each of us – even for people in the same neighborhood, let alone people on opposite sides of the world.

I’ve been saying for a couple of years that each of us will do whatever we think is appropriate to the circumstances, in whatever part of the world we can influence. The outcome of our actions is ultimately unforeseeable, because it depends on how the efforts of all 7 billion of us converge, co-operate and compete.  The end result will be quite different from place to place – climate change impacts will vary, resources vary, social structures vary, values and belief systems are different all over the world. The best we can do is to do our best.

Here is my advice: 

  • Stay awake to what’s happening around us.
  • Don’t get hung up by other people’s “shoulds and shouldn’ts”.
  • Occasionally re-examine our personal values.  If they aren’t in alignment with what we think the world needs, change them.
  • Stop blaming people. Others are as much victims of the times as we are – even the CEOs and politicians.
  • Blame, anger and outrage are pointless.  They waste precious energy that we will need for more useful work.
  • Laugh a lot, at everything – including ourselves.
  • Hold all the world’s various beliefs and “isms” lightly, including our own.
  • Forgive others. Forgive ourselves. For everything.
  • Love everything just as deeply as you can.

That’s what I think might be helpful. If we get all that personal stuff right, then doing the physical stuff about food, water, housing, transportation, energy, politics and the rest of it will come easy – or at least a bit easier. And we will have a lot more fun doing it.

I wish you all the best of luck!
Bodhi Paul Chefurka
May 16, 2013

