Peak Sand

Preface.  Sand Primer:

  • Without sand, there would be no concrete, ceramics, computer chips, glass, plastics, abrasives, or paint
  • Desert sand can't be used: polished by the wind, its grains are too round and smooth to stick together. Concrete needs grains with rough edges, so desert sand is worthless for construction
  • Good sand is getting so rare that there is large-scale illegal mining in over 70 countries. In India the "sand mafia" is one of the most powerful criminal networks and will kill for sand, which is easy to steal and sell there
  • This has led to 75 to 90 percent of the world's beaches receding, and a huge amount of environmental damage
  • Some projections suggest that nearly all beaches will be gone by 2100
  • Australia sells sand to nations that have run out, such as the United Arab Emirates, which used up its marine sand building artificial islands
  • Sand is big business, with sales of $70 billion a year
  • Concrete is roughly 40% sand

How Much Sand is needed?

  • Average house: 200 tons
  • Hospital or other large building: 3,000 tons
  • Highway: 30,000 tons per kilometer
  • Nuclear power plant: 12,000,000 tons (equal to nearly 250 miles of highway)
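The highway equivalence in that last item can be checked with a couple of lines of arithmetic (figures taken from the list above; the kilometer-to-mile conversion is standard):

```python
# Sanity check: how many highway-miles of sand does one nuclear plant use?
SAND_PER_KM_HIGHWAY = 30_000         # tons of sand per kilometer of highway
SAND_PER_NUCLEAR_PLANT = 12_000_000  # tons of sand per nuclear power plant
KM_PER_MILE = 1.609344

highway_km = SAND_PER_NUCLEAR_PLANT / SAND_PER_KM_HIGHWAY  # 400 km
highway_miles = highway_km / KM_PER_MILE                   # ~248.5 miles

print(f"{highway_km:.0f} km, or {highway_miles:.0f} miles of highway")
# → 400 km, or 249 miles of highway
```

That matches the "nearly 250 miles" figure.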

Half of all sand is trapped behind the 845,000 dams in the world.

Alice Friedemann, author of "When Trucks Stop Running: Energy and the Future of Transportation" (2015, Springer), "Barriers to Making Algal Biofuels," and "Crunch! Whole Grain Artisan Chips and Crackers." Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 Report


Fountain, H., et al. 2019. Melting Greenland Is Awash in Sand. New York Times.

Glaciers grind rocks into silt, sand, and gravel. Greenland hopes there is enough sand for it to become a sand exporter, if the environmental damage isn't too high.

That won't be easy. Nearly all sand is mined within 50 miles of its destination because it costs too much to move it farther than that. So Greenland would have to find a way to make moving sand over long distances profitable.

Finding the sand is a challenge as well, since much of what the glacier produces is a fine silt that isn't suitable for concrete.

Then, if sand is found, an energy-intensive process begins: a pipe is extended to the sea floor and sucks up water and sand. Huge amounts of sand would need to be extracted into large bulk carriers, and new ports and loading facilities built. The distance to the nearest large cities is considerably farther than 50 miles: Boston is 2,250 miles away and London 1,900 miles.

Peak Sand in the news:

Gillis, J.R. November 4, 2014. Why Sand Is Disappearing. New York Times.

Extract, rearranged, sometimes paraphrased or reworded:

Today 75 to 90 percent of the world’s natural sand beaches are disappearing, due partly to massive legal and illegal mining, rising sea levels, increasing numbers of severe storms, and massive erosion from human development along coastlines. Many low-lying barrier islands are already submerged.

The sand and gravel business is now growing faster than the economy as a whole. In the United States, the market for mined sand has become a billion-dollar annual business, growing at 10% a year since 2008. Interior mining operations use huge machines working in open pits to dig down under the earth’s surface to get sand left behind by ancient glaciers. But as demand has risen — and the damming of rivers has held back the flow of sand from mountainous interiors — natural sources of sand have been shrinking.

One might think that desert sand would be a ready substitute, but its grains are finer and smoother; they don’t adhere to rougher sand grains, and tend to blow away. As a result, the desert state of Dubai brings sand for its beaches all the way from Australia.

And now there is a global beach-quality sand shortage, caused by the industries that have come to rely on it. Sand is vital to the manufacturing of abrasives, glass, plastics, microchips and even toothpaste, and, most recently, to the process of hydraulic fracturing. The quality of silicate sand found in the northern Midwest has produced what is being called a “sand rush” there, more than doubling regional sand pit mining since 2009.

But the greatest industrial consumer of all is the concrete industry. Sand from Port Washington on Long Island — 140 million cubic yards of it — built the tunnels and sidewalks of Manhattan from the 1880s onward. Concrete still takes 80 percent of all that mining can deliver. Apart from water and air, sand is the natural element most in demand around the world, a situation that puts the preservation of beaches and their flora and fauna in great danger. Today, a branch of Cemex, one of the world’s largest cement suppliers, is still busy on the shores of Monterey Bay in California, where its operations endanger several protected species.

The huge sand mining operations emerging worldwide, many of them illegal, are happening out of sight and out of mind, as far as the developed world is concerned. But in India, where the government has stepped in to limit sand mining along its shores, illegal mining operations by what is now referred to as the "sand mafia" defy these regulations. In Sierra Leone, poor villagers are encouraged to sell off their sand to illegal operations, ruining their own shores for fishing. Some Indonesian sand islands have been devastated by sand mining.

To those of us who visit beaches only in summer, they seem as permanent a part of our natural heritage as the Rocky Mountains and the Great Lakes. But shore dwellers know differently. Beaches are the most transitory of landscapes, and sand beaches the most vulnerable of all.

Yet the extent of this global crisis is obscured because so-called beach nourishment projects attempt to hold sand in place and repair the damage by the time summer people return, creating the illusion of an eternal shore.

Before next summer, endless lines of dump trucks will have filled in bare spots and restored dunes. Virginia Beach alone has been restored more than 50 times. In recent decades, East Coast barrier islands have used 23 million loads of sand, much of it mined inland and the rest dredged from coastal waters — a practice that disturbs the sea bottom, creating turbidity that kills coral beds and damages spawning grounds, which hurts inshore fisheries.

It is time for us to understand where sand comes from and where it is going. Sand was once locked up in mountains and it took eons of erosion before it was released into rivers and made its way to the sea. As Rachel Carson wrote in 1958, “in every curving beach, in every grain of sand, there is a story of the earth.” Now those grains are sequestered yet again — often in the very concrete sea walls that contribute to beach erosion.

We need to stop taking sand for granted and think of it as an endangered natural resource. Glass and concrete can be recycled back into sand, but there will never be enough to meet the demand of every resort. So we need better conservation plans for shore and coastal areas. Beach replenishment — the mining and trucking and dredging of sand to meet tourist expectations — must be evaluated on a case-by-case basis, with environmental considerations taking top priority. Only this will ensure that the story of the earth will still have subsequent chapters told in grains of sand.


Below is a table from CRYSTALLINE SILICA PRIMER, Industrial Minerals, U.S. Department of the Interior, about how sand is used (see Table 2 for even more uses):

Table 1. Silica In Commodities And End-Product Applications

Commodity / Form of silica / Major commercial applications

  • Antimony / Quartz / Flame retardants, batteries, ceramics, glass, alloys
  • Bauxite / Quartz / Aluminum production, refractories, abrasives
  • Beryllium / Quartz / Electronic applications
  • Cadmium / Quartz, jasper, opal, etc. / Batteries, coatings and platings, pigments, plastics, alloys
  • Cement / None / Concrete (quartz in concrete mix)
  • Clay / Quartz, cristobalite / Paper, ceramics, paint, refractories
  • Copper / Quartz / Electrical conduction, plumbing, machinery
  • Crushed stone / Quartz / Construction
  • Diatomite / Quartz, amorphous silica / Filtration aids
  • Dimension stone / Quartz / Building facings
  • Feldspar / Quartz / Glass, ceramics, filler material
  • Fluorspar / Quartz / Acids, steelmaking flux, glass, enamel, weld rod coatings
  • Garnet / Quartz / Abrasives, filtration, gemstones
  • Germanium / Quartz, jasper, etc. / Infrared optics, fiber optics, semiconductors
  • Gold / Quartz, chert / Jewelry, dental, industrial, monetary
  • Gypsum / Quartz / Gypsum board (prefabricated building product), industrial and building plaster
  • Industrial sand / Quartz / Glass, foundry sand
  • Iron ore / Chert, quartz / Iron and steel industry
  • Iron oxide pigment / Chert, quartz, amorphous silica / Construction materials, paint, coatings
  • Lithium / Quartz / Ceramics, glass, aluminum products
  • Magnesite / Quartz / Refractories
  • Mercury / Quartz / Chlorine and caustic soda manufacture, batteries
  • Mica / Quartz / Joint cement, paint, roofing
  • Perlite / Quartz, etc. / Building construction products
  • Phosphate rock / Quartz / Fertilizers
  • Pumice / Volcanic glass, quartz / Concrete aggregate, building block
  • Pyrophyllite / Quartz / Ceramics, refractories
  • Sand and gravel / Quartz / Construction
  • Selenium / Quartz / Photocopiers, glass manufacturing, pigments
  • Silicon / Quartz / Silicon and ferrosilicon for the ferrous foundry and steel industry; computers; photoelectric cells
  • Silver / Quartz, chert / Photographic material, electrical and electronic products
  • Talc / Quartz / Ceramics, paint, plastics, paper
  • Tellurium / Quartz / Steel and copper alloys, rubber compounding, electronics
  • Thallium / Quartz, etc. / Electronics, superconductors, glass alloys
  • Titanium / Quartz / Pigments for paint, paper, and plastics; metal for aircraft and chemical processing equipment
  • Tungsten / Quartz / Cemented carbides for metal machining and wear-resistant components
  • Vanadium / Quartz, amorphous silica / Alloying element in iron, steel, and titanium
  • Zinc / Quartz, etc. / Galvanizing, zinc-based alloys, chemicals, agriculture
  • Zircon / Quartz / Ceramics, refractories, zirconia production

In Heavy Industry

Foundry molds and cores for the production of metal castings are made from quartz sand. The manufacture of high-temperature silica brick for use in the linings of glass- and steel-melting furnaces represents another common use of crystalline silica in industry. The oil and gas industry uses crystalline silica to break up rock in wells: the operator pumps a water-sand mixture, under pressure, into the rock formations to fracture them so that oil and gas may be easily brought to the surface. More than 1 million tons of quartz sand were used annually for this purpose during the 1970s and early 1980s, when oil-well drilling was at its peak. Quartz sand is also used for filtering sediment and bacteria from water supplies and in sewage treatment. Although this use of crystalline silica has increased in recent years, it still represents a small proportion of the total use.

High-Tech Applications

Historically, crystalline silica, as quartz, has been a material of strategic importance. During World War II, communications components in telephones and mobile military radios were made from quartz. With today's emphasis on military command, control, and communications surveillance and with modern advances in sophisticated electronic systems, quartz-crystal devices are in even greater demand.

In the field of optics, quartz meets many needs. It has certain optical properties that permit its use in polarized laser beams. The field of laser optics uses quartz as windows, prisms, optical filters, and timing devices. Smaller portions of high-quality quartz crystals are used for prisms and lenses in optical instruments. Scientists are experimenting with quartz bars to focus sunlight in solar-power applications.

Quartz crystals possess a unique property called piezoelectricity. A piezoelectric crystal converts mechanical pressure into electricity and vice versa. When a quartz crystal is cut at an exact angle to its axis, pressure on it generates a minute electrical charge; likewise, an electrical charge applied to quartz causes it to vibrate, more than 30,000 times per second in some applications. Piezoelectric quartz crystals are used to make electronic oscillators, which provide accurate frequency control for radio transmitters and radio-frequency telephone circuits. Incoming signals of interfering frequencies can be filtered out by piezoelectric crystals. Piezoelectric crystals are also used for quartz watches and other timekeeping devices.
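As a concrete aside (not from the primer, but standard electronics practice): watch crystals are commonly cut to resonate at 32,768 Hz, which is 2 to the 15th power, precisely so that a chain of simple divide-by-two stages reduces the crystal's vibration to one pulse per second:

```python
# Why 32,768 Hz? It is 2**15, so fifteen divide-by-two flip-flop
# stages turn the crystal's vibration into a 1 Hz timekeeping tick.
CRYSTAL_HZ = 32_768

freq = CRYSTAL_HZ
stages = 0
while freq > 1:
    freq //= 2   # each flip-flop stage halves the frequency
    stages += 1

print(stages, freq)  # → 15 1
```

Fifteen stages, ending at exactly 1 Hz, which is why that seemingly odd frequency became the watch-industry standard.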

USGS 2011 Minerals Yearbook U.S. Department of the Interior U.S. Geological Survey SAND AND GRAVEL, CONSTRUCTION

(It's 2014, but 2011 is the most recent data available. Only a third of those queried responded, stats for sand vs. gravel are not broken out, and there is no information about ecological damage or theft. A pretty inept, incomplete report overall, but for what it's worth:) A total of 810 million metric tons (Mt) of construction sand and gravel was produced in the United States in 2011. This was a slight increase of 5 Mt from the revised production of 2010, and the first increase in annual production since 2006, following 4 consecutive years of decreases. The slight improvement came in response to increased demand from certain State economies experiencing the boom in natural gas and oil production and from some construction segments.

As sand and gravel became less available, owing to resource constraints or economic conditions in some locales, builders began to crush bedrock to produce a manufactured sand and gravel often referred to as crushed stone.

Of the 810 Mt of construction sand and gravel produced in 2011, 60% was reported or estimated without a breakdown by end use (tables 4–5). Of the remaining 327 Mt, 44% was used as concrete aggregate; 25% was used for road base and coverings and road stabilization; 13%, for asphaltic concrete aggregate and other bituminous mixtures; 12%, for construction fill; about 1% each, for concrete products, plaster and gunite sands, and snow and ice control; and the remainder was used for golf course maintenance, filtration, railroad ballast, road stabilization, roofing granules, and many other miscellaneous uses.
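A quick sketch turns those percentages back into approximate tonnages (the 327 Mt base and the shares are from the paragraph above; the listed shares sum to roughly 97%, with the remainder covering the miscellaneous uses):

```python
# Approximate 2011 end-use tonnages from the USGS shares quoted above.
REPORTED_MT = 327  # million metric tons with a reported end use

shares = {
    "concrete aggregate": 0.44,
    "road base, coverings, and stabilization": 0.25,
    "asphaltic concrete and bituminous mixtures": 0.13,
    "construction fill": 0.12,
    "concrete products": 0.01,
    "plaster and gunite sands": 0.01,
    "snow and ice control": 0.01,
}

tonnages = {use: REPORTED_MT * s for use, s in shares.items()}
misc_mt = REPORTED_MT * (1 - sum(shares.values()))  # golf courses, ballast, etc.

print(f"concrete aggregate: {tonnages['concrete aggregate']:.0f} Mt")
print(f"miscellaneous: {misc_mt:.1f} Mt")
```

Concrete aggregate alone works out to roughly 144 Mt, nearly half the tonnage with a reported end use.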

The high cost of transportation limits foreign trade to mostly local transactions across international boundaries. U.S. imports and exports were equivalent to less than 1% of domestic consumption.


Boston Globe: the false promise of nuclear power

Preface. This article raises many objections to nuclear power. Theoretically it could be cheaper, but the exact opposite has happened: it keeps getting more expensive. For example, the only new reactors being built in the U.S. now are at Georgia Power's Vogtle plant. Costs were initially estimated at $14 billion; the latest estimate is $21 billion. The first reactors at the plant, built in the 1970s, took a decade longer to build than planned and cost 10 times more than expected. The two under construction now were expected to be running in 2016, but it's now unlikely that they'll be ready even in 2022.

The authors also point out that reactors are vulnerable to catastrophes from extreme weather, earthquakes, volcanoes, tsunamis; from technical failure; and unavoidable human error. Climate change has led to severe droughts that shut down reactors as the surrounding waters become too warm to provide the vital cooling function.

And much more.


Robert Jay Lifton, Naomi Oreskes. 2019. The false promise of nuclear power. Boston Globe.

Commentators from Greenpeace to the World Bank agree that climate change is an emergency, threatening civilization and life on our planet. Any solution must involve the control of greenhouse gas emissions by phasing out fossil fuels and switching to alternative technologies that do not impair the human habitat while providing the energy we require to function as a species.

This sobering reality has led some prominent observers to re-embrace nuclear energy. Advocates declare it clean, efficient, economical, and safe. In actuality it is none of these. It is expensive and poses grave dangers to our physical and psychological well-being. According to the US Energy Information Administration, the average nuclear power generating cost is about $100 per megawatt-hour. Compare this with $50 per megawatt-hour for solar and $30 to $40 per megawatt-hour for onshore wind. The financial group Lazard recently said that renewable energy costs are now "at or below the marginal cost of conventional generation" — that is, fossil fuels — and much lower than nuclear.

In theory these high costs and long construction times could be brought down. But we have had more than a half-century to test that theory, and it appears to have been solidly refuted. Unlike nearly all other technologies, the cost of nuclear power has risen over time. Even its supporters recognize that it has never been cost-competitive in a free-market environment, and its critics point out that the nuclear industry has followed a "negative learning curve." Both the Nuclear Energy Agency and International Energy Agency have concluded that although nuclear power is a "proven low-carbon source of base-load electricity," the industry will have to address serious concerns about cost, safety, and waste disposal if it is to play a significant role in addressing the climate-energy nexus.

But there are deeper problems that should not be brushed aside. They have to do with the fear and the reality of radiation effects. At issue is what can be called “invisible contamination,” the sense that some kind of poison has lodged in one’s body that may strike one down at any time — even in those who had seemed unaffected by a nuclear disaster. Nor is this fear irrational, since delayed radiation effects can do just that. Moreover, catastrophic nuclear accidents, however infrequent, can bring about these physical and psychological consequences on a vast scale. No technological system is ever perfect, but the vulnerability of nuclear power is particularly great. Improvements in design cannot eliminate the possibility of lethal meltdowns. These may result from extreme weather; from geophysical events such as earthquakes, volcanoes, and tsunamis (such as the one that caused the Fukushima event); from technical failure; and from unavoidable human error. Climate change itself works against nuclear power; severe droughts have led to the shutting down of reactors as the surrounding waters become too warm to provide the vital cooling function.

Advocates of nuclear energy invariably downplay the catastrophic events at Fukushima and Chernobyl. They point out that relatively few immediate deaths were recorded in these two disasters, which is true. But they fail to take adequate account of medical projections. The chaos of both disasters and their extreme mishandling by authorities have led to great disparity in estimates. But informed evaluations in connection with Chernobyl project future cancer deaths at anywhere from several tens of thousands to a half-million.

Studies of Chernobyl and Fukushima also reveal crippling psychological fear of invisible contamination. This fear consumed Hiroshima and Nagasaki, and people in Fukushima painfully associated their own experiences with those of people in the atomic-bombed cities. The situation in Fukushima is still far from physically or psychologically stable. This fear also plagues Chernobyl, where there have been large forced movements of populations, and where whole areas poisoned by radiation remain uninhabitable.

The combination of actual and anticipated radiation effects — the fear of invisible contamination — occurs wherever nuclear technology has been used: not only at the sites of the atomic bombings and major accidents, but also at Hanford, Wash., in connection with plutonium waste from the production of the Nagasaki bomb; at Rocky Flats, Colo., after decades of nuclear testing; and at test sites in Nevada and elsewhere after soldiers were exposed to radiation following atomic bomb tests.

Nuclear reactors also raise the problem of nuclear waste, for which no adequate solution has been found despite a half-century of scientific and engineering effort. Even when a reactor is considered unreliable and is closed down, as occurred recently with the Pilgrim reactor in Plymouth, or closes for economic reasons, as at Vermont Yankee, the accumulated waste remains at the site, dangerous and virtually immortal. Under the 1982 Nuclear Waste Policy Act, the United States was required to develop a permanent repository for nuclear waste; nearly 40 years later, we still lack that repository.

Finally there is the gravest of dangers: plutonium and enriched uranium derived from nuclear reactors’ contributing to the building of nuclear weapons. Steps can be taken to reduce that danger by eliminating plutonium as a fuel, but wherever extensive nuclear power is put into use there is the possibility of its becoming weaponized. Of course, this potential weaponization makes nuclear reactors a tempting target for terrorists.

There are now more than 450 nuclear reactors throughout the world. If nuclear power is embraced as a rescue technology, there would be many times that number, creating a worldwide chain of nuclear danger zones — a planetary system of potential self-annihilation. To be fearful of such a development is rational. What is irrational is to dismiss this concern, and to insist, after the experience of more than a half-century, that a “fourth generation” of nuclear power will change everything.

Advocates of nuclear power frequently compare it to carbon-loaded coal. But coal is not the issue; it is already making its way off the world stage. The appropriate comparison is between nuclear and renewable energies. Renewables are part of an economic and energy revolution: They have become available far more quickly, extensively, and cheaply than most experts predicted, and public acceptance is high. To use renewables on the necessary scale, we will need improvements in energy storage, grid integration, smart appliances, and electric vehicle charging infrastructure. We should have an all-out national effort — reminiscent of World War II or, ironically, the making of the atomic bomb — that includes all of these areas to make renewable energies integral to the American way of life. Gas and nuclear will play a transitional role, but it is not pragmatic to bet the planet on a technology that has consistently underperformed and poses profound threats to our bodies and our minds.

Above all, we need to free ourselves of the "nuclear mystique": the magic aura that radiation has had since the days of Marie Curie. We must question the misleading vision of "Atoms for Peace," a vision that has always accompanied the normalization of nuclear weapons. We must free ourselves from the false hope that a technology designed for ultimate destruction could be transmogrified into ultimate life-enhancement.


Rust Power

Preface. This is yet another article about an energy-generation idea that will probably never work out or become commercial. But it gives hope and dreams to ordinary people who think "what a cool idea," and who will never check in ten years to see if it happened. It's soothing to think that scientists are constantly coming up with Something. No need to worry about peak oil and other existential threats.

Now jump forward 100 years, to after peak oil, which began, let's say, sometime between 2020 and 2030. After the population has declined about 90%, the survivors in 2120 will be 80 to 90% farmers. Are they going to have the energy or know-how to run high-tech deposition of 10-nanometer-thick iron films?

Or take the press release "Rice device channels heat into light," in which engineers propose using carbon nanotube films to build a device that recycles waste heat from industry and solar cells.

Really? After a hard day of farming, finding wood, and chopping it to cook dinner and heat their homes, the farmers are going to create nanolayers and nanotubes?

Many are calling the time after peak oil "The Great Simplification," so whatever proposals are made need to be low-tech. Only the unfathomably large abundance of cheap oil has allowed this mirage to appear, and an extra 6 billion people to be born.


David Grossman. July 30, 2019. Could Rust Be a New Source of Renewable Energy? Using kinetic energy, it's got the potential to be more efficient than solar panels. Popular Mechanics.

It's long been known that combinations of metal compounds and salt water can generate electricity.

This has spurred research into whether the kinetic energy of moving salt water could be transformed into electricity. At its best, this electrokinetic effect can generate electricity at around 30 percent efficiency, much higher than solar panels.

It occurred to scientists at Caltech and Northwestern that a really cheap and abundant material to try would be iron rust. But not just any rust: rusty metal at the junkyard has too thick and uneven an oxide layer to use.

The rust required is an extremely thin, evenly spread film made in a laboratory using a very high-tech process called physical vapor deposition, which creates films just 10 nanometers thick, thousands of times thinner than a human hair.
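The "thousands of times thinner" comparison is easy to verify (a rough check; the 70-micrometer hair diameter is my assumption, since human hair ranges from roughly 50 to 100 micrometers):

```python
# Compare the 10-nanometer rust film with a typical human hair.
FILM_M = 10e-9  # 10 nanometers, from the article
HAIR_M = 70e-6  # ~70 micrometers, an assumed typical hair diameter

ratio = HAIR_M / FILM_M
print(f"a hair is ~{ratio:.0f} times thicker than the film")
```

About 7,000 to one, squarely in the "thousands of times thinner" range.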

But don't expect to be driving a boat anytime soon that magically moves across the salty ocean. A more practical application, if this passive electrical energy can ever be made to work, would be buoys floating in the ocean, or perhaps tidal energy.


Nuclear waste disposal drilled deep into earth’s crust

Preface. I suspect one of the greatest tragedies of the decline of oil will be all the nuclear waste left for thousands of low-tech generations in the future. We owe it to them to clean up our mess while we still have excess oil energy to do it. But far more likely, nuclear waste will sit at nuclear reactors, military sites, and wherever nuclear warheads are kept, shortening the lives of anyone who lives near them.

There are two articles below about possible ways to dispose of nuclear waste into deep holes that sound good to me.

Related: Too Hot to Touch: The Problem of High-Level Nuclear Waste by William M. Alley & Rosemarie Alley (Cambridge University Press, 2013) explains how serious the problem is.


Vidal, J. 2019. What should we do with nuclear waste? Ensia.

Richard Muller, professor emeritus of physics at the University of California, Berkeley, and his daughter, co-founder of the company Deep Isolation, gave a demonstration in January 2019 of how nuclear waste could be buried permanently using oil-fracking technology. A 140-pound steel canister (containing no radioactive waste) was placed in a previously drilled borehole deep in the ground.

With this technique, there's no need to excavate expensive tunnels. The Mullers believe that with larger canisters pushed through 300 boreholes up to two miles deep, under a billion tons of rock from which radiation can't possibly leak, most of the US's highest-level nuclear waste could be stored permanently for a third of what storage methods cost now.

Many ideas have been investigated, but most have been rejected as impractical, too expensive, or ecologically unacceptable. They include shooting it into space; isolating it in synthetic rock; burying it in ice sheets; dumping it on the world's most isolated islands; and dropping it to the bottom of the world's deepest oceanic trenches.

Vertical boreholes up to 5,000 meters (16,000 feet) deep have also been proposed (see next article), and some scientists say this option is promising. But there have been doubts, because it would likely be near impossible to retrieve waste from vertical boreholes.

Yet so far, no country has managed to build a deep repository for high-level waste.

“Although almost every nuclear country has, in principle, plans for the eventual burial of the most radioactive waste, only a handful have made any progress and nowhere in the world is there operating an authorized site for the deep geological disposal of the highest level radioactive waste,” says Andrew Blowers, author of The Legacy of Nuclear Power and a former member of the Committee on Radioactive Waste Management (CORWM) set up to advise the U.K. government on how and where to site and store nuclear waste.

“Currently no options have been able to demonstrate that waste will remain isolated from the environment over the tens to hundreds of thousands of years. There is no reliable method to warn future generations about the existence of nuclear waste dumps,” he says.

By law, however, all high-level U.S. nuclear waste must go to Yucca Mountain in Nevada, the designated deep geological repository since 1987, about 90 miles (140 kilometers) northwest of Las Vegas. But the site has met continued legal, regulatory, and constitutional challenges, becoming a political yo-yo since it was identified as a potentially suitable repository. It is fiercely opposed by the Western Shoshone peoples, the state of Nevada, and others.

A massive tunnel was excavated in Yucca Mountain but was never licensed and the site is now largely abandoned — to the frustration of the federal government and the nuclear industry, which has raised more than US$41 billion from a levy on consumer bills to pay for the repository and which must pay for heavy security at their temporary nuclear waste storage sites.

“We need a high-level repository. We are holding waste now at about 121 sites across the U.S.,” says Baker Elmore, director of federal programs at the Nuclear Energy Institute. “This costs the taxpayer US$800 million a year. We have 97 [nuclear] plants operating and the amount of waste is only going to grow. We are not allowing the science to play out here. There is US$41 billion in the government’s nuclear waste fund, and Yucca mountain is scientifically sound. We want a decision. We are going to need more than one repository.”

Cornwall, W. July 10, 2015. Deep Sleep. Boreholes drilled into Earth's crust get a fresh look for nuclear waste disposal. Science Vol. 349: 132-135.

One of the world's biggest radioactive headaches sits in an aging cinderblock building in the desert near Hanford, Washington, at the bottom of a pool of water that glows with an eerie blue light. The nearly 2,000 half-meter-long steel cylinders are filled with highly radioactive cesium and strontium, left over from making plutonium for nuclear weapons. The waste has been described as the most lethal single source of radiation in the United States, after the core of an active nuclear reactor. It could cause a catastrophe if the pool were breached by an unexpectedly severe earthquake, according to the U.S. Department of Energy (DOE), the waste's owner.

For decades, the federal government has been floundering over what to do with the cylinders. They’re too hot to be easily housed with other waste. And the government’s quest to create a single permanent burial ground for all the nation’s high-level nuclear waste, from both military and civilian activities, is in disarray. U.S. high-level nuclear waste:

70,000 metric tons of civilian waste stored at 75 sites
13,000 metric tons of military waste stored at 5 sites

Now, a deceptively simple-sounding solution is emerging: Stick the cylinders in a very deep hole. The approach, known as deep borehole disposal, involves punching a 43-centimeter-wide hole 5 kilometers into hard rock in Earth’s crust. Engineers would then fill the deepest 2 kilometers with waste canisters, plug up the rest with concrete and clay, and leave the waste to quietly decay.

The idea has been around for decades, but not long ago scientists had all but abandoned it. Over the past 5 years, however, as improved drilling technologies converged with the political and technical woes bedeviling other nuclear waste solutions, boreholes have regained their allure. DOE has gone from spending almost nothing on borehole research to planning a full-scale field test, costing at least $80 million. And earlier this year U.S. Energy Secretary Ernest Moniz gave boreholes a dash of publicity during a major speech, mentioning them as a promising way to deal with the cesium and strontium waste at DOE’s Hanford Site nuclear complex.

Boreholes have “been plan B and just missed the boat for years,” says nuclear engineer Michael Driscoll, a retired professor from the Massachusetts Institute of Technology (MIT) in Cambridge and one of the concept’s leading advocates. “Maybe now is the time.”

Many nuclear waste veterans, however, are skeptical. The technical challenges are daunting, they argue, and boreholes won’t end political opposition to building new nuclear waste facilities. “The borehole thing to me is a red herring,” says attorney Geoff Fettus of the Natural Resources Defense Council (NRDC) in Washington, D.C., which supports underground disposal in a shallower mine, but has sued DOE over now abandoned plans to bury the waste inside Nevada’s Yucca Mountain.

Still, even some doubters say that given the current deadlock over nuclear waste, boreholes deserve a second look, at least for those troublesome cylinders at Hanford.

“If we can move forward with disposing of some of the DOE waste, that’s a good thing,” says geoscientist Allison Macfarlane, director of the Center for International Science and Technology Policy at George Washington University in Washington, D.C., and a former chair of the U.S. Nuclear Regulatory Commission. “We have to make some progress somewhere.”

IF ONE PERSON deserves credit for helping revive U.S. borehole research, it’s Driscoll, the retired MIT engineer. Now 80, he has spent more than 25 years quietly exploring the potential for depositing radioactive waste deep in granite bedrock.

Driscoll wasn’t the first to pursue the idea; since the 1950s, boreholes have vied with other nuclear waste disposal options, ranging from the improbable (shoot it into outer space or melt it into an ice sheet) to the mundane (stash it in a shallow mine). Ironically, by the time Driscoll got interested in boreholes, U.S. policymakers thought they had settled the issue. In 1987, after years of fierce debate, Congress approved legislation creating a national repository for high-level nuclear waste in a mine carved into Yucca Mountain, roughly 110 kilometers northwest of Las Vegas, Nevada. With that decision, U.S. funding for borehole research largely evaporated.

Driscoll wasn’t deterred. Boreholes, he thought, had some potential advantages over a single big facility. For example, they could spread the burden of storing waste that no one wanted, because suitable rock is found across the United States. So even as engineers began to plan the Yucca Mountain repository, Driscoll and a handful of graduate students kept churning out papers delving into borehole costs and technical feasibility.

In one scenario they explored, spent fuel rods are placed in slender canisters that are strung together like sausage links, then lowered into the hole. Even very radioactive material would be safe, advocates say, if placed in the right kind of deep rock: ancient crystalline granite with few cracks that might allow radioactive materials to seep into groundwater or reach the surface. The surrounding rock and the salty water found at those depths would dissipate heat generated by the waste. And the top 3 kilometers of each hole would be plugged with a layer cake of cement, gravel, and bentonite clay, which swells when wet. The nation’s entire cache of high-level waste could fit into 700 to 950 boreholes, at a cost of $40 million per hole (not counting transportation), according to recent estimates by scientists at DOE’s Sandia National Laboratories in Albuquerque, New Mexico, who have worked with Driscoll.

Boreholes got their first big break in 2010, when the Obama administration announced that it was abandoning Yucca Mountain after years of delays and resistance from state politicians. The government began looking for other options. That year, Sandia made its first big investment: $734,000 to study how fluid and radioactive particles might behave in a borehole, and how best to seal it. In 2012, a presidential commission added its recommendation for more studies.

Soon after, Moniz became energy secretary. Moniz, a former colleague of Driscoll’s at MIT, had already heard his sales pitch about boreholes. In 2003, the two men served together on a study panel that endorsed “aggressively” studying the technology.

This past March, a White House policy shift opened the door further. Moniz announced that the Obama administration would abandon previous plans to put all high-level waste in one spot and instead would seek separate sites for disposing of commercial nuclear waste—about 85% of the total—and military waste. Moniz called some of the defense waste, including Hanford’s radioactive cylinders, “ideal candidates for deep borehole disposal.”

CESIUM-137 AND STRONTIUM-90 are the hot potatoes of the nuclear waste world, packing a powerful radioactive punch in a relatively short half-life of 30 years. At Hanford, there’s barely enough to fill the back of a pickup truck. Yet it contains more than 100 million curies of radiation, roughly one-tenth the radiation in the core of a large nuclear reactor. And it produces enough heat to power more than 200 homes.
To prevent the tubes from causing trouble, they sit under about 4 meters of water in what resembles a giant swimming pool, emanating a blue glow known as Cherenkov radiation as high-energy particles slam into the water. The 1974 building housing the pool is past its 30-year life span, according to DOE’s inspector general. Bombarded by radiation, the pool’s concrete walls are significantly weakened in places. Some of the tubes have failed and been stuck inside larger containers. In a review of DOE facilities conducted after the 2011 disaster at Japan’s Fukushima Daiichi Nuclear Power Station, the department’s Office of Environmental Management concluded that the Hanford pool had the highest risk of catastrophic failure of any DOE facility, for example in a massive earthquake, according to a report from the department’s inspector general. DOE says it plans to move the pool waste into dry casks for safer storage, but it hasn’t said when.

“It’s an urgent situation and a huge safety risk,” says Tom Carpenter, executive director of the watchdog group Hanford Challenge in Seattle, Washington, which has been critical of DOE’s efforts to secure the waste.

Borehole advocates point out that the Hanford tubes are less than 7 centimeters in diameter, narrow enough to fit down a hole without extensive repackaging. All could fit into a single shaft. Other military waste could also go down a borehole, advocates add. One candidate is plutonium that DOE has extracted from dismantled nuclear weapons. Most of it is currently stored as softball-sized metal spheres at a DOE facility in Texas. In contrast to Hanford’s cesium and strontium, the plutonium is fairly cool, but extremely long-lived, with a half-life of 24,000 years. DOE is considering other options for the plutonium, including turning it into fuel for nuclear reactors or combining it with other nuclear waste and burying it. But boreholes could be an effective way to put it far out of the reach of anyone trying to lay their hands on bombmaking material.

Yet borehole disposal is not as straightforward as it might seem. The Nuclear Waste Technical Review Board, an independent panel that advises DOE, notes a litany of potential problems: No one has drilled holes this big 5 kilometers into solid rock. If a hole isn’t smooth and straight, a liner could be hard to install, and waste containers could get stuck. It’s tricky to see flaws like fractures in rock 5 kilometers down. Once waste is buried, it would be hard to get it back (an option federal regulations now require). And methods for plugging the holes haven’t been sufficiently tested. “These are all pretty daunting technical challenges,” says the board’s chair, geologist Rod Ewing, of Stanford University in Palo Alto, California.

Even if those technical problems are surmounted, boreholes might solve only a fraction of the nation’s waste problem. That’s because much of the high-level waste simply wouldn’t fit down a hole without extensive repackaging. “Due to the physical dimensions of much of the used nuclear fuel, it is not presently considered to be as good of a candidate [for borehole disposal] as the smaller waste forms,” said William Boyle, director of DOE’s Office of Used Nuclear Fuel Disposition Research and Development, in a statement to Science. Spent fuel rods from commercial power reactors, for instance, are often bundled into casks that are about 2 meters across.

Then there’s the same problem that dogged Yucca Mountain: the politics of finding a place to drill the holes. “Let’s just assume [boreholes] could work better than anybody ever imagined,” says Fettus, the NRDC attorney. “You still wouldn’t solve the nut that everyone has been unable to solve”: persuading state and local governments to take on waste from across the nation.

DESPITE THESE CHALLENGES, Sandia scientists are moving forward with a 5-year plan to drill one or more 5-kilometer-deep boreholes. Pat Brady, a Sandia geochemist helping plan the tests, is optimistic. “There’s a lot of institutional experience with drilling holes in the ground,” he says.

The drilling technology is better than ever, he says. Drillers have gained valuable experience boring deep holes into hard rock for geothermal energy, and improved rigs can more easily and accurately drill deep, straight holes. The Sandia team is currently looking for a U.S. site for the first test hole, with a plan to start drilling in the fall of 2016.

Besides seeing if they can cost-effectively drill a hole that’s deep and wide enough, they also want to test methods for determining whether the rock is solid and whether any water near the bottom of the hole is connected to shallow groundwater. Then they will lower a model waste canister down the hole to see if it gets stuck.

Other nations with nuclear waste, including China, are watching. But, for now, the United States is the only country getting ready to drill. “Nobody else has stepped forward,” says Geoff Freeze, a nuclear engineer at Sandia who is overseeing the U.S. experiment. “It kind of fell to us.”


Net Energy Cliff Will Lead to Collapse of Civilization

Figure: the Energy Cliff. The remaining oil is poor quality, and the energy needed to get this often remote oil is so great that more and more energy (blue) goes into oil production itself, leaving far less (the gray area) available to fuel the rest of civilization. Source: 22 June 2009. David Murphy. The Net Hubbert Curve: What Does It Mean? theoildrum.

This is the scariest chart I’ve ever seen.  It shows civilization is likely to crash within the next 20-30 years. I thought the oil depletion curve would be symmetric (blue), but this chart reveals it’s more likely to be a cliff (gray) when you factor in Energy Returned on Energy Invested (EROEI).

The gray represents the actual (net) energy after you subtract out the much higher amount of energy (blue) needed to get and process the remaining nasty, distant, low-quality, and difficult to get at oil.  We’ve already gotten the high-quality, easy oil.
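The shape of that gray region can be reproduced with a toy model: take a symmetric gross-production curve, assume EROEI falls over time, and compute net energy as gross × (1 − 1/EROEI). The curve widths and the EROEI decay rate below are illustrative assumptions, not data from Murphy’s chart; a minimal Python sketch:

```python
import math

def gross(t):
    """Symmetric, Hubbert-style bell curve of gross oil production (peak at t = 0)."""
    return math.exp(-(t / 40) ** 2)

def eroei(t):
    """Illustrative EROEI decline: 100:1 at the peak, falling toward 1:1."""
    return max(1.0, 100 * math.exp(-t / 15))

# Net energy = gross production minus the energy spent producing it
for t in (-40, -20, 0, 20, 40, 60, 70):
    g = gross(t)
    net = g * (1 - 1 / eroei(t))
    print(f"t={t:+d}  gross={g:.2f}  net={net:.2f}")
```

The point of the exercise: net energy collapses toward zero while gross production is still substantial, which is exactly the cliff in the chart.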

Before peaking in 2006, the world production of conventional petroleum grew exponentially at 6.6% per year between 1880 and 1970.  Although Hubbert drew symmetric rising and falling production curves, the declining side may be steeper than a bell curve, because the heroic measures we’re taking now to keep production high (i.e. infill drilling, horizontal wells, enhanced oil recovery methods, etc.), may arrest decline for a while, but once decline begins, it will be more precipitous (Patzek 2007).

Clearly you can’t “grow” the economy without increasing supplies of energy.  You can print all the money or create all the credit you want, but try stuffing those down your gas tank and see how far you go.  Our financial system depends on endless growth to pay back debt, so when it crashes, there’s less credit available to finance new exploration and drilling, which guarantees an oil crisis further down the line.

Besides financial limits, there are political limits, such as wars over remaining resources.

For a little while you can fix broken infrastructure and still plant, harvest, and distribute food; maintain and operate drinking water and sewage treatment plants; and pump water from running-dry aquifers like the Ogallala, which grows a quarter of our food. But at some point there won’t be enough energy left to keep all of the food systems and infrastructure going.

The entire world is competing for the steep grey area of oil that’s left, most of which is in the Middle East.

Hubbert thought nuclear energy would fill in for fossil fuels

Gail Tverberg at ourfiniteworld writes “Hubbert only made his forecast of a symmetric downslope in the context of another energy source fully replacing oil or fossil fuels, even before the start of the decline. For example, looking at his 1956 paper, Nuclear Energy and the Fossil Fuels, we see nuclear taking over before the fossil fuel decline”.

The Power of Exponential Growth: Every 10 years we have burned more oil than in all previous decades combined.


Another way of looking at this is what systems ecologists call Energy Returned on Energy Invested (EROEI). In the USA in 1930 an “investment” of the energy in 1 barrel of oil produced another 100 barrels of oil, or an EROEI of 100:1.   That left 99 other barrels to use to build roads, bridges, factories, homes, libraries, schools, hospitals, movie theaters, railroads, cars, buses, trucks, computers, toys, refrigerators – any object you can think of, and 500,000 products use petroleum as a feedstock (see point #6).  By 1970 EROEI was down to 30:1 and in 2000 it was 11:1 in the United States.

Charles A. S. Hall, who has studied EROEI for most of his career and published in Science and other top peer-reviewed journals, believes that society needs an EROEI of at least 12 or 13:1 to maintain our current level of civilization.

Because we got the easy oil first, we have already used up 73% of the net energy that will ever be available, even though roughly half of the reserves remain: the remaining half requires so much energy to extract that it yields far less net energy.
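The EROEI figures above translate into a net-energy share via the simple relation 1 − 1/EROEI: invest one barrel, get back EROEI barrels, keep the rest. A quick Python check of the numbers in this section:

```python
def net_fraction(eroei):
    """Share of gross energy left for society: 1 barrel invested yields `eroei` barrels."""
    return 1 - 1 / eroei

for year, ratio in [(1930, 100), (1970, 30), (2000, 11)]:
    print(f"{year}: EROEI {ratio}:1 leaves {net_fraction(ratio):.1%} as net energy")

# Near Hall's 12-13:1 threshold the surplus is still large, but it
# falls off a cliff as EROEI approaches 1:1.
print(f"EROEI 2:1 leaves {net_fraction(2):.0%}; EROEI 1.1:1 leaves {net_fraction(1.1):.0%}")
```

This is why the decline in EROEI from 100:1 to 11:1 felt gradual: the net share only slips from 99% to 91%. The damage is concentrated at the low end.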

Some other reasons why the cliff may be even steeper

It’s not our oil

Nearly all of the good, high quality, cheap sweet oil is in the Middle East.  Most of the remaining oil will need vast amounts of fresh water to get it out, but fresh water is very limited in these countries.  The refineries and other extraction infrastructure are also easy targets for damage or destruction by terrorists or in wars.

Export Land Model

Oil producing countries are using more and more of their own (declining) oil as population and industry grows within their own nation, and they too need to use more and more energy to get at their difficult oil.  This results in a similar chart to the net energy cliff — suddenly there will hardly be any oil to buy on the world markets.  See Jeffrey Brown’s article “The Export Capacity Index” (one of his statistics is that at the current rate of increasing imports of oil in India and China, these 2 countries alone would be importing 100% of available oil within 18 years).
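The Export Land Model’s arithmetic is easy to sketch. With hypothetical numbers (a country producing 2 Mb/d and consuming 1 Mb/d, production falling 5% a year while domestic consumption grows 2.5% a year; figures chosen for illustration, close to Brown’s original example), net exports vanish long before production does:

```python
# Hypothetical exporter: production falls 5%/yr, domestic consumption grows 2.5%/yr
production, consumption = 2.0, 1.0  # million barrels per day
year = 0
while production > consumption:
    year += 1
    production *= 0.95
    consumption *= 1.025
    exports = max(0.0, production - consumption)
    print(f"year {year}: exports = {exports:.2f} Mb/d")
# By the time the loop ends, production has fallen only ~40%,
# but net exports are already gone.
```

The asymmetry is the whole point: importers see the exportable surplus, not the production curve, and the surplus shrinks at the combined rate of falling output and rising domestic demand.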


As we improve our technology to get at the remaining oil, we make the cliff on the other side even steeper as we get oil now that would have been available to future generations.

Investments won’t be made because the payback times will lengthen

Since what remains is increasingly difficult and expensive to find, develop and extract, investment payback periods lengthen, eventually to impossibly long periods, or to periods that approach the useful life of the capital investment (effectively the same financial limit as an EROEI of 1). Which means it doesn’t matter how much might theoretically be underground; the only thing that matters is how much is actually economically feasible to recover, and that is going to be considerably less than 100% of what is theoretically and technically recoverable.

Energy is becoming impossibly expensive, as you can see in these photos of the tallest structure ever moved by mankind, a Norwegian offshore natural gas platform.

Exponential growth of population

This makes whatever oil we have left run out even sooner.

Less oil obtained than could have been

Projects maximize return on investment rather than recovery of every last possible drop of oil. Making money matters so much that a lot of offshore Gulf oil that could have been recovered with slower extraction remains in the ground, because fields are pumped as fast as possible for profit; that’s how our financial system operates: short-term gratification.  But hey!  That’s less carbon dioxide and global warming, so in a totally unintended orgy of insatiable greed, the “there are no limits to growth” billionaires have ironically helped save the planet.

Flow Rate: An 8% or higher decline rate is likely

Many energy company CEOs and other experts expect the average worldwide rate of decline to reach 8% or higher.   If an 8% decline starts from 30 billion barrels a year in 2015, we’d have only half as much oil in 8 years.  That’s too fast for civilization to cope with.

  • 2016 = 27.6
  • 2017 = 25.4
  • 2018 = 23.4
  • 2019 = 21.5
  • 2020 = 19.8
  • 2021 = 18.2
  • 2022 = 16.7
  • 2023 = 15.4 (half of what we had 8 years ago!)
  • 2024 = 14.2
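The series above is just compound decline, multiplying by 0.92 each year from the assumed 30 billion barrels in 2015; a minimal check of the arithmetic:

```python
production = 30.0  # billion barrels per year in 2015 (the starting point assumed above)
for year in range(2016, 2025):
    production *= 0.92  # an 8% annual decline compounds
    print(year, round(production, 1))
# After 8 years (2023) production is 15.4, roughly half the 2015 level,
# since 0.92 ** 8 is about 0.51.
```

The halving time for any steady decline rate r is ln(2)/r, so 8% a year halves production in under 9 years.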

The decline rate could be less than 8% for a while, or more than 8% if an economic crash prevents the funding of future projects, wars interfere with oil production, the technology to drill for arctic oil isn’t figured out within a few years (we can’t do it yet safely), and so on.

Oil Chokepoints

There are several critical areas of the world where the flow of oil could be stopped by war or terrorism.

Wars, cyber-attacks, nuclear war, social chaos

By 2024, if not sooner, the unequal distribution of the remaining oil, starvation-driven riots and pillaging, and collapsing economies will have triggered war(s), massive migrations, and social chaos.

Shale oil and natural gas cannot prevent the cliff.  Martin Payne explains: “Shale oil plays give us a temporary reprieve from what Bob Hirsch called the severe consequences of not taking enough action proactively with respect to peak oil. Without unconventional oil, what we wind up with is essentially Hubbert’s cliff instead of Hubbert’s rounded peak.” But this won’t last: “Conventional oil, which was found in huge quantities in giant fields in the 40s and 50s, had huge reserves and high porosities and permeabilities, meaning those fields would flow at very high rates for decades. This is in contrast to a relative few shale oil plays, which have very low porosity and permeability and must be hydraulically fractured to flow. Conventional oil is just a different animal than unconventional oil; some unconventional oil wells have high initial rates of production, but all of these wells have high decline rates. Hubbert anticipated a lot of incremental efforts by the industry to make the right-hand, or decline, side of his curve more gradual rather than a sharp drop” (Andrews).

If any of these wars involve nuclear bombs, then at least a billion people will die.

The unrest has certainly curtailed the ability of oil companies to drill.

Even farmers may stop growing crops once city residents and roaming militias harvest whatever is grown (i.e. Africa as described in Parenti’s “Tropic of Chaos: Climate Change and the New Geography of Violence”).

Cyberattacks from China, Russia, and elsewhere will have brought down the electric grid in the USA to prevent US military forces from trying to grab the remaining Saudi and Iraqi oil. The armed forces will be too busy trying to maintain order in the USA to venture abroad; nor could they go even if they wanted to, because Chinese and Russian drone attacks will have destroyed all of the United States oil refineries (and we will have retaliated, so they won’t be able to refine oil either).  We will also have cyberattacked their electric grids. Most major cities will have no sewage treatment or clean water. Nuclear power plants will be melting down.

There’s no substitute for oil  

Coal — why it can’t easily substitute for oil

“Peak is dead” and the future of oil supply:

Steve Andrews (ASPO):  You mention in your paper that natural gas liquids can’t fully substitute for crude oil because they contain about a third less energy per unit volume and only one-third of that volume can be blended into transportation fuel.  In terms of the dominant use of crude oil—in the transportation sector—how significant is the ongoing increase in NGLs vs. the plateau in crude oil?

Richard G. Miller: The role of NGLs is a bit curious.  You can run a car on it if you want, but it’s not a drop-in substitute for liquid oil.  You can convert vehicle engines in fleets to run on liquefied gas; it’s probably better thought of as a fleet fuel.  But it’s not a substitute for oil for my car.  By and large, raising NGL production is not a substitution to making up a loss of liquid crude.

The only way I can see this being prevented, or the end of oil delayed a few years, is if a government has already developed effective bio-weapons and doesn’t care if its own population suffers as well.

I feel crazy to have just written this very dire paragraph with just a few of the potential consequences, but the “shark-fin” curve made me do it!

Even though I’ve been reading and writing about peak everything since 2001, and the rise and fall of civilizations for 40 years, it is hard for me to believe a crash could happen so fast.  It is hard to believe there could ever be a time that isn’t just like now.  That there could ever be a time when I can’t hop into my car and drive 10,000 miles.

I can imagine the future all too well, but it is so hard to believe it.

Believe it.


Andrews, Steve. 29 July 2013. Interview with Martin Payne—Is Peak Oil Dead? ASPO-USA Peak Oil Review.

Patzek, T. 2007 How can we outlive our way of life? 20th round table on sustainable development of fuels, OECD headquarters.



Carbon capture could require 25% of all global energy

Preface.  This is clearly a pipedream. Surely the authors know this, since they say that the energy needed to run direct air capture machines in 2100 is up to 300 exajoules each year. That’s more than half of global energy consumption today.  It’s equivalent to the current annual energy demand of China, the US, the EU and Japan combined.  It is equal to the global supply of energy from coal and gas in 2018.

That’s a showstopper. This CO2 chomper isn’t going anywhere.  It simply requires too much energy and raw materials, and an astoundingly large-scale, rapid deployment growing 30% a year, to be of any use.

Reaching 30 Gt of CO2 capture per year – a similar scale to current global emissions – would mean building some 30,000 large-scale DAC factories. For comparison, there are fewer than 10,000 coal-fired power stations in the world today.
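The implied plant size is worth making explicit; the per-plant figure below is simple division from the two numbers above, not a quote from the study:

```python
target_capture = 30e9   # tonnes of CO2 per year, similar to current global emissions
num_plants = 30_000     # large-scale DAC factories, per the comparison above
per_plant = target_capture / num_plants
print(f"each plant must capture {per_plant / 1e6:.0f} Mt of CO2 per year")
```

In other words, every one of those 30,000 factories would have to be a megatonne-scale facility, larger than almost anything yet built.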

The cement and steel used in DACCS facilities would require a great deal of energy and would generate CO2 emissions that need to be subtracted from whatever is sequestered.

Nor can the CO2 be stored in carbon capture sorbents – these are between the research and demonstration stages, far from commercial, and subject to degradation, which would lead to high operational and maintenance costs.  Their manufacture also releases chemical pollutants that need to be managed, adding even more to the energy used. Plus, sorbents can require a great deal of high heat and fossil fuel inputs, possibly pushing the “quarter of global energy” even higher.

As far as I can tell, the idea of sorbents, which are far from commercial and very expensive to produce, is only being proposed because there isn’t enough geological storage to put the CO2 in.

By the time all of the many technical barriers were overcome, oil would probably be declining, rendering the point of a DACCS facility moot.  A decline of 4-8% a year of global oil production will reduce CO2 emissions far more than DACCS.  Within two decades we’ll be down to 10% of the oil and emissions we once had.



Evans, S. 2019. Direct CO2 capture machines could use ‘a quarter of global energy’ in 2100. Carbon Brief.

This article is a summary of: Realmonte, G. et al. 2019. An inter-model assessment of the role of direct air capture in deep mitigation pathways, Nature Communications.

Machines that suck CO2 directly from the air using direct air capture (DAC) could cut the cost of meeting global climate goals, a new study finds, but they would need as much as a quarter of global energy supplies in 2100 to limit warming to 1.5C or 2C above pre-industrial levels.

But the study also highlights the “clear risks” of assuming that DAC will be available at scale, with global temperature goals being breached by up to 0.8C if the technology then fails to deliver.

This means policymakers should not see DAC as a “panacea” that can replace immediate efforts to cut emissions, one of the study authors tells Carbon Brief, adding: “The risks of that are too high.”

DAC should be seen as a “backstop for challenging abatement” where cutting emissions is too complex or too costly, says the chief executive of a startup developing the technology. He tells Carbon Brief that his firm nevertheless “continuously push back on the ‘magic bullet’ headlines”.

Negative emissions

The 2015 Paris Agreement set a goal of limiting human-caused warming to “well below” 2C and an ambition of staying below 1.5C. Meeting this ambition will require the use of “negative emissions technologies” to remove excess CO2 from the atmosphere, according to the Intergovernmental Panel on Climate Change (IPCC).

This catch-all term of negative emissions technologies covers a wide range of approaches, including planting trees, restoring peatlands and other “natural climate solutions”. Model pathways rely most heavily on bioenergy with carbon capture and storage (BECCS). This is where biomass, such as wood pellets, is burned to generate electricity and the resulting CO2 is captured and stored. The significant potential role for BECCS raises a number of concerns, with land areas up to five times the size of India devoted to growing the biomass needed in some model pathways.

Another alternative is direct air capture, where machines are used to suck CO2 out of the atmosphere. If the CO2 is then buried underground, the process is sometimes referred to as direct air carbon capture and storage (DACCS).

Today’s new study explores how DAC could help meet global climate goals with “lower costs”, using two different integrated assessment models (IAMs). Study author Dr Ajay Gambhir, senior research fellow at the Grantham Institute for Climate Change at Imperial College London, explains to Carbon Brief:

“This is the first inter-model comparison…[and] has the most detailed representation of DAC so far used in IAMs. It includes two DAC technologies, with different energy inputs and cost assumptions, and a range of energy inputs including waste heat. The study uses an extensive sensitivity analysis [to test the impact of varying our assumptions]. It also includes initial analysis of the broader impacts of DAC technology development, in terms of material, land and water use.”

The two DAC technologies included in the study are based on different ways to adsorb CO2 from the air, which are being developed by a number of startup companies around the world.

One, typically used in larger industrial-scale facilities such as those being piloted by Canadian firm Carbon Engineering, uses a solution of hydroxide to capture CO2. This mixture must then be heated to high temperatures to release the CO2 so it can be stored and the hydroxide reused. The process uses existing technology and is currently thought to have the lower cost of the two alternatives.

The second technology uses amine adsorbents in small, modular reactors such as those being developed by Swiss firm Climeworks. Costs are currently higher, but the potential for savings is thought to be greater, the paper suggests. This is due to the modular design that could be made on an industrial production line, along with lower temperatures needed to release CO2 for storage, meaning waste heat could be used.

Delayed cuts

Overall, despite “huge uncertainty” around the cost of DAC, the study suggests its use could allow early cuts in global greenhouse gas emissions to be somewhat delayed, “significantly reducing climate policy costs” to meet stringent temperature limits.

Using DAC means that global emissions in 2030 could remain at higher levels, the study says, with much larger use of negative emissions later in the century.  

The use of DAC in some of the modelled pathways delays the need to cut emissions in certain areas. The paper explains: “DACCS allows a reduction in near term mitigation effort in some energy-intensive sectors that are difficult to decarbonise, such as transport and industry.”

Steve Oldham, chief executive of DAC startup Carbon Engineering, says he sees this as the key purpose of CO2 removal technologies, which he likens to other “essential infrastructure” such as waste disposal or sewage treatment.

Oldham tells Carbon Brief that while standard approaches to cutting CO2 remain essential for the majority of global emissions, the challenge and cost may prove too great in some sectors. He says:

“DAC and other negative emissions technologies are the right solution once the cost and feasibility becomes too great…I see us as the backstop for challenging abatement.”

Comparing costs

Even though DAC may be relatively expensive, the model pathways in today’s study still see it as much cheaper than cutting emissions from these hard-to-tackle sectors. This means the models deploy large amounts of DAC, even if its costs are at the high end of current estimates.

It also means the models see pathways to meeting climate goals that include DAC as having lower costs overall (“reduce[d]… by between 60 to more than 90%”). Gambhir tells Carbon Brief:

“Deploying DAC means less of a steep mitigation pathway in the near-term, and lowers policy costs, according to the modelled scenarios we use in this study.”

He adds:

“Large-scale deployment of DAC in below-2C scenarios will require a lot of heat and electricity and a major manufacturing effort for production of CO2 sorbent. Although DAC will use less resources such as water and land than other NETs [such as BECCS], a proper full life-cycle assessment needs to be carried out to understand all resource implications.”

Deployment risk

There are also questions as to whether this new technology could be rolled out at the speed and scale envisaged, with expansion of up to 30% each year and deployment reaching 30 GtCO2/yr towards the end of the century. This is a “huge pace and scale”, Gambhir says, with the rate of deployment being a “key sensitivity” in the study results.

Prof Jennifer Wilcox, professor of chemical engineering at Worcester Polytechnic Institute, who was not involved with the research, says that this rate of scale-up warrants caution. She tells Carbon Brief:

“Is the rate of scale-up even feasible? Typical rules of thumb are increase by an order of magnitude per decade [growth of around 25-30% per year]. [Solar] PV scale-up was higher than this, but mostly due to government incentives…rather than technological advances.”

If DAC were to be carried out using small modular systems, then as many as 30m might be needed by 2100, the paper says. It compares this number to the 73m light vehicles that are built each year.
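Wilcox’s rule of thumb is easy to check: growing by an order of magnitude per decade corresponds to a compound rate of 10^(1/10) − 1, roughly 26% per year. A minimal sketch (this just restates the arithmetic of the rule; nothing here is drawn from the paper’s model runs):

```python
# "An order of magnitude per decade" expressed as a compound annual growth rate
rate = 10 ** (1 / 10) - 1              # ≈ 0.259, i.e. ~26% per year
print(f"implied annual growth: {rate:.1%}")

# Sanity check: sustaining that rate for ten years multiplies capacity tenfold
factor = (1 + rate) ** 10
print(round(factor, 6))                # → 10.0
```

This is why the modelled expansion of up to 30% per year sits right at the upper edge of historical experience.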

The study argues that expanding DAC at such a rapid rate is comparable to the speed with which newer electricity generation technologies such as nuclear, wind and solar have been deployed.

The modelled rate of DAC growth is “breathtaking” but “not in contradiction with the historical experience”, Bauer says. This rapid scale-up is also far from the only barrier to DAC adoption.

The paper explains: “[P]olicy instruments and financial incentives supporting negative emission technologies are almost absent at the global scale, though essential to make NET deployment attractive.”

Carbon Engineering’s Oldham agrees that there is a need for policy to recognise negative emissions as unique and different from standard mitigation. But he tells Carbon Brief that he remains “very, very confident” in his company’s ability to scale up rapidly.

(Today’s study includes consideration of the space available to store CO2 underground, finding this not to be a limiting factor for DAC deployment.)

Breaching limits

The paper says that the challenges to scale-up and deployment on a huge scale bring significant risks, if DAC does not deliver as anticipated in the models. Committing to ramping up DAC rather than cutting emissions could mean locking the energy system into fossil fuels, the authors warn.

This could risk breaching the Paris temperature limits, the study explains:

“The risk of assuming that DACCS can be deployed at scale, and finding it to be subsequently unavailable, leads to a global temperature overshoot of up to 0.8C.”

Gambhir says the risks of such an approach are “too high”:

“Inappropriate interpretations [of our findings] would be that DAC is a panacea and that we should ease near-term mitigation efforts because we can use it later in the century.”

Bauer agrees:

“Policymakers should not make the mistake to believe that carbon removals could ever neutralise all future emissions that could be produced from fossil fuels that are still underground. Even under pessimistic assumptions about fossil fuel availability, carbon removal cannot and will not fix the problem. There is simply too much low-cost fossil carbon that we could burn.”

Nonetheless, Prof Massimo Tavoni, one of the paper’s authors and the director of the European Institute on Economics and the Environment (EIEE), tells Carbon Brief that “it is still important to show the potential of DAC – which the models certainly highlight – but also the many challenges of deploying at the scale required”.

The global carbon cycle poses one final – and underappreciated – challenge to the large-scale use of negative emissions technologies such as DAC: ocean rebound. This is because the amount of CO2 in the world’s oceans and atmosphere is in a dynamic and constantly shifting equilibrium.

This equilibrium means that, at present, oceans absorb a significant proportion of human-caused CO2 emissions each year, reducing the amount staying in the atmosphere. If DAC is used to turn global emissions net-negative, as in today’s study, then that equilibrium will also go into reverse.

As a result, the paper says as much as a fifth of the CO2 removed using DAC or other negative emissions technologies could be offset by the oceans releasing CO2 back into the atmosphere, reducing their supposed efficacy.
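The rebound arithmetic is simple: if the oceans return up to a fifth of the CO2 that DAC withdraws, net removal is only about 80% of the gross figure. A toy calculation (the 20% fraction is the paper’s upper estimate; 30 GtCO2/yr is the end-of-century deployment level mentioned above):

```python
def net_removal(gross_gtco2, rebound_fraction=0.2):
    """Net drawdown after ocean outgassing offsets part of the removal."""
    return gross_gtco2 * (1 - rebound_fraction)

# Removing 30 GtCO2/yr gross nets only ~24 GtCO2/yr from the atmosphere
print(round(net_removal(30), 1))  # → 24.0
```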


Himalayan glaciers that supply water to a billion people are melting fast

Preface. The Himalayan glaciers that supply water to a billion people are melting fast: nearly 30% of their ice mass has already been lost since 1975.

Adding to the crisis are the 400 dams under construction or planned for Himalayan rivers in India, Pakistan, Nepal, and Bhutan to generate electricity and for water storage.  The dams’ reservoirs and transmission lines will destroy biodiversity, thousands of houses, towns, villages, fields, 660 square miles of forests, and even parts of the highest highway of the world, the Karakoram highway. The dam projects are at risk of collapse from earthquakes in this seismically active region and of breach from flood bursts from glacial lakes upstream. Dams also threaten to intensify flooding downstream during intense downpours when reservoirs overflow (IR 2008, Amrith 2018).

Since the water flows to 16 nations, clearly these dams could cause turmoil and even war if river flows are cut off from downstream countries.  Three of these nations, India, Pakistan, and China, have nuclear weapons.

It’s already happening. After a terrorist attack that killed 40 Indian police officers in Kashmir, India decided to retaliate by cutting off some river water that flows on to Pakistan, “adding an extra source of conflict between two nuclear-armed neighbors”. Pakistan is one of the most water-stressed countries in the world, with seriously depleted underground aquifers and less storage behind its two largest dams due to silt (Johnson 2019).



Wu, K. 2019. Declassified spy images show Earth’s ‘Third Pole’ is melting fast.  Accelerating ice melt in the Himalayas may imperil up to a billion people in South Asia who rely on glacier runoff for drinking water and more.

According to a study published today in the journal Science Advances, rising temperatures in the Himalayas have melted nearly 30% of the region’s total ice mass since 1975.

These disappearing glaciers imperil the water supply of up to a billion people throughout Asia.

Once nicknamed Earth’s ‘Third Pole’ for its impressive cache of snow and ice, the Himalayas may now have a bleak future ahead. Four decades of satellite data, including recently declassified Cold War-era spy film, suggest these glaciers are currently receding twice as fast as they were at the end of the 20th century.

Several billion tons of ice are sloughing off the Himalayas each year without being replaced by snow. That spells serious trouble for the peoples of South Asia, who depend on seasonal Himalayan runoff for agriculture, hydropower, drinking water, and more. Melting glaciers could also prompt destructive floods and threaten local ecosystems, generating a ripple effect that may extend well beyond the boundaries of the mountains’ warming peaks.

The study’s sobering findings come as the result of a massive compilation of data across time and space. While previous studies have documented the trajectories of individual glaciers in the Himalayas, the new findings track 650 glaciers that span a staggering 1,250-mile-wide range across Nepal, Bhutan, India, and China. They also draw on some 40 years of satellite imagery, which the scientists stitched together to reconstruct a digital, three-dimensional portrait of the glaciers’ changing surfaces—almost like an ultra-enhanced panorama.

When a team of climatologists analyzed the time series, they found a stark surge in glacier shrinkage. Between 1975 and 2000, an average of about 10 inches of ice were shed from the glaciers each year. Post-Y2K, however, the net loss doubled to around 20 inches per year—a finding in keeping with accelerated rates of warming around the globe.

While previous studies have had difficulty disentangling the relative contributions of rising temperatures, ice-melting pollutants, and reduced rainfall to the boost in glacier melt, the new analysis finds that the latter two simply aren’t enough to explain the alarming drop in ice mass in recent years.


Amrith, S. S. 2018. The race to dam the Himalayas. Hundreds of big projects are planned for the rivers that plunge from the roof of the world. New York Times.

IR. 2008. Mountains of concrete: Dam building in the Himalayas. International Rivers.

Johnson, K. 2019. Are India and Pakistan on the verge of a water war? Foreign Policy.


Billionaire apocalypse bunkers

Vivos Bunker
Vivos built a 575-bunker compound in South Dakota that’s almost the size of Manhattan.


There are many reasons why people might want a bunker, but peak oil, peak phosphorus, peak everything really, and limits to growth were not among the reasons given. James Howard Kunstler’s book, “The Long Emergency: Surviving the End of Oil, Climate Change, and Other Converging Catastrophes of the 21st Century”, could have been titled The Permanent Emergency. Once oil begins to decline globally in earnest, within 20 years there’ll be 10% or less of it left (unless wars end oil sooner and faster than that), and oil and other fossil fuels are what allowed humans to expand from about 1 billion to 7.8 billion today.

So, when people emerge from their bunkers, they’d better know how to farm and be living on arable land with adequate rainfall.

You can’t run from the “end of the world” to a bunker. It’s just a very fancy tombstone.

Most of what follows comes from the website “The Backup Plan For Humanity. Secure your space in a Vivos underground shelter to survive virtually any catastrophe“. It’s fun to poke around on and has many photos not shown below.



Bendix, A. 2019. 45 unreal photos of ‘billionaire bunkers’ that could shelter the superrich during an apocalypse.

A lot has changed since December 21, 2012, when around 10% of people falsely believed the world would end. Billionaires aren’t the only ones prepping for doomsday, but they could be the most prepared if the world gets hit by an asteroid or nuclear missile. In recent years, companies have built “billionaire bunkers” that cater to the apocalyptic fears of the superrich.

The effects of climate change have become more frequent and severe, threatening vulnerable areas with floods, hurricanes, and extreme heat. Machines have become more intelligent, leading some to worry about a technological overthrow of society. And the possibility of global nuclear warfare looms even larger, with North Korea continuing to advance its nuclear weapons program.

Predicting one of these unlikely doomsday scenarios may be impossible, but planning for them isn’t if you’re a member of the 1%. Take a look at the “billionaire bunkers” that could house the super rich during an apocalypse.

Amid growing threats to the safety of our planet, a small group of elites — namely, Silicon Valley execs and New York City financiers — have started to discuss their doomsday plans over dinner.

In some cases, these conversations have prompted wealthy individuals to purchase underground bunkers to shelter themselves during a disaster.

“Billionaire bunkers” don’t need to be built from scratch.

A few companies now manufacture luxury doomsday shelters that cater to super rich clientele. The Vivos Group, a company based in Del Mar, California, is building a “global underground shelter network” for high-end clients.

Their fanciest compound, known as Europa One, is located beneath a 400-foot-tall mountain in the village of Rothenstein, Germany.

The shelter was once a storage space for Soviet military equipment during the Cold War, according to the company’s website.

In exchange for purchasing a bunker, residents are provided with a full-time staff and security team.

This property is designed to withstand a close-range nuclear blast, airline crash, earthquake, flood, or military attack.

A typical living quarters has two floors. On the lower level (shown below), there are multiple bedrooms, a pool table, and a movie theater.

Each family is allotted 2,500 square feet, but has the option to extend their residence to 5,000 square feet.

This sample movie theater can fit a family of five.

The bunker includes communal spaces, such as a pub for tossing back a few while the world comes to an end.

Or a chapel for sending prayers to the rest of humanity.

When doomsday arrives, the company envisions residents arriving in Germany by car or plane. From there, Vivos will transport them via helicopter to their sheltered homes.

The full underground structure stretches nearly 230,000 square feet.

There are only 34 private living quarters, so space is limited.

But the price will likely preclude most people from buying. Private apartments start at $2.5 million and fully furnished, semi-private suites start at around $40,000 a person.

If billionaires can’t find space at Europa One, there’s also xPoint, a compound in South Dakota that’s almost the size of Manhattan.

xPoint was originally built by Army engineers.

The compound’s location near the Black Hills of South Dakota makes it relatively safe from flooding and nuclear targets, according to Vivos.

xPoint comes with its own electrical and water systems, so residents can survive for at least a year without having to go outside.

The entire compound consists of 575 bunkers, each with enough space for 10 to 24 people.

Each bunker is around 2,200 square feet.

The bunkers start at $35,000, but residents will also have to pay $1,000 in annual rent. That’ll likely require some savings when it’s unsafe to go outdoors.

The company has yet another shelter in Indiana, which can house just 80 people.

The Vivos website likens the shelter to “a very comfortable 4-Star hotel.”

The communal living room has 12-foot-high ceilings.

Residents aren’t expected to bring anything other than clothing and medication.

Vivos provides the rest, including laundry facilities, food, toiletries, and linens.

There’s even exercise equipment and pet kennels.

The shelter is co-owned by its members, which makes it slightly more affordable than the company’s other models.

Vivos claims on its website that the shelter is safe from tsunamis, earthquakes, and nuclear attacks.

In a statement, the company said interest in its shelters has “skyrocketed over the past few years.” The website says that “few” spaces remain across its network of bunkers.

Vivos members aren’t all elite one-percenters, the company said, “but rather well-educated, average people with a keen awareness of the current global events.”

The Survival Condo Project, on the other hand, caters exclusively to the superrich.

The company’s 15-story facility, fashioned from a retired missile silo, cost $20 million to build.

In an interview with the New Yorker, the company’s CEO, Larry Hall, said his facility represented “true relaxation for the ultra-wealthy.”

Source: The New Yorker

The facility only has room for about a dozen families, or 75 people in total.

The facility is somewhere north of Wichita, Kansas, but its exact location is secret.

A single unit is relatively small — around 1,820 square feet.

As of last year, units were advertised for $3 million each. The company also sells half-floor units for around $1.5 million.

All floors are connected by a high-speed elevator.

Homeowners can venture outside, but there are SWAT team-style trucks available to pick them up within 400 miles.

Under a crisis scenario, residents have to secure permission from the company’s board of directors before leaving the premises.

But conditions inside are far from unbearable. The facility comes with a gym, game center, dog park, classroom, and 75-foot swimming pool.

There’s even a rock wall. Doomsday has never sounded so luxurious.


How safe are utility-scale energy storage batteries?

Preface.  Airplanes can be forced to make an emergency landing if even a small external battery pack, like the kind used to charge cell phones, catches on fire (Mogg 2019).

If a small battery pack can force an airplane to land, imagine the conflagration a utility-scale storage battery might cause.

A lithium-ion battery designed to store just one day of U.S. electricity generation (11 TWh) to balance solar and wind power would be huge. Using data from the Department of Energy (DOE/EPRI 2013) energy storage handbook, I calculated that a utility-scale lithium-ion battery capable of storing 24 hours of U.S. electricity generation would cost $11.9 trillion, take up 345 square miles, and weigh 74 million tons.

And at least 6 weeks of energy storage is needed to keep the grid up during times when there’s no sun or wind. This storage would have to come mainly from batteries, because there are very few places to put Compressed Air Energy Storage (CAES) or Pumped Hydro energy Storage (PHS, which also has a very low energy density), or Concentrated Solar Power with thermal energy storage. Currently natural gas is the main “energy storage”, always available to step in quickly when the wind dies and the sun goes down, as well as to provide power around the clock with help from coal, nuclear, and hydropower.
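These totals are easier to grasp per kilowatt-hour. A quick back-of-envelope script (the per-kWh figures below are derived from the totals quoted above, not taken directly from the DOE handbook, and short tons of about 907 kg are assumed):

```python
# One day of US electricity generation stored in lithium-ion, per the text
energy_kwh = 11e9             # 11 TWh = 11 billion kWh
cost_usd   = 11.9e12          # $11.9 trillion
weight_kg  = 74e6 * 907.185   # 74 million short tons in kilograms

print(f"${cost_usd / energy_kwh:,.0f} per kWh stored")    # → $1,082 per kWh stored
print(f"{weight_kg / energy_kwh:.1f} kg per kWh stored")  # → 6.1 kg per kWh stored

# Scaling one day of storage up to the six weeks argued for above
print(11 * 7 * 6, "TWh")                                  # → 462 TWh
```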

Storing large amounts of energy, whether in large rechargeable batteries or smaller disposable ones, can be inherently dangerous. The causes of lithium battery failure include puncture, overcharge, overheating, short circuit, internal cell failure, and manufacturing deficiencies. Nearly all of the utility-scale batteries now on the grid or in development are massive versions of the same lithium-ion technology that powers cellphones and laptops. If the batteries get too hot, a fire can start and trigger a phenomenon known as thermal runaway, in which the fire feeds on itself and is nearly impossible to stop until it consumes all the available fuel.
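The runaway mechanism can be sketched with the classic Semenov picture: reaction heat grows exponentially with temperature while cooling grows only linearly, so above some threshold the cell heats itself faster than it can shed heat. The coefficients below are made up purely for illustration and do not model any particular cell chemistry:

```python
import math

def simulate(t_start, ambient=25.0, cooling=0.5, dt=1.0, steps=600):
    """Euler integration of dT/dt = A*exp(k*T) - cooling*(T - ambient)."""
    a, k = 0.01, 0.05          # arbitrary heat-generation coefficients
    t = t_start
    for _ in range(steps):
        t += dt * (a * math.exp(k * t) - cooling * (t - ambient))
        if t > 300:            # reaction heat now outruns any plausible cooling
            return "runaway"
    return "stable"

print(simulate(25))    # normal operation: cooling keeps up, prints "stable"
print(simulate(185))   # hot start (e.g. short circuit): prints "runaway"
```

Below the threshold the temperature settles near ambient; above it, each degree of heating accelerates the next, which is why such fires are so hard to stop.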

[Photo: the 2 MW Arizona Public Service battery that exploded in April 2019.]

Already, a 2 megawatt battery (above) installed by the Arizona Public Service electric company exploded in April 2019, sending eight firefighters and a policeman to the hospital (Cooper 2019), and at least 23 South Korean lithium-ion facilities have caught fire in a series of incidents dating back to August 2017 (Deign 2019).

Below are excerpts from an 82-page Department of Energy document. Clearly, containing fires in utility-scale storage batteries will be difficult:

“Thermal damage to ESS chemistries and components presents unique challenges to the fire service community, building owners, and insurers. As evidenced in full-scale testing of EV battery fires, fire suppression required more water than anticipated, and significantly more in some cases. Additionally, confirming that the fire was completely extinguished was difficult due to the containment housings of EV batteries that can mask continued thermal reaction within undamaged cells. In one of the tests performed by Exponent, Inc., one battery reignited after being involved in a full-scale fire test some 22 hours post-extinguishment; in another case, an EV experienced a subsequent re-ignition 3 weeks post-crash testing.”



USDOE. December 2014. Energy Storage Safety Strategic Plan. U.S. Department of Energy.

Energy storage is emerging as an integral component of a resilient and efficient grid through a diverse array of potential applications. The evolution of the grid that is currently underway will result in a greater need for services best provided by energy storage, including energy management, backup power, load leveling, frequency regulation, voltage support, and grid stabilization. The increase in demand for specialized services will further drive energy storage research to produce systems with greater efficiency at a lower cost, which will lead to an influx of energy storage deployment across the country. To enable the success of these increased deployments of a wide variety of storage technologies, safety must be instilled within the energy storage community at every level and in a way that meets the need of every stakeholder. In 2013, the U.S. Department of Energy released the Grid Energy Storage Strategy, which identified four challenges related to the widespread deployment of energy storage. The second of these challenges, the validation of energy storage safety and reliability, has recently garnered significant attention from the energy storage community at large. This focus on safety must be ensured immediately to enable the success of the burgeoning energy storage industry…

The safe application and use of energy storage technology knows no bounds. An energy storage system (ESS) will react to an external event, such as a seismic occurrence, regardless of its location in relation to the meter or the grid. Similarly, an incident triggered by an ESS, such as a fire, is ‘blind’ as to the location of the ESS in relation to the meter.

Most of the current validation techniques that have been developed to address energy storage safety concerns have been motivated by the electric vehicle community, and are primarily focused on Li-ion chemistry and derived via empirical testing of systems. Additionally, techniques for Pb-acid batteries have been established, but must be revised to incorporate chemistry changes within the new technologies. Moving forward, all validation techniques must be expanded to encompass grid-scale energy storage systems, be relevant to the internal chemistries of each new storage system and have technical bases rooted in a fundamental-scientific understanding of the mechanistic responses of the materials.


Grid energy storage systems are “enabling technologies”; they do not generate electricity, but they do enable critical advances to modernize the electric grid. For example, numerous studies have determined that the deployment of variable generation resources will impact the stability of the grid unless storage is included. Additionally, energy storage has been demonstrated to provide key grid support functions through frequency regulation. The diversity in performance needs and deployment environments drives the need for a wide array of storage technologies.

Often, energy storage technologies are categorized as being high-power or high-energy. This division greatly benefits the end user of energy storage systems because it allows for the selection of a technology that fits an application’s requirements, thus reducing cost and maximizing value. Frequency regulation requires very rapid response, i.e. high power, but does not necessarily require high energy. By contrast, load-shifting requires very high energy, but is more flexible in its power needs. Uninterruptible power and variable generation integration are applications where the needs for high power versus high energy fall somewhere between these extremes. Figure 1 shows the current energy storage techniques deployed on the North American grid. This variety in storage technologies increases the complexity of developing a single set of protocols for evaluating and improving the safety of grid storage technologies, and drives the need for understanding across length scales, from fundamental materials processes through full-scale system integration.

Figure 1. Percentage of battery energy storage systems deployed, by total megawatts: Lithium ion 41.79%; Lead acid 28.20%; Other 14.38%; Sodium sulfur 8.17%; Lithium iron phosphate 4.84%; Flow 2.62%.

The variety of deployment environments and application spaces compounds the complexity of the approaches needed to validate the safety of energy storage systems. The difference in deployment environment impacts the safety concerns, needs, risk, and challenges that affect stakeholders. For example, an energy storage system deployed in a remote location will have very different potential impacts on its environment and first responder needs than a system deployed in a room in an office suite, or on the top floor of a building in a city center. The closer the systems are to residences, schools, and hospitals, the higher the impact of any potential incident regardless of system size.

Pumped hydro is one of the oldest and most mature energy storage technologies and represents 95% of the installed storage capacity. Other storage technologies, such as batteries, flywheels and others, make up the remaining 5% of the installed storage base, are much earlier in their deployment cycle and have likely not reached the full extent of their deployed capacity.

Though flywheels are relative newcomers to the grid energy storage arena, they have been used as energy storage devices for millennia, with the earliest known flywheel dating from Mesopotamia around 3100 BC. Grid-scale flywheels operate by spinning a rotor at up to tens of thousands of RPM, storing energy in a combination of rotational kinetic energy and elastic energy from deformation of the rotor. These systems typically have large rotational masses that, in the case of a catastrophic radial failure, need a robust enclosure to contain the debris. However, if the mass of the debris particles can be reduced through engineering design, the strength, size and cost of the containment system can be significantly reduced.
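The energy involved can be estimated from first principles: a spinning rotor stores E = ½Iω², with I = ½mr² for a solid cylinder. A rough calculation for a hypothetical rotor (the mass, radius, and speed are illustrative, not the specifications of any real product):

```python
import math

def flywheel_energy_kwh(mass_kg, radius_m, rpm):
    """Kinetic energy of a solid-cylinder rotor: E = 0.5 * I * omega^2."""
    inertia = 0.5 * mass_kg * radius_m ** 2      # I = 1/2 m r^2
    omega = rpm * 2 * math.pi / 60               # convert RPM to rad/s
    return 0.5 * inertia * omega ** 2 / 3.6e6    # joules -> kWh

# A 1-tonne rotor of 0.5 m radius spun to 20,000 RPM stores ~76 kWh;
# the same figure shows why a radial failure throws debris with enormous energy
print(round(flywheel_energy_kwh(1000, 0.5, 20000), 1))  # → 76.2
```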

As electrochemical technologies, battery systems used in grid storage can be further categorized as redox flow batteries, hybrid flow batteries, and secondary batteries without a flowing electrolyte. For the purposes of this document, vanadium redox flow batteries and zinc bromine flow batteries are considered for the first two categories, and lead-acid, lithium ion, sodium nickel chloride and sodium sulfur technologies in the latter category. As will be discussed in detail in this document, there are a number of safety concerns specific to batteries that should be addressed, e.g. release of the stored energy during an incident, cascading failure of battery cells, and fires.

A reactive approach to energy storage safety is no longer viable. The number and types of energy storage deployments have reached a tipping point, with dramatic growth anticipated in the next few years fueled in large part by major new policy-related storage initiatives in California14, Hawaii15, and New York. The new storage technologies likely to be deployed in response to these and other initiatives are maturing too rapidly to justify moving ahead without a unified, scientifically based set of safety validation techniques and protocols. A compounding challenge is that many of these new storage technologies are being developed by startup companies with limited resources and deployment experience. Standardization of safety processes will greatly improve the cost-effectiveness and viability of new technologies, and of the startup companies themselves. The modular nature of ESS means that no single entity is clearly responsible for ESS safety; instead, each participant in the energy storage community has a role and a responsibility. The following sections outline the gaps in addressing the need for validated grid energy storage system safety.

To date, the most extensive energy storage safety and abuse R&D efforts have been done for Electric Vehicle (EV) battery technologies. These efforts have been limited to lithium ion, lead-acid and nickel metal hydride chemistries and, with the exception of grid-scale lead-acid systems, are restricted to smaller size battery packs applicable to vehicles.

The increased scale, complexity, and diversity of technologies being proposed for grid-scale storage necessitate a comprehensive strategy for adequately addressing safety in grid storage systems. The technologies deployed onto the grid fall into the categories of electrochemical, electromechanical, and thermal, and span different classes of systems, including CAES, flywheels, pumped hydro and SMES. This presents a significant area of effort to be coordinated and tackled in the coming years, as a number of gaps currently exist in codes and standards around safety in the field. R&D efforts must be coordinated to begin to address these challenges.

An energy storage system can be categorized primarily by its power, energy and technology platform. For grid-scale systems, the power/energy spectrum spans from smaller kW/kWh to large MW/MWh systems. Smaller kW/kWh systems can be deployed for residential and community storage applications, while larger MW/MWh systems are envisioned for electric utility transmission and distribution networks to provide grid-level services. This is in contrast to electric vehicles, for which the U.S. Advanced Battery Consortium (USABC) goals are both clearly defined and narrow in scope, with an energy goal of 40 kWh. While in practice some EV packs are as large as 90 kWh, the range of energy is still small compared with grid storage applications. This research is critical to first responders' ability to understand the risks posed by ESS technologies, and allows for the development of safe strategies to minimize risk and mitigate an event.

Furthermore, the diversity of battery technologies and stationary storage systems is not generally present in the EV community. Therefore, the testing protocols and procedures used historically and currently for transportation storage systems are insufficient to adequately address this wide range of storage technologies for stationary applications. Table 1 summarizes the high-level contrast between this range of technologies and sizes of storage and the more established area of EV. The magnitude of effort that must be taken on to encompass the needs of safety in stationary storage is considerable, because most research and development to improve safety, and most efforts to develop safety validation techniques, are in the EV space. Notably, the size of EV batteries varies by only a factor of two; by contrast, stationary storage scales across many orders of magnitude. Likewise, the range of technologies and uses in stationary storage is much more varied than in EV. Therefore, while the EV safety efforts pave the way in developing R&D programs around safety and developing codes and standards, they are insufficient to address many of the significant challenges in approaching safe development, installation, commissioning, use and maintenance of stationary storage systems.

An additional complexity of grid storage systems is that the storage system can either be built on-site or pre-assembled, typically in shipping containers. These pre-assembled systems allow for factory testing of the fully integrated system, but are exposed to potential damage during shipping. For the systems built on site, the assembly is done in the field; much of the safety testing and qualification could potentially be done by local inspectors, who may or may not be as aware of the specifics of the storage system. Therefore, the safety validation of each type of system must be approached differently and each specific challenge must be addressed.

Batteries and flywheels are currently the primary focus for enhanced grid-scale safety. For these systems, the associated failure modes at grid-scale power and energy requirements have not been well characterized, and there is much larger uncertainty around the risks and consequences of failures. This uncertainty around system safety can create barriers to adoption and market success, such as difficulty assessing the value of and risk to these assets, and determining the possible consequences to health and the environment. To address these barriers, concerted efforts are needed in the following areas:

• Materials science R&D – research into all device components
• Engineering controls and system design
• Modeling
• System testing and analysis
• Commissioning and field system safety research

A notable challenge within the areas outlined above is developing the understanding and confidence to relate results at one scale to expected outcomes at another, to predict the interplay between components, and to protect against unexpected outcomes when more than one failure mode is present in a system at the same time. Extensive research, modeling and validation are required to address these challenges. It is also necessary to pool the analysis approaches of failure mode and effects analysis (FMEA) and to use a safety basis in both research and commissioning to build a robust safety program. Finally, identifying, responding to and mitigating any observed safety events are critical in validating the safety of storage.
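The FMEA approach mentioned above conventionally ranks concerns by a risk priority number (RPN), the product of severity, occurrence, and detectability ratings. A minimal sketch (the failure modes and 1-10 ratings below are illustrative assumptions, not values from the report or any standard):

```python
# Minimal FMEA-style ranking sketch with hypothetical failure modes.
failure_modes = [
    {"mode": "cell internal short", "sev": 9, "occ": 3, "det": 7},
    {"mode": "coolant pump failure", "sev": 6, "occ": 4, "det": 3},
    {"mode": "vent port blockage", "sev": 8, "occ": 2, "det": 8},
]

# Conventional risk priority number: severity x occurrence x detectability
for fm in failure_modes:
    fm["rpn"] = fm["sev"] * fm["occ"] * fm["det"]

# Highest-RPN modes are addressed first when prioritizing mitigation effort
ranked = sorted(failure_modes, key=lambda fm: fm["rpn"], reverse=True)
for fm in ranked:
    print(f"{fm['mode']}: RPN={fm['rpn']}")
```

Note that a hard-to-detect, severe mode (the vent port blockage here) can outrank a more frequent but easily detected one, which is precisely why detection research matters alongside prevention.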

A holistic view with regard to setting standards to ensure thorough safety validation techniques is the desired end goal; the first step is to study, at the R&D level, failure from the cell to the system level, and from the electrochemistry and kinetics of the materials to module-scale behavior. Detailed hazards analysis must be conducted for entire systems in order to identify failure points caused by abuse conditions and the potential for cascading events, which may result in large-scale damage and/or fire. While treating the storage system as a "black box" is helpful in setting practical standards for installation, understanding the system at the basic materials and chemistry levels, and how issues can initiate failure at the cell and system level, is critical to ensure overall system safety.

For batteries, understanding the fundamental electrochemistry and materials changes under selected operating conditions helps guide cell-level safety. Knowledge of cell-level failure modes and how they propagate to battery packs guides the cell chemistry, cell design and integration. Each system has different levels of risk associated with its basic electrochemistry that must be understood; the trade-off between electrochemical performance and safety must be managed. There are some commonalities of safety issues between storage technologies. For example, breaching of a Na/S (NAS) or Na/NiCl2 (Zebra) battery could result in exposure of molten material and heat transfer to adjacent cells. Evolution of H2 from lead-acid cells, or of H2 and solvent vapor from lithium-ion batteries during overcharge abuse, could result in a flammable/combustible gas mixture. Thermal runaway in lithium-ion (Li-ion) cells could transfer heat to adjacent cells and propagate the failure through a battery.
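The cell-to-cell propagation concern above can be caricatured with a toy lumped-parameter simulation: one cell fails, dumps heat, and neighbors that cross an onset temperature fail in turn. All parameters below are illustrative assumptions; real runaway chemistry and heat transfer are far more complex than this sketch.

```python
# Toy model of cell-to-cell failure propagation in a string of cells.
CELLS = 8
T_AMBIENT = 25.0      # deg C
T_TRIGGER = 180.0     # assumed runaway onset temperature
Q_RUNAWAY = 300.0     # assumed local temperature rise from a failed cell
K_NEIGHBOR = 0.35     # assumed per-step heat-sharing coefficient

temps = [T_AMBIENT] * CELLS
temps[0] += T_TRIGGER + Q_RUNAWAY   # cell 0 suffers an internal short
failed = {0}

for step in range(50):
    new = temps[:]
    for i in range(CELLS):
        for j in (i - 1, i + 1):                 # conduct heat to neighbors
            if 0 <= j < CELLS:
                new[j] += K_NEIGHBOR * max(temps[i] - temps[j], 0.0) / 2
    temps = new
    for i, t in enumerate(temps):                # newly failed cells add heat
        if t >= T_TRIGGER and i not in failed:
            failed.add(i)
            temps[i] += Q_RUNAWAY

print(f"{len(failed)} of {CELLS} cells entered runaway")
```

Even this crude sketch shows why thermal barriers between cells matter: with no heat removal, a single failure propagates down the entire string.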

Moreover, while physical hazards are often considered, health and environmental safety issues also need to be evaluated to gain a complete understanding of the potential hazards associated with a battery failure. These may include the toxicity of gas species evolved from a cell during abuse or exposure to abnormal environments, the toxicity of electrolyte released during a cell breach or spill in a vanadium redox flow battery (VRB), and the environmental impact of heavy-metal-laden water runoff from extinguishing a battery fire. Flywheels present an entirely different set of considerations, including mechanical containment testing and modeling, vacuum loss testing, and material fatigue testing under stress.

The topic of Li-ion battery safety is rapidly gaining attention as the number of battery incidents increases. Recent incidents, such as a cell phone runaway during a regional flight in Australia and a United Parcel Service plane crash near Dubai, reinforce the potential consequences of Li-ion battery runaway events. The sheer size of grid storage needs and the operational demands make it increasingly difficult to find materials with the necessary properties, especially the required thermal behavior to ensure fail-proof operation. The main failure modes for these battery systems are either latent (manufacturing defects, operational heating, etc.) or abusive (mechanical, electrical, or thermal).

Any of these failures can increase the internal temperature of the cell, leading to electrolyte decomposition, venting, and possible ignition. While significant strides are being made, major challenges remain in combating solvent flammability, which is the most significant area needing improvement to address the safety of Li-ion cells and is therefore discussed here in greater detail. To mitigate thermal instability of the electrolyte, a number of different approaches have been developed, with varied outcomes and moderate success. Conventional electrolytes typically vent flammable gas when overheated due to overcharging, internal shorting, manufacturing defects, physical damage, or other failure mechanisms. The prospects of employing Li-ion cells in applications depend on substantially reducing this flammability, which requires materials developments (including new lithium salts) to improve the thermal properties. One approach is to add fire retardants (FR) to the electrolyte to improve thermal stability. Most of these additives have a history of use as FR in the plastics industry. Broadly, they can be grouped into two categories: those containing phosphorus and those containing fluorine. The community needs a concerted effort to provide hazard assessment, classification of the event, and mitigation when an ESS fails, whether through internal or external mechanical, thermal, or electrical stimulus.

Electrolyte Safety R&D. The combustion process is a complex chemical reaction in which a fuel and an oxidizer react and burn in the presence of heat. Heat, an oxidizer, and fuel must converge for combustion to occur. The oxidizer is the substance that supplies oxygen so the fuel can burn, and heat is the energy that drives the combustion process. In the combustion process a sequence of chemical reactions occurs, leading to fire.41 A variety of oxidizing, hydrogen and fuel radicals are produced that keep the fire going until at least one of the three constituents is exhausted.

5.4.1 Electrolytes. Despite several studies on the issue of flammability, complete elimination of fire in Li-ion cells has yet to be achieved. One possible reason could be the low flash point (FP) (<38.7 °C) of the solvents.42 Published data show that polyphosphazene polymers and ionic liquids used as electrolytes are nonflammable.43 However, the high FP of these chemicals is generally accompanied by increased viscosity, limiting low-temperature operation and degrading cell performance at sub-ambient temperatures. These materials may also have other problems, such as poor wetting of the electrodes and separator materials, excluding them from use in cells despite being nonflammable. Ideally, solvents would have no FP while simultaneously exhibiting ideal electrolyte behavior and remaining liquid at temperatures down to -50 ºC or below for use in Li-ion cells. A number of critical electrochemical and thermal properties that FR additives must meet simultaneously are given below. Tradeoffs between these properties are possible, but when it comes to safety there can be no tradeoffs:

• High voltage stability
• Conductivity comparable to traditional electrolytes
• Lower flame propagation rate, or no fire at all
• Lower self-heating rate
• Stable against both electrodes
• Able to wet the electrodes and separator materials
• Higher onset temperature for exothermic peaks, with reduced overall heat production
• No miscibility problems with co-solvents

The higher energy density of Li-ion cells inherently makes for a more volatile device, and while significant efforts have been put toward safety, significant research is still needed. To improve the safety of Li-ion batteries, either electrolyte flammability must be substantially reduced, or further mitigation is needed in areas that contain the effects of failures, providing graceful failures with safer outcomes in operation.

Electrodes, separators, current collectors, casings, cell format headers and vent ports. While electrolytes are by far the most critical component in Li-ion battery safety, research has also been pursued into safety considerations around the other components of the cell. These factors may become more critical as research continues across a wider range of chemistries for stationary storage.

Capacitors. Electrostatic capacitors are a major failure point in power electronics. They fail predominantly because of the strong focus on low-cost devices and loose control over manufacturing. In response, they are used at a highly de-rated level, and often in redundant designs. When they fail, they often show slow degradation with decreasing resistivity, eventually leading to shorting or arcing, and cascading failures can lead to higher-consequence failures elsewhere in a system. The added complexity of redundant design is itself a safety risk. While there is a niche market for high-reliability capacitors, they are not economically viable for most applications, including grid storage; these devices are made with precious metals and higher-quality ceramic processing that leaves fewer oxygen vacancies in the device.

Polymer capacitors can offer a safety advantage because they can be self-healing and therefore fail gracefully; however, they perform poorly at elevated temperatures and are flammable.

Currently, the low cost and low reliability of capacitors make them one of the most common components to fail in devices, affecting the power electronics and providing a possible trigger for a cascading failure. While improved reliability has been achieved in some capacitors, such devices are cost-prohibitive due to their manufacturing and testing. Development of improved capacitors at reasonable cost, or of designs that prevent cascading failures in the event of capacitor failure, should be pursued.

Pumps, tubing and tanks. Components specific to flow battery and hybrid flow battery technologies, such as pumps, tubing and storage tanks, have not been researched in the context of battery safety. Research from other fields that use similar components can be a starting point, but these components demonstrate how much broader the range of components is than current R&D in battery safety covers.

Manufacturing defects. The design of components and of testing depends on understanding the range of purity in materials and the conformity in engineering. Defects, for example, are a large contributor to shorts in batteries. Understanding the reproducibility among parts, and the influence of defects on failure, is critical to understanding and designing safer storage systems.

The science of fault detection within large battery systems is still in its infancy; most analysis and monitoring of large battery systems focuses on issues such as state of health and state of charge, and only limited work has addressed fault detection itself. Offer et al.53 first
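One simple form the fault detection described above can take is flagging cells whose voltage deviates strongly from the pack median, since a sagging cell may indicate an internal short. A minimal sketch (the threshold and pack readings are illustrative assumptions, not values from any monitoring standard):

```python
from statistics import median

def flag_outlier_cells(voltages, threshold_v=0.10):
    """Return indices of cells deviating from the pack median by > threshold_v."""
    m = median(voltages)
    return [i for i, v in enumerate(voltages) if abs(v - m) > threshold_v]

# Hypothetical 8-cell string; cell 4 is sagging well below its neighbors
pack = [3.70, 3.71, 3.69, 3.70, 3.31, 3.70, 3.72, 3.70]
print(flag_outlier_cells(pack))
```

The median is used rather than the mean so that a single badly faulted cell does not shift the reference point it is being compared against.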

Software Analytics. Any comprehensive research, development, and deployment strategy for energy storage should be rounded out with an appropriate complement of software analytics. Software is on a par with hardware in importance, not only for engineering controls but also for performance monitoring; anomaly detection, diagnosis, and tracking; degradation and failure prediction; maintenance; health management; and operations optimization. Ultimately, it will become an important factor in improving overall system and system-of-systems safety. As with any new, potentially high-consequence technology, improving safety will be an ongoing process. By analogy with airline safety, energy storage projects that use cutting-edge technologies would benefit from "black boxes" to record precursors to catastrophic failures. The black boxes would be located off-site and store minutes to months of data, depending on the time scale of the phenomena being sensed. They would be required for large-scale installations, recommended for medium-scale installations, and optional for small installations. Evolving standards for what and how much should be recorded will be based on the results of research as well as experience.
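The "black box" recorder described above is naturally implemented as a fixed-capacity ring buffer: old samples are overwritten so the most recent window always survives an incident. A minimal sketch (class and field names are illustrative assumptions):

```python
from collections import deque

class BlackBoxRecorder:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)   # deque drops oldest when full

    def record(self, timestamp, measurements):
        self.buffer.append((timestamp, measurements))

    def dump(self):
        """Return the retained window, e.g. for post-incident analysis."""
        return list(self.buffer)

# Retain only the last 5 samples of a hypothetical cell-temperature feed
box = BlackBoxRecorder(capacity=5)
for t in range(8):
    box.record(t, {"cell_temp_C": 25 + t})
print([t for t, _ in box.dump()])
```

The capacity would be sized to the time scale of the phenomena being sensed, from minutes of high-rate electrical data to months of slow thermal trends.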

Since some energy storage technologies are still early in their development and deployment, there should be an emphasis on developing safety cases. Safety cases should cover the full range of safety events that could reasonably be anticipated, and would therefore highlight the areas in which software analytics are required to ensure the safety of each system. Each case would tell a story of an initiating event, an assessment of its probability over time, the likely subsequent events, and the likely final outcome or outcomes. The development of safety cases need not be onerous, but they should demonstrate to everyone involved that serious thought has been given to safety.

Table 2. Common Tests to Assess Risk from Electrical, Mechanical, and Environmental Conditions55

• Electrical: test of current flow (abnormal charging, overcharging and charging time), forced discharge test
• Mechanical: crush test, impact test, shock test, vibration test
• Environmental: heating test, temperature cycling test, low-pressure (altitude) test
• Tests under development: failure propagation, internal short circuit (non-impact) test, ignition/flammability, IR absorption diagnostics, separator testing

The established tests for electrical, mechanical and environmental conditions are therefore tailored to identifying and quantifying the consequence and likelihood of failure in lead-acid and lithium-ion technologies, with typical analyses that include burning characteristics, off-gassing, smoke particulates, and environmental runoff from fire suppression efforts. Even for the most studied abuse case, lithium-ion technologies, some tests have been identified as crude or ineffective, with limited technical merit. For example, the puncture test, used to replicate failure under an internal short, is widely believed to be unable to accurately mimic this particular failure mode. These tests are even less likely to reproduce potential field failures when applied to technologies for which they were not originally designed. The above testing relates exclusively to the cell/pack/module level and does not take into consideration the balance of the storage system. Other tests on Li-ion systems are targeted at invoking and quantifying specific events; for example, impact testing and overcharging tests probe the potential for thermal runaway, which occurs during anode and cathode decomposition reactions. Other failure modes addressed by current validation techniques include electrolyte flammability; thermal stability of materials, including the separators, electrolyte components and active materials; and cell-to-cell failure.

Gap areas and opportunities. An energy storage system deployed on the grid, whether at the residential (<10 kW) or bulk generation scale on the order of MW, is susceptible to failures similar to those described above for Li-ion. However, given the multiple chemistries and application space, there is a significant gap in our ability to understand and quantify potential failures under real-world conditions; in order to ensure safety as grid storage systems are deployed, it is critical to understand their potential failure modes within each deployment environment. Furthermore, it must be considered that grid-scale systems include at the very least power electronics, transformers, switchgear, heating and cooling systems, and housing structures or enclosures. The size and the variety of technologies necessitate a rethinking of safety work as it is adopted from current validation techniques in the electrified vehicle space.

To address the component- and system-level safety concerns for all the technologies being developed for stationary energy storage, further efforts will be required to: understand these systems at the fundamental materials science level; develop appropriate engineering controls, fire protection and suppression methods, and system design; complete validation testing and analysis; and establish operating models grounded in real-world data. System-level safety must also address several additional factors, including the relevant codes, standards and regulations, the needs of first responders, and risks and consequences not covered by current CSR. The wide range of chemistries and operating conditions required for grid-scale storage presents a significant challenge for safety R&D. The longer life requirements and wider range of uses for storage require a better understanding of degradation and end-of-life failures under normal operating and abuse conditions. The size of batteries also necessitates a stronger reliance on modeling. Multi-scale models for understanding thermal runaway and fire propagation, whether originating in the chemistry, in the electronics, or external to the system, have not been developed. Gap areas for stationary energy storage currently span from materials research and modeling through system life considerations such as operation and maintenance.

Engineering controls and system design. Currently, neither the monitoring needs of batteries, nor the effectiveness of means to separate battery cells and modules, nor the various fire suppression systems and techniques have been studied extensively. Individual companies and installations have relied on past experience in designing these systems. For example, Na battery installations have focused on mitigating the potential impact of the high operating temperature; Pb-acid installations have focused on controlling failures associated with hydrogen build-up; while non-electrochemical technologies such as flywheels have focused on mechanical concerns such as run-out, high temperature, or changes in chamber pressure. Detailed testing and modeling are required to fully understand the needs in system monitoring and containment of failure propagation. Rigorous design of safety features that adequately address potential failures is also still needed in most technology areas. Current efforts have largely focused on monitoring cell- and module-level voltages in addition to the thermal environment; however, the tolerances for safe operation are not known for these systems. Further development is needed to help manufacturers and installers understand the level of monitoring appropriate to operate a system safely and to prevent failures resulting from internal short circuits, latent manufacturing defects or abused batteries from propagating to the full system.

Modeling. The size and cost of grid-scale storage systems make it prohibitive to test full-scale systems, so modeling can play a critical role in improving safety.

Fire suppression. Large-scale energy storage systems can mitigate the risk of loss by isolating parts of a system in separate transportation containers, or by using materials or assemblies to section off batteries. Most current systems have automated and manually triggered fire suppression systems within the enclosure, but little is known about whether such suppression systems will be effective in the event of a fire.

The interactions between fire suppressants and system chemistries must be fully understood to determine the effectiveness of fire suppression. Key variables include the volume of suppressant required, the rate of suppressant release, and the distribution of suppressants. Basic assumptions about electrochemical safety have not been elucidated; for example, it is not even clear whether a battery fire is of higher consequence than other types of fires, and if so, at what scale this becomes a concern.

The National Fire Protection Association (NFPA) has provided a questionnaire regarding suppressants for vehicle batteries, addressing tactics for suppression of fires involving electric-drive vehicle (EDV) batteries:

a. How effective is water as a suppressant for large battery fires?
b. Are there projectile hazards?
c. How long must suppression efforts be conducted to place the fire under control and then fully extinguish it?
d. What level of resources will be needed to support these fire suppression efforts?
e. Is there a need for extended suppression efforts?
f. What are the indicators for instances where the fire service should allow a large battery pack to burn rather than attempt suppression?

NFPA 13, Standard for the Installation of Sprinkler Systems,60 does not contain specific sprinkler installation recommendations or protection requirements for Li-ion batteries. Reports and literature on suppressants universally recommend the use of water.61 However, the quantity of water needed for a battery fire is large: 275-2639 gallons for a 40 kWh EV-sized Li-ion battery pack. This is more than is recommended for internal combustion engine (ICE) vehicle fires.
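To get a feel for what these figures imply at grid scale, the reported 275-2639 gallon range for a 40 kWh pack can be scaled linearly to larger systems. Linear scaling is purely an assumption for illustration; actual suppression needs depend on geometry, chemistry, and tactics.

```python
# Crude linear extrapolation of reported EV-pack water quantities to a larger
# system; the reference figures are from the 40 kWh EV case cited in the text.
def water_range_gal(energy_kwh, low=275.0, high=2639.0, ref_kwh=40.0):
    scale = energy_kwh / ref_kwh
    return (low * scale, high * scale)

lo, hi = water_range_gal(1000)   # e.g. a hypothetical 1 MWh containerized system
print(f"{lo:.0f}-{hi:.0f} gallons")
```

Even the low end of such an extrapolation far exceeds typical on-scene water supplies, underscoring why suppression research for stationary systems cannot simply reuse vehicle guidance.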

Summary. Science-based safety validation techniques for an entire energy storage system are critical as deployments of energy storage systems expand. These techniques are currently based on industry knowledge and experience with energy storage for vehicles, as well as experience with grid-scale Pb-acid batteries; they must now be broadened to encompass grid-scale systems. The major hurdle to this expansion is encompassing both the much broader range of scales of stationary storage systems and the much broader range of technologies. Furthermore, the larger scale of stationary storage relative to EV storage necessitates consideration of a wider range of concerns beyond the storage device itself, including areas such as power electronics and fire suppression. The work required to develop validation is significant. As progress is made in understanding validation through experiment and modeling, these evidence-based results can feed into codes, regulations and standards, and can inform manufacturers and customers of stationary storage solutions to improve the safety of deployed systems.

Currently, fire departments do not categorize ESS as stand-alone infrastructure capable of causing safety incidents independent of the systems that they support. Instead, fire departments categorize grid ESS as back-up power systems such as uninterruptible power supplies (UPS) for commercial, utility, communications and defense settings, or as PV battery-backed systems for on, or off-grid residential applications. This categorization results in limited awareness of ESS and their potential risks, and thus the optimal responses to incidents. This categorization of energy storage systems as merely back-up power systems also results in the treatment of ESS as peripheral to the risk management tools.

The energy storage industry is rapidly expanding due to market pressures. This expansion is outpacing both the updating of current CSR and the development of new CSR needed for determining what is and is not safe.

No general, technology-independent standard for ESS integration into a utility or a stand-alone grid has yet been developed.

Incident responses with standard equipment are tailored to the specific needs of the incident type and location, whether two "pumper" engines and a "ladder" truck with two to four personnel each, plus a Battalion Chief to act as Incident Commander, for a total of 9 to 13 personnel responding to an injury/accident, or a structure fire that requires five engines, two trucks, and two Battalion Chiefs for a total of 17 to 30 personnel. Each additional "alarm" struck will send another two to three "pumper" engines and a "ladder" truck. In all of these cases, incident response personnel typically arrive on scene with only standard equipment, guided by various NFPA standards for equipment on each apparatus, personal protective equipment (PPE), and other rescue tools. In responding to an ESS incident, the fire service seldom incorporates equipment specialized for electrical incidents.

A number of unique challenges must be considered in developing responses to any energy storage incident. In particular, difficulties securing energized electrical components can present significant safety challenges for fire service personnel. Typically, the primary tasks are to isolate power to the affected areas, contain spills, access and rescue possible victims, and limit access to the hazard area. The highest priority is given to actions that support locating endangered persons and removing them to safety with the least possible risk to responders; rescue of victims continues until it is either accomplished or it is determined that there are no survivors or that the risk to responders is too great. Industrial fires can be quite dangerous depending on structure occupancy, i.e. the contents, processes, and personnel inside. Water may be used from a safe distance on larger fires that have extended beyond the original equipment or area of origin, or that threaten nearby exposures; however, the determination of a "safe" distance has received little research from the fire service scientific community.

Fire suppression and protection systems. Each ESS installation is guided by application of existing CSR that may not reflect the unique and varied chemistries in use. Fire-suppressant selection should be based on the efficacy of specific materials and the quantities needed on site, determined by appropriate and representative testing conducted in consultation with risk managers, fire protection engineers, and others, as well as alignment with existing codes and standards. For example, non-halogenated inert gas discharge systems may not be adequate for thermally unstable oxide chemistries, which release oxygen as they heat and may therefore sustain combustion even in oxygen-deficient atmospheres. Ventilation requirements imposed by some Authorities Having Jurisdiction (AHJs) may work against the efficacy of these gaseous suppression agents. Similarly, water-based sprinkler systems may not prove effective at dissipating heat in large-scale commodity storage of similar chemistries. Therefore, additional research is needed to provide data on which to base proper agent selection for the occupancy and commodity, and to establish standards that reflect the variety of chemistries and their combustion profiles.

Current commodity classification systems used in fire sprinkler design (NFPA 13, Standard for the Installation of Sprinkler Systems) do not have a classification for lithium or flow batteries. This is problematic, as the fire hazard may be significantly higher depending on the chemicals involved and will likely result in ineffective or inaccurate fire sprinkler coverage. Additionally, thermal decomposition of electrolytes may produce flammable gases that present explosion risks.

Verification and control of stored energy. Severe energy storage system damage resulting from fire, earthquake, or significant mechanical damage may require complete discharge, or neutralization of the chemistry, to facilitate safe handling of components. Though the deployment of PV currently exceeds that of ESS, there is still a lack of a clear response procedure to de-energize distributed PV generation in the field. Fire fighters typically rely on the local utility to secure supply-side power to facilities.

In the case of small residential or commercial PV, the utility is not able to assist because the system is on the owner’s side of the meter, which presents a problem for securing a 600 Vdc rooftop array. Identifying the PV integrators responsible for installation may not be possible, and other installers may be hesitant to assume any liability for a system they did not install. This leaves a vacuum for the safe, complete overhaul of a damaged structure with PV. Similarly, ESS faces the complication of unclear resources for assistance and the inability of many first responders to knowledgeably verify that the ESS is discharged or de-energized.

Post-incident response and recovery. Thermal damage to ESS chemistries and components presents unique challenges to the fire service community, building owners, and insurers. As evidenced in full-scale testing of EV battery fires, fire suppression required more water than anticipated, and significantly more in some cases. Additionally, confirming that the fire was completely extinguished was difficult due to the containment housings of EV batteries that can mask continued thermal reaction within undamaged cells. In one of the tests performed by Exponent, Inc., one battery reignited after being involved in a full-scale fire test some 22 hours post-extinguishment; in another case, an EV experienced a subsequent re-ignition 3 weeks post-crash testing.

Governmental approvals and permits related to the siting, construction, development, operation, and grid integration of energy storage facilities can pose significant hurdles to the timely and cost-effective implementation of any energy storage technology. The process for obtaining those approvals and permits can be difficult to navigate, particularly for newer technologies for which the environmental, health, and safety impacts may not be well documented or understood either by the agencies or the public.


Cooper, J. 2019.  Arizona fire highlights challenges for energy storage. Associated Press.

Deign, J. 2019.  The Safety Question Persists as Energy Storage Prepares for Huge Growth. Recent battery plant blazes and a hydrogen station blast have again raised questions about the safety of energy storage technologies.

DOE/EPRI. 2013. Electricity storage handbook in collaboration with NRECA. USA: Sandia National Laboratories and Electric Power Research Institute.

Mogg, T. 2019. Battery pack suspected cause of recent Virgin Atlantic aircraft fire.

Posted in Safety: How safe are utility-scale energy storage batteries?

Global oil discoveries far from breaking even with consumption

[Figure: Global oil discoveries, 2013–2018. Source: Rystad Energy]

Preface. According to Bloomberg (2016), oil discoveries in 2015 were the lowest since 1947, with just 2.7 billion barrels of conventional oil found globally (though Rystad calculated this differently, at 5.6 billion barrels, nearly twice as much). Since the world burned 36.5 billion barrels of oil in 2019, we’re not even close to breaking even.

Rystad Energy (2019), in “Global discoveries on the rise as majors take a bigger bite,” estimates discoveries in barrels of oil equivalent (BOE), which includes both conventional oil and gas. Since oil is the master resource that makes gas, transportation, and all other goods and activities possible, I’ve taken the second number in each ratio below as the percent of oil in the BOE to come up with how much conventional oil was found. It falls far short of the 36.5 billion barrels we’re consuming. The pantry is emptying out, perhaps bringing the peak oil date closer as oil consumption continues to grow at 1% a year while we put nothing at all back on the shelves. Peak demand? Ha! Not until we’re forced to cut back by oil shortages.

Year  Gas:oil split  Discovered (billion BOE)  Oil found (billion barrels)  Shortfall vs 36.5 billion barrels consumed
2013  50:50          17.4                      8.7                          27.8
2014  54:46          16.0                      7.4                          29.1
2015  61:39          14.4                      5.6                          30.9
2016  57:43          8.4                       3.6                          32.9
2017  40:60          10.3                      6.2                          31.6*
2018  46:54          9.1                       4.9                          31.6

*2017 shortfall as computed: 36.5 − 6.2 = 30.3
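The arithmetic behind these figures is simple enough to check. A minimal sketch in Python, using the 36.5-billion-barrel annual consumption figure from the preface (the years shown are a sample from the table above):

```python
# Oil found = total BOE discovered times the oil share of the gas:oil split;
# shortfall is measured against annual oil consumption of roughly
# 36.5 billion barrels (the 2019 figure cited in the preface).
CONSUMPTION = 36.5  # billion barrels of oil per year

discoveries = {
    # year: (oil share of discoveries, total discoveries in billion BOE)
    2013: (0.50, 17.4),
    2015: (0.39, 14.4),
    2018: (0.54, 9.1),
}

for year, (oil_share, total_boe) in sorted(discoveries.items()):
    oil_found = total_boe * oil_share
    shortfall = CONSUMPTION - oil_found
    print(f"{year}: {oil_found:.1f} billion bbl found, shortfall {shortfall:.1f}")
```

Running this reproduces the table’s oil and shortfall columns (8.7 and 27.8 for 2013, 5.6 and 30.9 for 2015, 4.9 and 31.6 for 2018).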

This doesn’t include fracked oil, but the IEA expects that to peak sometime between now and 2023.

What it means is enjoy life while it’s still good, and stock your pantry while you’re at it.



Holter, M. August 29, 2016. Oil Discoveries at 70-Year Low Signal Supply Shortfall Ahead. Bloomberg.

The 2016 figure only shows exploration results through August. Discoveries were just 230 million barrels in 1947 but skyrocketed the next year when Ghawar was discovered in Saudi Arabia; it is still the world’s largest oil field, though it was recently learned that Ghawar is declining at 3.5% a year. Source: Wood Mackenzie

Explorers in 2015 discovered only about a tenth as much oil as they have annually on average since 1960. This year, they’ll probably find even less, spurring new fears about their ability to meet future demand.

With oil prices down by more than half since the price collapse two years ago, drillers have cut their exploration budgets to the bone. The result: Just 2.7 billion barrels of new supply was discovered in 2015, the smallest amount since 1947, according to figures from Edinburgh-based consulting firm Wood Mackenzie Ltd. This year, drillers found just 736 million barrels of conventional crude as of the end of last month.

That’s a concern for the industry at a time when the U.S. Energy Information Administration estimates that global oil demand will grow from 94.8 million barrels a day this year to 105.3 million barrels in 2026. While the U.S. shale boom could potentially make up the difference, prices locked in below $50 a barrel have undercut any substantial growth there. A decade from now, this will have significant potential to push oil prices up: given current levels of investment across the industry and decline rates at existing fields, a “significant” supply gap may open up by 2040.

Oil companies will need to invest about $1 trillion a year to continue to meet demand, said Ben Van Beurden, the CEO of Royal Dutch Shell Plc, during a panel discussion at the Norway meeting. He sees demand rising by 1 million to 1.5 million barrels a day, with about 5 percent of supply lost to natural declines every year.

New discoveries from conventional drilling, meanwhile, are “at rock bottom,” said Nils-Henrik Bjurstroem, a senior project manager at Oslo-based consultants Rystad Energy AS. “There will definitely be a strong impact on oil and gas supply, and especially oil.”

Global inventories have been buoyed by full-throttle output from Russia and OPEC, which have flooded the world with oil despite depressed prices as they defend market share. But years of under-investment will be felt as soon as 2025, Bjurstroem said. Producers will replace little more than one in 20 of the barrels consumed this year, he said.

There were 209 wells drilled through August this year, down from 680 in 2015 and 1,167 in 2014, according to Wood Mackenzie. That compares with an annual average of 1,500 in data going back to 1960.

Overall, the proportion of new oil that the industry has added to offset the amount it pumps has dropped from 30 percent in 2013 to a reserve-replacement ratio of just 6 percent this year in terms of conventional resources, which excludes shale oil and gas, Bjurstroem predicted. Exxon Mobil Corp. said in February that it failed to replace at least 100 percent of its production by adding resources with new finds or acquisitions for the first time in 22 years.
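The reserve-replacement ratio mentioned above is simply new resources added divided by production over the same period; below 100%, the resource base is shrinking. A minimal sketch (the barrel figures here are illustrative assumptions, not numbers from the article):

```python
# Reserve-replacement ratio: new resources added divided by the volume
# produced over the same period. A ratio below 1.0 (100%) means the
# industry is pumping faster than it is finding new oil.
def reserve_replacement_ratio(barrels_added: float, barrels_produced: float) -> float:
    return barrels_added / barrels_produced

# Illustrative: adding ~2.2 billion barrels against ~36.5 billion consumed
ratio = reserve_replacement_ratio(2.2, 36.5)
print(f"{ratio:.0%}")  # about 6%, the conventional-resource figure cited
```

At Bjurstroem’s predicted 6%, producers replace little more than one in 20 of the barrels consumed, consistent with the paragraph above.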

“That’s a scary thing because, seriously, there is no exploration going on today,” Per Wullf, CEO of offshore drilling company Seadrill Ltd., said by phone.

Posted in How Much Left, Peak Oil