Venezuela collapse: looting, hunger, blackouts, environmental catastrophe

Looted grocery store in San Cristobal, Venezuela

Preface. Venezuela is experiencing a double whammy of drought and low oil prices, which has led to blackouts and an inability to import food; its oil production peaked in 1997. If you want to know how collapse will unfold in the United States and elsewhere, read the posts from the categories and posts below. Mexico may be next, as you can read here.

Alice Friedemann  www.energyskeptic.com  Author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, “When Trucks Stop Running: Energy and the Future of Transportation”, “Barriers to Making Algal Biofuels”, and “Crunch! Whole Grain Artisan Chips and Crackers”.  Women in ecology.  Podcasts: WGBH, Jore, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 & 278, Peak Prosperity.  Index of best energyskeptic posts

***

Smith M (2023) Venezuela’s Dilapidated Oil Industry Is An Environmental Catastrophe. oilprice

Crude oil is fouling waterways, contributing to deforestation, poisoning farmland and killing wildlife at an ever-greater rate.

The collapse of Venezuela’s once prolific oil industry has triggered an economic and humanitarian crisis that accelerated in 2019 after U.S. President Donald Trump implemented strict sanctions cutting the Maduro regime off from international energy markets. It isn’t only Venezuela’s economy and people that have suffered from a foundering hydrocarbon sector and corroding energy infrastructure; tremendous damage has occurred to the environment. Oil spills, leaking pipelines and storage facilities, noxious discharges from ramshackle, intermittently operating refineries and toxic tar-like slicks are commonplace in Venezuela. The OPEC member’s collapsing petroleum industry, along with precious metals and other mining, is a key culprit in the significant environmental damage occurring in the near-failed state. This is particularly worrying considering that Venezuela is ranked as the eleventh most biodiverse country globally. Two organizations that provide regular reports and updates on the substantial environmental degradation caused by the petrostate’s oil industry are the Venezuelan Observatory of Environmental Human Rights (OVDHA) and the Venezuelan Observatory of Political Ecology.

The Maracaibo Basin contains 15% of Venezuela’s copious oil reserves, which at 304 billion barrels are the largest globally, and is responsible for around two-thirds of the OPEC member’s hydrocarbon production. As a result, the lake contains thousands of drilling platforms, miles of pipelines (in many cases unmapped) and scores of storage facilities and other industry infrastructure. Most facilities are more than half a century old and heavily corroded because of their age and an endemic lack of crucial maintenance. The OVDHA estimates that up to 1,000 barrels of crude are discharged into Lake Maracaibo every day through ongoing low-level leaks from heavily corroded petroleum pipelines, storage tanks and other decaying infrastructure.

Kurmanaev A (2021) Terrorist group steps into Venezuela as lawlessness grows. New York Times.

With Venezuela in shambles, criminals and insurgents run large stretches of the nation’s territory. But some of them are stepping in to take over the role the government used to play: they bring drinking water to residents in the arid scrublands, teach farming workshops and offer medical checkups. They mediate land disputes, fine cattle rustlers, settle divorces, investigate crimes and punish thieves.

Many residents — hungry, hunted by local drug gangs and long complaining of being abandoned by their government — have welcomed the Marxist guerrillas of Colombia’s National Liberation Army (ELN) for the kind of protection and basic services the state is failing to provide.

According to the Colombian military, rights activists, security analysts and dozens of interviews in the affected Venezuelan states, guerrilla fighters from across the border now operate in more than half of Venezuela’s territory. Organized and well-armed, the ELN quickly displaced the local gangs that terrorized villages. The guerrillas imposed harsh penalties for robbery and cattle rustling, mediated land feuds, trucked in drinking water, offered basic medical supplies and investigated murders in a way the state never did, residents said.

It was hardly a charitable undertaking, though. In return for bringing stability, the ELN took over the smuggling and drug trafficking routes in the area, much as they have in parts of Colombia. They also began taxing shopkeepers and ranchers.

Colombian guerrillas have used the Venezuelan countryside as a haven for decades, and neglected Caracas shantytowns have long been home to organized crime. But rarely have criminal organizations exerted such territorial and economic control — and the government so little — as they do now. Venezuela is sleepwalking into fragmentation by armed groups.

Before the ELN took control, criminals fought brutally over the smuggling routes, terrorized neighborhoods, sprayed houses with bullets, and demolished villages. Most residents fled to Colombia, but are coming back now that it is safer.

Dallmeier F, Burelli CV (2021) The world must act to stop Venezuela’s environmental destruction. The Washington Post.

The dismantling of Venezuela’s environmental institutions and the collapse of its oil sector have generated a chain reaction of unsustainable natural resource extraction. Illegal land grabbing, deforestation and an out-of-control gold rush in protected rainforest areas have created a perfect storm combining environmental degradation with a humanitarian crisis. Massive sediment loads from mining are decimating reservoirs and hydropower generation capacity, while mercury from gold extraction pollutes rivers and sickens people.

Throughout the 20th century, Venezuela, considered among the most biodiverse countries in the world, was a pioneer of sustainable policies. But starting in 1999, the government of Hugo Chávez began to systematically dismantle the country’s environmental protections, despite its progressive, pro-Indigenous rhetoric. “Eco-socialism” replaced functioning institutions, causing an avalanche of ecological disasters that mocked Venezuela’s commitments under the Paris agreement.

The devastation has accelerated under Nicolás Maduro, Chávez’s successor. Since becoming president, Maduro has overseen the total unraveling of Petróleos de Venezuela (PDVSA), Venezuela’s state oil company. PDVSA’s legal revenue from oil exports plummeted from $73 billion in 2011, to $22 billion in 2016, to $743 million in 2020.

The lack of infrastructure maintenance causes massive crude oil and pollutant spills with no remediation plans. Critical coastal marine and terrestrial environments are severely affected. The most important oil production regions, especially Lake Maracaibo, the northern Monagas state and the Orinoco oil belt, are degenerating into a mosaic of polluted wastelands.

To compensate for the losses in oil revenue, Maduro decreed 12 percent of the Venezuelan Amazon — an area bigger than Portugal — as a “mining development region”. This unique rainforest ecosystem, rich in biodiversity, also contains vast reserves of coltan, iron, bauxite, diamonds and, most importantly, gold.

According to Mongabay and Global Forest Watch, illegal mining, logging and collection of firewood for cooking accounted for over 3.2 million lost acres of rainforest between 2001 and 2018, one of the highest deforestation rates in tropical America. RAISG’s 2018 report and SOSOrinoco’s mining footprint map place Venezuela at the top of the list of Amazonian countries with the highest number of illegal mines. Hundreds of mining sectors have been detected, including 59 illegal gold mining clusters in Canaima National Park, a UNESCO World Heritage site, and other protected areas, which are home to 27 Indigenous communities.

Violence and disease plague the mining areas. Roughly 50 percent of reported malaria cases in Latin America are in Venezuela. Of 398,000 reported new cases in 2019, 70 percent were in southern Venezuela. Mining sites are exploited by state and nonstate groups, including the Colombian National Liberation Army (ELN) and the Revolutionary Armed Forces of Colombia (FARC), promoting violence, slave and child labor, prostitution and the disintegration of Indigenous social structures.

2019. Venezuela’s Water System is Collapsing. New York Times.

In Venezuela, a crumbling economy and the collapse of even basic state infrastructure means water comes irregularly — and drinking it is an increasingly risky gamble. Scientists found that about a million residents were exposed to contaminated supplies. This puts them at risk of contracting waterborne viruses that could sicken them and threatens the lives of children and the most vulnerable.

The risks posed by poor water quality are particularly threatening for a population weakened by food and medication shortages. 

Electrical breakdowns and lack of maintenance have gradually stripped the city’s complex water system to a minimum. Water pumps, treatment plants, chlorine injection stations and entire reservoirs have been abandoned as the state ran out of money and skilled workers.

Outside Caracas, the breakdown of the water infrastructure is even more profound, leaving millions without regular supplies and forcing communities to dig wells and rely on untreated rivers.

Kurmanaev, A., et al. 2019. A fuel shortage is crippling agriculture in Venezuela. New York Times.

The New York Times interviewed dozens of Venezuelan farmers. Nearly all have slashed their planting area this year and some are leaving their fields fallow – steps that are likely to deplete what is left of the food supply and lead even more Venezuelans to join the estimated four million who have already fled the country.

Farmers said they have tried to produce in spite of scarce inputs, price controls, crime, inflation and collapsing demand. But this year’s harvest is only half of 2018’s because of the gasoline shortage and other problems such as lack of seeds and fertilizer.

In Venezuela’s vast plains further east, sugar cane rots just yards from a refining mill and rice fields are left barren for the first time in 70 years because farmers don’t have fuel to transport their produce to distribution centers or seeds and fertilizers to plant new crops.

Venezuela’s main agricultural association, Fedeagro, estimates the area planted with the country’s main crops, corn and rice, will shrink by about 50% this year.

On a visit to Pueblo Llano last month, 150 cars waited outside the closed gas station for the sixth straight day. Many of the drivers slept in their cars to prevent robberies, braving the frigid weather at an altitude of 7,500 feet. During the day, they walked back to their farmsteads, a trip that in some cases took hours. “While I’m sitting here in line, my produce is rotting in the fields,” said farmer Richard Rondón as he gave away summer squash as long as his arm from the back of his pickup truck to people passing by. “I got nothing to harvest with.”

The shortage has hamstrung the time-sensitive rice and corn harvest in the state of Portuguesa. In May, it prevented farmers from planting a new crop before the rainy season.

Pons, C. 2019. With Venezuela in collapse, towns slip into primitive isolation. Reuters.

At the once-busy beach resort of Patanemo, tourism has evaporated over the last two years as Venezuela’s economic crisis has deepened and deteriorating cellphone service left visitors too afraid of robbery to brave the isolated roads.   In some regions, travel requires negotiating roads barricaded by residents looking to steal from travelers.

These days, its Caribbean shoreline flanked by forested hills receives a different type of visitor: people who walk 10 minutes from a nearby town carrying rice, plantains or bananas in hopes of exchanging them for the fishermen’s latest catch.

With bank notes made useless by hyperinflation, and no easy access to the debit card terminals widely used to conduct transactions in urban areas, residents of Patanemo rely mainly on barter.  In visits to three villages across Venezuela, Reuters reporters saw residents exchanging fish, coffee beans and hand-picked fruit for essentials to make ends meet in an economy that shrank 48% during the first five years of President Nicolas Maduro’s government. 

Residents of the town of Guarico this year found a different way of paying bills – coffee beans for anything from haircuts to spare parts for agricultural machinery. The transactions are based on a reference price for how much coffee fetches on the local market, Linares said. In April, one kilo (2.2 pounds) of beans was worth the equivalent of $3.00. In El Tocuyo, another town in Lara state, three 100 kilo sacks of coffee buy 200 liters (53 gallons) of gasoline.
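The barter rates quoted above imply a simple conversion arithmetic. A minimal Python sketch using the figures from the excerpt; the derived per-liter gasoline price is an inference from those figures, not a number Reuters reported:

```python
# Barter arithmetic from the excerpt: coffee as a unit of account.
# 1 kg of beans was worth about $3.00 (the April reference price);
# in El Tocuyo, three 100-kg sacks of coffee bought 200 liters of gasoline.

PRICE_PER_KG_USD = 3.00

coffee_kg = 3 * 100            # three 100-kg sacks
gasoline_liters = 200

implied_usd = coffee_kg * PRICE_PER_KG_USD
per_liter = implied_usd / gasoline_liters

print(f"${implied_usd:.0f} of coffee for {gasoline_liters} L -> ${per_liter:.2f}/liter")
# -> $900 of coffee for 200 L -> $4.50/liter
```

The point of the reference price is exactly this: it lets unlike goods (haircuts, spare parts, fuel) clear against coffee without usable cash.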

It is just one of a growing number of rural towns slipping into isolation as Venezuela’s economy implodes amid a long-running political crisis.

From the peaks of the Andes to Venezuela’s sweltering southern savannahs, the collapse of basic services including power, telephone and internet has left many towns struggling to survive.

Venezuela’s crisis has taken a heavy toll on rural areas, where the number of households in poverty reached 74% in 2017 compared with 34% in the capital of Caracas. Residents rarely travel to nearby cities, due to a lack of public transportation, growing fuel shortages and the prohibitive cost of consumer goods.

Lorente, M. 2019. Venezuela returns to ‘Middle Ages’ during power outages. Yahoo news. 

Walking for hours, making oil lamps, hauling water. For Venezuelans today, suffering under a new nationwide blackout that has lasted days, it’s like being thrown back to life centuries ago.

El Avila, a mountain that towers over Caracas, has become a place where families gather with buckets and jugs to fill up with water, wash dishes and scrub clothes. The taps in their homes are dry from lack of electricity to the city’s water pumps.  “We’re forced to get water from sources that obviously aren’t completely hygienic. But it’s enough for washing or doing the dishes,” said one resident, Manuel Almeida.

Because of the long lines of people, the activity can take hours of waiting.

Elsewhere, locals make use of cracked water pipes. But they still need to boil the water, or otherwise purify it.  “We’re going to bed without washing ourselves,” said one man, Pedro Jose, a 30-year-old living in a poorer neighborhood in the west of the capital.

Some shops seeing an opportunity have hiked the prices of bottles of water and bags of ice to between $3 and $5 — a fortune in a country where the monthly minimum salary is the equivalent of $5.50.

Better-off Venezuelans, those with access to US dollars, have rushed to fill hotels that have giant generators and working restaurants.

For others, preserving fresh food is a challenge. Finding it is even more difficult. The blackout has forced most shops to close.

“We share food” among family members and friends, explained Coral Munoz, 61, who counts herself lucky to have dollars.

For Kelvin Donaire, who lives in the poor Petare district, survival is complicated.  He walks for more than an hour to the bakery where he works in the upmarket Los Palos Grandes area. “At least I’m able to take a loaf back home,” Donaire said.

Many inhabitants have taken to salting meat to preserve it without working refrigerators.

Others, more desperate, scour trash cans for food scraps. They are hurt most by having to live in a country where basic food and medicine has become scarce and out of reach because of rocketing hyperinflation.

The latest blackout this week also knocked out communications.  According to NetBlocks, an organization monitoring telecoms networks, 85% of Venezuela has lost connection.

In stores, cash registers no longer work and electronic payment terminals are blanked out. That’s serious in Venezuela, where even bread is bought by card because of lack of cash.  Some clients, trusted ones, are able to leave written IOUs.

With Caracas’s subway shut down, getting around the city is a trial, with choices between walking for miles, lining up in the outsized hope of getting on one of the rare, badly overcrowded and dilapidated buses, or managing to get fuel for a vehicle. Pedro Jose said bus tickets have nearly doubled in price.

As night casts Caracas into darkness, families light their homes as best they can. “We make lamps that burn gasoline, or oil, or kerosene — any type of fuel,” explained Lizbeth Morin, 30.

“We’ve returned to the Middle Ages.”

December 17, 2018 Planet money podcast: Bonus indicator: the measure of a tragedy

It’s hard to grasp how badly a country is doing from figures like the inflation rate, the unemployment rate, and the minimum wage. A better way to understand a nation’s living standards is how many calories a person earning the minimum wage could afford each day if they spent all of their money on the cheapest food available — the food with the most calories per unit of money — which in Venezuela has sometimes been pasta or flour, and today is the yucca plant.

In 2012, a Venezuelan could buy 57,000 calories with one day’s wages, or several dozen eggs.

But today a person can afford just 900 calories, or 2 eggs. Earning minimum wage, it would take a Venezuelan six weeks to afford one Big Mac.

Since the average person needs 2,000 calories a day, as well as calories to feed their family, and also housing, clothing, medicine, and so on, it’s not surprising that the average Venezuelan lost 24 pounds last year, and that Venezuela probably has the highest murder rate in the world.

The result is that at least 10% of Venezuelans have emigrated, nearly 3 million people. Proportionally, that would be like 30 million Americans fleeing to Canada, Mexico and elsewhere.
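The podcast’s indicator is simple arithmetic. A minimal Python sketch using the figures quoted above; the 2,000-calorie figure is the rough adult daily requirement the excerpt cites:

```python
# Planet Money's "measure of a tragedy": calories of the cheapest
# staple food that one day's minimum wage can buy.
# The figures below are the ones quoted in the excerpt.

DAILY_NEED_KCAL = 2000            # rough daily requirement for one adult

kcal_per_day_wage_2012 = 57_000   # one day's minimum wage, 2012
kcal_per_day_wage_now = 900       # one day's minimum wage, today

# In 2012 a day's wage covered many days of calories; today, under half a day.
print(f"2012: {kcal_per_day_wage_2012 / DAILY_NEED_KCAL:.1f} days of calories per day worked")
print(f"Now:  {kcal_per_day_wage_now / DAILY_NEED_KCAL:.0%} of one day's need per day worked")
```

By this measure, purchasing power fell from roughly 28 days of food per day worked to well under half a day, before counting any spending on housing, clothing, or medicine.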

July 16, 2018. Keith Johnson. How Venezuela Struck it poor. foreignpolicy.com

…”Venezuela’s murder rate, meanwhile, now surpasses that of Honduras and El Salvador, which formerly had the world’s highest levels, according to the Venezuelan Violence Observatory. Blackouts are a near-daily occurrence, and many people live without running water. According to media reports, schoolchildren and oil workers have begun passing out from hunger, and sick Venezuelans have scoured veterinary offices for medicine. Malaria, measles, and diphtheria have returned with a vengeance, and the millions of Venezuelans fleeing the country — more than 4 million, according to the International Crisis Group — are spreading the diseases across the region, as well as straining resources and goodwill.”

…”Thanks to their geology, Venezuela’s oil fields have enormous decline rates, meaning the country needs to spend more heavily than other petrostates just to keep production steady. “

2017-10-22 Oil Quality Issues Could Bankrupt Venezuela.  The next few weeks for Venezuela will be crucial, as it struggles to meet a huge stack of debt payments. Reports that the nation’s oil production is experiencing deteriorating quality raise a new cause for concern for the crumbling South American nation. Reuters reported that its oil shipments are “soiled with high levels of water, salt or metals that can cause problems for refineries”, which has led to $200 million in canceled oil contracts, making Venezuela even less able to make debt payments, since oil is the only source of revenue barely keeping the nation afloat. Many experienced oil workers have fled the country to find food and escape violence. Because of these problems, and Trump-imposed sanctions, U.S. imports have dropped from roughly 700,000 barrels per day to 250,000 bpd.

2017-5-2 Venezuela Is Heading for a Soviet-Style Collapse. A few lessons from the last time an oil economy crashed catastrophically

2017-2-27 ASPO Peak Oil Review: A new survey shows that 75% of Venezuelans may have lost an average of 19 pounds in the last year as widespread food shortages continue. Nearly a third of the population are now eating two meals a day or less. The survey also shows that the average shopper spends 35 hours a month waiting in line to buy food and other necessities. A sense of hopelessness has engulfed the country, and most no longer have an incentive or the strength to protest against the government and its policies as was happening two years ago. Government roundups of opposition politicians continue. Venezuela is clearly well on its way to becoming a failed state.

2016-11-1 Venezuela is telling hungry city dwellers to grow their own food. Washington Post

2016-10-21 Planet Money Podcast #731: How Venezuela Imploded

2016-8-23 Venezuela’s latest response to food shortages: Ban lines outside bakeries

2016-05-04 Hungry Venezuelans Hunt Dogs, Cats, Pigeons as Food Runs Out. Economic Crisis and Food Shortages Lead to Looting and Hunting Stray Animals  

Sabrina Martín. April 27, 2016. Looting On the Rise As Venezuela Runs Out of Food, Electricity. PanAmPost.

Food Producers Alert They Have Only 15 Days Left of Inventory amid Rampant Inflation

“Despair and violence is taking over Venezuela. The economic crisis sweeping the nation means people have to withstand widespread shortages of staple products, medicine, and food.  So when the Maduro administration began rationing electricity this week, leaving entire cities in the dark for up to 4 hours every day, discontent gave way to social unrest.

On April 26, people took to the streets in three Venezuelan states, looting stores to find food.

Maracaibo, in the western state of Zulia, is the epicenter of thefts: on Tuesday alone, Venezuelans raided pharmacies, shopping malls, supermarkets, and even trucks with food in seven different areas of the city.

Although at least nine people were arrested, and 2,000 security officers were deployed in the state, Zulia’s Secretary of Government Giovanny Villalobos asked citizens not to leave their homes. “There are violent people out there that can harm you,” he warned.

In Caracas, the Venezuelan capital, citizens reported looting in at least three areas of the city. Twitter users reported that thefts occurred throughout the night in the industrial zone of La California, Campo Rico, and Buena Vista.  The same happened in Carabobo, a state in central Venezuela.

Supermarkets employees from Valencia told the PanAm Post that besides no longer receiving the same amount of food as before, they must deal with angry Venezuelans who come to the stores only to find out there’s little to buy.

Purchases in supermarkets are rationed through a fingerprint system that does not allow Venezuelans to acquire the same regulated food for two weeks.

Due to the country’s mangled economy, millions must stand in long lines for hours just to purchase basic products, which many resell  for extra income as the country’s minimum wage is far from enough to cover a family’s needs.

On Wednesday, the Venezuelan Chamber of Food (Cavidea) said in a statement that most companies only have 15 days worth of stocked food.

According to the union, the production of food will continue to dwindle because raw materials as well as local and foreign inputs are depleted.

In the statement, Cavidea reported that they are 300 days overdue on payments to suppliers and it’s been 200 days since the national  government last authorized the purchase of dollars under the foreign currency control system.

The latest Survey of Living Conditions (Encovi) showed that more than 3 million Venezuelans eat only twice a day or less. Rampant inflation and low wages make it increasingly difficult for people to afford food.

“Fruits and vegetables have disappeared from shopping lists. What you buy is what fills your stomach more: 40 percent of the basic groceries is made up of corn flour, rice, pasta, and fat”.

But Venezuelans cannot live on even that incomplete diet, because those food products are hard to come by: since their prices are controlled by the government, they are scarce and in high demand.

The survey also notes the rise of diseases such as gastritis, with an increase of 25 percent in 2015, followed by poisoning (24.11 percent), parasites (17.86 percent), and bacteria (10.71 percent).

The results of this study are consistent with the testimony of Venezuelan women, who told the PanAm Post that because “everything is so expensive” they prefer to eat twice a day and leave lunch for their children. That way they can make do with the little portions they can afford.”

 

 

Peak Sand

Preface. With world peak oil production in 2018 so far (Peak oil is here!), it looks like peak sand won’t be the main factor in the fall of our fossil-fueled civilization. After all, oil makes all materials and activities possible, including sand.  But still, it is interesting to know that so many limits to growth are being reached at roughly the same time (see posts in category Peak Everything).

Sand Primer:

  • Without sand, there would be no concrete, ceramics, computer chips, glass, plastics, abrasives, paint and so on
  • We can’t use desert sand because its grains are too round, polished by the wind, and don’t stick together. Concrete needs grains with rough edges, so desert sand is worthless
  • Good sand is getting so rare that there is an enormous amount of illegal mining in over 70 countries. In India the Sand Mafia is one of the most powerful criminal organizations and will kill for sand, which is easy to steal and sell
  • This has led to 75–90% of the world’s beaches receding and a huge amount of environmental damage
  • By some projections, all beaches will be gone by 2100
  • Australia is selling sand to nations that don’t have any more (like the United Arab Emirates, who used all of their ocean sand to make artificial islands)
  • Sand is a big business, with sales of $70 billion a year
  • Concrete is roughly 40% sand
  • Most construction sand comes from rivers, lakes and shorelines

How Much Sand is needed?

  • 200 tons  Average house
  • 3,000 tons  Hospital or other large building
  • 30,000 tons per kilometer of highway
  • 12,000,000 tons  Nuclear Power Plant (that’s equal to nearly 250 miles of highway)
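The nuclear-plant comparison above is easy to cross-check against the highway figure. A minimal Python sketch using only the tonnages from the list:

```python
# Cross-check: a nuclear power plant uses 12,000,000 tons of sand;
# a highway uses 30,000 tons per kilometer (figures from the list above).

KM_PER_MILE = 1.609344

highway_km = 12_000_000 / 30_000      # kilometers of highway from the same sand
highway_miles = highway_km / KM_PER_MILE

print(f"{highway_km:.0f} km, or about {highway_miles:.0f} miles of highway")
# 400 km, or about 249 miles -> "nearly 250 miles" checks out
```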

Half of all sand is trapped behind the 845,000 dams in the world.


***

Zhong X et al (2022) Increasing material efficiencies of buildings to address the global sand crisis. Nature sustainability.

There is a rapidly unfolding sand supply crisis in meeting growing material needs for infrastructure. We find a ~45% increase in global building sand use from 2020 to 2060 under a middle-of-the-road baseline scenario.

Buildings provide the basic human needs for shelter and social infrastructure and form the foundations of societies. The construction of buildings is also highly material-intensive and consumes a large amount of metallic (for example, steel and copper) and non-metallic minerals (mainly concrete, brick and glass).

Prominent commentaries have pointed to severe global sand crises impacting regions as diverse as Cambodia, California, the Middle East and China. Sand over-exploitation has commonly driven ecosystem destruction and collapse: shoreline erosion, biodiversity loss, reduced food production, and degraded disaster resilience. This will worsen as sand extraction and new building construction grow.

The use of sand and gravel has increased faster than that of any other solid material used by humans and now represents the largest share of material use (~68–85% by mass), surpassing fossil fuels and biomass. Sand is used mostly for making concrete or glass (with concrete comprising over 95% of this use in the building sector) and requires chloride-free supplies (to prevent corrosion of other building materials) along with specific physical properties in terms of both size and shape. For example, desert sand is too smooth to be used as a binding agent for concrete, and sea sand is too high in chloride for most construction purposes. Most construction sand is extracted from rivers, lakes and shorelines.

In a rapidly growing market this has led to over-exploitation and degradation. Even when regulated, illegal sand mining and trade have been reported in ~70 countries, often involving highly organized gangs or ‘mafias’ operating with the complicity of regulators. The livelihoods of an estimated 3 billion people living along rivers are threatened by long-term, unsustainable sand exploitation, along with deep impacts on ecology and land availability.

We explore how building sand use might be reduced by 5 to 23% with these eight strategies: (1) more intensive use, (2) building lifetime extension, (3) reductions in concrete content through lightweight design, (4) timber framing, (5) component reuse, (6) substitution of natural sand with alternatives, (7) reduced floor area and (8) international cooperation.

Fountain, H., et al 2019. Melting Greenland Is Awash in Sand. New York Times.

Glaciers grind rocks into silt, sand and gravel.  Greenland hopes that there’s enough sand for them to become a sand exporter, if the environmental damage isn’t too high.

That won’t be easy.  Nearly all sand is mined within 50 miles of its destination because it costs too much to move it more than that.  So Greenland would have to find a way to make moving sand profitable.

Finding the sand is a problem as well, since much of what the glaciers produce is a fine silt that isn’t suitable for concrete.

Then if sand is found, an energy-intensive process begins: a pipe is extended to the sea floor and sucks up water and sand. Huge amounts of sand would need to be extracted into large bulk carriers, and new ports and loading facilities built. The distance to the nearest large cities is considerably longer than 50 miles: Boston is 2,250 miles away and London 1,900.

2017-7-25 Has Fracking reached peak sand?  Houston Chronicle.

2016-11-17. Sand’s End. Miami beach has run out of sand, now what?

Gillis, J.R. November 4, 2014. Why Sand Is Disappearing. New York Times.

Today 75 to 90% of the world’s natural sand beaches are disappearing, due partly to massive legal and illegal mining, rising sea levels, increasing numbers of severe storms, and massive erosion from human development along coastlines. Many low-lying barrier islands are already submerged.

The sand and gravel business is now growing faster than the economy as a whole. In the United States, the market for mined sand has become a billion-dollar annual business, growing at 10% a year since 2008. Interior mining operations use huge machines working in open pits to dig down under the earth’s surface to get sand left behind by ancient glaciers. But as demand has risen — and the damming of rivers has held back the flow of sand from mountainous interiors — natural sources of sand have been shrinking.

One might think that desert sand would be a ready substitute, but its grains are finer and smoother; they don’t adhere to rougher sand grains, and tend to blow away. As a result, the desert state of Dubai brings sand for its beaches all the way from Australia.

And now there is a global beach-quality sand shortage, caused by the industries that have come to rely on it. Sand is vital to the manufacturing of abrasives, glass, plastics, microchips and even toothpaste, and, most recently, to the process of hydraulic fracturing. The quality of silicate sand found in the northern Midwest has produced what is being called a “sand rush” there, more than doubling regional sand pit mining since 2009.

But the greatest industrial consumer of all is the concrete industry. Sand from Port Washington on Long Island — 140 million cubic yards of it — built the tunnels and sidewalks of Manhattan from the 1880s onward. Concrete still takes 80 percent of all that mining can deliver. Apart from water and air, sand is the natural element most in demand around the world, a situation that puts the preservation of beaches and their flora and fauna in great danger. Today, a branch of Cemex, one of the world’s largest cement suppliers, is still busy on the shores of Monterey Bay in California, where its operations endanger several protected species.

The huge sand mining operations emerging worldwide, many of them illegal, are happening out of sight and out of mind, as far as the developed world is concerned. But in India, where the government has stepped in to limit sand mining along its shores, illegal mining operations by what is now referred to as the “sand mafia” defy these regulations. In Sierra Leone, poor villagers are encouraged to sell off their sand to illegal operations, ruining their own shores for fishing. Some Indonesian sand islands have been devastated by sand mining.

To those of us who visit beaches only in summer, they seem as permanent a part of our natural heritage as the Rocky Mountains and the Great Lakes. But shore dwellers know differently. Beaches are the most transitory of landscapes, and sand beaches the most vulnerable of all.

Yet the extent of this global crisis is obscured because so-called beach nourishment projects attempt to hold sand in place and repair the damage by the time summer people return, creating the illusion of an eternal shore.

Before next summer, endless lines of dump trucks will have filled in bare spots and restored dunes. Virginia Beach alone has been restored more than 50 times. In recent decades, East Coast barrier islands have used 23 million loads of sand, much of it mined inland and the rest dredged from coastal waters — a practice that disturbs the sea bottom, creating turbidity that kills coral beds and damages spawning grounds, which hurts inshore fisheries.

It is time for us to understand where sand comes from and where it is going. Sand was once locked up in mountains and it took eons of erosion before it was released into rivers and made its way to the sea. As Rachel Carson wrote in 1958, “in every curving beach, in every grain of sand, there is a story of the earth.” Now those grains are sequestered yet again — often in the very concrete sea walls that contribute to beach erosion.

We need to stop taking sand for granted and think of it as an endangered natural resource. Glass and concrete can be recycled back into sand, but there will never be enough to meet the demand of every resort. So we need better conservation plans for shore and coastal areas. Beach replenishment — the mining and trucking and dredging of sand to meet tourist expectations — must be evaluated on a case-by-case basis, with environmental considerations taking top priority. Only this will ensure that the story of the earth will still have subsequent chapters told in grains of sand.

Videos about Sand:

References

Coastal Care on Sand Mining: http://coastalcare.org/sections/inform/sand-mining/

Wiki on Sand mining: http://en.wikipedia.org/wiki/Sand_mining

Sand Mining Facts: http://threeissues.sdsu.edu/three_issues_sandminingfacts01.html

Stop illegal sand mining in India  http://www.washingtonpost.com/world/asia_pacific/indias-illegal-sand-mining-fuels-boom-ravages-rivers/2012/05/19/gIQA3HzdaU_story.html

———————————————————————————–

Below is a table from CRYSTALLINE SILICA PRIMER,  Industrial Minerals, U.S. Department of the Interior http://minerals.usgs.gov/minerals/pubs/commodity/silica/780292.pdf about how sand is used (see Table 2 for even more uses):

Table 1. Silica In Commodities And End-Product Applications

Commodity / Form of silica / Major commercial applications

  • Antimony / Quartz / Flame retardants, batteries, ceramics, glass, alloys
  • Bauxite / Quartz / Aluminum production, refractories, abrasives
  • Beryllium / Quartz / Electronic applications
  • Cadmium / Quartz, jasper, opal, etc. / Batteries, coatings and platings, pigments, plastics, alloys
  • Cement / None / Concrete (quartz in concrete mix)
  • Clay / Quartz, cristobalite / Paper, ceramics, paint, refractories
  • Copper / Quartz / Electrical conduction, plumbing, machinery
  • Crushed stone / Quartz / Construction
  • Diatomite / Quartz, amorphous silica / Filtration aids
  • Dimension stone / Quartz / Building facings
  • Feldspar / Quartz / Glass, ceramics, filler material
  • Fluorspar / Quartz / Acids, steelmaking flux, glass, enamel, weld rod coatings
  • Garnet / Quartz / Abrasives, filtration, gem stone
  • Germanium / Quartz, jasper, etc. / Infrared optics, fiber optics, semiconductors
  • Gold / Quartz, chert / Jewelry, dental, industrial, monetary
  • Gypsum / Quartz / Gypsum board (prefabricated building product), industrial and building plaster
  • Industrial sand / Quartz / Glass, foundry sand
  • Iron ore / Chert, quartz / Iron and steel industry
  • Iron oxide pigment / Chert, quartz, amorphous silica / Construction materials, paint, coatings
  • Lithium / Quartz / Ceramics, glass, aluminum products
  • Magnesite / Quartz / Refractories
  • Mercury / Quartz / Chlorine and caustic soda manufacture, batteries
  • Mica / Quartz / Joint cement, paint, roofing
  • Perlite / Quartz, etc. / Building construction products
  • Phosphate rock / Quartz / Fertilizers
  • Pumice / Volcanic glass, quartz / Concrete aggregate, building block
  • Pyrophyllite / Quartz / Ceramics, refractories
  • Sand and gravel / Quartz / Construction
  • Selenium / Quartz / Photocopiers, glass manufacturing, pigments
  • Silicon / Quartz / Silicon and ferrosilicon for ferrous foundry and steel industry; computers; photoelectric cells
  • Silver / Quartz, chert / Photographic material, electrical and electronic products
  • Talc / Quartz / Ceramics, paint, plastics, paper
  • Tellurium / Quartz / Steel and copper alloys, rubber compounding, electronics
  • Thallium / Quartz, etc. / Electronics, superconductors, glass alloy
  • Titanium / Quartz / Pigments for paint, paper, plastics; metal for aircraft, chemical processing equipment
  • Tungsten / Quartz / Cemented carbides for metal machining and wear-resistant components
  • Vanadium / Quartz, amorphous silica / Alloying element in iron, steel, and titanium
  • Zinc / Quartz, etc. / Galvanizing, zinc-based alloys, chemicals, agriculture
  • Zircon / Quartz / Ceramics, refractories, zirconia production

In Heavy Industry

Foundry molds and cores for the production of metal castings are made from quartz sand. The manufacture of high-temperature silica brick for use in the linings of glass- and steel-melting furnaces represents another common use of crystalline silica in industry. The oil and gas industry uses crystalline silica to break up rock in wells. The operator pumps a water-sand mixture, under pressure, into the rock formations to fracture them so that oil and gas may be easily brought to the surface. More than 1 million tons of quartz sand were used annually for this purpose during the 1970’s and early 1980’s when oil-well drilling was at its peak. Quartz sand is also used for filtering sediment and bacteria from water supplies and in sewage treatment. Although this use of crystalline silica has increased in recent years, it still represents a small proportion of the total use.

High-Tech Applications

Historically, crystalline silica, as quartz, has been a material of strategic importance. During World War II, communications components in telephones and mobile military radios were made from quartz. With today’s emphasis on military command, control, and communications surveillance and with modern advances in sophisticated electronic systems, quartz-crystal devices are in even greater demand.

In the field of optics, quartz meets many needs. It has certain optical properties that permit its use in polarized laser beams. The field of laser optics uses quartz as windows, prisms, optical filters, and timing devices. Smaller portions of high-quality quartz crystals are used for prisms and lenses in optical instruments. Scientists are experimenting with quartz bars to focus sunlight in solar-power applications.

Quartz crystals possess a unique property called piezoelectricity. A piezoelectric crystal converts mechanical pressure into electricity and vice versa. When a quartz crystal is cut at an exact angle to its axis, pressure on it generates a minute electrical charge; likewise, an electrical charge applied to quartz causes it to vibrate more than 30,000 times per second in some applications. Piezoelectric quartz crystals are used to make electronic oscillators, which provide accurate frequency control for radio transmitters and radio-frequency telephone circuits. Incoming signals of interfering frequencies can be filtered out by piezoelectric crystals. Piezoelectric crystals are also used for quartz watches and other time-keeping devices.
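The “more than 30,000 times per second” figure corresponds to the 32,768 Hz (2^15) crystals standard in quartz watches, a detail the primer doesn’t spell out; a small sketch of why that particular frequency is convenient:

```python
# Quartz watch crystals are cut to oscillate at 32,768 Hz = 2**15,
# so repeatedly halving the frequency lands exactly on 1 pulse per second.
freq = 32_768
stages = 0
while freq > 1:
    freq //= 2   # one divide-by-two flip-flop stage
    stages += 1
print(stages)  # 15 stages take 32,768 Hz down to 1 Hz
```

A chain of fifteen simple divide-by-two circuits is all the watch needs to turn the crystal’s vibration into a once-per-second tick.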

USGS 2011 Minerals Yearbook U.S. Department of the Interior U.S. Geological Survey SAND AND GRAVEL, CONSTRUCTION

(It’s 2014 but 2011 is the most recent data available, only a third of those queried responded, stats for sand vs gravel are not broken out, no information about ecological damage or theft, a pretty inept, incomplete report overall, but for what it’s worth): A total of 810 million metric tons (Mt) of construction sand and gravel was produced in the United States in 2011. This was a slight increase of 5 Mt from the revised production of 2010, the first increase in annual production since 2006, following 4 consecutive years of decreases. The slight improvement came in response to increased demand from certain State economies experiencing the boom in natural gas and oil production and from some construction segments.

As sand and gravel became less available owing to resource constraints or economic conditions in some locales, builders began to crush bedrock to produce a manufactured sand and gravel often referred to as crushed stone.

Of the 810 Mt of construction sand and gravel produced in 2011, 60% was reported or estimated without a breakdown by end use (tables 4–5). Of the remaining 327 Mt, 44% was used as concrete aggregate; 25% was used for road base and coverings and road stabilization; 13%, for asphaltic concrete aggregate and other bituminous mixtures; 12%, for construction fill; about 1% each, for concrete products, plaster and gunite sands, and snow and ice control; and the remainder was used for golf course maintenance, filtration, railroad ballast, road stabilization, roofing granules, and many other miscellaneous uses.
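Converting those percentages back into tonnages (a rough back-of-the-envelope calculation from the USGS figures quoted above; the report itself gives only shares):

```python
# Approximate tonnage per end use, from the 327 Mt that the USGS
# broke out by end use (shares as quoted above).
total_mt = 327
shares = {
    "concrete aggregate": 0.44,
    "road base, coverings and stabilization": 0.25,
    "asphaltic concrete and bituminous mixes": 0.13,
    "construction fill": 0.12,
}
for use, share in shares.items():
    print(f"{use}: ~{total_mt * share:.0f} Mt")
# The remaining ~6% covers concrete products, plaster sands,
# snow and ice control, and miscellaneous uses.
print(f"other uses: ~{total_mt * (1 - sum(shares.values())):.0f} Mt")
```

So concrete aggregate alone accounts for roughly 144 Mt of the broken-out total.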

The high cost of transportation limits foreign trade to mostly local transactions across international boundaries. U.S. imports and exports were equivalent to less than 1% of domestic consumption.

http://forcechange.com/71868/stop-illegal-sand-mining-in-india/

Posted in Biodiversity Loss, Concrete, Peak Sand, Soil | Comments Off on Peak Sand

Boston Globe: the false promise of nuclear power

Last Updated August 2021.

Preface. This article raises many objections to nuclear power. Theoretically it could be cheaper, but the exact opposite has happened: it keeps getting more expensive. For example, the only new reactors being built in the U.S. now are at Georgia Power’s Vogtle plant (Amy 2021). Costs were initially estimated at $14 billion; the latest estimate is $27 billion. The first reactors at the plant, built in the 1970s, took a decade longer to build than planned and cost 10 times more than expected. The two under construction now were expected to be running in 2016, but it’s now unlikely they’ll be ready in 2022; in the latest delay, unit 4 won’t be ready until 2023 (Surran 2021).

The authors also point out that reactors are vulnerable to catastrophes from extreme weather, earthquakes, volcanoes, tsunamis; from technical failure; and unavoidable human error. Climate change has led to severe droughts that shut down reactors as the surrounding waters become too warm to provide the vital cooling function.

And much more.

Alice Friedemann  www.energyskeptic.com  Women in ecology. Author of Life After Fossil Fuels: A Reality Check on Alternative Energy (2021); “When Trucks Stop Running: Energy and the Future of Transportation” (2015); Barriers to Making Algal Biofuels; and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity

***

Robert Jay Lifton, Naomi Oreskes. 2019. The false promise of nuclear power. Boston Globe.

Commentators from Greenpeace to the World Bank agree that climate change is an emergency, threatening civilization and life on our planet. Any solution must involve the control of greenhouse gas emissions by phasing out fossil fuels and switching to alternative technologies that do not impair the human habitat while providing the energy we require to function as a species.

This sobering reality has led some prominent observers to re-embrace nuclear energy. Advocates declare it clean, efficient, economical, and safe. In actuality it is none of these. It is expensive and poses grave dangers to our physical and psychological well-being. According to the US Energy Information Administration, the average nuclear power generating cost is about $100 per megawatt-hour. Compare this with $50 per megawatt-hour for solar and $30 to $40 per megawatt-hour for onshore wind. The financial group Lazard recently said that renewable energy costs are now “at or below the marginal cost of conventional generation” — that is, fossil fuels — and much lower than nuclear.

In theory these high costs and long construction times could be brought down. But we have had more than a half-century to test that theory, and it appears to have been solidly refuted. Unlike nearly all other technologies, the cost of nuclear power has risen over time. Even its supporters recognize that it has never been cost-competitive in a free-market environment, and its critics point out that the nuclear industry has followed a “negative learning curve.” Both the Nuclear Energy Agency and International Energy Agency have concluded that although nuclear power is a “proven low-carbon source of base-load electricity,” the industry will have to address serious concerns about cost, safety, and waste disposal if it is to play a significant role in addressing the climate-energy nexus.

But there are deeper problems that should not be brushed aside. They have to do with the fear and the reality of radiation effects. At issue is what can be called “invisible contamination,” the sense that some kind of poison has lodged in one’s body that may strike one down at any time — even in those who had seemed unaffected by a nuclear disaster. Nor is this fear irrational, since delayed radiation effects can do just that. Moreover, catastrophic nuclear accidents, however infrequent, can bring about these physical and psychological consequences on a vast scale. No technological system is ever perfect, but the vulnerability of nuclear power is particularly great. Improvements in design cannot eliminate the possibility of lethal meltdowns. These may result from extreme weather; from geophysical events such as earthquakes, volcanoes, and tsunamis (such as the one that caused the Fukushima event); from technical failure; and from unavoidable human error. Climate change itself works against nuclear power; severe droughts have led to the shutting down of reactors as the surrounding waters become too warm to provide the vital cooling function.

Advocates of nuclear energy invariably downplay the catastrophic events at Fukushima and Chernobyl. They point out that relatively few immediate deaths were recorded in these two disasters, which is true. But they fail to take adequate account of medical projections. The chaos of both disasters and their extreme mishandling by authorities have led to great disparity in estimates. But informed evaluations in connection with Chernobyl project future cancer deaths at anywhere from several tens of thousands to a half-million.

Studies of Chernobyl and Fukushima also reveal crippling psychological fear of invisible contamination. This fear consumed Hiroshima and Nagasaki, and people in Fukushima painfully associated their own experiences with those of people in the atomic-bombed cities. The situation in Fukushima is still far from physically or psychologically stable. This fear also plagues Chernobyl, where there have been large forced movements of populations, and where whole areas poisoned by radiation remain uninhabitable.

The combination of actual and anticipated radiation effects — the fear of invisible contamination — occurs wherever nuclear technology has been used: not only at the sites of the atomic bombings and major accidents, but also at Hanford, Wash., in connection with plutonium waste from the production of the Nagasaki bomb; at Rocky Flats, Colo., after decades of nuclear testing; and at test sites in Nevada and elsewhere after soldiers were exposed to radiation following atomic bomb tests.

Nuclear reactors also raise the problem of nuclear waste, for which no adequate solution has been found despite a half-century of scientific and engineering effort. Even when a reactor is considered unreliable and is closed down, as occurred recently with the Pilgrim reactor in Plymouth, or closes for economic reasons, as at Vermont Yankee, the accumulated waste remains at the site, dangerous and virtually immortal. Under the 1982 Nuclear Waste Policy Act, the United States was required to develop a permanent repository for nuclear waste; nearly 40 years later, we still lack that repository.

Finally there is the gravest of dangers: plutonium and enriched uranium derived from nuclear reactors’ contributing to the building of nuclear weapons. Steps can be taken to reduce that danger by eliminating plutonium as a fuel, but wherever extensive nuclear power is put into use there is the possibility of its becoming weaponized. Of course, this potential weaponization makes nuclear reactors a tempting target for terrorists.

There are now more than 450 nuclear reactors throughout the world. If nuclear power is embraced as a rescue technology, there would be many times that number, creating a worldwide chain of nuclear danger zones — a planetary system of potential self-annihilation. To be fearful of such a development is rational. What is irrational is to dismiss this concern, and to insist, after the experience of more than a half-century, that a “fourth generation” of nuclear power will change everything.

Advocates of nuclear power frequently compare it to carbon-loaded coal. But coal is not the issue; it is already making its way off the world stage. The appropriate comparison is between nuclear and renewable energies. Renewables are part of an economic and energy revolution: They have become available far more quickly, extensively, and cheaply than most experts predicted, and public acceptance is high. To use renewables on the necessary scale, we will need improvements in energy storage, grid integration, smart appliances, and electric vehicle charging infrastructure. We should have an all-out national effort — reminiscent of World War II or, ironically, the making of the atomic bomb — that includes all of these areas to make renewable energies integral to the American way of life. Gas and nuclear will play a transitional role, but it is not pragmatic to bet the planet on a technology that has consistently underperformed and poses profound threats to our bodies and our minds.

Above all, we need to free ourselves of the “nuclear mystique” : the magic aura that radiation has had since the days of Marie Curie. We must question the misleading vision of “Atoms for Peace,” a vision that has always accompanied the normalization of nuclear weapons. We must free ourselves from the false hope that a technology designed for ultimate destruction could be transmogrified into ultimate life-enhancement.

References

Amy J (2021) Georgia nuclear plant cost tops $27B as more delays unveiled. Associated Press.

Surran C (2021) Southern’s Georgia nuclear plant delayed again. Seeking Alpha.

 

Posted in Nuclear Power Energy | 5 Comments

Rust Power

Preface.  This is yet another article with an energy generation idea that will probably never work out and become commercial.  But it gives hope and dreams to ordinary people who think what a cool idea, and who will never check in ten years to see if it happened.  It’s soothing to think that scientists are constantly coming up with Something. No need to worry about peak oil and other existential threats.

Now jump forward 100 years to after peak oil, which began sometime, let’s say, between 2020 and 2030. After the population has declined about 90%, the survivors will be 80-90% farmers in 2120.  Are they going to have the energy or know-how to run high-tech depositors of 10-nanometer thick iron? 

Or take this press release, Rice device channels heat into light, where engineers propose to use carbon nanotube film to create a device to recycle waste heat from industry and solar cells.

Really?  After a hard day of farming and trying to find wood and chop it to cook dinner and heat their home, the farmers are going to create nanolayers and nanotubes?

Many are calling the time after peak oil “The Great Simplification”, so whatever proposals are made need to be low-tech. It’s only the unfathomably large abundance of cheap oil that’s allowed this mirage to appear and an extra 6 billion people to be born.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

David Grossman. July 30, 2019.  Could Rust Be a New Source of Renewable Energy? Using kinetic energy, it’s got the potential to be more efficient than solar panels. Popular Mechanics.

It’s long been known that metals combined with salt water conduct electricity quite well.

This has spurred research into whether the kinetic energy of moving salt water could be transformed into electricity. At its best, this electrokinetic effect can generate electricity with around 30 percent efficiency, much higher than solar panels.

It occurred to scientists at Caltech and Northwestern that a really cheap and abundant material to try would be iron rust. But not just any rust: rusty metal at the junkyard has too thick and uneven a layer to use.

The rust required needs to be an extremely thin, evenly spread film made in a laboratory using a high-tech process called physical vapor deposition, which creates films just 10 nanometers thick, thousands of times thinner than a human hair.

But don’t think you’ll be driving a boat anytime soon that magically moves across the salty ocean. A more practical application, if this passive electrical energy can ever be made to work, is for buoys floating in the ocean, or perhaps tidal energy.

Posted in Far Out | 2 Comments

Carbon capture could require 25% of all global energy

Preface.  This is clearly a pipedream. Surely the authors know this, since they say that the energy needed to run direct air capture machines in 2100 is up to 300 exajoules each year. That’s more than half of global energy consumption today.  It’s equivalent to the current annual energy demand of China, the US, the EU and Japan combined.  It is equal to the global supply of energy from coal and gas in 2018.

That’s a showstopper. This CO2 chomper isn’t going anywhere.  It simply requires too much energy, raw materials, and an astounding, impossibly large-scale rapid deployment of 30% a year to be of any use.

Reaching 30 Gt CO2/yr of CO2 capture – a similar scale to current global emissions – would mean building some 30,000 large-scale DAC factories. For comparison, there are fewer than 10,000 coal-fired power stations in the world today. 
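The implied capacity per facility follows directly from those two figures (a back-of-the-envelope check, not a number from the study):

```python
# 30 GtCO2/yr of capture spread across ~30,000 large-scale DAC plants.
total_capture_t = 30e9      # 30 Gt expressed in tonnes
n_plants = 30_000
per_plant = total_capture_t / n_plants
print(per_plant)  # 1,000,000 tonnes (1 Mt) of CO2 per plant per year
```

Each of those 30,000 plants would have to capture about a megatonne of CO2 every year, far beyond anything built to date.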

The cement and steel used in DACCS facilities would embody a great deal of energy and CO2 emissions, which need to be subtracted from whatever is sequestered.

Nor are carbon capture sorbents a mature way to capture the CO2: they sit between the research and demonstration levels, far from being commercial, and are subject to degradation, which would lead to high operational and maintenance costs. Their manufacture also releases chemical pollutants that need to be managed, adding even more to the energy used. Plus, sorbents can require a great deal of high heat and fossil fuel inputs, possibly pushing the energy demand beyond that “quarter of global energy”.

As far as I can tell, sorbents, which are far from being commercial and very expensive to produce, are only being proposed because there’s not enough geological storage to put the CO2 in.

By the time all of the many technical barriers were overcome, oil would probably be declining, rendering a DACCS facility moot. A decline of 4-8% a year in global oil production would reduce CO2 emissions far more than DACCS; within a few decades we’d be down to a small fraction of the oil and emissions we once had.

Carbon capture in the news

2020 Carbon Capture: Silver Bullet or Mirage? Fossil fuels emitted 36.7 billion tons of CO2 last year. A new project that would remove 4,000 tons of CO2 was recently announced. Well whoop-de-do, only 9.2 million more of these plants to go. Clearly this doesn’t scale up, and for it to take off, there needs to be a clear financial incentive.
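The 9.2 million figure is easy to verify (a back-of-the-envelope check):

```python
# How many 4,000 t/yr capture projects would offset last year's fossil CO2?
emissions_t = 36.7e9        # 36.7 billion tons of CO2
project_capacity_t = 4_000  # the newly announced project's capacity
plants_needed = emissions_t / project_capacity_t
print(round(plants_needed / 1e6, 1))  # ~9.2 million plants
```

Dividing annual emissions by the capacity of a single project gives roughly 9.2 million such plants, which is the point of the sarcasm above.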

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Evans, S. 2019. Direct CO2 capture machines could use ‘a quarter of global energy’ in 2100. Carbonbrief.org

This article is a summary of: Realmonte, G. et al. 2019. An inter-model assessment of the role of direct air capture in deep mitigation pathways, Nature Communications.

Machines that suck CO2 directly from the air using direct air capture (DAC) could cut the cost of meeting global climate goals, a new study finds, but they would need as much as a quarter of global energy supplies in 2100 to limit warming to 1.5 or 2C above pre-industrial levels.

But the study also highlights the “clear risks” of assuming that DAC will be available at scale, with global temperature goals being breached by up to 0.8C if the technology then fails to deliver.

This means policymakers should not see DAC as a “panacea” that can replace immediate efforts to cut emissions, one of the study authors tells Carbon Brief, adding: “The risks of that are too high.”

DAC should be seen as a “backstop for challenging abatement” where cutting emissions is too complex or too costly, says the chief executive of a startup developing the technology. He tells Carbon Brief that his firm nevertheless “continuously push back on the ‘magic bullet’ headlines”.

Negative emissions

The 2015 Paris Agreement set a goal of limiting human-caused warming to “well below” 2C and an ambition of staying below 1.5C. Meeting this ambition will require the use of “negative emissions technologies” to remove excess CO2 from the atmosphere, according to the Intergovernmental Panel on Climate Change (IPCC).

This catch-all term of negative emissions technologies covers a wide range of approaches, including planting trees, restoring peatlands and other “natural climate solutions”. Model pathways rely most heavily on bioenergy with carbon capture and storage (BECCS), where biomass, such as wood pellets, is burned to generate electricity and the resulting CO2 is captured and stored. The significant potential role for BECCS raises a number of concerns, with land areas up to five times the size of India devoted to growing the biomass needed in some model pathways.

Another alternative is direct air capture, where machines are used to suck CO2 out of the atmosphere. If the CO2 is then buried underground, the process is sometimes referred to as direct air carbon capture and storage (DACCS).

Today’s new study explores how DAC could help meet global climate goals with “lower costs”, using two different integrated assessment models (IAMs). Study author Dr Ajay Gambhir, senior research fellow at the Grantham Institute for Climate Change at Imperial College London, explains to Carbon Brief:

“This is the first inter-model comparison…[and] has the most detailed representation of DAC so far used in IAMs. It includes two DAC technologies, with different energy inputs and cost assumptions, and a range of energy inputs including waste heat. The study uses an extensive sensitivity analysis [to test the impact of varying our assumptions]. It also includes initial analysis of the broader impacts of DAC technology development, in terms of material, land and water use.”

The two DAC technologies included in the study are based on different ways to adsorb CO2 from the air, which are being developed by a number of startup companies around the world.

One, typically used in larger industrial-scale facilities such as those being piloted by Canadian firm Carbon Engineering, uses a solution of hydroxide to capture CO2. This mixture must then be heated to high temperatures to release the CO2 so it can be stored and the hydroxide reused. The process uses existing technology and is currently thought to have the lower cost of the two alternatives.

The second technology uses amine adsorbents in small, modular reactors such as those being developed by Swiss firm Climeworks. Costs are currently higher, but the potential for savings is thought to be greater, the paper suggests. This is due to the modular design that could be made on an industrial production line, along with lower temperatures needed to release CO2 for storage, meaning waste heat could be used.

Delayed cuts

Overall, despite “huge uncertainty” around the cost of DAC, the study suggests its use could allow early cuts in global greenhouse gas emissions to be somewhat delayed, “significantly reducing climate policy costs” to meet stringent temperature limits.

Using DAC means that global emissions in 2030 could remain at higher levels, the study says, with much larger use of negative emissions later in the century.  

The use of DAC in some of the modelled pathways delays the need to cut emissions in certain areas. The paper explains: “DACCS allows a reduction in near term mitigation effort in some energy-intensive sectors that are difficult to decarbonise, such as transport and industry.”

Steve Oldham, chief executive of DAC startup Carbon Engineering, says he sees this as the key purpose of CO2 removal technologies, which he likens to other “essential infrastructure” such as waste disposal or sewage treatment.

Oldham tells Carbon Brief that while standard approaches to cutting CO2 remain essential for the majority of global emissions, the challenge and cost may prove too great in some sectors. He says:

“DAC and other negative emissions technologies are the right solution once the cost and feasibility becomes too great…I see us as the backstop for challenging abatement.”

Comparing costs

Even though DAC may be relatively expensive, the model pathways in today’s study still see it as much cheaper than cutting emissions from these hard-to-tackle sectors. This means the models deploy large amounts of DAC, even if its costs are at the high end of current estimates.

It also means the models see pathways to meeting climate goals that include DAC as having lower costs overall (“reduce[d]… by between 60 to more than 90%”). Gambhir tells Carbon Brief:

“Deploying DAC means less of a steep mitigation pathway in the near-term, and lowers policy costs, according to the modelled scenarios we use in this study.”

Gambhir adds:

“Large-scale deployment of DAC in below-2C scenarios will require a lot of heat and electricity and a major manufacturing effort for production of CO2 sorbent. Although DAC will use less resources such as water and land than other NETs [such as BECCS], a proper full life-cycle assessment needs to be carried out to understand all resource implications.”

Deployment risk

There are also questions as to whether this new technology could be rolled out at the speed and scale envisaged, with expansion at up to 30% each year and deployment reaching 30 GtCO2/yr towards the end of the century. This is a “huge pace and scale”, Gambhir says, with the rate of deployment being a “key sensitivity” in the study results.

Prof Jennifer Wilcox, professor of chemical engineering at Worcester Polytechnic Institute, who was not involved with the research, says that this rate of scale-up warrants caution. She tells Carbon Brief:

“Is the rate of scale-up even feasible? Typical rules of thumb are increase by an order of magnitude per decade [growth of around 25-30% per year]. [Solar] PV scale-up was higher than this, but mostly due to government incentives…rather than technological advances.”
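That rule of thumb is easy to sanity-check: an order of magnitude per decade corresponds to roughly 26% compound growth per year. A quick sketch:

```python
# An order of magnitude (10x) per decade, compounded annually:
annual_rate = 10 ** (1 / 10) - 1
print(f"{annual_rate:.1%} per year")        # ~25.9% per year

# Conversely, 30% per year sustained for a decade gives:
decade_factor = 1.30 ** 10
print(f"{decade_factor:.1f}x per decade")   # ~13.8x per decade
```

So the study's assumed 30%/yr expansion is slightly faster than one order of magnitude per decade.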

If DAC were to be carried out using small modular systems, then as many as 30m might be needed by 2100, the paper says. It compares this number to the 73m light vehicles that are built each year.

The study argues that expanding DAC at such a rapid rate is comparable to the speed with which newer electricity generation technologies such as nuclear, wind and solar have been deployed.

The modelled rate of DAC growth is “breathtaking” but “not in contradiction with the historical experience”, Bauer says. This rapid scale-up is also far from the only barrier to DAC adoption.

The paper explains: “[P]olicy instruments and financial incentives supporting negative emission technologies are almost absent at the global scale, though essential to make NET deployment attractive.”

Carbon Engineering’s Oldham agrees that there is a need for policy to recognise negative emissions as unique and different from standard mitigation. But he tells Carbon Brief that he remains “very very confident” in his company’s ability to scale up rapidly.

(Today’s study includes consideration of the space available to store CO2 underground, finding this not to be a limiting factor for DAC deployment.)

Breaching limits

The paper says that the challenges to scale-up and deployment on a huge scale bring significant risks, if DAC does not deliver as anticipated in the models. Committing to ramping up DAC rather than cutting emissions could mean locking the energy system into fossil fuels, the authors warn.

This could risk breaching the Paris temperature limits, the study explains:

“The risk of assuming that DACCS can be deployed at scale, and finding it to be subsequently unavailable, leads to a global temperature overshoot of up to 0.8C.”

Gambhir says the risks of such an approach are “too high”:

“Inappropriate interpretations [of our findings] would be that DAC is a panacea and that we should ease near-term mitigation efforts because we can use it later in the century.”

Bauer agrees:

“Policymakers should not make the mistake to believe that carbon removals could ever neutralise all future emissions that could be produced from fossil fuels that are still underground. Even under pessimistic assumptions about fossil fuel availability, carbon removal cannot and will not fix the problem. There is simply too much low-cost fossil carbon that we could burn.”

Nonetheless, Prof Massimo Tavoni, one of the paper’s authors and the director of the European Institute on Economics and the Environment (EIEE), tells Carbon Brief that “it is still important to show the potential of DAC – which the models certainly highlight – but also the many challenges of deploying at the scale required”.

The global carbon cycle poses one final – and underappreciated – challenge to the large-scale use of negative emissions technologies such as DAC: ocean rebound. This is because the amount of CO2 in the world’s oceans and atmosphere is in a dynamic and constantly shifting equilibrium.

This equilibrium means that, at present, oceans absorb a significant proportion of human-caused CO2 emissions each year, reducing the amount staying in the atmosphere. If DAC is used to turn global emissions net-negative, as in today’s study, then that equilibrium will also go into reverse.

As a result, the paper says as much as a fifth of the CO2 removed using DAC or other negative emissions technologies could be offset by the oceans releasing CO2 back into the atmosphere, reducing their supposed efficacy.
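As a toy illustration of that rebound accounting, using the paper's one-fifth upper estimate as a flat fraction (a simplification; the real ocean-atmosphere exchange is dynamic):

```python
# Ocean rebound: when net-negative emissions pull CO2 out of the air,
# the oceans partially re-release CO2 they had previously absorbed,
# offsetting part of the gross removal.
def effective_removal(gross_removal_gt, rebound_fraction=0.20):
    """Net atmospheric CO2 reduction after ocean outgassing offsets
    a fixed fraction of the gross removal (illustrative only)."""
    return gross_removal_gt * (1 - rebound_fraction)

# 30 GtCO2/yr of gross DAC removal, as in the late-century scenarios:
print(round(effective_removal(30), 1))   # -> 24.0 GtCO2/yr net
```

In other words, a DAC fleet sized for 30 GtCO2/yr of gross removal might only lower atmospheric CO2 by about 24 GtCO2/yr.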

Posted in CO2 and Methane, Far Out

Himalayan glaciers supplying water to a billion people melting fast, Andes too

Preface. The Himalayan glaciers that supply water to a billion people are melting fast; 30% of their ice has already been lost since 1975.

Adding to the crisis are the 400 dams under construction or planned for Himalayan rivers in India, Pakistan, Nepal, and Bhutan to generate electricity and for water storage.  The dams’ reservoirs and transmission lines will destroy biodiversity, thousands of houses, towns, villages, fields, 660 square miles of forests, and even parts of the highest highway of the world, the Karakoram highway. The dam projects are at risk of collapse from earthquakes in this seismically active region and of breach from flood bursts from glacial lakes upstream. Dams also threaten to intensify flooding downstream during intense downpours when reservoirs overflow (IR 2008, Amrith 2018).

Since the water flows to 16 nations, clearly these dams could cause turmoil and even war if river flows are cut off from downstream countries.  Three of these nations, India, Pakistan, and China, have nuclear weapons.

It’s already happening. After a terrorist attack that killed 40 Indian police officers in Kashmir, India decided to retaliate by cutting off some river water that continues on to Pakistan, adding an extra source of conflict between two nuclear-armed neighbors. Pakistan is one of the most water-stressed countries in the world with seriously depleted underground aquifers and less storage behind their 2 largest dams due to silt (Johnson 2019).

But here’s some good news: glaciers in the Himalayas have 37% more ice than thought, a reprieve for the 250 million people living near them (NS 2022).

But the glaciers of the Andean mountains have reached “peak water” much sooner than expected because they are 27% thinner than previously estimated (NS 2022).

Related news:

June 15, 2020: A water crisis looms for 270 million people as South Asia’s glaciers shrink. National Geographic.

Nov 21, 2019. ‘Maybe It Will Destroy Everything’: Pakistan’s Melting Glaciers Cause Alarm. NPR.org. …pollution and global warming are causing the Ultar glacier to melt and form unstable lakes that could burst their icy banks at any moment. Already this summer, many of the Harchi valley farms in Pakistan were destroyed in glacial floods. More than 3,000 glaciers have formed unstable lakes. At least 30 are at risk of bursting, which can trigger ice avalanches and flash floods that bring down water, debris and boulders.


***

Wu, K. 2019. Declassified spy images show Earth’s ‘Third Pole’ is melting fast.  Accelerating ice melt in the Himalayas may imperil up to a billion people in South Asia who rely on glacier runoff for drinking water and more. PBS.org

According to a study published today in the journal Science Advances, rising temperatures in the Himalayas have melted nearly 30% of the region’s total ice mass since 1975.

These disappearing glaciers imperil the water supply of up to a billion people throughout Asia.

Once nicknamed Earth’s ‘Third Pole’ for its impressive cache of snow and ice, the Himalayas may now have a bleak future ahead. Four decades of satellite data, including recently declassified Cold War-era spy film, suggest these glaciers are currently receding twice as fast as they were at the end of the 20th century.

Several billion tons of ice are sloughing off the Himalayas each year without being replaced by snow. That spells serious trouble for the peoples of South Asia, who depend on seasonal Himalayan runoff for agriculture, hydropower, drinking water, and more. Melting glaciers could also prompt destructive floods and threaten local ecosystems, generating a ripple effect that may extend well beyond the boundaries of the mountain’s warming peaks.

The study’s sobering findings come as the result of a massive compilation of data across time and space. While previous studies have documented the trajectories of individual glaciers in the Himalayas, the new findings track 650 glaciers that span a staggering 1,250-mile-wide range across Nepal, Bhutan, India, and China. They also draw on some 40 years of satellite imagery, which the scientists stitched together to reconstruct a digital, three-dimensional portrait of the glaciers’ changing surfaces—almost like an ultra-enhanced panorama.

When a team of climatologists analyzed the time series, they found a stark surge in glacier shrinkage. Between 1975 and 2000, an average of about 10 inches of ice were shed from the glaciers each year. Post-Y2K, however, the net loss doubled to around 20 inches per year—a finding in keeping with accelerated rates of warming around the globe.
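A rough back-of-envelope calculation connects the thinning rate to the quoted mass loss. Note the glacierized area used here is an illustrative assumption, not a figure from the article:

```python
# Link "~20 inches of ice thinning per year" to "several billion tons"
# of annual loss. The glacierized area is assumed for illustration.
ICE_DENSITY = 900.0                      # kg/m^3, typical glacier ice
thinning_m_per_yr = 20 * 0.0254          # 20 inches/yr in metres
area_m2 = 15_000 * 1e6                   # assumed 15,000 km^2, in m^2

# mass loss in gigatonnes (billion tonnes) per year:
mass_loss_gt = thinning_m_per_yr * ICE_DENSITY * area_m2 / 1e12
print(round(mass_loss_gt, 1))            # ~6.9 Gt/yr, i.e. "several billion tons"
```

Half a metre of thinning over an assumed 15,000 km² of ice yields roughly 7 billion tonnes a year, consistent with the article's "several billion tons".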

While previous studies have had difficulty disentangling the relative contributions of rising temperatures, ice-melting pollutants, and reduced rainfall to the boost in glacier melt, the latter two simply aren’t enough to explain the alarming drop in ice mass in recent years.

References

Amrith SS (2018) The race to dam the Himalayas. Hundreds of big projects are planned for the rivers that plunge from the roof of the world. New York Times.

IR (2008) Mountains of concrete: Dam building in the Himalayas. International Rivers.

Johnson K (2019) Are India and Pakistan on the verge of a water war? Foreign Policy.

NS (2022) Andes ‘peak water’ looms. New Scientist. 

Posted in Caused by Scarce Resources, Climate Change, Dams, Peak Water, Planetary Boundaries, Water, Water Infrastructure

Billionaire apocalypse bunkers & other hideouts

Source: Cohen L (2018) A Survival Condo in a Missile Silo? It’s a Thing. Zillow

Preface. There are many reasons why people might want a bunker, but peak oil, peak food, and peak everything else for that matter were not mentioned in Rushkoff’s “Survival of the Richest” or the articles below. When the billionaires and millionaires finally emerge, they’ll have no skills to survive. You can’t run from the “end of the world” to a bunker or anywhere else. A bunker will just be a very fancy tombstone, or a place for the locals to look for food and other goodies when they run out.


Posted in Where are the rich going

How safe are utility-scale energy storage batteries?


This 2 MW battery, installed by the Arizona Public Service electric company, exploded in April 2019 and sent eight firefighters and a policeman to the hospital (Cooper 2019). At least 23 South Korean lithium-ion facilities caught fire in a series of incidents dating back to August 2017 (Deign 2019).

Preface. Airplanes can be forced to make an emergency landing if even a small external battery pack, like the kind used to charge cell phones, catches on fire (Mogg 2019). So if even a small battery pack can force an airplane to land, imagine the conflagration a gigantic utility-scale storage battery might cause.

A lithium-ion battery designed to store just one day of U.S. electricity generation (11 TWh) to balance solar and wind power would be huge. Using data from the Department of Energy (DOE/EPRI 2013) energy storage handbook, I calculated that a utility-scale lithium-ion battery capable of storing 24 hours of electricity generation in the United States would cost $11.9 trillion, take up 345 square miles, and weigh 74 million tons. Though of course there’d be many of them: if each one took an acre you’d have 220,800 acres of them, and they do take up an acre or more because there’s other equipment, transmission lines, substations and more on each site.
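As a sanity check, the per-kWh figures implied by those totals can be back-calculated. The derived values below come from the totals in this post, not directly from the DOE/EPRI handbook:

```python
# Back out the per-kWh figures implied by the one-day storage estimate
# above: 11 TWh of capacity, $11.9 trillion, 74 million tons, 345 sq mi.
ENERGY_KWH = 11e9                        # 11 TWh expressed in kWh

cost_per_kwh = 11.9e12 / ENERGY_KWH      # implied installed cost, $/kWh
kg_per_kwh = 74e9 / ENERGY_KWH           # implied pack weight, kg/kWh
acres = 345 * 640                        # 345 sq mi -> acres (640 acres/sq mi)

print(round(cost_per_kwh), round(kg_per_kwh, 1), acres)
# -> 1082 6.7 220800
```

Roughly $1,100 per kWh installed and about 7 kg per kWh, which is plausible for installed utility-scale systems of that era.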

Since at least six weeks of energy storage is needed to keep the grid up during stretches when there’s no sun or wind, you’d need 1,325,800 acres of battery sites. This storage has to come mainly from batteries, because geographically there are very few places to put Compressed Air Energy Storage (CAES) or Pumped Hydro energy Storage (PHS), both of which also have a very low energy density, or Concentrated Solar Power with thermal energy storage, which requires desert sites of about 5 square miles each. Currently natural gas is the main form of energy storage, always available to step in quickly when the wind dies and the sun goes down, as well as to provide power around the clock with help from coal, nuclear, and hydropower.

Storing large amounts of energy, whether in larger rechargeable batteries or smaller disposable ones, can be inherently dangerous. The causes of lithium battery failure include puncture, overcharge, overheating, short circuit, internal cell failure and manufacturing deficiencies. Nearly all of the utility-scale batteries now on the grid or in development are massive versions of the same lithium-ion technology that powers cellphones and laptops. If the batteries get too hot, a fire can start and trigger a phenomenon known as thermal runaway, in which the fire feeds on itself and is nearly impossible to stop until it consumes all the available fuel.

There are several articles below about battery hazards; the main one, at the end, summarizes an 82-page Department of Energy document on this subject. Clearly, containing utility-scale battery fires will be difficult:

“Thermal damage to ESS chemistries and components presents unique challenges to the fire service community, building owners, and insurers. As evidenced in full-scale testing of EV battery fires, fire suppression required more water than anticipated, and significantly more in some cases. Additionally, confirming that the fire was completely extinguished was difficult due to the containment housings of EV batteries that can mask continued thermal reaction within undamaged cells. In one of the tests performed by Exponent, Inc., one battery reignited after being involved in a full-scale fire test some 22 hours post-extinguishment; in another case, an EV experienced a subsequent re-ignition 3 weeks post-crash testing.”

In the news:

2022-9-20: Officials closed Highway 1 in both directions in Moss Landing early Tuesday morning after a fire was detected at PG&E’s Elkhorn Battery Storage facility in Monterey County, where one Tesla Megapack caught fire at 1:30 a.m. “There is an ongoing hazardous materials incident in Moss Landing. Please shut your windows and turn off your ventilation systems. In the event of changing weather patterns, impacted areas may change,” warned Monterey County spokeswoman Maia Carroll. Capt. John Hasslinger of the North County Fire Protection District said that when firefighters arrived at the facility, one of the battery packs was actively burning. https://www.santacruzsentinel.com/2022/09/20/caltrans-highway-1-temporarily-closed-in-moss-landing/


***

Larsson F et al (2017) Toxic fluoride gas emissions from lithium-ion battery fires. Scientific Reports.

https://www.nature.com/articles/s41598-017-09784-z

[Note: this paper is looking at car batteries — utility scale batteries are thousands of times larger ]

Lithium-ion battery fires generate intense heat and considerable amounts of gas and smoke. Although the emission of toxic gases can be a larger threat than the heat, the knowledge of such emissions is limited. The risks associated with gas and smoke emissions from malfunctioning lithium-ion batteries may in some circumstances be a larger threat, especially in confined environments where people are present, such as in an aircraft, a submarine, a mine shaft, a spacecraft or in a home equipped with a battery energy storage system. This paper presents quantitative measurements of heat release and fluoride gas emissions during battery fires for seven different types of commercial lithium-ion batteries. Large amounts of hydrogen fluoride (HF) can be generated, ranging between 20 and 200 mg/Wh of nominal battery energy capacity. In addition, 15–22 mg/Wh of another potentially toxic gas, phosphoryl fluoride (POF3), was measured in some of the fire tests. Fluoride gas emission can pose a serious toxic threat and the results are crucial findings for risk assessment and management, especially for large Li-ion battery packs.
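To get a feel for what those per-Wh figures mean at grid scale, here is a sketch that applies them to a hypothetical 1 MWh container, a size chosen for illustration and not taken from the paper:

```python
# Scale the HF emission range reported by Larsson et al. (20-200 mg per Wh
# of nominal capacity) from cells up to a hypothetical 1 MWh container.
HF_MG_PER_WH = (20, 200)        # hydrogen fluoride, mg per Wh (from the paper)
capacity_wh = 1_000_000         # assumed 1 MWh container, in Wh

# kilograms of HF potentially released in a full-container fire:
hf_kg = [rate * capacity_wh / 1e6 for rate in HF_MG_PER_WH]
print(hf_kg)                    # -> [20.0, 200.0] kg of HF
```

Tens to hundreds of kilograms of HF from a single container fire is why the toxic-gas threat can exceed the thermal one, especially in confined spaces.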

USDOE. December 2014. Energy Storage Safety Strategic Plan. U.S. Department of Energy.

Energy storage is emerging as an integral component of a resilient and efficient grid through a diverse array of potential applications. The evolution of the grid that is currently underway will result in a greater need for services best provided by energy storage, including energy management, backup power, load leveling, frequency regulation, voltage support, and grid stabilization. The increase in demand for specialized services will further drive energy storage research to produce systems with greater efficiency at lower cost, which will lead to an influx of energy storage deployments across the country. To enable the success of these deployments of a wide variety of storage technologies, safety must be instilled within the energy storage community at every level and in a way that meets the needs of every stakeholder. In 2013, the U.S. Department of Energy released the Grid Energy Storage Strategy, which identified four challenges related to the widespread deployment of energy storage. The second of these challenges, the validation of energy storage safety and reliability, has recently garnered significant attention from the energy storage community at large. This focus on safety must be ensured immediately to enable the success of the burgeoning energy storage industry, so that community confidence can be established that human life and property are not at risk.

The need for safe application and use of energy storage technology knows no bounds. An energy storage system (ESS) will react to an external event, such as a seismic occurrence, regardless of its location in relation to the meter or the grid. Similarly, an incident triggered by an ESS, such as a fire, is ‘blind’ as to the location of the ESS in relation to the meter.

Most of the current validation techniques that have been developed to address energy storage safety concerns have been motivated by the electric vehicle community, and are primarily focused on Li-ion chemistry and derived via empirical testing of systems. Additionally, techniques for Pb-acid batteries have been established, but must be revised to incorporate chemistry changes within the new technologies. Moving forward, all validation techniques must be expanded to encompass grid-scale energy storage systems, be relevant to the internal chemistries of each new storage system and have technical bases rooted in a fundamental-scientific understanding of the mechanistic responses of the materials.

Introduction

Grid energy storage systems are “enabling technologies”; they do not generate electricity, but they do enable critical advances to modernize the electric grid. For example, numerous studies have determined that the deployment of variable generation resources will impact the stability of the grid unless storage is included. Additionally, energy storage has been demonstrated to provide key grid support functions through frequency regulation. The diversity in performance needs and deployment environments drives the need for a wide array of storage technologies.

Often, energy storage technologies are categorized as being high-power or high-energy. This division greatly benefits the end user of energy storage systems because it allows for the selection of a technology that fits an application’s requirements, thus reducing cost and maximizing value. Frequency regulation requires very rapid response, i.e. high power, but does not necessarily require high energy. By contrast, load-shifting requires very high energy, but is more flexible in its power needs. Uninterruptible power and variable generation integration are applications whose needs for high power versus high energy fall somewhere between these extremes. Figure 1 shows the current energy storage techniques deployed onto the North American grid. This variety in storage technologies increases the complexity of developing a single set of protocols for evaluating and improving the safety of grid storage technologies, and drives the need for understanding across length scales, from fundamental materials processes through full-scale system integration.

Figure 1. Percentage of battery energy storage systems deployed, by total megawatts:

Lithium ion: 41.79%
Lead acid: 28.20%
Flow: 14.38%
Sodium sulfur: 8.17%
Lithium iron phosphate: 4.84%
Other: 2.62%

The variety of deployment environments and application spaces compounds the complexity of the approaches needed to validate the safety of energy storage systems. The difference in deployment environment impacts the safety concerns, needs, risk, and challenges that affect stakeholders. For example, an energy storage system deployed in a remote location will have very different potential impacts on its environment and first responder needs than a system deployed in a room in an office suite, or on the top floor of a building in a city center. The closer the systems are to residences, schools, and hospitals, the higher the impact of any potential incident regardless of system size.

Pumped hydro is one of the oldest and most mature energy storage technologies and represents 95% of the installed storage capacity. Other storage technologies, such as batteries, flywheels and others, make up the remaining 5% of the installed storage base, are much earlier in their deployment cycle and have likely not reached the full extent of their deployed capacity.

Though flywheels are relative newcomers to the grid energy storage arena, they have been used as energy storage devices for millennia, with the earliest known flywheel dating from Mesopotamia around 3100 BC. Grid-scale flywheels operate by spinning a rotor at up to tens of thousands of RPM, storing energy in a combination of rotational kinetic energy and elastic energy from deformation of the rotor. These systems typically have large rotational masses which, in the case of a catastrophic radial failure, need a robust enclosure to contain the debris. However, if the mass of the debris particles can be reduced through engineering design, the strength, size and cost of the containment system can be significantly reduced.
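The energy stored is ordinary rotational kinetic energy, E = ½Iω². A sketch with illustrative numbers, not drawn from any particular grid-scale product:

```python
import math

def flywheel_energy_kwh(mass_kg, radius_m, rpm):
    """Energy stored in a solid-cylinder flywheel rotor, in kWh."""
    inertia = 0.5 * mass_kg * radius_m ** 2   # I = 1/2 m r^2 for a solid cylinder
    omega = rpm * 2 * math.pi / 60            # convert RPM to rad/s
    joules = 0.5 * inertia * omega ** 2       # E = 1/2 I w^2
    return joules / 3.6e6                     # joules -> kWh

# A hypothetical 1-tonne, 0.5 m radius rotor at 20,000 RPM:
print(round(flywheel_energy_kwh(1000, 0.5, 20000), 1))   # ~76 kWh
```

Because energy scales with the square of speed, doubling the RPM quadruples the stored energy, which is why designs push to tens of thousands of RPM and why containing a failed rotor is such a concern.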

As electrochemical technologies, battery systems used in grid storage can be further categorized as redox flow batteries, hybrid flow batteries, and secondary batteries without a flowing electrolyte. For the purposes of this document, vanadium redox flow batteries and zinc bromine flow batteries are considered for the first two categories, and lead-acid, lithium ion, sodium nickel chloride and sodium sulfur technologies in the latter category. As will be discussed in detail in this document, there are a number of safety concerns specific to batteries that should be addressed, e.g. release of the stored energy during an incident, cascading failure of battery cells, and fires.

A reactive approach to energy storage safety is no longer viable. The number and types of energy storage deployments have reached a tipping point, with dramatic growth anticipated in the next few years fueled in large part by major new policy-related storage initiatives in California, Hawaii, and New York. The new storage technologies likely to be deployed in response to these and other initiatives are maturing too rapidly to justify moving ahead without a unified, scientifically based set of safety validation techniques and protocols. A compounding challenge is that many of these new storage technologies are being developed by startup companies with limited resources and deployment experience. Standardization of safety processes will greatly improve the cost-effectiveness and viability of new technologies, and of the startup companies themselves. The modular nature of ESS means there is no single entity clearly responsible for ESS safety; instead, each participant in the energy storage community has a role and a responsibility. The following sections outline the gaps in addressing the need for validated grid energy storage system safety.

To date, the most extensive energy storage safety and abuse R&D efforts have been done for Electric Vehicle (EV) battery technologies. These efforts have been limited to lithium ion, lead-acid and nickel metal hydride chemistries and, with the exception of grid-scale lead-acid systems, are restricted to smaller size battery packs applicable to vehicles.

The increased scale, complexity, and diversity of technologies being proposed for grid-scale storage necessitate a comprehensive strategy for adequately addressing safety in grid storage systems. The technologies deployed onto the grid fall into the categories of electrochemical, electromechanical, and thermal, and span different system types, including CAES, flywheels, pumped hydro and superconducting magnetic energy storage (SMES). This presents a significant area of effort to be coordinated and tackled in the coming years, as a number of gaps currently exist in codes and standards around safety in the field. R&D efforts must be coordinated to begin to address the challenges.

An energy storage system can be categorized primarily by its power, energy and technology platform. For grid-scale systems, the power/energy spectrum spans from smaller kW/kWh to large MW/MWh systems. Smaller kW/kWh systems can be deployed for residential and community storage applications, while larger MW/MWh systems are envisioned for electric utility transmission and distribution networks to provide grid-level services. This is in contrast to electric vehicles, for which the U.S. Advanced Battery Consortium (USABC) goals are both clearly defined and narrow in scope, with an energy goal of 40 kWh. While in practice some EV packs are as large as 90 kWh, the range of energy is still small compared with grid storage applications. This research is critical to the ability of first responders to understand the risks posed by ESS technologies, and allows for the development of safe strategies to minimize risk and mitigate events.

Furthermore, the diversity of battery technologies and stationary storage systems is not generally present in the EV community. Therefore, the testing protocols and procedures used historically and currently for storage systems for transportation are insufficient to adequately address this wide range of storage systems technologies for stationary applications. Table 1 summarizes the high level contrast between this range of technologies and sizes of storage in the more established area of EV. The magnitude of effort that must be taken on to encompass the needs of safety in stationary storage is considerable because most research and development to improve safety and efforts to develop safety validation techniques are in the EV space. Notably, the size of EV batteries ranges by a factor of two; by contrast, stationary storage scales across many orders of magnitude. Likewise, the range of technologies and uses in stationary storage are much more varied than in EV. Therefore, while the EV safety efforts pave the way in developing R&D programs around safety and developing codes and standards, they are highly insufficient to address many of the significant challenges in approaching safe development, installation, commissioning, use and maintenance of stationary storage systems.

An additional complexity of grid storage systems is that the storage system can either be built on-site or pre-assembled, typically in shipping containers. These pre-assembled systems allow for factory testing of the fully integrated system, but are exposed to potential damage during shipping. For the systems built on site, the assembly is done in the field; much of the safety testing and qualification could potentially be done by local inspectors, who may or may not be as aware of the specifics of the storage system. Therefore, the safety validation of each type of system must be approached differently and each specific challenge must be addressed.

Batteries and flywheels are currently the primary focus for enhanced grid-scale safety. For these systems, the associated failure modes at grid-scale power and energy requirements have not been well characterized, and there is much larger uncertainty around the risks and consequences of failures. This uncertainty around system safety can lead to barriers to adoption and market success, such as difficulty with assessing value and risk to these assets, and determining the possible consequences to health and the environment. To address these barriers, concerted efforts are needed in the following areas:

• Materials science R&D – research into all device components
• Engineering controls and system design
• Modeling
• System testing and analysis
• Commissioning and field system safety research

It is a notable challenge within these areas to develop understanding and confidence in relating results at one scale to expected outcomes at another, predicting the interplay between components, and protecting against unexpected outcomes when more than one failure mode is present at the same time in a system. Extensive research, modeling and validation are required to address these challenges. It is also necessary to pool the analysis approaches of failure mode and effects analysis (FMEA) and to use a safety basis in both research and commissioning to build a robust safety program. Finally, identifying, responding to, and mitigating any observed safety events is critical in validating the safety of storage.

A holistic view with regard to setting standards to ensure thorough safety validation techniques is the desired end goal; the first step is to study, at the R&D level, failure from the cell to the system level, and from the electrochemistry and kinetics of the materials to module-scale behavior. Detailed hazards analysis must be conducted for entire systems in order to identify failure points caused by abuse conditions and the potential for cascading events, which may result in large-scale damage and/or fire. While treating the storage system as a “black box” is helpful in setting practical standards for installation, understanding the system at the basic materials and chemistry levels, and how issues can initiate failure at the cell and system level, is critical to ensure overall system safety.

For batteries, understanding the fundamental electrochemistry and materials changes under selected operating conditions helps guide cell-level safety. Knowledge of cell-level failure modes and how they propagate to battery packs guides the cell chemistry, cell design, and integration. Each system has different levels of risk associated with its basic electrochemistry that must be understood; the trade-off between electrochemical performance and safety must be managed. There are some commonalities in safety issues between storage technologies. For example, breaching of a Na/S (NAS) or Na/NiCl2 (Zebra) battery could expose molten material and transfer heat to adjacent cells. Evolution of H2 from lead-acid cells, or of H2 and solvent vapor from lithium-ion batteries during overcharge abuse, could result in a flammable/combustible gas mixture. Thermal runaway in lithium-ion (Li-ion) cells could transfer heat to adjacent cells and propagate the failure through a battery.
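Why cell-level failures matter at the pack level can be seen in a toy model. The sketch below (my own illustration; every parameter is invented) shows how one cell in runaway can conduct enough heat into its neighbors to push them past their own runaway onset, cascading down the module:

```python
# Toy 1-D cell-to-cell thermal propagation sketch (illustrative parameters only).
# Each step, a fraction of the temperature difference conducts between neighbors;
# any cell passing the runaway onset temperature releases a burst of heat.

N_CELLS = 8
AMBIENT = 25.0          # deg C
ONSET = 150.0           # hypothetical runaway onset temperature
RUNAWAY_RISE = 400.0    # hypothetical temperature jump on runaway
COUPLING = 0.2          # fraction of temperature difference shared per step

temps = [AMBIENT] * N_CELLS
temps[0] = 600.0        # cell 0 is already in thermal runaway
failed = {0}

for step in range(50):
    new = temps[:]
    for i in range(N_CELLS):
        for j in (i - 1, i + 1):
            if 0 <= j < N_CELLS:
                new[i] += COUPLING * (temps[j] - temps[i]) / 2
    for i in range(N_CELLS):
        if i not in failed and new[i] >= ONSET:
            failed.add(i)
            new[i] += RUNAWAY_RISE   # exothermic burst propagates the failure
    temps = new

print("cells failed after 50 steps:", sorted(failed))
```

Real multi-scale models couple electrochemistry, gas generation, and 3-D heat transfer; the point of the toy is only that containment and cell spacing (lower effective coupling) are what break the chain.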

Moreover, while physical hazards are often considered, health and environmental safety issues also need to be evaluated for a complete understanding of the potential hazards associated with a battery failure. These may include the toxicity of gas species evolved from a cell during abuse or exposure to abnormal environments, the toxicity of electrolyte during a cell breach or spill in a vanadium redox flow battery (VRB), and the environmental impact of runoff from water used to extinguish a battery fire containing heavy metals. Flywheels present an entirely different set of considerations, including mechanical containment testing and modeling, vacuum loss testing, and material fatigue testing under stress.

The topic of Li-ion battery safety is rapidly gaining attention as the number of battery incidents increases. Recent incidents, such as a cell phone runaway during a regional flight in Australia and a United Parcel Service plane crash near Dubai, reinforce the potential consequences of Li-ion battery runaway events. The sheer size of grid storage needs and the operational demands make it increasingly difficult to find materials with the necessary properties, especially the required thermal behavior to ensure fail-proof operation. The main failure modes for these battery systems are either latent (manufacturing defects, operational heating, etc.) or abusive (mechanical, electrical, or thermal).

Any of these failures can increase the internal temperature of the cell, leading to electrolyte decomposition, venting, and possible ignition. While significant strides are being made, major challenges remain in combating solvent flammability, the most significant area needing improvement to address the safety of Li-ion cells; it is therefore discussed here in greater detail. To mitigate thermal instability of the electrolyte, a number of different approaches have been developed, with varied outcomes and moderate success. Conventional electrolytes typically vent flammable gas when overheated due to overcharging, internal shorting, manufacturing defects, physical damage, or other failure mechanisms. The prospects of employing Li-ion cells in these applications depend on substantially reducing flammability, which requires materials development (including new lithium salts) to improve thermal properties. One approach is to add fire retardants (FR) to the electrolyte to improve thermal stability. Most of these additives have a history of use as FR in the plastics industry. Broadly, they fall into two categories: those containing phosphorus and those containing fluorine. The community needs a concerted effort to provide hazard assessment, classification, and mitigation when an ESS fails, whether through internal or external mechanical, thermal, or electrical stimulus.

Electrolyte Safety R&D. Combustion is a complex chemical reaction in which a fuel and an oxidizer react and burn in the presence of heat. Fuel (the substance that burns), an oxidizer (the substance that supplies oxygen), and heat (the energy that drives the process) must all converge for combustion to occur. In the combustion process, a sequence of chemical reactions occurs, leading to fire.41 A variety of oxidizing, hydrogen, and fuel radicals are produced that keep the fire going until at least one of the three constituents is exhausted.

5.4.1 Electrolytes. Despite several studies on the issue of flammability, complete elimination of fire in Li-ion cells has yet to be achieved. One possible reason could be the low flash point (FP) (<38.7 °C) of the solvents.42 Published data show that polyphosphazene polymers and ionic liquids used as electrolytes are nonflammable.43 However, the high FP of these chemicals is generally accompanied by increased viscosity, limiting low-temperature operation and degrading cell performance at sub-ambient temperatures. These materials may also have other problems, such as poor wetting of the electrodes and separator materials, excluding them from use in cells despite being nonflammable. Ideally, solvents would have no FP at all while simultaneously exhibiting ideal electrolyte behavior and remaining liquid down to -50 ºC or below for use in Li-ion cells. A number of critical electrochemical and thermal properties that FR must meet simultaneously are given below. Trade-offs between properties are possible, but when it comes to safety there can be no trade-offs.

  • High voltage stability
  • Conductivity comparable to traditional electrolytes
  • Lower flame propagation rate, or no fire at all
  • Lower self-heating rate
  • Stable against both electrodes
  • Able to wet the electrodes and separator materials
  • Higher onset temperature for exothermic peaks, with reduced overall heat production
  • No miscibility problems with co-solvents
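The property list above amounts to a multi-criteria screen. A hypothetical pass/fail sketch (candidate names, values, and thresholds are all invented for illustration):

```python
# Hypothetical screen of flame-retardant electrolyte candidates against the
# criteria listed above (all names, values, and thresholds are invented).
candidates = {
    "baseline carbonate": {"flash_point_C": 30,   "conductivity_mS_cm": 10.0, "stable_to_V": 4.3},
    "phosphazene blend":  {"flash_point_C": None, "conductivity_mS_cm": 6.5,  "stable_to_V": 4.4},
    "ionic liquid":       {"flash_point_C": None, "conductivity_mS_cm": 2.0,  "stable_to_V": 4.5},
}

def passes(props):
    # "No flash point" (None) models a nonflammable solvent; conductivity
    # must stay comparable to conventional carbonate electrolytes.
    nonflammable = props["flash_point_C"] is None
    conductive = props["conductivity_mS_cm"] >= 5.0
    high_voltage = props["stable_to_V"] >= 4.2
    return nonflammable and conductive and high_voltage

survivors = [name for name, props in candidates.items() if passes(props)]
print(survivors)
```

Note how the screen mirrors the text's "no trade-offs" rule: a candidate must pass every criterion, not a weighted average of them.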

The higher energy density of Li-ion cells inevitably results in a more volatile device, and while significant efforts have been put toward safety, significant research is still needed. To improve the safety of Li-ion batteries, either electrolyte flammability needs significant advances, or further mitigation is needed in areas that will contain the effects of failures and provide graceful failures with safer outcomes in operation.

Electrodes, separators, current collectors, casings, cell format headers, and vent ports. While electrolytes are by far the most critical component in Li-ion battery safety, research has also been pursued into safety considerations for the other components of the cell. These factors can become more critical as research continues into a wider range of chemistries for stationary storage.

Capacitors. Electrostatic capacitors are a major failure point in power electronics. They fail predominantly because of the strong focus on low-cost devices and weak control over manufacturing. In response, they are used at a highly de-rated level, often with redundant design. When they fail, they often show slow degradation with decreasing resistivity, eventually leading to shorting. Arcs or cascading failures can occur, and cascading failures can lead to higher-consequence failures elsewhere in a system. The added complexity of redundant design is itself a safety risk. While there is a niche market for high-reliability capacitors, they are not economically viable for most applications, including grid storage; these devices are made with precious metals and higher-quality ceramic processing that leaves fewer oxygen vacancies in the device.

Polymer capacitors can have a safety advantage in that they are self-healing and therefore fail gracefully; however, they perform poorly at elevated temperatures and are flammable.

Currently, the low cost and low reliability of capacitors make them one of the most common components to fail in devices, affecting the power electronics and providing a possible trigger for a cascading failure. While improved reliability has been achieved in some capacitors, such devices are cost-prohibitive due to their manufacturing and testing. Development of improved capacitors at reasonable cost, or of designs that prevent cascading failures in the event of capacitor failure, should be addressed.

Pumps, tubing, and tanks. Components specific to flow battery and hybrid flow battery technologies, such as pumps, tubing, and storage tanks, have not been researched in the context of battery safety. Research from other fields that use similar components can be a starting point, but these components demonstrate how the range of concerns is much broader than current R&D in battery safety.

Manufacturing defects. The design of components and testing depends on understanding the range of purity in materials and conformity in engineering. Defects, for example, are a large contributor to shorts in batteries. Understanding the reproducibility among parts, and the influence of defects on failure, is critical to understanding and designing safer storage systems.

The science of fault detection within large battery systems is still in its infancy; most analysis and monitoring of large battery systems focuses on state of health and state of charge, and only limited work has been performed on fault detection itself. Offer et al.53 first

Software Analytics. In this day and age of information technology, any comprehensive research, development, and deployment strategy for energy storage should be rounded out with an appropriate complement of software analytics. Software is on a par with hardware in importance, not only for engineering controls, but for performance monitoring; anomaly detection, diagnosis, and tracking; degradation and failure prediction; maintenance; health management; and operations optimization. Ultimately, it will become an important factor in improving overall system and system-of-systems safety. As with any new, potentially high consequence technology, improving safety will be an ongoing process. By analogy with airline safety, energy storage projects which use cutting-edge technologies would benefit from “black boxes” to record precursors to catastrophic failures. The black boxes would be located off-site and store minutes to months of data depending on the time scale of the phenomena being sensed. They would be required for large-scale installations, recommended for medium-scale installations, and optional for small installations. Evolving standards for what and how much should be recorded will be based on the results from research as well as experience.
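The "black box" idea could be prototyped as a bounded ring buffer that always holds the most recent telemetry leading up to an event. A sketch (the field names, capacity, and class are my assumptions, not from the report):

```python
from collections import deque

# Sketch of a "black box" telemetry recorder: a bounded ring buffer that
# always retains the most recent N samples preceding a failure event.
class BlackBox:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # old samples drop off automatically

    def record(self, timestamp, voltage, temperature):
        self.buffer.append({"t": timestamp, "V": voltage, "T": temperature})

    def dump(self):
        # In a real installation this snapshot would be streamed off-site,
        # as the text recommends for large-scale systems.
        return list(self.buffer)

box = BlackBox(capacity=5)
for t in range(10):
    box.record(t, voltage=3.7, temperature=25 + t)

snapshot = box.dump()
print(len(snapshot), snapshot[0]["t"], snapshot[-1]["T"])
```

The capacity would be sized to "minutes to months of data depending on the time scale of the phenomena being sensed," as the text puts it.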

Since some energy storage technologies are still early in their development and deployment, there should be an emphasis on developing safety cases. Safety cases should cover the full range of safety events that could reasonably be anticipated, and would therefore highlight the areas in which software analytics are required to ensure the safety of each system. Each case would tell a story of an initiating event, an assessment of its probability over time, the likely subsequent events, and the likely final outcome or outcomes. The development of safety cases need not be onerous, but they should demonstrate to everyone involved that serious thought has been given to safety.

Table 2. Common Tests to Assess Risk from Electrical, Mechanical, and Environmental Conditions55

  • Electrical: test of current flow, abnormal charging test, overcharging and charging time, forced discharge test
  • Mechanical: crush test, impact test, shock test, vibration test
  • Environmental: heating test, temperature cycling test, low pressure (altitude) test
  • Tests under development: failure propagation, internal short circuit (non-impact test), ignition/flammability, IR absorption diagnostics, separator testing
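Table 2 lends itself to a simple lookup structure for assembling a validation checklist per abuse condition; the sketch below just encodes the table's entries:

```python
# Table 2 as a lookup: abuse condition -> established (or developing) tests.
TEST_MATRIX = {
    "electrical": ["test of current flow", "abnormal charging test",
                   "overcharging and charging time", "forced discharge test"],
    "mechanical": ["crush test", "impact test", "shock test", "vibration test"],
    "environmental": ["heating test", "temperature cycling test",
                      "low pressure (altitude) test"],
    "under development": ["failure propagation", "internal short circuit (non-impact)",
                          "ignition/flammability", "IR absorption diagnostics",
                          "separator testing"],
}

def checklist(conditions):
    """Flatten the tests required for a given set of abuse conditions."""
    return [test for c in conditions for test in TEST_MATRIX[c]]

plan = checklist(["mechanical", "environmental"])
print(plan)
```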

The established tests for electrical, mechanical, and environmental conditions are therefore tailored to identifying and quantifying the consequence and likelihood of failure in lead-acid and lithium-ion technologies, with typical analyses that include burning characteristics, off-gassing, smoke particulates, and environmental runoff from fire suppression efforts. Even for the most-studied abuse case, lithium-ion, some tests have been identified as very crude or ineffective, with limited technical merit. For example, the puncture test, used to replicate failure under an internal short, is widely believed to lack the ability to accurately mimic this particular failure mode. These tests are even less likely to reproduce potential field failures when applied to technologies for which they were not originally designed. The above testing relates exclusively to the cell/pack/module level and does not take into consideration the balance of the storage system. Other tests on Li-ion systems are targeted at invoking and quantifying specific events; for example, impact testing and overcharging tests probe the potential for thermal runaway, which occurs during anode and cathode decomposition reactions. Other failure modes addressed by current validation techniques include electrolyte flammability, thermal stability of materials (including separators, electrolyte components, and active materials), and cell-to-cell failure.

Gap areas and opportunities. An energy storage system deployed on the grid, whether at the residential scale (<10 kW) or at bulk generation scale on the order of MW, is susceptible to failures similar to those described above for Li-ion. However, given the multiple chemistries and the application space, there is a significant gap in our ability to understand and quantify potential failures under real-world conditions; to ensure safety as grid storage systems are deployed, it is critical to understand their potential failure modes within each deployment environment. Furthermore, grid-scale systems include at the very least power electronics, transformers, switchgear, heating and cooling systems, and housing structures or enclosures. The size and variety of technologies necessitate a rethinking of safety work as it is adapted from current validation techniques in the electrified-vehicle space.

To address the component- and system-level safety concerns for all the technologies being developed for stationary energy storage, further efforts will be required to: understand these systems at the fundamental materials science level; develop appropriate engineering controls, fire protection and suppression methods, and system design; complete validation testing and analysis; and establish real-world-based operating models. System-level safety must also address several additional factors, including the relevant codes, standards, and regulations (CSR), the needs of first responders, and risks and consequences not covered by current CSR. The wide range of chemistries and operating conditions required for grid-scale storage presents a significant challenge for safety R&D. The longer life requirements and wider range of uses for storage require a better understanding of degradation and end-of-life failures under normal operating and abuse conditions. The size of batteries also necessitates a stronger reliance on modeling, yet multi-scale models for understanding thermal runaway and fire propagation, whether originating in the chemistry, in the electronics, or external to the system, have not been developed. Current gap areas for stationary energy storage extend from materials research and modeling through system-life considerations such as operation and maintenance.

Engineering controls and system design. Currently, the monitoring needs of batteries, the effectiveness of means to separate battery cells and modules, and the various fire suppression systems and techniques have not been studied extensively. Individual companies and installations have relied on past experience in designing these systems. For example, Na battery installations have focused on mitigating the potential impact of the high operating temperature; Pb-acid installations have focused on controlling failures associated with hydrogen build-up; and non-electrochemical technologies such as flywheels have focused on mechanical concerns such as run-out, high temperature, or changes in chamber pressure. Detailed testing and modeling are required to fully understand the needs in system monitoring and in containment of failure propagation. Rigorous design of safety features that adequately address potential failures is also still needed in most technology areas. Current efforts have largely focused on monitoring cell- and module-level voltages in addition to the thermal environment; however, the tolerances for safe operation are not known for these systems. Further development is needed to help manufacturers and installers understand the appropriate level of monitoring to operate a system safely and to prevent failures resulting from internal short circuits, latent manufacturing defects, or abused batteries from propagating to the full system.
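As a concrete illustration of the monitoring question, here is a sketch that flags cells drifting outside a safe operating window. The voltage and temperature limits are placeholders, since, as the text notes, actual tolerances for safe operation are not yet known for these systems:

```python
# Sketch of module-level monitoring: flag cells outside an assumed safe window.
# The limits below are placeholders; real tolerances are chemistry-specific.
V_MIN, V_MAX = 2.5, 4.2        # volts
T_MAX = 60.0                   # deg C

cells = [
    {"id": 0, "V": 3.70, "T": 30.0},
    {"id": 1, "V": 2.10, "T": 31.0},   # over-discharged: possible internal short
    {"id": 2, "V": 3.68, "T": 72.0},   # overheating
]

alarms = [
    c["id"] for c in cells
    if not (V_MIN <= c["V"] <= V_MAX) or c["T"] > T_MAX
]
print("cells needing isolation:", alarms)
```

A production battery management system would add rate-of-change checks and cross-cell comparisons rather than fixed thresholds alone.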

Modeling. The size and cost of grid-scale storage systems make it prohibitive to test full-scale systems, so modeling can play a critical role in improving safety.

Fire suppression. Large-scale energy storage systems can mitigate risk of loss by isolating parts of a system in different transportation containers, or by using materials or assemblies to section off batteries. Most current systems have automated and manually triggered fire suppression systems within the enclosure, but there is limited knowledge of whether such suppression systems will be useful in the event of fire.

The interactions between fire suppressants and system chemistries must be fully understood to determine the effectiveness of fire suppression. Key variables include the volume of suppressant required, the rate of suppressant release, and the distribution of suppressant. Basic assumptions about electrochemical safety have not been elucidated; for example, it is not even clear whether a battery fire is of higher consequence than other types of fires, and if so, at what scale this becomes a concern.

The National Fire Protection Association (NFPA) has provided a questionnaire regarding suppressants for vehicle batteries. Tactics for suppression of fires involving electric-drive vehicle (EDV) batteries:

  a. How effective is water as a suppressant for large battery fires?
  b. Are there projectile hazards?
  c. How long must suppression efforts be conducted to place the fire under control and then fully extinguish it?
  d. What level of resources will be needed to support these fire suppression efforts?
  e. Is there a need for extended suppression efforts?
  f. What are the indicators for instances where the fire service should allow a large battery pack to burn rather than attempt suppression?

NFPA 13, Standard for the Installation of Sprinkler Systems,60 does not contain specific sprinkler installation recommendations or protection requirements for Li-ion batteries. Reports and literature on suppressants universally recommend the use of water.61 However, the quantity of water needed for a battery fire is large: 275-2,639 gallons for a 40 kWh EV-sized Li-ion battery pack. This is higher than recommended for internal combustion engine (ICE) vehicle fires.
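Scaling the quoted EV figure up illustrates why suppression planning changes at grid scale. Treating the 275-2,639 gallons per 40 kWh pack as a per-kWh range and extrapolating linearly (a crude assumption, since real scaling behavior is unknown):

```python
# Crude linear extrapolation of suppression water demand from the quoted
# EV figure (275-2,639 gallons per 40 kWh pack). Real scaling is unknown
# and unlikely to be linear; this only conveys the order of magnitude.
EV_PACK_KWH = 40
GALLONS_LOW, GALLONS_HIGH = 275, 2639

def water_range_gallons(system_kwh):
    scale = system_kwh / EV_PACK_KWH
    return GALLONS_LOW * scale, GALLONS_HIGH * scale

# A modest 1 MWh containerized system:
low, high = water_range_gallons(1000)
print(f"1 MWh system: {low:,.0f} to {high:,.0f} gallons")
```

Even the low end of that extrapolation exceeds what a single engine company typically carries, which is why the text emphasizes water supply planning for ESS sites.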

Summary. Science-based safety validation techniques for entire energy storage systems are critical as deployments of energy storage expand. These techniques are currently based on previous industry knowledge and experience with energy storage for vehicles, as well as experience with grid-scale Pb-acid batteries. Now they must be broadened to encompass grid-scale systems. The major hurdle to this expansion is encompassing both the much broader range in scale of stationary storage systems and the much broader range of technologies. Furthermore, the larger scale of stationary storage relative to EV storage necessitates consideration of a wider range of concerns beyond the storage device itself, including areas such as power electronics and fire suppression. The work required to develop validation is significant. As progress is made in understanding validation through experiment and modeling, these evidence-based results can feed into codes, regulations, and standards, and can inform manufacturers and customers of stationary storage solutions to improve the safety of deployed systems.

Currently, fire departments do not categorize ESS as stand-alone infrastructure capable of causing safety incidents independent of the systems that they support. Instead, fire departments categorize grid ESS as back-up power systems such as uninterruptible power supplies (UPS) for commercial, utility, communications and defense settings, or as PV battery-backed systems for on, or off-grid residential applications. This categorization results in limited awareness of ESS and their potential risks, and thus the optimal responses to incidents. This categorization of energy storage systems as merely back-up power systems also results in the treatment of ESS as peripheral to the risk management tools.

The energy storage industry is rapidly expanding due to market pressures. This expansion is surpassing both the updating of current CSR and the development of new CSR needed for determining what is and is not safe.

No general, technology-independent standard for ESS integration into a utility or a stand-alone grid has yet been developed.

Incident responses with standard equipment are tailored to the specific needs of the incident type and location: an injury/accident might draw two “pumper” engines and a “ladder” truck with two to four personnel each, plus a Battalion Chief acting as Incident Commander, for a total of 9 to 13 personnel, while a structure fire requires five engines, two trucks, and two Battalion Chiefs, for a total of 17 to 30 personnel. Each additional “alarm” struck sends another two to three “pumper” engines and a “ladder” truck. In all of these cases, incident response personnel typically arrive on scene with only standard equipment, guided by various NFPA standards for equipment on each apparatus, personal protective equipment (PPE), and other rescue tools. In responding to an ESS incident, the fire service seldom brings equipment specialized for electrical incidents.

A number of unique challenges must be considered in developing responses to any energy storage incident. In particular, difficulty securing energized electrical components can present significant safety challenges for fire service personnel. Typically, the primary tasks are to isolate power to the affected areas, contain spills, access and rescue possible victims, and limit access to the hazard area. The highest priority is given to actions that support locating endangered persons and removing them to safety with the least possible risk to responders; rescue continues until it is either accomplished or it is determined that there are no survivors or that the risk to responders is too great. Industrial fires can be quite dangerous depending on structure occupancy, i.e., the contents, processes, and personnel inside. Water may be used from a safe distance on larger fires that have extended beyond the original equipment or area of origin, or which are threatening nearby exposures; however, the determination of a “safe” distance has been little researched by the fire service scientific community.

Fire suppression and protection systems. Each ESS installation is guided by the application of existing CSR that may not reflect the unique and varied chemistries in use. Fire-suppressant selection should be based on the efficacy of specific materials and the quantities needed on site, determined through appropriate and representative testing conducted in consultation with risk managers, fire protection engineers, and others, and aligned with existing codes and standards. For example, non-halogenated inert gas discharge systems may not be adequate for thermally unstable oxide chemistries, which generate oxygen as they heat, potentially sustaining combustion even in oxygen-deficient atmospheres. Ventilation requirements imposed by some Authorities Having Jurisdiction (AHJs) may work against the efficacy of these gaseous suppression agents. Similarly, water-based sprinkler systems may not prove effective for dissipating heat in large-scale commodity storage of similar chemistries. Therefore, additional research is needed to provide data on which to base proper agent selection for the occupancy and commodity, and to establish standards that reflect the variety of chemistries and their combustion profiles.

Current commodity classification systems used in fire sprinkler design (NFPA 13, Standard for the Installation of Sprinkler Systems) do not have a classification for lithium or flow batteries. This is problematic, as the fire hazard may be significantly higher depending on the chemicals involved and will likely result in ineffective or inaccurate fire sprinkler coverage. Additionally, thermal decomposition of electrolytes may produce flammable gases that present explosion risks.

Verification and control of stored energy. Severe energy storage system damage resulting from fire, earthquake, or significant mechanical damage may require complete discharge, or neutralization of the chemistry, to facilitate safe handling of components. Though the deployment of PV currently exceeds that of ESS, there is still a lack of a clear response procedure to de-energize distributed PV generation in the field. Fire fighters typically rely on the local utility to secure supply-side power to facilities.

In the case of small residential or commercial PV, the utility is not able to assist because the system is on the owner's side of the meter, which presents a problem for securing a 600 Vdc rooftop array. Identifying the PV integrators responsible for installation may not be possible, and other installers may be hesitant to assume any liability for a system they did not install. This leaves a vacuum for the safe, complete overhaul of a damaged structure with PV. Similarly, ESS faces the complication of unclear resources for assistance and the inability of many first responders to knowledgeably verify that an ESS is discharged or de-energized.

Post-incident response and recovery. Thermal damage to ESS chemistries and components presents unique challenges to the fire service community, building owners, and insurers. As evidenced in full-scale testing of EV battery fires, fire suppression required more water than anticipated, and significantly more in some cases. Additionally, confirming that the fire was completely extinguished was difficult due to the containment housings of EV batteries that can mask continued thermal reaction within undamaged cells. In one of the tests performed by Exponent, Inc., one battery reignited after being involved in a full-scale fire test some 22 hours post-extinguishment; in another case, an EV experienced a subsequent re-ignition 3 weeks post-crash testing.

Governmental approvals and permits related to the siting, construction, development, operation, and grid integration of energy storage facilities can pose significant hurdles to the timely and cost effective implementation of any energy storage technology. The process for obtaining those approvals and permits can be difficult to navigate, particularly for newer technologies for which the environmental, health, and safety impacts may not be well documented or understood either by the agencies or the public.

References

Cooper, J. 2019.  Arizona fire highlights challenges for energy storage. Associated Press.

Deign, J. 2019.  The Safety Question Persists as Energy Storage Prepares for Huge Growth. Recent battery plant blazes and a hydrogen station blast have again raised questions about the safety of energy storage technologies.  greentechmedia.com

DOE/EPRI. 2013. Electricity storage handbook in collaboration with NRECA. USA: Sandia National Laboratories and Electric Power Research Institute.

Mogg, T. 2019. Battery pack suspected cause of recent Virgin Atlantic aircraft fire.  Digitaltrends.com

Posted in Batteries, Lithium-ion, Safety | Comments Off on How safe are utility-scale energy storage batteries?

Scientists on where to be in the 21st century based on sustainability

Source: Shaw 2020, Xu 2020. The greater the climate change, the more this zone moves north.

Preface. The article below is based on Hall & Day’s book “America’s Most Sustainable Cities and Regions: Surviving the 21st Century Megatrends”.

Related articles:

Alice Friedemann www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Day, J. W., et al. Oct 2013. Sustainability and place: How emerging mega-trends of the 21st century will affect humans and nature at the landscape level. Ecological Engineering.

Five scientists have written a peer-reviewed article about where the best and worst places in America will be in the future, based on how sustainable a region is when you take into account climate change, energy reserves, population, sea-level rise, increasingly strong hurricanes, and other factors. Three of the scientists, John W. Day, David Pimentel, and Charles Hall, are “rock stars” in ecology.

Below are some excerpts from this 16 page paper that I found of interest (select the title above to see the full original paper).

Best places to be

The greener the better, unless there are too many people (circles indicate large cities). Modified from U.S. EPA (2013).

 

[Figure: “where to be” map (Day 2013): the best places are currently under-performing regions]

Move to an Under-performing Region (and away from a Mega-region): Many areas rich in natural resources often have high poverty rates, perhaps due to “the resource curse”, usually applied internationally to countries rich in fossil fuels, agriculture, forestry, and fisheries, but financially poor with stratified social classes. We believe this concept can be applied to states. You can see above that most under-performing counties are rural. These are regions that have not kept pace with national trends over the last 3 decades in terms of population, employment, and wages. Note that with the exception of the Great Lakes mega-region, the under-performing regions are outside of the 11 mega-regions. These underperforming areas generally have high natural resources and agricultural production.

Worst Places to Be


Several areas of the U.S. will have compromised sustainability in the 21st century. These include the southern Great Plains, the Southwest, the southern half of California, the Gulf and Atlantic coasts, especially southern Louisiana and Southern Florida, and areas of dense population such as south Florida and the Northeast.


[My comment: You should also consider how long forests will last in your area; people will be burning them to cook and to heat their homes, and eventually to make furniture, homes, floors, spoons, and hundreds of other objects, as shown in the FoxFire series. There were 92 million people in 1920, just 29% of the population we have now. To zero in on the details, see this account of what happened in Vermont.]

[Figure: virgin forest growth remaining in 1620, 1850, and 1920]

Avoid the large megaregions.

Future Trends

The trends of energy scarcity, climate change, population growth, and many other factors are likely to reduce the sustainability of the landscapes humans depend on, in some places more than others, since materials and energy are both limited and unevenly distributed.

Industrial agriculture is very energy intensive, accounting for 19% of total energy use in the U.S.: 14% for agricultural production, food processing, and packaging, and 5% for transportation and preparation.

  • Each American uses 528 gallons/year in oil equivalents to supply their food, or 169 billion gallons for 320 million Americans.
  • About 33% of the energy required to produce 2.5 acres of crops is invested in machine operation.
  • On average, nearly 10 calories of energy are used to make 1 calorie of edible food.
  • Cropland provides 99.7% of the global human food supply (measured in calories) with less than 1% coming from the sea.
  • Global per capita land use is 0.50 acre of cropland and 1.25 acres of pasture land
  • The U.S. and Europe use 1.25 acres of cropland and 2 acres of pasture land per capita
  • Crop-land now occupies 17% of the total land area in the U.S., but little additional land is available or even suitable for future agricultural expansion.
  • As the U.S. population increases, climate impacts grow, and energy resources decrease, there will be less cropland area per capita.
  • A significant portion of food produced in the U.S. is irrigated and located in areas where water shortages will increase.
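The per-capita figures in the list above can be cross-checked with a few lines of arithmetic. This is just a sketch; every constant comes from the bullet points, not from new data.

```python
# Quick check of the article's food-energy figures.
GALLONS_PER_PERSON = 528          # oil-equivalent gallons per person per year
POPULATION = 320_000_000          # U.S. population used in the article

total_gallons = GALLONS_PER_PERSON * POPULATION
print(f"Total: {total_gallons / 1e9:.0f} billion gallons")  # ~169 billion

# "10 calories of energy per 1 edible calorie" implies only ~10% of the
# energy invested in the food system ends up as food energy.
edible_return = 1 / 10
print(f"Edible energy return: {edible_return:.0%}")
```

The 169-billion-gallon total in the text is simply 528 gallons multiplied across the population.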

Agricultural land

  • 1950:   1,250,000,000 acres
  • 2000:      943,000,000 acres – down about 25% from 1950

Cropland acres by state group (cropland is unequally distributed):

  • 508,000,000 acres: North & South Dakota, Nebraska, Kansas, Oklahoma, Texas, New Mexico, Colorado, Montana, Wyoming
  • 135,700,000 acres: Ohio, Indiana, Illinois, Wisconsin, Minnesota, Iowa, Missouri
  •   27,800,000 acres: California (which grows 50% of U.S. vegetables, fruits, and nuts)

Crops need a lot of water. Some use 265 to 530 gallons of water per 2.2 pounds (1 kg) of crop produced (dry matter). Corn needs 10 million liters of water per hectare; soybeans need 6 million L/ha for a yield of 3.0 tons/ha, while wheat requires only about 2.4 million L/ha for a yield of 2.7 t/ha. Under semiarid conditions, yields of non-irrigated crops such as corn are low (1.0 t/ha to 2.5 t/ha) even when ample amounts of fertilizer are applied. Approximately 40% of water use in the United States goes solely to irrigation. Reducing irrigation dependence in the U.S. would save significant amounts of energy, but would probably require that crop production shift from the dry and arid western regions to the more agriculturally suitable eastern U.S.
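Dividing the liters-per-hectare figures above by the yields gives water intensity per kilogram of crop, which makes the corn/soy/wheat comparison easier to see. A minimal sketch using only the article's own numbers:

```python
# Water intensity implied by the figures above:
# liters per hectare / yield in kg per hectare = liters of water per kg of crop.
crops = {
    # crop: (liters of water per hectare, yield in kg per hectare)
    "soybean": (6_000_000, 3_000),
    "wheat":   (2_400_000, 2_700),
}
for name, (water_l, yield_kg) in crops.items():
    print(f"{name}: about {water_l / yield_kg:,.0f} liters of water per kg")
```

Soybeans come out around 2,000 liters per kg and wheat under 900, which is why wheat is described as the less water-hungry crop.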

Why Cities will be Bad Places to be

The cities most dependent on cheap energy will be the most affected, especially those in the Southwest and southern Great Plains.

U.S. population increased steadily from 3.9 million in 1790 to nearly 310 million in 2010 (or almost 8,000% in just 220 years, an exponential growth rate of almost 2%). Life also became progressively more urbanized and by 2010, 259 million people or 83% lived in urban areas compared to 56 million in rural areas.
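The "exponential growth rate of almost 2%" follows directly from the two endpoints in the paragraph above. A quick check, assuming continuous compounding:

```python
import math

# Back out the implied continuous growth rate from the article's endpoints.
p_1790, p_2010, years = 3.9e6, 310e6, 220
rate = math.log(p_2010 / p_1790) / years
print(f"Implied growth rate: {rate:.1%} per year")  # ~2.0%

# Total percentage increase over the period
increase = (p_2010 - p_1790) / p_1790
print(f"Total increase: {increase:.0%}")  # roughly 7,850%, i.e. "almost 8,000%"
```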

The maintenance of large urban megaregions requires enormous and continuous inputs of energy and materials. Modern industrial society and modern cities are inherently unsustainable.

Some have argued that large urban areas are more energy efficient than rural areas (Dodman, 2009). But Fragkias (2013) examined the relation between city size and greenhouse gas emissions and found that emissions scale proportionally with urban population size for U.S. cities and that larger cities are not more emissions efficient than smaller ones. In a review of energy and material flows through the world’s 25 largest urban areas, Decker (2000) also concluded that large urban areas are only weakly dependent on their local environment for energy and material inputs but are constrained by their local areas for supplying water and absorbing wastes. Rees contends that if cities are to be sustainable in the future, they must rebalance production and consumption, abandon growth, and re-localize. The trajectory of megatrends of the 21st century will make this difficult for all large urban regions in the U.S. and impossible for some.

By 2025, it is estimated that 165 million people, or about half the population, will live in 4 megaregions: the Northeast, Great Lakes, Southern California, and San Francisco Bay regions. An additional 45 million will live in south Florida and the Houston-Dallas region. The supply lines that support these megaregions with food, energy, and other materials stretch for long distances across the landscape. Areas dependent on longer, energy intensive supply lines are vulnerable to the rising costs of energy for transportation.

The economies of urban areas, especially the currently most economically successful ones based on the human, financial, and information service sectors, are strongly dependent on the spending of discretionary income, which is predicted to decrease substantially over the 21st century.

Best cities to live in

But many cities have lost population, especially those that were based in the manufacturing sector of the economy during the 20th century. Detroit and Flint, Michigan, are often cited as examples but there are many others. Between 1950 and 2000, St. Louis lost 59% of its population. Pittsburgh, Buffalo, Detroit, and Cleveland lost more than 45% each. It is possible that many of the rust belt cities that have experienced population decreases will be more sustainable than more “successful” cities in the northeast and other areas. They now have a lower population density and tend to exist in rich agricultural regions. Indeed, abandoned land is being used for food production in a number of depopulating cities.

Worst cities to live in

By contrast, the northeast is the most densely populated region of the country. The population is expected to reach almost 60 million by 2025. The states that make up the region have about 34 million acres of farmland or about 0.2 ha per person. By contrast, it takes about 1.2 ha per capita to provide the food consumed in the U.S. If agriculture becomes more local and less productive as some predict due to increasing energy costs then it will be a challenge to maintain the current food supply to the northeast.
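The Northeast's farmland deficit quoted above is worth making explicit: 34 million acres shared among roughly 60 million people, converted to hectares, falls far short of the 1.2 ha per capita behind the average U.S. diet. A sketch (the 2.471 acres-per-hectare conversion is standard; everything else is from the paragraph):

```python
# Farmland per person in the Northeast vs. the per-capita land the U.S. diet requires.
ACRES_PER_HECTARE = 2.471
farmland_ha = 34e6 / ACRES_PER_HECTARE   # 34 million acres of Northeast farmland
per_person_ha = farmland_ha / 60e6       # ~60 million people by 2025
print(f"Northeast farmland: {per_person_ha:.2f} ha per person")  # ~0.23

NEEDED_HA = 1.2   # per-capita land required for the average U.S. diet
print(f"Shortfall factor: {NEEDED_HA / per_person_ha:.1f}x")  # about 5x
```

In other words, the region would need roughly five times its current farmland to feed itself locally at current consumption levels.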

The least sustainable region will likely be the southwestern part of the country from the southern plains to California. Climate change is already impacting this region and it is projected to get hotter and drier. Winter precipitation is predicted to be more rain and less snow. These trends will lead to less water for direct human consumption and for agriculture. This is critical since practically all agriculture in the region is irrigated. The Southwest has the lowest level of ecosystem services of any region in the U.S. California is the most populous state in the nation with most people living in the southern half of the state, the area with highest water stress. The Los Angeles metro area is the second largest in the nation. But population density is low over much of the rest of the region and is concentrated in large urban areas such as Las Vegas, Phoenix, and Albuquerque. California is one of the most important food producing states in the nation but this will be threatened by water scarcity and increasing energy costs. Much of the region is strongly dependent on tourism and spending discretionary income, especially Las Vegas, so future economic health will likely be compromised in coming decades. Many cities and regions whose economy is dependent on tourism will have compromised sustainability.

Energy scarcity

World oil production peaked in 2005 and has been on a plateau since then.

400 giant fields discovered before 1960 provide 80% of world oil.

Shale oil and gas have very high depletion rates and the production of unconventional reserves such as the Canadian and Venezuelan tar sands are extremely unlikely to be scaled up sufficiently to offset conventional decline rates.

Society depends on the surplus energy provided from the energy extraction sector for the material and energy throughput that allows for economic growth and productivity. As energy becomes more expensive to extract and produce, more money and energy that might otherwise be spent in other sectors of the economy must be spent in the energy sector, decreasing real growth (my comment: that means fewer jobs and increasing poverty)
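The surplus-energy squeeze described above is often expressed through EROI (energy return on investment): the fraction of gross energy left over for the rest of the economy is 1 − 1/EROI. A minimal illustrative sketch; this relation is standard in the net-energy literature, and the EROI values below are hypothetical examples, not the article's data:

```python
# Net-energy sketch: fraction of gross energy available to the rest of the
# economy after the energy sector takes its share, as a function of EROI.
def surplus_fraction(eroi: float) -> float:
    return 1 - 1 / eroi

for eroi in (100, 20, 10, 5, 2):
    print(f"EROI {eroi:>3}: {surplus_fraction(eroi):.0%} of gross energy is surplus")
```

The key feature is the nonlinearity: dropping from an EROI of 100 to 20 barely matters, but dropping from 5 to 2 cuts the surplus from 80% to 50%, which is why falling EROI squeezes real growth.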

The transition to a less oil reliant, more sustainable society in the U.S. is many decades away.

Since so much of the economy depends upon the widespread availability of cheap oil for the production and distribution of goods, the onset of peak oil and the decline in net energy available to society has profound implications for overall societal well being (my comment: this is the understatement of the year – what this means is extreme social unrest from hunger and lack of oil or natural gas to heat homes and cook with, etc)

Just as the first half of the oil age consisted of constantly increasing production, the second half of the oil age will consist of a continual rate of depletion that cannot be offset by new discoveries or low EROI alternatives.

Descriptions of regions in the article

  • Most negatively affected areas: the Southwest, including much of California, and the southern Great Plains. All of these regions will be drier, with less water, at the same time population is growing.
  • Decreased fresh water availability: southern Great Plains, Southwest (Lake Mead has a 50% chance of drying up within 2 decades)
  • Eastern half of the U.S.: abundant natural resources, but avoid the megaregions
  • Poor soil: Southwest
  • Severest climate change impacts: Southwest
  • Driest, hottest, most extreme droughts and floods: Southwest
  • Most tree deaths, super forest fires, loss of species, dust: Southwest
  • Snow melting too fast: West Coast, meaning fewer crops, especially in California, which grows 1/3 of America’s food
  • Flooding: Mississippi basin, due to more intense storms in the future
  • Rising sea level: coastal zones
  • Stronger hurricanes: Gulf and Atlantic coasts, from warming ocean surface waters. Hurricanes are also expected to become more frequent.
  • Hurricane surge: Gulf and Atlantic coasts, with New Orleans the worst threatened
  • Mississippi delta: the resources of the river can be used to rebuild and restore the rich natural systems of the area
  • Energy scarcity: will affect everyone everywhere
  • Less rain: Great Plains
  • Ogallala aquifer depletion: Great Plains (energy scarcity will add to the cost and difficulty of pumping the water up)

Good areas

  • High rainfall and primary production: eastern states
  • High ecosystem services: river valleys and coastal areas
  • Estuaries, swamps, floodplains
  • Warmer, moist climates: higher primary productivity than colder climates

There’s a lot more; I especially liked the attack on the current economic paradigms (i.e., growth forever) on pages 6–9.

Also, many of the referenced papers in the article are good reads with important details not covered fully in this paper.

I personally think cities might be good places for a few years into the crisis, as governments concentrate resources and supply lines where population densities are highest. Gas stations out in rural areas will be the first to close, throwing some places into sudden self-reliance. But at some point the whole system snaps like an erupting volcano, from oil shocks, rusting oil and gas infrastructure falling apart (especially refineries), natural disasters, black swans such as (cyber) warfare, the electric grid going down for a year or more, nuclear winter from a nuclear war anywhere in the world, electromagnetic pulses from solar flares or a nuclear explosion, hunger and consequent social unrest, and the other factors in the Decline, Collapse, and “A Fast Crash?” categories. That will make cities the worst places to be. Best to move to under-performing areas now, since it will take years to become part of another community and learn the necessary skills.

References

Shaw A, et al. 2020. New climate maps show a transformed United States. Propublica.org. My note: too much focus on RCP 8.5, when RCP 2.6 to 4.5 is the most likely outcome, given that peak oil has already happened, so there aren’t enough fossil fuels to get to 8.5.

Xu C, et al. 2020. Future of the human climate niche. Proceedings of the national academy of sciences.


Microbes a key factor in climate change

Preface. The IPCC, like economists, assumes our economy and burning of fossil fuels will grow exponentially until 2100 and beyond, with no limits to growth. But conventional oil peaked and has stayed on a plateau since 2005, so clearly peak global oil production is in sight. As is peak soil, aquifer depletion, biodiversity destruction, and deforestation to name just a few existential threats besides climate change.

The lack of attention to microbes in the IPCC model further weakens their predictions about the trajectory of climate change. As this article notes, diatoms are our friends, they “perform 25–45% of total primary production in the oceans, owing to their prevalence in open-ocean regions when total phytoplankton biomass is maximal. Diatoms have relatively high sinking speeds compared with other phytoplankton groups, and they account for ~40% of particulate carbon export to depth”.

Diatoms rose to prominence around 40 million years ago, and sequester so much carbon that they helped cause the poles to form ice caps. So certainly scientists should study whether their numbers are decreasing or increasing. But the IPCC also needs to include diatoms and other microbes in their models. It’s a big deal that they haven’t, since microorganisms support the existence of all higher life forms.


* * *

University of New South Wales. 2019. Leaving microbes out of climate change conversation has major consequences, experts warn. Science Daily.

Original article: Cavicchioli, R., et al. 2019. Scientists’ warning to humanity: microorganisms and climate change. Nature Reviews Microbiology.

More than 30 microbiologists from 9 countries have issued a warning to humanity — they are calling for the world to stop ignoring an ‘unseen majority’ in Earth’s biodiversity and ecosystem when addressing climate change.

The researchers are hoping to raise awareness both for how microbes can influence climate change and how they will be impacted by it — calling for including microbes in climate change research, increasing the use of research involving innovative technologies, and improving education in classrooms.

“Micro-organisms, which include bacteria and viruses, are the lifeforms that you don’t see on the conservation websites,” says Professor Cavicchioli. “They support the existence of all higher lifeforms and are critically important in regulating climate change. “However, they are rarely the focus of climate change studies and not considered in policy development.”

Professor Cavicchioli calls microbes the ‘unseen majority’ of lifeforms on earth, playing critical functions in animal and human health, agriculture, the global food web and industry.

For example, the Census of Marine Life estimates that 90% of the ocean’s total biomass is microbial. In our oceans, marine lifeforms called phytoplankton take light energy from the sun and remove as much carbon dioxide from the atmosphere as plants do. The tiny phytoplankton form the beginning of the ocean food web, feeding krill populations that then feed fish, sea birds and large mammals such as whales.

Marine phytoplankton perform half of the global photosynthetic CO2 fixation and half of the oxygen production despite amounting to only ~1% of global plant biomass. In comparison with terrestrial plants, marine phytoplankton are distributed over a larger surface area, are exposed to less seasonal variation and have markedly faster turnover rates than trees (days versus decades). Therefore, phytoplankton respond rapidly on a global scale to climate variations.

Sea ice algae thrive in sea ice ‘houses’. If global warming trends continue, the melting sea ice has a downstream effect on the sea ice algae, which means a diminished ocean food web.

“Climate change is literally starving ocean life,” says Professor Cavicchioli.

Beyond the ocean, microbes are also critical to terrestrial environments, agriculture and disease.

“In terrestrial environments, microbes release a range of important greenhouse gases to the atmosphere (carbon dioxide, methane and nitrous oxide), and climate change is causing these emissions to increase,” Professor Cavicchioli says.

“Farming ruminant animals releases vast quantities of methane from the microbes living in their rumen — so decisions about global farming practices need to consider these consequences.

“And lastly, climate change worsens the impact of pathogenic microbes on animals (including humans) and plants — that’s because climate change is stressing native life, making it easier for pathogens to cause disease.

“Climate change also expands the number and geographic range of vectors (such as mosquitos) that carry pathogens. The end result is the increased spread of disease, and serious threats to global food supplies.”

Greater commitment to microbe-based research needed

In their statement, the scientists call on researchers, institutions and governments to commit to greater microbial recognition to mitigate climate change.

“The statement emphasizes the need to investigate microbial responses to climate change and to include microbe-based research during the development of policy and management decisions,” says Professor Cavicchioli.

Additionally, climate change research that links biological processes to global geophysical and climate processes should have a much bigger focus on microbial processes.

“This goes to the heart of climate change, so if micro-organisms aren’t considered effectively it means models cannot be generated properly and predictions could be inaccurate,” says Professor Cavicchioli.

“Decisions that are made now impact on humans and other forms of life, so if you don’t take into account the microbial world, you’re missing a very big component of the equation.”

Professor Cavicchioli says that microbiologists are also working on developing resources that will be made available for teachers to educate students on the importance of microbes.

“If that literacy is there, that means people will have a much better capacity to engage with things to do with microbiology and understand the ramifications and importance of microbes.”
