A Relentless Growth of Disparity in Wealth

Preface.  I write a lot about why electric vehicles won’t be widely adopted. One reason is that the bottom 95% can’t afford them.  This tremendous unfairness will likely make peak oil decline more violent and chaotic than it would have been otherwise.

Alice Friedemann    www.energyskeptic.com   author of 2021 "Life After Fossil Fuels: A Reality Check on Alternative Energy"; 2015 "When Trucks Stop Running: Energy and the Future of Transportation", "Barriers to Making Algal Biofuels", & "Crunch! Whole Grain Artisan Chips and Crackers". Podcasts: Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

***

Huddleston, C. 2019. Survey: 69% of Americans Have Less Than $1,000 in Savings

  • Almost half of respondents — 45% — said they have $0 in a savings account. Another 24% said they have less than $1,000 in savings.
  • The top reason respondents said they weren’t saving more was because they were living paycheck to paycheck. Nearly 33% said this obstacle was keeping them from saving, and about 20% said a high cost of living prevented them from saving more.
  • The No. 1 thing respondents said they need to save more money was a higher salary. About 38% said having a bigger paycheck would help them save more, while 18% said lowering their debt would make it easier to set aside cash.
  • The most common place where those with savings put their cash is in a savings account. Although 33% of respondents said they take advantage of a savings account to store their cash, 29% said they don’t have any savings.

Atkins, D. 2014. How the Rich Stole Our Money – and Made Us Think They Were Doing Us a Favor. Salon.

You’ve doubtless seen the charts and figures showing the decline of the American middle class and the explosion of wealth for the super-rich. Wages have stagnated over the last 40 years even as productivity has increased — Americans are working harder but getting paid less. Unemployment remains stubbornly high even though corporate profits and the stock market are near record highs. Passive assets in the form of stocks and real estate are doing very well. Wages for working people are not. Unfortunately for the middle class, the top 1 percent of incomes own almost 50 percent of asset wealth, and the top 10 percent own over 85 percent of it. When assets do well but wages don’t, the middle class suffers.  This ominous trend is particularly prominent in the United States. That shouldn’t surprise us: study after study shows that American policymakers operate almost purely on behalf of wealthy interests. Recent polling also proves that the American rich want policies that encourage the growth of asset values while lowering their own tax rates, and are especially keen on outcomes that favor themselves at the expense of the poor and middle class. So why isn’t the 99 percent in open revolt?

The Super Rich Are Richer Than We Thought, Hiding Huge Sums, New Reports Find

4/12/2014. Professors Emmanuel Saez (UC Berkeley) and Gabriel Zucman (LSE and UC Berkeley)

The Shocking Rise of Wealth Inequality: Is it Worse Than We Thought? 

April 2, 2014. Jordan Weissmann.  Slate.com

A Relentless Widening of Disparity in Wealth

Eduardo Porter. 11 March 2014. New York Times.

The richest 10 percent of Americans take a larger slice of the economic pie than they did in 1913, at the peak of the Gilded Age.

What if inequality were to continue growing years or decades into the future? Say the richest 1% captured a quarter of the nation’s income, up from about a fifth today. What about half?  Thomas Piketty of the Paris School of Economics believes this future is not just possible. It is likely.

In “Capital in the Twenty-First Century,”  Professor Piketty provides a fresh and sweeping analysis of the world’s economic history that puts into question many of our core beliefs about the organization of market economies.

His most startling news is that the belief that inequality will eventually stabilize and subside on its own, a long-held tenet of free market capitalism, is wrong. Rather, the economic forces concentrating more and more wealth into the hands of the fortunate few are almost sure to prevail for a very long time.

History does not offer much hope that political action will  turn the tide: “Universal suffrage and democratic institutions have not been enough to make the system react.”

Professor Piketty’s description of inexorably rising inequality probably fits many Americans’ intuitive understanding of how the world works today. But it cuts hard against the grain of the economic orthodoxy that prevailed throughout the second half of the 20th century and still holds sway today, shaped during the Cold War by economist Simon Kuznets. After assembling tax return data, Kuznets estimated that between 1913 and 1948 the slice of the nation’s income absorbed by the richest 10% of Americans declined from 50% to 33%.

Mr. Kuznets’s conclusion provided a huge moral lift to capitalism as the United States faced off with the Soviet Union. It suggested that the market economy could distribute its fruits equitably, without any heavy-handed intervention of the state.

This isn’t true anymore: Wages have been depressed for years. Profits account for the largest share of national income since the 1930s. The richest 10% of Americans take a larger slice of the economic pie than they did in 1913, at the peak of the Gilded Age.

Like Kuznets’s analysis, Mr. Piketty’s is based on data. He just has much more: centuries’ worth, from dozens of countries.

Kuznets’s misleading curve is easy to understand in this light. He used data from one exceptional period in history, when a depression, two world wars and high inflation destroyed a large chunk of the world’s capital stock. Combined with fast growth after World War II and high taxes on the rich, this flattened the distribution of income until the 1970s.

But this exceptional period long ago ran its course.

Americans will argue that this description does not fit the United States. Wealth here is largely earned, not inherited, we say. The American rich are “creators,” like Bill Gates of Microsoft or Lloyd Blankfein of Goldman Sachs, rewarded for their economic contributions to society.

Mr. Piketty doubts that the enormous remuneration of top executives and financiers in the United States — enhanced by the decline of top income tax rates since the 1980s — really reflects their contributions. What’s more, he points out, inherited inequality has been lower in the United States mainly because its population has grown so fast — from three million at the time of independence to 300 million today — driving a vast economic expansion.

But this population boom will not repeat itself. The share of national income absorbed by corporate profits, a major component of capital’s share, is already rising sharply.

If anything, this means future inequality in the United States will be driven by two forces. A growing share of national income will go to the owners of capital. Of the remaining labor income, a growing share will also go to the top executives and highly compensated stars at the pinnacle of the earnings scale.

Is there a politically feasible antidote? Professor Piketty notes that the standard recipe — education for all — is no match against the powerful forces driving inherited wealth ever higher.

Taxes are, of course, the most feasible counterweight. Progressive wealth taxes could reduce the after-tax return to capital so that it equaled the rate of economic growth.

But politically, “the fiscal institutions to redistribute incomes in a balanced and equitable way have been badly damaged,” Professor Piketty told me.

The holders of wealth, hardly a powerless bunch, will oppose any such move, even if that’s what is needed to preserve capitalism against the populist impulses of those left behind.

Professor Piketty offers early-20th-century France as an example. “France was a democracy and yet the system did not respond to an incredible concentration of wealth and an incredible level of inequality,” he said. “The elites just refused to see it. They kept claiming that the free market was going to solve everything.”

It didn’t.


Updates to Life After Fossil Fuels: A Reality Check on Alternative Energy

Last updated 28 April 2024. Other posts related to this book here.

My book is about our many dependencies on fossil fuels, quickly depicted in these very short videos: Life without Petroleum, A Day Without Oil, and Can You Go a Day Without Fossil Fuels?

Alice Friedemann  www.energyskeptic.com  Author of "Life After Fossil Fuels: A Reality Check on Alternative Energy", "When Trucks Stop Running: Energy and the Future of Transportation", "Barriers to Making Algal Biofuels", & "Crunch! Whole Grain Artisan Chips and Crackers".  Women in ecology  Podcasts: WGBH, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 & 278, Peak Prosperity,  Index of best energyskeptic posts

***

Chapter 2 We Are Running Out of Time

Norway-based energy consultancy Rystad Energy has warned that Big Oil could see its proven reserves run out in less than 15 years unless it makes more commercial discoveries quickly, because produced volumes are not being fully replaced with new discoveries (Kimani A (2021) Big Oil Is in Desperate Need of New Discoveries. Oilprice.com).

“Global oil and gas discoveries have been on a constant shrinking trend prior to and over the last decade, with oil discoveries reaching a low of 3.8 BBO (billion barrels of oil) in 2016; in 2020 it was 4.3 BBO. During the decade, 89 BBO were discovered while 289 BBO of reserves were produced, a ratio of over 3 to 1, which is unsustainable.” Rafael Sandrea, Energy Policy Research Foundation.
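A minimal sketch of the arithmetic behind Sandrea's "over 3 to 1" figure, using the decade totals from the quote above:

```python
# Decade totals quoted by Sandrea (Energy Policy Research Foundation).
discovered_bbo = 89    # billion barrels of oil discovered over the decade
produced_bbo = 289     # billion barrels of reserves produced over the decade

print(f"Production/discovery ratio: {produced_bbo / discovered_bbo:.1f} to 1")  # ~3.2 to 1
print(f"Net reserve drawdown: {produced_bbo - discovered_bbo} billion barrels") # 200 over the decade
```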

However alarming Figure 3 in Chapter 2 (IEA 2018) may be, reality is even more worrisome, because that chart doesn’t depict Business As Usual; rather, it is an optimistic forecast, the IEA Sustainable Development scenario, shown in Figure 1 as requiring far less oil supply than the other projections to 2040 above it.  The Sustainable Development scenario assumes that by 2030: global primary energy use declines 7% from 2019 (compared with a 20% increase over the prior 11 years); solar generation grows 5.6-fold and wind generation 2.4-fold; nuclear generation increases by 23% (no decommissioning); coal use for power and heat declines by 51%; and electric vehicle sales reach 40% of the market, up from today’s 4.5% (Cembalest 2021).

Figure 1. Oil “future production wedge”: demand vs. existing field supply (million barrels per day)

Chapter 4 We Are Alive Thanks to Fossil-Fueled Fertilizer

This chapter is about how we hit the wall at 1.6 billion in population in the early 1900s. Then natural gas based fertilizer was invented, which is responsible for allowing at least 4 billion more people to exist.  USDOE (2020) points out the myriad other ways natural gas aids agriculture: “Natural gas also is used to dry these crops. Further, plastics made from hydrocarbons provide bags for hay and silage, greenhouse covers, bale wrapping material, mulch film to prevent weed growth, and plant nursery containers.”

This chapter also explains the many ways natural gas fertilizers damage soil and water and emit greenhouse gases. This is made worse by plastic-coated fertilizers, which create greenhouse gases as they decay and shed microplastics that kill soil organisms. The United Nations’ Food and Agriculture Organization estimates that 100,000 tonnes of plastic-coated fertilizer per year are dumped into the environment, and now companies intend to add encapsulated chemicals as well (Nargi 2022).

Chapter 6  What Fuels Could Replace Diesel?

Peak diesel is the main civilization crusher, since heavy-duty transportation depends on it. Diesel prices in March 2022 hit an all-time high, higher than in 2008. As the fuel of transportation, the price rally affects everything and everyone, adding to inflationary pressures that are already running at a multi-decade high. This is partly because natural gas prices skyrocketed; natural gas plays a key role in making diesel at refineries, where it is used to produce the hydrogen that removes sulfur from diesel. The spike in gas prices in late 2021 made that process prohibitively expensive, cutting diesel output. Low-sulfur crude is also in short supply: countries that pump that kind of oil, such as Nigeria and Angola, are unable to increase output. Any additional production has to come from Saudi Arabia and the United Arab Emirates, but both largely produce crude with high sulfur content. In the U.S., diesel stocks fell last week to their lowest seasonal level in 16 years (Blas J (2022) The Oil Price Rally Is Bad. The Diesel Crisis Is Far Worse. Bloomberg).

Non-renewable non-commercial exploding hydrogen. Most updates are in post “Hydrogen: The dumbest & most impossible renewable“, and  Energy/Hydrogen.

Chapter 7 Why Not Electrify Commercial Transportation with Batteries? 

The U.S. would have to double today’s electric grid if 66% of all cars are EVs by 2050 (Groom 2021, NREL 2021). Yet the electric grid is falling apart and will be increasingly affected by climate change, and since wind and solar construction depends on fossil fuels for every step of its life cycle, it will be constrained by the energy shortages that follow from peak oil having occurred in 2018 (chapter 2 in Life After Fossil Fuels).

Energyskeptic battery posts:

Chapter 9 Manufacturing Uses Over Half of All Fossil Energy

Energyskeptic manufacturing posts:

Geothermal power: Can Geothermal power replace declining fossil fuels?

2021-10-22 Hydrogen steel: From a scientist at LBNL on steel made from hydrogen: hydrogen direct reduction can take pre-heated iron ore and convert it into direct reduced iron (H2 DRI), also known as sponge iron, but even when that DRI is processed in an electric arc furnace (EAF) plant, carbon from coal or biomass charcoal is still needed to make steel. China produced 1.05 BILLION tonnes of steel last year, of which only 14% came from the electric arc furnace route using scrap steel. Half of China’s iron and steel plants have been built since 2010. Are all of these going to be retired and replaced by H2 DRI, with EAF capacity expanded and sufficient non-fossil electricity provided to support the required H2 production, in any time frame that is relevant to the atmosphere?  Even if China cut steel production in half by 2050 and DRI/EAF increased from 14% to 60% penetration, the electricity to produce just the H2 would exceed 200 TWh a year. That is roughly 150 GW of solar capacity dedicated solely to making hydrogen, nearly as much as the total installed solar capacity of Europe today (see the sketch below).
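To make the jump from 200 TWh to roughly 150 GW of solar concrete, here is a back-of-envelope sketch; the 50 kWh/kg electrolysis energy and the 15% solar capacity factor are my assumptions, not figures from the LBNL correspondent:

```python
# Rough check of the hydrogen-steel scenario above.
H2_ELECTRICITY_TWH = 200        # electricity for H2 production, TWh/yr (from the text)
KWH_PER_KG_H2 = 50              # assumed electrolyzer energy use, kWh per kg of H2
SOLAR_CAPACITY_FACTOR = 0.15    # assumed annual capacity factor for solar PV
HOURS_PER_YEAR = 8760

h2_megatonnes = H2_ELECTRICITY_TWH * 1e9 / KWH_PER_KG_H2 / 1e3 / 1e6   # kWh -> kg -> Mt
solar_gw = H2_ELECTRICITY_TWH * 1e3 / (HOURS_PER_YEAR * SOLAR_CAPACITY_FACTOR)  # TWh -> GW

print(f"~{h2_megatonnes:.1f} Mt of H2 per year")   # ~4.0 Mt
print(f"~{solar_gw:.0f} GW of dedicated solar")    # ~152 GW, consistent with the ~150 GW cited
```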

2021-10-24 Biomass charcoal steel (private communication from Thomas Troszak): The problem is that a charcoal smelter is tiny because charcoal is fragile. All of the charcoal smelters in Brazil only supply enough pig iron to meet 20% of the melting capacity of a single electric furnace in a plant that refines pig iron into grey iron and casts grey ingots for auto manufacturers to remelt and cast into engine blocks. And they burn up thousands of hectares of eucalyptus in the process of supplying the charcoal pig iron for that one furnace. As far as I know they aren’t even making steel from the charcoal pig iron; that would be another whole level of unsustainability. With charcoal alone, you’re looking at the technology available prior to the 1850s, and at extravagant cost in land area for forests. Abraham Darby mentioned that when he first put coke in his furnace, his burden capacity increased by 30 times, and that was in the early 1700s. Charcoal smelters like that could produce something like 200 tons of iron per year. By the late 1800s I think there was a mega charcoal furnace in the US that could smelt up to 200 tons per day, but that was unusual. A modern coke smelter can produce 12,000 tons of pig iron per day, and a modern foundry can cast 250 tons of steel in a single pour. So charcoal furnaces can’t support the kind of furnace burden that would be necessary for billeting chunks big enough for the components of a modern bridge, or a submarine, or a nuclear reactor, or whatever.

There is a growing awareness that there are no “renewable” ways to replace fossils for essential products like cement and steel. The article below is of interest because it explains why this is so challenging, and offers ideas that, after reading my book, I think you will see are unlikely, and probably too late to commercialize if peak oil was in 2018 (true so far in 2022), as shown in chapter 2 (and also see “Peak Oil is Here!”).

Fennell P et al (2022) Cement and steel — nine steps to net zero. Nature.

Chapter 10 What Alternatives Can Replace Fossil Fueled Electricity Generation

Fusion. Updates are in Why fusion power is forever away and Energy/Fusion.

Nuclear Power. Updates are in Nuclear Power problems, Nuclear waste, and other Nuclear Power posts.

Chapter 12 Half a Million Products Are Made Out of Fossil Fuels

This chapter lists a few of the 500,000 products, such as plastic, made from petroleum. USDOE (2020) lists additional natural gas and NG liquids products: “Homebuilders use many natural gas-based materials to build affordable and safe homes, including plastic foam insulation and sheathing materials, vinyl siding, weatherproof window frames, high performance caulks and paints, asphalt roofing materials, polyvinyl chloride (PVC) pipe, and chemically treated lumber. Within our homes, plastic foam insulation helps refrigerators, freezers, dishwashers, and heating and air conditioning systems operate quietly and efficiently. Healthcare: surgical gloves, antiseptics, medications, anesthetics, heart valves, surgical devices, prosthetics, eyeglasses, pacemakers, stents, joint replacements…  Automakers have met increased fuel efficiency standards by replacing heavy metal parts with lightweight plastics, now 50% of a car’s volume but just 10% of its weight, dramatically improving gas mileage, plus safety features like seat belts, air bags, interior cushioning, and crumple zones.”

Paul Martin wrote: “A mass shift from fossil petroleum to biomass sources for chemicals and materials is extremely unlikely in my view. Why is that? Simple. Biomass has an average general chemical formula of C6H10O5. There are exceptions (food oils being one example), but the greatest mass of biomass is cellulose and lignin, not vegetable oil. It is hydrogen deficient, and worse still, there’s nearly one oxygen atom for every carbon atom. To make most useful chemicals, those oxygens need to be removed by reacting them with hydrogen to produce water, or burned off to produce CO2. Both represent a huge loss of energy and mass” (Martin 2024).

Chapter 15 Grow More Biomass: Where Is the Land?

Under the topic of “Genetically Engineer Plants to Grow Faster, Get Larger” I wrote: “Photosynthesis evolved about three billion years ago, and to this day, only converts a tiny fraction of sunlight into biomass. So seriously—we are going to enhance photosynthesis when Mother Nature did not figure that out over three billion years of random mutations? It is possible improved photosynthesis would make a plant less disease-resistant, or put more growth into leaves and stalks rather than edible fruit or grain, or require yet more water and soil nutrition. There are probably good reasons and limitations keeping nature from improving photosynthesis.”

Here’s another reason why we probably can’t improve photosynthesis: 14% of the energy goes into lifting water from the soil to the leaves, since photosynthesis requires water as well as light and CO2. Quetin GR et al (2022) Quantifying the Global Power Needed for Sap Ascent in Plants. Journal of Geophysical Research: Biogeosciences. DOI: 10.1029/2022JG006922

Chapter 16 The Ground is Disappearing Beneath Our Feet

More than one-third of the Corn Belt in the Midwest has completely lost its carbon-rich topsoil, which is critical for plant growth because of its water and nutrient retention properties. Thaler et al (2021) estimate the loss at about 100 million acres, or 156,250 square miles, roughly the size of Illinois, Iowa, and Wisconsin combined. Degradation of soil quality by erosion reduces crop yields; this research estimates it has cut corn and soybean yields by about 6%, almost $3 billion in annual economic losses for farmers across the Midwest.
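A quick unit check of the acreage figure (the conversion factor is standard; the acreage and dollar estimates come from Thaler et al):

```python
# Convert the estimated topsoil loss from acres to square miles.
ACRES_LOST = 100e6
ACRES_PER_SQ_MILE = 640
print(f"{ACRES_LOST / ACRES_PER_SQ_MILE:,.0f} square miles")  # 156,250, matching the figure cited
```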

Briggs H (2022) Farm machinery exacting heavy toll on soil – study. BBC
The weight of modern combine harvesters, tractors and other farm machinery risks compacting the soil, leading to flooding and poor harvests, according to researchers in Sweden. The researchers calculated that combine harvesters, when fully loaded, have ballooned in size from about 4,000 kg (8,800 pounds) in 1958 to around 36,000kg (80,000 pounds) in 2020. This makes it difficult for plants to put down roots and draw up nutrients, and the land is prone to flooding. The researchers think the growing weight of farm machinery poses a threat to agricultural productivity. Their analysis, published in the Proceedings of the National Academy of Sciences, suggests combine harvesters could be damaging up to a fifth of the global land used to grow crops. Thomas Keller, professor of soil management at the Swedish University of Agricultural Sciences in Uppsala, Sweden, says machinery should be designed not to exceed a certain load. “Compaction can happen within a few seconds when we drive on the soil, but it can take decades for that soil to recover,” he said.
Scientific paper: Keller T, Or D (May 16, 2022) Farm vehicles approaching weights of sauropods exceed safe mechanical limits for soil functioning. PNAS. https://doi.org/10.1073/pnas.2117699119

Lambert (2020): Just as replacing grasslands with crops caused the 1930s dustbowl, so too will the replacement of grasslands with corn crops bring on Dustbowl 2.0 and potentially desertification.  From 2006 to 2011 there was a 10% increase in land growing corn for ethanol, over 2,046 square miles.  Before that, grasslands protected the soil by holding it tightly in place. Dust storms remove nutrients from the soil, making it harder for crops to grow and causing even more wind erosion.  This destructive cycle, now aggravated by drought, can eventually lead to desertification, and is also a health hazard: the ultrafine dust particles can penetrate cells in the lungs and cause lung and heart disease. Dust storms increased by 5% a year, a whopping 100% increase over the 20 years of the study, 1998-2018.   Even the Midwest is seeing dust storms grow after the planting and harvesting of soybeans in June and October, in an area also threatened by drought from climate change.  The lead author of the findings in Geophysical Research Letters, Andrew Lambert, points out that “It’s particularly ironic that the biofuel commitments were meant to help the environment.”   Lambert A et al (2020) Dust Impacts of Rapid Agricultural Expansion on the Great Plains. Geophysical Research Letters. https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2020GL090347

Chapter 19 Grow More Biomass: Dwindling Groundwater

Billions more people could have difficulty accessing water if the world opts for a massive expansion in growing energy crops to fight climate change.  The idea of growing crops and trees to absorb CO2 and capturing the carbon released when they are burned for energy is a central plank of most of the Intergovernmental Panel on Climate Change’s scenarios for the negative-emissions approaches needed to avoid the catastrophic impacts of more than 1.5°C of global warming.

But the technology, known as bioenergy with carbon capture and storage (BECCS), could prove a cure worse than the disease, at least when it comes to water stress. The water needed to irrigate enough energy crops to stay under the 1.5°C limit would leave 4.58 billion people experiencing high water stress by 2100 – up from 2.28 billion today, especially in South America and south Africa (Vaughan 2021).

Chapter 21 Grow More Biomass: Pesticides

I’m adding updates to energyskeptic.com in the post below as well as others in category decline/pollution/pesticides here.

Chemical industrial farming is unsustainable. Why poison ourselves when pesticides don’t save more of our crops than in the past?

Chapter 24 Corn Ethanol. Why?

Renshaw J et al (2021) U.S. bread, doughnut makers urge Biden to roll back biofuel requirements. Reuters.  A trade group representing some of America’s biggest baked goods companies is urging the Biden administration to ratchet back its biofuel ambitions, arguing that using fuel made from crops could raise the cost of donuts, bread and other foods. They met with the Environmental Protection Agency (EPA) last week to urge reduced blending mandates, particularly for biodiesel since supplies of soy and canola oil are running low (40% of soybeans go to biodiesel fuels).

From Chapter 24: Ethanol Raises Food Prices and Harms People and Businesses

The Renewable Fuel Standard (RFS) mandating ethanol has led to a shortage of corn for food and animal feed. From 2007 to 2012, prices were driven up so much that farmers planted 17 million new acres of corn rather than soybeans, wheat, hay, cotton, and other crops, driving their prices up to all-time records as well. Cattle feed prices were so high that herds were culled to levels not seen in 60 years, causing beef prices to rise an incredible 60% from 2007 to 2012.

Restaurants were also affected because corn, meat, and other crops rose in price. It appears the interests of Archer Daniels Midland (ADM), Cargill, and 3.2 million farmers were favored over those of us who eat food. That includes the 15.6 million Americans who work in the restaurant industry—about one in ten US workers.

Fraud in the RFS program will likely increase (Cohn 2022): The Inflation Reduction Act signed into law by President Joe Biden in August includes historic investments to combat climate change. It may also open new avenues for fraud by expanding a program that has given federal authorities fits for years. It does not include any new provisions to prevent fraud.

Peter Whitfield, a partner at law firm Sidley Austin in Washington, D.C., said that the [Renewable Fuel Standard] program has little oversight, so there is a way to generate a massive amount of money fraudulently with little effort; possibilities for fraud will still exist, and he is skeptical that investigators will catch frauds as the programs expand. One issue is that biofuel feedstock is in short supply yet biofuel incentives are being increased, which will tempt some to cheat to get the lucrative biofuel credits.

One example of fraud happened in 2019 when members of a polygamous, Utah-based religious sect known as “The Order” pleaded guilty to conspiring with a Los Angeles businessman who called himself “The Lion” to bilk the federal government out of some $1 billion in a scheme involving Renewable Fuel Standard credits and related IRS tax credits. Using a series of shell companies and sham transactions, the team made it look like they were producing massive amounts of biofuel at a plant in northern Utah and shipping it far and wide. That allowed them to rake in millions of dollars in incentives, even though they were producing very little fuel. The extent of the scam came to light only after a member of the sect who happened to work in the accounting department broke away from the group — she said she was about to be forced to marry her cousin — and told authorities what she knew.

Chapter 26 Fill ’er up with seaweed (see energyskeptic post here).

Bever (2021) Fighting climate change by farming kelp. NPR:  An absurd project to cash in on carbon sequestration funds by hauling kelp out to sea on buoys until the kelp grows so heavy the buoy sinks and its CO2 is sequestered on the ocean floor. What could go wrong: whales entangled, ship propellers snarled, beaches fouled? And at what price and energy cost? And why? As Life After Fossil Fuels explains, peak oil occurred in 2018, and the resulting decline in emissions, at 4% a year now and set to increase exponentially, dwarfs all sequestration and renewable contraption dreams and schemes.

Chapter 27 “The Problems with Cellulosic Ethanol Could Drive You to Drink”

Ethanol is pointless: trucks, locomotives, and ships don’t run on ethanol or diesohol. Only diesel matters; peak diesel is more apt than peak oil.

The main reason there is yet to be a commercial cellulosic ethanol production plant is that “Except for fruits and protected seeds, the rest of a plant evolved over hundreds of millions of years to not be eaten by herbivores or microbes, with barriers of toxins, spines, and thick bark. The most formidable defense is a rigid structure of indigestible cellulose, hemicellulose, and lignin, which even after death can take a year or more for microbes and fungi to consume and break down into new soil. Scientists try to speed up the process with brute force. Bioreactors create high pressures and temperatures, other machines mill, radiate, steam explode, accelerate electrons, hydrolyze with acids, freeze, drench in harsh chemicals, expand fibers with ammonia or ozone, and inflict other torments to get the sugars out. Nothing much works. They have hit a cellulosic wall.”

Or as Service (2022) writes: “…and the spearlike corn stalks and other woody biomass often jams machines designed to grind it up. The chemical industry is built on handling liquids and gases, it’s much harder with solids. This extra handling and processing mean jet fuel from biofuels will never be as cheap as fuel made from petroleum.”

Service RF (2022) Can biofuels really fly? Science.

In this chapter I discussed why attempts to use termites to make ethanol haven’t worked out: “…Scientists have been trying for many years to replicate a termite’s ability to break down plants. Termites digest wood by outsourcing the work to the protists in their gut. Protists, in turn, outsource the work to many bacteria that use enzymes to break wood down further. Just like at a factory, each microbe performs one task, and excretes a different substance than it consumed. In a termite gut factory, one working microbes’ poop is ambrosia for another. This intricate chain reaction has proven difficult to synthesize. Too much of anything along the chain of reactions and it can kill the process. For example, in ethanol production, when yeast has raised the concentration of excreted ethanol from 12 to 18%, the yeast dies. So far scientists haven’t been able to get termite or ruminant gut organisms to expand from their tiny world into the expansive gut of a 2,000-gallon stainless-steel tank.”

Altamia’s 2020 paper discusses the bacteria of shipworms, which have been destroying wooden ships and docks for thousands of years. There’s a hope their enzymes can be used to break down wood to make biofuels, but they sound a lot like underwater termites to me. Shipworms are long, thin mollusks famed and feared for their ability to eat wood. But they can’t do it alone. They rely on bacterial partners that don’t reside in the gut, but inside the cells of their gills. Perhaps their enzymes can be used to break down lignocellulose into sugars, and then into ethanol.

Chapter 28 Biodiesel to keep trucks running

Last month, several airline CEOs met with Biden administration officials to discuss emissions and the options for government incentives for aviation biofuels as a way of reducing those emissions. But increasing biofuel production to 20 million barrels of oil equivalent a day could make cooking oil unaffordable for millions of people and result in large-scale deforestation, as 100 million more acres of land would be cleared to grow biofuel crops. Reforesting those 100 million acres instead would offset 8 times more CO2 emissions. The Center for Biological Diversity objects as well, since far more CO2 emissions reduction could come from phasing out dirty, aging aircraft and maximizing operational efficiencies (Slav 2021).

Chapter 29 Can We Eat Enough French Fries?

In this chapter I reported that a sewer in London was clogged with a record-breaking fatberg of 140 tons.  Breaking news: that record has been broken by a 330-ton London fatberg (Picheta 2021). So that’s good news: more fat to propel our four-ton autos.  Or maybe not; there’s a new competitor: insulation for homes made of cooking oil, wool, and sulfur (Najmah 2021).

No worries about finding enough human fat from liposuction. There are 390 million tonnes of humans, but just 22 million tonnes of wild mammals. Lots of fat is available from us and our domesticated animals: 630 million tonnes of sheep, rodents, dogs, pigs, cattle, and more (Greenspoon 2023).

Chapter 30 Combustion: Burn Baby Burn

The Ryegate Power Station biomass plant in Vermont may shut down sooner than expected: the contract that expires in 2022 is being renewed for only 2 years rather than the 10 expected, due to the much higher cost of its electricity, which Vermonters subsidize with $5 million a year. It’s pricey because the plant is only 23% efficient, so for every four trees burned, only about one tree’s worth of energy is converted to electricity. Biomass plants like Ryegate have been closing throughout the region, with plants in New Hampshire and Maine not being relicensed (Gockee 2021).
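A one-line check of the trees arithmetic, treating trees as a proxy for the chemical energy in the wood (the 23% efficiency figure is from the article):

```python
# How many "trees of wood energy" are burned per "tree of electricity" at 23% efficiency.
plant_efficiency = 0.23
print(f"{1 / plant_efficiency:.1f} trees burned per tree-equivalent of electricity")  # ~4.3
```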

Chapter 33 Conclusion: Do You Want to Eat, Drink, or Drive?

I wrote: “Declining oil means you can stop worrying about robots taking over. What energy could they be built with and run on after fossils? Not that a robot overthrow was ever an issue. The human cortex is 600 billion times more complicated than any artificial network. The code to simulate the human brain would require hundreds of trillions of lines of code inevitably riddled with trillions of errors.”

Nor do you need to fear artificial intelligence (AI), which many otherwise intelligent people think is an existential threat.  It isn’t. Nail (2021) describes how AI treats the brain like a computer with a very narrow range of tasks in a closed system where all possibilities are known, and breaks down when confronted with novel situations.  But brains are nothing like computers, whose fixed logic gates are binary, 0 or 1. Brain neurons are analog, changing their firing thresholds, with chemicals that further alter activity, efficiency, and connectivity. And then there’s the role of dreaming, and much more that makes our brains neuroplastic in ways a computer AI never will be; see the article for details.

The European Union has initiated an ambitious plan called Farm to Fork (EU 2021) that hopes to cut pesticide and excess nutrient use by 50% and convert 25% of farms to organic agriculture by 2030 (Rosmino 2021).

Do you want to eat or drive? Many energy companies plan to increase their biofuel capacity by 2030, mainly with corn and soybean oil. This is driving price inflation for vegetable oils, including palm oil, canola and soybean oil, doubling corn futures and tripling lumber costs. The accelerating demand for renewable biodiesel fuels is directly responsible for price inflation. Food costs have been pushed to their highest in seven years (Kimani 2021).

And there may be a lot less oil than the EIA, IEA, BP Statistical Review, and other estimates of world reserves suggest. Laherrère et al (2022) explain the various methods used to calculate world fossil fuel reserves and why their method is probably the most accurate; this is what Laherrère has written about for the past 60 years, so I find this paper very plausible. Many geologists who have modeled likely fossil fuel decline within the IPCC climate framework predicted that the most likely outcomes were RCP 2.6 to 4.5 (see the last chapter in “Life After Fossil Fuels”), though their papers came out before it became likely that 2018 was the world peak oil production year, so I expect the lower RCP 2.6 is most likely. This paper estimates roughly RCP 3.0, since global CO2 emissions for the period 2020–2100 are approximately 1000 GtCO2 for coal, 750 for oil, and 650 for natural gas, a total of 2400 GtCO2, with a further ~850 GtCO2 emitted beyond 2100. Clearly such emissions are incompatible with the 580 GtCO2 limit on emissions to 2100 assumed by Welsby et al (2021) to meet the 1.5°C goal in the 2022 IPCC report. If the 1750 GtCO2 emitted so far has led to a 1.1°C increase, another 3250 GtCO2 would add about 2°C more, for a total of roughly 3°C above pre-industrial levels.
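A minimal sketch of the proportional warming arithmetic above; it simply assumes warming scales linearly with cumulative CO2 (a rough TCRE-style approximation), with the GtCO2 figures taken from the paragraph:

```python
# Scale future warming by cumulative CO2, proportional to warming observed so far.
warming_so_far_C = 1.1                    # warming attributed to emissions to date
emitted_so_far_GtCO2 = 1750               # cumulative CO2 emitted so far
future_GtCO2 = 1000 + 750 + 650 + 850     # coal + oil + gas to 2100, plus ~850 beyond 2100

c_per_gtco2 = warming_so_far_C / emitted_so_far_GtCO2
added = future_GtCO2 * c_per_gtco2
print(f"Additional warming ~{added:.1f} C, total ~{warming_so_far_C + added:.1f} C")
# Additional warming ~2.0 C, total ~3.1 C above pre-industrial, as stated above
```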

But oil makes all other resources possible, including coal and natural gas, and its decline is likely to lead to social unrest, depressions, wars and civil wars, supply chain failures, and natural disasters (hurricanes taking out offshore oil platforms, floods and earthquakes affecting refineries) that further disrupt oil production, so much so that even Laherrère et al’s (2022) much lower estimates of oil production and CO2 emissions may be too high. Plus the FLOW RATES will be lower.  Nor are unconventional tar sands (Canada) or heavy oil (Venezuela) likely to produce much oil, since their energy return on investment is very low. That leaves their estimate of remaining conventional oil, 1100 Gb (Table 1), equivalent to roughly 470 GtCO2, well under the 580 GtCO2 emissions limit. To the extent that oil lasts despite wars and other disruptions, coal and natural gas emissions may push past the 580 GtCO2 limit.  But again, if whatever is produced comes out far more slowly than today, the ocean and land sinks will absorb some of the CO2, lowering the ultimate temperature rise. Perhaps.
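And a sanity check on the oil-to-CO2 conversion above; the emission factor of roughly 0.43 tonnes of CO2 per barrel of crude burned is my assumption, not a figure from the paper:

```python
# Convert remaining conventional oil (billion barrels) into combustion CO2 (GtCO2).
remaining_conventional_oil_gb = 1100   # Laherrère et al (2022), Table 1
tco2_per_barrel = 0.43                 # assumed emission factor per barrel burned
gtco2 = remaining_conventional_oil_gb * 1e9 * tco2_per_barrel / 1e9
print(f"~{gtco2:.0f} GtCO2")           # ~473 GtCO2, consistent with the ~470 cited
```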

Book Reviews:

Ennos R (2021) The Age of Wood: Our Most Useful Material and the Construction of Civilization.

References

Altamia MA et al (2020) Teredinibacter waterburyi sp. nov., a marine, cellulolytic endosymbiotic bacterium isolated from the gills of the wood-boring mollusc Bankia setacea…. International Journal of Systematic and Evolutionary Microbiology.

Cembalest M (2021) 2021 Annual Energy Paper. JP Morgan Asset & Wealth Management.

Cohn S (2022) Inflation Reduction Act’s expanded biofuel incentives raise concerns about fraud. CNBC.

EU (2021) Farm to Fork Strategy. European Commission.

Gockee A (2021) Is time ticking on the Ryegate Power Station biomass plant? vtdigger.org

Greenspoon L et al (2023) The global biomass of wild mammals. PNAS https://doi.org/10.1073/pnas.2204892120

Groom N et al (2021) EV rollout will require huge investments in strained U.S. power grids. Reuters.

IEA (2018) International Energy Agency World Energy Outlook 2018, figures 1.19 and 3.13. International Energy Agency.

Laherrère J, Hall CAS, Bentley R (2022) How much oil remains for the world to produce? Comparing assessment methods, and separating fact from fiction. Current Research in Environmental Sustainability.

Kimani A (2021) Global Food Prices Soaring As Demand For Biofuels Continues To Climb. oilprice.com

Martin P (2024) The Refinery of the Future- a thought experiment. https://www.linkedin.com/pulse/refinery-future-thought-experiment-paul-martin-4pfoc?utm_source=share&utm_medium=member_ios&utm_campaign=share_via

Nail T (2021) Artificial intelligence research may have hit a dead end. “Misfired” neurons might be a brain feature, not a bug — and that’s something AI research can’t take into account. Salon.

Najmah IB et al (2021) Insulating Composites Made from Sulfur, Canola Oil, and Wool. ChemSusChem, Wiley.

Nargi L (2022) Plastic-coated agricultural chemicals are destroying human and planetary health. Salon.com

NREL (2021) Electrification Futures Study. National Renewable Energy Laboratory.

Picheta R (2021) A 330-ton fatberg is clogging an English city’s sewer, and it won’t move for weeks. CNN.

Rosmino C (2021) Meet the EU farmers using fewer pesticides to make agriculture greener. Euronews.com.

Slav I (2021) The Biofuel Boom Could Threaten Food Security. Oilprice.com

Thaler EA et al (2021) The extent of soil loss across the US Corn Belt. PNAS.

USDOE (2020) U.S. Oil and Natural Gas: Providing Energy Security and Supporting Our Quality of Life. U.S. Department of Energy, Office of Oil & Natural Gas.

Vaughan A (2021) Carbon-negative crops may mean water shortages for 4.5 billion people. New Scientist. Scientific article: Nature Communications, DOI: 10.1038/s41467-021-21640-3

 

 

 


Why Fusion Power Is Forever Away

Preface. When my husband Jeffery Kahn was a science writer at Lawrence Berkeley National Laboratory, we became friends with several astrophysicists who used to joke about how fusion was 30 years away and always would be.

If world peak oil was in 2018, then we’re out of time. ITER was supposed to be ready now, but its completion date for full fusion is now 2035.  By then energy decline will be in earnest, with diesel rationed to agriculture and other essential services.  Fusion, like all other contraptions that generate electricity, depends on fossil fuels for every single step of its construction in transportation, manufacturing and the making of its parts — the cement, steel, ceramics, microchips and more.

After the overview below, there are over half a dozen more articles about fusion. There are many issues with fusion not included in this post; see the others in the Energy/Fusion category here.

Alice Friedemann  www.energyskeptic.com  Author of "Life After Fossil Fuels: A Reality Check on Alternative Energy", "When Trucks Stop Running: Energy and the Future of Transportation", "Barriers to Making Algal Biofuels", & "Crunch! Whole Grain Artisan Chips and Crackers".  Women in ecology  Podcasts: WGBH, Jore, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 & 278, Peak Prosperity,  Index of best energyskeptic posts

***

Fusion is not likely to work out, yet it is the only possible energy source that could replace fossil fuels (No single or combination of alternative energy resources can replace fossil fuels).

Ugo Bardi (2014), in his book “Extracted,” points out that even the minerals needed for nuclear fusion are finite, and the “infinitely abundant energy” thought possible at the beginning of the atomic age isn’t possible. Here’s why:

“In practice, past attempts to obtain controlled nuclear fusion as a source of energy had hinged on the possibility of fusing a heavier isotope of hydrogen, deuterium. But not even the controlled deuterium-deuterium reaction is considered feasible, and the current effort focuses on the reaction of a still heavier hydrogen isotope, tritium, with deuterium. Tritium is not a mineral resource, as it is so unstable that it doesn’t exist on Earth. But it can be created by bombarding a lithium isotope, Li-6, with neutrons that in turn can be created by the deuterium-tritium fusion reaction. (In this sense a fusion reactor is another kind of “breeder” reactor, as it produces its own fuel.) However, since the mineral resources of lithium are limited, and since the Li-6 isotope forms only 7.5 percent of the total, the problem of mineral depletion exists.”

The immense gravity of the sun creates fusion by pushing atoms together.  We can’t do that on earth, where the two main approaches (and the main projects pursuing them) are magnetic confinement (ITER) and inertial confinement with lasers (the National Ignition Facility at Lawrence Livermore National Laboratory, discussed further below).

ITER uses magnetic fields to contain plasma until atoms collide and fuse, which has been compared to holding jello together with rubber bands.  And ITER is still far from being built:

  • The cost so far is $22.3 billion
  • The original deadline was 2016; the latest date of 2027 is highly unlikely.
  • Their goal of a ‘burning plasma’ that produces more energy than the machine itself consumes is at least 20 years away
  • It’s so poorly run that a recent assessment found serious problems with the project’s leadership, management, and governance. The report was so damning the project’s governing body only allowed senior management to see it because they feared “the project could be interpreted as a major failure”.
  • April 2014: The U.S. contribution to ITER will cost a total of $3.9 billion — 4 times as much as originally estimated according to a report that came out April 10, 2014
  • Even if ITER does reach break-even someday, it will have produced just heat, not the ultimate aim, electricity. More work will be needed to hook it up to a generator. For ITER and tokamaks in general, commercialization remains several decades away.

Hirsch RL, Bezdek RH (2021) Fusion: Ten times more expensive than nuclear power. RealClearEnergy.org.

Hirsch & Bezdek wrote the 2005 Department of Energy Peak Oil report.

The U.S. and world fusion energy research programs are developing something that no one will want or can afford. Ever so slowly the promise of commercially viable fusion power from tokamaks has ebbed away.  Some recognized the worsening commercial outlook, but most researchers simply continued to study and increase the size of their tokamak devices — and to increase the size of their budgets.

Today, the ITER plant, which was initially expected to cost $5 billion, will now cost somewhere between $22 billion and $65 billion.  Even at $22 billion, the cost is ten times more than a nuclear fission power plant, and 30 times more at $65 billion.  And nuclear fission power plants are considered to be too expensive for further adoption in the U.S.
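As a quick consistency check, the baseline fission-plant cost implied by Hirsch & Bezdek's multiples can be backed out; this is an inference from their own ratios, not a real-world construction cost:

```python
# Back out the fission-plant cost implied by the "10x at $22B" and "30x at $65B" claims.
iter_low, iter_high = 22e9, 65e9
print(f"${iter_low / 10 / 1e9:.1f}B and ${iter_high / 30 / 1e9:.1f}B")  # both ~$2.2B, so consistent
```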

The largest source of tritium in the world is heavy-water nuclear reactors in Canada. Very limited world production, combined with loss by radioactive decay, means that supplies of tritium are inherently limited.  It has recently become clear that world supplies of tritium for larger fusion experiments are so limited that they are inadequate for future fusion pilot plants, let alone commercial fusion reactors based on the deuterium-tritium fuel cycle.  In other words, fusion researchers are developing a fusion concept for which there will not be enough fuel in the world to operate!

So fusion researchers are developing a fusion concept that stands no hope of being economically acceptable, running on a fuel that does not exist in adequate quantities.

To stop wasting funding on these pointless fusion projects, we recently suggested to the Secretary of Energy that she appoint a panel of non-fusion engineers and environmentalists to conduct the objective, independent evaluation we believe is necessary.

NRC (2021) Bringing Fusion to the U.S. Grid. National Research Council, National Academies Press.

This document starts out reasonably understandable in the summary.  Fusion science sounds cool, with words like corrosion, fracture toughness, peeling, and ballooning, like surfer slang action verbs of rip, ragdolled, and barreled.

Here’s a fairly understandable paragraph describing what needs to be solved before a pilot plant can be built.  A pilot plant is a tiny baby plant, far from producing profitable, commercial electricity with a positive energy return.

“Virtually every major component of a future nuclear fusion energy reactor will require materials development in order to provide confidence in the ability to withstand significant limits of essential material properties including: neutron damage, creep resistance, fracture toughness, surface erosion/re-deposition, corrosion, chemistry, thermal conductivity and many others. A particular challenge is the need to safely and efficiently close the fuel cycle, which for deuterium-tritium fusion designs involves the development of blankets to breed and extract tritium, as well as the fueling, exhausting, confining, extracting, and separating tritium in significant quantities.”

But then, like all “how far away is fusion” documents it gets into the weeds, where you need a degree in nuclear engineering to understand it.  For example, can you make any sense out of the following barrier to making fusion work?

Power exhaust in high power density, compact fusion systems has two key challenges. One is the experimentally observed narrow steady-state e-folding length of power flow in the scrape-off-layer (SOL). Since the peak heat flux at the divertor plate (qdiv) is inversely proportional to the power e-folding length [Greek formula with letters not on my keyboard], narrow power e-folding length gives rise to an excessive heat flux at the divertor plate. Experimental observation shows [Greek formula] where [Greek formula] and [Greek formula] are power e-folding length and the poloidal field at the plasma surface. Since [Greek formula], power e-folding length in a compact fusion device tends to be smaller. However, the high operating density of a compact fusion device is likely beneficial for enhancing radiative cooling and to achieve detached plasma state.  Another challenge is taming the transient heat flux including those due to ELMs (Edge Localized Mode). ELMs are an edge relaxation phenomena driven by the peeling/ballooning mode, whose onset is reasonably well understood and characterized.  ELM Suppression by methods such as application of 3D magnetic perturbations in DIII-D14 reveals the promise for minimizing the impact of transient heat fluxes on the first wall, but much more research is required, especially for managing the heat flux challenge of a compact, high power density fusion reactor.

But you can almost understand the statement that “High plasma core power density presents a significant heat exhaust challenge for the plasma facing components, armor, and first wall in fusion systems,” which probably means that a bunch of important stuff will melt if solutions aren’t found.

The next paragraph is another mystery (not shown), and so is the solution: the  Department of Energy needs to “support studies of the compatibility of innovative divertor designs in toroidal confinement concepts with divertor plasma detachment, which can significantly relax the radiated power requirement, and including the possibility of liquid metal PFCs, and create a research program and facilities with linear devices for testing plasma facing components and non-plasma heat flux testing platforms, to identify, evaluate, and finalize a high-confidence, robust design for PFC and first wall armor materials, including both solid and liquid metal options, that are compatible with managing steady state and transient power loading.”

And so on.  The main reason to try to read these documents, even if you don’t understand them, is that you will really understand why fusion will be forever 30 years away, a phrase that’s a favorite of nuclear engineers themselves.

This document is nothing compared to fusion studies that go far more in depth describing the challenges.  Please do check out the following 247-page free book:  NRC (2013) An Assessment of the Prospects for Inertial Fusion Energy. National Research Council, National Academies Press.   Or give it to an enemy to induce headaches.

My impression after reading many fusion books is that equipment has to be built with atomic precision, not a single atom out of place in some components.  Really???  And even if that were possible, with global conventional oil production flat-lining since 2005 and all oil, conventional and unconventional, peaking in 2018, the world of the future will be much simpler, with precision of less than a thousandth of an inch (for more on that, read Winchester S (2018) The Perfectionists: How Precision Engineers Created the Modern World).

Moyer (2010) Fusion’s False Dawn. Scientific American.

Scientists have long dreamed of harnessing nuclear fusion—the power plant of the stars—for a safe, clean and virtually unlimited energy supply. Even as a historic milestone nears, skeptics question whether a working reactor will ever be possible

The deuterium-tritium fusion only kicks in at temperatures above 150 million degrees Celsius — 25,000 times hotter than the surface of the sun.

Yet the flash of ignition may be the easy part. The challenges of constructing and operating a fusion-based power plant could be more severe than the physics challenge of generating the fireballs in the first place.  A working reactor would have to be made of materials that can withstand temperatures of millions of degrees for years on end. It would be constantly bombarded by high-energy nuclear particles–conditions that turn ordinary materials brittle and radioactive. It has to make its own nuclear fuel in a complex breeding process. And to be a useful energy-producing member of the electricity grid, it has to do these things pretty much constantly–with no outages, interruptions or mishaps–for decades.

Fusion plasmas are hard to control. Imagine holding a large, squishy balloon. Now squeeze it down to as small as it will go. No matter how evenly you apply pressure, the balloon will always squirt out through a space between your fingers. The same problem applies to plasmas. Anytime scientists tried to clench them down into a tight enough ball to induce fusion, the plasma would find a way to squirt out the sides. It is a paradox germane to all types of fusion reactors–the hotter you make the plasma and the tighter you squeeze it, the more it fights your efforts to contain it.  So scientists have built ever larger magnetic bottles, but every time they did so, new problems emerged.

No matter how you make fusion happen–whether you use megajoule lasers (like at Lawrence Livermore National Laboratory) or the crunch of magnetic fields–energy payout will come in the currency of neutrons. Because these particles are neutral, they are not affected by electric or magnetic fields. Moreover, they pass straight through most solid materials as well.

The only way to make a neutron stop is to have it directly strike an atomic nucleus. Such collisions are often ruinous. The neutrons coming out of a deuterium-tritium fusion reaction are so energetic that they can knock out of position an atom in what would ordinarily be a strong metal–steel for instance. Over time these whacks weaken a reactor, turning structural components brittle.

Other times the neutrons will turn benign material radioactive. When a neutron hits an atomic nucleus, the nucleus can absorb the neutron and become unstable. A steady stream of neutrons—even if they come from a “clean” reaction such as fusion—would make any ordinary container dangerously radioactive, Baker says. “If someone wants to sell you any kind of nuclear system and says there is no radioactivity, hang onto your wallet.”

A fusion-based power plant must also convert energy from the neutrons into heat that drives a turbine. Future reactor designs make the conversion in a region surrounding the fusion core called the blanket. Although the chance is small that a given neutron will hit any single atomic nucleus in a blanket, a blanket thick enough and made from the right material—a few meters’ worth of steel, perhaps—will capture nearly all the neutrons passing through. These collisions heat the blanket, and a liquid coolant such as molten salt draws that heat out of the reactor. The hot salt is then used to boil water, and as in any other generator, this steam spins a turbine to generate electricity.

Except it is not so simple. The blanket has another job, one just as critical to the ultimate success of the reactor as extracting energy. The blanket has to make the fuel that will eventually go back into the reactor.

Although deuterium is cheap and abundant, tritium is exceptionally rare and must be harvested from nuclear reactions. An ordinary nuclear power plant can make between two and three kilograms of it in a year, at an estimated cost of $80 million to $120 million a kilogram. Unfortunately, a magnetic fusion plant will consume about a kilogram of tritium a week. “The fusion needs are way, way beyond what fission can supply,” says Mohamed Abdou, director of the Fusion Science and Technology Center at the University of California, Los Angeles.
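The gap is easy to quantify from the numbers in that paragraph; a minimal sketch (the weekly consumption and per-kilogram cost are taken straight from the quote, the midpoints are mine):

```python
# How many fission reactors' worth of tritium one magnetic fusion plant would consume.
fusion_use_kg_per_year = 1 * 52      # ~1 kg per week
fission_supply_kg_per_year = 2.5     # midpoint of the 2-3 kg/yr a fission plant can make
cost_per_kg = 100e6                  # midpoint of the quoted $80-120 million per kg

print(f"~{fusion_use_kg_per_year / fission_supply_kg_per_year:.0f} fission plants' output per fusion plant")  # ~21
print(f"~${fusion_use_kg_per_year * cost_per_kg / 1e9:.0f} billion per year in tritium at quoted prices")     # ~$5 billion
```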

For a fusion plant to generate its own tritium, it has to borrow some of the neutrons that would otherwise be used for energy. Inside the blanket channels of lithium, a soft, highly reactive metal, would capture energetic neutrons to make helium and tritium. The tritium would escape out through the channels, get captured by the reactor and be reinjected into the plasma.

When you get to the fine print, though, the accounting becomes precarious. Every fusion reaction devours exactly one tritium ion and produces exactly one neutron. So every neutron coming out of the reactor must make at least one tritium ion, or else the reactor will soon run a tritium deficit—consuming more than it creates. Avoiding this obstacle is possible only if scientists manage to induce a complicated cascade of reactions. First, a neutron hits a lithium 7 isotope, which, although it consumes energy, produces both a tritium ion and a neutron. Then this second neutron goes on to hit a lithium 6 isotope and produce a second tritium ion.

Moreover, all this tritium has to be collected and reintroduced to the plasma with near 100 percent efficiency. “In this chain reaction you cannot lose a single neutron, otherwise the reaction stops,” says Michael Dittmar, a particle physicist at the Swiss Federal Institute for Technology in Zurich. “The first thing one should do [before building a reactor] is to show that the tritium production can function. It is pretty obvious that this is completely out of the question.”

“This is a very fancy gadget, this fusion blanket,” Hazeltine says. “It is accepting a lot of heat and taking care of that heat without overheating itself. It is accepting neutrons, and it is made out of very sophisticated materials so it doesn’t have a short lifetime in the face of those neutrons. And it is taking those neutrons and using them to turn lithium into tritium.”

ITER, unfortunately, will not test blanket designs. That is why many scientists—especially those in the U.S., which is not playing a large role in the design, construction or operation of ITER—argue that a separate facility is needed to design and build a blanket. “You must show that you can do this in a practical system,” Abdou says, “and we have never built or tested a blanket. Never.” If such a test facility received funding tomorrow, Abdou estimates that it would take between 30 and 75 years to understand the issues sufficiently well to begin construction on an operational power plant. “I believe it’s doable,” he says, “but it’s a lot of work.”

The Big Lie

Let’s say it happens. The year is 2050. Both the NIF and ITER were unqualified successes, hitting their targets for energy gain on time and under budget. Mother Nature held no surprises as physicists ramped up the energy in each system; the ever unruly plasmas behaved as expected. A separate materials facility demonstrated how to build a blanket that could generate tritium and convert neutrons to electricity, as well as stand up to the subatomic stresses of daily use in a fusion plant. And let’s assume that the estimated cost for a working fusion plant is only $10 billion. Will it be a useful option?

Even for those who have spent their lives pursuing the dream of fusion energy, the question is a difficult one to answer. The problem is that fusion-based power plants—like ordinary fission plants—would be used to generate baseload power. That is, to recoup their high initial costs, they would need to always be on. “Whenever you have any system that is capital-intensive, you want to run it around the clock because you are not paying for the fuel,” Baker says.

Unfortunately, it is extremely difficult to keep a plasma going for any appreciable length of time. So far reactors have been able to maintain a fusing plasma for less than a second. The goal of ITER is to maintain a burning plasma for tens of seconds. Going from that duration to around-the-clock operation is yet another huge leap. “Fusion will need to hit 90 percent availability,” says Baker, a figure that includes the downtime required for regular maintenance. “This is by far the greatest uncertainty in projecting the economic reliability of fusion systems.”

It used to be that fusion was [seen as] fundamentally different from dirty fossil fuels or dangerous uranium. It was beautiful and pure—a permanent fix, an end to our thirst for energy. It was as close to the perfection of the cosmos as humans were ever likely to get. Now those visions are receding. Fusion is just one more option and one that will take decades of work to bear fruit…the age of unlimited energy is not [in sight].

Clery D (2013) The Most Expensive Science Experiment Ever. Popular Science.

Some people have spent their whole working lives researching fusion and then retired feeling bitter at what they see as a wasted career. But that hasn’t stopped new recruits joining the effort every year…, perhaps motivated by … [the sense that] the need for fusion has never been greater, considering the twin threats of dwindling oil supplies and climate change. ITER won’t generate any electricity, but designers hope to go beyond break-even and spark enough fusion reactions to produce 10 times as much heat as that pumped in to make it work.

To get there requires a reactor of epic proportions:

  • The building containing the reactor will be nearly 200 feet tall and extend 43 feet underground.
  • The reactor inside will weigh 23,000 tons.
  • The rare metal niobium will be combined with tin to make superconducting wires for the reactor’s magnets. When finished, workers will have made 50,000 miles of wire, enough to wrap around the equator twice.
  • There will be 18 magnets, each 46 feet tall and weighing 360 tons (as much as a fully laden jumbo jet), with giant D-shaped coils of wire forming the electromagnets used to contain the plasma.

That huge sum of money is, for the nations involved, a gamble against a future in which access to energy will become an issue of national security. Most agree that oil production is going to decline sharply during this century.  That doesn’t leave many options for the world’s future energy supplies. Conventional nuclear power makes people uneasy for many reasons, including safety, the problems of disposing of waste, nuclear proliferation and terrorism.

Alternative energy sources such as wind, wave and solar power will undoubtedly be a part of our energy future. It would be very hard, however, for our modern energy-hungry society to function on alternative energy alone because it is naturally intermittent–sometimes the sun doesn’t shine and the wind doesn’t blow–and also diffuse–alternative technologies take up a lot of space to produce not very much power.

Difficult choices lie ahead over energy and, some fear, wars will be fought in coming decades over access to energy resources, especially as the vast populations of countries such as China and India increase in prosperity and demand more energy. Anywhere that oil is produced or transported–the Strait of Hormuz, the South China Sea, the Caspian Sea, the Arctic–could be a flashpoint. Supporting fusion is like backing a long shot: it may not come through, but if it does it will pay back handsomely. No one is promising that fusion energy will be cheap; reactors are expensive things to build and operate. But in a fusion-powered world geopolitics would no longer be dominated by the oil industry, so no more oil embargoes, no wild swings in the price of crude and no more worrying that Russia will turn off the tap on its gas pipelines.

Hambling D (2011) Star power: Small fusion start-ups aim for break-even. NewScientist.

The deuterium-tritium fusion only kicks in at temperatures above 150 million degrees Celsius — 25,000 times hotter than the surface of the sun. Not only does reaching such temperatures require a lot of energy, but no known material can withstand them once they have been achieved. The ultra-hot, ultra-dense plasma at the heart of a fusion reactor must instead be kept well away from the walls of its container using magnetic fields. Following a trick devised in the Soviet Union in the 1950s, the plasma is generated inside a doughnut or torus-shaped vessel, where encircling magnetic fields keep the plasma spiraling clear of the walls – a configuration known as a tokamak. This confinement is not perfect: the plasma has a tendency to expand, cool and leak out, limiting the time during which fusion can occur. The bigger the tokamak, the better the chance of extracting a meaningful amount of energy, since larger magnetic fields hold the plasma at a greater distance, meaning a longer confinement time.

Break-even is the dream ITER was conceived to realize.

With a huge confinement volume, it should contain a plasma for several minutes, ultimately producing 10 times as much power as is put in.  But this long confinement time brings its own challenges. An elaborate system of gutters is needed to extract from the plasma the helium produced in the reaction, along with other impurities. The neutrons emitted, which are chargeless and so not contained by magnetic fields, bombard the inside wall of the torus, making it radioactive and meaning it must be regularly replaced. These neutrons are also needed to breed the tritium that sustains the reaction, so the walls must be designed in such a way that the neutrons can be captured on lithium to make tritium. The details of how to do this are still being worked out.

The success of the project is by no means guaranteed 

“We know we can produce plasmas with all the right elements, but when you are operating on this scale there are uncertainties,” says David Campbell, a senior ITER scientist. Extrapolations from the performance of predecessors suggest a range of possible outcomes, he says. The most likely is that ITER will work as planned, delivering 10 times break-even energy. Yet there is a chance it might work better – or produce too little energy to be useful for commercial fusion.

Richard Wolfson, in “Nuclear Choices: A Citizen’s Guide to Nuclear Technology”:

“In the long run, fusion itself could bring on the ultimate climactic crisis. The energy released in fusion would not otherwise be available on Earth; it would represent a new input to the global energy flow. Like all the rest of the global energy, fusion energy would ultimately become heat that Earth would have to radiate into space. As long as humanity kept its energy consumption a tiny fraction of the global energy flow, there would be no major problem. But history shows that human energy consumption grows rapidly when it is not limited by shortages of fuel. Fusion fuel would be unlimited, so our species might expand its energy consumption to the point where the output of our fusion reactors became significant relative to the global input of solar energy. At that point Earth’s temperature would inevitably rise. This long-term criticism of fusion holds for any energy source that could add to Earth’s energy flow even a few percent of what the Sun provides. Only solar energy itself escapes this criticism”. page 274
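
The scale Wolfson is warning about can be sketched with a few round numbers. The inputs below (about 18 TW of current human primary power, about 120,000 TW of sunlight absorbed by Earth, and a steady 2 percent annual growth in energy use) are my own illustrative assumptions, not figures from his book.

```python
import math

# Rough sketch of Wolfson's waste-heat argument. All inputs are assumed
# round numbers for illustration, not figures from the book.
human_power_tw = 18.0          # assumed current human primary power, TW
solar_absorbed_tw = 120_000.0  # assumed solar power absorbed by Earth, TW
growth_rate = 0.02             # assumed steady 2% annual growth in energy use

for target_fraction in (0.01, 0.05):
    target_tw = target_fraction * solar_absorbed_tw
    years = math.log(target_tw / human_power_tw) / math.log(1 + growth_rate)
    print(f"Reaching {target_fraction:.0%} of absorbed sunlight "
          f"(~{target_tw:,.0f} TW) would take about {years:.0f} years")
```

On those assumptions, a couple of centuries of steady growth is enough to make waste heat from “unlimited” energy climatically significant, which is exactly Wolfson’s point.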

Robert L. Hirsch, author of the Department of Energy 2005 Peak Oil study, in his book “The Impending World Energy Mess”:

“Fusion has been in the research stage since the 1950s….Fusion happens when fuels are heated to hundreds of millions of degrees long enough for more energy to be released than was used to create the heat. Containment of fusion fuels on the sun is by gravity. Since gravity is not usable for fusion on earth, researchers have used magnetic fields, electrostatic fields, and inertia to provide containment. Thus far, no magnetic or electrostatic fusion concept has demonstrated success.”  Hirsch thinks this will never work out and it’s been a waste of tens of billions of dollars.

William Parkins, formerly the chief scientist at Rockwell International, asks in the 10 Mar 2006 edition of Science, “Fusion Power: Will it Ever Come?”

When I read Parkins’s article and translated some of the measurements into units more familiar to me, it was obvious that fusion would never see the light of day:

  • Fusion requires heating D-T (deuterium-tritium) to a temperature of 180 million degrees Fahrenheit — 6.5 times hotter than the core of the sun.
  • So much heat is generated that the reactor vacuum vessel has to be at least 65 feet long, and no matter what the material, it will need to be replaced periodically because radiation damage will make it increasingly brittle. The vessel must retain vacuum integrity while accommodating many connections for heat transfer and other systems. Vacuum leaks are inevitable and could only be repaired with remotely controlled equipment.
  • A major part of the cost of a fusion plant is the blanket-shield component. Its area equals that of the reactor vacuum vessel, about 4,500 square yards in a 1,000 MWe plant. The surrounding blanket-shield, made of expensive materials, would need to be at least 5.5 feet thick and weigh 10,000 metric tons, conservatively costing $1.8 billion (a quick arithmetic check of the temperature and cost figures follows this list).
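
Two of these figures can be checked with a couple of lines of arithmetic. The solar-core temperature below is my own assumed round number; everything else comes from the bullet points above.

```python
# Quick sanity check on two of the figures above. The sun-core temperature is
# an assumed round value; the other inputs come from the bullet points.

dt_temp_f = 180e6                      # quoted D-T temperature, degrees Fahrenheit
dt_temp_c = (dt_temp_f - 32) * 5 / 9   # convert to Celsius
sun_core_c = 15e6                      # assumed solar core temperature, degrees C
print(f"{dt_temp_c:.1e} C is ~{dt_temp_c / sun_core_c:.1f}x the sun's core (quoted: 6.5x)")

blanket_cost = 1.8e9                   # quoted blanket-shield cost, dollars
plant_rating_kwe = 1_000_000           # a 1,000 MWe plant expressed in kWe
print(f"Blanket-shield alone: ${blanket_cost / plant_rating_kwe:,.0f}/kWe")
```

The $1,800/kWe result is the same figure Parkins cites in the passage that follows.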

Here are some of the other difficulties Parkins points out in this article:

The blanket-shield component “amounts to $1,800/kWe of rated capacity—more than nuclear fission reactor plants cost today. This does not include the vacuum vessel, magnetic field windings with their associated cryogenic system, and other systems for vacuum pumping, plasma heating, fueling, “ash” removal, and hydrogen isotope separation. Helium compressors, primary heat exchangers, and power conversion components would have to be housed outside of the steel containment building—required to prevent escape of radioactive tritium in the event of an accident. It will be at least twice the diameter of those common in nuclear plants because of the size of the fusion reactor.

Scaling of the construction costs from the Bechtel estimates suggests a total plant cost on the order of $15 billion, or $15,000/kWe of plant rating. At a plant factor of 0.8 and total annual charges of 17% against the capital investment, these capital charges alone would contribute 36 cents to the cost of generating each kilowatt hour. This is far outside the competitive price range.

The history of this dream is as expensive as it is discouraging. Over the past half-century, fusion appropriations in the U.S. federal budget alone have run at about a quarter-billion dollars a year. Lobbying by some members of the physics community has resulted in a concentration of work at a few major projects—the Tokamak Fusion Test Reactor at Princeton, the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory, and the International Thermonuclear Experimental Reactor (ITER), the multinational facility now scheduled to be constructed in France after prolonged negotiation. NIF is years behind schedule and greatly over budget; it has poor political prospects, and the requirement for waiting between laser shots makes it a doubtful source for reliable power.

Even if a practical means of generating a sustained, net power-producing fusion reaction were found, prospects of excessive plant cost per unit of electric output, requirement for reactor vessel replacement, and need for remote maintenance for ensuring vessel vacuum integrity lie ahead. What executive would invest in a fusion power plant if faced with any one of these obstacles? It’s time to sell fusion for physics, not power”.
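
Going back to the 36-cent figure: it follows directly from the numbers Parkins gives, with the 8,760 hours in a year the only input not stated in the quote. A minimal check:

```python
# Reproduce Parkins's capital-charge estimate from the figures quoted above.
capital_cost_per_kwe = 15_000   # $/kWe of plant rating
annual_charge_rate = 0.17       # total annual charges against the capital investment
plant_factor = 0.8              # fraction of the year the plant runs at full rating
hours_per_year = 8_760

kwh_per_kwe_per_year = plant_factor * hours_per_year
cost_per_kwh = capital_cost_per_kwe * annual_charge_rate / kwh_per_kwe_per_year
print(f"Capital charges alone: ${cost_per_kwh:.2f} per kWh")   # ~$0.36
```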

Former House of Representatives Congressman Roscoe Bartlett (R-MD), head of the “Peak Oil Caucus”:

“…hoping to solve our energy problems with fusion is a bit like you or me hoping to solve our personal financial problems by winning the lottery. That would be real nice. I think the odds are somewhere near the same. I am about as likely to win the lottery as we are to come to economically feasible fusion.”

Bartlett’s full speech to congress: http://www.energybulletin.net/4733.html

National Academy of Sciences. 2013. An Assessment of the Prospects for Inertial Fusion Energy

The 3 principal research efforts in the USA are all trying to implode fusion fuel pellets by: (1) lasers, including solid state lasers at the Lawrence Livermore National Laboratory’s (LLNL’s) NIF and the University of Rochester’s Laboratory for Laser Energetics (LLE), as well as the krypton fluoride gas lasers at the Naval Research Laboratory; (2) particle beams, being explored by a consortium of laboratories led by the Lawrence Berkeley National Laboratory (LBNL); and (3) pulsed magnetic fields, being explored on the Z machine at Sandia National Laboratories. The minimum technical accomplishment that would give confidence that commercial fusion may be feasible—the ignition of a fuel pellet in the laboratory—has not been achieved.

This 247-page report is chock-full of the problems that fusion must overcome, not just technical ones but also funding: billions of dollars would be needed in the unlikely event that any of the various flavors of fusion makes enough progress to scale up. If you ever wanted to know the minutiae of why fusion will never work, this is a great document to read, if you can understand it. I spent about 10 minutes grabbing just a few of the hundreds of “challenges” that need to be overcome:

  • Making a reliable, long-lived chamber is challenging since the charged particles, target debris, and X-rays will erode the wall surface and the neutrons will embrittle and weaken the solid materials.
  • Unless the initial layer surfaces are very smooth (i.e., perturbations are smaller than about 20 nm), short-wavelength perturbations (wavelength comparable to the shell thickness) can grow rapidly and destroy the compressing shell. Similarly, near the end of the implosion, such instabilities can mix colder material into the hot spot that must be heated to ignition. If too much cold material is injected into the hot spot, ignition will not occur. Most of the fuel must be compressed to high density, approximately 1,000 to 4,000 times solid density.
  • To initiate fusion, the deuterium and tritium fuel must be heated to over 50 million degrees and held together long enough for the reactions to take place. Drivers must deliver very uniform ablation; otherwise the target is compressed asymmetrically. If the compression of the target is insufficient, the fusion reaction rate is too slow and the target disassembles before the reactions take place. Asymmetric compression excites strong Rayleigh-Taylor instabilities that spoil compression and mix dense cold plasma with the less dense hot spot. Preheating of the target can also spoil compression. For example, mistimed driver pulses can shock heat the target before compression. Also, interaction of the driver with the surrounding plasma can create fast electrons that penetrate and preheat the target.
  • The technology for the reactor chambers, including heat exhaust and management of tritium, involves difficult and complicated issues with multiple, frequently competing goals and requirements.  Understanding the performance at the level of subsystems such as a breeding blanket and tritium management, and integrating these complex subsystems into a robust and self-consistent design will be very challenging.
  • Avoiding frequent replacement of components that are difficult to access and replace will be important to achieving high availability. Such components will need to achieve a very high level of operational reliability.
  • Experimental investigations of the fast-ignition concept are challenging and involve extremely high-energy-density physics: ultraintense lasers (>10^19 W/cm^2); pressures in excess of 1 Gbar; magnetic fields in excess of 100 MG; and electric fields in excess of 10^12 V/m. Addressing the sheer complexity and scale of the problem inherently requires high-energy, high-power laser facilities.

References

Bardi, Ugo. 2014. Extracted: How the Quest for Mineral Wealth Is Plundering the Planet. Chelsea Green Publishing.

Biello, David.  June 2014. A Milestone on the Long and Winding Road to Fusion.  Scientific American.

Chang, Ken. Mar 18, 2014. Machinery of an Energy Dream. New York Times.

Clery, D. 28 February 2014. New Review Slams Fusion Project’s Management. Science: Vol. 343 no. 6174 pp. 957-8.

Hinkel, D.*, Springer, P.*, Standen, A., Krasny, M. Feb 13, 2014. Bay Area Scientists Make Breakthrough on Nuclear Fusion. Forum. (* Scientists at Lawrence Livermore National Laboratory.)

Moyer, M. March/April 2010. Fusion’s False Dawn. Scientific American.

Perlman, David. Feb 13, 2014. Livermore Lab’s fusion energy tests get closer to ‘ignition’. San Francisco Chronicle.


Compressed air energy storage (CAES)

Figure 1. Potential salt dome locations for CAES facilities are mainly along the Gulf coast

Preface. Besides pumped hydro storage (PHS), which provides 99% of energy storage today, CAES is the only other commercially proven energy storage technology that can provide large-scale (over 100 MW) energy storage. But there are just two CAES plants in the world because there are so few places to put them, as you can see in Figure 1 and Figure i.

CAES is arguably the most sustainable form of energy storage, without the environmental issues that PHS poses, such as flooding land and damming rivers. And Barnhart (2013) rates the ESOI, or energy stored on energy invested, of CAES the best of all storage technologies; batteries, by contrast, need up to 100 times more energy to create than the energy they can store.

A more detailed and technical article on CAES with wonderful pictures can be found here: Kris De Decker. History and Future of the Compressed Air Economy.

Alice Friedemann   www.energyskeptic.com  author of  “Life After Fossil Fuels – Back to Wood World”, 2021, Springer, “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

How it works: Using off-peak electricity, compressed air is pumped into very large underground cavities at a depth of 1650–4250 feet (Hovorka 2009), and then drawn out to spin turbines at peak demand periods.

Uh-oh — it still needs fossil fuels. A big drawback of CAES is that the electric generators use natural gas to supplement the energy from the stored compressed air. Natural gas also provides power to compress and pump the air underground, and when the compressed air is withdrawn, natural gas is used a second time to heat it and force it through expanders to power a generator. Current CAES facilities are essentially gas turbines that consume 40–60% less gas than conventional turbines (SBC 2013).

Few locations: Domal salt formations are rare (orange in figure i below)

Locations are scarce because they must be airtight. There are only two CAES plants in the world: one in Alabama (110 MW), built in 1991, and one in Germany, built in 1979, both in domal salt formations.

There are only two because domal salt formations are so rare, existing in only a few U.S. states, as shown in figure i. These domes have one or more deep chambers that are airtight, so they can handle frequent charging and discharging, with pure, thick salt walls that self-heal with air moisture, preventing leaks. Bedded salt is less ideal because it takes a huge amount of energy and water to carve chambers out of it, and domal salt is also purer and thicker than bedded salt (Hovorka).

Areas with class 4+ wind and possible CAES locations. Succar. 2008. Compressed Air Energy Storage: Theory, Resources, And Applications For Wind Power. Princeton University.

Ideally a CAES facility would store renewable wind power, but the best wind locations are seldom near domal salt areas. One $8 billion wind/CAES project is being planned in Utah, however. It would use the only known salt dome outside of Texas, Louisiana, and Alabama for a $1.5 billion CAES plant to store electricity from a $4 billion wind farm in Wyoming, delivering power to Los Angeles over $2.6 billion of new transmission lines running 535 miles ($4.86 million/mile) (DATC 2014; Gruver 2014).

This is not exactly run of the mill geology. CAES has yet to be deployed in bedded salt, aquifers, or abandoned rock mines because these formations are less likely to be airtight, and hence able to charge and discharge frequently and to maintain constant pressure. Underground areas once but no longer used to store natural gas or oil would have to be free of blockages that could gum up the works. Water is another limiting factor. High volumes are needed to cool the compressed air before storing it.

CAES systems generally have twice as much up-ramping capability as down-ramping. Translation: They can produce electricity faster than they can store it (IEA 2011a).

They are inefficient

The CAES plants in operation in Germany and the US have electric-to-electric efficiencies of only 40% and 54%, respectively (Luo 2015). Conversion efficiencies this low would require roughly a doubling of wind and solar generation to make up for the loss.
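
A quick way to see what those efficiencies imply. The two percentages are the ones quoted from Luo (2015); the calculation itself is mine.

```python
# How much generation is needed per kWh delivered from storage, at the
# electric-to-electric efficiencies quoted for the two existing CAES plants.
for plant, efficiency in (("Germany", 0.40), ("United States", 0.54)):
    print(f"{plant}: {1 / efficiency:.2f} kWh generated per kWh delivered")
```

At the German plant’s 40 percent, two and a half kilowatt-hours must be generated for every kilowatt-hour delivered from storage, which is where the need for roughly a doubling of wind and solar comes from.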

The Pacific Northwest National Laboratory calculated the cost of energy storage devices for balancing the grid if wind power reached 20% of electric generation across the United States. CAES was the most expensive option at $170.6 billion. The storage would cover spans ranging from milliseconds up to an hour; not 2 hours, not a day, and not a week. Longer spans would cost extra. In billions of dollars, the options examined were $54.03 for NaS batteries, $63.85 for flywheels, $81.62 for Li-ion batteries, $116.61 for redox flow batteries, $125.06 for demand response (plug-in hybrid car batteries), $130.24 for pumped hydro storage (PHS), $135.48 for combustion turbines (CT), and $170.62 for compressed air energy storage (PNNL 2013).

Based on nine vendor estimates, to build CAES units able to store one day of U.S. electricity would cost from $912 billion to $1.48 trillion. That’s below ground. Above ground CAES would cost $3.8 trillion (DOE/EPRI 2013).

Locations must be near the electric grid: it’s far too expensive to add transmission from remote locations, and CAES plants are already expensive enough to build….

According to Alfred Cavallo, “The immense magnitude of stored energy required to transform the intermittent wind resource to a constantly available power supply is not widely appreciated. For example, a 200 MW wind/CAES plant would need a minimum storage capacity of 10,000 MWh, or 50 hours of full plant output (this assumes that the wind power density is constant throughout the year). If the wind was not constant, but seasonal, say mainly in the winter or spring, the energy storage for seasonal output would require a minimum of 40,000 MWh (200 hours of full power plant output). Clearly, only the most inexpensive of storage media, like air or water, could be used in such an application” (Cavallo 2007).

Since the wind is a seasonal resource, it would be ideal to be able to store weeks of wind energy, but that is impossibly expensive (Cavallo 1995).

CAES in aquifers has never been accomplished; an attempt to do so in Iowa was abandoned after $8 million was spent because testing found it would leak (see Haugen below). Aquifers are far more expensive than salt caverns, partly due to the high cost of conducting tests, such as seismic surveys, drilling test wells, modeling the reservoir, and so on (Swensen, Hydrodynamics Group, Marchese). Aquifers may not be suitable for CAES at all: they have to have the right amount of porosity and permeability beneath an impermeable caprock with the right geometry (Succar). This makes it very expensive to find out.

Hard rock caverns, such as abandoned mines, are the least likely place to put CAES, and this has never been attempted: leakage is too likely, and finding a mine at exactly the right depth narrows the choices further.

Air poses storage problems that natural gas does not. Using underground formations that once stored natural gas may not work, because “a CAES system used for arbitrage or backing wind power will likely switch between compression and generation at least once a day and perhaps several times a day. In contrast, most natural gas storage facilities are often only cycled once over the course of the year to meet the seasonal demand fluctuations for natural gas. Third, several oxidation processes might take place in the presence of oxygen from the air depending on the mineralogy of the formation. Also, introduction of air into the formation might promote propagation of aerobic bacteria that might pose a significant corrosion risk. Finally, additional corrosion mechanisms might be promoted due to the introduction of oxygen into the formation” (Succar).

Haugen, D. 2012. Scrapped Iowa project leaves energy storage lessons. Midwest Energy News.

After spending $8 million on a CAES aquifer in Iowa, the project was halted when it was concluded that air didn’t flow fast enough through the aquifer for it to be effective as a compressed-air energy storage site.

Hydrodynamics Group. CAES in aquifers is problematic: geological data are typically poor and reservoir properties uncertain.

Hydrodynamics has found that CAES in aquifer storage medium is problematic. We found that geological data for aquifer structures is typically very limited, resulting in costly exploration, field testing, and analysis development programs. Other challenges include constraint of air storage pressure around the hydrostatic pressure of the aquifer, limitations on well productivity, the potential for oxygen depletion, and the potential of water production with the air. We have found that the mitigation of the challenges of CAES development is dependent on the selection of an anticline structure at the proper depth, and the choice of a highly permeable porous medium.

REFERENCES

Barnhart, CJ, et al. 2013. On the importance of reducing the energetic and material demands of electrical energy storage. Energy & Environmental Science. 

Cavallo, A.  et al. 1995. Cost effective seasonal storage of wind energy. Houston, TX, USA,  pp. 119-125.

Cavallo, A. 2007. Controllable and affordable utility-scale electricity from intermittent wind resources and compressed air energy storage (CAES). Energy 32: 120-127.

DATC. 2014. $8-billion green energy initiative proposed for Los Angeles. Los Angeles: Duke
American Transmission Co.

Denholm. September 23, 2013. Energy Storage in the U.S. National Renewable Energy Laboratory. Slide 15.

DOE/EPRI. 2013. Electricity storage handbook in collaboration with NRECA. USA: Sandia
National Laboratories and Electric Power Research Institute.

Gruver, M. 2014. Renewable energy plan hinges on huge Utah caverns. New York: Associated Press.

Hovorka, S. 2009. Characterization of Bedded Salt for Storage Caverns: Case Study from the Midland Basin . Texas Bureau of Economic Geology.

Hydrodynamics Group. 2009. Norton compressed air energy storage. http://hydrodynamics-group.com/mbo/content/view/16/40

IEA. 2011. IEA harnessing variable renewables: a guide to the balancing challenge. Paris: International Energy Agency.

Luo X, et al. 2015. Overview of current development in electrical energy storage technologies and the application potential in power system operation. Applied Energy 137: 511-536.

Marchese, D. 2009. Transmission system benefits of CAES assets in a growing renewable generation market. Energy Storage Association Annual Meeting.

NREL. 2014. Renewable Electricity Futures Study. National Renewable Energy Laboratory.

PNNL. 2013. National assessment of energy storage for grid balancing and arbitrage: phase II, vol 2: cost and performance characterization. Washington, DC: Pacific Northwest National Laboratory.

SBC. 2013. Electricity storage. SBC Energy Institute.

Succar, S. et al. 2008. Compressed Air Energy Storage: Theory, Resources, and Applications for Wind Power. Princeton Environmental Institute.

Swensen, E. et al. 1994. Evaluation of Benefits and Identification of Sites for a CAES Plant in New York State. Energy Storage and Power Consultants. EPRI Report TR-104268.


Heinberg on what to do at home to conserve energy

Preface. A quick summary. Best investment: insulating exterior walls, ceiling, and floors for energy savings. Other good changes were planting a garden and a fruit-and-nut orchard, and buying a solar hot water heater, a solar food dryer, a solar cooker, chickens, and energy-efficient appliances.

Lessons learned: These are expensive, especially energy storage. Solar cookers work mainly in the summer.

In the future there will be more bikes and ebikes than cars. There needs to be much more local production of food and other goods to shorten supply chains.

Bottom line: there’s very little we can do as individuals. We can’t mine the minerals we need, few of us can grow all of our food, and despite all these investments Heinberg still depends heavily on the greater world for food, electricity, and clothing; cars and most other objects in our lives can’t be home-made. What is required to make a transition is much bigger than most people imagine.

Alice Friedemann   www.energyskeptic.com  author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, 2021, Springer; “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Richard Heinberg. 2020. If My House Were the World: The Renewable Energy Transition Via Chickens and Solar Cookers. Resilience.org

For the past two decades, my wife Janet and I have been trying to transition our home to a post-fossil-fuel future. I say “trying,” because the experiment is incomplete and only somewhat successful. It doesn’t offer an exact model for how the rest of the world might make the shift to renewable energy; nevertheless, there’s quite a bit that we’ve learned that could be illuminating for others as they contemplate what it will take to minimize climate change by replacing coal, oil, and gas with cleaner energy sources.

We started with a rather trashy 1950s suburban house on a quarter-acre lot. We didn’t design a solar-optimal house from scratch the way Amory Lovins did (we thought about it, but we just didn’t have the time or money). We did what we could afford to do, when we could afford to do it.

Our first step was to insulate our exterior walls, ceiling, and floors. That was probably our best investment overall: it saved energy, and it made the house quieter and more pleasant to live in. Then we installed a small (1.2 kw) photovoltaic system, and planted a garden and fruit-and-nut orchard. Gradually, over the years, we added battery backup for our PV system, a solar hot water heater, a solar food dryer, chickens, solar cookers, energy-efficient appliances (including a mini-split electric HVAC system), and an electric car.

Here are ten things we learned along the way.

  1. It’s expensive. Altogether, we’ve spent tens of thousands of dollars on our quest for personal sustainability. And we’re definitely not big spenders. We economized at every stage, and occasionally benefitted from free labor and materials (our solar hot water panels, for example, were donated, and we built our food dryer from scrap). Still, once every few years we made a significant outlay for some new piece of electricity-generating or energy-saving technology. True, solar panels have gotten cheaper in the intervening years. On the other hand, there are things we still haven’t gotten to: we continue to rely on an old natural gas-fired kitchen cooking stove, which really should be replaced with an induction range if we hope to be all-solar-electric.
  2. Some things didn’t work. Early on, we planned and built a glassed-in extension on the south side of our house. Our idea was that it would capture sunlight in the winter and reduce our heating bills. As it turned out, we didn’t get the window and roof angles right, and so we receive relatively little heating benefit from this add-on. Instead we use it as a garden room for starting seedlings in the early spring. I suspect the global renewable energy transition will similarly see a lot of good ideas go awry, and false starts repurposed.
  3. Some things worked well. Twenty years after purchase, we have an antique PV system, with museum-quality Siemens panels still spitting out electrons. We made a big investment up-front, and got free electricity for two decades. This is a very different economic bargain from the familiar one with fossil fuels, which is pay-as-you-go. Similarly, making a rapid global energy transition, though offering some economic benefits in the long run, will require an enormous up-front expenditure. We learned that solar cookers are extremely cheap and pleasing to work with—in the summer months. Finally, we learned that keeping chickens is an economical source of eggs, though hens are less cost-effective from a food-production standpoint if you choose to treat them well (and continue caring for them after their egg laying subsides), as we did. There can be valuable side benefits: one hen, who’s been with us for nearly 10 years, has become an emotional support animal who supplants our need for more costly sources of psychological aid. I could say much more about her—but that’s for another occasion. Our chickens also provide manure and eggshells that enrich our soil. We compost some of our greenwaste and keep a worm bin, thus reducing energy usage by diverting some of our waste that would otherwise go to a landfill; we seasonally dry some produce in our solar dehydrator; and we can some of our fruit. These activities require little financial investment, but need a noticeable ongoing investment of effort.
  4. Energy storage is especially expensive. Our solar panels have lasted a long time, but our battery backup system didn’t. It now provides only about 20 minutes of power. True, our battery system is far from being state-of-the-art (it consists of five high-capacity lead-acid cells). Nevertheless, this proved to be the least-durable, least cost-effective aspect of our whole effort. The truth is, on both a diurnal and a seasonal basis, we rely almost entirely on the grid for energy storage and for matching electricity supply with demand. The lesson for our global energy transition: even though batteries are getting cheaper, energy storage will still be a costly engineering challenge.
  5. Reduce energy usage before you transition. Because renewable energy generation requires a lot of up-front investment, and because energy storage is also costly, it makes sense to minimize energy demand. For a household, that’s not problematic: we were quite happy shrinking our energy usage to roughly a quarter of the California average. But for society as a whole, this has huge implications. It’s possible to reduce demand somewhat through energy-efficiency measures, but serious reduction will have economic repercussions. We have built our national and global economic systems on the expectation of always using more. A successful energy transition will necessarily entail moving away from a growth-based consumer economy to an entirely different way of organizing investment, production, consumption, and employment.
  6. Our house is not an industrial manufacturing site. We don’t make our own cement or glass. If we had tried, it would have been a more interesting experiment, but much harder. We were undertaking the easy aspects of energy transition. The really difficult bits include things like aviation and high-heat industrial processes.
  7. Adding personal transportation to our renewable energy regime shifted us into energy deficit mode. We like our electric car, but charging it takes a lot of electricity (the energy needed to manufacture the car is another story altogether). Once we bought the car, we realized we need a larger PV system (that’s on our to-do list). For society as a whole, this suggests that transitioning the transportation sector will require sacrifice (see number 5, above). A renewable future will likely be less mobile and more local, and will feature more bikes and ebikes than cars. We should start shortening supply chains immediately.
  8. True sustainability and self-sufficiency would have required a lot more money, a lot more work, adaptation to a lot less consumption—or all three. Our experiment was informal; we didn’t keep track of every way in which we were using energy directly or indirectly (for example, via the embodied energy in the products we purchased). We continue to depend on flows of energy and money, and stocks of resources, in the world at large. We don’t generate the energy needed to mine minerals, or to manufacture cars, solar panels, or other stuff we have bought, such as clothes, a TV, computers, and books. The same holds for food self-sufficiency: we get a lot of fruit, nuts, eggs, and veggies from our backyard with minimal fossil energy inputs, but we buy the rest of what we eat from a local organic market. The world as a whole doesn’t have the luxury of going elsewhere to get what it needs; the transition will have to be comprehensive.
  9. You can’t expect someone else to do it all for you. Many people assume that the cost of the energy transition will somehow be paid by society as a whole—primarily, by big utility companies acting under government regulations and incentives. But households like yours and mine will have to bear a lot of the expense, and businesses will have to do even more of the heavy lifting. If households can’t afford to buy new equipment, or businesses can’t do so profitably, that will make the transition that much harder and slower. If we make the transition more through energy demand reduction rather than new technology, that will require massive shifts in people’s (read: your and my) expectations and behavior.
  10. We’re glad we did what we did. Our experiment has been instructive and rewarding. As a result of it, we have a much better appreciation for where our energy and manufactured products come from, and how much they impact the environment. We are more keenly aware of what we formerly took for granted and how cluelessly privileged our nation has been in its reliance on cheap fossil fuels. Our quality of life has improved as our consumption declined.

We would do most of it all over again (though I’d put more effort into designing the solarium that now serves as our garden room). I would have thought, at the outset, that after 20 years we’d be more sustainable and self-sufficient than we actually are. My take-away: the energy transition is an enormous job, and people who look at it just in terms of politics and policy have little understanding of what is actually required.


Life After Fossil Fuels: manufacturing will be less precise

Preface. This is a book review with excerpts from Winchester’s “The Perfectionists: How Precision Engineers Created the Modern World”. The book describes how the industrial revolution was made possible by ever greater precision. First came the steam engine, which became practical only once cylinders could be made to a precision of one tenth of an inch so that steam didn’t escape. By World War II parts could be made precise to within a millionth of an inch, and today to 35 decimal places (0.00000000000000000000000000000000001), the precision required for microchips, jet engines, and other high-tech.

This amazing precision is achieved with machine tools, which make precise parts by shaping metal, glass, plastic, ceramics, and other rigid materials: cutting, boring, grinding, shearing, squeezing, rolling, stamping, and riveting them. Most precision machine tools are powered by electricity today; in the past they were driven by steam engines.

Machine tools also revolutionized our ability to kill each other.  Winchester writes: “When any part of a gun failed, another part had to be handmade by an army blacksmith, a process that, with an inevitable backlog caused by other failures, could take days. As a soldier, you then went into battle without an effective gun, or waited for someone to die and took his, or did your impotent best with your bayonet, or else you ran. Once a gun had been physically damaged in some way, the entire weapon had to be returned to its maker or to a competent gunsmith to be remade or else replaced. It was not possible, incredible though this might seem, simply to identify the broken part and replace it with another. No one had ever thought to make a gun from component parts that were each so precisely constructed that they were identical one with another.”

Machine tools cannot be used for wood because it is flexible: it swells and contracts in unpredictable ways and can never hold a fixed dimension, whether planed or jointed, lapped or milled, or varnished to a brilliant luster. Wood is fundamentally and inherently imprecise.

Since both my books, “When Trucks Stop Running” and “Life After Fossil Fuels”, make the case that we are returning to a world where the electric grid is down for good and wood is the main energy source and infrastructure material once fossil fuels become scarce, the level of civilization we can achieve will depend greatly on how precisely we can make objects in the future. Because wood charcoal makes weaker, inferior iron, steel, and other metals than coal does, today’s precision will no longer be possible; microchips, jet engines, and much more will be lost forever. Eventual deforestation will also mean orders of magnitude less metal, brick, ceramics, and glass for lack of wood charcoal. And since peak coal is here, and the remaining U.S. reserves are mostly lignite, which is poorly suited to the high heat needed in manufacturing, civilization as we know it has a limited time-span.

“The Great Simplification” will reduce precision. The good news is that hand-crafting of beautiful objects will return, a far more rewarding way of life than production lines at factories today.

Alice Friedemann   www.energyskeptic.com  author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, 2021, Springer; “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Winchester, S. 2018. The Perfectionists: How Precision Engineers created the modern world. HarperCollins.

Two particular aspects of precision need to be addressed. First, its ubiquity in the contemporary conversation—the fact that precision is an integral, unchallenged, and seemingly essential component of our modern social, mercantile, scientific, mechanical, and intellectual landscapes. It pervades our lives entirely, comprehensively, wholly.

Because an ever-increasing desire for ever-higher precision seems to be a leitmotif of modern society, I have arranged the chapters that follow in ascending order of tolerance, with low tolerances of 0.1 and 0.01 starting the story and the absurdly, near-impossibly high tolerances to which some scientists work today—claims of measurements of differences of as little as 0.000 000 000 000 000 000 000 000 000 01 grams, 10 to the -28th grams, have recently been made, for example—toward the end.

Any piece of manufactured metal (or glass or ceramic) must have chemical and physical properties: it must have mass, density, a coefficient of expansion, a degree of hardness, specific heat, and so on. It must also have dimensions: length, height, and width. It must possess geometric characteristics: it must have measurable degrees of straightness, of flatness, of circularity, cylindricity, perpendicularity, symmetry, parallelism, and position—among a mesmerizing host of other qualities even more arcane and obscure.

The piece of machined metal must have a degree of what has come to be known as tolerance. It has to have a tolerance of some degree if it is to fit in some way in a machine, whether that machine is a clock, a ballpoint pen, a jet engine, a telescope, or a guidance system for a torpedo.

To fit with another equally finely machined piece of metal, the piece in question must have an agreed or stated amount of permissible variation in its dimensions or geometry that will allow it to fit. That allowable variation is the tolerance, and the more precise the manufactured piece, the greater the tolerance that will be needed and specified.
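
As a toy illustration of tolerance as an agreed permissible variation, here is a minimal sketch; the dimensions and the pass/fail test are invented for the example, not taken from the book.

```python
# Toy illustration of tolerance as "agreed permissible variation".
# All dimensions are invented for the example.

def within_tolerance(measured: float, nominal: float, tolerance: float) -> bool:
    """True if a measured dimension lies within nominal +/- tolerance."""
    return abs(measured - nominal) <= tolerance

nominal_bore_in = 50.0   # hypothetical cylinder bore, inches
tolerance_in = 0.1       # a Wilkinson-era tolerance of one tenth of an inch

for measured in (50.04, 50.30):
    fits = within_tolerance(measured, nominal_bore_in, tolerance_in)
    print(f"bore measured at {measured:.2f} in: {'within' if fits else 'outside'} tolerance")
```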

The tolerances of the machines at the LIGO site are almost unimaginably huge, and the consequent precision of its components is of a level and nature neither known nor achieved anywhere else on Earth. LIGO is an observatory, the Laser Interferometer Gravitational-Wave Observatory.  The LIGO machines had to be constructed to standards of mechanical perfection that only a few years before were well-nigh inconceivable and that, before then, were neither imaginable nor even achievable.

Precision’s birth derives from the then-imagined possibility of maybe holding and managing and directing this steam, this invisible gaseous form of boiling water, so as to create power from it,

The father of true precision was an eighteenth-century Englishman named John Wilkinson, who was denounced sardonically as lovably mad, and especially so because of his passion for and obsession with metallic iron. He made an iron boat, worked at an iron desk, built an iron pulpit, ordered that he be buried in an iron coffin, which he kept in his workshop (and out of which he would jump to amuse his comely female visitors), and is memorialized by an iron pillar he had erected in advance of his passing in a remote village in south Lancashire.

Though the eventual function of the mechanical clock, brought into being by a variety of claimants during the fourteenth century, was to display the hours and minutes of the passing days, it remains one of the eccentricities of the period (from our current viewpoint) that time itself first played in these mechanisms a subordinate role. In their earliest medieval incarnations, clockwork clocks, through their employment of complex Antikythera-style gear trains and florid and beautifully crafted decorations and dials, displayed astronomical information at least as an equal to the presentation of time.

The behavior of the heavenly bodies was ordained by gods, and therefore was a matter of spiritual significance. As such, it was far worthier of human consideration than our numerical constructions of hours and minutes, and was thus more amply deserving of flamboyant mechanical display.

John Harrison, the man who most famously gave mariners a sure means of determining a vessel’s longitude. This he did by painstakingly constructing a family of extraordinarily precise clocks and watches, each accurate to just a few seconds in years, no matter how sea-punished its travels in the wheelhouse of a ship.

An official Board of Longitude was set up in London in 1714, and a prize of 20,000 pounds offered to anyone who could determine longitude with an accuracy of 30 miles. John Harrison eventually, and after a lifetime of heroic work on five timekeeper designs, would claim the bulk of the prize.
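
The link between the prize’s 30-mile requirement and clock accuracy can be sketched with simple arithmetic. The six-week voyage length and the figure of roughly 69 miles per degree of longitude at the equator are my own assumptions, not numbers from the book.

```python
# Rough link between the Board of Longitude's 30-mile requirement and the
# clock accuracy a marine timekeeper had to achieve. The voyage length and
# miles-per-degree figure are assumed round numbers, not from the book.

miles_allowed = 30.0
miles_per_degree_equator = 69.0              # ~1 degree of longitude at the equator
degrees_allowed = miles_allowed / miles_per_degree_equator

minutes_of_time_per_degree = 24 * 60 / 360   # Earth turns 360 degrees in 24 hours
allowed_error_min = degrees_allowed * minutes_of_time_per_degree

voyage_days = 42                             # assumed six-week ocean crossing
seconds_per_day = allowed_error_min * 60 / voyage_days
print(f"Allowed clock error: ~{allowed_error_min:.1f} minutes over the whole voyage,")
print(f"or about {seconds_per_day:.1f} seconds per day")
```

That works out to an allowed drift of only a couple of seconds a day, which is why a timekeeper accurate to a few seconds over years was such an extraordinary achievement.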

The fact that the Harrison clocks were British-invented and their successor clocks firstly British-made allowed Britain in the heyday of her empire to become for more than a century the undisputed ruler of all the world’s oceans and seas. Precise-running clockwork made for precise navigation; precise navigation made for maritime knowledge, control, and power.

In place of the oscillating beam balances that made the magic of his large clocks so spectacular to see, he substituted a temperature-controlled spiral mainspring, together with a fast-beating balance wheel that spun back and forth at the hitherto unprecedented rate of some 18,000 times an hour. He also had an automatic remontoir, which rewound the mainspring eight times a minute, keeping the tension constant, the beats unvarying. There was a downside, though: this watch needed oiling, and so, in an effort to reduce friction and keep the needed application of oil to a minimum, Harrison introduced, where possible, bearings made of diamond, one of the early instances of a jeweled escapement.

It remains a mystery just how, without the use of precision machine tools—the development of which will be central to the story that follows—Harrison was able to accomplish all this. Certainly, all those who have made watches since then have had to use machine tools to fashion the more delicate parts of the watches: the notion that such work could possibly be done by the hand of a 66-year-old John Harrison still beggars belief. But John Harrison’s clockworks enjoyed perhaps only three centuries’ worth of practical usefulness.

For precision to be a phenomenon that would entirely alter human society, it has to be expressed in a form that is duplicable; it has to be possible for the same precise artifact to be made again and again with comparative ease and at a reasonable frequency and cost.

It was only when precision was created for the many that precision as a concept began to have the profound impact on society as a whole that it does today. And the man who accomplished that single feat, of creating something with great exactitude and making it not by hand but with a machine (moreover, with a machine that was specifically created to create it: a machine that makes machines, known today as a “machine tool,” which was, is, and will long remain an essential part of the precision story), was the 18th-century Englishman denounced for his supposed lunacy because of his passion for iron, the then-uniquely suitable metal from which all his remarkable new devices could be made.

Wilkinson is today rather little remembered. He is overshadowed quite comprehensively by his much-better-known colleague and customer, the Scotsman James Watt, whose early steam engines came into being, essentially, by way of John Wilkinson’s exceptional technical skills.

On January 27, 1774, John Wilkinson, whose local furnaces, all fired by coal, were producing a healthy twenty tons of good-quality iron a week, invented a technique for the manufacture of guns. The technique had an immediate cascade effect very much more profound than any he ever imagined, and of greater long-term importance. Up until then, naval cannons were cast hollow, and the interior tube through which the powder and projectile were pushed and fired was then finished by running a cutting tool along its length.

The problem with this technique was that the cutting tool would naturally follow the passage of the tube, which may well not have been cast perfectly straight in the first place. This would then cause the finished and polished tube to have eccentricities, and for the inner wall of the cannon to have thin spots where the tool wandered off track.  And thin spots were dangerous—they meant explosions and bursting tubes and destroyed cannon and injuries to the sailors who manned the notoriously dangerous gun decks.

Then came John Wilkinson and his new idea. He decided that he would cast the iron cannon not hollow but solid. This, for a start, had the effect of guaranteeing the integrity of the iron itself—there were fewer parts that cooled early and came out with bubbles and  spongy sections (“honeycomb problems,” as they were called) for which hollow-cast cannon were then notorious.

The secret was in the boring of the cannon hole. Both ends of the operation, the part that did the boring and the part to be bored, had to be held in place, rigid and immovable, because to cut or polish something into dimensions that are fully precise, both tool and workpiece have to be clasped and clamped as tightly as possible to secure immobility.

Cannon after cannon tumbled from the mill, each accurate to the measurements the navy demanded, each one, once unbolted from the mill, identical to its predecessor, each one certain to be the same as the successor that would next be bolted onto it. The new system worked impeccably from the very start.

Yet what elevates Wilkinson’s new method to the status of a world-changing invention would come the following year, 1775, when he started to do serious business with James Watt.

The principle of a steam engine is familiar, and is based on the simple physical fact that when liquid water is heated to its boiling point it becomes a gas. Because the gas occupies some 1,700 times greater volume than the original water, it can be made to perform work.
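
The roughly 1,700-fold figure can be recovered from round densities for liquid water and for steam at atmospheric pressure. The density values below are standard approximations I am supplying, not numbers from the book.

```python
# Recover the ~1,700x expansion figure from approximate densities.
# Density values are standard round approximations, not from the book.

water_density = 1000.0   # kg/m^3, liquid water (approximate)
steam_density = 0.59     # kg/m^3, steam at ~100 C and atmospheric pressure (approximate)

print(f"Volume expansion on boiling: ~{water_density / steam_density:.0f}x")   # ~1,700x
```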

Newcomen then realized he could increase the work by injecting cold water into the steam-filled cylinder, condensing the steam and bringing it back to 1/1,700 of its volume—creating, in essence, a vacuum, which enabled the pressure of the atmosphere to force the piston back down again. This downstroke could then lift the far end of the rocker beam and, in doing so, perform real work. The beam could lift floodwater, say, out of a waterlogged tin mine.  Thus was born a very rudimentary kind of steam engine, almost useless for any application beyond pumping water.  The Newcomen engine and its like remained in production for more than 70 years, its popularity beginning to lessen only in the mid-1760s, when James Watt showed that it could be markedly improved.

Watt realized that the central inefficiency of the engine he was examining was that the cooling water injected into the cylinder to condense the steam and produce the vacuum also managed to cool the cylinder itself. To keep the engine running efficiently, the cylinder needed to be kept as hot as possible at all times, so the cooling water should perhaps condense the steam not in the cylinder but in a separate vessel, keeping the vacuum in the main cylinder, which would thus retain the cylinder’s heat and allow it to take on steam once more. To make matters even more efficient, the fresh steam could be introduced at the top of the piston rather than the bottom, with stuffing of some sort placed and packed into the cylinder around the piston rod to prevent any steam from leaking out in the process.

These two improvements (the inclusion of a separate steam condenser and the changing of the inlet pipes to allow for the injection of new steam into the upper rather than the lower part of the main cylinder) changed Newcomen’s so-called fire-engine into a fully functioning steam-powered machine.

Once perfected, it was to be the central power source for almost all factories and foundries and transportation systems in Britain and around the world for the next century and more.

Yet billowing clouds of steam, perpetually enveloping his engine in a damp, hot, opaque gray fog, incensed James Watt. Try as he might, do as he could, steam always seemed to be leaking in prodigious gushes from the engine’s enormous main cylinder. He tried blocking the leak with all kinds of devices and substances. The gap between the piston’s outer surface and the cylinder’s inner wall should, in theory, have been minimal, and more or less the same wherever it was measured. But because the cylinders were made of iron sheets hammered and forged into a circle, and their edges then sealed together, the gap actually varied enormously from place to place. In some places, piston and cylinder touched, causing friction and wear. In other places, as much as half an inch separated them, and each injection of steam was followed by an immediate eruption from the gap.

Watt tried tucking in pieces of linseed oil–soaked leather; stuffing the gap with a paste made from soaked paper and flour; hammering in corkboard shims, pieces of rubber, even dollops of half-dried horse dung.

By the purest accident, John Wilkinson asked for an engine to be built for him, to act as a bellows for one of his iron forges—and in an instant, he saw and recognized Watt’s steam-leaking problem, and in an equal instant, he knew he had the solution: he would apply his cannon-boring technique to the making of cylinders for steam engines.  Watt beamed with delight. Wilkinson had solved his problem, and the Industrial Revolution—we can say now what those two never imagined—could now formally begin.

And so came the number, the crucial number, the figure that is central to this story, that which appears at the head of this chapter and which will be refined in its exactitude in all the remaining parts of this story. This is the figure of 0.1—one-tenth of an inch. This was the tolerance to which John Wilkinson had ground out his first cylinder.  All of a sudden, there was an interest in tolerance, in the clearance by which one part was made to fit with or into another. This was something quite new, and it begins, essentially, with the delivery of that first machine on May 4, 1776.

The central functioning part of the steam engine was possessed of a mechanical tolerance never before either imagined or achieved, a tolerance of 0.1 inches.

Locks were a British obsession at the time. The social and legislative changes that were sweeping the country in the late eighteenth century were having the undesirable effect of dividing society quite brutally: while the landed aristocracy had for centuries protected itself in grand houses behind walls and parks and ha-has, and with resident staff to keep mischief at bay, the enriched beneficiaries of the new business climate were much more accessible to the persistent poor.

Envy was abroad. Robbery was frequent. Fear was in the air. Doors and windows needed to be bolted. Locks had to be made, and made well. A lock such as Mr. Marshall’s, pickable in 15 minutes by a skilled man, and by a desperate and hungry man maybe in 10, was clearly not good enough. Joseph Bramah decided he would design and make a better one. He did so in 1784, less than a year after picking the Marshall lock. His patent made it almost impossible for a burglar with a wax-covered key blank, the tool most favored by the criminals who could use it to work out the position of the various levers and tumblers inside a lock, to divine what was beyond the keyhole, inside the workings.

Henry Maudslay, then an employee of Bramah’s, solved Bramah’s supply problems in short order by creating machines to make the locks. He built a whole family of machine tools, in fact, that would each make, or help to make, the various parts of the fantastically complicated locks Joseph Bramah had designed. They could make the parts fast and well and cheaply, without the errors that handcrafting and hand tools inevitably cause. The machines that Maudslay made would, in other words, make the necessary parts with precision.

Metal pieces can be machined into a range of shapes and sizes and configurations, and provided that the settings of the leadscrew and the slide rest are the same for every procedure, and the lathe operator can record these positions and make certain they are the same, time after time, then every machined piece will be the same—will look the same, measure the same, weigh the same (if of the same density of metal) as every other. The pieces are all replicable. They are, crucially, interchangeable. If the machined pieces are to be the parts of a further machine—if they are gearwheels, say, or triggers, or handgrips, or barrels—then they will be interchangeable parts, the ultimate cornerstone components of modern manufacturing. Of equally fundamental importance, a lathe so abundantly equipped as Maudslay’s was also able to make that most essential component of the industrialized world, the screw.
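
A minimal sketch of the interchangeability idea (all dimensions invented for illustration): a part is accepted only if it falls inside the specified tolerance band, so any accepted part will mate with any other made to the same standard.

```python
# Go/no-go style check: accept a machined part only if it lies within tolerance.
def within_tolerance(measured, nominal, tolerance):
    return abs(measured - nominal) <= tolerance

nominal, tol = 25.400, 0.025                    # mm: a nominal pin diameter and band (assumed)
batch = [25.392, 25.410, 25.441, 25.399]        # measured diameters from one production run
print([within_tolerance(m, nominal, tol) for m in batch])   # [True, True, False, True]
```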

Screws were made to a standard of tolerance of one ten-thousandth of an inch.

A slide rest allowed for the making of myriad items, from door hinges to jet engines to cylinder blocks, pistons, and the deadly plutonium cores of atomic bombs.

Maudslay next created, in truly massive numbers, a vital component for British sailing ships. He built the wondrously complicated machines that would, for the next 150 years, make ships’ pulley blocks, the essential parts of a sailing ship’s rigging that helped give the Royal Navy its ability to travel, police, and, for a while, rule the world’s oceans. At the time, sails were large pieces of canvas suspended, supported, and controlled by way of endless miles of rigging, of stays and yards and shrouds and footropes, most of which had to pass through systems of tough wooden pulleys that were known simply to navy men as blocks—pulley blocks, known beyond the maritime world as block and tackle.

A large ship might have as many as 1400 pulley blocks of varying types and sizes depending on the task required. The lifting of a very heavy object such as an anchor might need an arrangement of six blocks, each with three sheaves, or pulleys, and with a rope passing through all six such that a single sailor might exert a pull of only a few easy pounds in order to lift an anchor weighing half a ton.
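
A rough sketch of the mechanical advantage involved (my own arithmetic, ignoring friction; the exact rigging behind the “few easy pounds” claim isn’t spelled out here): the pull needed falls in proportion to the number of rope parts supporting the load, and compounding one tackle onto another multiplies the advantage.

```python
# Ideal block-and-tackle arithmetic: pull = load / number_of_supporting_rope_parts.
def pull_needed(load_lb, rope_parts):
    return load_lb / rope_parts

anchor = 1000                          # lb, a "half ton" anchor
print(pull_needed(anchor, 6))          # one threefold purchase (two triple blocks): ~167 lb
print(pull_needed(anchor, 6 * 6))      # two such purchases compounded: ~28 lb
# Sheave friction raises these figures noticeably in practice.
```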

Blocks for use on a ship are traditionally exceptionally strong, having to endure years of pounding water, freezing winds, tropical humidity, searing doldrums heat, salt spray, heavy duties, and careless handling by brutish seamen. Back in sailing ship days, they were made principally of elm, with iron plates bolted onto their sides, iron hooks securely attached to their upper and lower ends, and with their sheaves, or pulleys, sandwiched between their cheeks, and around which ropes would be threaded. The sheaves themselves were often made of lignum vitae, the extremely dense wood of trees from South America.

What principally concerned the admirals was not so much the building of enough ships but the supply of the vital blocks that would allow the sailing ships to sail. The Admiralty needed 130,000 of them every year. The complexity of their construction meant that they could be fashioned only by hand, by scores of artisanal woodworkers in and around southern England who were notoriously unreliable.

The Block Mills still stand as testament to many things, most famously to the sheer perfection of each and every one of the hand-built iron machines housed inside. So well were they made—they were masterpieces, most modern engineers agree—that most were still working a century and a half later; the Royal Navy made its last pulley blocks in 1965.

The Block Mills were the first factory to be run entirely by steam engine. The next invention that mattered depended on flatness: a surface without curvature, indentation, or protuberance. It involved the creation of a base from which all precise measurement and manufacture could originate. For, as Maudslay realized, a machine tool can make an accurate machine only if the surface on which the tool is mounted is perfectly flat, perfectly plane, exactly level, its geometry entirely exact.

A bench micrometer would be able to measure the actual dimension of a physical object to make sure that the components of the machines they were constructing would all fit together, with exact tolerances, and be precise for each machine and accurate to the design standard.

The micrometer that performed all these measurements turned out to be extremely accurate and consistent: this invention of his could measure down to one one-thousandth of an inch and, according to some, maybe even one ten-thousandth of an inch: to a tolerance of 0.0001.

To any schoolchild today, Eli Whitney means just one thing: the cotton gin. To any informed engineer, he signifies something very different: confidence man, trickster, fraud, charlatan, a reputation that derives almost entirely from his association with the gun trade, with precision manufacturing, and with the promise of being able to deliver weapons assembled from interchangeable parts. When Whitney won the commission and signed a government contract to deliver such weapons in 1798, he knew nothing about muskets and even less about their components: he won the order largely because of his Yale connections and the old alumni network that, even then, flourished in the corridors of power in Washington, DC.

It was John Hall who succeeded in making precision guns. At every stage of the work, from the forging of the barrel to the rifling and final shaping of the parts, he put his 63 gauges to work, more than any engineer before him had used, to ensure as best he could that every part of every gun was exactly the same as every other—and that all were made to far stricter tolerances than hitherto: for a lock merely to work required a tolerance of maybe a fifth of a millimeter; to ensure that it not only worked but was infinitely interchangeable, he needed to have the pieces machined to a fiftieth of a millimeter.

Precision shoes were made by turning a shapeless block of wood into a foot-shaped form of specific dimensions, a process repeated time and time again. These shoemaker lasts were of exact sizes: seven inches long, nine, and so on. Before precisely sized shoes existed, shoes were offered up in barrels, and customers pulled them out at random, trying to find a pair that more or less fit.

Oliver Evans was making flour-milling machinery; Isaac Singer introduced precision into the manufacturing of sewing machines; Cyrus McCormick was creating reapers, mowers, and, later, combine harvesters; and Albert Pope was making bicycles for the masses.

Joseph Whitworth was an absolute champion of accuracy, an uncompromising devotee of precision, and the creator of a device, unprecedented at the time, that could truly measure to an unimaginable one-millionth of an inch.  Using his superb mechanical skills, in 1859 he created a micrometer that allowed for one complete turn of the micrometer wheel to advance the screw not by 1/20 of an inch, but by 1/4,000 of an inch, a truly tiny amount.

Whitworth then incised 250 divisions on the turning wheel’s circumference, which meant that by turning the wheel just one division, the operator of the machine could advance or retard the screw by one 250th of that 1/4,000 of an inch, which is one-millionth of an inch. Provided the ends of the item being measured were as plane as the plates on the micrometer, opening the gap by that 1/1,000,000 of an inch would make the difference between the item being held firmly and its falling under the influence of gravity.

Now metal pieces could be made and measured to a tolerance of one-millionth of an inch.
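
The arithmetic behind that millionth of an inch, using the figures given above:

```python
# One turn of Whitworth's wheel advances the screw 1/4,000 inch; the wheel rim
# carries 250 graduations, so one graduation is 1/1,000,000 inch.
advance_per_turn = 1 / 4000
divisions = 250
print(advance_per_turn / divisions)   # 1e-06 inch
```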

Until Whitworth, each screw and nut and bolt was unique to itself, and the chance that any one-tenth-inch screw, say, might fit any randomly chosen one-tenth-inch nut was slender at best.

With the Model T, Henry Ford changed everything. From the start, he was insistent that no metal filing ever be done in his motor-making factories, because all the parts, components, and pieces he used for the machine would come to him already precisely finished, and to tolerances of cruelly exacting standards such that each would fit exactly without the need for even the most delicate of further adjustment. Once that aspect of his manufacturing system was firmly established, he created a whole new means of assembling the bits and pieces into cars.  He demanded a standard of precision for his components that had seldom been either known or achieved before, and he now married this standard to a new system of manufacture seldom tried before.

The Model T had fewer than 100 parts. A modern car has more than 30,000.

Within Rolls-Royce, it may seem as though the worship of the precise was entirely central to the making of these enormously comfortable, stylish, swift, and comprehensively memorable cars. In fact, it was far more crucial to the making of the less costly, less complex, less remembered machines that poured from the Ford plants around the world. And for a simple reason: the production lines required a limitless supply of parts that were exactly interchangeable.

If one happened not to be so exact, and if an assembly-line worker tried to fit this inexact and imprecise component into a passing workpiece and it refused to fit and the worker tried to make it fit, and wrestled with it—then, just like Charlie Chaplin’s assembly-line worker in Modern Times or, less amusingly, one in Fritz Lang’s Metropolis, the line would slow and falter and eventually stop, and workers for yards around would find their work disrupted, and parts being fed into the system would create unwieldy piles, and the supply chain would clog, and the entire production would slow and falter and maybe even grind, quite literally, to a painful halt. Precision, in other words, is an absolute essential for keeping the unforgiving tyranny of a production line going.

Henry Ford had been helped in his aim of making it so by using one component (and then buying the firm that made it), a component whose creation, by a Swedish man of great modesty, turned out to be of profoundly lasting importance to the world of precision. The Swede was Carl Edvard Johansson, popularly and proudly known by every knowledgeable Swede today as the world’s Master of Measurement. He was the inventor of the set of precise pieces of perfectly flat, hardened steel known to this day as gauge blocks, slip gauges, or, to his honor and in his memory, as Johansson gauges, or quite simply, Jo blocks.

His idea was to create a set of gauge blocks that, if held together in combination, could in theory measure any needed dimension. He calculated that the minimum number of blocks needed was 103, made in certain carefully specified sizes. Arranged in three series, the blocks made it possible to take some 20,000 measurements in increments of one one-thousandth of a millimeter by laying two or more blocks together. His 103-piece combination gauge block set has since directly and indirectly taught engineers, foremen, and mechanics to treat tools with care, and at the same time given them familiarity with dimensions of thousandths and ten-thousandths of a millimeter.
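
A sketch of how such a set is used (the sizes below are one common composition of a metric 103-piece set, assumed for illustration; they are not listed here): a machinist wrings a handful of blocks together to reach almost any dimension to the nearest thousandth of a millimeter.

```python
# One commonly quoted composition of a 103-piece metric gauge block set (assumed).
set_103 = (
    [1.005] +                                           # single thousandths block
    [round(1.01 + 0.01 * i, 3) for i in range(49)] +    # 1.01 ... 1.49 mm in 0.01 steps
    [round(0.5 + 0.5 * i, 1) for i in range(49)] +      # 0.5 ... 24.5 mm in 0.5 steps
    [25.0, 50.0, 75.0, 100.0]                           # large blocks
)
print(len(set_103))                  # 103 blocks

# Building 36.715 mm from just four blocks wrung together:
stack = [1.005, 1.21, 9.5, 25.0]
print(round(sum(stack), 3))          # 36.715 mm
```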

Gauge blocks first came to the United States in 1908. Until then, Ford’s cars were precise only to themselves: every manufactured piece fit impeccably because it was interchangeable within Ford’s own system. But once an absolutely impeccably manufactured, gauge-block-confirmed piece from another company (a ball bearing from SKF, say) was introduced into the Ford system, its precision might trump Ford’s own, and Ford would be wrong—ever so slightly maybe, but wrong nonetheless.

Gauge blocks made after the Great War achieved accuracies of up to one-millionth of an inch.

Piston engines have hundreds of parts jerking to and fro, and they cannot be made much more powerful without becoming too complicated. Modern jet engines, by contrast, can produce more than 100,000 horsepower, yet essentially they have only a single moving part: a spindle, a rotor, which is induced to spin and, in doing so, causes many pieces of high-precision metal to spin with it.

All that ensures they work as well as they do are the rare and costly materials from which they are made, the protection of the integrity of the pieces machined from these materials, and the superfine tolerances to which every part of them is manufactured. Since any increase in piston-engine power, and thus aircraft speed, would lead to heavier engines, perhaps too heavy for an aircraft to carry, a new kind of engine was invented: the gas turbine. A crucial element in any combustion engine is air—air is drawn into the engine, mixed with fuel, and then burns or explodes. The thermal energy from that event is turned into kinetic energy, and the engine’s moving parts are powered. But the amount of air sucked into a piston engine is limited by the size of its cylinders. In a gas turbine, there is almost no limit: a gigantic fan at the opening of such an engine can swallow vastly more air than can be taken into a piston engine.

Gas turbines were already beginning to power ships, to generate electricity, to run factories. The simplicity of the basic idea was immensely attractive. Air was drawn in through a cavernous doorway at the front of the engine and immediately compressed, and made hot in the process, and was then mixed with fuel, and ignited. It was the resulting ferociously hot, tightly compressed, and controlled explosion that then drove the turbine, which spun its blades and then performed two functions. It used some of its power to drive the aforementioned compressor, which sucked in and squeezed the air, but it then had a very considerable fraction of its power left, and so was available to do other things, such as turn the propeller of a ship, or turn a generator of electricity, or turn the driving wheels of a railway locomotive (didn’t happen, too many problems), or provide the power for a thousand machines in a factory and keep them running, tirelessly.
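
A rough, idealized sketch of that power split (every number here is my assumption, not from the text): even in a textbook Brayton-cycle calculation, the compressor absorbs a large share of the turbine’s output, and what is left over is the useful work.

```python
# Idealized Brayton-cycle arithmetic for a gas turbine (all values assumed).
cp, gamma = 1005.0, 1.4        # J/(kg*K) and heat-capacity ratio for air
T1, ratio = 288.0, 40.0        # inlet air temperature (K) and overall pressure ratio
T3 = 1900.0                    # turbine entry temperature (K)

k = ratio ** ((gamma - 1) / gamma)
T2 = T1 * k                    # temperature after compression
T4 = T3 / k                    # temperature after expansion through the turbine

w_compressor = cp * (T2 - T1)  # work absorbed by the compressor, per kg of air
w_turbine = cp * (T3 - T4)     # work produced by the turbine, per kg of air
print(f"share of turbine work spent driving the compressor: {w_compressor / w_turbine:.0%}")
print(f"net work left over: {(w_turbine - w_compressor) / 1000:.0f} kJ per kg of air")
```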

Britain’s first jet plane flew in 1941, and it was not until 1944 that the public learned about it. Inside a jet engine, everything is a diabolic labyrinth, a maze of fans and pipes and rotors and discs and tubes and sensors and a Turk’s head of wires of such confusion that it doesn’t seem possible that any metal thing inside it could possibly even move without striking and cutting and dismembering all the other metal things that are crammed together in such dangerously interfering proximity. Yet work and move a jet engine most certainly does, with every bit of it impressively engineered to do so, time and again, and under the harshest and fiercest of working conditions.

There are scores of blades of various sizes in a modern jet engine, whirling this way and that and performing various tasks that help push the hundreds of tons of airplane up and through the sky. But the blades of the high-pressure turbines represent the singularly truest marvel of engineering achievement—and this is primarily because the blades themselves, rotating at incredible speeds and each one of them generating during its maximum operation as much power as a Formula One racing car, operate in a stream of gases that are far hotter than the melting point of the metal from which the blades were made. What stopped these blades from melting?

It turns out to be possible to cool the blades by drilling hundreds of tiny holes in each blade, and by making inside each blade a network of tiny cooling tunnels, all of them manufactured at a size and to such minuscule tolerances as were quite unthinkable only a few years ago.

The first blades that Frank Whittle made were of steel, which somewhat limited the performance of his early prototypes, since steel loses its structural integrity at temperatures higher than about 500 degrees Celsius. But alloys were soon found that made matters much easier, after which blades were constructed from these new metal compounds. They did not run the risk of melting, because the temperatures at which they operated were on the order of a thousand degrees, and the special nickel-and-chromium alloy from which they were made, known as Nimonic, remained solid and secure and stiff up to 1,400 degrees Celsius (2,550 degrees Fahrenheit).

The next generation of engines required that the gas mixture roaring out from the combustion chamber be heated to around 1,600 degrees Celsius, yet even the finest of the alloys then in use melted at around 1,455 degrees Celsius. The metals tended to lose their strength and become soft and vulnerable to all kinds of shape changes and expansions at even lower temperatures. In fact, extended thermal pummeling of the blades at anything above 1,300 degrees Celsius was regarded by early researchers as just too difficult and risky.
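
The arithmetic, using the figures above, that forced the cooling-hole solution:

```python
# Gap between what the gas stream demands and what the blade metal can stand.
gas_temp = 1600     # degrees C, required combustion-gas temperature
alloy_melt = 1455   # degrees C, melting point of the best alloys then available
alloy_safe = 1300   # degrees C, roughly the limit for extended exposure
print(gas_temp - alloy_melt)   # 145 C hotter than the metal's melting point
print(gas_temp - alloy_safe)   # 300 C beyond its practical working limit
```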

Most of that air bypasses the engine (for reasons that are beyond the scope of this chapter), but a substantial portion of it is sent through a witheringly complex maze of blades, some whirling, some bolted and static, that make up the front and relatively cool end of a jet engine and that compress the air by as much as 50 times. The one ton of air taken each second by the fan, and which would in normal circumstances entirely fill the space equivalent of a squash court, is squeezed to a point where it could fit into a decent-size suitcase. It is dense, and it is hot, and it is ready for high drama. For very nearly all this compressed air is directed straight into the combustion chamber, where it mixes with sprayed kerosene, is ignited by an array of electronic matches, as it were, and explodes directly into the whirling wheel of turbine blades. These blades (more than ninety of them in a modern jet engine, and attached to the outer edge of a disc rotating at great speed) are the first port of call for the air before it passes through the rest of the turbine and, joining the bypassed cool air from the fan, gushes wildly out of the rear of the engine and pushes the plane forward. “Nearly all” is the key. Some of this cool air, the Rolls-Royce engineers realized, could actually be diverted before it reached the combustion chamber, and could be fed into tubes in the disc onto which the blades were bolted. From there it could be directed into a branching network of channels or tunnels that had been machined into the interior of the blade itself. And now the blade was filled with cool air—cool only by comparison: the simple act of compressing it had made it quite hot, about 650 degrees Celsius, but still cooler by a thousand degrees than the post–combustion chamber fuel-air mixture. To make use of this cool air, scores of unimaginably tiny holes were then drilled into the blade surface, drilled with great precision and delicacy and in configurations that had been dictated by the computers, and drilled down through the blade alloy until each one of them reached just into the cool-air-filled tunnels—thus immediately allowing the cool air within to escape or seep or flow or thrust outward, and onto the gleaming hot surface of the blade.

It is here that the awesome computational power that has been available since the late 1960s comes into its own, becomes so crucially useful. Aside from the complex geometry of the hundreds of tiny pinholes, there is the fact that the blades are grown from, incredibly, a single crystal of metallic nickel alloy. This makes them extremely strong—which they need to be, as in their high-temperature whirlings, they are subjected to centrifugal forces equivalent to the weight of a double-decker London bus. Very basically, the molten metal (an alloy of nickel, aluminum, chromium, tantalum, titanium, and five other rare-earth elements that Rolls-Royce coyly refuses to discuss) is poured into a mold that has at its base a little, curiously twisted, three-turned tube, which resembles nothing so much as the tail of a pig; the metal that solidifies its way through this constriction ends up with all its molecules lined up evenly.

It has become a single crystal of metal, and thus, its eventual resistance to all the physical problems that normally plague metal pieces like this is mightily enhanced. It is very much stronger—which it needs to be, considering the enormous centrifugal forces.

Electrical discharge machining, or EDM, as it is more generally known, employs just a wire and a spark, both of them tiny, the whole process directed by computer and inspected by humans, using powerful microscopes, as it is happening.  The more complex the engines, the more holes need to be drilled into the various surfaces of a single blade: in a Trent XWB engine, there are some 600, arranged in bewildering geometries to ensure that the blade remains stiff, solid, and as cool as possible. Their integrity owes much to the geometry of the cooling holes that are being drilled, which is measured and computed and checked by skilled human beings. No tolerance whatsoever can be accorded to any errors that might creep into the manufacturing process, for a failure in this part of a jet engine can turn into a swiftly accelerating disaster.

As the tolerances shrink still further, to limits that even the most well-honed human skills cannot match, automation has to take over. The Advanced Blade Casting Facility can perform all these tasks (from the injection of the losable wax to the growing of single-crystal alloys to the drilling of the cooling holes) with the employment of no more than a handful of skilled men and women. It can turn out 100,000 blades a year, all free of errors.

But failure was still possible. The fate of passengers depended on the performance of one tiny metal pipe no more than five centimeters long and three-quarters of a centimeter in diameter, into which someone at a factory in the northern English Midlands had bored a tiny hole, but had mistakenly bored it fractionally out of true. The engine part in question is called an oil feed stub pipe, and though there are many small steel tubes wandering snakelike through any engine, this particular one, a slightly wider stub at the end of a longer but narrower pipe, was positioned in the red-hot air chamber between the high- and intermediate-pressure turbine discs. It was designed to send oil down to the bearings on the rotor that carried the fast-spinning disc. It was machined improperly because the drill bit that did the work was misaligned, with the result that along one small portion of its circumference, the tube was about half a millimeter too thin.

Metal fatigue is what caused the engine to fail. The aircraft had spent 8,500 hours aloft, and had performed 1,800 takeoff and landing cycles. It is these last that punish the mechanical parts of a plane: the landing gear, the flaps, the brakes, and the internal components of the jet engines. For, every time there is a truly fast or steep takeoff, or every time there is a hard landing, these parts are put under stress that is momentarily greater than the running stresses of temperature and pressure for which the innards of a jet engine are notorious.

Heisenberg, in helping in the 1920s to father the concepts of quantum mechanics, made discoveries and presented calculations that first suggested this might be true: that in dealing with the tiniest of particles, the tiniest of tolerances, the normal rules of precise measurement simply cease to apply. At near- and subatomic levels, solidity becomes merely a chimera; matter comes packaged as either waves or particles that are by themselves both indistinguishable and immeasurable and, even to the greatest talents, only vaguely comprehensible.

In making the smallest parts for today’s great jet engines, we are reaching down nowhere near the limits that so exercise the minds of quantum mechanicians. Yet we have reached a point in the story where we begin to notice our own possible limitations and, by extension and extrapolation, also the possible end point of our search for perfection.

An overlooked measurement error on the mirror amounting to one-fiftieth the thickness of a human hair managed to render most of the images beamed down from Hubble fuzzy and almost wholly useless.
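
The arithmetic behind that comparison, assuming a typical hair thickness of about 100 microns (my assumption; only the one-fiftieth figure is given above):

```python
# Size of the Hubble mirror's figuring error, relative to a human hair.
hair = 100e-6           # m, typical human hair thickness (assumed)
error = hair / 50
print(f"{error * 1e6:.0f} micrometres")   # ~2 micrometres of mis-figuring
```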

Chapter 9 (TOLERANCE: 0.000 000 000 000 000 000 000 000 000 000 000 01, a figure with 35 decimal places)

Here we come to the culmination of precision’s quarter-millennium evolutionary journey. Up until this moment, almost all the devices and creations that required a degree of precision in their making had been made of metal, and performed their various functions through physical movements of one kind or another. Pistons rose and fell; locks opened and closed; rifles fired; sewing machines secured pieces of fabric and created hems and selvedges; bicycles wobbled along lanes; cars ran along highways; ball bearings spun and whirled; trains snorted out of tunnels; aircraft flew through the skies; telescopes deployed; clocks ticked or hummed, and their hands moved ever forward, never back, one precise second at a time. Then came the computer, into an immobile and silent universe, one where electrons and protons and neutrons have replaced iron and oil and bearings and lubricants and trunnions and the paradigm-altering idea of interchangeable parts.

Precision had by now reached a degree of exactitude that would be of relevance and use only at the near-atomic level.

Fab 42 is one of Intel’s factories for the manufacture of electronic microprocessor chips, the operating brains of almost all the world’s computers. The enormous ASML devices inside allow the firm to manufacture these chips, and to place transistors on them in huge numbers and to an almost unreal level of precision and minuteness of scale that today’s computer industry, pressing for ever-speedier and more powerful computers, endlessly demands.

Gordon Moore, one of the founders of Intel, is most probably the man to blame for this trend toward ultraprecision in the electronics world. He made an immense fortune by devising the means to make ever-smaller and smaller transistors and to cram millions, then billions of them onto a single microprocessing chip. There are now more transistors at work on this planet (some 15 quintillion, or 15,000,000,000,000,000,000) than there are leaves on all the trees in the world. In 2015, the four major chip-making firms were making 14 trillion transistors every single second. Also, the sizes of the individual transistors are well down into the atomic level.
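
Moore’s observation is usually stated as a doubling of on-chip transistor counts roughly every two years. A toy sketch (the 1971 Intel 4004 reference point is my own addition, used only to anchor the curve):

```python
# Transistor count under a simple "doubling every two years" rule.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    # The Intel 4004 of 1971 held about 2,300 transistors, used here as a starting point.
    return base_count * 2 ** ((year - base_year) / doubling_years)

for y in (1971, 1991, 2011, 2016):
    print(y, f"{transistors(y):,.0f}")
```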

When the Broadwell family of chips was created in 2016, node size was down to a previously inconceivably tiny fourteen-billionths of a meter (the size of the smallest of viruses), and each chip contained no fewer than seven billion transistors. The Skylake chips made by Intel at the time of this writing have transistors that are sixty times smaller than the wavelength of light used by human eyes, and so are literally invisible.

It takes three months to complete a microprocessing chip, starting with the growing of a 400-pound, very fragile, cylindrical boule of pure smelted silicon, which fine-wire saws will cut into dinner plate–size wafers, each an exact two-thirds of a millimeter thick. Chemicals and polishing machines will then smooth the upper surface of each wafer to a mirror finish, after which the polished discs are loaded into ASML machines for the long and tedious process toward becoming operational computer chips. Each wafer will eventually be cut along the lines of a grid that will extract a thousand chip dice from it—and each single die, an exactly cut fragment of the wafer, will eventually hold the billions of transistors that form the non-beating heart of every computer, cellphone, video game, navigation system, and calculator on modern Earth, and every satellite and space vehicle above and beyond it. What happens to the wafers before the chips are cut out of them demands an almost unimaginable degree of miniaturization. Patterns of newly designed transistor arrays are drawn with immense care onto transparent fused silica masks, and then lasers are fired through these masks and the beams directed through arrays of lenses or bounced off long reaches of mirrors, eventually to imprint a highly shrunken version of the patterns onto an exact spot on the gridded wafer, so that the pattern is reproduced, in tiny exactitude, time and time again. After the first pass by the laser light, the wafer is removed, is carefully washed and dried, and then is brought back to the machine, whence the process of having another submicroscopic pattern imprinted on it by a laser is repeated, and then again and again, until thirty, forty, as many as sixty infinitesimally thin layers of patterns (each layer and each tiny piece of each layer a complex array of electronic circuitry) are engraved, one on top of the other.

Rooms within the ASML facility in Holland are very much cleaner than an ordinary clean room. They are clean to the far more brutally restrictive demands of ISO number 1, which permits only 10 particles of just one-tenth of a micron per cubic meter, and no particles of any size larger than that. A human being existing in a normal environment swims in a miasma of air and vapor that is five million times less clean.

The test masses on the LIGO devices in Washington State and Louisiana are so exact in their making that the light reflected by them can be measured to one ten-thousandth of the diameter of a proton.

Consider Alpha Centauri A, which lies 4.3 light-years away. That distance is about 26 trillion miles, or, in full, 26,000,000,000,000 miles. It is now known with absolute certainty that the cylindrical masses on LIGO can help to measure that vast distance to within the width of a single human hair.
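
A hedged check of that claim (LIGO’s commonly quoted strain sensitivity of about one part in 10^21 is my assumption; no figure is given above):

```python
# Fractional precision of 1e-21 applied to the distance to Alpha Centauri A.
light_year = 9.461e15              # metres
distance = 4.3 * light_year        # metres to Alpha Centauri A
strain = 1e-21                     # smallest fractional length change detectable (assumed)
print(f"{distance * strain * 1e6:.0f} micrometres")   # ~40 um, about a hair's width
```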

 


Rationing. Book review of “Any Way You Slice It” by Stan Cox

Preface. I can’t imagine that there’s a better book on rationing out there, but of course I can’t be sure; after reading this one I don’t feel the need to look for others on the topic. As usual, I had to leave quite a bit out of this review, skipping medical care rationing entirely, among many other topics. Nor did I capture the myriad ways rationing can go wrong, so if you ever find yourself trying to implement a rationing system, or advocating for one, you’ll wish you’d bought this book. I can guarantee you the time is coming when rationing will be needed; in fact, it’s already here with covid-19. I’ve seen food lines over a mile long.

As energy declines, food prices will go up, and at some point gasoline, food, electricity, and heating ought all to be rationed.

Though this might not happen in the U.S., where the most extreme and brutal form of capitalism exists. The U.S. is the richest nation that has ever existed, yet its distribution of wealth is among the most unfair on the planet. When the need to ration strikes, economists will argue against it, I’m sure, saying there’ll be too much cheating and it will be too hard to implement. Capitalism hates price controls. That’s why “publicly raising the question of curbing growth or uttering the dread word ‘rationing’ in the midst of a profit-driven economy has been compared to shouting an obscenity in church.”

Republicans constantly want to cut back the Affordable Care Act and the food stamp program SNAP. Companies keep their workforces as small as possible and shift jobs and factories overseas to nations with lower wages and fewer regulations. They fight hard to restrict the rights of organized labor. All this has resulted in higher productivity, but the rewards go to shareholders and executives, not employees.

So, I wouldn’t count on rationing when times get hard – hell, that’s already apparent with covid-19 aid. The Trump administration and Republicans were happy to hand out a $2 trillion tax cut to the already rich, but when it came to covid-19 relief so people wouldn’t be evicted from their homes and could afford to buy food, they gave out money just once, and as I write this in mid-October 2020, Republicans won’t compromise with the Democrats to give out any more relief money. Even if Biden is elected, the economy can’t recover until a vaccine is invented and given to everyone. By then the economy may be so broken it will be hard to fix. And since peak oil has already happened, we can’t recover; growth is at an end. Soon the “Long Emergency” Kunstler wrote about begins.

Let’s hope I’m wrong and that Homeland Security or some other government agency already has emergency rationing plans in place. I’ve seen the emergency plans of Denver, Chicago, and other cities. They are usually very high level, and cover who should call whom, lists of nursing homes to evacuate, and the like. But there’s no actual stockpile of food or blankets, and no rationing plans. When I spoke to someone in California’s emergency planning unit, I was told this won’t happen because it would be too costly a bureaucracy to set up, and any perceived maldistribution would undo the political fortunes of the party in power.

So you’d better plan to grow as much of your own food as possible during energy decline; the level of inequality and selfishness in the United States is truly striking. There may be rationing in some localities. Try to find a good community (by reading the posts here), gain skills, and help others out whenever you have a chance, to create a bubble of mutual aid and kindness in this cold, cruel capitalistic world.

Related

2021 China Panic-Hoards Half Of World’s Grain Supply Amid Threats Of Collapse. Beijing has managed to stockpile more than half of the world’s maize and other grains, a buildup that has contributed to rapid food inflation and triggered famine in some countries. China has approximately 69% of the globe’s maize reserves in the first half of the crop year 2022, 60% of its rice, and 51% of its wheat. The one thing Beijing cannot have is discontent among its citizens triggered by food shortages or soaring prices; that’s why central planners spent $98.1 billion importing food in 2020, up 4.6 times from a decade earlier, according to the General Administration of Customs of China. In the first eight months of this year, China imported more food than in 2016. China’s acquisition of the world’s food supply has helped push food prices to decade highs. The U.N. Food and Agriculture Organization estimates that its food price index is currently at a ten-year high.

Alice Friedemann  www.energyskeptic.com  Author of Life After Fossil Fuels: A Reality Check on Alternative Energy; When Trucks Stop Running: Energy and the Future of Transportation”, Barriers to Making Algal Biofuels, & “Crunch! Whole Grain Artisan Chips and Crackers”.  Women in ecology  Podcasts: WGBH, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 &278, Peak Prosperity,  Index of best energyskeptic posts

***

Stan Cox. 2013. Any Way You Slice It: The Past, Present, and Future of Rationing. The New Press.

In 2001, when energy trading companies, led by Enron Corporation, created shortages in California’s recently deregulated power industry, they caused wholesale electricity prices to jump by as much as 800%, with economic turmoil and suffering the result. The loss to the state was estimated at more than $40 billion. That same year, Brazil had a nationwide electricity shortfall of 10%, which was proportionally larger than the shortage in California. But the Brazilian government avoided inflation and blackouts simply by capping prices and limiting all customers, residential and commercial, to 10% lower consumption than that of the previous year, with severe penalties for exceeding the limit. No significant suffering resulted. The California crisis is viewed as one of America’s worst energy disasters, but, says Farley, “No one even remembers a ‘crisis’ in Brazil in 2001.”

In Zanzibar, for example, resort hotels and guesthouses sited in three coastal villages consume 180 to 850 gallons of water per room per day (with the more luxurious hotels consuming the most), while local households have access to only 25 gallons per day for all purposes.

The mechanisms for non-price rationing are many and varied. The more familiar include rationing by queuing, as at the gas pump in the 1970s; by time, as with day-of-week lawn sprinkling during droughts; by lottery, as with immigration visas and some clinical trials of scarce drugs; by triage, as in battlefield or emergency medicine; by straight quantity, as governments did with gasoline, tires, and shoes during World War II; or by keeping score with a nonmonetary device such as carbon emissions or the points that were assigned to meats and canned goods in wartime.

If we allow the future to be created by veiled corporate planning, the fairly predictable consequence will be resource conflicts between the haves and have-nots—or rather, among the haves, the hads, and the never-hads.

It’s quite possible (indeed very common, I would guess) to be simultaneously concerned about the fate of the Earth and worried that the necessary degree of restraint just isn’t achievable. We’ve been painted into a corner by an economy that has a bottomless supply of paint. Overproduction, the chronic ailment of any mature capitalist economy, creates the need for a culture whose consumption is geared accordingly.

Whenever there’s a ceiling on overall availability of goods, no one is happy. And when a consumer unlucky enough to be caught in such a situation is confronted with explicit rationing—a policy that she experiences as the day-to-day face of that scarcity—it’s no wonder that rationing becomes a dirty word. That has always been true, but an economy that is as deeply dependent on consumer spending as ours would view explicit rationing as a doubly dirty proposition. In America, freedom of consumption has become essential to realizing many of our more fundamental rights—freedom of movement, freedom of association, ability to communicate, satisfactory employment, good health care, even the ability to choose what to eat and drink—and no policy that compromises those rights by limiting access to resources is going to be at all welcome.

No patriotic American can or will ask men to risk their lives to preserve motoring-as-usual. —Secretary of the Interior Harold Ickes explaining the U.S. government’s gasoline rationing plan, April 23, 1942

Carter was neither the first nor the last leader to use martial language when urging conservation and sacrifice. According to the environmental scholar Maurie Cohen, “Experience suggests that the use of militaristic representations can be an effective device with which to convey seriousness of purpose, to marshal financial resources, to disable opponents, and to mobilize diverse constituencies behind a common banner. Martial language can also communicate a political message that success may take time and that public sacrifice may be required as part of the struggle.”

As World War I ground on into its third year in the summer of 1917, U.S. exports of wheat and other foods were all that stood between Europe’s battle-weary populations and mass hunger. America’s annual food exports rose from 7 to 19 million tons during the war. As a result, the farms of the time, which were far less productive than those of today, were hard-pressed to satisfy domestic demand. By August 1917, with the United States four months into its involvement in the war, Congress passed the Lever Act, creating the United States Food Administration and the Federal Fuel Administration and giving them broad control of production and prices. Commodity dealers were required to obtain licenses from Food Administrator Herbert Hoover, and he had the power to revoke licenses, shut down or take over firms, and seize commodities for use by the military. In September, the Toledo News-Bee announced that the “entire world may be put on rations soon” with Hoover acting as “food dictator of the world.” But as it turned out, Hoover wasn’t much of a dictator. According to the historian Helen Zoe Veit, restrictions consisted mostly of jawboning, as “food administrators simultaneously exalted voluntarism while threatening to impose compulsory rations should these weak, ‘democratic’ means prove insufficient”; however, “many Americans wrote to the Food Administration to say that they believed that compulsion actually inspired cheerful willingness, whereas voluntarism got largely apathetic results.” There were in fact a few mandatory restrictions. Hoarding of all kinds of products was prohibited, and violators could be punished with fines or even imprisonment. “Fair-price lists” ran in local newspapers and retailers were expected to adhere to them. But controls on prices of wheat and sugar were not backed up with regulation of demand. That led to scarcities of both commodities, as consumers who could afford to buy excessive quantities often did so.

Meanwhile, the Fuel Administration had to deal with shortages of coal, which at that time was the nation’s most important source of energy for heating, transportation, and manufacturing. Heavy heating demand in the frigid winter of 1917–18 converged with higher-than-normal use of the railway system (largely for troop movements) to precipitate what has been called the nation’s first energy crisis. The administration resorted to a wide range of stratagems to conserve coal, including factory shutdowns, division of the country into “coal zones” with no trade between zones, and a total cutoff of supplies to country clubs and yacht owners. The administration announced that Americans would be allowed to buy only as much coal as was needed to keep their houses at 68 degrees F. in winter.

The need to conserve petroleum led to a campaign to stop the pastime of Sunday driving.

The campaign against Sunday driving was carried out enthusiastically, perhaps overly so, by self-appointed volunteer citizens. Fuel Administrator Harry Garfield complained that the volunteers had become “tyrannous,” punishing violators in ways that “would have staggered the imagination of Americans twelve months earlier.”

Although food shortages persisted despite the drive for voluntary moderation, rationing remained off the table. Veit explained how the U.S. government’s insistence on voluntarism was an effort to draw a contrast between democratic America and autocratic, “overrationed” Germany. Rationing, the argument went, had undermined German morale while the United States was managing to rescue Europe and feed its own population “precisely because it never forced Americans to sacrifice, but instead inspired them to do so willingly.” (Hoover’s Home Conservation director Ray Wilbur asserted that, before the war, “we were a soft people,” and that voluntary sacrifice had strengthened the nation.) But in World War I America, price controls acting alone did not prevent shortages, unfair distribution, and deprivation. From that experience, the economic historian Hugh Rockoff concluded that “with prices fixed, the government must substitute some form of rationing or other means of reducing demand” because “appeals to voluntary cooperation, even when backed by patriotism, are of limited value in solving this problem.” The reluctance to use rationing was tied to views on democracy. According to Veit, the most powerful men in Washington, including Hoover and President Woodrow Wilson, viewed democracy as “synonymous with individual freedom,” while another view of democracy that was widely held at the time required “equality of burden.” Under the second definition, “rationing was inherently more democratic as it prevented one group (the patriotic) from bearing the double-burden of compensating for another (the shirkers).”

In practice, Hoover’s Food Administration valued free-market economics more highly than either personal freedom or fairness. Official language was always of voluntary sacrifice, but there’s more than one way to rope in a “volunteer.” Ad hoc committees in schools, workplaces, churches, and communities kept track of who did and didn’t sign Hoover’s food-conservation pledge or display government posters in their kitchen windows. In urging women to sign and comply with the Hoover pledge, door-to-door canvassers laid on the hard sell, often with the implication of an “or else” if the pledge was refused. Statements from government officials to the effect of “we know who you are” and explicit branding of nonsigners as “traitors” were highly effective recruiting techniques. But millions of poor and often hungry Americans had no excess consumption to give up. A Missouri woman told Hoover canvassers that yes, she would accept a pledge card so that she could “wipe her butt with it,” because she “wasn’t going to feed rich people.” As Veit put it, “the choice to live more ascetically was a luxury, and the notion of righteous food conservation struck those who couldn’t afford it as a cruel joke.”

With pressure building, the U.S. government probably would have resorted to rationing had World War I continued through 1919. The major European combatants, whose ordeal had been longer and tougher, did have civilian rationing, and the practice reappeared across Europe with the return of war in 1939.

When World War II broke out in Europe, the United States once again mounted a campaign to export food and war materials to its allies. Soon after America entered the war, the first items to require rationing were tires and gasoline. Those moves can be explained, in Rockoff’s words, by the “siege motive,” the result of absolute scarcity imposed by an external cutoff of supply. The rubber and tire industries were indeed under siege, with supplies from the Pacific having been suddenly cut off. The processes for making synthetic rubber were known, but there had not been time to build sufficient manufacturing capacity. The government’s first move was to buy time by calling a halt to all tire sales. With military rubber requirements approaching the level of the economy’s entire prewar output, Leon Henderson, head of the Office of Price Administration (OPA), urged Americans to reduce their driving voluntarily to save rubber. But, unwilling to rely solely on drivers’ cooperation, the government got creative and decided to ration gasoline as an indirect means of reducing tire wear. The need for gas rationing had already arisen independently in the eastern states. At the outbreak of the war, the United States was supplying most of its own oil needs. With much of the production located in the south-central states, tankers transported petroleum from ports on the Gulf Coast to the population centers of the East Coast. But in the summer of 1941, oil tankers began to be diverted from domestic to transatlantic trade in support of the war effort, and all shipping routes became highly vulnerable to attack by German submarines. With supplies strictly limited, authorities issued ration coupons that all drivers purchasing gasoline were required to submit, and also banned nonessential driving in many areas.

Police were asked to stop and question motorists if they suspected a violation and “check on motorists found at race tracks, amusement parks, beaches, and other places where their presence is prima facie evidence of a violation.” Drivers also were required to show that they had arranged to carry two or more passengers whenever possible. Energy consumption was further curtailed by restrictions on the manufacture of durable goods, including cars. At one point, passenger-car production was shut down altogether. That, according to Rockoff, was in a sense “the fairest form of rationing. Each consumer got an exactly equal share: nothing.”

LIMITED RATIONING HAD LIMITED SUCCESS

It became clear early on that rationing of food and other goods would become necessary as well. The OPA announced that “sad experience has proven the inadequacy of voluntary rationing. . . . Although none would be happier than we if mere statements of intent and hortatory efforts were sufficient to check overbuying of scarce commodities, we are firmly convinced that voluntary programs will not work.” With some exceptions, such as coffee and bananas, the trigger for rationing foodstuffs was not the siege motive. The United States was producing ample harvests and continued to do so throughout the war, but the military buildup of 1942 included a commitment to supply each soldier and sailor in the rapidly expanding armed services with as much as four thousand calories per day. Those hefty war rations, along with exports of large tonnages of grain to Britain and other allies, pulled vast quantities of food out of the domestic economy. Without price controls, inflation would have ripped through America’s food system and the economy, and the price controls could not have held without rationing.

The first year of America’s involvement in the war, there was only loose coordination among agencies responsible for production controls, price controls, and consumer rationing, and as a result the government was unable to either keep prices down or meet demand for necessities. In late 1941 and early 1942, polls showed strong public demand for broader price controls. Across-the-board controls were imposed in April 1942. But over the next year, prices still rose at a 7.6 percent annual rate, so in early 1943 comprehensive rationing of foods and other goods was announced. In April, Roosevelt issued a strict “Hold-the-Line Order” that allowed no further price increases for most goods and services. Only that sweeping proclamation, backed up as it was by a comprehensive rationing system, was able to keep inflation in check and achieve fair distribution of civilian goods. In late 1943, the OPA was getting very low marks in polls—not because of opposition to rationing or price controls, but because people were complaining that they needed even broader and stricter enforcement. It’s important to note that OPA actions were often motivated as much by wariness of political unrest as by a concern for fairness. Amy Bentley, a historian, explains that the experience of the Great Depression was fresh in the minds of government officials, and they felt that, with the war having re-imposed nationwide scarcity, ensuring equitable sharing of basic needs was essential if a new wave of upheaval and labor radicalization was to be avoided. In publicity materials, the OPA stressed the positive, buoyed by comments from citizen surveys, such as the view of one woman that “rationing is good democracy.”

Consumer rationing by quantity took two general forms: (1) straight rationing (also referred to at various times as “specific” or “unit” rationing), which specified quantities of certain goods (tires, gas, shoes, some durable goods) that could be bought during a specified time period at a designated price; and (2) points rationing, in which officials assigned point values to each individual product (say, green beans or T-bone steak) within each class of commodity (canned vegetables or meats). Each household was allocated a certain number of points that could be spent during the specified period. Price ceilings were eventually placed on 80 percent of foodstuffs, and ceilings were adjusted for cost of living city by city. Determining which goods to ration and what constituted a “fair share” required a major data-collection effort. The OPA drew information from a panel of 2,500 women who kept and submitted household food diaries. The general rules and mechanics of wartime rationing, while cumbersome, were at least straightforward. Ration stamps were handled much like currency, except that they had fixed expiration dates. Businesses were required to collect the proper value in stamps with each purchase so that they could pass them up the line to wholesalers and replenish inventories. Many retailers had ration bank accounts from which they could write ration checks when purchasing inventory; that spared them the inconvenience of handling bulky quantities of stamps and avoided the risk of loss or theft. Although stamps expired at the end of the month for consumers, they were valid for exchange by retailers and wholesalers for some time afterward. Therefore, the OPA urged that households destroy all expired ration stamps, warning that pools of excess stamps could “breed black markets.” The link between the physical stamp and the consumer was tightly controlled. Only a member of the family owning a ration book could use the stamps, and stamps had to be torn from the book by the retailer, not the customer. Stamps for butter had to be given to the milkman in person at time of delivery; they were not to be left with a note.
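
A toy sketch of the points mechanism (the point values and the monthly allowance below are invented for illustration; the real OPA tables differed and changed from month to month): every item in a rationed class carries a point price in addition to its money price, and a household can spend only the points it has been issued for that period.

```python
# Simplified points-rationing ledger for one household.
POINT_VALUES = {"canned green beans": 4, "T-bone steak": 12, "cheddar cheese": 8}

class RationBook:
    def __init__(self, points_per_month):
        self.points = points_per_month

    def buy(self, item, quantity=1):
        cost = POINT_VALUES[item] * quantity
        if cost > self.points:
            raise ValueError("not enough ration points left this month")
        self.points -= cost
        return self.points

household = RationBook(points_per_month=48)
print(household.buy("T-bone steak"))           # 36 points remaining
print(household.buy("canned green beans", 3))  # 24 points remaining
# Unspent points expire at the end of the period rather than rolling over.
```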

When consumption of some products is restricted by rationing, people spend the saved money on nonrationed products, driving up their prices. Therefore, Britain’s initial, limited program covering bacon and butter did little to protect the wider economy. Families were plagued by inflation, as well as by shortages and unfair distribution of still-uncontrolled goods; demand swelled for “all-around rationing.”32 Restrictions on sugar and meat began early in 1940, in order to keep prices down, ensure fairness, and reduce dependence on imports. Tea, margarine, and cooking fats were included at midyear. As food scarcity took hold, worsening in the winter of 1940–41, Britons demanded that rationing be extended to a wider range of products to remedy growing inequalities in distribution. They got what they asked for.

The quantities allowed per person varied during the course of the rationing period but were never especially generous: typical weekly amounts were four to eight ounces of bacon plus ham, eight to sixteen of sugar, two to four of tea, one to three of cheese, six of butter, four to eight of jam, and one to three of cooking oil. Allowances were made. Pregnant women and children received extra shares of milk and of foods high in vitamins and minerals, while farmworkers and others who did not have access to workplace canteens at lunchtime received extra cheese rations. Quantities were adjusted for vegetarians. In its mechanics, the system differed from America’s in that each household was required to register with one—and only one—neighborhood shop, which would supply the entire core group of rationed foods. As the war continued, it became clear that this exclusive consumer-retailer tie was unpopular, so the government introduced a point-rationing plan in December 1941, permitting consumers to use points at any shop they chose.33 In both the UK and America, most of the day-to-day management of the rationing systems was, necessarily, handled at the local level. Administration of the system was decentralized. According to Bentley, “The 5,500 local OPA boards scattered across the country, run by over 63,000 paid employees and 275,000 volunteers, possessed a significant amount of autonomy, enabling them to base decisions on local considerations. The real strength of the OPA, then, lay less in the federal office than in its local boards.” In large cities from Baltimore to San Francisco, a “block leader plan” was instituted to help families deal with scarcity.

The block leader, always a woman, would be responsible for discussing nutritional information and sometimes rationing procedures and scrap drives with all residents of her city block. The Home Front Pledge (“I will pay no more than the top legal prices—I will accept no rationed goods without giving up ration points”), administered to citizens by the millions, was backed by clear-cut rules and was legally enforceable, so it was taken much more seriously than the Hoover Pledge of 1917–18. In Britain’s system, the Ministry of Food oversaw up to nineteen Divisional Food Offices, and below them more than 1,500 Food Control Committees, each including ten to twelve consumers, five local retailers, and one shop worker, which dealt with the public through local Food Offices.

“FAIR SHARES FOR ALL” ARE ESSENTIAL

Other Allied nations, as well as Germany and the other Axis powers, also imposed strict rationing. In the countries they occupied, the Nazis enforced extremely harsh forms of rationing among local populations in order to provide more plentiful resources to German troops and civilians. A 1946 report by two Netherlands government officials, poignant in its matter-of-factness, shows in meticulous detail through numerous graphs and descriptions how the calorie consumption and health status of that country’s population suffered and how many lives were lost under such strict rationing. Average adult consumption dropped as low as 1,400 calories per day during 1944. Meager as it was, that was an average; because of restrictions on food distribution, many people, especially in the western part of the country, received much less food and starved. By that stage of the war, according to the authors, “People were forced more and more to leave the towns in search of food in the production areas. Many of them, however, did not live through these food expeditions.”

The OPA’s job was made easier, notes Bentley, by the fact that “most Americans understood that their wartime difficulties were minor compared with the hardships in war-torn countries.” Soon after the initiation of food rationing, the Office of War Information estimated that, conservatively, “civilians will have about 3 percent more food than in the pre-war years but about 6 percent less than in 1942. There will be little fancy food; but there will be enough if it is fairly shared and conserved. Food waste will be intolerable.” Total availability of coffee, canned vegetables, meat, cheese, and canned milk was often as high as before the war. Those items were rationed not because they were especially scarce but in order to hold down demand that otherwise would have ballooned under the price controls that were in effect. There was, for instance, an explosion of demand for milk in the early 1940s, when prices were fixed, but the dairy industry blocked attempts to initiate rationing. Consumption shot up, and severe shortages developed in pockets all over the country. Everything but rationing was attempted: relaxing health-quality standards, prohibiting the sale of heavy whipping cream, and reducing butterfat requirements. But the problem of excess demand persisted. Huge quantities of fruits and vegetables were exported in support of the war effort, leaving limited stocks for civilian use. The OPA kicked off 1943 with a plan under which households would be allowed to keep in their own homes no more than five cans of fruits or vegetables per occupant at any one time. A penalty of eight ration points would be assessed for each excess can. There is little evidence that the ban was actually enforced, and neither home-canned goods nor fresh produce was covered by the order.40 Home canners could get a pound of sugar for each four quarts of fruits they planned to can without surrendering ration coupons; however, sugar restrictions sidelined home brewers and amateur wine makers. Commercial distilling for civilian consumption ceased, but the industry reassured customers that it had a three-year supply of liquor stockpiled and ready to sell, so there was no need to ration.

Bread and potatoes were exempted from rationing, to provide a dietary backstop. With caloric requirements thus satisfied by starchy foods, protein became the chief preoccupation. Red meat had already held center stage in the American diet for decades; consumption at the beginning of World War II was more than 140 pounds per person per year, well above today’s average of about 115 pounds. During the war, the government aimed to provide a full pound of red meat per day to each soldier; therefore, according to officials, only 130 pounds per year would remain for each civilian. A voluntary “Share the Meat” program, introduced in 1942, managed to lower average annual consumption by a mere three pounds. When the necessity for stronger curbs became evident, rationing was introduced in 1943, and soon consumption dropped steeply, to 104 pounds per civilian. Farm families were permitted to consume as much as they wanted of any kind of meat they produced on the farm without surrendering ration coupons, but farm owners who did not cultivate their own land were not. Elsewhere, the feeling of scarcity was pervasive. For those who craved more meat, there was little consolation to be found in a chicken leg. At that time, poultry was not highly regarded as a substitute for red meat, so average consumption was only a little over twenty pounds per year—less than one-third of today’s level.42 The OPA tightened price ceilings on poultry but did not ration it.43

By April 1, 1943, even vegetable-protein sources such as dried beans, peas, and lentils had been added to the list of rationed items. To make a half pound of hamburger go further, the American Red Cross Nutrition Service suggested the use of “extenders,” including dried bread and bread crumbs, breakfast cereals, and “new commercial types” of filler. Cooks became accustomed to substituting jelly and preserves for butter; preparing sardine loaf, vegetable loaf, cheese, lard, and luncheon meat; and substituting “combination dishes such as stews, chop suey, chili, and the like for the old standby dishes such as choice roasts and steaks and chops.” Americans sought out protein-and calorie-heavy food wherever they could, partly because, in those days, thinness evoked memories of hard times. The OPA, for example, “served notice on Americans . . . that they will do well, if they want to preserve that well-fed appearance, to stop dreaming of steaks and focus their appetites and point purchasing power on hamburger, stew, and such delicacies as pig’s ears, pork kidneys, and beef brains.”

Starting the next morning, footwear would be subject to rationing, with each American entitled to three pairs of shoes per year.

The reaction to rationing was instantaneous and frantic. Most shoe and department stores were closed on Sundays in that era, but in the few hours that remained before shoe rationing began, there was a rush on footwear at the handful of open stores. During the following week, after the order went into effect, the stampede continued, partly because some shoppers had misunderstood the rationing order to mean that shoes were already in short supply.

The apparel industry succeeded in blocking rationing plans from being implemented for any articles other than footwear, and that made it very difficult to control demand for clothing.48 But efforts to reduce resource consumption at the manufacturing stage were ambitious. For most clothing, the WPB established “square-inch limitations on the amount of material which may be used for all trimmings, collars, pockets, etc.,” while clothing was designed “to keep existing wardrobes in fashion” so that consumers would wear them longer. In discussing a WPB order regulating women’s clothing, the government publication Victory Bulletin observed, “The Basic Silhouette—termed the ‘body basic’ by the order—must conform to specified measurements of length, sweep, hip, hem, etc., listed in the order.” Such micromanagement even extended to swimwear, when a skimpier two-piece bathing suit was promoted for requiring less fabric.

Appliance manufacturing for civilian use was tightly restricted. From April 1942 to April 1943, no mechanical refrigerators were produced; that saved a quarter million tons of critical metals and other materials for use in war production. Starting in April 1943, sale of refrigerators, whether electric-run, gas-run, or a nonmechanical icebox type, was resumed; however, in order to be allowed to make a purchase, a household member had to attest on a federal form that “I have no other domestic mechanical refrigerator, nor do I have any other refrigerator equipment that I can use.” Stoves for heating and cooking were similarly rationed, requiring a declaration that the purchaser owned no functional stove. The OPA ruled that the 150,000 stovetop pressure cookers to be produced in 1943 would be allocated by County Farm Rationing Committees and that “community pools,” each comprising several families who agreed to the joint use of a pressure cooker, would receive preference. The WPB exerted its influence on production of radio tubes, light fixtures, lightbulbs, and even can openers. Bed and mattress production was maintained at three-fourths its normal level.

Reports noted, “Sacrificing metal-consuming inner springs, mattress manufacturers have reverted to the construction of an earlier period,” using materials such as cotton felt, hair, flax, and twine. The industry produced “women’s slips made from old summer dresses; buttons from tough pear-tree twigs; life-jacket padding from cattails; and household utensils from synthetic resins.”

In Britain, a series of “Limitations of Supplies” orders governed sales.

Soap was rationed because its production required fats, which had to be shared with the food and munitions industries.

The idea of clothes rationing was no more popular in Britain than it was in the United States. Prime Minister Winston Churchill didn’t like the idea at all, but neither he nor anyone else could come up with an alternative means to keep prices down and all Britons clothed. Apparel was given a page in the ration book originally reserved for margarine, which at the time was not being rationed. Annual allowances fluctuated between approximately one-third and two-thirds of average prewar consumption.

Price-controlled, tax-exempt “utility clothing” was made of fabric manufactured to precise specifications meant to ensure quality and long life. It was conceived in part by “top London fashion designers” and was not necessarily cheap. Yet it was generally well received because of its potential to delay one’s next clothing purchase. Utility plans eventually encompassed other textiles, shoes, hosiery, pottery, and furniture. Items had to be made to detailed specifications, and the number of styles was tightly limited.

Average people also got many opportunities to sit back and enjoy the public humiliation of well-heeled or politically powerful ration violators. In the summer of 1943, the OPA initiated proceedings against eight residents of Detroit’s posh Grosse Pointe suburb for buying meat, sugar, and other products without ration coupons. This gang of “socialites,” as they were characterized, included a prominent insurance executive, the wife of the president of the Detroit News, and the widow of one of the founders of the Packard Motor Car Company, who tried to buy four pounds of cheese under the table and got caught. In Maryland, the wife of the governor had to surrender her gas ration book after engaging in pleasure driving in a government vehicle.

In a column, Mathews demanded that “Washington start cracking down on the big fellows if you expect cooperation from the little fellows.” But it was Mathews himself who was arrested, on libel charges.

Wealthy Britons did not suffer much either under food rationing. Upscale restaurants could serve as much food as their customers could eat, and they were not subject to price controls. Such costly luxuries as wild game and shellfish were not rationed.

The rationing of goods at controlled prices provides a strong incentive for cheating, as the World War II example shows. For administering wartime rationing and price controls, the UK Ministry of Food had an enforcement staff of around a thousand, peaking in 1948 at over thirteen hundred. They pursued cases involving pricing violations; license violations; theft of food in transit; selling outside the ration system; forgery of ration documents; and, most prominently, illicit slaughter of livestock and sale of animal products. Illegal transactions accounted for 3 percent of all motor fuel sales and 10 percent of those involving passenger cars. Enforcement of rationing regulations and price controls by the ministry from 1939 to 1951 resulted in more than 230,000 convictions; the majority of offenders were retail merchants guilty of mostly minor offenses. An estimated 80 percent of the convictions resulted in fines of less than £5, and only 3 to 5 percent led to imprisonment of the offender. There were fewer problems involving quantity-rationed goods (for which consumers were paired up with a single retailer) than there were with rationing via points, which could be used anywhere. Zweiniger-Bargielowska writes that, although most people at one time or another made unauthorized purchases, the most corrosive effect of illicit markets was to subvert the ideal of “fair shares for all,” since it was only those better off who could afford to buy more costly contraband goods routinely.

In the United States, enforcement of price controls and rationing regulations made up 16 percent of the OPA’s budget. The agency identified more than 333,000 violations in 1944 alone but prosecuted just 64,000 people that year. Forty percent of prosecutions were for food violations, with the largest share for meat and dairy, and 17 percent were for gasoline. Along with flagrant overcharging, selling without ration-stamp exchange, and counterfeiting of stamps and certificates, businesses resorted to work-arounds: “tie-in sales” that required purchase of another product in addition to the rationed one, “upgrading” of low-quality merchandise to sell at the high-quality price, and short-weighting. As in Britain, the off-the-books meat trade got a large share of attention.

Illicit meat was sold for approximately double the legal price, and it tended to be the better cuts that ended up in illegal channels. Official numbers of hogs slaughtered under USDA inspection dropped 30% from February 1942 to February 1943, with the vanished swine presumably diverted into illegal trade. Off-the-books deals by middlemen were common, as was “the rustler, who rides the range at night, shooting animals where he finds them, dressing them on the spot, and driving away with the carcasses in the truck.” It wasn’t only meat that was squandered. Victory Bulletin warned, “Potential surgical sutures, adrenalin, insulin, gelatin for military films and bone meal for feeds are disregarded by the men who slaughter livestock illegally”; also lost was glycerin, needed for manufacturing explosives.

Retailers didn’t always play strictly by the ration book. A coalition of women’s organizations in Brooklyn urged Chester Bowles, director of the OPA, to prohibit shopkeepers from holding back goods for selected customers, demanding that all sales be first come, first served. But some OPA officials pointed out that such a policy would discriminate against working women who had time to shop only late in the day. Restaurants were free to serve any size portions they liked; however, if they decided to continue serving ample portions (for which they were allowed to charge a fittingly high price), they faced the prospect of having to close for several days each week when their meat ration ran out. Private banquets featuring full portions could be held with the permission of local rationing boards.

The ration stamps issued to a single household were not usually sufficient to purchase a large cut of meat such as a roast, and because stamps had expiration dates they could not be saved up from one ration period to the next in order to do so. Because consumers were required to present their own ration books in person when buying meat, announced the OPA, guests invited to a dinner party would have to buy their own meat and deliver it beforehand to the host cook—an awkward but workable solution if, say, pork chops were on the menu. However, if a single large cut such as a pot roast were to be served, the OPA noted, the host and invitees would have to “go to the butcher shop together, each buying a piece of the roast, and ask the butcher to leave it in one piece.”

The extension of rationing to bread in 1946–48, a move intended to ensure the flow of larger volumes of wheat to areas of continental Europe and North Africa that were threatened by famine, was highly controversial. People had come to depend on bread, along with potatoes, as a “buffer food” that helped feed manual workers and others for whom ration allowances did not provide sufficient calories. Rationing of the staff of life was unpopular from the start, even though allowances were adjusted to meet varying nutritional requirements and rations themselves were ample.

On November 15, Nixon asked all gasoline stations to close voluntarily each weekend, from Saturday evening to Sunday morning. As during World War II, a national allocation plan was put in place to ensure that each geographic region had access to adequate fuel supplies. In establishing allocation plans, the Federal Energy Office assigned low priority to the travel industry and, in an echo of World War II, explicitly discouraged pleasure driving. That same month, Nixon announced cuts in deliveries of heating oil—reductions of 15 percent for homes, 25 percent for commercial establishments, and 10 percent for manufacturers—under a “mandatory allocation program.” The homes of Americans who heated with oil were to be kept six to ten degrees cooler that winter. Locally appointed boards paired fuel dealers with customers and saw to it that the limits were observed. Supplies of aviation fuel were cut by 15 percent. The national speed limit was lowered to 55 miles per hour. With Christmas approaching, ornamental lighting was prohibited. Finally, Nixon took the dramatic step of ordering that almost 5 billion gasoline ration coupons be printed and stored at the Pueblo Army Depot in Colorado, in preparation for the day when gas rationing would become necessary.

Here is how Time magazine depicted the national struggle for fuel during the 1973–74 embargo: The full-tank syndrome is bringing out the worst in both buyers and sellers of that volatile fluid. When a motorist in Pittsburgh topped off his tank with only $1.10 worth and then tried to pay for it with a credit card, the pump attendant spat in his face. A driver in Bethel, Conn., and another in Neptune, N.J., last week escaped serious injury when their cars were demolished by passenger trains as they sat stubbornly in lines that stretched across railroad tracks. “These people are like animals foraging for food,” says Don Jacobson, who runs an Amoco station in Miami. “If you can’t sell them gas, they’ll threaten to beat you up, wreck your station, run over you with a car.” Laments Bob Graves, a Lexington, Mass., Texaco dealer: “They’ve broken my pump handles and smashed the glass on the pumps, and tried to start fights when we close. We’re all so busy at the pumps that somebody walked in and stole my adding machine and the leukemia-fund can.”

President Gerald Ford laid out a plan to reduce American dependence on imported oil by imposing tariffs and taxes on petroleum products. His plan was met with almost universal condemnation. A majority of Americans polled said they would prefer gasoline rationing to the tax scheme. Time agreed, arguing that rationing would have three crucial qualities going for it—directness, fairness, and familiarity—and adding that “support for rationing is probably strongest among lower-income citizens who worry most about the pocketbook impact of Ford’s plan.”

The federal government, he said, should challenge Americans to make sacrifices, and its policies must be fair, predictable, and unambiguous. But, he warned, “we can be sure that all the special-interest groups in the country will attack the part of this plan that affects them directly. They will say that sacrifice is fine as long as other people do it, but that their sacrifice is unreasonable or unfair or harmful to the country. If they succeed with this approach, then the burden on the ordinary citizen, who is not organized into an interest group, would be crushing.” He was right. Critics in both the private and public sectors rejected Carter’s characterization of the energy crisis as the “moral equivalent of war” and viewed any discussion of limits, conservation, or sacrifice as a threat to the economy. Opponents then mocked his call to arms by abbreviating it to “MEOW,” while Congress simply ignored Carter’s warnings and avoided taking any effective action on energy.

Ground zero for the gas shortages of 1979 was California. The state imposed rationing on May 6, allowing gas purchases only on alternate days: cars with odd license-plate numbers could be filled on odd days of the month and even numbers on even days. Several other states followed suit, but that move alone didn’t relieve the stress on gas stations. Many station attendants refused to fill tanks that were already half full or more. That first Monday morning, many drivers who woke up early to allow time to buy gas on the way to work instead found empty, locked cars already standing in long lines at the pumps. The cars had been left there the previous evening by drivers who then walked or hitchhiked back to the station in the morning. Two Beverly Hills attorneys tied their new rides—a pair of Arabian horses—to parking meters outside their office as they prepared to petition the city to suspend an ordinance against horse riding in the streets. The National Guard was called out to deliver gas to southern Florida stations. A commercial driver hauling a tankful to a Miami station found a line of 25 cars following him as if, he later said, he’d been “the Pied Piper.” In some cities, drivers were seen setting up tables alongside their cars in gas lines so the family could have breakfast together while waiting to fill the tank.

One of the worst incidents occurred in Levittown, Pennsylvania, where a crowd of 1,500 gasoline rioters “torched cars, destroyed gas pumps, and pelted police with rocks and bottles.” A police officer responded to a question from a motorist by smashing his windshield, whacking the driver’s son with his club, and putting the man’s wife in a choke hold. In all, 82 people were injured, and almost 200 were arrested. Large numbers of long-haul truckers across the nation went on strike that summer, parking their rigs. Some blockaded refineries, and a few fired shots at non-striking truckers. The National Guard was called out in nine states, as “the psychology of scarcity took hold.” A White House staffer told Newsweek, “This country is getting ugly.”

As during World War II, gasoline scarcity was far worse in some regions than in others. But increasing desperation in the nation’s dry spots prompted talk of rationing even in conservative quarters. The columnist George F. Will observed, “There are, as yet, no gas lines nationwide. If there ever are, the nation may reasonably prefer rationing by coupon, with all its untidiness and irrationality, to the wear and tear involved in rationing by inconvenience.” A New York Times–CBS News poll in early June found 60 percent of respondents preferring rationing to shortages and high prices.

Carter wanted the government to have the ability to ration gas, thereby freeing up supplies that could then go to regions that were suffering shortages. Thanks largely to the oil companies’ fierce opposition, Congress refused to pass standby rationing in May, but support for the idea continued to grow.

Most of his policy recommendations were again focused on conservation. His most specific move was asking Congress once again for authority to order mandatory conservation and set up a standby gasoline rationing system. Of the five thousand or so telegrams and phone calls received by the White House in response to that speech, an astonishing 85% were positive. Carter’s approval jumped 11 points overnight. The next day, he spoke in Detroit and Kansas City, both times to standing ovations. But Carter was still being vague about what, specifically, Americans were supposed to do. Meanwhile, renewed political wrangling on other issues and a drop in gas prices drained away the nation’s sense of urgency over energy. The deeper problems had not gone away, but without the threat of societal breakdown that had so alarmed the public and stirred Carter to bold oratory, the incentive to take action vanished.

Despite a 28% improvement in vehicle fuel economy, America’s total annual gasoline consumption has increased 47% since 1980, with the consumption rate per person 10% higher today than in 1980. Had there been a 20% gasoline shortfall at the start of the 1980s, triggering Congress’s gas-rationing plan, and had we managed to hold per-capita consumption at the rationed level for the next 30 years (taking into account the rate of population increase that we actually experienced), we would have saved 800 billion gallons—equal to about six years of output from U.S. domestic gasoline refiners. That’s a lot in itself, but such long-term restraint would have caused a chain reaction of dramatic changes throughout the economy, changes so profound that America would probably be a very different place today had rationing been instituted and had it continued. That didn’t happen. Instead, the U.S. economy focused again on developing new energy-dependent goods and services.

The clearest expression of the current goals of our foreign policy came in an address to the 1992 Earth Summit in Rio de Janeiro by President George H.W. Bush, a year after the first Persian Gulf war. There he announced to the world that “the American way of life is not negotiable,” signaling that the country had changed profoundly since the day almost exactly fifty years earlier when Harold Ickes had declared that patriotic citizens would never risk the lives of their soldiers to preserve “motoring as usual.”

According to calculations by Vaclav Smil of the University of Manitoba, the human economy has already reduced the total weight of plant biomass on Earth’s surface by 45%. About 25% of each year’s plant growth worldwide, and a similar proportion of all freshwater flowing on Earth’s surface, is already being taken for human use. If you could put all of our livestock and other domestic animals on one giant scale, they would weigh 25 times as much as Earth’s entire dwindling population of wild mammals. In 2009, a group of 29 scientists from seven countries published a paper in which they identified nine “planetary boundaries” that together define a “safe operating space” for humanity. If we cross those boundaries and don’t pull back, they concluded, the result will be catastrophic ecological breakdown. Given the uncertainties involved in any such projections, they proposed to set the limits not at the edge of the precipice but at some point this side of it, prudently leaving a modest “zone of uncertainty” as a buffer. The boundaries were defined by limits on atmospheric carbon dioxide concentration; air pollutants other than carbon dioxide; stratospheric ozone damage; industrial production of nitrogen fertilizer; breakdown of aragonite, a form of calcium carbonate whose abundance is an indicator of the health of coral and microscopic sea organisms; human use of freshwater; land area used for cultivation of crops; species extinction; and chemical pollution. The group noted that we have already transgressed three of the limits: carbon dioxide concentration, species extinction, and nitrogen output. Furthermore, they concluded, “humanity is approaching, at a rapid pace, the boundaries for freshwater use and land-system change,” while we’re dangerously degrading the land that is already sown to crops.

The International Energy Agency (IEA) concludes that extraction of conventional oil peaked in 2006, but that with increases in mining of oil from unconventional deposits like the tar sands of Canada, the plateau will bump along for decades.

Demand for gas may rise even faster than that. It seems that everyone these days is looking to natural gas to bail the world out of all kinds of crises: big environmental groups urge that it be substituted for coal to reduce carbon emissions; the transportation industry wants to substitute it for increasingly costly oil by burning it directly, converting it to liquid fuel, or feeding power plants that in turn will feed the batteries in electric cars; enormous quantities will be consumed in the process of extracting oil from tar-sand deposits; and high-yield agriculture requires increasing quantities of nitrogen fertilizer manufactured with natural gas.

In the near term, the process of hauling enough rock phosphate, lime, livestock manure, or even human waste to restore phosphorus-deficient farm soils will be burdened by increasing transportation costs. Then there are tractors, 4.2 million of them on farms and ranches in the United States alone. Field operations on almost all farms in America, including organic farms, are heavily dependent on diesel fuel or gasoline. Finally, the farm economy supports a much larger off-farm food economy, one that is heavily dependent on fossil energy. Now we are asking the industrial mode of agriculture, with its own low energy efficiency, to supply not only food and on-farm power but also billions of gallons of ethanol and biodiesel for transportation.

If enough good soils and waters are to be maintained to support that life, the currently wasteful means of using water and growing food must be not just adjusted but transformed. Until that happens, the interactions among energy, water, and food will come to look even more like a game of rock-paper-scissors. Energy shortages or restrictions can keep irrigation pumps, tractors, and fertilizer plants idle or make food unaffordable.

Current methods for producing food are huge energy sinks and major contributors to greenhouse-gas warming, while the conversion of food-producing land to substitute for mineral resources in providing fuel, fabric, rubber, and other industrial crops will accelerate soil degradation while contributing to wasteful energy consumption.

The next best course is to make it later rather than sooner by leaving fossil fuels in the ground longer. But can economies resist burning fossil fuels that are easily within reach? Might even renewable energy sources be harnessed to the task of obtaining much more potent and versatile fossil energy? That is already happening in various parts of the world, including the poverty-plagued but coal-rich state of Jharkhand in India. Strip mining there is pushing indigenous people off their land, ruining their water supply, and driving them to desperate means of earning an income. Every day for the past decade, it has been possible to witness a remarkable spectacle along a main highway between the coal-mining district of Hazaribagh and the state capital, Ranchi: men hauling coal on bicycles. Each bike, with its reinforced frame, supports up to four hundred pounds of coal in large sacks. The men, often traveling in long convoys, push the bicycles up steep ridges and sometimes stand on one pedal to coast down. Their cargo has been scavenged from small, shallow, village-dug mines, from government-owned deposits that are no longer economically suitable for large-scale mining, or from roadsides where it has fallen, they say, “off the back of a truck.” Hauling coal the forty miles from Hazaribagh to Ranchi takes two days, and the men make the round-trip twice a week. These “cycle-wallahs” travel roads throughout the region, delivering an estimated 2.5 million metric tons of coal and coke annually to towns and cities for cooking, heating, and small industry.

If scarcity, either absolute or self-imposed, becomes a pervasive fact of life, will rationing no longer be left to the market? Will more of it be done through public deliberation? Ask ecologists and environmentalists that question today, and you frequently hear that quantity rationing is coming and that we should get ready for it. David Orr, a professor of environmental studies and politics at Oberlin College in Ohio and a leading environmental thinker, believes that “one way or another we’re going to have rationing. Rationing goes to the heart of the matter.” Although “we assume that growth is humanity’s destiny and that markets can manage scarcity,” Orr believes that “letting markets manage scarcity is simply a way of not grappling with the problem.” And because “there is no question that rationing will happen,” he says the key question is how. “Will it be through growth in governance, either top-down or local, or will we let it happen ‘naturally,’” through prices? The latter course, Orr believes, would lead to chaos.36 Likewise, Fred Magdoff, co-author (with John Bellamy Foster) of What Every Environmentalist Needs to Know About Capitalism, among other books, sees rationing as very likely necessary in any future economy that takes the global ecological crisis seriously.

He says there is no escaping the problem of distribution: “There is rationing today, but it’s never called that. Allocation in our economy is determined almost entirely in one of two ways: goods tend to go to whoever has the most money or wherever someone can make the most profit.” As an alternative, he says, rationing by quantity rather than ability to pay “makes sense if you want to allocate fairly. It’s something that will have to be faced down the line. I don’t see any way to achieve substantive equality without some form of rationing.” But, Magdoff adds, “there’s a problem with using that terminology. There are certain ‘naughty’ words you don’t use. ‘Rationing’ is not considered as naughty as ‘socialism,’ but it’s still equivalent to a four-letter word.”37

Ask almost any economist today, however, and you will learn that non-price rationing simply doesn’t work and should be avoided. For example, Martin Weitzman, at Harvard University, who developed some of the basic theory of rationing decades ago, takes the view that “generally speaking, most economists, myself included, think that rationing is inferior to raising prices for ordinary goods. It can work for a limited time on patriotic appeal, say during wartime. But without this aspect, people find a way around the rationing.” He adds that rationing would also “require a large bureaucracy and encounter a lot of resistance. I am hard-pressed to think of when rationing and price controls would be justified for scarce materials.” Others see rationing as unworkable not only for technical reasons but simply because people in affluent societies today cannot even imagine life under consumption limits. Maurie Cohen has little confidence that residents of any industrialized society would accept comprehensive limits on consumption because, in his view, “following a half century of extraordinary material abundance, public commitments to consumerist lifestyles are now more powerfully resolute.”39 David Orr agrees that prospects for consumption restraints in America today are dim at best: “We have to reckon with the fact that from about 1947 to 2008 we had a collision with affluence, and it changed us as a people. It changed our political expectations, it changed us morally, and we lost a sense of discipline. Try to impose a carbon tax, let alone rationing, today and you’ll hear moaning and groaning from all over.”40

In theory, shortages are always temporary. As the price of a scarce good rises, fewer and fewer people are able and willing to buy it, while at the same time producers are stimulated to increase their output. The price stops rising when demand has been driven low enough to meet the rising supply. If for whatever reason (often because of absolute scarcity, as with Yosemite campsites) the price is not allowed to rise to the heights required to bring demand and supply into alignment, and there is no substitute product that can draw away demand, the good is apportioned in some other way. At that point, nonprice rationing, often referred to simply as “rationing,” begins.

With basic necessities as much as with toys, rationing by queuing tends to create not buzz but belligerence. Dreadful memories of rationing by queuing—like the lines that formed at gas stations across America and outside bakeries in the Soviet Union in the 1970s—are burned into the minds of those who lived through those times; few regard such methods of allocation as satisfactory when it comes to essential goods.

Weitzman then summarized the case in favor of rationing: “The rejoinder is that using rationing, not the price mechanism, is in fact the better way of ensuring that true needs are met. If a market clearing price is used, this guarantees only that it will get driven up until those with more money end up with more of the deficit commodity. How can it honestly be said of such a system that it selects out and fulfills real needs when awards are being made as much on the basis of income as anything else? One fair way to make sure that everyone has an equal chance to satisfy his wants would be to give more or less the same share to each consumer independent of his budget size.” Acknowledging that arguments both for and against rationing of basic needs “are right, or at least each contains a strong element of truth,” Weitzman went on to demonstrate mathematically how rationing by price performs better when people’s preferences for a commodity vary widely but there is relative equality of income. Rationing by quantity appeared superior in the reverse situation, when there is broad inequality of buying power and demand for the commodity is more uniform (as can be the case with food or fuel, for example).45

In a follow-up to Weitzman’s analysis, Francisco Rivera-Batiz showed that rationing’s advantage increases further if the income distribution is skewed—that is, if the majority of households are “bunched up” below the average income while a small share of the population with very high incomes occupies the long upper “tail” of the distribution. Rivera-Batiz concluded that quantity rationing “would work more effectively (relative to the price system) in allocating a deficit commodity to those who need it most in those countries in which economic power and income are concentrated in the hands of the few.”
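A toy simulation can make that result concrete. The sketch below is not Weitzman’s or Rivera-Batiz’s actual model; it simply assumes roughly similar needs across consumers, a heavily skewed (Pareto) income distribution, a fixed supply, and an arbitrary two-unit maximum purchase per winning bidder, then compares how much need goes unmet when the good is sold to the highest bidders versus divided into equal shares. All numbers are illustrative assumptions.

```python
# Toy illustration (all parameters are assumptions, not taken from the studies
# cited above): with similar needs and very unequal incomes, equal shares leave
# far less need unmet than allocating the same supply to the highest bidders.
import random

random.seed(1)
N = 10_000                                               # consumers
SUPPLY = float(N)                                        # one unit per person, on average

needs = [random.uniform(0.5, 1.5) for _ in range(N)]     # fairly uniform needs
incomes = [random.paretovariate(1.5) for _ in range(N)]  # heavily skewed incomes

# Price rationing: willingness to pay ~ need * income; the highest bidders buy
# up to 2 units each (an arbitrary assumption) until the supply is exhausted.
price_alloc = [0.0] * N
remaining = SUPPLY
for i in sorted(range(N), key=lambda i: needs[i] * incomes[i], reverse=True):
    if remaining <= 0:
        break
    take = min(2.0, remaining)
    price_alloc[i] = take
    remaining -= take

# Quantity rationing: the same supply divided into equal shares.
ration_alloc = [SUPPLY / N] * N

def unmet_need(alloc):
    return sum(max(needs[i] - alloc[i], 0.0) for i in range(N))

print("unmet need under price rationing:   ", round(unmet_need(price_alloc)))
print("unmet need under quantity rationing:", round(unmet_need(ration_alloc)))
```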

Writing back in the early days of World War II, the Dutch economist Jacques Polak had come to a similar conclusion: that rationing had become necessary because even a small rise in price can make it impossible for the person of modest income to meet basic needs, while in a society with high inequality there is a wealthy class that can “push up almost to infinity the prices of a few essential commodities.” Therefore, he stressed, it is not shortages alone that create the need for rationing with price controls; rather, it is a shortage that occurs in a society with “substantial inequalities of income.”

The burden of consumption taxes weighs most heavily on people in lower-income brackets. It has been suggested that governments can handle that problem by redistributing proceeds from consumption taxes in the form of cash payments to low-income households. But determining the size of those payments is no easier than finding the right tax rate; furthermore, means-tested redistribution programs often come to be seen by more affluent non-recipients as “handouts” to undeserving people and are therefore more politically vulnerable than universal programs or policies. Weitzman has also observed that problems always seem to arise when attempts are made to put compensation systems into practice. The argument that the subsidies can blunt the impact of the taxes, he says, “is true enough in principle, but not typically very useful for policy prescriptions because the necessary compensation is practically never paid.”

Eighteen years later, continuing his examination of the potential of taxes and income subsidies for addressing inequality, Tobin observed that redistributing enough income to the lower portion of the American economic scale through a mechanism like the “negative income tax” being contemplated by the Nixon administration at the time (which would have provided subsidies to low-income households much like today’s Earned Income Tax Credit) would require very high—and, by implication, politically impossible—tax rates on higher incomes.

With rationing by quantity, people or households use coupons, stamps, electronic credits, or other parallel currencies that entitle them to a given weight or measure of a specific good—no more, no less—over a given time period. Normally, as was the case in World War II–era America and Britain, rationed goods or the credits to obtain them may be shared among members of a household but may not be sold or traded outside the household. The plan may be accompanied by subsidies and/or price controls.

“Rationing in time” cannot ensure that savings of the resource will be proportional to the length of time for which supply is denied. For example, consumption doesn’t fall by half when alternate-day lawn-watering restrictions are in force, because people can water as much as they like on their assigned days.

Unlike straight rationing, quantity rationing by points cannot guarantee everyone access to every item in the group of rationed items, but it can ensure a fair share of consumption from a “menu” of similar items. Points, like all ration credits, are a currency. Every covered item requires payment in both a cash price and a point price. But points differ from money in that every recipient has the same point “income,” which does not have to be earned; points can be spent only on designated commodities; point prices are not necessarily determined by supply and demand in the market; and trading in points is usually not permitted.

The range of goods covered by a given point scheme could in theory be as narrow as it was with canned goods during World War II or as broad as desired—if, for example, there were a point scheme covering all forms of energy, with different point values for natural gas, gasoline, electricity, etc.

The values of items in terms of points can be set according to any of several criteria. In the case of wartime meat, items with higher dollar prices also tended to be the ones assigned higher point values (for a time in Britain, dollar and point values were identical), but for other types of products, an item’s point value might reflect the quantity of a scarce resource required to produce it—or, as we will see, the greenhouse-gas emissions created during its manufacture, transport, and use. The more closely point values are adjusted to reflect the level of consumer demand that would exist without rationing, the less they interfere with functioning of the market.

Among people with differing preferences, there will be winners and losers.

If only a few items are restricted, people take the extra money that they would otherwise have spent on additional rationed goods and spend it on non-rationed ones, driving up their prices. If price controls are then extended to other goods without rationing them, demand for those goods shoots up even higher, and stocks are further depleted. These goods are then brought into the rationing scheme, thereby extending it to larger and larger numbers of essential goods.

But what about nonessential goods, such as swimming pools or rare wines? If the main concern is fair access to necessities, there seems little reason to ration nonessentials. If wealthy people, prohibited from buying as much gasoline or food as they would like, use their increased disposable income to bid up the prices of luxuries, is the harm too small to worry about? Maybe, but it would depend on the motive for rationing. If the goal is to reduce total resource consumption, the prices of vintage wines or rare books might be left to the market, while the construction of swimming pools would be restricted.

As an alternative to a vast, complex system of quantity-rationing schemes for many products, Kalecki proposed simply to ration total spending. Each person would be permitted expenditures only up to a fixed weekly limit in retail shops, with the transactions tracked through coupon exchange. Up to that monetary limit, families could buy any combination of goods and quantities, as long as their total per-person spending stayed under the limit. No such system of “general” or “expenditure” rationing has ever been adopted, but during and after the war, several British and American economists examined the possible consequences of employing it during some future crisis. Once again, they realized, income inequality would complicate things. If the spending ceiling was the same for everyone, as was proposed, then lower-income families could spend their entire paycheck and still have coupons left over. Such families might be tempted to sell their excess coupons to people who had more cash to spend than their coupon allotment would allow. Some economists worried that that would not only stimulate unwanted demand but violate the “fair shares for all” principle.56 It was Kalecki who finally proposed a workable solution: that the government offer to buy back any portion of a person’s expenditure allowance that the person could not afford to use. For example, if the expenditure ration were £30 but a family had only £10 worth of cash to spend, they could, under Kalecki’s proposal, sell one-third of their allowance back to the government and be paid £10 in cash. That could be added to the £10 they had on hand, and they could spend it all while staying within the limit imposed by their £20 worth of remaining ration coupons.
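The arithmetic of the buyback is simple. Assuming, as in the example above, that the government redeems unused allowance at face value (an assumption drawn only from that example, not from Kalecki’s full proposal), a household sells back just enough of its allowance that the cash it ends up holding exactly matches the allowance it keeps. A minimal sketch:

```python
# Minimal sketch of Kalecki-style expenditure rationing with a government buyback,
# assuming coupons are redeemed at face value (an assumption based only on the
# £30/£10 example in the text, not on Kalecki's full proposal).
def buyback(allowance: float, cash: float):
    """Return (coupons_sold, cash_after, allowance_after) such that the household
    can spend all of its cash without exceeding the allowance it still holds."""
    sold = max((allowance - cash) / 2, 0.0)   # solves: cash + sold = allowance - sold
    return sold, cash + sold, allowance - sold

print(buyback(30, 10))   # -> (10.0, 20.0, 20.0): sell £10 of coupons, then spend £20
```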

The government buyback system would be intended to prevent the exploitation of the worse off by the better off and ensure a firm limit on total consumption, creating a “fairly comprehensive, democratic, and elastic system of distributing commodities in short supply,” and it would provide an automatic benefit to low-income families without an artificial division of the population into “poor” and “non-poor” categories. Well-to-do families would tend to accumulate savings under expenditure rationing, and Kalecki urged that those savings be captured by the government through an increase in upper-bracket income tax rates. That would not only curb inflation but could also help pay for the coupon buyback scheme. Kalecki’s idea of allowing people to return ration credits for a refund rather than sell them to others has since been suggested as a feature of future carbon-rationing schemes.

In a report written for the U.S. National Security Resources Board in 1952, the economist Gustav Papanek looked back at the wartime discussion of expenditure rationing and saw plenty of deficiencies when he compared the concept with that of straight rationing of individual goods. He noted that if the same spending ceiling were applied to everyone, it could mean a dramatic change in lifestyle for the wealthy, who would probably push back hard against such restrictions. As one of many examples, he cited people with larger houses, who would plead that they had much higher winter heating bills and that allowances would have to be made. Nevertheless, a uniform spending ceiling would be necessary, wrote Papanek, because allowing those with larger incomes to spend more money “not only would make inequality of sacrifice in wartime evident, but would also place upon it the stamp of government approval.”

In many countries, a large portion of the supply of food, water, cooking fuel, or other essentials is subsidized and rationed by quantity, while the remaining supply is traded on the open market. Such two-tier systems provide a floor to ensure access to the necessary minimum but have no ceiling to contain total consumption. Some also treat different people or households differently by, for example, steering subsidized, rationed goods toward lower income brackets. And in some, there is the option of allowing the barter or sale of unused ration credits among consumers and producers. Such markets were proposed as part of Carter’s standby gas rationing plan, and they have been included in more recent proposals for gas rationing and for limiting greenhouse-gas emissions.

The consequences of rationing can be difficult to predict, but one thing is certain: nobody wants to be told what they can and cannot buy. There will be cheating. There are always people—often many people—who want to buy more of a rationed product than they can obtain legally; otherwise, there would be no need to ration it.

Widespread circumvention of regulations poses a dilemma. On the one hand, Cohen wrote, attempts to enforce total compliance are “ineffectual in the short term and counterproductive in the long term,” while on the other, lax enforcement “will lead to the erosion of public support.” Cohen concluded that “there is no easy solution to this dilemma other than agile and adept management.”

Evasion of wartime price controls and rationing was less extensive in Britain than in the United States. Conventional wisdom long held that the difference could be explained by a greater respect for government authority among British citizens, reinforced by their more immediate sense of shared peril (the “Dunkirk spirit”). But an analysis of wage and price data before, during, and after the war shows that the most important factor was the British government’s tighter and more comprehensive control of supply and demand. Enforcement in both countries went through mid-war expansion in response to illicit activity, but key measures taken by the British—complete control of the food supply; standardization of manufacturing and design in clothing, furniture, and other products; concentration of manufacturing in a smaller number of plants; the consumer–retailer tie; and rationing of textiles and clothing—were not adopted in America, where industry opposition to such interference was stronger. The British also invested much more heavily in the system. In 1944, with rationing at its peak in both countries, British agencies were spending four and a half times as much (relative to GDP) as their American counterparts on enforcement of price controls. They were also employing far more enforcement personnel and filing eight times as many cases against ration violators, relative to population.

The differential impact of rationing and underground markets on economic classes should not be ignored. Theory says that the rich are better off in a pure market economy, while those with the lowest incomes are better off in an economy that incorporates rationing; however, the poor benefit even more under rationing (whether it’s by coupons or queuing) that is accompanied by an underground market, because secure access to necessities is accompanied by some flexibility in satisfying family needs. This is thought to be one of the many reasons that there was such widespread dissatisfaction with the conversion from a controlled economy with illegal markets to an open, legal market economy in the former Soviet Union and Eastern Europe in the 1990s.

Ration cards, books, stamps, and coupons are not only clumsy and inconvenient; they invite mischief as well. Some of the biggest headaches for past and current systems have involved the theft, sale, and counterfeiting of ration currency. Some have suggested that in the future it would be easier to head off cheating by using technologies such as smart cards and automatic bank debits that were not available during previous rationing eras.

Several countries are currently pursuing electronic transfers for their food-ration systems. Of course, electronic media are far from immune to outlaw trading; consumers, businesses, and governments have long battled a multibillion-dollar criminal market that exploits credit and debit cards, ATMs, and online vulnerabilities. Were rationing mechanisms added to that list of targets, enforcers would be drawn into similar kinds of cat-and-mouse games with hackers and thieves. Cohen argues that while smart cards and similar technologies can reduce administrative costs and red tape, they cannot eliminate cheating and that it is unrealistic to expect “high-tech rationing” to wipe out evasion and fraud. It will still be necessary, he predicts, “to use customary enforcement tools to limit the corrosive effects of unlawful practices.”64

No law is ever met with total compliance, but there are many examples of laws and regulations that appear to be accomplishing their goals despite routine violations. Compare limits imposed by rationing to speed limits. Around the world it is common for a large share of motorists to be exceeding posted speed limits at any one time. Like the majority of wartime rationing violators, who dipped only lightly into the underground market, most drivers fudge just a few miles per hour. A relatively small proportion of drivers break the limit by ten miles per hour or more, and a still smaller percentage of speeders are ticketed; nevertheless, speed limits succeed in preventing accidents and fatalities. Existence of a speed limit is cited by drivers as an important reason for driving more slowly than they otherwise would, whereas concern about pollution or fuel consumption is not.

Also relevant to a discussion of rationing compliance is the example of tax laws, which are routinely violated in all countries. U.S. federal income tax evasion has been estimated to result in a loss of approximately 19% of the total due—almost $500 billion. The 2012 budget of the Internal Revenue Service (IRS) was about $13 billion, less than half of which was for enforcement. The IRS estimates that it can recover $4.5 million in lost revenue for each $1 million spent on enforcement. The federal government could reduce its budget deficits by spending heavily to eliminate much of the fraud and evasion that occurs, but the necessary expansion of government intervention and the ill will that would result are prices too high to pay. Here, polls show that the percentage of Americans who choose not to believe in the existence of human-induced greenhouse gas warming has increased dramatically. Much of that change can be credited to a vigorous climate-denial industry that conjures up apocalyptic visions of deprivation, lost liberty, and stunted opportunity that, it claims, would result from any interference with the right of people and corporations to emit greenhouse gases without restraint. Denial industry spokespeople have developed a kind of shorthand language for talking about the nightmare world they say awaits us, and the key word in that language is “rationing.” Here, for example, is the conservative commentator Daniel Greenfield writing in 2011: “For environmentalists alternative energy was never really about independence, it was about austerity and rationing for the good of the earth. . . . [T]hey will use any conceivable argument to ram their agenda through, but they are not loyal to anything but their core austerity rationing manifesto. Their goal is expensive sustainable energy. If it isn’t sustainable, than it had damn well better be expensive.”

Opponents of recent proposals to install two-way thermostats in all California homes or use the Endangered Species Act to protect polar bears raise the specter of “energy rationing.” Companies that have resigned their memberships in the U.S. Chamber of Commerce over its opposition to climate legislation are “energy rationing profiteers.” Green urban design leads to “land rationing.” Even First Lady Michelle Obama’s anti-obesity campaign was, according to the website EmergingCorruption.com, aimed at “preparing us for food rationing.”

right-wing groups are implicitly urging that sub rosa rationing of necessary goods via individual ability to pay continue as the norm in society, however unfair the results.

The rare occasions when total carbon emissions in the United States have declined were almost always years of economic hardship: 1981–83, 1990–91, 2001, and 2008–12. Over that last five-year period, emissions from energy use fell a startling 13 percent.

economic crises and the rising unemployment, growing hunger, and general misery that they engender are not in themselves acceptable means to achieve ecological stability.

if, for example, license-plate numbers ending in 1 or 2 must stay off the street on Mondays, 3 and 4 on Tuesdays, etc., with weekends open to all cars, then weekday traffic, in theory, would be reduced by 20 percent. Extending such restrictions to an entire nation and to all times and days of the week has been proposed as a simple strategy to curb greenhouse-gas emissions from transportation. Experience, however, says it wouldn’t work. The longest-running such program, initiated by Mexico City in 1989, was almost instantly undermined by several factors. People drove more miles on the days when they were allowed to circulate, and traffic volume increased on weekends. Soon after the program was initiated, better-off families began acquiring additional cars with different final digits on their tags; often the extra car was a cheap, older used car with poor fuel economy and high pollutant emissions. Traffic volume was reduced by 7.6 percent, not 20, and gas consumption continued to increase. Research on license-plate rationing systems in Mexico City, São Paulo, Bogotá, Beijing, and Tianjin concluded that “there is no evidence that these restrictions have improved the overall air quality.”
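To make the arithmetic behind the theoretical 20 percent concrete, here is a minimal Python sketch; the digit-ban schedule and the assumptions that plate digits are evenly distributed and that every banned car actually stays home are illustrative, not data from any of the programs above.

```python
# Minimal sketch of the theoretical effect of a last-digit driving ban.
# Assumptions (illustrative only): plate final digits are evenly distributed,
# every banned car stays home, and trips are spread evenly across cars.

WEEKDAY_BANS = {  # hypothetical schedule like the one described above
    "Mon": {1, 2}, "Tue": {3, 4}, "Wed": {5, 6}, "Thu": {7, 8}, "Fri": {9, 0},
}

def theoretical_reduction(banned_digits, n_digits=10):
    """Share of cars kept off the road if one car = one plate = one trip."""
    return len(set(banned_digits)) / n_digits

for day, digits in WEEKDAY_BANS.items():
    print(day, f"{theoretical_reduction(digits):.0%}")  # 20% each weekday

# A household with a second car whose plate ends in a digit banned on a
# different day is never kept off the road -- one reason Mexico City's
# observed reduction (7.6%) fell far short of the theoretical 20%.
```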

 “Congestion isn’t an environmental problem; it’s a driving problem. If reducing it merely makes life easier for those who drive, then the improved traffic flow can actually increase the environmental damage done by cars, by raising overall traffic volume, encouraging sprawl and long car commutes.”

Rationing via rolling blackouts has been employed to deal with emergency shortages in the global North, but scheduled blackouts, emergency outages, and denial of connections have been used routinely in India, China, South Africa, Venezuela, and a host of other countries that suffer chronic shortfalls in generation capacity. Chiefly a device to hold down peak demand, the rolling blackout has little or no impact on total consumption and emissions, because people and businesses tend to increase their rate of use when the power is on. The way the burden of blackouts is shared among communities determines how fair the compromise is.

In India during the summer season, farmers with irrigation pumps are given priority over city dwellers, and poorer areas tend to have much longer blackouts than wealthy ones. The State of California, in an effort to avoid the necessity for rolling blackouts like the ones that struck in 2001, while at the same time curbing greenhouse-gas emissions, has been using “progressive pricing” of electricity, a rationing-by-price mechanism that seeks to ensure a basic supply to everyone while providing a heavy disincentive for overconsumption. Pacific Gas and Electric Company’s customers, for example, pay 12 cents per kilowatt hour for all monthly consumption up to the “baseline,” which is the average consumption for a household in a customer’s climatic region, adjusted by season. Customers who use more electricity pay higher rates for the amount above the baseline. For consumption ranging between 100 percent and 130 percent of the baseline, the rate is just 14 cents, but between 130 and 200 percent, it is 29 cents and rises to 40 cents for consumption exceeding 200 percent of the baseline. An analysis published in 2008 found that the California system provides a modest benefit to lower-income consumers; however, there is no statistical evidence that consumers consciously alter their consumption patterns to stay below any of the thresholds.16 Progressive pricing of electricity is also used in parts of India, China, and other countries, with similar results. However, neither rolling power cuts nor progressive pricing was sufficient to prevent the total electrical eclipse that struck India on July 30–31, 2012, leaving 684 million people—more than half the population of the world’s second-largest nation—without power.
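As an illustration of how such a tiered rate schedule turns usage into a bill, here is a minimal sketch using the rates quoted above; the 500 kWh baseline in the example is an assumed figure, since actual baselines vary by climate region and season.

```python
# Minimal sketch of tiered ("progressive") electricity pricing using the rates
# quoted above. The 500 kWh baseline is an assumed example; actual baselines
# vary by climate region and season.

TIERS = [                   # (upper bound as multiple of baseline, $ per kWh)
    (1.0, 0.12),            # up to 100% of baseline
    (1.3, 0.14),            # 100-130% of baseline
    (2.0, 0.29),            # 130-200% of baseline
    (float("inf"), 0.40),   # above 200% of baseline
]

def monthly_bill(usage_kwh: float, baseline_kwh: float) -> float:
    """Sum the charge for each slice of usage that falls within a tier."""
    bill, lower = 0.0, 0.0
    for multiple, rate in TIERS:
        upper = multiple * baseline_kwh
        if usage_kwh > lower:
            bill += (min(usage_kwh, upper) - lower) * rate
        lower = upper
    return bill

# Example: 900 kWh used against an assumed 500 kWh baseline -> 153.5
print(round(monthly_bill(900, 500), 2))
```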

Along with transportation, home energy consumption accounts for a large share of personal emissions.

The TEQ system can serve to illustrate how personal carbon trading might work, in its generalities if not all specifics. A 2011 report published by the organization Lean Economy Connection (founded by David Fleming), along with twenty members of Parliament, provides many of the details.19 The plan envisions the UK government’s Committee on Climate Change setting an overall annual carbon-emissions budget, one that starts somewhere below the nation’s current emissions total and is lowered further every year thereafter. About 40 percent of total UK emissions currently come from direct energy use by individuals and households, primarily for heating, electricity consumption, and driving. Under the TEQ scheme, one year’s worth of “carbon units” (each unit represents the release of one kilogram of carbon dioxide) toward such usage is issued. Forty percent of the total national stock of units is shared among individuals, with one equal portion going to each adult, while the other 60 percent of units are sold to primary energy users (utilities, industry, government, etc.) at the same weekly auction where treasury bills are sold. Individuals must surrender carbon units with each purchase of electricity or fuel, typically when paying the household utility bill or filling up the family car. Payments are debited directly from each person’s “carbon account.” To facilitate transactions, a card similar to a bank debit card is issued to every adult. According to the proposal, “The TEQ units received by the energy retailer for the sale of fuel or electricity are then surrendered when the retailer buys energy from the wholesaler who, in turn, surrenders them to the primary provider. Finally, the primary provider surrenders units back to the Registrar when it pumps, mines or imports the fuel. This closes the loop.” Low energy users who build up a surplus of TEQ units in their accounts can sell them on the national market, and those who require energy above their personal entitlement can buy them.
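A rough sketch of the allocation rule just described (40 percent of a shrinking national budget divided equally among adults) might look like the following; the budget size, reduction rate, and adult population are assumed numbers, not figures from the report.

```python
# Minimal sketch of the TEQ allocation rule: 40% of a declining national
# carbon budget is divided equally among adults; the remaining 60% is
# auctioned to primary energy users. All numbers are assumptions for
# illustration, not figures from the Lean Economy Connection report.

NATIONAL_BUDGET_UNITS = 300e9   # assumed year-0 budget in carbon units (kg CO2)
ANNUAL_REDUCTION = 0.05         # assumed 5% cut in the budget every year
ADULT_POPULATION = 50e6         # assumed number of adults

def personal_entitlement(year: int) -> float:
    """Carbon units issued to each adult in a given year (year 0 = start)."""
    budget = NATIONAL_BUDGET_UNITS * (1 - ANNUAL_REDUCTION) ** year
    return 0.40 * budget / ADULT_POPULATION  # the 40% share, split equally

for year in range(3):
    print(year, round(personal_entitlement(year)))  # entitlement shrinks yearly
```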

Households buy and sell TEQ units in the same market where primary energy dealers trade. Most units in the market are bought through the weekly auction and sold by banks and other large-scale brokers; however, businesses, other organizations, and individuals can buy and sell TEQ units as well. Brokers sell to private businesses, public agencies, and other organizations, all of whom need units when buying energy. A farmer, for example, buys units on the TEQ market in order to buy tractor fuel, while a furniture manufacturer and the corner pub buy and use units to pay their electric bills. Any firm, institution, or individual can sell excess units back into the national market through brokers. Energy sellers like gas stations and utilities can buy units through the market and sell them to customers. In a typical situation, the customer who has run out of units but needs to buy gas or pay the electric bill will have to not only pay for the energy but also buy enough TEQ units (ideally, available for sale directly from the gas station or utility) to cover the transaction.
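The point-of-sale mechanics described above can be sketched as a simple account debit with a top-up purchase when the buyer has run out of units; the emissions factor and prices below are illustrative assumptions only.

```python
# Minimal sketch of a TEQ-style point-of-sale transaction: the buyer surrenders
# carbon units with a fuel purchase and, if short, buys the missing units from
# the retailer at an assumed market price. The emissions factor and prices are
# illustrative assumptions only.

UNITS_PER_LITRE_PETROL = 2.3  # assumed carbon units (kg CO2) per litre of petrol

def buy_fuel(account_units: float, litres: float,
             fuel_price: float, unit_price: float):
    """Return (new carbon-account balance, total cash paid) for one purchase."""
    units_needed = litres * UNITS_PER_LITRE_PETROL
    cash = litres * fuel_price
    shortfall = max(0.0, units_needed - account_units)
    if shortfall:                       # top up from the market via the retailer
        cash += shortfall * unit_price
    return max(0.0, account_units - units_needed), cash

# Example: 40 litres of petrol with only 50 units left in the carbon account
print(buy_fuel(account_units=50, litres=40, fuel_price=1.50, unit_price=0.04))
```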

The mechanics of PCAs are similar to those of TEQs. The PCA idea has emerged in various forms, including in the 2004 book How We Can Save the Planet by Mayer Hillman, Tina Fawcett, and Sudhir Chella Rajan. PCAs would cover both home and transportation emissions but leave upstream emissions to be dealt with through other mechanisms. Like TEQs, PCAs feature an annual carbon budget that declines over time, equal distribution of personal allowances (with the exception that children receive a partial allowance), electronic accounting, and a market in allowances. PCAs cover only individual consumption, however; businesses and other organizations are not included in the plan. Hillman and his co-authors foresee smart cards and automatic bank debits playing key roles. Alongside a sketch of their proposed “Carbon Allowance Card,” they explain, “Each person would receive an electronic card containing the year’s credits. This smart card would be presented every time energy or travel services are purchased, and the correct number of units would be deducted. The technologies and logistics would be no different from debit card systems.” (Today, handheld wireless devices presumably could be used as well.) Carbon allowances for electricity or gas consumption could be surrendered as part of paying the monthly utility bill, most conveniently through automatic bank debit.

When access to energy is thereby limited, the plan’s authors anticipate that consumers will seek out more efficient technologies in order to maintain their accustomed lifestyle. They write, “It will be in the interests of manufacturers to supply low-energy goods because this is where the demand will lie.”21 The authors acknowledge that PCAs will not necessarily reduce emissions produced by industry and commerce—they might even increase them if more efficient, more durable consumer goods require more energy and resources to manufacture—so they suggest, “There may need to be a parallel system of rationing with a reducing allocation over time” applied to business and government as well. No national, mandatory carbon-rationing scheme has yet been put into practice.

Hansen has also roundly denounced all forms of carbon trading, telling the U.S. House Ways and Means Committee in 2009, “Except for its stealth approach to taxing the public, and its attraction to special interests, ‘cap and trade’ seems to have little merit.” Hansen calls his alternative a “tax and 100% dividend” plan, arguing that “a tax on coal, oil and gas is simple. It can be collected easily and reliably at the first point of sale, at the mine or oil well, or at the port of entry.”

Some think Hansen is overselling the carbon tax. Because the biggest individual emitters tend to be more affluent and more willing to spend up to a very high level to maintain their lifestyle, analysts at the Lean Economy Connection argue that “if taxation were high enough to influence the behavior of the better-off, it would price the poor out of the market.” Even with redistribution of the tax receipts back to the public, they say there would be no assurance of fair access to energy in times of scarcity. They also contend that a carbon tax won’t work if the goal is a stable decline in emissions over the long term: “It is impossible for tax to give a long-term steady signal: if it remains constant, it will be inappropriate at certain periods of the economic cycle; if it fluctuates, it does not provide a steady signal.”

Carbon rations deal more explicitly with emissions reduction and require deeper engagement by the public; therefore, Fawcett and her colleagues have “generally proposed them in opposition to taxes, not as a complement.” There’s always that third, far more palatable alternative, voluntary restraint; however, she has written, such a policy “could not even begin to tackle the scale of the problem because few individuals could be expected to start taking action for the common good, with ‘free riders’ having so much to gain.”

The Congressional Budget Office, considering a broad range of possible scenarios under the bill were it passed, estimated that the pool of basic carbon allowances would reach an annual total of $50 billion to $300 billion by 2020—hefty sums in themselves—but that the carbon-derivatives market based on the value of those allowances could have been seven to forty times as large, reaching $2 trillion by 2017.52 The ostensible purpose of such devices would be to dampen the volatility of the carbon market, but what are the chances that the tail would end up wagging the dog, as happened in the pre-2008 U.S. mortgage market? Here is how the Minneapolis-based Institute for Agriculture and Trade Policy describes the risks of creating a carbon derivatives market—something that a member of the U.S. Commodity Futures Trading Commission has predicted could be “the most important commodity market ever”: “Once a carbon market, and its associated secondary market, is established, it is likely that carbon derivatives will be bundled into index funds. The sharp projected increase in the volume and value of carbon derivative contracts will induce extreme price volatility in commodity markets. To the extent that carbon derivatives are bundled into commodity index funds, it is likely that carbon prices will strongly influence both agricultural futures contract and cash prices.”

Speculation in carbon and carbon-derivatives markets could put the cost of extra carbon rations beyond the reach of many. That wouldn’t matter to those who manage to keep their energy consumption below their allowance. But many people live far from work with no transportation other than an old car, or reside in poorly insulated houses, and when they cannot afford to move closer to work or buy energy-efficient technology, they could go broke buying increasingly costly carbon credits. It has been proposed that the government take the money received in the auction of carbon credits and spend it on programs to improve insulation for low-income households or provide affordable means of commuting, but there are no estimates of how much could be accomplished with the funds available.

Mark Roodhouse, a historian at the University of York, has drawn lessons from Britain’s wartime experience that he feels help explain the low level of interest. Britons accepted rationing in good spirits because they knew the war would end within a few years and were bolstered by promises that a future of plenty lay ahead. Likewise, writes Roodhouse, people might accept carbon rationing if “the scheme is a temporary measure during the transition from a high-carbon economy to a low-carbon economy and will be removed when the carbon price and/or consumption levels drop below a certain level.” But PCT schemes, in view of the very deep reductions necessary to avoid climate disaster, all envision the ceiling on emissions lowering year by year well into the future at a rate that would outpace any conceivable improvements in efficiency or renewable energy. They are, in effect, permanent schemes that would utterly transform our lives and could not be sold to the public as anything else.

During World War II, there simply was not enough gasoline to go around, so people were not permitted to buy as much as they could afford to buy. Today, even after everyone’s demand for fuel is satisfied, there is some left over. (Even though that will not always be the case, the economy behaves as if it will.) It’s far harder to institute rationing in a world of such apparent abundance than it is in a world of obvious scarcity. So will carbon rationing have to wait until supplies of fossil fuels are depleted to a point that falls far short of demand? If so, it may be too late.

BIRTH RATIONING? One wing of the climate movement has argued for almost half a century that unless decisive action is taken to halt or reverse human population growth, all other efforts to prevent runaway climate change or other catastrophes will fail. For example, J. Kenneth Smail wrote in 2003, “Earth’s long-term sustainable carrying capacity, at what most would define as an ‘adequate’ to ‘moderately comfortable’ standard of living, is probably not much greater than 2–3 billion people.” Given that, he argued, “time is short, with a window for implementation that will last no more than the next 50–75 years, and perhaps considerably less. A deliberate program of population stabilization and reduction should have begun some two or more generations ago (say in 1950, when human numbers were ‘only’ 2.5 billion and demographic momentum more easily arrested) and certainly cannot be delayed much longer.” Prominent in the population-reduction campaign is the London-based Population Matters. Leading figures in the organization have argued that the number of people on Earth should somehow be reduced by 60%. Among Population Matters’s initiatives is the selling of “pop offsets,” through which anyone can, on paper at least, cancel out the greenhouse impact of, say, a Caribbean vacation by contributing money that will go to fund birth-control programs. This, critics have said, can be interpreted as giving people the opportunity to say, “If I can stop them having babies, we won’t have to change our ways.”

Although the global emergency described by population activists would appear to be a problem far too formidable to be resolved by voluntary means, few have proposed mandatory curbs—and with good reason. In most countries, public reaction against laws governing reproduction would almost certainly be far more negative than reactions against rationing of, say, gasoline. It would not just be anti-contraception politicians, anti-environment libertarians, and pro-procreation religious leaders who would condemn any form of reproductive rationing; the resistance would be almost universal.

Privatization and commodification of water are unsustainable and fragmenting forces ecologically, temporally, geographically, socially, ethically, politically, and even economically. The examples are legion: Atlanta’s water privatization debacle; failed privatization ventures in Laredo, Texas; Felton, California; and East Cleveland, Ohio; the severely stressed Colorado River; the conflict-ridden Upper Klamath Basin in Oregon and northeastern California; the unresolved and unsustainable demands on the Apalachicola–Chattahoochee–Flint River System in the Southeastern U.S.; the once-declining but now-recovering Mono Lake; excessive groundwater pumping in Tucson, Arizona, Tampa, Florida, San Antonio, Texas, and Massachusetts’s Ipswich River Basin; and even emerging water crises. Given such problems, many local governments have de-privatized, once again treating water as a public utility. In times of severe water shortage, water utilities, whether public or private, face no choice but to impose rationing by slapping restrictions on lawn watering and other outdoor uses in order to achieve an immediate reduction.

It isn’t a simple matter to enforce indoor water conservation (what with customers lingering longer under their low-flow showerheads), whereas lawn watering and car washing are highly visible to neighbors and local authorities. Therefore, by far the most common methods of water rationing in America aim at outdoor use.

Mandatory restrictions reduced water use, whereas voluntary restrictions were of little value;

When Los Angeles was hit with a rash of major water-main blowouts in the summer of 2009—some of which sent geysers several stories into the air and one of which opened a sinkhole that half-swallowed a responding fire truck—officials who tried to identify a cause were initially stumped. Then they realized that the incidence of line breaks had risen immediately following the initiation of a citywide water rationing mandate. Faced with severe drought, the city had, for the first time ever, limited lawn watering to only two days per week. But the schedule did not involve rotating days; everyone was supposed to use water for lawns only on Mondays and Thursdays. Experts suspected (and a report six months later confirmed) that sudden pressure drops in aging pipes, caused when sprinklers came on across the city on Mondays and Thursdays, followed by pressure spikes when the sprinklers were turned off, caused many of the blowouts.

raising the price of water to match the cost of providing it is more cost-effective than non-price approaches such as installation of low-flow fixtures or imposition of lawn-watering restrictions.

raising prices is always politically unpopular, while restricting outdoor water use during a drought creates a sense of common adversity and shared burden (and people are more likely to assign the blame to natural causes rather than public officials). Therefore, “water demand management through non-price techniques is the overwhelmingly dominant paradigm in the United States,” the report concludes.

the most effective conservation messages during droughts in Australia were ones alerting consumers to the fact that the water level was dropping in the reservoir that supplied the target area.

When information on reservoir level was provided by electronic roadside signs, people responded to alarming drops in the reservoir by reducing their consumption.

Around the world, 1.8 billion more people have safe drinking water today than had it in 1990.

almost a billion people lack adequate access to water; more than 60 percent of those people live in sub-Saharan Africa or South Asia.

In the city’s wealthier neighborhoods, which receive up to ten hours of water supply each day, there are few big domestic guzzlers. Thirty percent of Mumbai homes exceed the government’s goal of 26 gallons per person daily, but only 7 percent get more than 37 gallons. In comparison, per-capita domestic consumption of publicly supplied water in the United States is about 100 gallons daily.

Mumbai’s municipal government has made plans to more than double its water supply by 2021. That may bring some relief to people in Kadam Chawl, who get only three to seven gallons of water per person, but it also will require construction of new dams that will submerge tens of thousands more acres and dozens of villages east of the city, driving those villagers off their land. Many will end up in Mumbai, filling their hundis each day with water piped from those new reservoirs.33

the castellum’s purpose was to rotate the supply, providing most or all of the incoming water to only one-third of the city during any given part of the day. Today, such rotation of water services is the most common method of rationing in cities with a shortage of water. A survey of 22 cities and countries in Asia, Africa, and Latin America found that water services were provided for only part of the day rather than continuously: less than four hours in Karachi, Pakistan; four hours in Delhi and Chennai, India; six hours in Haiti, Honduras, and Kathmandu; three to ten hours in Dar es Salaam, Tanzania; and 17 hours in Manila.

Rationing water in time rather than quantity is a blunt instrument, providing anything from a deficient to an ample supply to each home or business. The more generous the ration, the lower the incentive to conserve. Higher-income areas often receive more hours per day of service, and more affluent residents also have the economic means to install and fill storage tanks that would allow relief from rationing and subvert the goal of reducing consumption. Few among the poor enjoy such a buffer.

Governments around the world, including Egypt’s, learned long ago about the dangers of exposing their citizens’ daily food needs to the whims of global markets. Recognizing the existence of a right to food—and anticipating the political and social upheaval that can happen if that right is not fulfilled—many countries routinely buy and store staple grains and other foods and then ration consumers’ access to those stores at subsidized prices. Like water- and energy-rationing policies, existing public food-distribution systems are designed to provide fair, affordable access to a limited pool of resources. As we will see, no food-ration program so far has been entirely successful; nevertheless, a ration card or food stamp booklet may be all that stands between a family and a week (or even a lifetime) of hunger. And with food rationing, unlike carbon rationing, we can be guided by experience. Public provision of subsidized food rations has been pursued in countries as diverse as Argentina, Bangladesh, Brazil, Chile, China, Colombia, Cuba, Egypt, India, Iran, Iraq, Israel, Mexico, Morocco, Pakistan, the Philippines, the Soviet Union, Sri Lanka, Sudan, Thailand, Venezuela, and Zambia.  In those and other nations, we can find examples ranging from excellent to terrible, sometimes within the same country.

The frequent failure of markets acting alone to direct food to where it is most needed can be seen not only in hungry nations but in well-fed ones as well. In the United States, the share of households that suffer from food insecurity has climbed to almost one in six, according to the Department of Agriculture.

In countries rich and poor, the publicly funded monthly food ration has two faces: first, staving off widespread hunger and the societal disruption that could well arise in its absence; and second, making it possible for the private sector to pay below-subsistence wages. In the latter role, the ration can provide a subsidy to business, allow society to tolerate high unemployment and underemployment, or help undemocratic governments keep a lid on political unrest.

Many food-ration plans take the form of a public distribution system (PDS) that provides specific food items to consumers on a regular, usually monthly, basis. In the typical PDS, consumer food rations are situated at the downstream end of a network that buys up and stockpiles grain at guaranteed prices, imposes price controls and provides subsidies, and rations the stocks it provides to retailers. Today’s typical PDS parallels World War II–era food systems in that ration entitlements are adjusted according to the available supply. But most contemporary PDSs differ from wartime floor-and-ceiling rationing in that they provide only for a floor, a minimum supply. The supply of, say, subsidized wheat controlled by the government constitutes only a portion (if sometimes a large portion) of total consumption; there are usually, but not always, other supplies legally available on the open market outside the system as well, for those who can afford them.

India, for example, normally maintains national stocks of between 60 and 70 million metric tons. To do that, the government buys up about one-third of the nation’s crop each year—enough to fill a five-thousand-mile-long train of hopper cars stretching from Delhi to Casablanca.

Widespread food insecurity is a risk few governments are willing to run, so few PDSs have been completely eliminated. The usual compromise has been to “target” food assistance to households in greatest need. Attempts to replace PDSs with cash payments have typically failed; for many, apparently, money is not an adequate substitute for an ensured ration of food that can be touched, smelled, and tasted.

India and Egypt have operated PDSs of staggering sizes for decades, ones that continue to evolve, while Iraq and Cuba have run comprehensive rationing systems to deal with absolute scarcity.

in 1997, with the liberalization of the Indian economy, the PDS was narrowed to target primarily the low end of the income scale. Two types of ration cards were created, one for “above poverty line” (APL) and one for “below poverty line” (BPL) households.

in and around Mumbai, as in most places, it is rationed kerosene for cooking that is in greatest demand: two liters can be bought on the ration for thirty rupees, whereas the market price is eighty. When kerosene is in stock, word gets around Diwa fast and endless queues form. A longtime customer named Vijaya says that once her mother stood in line at the Diwa shop from noon until four o’clock without reaching the head of the queue; at that point, Vijaya took over her spot and waited another four hours before finally getting the fuel.15

The government buys, at a fixed price, every kilogram of grain that Indian farmers offer to sell to it, but before hauling their harvest to the government market, farmers sell as much of their higher-quality grain as possible on the private market, where they can get a better price. The government gets what’s left, and PDS customers who cannot afford to buy most or all of their food from private markets often get stuck with inferior rice and wheat. Likewise, in some areas the vegetable oil is usually low-quality palm oil, which, customers say, they would rather use in ceremonial lamps than in their food. Offering low-quality foods saves the government some money, but it’s also a means of informal targeting; it discourages middle-class families, who choose to buy better food on the open market instead.

Targeting errors have become so serious that by 2010, 50% to 75% of all families living below the poverty line in the most poverty-stricken states had no subsidized ration card. And India is not alone. In virtually every country that has ended universal food rations in favor of targeting, the rate of errors of exclusion has increased.

Between 1998 and 2005 per-capita calorie consumption decreased for all income groups in India. For the poor it was probably due to stagnant incomes and rising food prices. Over 36% of adult Indian women are now underweight, one of the highest rates in the world. About 70% of the grain that could be used to feed the poor is lost, some to spoilage but even more to illegal diversion into the cash market for profit. Moneylenders often hold a client’s ration card until the debt is paid. In West Bengal people grew so desperate that they looted shops, ran their owners out of their homes, set fire to barrels of kerosene, and fought police with rods, swords, and brickbats.

Studies have shown that food aid is more effective than cash at getting people fed. For example, in the U.S., a dollar of SNAP benefits (the program formerly known as Food Stamps) increases nutrient consumption by two to ten times as much as a dollar in cash.

Food rations are as old as civilization. In Mesopotamia five thousand years ago, the families of semi-free laborers received rations amounting to about 60 quarts of barley and 2 to 5 quarts of cooking oil a month, plus four pounds of wool a year, and occasionally wheat, flour, bread, fish, dates, peas, and cloth.

Ancient Egypt was also fueled by public distribution of food rations, and today the government manages one of the most comprehensive food subsidy and ration systems in the world. On almost any street corner in Cairo one can buy a falafel sandwich for a few cents; the fava beans in the sandwich are subsidized, as are the oil, the bread, and more. There is little room to maneuver: Egypt has just one-tenth of an acre of cropland per person, one of the smallest ratios in the world. As a result, Egypt is the world’s largest importer of wheat and imports much of its other food as well. The government stockpiles food, the key item being wheat in the form of flour or bread, of which Egyptians eat on average a pound daily. By 2012, 63 million Egyptians had access to the ration system. One of the reasons Mubarak was thrown out in a revolution was his plan to phase out universal food subsidies.

Iraq imports 70% of its food. Citizens depend on public handouts of food; even so, 15% have problems getting enough to eat, and a third of Iraqis would be in trouble if the distribution system were terminated. There are 45,000 licensed “food and flour” agents who distribute the food.

In Cuba before the 1959 revolution, most poor people suffered from malnutrition. One of Castro’s goals was to ensure that no one went hungry, so the government set about boosting the earnings of the poor, pushing toward fuller employment, and enacting other reforms. Basic social services were free, including schooling, medical care, medicine, and social security, as were water, sports facilities, and public telephones, while electricity, gas, and public transport were subsidized. People could then afford to eat more and better food, which drove up prices, so price controls were imposed and the government took charge of distribution. Demand continued to outstrip supply, and people hoarded. Rather than simply make some staples available to the poor at low prices, the government instituted a ration system for all Cubans in 1962, with each household registered at a specific shop. Despite the U.S. embargo, childhood malnutrition was almost completely eradicated and public health improved through the 1980s, all this despite little improvement in food production.

When Soviet oil shipments stopped in 1990, as the Soviet Union was collapsing, fertilizers, pesticides, and other products stopped coming as well. The government still managed to supply two-thirds of the people’s calories. Today, Cuba imports 80% of its food, and 70% of Cubans depend on rationed food, some of it sent from China. Raul Castro has tried to increase agricultural production through land redistribution, higher prices paid to farmers, and legalization of some private sales. Some rations have been cut as well, so urban farming continues. Without rationing, many Cubans would face starvation.

UNITED STATES: despite having 4,000 calories of food available per person per day, the U.S. has one of the largest food-assistance programs in the world: SNAP.

As grain production falls, soil erodes, and fresh water vanishes, it is likely that more countries will adopt food rationing or subsidies.


Peak oil in the news 2023

Preface. Conventional crude oil production may have already peaked in 2008 at 69.5 million barrels per day (mb/d), according to the International Energy Agency (IEA 2018, p. 45). The U.S. Energy Information Administration puts global peak crude oil production later, in 2018 at 82.9 mb/d (EIA 2020), because it includes tight oil, oil sands, and deep-sea oil. It will take several years of lower oil production to be sure a peak has occurred; regardless, world production has been on a plateau since 2005.

What has saved the world from oil decline so far is unconventional tight “fracked” oil, which accounted for 63% of total U.S. crude oil production in 2019 and 83% of global oil growth from 2009 to 2019. So it’s a big deal if we’ve reached the peak of fracked oil, because that would also mark the peak of oil overall, conventional and unconventional, and the beginning of the decline of all oil.

Peak oil means peak everything else, including coal and natural gas, because diesel transportation makes all goods possible.

Peak oil in the news:

Geiger J (2023) Permian Growth Expected To Be Slow Before Peaking In 2030 (my comment: meanwhile the other seven shale/fracked oil basins are declining): The largest oil basin in the USA is set to grow its output by 40% over the next 7 years, according to a Bloomberg survey of four major forecasters—but that growth will be slow and steady instead of explosive. Major forecasters surveyed by Bloomberg are predicting that production from the Permian basin will hit 7.86 million barrels per day in 2030—the point at which they are expecting it to peak. According to the most recent Drilling Productivity Report published on Monday, the Energy Information Administration estimates that crude oil production in April 2023 will average 5.681 million bpd, rising to 5.694 million bpd in May. If the forecasters are correct, their 2030 predictions would mean a production increase for the basin of more than 2 million bpd from today’s levels, or somewhere near 38 percent.
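A quick check of the arithmetic behind those figures:

```python
# Quick check of the article's figures: growth from the April 2023 estimate of
# 5.681 million bpd to the forecast 2030 peak of 7.86 million bpd.
current, peak_2030 = 5.681, 7.86            # million barrels per day
increase = peak_2030 - current
print(round(increase, 2), f"{increase / current:.0%}")  # ~2.18 million bpd, ~38%
```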

U.S. Shale Boom Shows Signs of Peaking as Big Oil Wells Disappear (WSJ) – The boom in oil production that over the last decade made the U.S. the world’s largest producer is waning, suggesting the era of shale growth is nearing its peak.  Frackers are hitting fewer big gushers in the Permian Basin, America’s busiest oil patch, the latest sign they have drained their catalog of good wells. Shale companies’ biggest and best wells are producing less oil, according to data reviewed by The Wall Street Journal.  The Journal reported last year companies would exhaust their best U.S. inventory in a handful of years if they resumed the breakneck drilling pace of prepandemic times. Now, recent results out of the Permian, spread across West Texas and New Mexico, are mimicking the onset of a production plateau that has taken place at other, more mature U.S. shale plays. Oil production from the best 10% of wells drilled in the Delaware portion of the Permian was 15% lower last year, on average, than top 2017 wells, according to data from analytics firm FLOW Partners LLC. Meanwhile, the average well put out 6% less oil than the prior year, according to an analysis of data from analytics firm Novi Labs.  The atrophy of once-booming sweet spots has big implications for the global oil market, which years ago could count on rapidly growing U.S. oil production to blunt the effects of supply disruptions and rising demand. Without successful exploration or technological advances, the industry’s inventory constraints are expected eventually to push companies to tap lower quality wells that would require higher oil prices to attract investment, industry executives say.

Exxon Has a Message for Europe: Don’t Mess With Oil and Gas (Bloomberg) – Exxon Mobil Corp. Chief Executive Officer Darren Woods used his prime-time address at the CERAWeek by S&P Global conference in Houston to criticize European energy policy, which in his view has gone too far.  After years of deterring investment in oil and gas, Europe had no alternative but to burn coal to keep the lights on when Russian gas stopped flowing. But by continuing to hit the oil and gas industry with “punishing” measures like the European Union’s windfall tax, things will only get worse, Woods said Tuesday.  “What we saw in Europe should be a wake-up call,” he said. Exxon “stepped back and reevaluated” its investment strategy in the continent, he continued. Meanwhile, it’s plowing ahead with new projects in the US, where the Inflation Reduction Act offers incentives for companies rather than punitive measures. For all the criticism Exxon has endured, including from its own shareholders, Woods insists that his decision not to waver from oil and gas has been the right one. Investors appear to agree with him: Exxon’s stock is up 134% since the pandemic, nearly double the performance of its closest peer Chevron Corp.

Peak-Oil Fears Cast Shadow Over US Supply Outlook as Costs Climb (Bloomberg) – The specter of peak oil that haunted global energy markets during the first decade of the 21st century is once again rearing its head. Major US oil producers are warning that production from one of the fastest growing sources of supply appears likely to top out by the end of the decade. ConocoPhillips and Pioneer Natural Resources Co. are among those saying the American shale-oil juggernaut soon will be a spent force as the best drilling targets are exhausted and financing new wells gets more difficult. “You see the plateau on the horizon,” ConocoPhillips Chief Executive Officer Ryan Lance said during a panel discussion at the CERAWeek by S&P Global conference in Houston on Tuesday. Once US crude production peaks around 2030, it’ll plateau for a time before commencing a decline, he added. Although output in the world’s biggest economy is set to continue rising for at least a few more years, the zenith is fast approaching, executives and analysts said. “I wish we could get world leaders to realize that we need hydrocarbons for another 50 years,” said Pioneer CEO Scott Sheffield, who expects US production to peak in five or six years.

Opec back in charge as US shale oil growth flags, executives say (FT) – The Opec cartel is back in control of the world oil market as the shale revolution peters out, according to a number of industry executives who warned of higher prices for crude in the year ahead.  Despite recent record profits, the heads of American shale producers told the Financial Times that rising costs and investor pressure to return cash to shareholders would continue to hamper US supply growth.  The dim outlook is a reversal from the previous decade, when the shale industry’s ability to quickly boost production prompted claims the sector had become a new “swing producer” with market power to rival Opec kingpin Saudi Arabia.  “I think the people that are in charge now are three countries — and they’ll be in charge the next 25 years,” said Scott Sheffield, chief executive of Pioneer Natural Resources, the biggest independent US shale oil company. “Saudi first, UAE second, Kuwait third.”

OPEC Concerned About Demand Slowdown in US, Europe, Chief Says (Bloomberg) – OPEC’s top official said slowing oil demand in Europe and the US are posing a concern for the global market, even as Asia experiences “phenomenal” growth. “We see a divided market — almost like two markets,” OPEC Secretary-General Haitham Al-Ghais said at the CERAWeek by S&P Global conference Tuesday. Ensuring “security of demand” in regions where inflation is crimping consumption is as critical as ensuring supplies, he said. For now, though, rebounding demand in Asia will help keep the market broadly balanced in the first half of the year, with global consumption seen rising by 2.3 million barrels a day to average 101.87 million barrels a day in 2023, according to the latest report from OPEC’s Vienna-based secretariat. After that, the market is expected to tighten as global inventories decline and the 23-nation OPEC+ coalition, led by Saudi Arabia, aims to keep production levels unchanged for the rest of the year. The group is due to hold an online monitoring meeting to review market conditions early next month, followed by a full ministerial conference in June to set policy for the rest of the year.

Russian Oil Gets More Pricey as Pool of Asian Buyers Expands (Bloomberg) – The price of Russian crude and fuel is rising for buyers in Asia as a pool of bigger customers from China and India expands, putting pressure on smaller refiners that have eagerly consumed the cheap oil. Offer levels for Russia’s Urals and ESPO crude, as well as fuel oil, surged over the past weeks, according to traders with knowledge of the matter. Increased interest from Chinese state-owned and large private refiners such as Sinopec, PetroChina Co. and Hengli Petrochemical Co., in addition to a jump in Indian demand, led cargoes to be snapped up at higher prices, they said. Offers for ESPO, which is typically loaded at Kozmino port, were close to $6.50 to $7 a barrel below ICE Brent on a delivered basis to China, while flagship Urals shipped from western ports was around $10 under the same benchmark, said traders. That’s an increase of as much as $2 from last month, marking one of the steepest jumps since sanctions were imposed on Dec. 5, they added.

U.S. regulator orders lower pressure on Keystone pipeline system after spill (Reuters) – The U.S. pipeline regulator said on Tuesday it would require TC Energy to reduce operating pressure on more than 1,000 additional miles (1,609 kilometers) of its Keystone pipeline that spilled about 13,000 barrels of oil in rural Kansas in December. The Canadian pipeline operator completed a controlled restart of the 622,000-barrel-per-day (bpd) pipeline to Cushing, Oklahoma, on Dec. 29 last year, returning it to service after a 21-day outage following the biggest U.S. oil spill in nine years. The amended corrective action order requires TC Energy to keep the operating pressure on the affected pipeline segment under the previously agreed upon 923 pounds per square inch gauge (psig) limit. The pressure reductions will lower crude flow rates on the entire Keystone system, which could affect price differentials between U.S. and Canadian crude, Credit Suisse analyst Andrew Kuske said in a note.

And Copper:

China Copper Exports Set to Jump as Domestic Demand Disappoints (Bloomberg) – China is poised to export a significant volume of copper in coming weeks, a relatively infrequent occurrence that underscores a tepid demand recovery in the biggest market. At least four major smelters are planning to deliver between 23,000 and 45,000 tons of refined copper in total to London Metal Exchange depots in Asia, according to people with knowledge of the sales, who asked not to be identified because the plans are private. The burst of exports confirms China’s weak economic rebound, with manufacturing and construction still gearing up after Covid-related disruptions over the past year. Market inventories have jumped recently, and a key question is whether the wave of outbound shipments will extend beyond this month. China imports huge amounts of copper in various forms, but refined metal occasionally flows out when there’s too much domestic supply and not enough elsewhere. Price differentials now make it more lucrative to sell at least some material on the world market. There are some signs that copper demand in the country is picking up. On-exchange inventories have begun to fall and the nation’s manufacturing index for February notched its highest reading in more than a decade.

Alice Friedemann  www.energyskeptic.com  Author of Life After Fossil Fuels: A Reality Check on Alternative Energy; When Trucks Stop Running: Energy and the Future of Transportation”, Barriers to Making Algal Biofuels, & “Crunch! Whole Grain Artisan Chips and Crackers”.  Women in ecology  Podcasts: WGBH, Jore, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 &278, Peak Prosperity,  Index of best energyskeptic posts

***

2023. What the end of the US shale revolution would mean for the world. Financial Times

The “golden age of shale mining” is coming to an end in the US, which could lead to unpredictable consequences. Rapid shale growth delivered a huge stimulus to the global economy by keeping fuel prices low, and freed Washington’s hands to take on oil-rich rivals in Iran and Venezuela without fear of economic blowback for voters at home. Shale supply over the past 15 years sheltered Americans from the sky-high natural gas and fuel prices rattling other economies, giving its industry a competitive advantage and its households more disposable income.

Today new wells are yielding less oil. “The aggressive growth era of US shale is over,” says Scott Sheffield, chief executive of Pioneer Natural Resources, the country’s biggest shale producer. “The shale model definitely is no longer a swing producer.”

The world could enter “an even more volatile energy market” after the end of the “aggressive growth” phase of US oil production, since global demand for oil is not decreasing significantly. Oil-importing states face a new round of higher prices, and OPEC’s influence could strengthen back to what it was in 2009 and persist, unless fracking can be done successfully in other countries.

Tobben S (2023) North Dakota’s Bakken shale “holding back” U.S. oil production. Bloomberg 

https://www.worldoil.com/news/2023/2/8/north-dakota-s-bakken-shale-holding-back-u-s-oil-production/

North Dakota’s Bakken shale — traditionally one of America’s larger, busier shales — is showing signs of maturation, threatening to hold back U.S. oil production as the world thirsts for more crude. Mature wells that are producing more gas than expected are hurting crude output from the Bakken, the Energy Information Administration said in an email on 2/7. The deteriorating performance was the main reason the agency cut its estimate for 2024 U.S. oil output to 12.65 MMbpd from an earlier projection of 12.8 million.

Messler D (2023) Will U.S. shale ever return to its glory days? oilprice  

https://oilprice.com/Energy/Crude-Oil/Will-US-Shale-Ever-Return-To-Its-Glory-Days.html

In November 2022 I discussed signs of maturing of the drilling inventory: the number of Top-Tier locations is on the decline, and operators are being forced to drill less productive, lower-tier reservoirs to maintain output. While there is no concern about production from shale reservoirs “falling off a cliff” anytime soon, the fall-off in productivity in at least some basins is becoming noticeable by a number of metrics.

The shale “boom” is about thirteen years old, if you date it from 2010, and there are clear signs that the meteoric growth of past years is behind us. Our expectations are for shale production to maintain an upward trend for most of this year, but with an arc that flattens as 2023 wears on and then begins to bend down, perhaps as soon as the end of this year. This opinion flies in the face of generally accepted industry and governmental forecasts that show shale production exceeding 10 mm BOEPD at the end of 2024.

One of the problems with shale production is that the best locations in the various shale basins are well past their prime, and shale output could be in the early stages of a death by a thousand cuts. (A death, I remind you again, is decades hence, but hanging out there nonetheless.) This declaration runs in stark contrast to other data taken from the EIA Drilling Productivity Report (DPR), which show shale production on the increase. How is this possible? The report shows production increasing significantly only in the Bakken, rising a little in the Permian, and barely staying even in the Anadarko, Appalachia, Eagle Ford, Haynesville, and Niobrara.

There are observable trends indicating there may be a peak coming. If this trend continues, the only way to maintain output at or above current levels will be with increased drilling.

Wethe D, Crowley K (Dec 12, 2022) Oil Wells Creeping Into Texas Cities Herald Shale Era’s Twilight. The world can no longer rely on the Permian Basin to keep crude prices in check, with new wells pumping less oil per foot drilled. Bloomberg.

https://www.bloomberg.com/news/articles/2022-12-12/oil-wells-creeping-into-texas-cities-herald-shale-era-s-twilight

An uptick in drilling within the city limits signals that the very best rock in one of the world’s most prolific oil fields has already been tapped. In the shale boom’s early days, with so much crude-soaked land up for grabs elsewhere in the Permian Basin, there was little reason to deal with the red tape needed to bore underneath populated areas. But with over two-thirds of the Permian’s premium land now drilled, according to BMO Capital Markets, producers are seeking more permits than ever to burrow beneath Midland and its 130,000 residents. 

Observers have long been predicting shale’s demise or heralding its rebirth. But this time is different: After years of honing their craft to boost output, producers in the Permian’s two main zones are pumping less oil per foot drilled in each new well, not more. Analysts say the Permian could reach a production plateau within five years. Wells drilled this year produced between 8% and 13% less oil per lateral foot than a year earlier.

That’s a problem that reaches far beyond Texas. US shale, led by the Permian, has provided 90% of global oil output growth in the past decade. A shale slowdown means the world can no longer rely on the US to be its swing oil supplier, capable of ramping up or down quickly to temper a volatile market. It complicates the Biden administration’s efforts to tame pump prices, and it hands more power back to OPEC as Russia’s invasion of Ukraine upends oil and gas supply.

Most of the top-tier land has already been developed in the Permian and in the Bakken of North Dakota, the top-producing shale regions. That leaves explorers with a lower inventory of the most valuable yet-to-be-drilled sites.  “We’re going to run out of inventory in the next four to six years,” said James West, an analyst at Evercore ISI. “We probably saw it earlier in other shales, which is why we left those other shales and moved so much activity into the Permian. It’s now rearing its ugly head in the Permian.”

Brower D (2020) Shale binge has spoiled US reserves, top investor warns. Financial Times

A fracking binge in the American shale industry has permanently damaged the country’s oil and gas reserves, threatening hopes for a production recovery and US energy independence, according to one of the sector’s top investors.

Wil VanLoh, chief executive of Quantum Energy Partners, a private equity firm that through its portfolio companies is the biggest US driller after ExxonMobil, said too much fracking had “sterilised a lot of the reservoir in North America”.

“That’s the dirty secret about shale,” Mr VanLoh told the Financial Times, noting wells had often been drilled too closely to one another. “What we’ve done for the last five years is we’ve drilled the heart out of the watermelon.”

Soaring shale production in recent years took US crude output to 13m barrels a day this year and brought a rise in oil exports, allowing President Donald Trump to proclaim an era of “American energy dominance”. 

Total US oil reserves have more than doubled since the start of the century as hydraulic fracturing, or fracking, and horizontal drilling unleashed reserves previously considered out of reach.

[Chart: US oil production, millions of barrels a day, has tumbled this year]

But the pandemic-induced crash, which sent US crude prices to less than zero in April, has devastated a shale patch that was already out of favor with Wall Street for its failure to generate profits, even while it made the country the world’s biggest oil and gas producer. 

The number of operating rigs has collapsed by more than 60% since the start of the year. US output is now about 11m barrels a day, according to the US Energy Information Administration, or 15% less than the peak.

[Chart: number of rigs; US drilling activity has plummeted]

“Even if we wanted to, I don’t think we could get much above 13m” barrels a day, Mr VanLoh said. “I don’t think it’s physically possible, because we’ve messed up so much reservoir. I would argue that what the US was touting three or four years ago, in theoretical deliverability, is nowhere close to what we think it is now.”

He said operators had carried out “massive fracks” that created “artificial, permanent porosity”, inadvertently reducing the pressure in reservoirs and therefore the available oil. 

The comments will cause alarm in the shale patch, given the crucial role of investors such as QEP in financing the onshore American oil business.

The Houston-based investor has assets under management of about $11.2bn, according to data provider PitchBook, and is one of the few private equity groups still focused on shale.

Private companies account for about 30% of US oil production excluding Alaska and Hawaii, about 2.7m b/d, according to consultancy Rystad Energy.

Other private equity investors have warned that the shale growth story has ended, despite an oil-price recovery in recent months to about $40 a barrel.

“They were making lousy returns at $65 a barrel,” said Adam Waterous, head of Waterous Energy Fund. “You need at least north of $70 before you start achieving a cost-of-capital return in the US oil business.”

Production from the Permian, the prolific shale field of west Texas and New Mexico, peaked even before the crash this year, Mr Waterous said. At current prices, only 25 per cent of US shale was economical, he added.

Analysts also say US oil output will struggle to recover its previous heights. Artem Abramov, head of shale research at Rystad, said production would remain between 11.5m b/d and 12m b/d at $40 a barrel. S&P Global Platts forecasts a decline to 10m b/d by mid-2021. 

But the crash could create opportunities for QEP in the short term, Mr VanLoh said, especially if prices recovered.

While listed producers had mostly sworn off production growth, some QEP-backed companies, such as DoublePoint Energy — which played host to Mr Trump during the president’s July fundraising visit to Midland, Texas — were increasing drilling activity. It says its Permian acreage can still be profitable at current prices.

QEP’s portfolio companies would increase output this year by about 25 per cent, to 500,000 barrels of oil and gas a day, Mr VanLoh said. 

“The next five years may be the best five years we’ve ever had for hydrocarbon investing,” he said. 

But he is also adjusting his company’s strategy to reflect investors’ growing disquiet with fossil fuels. QEP’s new 10-year fund, VIII, would be launched in early November, he said, with $1bn of about $5.6bn of total capital commitment reserved for “energy transition” investments. 

The company would soon appoint someone from outside the oil industry to enforce better environment, social and governance performance at QEP’s companies, Mr VanLoh added. 

He said they would have to improve ESG “because ultimately you’re not going to get capital from us if you don’t . . . And we won’t be able to get capital from our limited partners if you don’t.”

A more efficient US shale sector would re-emerge from the crash, Mr VanLoh said, but it would be smaller and require a reduced workforce. He is now advising his friends’ children not to pursue a career in oil.

“I tell all of them — honestly, it’s a very risky bet and, if I were you, I would not go into it today.”

A comment from a reader:

“A technical point: shale has a lot of porosity as a function of its many tiny grain particles, but no permeability as the pore necks are too small to allow flow. All reservoirs are under pressure due to the burial of the reservoir. Fracking creates instantaneous fissures into which hydrocarbons are spontaneously released, and the pressure keeps flow going. But that pressure flags quickly. Hence the rapid decline of shale wells, typically 50% of initial flow after 6 months. The bigger the frack, the higher the first release in the perimeter of the well. Refracking will not do anything as the matrix is already destroyed. Poor practice does the reservoir in, but the frackers needed to keep it up to maintain production. The fag end of the oil business.”

References

EIA. 2020. International Energy Statistics. Petroleum and other liquids. Data Options. U.S. Energy Information Administration. Select crude oil including lease condensate to see data past 2017.

IEA. 2018. International Energy Agency World Energy Outlook 2018, figures 1.19 and 3.13. International Energy Agency.


Forests make the wind that carries the rain across continents

Preface. This is a controversial theory that, if true, “could help explain why, despite their distance from the oceans, the remote interiors of forested continents receive as much rain as the coasts—and why the interiors of unforested continents tend to be arid. It also implies that forests from the Russian taiga to the Amazon rainforest don’t just grow where the weather is right. They also make the weather.

This biotic pump theory has faced a head wind of criticism, especially from climate modelers, some of whom say its effects are negligible and dismiss the idea completely. The dispute has made Makarieva an outsider: a theoretical physicist in a world of modelers, a Russian in a field led by Western scientists, and a woman in a field dominated by men.”

Keep in mind that the idea that forests could generate rain wasn’t accepted until 1979. The biotic pump theory was first proposed in 2007; it hasn’t been proven or disproved, and it is hard to test.

Their theory may also explain why cyclones rarely form in the South Atlantic Ocean: The Amazon and Congo rainforests between them draw so much moisture away that there is too little left to fuel hurricanes.


***

Pearce F. 2020. Weather makers. Forests supply the world with rain. A controversial Russian theory claims they also make wind. Science 368: 1302-5.

For more than a decade, Makarieva has championed a theory, developed with Victor Gorshkov, her mentor and colleague at the Petersburg Nuclear Physics Institute (PNPI), on how Russia’s boreal forests, the largest expanse of trees on Earth, regulate the climate of northern Asia. It is simple physics with far-reaching consequences, describing how water vapor exhaled by trees drives winds: winds that cross the continent, taking moist air from Europe, through Siberia, and on into Mongolia and China; winds that deliver rains that keep the giant rivers of eastern Siberia flowing; winds that water China’s northern plain, the breadbasket of the most populous nation on Earth.

With their ability to soak up carbon dioxide and breathe out oxygen, the world’s great forests are often referred to as the planet’s lungs. But Makarieva and Gorshkov, the latter of whom died last year, say they are its beating heart, too. “Forests are complex self-sustaining rainmaking systems, and the major driver of atmospheric circulation on Earth,” Makarieva says. They recycle vast amounts of moisture into the air and, in the process, also whip up winds that pump that water around the world. The first part of that idea—forests as rainmakers—originated with other scientists and is increasingly appreciated by water resource managers in a world of rampant deforestation. But the second part, a theory Makarieva calls the biotic pump, is far more controversial.

Many meteorology textbooks still teach a caricature of the water cycle, with ocean evaporation responsible for most of the atmospheric moisture that condenses in clouds and falls as rain. The picture ignores the role of vegetation and, in particular, trees, which act like giant water fountains. Their roots capture water from the soil for photosynthesis, and microscopic pores in leaves release unused water as vapor into the air. The process, the arboreal equivalent of sweating, is known as transpiration. In this way, a single mature tree can release hundreds of liters of water a day. With its foliage offering abundant surface area for the exchange, a forest can often deliver more moisture to the air than evaporation from a water body of the same size.

The importance of this recycled moisture for nourishing rains was largely disregarded until 1979, when Brazilian meteorologist Eneas Salati reported studies of the isotopic composition of rainwater sampled from the Amazon Basin. Water recycled by transpiration contains more molecules with the heavy oxygen-18 isotope than water evaporated from the ocean. Salati used this fact to show that half of the rainfall over the Amazon came from the transpiration of the forest itself.
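
The logic behind such an estimate can be written as a simple two-endmember mixing balance. The formula below is a generic sketch of that kind of method, not Salati’s actual data or calculation: if a fraction f of the rain derives from transpired water and the rest from ocean-evaporated moisture, the isotopic signature of the rain is the weighted average of the two sources.

\[
\delta^{18}O_{rain} = f\,\delta^{18}O_{transp} + (1-f)\,\delta^{18}O_{ocean}
\quad\Longrightarrow\quad
f = \frac{\delta^{18}O_{rain}-\delta^{18}O_{ocean}}{\delta^{18}O_{transp}-\delta^{18}O_{ocean}}
\]

A rainfall signature lying roughly halfway between the two endmembers implies f ≈ 0.5, that is, about half of the rain recycled by the forest.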

Salati and others surmised that a jet of humid air moving above the forest canopy carried much of this transpired moisture across the basin, and dubbed it a “flying river.” The Amazon flying river is now reckoned to carry as much water as the giant terrestrial river below it, says Antonio Nobre, a climate researcher at Brazil’s National Institute for Space Research.

For some years, flying rivers were thought to be limited to the Amazon. In the 1990s, Hubert Savenije, a hydrologist at the Delft University of Technology, began to study moisture recycling in West Africa. Using a hydrological model based on weather data, he found that, as one moved inland from the coast, the proportion of the rainfall that came from forests grew, reaching 90% in the interior. The finding helped explain why the interior Sahel region became drier as coastal forests disappeared over the past half-century.
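
How the land-derived share of rainfall can climb so high with distance from the coast is easy to illustrate with a toy cascade, shown below. This is my own sketch, not Savenije’s hydrological model, and both parameter values are arbitrary assumptions: a column of moist air moves inland in 100 km steps, rains out a fixed share of its moisture at each step, and the land returns a fixed share of that rain to the column by transpiration.

```python
# Toy 1-D moisture-recycling cascade (illustrative only; not the actual
# hydrological model used by Savenije or van der Ent). All numbers are
# arbitrary assumptions chosen to show the qualitative effect.

rain_fraction = 0.20       # share of the column's moisture raining out per 100 km step
recycle_fraction = 0.60    # share of that rain returned to the air by transpiration

ocean_moisture, land_moisture = 1.0, 0.0   # moisture of ocean vs. land origin

for step in range(1, 21):                  # 20 steps of 100 km = 2,000 km inland
    total = ocean_moisture + land_moisture
    land_share = land_moisture / total     # land-derived share of the rain at this step
    rain = rain_fraction * total
    # rain removes ocean- and land-derived moisture in proportion
    ocean_moisture -= rain * (ocean_moisture / total)
    land_moisture -= rain * (land_moisture / total)
    # transpiration returns part of the rain to the moving column
    land_moisture += recycle_fraction * rain
    if step % 5 == 0:
        print(f"{step * 100:5d} km inland: {100 * land_share:.0f}% of rain is land-derived")
```

With these made-up numbers the land-derived share of the rain rises from zero at the coast to roughly 70% about 1,000 km inland and over 90% by 2,000 km, qualitatively reproducing the gradient Savenije found.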

In 2010, Ruud van der Ent, a colleague of Savenije’s, and his co-workers reported the conclusion of their global moisture-tracking model: Globally, 40% of all precipitation comes from the land rather than the ocean. Often it is more. The Amazon’s flying river provides 70% of the rain falling in the Río de la Plata Basin, which stretches across southeastern South America. Van der Ent was most surprised to find that China gets 80% of its water from the west, mostly Atlantic moisture recycled by the boreal forests of Scandinavia and Russia. The journey involves several stages—cycles of transpiration followed by downwind rain and subsequent transpiration—and takes 6 months or more. “It contradicted previous knowledge that you learn in high school,” he says. “China is next to an ocean, the Pacific, yet most of its rainfall is moisture recycled from land far to the west.”

According to Makarieva and Gorshkov’s theory, forests supply not just the moisture, but also the winds that carry it. In 2007, in Hydrology and Earth System Sciences, they first outlined their vision for the biotic pump. It was provocative from the outset because it contradicted a long-standing tenet of meteorology: that winds are driven largely by the differential heating of the atmosphere. When warm air rises, it lowers the air pressure below it, in effect creating space at the surface into which air moves. In summer, for example, land surfaces tend to heat faster and draw in moist breezes from the cooler ocean.

Makarieva and Gorshkov argued that a second process can sometimes dominate. When water vapor from forests condenses into clouds, a gas becomes a liquid that occupies less volume. That reduces air pressure, and draws in air horizontally from areas with less condensation. In practice, it means condensation above coastal forests turbocharges sea breezes, sucking moist air inland where it will eventually condense and fall as rain. If the forests continue inland, the cycle can continue, maintaining moist winds for thousands of kilometers.
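
To get a feel for the magnitude involved, note that the pressure a parcel of air can lose through condensation is limited by the partial pressure of the water vapour it holds. The back-of-envelope sketch below is my own illustration, not a calculation from Makarieva and Gorshkov’s papers; the temperatures and the assumption of saturated air are illustrative, and it uses the standard Magnus approximation for saturation vapour pressure.

```python
# Rough upper bound on the pressure drop available from condensation:
# the partial pressure of water vapour in saturated air, via the Magnus
# approximation (valid roughly between 0 and 50 degC). Illustrative only.

import math

def saturation_vapour_pressure_kpa(t_celsius):
    """Magnus approximation for saturation vapour pressure, in kPa."""
    return 0.6112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

surface_pressure_kpa = 101.3  # assumed standard surface pressure

for t in (10, 20, 30):
    e_sat = saturation_vapour_pressure_kpa(t)
    share = 100 * e_sat / surface_pressure_kpa
    print(f"{t:2d} degC: vapour partial pressure ~{e_sat:.1f} kPa ({share:.1f}% of surface pressure)")
```

In warm, humid air this amounts to a few kilopascals, small next to total surface pressure but comparable to the horizontal pressure differences that drive weather systems.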

The theory inverts traditional thinking: It is not atmospheric circulation that drives the hydrological cycle, but the hydrological cycle that drives the mass circulation of air.

Douglas Sheil, a forest ecologist who became a supporter of the theory more than a decade ago, thinks of it as an embellishment of the flying river idea. “They are not mutually exclusive,” he says. “The pump offers an explanation of the power of the rivers.” He says the biotic pump could explain the “cold Amazon paradox.” From January to June, when the Amazon Basin is colder than the ocean, strong winds blow from the Atlantic to the Amazon—the opposite of what would be expected if they resulted from differential heating.

Even those who doubt the theory agree that forest loss can have far-reaching climatic consequences. Many scientists have argued that deforestation thousands of years ago was to blame for desertification in the Australian Outback and West Africa. The fear is that future deforestation could dry up other regions, for example, tipping parts of the Amazon rainforest to savanna. Agricultural regions of China, the African Sahel, and the Argentine Pampas are also at risk.

In 2018, atmospheric researcher Patrick Keys and his colleagues used a model similar to van der Ent’s to track the sources of rainfall for 29 global megacities. They found that 19 were highly dependent on distant forests for much of their water supply, including Karachi, Pakistan; Wuhan and Shanghai, China; and New Delhi and Kolkata, India. “Even small changes in precipitation arising from upwind land-use change could have big impacts on the fragility of urban water supplies,” he says.

Some modeling even suggests that by removing a moisture source, deforestation could alter weather patterns beyond the paths of flying rivers. Just as El Niño, a shift in currents and winds in the tropical Pacific Ocean, is known to influence weather in faraway places through “teleconnections,” so, too, could Amazon deforestation diminish rainfall in the U.S. Midwest and snowpack in the Sierra Nevada.

Another example: a study showed that as much as 40% of the total rainfall in the Ethiopian highlands, the main source of the Nile, is provided by moisture recycled from the forests of the Congo Basin. Egypt, Sudan, and Ethiopia are negotiating a long-overdue deal on sharing the waters of the Nile. But such an agreement would be worthless if deforestation in the Congo Basin, far from those three nations, dries up the moisture source.

If true, water resource managers in the Midwest, the Sierra Nevada, and the Middle East need to care as much about deforestation in the faraway Amazon and Congo basins as about their local water.

The biotic pump would raise the stakes even further, with its suggestion that forest loss alters not just moisture sources, but also wind patterns. The theory, if correct, would have crucial implications for planetary air circulation patterns, especially those that take moist air inland to continental interiors.

 


How to make biomass last longer

Preface. Before fossil fuels, societies made their forests last longer than we do today. Felling and killing tall trees was rare except for special needs such as building bridges or ships. For firewood and other needs, trees were cut in a way that encouraged new shoots to sprout, which could be harvested every few years. Coppiced forests were more biodiverse than today’s plantations, since many kinds of wood were planted, each kind suited to different purposes. Coppiced firewood had to be harvested within about 9-18 miles (15-30 km) over land, since the wood was hauled on carts over bad roads. Beyond that distance, the energy in the wood was less than the energy of the pasture needed to feed the horse that hauled it.


***

Kris De Decker. 2020. How to make biomass energy sustainable again. Low-tech magazine.

Nowadays, most wood is harvested by killing trees. Before the Industrial Revolution, a lot of wood was harvested from living trees, which were coppiced. The principle of coppicing is based on the natural ability of many broad-leaved species to regrow from damaged stems or roots – damage caused by fire, wind, snow, animals, pathogens, or (on slopes) falling rocks. Coppice management involves the cutting down of trees close to ground level, after which the base – called the “stool” – develops several new shoots, resulting in a multi-stemmed tree.


A coppice stool. Image: Geert Van der Linden.


A recently coppiced patch of oak forest. Image: Henk vD. (CC BY-SA 3.0)


Coppice stools in Surrey, England. Image: Martinvl (CC BY-SA 4.0)

When we think of a forest or a tree plantation, we imagine it as a landscape stacked with tall trees. However, until the beginning of the twentieth century, at least half of the forests in Europe were coppiced, giving them a more bush-like appearance. [1] The coppicing of trees can be dated back to the stone age, when people built pile dwellings and trackways crossing prehistoric fenlands using thousands of branches of equal size – a feat that can only be accomplished by coppicing. [2]


The approximate historical range of coppice forests in the Czech Republic (above, in red) and in Spain (below, in blue). Source: “Coppice forests in Europe”, see [1]

Ever since then, the technique formed the standard approach to wood production – not just in Europe but almost all over the world. Coppicing expanded greatly during the eighteenth and nineteenth centuries, when population growth and the rise of industrial activity (glass, iron, tile and lime manufacturing) put increasing pressure on wood reserves.

Short Rotation Cycles

Because the young shoots of a coppiced tree can exploit an already well-developed root system, a coppiced tree produces wood faster than a tall tree. Or, to be more precise: although its photosynthetic efficiency is the same, a tall tree provides more biomass below ground (in the roots) while a coppiced tree produces more biomass above ground (in the shoots) – which is clearly more practical for harvesting. [3] Partly because of this, coppicing was based on short rotation cycles, often of around two to four years, although both yearly rotations and rotations up to 12 years or longer also occurred.


Coppice stools with different rotation cycles. Images: Geert Van der Linden. 

Because of the short rotation cycles, a coppice forest was a very quick, regular and reliable supplier of firewood. Often, it was cut up into a number of equal compartments that corresponded to the number of years in the planned rotation. For example, if the shoots were harvested every three years, the forest was divided into three parts, and one of these was coppiced each year. Short rotation cycles also meant that it took only a few years before the carbon released by the burning of the wood was compensated by the carbon that was absorbed by new growth, making a coppice forest truly carbon neutral. In very short rotation cycles, new growth could even be ready for harvest by the time the old growth wood had dried enough to be burned.
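
The bookkeeping of such a rotation is simple enough to sketch in a few lines of code. The example below is purely illustrative (hypothetical compartment names, not from the article): with a three-year rotation the coppice is split into three equal compartments and exactly one is cut each year, so every stool gets three years to regrow while the annual firewood supply stays roughly constant.

```python
# Illustrative sketch of a coppice rotation: one compartment per year of
# the rotation, cut in a fixed cycle (hypothetical compartment names).

rotation_years = 3
compartments = ["A", "B", "C"]  # one compartment per year of the rotation

for year in range(1, 8):
    cut = compartments[(year - 1) % rotation_years]
    print(f"Year {year}: cut compartment {cut}, let the other two regrow")
```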

In some tree species, the stump sprouting ability decreases with age. After several rotations, these trees were either harvested in their entirety and replaced by new trees, or converted into a coppice with a longer rotation. Other tree species resprout well from stumps of all ages, and can provide shoots for centuries, especially on rich soils with a good water supply. Surviving coppice stools can be more than 1,000 years old.

Biodiversity

A coppice can be called a “coppice forest” or a “coppice plantation”, but in reality it was neither a forest nor a plantation – perhaps something in between. Although managed by humans, coppice forests were not environmentally destructive; on the contrary, harvesting wood from living trees instead of killing them is beneficial for the life forms that depend on them. Coppice forests can have a richer biodiversity than unmanaged forests, because they always contain areas with different stages of light and growth. None of this is true in industrial wood plantations, which support little or no plant and animal life, and which have longer rotation cycles (of at least twenty years).


Coppice stools in the Netherlands. Image: K. Vliet (CC BY-SA 4.0)


Sweet chestnut coppice at Flexham Park, Sussex, England. Image: Charlesdrakew, public domain.

Our forebears also cut down tall, standing trees with large-diameter stems – just not for firewood. Large trees were only “killed” when large timber was required, for example for the construction of ships, buildings, bridges, and windmills. [4] Coppice forests could contain tall trees (a “coppice-with-standards”), which were left to grow for decades while the surrounding trees were regularly pruned. However, even these standing trees could be partly coppiced, for example by harvesting their side branches while they were alive (shredding).

Multipurpose Trees

The archetypical wood plantation promoted by the industrial world involves regularly spaced rows of trees in even-aged, monocultural stands, providing a single output – timber for construction, pulpwood for paper production, or fuelwood for power plants. In contrast, trees in pre-industrial coppice forests had multiple purposes. They provided firewood, but also construction materials and animal fodder.

The targeted wood dimensions, determined by the use of the shoots, set the rotation period of the coppice. Because not every type of wood was suited for every type of use, coppiced forests often consisted of a variety of tree species at different ages. Several age classes of stems could even be rotated on the same coppice stool (“selection coppice”), and the rotations could evolve over time according to the needs and priorities of the economic activities.


A small woodland with a diverse mix of coppiced, pollarded and standard trees. Image: Geert Van der Linden.  

Coppiced wood was used to build almost anything that was needed in a community. [5] For example, young willow shoots, which are very flexible, were braided into baskets and crates, while sweet chestnut prunings, which do not expand or shrink after drying, were used to make all kinds of barrels. Ash and goat willow, which yield straight and sturdy wood, provided the material for making the handles of brooms, axes, shovels, rakes and other tools.

Young hazel shoots were split along the entire length, braided between the wooden beams of buildings, and then sealed with loam and cow manure – the so-called wattle-and-daub construction. Hazel shoots also kept thatched roofs together. Alder and willow, which have almost limitless life expectancy under water, were used as foundation piles and river bank reinforcements. The construction wood that was taken out of a coppice forest did not diminish its energy supply: because the artefacts were often used locally, at the end of their lives they could still be burned as firewood.


Harvesting leaf fodder in Leikanger kommune, Norway. Image: Leif Hauge. Source: [19]

Coppice forests also supplied food. On the one hand, they provided people with fruits, berries, truffles, nuts, mushrooms, herbs, honey, and game. On the other hand, they were an important source of winter fodder for farm animals. Before the Industrial Revolution, many sheep and goats were fed with so-called “leaf fodder” or “leaf hay” – leaves with or without twigs. [6]

Elm and ash were among the most nutritious species, but sheep also got birch, hazel, linden, bird cherry and even oak, while goats were also fed with alder. In mountainous regions, horses, cattle, pigs and silk worms could be given leaf hay too. Leaf fodder was grown in rotations of three to six years, when the branches provided the highest ratio of leaves to wood. When the leaves were eaten by the animals, the wood could still be burned.

Pollards & Hedgerows

Coppice stools are vulnerable to grazing animals, especially when the shoots are young. Therefore, coppice forests were usually protected against animals by building a ditch, fence or hedge around them. In contrast, pollarding allowed animals and trees to be mixed on the same land. Pollarded trees were pruned like coppices, but to a height of at least two metres to keep the young shoots out of reach of grazing animals.


Pollarded trees in Segovia, Spain. Image: Ecologistas en Acción.

Wooded meadows and wood pastures – mosaics of pasture and forest – combined the grazing of animals with the production of fodder, firewood and/or construction wood from pollarded trees. “Pannage” or “mast feeding” was the method of sending pigs into pollarded oak forests during autumn, where they could feed on fallen acorns. The system formed the mainstay of pork production in Europe for centuries. [7] The “meadow orchard” or “grazed orchard” combined fruit cultivation and grazing — pollarded fruit trees offered shade to the animals, while the animals could not reach the fruit but fertilised the trees.


Forest or pasture? Something in between. A “dehesa” (pig forest farm) in Spain. Image by Basotxerri (CC BY-SA 4.0).


Cattle grazes among pollarded trees in Huelva, Spain. (CC BY-SA 2.5)


A meadow orchard surrounded by a living hedge in Rijkhoven, Belgium. Image: Geert Van der Linden.

While agriculture and forestry are now strictly separated activities, in earlier times the farm was the forest and vice versa. It would make a lot of sense to bring them back together, because agriculture and livestock production – not wood production – are the main drivers of deforestation. If trees provide animal fodder, meat and dairy production should not lead to deforestation. If crops can be grown in fields with trees, agriculture should not lead to deforestation. Forest farms would also improve animal welfare, soil fertility and erosion control.

Line Plantings

Extensive plantations could consist of coppiced or pollarded trees, and were often managed as a commons. However, coppicing and pollarding were not techniques seen only in large-scale forest management. Small woodlands in between fields or next to a rural house and managed by an individual household would be coppiced or pollarded. A lot of wood was also grown as line plantings around farmyards, fields and meadows, near buildings, and along paths, roads and waterways. Here, lopped trees and shrubs could also appear in the form of hedgerows, thickly planted hedges. [8]


Hedge landscape in Normandy, France, around 1940. Image: W Wolny, public domain.


Line plantings in Flanders, Belgium. Detail from the Ferraris map, 1771-78. 

Although line plantings are usually associated with the use of hedgerows in England, they were common in large parts of Europe. In 1804, English historian Abbé Mann expressed his surprise when he wrote about his trip to Flanders (today part of Belgium): “All fields are enclosed with hedges, and thick set with trees, insomuch that the whole face of the country, seen from a little height, seems one continued wood”. Typical for the region was the large number of pollarded trees. [8]

Like coppice forests, line plantings were diverse and provided people with firewood, construction materials and leaf fodder. However, unlike coppice forests, they had extra functions because of their specific location. [9] One of these was plot separation: keeping farm animals in, and keeping wild animals or cattle grazing on common lands out. Various techniques existed to make hedgerows impenetrable, even for small animals such as rabbits. Around meadows, hedgerows or rows of very closely planted pollarded trees (“pollarded tree hedges”) could stop large animals such as cows. If willow wicker was braided between them, such a line planting could also keep small animals out. [8]


Detail of a yew hedge. Image: Geert Van der Linden. 


Hedgerow. Image: Geert Van der Linden. 


Pollarded tree hedge in Nieuwekerken, Belgium. Image: Geert Van der Linden.


Coppice stools in a pasture. Image: Jan Bastiaens.

Trees and line plantings also offered protection against the weather. Line plantings protected fields, orchards and vegetable gardens against the wind, which could erode the soil and damage the crops. In warmer climates, trees could shield crops from the sun and fertilize the soil. Pollarded lime trees, which have very dense foliage, were often planted right next to wattle-and-daub buildings in order to protect them from wind, rain and sun. [10]

Dunghills were protected by one or more trees, preventing the valuable resource from evaporating due to sun or wind. In the yard of a watermill, the wooden water wheel was shielded by a tree to prevent the wood from shrinking or expanding in times of drought or inactivity. [8]


A pollarded tree protects a water wheel. Image: Geert Van der Linden. 


Pollarded lime trees protect a farm building in Nederbrakel, Belgium. Image: Geert Van der Linden.

Location Matters

Along paths, roads and waterways, line plantings had many of the same location-specific functions as on farms. Cattle and pigs were herded along dedicated droveways lined with hedgerows, coppices and/or pollards. When the railroads appeared, line plantings prevented collisions with animals. They protected road travellers from the weather, and marked the route so that people and animals would not get off the road in a snowy landscape. They prevented soil erosion at riverbanks and hollow roads.

All functions of line plantings could also be served by dead wood fences, which can be moved more easily than hedgerows, take up less space, don’t compete for light and food with crops, and can be ready in a short time. [11] However, in times and places where wood was scarce a living hedge was often preferred (and sometimes required), because it was a continuous wood producer, while a dead wood fence was a continuous wood consumer. A dead wood fence may save space and time on the spot, but it implies that the wood for its construction and maintenance is grown and harvested elsewhere in the surroundings.


Pollarded tree hedge in Belgium. Image: Geert Van der Linden.

Local use of wood resources was maximised. For example, the tree planted next to the waterwheel was not just any tree. It was red dogwood or elm, the wood best suited for constructing the interior gearwork of the mill. When a new part was needed for repairs, the wood could be harvested right next to the mill. Likewise, line plantings along dirt roads were used for the maintenance of those roads. The shoots were tied together in bundles and used as a foundation or to fill up holes. Because the trees were coppiced or pollarded and not cut down, no function was ever at the expense of another.

Nowadays, when people advocate for the planting of trees, targets are set in terms of forested area or the number of trees, and little attention is given to their location – which could even be on the other side of the world. However, as these examples show, planting trees close by and in the right location greatly increases their usefulness.

Shaped by Limits

Coppicing has largely disappeared in industrial societies, although pollarded trees can still be found along streets and in parks. Their prunings, which once sustained entire communities, are now considered waste products. If it worked so well, why was coppicing abandoned as a source of energy, materials and food? The answer is short: fossil fuels. Our forebears relied on coppice because they had no access to fossil fuels, and we don’t rely on coppice because we have.


Most obviously, fossil fuels have replaced wood as a source of energy and materials. Coal, gas and oil took the place of firewood for cooking, space heating, water heating and industrial processes based on thermal energy. Metal, concrete and brick – materials that had been around for many centuries – only became widespread alternatives to wood after they could be made with fossil fuels, which also brought us plastics. Artificial fertilizers – products of fossil fuels – boosted the supply and the global trade of animal fodder, making leaf fodder obsolete. The mechanization of agriculture – driven by fossil fuels – led to farming on much larger plots along with the elimination of trees and line plantings on farms.

Less obvious, but at least as important, is that fossil fuels have transformed forestry itself. Nowadays, the harvesting, processing and transporting of wood is heavily supported by the use of fossil fuels, while in earlier times they were entirely based on human and animal power – which themselves get their fuel from biomass. It was the limitations of these power sources that created and shaped coppice management all over the world.


Harvesting wood from pollarded trees in Belgium, 1947. Credit: Zeylemaker, Co., Nationaal Archief (CCO)


Transporting firewood in the Basque Country. Source: Notes on pollards: best practices’ guide for pollarding. Gipuzkoako Foru Aldundia-Diputación Foral de Gipuzkoa, 2014.

Wood was harvested and processed by hand, using simple tools such as knives, machetes, billhooks, axes and (later) saws. Because the labor requirements of harvesting trees by hand increase with stem diameter, it was cheaper and more convenient to harvest many small branches instead of cutting down a few large trees. Furthermore, there was no need to split coppiced wood after it was harvested. Shoots were cut to a length of around one metre, and tied together in “faggots”, which were an easy size to handle manually.


To transport firewood, our forebears relied on animal drawn carts over often very bad roads. This meant that, unless it could be transported over water, firewood had to be harvested within a radius of at most 15-30 km from the place where it was used. [12] Beyond those distances, the animal power required for transporting the firewood was larger than its energy content, and it would have made more sense to grow firewood on the pasture that fed the draft animal. [13] There were some exceptions to this rule. Some industrial activities, like iron and potash production, could be moved to more distant forests – transporting iron or potash was more economical than transporting the firewood required for their production. However, in general, coppice forests (and of course also line plantings) were located in the immediate vicinity of the settlement where the wood was used.

In short, coppicing appeared in a context of limits. Because of its faster growth and versatile use of space, it maximized the local wood supply of a given area. Because of its use of small branches, it made manual harvesting and transporting as economical and convenient as possible.

Can Coppicing be Mechanized?

From the twentieth century onwards, harvesting was done by motor saw, and since the 1980s wood has increasingly been harvested by powerful vehicles that can fell entire trees and cut them up on the spot in a matter of minutes. Fossil fuels have also brought better transportation infrastructure, which has unlocked wood reserves that were inaccessible in earlier times. Consequently, firewood can now be grown on one side of the planet and consumed on the other.

The use of fossil fuels adds carbon emissions to what used to be a completely carbon neutral activity, but much more important is that it has pushed wood production to a larger – unsustainable – scale. [14] Fossil fueled transportation has destroyed the connection between supply and demand that governed local forestry. If the wood supply is limited, a community has no choice but to make sure that the wood harvest rate and the wood renewal rate are in balance. Otherwise, it risks running out of fuelwood, craft wood and animal fodder, and being abandoned.


Mechanically harvested willow coppice plantation. Shortly after coppicing (right); three-year-old growth (left). Image: Lignovis GmbH (CC BY-SA 4.0).

Likewise, fully mechanized harvesting has pushed forestry to a scale that is incompatible with sustainable forest management. Our forebears did not cut down large trees for firewood, because it was not economical. Today, the forest industry does exactly that because mechanization makes it the most profitable thing to do. Compared to industrial forestry, where one worker can harvest up to 60 m3 of wood per hour, coppicing is extremely labor-intensive. Consequently, it cannot compete in an economic system that fosters the replacement of human labor with machines powered by fossil fuels.

Some scientists and engineers have tried to solve this by demonstrating coppice harvesting machines. [15] However, mechanization is a slippery slope. The machines are only practical and economical on somewhat larger tracts of woodland (>1 ha) which contain coppiced trees of the same species and the same age, with only one purpose (often fuelwood for power generation). As we have seen, this excludes many older forms of coppice management, such as the use of multipurpose trees and line plantings. Add fossil fueled transportation to the mix, and the result is a type of industrial coppice management that brings few improvements.


Coppiced trees along a brook in ‘s Gravenvoeren, Belgium. Image: Geert Van der Linden. 

Sustainable forest management is essentially local and manual. This doesn’t mean that we need to copy the past to make biomass energy sustainable again. For example, the radius of the wood supply could be increased by low energy transport options, such as cargo bikes and aerial ropeways, which are much more efficient than horse or ox drawn carts over bad roads, and which could be operated without fossil fuels. Hand tools have also improved in terms of efficiency and ergonomics. We could even use motor saws that run on biofuels – a much more realistic application than their use in car engines. [16]

The Past Lives On

This article has compared industrial biomass production with historical forms of forest management in Europe, but in fact there was no need to look to the past for inspiration. The 40% of the global population in poorer societies who still burn wood for cooking, water heating and/or space heating are not clients of industrial forestry. Instead, they obtain firewood in much the same ways that we did in earlier times, although the tree species and the environmental conditions can be very different. [17]

A 2017 study calculated that wood consumption by people in “developing” societies – accounting for 55% of the global wood harvest and 9-15% of total global energy consumption – causes only 2-8% of anthropogenic climate impacts. [18] Why so little? Because around two-thirds of the wood harvested in developing societies is harvested sustainably, the scientists write. People collect mainly dead wood, they grow a lot of wood outside the forest, they coppice and pollard trees, and they prefer the use of multipurpose trees, which are too valuable to cut down. The motives are the same as those of our ancestors: people have no access to fossil fuels and are thus tied to a local wood supply, which needs to be harvested and transported manually.


African women carrying firewood. (CC BY-SA 4.0)

These numbers confirm that it is not biomass energy itself that is unsustainable. If the whole of humanity lived as the 40% that still burns biomass regularly, climate change would not be an issue. What is really unsustainable is a high-energy lifestyle. We obviously cannot sustain a high-tech industrial society on coppice forests and line plantings alone. But the same is true for any other energy source, including uranium and fossil fuels.

Written by Kris De Decker. Proofread by Alice Essam. 

References: 

[1] Multiple references:

Unrau, Alicia, et al. Coppice forests in Europe. University of Freiburg, 2018. 

Notes on pollards: best practices’ guide for pollarding. Gipuzkoako Foru Aldundia-Diputación Foral de Gipuzkoa, 2014.

A study of practical pollarding techniques in Northern Europe. Report of a three month study tour August to November 2003, Helen J. Read.

Aarden wallen in Europa, in “Tot hier en niet verder: historische wallen in het Nederlandse landschap”, Henk Baas, Bert Groenewoudt, Pim Jungerius and Hans Renes, Rijksdienst voor het Cultureel Erfgoed, 2012.

[2] Logan, William Bryant. Sprout lands: tending the endless gift of trees. WW Norton & Company, 2019.

[3] Holišová, Petra, et al. “Comparison of assimilation parameters of coppiced and non-coppiced sessile oaks“. Forest-Biogeosciences and Forestry 9.4 (2016): 553. 

[4] Perlin, John. A forest journey: the story of wood and civilization. The Countryman Press, 2005.

[5] Most of this information comes from a Belgian publication (in Dutch language): Handleiding voor het inventariseren van houten beplantingen met erfgoedwaarde. Geert Van der Linden, Nele Vanmaele, Koen Smets en Annelies Schepens, Agentschap Onroerend Erfgoed, 2020. For a good (but concise) reference in English, see Rotherham, Ian. Ancient Woodland: history, industry and crafts. Bloomsbury Publishing, 2013.

[6] While leaf fodder was used all over Europe, it was especially widespread in mountainous regions, such as Scandinavia, the Alps and the Pyrenees. For example, in Sweden in 1850, 1.3 million sheep and goats consumed a total of 190 million sheaves annually, for which at least 1 million hectares of deciduous woodland was exploited, often in the form of pollards. The harvest of leaf fodder predates the use of hay as winter fodder. Branches could be cut with stone tools, while cutting grass requires bronze or iron tools. While most coppicing and pollarding was done in winter, harvesting leaf fodder logically happened in summer. Bundles of leaf fodder were often put in the pollarded trees to dry. References:

Logan, William Bryant. Sprout lands: tending the endless gift of trees. WW Norton & Company, 2019.

A study of practical pollarding techniques in Northern Europe. Report of a three month study tour August to November 2003, Helen J. Read.

Slotte H., “Harvesting of leaf hay shaped the Swedish landscape“, Landscape Ecology 16.8 (2001): 691-702. 

[7] Wealleans, Alexandra L. “Such as pigs eat: the rise and fall of the pannage pig in the UK“. Journal of the Science of Food and Agriculture 93.9 (2013): 2076-2083.

[8] This information is based on several Dutch language publications: 

Handleiding voor het inventariseren van houten beplantingen met erfgoedwaarde. Geert Van der Linden, Nele Vanmaele, Koen Smets en Annelies Schepens, Agentschap Onroerend Erfgoed, 2020.

Handleiding voor het beheer van hagen en houtkanten met erfgoedwaarde. Thomas Van Driessche, Agentschap Onroerend Erfgoed, 2019

Knotbomen, knoestige knapen: een praktische gids. Geert Van der Linden, Jos Schenk, Bert Geeraerts, Provincie Vlaams-Brabant, 2017.

Handleiding: Het beheer van historische dreven en wegbeplantingen. Thomas Van Driessche, Paul Van den Bremt and Koen Smets. Agentschap Onroerend Erfgoed, 2017.

Dirkmaat, Jaap. Nederland weer mooi: op weg naar een natuurlijk en idyllisch landschap. ANWB Media-Boeken & Gidsen, 2006.

For a good source in English, see: Müller, Georg. Europe’s Field Boundaries: Hedged banks, hedgerows, field walls (stone walls, dry stone walls), dead brushwood hedges, bent hedges, woven hedges, wattle fences and traditional wooden fences. Neuer Kunstverlag, 2013.

If line plantings were mainly used for wood production, they were planted at some distance from each other, allowing more light and thus a higher wood production. If they were mainly used as plot boundaries, they were planted more closely together. This diminished the wood harvest but allowed for a thicker growth.

[9] In fact, coppice forests could also have a location-specific function: they could be placed around a city or settlement to form an impenetrable obstacle for attackers, either by foot or by horse. They could not easily be destroyed by shooting, in contrast to a wall. Source: [5]

[10] Lime trees were even used for fire prevention. They were planted right next to the baking house in order to stop the spread of sparks to wood piles, haystacks and thatched roofs. Source: [5]

[11] The fact that living hedges and trees are harder to move than dead wood fences and posts also has practical advantages. In Europe until the French era, there was no land register and boundaries were physically indicated in the landscape. The surveyor’s work was sealed with the planting of a tree, which is much harder to move on the sly than a pole or a fence. Source: [5]

[12] And, if it could be brought in over water from longer distances, the wood had to be harvested within 15-30 km of the river or coast. 

[13] Sieferle, Rolf Peter. The Subterranean Forest: energy systems and the industrial revolution. White Horse Press, 2001.

[14] On different scales of wood production, see also: 

Jalas, Mikko, and Jenny, Rinkinen. “Stacking wood and staying warm: time, temporality and housework around domestic heating systems“, Journal of Consumer Culture 16.1 (2016): 43-60.

Rinkinen, Jenny. “Demanding energy in everyday life: insights from wood heating into theories of social practice.” (2015).

[15] Vanbeveren, S.P.P., et al. “Operational short rotation woody crop plantations: manual or mechanised harvesting?” Biomass and Bioenergy 72 (2015): 8-18.

[16] However, chainsaws can have adverse effects on some tree species, such as reduced growth or greater ability to transfer disease. 

[17] Multiple sources that refer to traditional forestry practices in Africa:

Leach, Gerald, and Robin Mearns. Beyond the woodfuel crisis: people, land and trees in Africa. Earthscan, 1988. 

Leach, Melissa, and Robin Mearns. “The lie of the land: challenging received wisdom on the African environment.” (1998)

Cline-Cole, Reginald A. “Political economy, fuelwood relations, and vegetation conservation: Kasar Kano, Northern Nigeria, 1850-1915.” Forest & Conservation History 38.2 (1994): 67-78.

[18] Multiple references:

Bailis, Rob, et al. “Getting the number right: revisiting woodfuel sustainability in the developing world.” Environmental Research Letters 12.11 (2017): 115002

Masera, Omar R., et al. “Environmental burden of traditional bioenergy use.” Annual Review of Environment and Resources 40 (2015): 121-150.

Study downgrades climate impact of wood burning, John Upton, Climate Central, 2015.

[19] Haustingsskog. [revidert] Rettleiar for restaurering og skjøtsel, Garnås, Ingvill; Hauge, Leif ; Svalheim, Ellen, NIBIO RAPPORT | VOL. 4 | NR. 150 | 2018. 
