Using manure for fertilizer in the future – it won’t be easy

Animals produce 44 times more manure than humans in the U.S.

Preface. At John Jeavons&#8217; Biointensive workshop back in 2003, I learned that phosphorus is limited and mostly being lost to oceans and other waterways after exiting sewage treatment plants. He said it can be dangerous to use human manure without proper handling; he wasn&#8217;t going to cover it at the workshop, but urged us to keep it in mind for the future.

Modern fertilizers, made with the Nobel Prize-winning method of using natural gas as feedstock and energy source, can increase crop production up to 5 times, but at a tremendous cost in poor soil health and pollution (see Peak soil). Fossil fuels will inevitably decline someday, forcing us back to organic agriculture and to using crop wastes, animal and human manure again.

Below are excerpts from three sources.

The first is about North Korea. Despite tremendous efforts to use all available manure, this country is a barren, destroyed landscape that can grow little food, as McKenna describes in Inside North Korea&#8217;s Environmental Collapse.

The second section describes what it was like to live over a century ago when human and animal manure was routinely collected.

The third is a NewScientist book review of The Wastewater Gardener: Preserving the planet, one flush at a time by Mark Nelson.

Alice Friedemann, author of &#8220;When Trucks Stop Running: Energy and the Future of Transportation&#8221;, 2015, Springer, and &#8220;Crunch! Whole Grain Artisan Chips and Crackers&#8221;. Podcasts: KunstlerCast 253, KunstlerCast 278, Peak Prosperity

Park, Y. 2015. In order to live: A North Korean girl’s journey to freedom. Penguin.

&#8220;One of the big problems in North Korea was a fertilizer shortage. When the economy collapsed in the 1990s, the Soviet Union stopped sending fertilizer to us and our own factories stopped producing it. Whatever was donated from other countries couldn&#8217;t get to the farms because the transportation system had also broken down. This led to crop failures that made the famine even worse. So the government came up with a campaign to fill the fertilizer gap with a local and renewable source: human and animal waste. Every worker and schoolchild had a quota to fill. Every member of the household had a daily assignment, so when we got up in the morning, it was like a war. My aunts were the most competitive.

“Remember not to poop in school! Wait to do it here!” my aunt in Kowon told me every day. Whenever my aunt in Songnam-ri traveled away from home and had to poop somewhere else, she loudly complained that she didn’t have a plastic bag with her to save it.

The big effort to collect waste peaked in January so it could be ready for growing season. Our bathrooms were usually far from the house, so you had to be careful neighbors didn&#8217;t steal from you at night. Some people would lock up their outhouses to keep the poop thieves away. At school the teachers would send us out into the streets to find poop and carry it back to class. If we saw a dog pooping in the street, it was like gold. My uncle in Kowon had a big dog who made a big poop&#8212;and everyone in the family would fight over it.

Our problems could not be fixed with tears and sweat, and the economy went into total collapse after torrential rains caused terrible flooding that wiped out most of the rice harvest…as many as a million North Koreans died from starvation or disease during the worst years of the famine.

When foreign food aid finally started pouring into the country to help famine victims, the government diverted most of it to the military, whose needs always came first. What food did get through to local authorities for distribution quickly ended up being sold on the black market”

Smil, V. 2017. Energy and Civilization: A History. MIT Press.

“In Chinese cities, high shares of human waste (70–80%) were recycled. Similarly, by the 1650s virtually all of Edo’s (today’s Tokyo) human wastes were recycled. But the usefulness of this practice is limited by the availability of such wastes and their low nutrient content, and the practice entails much repetitive, heavy labor. Even before storage and handling losses, the annual yield of human wastes averaged only about 3.3 kg N/capita. The collection, storage, and delivery of these wastes from cities to the surrounding countryside created large-scale malodorous industries, which even in Europe persisted for most of the 19th century before canalization was completed. By 1869, Paris was generating annually about 4.2 Mt N, about 40% from horse manure and about 25% from human wastes…

The recycling of much more copious animal wastes—which involved cleaning of stalls and sties, liquid fermentation or composting of mixed wastes before field applications, and the transfer of wastes to fields—was even more time-consuming. And because most manures have only about 0.5% N, and pre-application and field losses of the nutrient had commonly added up to 60% of the initial content, massive applications of organic wastes were required to produce higher yields.  Every conceivable organic waste was used as a fertilizer in traditional farming: pigeon, goat, sheep, cattle, all other dung, composts made of straw, lupines, chaff, bean stalks, husks, and oak leaves.

Any theoretical estimates of nitrogen in recycled wastes are far removed from its eventual contribution. This is because of very high losses (mainly through ammonia volatilization and leaching into groundwater) between voiding, collection, composting, application, and eventual nitrogen uptake by crops. These losses, commonly of more than two-thirds of the initial nitrogen, further increased the need to apply enormous quantities of organic wastes. Consequently, in all intensive traditional agricultures, large shares of farm labor had to be devoted to the unappealing and heavy tasks of collecting, fermenting, transporting, and applying organic wastes.

Barnett, A. August 2, 2014. Excellent excrement. Why do we waste human waste? We don’t have to. NewScientist.

Below is a review of The Wastewater Gardener: Preserving the planet, one flush at a time, by Mark Nelson, Synergetic Press.

Would you dine in an artificial wetland laced with human waste? In The Wastewater Gardener, Mark Nelson makes an inspiring case for a new ecology of water.

Rainforest destruction, melting glaciers, acid oceans, the fate of polar bears, whales and pandas. You can understand why we get worked up about them ecologically. But wastewater?

The problem is excrement. Psychologically, we seem to be deeply averse to the stuff and want to avoid contact whenever possible – we don’t even want to think about it, we just want it out of the way.

The solution, a universal pipe-based waste network, works well until domestic and industrial chemicals and other non-biological waste are mixed in. Treating the resulting toxic soup, as Mark Nelson explains in The Wastewater Gardener, is not only a major technological challenge, but also uses enormous amounts of one of the planet’s most limited resources: fresh water.

Each adult produces between 7 and 18 ounces of faeces per day. With our current population, that’s a yearly 500 million tonnes. Centralized sewage systems use between 1000 and 2000 tons of water to move each ton of faeces, and another 6000 to 8000 tons to process it.
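These figures are easy to sanity-check; below is a minimal back-of-envelope sketch, where the world population of about 7 billion is my assumption (taken from the review&#8217;s &#8220;7 billion-plus&#8221; later on):

```python
OZ_TO_KG = 0.02835   # one ounce in kilograms
POPULATION = 7.0e9   # assumption: ~7 billion people
DAYS = 365

# Yearly faeces production at the low and high ends of 7-18 oz/day.
for oz_per_day in (7, 18):
    tonnes_per_year = oz_per_day * OZ_TO_KG * POPULATION * DAYS / 1000.0
    print(f"{oz_per_day} oz/day -> {tonnes_per_year / 1e6:.0f} million tonnes/year")

# The low end reproduces the ~500 million tonne figure. Moving and
# processing it takes 1000-2000 plus 6000-8000 tonnes of water per tonne:
low = 500e6 * (1000 + 6000) / 1e12    # trillion tonnes of water per year
high = 500e6 * (2000 + 8000) / 1e12
print(f"water required: {low:.1f}-{high:.1f} trillion tonnes/year")
```

The low end of the range matches the 500 million tonne figure; the water totals show why Nelson treats fresh water, not the waste itself, as the binding constraint.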

Even then, this processed waste often ends up in waterways, affecting wildlife and communities downstream, and it eventually finds its way to the ocean. There it contributes to the process of eutrophication, which creates dead zones, killing coral reefs and other sea creatures.

But it doesn’t have to be like that. As head of Wastewater Gardens International, Nelson has traveled the world, developing and promoting artificial wetlands as the most logical way to use what we otherwise flush away.

Except that, as Nelson points out, with 7 billion-plus people, there really is no “away”. Besides, what the public purse pays to detox and dump can be put to profitable work, fertilising greenery for urban spaces and fruits and vegetables for domestic and commercial use, for example.

Less than 3% of Earth’s water is fresh, and only a tiny portion of that is easily available to us. Most of the water that standard sewage systems use to move human waste is drinkable. Diminishing water resources mean alternatives are pressingly needed. Wastewater gardens, where marsh plants are used to filter lavatory output and allow cleaned water to enter natural watercourses, are very much part of that solution.

Nelson clearly understands the yuck factor and goes to great lengths to show that having a shallow vat of human-waste-laced water nearby is far less vile than we might imagine, especially when it is covered by gravel and interlaced with plant roots. Restaurants with tables dotted between ponds containing the ever-filtering artificial wetlands provide convincing proof.

Constructed wetlands can take on big jobs, too: a mixture of papyrus, lotus and other plants has successfully and beautifully detoxified water from Indonesian batik-dyeing factories. This water had killed cows downstream and caused running battles between farmers and factory workers.

The Wastewater Gardener is not a “how to” story, but more a “how it was done” account. Nelson tells how these wetlands started to become mainstream in less than 30 years. With humility and humour, he recounts how, as a boy from New York City, he acquired hands-on ranching knowledge in New Mexico, then studied under American ecology guru, Howard Thomas Odum.

And stories of his experiences everywhere from urban Bali and the Australian outback to Morocco’s Atlas mountains and Mexico’s Cancún coast illustrate the gravelly, muddy evolution of his big idea. An inspiring read, not just for the smallest room.


North Korea’s less-known military threat: Biological weapons

Preface. Oh no! North Korea is probably developing bioweapons. Here are some excerpts from the New York Times article.



“Pound for pound, the deadliest arms of all time are not nuclear but biological. A single gallon of anthrax, if suitably distributed, could end human life on Earth.

Even so, the Trump administration has given scant attention to North Korea’s pursuit of living weapons — a threat that analysts describe as more immediate than its nuclear arms. President Trump did not broach the subject of biological weapons during his meeting with Mr. Kim in Singapore.

“North Korea is far more likely to use biological weapons than nuclear ones,” said Andrew C. Weber, a Pentagon official in charge of nuclear, chemical and biological defense programs under President Obama. “The program is advanced, underestimated and highly lethal.”

The North may want to threaten a devastating germ counterattack as a way of warding off aggressors. If so, its bioweapons would act as a potent deterrent.

But experts also worry about offensive strikes and agents of unusual lethality, especially the smallpox virus, which spreads person-to-person and kills a third of its victims.

Germ production is small-scale and far less expensive than creating nuclear arms. Deadly microbes can look like harmless components of vaccine and agricultural work. And living weapons are hard to detect, trace and contain.

Last century, most nations that made biological arms gave them up as impractical. Capricious winds could carry deadly agents back on users, infecting troops and citizens.  But today, analysts say, the gene revolution could be making germ weapons more attractive. They see the possibility of designer pathogens that spread faster, infect more people, resist treatment, and offer better targeting and containment. If so, North Korea may be in the forefront.

Several North Korean military defectors have tested positive for smallpox antibodies, suggesting they were either exposed to the deadly virus or vaccinated against it.

Smallpox claimed up to a half billion lives before it was declared eradicated. Today, few populations are vaccinated against the defunct virus.

Starting three years ago, Amplyfi, a strategic intelligence firm, detected a dramatic increase in North Korean web searches for “antibiotic resistance,” “microbial dark matter,” “cas protein” and similar esoteric terms, hinting at a growing interest in advanced gene and germ research.

Federal budgets for biodefense soared after the attacks but have declined in recent years.

“The level of resources going against this is pitiful,” said Mr. Weber, the former Pentagon official. “We are back into complacency.”

Dr. Robert Kadlec, the assistant secretary for preparedness and response at the Department of Health and Human Services, said, “We don’t spend half of an aircraft carrier on our preparedness for deliberate or natural events.”


Hydropower dams and the ways they destroy the environment

Preface. Hydropower comprises 71% of renewable energy worldwide. In the U.S. and Europe many dams have reached the end of their lifespan, so more are being torn down than built; in the U.S., 546 dams were removed between 2006 and 2014.

This post contains excerpts from and paraphrasing of three news stories:

  1. 11 Jan 2019: the costs of environmental damage and dam removal need to be added into calculations of whether to build a dam
  2. 19 November 2014 NewScientist article by Peter Hadfield, &#8220;River of the dammed&#8221;, about the Chinese Three Gorges project
  3. 2012: the greenhouse gas emissions of hydropower



Moran, E. F. et al. 2018. Sustainable hydropower in the 21st century, Proceedings of the National Academy of Sciences.

Before developing countries build more dams, they need to take the following into account when estimating costs:

  • Deforestation
  • Loss of biodiversity, especially fish species
  • Social consequences, such as the displacement of thousands of people and the financial harm done
  • Climate change, especially drought and evaporation from higher temperatures, will lead to less water stored for agriculture and electricity
  • The cost of removing a dam is so high that dams wouldn&#8217;t be built if this cost were included. Many new dams in Brazil and other nations will have a short lifespan &#8212; just 30 to 50 years

Hadfield, P. 2014. &#8220;River of the dammed&#8221;. NewScientist.

Dams typically last 60 to 100 years, but whether Three Gorges can last this long is questionable given the unexpectedly high amounts of silt building up. Since fossil fuels are finite, as is uranium, many see building more dams for hydropower as absolutely essential to keeping the electric grid up. Hydropower is also one of the few energy resources that can balance variable wind and solar. In addition, climate change is likely to lead to a state of permanent drought, and dams could help cope with water shortages. But dams have a dark side and we should proceed with caution, as you&#8217;ll see from the damage done by the Three Gorges dam.

Three Gorges dam stats:

  • 13 cities, 140 towns and 1350 villages drowned under the rising water of the Three Gorges dam requiring 1.3 million people to move
  • Required 27 million cubic metres of concrete to build the 2-kilometer-long dam
  • Provides 2% of China’s electricity
  • 32 turbines, each weighing as much as the Eiffel Tower
  • Trash litters the water — discarded plastic bottles, bags, algae and industrial crud — because garbage that used to be flushed downriver and out to sea is now trapped and backing up in the Yangtze’s numerous tributaries. It covers a massive area despite 3000 tonnes being collected a day.
  • The fish population has crashed: lower water levels, slower flow, and pollution have decimated the Yangtze&#8217;s fish and are also decreasing the productivity of fisheries in the South China Sea.
  • Drinking water is being affected because the dam is allowing more seawater than before to intrude into the Yangtze estuary.

Silt will drastically shorten the lifespan of Three Gorges

All dams are eventually rendered useless, typically within 30 to 200 years. But Three Gorges is silting up faster than expected. Far more silt is entering the river and being carried far further than predicted by the models, resulting in silt buildup to depths of up to 60 meters, almost two-thirds the maximum depth of the reservoir itself. The dam continues to accumulate silt at the rate of around 200 million cubic meters a year.

As a result, one of the two navigation channels that pass on either side of an island in the reservoir has been completely blocked, forcing ship traffic in both directions to follow a single channel.

Worse yet, silt is building up at the dam wall. A lot of it has to be cleared by dredgers to make sure it doesn&#8217;t interfere with the turbines that generate China&#8217;s electricity and the massive locks that allow ships to travel through.

The only way to slow the process is to build more dams upstream to trap the silt. Many were already being planned. If they are all built, the Yangtze will become a series of dams instead of a river.


The filling of the reservoir has also destabilized some of the steep slopes lining the dam. Landslides are common, blocking roads and threatening villages.

Holding water back in the reservoir also reduces the flow downstream, bringing forward the start of the Yangtze&#8217;s natural low-water period. The result is that the Yangtze&#8217;s once bountiful floodplain is now drying up. &#8220;China&#8217;s two largest freshwater lakes &#8211; Poyang and Dongting &#8211; now find themselves higher than the river,&#8221; says Patricia Adams of Probe International, a Canadian environmental foundation that has written a number of critical reports about the Three Gorges dam. &#8220;The effect of that is that their water is flowing into the river and essentially draining these very important flood plains.&#8221;

Like all deltas, the mouth of the Yangtze is a tug of war between deposition and erosion. Between 1050 and 1990, according to a 2003 study, deposition won. During these 900 years the Nanhui foreland, which marks the south bank of the estuary, grew nearly 13 kilometers. But more recently, erosion began to dominate.

The dam has made things even worse by nearly halving the amount of silt entering the delta, leading to a threefold jump in the erosion rate. This could become a major problem for China&#8217;s largest city, Shanghai, which sits only a meter above a sea level that is expected to rise up to 2 meters over the next century.

List of Serious Problems from The Guardian

  • The dam reservoir has been polluted by algae and chemical runoff that would normally have floated away had the dam not been built, and these pollutants continue to accumulate.
  • The weight of the extra water is being blamed for earthquake tremors, landslides and erosion of hills and slopes.
  • Because of the project’s instability and unpredictability, scientists are calling on the government to: establish water treatment plants, warning systems, shore up and reinforce riverbanks, boost funding for environmental protection and increase benefits to the displaced.
  • Some scientists are advocating the reestablishment of ecosystems that were destroyed by the project and are suggesting the additional movement of hundreds of thousands of residents to safer ground.
  • Before the project there were 1,392 freshwater reservoirs; these have become &#8220;dead water&#8221;, destroying the drinking water of over 300,000 people.
  • Boat traffic on the Yangtze River has been negatively affected as the depths and shallows of the river have been completely transformed and thousands of boats regularly run aground.
  • The design of the project has damaged the Yangtze River in that water no longer pushes mud and silt downstream but lets it stagnate above the dam.
  • While the current problem is drought, floods and droughts have come and gone over the past decade, and the dam&#8217;s flow-control mechanism doesn&#8217;t seem operational; it does not affect water levels in any way.

Rogner, H.H., et al. 2012. Global Energy Assessment: Toward a Sustainable Future. Cambridge University Press and International Institute for Applied Systems Analysis, 423&#8211;512.

Ecosystem impacts usually occur downstream from hydropower sites and range from changes in fish biodiversity and in the sediment load of the river to coastal erosion and pollution.

GHG emissions associated with hydropower are one or two orders of magnitude lower than those from fossil-generated electricity, but can be non-negligible where reservoirs inundate large areas of biomass, with consequent CH4 releases to the atmosphere.

Large hydropower projects requiring large reservoirs and extensive relocation of communities increasingly encounter public resistance and, as a result, face higher costs.

Population density is a major constraint for future development. If a project requires resettlement, the high costs and uncertainty make planning quite difficult.

Most of the suitable sites for large hydropower implementation in OECD countries have already been developed.


Book review of Wrigley&#8217;s &#8220;Energy and the English Industrial Revolution&#8221;

Preface. I’ve made a strong case in my book “When trucks stop running” and this energyskeptic website that we will eventually return to wood and a 14th century lifestyle after fossil fuels are depleted.

So if you’re curious about what that lifestyle will be like, and how coal changed everything, this is the book for you.

One point stressed several times is that in all organic economies a steady state exists. Or, as economists put it, there were just three &#8220;components essential in all material production; capital, labor, and land. The first two could be expanded as necessary to match increased demand, but the third could not, and rising pressure on this inflexible resource arrested growth and depressed the return to capital and the reward of labor.&#8221;

Then along came coal (and today oil and natural gas), which for a few centuries removed land as a limiting factor (though we&#8217;re awfully close to the Malthusian limits as well: population is growing, and cropland is shrinking as development builds over the best farmland near cities, which exist where they do because that was good cropland).

In today&#8217;s world energy sets the limits to growth, but in the future land once again will. So will the quality of roads, how many forests exist whose wood can be gotten to towns and cities, and so on. So if you&#8217;re in a transition town group or in other ways trying to make the future better, perhaps this book will give you some ideas.

If this world is too painful to contemplate, read some books about the Amish, which would be an ideal society for me minus the religious side of it.



Wrigley, E. A. 2010. Energy and the English Industrial Revolution. Cambridge University Press.

Wood uses: brewing, lime burning, salt production, the dye industries, brick and tile making, glassmaking, alum boiling, sugar refining, soap production, smithying, bleaching, and a wide range of metal smelting and processing trades.

All industrial production depended upon vegetable or animal raw materials. This is self-evidently true of industries such as woollen textile production or shoemaking but is also true of iron smelting or pottery manufacture, although their raw materials were mineral, since production was only possible by making use of a source of heat and this came from burning wood or charcoal.

Thus the production horizon for all organic economies was set by the annual cycle of plant growth.

The total quantity of energy arriving each year on the surface of the earth from the sun is enormous, far exceeding the amount of energy expended each year across the world today, but in organic economies human access to this superabundant flow of energy was principally through plant photosynthesis.

Plant growth was the sole source of sustenance for both people and animals, whether herbivores, carnivores, or omnivores. Plant photosynthesis is the food base of all living organisms. This is as true of a pride of lions as of a herd of antelopes. Photosynthesis, however, is an inefficient process. Estimates of its efficiency in converting the incoming stream of energy from the sun normally lie only in the range between 0.1 and 0.4 per cent of the energy arriving on a given surface. Moreover, insufficient or excessive rainfall and very high or low temperature may prohibit or greatly limit plant growth over large areas.

The truism concerning the fixed supply of land may obscure the underlying point which makes it so telling. The key variable, which translates the observation about the land constraint into an immediate reality, is the process of photosynthesis in plants. This was the bottleneck through which men and women, in common with all other animate creatures, gained access to the energy without which life is impossible. Every living thing is constantly expending energy in order simply to remain alive. This is as true of mankind as of any other animal species. Additional energy was needed if a man or woman was to make an active contribution to production. To be economically active in the past, whether in wielding an axe, thrusting a shuttle, or pushing a wheelbarrow, required additional energy inputs over and above what was needed simply to sustain life. The useful energy secured might be in the form of food for the individual or fodder for draught animals, or it might consist of the production of a wide range of organic raw materials needed for manufacture, but in every case the basic problem was the same. A fixed supply of land meant an upper limit to the quantity of energy which could be tapped as long as the dominant means of securing it was from the conversion, by plant photosynthesis, of a tiny fraction of the flood of energy reaching the earth in the form of sunlight.

Unless this restriction could be overcome, no exercise of ingenuity could do more than alleviate the problem; a solution was out of reach. The problem was finally overcome by breaking free from dependence upon photosynthesis, or more accurately by finding a way of gaining access to the photosynthesis of past geological ages. 

Better transportation enabled larger and larger tracts of the country to enjoy the benefits afforded by access to cheap and abundant energy derived from burning coal. Each reduction in the cost of transporting coal from the pithead to a distant center widened the range of activities which were no longer constrained by the energy limitations of organic economies. When coal could be substituted for other energy sources, expansion could occur without simultaneously creating a matching rise in the pressure on the land. Access to the store of the products of past photosynthesis could relieve pressure on the current supply.

Shoemakers, weavers, carpenters, blacksmiths, brewers, framework knitters, printers, and basket makers were all dependent on animal or vegetable raw materials. The great bulk of this demand was met from plants grown on English soil, or from animals fed by those plants.

In an organic economy plant photosynthesis was by far the most important source of energy, both mechanical and thermal. Wind and water power added little to what was secured via photosynthesis.

The writings of the classical economists provide an illuminating, in many respects a definitive, account of the reasons why it had seemed impossible to secure prolonged expansion of production at a rate which would allow the living standards of the mass of the population to rise progressively. There were, they argued, three factors involved in all material production: labor, capital, and land. The supply of the first two could, in favorable circumstances, expand as required. The supply of the third was fixed. This created a tension which must grow steadily greater in any period of expansion. More people meant more mouths to feed. An expansion in woolen textile production meant raising more sheep and therefore devoting more land to sheep pasture. A rise in iron output involved cutting down more wood to feed the furnaces and implied an increase in the area to be committed to forest. Each type of production was in competition with every other for access to the products of the land. Such pressures in turn must mean either taking land of inferior fertility into agricultural use, or working existing farmland more intensively, or, more probably, both simultaneously. The result must be a tendency for the return to both labor and capital to fall. Growth must slow and eventually come to a halt. Improvements in production techniques and institutional change might for a time offset the problems springing from the fixed supply of land. This might delay but could not indefinitely postpone the inevitable. In short, the very fact of growth, because of the nature of material production in an organic economy, must ensure that growth would grind to a halt. And this impasse was reached not because of human deficiencies, or of failure in political, social, or economic structures but for an ineluctable physical reason, the fixed supply of land.

If the wages of the bulk of the population must in the long run necessarily drift towards a conventional minimum, comforts and luxuries will be limited and hence the inducement to invest in their production will be slight. Such demand as there might be for any but the most basic of commodities will come from a tiny minority of the privileged and wealthy and will be met from the workshops of small groups of specialist craftsmen. In the absence of large-scale demand for standard industrial products there will be no large-scale production and therefore little incentive to introduce or invest in new techniques of production.

The great bulk of the labor force will be employed on the land and many of the rest in producing simple textiles and in basic construction.

Mechanical power was principally provided by human and animal muscle. Thermal energy came from burning wood or charcoal. The mechanical energy derived from muscle power was only a limited fraction of the calories consumed in food and fodder because men and women in common with all warm-blooded creatures must devote a large part of their food intake to basic body maintenance. For example, about 1,500 kilocalories are needed daily to keep a man alive even if no work is performed. Thus if the daily food intake is 2,500 kilocalories only 40 per cent of the energy consumed is available for productive work. It follows that the amount of useful work that each man could perform might vary substantially according to the prevailing levels of food intake per head. With a daily intake of 3,500 kilocalories a man could undertake double the amount of physical effort which he could perform if his intake was 2,500 (3,500 – 1,500 = 2,000: 2,500 – 1,500 = 1,000). The same basic point applies to draught animals just as to man. Ill-fed animals will use a high proportion of their food intake to stay alive, leaving only a small proportion of their energy intake to drag a plough or pull a cart.
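Wrigley&#8217;s point that extra food translates disproportionately into useful work can be sketched directly, using only the figures in the passage above:

```python
BASAL_KCAL = 1500  # kcal/day needed just to stay alive (figure from the text)

def useful_work_kcal(daily_intake_kcal):
    """Energy left over for productive work after basal maintenance."""
    return max(daily_intake_kcal - BASAL_KCAL, 0)

# A 40% rise in food intake (2,500 -> 3,500 kcal) doubles the work available.
print(useful_work_kcal(2500))   # 1000
print(useful_work_kcal(3500))   # 2000
```

The non-linearity is the whole point: in a poorly fed population, a modest increase in rations buys a disproportionate increase in work performed.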

A horse can carry out about six times as much work as a man and where horses or oxen were abundant the quantity of useful work which each man performed was in effect greatly magnified.  

Maize was cultivated in Mexico 75 years ago both by hand and with oxen. Without the assistance of oxen, 1,140 man-hours were needed to till and cultivate a hectare of maize. Where oxen were used the number of man-hours fell to 380, though in addition 200 hours of work by oxen were needed. Assigning large areas of land to animal pasture meant reducing the area which could be used for growing human food, and therefore limited the size of the human population which could be supported; on the other hand, it could raise output per head in agriculture substantially by increasing the quantity of useful work which each man could perform.
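The gain from draught animals in this example can be checked directly from the figures in the text:

```python
# Labor-productivity gain from oxen in Mexican maize cultivation,
# using the man-hour figures quoted in the text.
hand_only_man_hours = 1140   # per hectare, cultivation by hand alone
with_oxen_man_hours = 380    # human hours when oxen assist
ox_hours = 200               # ox hours per hectare

gain = hand_only_man_hours / with_oxen_man_hours
print(f"Human labor per hectare falls to one-{gain:.0f}rd of the hand-only figure")
# Each ox-hour replaces (1140 - 380) / 200 = 3.8 man-hours.
print(f"Man-hours replaced per ox-hour: "
      f"{(hand_only_man_hours - with_oxen_man_hours) / ox_hours:.1f}")
```

A threefold rise in output per man-hour, bought at the price of pasture land withdrawn from human food production.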

Animal muscle power also normally provided the bulk of the energy needed in land transport,

Heat energy, like muscle energy, depended on plant photosynthesis. Burning wood provided the great bulk of the heat energy consumed. Many industrial processes required large quantities of heat energy. Glass manufacture, brickmaking, beer brewing, textile dyeing, metal smelting and working, lime burning, and many similar processes required much heat energy. Wood was the dominant, indeed in most organic economies virtually the sole source of heat energy. But on a sustained-yield basis an acre of woodland could normally produce only 1–2 tons of dry wood per annum. Two tons of dry wood yields the same amount of heat as one ton of coal. To produce a ton of bar iron in 17th-century England involved consuming about 30 tons of dry wood. If half the land surface of Britain had been covered with woodland, it would only have sufficed to produce perhaps 1¼ million tons of bar iron on a sustained-yield basis. Simple arithmetic, therefore, makes it clear that it was physically impossible to produce iron and steel on the scale needed to create a modern railway system, or to construct large fleets of steel ships, or to enable each family to have a car, if the heat energy needed to smelt and process the iron and steel came from wood and charcoal.
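The "simple arithmetic" can be reproduced from the conversion factors in the text. The land area of Britain is my assumption (roughly 57 million acres); the text gives only the result:

```python
# Rough check of the bar-iron ceiling. Conversion factors from the text:
# 1 acre of woodland yields 1-2 tons of dry wood/year (midpoint 1.5 used),
# and 30 tons of dry wood were consumed per ton of bar iron.
ACRES_BRITAIN = 57e6          # my assumption for Britain's land surface
WOOD_TONS_PER_ACRE = 1.5      # sustained-yield midpoint of the 1-2 range
WOOD_TONS_PER_TON_IRON = 30

wooded = ACRES_BRITAIN / 2                      # half the land under wood
wood_yield = wooded * WOOD_TONS_PER_ACRE        # tons of dry wood per year
bar_iron = wood_yield / WOOD_TONS_PER_TON_IRON  # tons of bar iron per year
print(f"Sustained-yield ceiling: {bar_iron / 1e6:.1f} million tons of bar iron")
# ~1.4 million tons -- the same order as the text's 'perhaps 1.25 million'.
```

Even under the fantastic assumption of half the country forested, the ceiling is a rounding error beside modern steel output.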

Because it was necessary to devote the bulk of the land surface to the production of so many other commodities, the effective ceiling on production was far lower than the notional figure of 1¼ million tons of bar iron just quoted.

In 2008 China produced 500 million tons of steel in her drive to transform her productive potential. No organic economy could have produced even a tiny fraction of this total.

Where demographic conditions push real incomes close to the subsistence minimum the bulk of demand will be for the four necessities of life: food, shelter, clothing, and fuel (it is convenient to express the situation in terms appropriate to market economies, but the effect is the same in economies where market exchange is limited; poor peasants, buying little for cash and selling only a fraction of what they produce, labor primarily to provide for basic wants). Lack of demand for comforts and luxuries will restrict the opportunity for the development of a wider range of secondary industries (manufactures) and discourage innovation and technological change.

A necessary condition for the escape from the constraints of an organic economy was success in gaining access to an energy source which was not subject to the limitations of the annual cycle of insolation and the nature of plant photosynthesis.

If societies thought and acted in terms of millennia rather than decades the limitations of coal as an energy source (and still more of oil and gas) would be evident, but in the short run coal offers a means of escape from the constraints of organic economies which photosynthesis does not.

Organic economies were essentially fungible in nature. A field may be tilled to grow wheat in a given year but the taking of the crop does not prevent the field being available to grow barley in the following year.

The nature of the land as a fungible guaranteed that a roughly similar level of production could be maintained year after year. It was in this respect a stable world. The potential for securing energy for human use was limited but could be maintained indefinitely.

A ton of coal, like a slice of cake, once consumed, cannot be consumed again. Fossil fuel deposits constitute a very large cake but if they remain the principal source of energy they will be exhausted in decades or at most centuries rather than millennia.

While the output of all cereal crops rose markedly between late medieval times and the early 19th century, oats outstripped other grains both in the percentage rise in total production and in the percentage rise in output per acre. The dominant use of oats was to feed horses. The energy output of a horse well supplied with oats was substantially greater than that of a largely grass fed animal. This was helpful not only in a farm context but also in the economy generally. There was a massive rise in the scale of road transport in the later seventeenth and eighteenth centuries, facilitated by the rapid increase in the mileage of turnpike roads, and therefore a parallel rise in the need to employ more horses. Ville has reported estimates, for example, showing that over the period 1681–1840 the annual rate of growth of goods traffic by road between London and the provinces was in excess of 1%, which would imply a roughly 6-fold cumulative growth over the period. Passenger traffic was rising even more rapidly. Between 1715 and 1840 the rate of growth probably exceeded 2% annually, implying that by the end of the period the traffic was twelve times larger than at the beginning.

In the later 18th century many new canals were built. Canal barges also depended on horses for motive power, thus adding further to the need for a plentiful supply of fodder. The fact that agriculture was able to meet the ‘fuel’ needs of a growing population of horses engaged in transport and industry is testimony to the absence of pressure arising from the need to meet human food requirements in England in the ‘long’ eighteenth century despite the very rapid growth of population in its latter half. England, it should be noted, remained largely self-sufficient in foodstuffs until the early decades of the 19th century, apart from those which could not be grown in a temperate climate.

The population of England increased substantially between 1600 and 1800 which meant, given the absence of any major change in employment in agriculture, that the proportion of the labor force working on the land fell sharply from about 70% to less than 40%. This implies that the proportion of the labor force engaged in secondary and tertiary activities doubled from 30 to over 60% during these two centuries and the absolute number increased far more dramatically since population was rising fast. In 1600 the population was 4.2 million; in 1800 8.7 million. If for simplicity we take the population as doubling and the percentage engaged outside agriculture as doubling also, this implies that the total employment in the secondary and tertiary sectors quadrupled over the period, a change which can fairly be termed sensational.

Without the striking gains in manpower productivity in agriculture which took place in early modern England it is very doubtful whether the industrial revolution would have occurred.

The four largest British industries by value added in 1801 were cotton, wool, building, and leather. Between them they accounted for 68% of the total of value added in British industry as a whole and they were of roughly equal size. The wool and hides which formed the raw material input of two of these four industries were very largely home produced in 1800.

In the mid-16th century, coal, though it already supplied a tenth of English energy consumption, was substantially less important than human and animal muscle power, and firewood was the prime source of heat energy. By 1700 about half of the total energy consumption of England came from coal. At the end of the 18th century the proportion exceeded 75%, and by 1850 was over 90%. Much coal was consumed for domestic purposes. Until the end of the 17th century it is likely that domestic heating and cooking accounted for more than half the total consumption, but by the early 19th century this figure appears to have declined to roughly one third of the total.

In 1700, when the English coal output is estimated at about 2.2 million tons, providing the same heat energy from wood on a sustained-yield basis would have required devoting 2 or 3 million acres to woodland. This assumption may well underestimate the area required but is unlikely to overestimate it. By 1800, 11 million acres of woodland would have been needed. This would have meant devoting more than a third of the surface area of the country to provide the quantity of energy in question.
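The acreage estimate can be reconstructed from the conversion stated earlier (2 tons of dry wood equal one ton of coal in heat; an acre yields 1-2 tons of dry wood per year):

```python
# Woodland acreage needed to replace a given coal output, on the
# sustained-yield conversion factors given in the text.
def acres_needed(coal_tons, wood_tons_per_acre):
    wood_tons = 2 * coal_tons   # dry wood with the same heat energy as the coal
    return wood_tons / wood_tons_per_acre

coal_1700 = 2.2e6   # English coal output in 1700, tons
low = acres_needed(coal_1700, 2)    # best case: 2 tons wood/acre/year
high = acres_needed(coal_1700, 1)   # worst case: 1 ton wood/acre/year
print(f"1700: {low/1e6:.1f} to {high/1e6:.1f} million acres of woodland")
# 2.2 to 4.4 million acres, consistent with the text's '2 or 3 million'.
```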

The small Danish town of Odense, which had about 5,000 inhabitants in the later 18th century, received roughly 15,000 cartloads of firewood and 12,000 cartloads of peat each year to cover its domestic heating and industrial needs. A city a hundred times larger, like London towards the end of the 17th century, had lesser requirements due to a warmer climate, but even so would have needed  perhaps two million cartloads of firewood each year to cover heating needs in the absence of coal. This level of consumption is roughly equivalent to 1.5 tons of firewood per head of the population of London. It would have required setting aside a very large acreage to produce the firewood in question (approximately 1,250 square miles), and in addition still more land would have been required to provide fodder for the large number of horses needed to bring the firewood overland, either direct to London or to a suitable shipping point. In contrast, coal made only a minimal claim on land for its production and animal haulage was required only in getting the coal from the pithead to the coal wharf to deliver to the consumer.

By the end of the 17th century the switch to coal was largely complete in brewing, lime burning, salt production, dye industries, brick and tile making, glassmaking, alum boiling, sugar and soap production, smithying, and a wide range of metal processing trades. Summarizing his detailed description of the increasing use of coal in industrial processes, Hatcher wrote: 'By 1700 coal was the preferred fuel of almost all fuel-consuming industries.'

As long as the mechanical energy needed in most industrial processes and many forms of transport was secured from human or animal muscle power, there was a comparatively low ceiling to the level of productivity per head that could be attained. The final step in the process by which the use of fossil fuel broke the bonds of the organic economy was taken with the discovery of ways of using the energy in steam to extend the breakthrough in the availability of heat energy to overcome the mechanical energy bottleneck also.

Switching to coal as an energy source produced two further benefits of great importance. The first relates to investment in transport facilities. The production of coal from a mine occupies very little ground yet can produce as much energy as an entire forest, making it worthwhile to spend a great deal of energy and money on good roads or rails to convey it to the nearest ocean, canal, or lake for delivery.

In contrast, the production of wood happens over huge areas. To produce an equivalent amount of energy from wood, a very large acreage of woodland must be felled. And instead of the single road serving a coal mine, there are hundreds of dendritic paths that only converge into roads near towns and cities.

Pack horses remained in widespread use in early modern England because road surfaces were often unsuitable for wagons. Keeping the roads in good order might involve expense not justified by easier traffic movement because the volume of traffic was too small to result in an adequate return on investment.

The rise in the volume of coal production created an incentive not only to invest in more efficient land transport but also to construct canals. A large proportion of the traffic on most canals consisted of coal. Much of the final cost of coal to the consumer, whether domestic or industrial, represented the cost of moving it from the pithead to the place of consumption. The market for coal expanded rapidly wherever its price fell because of canal construction. In later decades rail construction had a similar effect.

Without benefit of canal or rail transport the price of coal carried overland doubled within ten miles of the pithead, which meant that before canal and railway facilities existed much of the country had no access to coal at an economic price.

The size of accessible coal reserves in early modern England was a function of drainage technology since water accumulated in every mine and became an increasingly severe problem as the depth of working increased. Having reviewed the use of drainage passages where circumstances made it possible to use gravity to evacuate the water, and the use of wind, water, and horse power to combat the problem where pumping was unavoidable, Flinn concluded as follows: Gravity, wind-, water- and horse-power, then, were capable of only a very modest contribution to the drainage of mines. If drainage technology were to stand still at the point reached at the beginning of the 18th century, mining in Britain could scarcely have expanded and must probably have begun to show diminishing returns. At depths of between 90 and 150 feet the influx of water almost invariably created problems insoluble by the technology of the day, so that when seams of lesser depths were exhausted mining must cease. Most British coal-reserves, of course, lay at greater depths.

Coal had been very widely used as a source of heat energy. It overcame the bottleneck in providing heat energy which was inherent in dependence on wood. But without a parallel breakthrough in the provision of mechanical energy to solve the comparable problem associated with dependence on human or animal muscle to supply motive power in industry and transport, energy problems would have continued to frustrate efforts to raise manpower productivity.

By 1870 steam engines consumed an estimated 30% of UK coal production.

Growth led to an increased demand for food and raw materials. Both were obtained principally from the land. At some point in the growth process this must mean taking inferior land into cultivation or using existing land more intensively. The returns to labor and to capital would both decline as a result, and growth would grind to a halt. The two men were in agreement that the last case, when growth had petered out, might be as uninviting as that found in countries in which no improvements had taken place, even though, for an extended period in between, the speed of growth might bring substantial benefit to all members of society. The classical economists proved to be mistaken in their pessimism, if not in their logic. Negative feedback was indeed inescapable in organic economies and many cycles of growth followed by stagnation had occurred in earlier centuries,

The productivity of those employed in agriculture was the most important single determinant of the possibility of growth and change in all organic economies. Where it was low it was unavoidably necessary for the bulk of the population to live and work on the land if there was to be food for all. Where this was the case it was also inevitable that there was little demand for any but the bare necessities other than food – clothing, shelter, and fuel – and therefore little employment in secondary or tertiary activities. Low productivity might arise for many reasons. High population densities might result in fragmentation of holdings, reducing the amount of land available per head to a level well below the optimum. In some, though not all, types of agriculture a shortage of draught animals for whatever reason might produce a similar result. A list of this kind could be much extended. But frequently, where agricultural productivity was low, the problem lay elsewhere, with weakness of demand rather than inability to increase production. In an archetypal peasant society the first concern of each family is to cover its own needs rather than produce a surplus for sale, and this attitude makes excellent sense where the scale of demand outside the peasant sector is slight. A bad harvest focuses attention exclusively on the needs of the family. A good harvest, while relieving anxiety on this score, does not create much opportunity for profitable sale, since others will also enjoy a surplus and the market price will fall to a level which creates little incentive to make efforts to increase productive capacity.

Because virtually all raw material supply was animal or vegetable in character, everything hinged on increasing agricultural output. This was intensely difficult to achieve without incurring the penalty of declining marginal returns to labor and capital, but for a time more extensive and effective division of labor, which was facilitated by rural–urban exchange, could allow the basic problem to be side-stepped. In England the difficulty was further eased and eventually overcome by exploiting inorganic sources of raw materials and energy.

Removing English urban totals from those for Europe suggests that in continental Europe as a whole urbanization was almost at a standstill between 1600 and 1800. The 18th century was, if anything, more sluggish than the 17th in this regard.

Between 1600 and 1700 England accounted for 33% of the European urban increase; between 1700 and 1750 57%; and between 1750 and 1800 70%. Over the two centuries taken together the comparable figure is 53%. Given that in 1600 the population of England amounted to only 5.8% of the European total, and in 1800 7.7%, this is extraordinary testimony to the exceptional character of the urban growth taking place in England at the time.

Those who work the land can count on a local demand for food to satisfy local need but any stimulus to produce beyond this level must come from those living elsewhere in towns and cities. Even in largely rural communities there will, of course, always be a proportion of the population who do not produce the food which they eat but if that fraction is modest and unchanging there will be little or no incentive to change current practice. Population growth in the rural counties of England was generally modest. The local demand for food therefore showed little growth. If, however, there is a substantial and steadily growing urban demand for food the situation is different. A rising trend in the volume of demand creates an incentive to invest and improve. It also stimulates specialization. Farmers in areas well suited to beef cattle, for example, may find that it pays them to reduce or abandon cereal culture in favor of cattle rearing, with the reverse taking place where the soils favor cereals. This in turn gives rise to inter-regional exchange of foodstuffs between areas with different agricultural specialisms.

In the later sixteenth and seventeenth centuries London grew so markedly that by the end of the period it had become the largest city in Europe. It grew from c.55,000 to c.575,000 between 1520 and 1700. The size and rapid growth of London provided a massive stimulus to the farming sector.

Poor transport facilities reduce the area which can respond to urban food price signals, acting in a fashion similar to the existence of tariff barriers in restricting trade. If transport is slow, uncertain, and expensive the limits to growth will be severe. However, there also exists the possibility that rising urban demand will encourage both rising agricultural productivity and improvement in transport facilities. When any of the three factors change this will encourage sympathetic change in the other two. It is ultimately idle to try to determine primacy among the three since they are so intimately intertwined.

The growth of London not only transformed the market prospects for farmers, because its inhabitants produced little or no food themselves, but disposed of much purchasing power. It also led to a steady increase in the demand for farm produce indirectly. There was a parallel, marked rise in the volume of road transport and therefore in the demand for fodder to ‘fuel’ the rising number of horses needed to pull carts and wagons. Urban growth, moreover, implies an increased demand for raw materials no less than for food, and, as Adam Smith noted, almost all the raw materials in question were vegetable or animal in nature, and were therefore produced in the countryside. A steadily rising proportion of the labor force no longer worked on the land. Most of them were engaged in secondary activities. Shoemakers, weavers, carpenters, blacksmiths, brewers, framework knitters, printers, and basket makers were all dependent on animal or vegetable raw materials. The great bulk of this demand was met from plants grown on English soil, or from animals fed by those plants.

The existence of a large and rising demand for food, fodder, and organic raw materials associated with dynamic urban growth brought major changes in the scale and character of the demand for agricultural products and thereby induced matching changes in their supply. And once in train there was feedback between the two. The expectation that such demand would grow made increased investment in agriculture appear prudent rather than hazardous. As a result the growth of the urban sector was not constrained by increasingly tight supplies of food and industrial raw materials. The ability of the agricultural sector to sustain hectic growth in urban populations and the raw material needs of the wide swathes of industry which still depended on home-produced organic products was an essential factor in facilitating the growth which took place.

Perhaps the most truly remarkable feature of these two centuries was that the number of men working on the land increased only marginally, yet the agricultural workforce continued to meet the food needs of a population which more than doubled. The area under cultivation increased only modestly, which necessarily implies a very marked increase in output per acre, but this is less striking than the fact that labor productivity in agriculture rose in parallel with the demand for food and industrial raw materials occasioned by the population increase. Because of the nature of an organic economy it is normally to be expected that the price paid for securing a large increase in output is an even larger proportional increase in the input of labor for reasons set out so forcefully by the classical economists. That this did not happen in England may be regarded as a necessary condition for the sweeping changes which are conventionally taken to comprise the industrial revolution.

Urban life implies dependence on the market to a degree which may not hold in the countryside. Urban growth connotes a change in occupational structure which is likely to cause average incomes to rise. And with experience of and exposure to urban norms forming part of the lives of a rising proportion of those still living in the countryside, it is not surprising that many of the features of the ‘consumer revolution’ should become visible countrywide rather than being found only in towns. Much the same changes occurred in the Netherlands a century earlier. Indirectly, and perhaps somewhat paradoxically, a sustained rise in agricultural productivity lay behind these changes.

When discussing the reasons why a population might never attain the maximum that might in theory be approached, he [Malthus] noted a feature of English agriculture which ensured that population growth would stop well short of this level: ‘With a view to the individual interest, either of a landlord or farmer, no laborer can ever be employed on the soil, who does not produce more than the value of his wages; and if these wages be not on an average sufficient to maintain a wife, and rear two children to the age of marriage, it is evident that both population and produce must come to a stand.’

A doubling in cereal output, for example, such as occurred in England between the late 16th and late 18th centuries, implies a commensurate increase in the volume of the crop to be harvested and transported to barns, and this in turn implies a substantial increase in the labor involved. No doubt there was a substantial increase in the expenditure of muscle energy in English agriculture as a direct result of the rising volume of output. Much of this increase, however, may have been secured from animal rather than human muscles. Bigger, better fed, and more numerous farm horses limited the need for greater human energy inputs. Again, one of the reasons for declining labor productivity as population increases in peasant agriculture is the increased subdivision of holdings. In early modern England, however, capitalist farming tended to increase the average size of farm units both by individual purchase and as a by-product of enclosure, and large farms employed fewer men per acre than small farms.

When Arthur Phillip, the first governor of the colony, took the first convict fleet out to Australia, the home government assumed that it would become self-sufficient in food within a couple of years. But it took decades, partly because of unfamiliarity with the new environment, imposing years of learning by trial and error, and also because most of the convicts were from towns and cities with no idea how to farm. Above all, it was due to a lack of draught animals, which for the most part died on the long sea voyage of about six months. The fact that gangs of convicts were yoked to carts to drag loads of bricks from brick fields to building sites might appear at first glance to reflect a brutal penal regime but in fact merely demonstrated the inescapable reality of an organic economy which lacked draught animals.

In the early years of the colony all its inhabitants, both convicts and their guardians, were at times gravely malnourished. The men were sometimes too weak from hunger to labor in the fields for more than a couple of hours a day.

I haven’t excerpted the myriad ways farms produced more food for growing cities in England, or, even more importantly, oats to feed canal and farm horses, but without this increased production per unit of land the industrial revolution wouldn’t have happened.

Human energy intake was broadly similar in the two countries, though somewhat lower in Italy than in England. Part of the difference may be related to the higher average temperatures in Italy, which would tend to reduce the calorie intake needed to sustain body temperature. The energy consumed by draught animals was more than twice as great in England as in Italy, probably a reflection of the greater suitability of the English climate and soils for grass growth and hence for pastoral production. Heat energy from the use of firewood was more widely employed in Italy (though accurate estimation is especially difficult for this energy source) but even in the 1560s England was deriving more heat energy per head of population from coal than Italy in the 1860s so that the combined total consumption of heat energy was not greatly different between the two. In neither country was wind or water a major energy source and it is notable that the absolute figures for the two countries are remarkably similar. The table makes it clear that human and animal muscle was the dominant source of mechanical energy in the two countries, and that in both countries firewood supplied most of the heat energy. Yet even in the 1560s coal was beginning to be a significant source of heat energy in England though its contribution was still dwarfed by that of firewood.

Accessible reserves of peat in the Netherlands played a role similar to coal in England. As a result, the Netherlands in the 17th century was an ‘energy-rich’ economy when compared to her neighbors, favoring the growth of energy-intensive industries such as brewing, brickmaking, sugar refining, bleaching and dyeing, and the production of salt.

Coal and wind power were the only two energy sources which increased in absolute terms, as a percentage of total energy consumption, and when expressed per head of population. Coal’s proportionate share in energy consumption rose from 10% to 90% of the total. The increase in wind power reflects the rapid expansion of the merchant fleet, which remained entirely wind-powered until the beginning of the nineteenth century. Coal consumption per head increased by a multiple of about 45 between Tudor (1485-1603) and Victorian (1837-1901) eras, an average annual rate of growth of approximately 1.3% a year, which implies almost a doubling every half-century.
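The growth-rate arithmetic checks out. I take roughly 300 years as the span between the eras (an assumption; the text gives only the era labels and the 45-fold multiple):

```python
import math

# Back out the annual growth rate implied by a 45-fold rise in coal
# consumption per head over roughly 300 years (my assumed span).
YEARS = 300
rate = math.log(45) / YEARS      # continuous annual growth rate
doubling = math.log(2) / rate    # corresponding doubling time in years
print(f"Implied growth rate: {rate:.1%} per year")   # ~1.3%, as in the text
print(f"Doubling time:       {doubling:.0f} years")  # ~55, i.e. almost a
                                                     # doubling every half-century
```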

Coal already dominated the energy picture in England as early as the end of the seventeenth century, and in the nineteenth century eclipsed all rival sources almost entirely. But this was not true in other European countries until a much later date. Belgium was the first continental country to dig coal on a substantial scale and remained the largest individual continental producer until the 1850s. In 1850–4 the average annual Belgian production was 6.8 million metric tons. In the same period the comparable figures for France and Germany were 5.3 and 6.5 millions respectively. These three countries were the largest continental producers. In the same period the average annual output in England and Wales was 61.4 millions. At the beginning of the nineteenth century the disparity was substantially greater. In the early 1850s the combined output of Belgium, France, and Germany was about 30% of the total for England and Wales. Half a century earlier the comparable figure was probably less than 20%. Expressed per head of population the contrast was even starker. In the 1850s the average output per head in the three continental countries combined was c.0.24 tons: the comparable figure for England and Wales was c.3.41 tons.

As already noted, one way of bringing home the degree to which England had moved away from the constraints associated with organic economies by 1800 is to convert coal production into the equivalent acreage of wood which would have been required to produce the same quantity of energy on a sustained-yield basis.

Using the production totals for England and Wales and the assumption that, on a sustained-yield basis, an acre of woodland can produce wood providing the same heat energy as a ton of coal, the acreages in question in 1750, 1800, and 1850 are 4.3, 11.2, and 48.1 million respectively. As a proportion of the land surface of the country these figures represent 13, 35, and 150% of the total area. Even the first figure of 13% would have represented a significant proportion of the land surface for which there were many other competing uses. The second would have been quite impractical, while the third is self-evidently impossible.
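The stated percentages imply a land area of about 32 million acres for England and Wales; that figure is my back-calculation, not stated in the text:

```python
# Woodland-equivalent acreage as a share of the land surface.
# LAND_ACRES is inferred from the text's own percentages (13, 35, 150).
LAND_ACRES = 32e6
for year, acres_m in [(1750, 4.3), (1800, 11.2), (1850, 48.1)]:
    share = acres_m * 1e6 / LAND_ACRES
    print(f"{year}: {acres_m} million acres = {share:.0%} of the land surface")
# 13%, 35%, and 150% -- reproducing the figures in the text, the last of
# which would require half again more land than the country possesses.
```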

[my note: and would it take four generations for the forests to regrow to be exploited again?]

Before the steam engine arrived coal had shown that it could transform the thermal energy scene but muscle power remained by far the most important source of mechanical energy. Neither water nor wind power was of more than limited significance, except in the case of sailing ships. The steam engine meant that coal could be exploited to supply mechanical energy as readily as heat energy, thus overcoming the last remaining barrier to the application of fossil fuel energy to all the main productive processes.

Consider first inland transport. Production in organic economies was, by its nature, spread across large areas of land. To produce the tens of thousands of bushels of wheat needed to feed a large town involved cultivating thousands of acres of arable land. To secure firewood to meet its needs for domestic heating similarly meant cutting and collecting wood from a very large area. Only when the carts and wagons carrying the wheat or wood neared the town did they become concentrated on a few roads bearing a large traffic. Their early miles on the way to the town were inevitably along roads which carried little traffic. Since the bulk of the journey was on poor roads, transport costs per ton-mile were high. The continued use of pack horses rather than carts into the 18th century, and even in some areas into the 19th, reflected the existence of many road surfaces so rutted or muddy that wheeled traffic was impractical.

Local traffic was also normally light. A large investment in minor roads, whether in new construction or maintenance, was unlikely to produce savings large enough to repay the outlay. Yet roads which were of poor quality and in poor repair discouraged heavier usage, producing a vicious circle of neglect and little traffic.

Often, in the circumstances prevailing in organic economies, the high cost of transport was instrumental in limiting growth possibilities. It limited severely the possible gains to be achieved by the division of labor, since the size of the accessible market determined how far the division of labor could be carried. However, in relation to transport provision, as in relation to energy provision, the rising scale of coal production brought solutions to problems which had previously proved intractable.

The cost per ton-mile when coal was transported by water was taken to be only 5% of the price of land carriage. Where the potential savings from transporting other goods very seldom appeared to justify the cost of constructing a canal, coal was different: because of the quantity produced, and because its production was concentrated at a single mine and its consumption often concentrated in a single large city or town, digging a canal between the two points could be very profitable.

The creation of a railway network carried access to cheap coal a stage further. Advantages which were once confined to coalfield areas and to cities like London which could use coastal shipping to supply their fuel needs were extended to the bulk of the country by the mid-19th century. The canal network, in contrast, had taken shape only slowly: a national network emerged over more than 50 years, in the later decades of the 18th century and early in the following century. Most canals were built to meet a local need, and even on trunk canals the average haul was only about twenty miles. Yet the cumulative impact of canal construction both in stimulating growth and in changing the location of industrial activity was marked.

Roads were converted to wagonways by laying down planks to reduce friction and enable a greater load to be transported with a smaller expenditure of energy. The results were striking. One horse on a wagonway could pull as much as two horses and two oxen on an unimproved road. Steps were taken to reduce the gradients on wagonways, which added to the gain from reducing friction. Further gains in productivity came in the course of the 18th century when cast-iron and later wrought-iron rails and flanged wheels were introduced to reduce friction still further. As a result, with the same expenditure of energy, a horse could produce still more ton-miles.

Adam Smith emphasized the importance of water transport in determining the possible scale and nature of economic growth in an organic economy. He stressed the benefit of access to water transport, especially for heavy and bulky goods.  He went on to give details of the number of men and animals, wagons and ships, needed to transport goods between Edinburgh and London by the two means of transport, together with the journey times of each type, and summarized his findings as follows:

200 tons of goods carried by the cheapest land-carriage from London to Edinburgh required the maintenance of 100 men for 3 weeks, and the maintenance of 400 horses and 50 great wagons.  The same quantity carried by ship requires only 6 to 8 men, with little wear and tear on the ship.

Smith then pointed out that if only land carriage were possible between the two cities only goods with a very high value to weight ratio would be exchanged between them, to the detriment of the prosperity of both.

Geological good fortune therefore made it possible for London to replace wood with coal even though the coalfield from which it was mined was almost 300 miles distant.

Satisfying London’s demand, however, implied the creation of a large fleet of vessels; the demand reflected the requirements both of domestic heating and of a range of industrial purposes. On the banks of the Thames, for example, glassworks and breweries were built to take advantage of access to a cheap source of heat. London’s population grew rapidly and its demand for coal grew roughly in parallel. At the beginning of the 17th century the annual import of coal to London was probably in the range 125,000 to 150,000 tons. By the end of the century it was approaching 500,000 tons. Over the same period the population of the capital rose from c.200,000 to c.575,000 people. Consumption per head therefore appears to have risen only very slightly, if at all, during the century. By the end of the 18th century London was importing a total of about 1.2 million tons of coal annually, almost exclusively from the same north-east ports. Since London’s population had risen to 950,000 by 1800, consumption per head had again changed only modestly, increasing by perhaps a quarter during the century. Yet the capital’s growth was so marked that the absolute tonnage of coal imported to London increased roughly 10-fold over the 17th and 18th centuries.
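The per-head figures implied by these tonnages and populations can be laid out explicitly. A small sketch using the numbers quoted above (taking the lower end of the 125,000–150,000 range for 1600):

```python
# Implied coal consumption per head in London, from the figures quoted above.
data = {  # year: (coal imports in tons, population)
    1600: (125_000, 200_000),   # lower end of the 125,000-150,000 range
    1700: (500_000, 575_000),
    1800: (1_200_000, 950_000),
}

for year, (tons, people) in data.items():
    print(f"{year}: {tons / people:.2f} tons per head")

# The absolute tonnage grew close to 10-fold over the two centuries:
fold = data[1800][0] / data[1600][0]
print(f"growth in total imports: {fold:.1f}-fold")
```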

The first turnpike trust was created in 1663, but turnpike construction only increased markedly from early in the 18th century. By 1770 there were 15,000 miles of turnpike roads, and this figure had risen to 22,000 miles in the mid-1830s, managed by more than 1,100 turnpike trusts in England and Wales. Adopting the principle that the user should pay proved a most effective way of securing better road surfaces. The incentive to do so arose as the volume of current and prospective traffic increased. The results reflect the scale of the benefit. Journey times and costs per ton-mile both fell, while traffic volumes increased sharply. The reduction in journey times was dramatic: between the 1750s and the 1830s journey times between major centers fell by 80%.

The result was a marked contrast in journey times between England and continental Europe. For example, in the 1760s French services travelled between 25 and 35 miles a day, whereas English services covered 50 to 80 miles a day. In both countries services quickened in the following decades but a marked difference in average speed continued.

The movement of goods was revolutionized as much as or more than the movement of people by the improvements made to the road system. Much larger wagons could be used on turnpike roads. The biggest and most sophisticated road haulage operations centered on London. It has been estimated that the weekly output of the London road haulage industry rose from 13,000 ton-miles in 1715 to 80,000 in 1765, 275,000 in 1816, and 459,000 in 1840. Transport by canal barge was much cheaper per ton-mile than sending goods by turnpike road, but the road might still be preferred for some goods. For example, for the long-distance transport of cotton goods turnpike roads were often favored because they provided a regular and reliable service and were quicker.

High transport costs may be compared to high tariff barriers. Products from other places are denied access to a local market as effectively by the lack of cheap and reliable transport as by an arbitrary charge at an entry gate. Where roads are rutted in summer and muddy in winter movement is difficult, slow, and intermittently dangerous. Their condition may prohibit the use of carts and wagons. In such circumstances a village may have little option other than to satisfy from within its borders the bulk of its material needs. Poor transport facilities and a ‘peasant’ mentality go hand in hand. Conversely, if transport is relatively easy, cheap, and reliable, economic activity can be organized very differently. Movement along a spectrum of transport provision with difficult, expensive, and unreliable facilities at one extreme and dependable, cheap facilities at the other will produce a host of associated changes. Szostak, for example, suggested that in the early 18th century merchants would load their products on pack horses and travel through the country selling their goods directly at fairs and markets. By the end of the century, in contrast, travelling salesmen carrying samples sought orders which were fulfilled by dispatching goods by road carriers. Turnpike roads could accommodate regular wagon traffic and orders taken by the salesmen could be dealt with quickly and reliably. Aikin is quoted by Szostak as noting that the shift from loaded pack horses to travelers with samples took place between 1730 and 1770 in the Lancashire textile industry. Another linked change was the gradual transformation of fairs from a major point of contact between producer and retailer and final purchaser into chiefly social events. The retail shopkeeper assumed the role once played by the fair.

In his pioneering study of migration during the industrial revolution period, Redford laid stress upon the evidence that agricultural wages were highest near the new concentrations of industry and declined steadily with distance from these centers. In rural areas close to manufacturing, mining, or commercial centers people moved to the town from the country to better their lot. The increase in the prevailing wage level in agriculture which resulted in turn attracted agricultural laborers to move from more distant parishes to replace them. He insisted that ‘the motive force controlling the migration was the positive attraction of industry rather than the negative repulsion of agriculture’. As Chaloner remarked in his preface to the third edition of Labor migration, Redford insisted that ‘The rural population was attracted into the towns by the prospect of higher wages and better opportunities for employment, rather than expelled from the countryside by the enclosure movement.’

Expectation of life at birth declined substantially during the 17th century, reaching a nadir in the period 1661–90 when, for the sexes combined, it averaged only 33.8 years. By the beginning of the nineteenth century there had been a major change. In 1801–30 it averaged 40.8 years.

Although overall levels of mortality improved markedly, the improvement was not evenly spread among the different age groups. In the 17th century adult mortality had been very severe; infant and child mortality, in contrast, though crippling by the standards of the 21st century, had been relatively mild. During the ensuing century adult mortality improved sharply. Expectation of life at age 25 for the sexes combined rose by five years, from 30 to 35, between the end of the 17th and the end of the 18th century. At younger ages any improvement was very limited, with one exception. Mortality within the first month of life, often termed endogenous mortality, fell dramatically due to falling maternal mortality and stillbirth rates. Deaths later in the first year of life were mainly caused by infectious disease, and remained as high in the early 19th century as they had been a century earlier.

From the mid-16th century onwards England’s chance of escaping the Ricardian curse gradually improved as its dependence on the land as the prime source of energy was reduced by the steadily increasing use of coal. This in itself, however, was no guarantee of ultimate success. Put simply, coal use could overcome a barrier which had long appeared insuperable on the supply side, but without a matching change in demand a breakthrough might have proved elusive. Coal was mined and consumed on a substantial scale in parts of China from the 4th century onwards and may have reached a peak in the eleventh century, but it did not lead to a transformation of the economy. It is in this context that the demographic characteristics of a country assume importance.

Production only takes place in response to the existence of demand, immediate or potential. And it is less the absolute scale of demand than its structure which is important. Where poverty is widespread and severe the demand for products other than food, clothing, fuel, and housing will be slight. Rising real incomes rapidly alter the structure of aggregate demand because, although the absolute amount spent on the four basics will rise, the proportion spent on them falls.

If the rising level of energy consumption can be met not from the products of current plant photosynthesis but from the accumulated store of energy represented by past plant photosynthesis present in coal seams, the constraints present in all organic economies can be first eased, and then largely by-passed. In the course of the seventeenth and eighteenth centuries, the increasing resort to this alternative energy source gradually changed the growth prospects of the country. For a long time it was only a partial escape from the traditional constraints. As long as coal was only a source of heat energy the issue was doubtful. Once, however, the energy released by burning coal could also be converted into mechanical energy, future growth was no longer put at risk by the limitations on energy use imposed by dependence on the annual cycle of plant growth.

If coal was so important in the industrial revolution why were there not parallel developments to those taking place in England elsewhere in Europe or farther afield and perhaps at an earlier date? There can be no definitive answer to this question. It is reasonable to claim that without coal no industrial revolution was possible in the circumstances of an organic economy. The presence of coal measures, on the other hand, clearly carried no guarantee that they would be exploited. One consideration, however, should be borne in mind in this connection, since it strongly conditioned access to coal measures in the past. When pit drainage depended upon wind, water, and horse power it was impracticable to mine coal at depths greater than 100–150 feet. Most of the world’s richest coalfields are concealed fields covered by an overburden of rock, often many hundreds of feet thick. The great bulk of the Ruhr field, for example, existed as a geological fact but not as an economic possibility before steam drainage. Indeed the same was true of coal in the huge coalfield which extended, with some gaps, from the Pas-de-Calais in the west, through the Sambre–Meuse valley, to Aachen and the Ruhr. The coal in the concealed fields was inaccessible (and often unknown) at the beginning of the nineteenth century. The bulk of the reserves in British coalfields were similarly inaccessible before steam drainage, but coal outcropped to the surface more widely than in many other countries, making initial exploitation simpler.

Whereas in the mid-16th century coal provided only 11% of energy consumed, by the mid-18th this figure had increased to 61%, and the overall scale of energy consumption per head in England dwarfed that of her neighbors, with the partial exception of the Netherlands. The presence of a cheap and abundant source of heat energy in the form of coal played a major part in facilitating expansion in a range of industries by holding down production costs as production volumes increased; brick making, glass manufacture, lime burning, brewing, dyeing, salt boiling, and soap and sugar manufacture all benefited. The traditional dependence upon wood as a heat source had vanished in almost all branches of industry apart from iron manufacture by the early eighteenth century. It is probable, if not conclusively demonstrable, that London would not have grown so freely but for the east coast coal shipments from northern England (Tyneside).

The classical economists provided a formal framework to describe something which was widely understood intuitively in all organic economies. They held that three components were essential in all material production; capital, labor, and land. The first two could be expanded as necessary to match increased demand, but the third could not, and rising pressure on this inflexible resource arrested growth and depressed the return to capital and the reward of labor.

Capital and labor remained as essential as ever if output was to expand, but for wider and wider swathes of the economy land was no longer a factor of central importance. Energy was still needed in every aspect of the production process and an adequate supply of raw materials remained essential, but the land could be by-passed in securing the first, and to an increasing degree the second. Land was losing its place in the trinity of factors determining production possibilities.

A coal miner who consumes in his own body about 3,500 calories a day, will, if he mines 500 pounds of coal, produce coal with a heat value 500 times the heat value of the food which he consumed while mining it. At 20% efficiency he expends about 1 horsepower-hour of mechanical energy to get the coal. Now, if the coal he mines is burned in a steam engine of even 1% efficiency it will yield about 27 horsepower-hours of mechanical energy. The surplus of mechanical energy gained would thus be 26 horsepower-hours, or the equivalent of 26 man-days per man-day. A coal miner, who consumed about one-fifth as much food as a horse, could thus deliver through the steam engine about 4 times the mechanical energy which the average horse in Watt’s day was found to deliver.

This is a very conservative estimate of the multiplier involved, since the average coal miner produced considerably more than 500 pounds of coal a day and the efficiency of steam engines commonly dwarfed the figure used in the illustration.
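The steps of the miner calculation above can be reproduced directly. A minimal sketch, assuming the standard conversion of 1 horsepower-hour ≈ 641.6 kilocalories:

```python
# Reproducing the coal-miner energy-multiplier calculation above.
HP_HOUR_KCAL = 641.6               # 1 horsepower-hour in kcal (standard conversion)

food_kcal = 3_500                  # miner's daily food intake
coal_heat_kcal = 500 * food_kcal   # 500 lb of coal: heat value ~500x the food eaten

miner_mech_hphr = 0.20 * food_kcal / HP_HOUR_KCAL        # ~20% muscular efficiency
engine_mech_hphr = 0.01 * coal_heat_kcal / HP_HOUR_KCAL  # 1% engine efficiency

surplus = engine_mech_hphr - miner_mech_hphr
print(f"miner: {miner_mech_hphr:.1f} hp-hr, engine: {engine_mech_hphr:.1f} hp-hr, "
      f"surplus: {surplus:.0f} hp-hr per man-day")
```

This recovers the figures in the text: about 1 hp-hr of mechanical energy expended by the miner, about 27 hp-hr yielded by the engine, and a surplus of roughly 26 hp-hr per man-day.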

Conscious recognition of coal as the arbiter of industrial success came only in the later nineteenth century, symbolised when Jevons published The coal question, in which he wondered anxiously about the brevity of British industrial supremacy given that other parts of the world had much larger reserves of coal and were already beginning to take advantage of their good fortune. When the first edition of The coal question was published in 1865, little was known about the scale of the coal resources in other countries and Jevons was relatively optimistic about the future, but by the time of the third edition in 1906 it was clear that several countries, and especially China and the United States, possessed far larger reserves, and his tone changed: ‘When coalfields of such phenomenal richness are actively developed, countries in which there no longer remain any large supplies of easily and cheaply mined coal are likely to feel the effect of the resulting severe competition.’

The increase in the productive powers of an industrialized society was such that for the first time in human history the miseries of poverty, from which previously only a small minority were exempt, could be put aside for whole populations. Success in escaping from the constraints which affected all organic economies did not, however, mean a swift and uninterrupted move towards greatly improved material circumstances for all. The potential for such a change existed. Realizing it proved to be another matter. Economic structures which divided the benefits of increasing productive power very unevenly; political ineptitude, prejudice, or mismanagement; various kinds of discrimination; and the destruction of war – all were still capable of depriving much of the population of this benefit.

Organic economies necessarily operated within strict limits. The industrial revolution made it possible to escape them. But for the country in which an industrial revolution first took place the definitive release from poverty was long in arriving for much of the population. If the industrial revolution did indeed occur between c.1780 and c.1840, and if the possibility of abolishing the traditional concomitants of poverty is one of its defining characteristics, then the realization of the promise was long delayed for much of the population, as the social investigations of Mayhew, Booth, Rowntree, and others in the decades before and immediately after the First World War make clear. Many contemporaries were bitter about the sufferings of the urban poor while others were triumphalist about the achievements of the Victorian age.

From mid-Victorian times the level of real incomes was rising, and in most respects the circumstances of life for the bulk of the population were better in 1900 than they had been in 1850. Further progress was delayed for half a century, and at times reversed, by the effects of two world wars and the Great Depression. Only in the second half of the twentieth century was improvement in health, education, and general welfare widespread, substantial, and sustained.

Looking back over the last century-and-a-half it is perhaps unsurprising that progress was initially limited and spasmodic. In part this was due to ‘external’ factors, the impact of major wars and the great slump, but it reflected also the unfamiliarity of both the problems and the opportunities which arose with the acquisition of unprecedented powers of production. The enormous and very rapid growth of cities and towns, for example, which reflected the changing importance of different sectors of the economy, posed massive problems which were initially difficult to resolve. Mortality was for many years much higher in cities than in small towns or the countryside, and little progress in improving the health of urban populations was possible until the modes of transmission of many diseases were better understood. Cholera epidemics, for example, could not be eliminated until the importance of securing a supply of pure water had been appreciated. And even when the knowledge had been gained, the infrastructural investment needed to reduce and eventually overcome this problem took time. Securing educational provision for all children was achieved only over several decades. This was due in part to the nature of the politics of the day, but even without delay for this reason it could not have happened overnight. In other words, the fact that the nature of the industrial revolution was so little understood at the time, and that the changes which came in its train were so radical, should lessen any surprise that its potential benefits were not realized instantly.

England was essentially self-sufficient in temperate zone foodstuffs until the end of the eighteenth century. The government in Westminster assumed that this was both the norm and highly desirable. It was periodically thrown into something approaching panic by the prospect of a seriously defective grain harvest, which gave rise to restrictions on the use of grain, notably the malting of barley to produce beer, and to desperate endeavors to secure supplies from overseas. The Netherlands, in contrast, routinely imported Baltic grain on a large scale, since there was no prospect of local self-sufficiency. The import of food was balanced by a large export trade in foodstuffs, notably fish (the scale of Dutch fish exports was remarkable, especially in the seventeenth century), but also dairy produce. During the later eighteenth century, exports of dairy produce grew rapidly and by the beginning of the nineteenth century accounted for half of all agricultural exports. English agriculture improved its efficiency by an increasing regional specialization in, say, beef cattle, dairy produce, or barley for malting, but the specialization was predominantly in relation to demand within the country. Dutch agriculture, reflecting a salient feature of the Dutch economy in its golden age, specialized, so to speak, internationally rather than just nationally.

The scale of peat production and consumption in the Netherlands was truly remarkable. The quantity of energy from peat available per person in the Netherlands was 13.6 gigajoules annually. The comparable English figure from coal is 7.5 gigajoules, barely half the Dutch figure. It should occasion no surprise, therefore, that the Dutch industries which enjoyed a marked comparative advantage at this time, because they were all in need of heat energy on a large scale, were almost identical to the English industries whose prospects improved markedly with the availability of coal on a large scale and at a competitive price. Peat was first exploited in the low-lying bogs of the alluvial areas which were close to navigable waterways. But exploitation of peat in the hoogveen, where the land was higher above sea level, depended upon a heavy prior capital expenditure on canal construction, without which the peat was economically inaccessible.

It took a quarter of a millennium for coal to change from supplying a tenth of the energy consumed in England and Wales to nine-tenths. Its increasing importance reduced the pressure on other energy sources, and notably on forest land.

Access to coal meant that the rate of growth could be maintained or even accelerated rather than having to slow down, as was otherwise unavoidable.

Take 1 per cent per annum as an illustration: even this very modest level of growth would mean that, over two centuries, output would expand roughly 8-fold.
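The compounding is easy to verify. A one-line check: 1.01 raised to the 200th power comes to a little over 7.3, which the text rounds up to "roughly 8-fold":

```python
# Compound growth at 1% per annum sustained over two centuries.
growth = 1.01 ** 200
print(f"{growth:.1f}-fold expansion")  # about 7.3-fold
```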

In the Victorian period, harvest festival services were often held in parish churches in the autumn, with the church decorated with sheaves of corn and baskets of fruit. The harvest festival service was in a sense the celebration of acquisition of a store of energy which could be used to ‘fuel’ people and farm stock, or to provide the raw material for industries such as straw plaiting for the forthcoming year. Earlier the hay harvest had provided a similar food source for cattle and sheep and so, indirectly, for the production of wool and hides. To hold a celebration once the harvest had been safely gathered in was highly appropriate. For many generations the stock of energy acquired in the wake of a season of plant growth had provided the basis for both life and work between one harvest and the next. At the level of the local community it exemplified dependence upon the annual cycle of insolation and its conversion into a form which was useful to man by photosynthesis.

The mining of coal was not subject to a similar annual rhythm. It was a store which could be drawn down at any time and in any required quantity, at least for a period of centuries. The local parish church in a mining community was not decorated annually with coal, and indeed might well celebrate the getting in of the harvest in the traditional fashion, but the new mineral source of energy had come to dwarf older sources by the Victorian age even though its significance was not celebrated in a comparable fashion.

The plea in the Lord’s Prayer, ‘Give us this day our daily bread’, may well seem quaint in an age when in advanced economies superabundant nutrition is a greater threat than malnourishment. For a large majority of the population of England and other industrialized countries, homes are warm and dry even in midwinter; and they are rarely over-run with vermin, a state of affairs beyond attainment for most families in earlier times. Literacy was once the privilege of a tiny minority of the population and formal education played no part in the upbringing of most children. Today school and other types of formal education form a major part of the lives of children for anything between a dozen and twenty years. A list of this sort could be greatly extended, and all such changes can be said to have been made possible by the creation of wealth and plenitude of resources which lie downstream from the industrial revolution.



Posted in Agriculture, Energy, Life Before Oil, Limits To Growth | 6 Comments

Ugo Bardi: “Energy Dominance,” what does it mean? Decoding a Fashionable Slogan

Preface.  A very good article about energy and war, explains a lot about how the world really works.

Alice Friedemann, author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report


Ugo Bardi. 2019. “Energy Dominance,” what does it mean? Decoding a Fashionable Slogan. Cassandra’s legacy.

“Now, I know for a fact that American energy dominance is within our grasp as a nation.” Ryan Zinke, U.S. Secretary of the Interior (source)

“All Warfare is Based on Deception” Sun Tzu, “The Art of War”

Over nearly a half-century, since the time of Richard Nixon, American presidents have proclaimed the need for “energy independence” for the US, without ever succeeding in attaining it. During the past few years, it has become fashionable to say that the US has, in fact, become energy independent, even though it is not true. And, doubling down on this concept, there came the idea of “energy dominance,” introduced by the Trump administration in June 2017.  It is now used at all levels in the press and in the political debate.

No doubt, the US has good reasons to be bullish on oil production. Of the three major world producers, it is the only one growing: it has overtaken Saudi Arabia and it seems to be poised to overtake Russia in a few years. (graphic source).

This rebound in the US production after the decline that started in the early 1970s is nearly miraculous. And the miracle has a name: shale oil. A great success, sure, but, if you think about it, the whole story looks weird: the US is trying to gain this “dominance” by means of resources which, once burned, will be forever gone. It is like people competing to see who can burn their own house down faster. What sense does it make?

Art Berman keeps telling us that shale oil is an expensive resource that could be produced at a profit only under market conditions that it is unrealistic to expect. So far, much more money has been poured into shale oil production than has been returned from the sales of shale oil. “Energy dominance” seems to be just an elaborate way to lose money and resources. Again, what sense does that make?

But there is a logic in the term “energy dominance.” It has to do with the way slogans are used in politics: a slogan is not just a compact way of expressing a certain political concept, it is often a coded message that hides much more than it says. So, we know that “bringing democracy” to a foreign country means to bomb it to smithereens. “Make America great again” means subsidizing the fossil fuel industry. “The Indispensable Country” means, “The American Empire.” And more.

There is nothing wrong in using coded slogans: you only have to know how to decode them. So, “energy dominance” has to be decoded and turned into “military dominance.” Then, things start making sense.

One quick note before you accuse me of being a conspiracy theorist: I am reasonably sure that there is no “control room” in a dark basement of the Pentagon or of the White House deciding long-term economic and military objectives. The decision mechanism of modern states is collective and networked. It is akin to that of anthills: there is nobody in charge, plenty of people push in different directions and, eventually, the giant structure may start moving in a certain direction.

So, the fact that so much money has been directed toward the exploitation of shale oil and gas doesn’t mean that someone at the top decided that it was the thing to be done. It is simply that investors tend to direct their financial resources where they think they’ll have returns, and that may well be the result of a collective hallucination. Investing in shale oil is, basically, a Ponzi scheme, but if Ponzi schemes exist there is a reason for them to exist. Even if investing in something doesn’t generate overall profits, it moves money, benefits contractors, raises the GDP, and the more money is invested the more expectations of profits grow. And so it goes until the bubble bursts, but that may take time.

But there is more than that in this story: it is the military side. We all know that wars are won by the side that can pour more resources into the fight. It was in this way that the first and the second world war were won: the allies could produce more energy in the form of oil, coal, and gas. And, with these energy sources, they could produce more stuff: planes, tanks, cannons, bombs, bullets, and more, all thrown at the Germans until they gave up. Matthieu Auzanneau gives us plenty of examples of this mechanism in his book “Oil, Power, and War.” The Germans always lacked enough oil to power their military machine and that’s why they were doomed from the beginning.

For the military, the lesson of the past world wars is that wars are won by the side which has the largest oil supply. And they remember it. So, if you want to attain military dominance, energy independence is not enough, you need to attain energy dominance.

Everything also makes sense in view of some recent results on the statistical patterns of wars. Wars, it seems, are correlated with the thermodynamic phenomenon of entropy dissipation in complex systems. The more energy there is to dissipate, the faster it is dissipated. And if this dissipation is really fast, it may take the shape of a war — war is the fastest way to destroy (dissipate) accumulated resources. But in order to dissipate resources, you need to accumulate them first, and that is the role of shale oil in the current situation.

Which means that shale oil is not a natural resource; it is a military resource. As such, it doesn’t matter whether it brings a profit for the investors. What matters is how it can be used to maintain and expand that gigantic social and economic structure that we call “Globalization” (another slogan that can be decoded as “the global empire”).

As long as the production of shale oil increases, we face the risk of a new, major world war. We can only hope that the shale bubble bursts by itself first. One more good reason why a Seneca Collapse of oil production would be good for all of us.

Posted in Over Oil, Ugo Bardi

California’s Central Valley aquifers may be gone by the 2030s, Ogallala 2050-2070

Preface. Clearly the human population isn’t going to reach 10 billion or more. California grows one-third of the nation’s food, the eight High Plains states over the Ogallala grow about a quarter of the nation’s food, and the U.S. exports a great deal of food to other nations as well.

December 15, 2016. Groundwater resources around the world could be depleted by the 2050s. American Geophysical Union.

Human consumption could deplete groundwater in parts of India, southern Europe and the U.S. in the coming decades, according to new research presented here today.

In the U.S., aquifers in California’s Central Valley, Tulare Basin, and southern San Joaquin Valley could be depleted in the 2030s.

Aquifers in the southern High Plains, which supply groundwater to parts of Texas, Oklahoma and New Mexico, could reach their limits between the 2050s and 2070s, according to the new research.

New modeling of the world’s groundwater levels finds aquifers—the soil or porous rocks that hold groundwater—in the Upper Ganges Basin area of India, southern Spain, and Italy could be depleted between 2040 and 2060.

By 2050, as many as 1.8 billion people could live in areas where groundwater levels are fully or nearly depleted because of excessive pumping of groundwater for drinking and agriculture, according to Inge de Graaf, a hydrologist at the Colorado School of Mines in Golden, Colorado.

“While many aquifers remain productive, economically exploitable groundwater is already unattainable or will become so in the near future, especially in intensively irrigated areas in the drier regions of the world,” said de Graaf, who will present the results of her new research today at the 2016 American Geophysical Union Fall Meeting.

Knowing the limits of groundwater resources is imperative, as billions of gallons of groundwater are used daily for agriculture and drinking water worldwide, said de Graaf.

Previous studies used satellite data to show that several of the world’s largest aquifers were nearing depletion. But this method can’t be used to measure aquifer depletion on a smaller, regional scale, according to de Graaf. In the new research, de Graaf and colleagues from Utrecht University in the Netherlands used new data on aquifer structure, water withdrawals, and interactions between groundwater and surrounding water to simulate groundwater depletion and recovery on a regional scale. The research team used their model to forecast when and where aquifers around the world may reach their limits, or when water levels drop below the reach of modern pumps.

Limits were considered “exceeded” when groundwater levels dropped below the pumping threshold for two consecutive years.
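That two-consecutive-year rule is simple enough to sketch. The following is a minimal illustration of the criterion as described above, not de Graaf’s actual model; the function name and the water-level numbers are hypothetical.

```python
def limit_exceeded(levels_m, pump_threshold_m, run_length=2):
    """Return the index of the year at which simulated groundwater levels
    have stayed below the pumping threshold for `run_length` consecutive
    years (the study's "exceeded" criterion), or None if that never happens."""
    consecutive = 0
    for year, level in enumerate(levels_m):
        if level < pump_threshold_m:
            consecutive += 1
            if consecutive >= run_length:
                return year
        else:
            consecutive = 0  # a wetter year resets the count
    return None

# Hypothetical annual water-table heights (meters above the pump intake)
levels = [12.0, 8.5, 5.1, 2.9, 1.4, -0.3, -1.1, 0.2]
print(limit_exceeded(levels, pump_threshold_m=0.0))  # → 6, the second consecutive dry year
```

The reset on a wetter year matters: a single drought year that dips below the pump threshold does not count as depletion under this criterion.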

The new study finds heavily irrigated regions in drier climates, such as the U.S. High Plains, the Indus and Ganges basins, and portions of Argentina and Australia, face the greatest threat of depletion.

Although the new study estimates the limits of global groundwater on a regional scale, scientists still lack complete data about aquifer structure and storage capacity to say exactly how much groundwater remains in individual aquifers, she said.

“We don’t know how much water there is, how fast we’re depleting aquifers, or how long we can use this resource before devastating effects take place, like drying up of wells or rivers,” de Graaf said.

Posted in Groundwater, Water

Book review of “The Soul of an Octopus”

Preface.  The octopus is an amazing creature, more than can be conveyed in the bits and pieces I’ve selected below.  The only downside to reading it is that you may not want to eat octopus anymore!

2018: A team of researchers bathed octopuses in MDMA and found that it makes these typically asocial animals more social. The experiment tested the hypothesis that some neurotransmitter systems are shared across vertebrate and invertebrate species. In this case, the authors were studying a serotonin transporter binding site of MDMA that they believed octopuses share evolutionarily with humans — even though our lineages are separated by over 500 million years. Basically, they thought that MDMA would have a similar effect on octopus behavior to the effect it has on human behavior. To find out more, do an internet search on “psychedelic octopus.”

Sy Montgomery. 2016. The Soul of an Octopus: A Surprising Exploration into the Wonder of Consciousness. Atria books

Not just anyone can open up the octopus tank, and for good reason. A giant Pacific octopus—the largest of the world’s 250 or so octopus species—can easily overpower a person. Just one of a big male’s three-inch-diameter suckers can lift 30 pounds, and a giant Pacific octopus has 1,600 of them [a combined 48,000 pounds]. An octopus bite can inject a neurotoxic venom as well as saliva that has the ability to dissolve flesh. Worst of all, an octopus can take the opportunity to escape from an open tank, and an escaped octopus is a big problem for both the octopus and the aquarium.

The giant Pacific octopus is one of the fastest-growing animals on the planet. Hatching from an egg the size of a grain of rice, one can grow both longer and heavier than a man in three years.

Athena is about two and a half years old and weighs roughly 40 pounds. I reach to touch her head, which is silky and softer than custard. Her skin is flecked with ruby and silver, a night sky reflected on the wine-dark sea. As I stroke her with my fingertips, her skin goes white beneath my touch. White is the color of a relaxed octopus.  Later, Athena rises up from her lair like steam from a pot. She’s coming to Wilson so quickly it takes my breath away—much faster than she had come to see me earlier.

Octopuses can taste with their entire bodies, but this sense is most exquisitely developed in their suckers. Athena’s is an exceptionally intimate embrace. She is at once touching and tasting my skin, and possibly the muscle, bone, and blood beneath. Though we have only just met, Athena already knows me in a way no being has known me before.

Truman and George were laid-back octopuses, but Athena had earned her name, that of the Greek goddess of war and strategy. She was a particularly feisty octopus: very active, and prone to excitement, which she showed by turning her skin bumpy and red. Octopuses are highly individual.

At the Seattle Aquarium, one giant Pacific octopus was named Emily Dickinson because she was so shy that she spent her days hiding behind her tank’s backdrop; the public almost never saw her.

Another was dubbed Lucretia McEvil, because she constantly dismantled everything in her tank. George had been relaxed and friendly with his keeper, senior aquarist Bill Murphy. “Some people find them very creepy and slimy,” he said, “but I enjoy it a lot. In some ways they’re just like a dog. I actually pet his head or scratch his forehead. He loves it.”

Octopuses realize that humans are individuals too. They like some people; they dislike others. And they behave differently toward those they know and trust.

Occasionally an octopus takes a dislike to a particular person. At the Seattle Aquarium, when one biologist would check on a normally friendly octopus each night, she would be greeted by a blast of painfully cold salt water shot from the funnel. The octopus hosed her and only her.

Wild octopuses use their funnels not only for propulsion but also to repel things they don’t like, just as you might use a snow blower to clear a sidewalk.  One volunteer at the New England Aquarium always got this same treatment from Truman, who would shoot a soaking stream of salt water at her every time he saw her. Later, the volunteer left her position at the aquarium for college. Months later, she returned for a visit. Truman—who hadn’t squirted anyone in the meantime—instantly soaked her again.

A lion is a mammal like us; an octopus is put together completely differently, with three hearts, a brain that wraps around its throat, and a covering of slime instead of hair. Even their blood is a different color from ours; it’s blue, because copper, not iron, carries its oxygen.

Back home, I tried to replay my interaction with Athena. It was difficult. There was so much of her, everywhere. I could not keep track of her gelatinous body and its eight floaty, rubbery arms. I could not keep track of her continually changing color, shape, or texture. One moment, she’d be bright red and bumpy, and the next, she’d be smoother and veined with dark brown or white. Patches on different parts of her body would change color so fast—in less than a second—that by the time I registered the last change, she would be on to another.

Unconstrained by joints, her arms were constantly questing, coiling, stretching, reaching, unfurling, all in different directions at once. Each arm seemed like a separate creature, with a mind of its own. In fact, this is almost literally true. Three fifths of octopuses’ neurons are not in the brain but in the arms.

An octopus can also voluntarily control its skin texture—raising and lowering fleshy projections called papillae—as well as change its overall shape and posture. The sand-dwelling mimic octopus, an Indonesian species, is particularly adept at this. One online video shows the animal altering its body position, color, and skin texture to morph into a flatfish, then several sea snakes, and finally a poisonous lionfish—all in a matter of seconds.

Human eyes have three visual pigments, allowing us to see color. Octopuses have only one—which would make these masters of camouflage, commanding a glittering rainbow of colors, technically color-blind. How, then, does the octopus decide what colors to turn? New evidence suggests cephalopods might be able to see with their skin.


I was impressed that she even recognized a face so unlike her own, and wondered whether Athena might like to taste my face as well as look at it. I asked Bill if that was ever allowed. “No,” he said emphatically, “we don’t let them near the face.” Why? Could she pull out an eye? “Yes,” Bill said, “she could.” Bill has gotten into futile tugs-of-war with octopuses who have grabbed the handles of cleaning brushes. “The octopus always wins. You have to know what you’re doing,” he said. “You cannot let her go near your face.” “I felt as if she wanted to pull me into the tank,” I told him. “She could pull you into the tank, yes,” he said. “She will try.

Octavia grabbed my left arm with three of her arms and my right arm with yet another of hers, and began to pull—hard. Her thorny red skin showed her excitement. Her suction was strong enough that I felt her drawing the blood to the surface of my skin. I would go home with hickeys that day. I tried to stroke her, but my hands were immobilized. She kept me at arm’s length; each arm was at least three feet long.

Scott was pulling with all his considerable strength on the tongs to keep Octavia from pulling me into the tank. I submitted to the tug-of-war. I had no choice. Though fairly fit for a person of my size (five foot five, 125 pounds), age (53), and sex (female), I didn’t have the upper-body strength to resist Octavia’s hydrostatic muscles. An octopus’s muscles have both radial and longitudinal fibers, thereby resembling our tongues more than our biceps, but they’re strong enough to turn their arms to rigid rods—or shorten them in length by 50 to 70%. An octopus’s arm muscles, by one calculation, are capable of resisting a pull one hundred times the octopus’s own weight. In Octavia’s case, that could be nearly 4,000 pounds.

William Wyatt Gill spent two decades in the South Seas, among octopuses much smaller than the giant Pacific; but even these species are strong enough to overwhelm a young, strong, fit man. He wrote that “no native of Polynesia doubts the fact” that octopuses are dangerous.

Octavia was using only a tiny fraction of her great strength. Compared to what she could do, this was just a playful tug.

Octopuses live fast and die young: giant Pacific octopuses are probably among the longest-lived octopus species, and they usually live only about three or four years. And by the time they arrive at the aquarium, they are usually at least a year old, sometimes more.

Dying Octopus

“I had no idea George was about to die,” Bill said. “Usually they change in body and behavior and coloration. They don’t stay as red. They’re whitish all the time. The intensity isn’t there. They’re less playful. It’s like old age in people. Sometimes they get age spots, white patches on their skin that seem to be sloughing off.”

The bliss of stroking an octopus’s head is difficult to convey to most people, even to animal lovers. A friend asked, “Aren’t they slimy?” Slime is a very specialized and essential substance, and there’s no denying that octopuses have slime in spades; almost everyone who lives in the water does. Slime helps sea animals reduce drag while moving through the water, capture and eat food, keep their skin healthy, escape predators, and protect their eggs. Octopus slime is sort of a cross between drool and snot, and it’s very useful. It helps to be slippery if you’re squeezing your body in and out of tight places. Slime also keeps the octopus moist if it wants to emerge from the water, which some species of octopus do with surprising frequency in the wild.

How did the octopus get to be so smart?

  1. The event driving the octopus toward intelligence was the loss of the ancestral shell, which freed up mobility. An octopus, unlike a clam, does not have to wait for food to find it; the octopus can hunt like a tiger.
  2. A single octopus may hunt many dozens of different prey species, each of which demands a different hunting strategy, a different skill set, a different set of decisions to make and modify. Will you camouflage yourself for a stalk-and-ambush attack? Shoot through the sea with your siphon for a quick chase? Crawl out of the water to capture escaping prey?
  3. But losing the shell was a trade-off: now the octopus became a big packet of unprotected protein, so just about anything big enough to eat it will do so.
  4. From building shelters to shooting ink to changing color, the vulnerable octopus must be ready to outwit dozens of species of animals, some of which it pursues, others it must escape.
  5. How do you plan for so many possibilities? Doing so demands, to some degree, anticipating the actions—in other words, imagining the minds—of other individuals. The octopus must assess whether the other animal believes its ruse or not, and if not, try something different.
  6. In Jennifer’s book, she and her coauthors report that specific displays are directed at particular species under specific conditions. The Passing Cloud display, for instance, is used by an octopus to scare an immobile crab into moving and thus giving itself away. But to fool a hungry fish, an octopus is more likely to use a different strategy: to rapidly change color, pattern, and shape. Most fish have excellent visual memories for particular search images, but if the octopus changes from dark to pale, jets away, and then turns on stripes or spots, the fish can’t keep track of it.
  7. An octopus has to match wits with many different species of bird, whale, seal, sea lion, shark, crab, fish, and turtle, as well as other octopuses and human divers—all with different kinds of eyes, different lifestyles, different senses, different motives, different personalities, and different moods.

In the wild, over the course of about three weeks, a female giant Pacific octopus might lay between 67,000 and 100,000 eggs. In the wild, most female octopuses lay eggs only once, and then guard them so assiduously they won’t leave them even to hunt for food. The mother starves herself for the rest of her life. A deep-sea species holds the record for this feat, surviving four and a half years without feeding while brooding her eggs near the bottom of Monterey Canyon, nearly a mile below the surface of the ocean.

The octopus goes all the way back beyond the Cenozoic, the time when our ancestors descended from the trees; back through the Mesozoic, when dinosaurs ruled the land; through the Permian and the rise of the ancestors of the mammals; back through the Carboniferous’s coal-forming swamp forests; back past the Devonian, when amphibians emerged from the water; past the Silurian, when plants first took root on land—all the way to the Ordovician, to a time before the advent of wings or knees or lungs, before the fishes had bony jaws, before blood pumped from a multichambered heart, to more than 500 million years ago.

A giant Pacific octopus can regenerate up to one third of a lost arm in as little as six weeks. Unlike a lizard’s regenerated tail, which is invariably of poorer quality than the original, the regrown arm of an octopus is as good as new, complete with nerves, muscles, chromatophores, and perfect, virgin suckers.

Arms can have a personality

But the bold versus shy arms could be something quite different. While arms can be employed for specialized tasks—for example, as your left hand holds the nail while your right hand wields the hammer—each arm may have its own personality, almost like a separate creature. Researchers have repeatedly observed that when an octopus is in an unfamiliar tank with food in the middle, some of its arms may walk toward the food—while some of its other arms seem to cower in a corner, seeking safety. Each octopus arm enjoys a great deal of autonomy. In experiments, a researcher cut the nerves connecting an octopus’s arm to the brain, and then stimulated the skin on the arm. The arm behaved perfectly normally—even reaching out and grabbing food. The experiment demonstrated, as one colleague told National Geographic News, “there is a lot of processing of information.”

As science writer Katherine Harmon Courage put it, the octopus may be able to “outsource much of the intelligence analysis [from the outside world] to individual body parts.” Further, it seems “that the arms can get in touch with one another without having to go through the central brain.”

Another problem is that, this time of year, most of the octopuses are missing from one to four arms. Lingcod, voracious predators that grow to 80 pounds, with eighteen sharp teeth, are spawning, and will bite and bully octopuses to evict them from their dens and claim the holes as their own. This is likely how our octopus lost her arm.


The Octopus Blind Date has been a regular event at the Seattle Aquarium for nine years—the jewel in the crown of Octopus Week, the biggest draw of the aquarium year.

Octopus Week might bring 6,000 visitors. “It’s funny to think they come to see two animals mate,” says Kathryn Kegel, thirty-one, the aquarium’s lead invertebrate biologist. But for her, too, even after working here seven years, it’s one of the most thrilling days of the year. “The matings I’ve seen are such a ball of arms, you can’t tell apart the individual animals.” She’s never missed a Blind Date during her tenure. She reckons there’s “about a fifty-fifty chance they’ll be interested.” They may do nothing. Or one might attack the other. If this happens, she and another diver will try to separate them—if they can. “There’s too many arms to do much about it, though,” she admits.

One year, the female killed the male and began to eat him. And once, one octopus managed to remove the barrier separating the two tanks, and the two mated the night before the Blind Date.

Although there are exceptions, most species of octopus usually mate in one of two familiar ways: the male on top of the female, as mammals usually do, or side by side. The latter is sometimes called distance mating, an octopus adaptation to mitigate the risk of cannibalism. (One large female Octopus cyanea in French Polynesia mated with a particular male twelve times—but after an unlucky thirteenth bout, she suffocated her lover and spent the next two days eating his corpse in her den.) Distance mating sounds like the ultimate in safe sex. The male extends his hectocotylized arm some distance to reach the female; in some species, this can be done while neither octopus leaves its adjacent den.

The Pacific striped octopus lives in communities of up to forty animals. Males and females cohabit in dens, mate beak-to-beak, and produce not just one but many broods of eggs over their lifetimes.

In the ocean, not a tank

There is a site three hours south of Sydney that they call Octopolis, where, at a depth of about 60 feet, they have found as many as 11 Octopus tetricus living within one or two yards of each other. These are fairly large octopuses, with arm spans of six feet or more, and distinctive, soulful white eyes that also give the species the nickname “the gloomy octopus.” Matthew told me, “I’ve had a couple of experiences where we were diving at this site and an octopus grabbed my hand, and took me to its den, five meters away.” Once, an octopus took him on what he called “a big circuit” around the area, a tour that lasted for ten or twelve minutes. Afterward, the octopus climbed all over Matthew and investigated him with his suckers, as if, having shown him around the neighborhood, he now wanted to explore his human guest in turn. The octopuses he met, Matthew told me, were “not aggressive—they’re curious.” Because he dives Octopolis regularly, Matthew is certain the octopuses there recognize him. Perhaps, he mused, they even look forward to his visits. He often brings them toys—bottles, plastic screw-apart Easter eggs, and GoPro underwater video cameras—all of which they dismantle with interest and sometimes drag into their dens.

To Keith’s amazement, after giving him a guided tour, the first octopus met up with a second octopus. Keith couldn’t decide which one to photograph. How can you decide which of your subjects is more photogenic, when both change color and shape before your eyes? Keith chose to stick with the first one, who crawled around the side of a rock. As Keith was photographing it, the second octopus traveled up and over a higher rock nearby, stood up tall on its arms, as if on tiptoe, and, with what looked like keen interest, leaned toward Keith and the other octopus he was photographing. “It actively positioned itself so it could observe me,” Keith said. “It was so amazing to be observed like that. In all my years photographing animals underwater—sharks, tuna, turtles, fish—I’ve never encountered anything that watched me like this. It was like a person watching a model at a fashion-photo shoot, or watching a pro football player at a game. Most of the time, fish observe you and notice you. But they don’t look at you like this, like they are watching and learning. It was one of the most incredible experiences of my life.”

Keith points to a school of yellowfin goatfish, their chin whiskers equipped with chemoreceptors that let them taste and smell food hidden among coral and under sand. Right now these 11-inch fish sport electric yellow stripes over satiny white; but, like those of the octopus, their colors aren’t static. These fish are capable of a feat that earned their Mediterranean relatives an unenviable star turn at Roman feasts. Goatfish were presented to guests live, so that diners could watch them, in their death throes, change color.

Beneath us, emerald and turquoise parrot fish pluck algae from coral with their beaks—actually mosaics of tightly packed teeth. Each sleeps in its own private mucous cocoon, a slimy sleeping bag secreted from the mouth, to conceal its scent from predators. Parrot fish are sequential hermaphrodites: All are born female, and later transform themselves to males.

In the village of Papetoai, just a short drive from CRIOBE, there was once a temple dedicated to the octopus, the guardian spirit of the place. To Mooréa’s seafaring people, the supernaturally strong, shape-shifting octopus was their divine protector, its many reaching arms a symbol of unity and peace. Today, a Protestant church occupies that site. Built in 1827, the oldest church in Mooréa still honors the octopus. The eight-sided building nestles in the shadow of Mount Rotui, whose shape, to the people here, resembles the profile of an octopus.

Keith and I are the only foreigners to join the packed congregation of about 120 people. Almost everyone around us has a tattoo; many of the women wear elaborate hats made of bamboo and live flowers. The minister wears a long, waist-length garland of green leaves, yellow hibiscus, white frangipani, and red and pink bougainvillea; the women in the choir are adorned with headdresses of flowers and leaves.


  1. In Hawaii, ancient myths tell us our current universe is really the remnant of a more ancient one—the only survivor of which is the octopus, who managed to slip through the narrow crack between worlds.
  2. On the Gilbert Islands, the octopus god, Na Kika, was said to be the son of the first beings; with his eight strong arms, he shoved the islands up from the bottom of the Pacific Ocean.
  3. On the northwest coast of British Columbia and Alaska, the native people say the octopus controls the weather and wields power over sickness and health.

What goes on in Karma’s head—or the larger bundle of neurons in her arms—when she sees us? Do her three hearts beat faster when she catches sight of Bill, or Wilson, or Christa, or Anna, or me? Would she feel sad if we disappeared? What does sadness feel like for an octopus—or for anyone else, for that matter? What does Karma feel like when she pours her huge body into a tiny crevice of her lair? What does capelin taste like on her skin?

An octopus’s mouth is in its armpits. Octopuses generally grab prey with their suckers, then pass it from sucker to sucker, as if along a conveyor belt, until it reaches the mouth.

Christa places a second fish in the pillowy, white cups of another arm. Instantly Kali becomes exceptionally calm. Lying upside down at the surface, arms splayed, she gives us an extraordinary view of her shiny, black beak. This is the first time even Wilson has seen the beak inside a living octopus. It is a private and trusting moment, her sharing with us this surprising part of her, normally hidden inside at the confluence of her arms.

On his first few dives, Ken had not found a suitable octopus. Sometimes he saw no octopus at all. “Sometimes you just get skunked,” he said. But Ken was determined. It took him six dives, but finally he found the octopus that would be destined for Boston. He spotted her at a depth of about 75 feet, hiding in a rock formation, with just her suckers sticking out. Ken had touched her gently and she had jetted from her crevice—directly into his waiting monofilament net.

“The net is so soft you wouldn’t feel its abrasion on your face,” Ken told me. “You have to treat these animals with kid gloves. You can’t yank them to the surface. You don’t want to shock them.” The water temperature at that depth may be more than 15°F colder than the water at the surface, so he had transferred her from the net to a closed container in about 50 gallons of water, and hauled everything slowly to the surface.

Karma now rises to the top of the barrel when I slap the water, so calm in our presence she often turns nearly pure white when we play with her. She’s active, but not nearly as exuberant as Kali. She prefers to suck on us with her larger suckers, sometimes hard enough to give us hickeys that persist for twenty-four hours. When we try to interact with the tips of her arms she lets them slip from our hands. After twenty minutes or so she typically relaxes, holding us gently. But then she grabs us again, more emphatically, as if to remind us: I am strong enough to pull you in. I am gentle because I choose to be.


A friend who works with elephants told me of a woman who called herself an animal communicator, who was visiting an aggressive elephant at a zoo. After her telepathic conversation with the elephant, the communicator told the keeper, “Oh, that elephant really likes me. He wants to put his head in my lap.” What was most interesting about this interaction was the part the communicator may have gotten right: Elephants do sometimes put their heads in the laps of people. They do this to kill them. They crush people with their foreheads like you would grind out a cigarette butt with your shoe.

Marion Britt further demonstrated the positive power of interesting, gentle, loving interaction between keepers and the animals in their care. And she did it by directly handling the most fearsome animals in the aquarium—the 13-foot-long, 300-pound anacondas. “Before Marion,” says Wilson, “nobody would go into the tank with the anacondas.”

South America’s top predators, anacondas readily hunt and kill adult deer, as well as 130-pound capybaras, and have been known to eat jaguars. I happen to have met one of the best-known biologists studying anacondas, Jesus Rivas, who has documented two predatory attacks by these powerful constricting snakes on his assistants in the field. Humans “are well within the predator-to-prey ratio” of anacondas, who can grow to 30 feet, he said. The only reason anacondas don’t attack humans more often is that, other than Rivas and his field team, people don’t venture where they know anacondas are found.

But Marion did. When she started at the aquarium as a twenty-four-year-old intern in Scott’s gallery in 2007, there were three anacondas—whom nobody could safely touch.

By the time Marion stopped working at the aquarium, the two larger anacondas, Kathleen and Ashley, would slither up to her and curl up with their heads in her lap. And now, thanks to Marion, no more are snakes traumatized by head restraint whenever they need to be moved from their tank for their yearly veterinary checkup, or to treat an illness, or when the tank needs to be drained. The staff no longer dreads interacting with them. Clearly, the snakes are happier and healthier for it.

The rest of the staff has also learned to recognize when the snakes are not in the mood to be handled, and back off at these times to try another day.

“Just about every animal,” Scott says—not just mammals and birds—“can learn, recognize individuals, and respond to empathy.”

Scott reads other fish cues just as fluently. When we visited the cichlids in their new home, he compared those who had just been moved to those who had been living there for weeks or months. The stripes on the new immigrants were paler. “And look at this one,” he said, pointing to a fish who was already at home in the tank. “See the sparkle in the eye? Now look at this other one. You don’t see the sparkle.” Scott can read the faces of fishes as easily as you or I read a person’s.

Every day, animals at the aquarium are being born and dying, arriving from collection expeditions or from U.S. Fish & Wildlife Service agents, or getting shipped to and from other aquariums throughout the United States and Canada. The comings and goings are always delicate, frequently surprising events. One morning I find Bill has been gifted with a 21-pound lobster caught off Nauset Beach in Orleans, Massachusetts—given by the anonymous winner of a raffle at Cap’n Elmer’s fish market to benefit Dana-Farber Cancer Institute. The lobster’s claws are so heavy he cannot lift them out of water. Another day, eighteen Amazon stingrays arrive in Freshwater, each as large as a bathmat. They had been living in a huge tank owned by a paraplegic man whose ground-floor apartment is being renovated; they have grown too large for him to keep.

Animal-keeping institutions aren’t all the same in the care they give sick inmates. When a friend of mine was working at a small zoo in the early ’80s, their kangaroo fell ill. She called a zoo in Australia for help. “What do you do when your kangaroo gets sick?” she asked. “Shoot it and go catch another one,” came the reply.

Rare: The High-Stakes Race to Satisfy Our Need for the Scarcest Metals on Earth by Keith Veronese

Preface.  Capitalism believes there’s a solution for everything due to Man’s Inventive Brain, but when it comes to getting metals out of the earth, there are some very serious limitations.  In parts per billion, there’s only 4 of platinum, 20 of silver, and less than 1 part for many important metals. Yet they are essential for cars, wind turbines, electronics, military weapons, oil refining, and dozens of other uses listed below.

China controls 97% of rare earth metals.   Uh-oh.

Our civilization is far more dependent than I’d realized on very rare elements, which are being dissipated because so few are recycled (recycling is nearly impossible anyway: the cost is too high, and many of the elements are hard to separate from one another).

So in addition to peak oil, add in peak metals to the great tidal wave of collapse on the horizon.

What follows are my kindle notes.


Keith Veronese. 2015. Rare: The High-Stakes Race to Satisfy Our Need for the Scarcest Metals on Earth. Prometheus books.

Scientifically, metals are known for a common set of properties. Almost all metals have the ability to transmit electricity and heat—very useful properties in the world of electronics. Most metals can be easily bent and molded into intricate shapes. As a nice bonus, most metals are resistant to all but the most extreme chemical reactions in the outside environment, with the added stability increasing their usefulness.

A very apparent exception to this stability, however, is the rusting of iron, a natural process that occurs as iron is exposed to oxygen and water over time in junkyards, barns, and elsewhere.

Is a particular metal hard to find because there is a limited amount, is it simply difficult to retrieve, or does technological demand outpace supply? The acquisition difficulty is likely due to a combination of all of these reasons.

Parts per billion

4          Platinum, a scarce, precious metal, exists in four parts per billion of Earth’s crust—only four out of a billion atoms within the crust are platinum. This is an extremely small amount. To put the amount of platinum on Earth in an easier-to-visualize light, imagine if one took all the platinum mined in the past several decades and melted it down; the amount of molten platinum would barely fill the average home swimming pool.

20        Silver, a metal many use on a daily basis to eat with, exists at only a 20-parts-per-billion value—20 out of every billion atoms on the planet are silver.

<1        Osmium, rhenium, iridium, ruthenium, and even gold exist in smaller quantities—much less than one part per billion—while some are available in such small concentrations that no valid measurement exists.

On the extreme end of the scarcity spectrum is the metal promethium. The metal is named for the Greek Titan Prometheus, a mythological trickster who is known for stealing fire from the gods. Scientists first isolated promethium in 1963 after decades of speculation about the metal. Promethium is one of the rarest elements on Earth and would be very useful if available in substantial amounts. If enough existed on the planet, promethium could be used to power atomic batteries that would continue to work for decades at a time. Estimates suggest there is just over a pound of promethium within the crust of the entire planet. When the density of the metal is accounted for, this is just enough of the metal to fill the palm of a kindergartner’s hand.
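The two volume claims above (a swimming pool of platinum, a palmful of promethium) are easy to sanity-check. The figures below—roughly 10,000 tonnes of platinum mined in all of history, platinum’s density of about 21,450 kg/m³, and promethium’s density of about 7.26 g/cm³—are outside estimates, not numbers from the book:

```python
# Back-of-envelope check on the two volume claims above.
# Outside figures (not from the book): ~10,000 tonnes of platinum mined
# historically; platinum density ~21,450 kg/m^3; promethium ~7.26 g/cm^3.

platinum_volume_m3 = (10_000 * 1_000) / 21_450
print(f"all mined platinum: ~{platinum_volume_m3:.0f} m^3")
# ~466 m^3 -- a large residential pool holds a few hundred cubic meters,
# so the book's comparison is at least the right order of magnitude.

promethium_volume_cm3 = 454 / 7.26   # about one pound of the metal
print(f"one pound of promethium: ~{promethium_volume_cm3:.0f} cm^3")
# ~63 cm^3, roughly a golf ball and a half -- a small palmful indeed.
```

Both claims hold up as orders of magnitude, which is all these comparisons are meant to convey.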

This special attraction to iron explains why so many prized metals are hard to find. Earth’s molten core is estimated to be up to 90% iron, and over billions of years these iron-loving metals have sunk down through the crust, drawn ever closer to the planet’s iron core. This drift toward the core depletes the amount of the metals available in Earth’s crust and poses a problem for mining efforts: it prevents the formation of concentrated deposits that would be worth mining, leaving the metals scattered through the crust in sparse amounts.

The mass of Earth is approximately 5.98 × 10²⁴ kilograms. There is absolutely no easy (or useful) way to put a number of this magnitude into a reasonable context. I mean, it’s the entire Earth. I could say something silly, like the mass of the planet is equal to 65 quadrillion Nimitz-class aircraft carriers at 92 million kilograms apiece. This comparison might as well be an alien number, as it lends no concept of magnitude.
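The aircraft-carrier comparison, at least, is straightforward arithmetic to verify:

```python
# Reproduce the book's aircraft-carrier comparison.
EARTH_MASS_KG = 5.98e24
NIMITZ_MASS_KG = 92e6   # ~92 million kg per carrier, as stated in the text

carriers = EARTH_MASS_KG / NIMITZ_MASS_KG
print(f"~{carriers:.1e} carriers")   # ~6.5e16, i.e. about 65 quadrillion
```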

The overwhelming majority of Earth’s crust is made of hydrogen and oxygen. The only metals present in large amounts within the crust are aluminum and iron, with the latter also dominating the planetary core. These four elements make up about 90% of the mass of the crust, with silicon, nickel, magnesium, sulfur, and calcium rounding out another 9% of the planet’s mass.

Making up the remaining 1% are the 100+ elements in the periodic table, including a number of quite useful, but very rare, metals.

What is easier to understand are reports of the ages and proportions of metals and other elements that reside on the surface of the planet and just below. At the moment, Earth’s crust is the only portion of the planet that can be easily mined by humans.

Deposits of rare metals, including gold, are found under the surface of the planet’s oceans, but these deposits are rarely mined, for a number of reasons. These metals often lie within deposits of sulfides, solid conjugations of metal and the element sulfur that occur at the mouth of hydrothermal vents. While technology exists that allows for the mining of deep-sea sulfide deposits, extremely expensive remotely operated vehicles are often necessary to recover the metals. Additionally, oceanic mining is a politically charged issue, as the ownership of underwater deposits can be easily contested. As technology advances, underwater mining for rare metals and other elements will become more popular, but, for the moment, due to cost and safety reasons, we are restricted to the ground beneath our feet that covers about one-third of the planet. Earth’s crust varies in thickness from 25 to 50 kilometers along the continents, and so far humankind has been unable to penetrate the full extent of the layer. The crust is thickest in the middle of a continent and slowly becomes thinner the closer one comes to the ocean.

So what does it take to dig through the outer crust of our planet? It takes a massive budget, a long timescale, and the backing of a superpower, and even this might not be enough to reach the deepest depths. Over the course of more than two decades during the Cold War, the Soviet Union meticulously drilled to a depth of 12 kilometers into the crust of northwest Russia’s Kola Peninsula. No, this was not part of a supervillain-inspired plan to artificially create volcanoes; it was an engineering expedition born out of the scientific head-butting that was common during the Cold War. The goal of this bizarre plot? To carve out a part of the already thin crust north of the Arctic Circle to see just how far humans could dig, and exactly how the makeup of the outer layer of the planet would change with depth.
Work on the Kola Superdeep Borehole began in 1970, with more than two decades of drilling leaving a 12-kilometer-deep hole in the Baltic Shield, a phenomenal depth, yet one that penetrated but a third of the crust’s estimated thickness. As they tore through the crust in the name of science and national pride, the team repeatedly encountered problems due to high temperatures. While you may feel cooler than ground-level temperatures in a basement home theater or during a visit to a local cavern, as we drill deep into the surface the temperature increases about 15 degrees Fahrenheit for every 1.5 kilometers. At the depths reached during the Kola Borehole expeditions, temperatures well over 200 degrees Fahrenheit are expected. The extremely hot temperatures and increased pressure led to a series of expensive mechanical problems, and the project was abandoned.
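Plugging the quoted rule of thumb into a quick estimate shows how rough it is. The 50 °F surface temperature below is my own assumption, and the 180 °C bottom-hole figure is the commonly reported measurement at Kola, not a number from the book:

```python
# Bottom-hole temperature predicted by the quoted gradient alone.
SURFACE_TEMP_F = 50              # assumed average surface temperature
GRADIENT_F_PER_KM = 15 / 1.5     # 15 degrees F per 1.5 km, as stated
DEPTH_KM = 12

temp_f = SURFACE_TEMP_F + GRADIENT_F_PER_KM * DEPTH_KM
print(f"~{temp_f:.0f} F at {DEPTH_KM} km")   # ~170 F
# The temperature actually measured near the bottom of the borehole was
# about 180 C (356 F), so the real gradient at Kola was far steeper than
# this rule of thumb suggests.
```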

The Kola Superdeep Borehole is the inspiration for the late 1980s and 1990s urban legend of a Soviet mission to drill a “Well to Hell,” with the California-based Trinity Broadcasting Network reporting the high temperatures encountered during drilling as literal evidence for the existence of hell. The Soviet engineers failed to reach hell, and they also failed to dig deep enough to locate rare earth metal reserves. At the moment, we simply lack the technology to breach our planet’s crust. The Kola Borehole never reached even the midpoint of the crust, with at least twenty more kilometers of drilling to go when the project was shut down in 1992. Although Earth’s crust holds a considerable amount of desirable metals, if the metals are not in accessible, concentrated deposits, it is usually not worth the cost it would take for a corporation to retrieve them.

The composition of metals within the planet’s crust is not uniform, unfortunately, further dividing the world’s continents into “haves” and “have nots” when it comes to in-demand metals.

Copper is very hard to isolate from the crust in a pure form. Bronze, a combination of copper with tin, was sufficient for our ancestors to make weapons and tools, but purer forms of copper and other metals are necessary for most modern uses. Copper is found within the mineral chalcopyrite. Isolating pure copper from chalcopyrite calls for a work-intensive process that involves crushing a large mass of chalcopyrite, smelting the mineral, removing sulfur, a gaseous infusion, and electrolysis before 99% pure, usable copper is obtained. Aluminum, a metal so common it is used to make disposable containers for soft drinks, undergoes a similar process before a form that meets standards for industrial use is obtained.

ROCKS INTO SMARTPHONES. The use of exotic metals has become commonplace to improve the activity of existing consumer goods. The piece of aluminum used as part of a capacitor within a smartphone is exchanged for a sliver of tantalum in order to keep up with processor demands, creating an enormous market for the rare metal. Rhodium, ruthenium, palladium, tellurium, rhenium, osmium, and iridium join the extremely well-known platinum and gold as some of the rarest metals on the planet that find regular uses in the medical industry. These rare metals play interesting roles in protecting the environment. A great example is the use of platinum, palladium, and rhodium in catalytic converters, a key component in every automobile built and sold in the United States since the 1970s. Each converter contains a little over five grams of platinum, palladium, or rhodium, but this meager amount acts as a catalyst that turns carbon monoxide into water vapor and harmless emissions for hundreds of thousands of miles, with the metal unchanged throughout the process. An extremely recent and highly relevant example of a little-known metal that jumped to the forefront of demand is tantalum. Tantalum is in almost every smartphone, with a sliver in each of the nearly one billion smartphones sold worldwide each year.

Europium is used to create the color red in liquid-crystal televisions and monitors, with no other chemical able to reproduce the color reliably. As copper communication wires are replaced with fiber-optic cable, erbium is used to coat fiber-optic cable to increase the efficiency and speed of information transfer, and the permanently magnetic properties of neodymium lead to its extensive use in headphones, speakers, microphones, hard drives, and electric car batteries.

Conflict metals share a number of parallels with a much sought-after and contested resource: oil. These metals may serve as the catalyst for a number of political and even military conflicts in the coming centuries. All our heavy metal elements, to which many of the rare metals belong, were born out of supernovas occurring over the past several billion years. These metals, if not recycled or repurposed, are finite resources. Inside the stories of these rare metals are human trials and political conflicts.

In the past decade, the Congo has been ravaged by tribal wars to obtain tantalum, tungsten, and tin, with over five million people dying at the crossroads of supply and demand. Afghanistan and regions near the Chinese border are wellsprings for technologically viable rare metals due to the disproportionate spread of these high-demand metals in the planet’s crust. In an interesting move, the United States tasked geologists with estimating available resources of rare metals during recent military actions in Afghanistan. California, specifically the Mountain Pass Mine within San Bernardino County, was a leading supplier of rare earth metals in North America well into the 1990s. Mountain Pass, however, was shut down in the early 2000s after a variety of environmental concerns outweighed the additional cost of acquiring the rare earth metals mined there compared to overseas sources. Since the metals rarely form concentrated deposits, the places in the world that play home to highly concentrated deposits of in-demand metals become the target of corporations and governments.

The amount of europium, neodymium, ytterbium, holmium, and lanthanum is roughly the same as the amount of copper, zinc, nickel, or cobalt.  Simply put, the majority of the 17 are not rare; they are spread throughout the planet in reasonable amounts. The metals are in high demand and inordinately difficult to extract and process, and it is from a combination of these factors that the 17 derive their rarity.

RARE VERSUS DIFFICULT TO ACQUIRE.  While the 17 metals may be distributed throughout the planet, finding an extractable quantity is a challenge. The elements are spread so well that they appear in very small, trace quantities—a gram here, a milligram there—in deposits and are rarely, if ever, found in a pure form. Extracting and accumulating useful, high-purity quantities of these 17 metals is what lends them the “rare earth” name, as their scattered nature spreads them throughout the planet, but in tiny, tiny amounts. To obtain enough of any one of these 17 to secure a pure sample, enormous quantities of ore must be sifted through and chemically separated through a series of complex, expensive, and waste-creating processes. The basics of chemical reactions act as a spanner in the works through processing, as the desired metal is lost through side-reactions along the way. Small losses in multiple steps add up quickly, further decreasing the amount of metal available for use.

Why expend so much effort to discover and refine these 17 rare metals? Many of them are necessary to fabricate modern electronics, metals woven into our everyday lives and used by brilliant scientists and engineers to fix problems and make electronics more efficient at the microscopic level.

Think of the 17 rare earth metals like vitamins—you may not need a large amount of any one of them to survive, but you do need to meet a regular quota of each one. If not, your near future might resemble that of a passenger traveling in steerage from Europe to the New World as you develop scurvy from lack of vitamin C. Yes, we can make substitutes of one of the rare metals for a similarly behaving one on a case-by-case basis, but we need every metal from lanthanum to lutetium, and in sufficient amounts, if we want the remainder of the twenty-first and the upcoming twenty-second centuries to enjoy the progress we benefited from in the twentieth.

What is it about these 17 metals that make them useful? Reasons vary, but the 15 elements between lanthanum and lutetium huddled for shelter under the periodic table have a subatomic level of similarity—the 15 can hide electrons better than the rest of the elements on the periodic table.

When the new electron is added to its set (one electron for each element after lanthanum), another set of electrons is left unprotected from the positive pull of protons in the nucleus.

The extra “tug” from protons in the nucleus does not play a role as long as the atom is neutral, but should an electron become dislodged (as often occurs with metals) and an ion is formed, the ion will be smaller than normal due to the extra pull. When metals form bonds with other atoms and elements, they often do so as ions, and this break from the norm gives the rare earth metals some of their interesting properties. Because of this phenomenon, ions of the rare earth metals from lanthanum to lutetium grow smaller in diameter from left to right across the row—the reverse of the typical periodic-table trend, in which ions of elements become larger across a row. The electrons traveling along their unique path also bestow on these elements interesting magnetic abilities, properties that make the rare earth metals particularly sought after for use in electronics and a variety of military applications.

Minerals contain a variety of elements, with multiple metals often found in a single mineral deposit. Rocks with a consistently high concentration of a given metal, like magnetite, which has a large amount of iron, are often commonly traded.

Mineral deposits differ in the amount of usable metal they contain, with the concentration of metal, ease of extraction, and rarity playing a role in determining how mining operations proceed. Metals are found in a variety of purities, interwoven in a matrix of organic materials and often with other similar metals. Aluminum is found within bauxite deposits, tantalum and niobium are found with the coveted ore coltan, while cerium, lanthanum, praseodymium, and neodymium are found in the crystalline mineral monazite. Recovering a sample from the ground through hours of digging and manual labor is just the first step—before any of these metals can be used, an extensive process of purification is often necessary. This purification process is essential because high levels of purity are necessary for their efficient use.

Five species of minerals dominate our concern in the hunt for rare earth metals: columbite, tantalite, monazite, xenotime, and bastnäsite. We can further reduce this to four species, since columbite and tantalite are often found together in the ore coltan. Coltan ore contains large deposits of tantalum and niobium, two of the most sought-after rare metals. Central Africa is home to large deposits of coltan, but the fractured nature of the nations in the region and opposing factions have taken the lives of thousands and disrupted countless more as rival groups swoop in to make money off of legal and illegal mining operations in the region.

Raw monazite, xenotime, and bastnäsite are relatively inexpensive. You can buy a rock of the red-and-caramel-colored minerals on any one of a number of websites, with a fingertip-sized piece of monazite or bastnäsite available for the price of a steak dinner at a truck stop diner. Unlike the concentrated deposits of tantalum and niobium in coltan, samples of monazite, xenotime, and bastnäsite minerals hold small amounts of multiple rare earth metals within them.

Sizable deposits of monazite, xenotime, and bastnäsite are found in North America.

Searching for rare earth metals in monazite brings with it a major problem with the ore—most samples are radioactive. The naturally radioactive metal thorium is a large component of monazite, with the fear of environmental damage, additional economic cost, and employee health concerns acting as barriers to monazite mining operations. Once a sufficient quantity of any one of these minerals is obtained, there is a long road to tread before the desired metals are pulled from the rocks. Eighteen steps are necessary before monazite can begin to be purified into individual rare earth metals, while bastnäsite requires 24. Some of these steps are simple—crushing and subsequent heating of the raw mineral ore—while others are large-scale chemical reactions requiring highly trained professionals.

The minerals hold tiny amounts of several different rare metals within them. Until recently, carrying out mining operations solely to garner rare earth metals was considered much too expensive. But if the rare earth metals were a useful by-product of other mining and processing efforts, then so much the better. A great example of this phenomenon is carbonatite, a rock of interest but one less prized than coltan, bastnäsite, xenotime, or monazite. Carbonatite is sought for the rich copper content within, with the added bonus of small amounts of rare earth metals that can be teased out as the mineral is broken down.

The light rare earth elements (LREEs) are lanthanum, cerium, praseodymium, neodymium, and samarium, while europium, gadolinium, terbium, dysprosium, holmium, erbium, thulium, ytterbium, lutetium, and yttrium make up the heavy rare earth elements (HREEs). As a general rule, an HREE is harder to find in substantial usable quantities than an LREE, making the heavy rare earth elements more valuable.

Overall, elements that have lower atomic masses (in day-to-day language, these elements weigh less per atom) are more abundant than elements with higher atomic masses. Hydrogen atoms (a proton and an electron, so an atomic mass of just over one) and helium atoms (two protons, two electrons, and two neutrons, for an atomic mass of four) are two of the most abundant in the universe, while elements at the other end of the periodic table with larger masses, like gold (79 protons, 79 electrons, and an average of 118 neutrons, for an atomic mass of just under 197), are far less abundant. This trailing-off across the periodic table is part of the answer as to why there are fewer of the heavy rare earths on and within the planet (as well as the rest of the universe) than there are light rare earth elements.
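The mass numbers here come straight from counting protons and neutrons (electrons contribute almost nothing), which makes the figures easy to check:

```python
# Mass number = protons + neutrons; the electron's mass is negligible.
elements = {
    "hydrogen": (1, 0),     # most common isotope has no neutrons
    "helium":   (2, 2),
    "gold":     (79, 118),  # average neutron count for natural gold
}

for name, (protons, neutrons) in elements.items():
    print(f"{name}: mass number ~{protons + neutrons}")
# hydrogen ~1, helium ~4, gold ~197
```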

At the moment, 90% of the world’s current supply of rare industrial metals originates from two countries. The export of raw supplies from these countries is increasingly coming under fire, with the countries championing a movement to convince corporations to move away from the quick monetary gain that exporting raw materials offers and moving toward making a profit by exporting finished consumer electronics. At present, we are seeing the beginning of territorial wars over a far more common resource, fresh water, in the United States and elsewhere in the world. If governments are experiencing difficulties sharing and parceling out water, as we see in ongoing disputes between Alabama, Georgia, and Florida over the Apalachicola-Chattahoochee-Flint River and Alabama-Coosa-Tallapoosa River basins, the quarrels possible over rights to desperately needed metals between non-civil or even warring nations could be frightening.

In the 1990s, a number of successful Chinese mining operations began, with their rich supply of high-quality rare earths flooding the global market and driving prices down to near-record lows.

China’s population is consuming rare earth metals at an astonishing rate. By the year 2016, the population of China is projected to consume one hundred and thirty thousand tons of rare earth metals a year, a number equivalent to the entire planet’s consumption in the beginning of this decade.

China holds one-third of the planet’s rare earth supply, but a vast number of mining and refining operations ongoing within its borders allow China to account for roughly 97% of the available rare earth metals market at any given time. Yes, other countries have rare earth metal resources, but they lack the infrastructure or means to put them to use. The addition of politics into the equation places China in an enviable position of power should a nation or group of nations interfere with the country’s interests on any level. Unhappy with the Japanese presence in the South China Sea? Prohibit exports to Japan.

Military weaponry relies on the same goods that require these rare-metal components, further indebting a sovereign nation.

A neodymium magnet motor can outwork an iron-based magnet motor of more than twice its size—but these benefits are not without a substantial price. Rare earth magnet components often cost ten or more times the price of their less efficient, more common counterparts, and any disruption in supply will only lead to a widening of the price gap. When faced with a long-term drop in the supply of rare earth metals, manufacturers will be forced to choose between passing the costs on to the consumer and in the process risking market share, or selecting cheaper, older parts and manufacturing methods—the same ones many of the rare earth metals helped replace—that would lead to inferior products and eliminate a number of technological advances.

There are over 30 pounds of rare earth metals inside of each Toyota Prius that comes off a production line, with most of that mass split between rare earth components essential to motors and the rechargeable battery. Of this 30, 10 to 15 pounds is lanthanum, with the lanthanum used as the metal component of nickel metal hydride (NiMH) batteries. As the first generation of hybrid automobiles reaches the end of its lifetime, owners will be forced to replace their battery or move on to a different car, with both alternatives bringing an uptick in rare earth metal consumption.

The amount of rare earth metals needed to create a state-of-the-art wind turbine dwarfs that needed for an electric car, with 500 pounds of rare earth metals needed to outfit the motors and other interior components of a single energy-generating wind turbine.

Each of the 17 rare earth metals exhibits similar basic chemical and physical properties, with these similarities providing quite the challenge when it comes to separating them from one another in raw mineral ore. If you heat a mineral sample containing several of the rare earth metals to extremely high temperatures, it becomes difficult, if not impossible, to differentiate and physically separate each one because they share similar melting points. The rare earth elements are intricately bound to one another along with abundant elements like carbon and oxygen, making it impossible for industrious at-home refiners and large corporations to pick up a hundred pounds of raw mineral rocks and chip away for hours to separate the elements as one could do, in theory, with gold. Instead, concentrated acids and bases are needed to extract the individual elements, with chemists trying thousands of combinations before settling on the proper method to separate and purify a rare earth metal like cerium, a metal needed for use in pollution-eliminating catalytic converters, from a sample of bastnäsite or monazite.

Beryllium, an element now deemed vital to US national security due to its inclusion in next-generation fighter jets and drones.

Gadolinium is used to create the memory-storage components of hard drives.

Despite the eventual separation into praseodymium and neodymium, the use of didymium continues to evolve. Oil refineries use the mixture of two elements as a catalyst in petroleum cracking, a heat-intensive process necessary to break down carbon to carbon bonds present in extremely large molecules en route to the culling of octane for use in gasoline.

A myriad of weapons devices used by the United States and a handful of other countries rely on rare earth metals to operate. Neodymium and its neighbor on the periodic table, samarium, are relied on to manufacture critical components of smart bombs and precision-guided missiles, ytterbium, terbium, and europium are used to create lasers that seek out mines on land and under water, and other rare earth elements are needed to build the motors and actuators used for Predator drones and various electronics like jamming devices.

Each element from position 84 to the end of the periodic table at 118 is radioactive, and of these 35 elements, only 12 are available in large enough quantities to be useful to humans.

Deep in the interior of nuclear power plants the fuel rods are arranged in arrays within a cooling pool to maximize safety. The goal is to allow the heat generated from the billions of neutron additions to safely flow through the water—without the liquid, the heat created as a result of reactions ongoing within fuel rods would quickly overrun any containment units and lead to a meltdown. Water is chosen as the mediating material due to its ability to take on a substantial quantity of heat before evaporating.
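A back-of-envelope calculation shows why water’s heat capacity matters so much here. The physical constants below are standard textbook values, not figures from the book:

```python
# Heat absorbed per kilogram of cooling water before it boils away.
SPECIFIC_HEAT = 4186        # J per kg per degree C, liquid water
LATENT_HEAT = 2.26e6        # J per kg, heat of vaporization at 100 C
start_c, boil_c = 25, 100   # assume coolant starts near room temperature

heating_j = SPECIFIC_HEAT * (boil_c - start_c)
total_j = heating_j + LATENT_HEAT
print(f"to boiling: {heating_j/1e3:.0f} kJ; until fully evaporated: {total_j/1e6:.2f} MJ")
# ~314 kJ per kg just to reach boiling, ~2.57 MJ in all -- which is why
# losing coolant flow in the pool is so dangerous.
```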

Uranium fuel poses an ever-present danger during the reprocessing period since, once uranium and plutonium are separated from their metal housings and dissolved in acid, it is still theoretically possible (although extremely unlikely) for them to gather in localized hot spots within the processing tanks and reach a dangerous critical mass. Even if the economic hurdles and safety issues are overcome, the inherent nature of reprocessing sites and the substantial quantity of nuclear fuel within their walls could leave them vulnerable to direct attacks from terrorist groups or the theft of still-fissionable nuclear material.

It would be foolish to think an attack making use of nuclear material en route for reprocessing would not be devastating. Even if the attackers failed to turn stolen spent fuel into a high-power nuclear weapon, threats will forever loom from less scientifically advanced attacks—adding radioactive waste to an existing explosive device, or striking a nuclear reprocessing facility and turning the entire site into an unconventional dirty bomb. Such an attack could exact minimal physical damage and still render the surrounding area unfit for habitation for many years. The psychological toll would be unlike any disaster seen in the Western Hemisphere, with hundreds of billions of dollars necessary to decontaminate and clean the area, and tremendous upheaval as several generations would find their lives and homes severely impacted in a single attack.

These fears are not merely the creation of a post-9/11 think tank but a hypothetical plague that has occupied the highest office in the land for six decades. Presidents Gerald Ford and Jimmy Carter halted reprocessing of plutonium and spent nuclear fuel during their terms in office in an effort to stop the spread of national nuclear weapons programs and clandestine attempts to secure a nuclear device across the globe—a fear bolstered by ongoing tensions in India and Iran during the late 1970s.

President Ronald Reagan lifted this ban during his tenure, only to have his successor, George H. W. Bush, prevent New York’s Long Island Power Authority from teaming with the French government–owned corporation Cogema to process reactor fuel. President William J. Clinton followed Bush’s lead, while President George W. Bush went on to embrace nuclear reprocessing by forming the sixteen-country Global Nuclear Energy Partnership and encouraging private corporations to develop new reprocessing technology.5 This trend of “stop-start” policy on the matter reversed once again with President Barack Obama, who signaled what appears to be the death knell for commercial nuclear reprocessing in the United States, at least for the first half of the twenty-first century. Fiscal concerns informed his decision to cancel plans to build a large-scale nuclear reprocessing facility in 2009 and a South Carolina reprocessing site in 2014. At the moment, the United States does not reprocess reactor fuel previously used to generate power for public consumption; it instead chooses to focus recycling efforts on radioactive materials created in the course of scientific research. Regardless of one’s personal political views, the reluctance of five presidents to pursue nuclear reprocessing—Ford, Carter, G. H. W. Bush, Clinton, and Obama—should be a sign to those championing the cause. Financial issues aside, concentrating large amounts of nuclear material in one area, no matter how secure, with hundreds, if not thousands, of workers coming in contact with the material makes the site ripe for thievery and attack. Acquisition of radioactive material by clandestine individuals is not isolated to action-movie plots and Tom Clancy novels but is a plausible threat. 
A dirty bomb has yet to be detonated anywhere in the world, thankfully confining these radiological weapons to movies and novels, wherein the bombs play the role of an all-too abundant plot device and source of melodrama. The most feeble of dirty bombs needs only a sufficient source of radioactive waste and an explosive device to disperse the waste in order to render a location unfit for years.

Almost every step of a reprocessing effort creates additional radioactive waste. Liters upon liters of strong acids and harsh carcinogenic solvents are used en route to reclaiming metallic uranium and plutonium that can be used in a new way. This “new” waste created in the dissolving stages contains only a fraction of the radioactivity in a sample of reactor-grade uranium, but nevertheless, the radioactive waste must be locked away until its radioactivity decays naturally over time.

In the process it is possible to create considerable quantities of waste.

A metric ton of fuel rod waste contains four to five kilograms of recoverable rare metals, making the effort worthwhile in dire circumstances.

If you are devious and looking for a way to swindle people out of gold, tungsten sounds really great at this point, right? One big problem lies in the path for any would-be gold counterfeiter—tungsten metal is grayish-white, a very different hue than traditional yellow gold. A visual problem such as this can be rectified with willpower and a drill, leading gold-adulterers to hide tungsten metal within solid-gold objects to create a passable fake.  Reports of precious metal traders learning they were scammed by keen counterfeits of one-kilogram gold bars with newly drilled holes filled with tungsten prior to the transaction are popping up in China, Australia, and New York City, a sordid trend brought about in recent years by the astronomical run-up in the price for gold.8 The gold removed from the bar then enters the pocket of the driller, while the bar is passed along to an uninformed buyer at its normal face value. Tales of tungsten bars coated with twenty-four-carat gold also swirl, with purchasers learning of their exceptional misfortune when the top layer peels away like the gold foil covering a chocolate bar.

The cost of melting down zinc and a smidgen of copper (pennies have gone from being made entirely of copper up until 1982 to less than 3% copper currently), parceling it out into discs, stamping the visage of our 16th president on the face, and trucking rolls of the coin from the mint averages two cents for every penny created. In this case, the seigniorage is a net loss for the Treasury Department, as the department loses a little less than a cent on each newly minted penny, and the net loss continues with the nickel, with eleven cents’ worth of materials, wages, and machine upkeep going into creating each one.
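The arithmetic behind that loss is simply seigniorage: face value minus production cost. A minimal sketch, using the two-cent penny and eleven-cent nickel figures from the text (actual Mint costs vary year to year):

```python
def seigniorage_cents(face_value_cents, production_cost_cents):
    """Seigniorage: face value minus production cost, in cents.
    A negative result means the Treasury loses money on each coin struck."""
    return face_value_cents - production_cost_cents

# Figures from the text above.
penny = seigniorage_cents(1, 2)    # a penny costs about two cents to make
nickel = seigniorage_cents(5, 11)  # a nickel costs about eleven cents to make

print(penny, nickel)  # -1 -6
```

The same one-line calculation explains why coins of higher denominations remain profitable to mint: their face value comfortably exceeds the cost of their metal and manufacture.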

These gold-plated tungsten items are sold openly as fakes, but they improve upon techniques used in sordid deals of counterfeit bars. The commercially manufactured and advertised “fake” tungsten-core coins are currently seen as a blight by the coin-collecting and gold-trading community, but someone with an ultrasonic or x-ray fluorescence detector could always use one of these elaborately produced plated coins to test the device in question. If you are a pessimist, the fake coins may turn out to be useful if you lack the financial assets to hoard gold and live your life prepping for an imminent worldwide financial collapse or natural disaster. Gold is desired foremost among precious metals due to historical and traditional sentiment. In a rebooted world where those bargaining for goods lack any sort of detection devices, the look and feel of gold may be all you need. Corporations and nations seek out rare and scarce metals for their value and their ability to improve human life.

Thallium became so popular as a murder weapon that the chemical earned the name “inheritance powder” in the dawn of the Industrial Revolution due to the metal’s dubious link to convenient deaths benefiting wealthy heirs. When used for ill intent, thallium is dosed not as a spoonful of metal shavings but in the form of the crystalline thallium sulfate. By itself, thallium metal will not dissolve readily in water, making it difficult to hide this form of the poison in a drink. On the other hand, thallium sulfate retains the poisonous characteristics of thallium while behaving similarly to table salt, sodium chloride, bestowing upon the substance a crystalline appearance at room temperature while making the chemical far more concealable. This form is still quite potent, as less than a single gram of thallium sulfate is enough to kill an adult.  Availability mingled with potency and concealment combine to make thallium sulfate an excellent murder weapon. Prior to 1972, thallium sulfate sat on the shelves of supermarkets across the United States as the main ingredient in commercial rat killers. Thallium ends life by forcing the body to shut down as it takes the place of potassium in any number of the body’s cellular reactions and physiochemical processes. Once ingested, the poisonous compound thallium sulfate dissolves, separating the thallium atoms and allowing the metal to enter the bloodstream. The body then begins to incorporate thallium into molecular-level events needed to maintain proper working order, and that’s where trouble begins. Thallium atoms are remarkably similar in size to potassium atoms, and this is a problem for the human body. Potassium is a vital part of energy-manufacturing mechanisms and a gatekeeper for a number of cellular channels. Due to similarity between the size and charge of thallium and potassium, the body confuses the metals and allows thallium to substitute for potassium. 
Unfortunately, this substitution is a deadly one, leading to a shutdown of a number of delicate submicroscopic events that brings about death in a handful of weeks. Erosion of fingernails and hair loss are two prominent late-stage flags denoting thallium poisoning, with the first signs of hair loss showing as soon as a week after consumption of the poison.  If you are poisoned with thallium and do not die from acute kidney failure or its complications within a few weeks, your way of life will likely be changed forever, thanks to recurring dates with a dialysis machine.

Swiss scientists studying the exhumed body of Palestinian leader Yasser Arafat in November of 2012 found nearly 20 times the baseline amount of polonium in his bones, along with traces of the radioactive element in his clothes and the soil where he was laid to rest. Arafat died in 2004 from what was described as a stroke by his attending physician after a bout with the flu characterized by vomiting—a symptom that plagued Litvinenko immediately after his poisoning. The discovery of such a large concentration of polonium has changed the way historians and political scientists view Arafat’s death, a finding that fostered a growing movement to paint it as murder by an unknown culprit. This is not the first intimation of foul play surrounding Arafat’s death: his former adviser Bassam Abu Sharif publicly accused Israeli intelligence operatives of poisoning the Palestinian figurehead’s medicine and placing thallium in his food and drinking water.

The title “wonder drug” is thrown around frequently in the pharmaceutical world, but a small-molecule drug that can effectively treat lung, ovarian, bladder, cervical, and testicular cancer with fewer side effects than radiotherapy? The integration of platinum atoms in a small molecule to create a drug yields a tool effective at treating a wide variety of cancers. Cis-diamminedichloroplatinum(II), which moonlights as the much-easier-to-say trade name cisplatin, is a simple molecule at the forefront of cancer treatment starring a single atom of platinum at its core. Structurally cisplatin is a quite simple molecule featuring chlorine, nitrogen, and hydrogen oriented at ninety-degree angles around a platinum core. Making cisplatin is not difficult; the reaction requires only four steps, with the difficulty of the synthesis on par with a typical lab session from an undergraduate student’s sophomore year. The high cost of the platinum materials, however, keeps the metal out of the teaching labs of even the wealthiest universities due to perceived waste and the thought that a devious lab student might run off with a bottle of platinum tetrachloride in the hope of purifying the platinum metal within. The discovery of cisplatin’s important role in the war on cancer came about as many great scientific achievements do—by complete accident. In a 1965 study of Escherichia coli bacteria—a component of fecal matter and the model bacterium most often used by researchers—a trio of Michigan State University scientists observing the impact of electrical fields on bacteria noted that their cell samples quit replicating, an outcome that failed to correlate with their experimental logic. Like all good scientists, the researchers went into detective mode and began mentally dissecting every part of their experimental setup. 
Their in-depth look revealed that the platinum metal used in the electrodes to create their experimental electrical fields was being leached slowly into the bacteria’s growth medium, inadvertently dosing the bacteria with platinum and causing the E. coli to grow to phenomenal sizes and bypass the life checkpoints that would trigger a fission process to create new cells. While the trio did not come across any interesting happenings when they placed their precious E. coli in a variety of electrical fields, they did discover that platinum could prevent bacteria from reproducing. The finding was warmly received by the medical world and led to the incorporation of cisplatin in cancer treatment by the end of the next decade. Cisplatin brings about apoptosis in cancer cells shortly after reacting with the cell’s DNA. Once bound to DNA, the information-carrying molecule becomes cross-linked and thus unable to divide—a step necessary for the cell to undergo its form of reproduction: fission. If tumor cells cannot reproduce, the runaway train of unbounded growth is halted. Cisplatin’s effect on DNA can also have another cancer-fighting effect—the wholesale destruction of cancer cells. Cells can attempt to repair DNA after determining that it can no longer divide; once the repair efforts prove unsuccessful—thanks to the presence of cisplatin—the cell starts its own self-destruction sequence—apoptosis—resulting in the destruction of the tumor cell. If apoptosis can be successfully triggered in enough cancer cells, the tumor will begin to shrink. Patients given cisplatin and two other drugs making use of similar platinum chemistry to achieve the same result—carboplatin and oxaliplatin—experience fewer side effects than those who are treated with radioactive materials, making the pharmaceutical a great option since it gained approval from the Food and Drug Administration in 1978. 
The popularity of platinum in cancer treatment led medical researchers to investigate the possibility of antitumor properties in rhodium and ruthenium, metals often used in conjunction with platinum in catalytic converters, but with little success due to unforeseen toxic effects not observed with cisplatin.

Tantalum is a corrosion-proof metal used to increase the efficiency of capacitors—a useful application that has allowed mobile devices to shrink in size or increase in processing power at a rapid pace in the past decade. Tantalum is found alongside the metals tin and tungsten,

Sadly, tantalum mining funded rebel factions during the Second Congo War (1998–2003), the bloodiest war since World War II, with five million people killed as a result of the fighting.

In a disturbing nod to the current strife surrounding tantalum, the metal’s name comes from the grim tale of the Greek mythological figure Tantalus. Tantalus’s life was awful—he lived in the deepest corner of the underworld, Tartarus, where he cut up and cooked his son Pelops as a sacrifice to the gods. His sins did not end there, however, as Tantalus forced the gods to unwittingly commit cannibalism by dining on Pelops’s appendages. To punish Tantalus for this gruesome gesture, the gods condemned him to a state of perpetual longing and temptation by placing him in a crystalline pool of water near a beautiful tree with low-hanging fruit. Whenever Tantalus raised his hands to grasp a piece of fruit to eat, the delicate branches would move to a position just out of reach; whenever he dipped down for a drink, the water pulled back from his cupped hand. Mythological lore finishes this mental image of eternal temptation by suspending a massive stone above Tantalus. He was condemned to a world of immense desires constantly within reach but of which he was forever unable to partake, leaving him to perpetually starve against a backdrop of plenty.

Coal naturally contains uranium—one to four parts per million. This is not a lot of uranium, but it is a quantifiable amount of the radioactive material nonetheless. A heavy-duty train car like the BNSF Railway Rotary Open Top Hopper can carry a hundred tons of coal, with a hundred similar cars linked together for a total of just over ten thousand tons. This run-of-the-mill train sounds a good bit more ominous with a quick calculation using the parts per million of the uranium in coal. After a few minutes of number crunching, the sensationalist could claim that the bituminous coal train is carrying between 20 and 80 pounds of uranium, and this hypothetical individual, in the midst of making a hysteria-inducing statement, would be correct. Although the movement of 80 pounds of uranium across the heartland of the United States resembles a plot point from a spy movie, black helicopters filled with FBI and Homeland Security agents will not be descending on the trains of North America anytime soon, because the uranium is safely split between millions of pieces of coal spread throughout the train. This is the same dispersal pattern we see with the distribution of rare earth metals in rocks and quarries. During World War II the United States and Germany did not destroy their coal mines to get a small allowance of uranium to use in the building of nuclear bombs—the coal by itself is far more valuable. Instead, these countries looked to well-known deposits featuring high concentrations of uranium to build their stockpiles.
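The number crunching is easy to reproduce. A minimal sketch, assuming short tons of 2,000 pounds and the tonnages given above:

```python
# A hundred hopper cars carrying a hundred tons of coal each.
cars = 100
tons_per_car = 100
lbs_per_ton = 2000

total_coal_lbs = cars * tons_per_car * lbs_per_ton  # 20,000,000 lbs of coal

# Uranium occurs in coal at roughly 1 to 4 parts per million by weight.
uranium_low_lbs = total_coal_lbs * 1 / 1_000_000
uranium_high_lbs = total_coal_lbs * 4 / 1_000_000

print(uranium_low_lbs, uranium_high_lbs)  # 20.0 80.0
```

The same parts-per-million logic explains why the uranium is harmless in practice: those 20 to 80 pounds are diluted across twenty million pounds of rock.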

Concentrated deposits of metals—often the only deposits worth mining—are created over millions of years.

The majority of the rare earth metals, including two of the most useful, niobium and tantalum, are found in igneous rock, leading to several theories that place the origin of rocks containing these metals in the slow release of rare earth element–rich magma from chambers deep below the surface of the earth. The formation could have taken place underground as small portions of magma exited the chamber and cooled slowly, or as the magma pushed through the surface and became the lava flows often associated with volcanic activity.

China’s available supply of rare metals rivals the material wealth of oil underneath the sands of Saudi Arabia and the Middle East. A crippling share of the planet’s supply of rare earth metals is in China—the United States Geological Survey estimates more than 96% of the available supply of these metals is centered within its boundaries, leaving the rest of the world to fend for crumbs under their borders or to rely on Chinese-manufactured products.

The minerals containing tantalum, niobium, and other rare metals likely accumulated over the course of a four-hundred-million-year span in the Middle Proterozoic period,

While we will never truly know how such substantial quantities of varied metals gathered in this section of Inner Mongolia, a number of theories are bandied about by geologists.

The shuffling of Earth’s tectonic plates and the movement of lava during the periods of geologic tumult that characterized the formation of our planet’s landmasses is central to the most prominent theories, with the possibility that the movement of magma could have triggered hydrothermal vents that pelted the earth at Bayan Obo with metals brought from deep below the surface. The rare metals present at Bayan Obo, and throughout the world, are found in the repeating, organized forms of familiar chemical compounds. These molecules typically consist of two atoms of the metal joined by three atoms of oxygen, with variations of the number of metal and oxygen atoms present. This odd couple forms a very stable type of chemical compound, the oxide. Thanks to this combination of metal and oxygen, the molecules are readily taken into mineral deposits. This stroke of luck is not without its own problems, however: the metals must be separated from oxygen before we can use them.

Despite its vast mineral wealth, Bayan Obo is far from the only reason China rose to dominate the rare earth markets during the first decade of the 21st century. Selling at astonishingly low prices is the clever move that made China the undisputed source for rare earths. By taking advantage of the abundant supply at Bayan Obo, Chinese production of these metals all but ran the previous corporate leaders in the United States and Australia from the world market. Within a decade and a half this economic plan guided countries and corporations to the cheap and available supply of Bayan Obo, soon putting each at the mercy of China’s economic and political policies. It was a brilliant yet simple tactic, effectively yielding a sea change normally brought about only through the devastation of a war, but in this case without a single shot being fired. This brand of economic policy is convincing foreign corporations in Japan and the United States to open manufacturing plants and offices within China’s borders in hopes of securing favor and a continuous supply of the rare metals they rely on in manufacturing. Corporations willing to make the jump into China’s metal market are also positioning themselves wisely in the event that China radically increases export taxes on its metal supply, an ever-looming possibility that could destabilize market sectors overnight.

Will we see a day when the dependence on China for rare earth metals ceases? Not likely. The supply of rare earth metals could last several decades if not longer if China exercises wisdom in domestic and foreign economic policy. The rest of the world has little recourse in the face of price increases, as any cache of commercially viable rare metals would likely cost more to retrieve than those sold by corporations inside China. Even if countries drew the political ire of China or simply decided to forge their own path by exploring and making use of a newly found untapped deposit of metals within their borders, it could take well over a decade and phenomenal expense before a semblance of self-sufficiency is actually achieved.

North America has a few rare earth metal mining sites, with the crown jewel being the oft-maligned Mountain Pass site deep in California’s Mojave Desert.  The Mountain Pass site looks nothing like the series of caves and tunnels often associated with coal or gold mining. Molycorp’s prize, a gem tucked in the middle of the California sprawl and seventy-five miles from the nearest city, is more rock quarry than classical mine, with this hole in the face of the earth growing larger, one transit ring at a time as rocks containing mineral ore are transferred from the bottom to the surface and then to processing plants.

Mountain Pass performed well as the United States’ key source of rare metals well into the late 1990s, when two factors led to the closure of the site. China’s meteoric rise as a rare earth manufacturer came at the expense of Mountain Pass’s supply. Chinese corporations flooded the market with inexpensive rare earth metals, softening the international market for rare earths to the extent that it was no longer cost effective to maintain Mountain Pass.

Mountain Pass came under intense public scrutiny in 1997 after a series of environmental incidents. Chief among these problems were seven spills that sent a total of three hundred thousand gallons of radioactive waste flowing from Mountain Pass across the Mojave Desert. Cleanup of these spills cost Chevron 185 million dollars, sending the United States’ most fruitful rare earth metal mine into a death spiral.

The mine stayed dormant until the price of rare earths increased in the past decade, when Chevron sold the mine to Molycorp, which spent an estimated 500 million dollars to resume operations. A risky move, but one with an underlying sense of wisdom if Mountain Pass could return to its former glory. Stating that keeping a corporation, its workers, and shareholders afloat in the rare earth mining industry is an arduous task would be an enormous understatement. Mining is a difficult if not damned industry, one where profit margins are eternally slim and political events can change the world stage in a handful of days, if not overnight. Before Molycorp and other mining entities can earn a single dollar, the corporations must find and acquire a mineral-rich site, tear the prized rocks from the crust of the earth, and then carry out 30-plus refining steps to isolate a single rare earth metal. The financial markets of the world continue to fluctuate the entire time, with minor shifts capable of bringing about a sea change in the mining world as commodity prices swing wildly.

For example, what if the state-owned corporate entities of China are encouraged by the nation’s government to limit exports to North America and Europe? Prices soar the next morning, quickly eating up every kilogram a company has in its reserves. But what about the opposite scenario—a private mining corporation announces the discovery of an unexplored cache of bastnäsite in Scotland? Prices plummet, and corporations across the world are forced to limit mining and processing efforts to ensure a market glut years in the future will not kill the industry.

Gold, platinum, tantalum, and several other rare and valuable metals are used in small quantities in smartphones and computers, but the employee skill sets and time necessary to obtain and refine these metals often make metal-specific recycling efforts cost prohibitive.

Why are jewelry-grade precious metals used in electronics? It’s a simple answer—using the metals makes your electronics faster, more stable, and longer lasting. For example, gold is a spectacular conductor. As an added benefit, the noble metal doesn’t corrode, so gold-plated electronics do not experience a drop-off in efficiency over time. Gold is plated on HDMI cables and a plethora of computer parts in a very thin layer—a thickness commonly between three and fifteen micrometers (there are a thousand micrometers in a millimeter, if it has been a while since you’ve darkened the halls of a chemistry or physics department). This very thin, very light superficial coating—thinner than a flimsy plastic grocery store bag—is enough to enhance the efficiency of signal transfer, making it worthwhile to use gold over cheaper metals with similar behavior, like copper or aluminum.
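To get a feel for how little gold such a layer contains, here is a rough sketch; the plated contact area is a hypothetical illustration value, while the thickness comes from the 3-to-15-micrometer range above and the density is gold's well-known 19.3 g/cm³:

```python
GOLD_DENSITY_G_PER_CM3 = 19.3  # density of gold

# Hypothetical plated contact area of 2 cm x 1 cm (assumed, not a measured part).
area_cm2 = 2.0 * 1.0
thickness_um = 5.0                  # a value within the 3-15 micrometer range above
thickness_cm = thickness_um * 1e-4  # 10,000 micrometers per centimeter

gold_mass_g = area_cm2 * thickness_cm * GOLD_DENSITY_G_PER_CM3
print(round(gold_mass_g * 1000, 1))  # ~19.3 milligrams of gold
```

A few hundredths of a gram per connector is why a single cable costs little to gild, yet a mountain of discarded electronics can still hold kilograms of recoverable gold.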

The amateur scientists looking to recover gold and platinum from computer parts are not too different from the elderly men and women clad in socks and sandals who wander along beaches combing the sands with a small shovel and metal detector in hand. There is one major difference between these two groups of treasure seekers, however. Those performing at-home recycling and recovery from computer parts know where their treasure lies; it’s just a matter of performing a series of chemical reactions to retrieve the desired precious metals.

A number of companies sell precious metal recycling and refining kits on the Internet, with prices starting as low as seventy dollars, provided the amateur recycler already owns a supply of protective equipment and personally manages chemical waste disposal. More expensive kits make use of relatively safer electrolysis reactions—similar to the hair-removal method touted in pop-up kiosks at shopping malls. This slightly safer method brings with it a much higher price tag, with retail starter kits beginning in the $600 range before rising to several thousand dollars. This high price is the cost of doing business for someone with time and (literally) tons of discarded computer equipment to refine,

While the “scorched-earth” hobbyist approaches used by Ron and Anthony are dangerous, the Third World equivalent is disturbingly post-apocalyptic. Venturing into mountains of discarded monitors, desktop towers, and refrigerators, children and teenagers fight over sun-and-rain-exposed electronic parts in search of any metals—

Once electronic waste is deposited in the landfills of poor villages, the waste will not stay there for long. Locals in Accra and numerous small towns spread across India and China learned of the possibilities for parts from abandoned computer monitors, televisions, and towers and, like the hobbyists mentioned earlier, took up efforts to retrieve the precious components. In a society where economic prosperity and annual average incomes are measured in the hundreds and not tens of thousands of dollars, the few dollars one might make during a twelve-hour foray through massive piles of rubbish is well worth the effort and risk. The electronics wastelands littered throughout developing countries could not exist, however, without complicit partners in the destination countries. How do these relationships begin?

TOOLS OF THE POOR Those who choose to make a living by retrieving electronic waste from dumps, tearing the equipment down, and refining the rare metals found within them are exposed to many of the same hazards as our hypothetical hobbyists, but on a much larger scale. While inquisitive First World hobbyists like Anthony and Ron refine scrap for fun in their spare time, a recycler in the developing world performs the same work but for 12 to 14 hours a day and with minimal protective equipment due to the prohibitive cost of respirators, gloves, and goggles. They carry out these activities in an even more dangerous environment as well, exposing themselves to the physical hazards of landfills before the first step of metal recovery begins. Their tools are often crude. Workers place the metals in clay kilns or stone bowls and heat them over campfires. Heating the refuse loosens the solder present on many electronic parts—solder that is typically made of lead and tin. Children huddle over the fire as the scraps are heated to the point where the solder is liquefied and a desired component can be pulled away for further processing. The cathode-ray tubes in older computer monitors—an item not even contemplated for recovery by First World hobbyists because of the danger and minimal reward—are boons for profit-seeking recyclers in the developing world. Tube monitors contain large amounts of lead dust—as much as seven pounds of lead in some models—and at the end of these fragile tubes is a coveted coil of copper. While copper is not the most precious of metals, it is valuable due to its many applications, turning the acquisition of one of these intact copper coils into a windfall for a working recycler. Smashing a monitor to retrieve the coil often involves shattering the lead-filled cathode-ray tube, doing a phenomenal amount of environmental damage while covering the worker with millions of lead particulates. 
What is done with the unwanted scrap after the useful parts are plucked out is another problem altogether. In many situations, unwanted pieces are gathered into a burn pile and turned to ash, emitting harmful pollutants into the atmosphere. What remains in solid form is often deposited in waterways—Mother Nature’s trashcan—and coastal areas. There is rarely a municipal waste system in place to recover the unwanted scraps in these villages, and years of workers dumping broken and burnt leftovers into local streams has contaminated the soil and local water supply. Drinking water is already trucked into the recycling village of Guiyu from a nearby town due to an abundance of careless dumping. Cleaning the water system would likely be too costly and a losing battle if the landfill recyclers are unwilling to change their ways. The physiological impact of recycling electronic waste has been best studied among the inhabitants of China’s Guiyu village. Academic studies show children in Guiyu to have elevated levels of lead in their blood, leading to a decrease in IQ along with an increase in urinary tract infections and a sixfold rise in miscarriages.6 Many of the young workers flocking to the landfills feel compelled to sift through the electronic waste in order to provide for their elders under China’s one-child policy, a policy placing an undue financial burden on the current generation. In addition to complications from lead exposure, hydrocarbons released into the air during the burning of waste have led to an uptick in chronic obstructive pulmonary disease and other respiratory problems, as well as permanent eye damage. Fixing the long-term electronic waste problem in these villages is a complicated and costly proposition. Apart from a generation of children poisoned and possibly lost, this is a relatively new revenue source, with the oldest of the children involved just now entering their thirties. 
The area of Guiyu was once known for its rice production, but a decade of pollution stemming from electronic waste dumping and refining has rendered the area unfit for agriculture.

Tantalum is particularly coveted for its use in electronics. The metal is stable up to 300 degrees Fahrenheit, a temperature well within the range of most industrial or commercial uses of the element. It works as an amazing capacitor, allowing for the size of hardware to become smaller—an evergreen trend in the world of consumer electronics. Tantalum is also useful for its acoustic properties, with filters made with the metal placed in smartphone handsets to increase audio clarity by reducing the number of extraneous frequencies. The metal can also be used to make armor-piercing projectiles. A run-of-the-mill smartphone has a little over 40 milligrams of tantalum—a piece roughly half the size of a steel BB gun pellet when one accounts for the variation in density between the metals.

Ammonium nitrate is a small molecule used as a fertilizer that can also be incorporated into explosive devices. Karzai enacted the ban in the hope of making it more difficult for the Taliban and other groups to fashion homemade explosive devices used to kill NATO troops stationed in the region. Once denied access to ammonium nitrate, farmers in Afghanistan noted an astonishing drop-off in crop yields, yet they received little to no help from the Afghan government to transition away from the use of ammonium nitrate after the ban. Farmers who had harvested a nine-hundred-pound prune yield the previous year saw their yields plummet to one hundred and fifty pounds after Karzai’s ban. A drop in yields of as little as 5 or 10% in a developed country would be very damaging to its financial bottom line, but in a country in which 36% of the people live at or below the poverty line, the absence of ammonium nitrate is downright devastating. Farmers either had to raise the price of their produce or make the move to illegal opium farming to make a living. The allure of opium is, pardon the pun, intoxicating. Raw opium sells for several hundred dollars per pound, and with a probable harvest of roughly fifty pounds of poppies per acre, the attraction is strong for even the most pious of farmers.

While farmers suffered, the Taliban simply turned to a source not subject to Karzai’s ban to construct explosives: potassium chlorate, a chemical used in textile mills across the region. In addition, national and local government efforts to reduce environmental damage continually ran afoul of the Afghan people, including an environmentally conscious ban on the use of brick kilns and an effort to limit automobile traffic in the populous city of Mazar-e Sharif.  While their intentions were no doubt noble, the actions were shortsighted and resulted in decreased income for the vast numbers of the less well-off living and working in the city. These are excellent examples of the troubles such a developing country faces as it tries to advance its economy and infrastructure while at the same time doing minimal damage to the environment, a problem that continues to plague Afghanistan as the country tries to make the most of its vast resources. And when government mandates fail or a situation is in need of an immediate response, there is little money available to develop a solution. Erosion and deforestation are blights on the already parched earth of Afghanistan, turning more and more useful acreage into the desert that already covers the majority of the country. A 2012 initiative through Afghanistan’s National Environment Protection Agency set aside six million dollars to fight climate change and erosion, an embarrassingly small sum to dedicate to preserving the farmland that provides the livelihood for 79% of the country’s people.

A weak electrical system plagues the country as it lurches into the third decade of the twenty-first century. Blackouts limit access to electricity in a significant portion of the country to a mere one to two hours a day, putting modern necessities like refrigeration out of reach. Industrial efforts are also stymied, with money lost and manufacturing forced to halt production due to frequent electrical outages.

Nine years into the United States’ war in Afghanistan, the Pentagon released the results of the US Geological Survey operation carried out to observe and catalog the potential rare earth resources in Afghanistan. The fabled 2010 report—already bolstered by rumors of a Pentagon memorandum christening Afghanistan the “Saudi Arabia of Lithium”—revealed a treasure trove of previously unknown mineral resources including gold, iron, and rare earth metals. Early speculation placed a one-trillion-dollar value on the accessible deposits; separate estimates made by Chinese and Indian interests dwarf that figure, placing the mineral wealth of Afghanistan closer to three trillion dollars. But there is a substantial problem: Afghanistan lacks the modern mining technology to tackle retrieval efforts.

The wealth reported in 2010 is likely a continuation of the work carried out by the US Geological Survey Mineral Resources Project, which aided members of Afghanistan’s sister group, the Afghanistan Geological Survey, from 2004 to 2007, to help the country’s government determine a workable baseline of their mineral wealth.18 While cynicism often reigns when we look at North American incursion into Afghanistan, this may not have been a solely profit-minded gesture, as the USGS also teamed up with the Afghan government to assess earthquake hazards as well as to catalog oil and gas resources in the country during the same time period.

The United States cannot produce useful quantities of eight of the 17 elements commonly labeled as rare earth metals—terbium, dysprosium, holmium, europium, erbium, thulium, ytterbium, and lutetium—because they simply do not exist within our borders.

According to the US Department of Defense, high-purity beryllium is necessary to “support the defense needs of the United States during a protracted conflict,” but procuring a supply is not easy. Making a case for the defense industry’s reliance on beryllium is easy. No fewer than five US fighter craft, including the F-35 Joint Strike Fighter that will be employed by the United States, Japan, Israel, Italy, and five other countries over the next several decades, rely on beryllium to decrease the mass of their frames in order to allow the nimble movements that make the planes even more deadly. Copper-beryllium alloys are a crucial component of electrical systems within manned craft and drones, along with x-ray and radar equipment used to identify bombs, guided missiles, and improvised explosive devices (IEDs). The metal also has a use far removed from such high-tech applications. Mirrors are fashioned out of beryllium and used in the visual and optical systems of tanks because it makes the mirrors resistant to vibrational distortion. High-purity beryllium is worth just under half a million dollars per ton when produced domestically, with Kazakhstan and Germany supplying the only significant amounts to the United States through import. In 2008 the Department of Defense approved the construction of a high-purity beryllium production plant in Ohio after coming to the conclusion that commercial domestic manufacturers could not supply enough of the processed metal for defense applications nor did sufficient foreign suppliers exist. While the plant in Ohio is owned by a private corporation, Materion, the Department of Defense is apportioned two-thirds of the plant’s annual output.

Lanthanum is the key component of nickel-metal hydride, with each Toyota Prius on the road requiring twenty pounds of lanthanum in addition to two pounds of neodymium. Like many of the rare earth metals, lanthanum is not as rare as the description would suggest; it is the separation and extraction of lanthanum that complicates matters and thereby results in the metal’s relative scarcity. With the Nissan Leaf and Tesla Motors’ Roadster becoming trendy choices for new car buyers, the need for lanthanum will remain and no doubt grow in the foreseeable future. The metal will become even more relevant as automobile manufacturers push the limits of battery storage, an effort that will require significantly more lanthanum for each car rolling off the assembly line.

In liquid fuel reactors, energetic uranium compounds are mixed directly with water, with no separation between nuclear fuel and coolant. Liquid fuel reactors can make use of lesser-quality uranium and appear to be safer at first glance because the plants do not need to operate under high pressure to prevent water from evaporating. On the downside, they pose an even larger contamination and waste storage problem than conventional solid fuel reactors. Since there is no separation between the cooling waters and uranium, much more waste is produced, waste that, in theory, must be stored for tens of thousands of years in geological repositories before the murky waters no longer pose a danger.

Thorium power plants would need constant maintenance and a highly skilled set of workers on around-the-clock watch to oversee energy production. This is not to say solid fuel nuclear power plants are worry-free, but the solid fuel plant is the comfortable dinner-and-a-movie alternative to taking a high-maintenance individual out for a night on the town. Why would molten salt plants need constant observation? Thorium molten salt reactors create poisonous xenon gas, a contaminant that must be monitored and removed to maintain safe and efficient energy generation. Because of this toxic by-product, a thorium molten salt reactor would not succeed with just a technician overseeing a thoroughly automated plant but would require a squad of highly educated and dedicated engineers analyzing data and making changes around the clock. Luckily, most of the world’s current power plant employees are quite educated, but the act of retraining each and every worker is a substantial barrier that prevents the switch to thorium fuel plants in North America.

No country currently possesses a functional thorium plant, but China is on the inside track thanks to an aggressive strategy that aims to begin electricity generation by the second half of this decade. India is committed to generating energy using thorium as well, aiming to make use of their own extensive thorium reserves to meet 30% of their energy needs by 2050.


Neodymium—one of the two elements derived from Carl Gustaf Mosander’s incorrect, but accepted, discovery of didymium in 1841—is the basis of the most widely used permanent magnets, found in hard drives and wind turbines as well as in lower-tech conveniences like the button clasp of a purse. Along with neodymium magnets, niobium magnets are becoming increasingly necessary in recreational items, safety implements, electronics, and the tiny speakers contained in a three-hundred-dollar pair of headphones.

Niobe is known as the daughter of Tantalus (for whom the rare metal element tantalum is named). Like her father, she is a thorn in the side of the gods.

Niobe is lucky in one part of life—she is the mother of 50 boys and 50 girls, and she takes a considerable amount of pride from this fact. Her pride is too much for Apollo and Artemis to take—the mythological super couple are only able to bear a single boy and girl, and when Niobe gloats in their midst, Apollo and Artemis slay all 100 of Niobe’s offspring. Mass murder is not enough to quench the godly anger in this bummer of a story, as Apollo and Artemis take the scenario one step further and turn Niobe into stone.

Niobium, a metal typically used to make extremely strong magnets, is also quite stable and has the added bonus of mild hypoallergenic properties—a boon to the medical world in which niobium became an obvious choice for use in implantable devices, specifically pacemakers.

Magnetism and electricity go hand in hand in modern life—magnetic fields affect electric currents and vice versa. This connection is used to create superconducting magnets, which run electrical current through metal coils to generate the strongest magnetic fields possible with our current understanding of technology. Winding those coils from a superconducting metal such as niobium, rather than ordinary copper, is what turns a basic run-of-the-mill electromagnet into a superconducting one.

Greenland has long been hypothesized to hold rich deposits of the metals, but attempts at commercial mining have been halted because uranium is commonly discovered during excavation of rare earth metals. Once Greenland’s parliament overturned the legislation banning uranium extraction, it also freed up the country to mine its treasure troves of rare earth metals.

While pearls can be grown and harvested in a few short years, polymetallic nodules grow a mere half an inch in diameter over the course of a million years—not exactly the timetable we see with renewable resources. Once the last manganese nodule is harvested and refined, that will be the end of underwater rare metal mining.

When nodule mining becomes a reality, the process will build upon the existing foundation put in place through the underwater mining of diamonds. The De Beers Corporation currently operates five full-time vessels for this purpose, all five dedicated to sifting through shallow sediment beds off the coast of the African country of Namibia. The company found underwater operations far more efficient than above-ground mining efforts, as a fifty-man crew armed with state-of-the-art technology can match the output of three thousand traditional mine workers. Two methods used for underwater diamond mining are directly applicable to retrieving manganese nodules from the ocean floor. The first drills directly into the seabed, penetrating deep below the floor to bring up broken rock, sediment, and nodules through alien-looking, mile-long tubes. Once the debris is brought up to the mining ship, chemical and physical processes are used to sift through the cargo, with any undesired rock and sediment returned to the ocean floor. The second method shuns drilling and instead uses a combination of conveyor belts and hydraulic tubes to cover larger areas than are accessible by drilling.



Book review of Bryce’s “Power hungry: the myths of green energy and the real fuels of the future”


Preface.  This is a book review of: Robert Bryce. 2009. Power Hungry: The Myths of “Green” Energy and the Real Fuels of the Future.

This is a brilliant book, very funny at times, a great way to sharpen your critical thinking skills, with complex ideas and principles expressed simply enough that anyone can understand them.

I have two main quibbles with his book: his promotion of nuclear power and of natural gas. I’ve written quite a bit about energy and resources in “When Trucks Stop Running” and on energyskeptic about why neither can save us from the coming oil shortages — after all, natural gas and uranium are finite also.

This book came out in 2009. Bryce might have been less enthusiastic about nuclear power as a potential solution if he’d read the 2013 “Too Hot to Touch: The Problem of High-Level Nuclear Waste” by W. A. Alley et al., Cambridge University Press, or the 2016 National Research Council report “Lessons Learned from the Fukushima Nuclear Accident for Improving Safety and Security of U.S. Nuclear Plants: Phase 2”. As a result of that study, MIT (Massachusetts Institute of Technology) and Science Magazine concluded that a nuclear spent fuel fire at Peach Bottom in Pennsylvania could force up to 18 million people to evacuate. This is because spent fuel is not stored under the containment vessel where the reactor is, which would keep the radioactivity from escaping; if electric power were out for 12 to 31 days (depending on how hot the stored fuel was), fuel from the reactor core cooling down in a nearby spent fuel pool could catch fire, forcing millions to flee thousands of square miles of contaminated land.

Bryce on why the green economy won’t work:

There’s tremendous political appeal in “green jobs,” a “green collar economy,” and in what U.S. President Barack Obama calls a “new energy future.”  We’ve repeatedly been told that if we embrace those ideas, provide more subsidies to politically favored businesses, and launch more government-funded energy research programs, then we would resolve a host of problems, including carbon dioxide emissions, global climate change, dependence on oil imports, terrorism, peak oil, wars in the Persian Gulf, and air pollution. Furthermore, we’re told that by embracing “green” energy we would also revive our struggling economy, because doing so would produce more of those vaunted “green jobs.”

These claims ignore the hard realities posed by the Four Imperatives: power density, energy density, cost, and scale.

It may be fashionable to promote wind, solar, and biofuels, but those sources fail when it comes to power density. We want energy sources that produce lots of power (which is measured in horsepower or watts) from small amounts of real estate.

And that’s the key problem with wind, solar, and biofuels: They require huge amounts of land to generate meaningful amounts of power. If a source has low power density, it invariably has higher costs, which makes it difficult for that source to scale up and provide large amounts of energy at reasonable prices.

What follows are my kindle notes of what I found useful, so as usual, a bit disjointed as new topics come up with no segue.

Alice Friedemann  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report


Robert Bryce. 2009. Power Hungry: The Myths of “Green” Energy and the Real Fuels of the Future.

The deluge of feel-good chatter about “green” energy has bamboozled the American public and U.S. politicians into believing that we can easily quit using hydrocarbons and move on to something else that’s cleaner, greener, and, in theory, cheaper. The hard truth is that we must make decisions about how to proceed on energy very carefully, because America simply cannot afford to waste any more money on programs that fail to meet the Four Imperatives.

Energy is the ability to do work; power is the rate at which work gets done.  The more power we have, the quicker the work gets done.  We use energy to make power.

A 2007 study by Michigan State University determined that just 28% of American adults could be considered scientifically literate (ScienceDaily 2007). In February 2009, the California Academy of Sciences released the findings of a survey showing that most Americans couldn’t pass a basic scientific literacy test (CAS 2009). The findings:

  • Just 53% of adults knew how long it takes for the Earth to revolve around the Sun.
  • Just 59% knew that the earliest humans did not live at the same time as dinosaurs.
  • Only 47% of adults could provide a rough estimate of the proportion of the Earth’s surface that is covered with water. (The academy accepted anything between 65 and 75% as correct.)
  • A mere 21% were able to answer those three questions correctly.

This centuries-long suspicion of science, which continues today with regular attacks on Charles Darwin and his theory of evolution, was recognized by British scientist and novelist C. P. Snow in the 1950s when he delivered a lecture called “The Two Cultures.”

A good many times I have been present at gatherings of people who, by the standards of the traditional culture, are thought highly educated and who have with considerable gusto been expressing their incredulity at the illiteracy of scientists. Once or twice I have been provoked and have asked the company how many of them could describe the Second Law of Thermodynamics. The response was cold: it was also negative. Yet I was asking something which is about the scientific equivalent of: Have you read a work of Shakespeare’s? 

These important laws — the first law of thermodynamics—energy is neither created nor destroyed—and the second law—energy tends to become more random and less available—are relegated to the realm of too much information. This apathy toward science makes it laughably easy for the public to be deceived, or for people to deceive themselves.

Energy is an amount, while power is a measure of energy flow. And that’s a critical distinction. Energy is a sum. Power is a rate. And rates are often more telling than sums.

When it comes to doing work, we insist on having power that is instantly available. We want the ability to switch things on and off whenever we choose. And that desire largely excludes wind and solar from being major players in our energy mix, because we can’t control the wind or the sun. Weather changes quickly.

Renewable energy has little value unless it becomes renewable power, meaning power that can be dispatched at specific times of our choosing. But achieving the ability to dispatch that power at specific times means solving the problem of energy storage. And despite decades of effort, we still have not found an economical way to store large quantities of the energy we get from the wind and the sun so that we can convert that energy into power when we want it.

Power density refers to the amount of power that can be harnessed in a given unit of volume, area, or mass. Watts per square meter may be the most telling of these. Using watts per square meter allows us to make a direct comparison between renewable energy sources such as wind and solar and traditional sources such as oil, natural gas, and nuclear power.

Energy density refers to the amount of energy that can be contained in a given unit of volume, area, or mass. Common energy density metrics include Btu per gallon and joules per kilogram.

When it comes to questions about power and energy, the higher the density, the better. For example, a 100-pound battery that can store, say, 10 kilowatt-hours of electricity is better than a battery that weighs just as much but can only hold 5 kilowatt-hours. Put another way, the first battery has twice the energy density of the second one. But both of those batteries are mere pretenders when compared with gasoline, which, by weight, has about 80 times the energy density of the best lithium-ion batteries.
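Bryce’s battery comparison is just a ratio of energy densities. A minimal sketch, using representative round numbers (roughly 46 MJ/kg for gasoline and 0.58 MJ/kg for lithium-ion cells of that era; both figures are my assumptions):

```python
# Energy density comparison, in MJ per kg (representative assumed values).
GASOLINE_MJ_PER_KG = 46.0   # typical lower heating value of gasoline
LI_ION_MJ_PER_KG = 0.58     # roughly a 160 Wh/kg lithium-ion cell

# Gasoline holds about 80x more energy per unit mass.
ratio = GASOLINE_MJ_PER_KG / LI_ION_MJ_PER_KG
print(f"gasoline / li-ion energy density: {ratio:.0f}x")

# Bryce's 100-pound battery example: same mass, twice the stored
# energy, so twice the energy density.
print(10 / 5)  # 2.0
```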

Ever since Watt’s day, the world of engineering has been dominated by the effort to produce ever-better engines that can more quickly and efficiently convert the energy found in coal, oil, and natural gas into power. And that effort to increase the power density of our engines, turbines, and motors has resulted in the production of ever-greater amounts of power from smaller and smaller spaces.

Comparing the engine in the Model T with that of a modern vehicle.   In 1908, Henry Ford introduced the Model T, which had a 2.9-liter engine that produced 22 horsepower (HP), or about 7.6 HP per liter of displacement.  A century later, Ford Motor Company was selling the 2010 Ford Fusion. It was equipped with a 2.5-liter engine that produced 175 HP, which works out to 70 HP per liter.  So even though the displacement of the Fusion’s engine is about 14% less than the one in the Model T, it produces more than 9 times as much power per liter. In other words, over the past century, Ford’s engineers have made a 9-fold improvement in the engine’s power density.
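The power-density arithmetic behind the comparison, as a quick sketch:

```python
# Horsepower per liter of displacement for the two engines Bryce cites.
def hp_per_liter(horsepower, displacement_liters):
    return horsepower / displacement_liters

model_t = hp_per_liter(22, 2.9)    # 1908 Ford Model T
fusion = hp_per_liter(175, 2.5)    # 2010 Ford Fusion

print(f"Model T: {model_t:.1f} HP/L")           # 7.6 HP/L
print(f"Fusion:  {fusion:.1f} HP/L")            # 70.0 HP/L
print(f"Improvement: {fusion / model_t:.1f}x")  # roughly 9x
```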

But with both wind and solar, and with corn ethanol and other biofuels, engineers are constantly fighting an uphill battle, one that requires using lots of land, as well as resources such as steel, concrete, and glass, in their effort to overcome the low power density of those sources.

One of the biggest problems when it comes to energy transitions is that we’ve invested trillions of dollars in the pipelines, wires, storage tanks, and electricity-generation plants that are providing us with the watts that we use to keep the economy afloat. The United States and the rest of the world cannot, and will not, simply jettison all of that investment in order to move to some other form of energy that is more politically appealing.

The idea that hydrocarbons beget more hydrocarbons can also be seen by looking at the Cardinal coal mine in western Kentucky. The mine produces more than 15,000 tons of coal per day. And the essential commodity that facilitates the mine’s amazing productivity is electricity. The massive machines that claw the coal from the earth run on electricity provided by power plants on the surface that burn coal. In fact, about 93% of Kentucky’s electricity is produced from coal. To paraphrase Goodell, at the Cardinal Mine, the coal, in effect, is mining itself.

Hydrocarbons are begetting more hydrocarbons in the oil and gas business. Modern drilling rigs can bore holes that are five, six, or even eight miles long in the quest to tap new reservoirs of oil. And the energy they use to access that oil is … oil. Diesel fuel has long been the fuel of choice for drilling rigs around the world. On offshore drilling rigs, the power is often supplied by diesel fuel. But in some cases, the power is provided by natural gas that the rig itself produces. Thus, on those offshore platforms, the natural gas is, in effect, mining itself.

If we tried to make biodiesel from soybeans it wouldn’t provide anything close to the scale needed to keep diesel engines running.  Even if the U.S. converted all of the soybeans it produces in an average year into biodiesel, that would be less than 10% of America’s total diesel-fuel needs (4).

Multiplying global energy use (226 million barrels of oil equivalent in primary energy each day) by horsepower per barrel, we find that the world consumes about 6.8 billion horsepower—all day, every day. Therefore, roughly speaking, the world consumes about 1 horsepower per person. Of course, this power availability is not spread evenly across the globe. Americans use about 4.5 horsepower per capita, while their counterparts in Pakistan and India use less than 0.25.
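Dividing Bryce’s global total by population gives the per-person figure (the roughly 6.7 billion 2009 world population is my assumption, not from the text):

```python
# Per-capita power from the totals quoted in the text.
WORLD_POWER_HP = 6.8e9      # Bryce's figure for continuous global consumption
WORLD_POPULATION = 6.7e9    # approximate 2009 population (my assumption)
WATTS_PER_HP = 745.7

hp_per_person = WORLD_POWER_HP / WORLD_POPULATION
print(f"about {hp_per_person:.1f} HP per person")
print(f"about {hp_per_person * WATTS_PER_HP:.0f} W per person")

# The U.S. figure of 4.5 HP per capita, expressed in watts:
print(f"U.S.: about {4.5 * WATTS_PER_HP:.0f} W per person")
```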

In figure 10 Bryce shows this energy use as lightbulbs, with India and Pakistan consuming the least: 1.5 lightbulbs, or 167 watts per capita. China is at 7 lightbulbs (673 watts per capita), and if everyone in the U.S. wore a giant chandelier with the lightbulbs representing their energy use, there’d be 33.5 lightbulbs (about 3,350 watts).

Power density and land area (I added material from Smil as well)

Extracts from:

Vaclav Smil. 2017. Energy transitions: history, requirements, prospects.

Robert Bryce. 2009. Power Hungry: The Myths of “Green” Energy and the Real Fuels of the Future.

Smil: Wind, solar, and biomass have such low power density per square meter that a fully renewable system (replacing the 320 GW of fossil-fueled electricity generation and 1.8 TW of coal, oil, and gas with biofuels) would extend over 25 to 50% of the country’s territory, or 965,000 to 1.81 million square miles (250-470 Mha), with an average power density of just 0.45 W/m2, mainly due to the enormous area needed to produce liquid biofuels.

Cultivating phytomass at 1 W/m2 to replace today’s 12.5 TW of fossil fuels would require 4.8 million square miles (12.5 million square kilometers), roughly the area of the U.S. and India combined. If all of America’s gasoline demand were derived from ethanol, that would take an area 20% larger than the nation’s total arable land. It would be worse elsewhere: the U.S. produces twice as much corn per acre as the rest of the world.

If the U.S. tried to generate 10% of its electricity (405 TWh in 2012) from wood chips, it would require forests covering an area the size of Minnesota (84,950 square miles), since the power density is only 0.6 W/m2.

Currently the area used by fossil fuel production and extraction, hydropower, and nuclear generation takes up only 0.5% of the land (21,235 square miles, 5.5 Mha). The low energy density of biofuels restricts facilities to small collection areas; otherwise the fossil fuel used to haul the feedstock to the biorefinery exceeds the energy of what’s made (e.g., corn for ethanol needs to be grown within about 50 miles).
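The land-area figures in these Smil extracts are just power demand divided by power density. A sketch (the 35% wood-to-electricity conversion efficiency is my assumption, used to reconcile the 405 TWh figure with the Minnesota-sized area):

```python
# Land area required to supply a given power at a given power density.
def area_km2(power_watts, density_w_per_m2):
    return power_watts / density_w_per_m2 / 1e6  # m^2 -> km^2

# Smil's phytomass case: 12.5 TW at 1 W/m^2.
print(f"{area_km2(12.5e12, 1.0):,.0f} km^2")  # 12,500,000 km^2

# Wood chips for 10% of U.S. electricity: 405 TWh/yr as an average load,
# assuming ~35% conversion of wood energy to electricity (my assumption).
average_load_w = 405e12 / 8760          # TWh per year -> average watts
primary_wood_w = average_load_w / 0.35
print(f"{area_km2(primary_wood_w, 0.6):,.0f} km^2")  # ~220,000 km^2, about Minnesota's area
```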

Power density in watts per square meter

  • Rich middle eastern oil fields: > 10,000 W/m2
  • American oil fields: 1,000-2,000 W/m2
  • Natural gas 1,000 to 10,000 W/m2
  • Coal: 250-500 W/m2 (used to be much higher, but the best coal mines were mined first and the remaining mines have lower-energy-density coal), though thick bituminous coal seams can reach 1,000 to 10,000 W/m2
  • Fast growing trees in plantations: 1 W/m2 (arid) 1 W/m2 (temperate) 1.2 W/m2 tropical
  • Bioengineered trees that don’t exist yet: 2 W/m2 but not really, they’d be constrained by nutrients, fertilizer inputs, soil erosion, and 10 years or more between harvests
  • Harvesting mature virgin forests or coppiced beech or oak: 0.22-0.25 W/m2
  • Crop residues: 0.05 W/m2
  • Ethanol: 0.25 W/m2
  • Biodiesel: 0.12 to 0.18 W/m2
  • Solar: 2.7 W/m2 (Germany’s Waldpolenz)
  • Wind turbines: 2 to 10 W/m2.
  • Hydropower: 3 W/m2 due to large reservoir size, though Three Gorges will be as high as 30 W/m2

Consumption.  Wind, solar, biomass take too much land to support today’s industries and cities

Industrial facilities (especially steel mills and refineries), downtowns of northern cities in winter, and high-rise buildings consume 500 to 1,000 W/m2.

Bryce: All About Power Density: A Comparison of Various Energy Sources in Horsepower (and Watts)

  • Nuclear: 56 Watts per square meter (W/m2), or 300 horsepower (HP)/acre
  • Average U.S. natural gas well @ 115,000 cubic feet per day: 53 W/m2, or 287.5 hp/acre
  • Solar PV: 7 W/m2, or 36 hp/acre
  • Wind turbines: 1.2 W/m2, or 6.4 hp/acre
  • Biomass-fueled power plant: 0.4 W/m2, or 2.1 hp/acre
  • Corn ethanol: 0.05 W/m2, or 0.26 hp/acre
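The two columns are the same quantity in different units: 1 W/m2 is about 5.4 hp/acre (4,046.86 m2 per acre, 745.7 W per horsepower). A sketch of the conversion:

```python
# Convert between W/m^2 and horsepower per acre.
M2_PER_ACRE = 4046.86
WATTS_PER_HP = 745.7

def hp_per_acre(w_per_m2):
    return w_per_m2 * M2_PER_ACRE / WATTS_PER_HP

def w_per_m2(hp_acre):
    return hp_acre * WATTS_PER_HP / M2_PER_ACRE

print(f"nuclear: {hp_per_acre(56):.0f} hp/acre")    # ~304, close to the 300 cited
print(f"gas well: {hp_per_acre(53):.0f} hp/acre")   # ~288
print(f"wind at 6.4 hp/acre: {w_per_m2(6.4):.1f} W/m^2")            # ~1.2
print(f"corn ethanol at 0.26 hp/acre: {w_per_m2(0.26):.2f} W/m^2")  # ~0.05
```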

The Milford Wind Corridor is a 300-megawatt wind project that was built in Utah in 2009. The project was the first to be approved under the Bureau of Land Management’s new wind program for the western United States. To construct the wind farm, which uses 139 turbines spread over 40 square miles, the owners of the project installed a concrete batch plant that ran 6 days a week, 12 hours per day, for 6 months. During that time, the plant consumed about 14.3 million gallons of water to produce 44,344 cubic meters of concrete. Thus, each megawatt of installed wind capacity consumed about 319 cubic meters of concrete.

But those numbers must be adjusted to account for wind’s capacity factor—the percentage of time the generator is running at 100% of its designed capacity. Given that wind generally has a capacity factor of 33% or less, the deployment of 1 megawatt of reliable electric-generation capacity at Milford actually required about 956 cubic meters of concrete.

Peterson, a professor in the nuclear engineering department at the University of California at Berkeley, reported that when accounting for capacity factor, each megawatt of wind power capacity requires about 870 cubic meters of concrete and 460 tons of steel.

Each megawatt of power capacity in a combined-cycle gas turbine power plant (the most efficient type of gas-fired electricity production) requires about 27 cubic meters of concrete and 3.3 tons of steel. In other words, a typical megawatt of reliable wind power capacity requires about 32 times as much concrete and 139 times as much steel as a typical natural gas-fired power plant.
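Those multiples fall directly out of the per-megawatt figures cited above:

```python
# Materials per MW of reliable capacity: wind (capacity-factor adjusted)
# versus a combined-cycle gas turbine, using the figures cited in the text.
wind_concrete_m3, wind_steel_t = 870, 460
gas_concrete_m3, gas_steel_t = 27, 3.3

print(f"concrete: {wind_concrete_m3 / gas_concrete_m3:.0f}x")  # ~32x
print(f"steel: {wind_steel_t / gas_steel_t:.0f}x")             # ~139x

# And the Milford adjustment: 319 m^3 per installed MW scaled up by a
# 33% capacity factor gives the per-reliable-MW figure.
print(319 * 3)  # 957, close to the ~956 cited
```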


Studies claiming that wind power reduced carbon emissions ignored the fact that all wind-power installations must be backed up with large amounts of dispatchable electric-generation capacity. In Denmark’s case, that has meant having large quantities of available hydropower resources in Norway and Sweden that can be called upon when needed. But even with a perfect zero-carbon backup system, the Danes haven’t seen a reduction in carbon dioxide emissions.

That bodes ill for countries that don’t have the access to hydropower that Denmark has. Nearly every country that installs wind power must back up its wind turbines with gas-fired generators.

The Electric Reliability Council of Texas (ERCOT), which manages 85% of the state’s electric load, pegs wind’s capacity factor at less than 9%. In a 2007 report, the grid operator determined that just “8.7% of the installed wind capability can be counted on as dependable capacity during the peak demand period for the next year.” It added that “conventional generation must be available to provide the remaining capacity needed to meet forecast load and reserve requirements.”

By mid-2009, Texas had 8,203 megawatts (MW) of installed wind-power capacity. But ERCOT, in its forecasts for that summer’s demand periods, when electricity use is the highest, was estimating that just 708 MW of the state’s wind-generation capacity could actually be counted on as reliable. With total summer generation needs of 72,648 MW, the vast majority of which comes from gas-fired generation, wind power was providing just 1% of Texas’s total reliable generation portfolio.
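ERCOT’s percentages can be reproduced from the megawatt figures in the passage; a minimal sketch using only the quoted numbers:

```python
# ERCOT summer 2009 figures as quoted in the text.
installed_wind_mw = 8203      # total installed wind capacity
dependable_wind_mw = 708      # wind capacity counted as reliable at peak
total_summer_need_mw = 72648  # total summer generation needs

print(f"{dependable_wind_mw / installed_wind_mw:.1%}")     # 8.6% of installed wind
print(f"{dependable_wind_mw / total_summer_need_mw:.1%}")  # 1.0% of total reliable supply
```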

It’s clear that wind power cannot be counted on as a stand-alone source of electricity but must always be backed up by conventional sources of electricity generation. In short, wind power does not reduce the need for conventional power plants.

Because wind cannot be called up on demand, especially at the time of peak demand, installed wind generation capacity does not reduce the amount of installed conventional generating capacity required. So wind cannot contribute to reducing the capital investment in generating plants. Wind is simply an additional capital investment.

Wind power does not, and cannot, displace power plants; it only adds to them.

In September 2009, Jing Yang of the Wall Street Journal reported that “China’s ambition to create ‘green cities’ powered by huge wind farms comes with a dirty little secret: Dozens of new coal-fired power plants need to be installed as well.” Chinese officials are installing about 12,700 megawatts of new wind turbines in the northwestern province of Gansu. But along with those turbines, the government will install 9,200 megawatts of new coal-fired generating capacity in Gansu, “for use when the winds aren’t favorable.” That quantity of coal-fired capacity, Jing noted, is “equivalent to the entire generating capacity of Hungary.”

The obvious problem with the Chinese plan is that coal-fired plants are designed to provide continuous, baseload power. They cannot be turned on and off quickly. That likely means that all of the new coal plants being built in Gansu province to back up the new wind turbines will be run continuously in order to assure that the regional power grid doesn’t go dark.

In November 2009, Kent Hawkins, a Canadian electrical engineer, published a detailed analysis on the frequency with which gas-fired generators must be cycled on and off in order to back up wind power. Hawkins’ findings: The frequent switching on and off results in more gas consumption than if there were no wind turbines at all. His analysis suggests that it would be more efficient in terms of carbon dioxide emissions to simply run combined-cycle gas turbines on a continuous basis than to use wind turbines backed up by gas-fired generators that are constantly being turned on and off. Hawkins concluded that wind power is not an “effective CO2 mitigation” strategy “because of inefficiencies introduced by fast-ramping (inefficient) operation of gas turbines” (Hawkins 2009).


Between 1999 and 2007, according to data from the Danish Energy Agency, the amount of electricity produced from the country’s wind turbines grew by about 136%, from 3 billion kilowatt-hours (kWh) to some 7.1 billion kWh. By the beginning of 2007, wind power was accounting for about 13.4% of all the electricity generated in Denmark. And yet, over that same time period, coal consumption didn’t change at all. In 1999, Denmark’s daily coal consumption was the equivalent of about 94,400 barrels of oil per day. By 2007, Denmark’s coal consumption was exactly the same as it was back in 1999. In fact, Denmark’s coal consumption in both 2007 and 1999 was nearly the same as it was back in 1981.

The basic problem with Denmark’s wind-power sector is the same as it is everywhere else: It must be backed up by conventional sources of generation. For Denmark, that means using coal as well as the hydropower resources of its neighbors. As much as two-thirds of Denmark’s total wind power production is exported to its neighbors in Germany, Sweden, and Norway. In 2003, 84% of the wind power generated in western Denmark was exported, much of it at below-market rates.

The Danes are providing an electricity subsidy to their neighbors. And they are doing so because Denmark cannot use all of the wind-generated electricity it produces. The intermittency of the wind resources in western Denmark—located far from the main population center in Copenhagen—means that the country must rely on its existing coal-fired power plants. When excess electricity comes on-stream from the country’s wind turbines, the Danes ship it abroad, particularly to Sweden and Norway, because those countries have large amounts of hydropower resources that Denmark then uses to balance its own electric grid.

“Exported wind power, paid for by Danish householders, brings material benefits in the form of cheap electricity and delayed investment in new generation equipment for consumers in Sweden and Norway but nothing for Danish consumers.” (CEPOS 2009)

In 2007, the country’s total primary energy use, about 363,000 barrels of oil equivalent per day, was roughly the same as it was in 1981 (BP 2009). Denmark’s ability to keep energy consumption growth flat over such a long period is anomalous. But let’s be clear: That near-zero growth in energy consumption has been achieved in part by imposing exorbitant energy taxes and by maintaining near-zero growth in population.

Denmark is even more reliant on oil—as a percentage of primary energy—than the United States is. In fact, the Danes are among the most oil-reliant people on Earth. In 2007, Denmark got about 51% of its primary energy from oil. That’s far higher than the percentage in the United States (40%) and significantly higher than the world average of 35.6%. As stated above, Denmark is more coal dependent than the United States, getting about 26% of its primary energy from coal.

Between 1990 and 2006, Denmark’s overall greenhouse gas emissions increased by 2.1 percent (EEA 2008).

If Denmark’s huge wind-power sector were reducing carbon dioxide emissions, you’d expect the Danes to be bragging about it, right? Well, guess what? They’re not.

Denmark has become largely self-sufficient in oil and gas, not because it’s more virtuous or because it’s using more alternative energy, but because it has fully committed to drilling in the North Sea.

Between 1981 and 2007, the country’s oil production jumped from less than 15,000 barrels per day to nearly 314,000 barrels per day—an increase of nearly 2,000 percent. The focus on sustained oil and gas exploration and production led to a corresponding increase in oil reserves, which jumped from about 500 million barrels to nearly 1.3 billion barrels. Denmark has had similar success with its natural gas production. In 1981, the country was producing no natural gas. By 2007, natural gas production was nearly 900 million cubic feet per day—enough to supply all of the country’s own consumption needs and to allow for substantial exports.

Hydrocarbons provide Denmark with 48 times as much energy as the country gets from wind power.

The September 2009 study by CEPOS said that Denmark’s wind industry “saves neither fossil fuel consumption nor carbon dioxide emissions.” The final page of the report even offers a warning for the United States: “The Danish experience also suggests that a strong US wind expansion would not benefit the overall economy. It would entail substantial costs to the consumer and industry, and only to a lesser degree benefit a small part of the economy, namely wind turbine owners, wind shareholders and those employed in the sector.”

Wind does not substitute for natural gas

The International Energy Agency, in its “Natural Gas Market Review 2009,” said that as renewable capacity is added, “gas-fired capacity will increase while its overall load factor may be reduced…. This switching will have an impact on the profitability of new investments.”

Though it is true that gas consumption declines during periods when the wind is providing lots of electricity, it’s not yet clear how large those savings will be. Nor is it clear that the savings in fuel costs will be enough to offset the capital costs incurred to install the needed gas storage capacity, pipelines, and generators. Furthermore, all of that gas- and power-delivery infrastructure—and the generators, in particular—must be staffed continually. The utilities cannot send workers home only when the wind is blowing. The generators must be available and staffed to meet demand 24/7.

Americans have been repeatedly told that electricity generated from wind costs less than electricity produced by other forms of power generation. That’s only true if you don’t count the investments that must be made in other power-delivery infrastructure that assures that the lights don’t go out.

The costs of all the new gas-related infrastructure that must be installed in order to accommodate increased use of wind power should be included in calculations about the costs of adding renewable sources of energy to the U.S. electricity grid. Those calculations should be done on a state-by-state basis.

Neodymium for wind power is used in neodymium-iron-boron magnets, which are powerful, lightweight, and relatively cheap—at least they are when compared to the magnets they replaced, which were made with samarium (another lanthanide) and cobalt. The Toyota Prius uses neodymium-iron-boron magnets in its motor-generator and its batteries. Analysts have called the Prius one of the most rare-earth-intensive consumer products ever made, with each Prius containing about 1 kilogram (2.2 pounds) of neodymium and about 10 kilograms (22 pounds) of lanthanum. And it’s not just the Prius. Other hybrids, such as the Honda Insight and the Ford Fusion, also have them.

China’s near-monopoly control of the green elements likely means that most of the new manufacturing jobs related to “green” energy products will be created in China, not the United States. Chinese companies have made it clear that—thanks to huge subsidies provided by the Chinese government—they are willing to lose money on their solar panels in order to gain market share.

Environmental activists in the United States and other countries may lust mightily for a high-tech, hybrid-electric, no-carbon, super-hyphenated energy future. But the reality is that that vision depends mightily on lanthanides and lithium. That means mining. And China controls nearly all of the world’s existing mines that produce lanthanides.

Given that energy efficiency results in increased energy use, it’s obvious that, although energy efficiency should be pursued, it cannot be expected to solve the dilemmas posed by the world’s ever-growing need for energy.


If we are going to agree that carbon dioxide is bad, then what?

  • Where are the substitutes for hydrocarbons? Hydrocarbons now provide about 88% of the world’s total energy needs. Replacing them means coming up with an energy form that can supply 200 million barrels of oil equivalent per day.
  • Increasing energy consumption equals higher living standards. Always. Everywhere. Given that last fact, how can we expect the people of the world—all 6.7 billion of them—to use less energy? The answer to that question is obvious: We can’t.

Three billion tons is a difficult number to comprehend, especially when it represents something that is widely dispersed the way carbon emissions are in the atmosphere. According to calculations done by Vaclav Smil, if that amount of carbon dioxide (remember, it’s just 10% of global annual carbon dioxide emissions) were compressed to about 1,000 pounds per square inch, it would have about the same volume as the total volume of global annual oil production (Smil 2006).

In 2008, global oil production was about 82 million barrels per day.  Thus, 10% of global carbon dioxide emissions in one day would be approximately equal to the daily volume of global oil production. So here’s the punch line: Getting rid of just 10% of global carbon dioxide per day would mean filling the equivalent of 41 VLCC supertankers every day. Each VLCC, or very large crude carrier, holds about 2 million barrels (Apache 2008).
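The supertanker count is simple division; a sketch using the quoted figures:

```python
# Volume comparison as quoted: 10% of daily CO2 emissions, compressed,
# roughly equals daily world oil production (Smil 2006).
oil_production_bpd = 82_000_000  # barrels per day, 2008
vlcc_capacity_bbl = 2_000_000    # barrels held by one very large crude carrier

print(oil_production_bpd / vlcc_capacity_bbl)  # 41.0 supertankers per day
```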

Smil emphasized the tremendous difficulty of “putting in place an industry that would have to force underground every year the volume of compressed gas larger than or (with higher compression) equal to the volume of crude oil extracted globally by [the] petroleum industry whose infrastructures and capacities have been put in place over a century of development.” “Such a technical feat,” he said, “could not be accomplished within a single generation (Smil 2006)”.


  • 1911: The New York Times declares that the electric car “has long been recognized as the ideal solution” because it “is cleaner and quieter” and “much more economical.”(NYT 1911)
  • 1915: The Washington Post writes that “prices on electric cars will continue to drop until they are within reach of the average family.”(WP 1915)
  • 1959: The New York Times reports that the “Old electric may be the car of tomorrow.” The story said that electric cars were making a comeback because “gasoline is expensive today, principally because it is so heavily taxed, while electricity is far cheaper” than it was back in the 1920s (Ingraham 1959)
  • 1967: The Los Angeles Times says that American Motors Corporation is on the verge of producing an electric car, the Amitron, to be powered by lithium batteries capable of holding 330 watt-hours per kilogram. (That’s more than two times as much as the energy density of modern lithium-ion batteries.) Backers of the Amitron said, “We don’t see a major obstacle in technology. It’s just a matter of time.” (Thomas 1967)
  • 1979: The Washington Post reports that General Motors has found “a breakthrough in batteries” that “now makes electric cars commercially practical.” The new zinc-nickel oxide batteries will provide the “100-mile range that General Motors executives believe is necessary to successfully sell electric vehicles to the public.” (Knight, J. September 26, 1979. GM unveils electric car, new battery. Washington Post, D7.)
  • 1980: In an opinion piece, the Washington Post avers that “practical electric cars can be built in the near future.” By 2000, the average family would own cars, predicted the Post, “tailored for the purpose for which they are most often used.” It went on to say that “in this new kind of car fleet, the electric vehicle could play a big role—especially as delivery trucks and two-passenger urban commuter cars. With an aggressive production effort, they might save 1 million barrels of oil a day by the turn of the century.” (WP 1980)

Recharging the 53-kilowatt-hour battery pack in the Tesla takes about 4 hours, or 240 minutes. The total cost of refueling my Honda van: $44.32. Now, were I to buy 53 kilowatt-hours of electricity from the local utility, at an average cost of $0.10 per kWh, the total cost of the fuel would only be about $5.30—far less than the $44 I paid to refill my minivan. But then, my van doesn’t need recharging every night.
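The fuel-cost comparison reduces to a single multiplication; a sketch with the figures given above:

```python
# Recharge cost versus a gasoline fill-up, figures as given in the text.
battery_kwh = 53
price_per_kwh = 0.10        # average retail electricity rate assumed in the text
gasoline_fillup_usd = 44.32

recharge_usd = battery_kwh * price_per_kwh
print(f"${recharge_usd:.2f}")                        # $5.30 for a full recharge
print(round(gasoline_fillup_usd / recharge_usd, 1))  # gasoline fill-up costs ~8.4x as much
```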

Diesel and gasoline vehicles are not overly reliant on rare earth elements such as neodymium and lanthanum.


The power density of biomass production is simply too low: approximately 0.4 watts per square meter (Ausubel 2007). Even the best-managed tree plantations can only achieve power densities of about 1 watt per square meter.  For comparison, recall that even a marginal natural gas well has a power density of about 28 watts per square meter.

To replace just 10% of the coal-fired electricity capacity in the United States with wood-fired capacity would mean more than doubling overall U.S. wood consumption.

The wood requirements for the Georgia Power facility and the East Texas generation project are about the same: 1 million tons of wood per year.  Thus, both projects will require 10,000 tons of wood per year to produce 1 megawatt of electricity. The United States now has about 336,300 megawatts of coal-fired electricity generation capacity.  Let’s assume that we want to replace just 10% of that coal-fired capacity—33,630 megawatts—with wood-burning power plants. Simple math shows that doing so would require about 336.3 million tons of wood per year.  How much wood is that? According to estimates from the United Nations Environmental Program, total U.S. wood consumption is now about 236.4 million tons per year.  Given those numbers, if the United States wants to continue using wood for building homes, bookshelves, and other uses—while also replacing 10% of its coal-fired generation capacity with wood-fired generators— it will need to consume nearly 573 million tons of wood per year, or about 2.5 times its current consumption.
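The “simple math” in the passage can be laid out explicitly. Note that the 10,000 tons-per-megawatt figure implies each reference plant is about 100 MW (1 million tons divided by 10,000 tons per MW):

```python
# Wood-for-coal replacement arithmetic, figures as quoted in the text.
wood_t_per_mw_year = 10_000        # tons of wood per MW per year
us_coal_capacity_mw = 336_300      # total US coal-fired capacity
replacement_share = 0.10           # replace 10% of coal capacity with wood
current_wood_use_t = 236_400_000   # current total US wood consumption, tons/year

wood_needed_t = us_coal_capacity_mw * replacement_share * wood_t_per_mw_year
total_wood_t = wood_needed_t + current_wood_use_t

print(round(wood_needed_t / 1e6, 1))  # 336.3 million tons for the power plants
print(round(total_wood_t / 1e6))      # 573 million tons in total
print(round(total_wood_t / current_wood_use_t, 1))  # ~2.4x current use (the text rounds to 2.5)
```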

The problems with biomass-to-electricity schemes are the same ones that haunt nearly every renewable energy idea: power density and energy density.  Wood has only half the energy density of coal.

Combine that low energy density with the low power density of wood and biomass production, and the challenges become even more apparent. The power density of the best-managed forests is only about 1 watt per square meter. And when a particular energy source, in this case, wood, has low power density and low energy density, that leads to problems with the other two elements of the 4 Imperatives: cost and scale.

Tad Patzek, the head of the petroleum engineering department at the University of Texas at Austin, and Gregory Croft, a doctoral candidate in engineering at the University of California at Berkeley, have come to similar conclusions. Patzek and Croft have concluded that world coal production will peak in 2011. Furthermore, in a report that they completed in 2009, they projected that global coal production “will fall by 50% in the next 40 years” and that carbon dioxide emissions from coal combustion will fall by the same percentage (Patzek 2009). For Patzek and Croft, the implications of the looming peak in coal production make it apparent that the world must focus increasing effort on energy efficiency.

The physical production limits on oil and coal may keep carbon dioxide emissions far below the projections put forward by the Intergovernmental Panel on Climate Change, which has said that carbon dioxide concentrations could reach almost 1,000 parts per million by 2099 (Rutledge 2009). In his analysis, Rutledge predicted that due to peak coal, global carbon dioxide concentrations will not rise much above 450 parts per million by 2065.

Though we cannot predict the future, we can look backward and see that the beginning of the latest economic recession—like many recessions before it—coincided with a major spike in oil prices. History shows that sharp increases in oil prices are often followed by recessions. Those oil price spikes also lead to sharp decreases in oil demand. For instance, in 1978, U.S. oil consumption peaked at 18.8 million barrels per day. But the high prices that came with the 1979 oil shock, the second big price spike in six years, sent U.S. consumption tumbling. It took two decades for U.S. oil demand to recover after the price shocks of the 1970s: it wasn’t until 1998, when U.S. consumption hit 18.9 million barrels per day, that the 1978 level was surpassed. And demand took that long to recover even though oil prices were remarkably low. From the mid-1980s through the early 2000s, prices largely stayed under $20 per barrel, and they even fell as low as $9.39 per barrel in December 1998.

In 2007, the EPA admitted that increased use of ethanol in gasoline would increase emissions of key air pollutants like volatile organic compounds and nitrogen oxide by as much as 7%. In the documents the EPA released on October 13, 2010, announcing the approval of the 15% ethanol blends, the agency again acknowledged that more ethanol consumption will mean higher emissions of key pollutants.

One more example of the egregiousness of the ethanol scam: U.S. ethanol producers and blenders are now exporting record amounts of ethanol. Through the first nine months of 2010, the U.S. exported about 251 million gallons of the alcohol fuel—that’s more than double the export volume recorded in 2009. Among the countries getting U.S. ethanol exports: Saudi Arabia and the United Arab Emirates. To summarize: In October, the Obama administration bailed out the ethanol industry because the industry had built too much capacity. Administration officials and the ethanol scammers justified the bailout by saying it will help the United States achieve energy independence and cut oil imports. But rather than reduce oil imports, the ethanol scammers are collecting about $7 billion per year in subsidies from U.S. taxpayers so that they can ship increasing amounts of American-made ethanol abroad (Furlow 2010). And in doing so, the ethanol scammers are consuming nearly 40% of all the corn grown in the United States.


Apache Corporation. July 14, 2008. Topic report: tanker market review.

Ausubel, J. 2007. The future environment for the energy business. APPEA Journal.

BP. 2009. BP Statistical Review of World Energy. (Denmark’s total primary energy consumption in 1981 was 18.2 million tons of oil equivalent per year, or 365,000 barrels of oil equivalent per day; by 2007, the figure was 18.1 million tons of oil equivalent per year.)

CAS. February 25, 2009. American adults flunk basic science. California Academy of Sciences.

ScienceDaily. February 2007. Scientific literacy: how do Americans stack up?

CEPOS. 2009. Wind energy: the case of Denmark. Danish Center for Political Studies.

EEA. 2008. Greenhouse gas emission trends and projections in Europe 2008: tracking progress towards Kyoto targets. European Environment Agency.

Furlow, B. 2010. Senator Bingaman supports push to cut ethanol subsidies. New Mexico Independent.

Hawkins, K. November 13, 2009. Wind integration: incremental emissions from back-up generation cycling, part 1.

NYT. November 12, 1911. Foreign trade in electric vehicles. New York Times, C8.

Thomas, B. December 17, 1967. AMC does a turnabout: starts running in black. Los Angeles Times, K10.

Patzek, T.W., Croft, G. D. 2009. A global coal production forecast with multi-Hubbert cycle analysis. Energy Journal.

Peterson, Per F. September 16, 2008. Issues for Nuclear Power Construction Costs and Waste Management.

Rutledge, D. 2009. Hubbert’s peak, the coal question, and climate change.

Smil, V. 2006. Energy at the crossroads: background notes for a presentation at the global science forum conference on scientific challenges for energy research, Paris, May 17-18, 2006.

WP. October 31, 1915. Prophecies come true. Washington Post, E18.

WP. June 7, 1980. Plug ’Er In? Washington Post, A10.

Hydropower:  Over the past decade, more than 200 dams in the United States have been dismantled.


Book review of Heinberg’s “Afterburn: society beyond fossil fuels”

Preface. This book has 15 essays Heinberg wrote from 2011 to 2014, many of them available for free online.  These are some of my Kindle notes of parts that interested me, so to you it will be disjointed and perhaps not what you would have chosen as important — but it gives you an idea of what a great writer Heinberg is and hopefully inspires you to buy his book.

Alice Friedemann  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report


Heinberg, R. 2015. Afterburn: Society Beyond Fossil Fuels. New Society Publishers.

The most obvious criticism that could be leveled at the book “The Party’s Over”, which came out in 2005, is the simple observation that, as of 2014, world oil production is increasing, not declining. However, the following passage points to just how accurate the leading peakists were in forecasting trends: “Colin Campbell estimates that extraction of conventional oil will peak before 2010; however, because more unconventional oil—including oil sands, heavy oil, and oil shale—will be produced during the coming decade, the total production of fossil-fuel liquids (conventional plus unconventional) will peak several years later. According to Jean Laherrère, that may happen as late as 2015.”

In the “Party’s Over”, I also summarized Colin Campbell’s view that “the next decade will be a ‘plateau’ period, in which recurring economic recessions will result in lowered energy demand, which will in turn temporarily mask the underlying depletion trend.”

Economics 101 tells us that supply of and demand for a commodity like oil (which happens to be our primary energy source) must converge at the current market price, but no economist can guarantee that the price will be affordable to society. High oil prices are sand in the gears of the economy. As the oil industry is forced to spend ever more money to access ever-lower-quality resources, the result is a general trend toward economic stagnation. None of the peak oil deniers warned us about this.

Peakists within the oil industry are usually technical staff (usually geologists, seldom economists, and never PR professionals) and are only free to speak out on the subject once they’ve retired. The industry has two big reasons to hate peak oil. First, company stock prices are tied to the value of booked oil reserves; if the public (and government regulators) were to become convinced that those reserves were problematic, the companies’ ability to raise money would be seriously compromised—and oil companies need to raise lots of money these days to find and produce ever-lower-quality resources. It’s thus in the interest of companies to maintain an impression of (at least potential) abundance.

The problem is hidden from view by gross oil and natural gas production numbers that look and feel just fine—good enough to crow about. President Obama did plenty of crowing in his 2014 State of the Union address, where he touted “More oil produced at home than we buy from the rest of the world—the first time that’s happened in nearly 20 years.” It’s true: US crude oil production increased from about 5 million barrels per day (mb/d) to nearly 7.75 mb/d from 2009 through 2013, with imports still over 7.5 mb/d. And American natural gas production has been at an all-time high. Energy problem? What energy problem?

We’ll never run out of any fossil fuel, in the sense of extracting every last molecule of coal, oil, or gas. Long before we get to that point, we will confront the dreaded double line in the diagram, labeled “energy in equals energy out.” At that stage, it will cost as much energy to find, pump, transport, and process a barrel of oil as the oil’s refined products will yield when burned in even the most perfectly efficient engine (I use oil merely as the most apt example; the same principle applies for coal, natural gas, or any other fossil fuel). As we approach the energy break-even point, we can expect the requirement for ever-higher levels of investment in exploration and production on the part of the petroleum industry; we can therefore anticipate higher prices for finished fuels. Incidentally, we can also expect more environmental risk and damage from the process of fuel “production” (i.e., extraction and processing), because we will be drilling deeper and going to the ends of the Earth to find the last remaining deposits, and we will be burning ever-dirtier fuels. Right now that’s exactly what is happening.

Unless oil prices remain at current stratospheric levels, significant expansion of tar sands operations may be uneconomic.

Lower energy profits from unconventional oil inevitably show up in the financials of oil companies. Between 1998 and 2005, the industry invested $1.5 trillion in exploration and production, and this investment yielded 8.6 million barrels per day in additional world oil production. Between 2005 and 2013, the industry spent $4 trillion on E&P, yet this more-than-doubled investment produced only 4 mb/d in added production.


It gets worse: all net new production during the 2005–13 period was from unconventional sources (primarily tight oil from the United States and tar sands from Canada); of the $4 trillion spent since 2005, it took $350 billion to achieve a bump in their production. Subtracting unconventionals from the total, world oil production actually fell by about a million barrels a day during these years. That means the oil industry spent more than $3.5 trillion to achieve a decline in overall conventional production.
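The spending figures in the passage imply the headline number directly; a sketch using only the quoted amounts:

```python
# Oil industry exploration-and-production spending, 2005-2013, as quoted.
total_spend_usd = 4.0e12            # total E&P investment over the period
unconventional_spend_usd = 0.35e12  # spent on tight oil and tar sands

conventional_spend_usd = total_spend_usd - unconventional_spend_usd
print(f"${conventional_spend_usd / 1e12:.2f} trillion")  # $3.65 trillion spent
# ...while conventional production fell by about 1 million barrels per day.
```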

Daniel L. Davis described the situation in a recent article in the Financial Times: The 2013 [World Energy Outlook, published by the International Energy Agency] has the oil industry’s upstream [capital expenditure] rising by nearly 180% since 2000, but the global oil supply (adjusted for energy content) by only 14%. The most straightforward interpretation of this data is that the economics of oil have become completely dislocated from historic norms since 2000 (and especially since 2005), with the industry investing at exponentially higher rates for increasingly small incremental yields of energy.

The costs of oil exploration and production are currently rising at about 10.9% per year, according to Steve Kopits of the energy analytics firm Douglas-Westwood.  This is squeezing the industry’s profit margins, since it’s getting ever harder to pass these costs on to consumers. In 2010, The Economist magazine discussed rising costs of energy production, musing that “the direction of change seems clear. If the world were a giant company, its return on capital would be falling.”

The critical relationship between energy production and the energy cost of extraction is now deteriorating so rapidly that the economy as we have known it for more than two centuries is beginning to unravel.

The average energy profit ratio (a.k.a. energy returned on energy invested, or EROEI) for US oil production has fallen from 100:1 to 10:1, and the downward trend is accelerating as more and more oil comes from tight deposits (shale) and deepwater. Canada’s prospects are perhaps even more dismal than those of the United States: the tar sands of Alberta have an EROEI that ranges from 3.2 : 1 to 5 : 1. A 5-to-1 profit ratio might be spectacular in the financial world, but in energy terms this is alarming. Everything we do in industrial societies—education, health care, research, manufacturing, transportation—uses energy. Unless our investment of energy in producing more energy yields an averaged profit ratio of roughly 10 : 1 or more, it may not be possible to maintain an industrial (as opposed to an agrarian) mode of societal organization over the long run.
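The reason a 5:1 energy profit ratio alarms energy analysts while delighting financiers can be made concrete: the net energy delivered to society is 1 - 1/EROEI, so returns collapse quickly at low ratios. A minimal sketch, using the EROEI values quoted above:

```python
# Net energy fraction as a function of the energy profit ratio (EROEI):
# for each unit of energy invested, EROEI units come back, so the
# fraction left over for society is 1 - 1/EROEI.
def net_energy_fraction(eroei):
    return 1 - 1 / eroei

for eroei in (100, 10, 5, 3.2):
    print(f"{eroei}:1 -> {net_energy_fraction(eroei):.0%} net energy")
# 100:1 -> 99%, 10:1 -> 90%, 5:1 -> 80%, 3.2:1 -> 69%
```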

If our economy runs on energy and our energy prospects are gloomy, how is it that the economy is recovering? The simplest answer is that it’s not, except as measured by a few misleading gross statistics.

Unemployment statistics don’t include people who’ve given up looking for work. Labor force participation rates are at the lowest level in 35 years.

Claims of economic recovery fixate primarily on one number: gross domestic product, or GDP. Is any society able to expand its debt endlessly? If there were indeed limits to a country’s ability to perpetually grow GDP by increasing its total debt (government plus private), a warning sign would likely come in the form of a trend toward diminishing GDP returns on each new unit of credit created. Bingo: that’s exactly what we’ve been seeing in the United States in recent years. Back in the 1960s, each dollar of increase in total US debt was reflected in nearly a dollar of rise in GDP. By 2000, each new dollar of debt corresponded with only 20 cents of GDP growth. The trend line looked set to reach zero by about 2015.

We won’t quickly and easily switch to electric cars. For that to happen, the economy would have to keep growing, so that more and more people could afford to buy new (and more costly) automobiles. A more likely scenario: as fuel gets increasingly expensive the economy will falter, rendering the transition to electric cars too little, too late.

Most nations have concluded that nuclear power is too costly and risky, and supplies of uranium, the predominant fuel for nuclear power, are limited anyway. Thorium, breeder, fusion, and other nuclear alternatives may hold theoretical promise, but there is virtually no hope that we can resolve the remaining myriad practical challenges, commercialize the technologies, and deploy tens of thousands of new power plants within just a few decades.


Many economists and politicians don’t buy the assertion that energy is at the core of our species-wide survival challenge. They think the game of human success-or-failure revolves around money, military power, or technological advancement. If we toggle prices, taxes, and interest rates; maintain proper trade rules; invest in technology research and development (R&D); and discourage military challenges to the current international order, then growth can continue indefinitely and everything will be fine. Climate change and resource depletion are peripheral problems that can be dealt with through pricing mechanisms or regulations.

Some policy wonks buy “it’s all about energy” but are jittery about “renewables are the future” and won’t go anywhere near “growth is over.” A few of these folks like to think of themselves as environmentalists (sometimes calling themselves “bright green”)—including the Breakthrough Institute and writers like Stewart Brand and Mark Lynas. A majority of government officials are effectively in the same camp, viewing nuclear power, natural gas, carbon capture and storage (“clean coal”), and further technological innovation as pathways to solving the climate crisis without any need to curtail economic growth.

Other environment-friendly folks buy “it’s all about energy” and “renewables are the future” but still remain allergic to the notion that “growth is over.” They say we can transition to 100% renewable power with no sacrifice in terms of economic growth, comfort, or convenience. Stanford professor Mark Jacobson and Amory Lovins of Rocky Mountain Institute are leaders of this chorus. Theirs is a reassuring message, but if it doesn’t happen to be factually true (and there are many energy experts who argue persuasively that it isn’t), then it’s of limited helpfulness because it fails to recommend the kinds or degrees of change in energy usage that are essential to a successful transition.

The general public tends to listen to one or another of these groups, all of which agree that the climate and energy challenge of the 21st century can be met without sacrificing economic growth. This widespread aversion to the “growth is over” conclusion is entirely understandable: during the last century, the economies of industrial nations were engineered to require continual growth in order to produce jobs, returns on investments, and increasing tax revenues to fund government services.


Anyone who questions whether growth can continue is deeply subversive. Nearly everyone has an incentive to ignore or avoid it. It’s not only objectionable to economic conservatives; it is also abhorrent to many progressives who believe economies must continue to grow so that the working class can get a larger piece of the proverbial pie, and the “underdeveloped” world can improve standards of living. But ignoring uncomfortable facts seldom makes them go away. Often it just makes matters worse. Back in the 1970s, when environmental limits were first becoming apparent, catastrophe could have been averted with only a relatively small course correction—a gradual tapering of growth and a slow decline in fossil fuel reliance. Now, only a “cold turkey” approach will suffice. If a critical majority of people couldn’t be persuaded then of the need for a gentle course correction, can they now be talked into undertaking deliberate change on a scale and at a speed that might be nearly as traumatic as the climate collision we’re trying to avoid? To be sure, there are those who do accept the message that “growth is over”: most are hard-core environmentalists or energy experts. But this is a tiny and poorly organized demographic. If public relations consists of the management of information flowing from an organization to the public, then it surely helps to start with an organization wealthy enough to be able to afford to mount a serious public relations campaign.

All animals and plants deal with temporary energy subsidies in basically the same way: the pattern is easy to see in the behavior of songbirds visiting the feeder outside my office window. They eat all the seed I’ve put out for them until the feeder is empty. They don’t save some for later or discuss the possible impacts of their current rate of consumption. Yes, we humans have language and therefore the theoretical ability to comprehend the likely results of our current collective behavior and alter it accordingly. We exercise this ability in small ways, where the costs of behavior change are relatively trivial—enacting safety standards for new automobiles, for example. But where changing our behavior might entail a significant loss of competitive advantage or an end to economic growth, we tend to act like finches.


Some business-friendly folks with political connections soon became alarmed at both the policy implications of—and the likely short-term economic fallout from—the way climate science was developing, and decided to do everything they could to question, denigrate, and deny the climate change hypothesis. Their effort succeeded: Especially in the United States, belief in climate change now aligns fairly closely with political affiliation. Most elected Democrats agree that the issue is real and important, and most of their Republican counterparts are skeptical. Lacking bipartisan support, legislative climate policy has languished. From a policy standpoint, climate change is effectively an energy issue, since reducing carbon emissions will require a nearly complete revamping of our energy systems. Energy is, by definition, humanity’s most basic source of power, and since politics is a contest over power (albeit social power), it should not be surprising that energy is politically contested. A politician’s most basic tools are power and persuasion, and the ability to frame issues. And the tactics of political argument inevitably range well beyond logic and critical thinking. Therefore politicians can and often do make it harder for people to understand energy issues than would be the case if accurate, unbiased information were freely available. So here is the reason for the paradox stated in the first paragraph: As energy issues become more critically important to society’s economic and ecological survival, they become more politically contested; and as a result, they tend to become obscured by a fog of exaggeration, half-truth, omission, and outright prevarication.

Who is right? Well, this should be easy to determine. Just ignore the foaming rhetoric and focus on research findings. But in reality that’s not easy at all, because research is itself often politicized. Studies can be designed from the outset to give results that are friendly to the preconceptions and prejudices of one partisan group or another. For example, there are studies that appear to show that the oil and natural gas production technique known as hydraulic fracturing (or “fracking”) is safe for the environment. With research in hand, industry representatives calmly inform us that there have been no confirmed instances of fracking fluids contaminating water tables. The implication: environmentalists who complain about the dangers of fracking simply don’t know what they’re talking about.


Renewable energy is just as contentious. Mark Jacobson, professor of environmental engineering at Stanford University, has coauthored a series of reports and scientific papers arguing that solar, wind, and hydropower could provide 100% of world energy by 2030. Clearly, Jacobson’s work supports Politician B’s political narrative by showing that the climate problem can be solved with little or no economic sacrifice.

If Jacobson is right, then it is only the fossil fuel companies and their supporters that stand in the way of a solution to our environmental (and economic) problems. The Sierra Club and prominent Hollywood stars have latched onto Jacobson’s work and promote it enthusiastically. However, Jacobson’s publications have provoked thoughtful criticism, some of it from supporters of renewable energy, who argue that his “100 percent renewables by 2030” scenario ignores hidden costs, land use and environmental problems, and grid limits. Jacobson has replied to his critics, well, energetically.

Here’s a corollary to my thesis: Political prejudices tend to blind us to facts that fail to fit any conventional political agendas. All political narratives need a villain and a (potential) happy ending. While Politicians A and B might point to different villains (government bureaucrats and regulators on one hand, oil companies on the other), they both envision the same happy ending: economic growth, though it is to be achieved by contrasting means. If a fact doesn’t fit one of these two narratives, the offended politician tends to ignore it (or attempt to deny it). If it doesn’t fit either narrative, nearly everyone ignores it. Here’s a fact that apparently fails to comfortably fit into either political narrative: The energy and financial returns on fossil fuel extraction are declining—fast.

The top five oil majors (ExxonMobil, BP, Shell, Chevron, Total) have seen their aggregate production fall by more than 25% over the past 12 years—but it’s not for lack of effort. Drilling rates have doubled. Rates of capital investment in exploration and production have likewise doubled. Oil prices have quadrupled. Yet actual global rates of production for regular crude oil have flattened, and all new production has come from expensive unconventional sources such as tar sands, tight oil, and deepwater oil. The fossil fuel industry hates to admit to facts like this that investors find scary—especially now, as the industry needs investors to pony up ever-larger bets to pay for ever-more-extreme production projects.


For the past few years, high oil prices have provided the incentive for small, highly leveraged, and risk-friendly companies to go after some of the last, worst oil and gas production prospects in North America—formations known to geologists as “source rocks,” which require operators to use horizontal drilling and fracking technology to free up trapped hydrocarbons. The ratio of energy returned to energy invested in producing shale gas and tight oil from these formations is minimal. While US oil and gas production rates have temporarily spiked, all signs indicate that this will be a brief boom.

During the 1930s, the US-based National Association of Manufacturers enlisted a team of advertisers, marketers, and psychologists to formulate a strategy to counter government efforts to plan and manage the economy in the wake of the Depression. They proposed a massive, ongoing ad campaign to equate consumerism with “The American Way.” Progress would henceforth be framed entirely in economic terms, as the fruit of manufacturers’ ingenuity. Americans were to be referred to in public discourse (newspapers, magazines, radio) as consumers, and were to be reminded at every opportunity of their duty to contribute to the economy by purchasing factory-made products, as directed by increasingly sophisticated and ubiquitous advertising cues.

Thorstein Veblen asserted in his widely cited book The Theory of the Leisure Class that there exists a fundamental split in society between those who work and those who exploit the work of others; as societies evolve, the latter come to constitute a “leisure class” that engages in “conspicuous consumption.” Veblen saw mass production as a way to universalize the trappings of leisure so the owning class could engage workers in an endless pursuit of status symbols, thus deflecting workers’ attention from society’s increasingly unequal distribution of wealth and their own political impotence.

As critics have insisted all along, consumerism as a system cannot continue indefinitely; it contains the seeds of its own demise. And the natural constraints to consumerism—fossil fuel limits, environmental sink limits (leading to climate change, ocean acidification, and other pollution dilemmas), and debt limits—appear to be well within sight. While there may be short-term ways of pushing back against these limits (unconventional oil and gas, geoengineering, quantitative easing), there is no way around them.


Consumerism is inherently doomed. But since consumerism now effectively is the economy (70% of US GDP comes from consumer spending), when it goes down the economy goes too. A train wreck is foreseeable. No one knows exactly when the impact will occur or precisely how bad it will be. But it is possible to say with some confidence that this wreck will manifest itself as an economic depression accompanied by a series of worsening environmental disasters and possibly wars and revolutions. This should be news to nobody by now, as recent government and UN reports spin out the scenarios in ever grimmer detail: rising sea levels, waves of environmental refugees, droughts, floods, famines, and collapsing economies. Indeed, looking at what’s happened since the start of the global economic crisis in 2007, it’s likely the impact has already commenced—though it is happening in agonizingly slow motion as the system fights to maintain itself.

World conventional crude oil production has been flat-to-declining since about 2005. Declines of output from the world’s supergiant oilfields will steepen in the years ahead. Petroleum is essential to the world economy and there is no ready and sufficient substitute. The potential consequences of peak oil include prolonged economic crisis and resource wars.

Other unconventionals, like extra-heavy oil in Venezuela and kerogen (also known as “oil shale,” and not to be confused with shale oil) in the American West, will be even slower and more expensive to produce.

Why no collapse yet? Governments and central banks have inserted fingers in financial levees. Most notably, the Federal Reserve rushed to keep crisis at bay by purchasing tens of billions of dollars in US Treasury bonds each month, year after year, using money created out of thin air at the moment of purchase.

Virtually all of the Fed’s money has stayed within financial circles; that’s a big reason why the richest Americans have gotten much richer in the past few years, while most regular folks are treading water at best.

What has the too-big-to-fail, too-greedy-not-to financial system done with the Fed’s trillions in free money? Blown another stock market bubble and piled up more leveraged bets. No one knows when the latest bubble will pop, but when it does the ensuing crisis may be much worse than that of 2008. Will central banks then be able to jam more fingers into the leaky levee? Will they have enough fingers?

ExxonMobil is inviting you to take your place in a fossil-fueled 21st century. But I would argue that Exxon’s vision of the future is actually just a forward projection from our collective rearview mirror. Despite its hi-tech gadgetry, the oil industry is a relic of the days of the Beverly Hillbillies. This fossil-fueled sitcom of a world that we all find ourselves trapped within may on the surface appear to be characterized by smiley-faced happy motoring, but at its core it is monstrous and grotesque. It is a zombie energy economy.


Oil and gas are finite resources, so it was clear from the start that, as we extracted and burned them, we were in effect stealing from the future. In the early days, the quantities of these fuels available seemed so enormous that depletion posed only a theoretical limit to consumption. We knew we would eventually empty the tanks of Earth’s hydrocarbon reserves, but that was a problem for our great-great-grandkids to worry about.

In a few years we will look back on late 20th-century America as a time and place of advertising-stoked consumption that was completely out of proportion to what Nature can sustainably provide. I suspect we will think of those times—with a combination of longing and regret—as a lost golden age of abundance, but also an era of foolishness and greed that put the entire world at risk.

Making the best of our new circumstances will mean finding happiness in designing higher-quality products that can be reused, repaired, and recycled almost endlessly and finding fulfillment in human relationships and cultural activities rather than mindless shopping. Fortunately, we know from recent cross-cultural psychological studies that there is little correlation between levels of consumption and levels of happiness. That tells us that life can in fact be better without fossil fuels. So whether we view these as hard times or as times of opportunity is largely up to us.

Nations could, in principle, forestall social collapse by providing the bare essentials of existence (food, water, housing, medical care, family planning, education, employment for those able to work, and public safety) universally and in a way that could be sustained for some time, while paying for this by deliberately shrinking other features of society—starting with military and financial sectors—and by taxing the wealthy. The cost of covering the basics for everyone is still within the means of most nations. Providing human necessities would not remove all the fundamental problems now converging (climate change, resource depletion, and the need for fundamental economic reforms), but it would provide a platform of social stability and equity to give the world time to grapple with deeper, existential challenges. Unfortunately, many governments are averse to this course of action. And if they did provide universal safety nets, ongoing economic contraction might still result in conflict, though in this instance it might arise from groups opposed to the perceived failures of “big government.” Further, even in the best instance, safety nets can only buy time. The capacity of governments to maintain flows of money and goods will erode. Thus it will increasingly be up to households and communities to provide the basics for themselves while reducing their dependence upon, and vulnerability to, centralized systems of financial and governmental power. This will set up a fundamental contradiction. When the government tries to provide people the basics, power is centralized—but as the capacity of the government wanes, it can feel threatened by people trying to provide the basics for themselves and act to discourage or even criminalize them.

Theorists on both the far left and far right of the political spectrum have advocated for the decentralization of food, finance, education, and other basic societal support systems for decades. Some efforts toward decentralization (such as the local food movement) have led to the development of niche markets.

The decentralized provision of basic necessities is not likely to flow from a utopian vision of a perfect or even improved society (as have some social movements of the past). It will emerge instead from iterative human responses to a daunting and worsening set of environmental and economic problems, and it will in many instances be impeded and opposed by politicians, bankers, and industrialists. It is this contest between traditional power elites and growing masses of disenfranchised poor and formerly middle-class people attempting to provide the necessities of life for themselves in the context of a shrinking economy that is shaping up to be the fight of the century.

When Civilizations Decline

In his benchmark 1988 book The Collapse of Complex Societies, archaeologist Joseph Tainter explained the rise and demise of civilizations in terms of complexity. He used the word complexity to refer to “the size of a society, the number and distinctiveness of its parts, the variety of specialized social roles that it incorporates, the number of distinct social personalities present, and the variety of mechanisms for organizing these into a coherent, functioning whole.”


Civilizations are complex societies organized around cities; they obtain their food from agriculture (field crops), use writing and mathematics, and maintain full-time division of labor. They are centralized, with people and resources constantly flowing from the hinterlands toward urban hubs.

Thousands of cultures have flourished throughout the human past, but there have only been about 24 civilizations. And all—except our current global industrial civilization (so far)—have ultimately collapsed.

Tainter describes the growth of civilization as a process of investing societal resources in the development of ever-greater complexity in order to solve problems. For example, in village-based tribal societies an arms race between tribes can erupt, requiring each village to become more centralized and complexly organized in order to fend off attacks. But complexity costs energy. As Tainter puts it, “More complex societies are costlier to maintain than simpler ones and require higher support levels per capita.” Since available energy and resources are limited, a point therefore comes when increasing investments become too costly and yield declining marginal returns. Even the maintenance of existing levels of complexity costs too much (citizens may experience this as onerous levels of taxation), and a general simplification and decentralization of society ensues—a process colloquially referred to as collapse.

During such times societies typically see sharply declining population levels, and the survivors experience severe hardship. Elites lose their grip on power. Domestic revolutions and foreign wars erupt. People flee cities and establish new, smaller communities in the hinterlands. Governments fall and new sets of power relations emerge. It is frightening to think about what collapse would mean for our current global civilization.


Nevertheless, as we are about to see, there are good reasons for concluding that our civilization is reaching the limits of centralization and complexity, that marginal returns on investments in complexity are declining, and that simplification and decentralization are inevitable. Thinking in terms of simplification, contraction, and decentralization is more accurate and helpful, and probably less scary, than contemplating collapse. It also opens avenues for foreseeing, reshaping, and even harnessing inevitable social processes so as to minimize hardship and maximize possible benefits.

Some of the effects of declining energy will be nonlinear and unpredictable, and could lead to a general collapse of civilization. Economic contraction will not be as gradual and orderly as economic expansion has been. Such effects may include an uncontrollable and catastrophic unwinding of the global system of credit, finance, and trade, or the dramatic expansion of warfare as a result of heightened competition for energy resources or the protection of trade privileges.

Further stimulus spending would require another massive round of government borrowing, and that would face strong domestic political headwinds as well as resistance from the financial community (in the form of credit downgrades, which would make further borrowing more expensive).

Without increasing and affordable energy flows a genuine economic recovery (meaning a return to growth in manufacturing and trade) may not be possible.

The evidence for the efficacy of austerity as a path to increased economic health is spotty at best in “normal” economic times. Under current circumstances, there is overwhelming evidence that it leads to declining economic performance as well as social unraveling. In nations where the austerity prescription has been most vigorously applied (Ireland, Greece, Spain, Italy, and Portugal), contraction has continued or even accelerated, and popular protest is on the rise.

Austerity is having similar effects in states, counties, and cities in the United States. State and local governments cut roughly half a million jobs during 2009–10; had they kept hiring at their previous pace to keep up with population growth, they would instead have added a half-million jobs. Meanwhile, due to low tax revenues, local governments are allowing paved roads to turn to gravel, closing libraries and parks, and laying off public employees. It’s not hard to recognize a self-reinforcing feedback loop at work here. A shrinking economy means declining tax revenues, which make it harder for governments to repay debt. In order to avoid a credit downgrade, governments must cut spending. This shrinks the economy further, eventually resulting in credit downgrades anyway. That in turn raises the cost of borrowing. So government must cut spending even further to remain credit-worthy. The need for social spending explodes as unemployment, homelessness, and malnutrition increase, while the availability of social services declines. The only apparent way out of this death spiral is a revival of rapid economic growth. But if the premise above is correct, that is a mere pipedream.
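The death spiral described above can be sketched as a toy iteration. The tax rate, fiscal multiplier, and initial shock below are hypothetical parameters chosen only to exhibit the feedback, not to model any real economy.

```python
# Toy model of the austerity feedback loop: a recession cuts tax
# revenue, the government cuts spending to match, and (via a fiscal
# multiplier) the spending cut shrinks GDP further, which cuts revenue
# again. All parameter values are hypothetical.
def austerity_spiral(gdp=100.0, shock=5.0, tax_rate=0.25,
                     multiplier=1.5, years=6):
    gdp -= shock                          # initial downturn
    spending = tax_rate * (gdp + shock)   # budget set in the pre-crisis year
    path = [gdp]
    for _ in range(years):
        new_spending = tax_rate * gdp                  # balance the budget
        gdp += multiplier * (new_spending - spending)  # cuts shrink GDP
        spending = new_spending
        path.append(gdp)
    return path
```

In this sketch, as long as `tax_rate * multiplier < 1` each round of cuts is smaller than the last and GDP bottoms out; at `>= 1` the spiral never stabilizes. Either way, every year of balanced-budget austerity ratchets output lower, which is the self-reinforcing loop the paragraph describes.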

Centralized provision of the basics. In this scenario, nations directly provide jobs and basic necessities to the general public while deliberately simplifying, downsizing, or eliminating expendable features of society such as the financial sector and the military, and taxing those who can afford it—wealthy individuals, banks, and larger businesses—at higher rates. This is the path outlined at the start of the essay; at this point it is appropriate to add a bit more detail. In many cases, centralized provision of basic necessities is relatively cheap and efficient. For example, since the beginning of the current financial crisis the US government has mainly gone about creating jobs by channeling tax breaks and stimulus spending to the private sector. But this has turned out to be an extremely costly and inefficient way of providing jobs, far more of which could be called into existence (per dollar spent) by direct government hiring. Similarly, the new US federal policy of increasing the public’s access to health care by requiring individuals to purchase private medical insurance is more costly than simply providing a universal government-run health insurance program, as every other industrial nation does. If Britain’s experience during and immediately after World War II is any guide, then better access to higher-quality food could be ensured with a government-run rationing program than through a fully privatized food system. And government banks could arguably provide a more reliable public service than private banks, which funnel enormous streams of unearned income to bankers and investors. If all this sounds like an argument for utopian socialism, read on—it’s not. But there are indeed real benefits to be reaped from government provision of necessities, and it would be foolish to ignore them. A parallel line of reasoning goes like this.


Immediately after natural disasters or huge industrial accidents, the people impacted typically turn to the state for aid. As the global climate chaotically changes, and as the hunt for ever-lower-grade fossil energy sources forces companies to drill deeper and in more sensitive areas, we will undoubtedly see worsening weather crises, environmental degradation and pollution, and industrial accidents such as oil spills. Inevitably, more and more families and communities will be relying upon state-provided aid for disaster relief. Many people would be tempted to view an expansion of state support services with alarm as the ballooning of the powers of an already bloated central government. There may well be substance to this fear, depending on how the strategy is pursued. But it is important to remember that the economy as a whole, in this scenario, would be contracting—and would continue to contract—due to resource limits.

In any case, it’s hard to say how long this strategy could be maintained in the face of declining energy supplies. Eventually, central authorities’ ability to operate and repair the infrastructure necessary to continue supporting these services will erode.

If central governments seek to maintain complexity at the expense of more dispersed governmental nodes (city, county, and state governments), then conflict between communities and sputtering national or global power hubs is likely. Communities may begin to withdraw streams of support from central authorities—and not only governmental authorities, but financial and corporate ones as well.

If communities must contend with declining tax revenues, competition from larger governments, and predatory mega-corporations and banks, then nonprofit organizations—which support tens of thousands of local charity efforts—face perhaps even greater challenges. The current philanthropic model rests entirely upon assumed economic growth: foundation grants come from returns on the foundation’s investments (in the stock market and elsewhere). As economic growth slows and reverses, the world of nonprofit organizations will shake and crumble, and the casualties will include tens of thousands of social services agencies, educational programs, and environmental protection organizations . . . as well as countless symphony orchestras, dance ensembles, museums, and on and on. If national government loses its grip, if local governments are pinched simultaneously from above and below, and if nonprofit organizations are starved for funding, from where will come the means to support local communities with the social and cultural services they need?

Local movements to support localization—however benign their motives—may be perceived by national authorities as a threat.


Scenarios are not forecasts; they are planning tools. As prophecies, they’re not much more reliable than dreams. What really happens in the years ahead will be shaped as much by “black swan” events as by trends in resource depletion or credit markets. We know that environmental impacts from climate change will intensify, but we don’t know exactly where, when, or how severely those impacts will manifest; meanwhile, there is always the possibility of a massive environmental disaster not caused by human activity (such as an earthquake or volcanic eruption) occurring in such a location or on such a scale as to substantially alter the course of world events. Wars are also impossible to predict in terms of intensity and outcome, yet we know that geopolitical tensions are building.

The success of governments in navigating the transitions ahead may depend on measurable qualities and characteristics of governance itself. In this regard, there could be useful clues to be gleaned from the World Governance Index, which assesses governments according to criteria of peace and security, rule of law, human rights and participation, sustainable development, and human development. For 2011, the United States ranked number 32 (and falling: it was number 28 in 2008)—behind Uruguay, Estonia, and Portugal but ahead of China (number 140) and Russia (number 148).

One wonders how many big-government centralists of the left, right, or center—who often see the stability of the state, the status of their own careers, and the ultimate good of the people as being virtually identical—are likely to embrace such a prescription.

History teaches us at least as much as scenario exercises can. The convergence of debt bubbles, economic contraction, and extreme inequality is hardly unique to our historical moment. A particularly instructive and fateful previous instance occurred in France in the late 18th century. The result then was the French Revolution, which rid the common people of the burden of supporting an arrogant, entrenched aristocracy, while giving birth to ideals of liberty, equality, and universal brotherhood. However, the revolution also brought with it war, despotism, mass executions—and an utter failure to address underlying economic problems. So often, as happened then, nations suffering under economic contraction double down on militarism rather than downsizing their armies so as to free up resources. They go to war, hoping thereby both to win spoils and to give mobs of angry young men a target for their frustrations other than their own government. The gambit seldom succeeds; Napoleon made it work for a while, but not long. France and (most of) its people did survive the tumult. But then, at the dawn of the 19th century, Europe was on the cusp of another revolution—the fossil-fueled Industrial Revolution—and decades of economic growth shimmered on the horizon. Today we are just starting our long slide down the decline side of the fossil fuel supply curve.

The world supply of uranium is limited, and shortages are likely by mid-century even with no major expansion of nuclear power plants. Atomic power is also tied to nuclear weapons proliferation.

None of this daunts Techno-Anthropocene proponents, who say new nuclear technology has the potential to fulfill the promises originally made for the current fleet of atomic power plants. The centerpiece of this new technology is the integral fast reactor (IFR). Unlike light water reactors (which comprise the vast majority of nuclear power plants in service today), IFRs would use sodium as a coolant. The IFR nuclear reaction features fast neutrons, and it more thoroughly consumes radioactive fuel, leaving less waste. Indeed, IFRs could use current radioactive waste as fuel. Also, they are alleged to offer greater operational safety and less risk of weapons proliferation.

Fast-reactor technology is highly problematic. Earlier versions of the fast breeder reactor (of which the IFR is a version) were commercial failures and safety disasters. Proponents of the integral fast reactor, say the critics, overlook its exorbitant development and deployment costs and its continued proliferation risks. The IFR theoretically only “transmutes,” rather than eliminates, radioactive waste. Moreover, the technology is decades away from widespread implementation, and its use of liquid sodium as a coolant can lead to fires and explosions.

David Biello, writing in Scientific American, concludes that, “To date, fast neutron reactors have consumed six decades and $100 billion of global effort but remain ‘wishful thinking.’”

But we don’t have the luxury of limitless investment capital, and we don’t have decades in which to work out the bugs and build out this complex, unproven technology.

Degrading topsoil in order to produce enough grain to feed ten billion people? Just build millions of hydroponic greenhouses, though these require enormous amounts of energy to construct and operate. Likewise, as we mine deeper deposits of metals and minerals and refine lower-grade ores, we will require ever more energy.

Governments are probably incapable of leading a strategic retreat in our war on nature, as they are systemically hooked on economic growth. But there may be another path forward. Perhaps citizens and communities can initiate a change of direction.

Wes Jackson of the Land Institute in Salina, Kansas, has spent the past four decades breeding perennial grain crops (he points out that our current annual grains are responsible for the vast bulk of soil erosion, to the tune of 25 billion tons per year).

Population Media Center is working to ensure we don’t get to ten billion humans by enlisting creative artists in countries with high population growth rates (which are usually also among the world’s poorest nations) to produce radio and television soap operas featuring strong female characters who successfully confront issues related to family planning. This strategy has been shown to be the most cost-effective and humane means of reducing high birth rates in these nations.

It’s hard to convince people to voluntarily reduce consumption and curb reproduction. That’s not because humans are unusually pushy, greedy creatures; all living organisms tend to maximize their population size and rate of collective energy use. Inject a colony of bacteria into a suitable growth medium in a petri dish and watch what happens. Hummingbirds, mice, leopards, oarfish, redwood trees, or giraffes: in each instance the principle remains inviolate—every species maximizes population and energy consumption within nature’s limits. Systems ecologist Howard T. Odum called this rule the Maximum Power Principle: throughout nature, “system designs develop and prevail that maximize power intake, energy transformation, and those uses that reinforce production and efficiency.”
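Odum’s principle can be illustrated with the textbook logistic growth model: a population expands as fast as conditions allow until it presses against the carrying capacity of its environment. Here is a minimal sketch in Python of the petri-dish dynamic described above (the parameter values are illustrative only, not drawn from Odum’s work):

```python
# Discrete logistic growth: a toy model of the petri-dish dynamic.
# The colony grows near-exponentially at first, then flattens out
# as it approaches the carrying capacity K ("nature's limits").

def logistic_step(n, r=0.5, K=1_000_000):
    """One time step of discrete logistic growth."""
    return n + r * n * (1 - n / K)

def simulate(n0=100, steps=60):
    """Run the model from an initial population n0 for a number of steps."""
    n = n0
    history = [n]
    for _ in range(steps):
        n = logistic_step(n)
        history.append(n)
    return history

pops = simulate()
print(f"start: {pops[0]:.0f}, midway: {pops[30]:.0f}, final: {pops[-1]:.0f}")
```

The population rises as fast as the growth rate allows, then stalls at the carrying capacity; no restraint is involved, only the limit itself.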

In many countries, including the US, government efforts to forestall or head off uprisings appear to be taking the forms of criminalization of dissent, the militarization of police, and a massive expansion of surveillance using an array of new electronic spy technologies. At the same time, intelligence agencies are now able to employ up-to-date sociological and psychological research to infiltrate, co-opt, misdirect, and manipulate popular movements aimed at achieving economic redistribution. However, these military, police, public relations, and intelligence efforts require massive funding as well as functioning grid, fuel, and transport infrastructures. Further, their effectiveness is limited if and when the nation’s level of economic pain becomes too intense, widespread, or prolonged.

A second source of conflict consists of increasing competition over access to depleting resources, including oil, water, and minerals. Among the wealthiest nations, oil is likely to be the object of the most intensive struggle, since oil is essential for nearly all transport and trade. The race for oil began in the early 20th century and has shaped the politics and geopolitics of the Middle East and Central Asia; now that race is expanding to include the Arctic and deep oceans, such as the South China Sea. Resource conflicts occur not just between nations but also within societies: witness the ongoing insurgencies in the Niger Delta, where oil revenue fuels rampant political corruption while drilling leads to environmental ravages felt primarily by the Ogoni ethnic group; see also the political infighting in fracking country here in the United States, where ecological impacts put ever-greater strains on the social fabric.

Lastly, climate change, water scarcity, high oil prices, vanishing credit, and the leveling off of per-hectare productivity and the amount of arable land are all combining to create the conditions for a historic food crisis, which will impact the poor first and hardest. High food prices breed social instability—whether in 18th-century France or 21st-century Egypt. As today’s high prices rise further, social instability could spread, leading to demonstrations, riots, insurgencies, and revolutions.

In the current context, a continuing source of concern must be the large number of nuclear weapons now scattered among nine nations. While these weapons primarily exist as a deterrent to military aggression, and while the end of the Cold War has arguably reduced the likelihood of a massive release of them in an apocalyptic fury, it is still possible to imagine several scenarios in which a nuclear detonation could occur as a result of accident, aggression, preemption, or retaliation. We are in a race—but it’s not just an arms race; indeed, it may end up being an arms race in reverse.

We can only hope that historical momentum can maintain the Great Peace until industrial nations are sufficiently bankrupt that they cannot afford to mount foreign wars on any substantial scale.

In his recent and important book Carbon Democracy: Political Power in the Age of Oil, Timothy Mitchell argues that modern democracy owes a lot to coal. Not only did coal fuel the railroads, which knitted large regions together, but striking coal miners were able to bring nations to a standstill, so their demands for unions, pensions, and better working conditions played a significant role in the creation of the modern welfare state. It was no mere whim that led Margaret Thatcher to crush the coal industry in Britain; she saw its demise as the indispensable precondition to neoliberalism’s triumph.

Coal was replaced, as a primary energy source, by oil. Mitchell suggests that oil offered industrial countries a path to reducing internal political pressures. Its production relied less on working-class miners and more upon university-trained geologists and engineers. Also, oil is traded globally, so that its production is influenced more by geopolitics and less by local labor strikes. “Politicians saw the control of oil overseas as a means of weakening democratic forces at home,” according to Mitchell, and so it is no accident that by the late 20th century the welfare state was in retreat and oil wars in the Middle East had become almost routine. The problem of “excess democracy,” which reliance upon coal inevitably brought with it, has been successfully resolved, not surprisingly, by still more teams of university-trained experts—economists, public relations professionals, war planners, political consultants, marketers, and pollsters. We have organized our political life around a new organism—“the economy”—which is expected to grow in perpetuity, or, more practically, as long as the supply of oil continues to increase.

Andrew Nikiforuk also explores the suppression of democratic urges under an energy regime dominated by oil in his brilliant book The Energy of Slaves: Oil and the New Servitude. The energy in oil effectively replaces human labor; as a result, each North American enjoys the services of roughly 150 “energy slaves.” But, according to Nikiforuk, that means that burning oil makes us slave masters—and slave masters all tend to mimic the same attitudes and behaviors, including contempt, arrogance, and impunity.

As power addicts, we become both less sociable and easier to manipulate. In the early 21st century, carbon democracy is still ebbing, but so is the global oil regime hatched in the late 20th century. Domestic US oil production based on hydraulic fracturing (“fracking”) reduces the relative dominance of the Middle East petro-states, but to the advantage of Wall Street—which supplies the creative financing for speculative and marginally profitable domestic drilling. America’s oil wars have largely failed to establish and maintain the kind of order in the Middle East and Central Asia that was sought. High oil prices send dollars cascading toward energy producers but starve the economy as a whole, and this eventually reduces petroleum demand.

Governance systems appear to be incapable of solving or even seriously addressing looming financial, environmental, and resource issues, and “democracy” persists mainly in a highly diluted solution whose primary constituents are money, hype, and expert-driven opinion management. In short, the 20th-century governance system is itself fracturing. So what comes next?

Posted in By People, Energy, Peak Oil, Richard Heinberg | 10 Comments