After the harvest – protecting food from rats, mold, insects, fire, and bacteria

[ Current post-harvest storage is totally out of scale with what future generations will need. Today we rely mainly on gargantuan grain elevators, some so huge they can store enough grain to feed much of the U.S. for weeks at a time.  These silos are highly energy-intensive and far apart, requiring legions of trucks to haul grain long distances first for storage and again for distribution, burning massive amounts of finite diesel fuel.  In the future there need to be thousands of much smaller, widely distributed storage silos that don’t require fossil fuels.  For example, before oil, grain elevators could be found about 7 miles apart, the distance a horse could travel in one day hauling a heavy load of grain.

Without a massive redistribution of people back to the land, even that won’t be enough, since 80% of the food will be stored where just 20% of the population lives — 80% of Americans live within 200 miles of the coasts.  

At some point in energy decline, there won’t be enough oil to distribute crops by rail, truck, or barge, and 80% of communities are completely dependent on trucks, with no rail or water ports.  Yet as climate change kicks in and successful harvests grow rarer and produce less food, even more storage will be needed, at a much smaller scale, across the nation.  It would help future generations if we built new storage silos now.

Alice Friedemann, author of “When Trucks Stop Running: Energy and the Future of Transportation” (Springer, 2015) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report ]

Peter Golob et al. 2002. Crop Post-Harvest: Science and Technology. Volume 1: Principles and Practice. Volume 2: Durables. Volume 3: Perishables. Blackwell Science.


This is a book review of Golob’s book.  After reading it, I found it amazing that farmers can grow anything at all, since crops can be destroyed by drought, wildfire, flood, insects, birds, snails, rodents, fungi, bacteria, viruses, hail, frost, lack of vital nutrients, too much pesticide, and so on.

But that’s only half the story, the half most familiar to everyone.  You’d think that once the harvest is over, it’s time to relax and celebrate.  Yet the crop is still not safe from harm: wherever it is stored, it’s still susceptible to all of the above, plus spoilage and silo explosions. Civilization exists because our ancestors figured out how to store grain for several years to make it through bad harvests.

Before fossil fuels initiated the Industrial Revolution, 90% of the population was rural, unlike now, where over 80% of us in the United States are urban.  Most people spent a good deal of time preserving perishable food like meat, vegetables, and fruit by drying them out, or with preservatives such as salt and alcohol.  Canning didn’t begin until the early 1800s when Napoleon began using canned food to feed his troops.

Most people in the world got, and still get, the majority of their calories and nutrition from long-lasting food, mainly grains and beans.

Brian Fagan, in The Little Ice Age 1300-1850, described how hard it was to store a harvest to last beyond one bad harvest and for the next planting, even if barns were stuffed to the eaves and local lords and religious foundations also stored crops. It was usually impossible to keep mice and rats away or prevent spoilage. During this period of climate change, crops failed often from blazing hot summers, excessive cold, or torrential rain. Two or more bad years in a row happened every ten years.

In the 20th century, post-harvest food technology was developed and enormous granaries were built that can store grain for many years. These modern granaries keep rodents and other pests out. Durables are fumigated or sprayed with pesticides to kill insects at all stages of their life cycle. Grain elevators keep durables cool and dry, vastly extending their storage life.

Post-harvest technology preserves food after harvest and before delivery. Although transportation isn’t part of the discussion, it’s important to mention that the main reason famines stopped was the invention of the railroad. Areas with good crops could send their surplus to regions where crops had failed.

The length of time and amount of durables that can be stored with fossil-fuel built and controlled food storage technology is amazing. This technology has also made food safer to eat. Fossil fuels allow produce just harvested from the field to be cooled immediately, and kept cool throughout the supply chain, which makes it possible for us to enjoy fresh food year round — often produce that’s come thousands of miles before reaching our plates.

Golob et al.’s Crop Post-Harvest volumes 1–3 are hefty textbooks that provide an in-depth look at the continuing war to get perishables to market and to preserve food. Both the old methods still used in developing countries and the amazing energy-intensive modern technology we’ve developed are explained in great detail.

Humans are now using nearly all of the arable, ranch, and forested land on the planet, so preserving as much harvested food for as long as possible is our main hope of increasing food supplies in the future.

Why does good food go bad?  How durables like grains and beans are destroyed.

High temperatures and moisture are the enemies of harvested crops.  In places with both, grain can spoil within months.

Temperature affects how quickly insects, mites, fungi, and mycotoxins develop and how quickly germination quality is lost.  The biological activity of insects, mites, fungi, and the grain itself doubles for every 18 degree Fahrenheit rise in temperature.  At low temperatures, insect breeding stops.
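That doubling rule is easy to put into numbers. A minimal sketch (the function name and the 60 °F baseline are my own illustrative assumptions, not from the book):

```python
def activity_multiplier(temp_f, baseline_f=60.0):
    """Relative biological activity of insects, mites, fungi, and the grain
    itself, assuming activity doubles for every 18 F rise in temperature."""
    return 2 ** ((temp_f - baseline_f) / 18.0)

# Grain held at 96 F is four times as biologically active as grain at 60 F,
# so it deteriorates roughly four times as fast.
print(activity_multiplier(96))  # 4.0
```

Keeping grain cool pays off the same way in reverse: each 18 °F of cooling roughly halves the rate of deterioration.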

It’s hard for insects and microorganisms to survive without water, so low moisture is critical as well.  This is why it’s so hard to preserve fresh food for long: fresh food has a very high water content.  On average, apples are 84% water, turnips 92%, pork 56%, beef 58%, and fish 81%.

Damage and cleanliness

Produce that isn’t stored sterilely is bound to be degraded by some biological agent.  Damaged produce provides a point of entry for secondary pests and saprobic fungi.  Attack usually begins with one or a few species followed by the invasion of a broad range of non-specific microorganisms and secondary insect pests.  Primary pests can also lead to quality losses since some insects feed on the germ region of seed, leading to a loss of nutrition or viability if planted.

Infection after harvest often occurs at the site of wounds from insect feeding or mechanical injury during the harvest.  The main insect pests in stored food are Coleoptera (beetles) and Lepidoptera (moths), as well as Diptera, Psocoptera, and Dictyoptera.  There are also some bacterial infections of stored foods that can be serious, even poisonous, especially for the old, the young, or the sick.


There are over 200 species of rodents that damage crops while they’re growing, but rodents haven’t coevolved with grain storage long enough yet; only 40 species of rodents prey on food stores.  Rodents can eat 10% of their body weight every day.  They reproduce quickly, so if even two of the opposite sex get in, it won’t be long before exponential growth begins.  Rats live about a year, have a gestation of about three weeks with litters of four to eight, and reach adulthood in two to three months.  You’ve got to go for 100% rat mortality or they’ll quickly come back.  Rodents do even more damage by contaminating food with urine and feces than by what they eat.
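To see how fast that exponential growth gets out of hand, here is a deliberately crude generational model. All parameters are illustrative assumptions on my part (one litter per 13-week cycle, six pups per litter, half of them female), not figures from the book:

```python
def rat_colony(weeks, litter=6, cycle_weeks=13):
    """Crude size of a colony founded by one breeding pair, assuming each
    mature female rears one litter per 13-week cycle (roughly gestation
    plus the time for pups to reach breeding age)."""
    females, males = 1, 1
    for _ in range(weeks // cycle_weeks):
        pups = females * litter
        females += pups // 2        # half of each litter is female
        males += pups - pups // 2
    return females + males

print(rat_colony(52))  # 512 -- one breeding pair becomes ~500 rats in a year
```

Even with generous simplifications (no deaths, no overlapping litters), two rats become hundreds within a year, which is why anything short of 100% mortality fails.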

Rodents can cause extensive damage to storage structures.  They are almost impossible to keep out: they can climb smooth surfaces and walk along wires, ropes, and electric cables.  They’re also good at digging and tunneling, and can gnaw through anything less than 5.5 on the Mohs hardness scale (lead, aluminum, tin, etc.), so structures need to avoid edges rodents can get purchase on to gnaw.  Some species of rats can jump five feet high, squeeze through cracks a fifth of an inch wide, and swim long distances.

Birds and Insects

Birds not only eat grain directly from bins, but they’ll peck bags open.  Twenty pigeons eat as much as a human does.  Birds also contaminate food and spread salmonella and other zoonotic diseases.

Insects not only eat grain, they degrade its quality: they can ruin the flavor and impair the grain’s dough-making properties.  In the USA, some areas are more prone to insect damage than others.  The highest-risk areas are the southernmost states; the lowest-risk are North and South Dakota, Montana, Minnesota, Iowa, Wisconsin, Michigan, Oregon, Washington, and Idaho.

In developing countries, termites can devour wood storage structures.

Fungi, mold, and microorganisms

Fungi flourish when moisture is over 22%.  They can cause blemishes, blights, and discoloration, and even wreak revenge in the next generation, when fungi-damaged seed produces a diseased plant or reduced germination rates.

Molds can produce toxic mycotoxins, such as aflatoxins, making food unsafe to eat and of poor quality.

If rodents, birds, insects, mites, fungi, and mold don’t harm the stored grain, then bacteria, viruses, yeasts, nematodes, anthracnose, blight, blotch, brown rot, canker, scab, dry rot, hyperplasia, hypertrophy, leaf spot, mildew, mould, mosaic virus, rust, smut, vascular disease, wet rot, soft rot, and toxins are still a peril.

And more…

In addition to all the pests and diseases, grain can suffer mechanical damage at harvest, threshing, or any point thereafter, whether from rough hauling to market or careless handling at the market.

Grain can be damaged if drying is done incorrectly, or through temperature extremes at any point.  Moisture over 10-14% will lead to deterioration from fungi and biological degradation.

If grain is harvested too early, it will be green and therefore have high moisture content, causing it to rapidly deteriorate in storage.  If harvesting is too late, the mature grain may be attacked by insects and microorganisms, or cracked from repeated rain and dry weather, making it easier for microorganisms to attack in storage.

Fresh produce

However hard it is to store durables like grain and beans, it’s much easier than storing fruits or vegetables, which must be delivered to the consumer quickly, often within days.  The new, high-yield varieties of produce have higher nutrition, but also a greater likelihood of spoilage in storage.  Lack of plant nutrients in the soil affects both the quality at harvest and the ability of the produce to store.  Nitrogen may be good for growth, but it can cause storage problems in some produce.

For example, ideally lettuce is picked when the temperature is less than 60 degrees Fahrenheit and cooled within two hours.  If kept cool, it won’t spoil for nine and a half days.  But if lettuce is picked when it’s over 75 degrees Fahrenheit and isn’t cooled down until ten hours later, spoilage will begin in two and a half days.

Produce is pre-cooled by evaporative cooling, positive ventilation with ice banks, ice cooling, forced air cooling, hydro-cooling, and vacuum cooling.

Both durables (grains, legumes) and perishables are sprayed with chemicals to keep biota from attacking.

Fumigants can be essential to killing insects as well. Since Methyl Bromide causes ozone depletion, there’s a race on to invent a new fumigant, but this isn’t easy because there are so many essential properties. Fumigants must be a gas at room temperature, good at diffusing, kill all stages of pests, not be greatly heavier than air, and not leave harmful chemical residues. So other, costlier, methods of controlling insects are being tried, such as airtight storage, vacuums, and carbon dioxide atmospheres.

This is a very small subset of what’s covered in these three textbooks, which go in depth into the details of plant physiology, how to measure important storage parameters, detect pests, a long list of specific pests and the damage they do, how to build storage structures, manage pests, preserve food, the chemical structure of plants and oils, milling grains, trade and international agreements, applied research and dissemination, food systems, how food is preserved in developing countries, and much, much more.

Conclusion – Energy descent implications

If you’ve ever driven through the Midwest, you’ve seen enormous grain elevators from miles away.

These are built to protect against theft, rodents, birds, and insects.  They’re designed to keep the durables stored within as dry and cool as possible, by preventing cold humid air from getting into the grain at night and keeping the roof from getting so hot that condensation forms.

Climate change will make harvests far less assured in the future, with more years between successful harvests, as Brian Fagan describes in “The Little Ice Age”.  Research into how to store food after harvesting for long periods is essential to prepare for the double whammy of extreme weather and declining energy.

Long-distance fresh produce will be the first to vanish from grocery store shelves as energy declines, but as Marion Nestle points out in “What to Eat”, the longer it takes food to reach market, the more nutrition is lost.  Locally grown produce is far healthier.

One solution is to fund more research into low-energy, potentially manual, post-harvest storage of durable crops.  Currently, modern storage technology is very energy intensive, and favors large farms over small farms because:

  • Small farms are expensive to include in horizontal and vertical supply chains
  • Small farms can’t meet the stringent quantity and quality demands met by large firms supplying food to markets
  • Fruits and vegetables are hard for small and medium farms to handle: they require special packing and refrigeration equipment to cool the produce and transport it, and only large growers can afford the computer-controlled deep irrigation systems, intensive fertilizers and pesticides, and sophisticated packing plants needed to keep produce cool throughout the entire supply chain.
  • Small and medium farms don’t have the money to keep up with the latest research on hygiene, health, aesthetics, development, and marketing
  • The cost to build and operate high-tech storage structures is huge

Because agriculture, infrastructure, and western civilization are so dependent on fossil fuels, many writers have concluded the best way to lessen suffering as energy declines, and to make the transition as orderly and peaceful as possible, is for millions of families to go back to the land.  Clearly most families would prefer to be independent small farmers on their own land rather than poorly paid seasonal workers.

I hope, but doubt, there is funding for engineers and scientists to figure out the best ways to adapt existing infrastructure at each step downward on the energy curve.  In the case of post-harvest technology, one puzzle that needs to be solved is how to continue using the enormous durable storage facilities we’ve built.  If, long-term, it’s impossible to load half-mile-long, 120-foot-high grain elevators without fossil-fuel-driven energy, then let’s start building smaller grain elevators and other post-harvest storage technology now, while the energy to do so still exists.


Tilting at Windmills, Spain’s disastrous attempt to replace fossil fuels with Solar PV, Part 2

[ In Charles Hall’s latest book, “Energy Return on Investment: A Unifying Principle for Biology, Economics, and Sustainability” (2017), he says that he has embarked on a project to discover why solar advocates’ EROI results are so much higher than what was found in Prieto & Hall’s “Spain’s Photovoltaic Revolution”.

Since fossil fuels are finite, the electric grid must be 100% renewable someday. The goal of EROI studies is to see which renewables are the most worth investing in long-term.  Someday renewables will have to function without any help from oil, coal, and natural gas, so EROI boundaries must be wide enough to include all of the other essential infrastructure that makes solar and wind possible, especially energy storage, the transmission system, and other renewables that can provide both millisecond balancing power and 6 to 12 weeks of energy storage, depending on the size of the grid and the amount of renewable power in a region.

The biggest difference Hall has found so far is due to solar advocates multiplying solar electricity generation by a factor of 2.6 (BP) or 3 (IEA), on the argument that solar power is worth about 3 times as much as fossil electricity since two-thirds of fossil generation is lost as heat.

According to Gail Tverberg, in her post “The Wind and Solar Will Save Us Delusion“, this is done by BP to account for the loss of energy when fossil fuels or biomass are burned and transformed into electricity. BP corrects for this by showing the amount of fuel that would need to be burned to produce this amount of electricity, assuming a conversion efficiency of 38%. Thus, the energy amounts shown by BP for nuclear, hydro, wind and solar don’t represent the amount of heat that they could make, if used to heat apartments or to cook food. Instead, they reflect an amount 2.6 times as much (= 1/0.38), which is the amount of fossil fuels that would need to be burned in order to produce this electricity.
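The 2.6 factor is just the reciprocal of BP’s assumed 38% thermal-plant efficiency. A quick check of the arithmetic:

```python
# BP's "fossil fuel equivalent" convention: 1 kWh of non-thermal electricity
# (nuclear, hydro, wind, solar) is credited as the fuel a 38%-efficient
# thermal power plant would have to burn to generate it.
efficiency = 0.38
multiplier = 1 / efficiency
print(round(multiplier, 2))  # 2.63

# So 100 kWh of solar output is booked as ~263 kWh of primary energy.
print(round(100 * multiplier))  # 263
```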

But wait! Fossil fuels used for heat are several times more effective than heat generated with electricity, and burning natural gas at home in a 98% efficient furnace is far cheaper than burning it at a natural gas power plant and losing two-thirds of it to create electricity.

EROI simply must include all of the energy inputs required to build a solar plant, as in Prieto & Hall’s study at a minimum, so we can see whether it is worth subsidizing solar (or wind) in the first place.  If solar and wind can’t replace fossil fuels because they depend on them too much, then the money/energy would be better spent building passive solar homes that would last for hundreds of years, preparing to go back to muscle power, expanding organic agriculture and agricultural departments at high schools and universities, and so on.

We have very limited time and energy left to cope with the energy crisis.  Energy transitions take at least 50 years. The 2005 DOE report by Hirsch said that you’d want to prepare at least 20 years ahead of peak oil, with time the most limiting factor; and here it is, 12 years after conventional oil peaked, with an energy cliff rather than a bell curve looming.

If anything, the energy credited to solar and wind electricity should probably be reduced considerably, because they produce very low-quality energy: they can’t be counted on. Here are some reasons to reduce their EROI (see “When Trucks Stop Running” for citations and more details):

  1. All wind and solar do is add more wood to the fire. They do nothing to get rid of fossil fuels because they can’t be counted on. The 2012 IEA world energy outlook calculated that 450 GW of installed wind capacity in 2035 would produce only 112 GW of power on average, since the wind isn’t always blowing. But that’s no good for the grid: when it needs power, it needs power RIGHT NOW. The IEA put wind’s capacity credit at just 5%, meaning only 22.5 GW could be counted on at peak demand times. That means an additional 89.5 GW (112 − 22.5 GW) of reliable fossil, nuclear, or biomass power is needed to back up wind power. So the more you replace conventional power plants with wind, the more you depend on wind. And the more you depend on the wind, the less you can depend on it. Great Britain’s office of science and technology estimated wind’s capacity credit would fall to only 7–9% if the overall penetration of wind power ever reached 50%. So if 25 GW of wind capacity were built to replace 25 GW of fossil and nuclear plants that have double the lifespan of wind turbines, and the capacity credit of wind at peak demand was 5 GW, then an additional 20 GW of fossil and nuclear plants would be needed for backup, nearly doubling the generating capacity built (45 GW in all). In regions where peak demand occurs in the winter, the capacity credit of solar power is zero, because peak demand occurs after dark. And thus the stark reality: “Investment in renewable generation capacity will largely be in addition to, rather than a replacement for power stations” (GBHL 2007). Worse yet, backup fossil and nuclear power plants must be online, ready to step in immediately if wind or solar power falter, burning fuel all the while.
  2. Alternatively, you could build energy storage to hold excess wind and solar generation. Since wind turbines produce on average only about a third of their rated capacity, you’d want to build 3 times more wind turbines to keep the energy storage (batteries, pumped hydro, compressed air) charged up.
  3. You’d also need a hugely expanded national grid, since most of the wind is in the Midwest, most of the solar is in the Southwest, and most of the hydropower is along the West Coast.
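The IEA arithmetic in point 1 can be reproduced directly. The function below is just bookkeeping on the figures quoted in the text (450 GW installed, 112 GW average output, 5% capacity credit):

```python
def backup_needed_gw(installed_gw, average_output_gw, capacity_credit):
    """Reliable (fossil/nuclear/biomass) capacity needed to cover wind's
    average contribution at peak demand, when only `capacity_credit` of
    installed wind capacity can be counted on."""
    firm_at_peak = installed_gw * capacity_credit
    return average_output_gw - firm_at_peak

# IEA 2012 example: 450 GW of wind averages 112 GW, but only 22.5 GW is
# dependable at peak, leaving 89.5 GW to be covered by other plants.
print(backup_needed_gw(450, 112, 0.05))  # 89.5
```

The same bookkeeping applied to the British example (25 GW installed, 5 GW capacity credit) yields the 20 GW of backup cited above.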

Instead of tripling the energy credited to intermittent electricity, the EROI cost to build, maintain, and operate energy storage facilities, backup fossil and nuclear power plants, and an expanded national grid ought to be subtracted from the energy generated by intermittent renewables, since wind and solar aren’t always available.  Energy storage, a national grid, and fossil-fuel electricity generation plants are not separate entities that can be ignored in EROI studies, because solar and wind, and the electric grid itself, can’t exist without them.

Alice Friedemann, author of “When Trucks Stop Running: Energy and the Future of Transportation” (Springer, 2015) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report ]

Part 2: Critiques and rebuttals of “Spain’s Photovoltaic Revolution: The Energy Return on Investment”, by Pedro Prieto and Charles A.S. Hall

Part 1 is an introduction and overview, followed by a book review of “Spain’s PV revolution” 

Below are 5 rebuttals of criticism of Prieto & Hall’s book:

  1. 2017.    Hall, Charles A.S. Energy Return on Investment: A Unifying Principle for Biology, Economics, and Sustainability. Springer.
  2. 2016-5-26. The real EROI of photovoltaic systems: Professor Hall weighs in. Ugo Bardi’s blog: Cassandra’s Legacy.
  3. 2015-4-1: Stanford Net Energy conference
  4. 2015-4-11 Pedro Prieto responds to criticism (private communication)
  5. 2015-4-11 Ted Trainer responds to criticism of Prieto & Hall

This book was only available electronically at the University of California, not in print form, and not available to the public. It’s a shame libraries are putting more and more journals and books into electronic versions only, especially this book.  Microchips, motherboards, and computers will be among the first casualties of declining fossil fuels, because they have the most complex supply chains, with many single points of failure, dependence on rare metals, and so on (see Peak Resources and the Preservation of Knowledge for details). Nor is it guaranteed that the electric system can be 100% renewable, as it must be some day, or that transportation, especially trucks, can be electrified (see my 2015 Springer book “When Trucks Stop Running: Energy and the Future of Transportation”).

I encourage you to get your (university) library to buy a hard copy of this book, so that future scientists, historians, and the public, who only have access to hard-copy books even though our taxes pay for the University, will understand why our society didn’t replace fossil fuels with “renewables” even though we knew oil couldn’t last forever.

On an energy forum in March 2014, Prieto said: “Since we wrote the book, I have been able to experience a few more incidental factors: mice delightfully gnawing the cables and covers and optical fiber communication color cables, and storks excreting on modules with about 6 inches size -one cell- per excretion. Real life has many factors that they are not accounted in organized studies in labs, universities with particular technologies and plants in perfect irradiation places.”


Hall, Charles A.S. 2017. Energy Return on Investment: A Unifying Principle for Biology, Economics, and Sustainability. Springer.

[ Mostly verbatim, sometimes cut or paraphrased. See the book for cited references, tables, and graphs]

In the first decades of the 21st century a number of studies gave EROIs of 6–10:1 for photovoltaic (PV) systems, though rather than EROI these studies more often reported energy payback times of just one or two years (e.g. Fthenakis et al. 2011; Raugei et al. 2012). These numbers were used by solar advocates to argue for the importance and economic viability of solar PV systems, and in some cases to argue that solar PV systems were comparable to fossil-fueled systems.

But in 2013 Prieto and Hall came out with a much lower EROI estimate of 2.45:1, for sunny Spain with sophisticated engineers, which caused a great stir amongst solar advocates and was initially greeted with disbelief by many in the industry.

This book provides the most comprehensive assessment of all of the energy costs of solar PV.  It differs from many earlier analyses in that

  1. It attempts to include (nearly) ALL energy costs actually used, not just the costs of the modules and some related hardware.
  2. It uses measured rather than estimated energy output.
  3. It uses actual data from Spain, which has a much higher insolation than Switzerland, Germany, or the Netherlands.
  4. Of particular importance, Prieto and Hall attempted to calculate the complete energy used to support the PV system by “following the money”, i.e. by attempting to assess all the money flows necessary for the system to operate (understood by Prieto because of his extensive on-site experience as Project Director, Project Designer, Consultant and Director of Development of a solar PV company).

They assigned an energy cost to each monetary cost using specific energy intensities: the mean energy use for the Spanish economy (7.16 MJ per Euro), and twice that for manufactured or engineering items, and one third that for business services as given in the protocol paper by Murphy et al. (2011). They derived about the same energy cost when they took all money spent times the national mean (7.16 MJ/Euro, similar to the global mean) as they found when they did a very detailed analysis of 24 categories of items, including such things as energy costs of roads and cleaning, surveillance, business services, meetings attended by engineers as well as modules.
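Their money-to-energy conversion can be sketched as follows. The intensities come from the text (per the Murphy et al. 2011 protocol); the category names and the helper function are my own illustration:

```python
SPAIN_MEAN_MJ_PER_EUR = 7.16  # mean energy intensity of the Spanish economy

ENERGY_INTENSITY = {
    "manufactured": 2 * SPAIN_MEAN_MJ_PER_EUR,   # manufactured/engineering items
    "services":     SPAIN_MEAN_MJ_PER_EUR / 3,   # business services
    "general":      SPAIN_MEAN_MJ_PER_EUR,       # everything else
}

def embodied_energy_mj(spend_eur, category="general"):
    """Energy cost (MJ) imputed to a monetary expenditure."""
    return spend_eur * ENERGY_INTENSITY[category]

# 1,000 EUR of manufactured hardware is booked as ~14,320 MJ of energy.
print(round(embodied_energy_mj(1_000, "manufactured")))  # 14320
```

Summing these imputed costs over all 24 expenditure categories gave roughly the same total as simply applying the national mean to all spending, as the text notes.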

This is consistent with the view of Herendeen and Bullard (1975) that when one purchases a complex product from final demand all the different energy intensities tend to “come out in the wash”. Raugei and Leccisi (2016), for example, did not calculate any energy cost for which they could not get a direct energy measurement for in their assessment of PV and fossil fuel derived energy for England. To me this seems to miss some costs.

Since then similar results were published by Palmer (2013) for rooftop PVs with battery back up in Australia, and Weissbach et al. (2013), for Germany (see also Raugei 2013; Weissbach et al. 2014; Raugei et al. 2015). In 2016 Ferroni and Hopkirk published an estimate of a negative EROI for cloudy Switzerland and Germany.

Yet Leccisi et al. (2016) and Raugei and Leccisi (2016) came back with estimates of values of 9:1 or higher. How could two different groups of competent investigators get such different estimates?

Why do some solar PV studies have a low EROI and others a high EROI?

1) The largest difference in EROI between these investigators comes from corrections for the quality difference between fossil fuels and electricity.

Raugei et al. (2012) were very critical of comparing the apples of fossil fuels (where EROIs at the source were generally higher) with the oranges of higher quality electricity. They said that a number of summaries (e.g. the “widely cited ‘balloon graphs’ (Hall et al. 2008; Murphy and Hall 2010) and bar charts (Hall and Day 2009) have compared many technologies simply in ‘heat equivalents’, i.e. the energy values are given in terms of their abilities to heat water with no correction for energy quality”. The fundamental issue is that since we are willing in society to trade about 3 heat units of coal, oil or gas to generate one heat unit of electricity, the EROIs of the electricity derived from PVs or wind turbines (or nuclear power plants) should be weighted by a value of some three times that of a heat unit of fossil fuels.

2) Theoretical versus actual electricity output

EROI values in many studies are too high because they used “nameplate” values (1,800 kWh/m2-year) for assessing electricity outputs from PV facilities rather than the actual output. [My comment: this is because private solar facilities usually won’t give researchers this information.  But Prieto & Hall had 3 years of government data from all the facilities in Spain.]  Nameplate values are inaccurate since actual electricity output is reduced below them by clouds, bird droppings, overheating, dust accumulation, lightning, equipment failures, and degradation over time.  Also, too much output can fry electrical components at various locations in the grid.

Prieto and Hall found that the actual output for a facility in Spain with a nominal output of 1,800 kWh/m2-yr was measured at an actual 1,375 kWh/m2-year.  Ferroni and Hopkirk (2016) also found measured values considerably less than nameplate values.
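That shortfall works out to a performance ratio of about 76%, and any EROI computed from nameplate output scales down by the same factor. This is simple arithmetic on the figures quoted above; the correction function is my own illustration and assumes the energy invested stays fixed:

```python
nameplate_kwh = 1800.0   # nominal output, kWh/m2-year
measured_kwh = 1375.0    # actual output measured in Spain, kWh/m2-year

performance_ratio = measured_kwh / nameplate_kwh
print(round(performance_ratio, 3))  # 0.764

def corrected_eroi(nameplate_eroi):
    """Scale a nameplate-based EROI down to measured output, assuming the
    energy invested is unchanged."""
    return nameplate_eroi * performance_ratio

# A nominal EROI of 9:1 shrinks to about 6.9:1 on measured output.
print(round(corrected_eroi(9.0), 3))  # 6.875
```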

3) Solar facilities probably don’t last for 25 to 30 years

A related issue is the assumptions about how long the facility will last.  Most investigators have applied a life span of 25 years for PV facilities, and the IEA guidelines suggest 30 years.  Since most solar facilities are new, this is hard to measure, but in reality it may be less.  Ferroni and Hopkirk (2016) came up with an estimate of a mean of 18 years for Switzerland.  Prieto (personal communication) believes it is much less than 25 years in Spain because many companies have declared bankruptcy and thus do not honor their warranties.  Without warranties or the specific parts to fix failures, many PV facilities in Spain have been abandoned or completely reconfigured.

4) Boundaries and Comprehensiveness of the cost assessments

Carbajales-Dale et al. (2015) state in a footnote, in reference to Prieto and Hall’s study, that they are inconsistent in their definition of system boundary and arbitrary in their inclusion of a large number of non-energy inputs. In their online Commodities and Futures Trading piece, where they claim that “Renewables have a higher EROI than fossil fuels”, they state: “Prieto and Hall add every incidental energy cost they can think of, like the energy costs of building fences around the solar farm, and so on. They even add energy costs for things like corporate management, security, taxes, fairs, exhibitions, notary public fees, accountants, and so on (monetary costs are converted into energy by means of a formula)”.

We respond: “As if these were not legitimate energy costs to build and operate PV plants?” In fact they are. For example, fences and security are necessary given the high value of things like scrap copper; plants are very susceptible to thieves stealing electrical components (a cost, incidentally, that Prieto and Hall included).  Without fences and security, the EROI goes to zero.

Nor can facilities exist without roads, module washing, and financial institutions.

Based on earlier studies (e.g. Hannon 1981), it is clear that all services and goods require substantial amounts of energy, roughly one-third to one-half of the societal mean energy use per dollar. To make a comprehensive assessment we “followed the money” and assigned a very conservative one-third of the national mean energy cost per dollar to all service expenditures, which Prieto could document because, as chief site engineer, he signed off on every penny and activity at a large gigawatt-scale plant in Spain. The services we mentioned are not incidental but necessary and should be included in energy costs, and we have yet to hear a good reason why we should have excluded any of them.

Prieto and Hall found that the construction of modules and basic electronic components such as inverters were only about a third of the total energy cost of building and operating a solar facility in Spain.  We assume this is true elsewhere, yet our critics have yet to do a study that includes much of the real energy costs that we did.

Raugei et al. (2013) have argued that we included costs such as site preparation and environmental issues that are not included in our assessments of oil or coal.  This is not true since all such costs are included in our indirect energy assessments which are based on total “upstream” expenditures by industries. We agree with them that the boundaries should include all energy costs that any energy gathering activity experiences.

5) Technological changes over time

Another issue raised by Raugei and other solar analysts is that the monetary and energy cost of making solar PV modules has been declining for decades and will continue to do so, although perhaps at a declining rate. They criticize the Prieto and Hall study for using technology appropriate in 2008 (actually we used 2009-2011 technology) when there has been a 10 to 20% decline in the energy cost to make modules since then (some of which, in terms of money if not energy, is attributable to such things as subsidies by the Chinese government). As far as we know, there has not been a similar decline in the other inputs to PV systems.  I agree with them that one should do costs and benefits for particular years.

The EROI of Storage for Solar Energy

Swenson (2016) argues that at large scale, solar PV technologies will become more efficient.  But operating at large scale also adds energy and monetary costs for storage and integration into the grid, lowering EROI.

Since sunshine and wind depend on nature’s only partially predictable whims and can’t be scheduled in advance, meeting the demand load can be very difficult.  A day might be sunny or cloudy (with half or less of the insolation), and wind blows on average only about 30% of the time (closer to 20% in Germany), with periods of two weeks or more of no wind at all.  Although PV systems are slightly more predictable, storage is required to compensate for these intermittencies. Yet even if we used all of the batteries in the world, they’d store less than one minute of global electrical output, nor is battery storage cost-effective at a massive scale.

And not just storage, but some other kind of readily dispatchable power. Right now the only storage option feasible at large scale is elevated storage of water in existing facilities or specially constructed pumped storage, but:

  • there’s an electricity loss of 25–35% in pumping the water up and releasing it later
  • the availability of such sites is limited
  • the intermittent release of water harms fish and aquatic ecosystems (Ward and Stanford 1979)
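A back-of-envelope sketch of how that 25–35% round-trip loss erodes delivered energy; the fraction of generation routed through storage is an illustrative assumption, not a figure from the text:

```python
# Fraction of generated energy actually delivered when part of it passes
# through pumped storage with a given round-trip loss (25-35% per the list above).
def delivered_fraction(stored_fraction, round_trip_loss):
    direct = 1 - stored_fraction                           # consumed as generated
    via_storage = stored_fraction * (1 - round_trip_loss)  # survives storage
    return direct + via_storage

# Illustrative assumption: 30% of generation is time-shifted through storage.
for loss in (0.25, 0.35):
    print(f"{loss:.0%} round-trip loss -> {delivered_fraction(0.30, loss):.1%} delivered")
```

Since EROI counts delivered energy, the same losses shave the numerator of any storage-heavy PV system.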

Carbajales-Dale et al. (2014) estimate that adding a relatively small amount of storage to PV systems would quickly put them into energy deficit.

Palmer (2013) found that batteries doubled the energy cost of rooftop solar systems.

These energy costs tend to be ignored by PV and wind advocates, who also argue that coal and nuclear facilities have their own problems with responding to variable loads (which however are being met readily now).

Future EROI assessments should include the large energy costs of storage, which will only grow larger as more intermittent renewables are added to the grid.

Exponential Growth of energy production

Many say we must grow these systems very rapidly and indefinitely.  But Neumeyer and Goldston (2016) found that an initial EROI of 10:1 quickly dropped to 2:1 as most of the power output went to generating new plants.  Carbajales-Dale and Benson (2013) and Kaufmann and Shiers (2008) found a similarly sharp drop in net power output if growth were large.
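The mechanism behind these findings can be illustrated with a toy model (a simplified sketch, not Neumeyer and Goldston’s actual calculation): during exponential growth, a fraction of gross output roughly equal to the growth rate times the energy payback time is consumed building new plants.

```python
# Toy model of the net EROI seen by society while PV capacity grows exponentially.
# A simplified illustration, not Neumeyer & Goldston's actual model.
def effective_eroi(static_eroi, growth_rate, lifetime_yr=25):
    epbt = lifetime_yr / static_eroi        # energy payback time in years
    reinvested = growth_rate * epbt         # share of output building new plants
    if reinvested >= 1:
        return 0.0                          # growth consumes all output
    return static_eroi * (1 - reinvested)

print(effective_eroi(10, 0.00))   # 10.0 -- no growth, full EROI reaches society
print(effective_eroi(10, 0.20))   # 5.0  -- 20%/yr growth halves the net EROI
print(effective_eroi(10, 0.35))   # ~1.25 -- fast growth nearly erases net energy
```

The faster the build-out, the less net energy the rest of the economy sees, which is why a rapid PV expansion would have to lean on fossil fuels.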

Thus a large exponentially growing PV system will have to be constructed using fossil fuels.

But there may not be enough materials to exponentially grow PV facilities.  Fizaine and Court (2015) and Gupta and Hall (2012) found that an exponentially growing PV system might run out of copper in a very few decades.  Hertwich et al. (2014) found that PV systems can use 11 to 40 times more copper than conventional fossil generation systems, though they thought there was enough copper to build a large renewable system.

Business Services and Taxes

All investigators agree that direct (on site) and obvious indirect energy costs should be included (Murphy et al. 2011).

But what about the energy to support business services used?  These require energy to build brick and mortar buildings, which need to be heated, cooled, and lighted with electricity and fossil fuels.  What about the energy to support the taxes paid?  Taxes, when spent, require energy also, such as the energy to build and maintain roads, provide schooling, and so on.  Oil and gas fields as well as PV facilities require considerable construction and maintenance costs that are paid for by governments which in turn operate from taxes.

For example, Pennsylvania has found there are very high costs associated with the new “fracked” gas wells due to the heavy trucks full of water driven over low-quality roads in all seasons of the year, creating damage that has to be fixed with heavy equipment.  In addition the drillers’ children require schooling, and there’s also an increased need for policing and health services (Dutzik et al. 2012; Food & Water Watch 2013).  Therefore, tax expenditures and the energy required to generate these governmental services should be included in the energy cost of a project.


Perhaps most controversial is whether to include the energy required to support labor. There are various kinds of energy costs that might be included, such as the 1.8 MJ/hour of work output a hard-working person delivers doing heavy labor.  People are basically machines that operate at about 20% efficiency, so that work output requires about 9 MJ/hour of food energy.  This is trivial compared to the machines most laborers use, such as a diesel engine burning 135 MJ per hour.

Labor is not available without pay, so the energy to support the worker’s paycheck might be included. Assume a worker is paid $50,000 a year.  Energy must be spent within the economy to produce the goods and services the worker or his/her family demands when spending that paycheck.  In 2015 the U.S. economy used roughly 5.6 MJ per average dollar of GDP.  Thus, assuming our worker’s family spends their money on “average” goods and services, it would take about 280,000 MJ of energy, equal to 46 barrels of oil (at roughly 6.1 GJ per barrel), to support their paycheck.
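The conversion is plain arithmetic, shown here using the text’s figures (roughly 6.1 GJ per barrel of oil is assumed as the conversion constant):

```python
# Converting a worker's paycheck into the energy needed to back it.
salary_usd = 50_000
mj_per_usd = 5.6          # U.S. energy intensity per dollar of GDP, 2015 (from the text)
mj_per_barrel = 6_100     # assumed: ~6.1 GJ per barrel of oil equivalent

embodied_mj = salary_usd * mj_per_usd
barrels = embodied_mj / mj_per_barrel

print(f"{embodied_mj:,.0f} MJ, about {barrels:.0f} barrels of oil")  # 280,000 MJ, about 46 barrels
```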

When first presented around 1970 as a potential factor in EROI, economists said this was inappropriate since it counted consumption, which shouldn’t be added to production.  I think the energy to support workers’ paychecks is a legitimate part of the cost of production, but it is so controversial that we do not include it.


2016-05-26 The real EROI of photovoltaic systems: Professor Hall weighs in. Ugo Bardi’s blog: Cassandra’s Legacy.

May 8, 2016: a recent paper in Energy Policy on the EROI of photovoltaic solar systems came up with an EROI of only 0.85:1, i.e. a NET ENERGY LOSS (Ferroni and Hopkirk 2016).

They found that “at today’s state of development, PV technology cannot offer an energy source but a NET ENERGY LOSS, since its ERoEIEXT is not only very far from the minimum value of 5 for sustainability suggested by Murphy and Hall (2011), but is less than 1 [0.85].”

Prieto (private communication) notes that “Ferroni/Hopkirk calculate the failure rates of PV modules from the public statistics of PV Cycle (a European recycling entity), which do not count modules that are simply abandoned.  The data contradicts the IEA PVPS program’s assumed 30-year life span for PV systems: the failure rates give a figure much closer to 18 years.”

Prieto also says that one criticism of their paper is that Spain is a lousy country for solar while Germany is the “flagship of perfect workmanship.” Yet after 10 years, the reality is that Spain produces twice as much energy as Germany on a per-MWp-installed basis, not only because Spain has better irradiation (about 50% more), but also because of the more efficient and better-maintained utility-scale installations in Spain versus the scattered individual rooftop installations in Germany. He also notes that the Ferroni (2016) paper “is dramatic and is raising some blisters.”

Ferroni, F., Hopkirk, R. J. 2016. Energy Return on Energy Invested (ERoEI) for photovoltaic solar systems in regions of moderate insolation. Energy Policy 94:336–344.


2015-4-1: Stanford Net Energy conference 

Notably, the founder of EROI, Charles A. S. Hall, wasn’t invited.

At this Stanford University conference the goal was to start a new net energy think-tank that would standardize net energy analysis by specifying how researchers ought to conduct their studies: with the most up-to-date life-cycle and other data, boundaries, assumptions, and so on.  If researchers strayed from this format or added additional material, they’d need to say why. The lack of standardization is one of the many reasons policy makers don’t take EROI studies seriously.

In my opinion, this lack of standards makes it easy for proponents of various renewable solutions to calculate an EROI much higher than it actually is.  Without standards, it is easy to inflate EROI by not counting the energy to make steel because the researcher claims it was 100% recycled, or by cherry-picking the best-performing wind or solar farms over their best-performing time period, and so on.  Policy makers can’t be expected to make decisions or recommendations when EROI studies of wind range from 4 to 115.

Meta-studies can’t be done either, because there is too much missing data, too many unstated assumptions, and too many different models.  Real data is rarely available, since private companies don’t have to reveal, and often don’t want to reveal, their true performance, operation, and maintenance costs, lest they get less investment and lower stock prices.

Yet even at the conference, several EROI presentations were not clear about their boundaries.  Long after the artificial photosynthesis presentation, which proposed combining hydrogen with CO2 to make liquid fuels (with a spectacularly low EROI of only 1.66), I found out that the outer boundary was set at 300 feet outside the factory gate and didn’t include storage or delivery to the customer.  Those were probably not calculated because the EROI would then be less than 1, an energy sink.

By the end of the conference I was a bit frustrated at the lack of discussion of boundaries, because this has been a problem for 40 years and is the main problem to be solved to get policy leaders to pay attention, and more importantly fund such studies, since the researchers often have to pay for these studies out of their own pocket.

So at the end of the conference, with this issue rarely referred to the entire time, I asked the panel what they thought should be done about the boundary issue. For example, ethanol studies using narrow boundaries found higher EROI values than those with the widest boundaries, which often found an EROI below 1.  I recommended Spain’s Photovoltaic Revolution by Prieto and Hall, which used real production data over several years rather than the theoretical data used in 99.9% of other studies, as a good guide to what to include or exclude, since it made sense to make the boundaries wide, not narrow. Also, since nearly every presentation was on renewables that generate electricity, perhaps new standards for a fossil-free world should include how much electricity it would take to transport the 8,000 pieces of a wind turbine’s supply chain; the electricity to mine iron and make steel, cement, fiberglass, and copper; and the electric trucks, electric grid, and batteries or catenary system needed to deliver goods and the final wind turbine to its site.

I had the strong impression this was not a welcome question. No one leaped to answer, and finally one of the panelists said that the boundaries ought to be wide but that this question was best talked about over a glass of wine.

After this session one of the speakers, Marco Raugei, at Oxford Brookes University, came over.  He was very upset by my question because he thought Prieto and Hall’s book was awful. He told me it was so bad that several scientists had tried to prevent Springer from printing it.

I told Raugei that I had looked very hard for any criticism of the book but had not been able to find any rebuttals, so what exactly was wrong with it?  Raugei replied that the book wasn’t peer-reviewed. Well hello, books aren’t peer-reviewed; surely he knew this. I also pointed out that Farrell in 2006 had used non-peer-reviewed papers in his famous ethanol EROI study. So I asked why someone didn’t write an analysis to refute the book. Raugei replied that since it wasn’t peer-reviewed, why bother.

When I asked Raugei to tell me more about what was wrong, he said that it was inconsistent in so many ways, not defensible the way economic inputs were converted from money to energy such as the insurance figures, some air travel expenses, too haphazard, inconsistent in method and goal, not clear enough in stating that this is just one snapshot moment in time in Spain and that it used an ill-advised subsidy scheme, that the EROI is not the same in other countries and parts of the world, and that the goals should have been more explicitly explained. I thought: What goals? Did he think Prieto and Hall had a goal of a low EROI figure?

Prieto actually has a strong motivation to find a high EROI, since he built some of the solar plants he writes about in the book; he could make more money by exaggerating solar PV EROI.  Hall certainly has no dog in this fight.  In general, it is scientists funded by industry who produce the most problematic research.  For example, scientists funded by the National Corn Growers Association found the highest EROI results for ethanol in their non-peer-reviewed papers.  Recently it was discovered that several Harvard scientists were paid by the sugar industry to blame fat, not sugar, for obesity.

It was ironic that Steven Chu was the opening keynote speaker at this net energy conference, since Tad Patzek once wrote me that “Steven Chu decided not to fund my Laboratory Directed Research and Development (at Lawrence Berkeley Laboratory) project whose goal it would have been to arrive at a consistent thermodynamic description of all major energy capture schemes bio and fossil, so that we compare apples with apples. What I did not appreciate is that no one wants to know that they may be working on a senseless project, such as industrial hydrogen from algae. I despair seeing the rapid corruption and sovietization of American science (without the Soviet strengths in basic sciences), but can do little about it. … It is not easy to get funded on the subjects I have proposed.  …In fact, my LDRD proposal to develop the comprehensive thermodynamic language to talk about the different energy resources was just not funded…”

Someday, when a future historian of science attempts to write the history of EROI, I hope that Patzek, Hall, and others will have written memoirs discussing how hard it was to get funding and get published (did scientists really try to prevent Spain’s Photovoltaic Revolution from being published?!), the criticism they received, and so on, because I think it will be of great interest to the grandchildren and generations further down the line.  Understanding why renewables have such low EROI might prevent cargo-cult-like behavior: spending huge amounts of resources and time to build them after the dark age that may ensue at some point on the downslope of Hubbert’s curve.


2015-4-11 Pedro Prieto responds to criticism (private communication)

(Bold is my emphasis):

Alice, as promised, let’s start answering and commenting on some of your wise comments.

The first thing is to confirm that no EROI studies can be taken seriously if the range of results varies so wildly. So it is quite a sensible approach to try to reconcile the different studies and methodologies.

Having said that, the prevailing methodology is what fails, specifically in the case of Solar PV analyses, but also in others. Experts in solar PV will have more and more available data as time passes from global installations.

Until now, we had seen many studies on different solar PV technologies with different typologies and topologies. Even before our book “Spain’s Photovoltaic Revolution: The Energy Return on Investment” (Prieto & Hall, Springer 2013) appeared, there were already many variances and divergences.

Even works of Fthenakis or Raugei have contemplated significant variances in the EROI results over time and with different studies of solar plants.

But they all had a methodology in common: they generally used, as you have correctly pointed out, the best material recovery, the best theoretical solar PV system in each case, the best-irradiated areas, and the assumption that systems would operate at full performance over their lifetime with no problems. In summary, a methodology that has served as documentary support for many to reach global conclusions on the long-term ability of modern renewables to replace fossil fuels, extrapolated massively from particular plant analyses. That was the case, for instance, with Mark Jacobson and Mark Delucchi in their studies on how modern renewables could replace fossils and supply present global consumption. This is a traditional bottom-up approach.

After my several years of field experience with different technologies, typologies, topologies, latitudes, and countries at different stages of development, and confronting all that with real-world results, Charles Hall and I decided to embark on a study of solar PV, after discussing these issues over a pint of beer in an Irish pub in Cork at the ASPO International Conference held there in 2007. But we tried to do it in a radically different form. It took us several years of back and forth, discussions, checks and double checks, consulting with other experts, and so on.

The study, as many of you may already know, was on a real world installed plant in the best irradiated country in Europe (Spain), with the official and very accurate energy production records of the Ministry of Industry (read by telemetry to more than 40,000 digital sealed meters in each of the respective individual plants) over a period of three complete years (2009-2011). That was the main innovation: a top-down analysis and the huge scope of the solar PV plants working in the real world, rather than theoretical academic bottom-up approaches.

With more than 140 GW of plants installed worldwide, and several complete yearly cycles of operation for many of them, it is going to be increasingly difficult for some authors to stick with the academic approach instead of verifying the real-world behavior of EROI.

Now, about the energy input boundaries.

Of course, if we focus only on the energy inputs of the solar modules and their composition (glass, aluminum frame, connection box, copper or silver soldering, doping materials, silicon, ingots, wafers, cells, etc.) and perhaps inverters or metallic structures orienting and tilting the arrays, then we may come with spectacular results in a very good irradiated area with the theoretical module yield. This is what has been generally considered in most of the studies carried out to date and what is proposed by some authors as the recommended methodology.

But this is just one of the factors we looked into when we decided to analyze the energy inputs of a complete solar PV system, not just what appears in the marketing pictures of the solar plants.

After many years working in the field, one can appreciate the number of activities that are indispensable (sine qua non conditions), for a solar PV plant to work and operate as some of the authors of several EROI/LCA/EPBT studies consider they are going to work.

We differentiated some 24 factors in an additional analysis that was not absolutely complete or exhaustive, but proven and real. Hardly any of these factors had been considered in more than a few of the analyses made by the most renowned solar PV EROI authors. Your review of our book already identifies some of them, and I have mentioned them on many occasions.

One of the factors, “a7” (the energy input required for modules, inverters, trackers (if any) and metallic infrastructures, labor excluded), was precisely the EROI as usually calculated by many authors. We decided not to judge the different results of this universe of conclusions but to accept a sensible average of the range of many publications that gave us an EROI in itself for this concept of 8:1; that is, for 25 years of lifespan an Energy Pay Back Time (EPBT) of 3.1 years. Or an energy input cost equivalent to 0.125 of the total generation along the lifespan of the system.
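The arithmetic behind factor a7 is direct and worth making explicit:

```python
# Factor a7: the "conventional" modules-plus-inverters EROI from the literature.
eroi_a7 = 8.0        # sensible average of published values (from the text)
lifespan_yr = 25

epbt_yr = lifespan_yr / eroi_a7     # energy payback time
input_share = 1 / eroi_a7           # energy input as a share of lifetime output

print(f"EPBT: {epbt_yr:.1f} years")                 # 3.1 years
print(f"Input share of generation: {input_share}")  # 0.125
```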

But then we started to consider the rest of the factors (boundaries or extended energy input boundaries) and discovered that conventional EROI studies were ignoring two-thirds of the energy inputs indispensable to get the solar PV plants into operation.

The list calculated the energy inputs, based on the experience of several plants in Spain and extrapolated to the 4 GW of installed power studied in the book, for:

  • road accesses to the plants, foundations, and canalizations
  • perimeter fences
  • evacuation lines and rights of way
  • O&M, module washing or cleaning, and self-consumption
  • security and surveillance
  • transportation, sometimes from as far away as China
  • premature phase-out or un-amortized manufacturing and other equipment
  • insurance
  • fairs, exhibitions, promotions, and conferences (like the one you had at Stanford: to whom should the energy expenses involved be attributed?)
  • administration expenses
  • municipal taxes, duties, and levies
  • cost of land rent or ownership
  • circumstantial labor (notaries public, public officers, civil servants, etc.)
  • agent representative or market agent
  • equipment theft or vandalism
  • communications, remote control, and plant management
  • pre-inscription, inscription, and registration bonds and fees as required by the authorities
  • electrical network and power line restructuring as a consequence of the newly injected 4 GW, in unexpected and not previously planned nodes of a national grid of about 100 GW
  • faulty modules, inverters, or trackers
  • costs associated with the injection of intermittent loads: network stabilization costs (referred only to combined-cycle gas-fired plants, whose costs are well known)

Some of these factors may certainly have diminished with time. Many others have certainly increased. Taxes, for instance, have risen sharply. Theft in Spain is not significant, but in many countries of the world it is a problem.

We mentioned and developed only a little of the associated energy costs of injecting intermittent loads, via pumped storage or other massive electric energy storage systems, because we knew it was going to be fundamental and relevant but did not want to further burden an already meager EROI. These costs are still fiercely debated today in Spain and many other countries, but they are certainly relevant if modern renewables are to take over the functions of today’s fossil-fueled global society.

As you can see, the BOUNDARIES are of the essence in determining the real-life EROI, rather than an academic EROI. No one critical of our book could say, to the best of my knowledge, that any of these briefly listed factors was not real, or was not needed to keep a solar PV system (at least in Spain) up and running over its lifetime. But for some strange reason they had never been considered.

Once the facts of real life were recognized, this battlefield was rapidly abandoned, and the debate shifted to the “comparison” with other energy sources, namely fossil fuels. Some authors claimed that if fossil fuels were treated with these ‘extended’ energy input boundaries and factors, their EROIs would obviously go down in a similar proportion.

What they did, then, was to apply a multiplying factor on the order of 3 for solar PV, arguing that this is logical when comparing equivalent systems with an equivalent methodology. I fully disagree, and I have shown on several occasions why:

The world uses (mostly burns) about 13 BToe/year of primary energy or more than 510 EJ/year.

Of that, approximately 170 EJ of fossil + nuclear go to produce an equivalent of 40 EJ of clean and useful electricity, this making the point of Raugei valid to some extent, if the solar PV systems would entirely go to replace electricity produced by fossil fuels, because of the losses of about 2/3 of the primary energy in the conversion process.

But the world is not behaving in this way, as scientists like Raugei and Fthenakis must know. New renewables just enter into the energy equation to simply provide more energy to the global system.

Above all, the most important flaw in this assumption is that the world also consumes about 285 EJ in non-electrical uses: aviation, civil works, mining, transportation, merchant fleets, armies, and agriculture (“eating fossil fuels,” Dale Allen Pfeiffer). If we were to use electricity from renewables to replace the fossil fuels used for these global activities, likely through an energy carrier like the eternal hydrogen promise, the proposed multiplication factor used by Carbajales-Dale et al. would immediately operate in reverse and become a division factor, probably on the order of 3, with respect to the direct use of fossil fuels today. That is why we did not employ this “correction factor” used by Carbajales-Dale et al.
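Prieto’s point that the ~3x factor cuts both ways can be sketched numerically; the EJ figures come from his text, while the hydrogen-pathway efficiency is an illustrative assumption of mine, not his:

```python
# Why the ~3x primary-energy "credit" for PV electricity reverses for non-electric uses.
fossil_ej_for_electricity = 170   # EJ/yr of fossil + nuclear input (from the text)
electricity_ej = 40               # EJ/yr of electricity produced (from the text)

# Replacing fossil ELECTRICITY: each EJ of PV output displaces several EJ of primary fuel,
# which is the basis for the multiplying factor of ~3 used by some authors.
credit = fossil_ej_for_electricity / electricity_ej
print(f"Primary fuel displaced per EJ of PV electricity: {credit:.2f} EJ")

# Replacing fossil fuel in NON-electric uses through an energy carrier such as hydrogen:
# an ILLUSTRATIVE round-trip efficiency assumption, not a measured figure.
carrier_eff = 0.33
print(f"Usable fuel per EJ of PV electricity: {carrier_eff:.2f} EJ (a division, not a multiplication)")
```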

I will not enter into this debate further, because I find it futile. I do not care if, when treated with these extended boundaries, the EROI of coal, oil, or gas drops by two-thirds from already published studies, which under the old methodologies range, for instance, from 100 to 12:1 for oil, depending on the period and place, or 60 to 20:1 for coal, with gas at similar levels.

Cutting present EROI estimates by two-thirds will not change the fact that this society now operates on 80% fossil fuels, and that fossil fuels make it possible to run. This is the final proof.

An important part of the rest (excluding perhaps a part of biomass in underdeveloped countries) is also produced thanks to the energy subsidies given by fossil fuels to the other sources, like nuclear or hydro, which we could not have dreamed of having if a well-endowed fossil-fueled society and its related machinery and technology weren’t available. Nuclear, hydro, solar PV, solar thermal, and wind energies are absolutely underpinned by a fossil-fueled society, not vice versa. Global society built its growing economic, industrial, and technological life basically without those energy sources. But we cannot imagine these sources working and feeding themselves through the whole complex value chain, while also providing an important net energy surplus to global society. Not now, nor on any foreseeable horizon.

We cannot ignore this crucial fact: biomass initially helped coal to develop, but 60 years after the first massive use of coal, this fossil fuel had already surpassed biomass in volume and versatility of use and had become quite independent of it.

This happened circa 1900, at the level of 800 MToe/year of global primary energy consumption and with about 1.6 billion inhabitants.

Then came oil, much more dense and versatile than coal. It took oil again about 60-70 years to pass coal and biomass as the main global energy source. This happened circa 1960, but then, in a consumption level of 3,000 MToe/year and with 3 billion people on Earth.

Now, we move in the level of 13,000 Mtoe/year of global primary energy consumption and with about 7.2 billion people. But gas or nuclear have not passed oil as the prime energy source. And we have to wonder why, if they were discovered and used massively more than 60 years ago.

Quite the contrary, we are moving fast, because of peak oil, back to the possibility of coal surpassing oil again in a decade or so, as the main energy contributor, but this time, probably at a lower global consumption level and probably with a world population still growing in numbers and in poverty.

The first two big energy transitions (biomass to coal and coal to oil) were made while the surpassed energy source was still growing and helping to boost the coming one. The new sources soon proved quite self-sufficient to feed a growing and demanding global society, well after paying their own energy inputs in exploration, mining or drilling, extraction, transport, refining, and distribution, WITHOUT ANY DOUBT, because nobody doubts the evolution of the last century and the role of fossil fuels in it. Now we have to face the third big energy transition at the highest level of energy consumption and population ever, and with the main energy fuel, oil, in depletion.

Of course, one has to accept that in this complex world all energy sources are somehow interrelated, but, as Orwell wrote in Animal Farm, ‘all animals are equal, but some animals are more equal than others.’ This is exactly what is happening with energy sources and their properties and qualities: they can all be measured in EJ or TWh or whatever, but some are more equal than others. That is, there is an obvious ASYMMETRIC interdependence of energy sources, since over the last century fossil fuels (and oil in first place) were responsible for our present global status.

To me, then, it is a non sequitur to shift the EROI battlefield toward extending the boundaries in fossil fuel EROI studies in order to lower them and favor renewables by comparison. Whatever EROI and boundaries are considered, it is obvious that the present global society, spending 13 BToe/year of primary energy (80% fossil), has been able over the last century (we shall see for how long) to pay its own energy expenses AND provide a huge net energy surplus at the disposal of 7.2 billion humans, who have grown at a spectacular rate for more than a century.

For instance, when the IEA mentions in their WEOs the costs of ‘subsidies’ to different energy sources, it always calculates much bigger subsidies for fossil fuels than for the modern renewables. It is a sort of energy fallacy, from my point of view.

If global society has resources to subsidize anything, it is because it has previously obtained a surplus of resources from somewhere. And this ‘somewhere’ is obviously a global society that created them using mainly fossil fuels at discretion. I can ‘subsidize’ my son to go to the cinema, but I cannot ‘subsidize’ myself by moving the salary I earned and saved from my left pocket to my right pocket.

I understand that some fossil fueled activities may certainly be ‘subsidized’ in certain forms. For instance, kerosene for aviation in the airports, which is tax exempted in many countries, when compared with gasoline. Or ‘subsidized’ coal prices paid to depleted coal basins in Spain to continue producing low quality brown coal, to keep the social peace in the region and avoid the miners revolting. But it is a fallacy to conclude that ‘somebody’ is ‘subsidizing’ fossil fuels globally speaking, when fossil fuels are 80% of our global activities creating surplus. From a strict energy point of view, fossil fuels are subsidizing basically all world activities. Period.

What the OECD watchdog does in reality is a mystifying operation. Digging into the IEA figures on fossil fuel 'subsidies', one discovers that they are really talking about 'prices' or 'price levels' of fuel in producing countries that sell it domestically below the prices the IEA would wish, so as to leave more ground for the big OECD importers to buy this fuel from producers at prices the OECD can afford.

Coming back to the energy input expenses under extended boundaries: we also left out the financial costs, despite knowing that they were quite large and generally also a sine qua non factor. Most of the plants were financed for about 80% of the total turnkey cost over roughly 10-year terms, at interest rates ranging from 2% to 5% per year. I firmly believe that finance is a form of using a pre-stored available resource (in a fossil-fueled society, coming from fossil fuel related activities) to erect, put in place and operate a given system; in this case, an energy system. So, when one takes out credit or a lease and has to pay back both the principal and the interest to the bank over, let's say, a 10-year term, this is energy evaporating into the system through the bank.
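A minimal sketch of how much money the financing terms described above add on top of the principal. The 1 million euro turnkey cost is a purely hypothetical round figure of mine; only the 80% financed share, the 10-year term, and the 2-5% rates come from the text:

```python
# Illustrative amortization of the financing left outside the EROI boundary.
# Assumption: a hypothetical 1 M-euro turnkey plant, 80% financed over 10
# years with monthly payments, at the 2-5% annual rates mentioned above.

def total_interest(principal, annual_rate, years):
    """Total interest paid over the life of a standard amortizing loan."""
    r = annual_rate / 12                         # monthly interest rate
    n = years * 12                               # number of monthly payments
    payment = principal * r / (1 - (1 + r) ** -n)
    return payment * n - principal

project_cost = 1_000_000        # euros (hypothetical)
financed = 0.80 * project_cost  # 80% of the turnkey cost is borrowed

for rate in (0.02, 0.05):
    extra = total_interest(financed, rate, 10)
    print(f"{rate:.0%} interest: {extra:,.0f} euros of interest "
          f"({extra / project_cost:.1%} of the project cost)")
```

At these terms the interest alone adds roughly 8% to 22% of the project cost, money (and hence, in this accounting, energy) that never appears in module-only EPBT studies.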

Labor energy input costs were also left aside, even though we had a very good set of data from industry in Spain, classified by category, skill, and full-time versus part-time employment in the sector. The reason was that some of our factors may already have included part of this labor, and we wanted to avoid even limited duplication.

If we had included these financial costs (even just the additional money created and paid back as interest on the requested credits or leases) and the labor energy input costs, the solar PV EROI would probably have plummeted to below 1:1.

In fact, it is very surprising how they criticize the methodology we used to evaluate the financial data (whose numbers they basically did not question) by stating that the conversion of monetary into energy units is not adequate and does not conform to conventional input-output methodologies. Our methodology is clear about these conversion units and reflects a quite direct relation between GDP and total primary energy spent in Spain, or between active labor and the energy spent per laborer in any given industrial activity or service rendered. This despite our noting that Spain has not published input-output tables for its economy for years (Carpintero, Oscar).
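A minimal sketch of the monetary-to-energy conversion just described: divide national primary energy use by GDP to get an energy intensity, then apply it to a monetary expense. Both input figures below are round illustrative assumptions of mine for a Spain-like economy, not the book's data:

```python
# Convert a monetary cost into an embodied-energy estimate via the
# economy-wide energy intensity (primary energy per unit of GDP).
# Both macro figures are illustrative assumptions, not the book's data.

GDP = 1.0e12              # euros/year, assumed round figure
PRIMARY_ENERGY = 5.0e18   # joules/year (~120 Mtoe), assumed round figure

energy_intensity = PRIMARY_ENERGY / GDP   # joules per euro of GDP

expense = 250_000                         # euros of, e.g., insurance or legal fees
embodied_energy = expense * energy_intensity
print(f"{energy_intensity / 1e6:.1f} MJ/euro; "
      f"{embodied_energy / 1e9:.0f} GJ attributed to the expense")
```

The point of the method is that a euro spent on insurance, security, or lawyers mobilizes, on average, its proportional share of the economy's primary energy, whether or not a conventional LCA boundary captures it.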

However, it seems remarkable how some are incapable of detecting any anomaly in EPBTs that have solar systems recovering their embodied energy in a matter of a few months over a lifetime of 30 years (EROIs of 40:1!), nor in the astounding divorce from economic reality: promoters look for roughly 10 years of economic payback, and that with heavy premium tariffs (Germany, Spain, Italy, now the UK or France) or tax holidays and exemptions (the US and others); without such incentives, economic recovery takes longer than the expected lifetime.

Without these incentives, the rest of the world is a renewables wasteland. Promoters are virtually not investing in modern renewables (with few exceptions in volume worldwide) where no such incentives exist. The 140 GW world installed base certifies this: about 70% of it was installed in developed countries with incentive schemes, and some 25% in emerging countries like China or India (now Brazil or South Africa, in much smaller amounts), also with strong political incentives to capture world markets, leaving a meager 5% for the rest of the world. Doesn't this crude reality say anything to those whose monetary-to-energy conversion methodologies yield EPBTs of a few months alongside financial recoveries of many years?

So I am not surprised, Alice, that some experts, with tens of published papers reporting high solar PV EROIs on their records, showed some annoyance at your question about our book. I would humbly ask that when somebody claims we work with methodological 'inconsistencies' (a term they are so fond of using to disqualify disturbing views), they look instead into the above explanations and the facts of the real world.

I have kept silent until now on what I consider very regrettable behavior, now made public by Raugei, as per your comments. It is true that they dared to write to our publisher asking him to stop publishing the book while it was still in draft, a sort of censorship I had not seen since medieval Spain, centuries ago. The recommendation came after somebody took the draft from our publisher without our consent some time before the release; they then tried to stop the publication, even threatening to discredit the book (as they have been doing since) if it were published. I have never seen this type of behavior, least of all in academic circles.

The first reason they gave is that we missed our final EROI (2-3:1 being quite conservative; I reaffirm myself in it more and more as the years pass) by a factor of 3. That factor is precisely Raugei's view of the penalty to be imposed on fossil fuels if a clean electricity source could replace every kWh of fossil origin, considering that in conventional fossil fuel plants (or nuclear plants, for that matter) we need about 3 units of primary energy to get 1 unit of electric energy out. We tried to clarify this in some posts, but unsuccessfully.
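The factor-of-3 at issue can be sketched directly. The ~33% average efficiency of conventional thermal generation is the common round figure behind the argument, used here as an assumption:

```python
# Restating an electricity-output EROI in "primary energy equivalent"
# terms: credit each output kWh with the ~3 kWh of primary fuel a
# conventional thermal plant would burn to generate it.

THERMAL_PLANT_EFFICIENCY = 1 / 3   # primary energy -> electricity, assumed

def eroi_primary_equivalent(eroi_electrical):
    """Divide by the thermal efficiency to credit displaced primary fuel."""
    return eroi_electrical / THERMAL_PLANT_EFFICIENCY

print(f"{eroi_primary_equivalent(2.4):.1f}")  # the book's 2.4:1 becomes ~7.2:1
```

This is exactly how an extended-boundary 2.4:1 and a primary-equivalent ~7:1 can describe the same plant: the disagreement is a bookkeeping convention, not a measurement.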

Fortunately, the publisher did not consider this a direct threat and the book was finally published.

As for Raugei's comment that the book was 'awful' because it had not been 'peer reviewed', he disqualifies himself. Just look at the acknowledgements of the book: two professors of physics from different universities reviewed it and produced sensible comments. Charles Hall, the coauthor, is an institution in EROI, here questioned with superficial remarks. Besides, publishing a book is a free decision that does not necessarily require peer review; yet despite that, we did have our work reviewed. Perhaps what Raugei meant is that the peer review was not done by the usual reviewers in an inbreeding game.

I have been observing that in the academic world things are unfortunately getting tougher. Some technical papers now have more pages of references than pages of content (see more of my comments on the article below). In solar PV papers there seems to be an excess of 'selfies', the self-citations that have become a fashion in academic publishing. And since credit is gained by the number of times a given author is cited, a race has begun toward a sort of inbred cross-quotation, confirming Tadeusz Patzek's fears about the 'Sovietization' of American science. Perhaps what disturbed Raugei about our book is that we stayed out of these habits and did not leave the review to the usual teams, which in all probability would have ended with the book in the garbage.

Of course, Raugei is right when he presumes that our case is perhaps valid only for Spain and the 4 GW installed in the period 2009-2011. Because if we had considered Germany and its publicly reported solar PV production within the same period, the energy return in MWh per MWp installed would have been less than half that of Spain.

I am now retired and happily tending my organic farm. Neither now nor since 2001, when I stopped working for a telecom corporation, have I had any interest in discrediting or crediting solar PV systems. I do not make my living by publishing papers and trying to gain credibility on a given subject. If anything, as you very well stated, I should have defended solar PV systems: I own 50 kW within a 1 MW plant that I manage, and I have helped design, develop, and consult on (including what we call here 'permisología', the intricate paperwork to obtain all the permits and licenses for solar PV plants) more than 30 MW now working with different technologies, typologies, and topologies at different latitudes in Spain. I have also cooperated with projects in some Latin American and African countries, and for a couple of years within that period I worked as director of development of alternative energies for a listed Spanish company.

Just a final nota bene, with additional comments on the paper: Energy return on investment (EROI) of solar PV: an attempt at reconciliation. Michael Carbajales-Dale, Marco Raugei, Vasilis Fthenakis, Charles Barnhart. Journal of Latex Class Files, Vol. 11, No. 4, December 2012.



I can’t get the link provided to work, but maybe it’s my computer settings.

The paper presents itself as an attempt to reconcile different views on solar PV EROI, but its authors never informed me of it, even though I have the dubious honor of being cited in it several times.

I did not know that I had formed a so-called "Prieto group in Madrid", listed second after the Fthenakis group at Brookhaven and before the Weissbach group in Berlin and the Brandt group at Stanford.

Also surprising is that the document is dated December 2012, while our book was not published until the spring of 2013. Even more surprising, the book is criticized several times under the wrong citation, 'P. Prieto and C. Hall, "Eroi of Spain's solar electricity system," 2012', rather than the correct 'Prieto, P. & Hall, C. Spain's Photovoltaic Revolution: The Energy Return on Investment. Springer, 2013', in a reference list that occupies almost as much space as the article itself. This does not seem a very edifying example of referencing others.

Then the paper comments that "an average energy payback time (EPBT) of 3 years and lifetime of 25 years are used to calculate the EROI_PE-eq = 8.33 value for this part of the system. No references are given for any other input data; though it appears that anecdotal worst cases of installations were generalized by the authors".

Well, a brief look at the a7 factor (page 78), "Energy Derived from Conventional Life Cycle Analysis Studies and Calculated as an Inverse Factor of EPBT", shows an EROI of 8:1 for the energy content of modules, inverters, trackers and metallic infrastructure, and quotes works by Fthenakis, Alsema and Kim, among others not cited (so as not to make the list tedious), whose EROI conclusions cluster around 8:1 with these parameters analyzed (without extended energy input boundaries). More could be found in many places; in fact, such EROI levels for solar PV were quite common in the early years of the 21st century. See, for instance, Bankier and Gale, Energy Payback of Roof Mounted Photovoltaic Cells, Energy Bulletin, June 16, 2006, where EPBTs range from 1 year (EROI 25:1) to 25 years (EROI 1:1):

| Author | Low estimate (years) | Low-estimate key assumptions | High estimate (years) | High-estimate key assumptions |
|---|---|---|---|---|
| Alsema (2000) | 2.5 | Roof-mounted thin-film module | 3.1 | Roof-mounted mc-Si module |
| Alsema & Nieuwlaar (2000) | 2.6 | Thin-film module | 3.2 | mc-Si module |
| Battisti & Corrado (2005) | 1.7 | Hybrid photovoltaic/thermal module | 3.8 | Tilted-roof, retrofitted mc-Si module |
| Jester (2002) | 3.2 | 150 W peak power mc-Si module | 5.2 | 55 W peak power mc-Si module |
| Jungbluth (2005) | 4 | mc-Si module, emissions not taken into account | 25.5 | sc-Si module, emissions taken into account |
| Kato, Hibino, Komoto, Ihara, Yamamoto & Fujihara (2001) | 1.1 | 100 MW/yr a-Si modules including BOS | 2.4 | 10 MW/yr mc-Si module including BOS |
| Kato, Murata & Sakuta (1997) | 4 | sc-Si module, excluding all processes required for micro-electronics industries | 15.5 | sc-Si module, including all processes required for micro-electronics industries |
| Kato, Murata & Sakuta (1998) | 1.1 | a-Si module, excluding all processes required for micro-electronics industries | 11.8 | sc-Si module, including all processes required for micro-electronics industries |
| Knapp & Jester (2001) | 2.2 | Production thin-film module | 12.1 | Pre-pilot thin-film module |
| Lewis & Keoleian (1996) | 1.4 | 36.7 kWh/yr frameless a-Si module, Boulder, CO | 13 | 22.3 kWh/yr a-Si module with frame, Detroit, MI |
| Meijer, Huijbregts, Schermer & Reijnders (2003) | 3.5 | mc-Si module | 6.3 | Thin-film module |
| Pearce & Lau (2002) | 1.6 | a-Si module | 2.8 | sc-Si module |
| Peharz & Dimroth (2005) | 0.7 | FLATCON (Fresnel-lens all-glass tandem-cell concentrator) module, 1900 kWh/(m2 yr) insolation | 1.3 | FLATCON module, 1000 kWh/(m2 yr) insolation |
| Raugei, Bargigli & Ulgiati (2005) | 1.9 | CdTe module including BOS | 5.1 | mc-Si module including BOS |
| Schaefer & Hagedorn (1992) | 2.6 | 25 MWp a-Si module | 7.25 | 2.5 MWp sc-Si module |
| Tripanagnostopoulos, Souliotis, Battisti & Corrado (2005) | 1 | Glazed hybrid photovoltaic/thermal | 4.1 | Unglazed hybrid photovoltaic/thermal |
Alsema E. (2000). Energy Pay-back Time and CO2 Emissions of PV Systems. Progress in Photovoltaics: Research And Applications, 8, 17-25.
Alsema, E., Nieuwlaar, E. (2000). Energy viability of photovoltaic systems. Energy Policy, 28, 999-1010.
Battisti, R. Corrado, A. (2005). Evaluation of technical improvements of photovoltaic systems through life cycle assessment methodology. Energy, 30, 952–967.
Jester, T. (2002). Crystalline Silicon Manufacturing Progress. Progress in Photovoltaics: Research and Applications, 10, 99–106.
Jungbluth, N. (2005). Life Cycle Assessment of Crystalline Photovoltaics in the Swiss ecoinvent Database. Progress in Photovoltaics: Research and Applications, 13, 429–446.
Kato, K. Hibino, T. Komoto, K. Ihara, S. Yamamoto, S. Fujihara, H. (2001). A life-cycle analysis on thin-film CdS/CdTe PV modules. Solar Energy Materials & Solar Cells, 67, 279-287.
Kato, K. Murata, A. Sakuta, K. (1997). An evaluation on the life cycle of photovoltaic energy system considering production energy of off-grade silicon. Solar Energy Materials and Solar Cells, 47, 95-100.
Kato, K. Murata, A. Sakuta, K. (1998). Energy Pay-back Time and Life-cycle CO2 Emission of Residential PV Power System with Silicon PV Module. Progress in Photovoltaics: Research and Applications, 6, 105-115.
Knapp, K. Jester, T. (2001). Empirical Investigation of the Energy Payback Time for Photovoltaic Modules. Solar Energy, 71, 165–172.
Lewis, G. Keoleian, G. (1996). Amorphous Silicon Photovoltaic Modules: A Life Cycle Design Case Study. National Pollution Prevention Center, School of Natural Resources and Environment, University of Michigan.
Meijer, A., Huijbregts, M., Schermer, J. Reijnders, L. (2003). Life-cycle Assessment of Photovoltaic Modules: Comparison of mc-Si, InGaP and InGaP/mc-Si Solar Modules. Progress in Photovoltaics: Research and Applications, 11, 275–287.
Odum, H. (1996). Environmental Accounting: Emergy and Environmental Decision Making. John Wiley & Sons, New York.
Pearce, J., Lau, A. (2002). Net Energy Analysis for Sustainable Energy Production from Silicon Based Solar Cells. Proceedings of Solar 2002 Sunrise on the Reliable Energy Economy June 15-20, 2002, Reno, Nevada
Peharz, G., Dimroth, F. (2005). Energy Payback Time of the High-concentration PV System FLATCON. Progress in Photovoltaics: Research and Applications, 13, 627–634.
Raugei, M. Bargigli, S. Ulgiati, S. (2005). Energy and Life Cycle Assessment of Thin Film CdTe Photovoltaic Modules. Energy and Environment Research Unit, Department of Chemistry, University of Siena, Italy.
Schaefer, H. Hagedorn G. (1992). Hidden Energy and Correlated Environmental Characteristics of P.V. Power Generation. Renewable Energy, 2, 15-166.
Tripanagnostopoulos, Y., Souliotis, M., Battisti, R., Corrado, A. (2005). Energy, Cost and LCA Results of PV and Hybrid PV/T Solar Systems. Progress in Photovoltaics: Research and Applications, 13, 235–250.

As can be seen from the above, far from generalizing "anecdotal worst cases of installations", our figure for modules + inverters + metallic infrastructure sits closer to the low estimates in years (that is, the high-estimate EROIs) than to the worst cases.
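Under the same constant-output, 25-year-lifetime convention Bankier and Gale use, the EPBT figures tabulated above convert directly into EROIs. The three entries below are copied from the table as examples:

```python
# Convert published EPBT ranges into EROI ranges: EROI = lifetime / EPBT.
# The 25-year lifetime matches the Bankier and Gale convention
# (EPBT 1 yr -> 25:1, EPBT 25 yr -> 1:1); the EPBT pairs are from the table.

LIFETIME = 25  # years, assumed constant output

epbt_examples = {                        # (low, high) EPBT in years
    "Alsema (2000)": (2.5, 3.1),
    "Knapp & Jester (2001)": (2.2, 12.1),
    "Peharz & Dimroth (2005)": (0.7, 1.3),
}

for author, (low, high) in epbt_examples.items():
    print(f"{author}: EROI {LIFETIME / high:.1f}:1 to {LIFETIME / low:.1f}:1")
```

Even within a single study the implied EROI can span a factor of five or more, which is the point: an 8:1 module-level figure is well inside the published range, not a cherry-picked worst case.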

Now, for the record, it would also be very convenient for the prolific authors on solar PV EROI to revisit the figures they published several years ago and double-check how those systems are actually performing (energy return statistics). We are eager to learn how things have gone with, for instance, the promising hybrid PV/thermal analyses, or better still, the results, years after publication, for Fresnel lenses combined with high-efficiency cells in concentration mode.

I recall specifically the V.M. Fthenakis and H.C. Kim paper, "Life Cycle Assessment of High-Concentration PV Systems", which analyzed the estimated EPBT of the Amonix 7700 high-concentration PV system with Fresnel lenses operating in Phoenix, AZ, and found an EPBT of 0.9 years. I wonder whether they could still support this analysis just five years after their study, and how this promising system has contributed to grid parity worldwide, considering it supposedly recovered the energy spent on it in less than one year.

Scientific authors should be more careful when accusing others of using 'anecdotal worst cases', especially for the expected energy return over a lifetime, when they themselves are probably using 'anecdotal best cases' instead of basing their research on three years of real-life, officially published production statistics for 4 GW of installed parks.

Talking about lifetime (which directly determines the energy return), it is very interesting to see how some papers have changed the estimated lifetime of solar PV systems from 25 to 30 years. It is curious that virtually all manufacturers give a maximum of 25 years of power guarantee for their modules (with the corresponding degradation over the years) and 5 years of materials guarantee (the latter prevailing over the former in case of failure), and yet we find scientists happily granting 30 years in their EROI studies. In my opinion this is a clear attempt to produce higher EROIs and lower EPBTs with no rational grounds.
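A minimal sketch of how much the lifetime assumption alone moves the result. The 0.7%/year degradation rate below is my illustrative assumption, not a figure from any cited study:

```python
# Effect of the assumed lifetime on cumulative energy output (and hence
# EROI, which scales with it). Degradation of 0.7%/yr is an assumption.

def lifetime_output(years, annual_degradation=0.007, first_year_output=1.0):
    """Total output over the assumed life, with compounding degradation."""
    return sum(first_year_output * (1 - annual_degradation) ** t
               for t in range(years))

e25 = lifetime_output(25)
e30 = lifetime_output(30)
print(f"30-year assumption yields {e30 / e25:.1%} of the 25-year total, "
      f"inflating EROI by the same factor")
```

Stretching the assumed life from 25 to 30 years inflates the computed EROI by roughly 18% with this degradation rate (a flat 20% with none), with no change whatsoever in the physical plant.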

The fact that the Carbajales-Dale et al. paper ends by recommending "that the conventions outlined by the IEA PV Systems Program Task 12 (Environmental, Health and Safety) be followed in conducting EROI calculations", when the IEA methodology has easily swallowed the 30-year lifetime for solar PV modules, gives us a very clear clue about what is going on with these recommendations.

In our discussions on this topic a couple of years ago, an editor came to say that if our factors were really sine qua non (indispensable) for the system to be up and running and the IEA methodology did not consider them, then it was time to change the IEA methodology.

I would just recommend that the IEA tour Spain (which is not the worst country for solar PV; on the contrary, it is one of the most efficient in MWh produced per MW installed). The IEA should come and check, and double-check, how many solar PV plants have not lasted, for a variety of reasons, the 25-year lifetime of the manufacturers or the 30 years of the IEA, backed by some scientists. In 2015 alone, about 40 MW were dismantled, with lifetimes averaging about 5 years. The resulting trials are the delight of reputable and expensive law firms, which earn a great deal of money preparing lawsuits against promoters, manufacturers, banks and the government. That is real life, far beyond academic circles. I am currently following the lawsuit of a promoter who has had to buy again 2/7 of the modules he originally purchased for his 500 kW plant, because the manufacturer (not Chinese) he bought from 6 years ago has disappeared, as have most European manufacturers in the last 5 years. One wonders what a technical power guarantee is worth if the lifetime of the manufacturer turns out to be much shorter than the guaranteed lifetime of the modules. This is, of course, 'anecdotal', although not to the affected promoters.


A couple of years after publication, I have much more data reaffirming to me that our 2.4:1 EROI was really conservative, for many different reasons and factors. But I will not publish more data. I will go back now to my organic garden and wish you all the best for what I suspect may be a grim future.

Antonio Gramsci: “I’m a pessimist because of intelligence, but an optimist because of will.”


2015-4-11 Ted Trainer responds to criticism of Prieto & Hall

Trainer is the author of “Renewable Energy Cannot Sustain a Consumer Society” 

It is very disappointing that so much confusion and acrimony surround the crucial issue of boundaries, and that we seem not to be moving to a resolution as quickly as we should. There are of course big interests at stake, with the conventional high-EROI assumption suiting the industry and the theorists who put out such claims. At the very least, Prieto and Hall should be commended for getting the whole messy issue of boundaries, components, and appropriate energy cost assumptions for the various components on the agenda. Sadly, the disputation over this issue illustrates that scientists are not immune from prejudiced and nasty behavior (a considerable amount of which my efforts to analyze renewables have evoked). When large-scale research funding is at stake, there can be strong incentive for competitors to reinforce perspectives that suit them.

As I see it, the goal should not be a single EROI figure for PV, because much depends on the situation and conditions. We need values for modules operating at the average site in Spain with its level of radiation and losses, and we need figures for the various components in the system: energy used to produce modules in the factory, energy used to produce the factory itself, energy lost in inversion, in typical inefficiency due to dust, poor alignment and the like, and in transmission, energy embodied in inverter replacement, energy used to get workers to the factory, energy used for operations and maintenance at the solar farm, and energy "retrieved" when the modules are recycled. A fairly thorough provision of these elements would enable anyone to work out the EROI for a particular plant at a particular location, and most importantly the EROI under a given set of boundary assumptions. Graham Palmer has just begun a PhD at Melbourne University intended to sort all this out.

I strongly object to Raugei's comments to you regarding peer review. I have little respect for the entire peer-review edifice, due to my unsatisfactory experience in trying to get critical analyses published. Very often I have found the comments of reviewers to range from nit-picky imposition of the way they would have expressed things or gone about the job, through reasoning that I see as at least challengeable and at times dead wrong, to rejection on utterly idiotic grounds, such as being told that my recent 20-page detailed critique of the 2014 IPCC report on renewables was "not scientific", after waiting seven months for review. (That phrase constituted the full case given for rejection.) On another occasion, where it took over a year to get through the difficulties, I was presented with a seven-page essay disagreeing with elements of my case. If that reviewer wanted to express a different view he should have done it somewhere else, not tried to insist that I say what he would have said. In another case, a 50-word review from probably the most prestigious individual in the field said the paper was good, but the paper was rejected because a second, even shorter review was unfavourable. The reasons were so unintelligible that I had to ask what they meant. It eventuated that the editor didn't think it was the kind of paper his journal published, after I had waited seven months.

I see the process as far too prone to the whims, prejudices and in fact arrogance of reviewers and editors. They should get out of the way and let people say what they have found or think, and focus only on things like pointing out mistakes or pointing to overlooked evidence or assumptions, or logical errors. Their role should be to help get ideas and analyses out to others, and to block only as a last resort. Too often I have found that reviewers think their role is to make authors conform to their preferred style and they assume the right to condemn work that doesn’t proceed as they would have. I have written reviews in which I say I think the argument is wrong and the procedure not satisfactory but I think the paper should be published, because I could be mistaken and the paper does present a case that it is important for us to think about.

Ultimately what matters is not whether some guru approves of your analysis, what matters is whether the case is sound/convincing/persuasive/well supported, and that judgment should be up to readers, and the quality of the work should be established over time as others in the field comment on it. My main concern here is what must be the large amount of time and good work that doesn’t get published because of the whims of some guru. I would assume that most of us have had papers rejected by one set of reviewers but regarded highly by those from another journal.

So I see any attempt to block publication of controversial, and even flimsy/challengeable cases, on grounds to do with “peer review” as very annoying. I have no interest in whether or not it was peer reviewed; what matters is whether or not the case it argues is sound, or valuable, or ought to be heard. (Theses that are dead wrong can turn out to be valuable contributions, by helping subsequent discussion to clarify an issue.) Whether or not it was peer reviewed has nothing to do with whether or not it is correct, or a valuable contribution, and, Alice, should certainly not be regarded as “a valid criticism”.

In my view Raugei raises some important problems, such as the effect of the Spanish subsidy system on the Prieto and Hall conclusions, but the appropriate response is to sort these out now, not to treat them as reasons why the book should be rejected.

The most important issue he raises is the claim that the energy input to PV production should be reduced to one-third, on the grounds that it is electricity and PV produces electricity. As I see it, this simply depends on whether the electricity used to produce the modules comes from PV (or wind or CSP) generating systems, and at present it doesn't. In a world where all electricity came from PV farms it would make sense to put the electric value of the input into the denominator of an EROI, but in the present world the energy going into production is (mostly) coal.
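Trainer's point can be put in numbers. The sketch below uses round illustrative figures of mine (60 MWh lifetime output, 8.33 MWh of manufacturing electricity), chosen only to mirror the factor-of-3 dispute; they are not from any cited study:

```python
# The same plant gets two different EROIs depending on how its input
# electricity is charged. Figures are illustrative assumptions.

OUTPUT = 60.0       # MWh of electricity delivered over the life (assumed)
INPUT_EL = 8.33     # MWh of electricity consumed in manufacture (assumed)
THERMAL_EFF = 1 / 3 # primary energy -> electricity in conventional plants

# Today's case: the factory runs on a mostly fossil grid, so each MWh of
# input electricity really costs ~3 MWh of primary fuel.
eroi_fossil_grid = OUTPUT / (INPUT_EL / THERMAL_EFF)

# Hypothetical all-PV grid: the input is just the electricity itself.
eroi_pv_grid = OUTPUT / INPUT_EL

print(f"fossil-powered factory: {eroi_fossil_grid:.1f}:1; "
      f"PV-powered factory: {eroi_pv_grid:.1f}:1")
```

The one-third reduction of the denominator is thus a claim about the future grid, not a description of how today's modules are actually made.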





Agricultural Transportation and Energy Issues. Senate hearing 2005.

[ After reading this hearing, I can’t help but wonder if the main reason for ethanol is to subsidize farmers. It certainly does nothing for the energy crisis, since the net energy is probably negative and at best break-even. And since the heavy-duty transportation involved in all aspects of agriculture mainly burns diesel, ethanol is of no help: diesel engines can’t burn ethanol or diesohol, and most engine warranties allow at most 5% biodiesel to be blended into petroleum diesel.

Alice Friedemann, author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer]

Senate 109–510. November 9, 2005. Agricultural Transportation and Energy Issues. U.S. Senate. 123 pages. 



Our transportation system is the lifeblood of agriculture.

U.S. agriculture is highly dependent upon the effectiveness of our integrated agricultural transportation system, and poor transportation directly adds costs to farmers’ bottom lines. Truck, rail, and river must be able to work together, to compete with each other, and to keep the price of transportation down.

The transportation and energy challenges we face this year hit our farmers particularly hard.  Both transportation and energy are basic inputs into almost every farm and business, so high transportation and energy costs go to the heart of our competitiveness as a nation.

Congress recently passed a Highway Bill to address many of our surface transportation needs, but we have yet to pass the Water Resources Development Act, known as ‘‘WRDA,’’ to authorize crucial funding for our water infrastructure. Improving our river navigation will not only lower the cost of doing business for producers, but also mean less highway congestion.


Hurricane Katrina, which devastated the agricultural transportation system in and around the Mississippi Gulf region, certainly highlighted the importance of river transportation to farmers. Overall, this area is responsible for about 60 to 70% of U.S. grain exports.


It is estimated that one in four acres of U.S. production is destined for export; 60% of that goes through New Orleans to the Gulf.


Rail and truck transport have been critical for agriculture in this time of interrupted river traffic; but clearly, agriculture is heavily dependent on our rivers. And we cannot expect to compete with the rest of the world using locks over 70 years old, as we have on the Upper Mississippi River system.


But all of us here know transportation costs can’t be just boiled down to infrastructure. The price paid for energy has an enormous impact. And beyond transportation, energy prices are taking a severe toll on our farmers.


On average, energy accounts for about 13% of a farmer’s expenses. The increased costs of fertilizer caused by high natural gas prices, combined with extraordinarily high diesel prices and high transportation costs, have been a true challenge for producers today, who can’t raise their prices and are forced to absorb these very severe increases.


Clearly, our energy problems go far beyond Hurricane Katrina. I want to share a few numbers with you: 37, 53, 60, 74. These four numbers represent the percentage of our petroleum supply purchased overseas in 1980, in 2002, today, and as projected for 2025: from 37 to 74. We were addicted to foreign oil in 1980, and down the road our dosage will have doubled.


I am serious when I say that this Nation’s energy dependence is the greatest threat to our economy, our security, and our freedom that this Nation faces.



Fletcher R. Hall, Executive Director. The agricultural and food transporters conference of the American Trucking Associations.


According to U.S. government estimates, the transportation of agricultural commodities and products accounts for a significant portion of all U.S. freight traffic. In fact, defining agricultural movement to include movements of farm inputs, raw agricultural commodities, and processed agricultural commodities, agriculture is a primary user of transportation services in the U.S., at over 23% of total tonnage and over 31% of total ton-miles moved every year.


The U.S. agricultural sector depends extensively upon truck transportation for a number of reasons. Agricultural production typically occurs in areas substantially removed from the final markets of agricultural products.  Production and processing are generally dispersed over wide areas or regions. Agricultural commodities and products also tend to require a wide range of transportation services which are significantly impacted by energy issues and energy prices.  Agricultural commodities and products such as grains, are bulky and of low value. Others, such as fresh fruits and vegetables, and meats are highly perishable and of high value. Still others, such as livestock, require specialized handling and equipment. Modern commercial agriculture is also input-intensive, using a broad range of products from fertilizers to feed additives. These inputs generate demands for truck transportation, and their costs are affected by the price and availability of various forms of energy.


The trucking industry is essential to agriculture as trucks are now the primary transport mode for the movement of all major agricultural commodities.

  • Trucks are the leading transport mode for the movement of fresh fruits and vegetables in the U.S., with a market share of over 90%
  • 95% of livestock transportation is handled by truck, and fresh dairy products are primarily handled by trucks as well
  • According to the USDA’s latest grain transportation modal share analysis (October 2004), trucks transported 68.4% of all domestic grain movements in the U.S. during the year 2000. Rail and barge shares decreased, while truck shares increased through 2000, making trucks the dominant mode for grain transport.
  • Trucks are the largest carrier of produce to ocean ports for export


Rising fuel costs have the potential to create a ripple effect through the economy whereby consumers are likely to see higher costs for whatever they are purchasing whether grown on a farm or delivered by truck. This is significant because 80% of communities in the U.S. get their goods solely by truck.


Higher diesel prices will raise the cost of harvesting and post-harvesting treatment e.g., drying, moving and storing of crops in and from the field. Higher energy costs in agricultural transportation will cause food prices to rise, as much as 3.5% this year (versus 2.5% per year in the preceding decade).


SENATOR KEN SALAZAR, COLORADO.    Here is what I am hearing from my state during harvest. Agriculture producers are some of the largest fuel consumers in the U.S., and producers are facing enormous fuel costs. For example, in Grand Junction, Colorado, diesel prices today are still over $3 a gallon. I have heard from a farmer in Brandon, Colorado, who has a dry land wheat farm of about 5,000 acres. He has seen a 217% increase in diesel costs, and about a 71% increase in gasoline costs since the summer of 2004. This operation will use about 200 to 250 gallons of diesel per day during the heavy farming season. If fuel prices do not moderate, this farmer will realize a doubling of fuel costs for 2006; equating to an additional $16,000 annually, just for his fuel expenses on his farm. I heard from another farmer in northeastern Colorado who, in order to cover the increasing price of fuel, has applied for additional loans from his local bank; only to be turned down because he was already over-extended on his existing loans. These anecdotes illustrate a problem which goes far beyond the borders of Colorado. After 5 years of weather-related disasters, such as droughts, hurricanes, or fires, these higher-input costs are having a severe impact not only on producers’ ability to harvest this year, but also in their ability to secure financing to operate for the next year. This is a crisis that is undermining the stability of farming operations across our country. This is a crisis and emergency that we must address.


I believe they need economic loss assistance, which will help offset the staggering increases in fuel and fertilizer costs.  Our producers are in a downward spiral, and we must help end that downward spiral. Each day, this energy crisis continues to drive farmers and ranchers into deeper debt, putting the life of our rural communities at risk.


SENATOR BLANCHE LINCOLN, ARKANSAS. The severe drought conditions which the country has seen, particularly in our region, combined with the high fuel costs, have forced our farmers to experience extremely high operating costs. We are hearing from our bankers, as well, our financial institutions. I have got three counties of banks that are telling me that they are going to have a record number of farm operations that will not be able to pay out or cash-flow because of the record amounts of resource they have had to put into producing a crop, and then to find the natural disasters that have wreaked havoc on them at harvest time. So it is a time when we have to remember what it is our producers do. And they do it very quietly. Very quietly, they produce the safest, most abundant and affordable food supply in the world. They make sure that, per capita, we pay less for our food supply than any other developed nation in the world. Our farmers are devastated, in terms of these fuel costs. And it is not just in terms of the diesel they put in their tractors. It is also the feedstock for their fertilizer. They are paying record prices for fertilizer, the feedstock, in the natural gas that is causing that to happen. And the projection is that in the next several years, we will no longer have a domestic production of fertilizer. So once again, we are going to set another variable onto our producers of not knowing what and when they can depend on the products that they need in order to produce this safe and abundant food supply.


Those small, rural county roads oftentimes are not able to transport the large cotton modules and the other crops that we grow. So we have got a lot of different issues there. But without a doubt, the fuel costs are the greatest burden that our farmers are carrying right now.


I would like to also echo Senator Salazar, in terms of relieving our dependence on foreign oil.


One consistent thing I hear from our ag producers in the South, it is, ‘‘Please, please, allow us to be a part of providing the kind of fuels, the renewable fuels, that we need in this country, to lessen our dependence on foreign oil and give us yet one more secondary market where we can market our products and our crops.’’

SENATOR DEBBIE STABENOW, MICHIGAN.  One of the reasons I was a strong supporter of the energy provision of the 2002 Farm Bill was because of the important ways in which we in agriculture can help to solve the problem of our over-dependence on foreign oil.


KEITH COLLINS, PH.D., CHIEF ECONOMIST, U.S. DEPARTMENT OF AGRICULTURE.  The hurricanes also worsened the already tight energy situation. Farmers paid 43% more for diesel fuel in October 2005 than a year earlier, while prices paid for fertilizer by farmers were up 13% this October compared with last October.




Hurricanes Katrina and Rita wrought incredible devastation on the central Gulf Coast; most importantly in terms of human suffering, but also in energy impacts that have spread well beyond the stricken area. At its peak impact, Katrina shut down over 25% of U.S. crude oil production, 20% of our crude imports, 10% of our domestic refining, and over 15% of U.S. natural gas production. Rita compounded those impacts. For example, nearly 30% of total U.S. refining was shut in ahead of Rita, and outages continued at nearly 20% of refining capacity for some weeks thereafter.


The farm sector, as many of you have mentioned in your opening statements, is a significant consumer of energy, particularly diesel fuel, propane, and electricity. In addition to direct farm use of energy, agriculture is indirectly affected by energy requirements in the fertilizer industry, specifically in nitrogenous fertilizers.


Even before Hurricane Katrina struck, crude oil and petroleum prices were setting records. Oil prices worldwide have been rising steadily since 2002, due in large part to growth in global demand which has used up much of the world’s surplus production capacity. Refineries have been running at increasingly high levels of utilization in many parts of the world, including the United States.


Using previous information about energy use on farms and in closely related sectors, every additional dime added to the price of gasoline and diesel oil per gallon, sustained over a year, costs U.S. agriculture almost $400 million annually. Every dollar added to the price per 1,000 cubic feet of natural gas costs agriculture over $200 million annually in direct expense, and costs the fertilizer industry almost $500 million annually. Every dime increase in the price of propane costs agriculture over $200 million per year. Every penny increase in the price per kilowatt-hour of purchased electricity costs agriculture about $500 million annually in direct expense, and also adds about $35 million to the costs of the nitrogenous fertilizer industry.
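Collins’s rules of thumb imply annual consumption levels for the farm sector. As a quick back-of-the-envelope check (my own arithmetic, not part of the testimony), dividing each annual cost figure by its price increment recovers the implied quantities:

```python
# Back-of-the-envelope check of the cost sensitivities quoted above.
# Each "$X per unit price increase costs $Y/yr" figure implies an
# annual consumption level: quantity = cost / price increment.

fuel_gallons = 400e6 / 0.10        # $0.10/gal on gasoline+diesel -> $400M/yr
gas_mcf = 200e6 / 1.00             # $1.00/Mcf on natural gas     -> $200M/yr direct
propane_gallons = 200e6 / 0.10     # $0.10/gal on propane         -> $200M/yr
electricity_kwh = 500e6 / 0.01     # $0.01/kWh on electricity     -> $500M/yr

print(f"fuel: {fuel_gallons/1e9:.1f} B gal, gas: {gas_mcf/1e6:.0f} M Mcf, "
      f"propane: {propane_gallons/1e9:.1f} B gal, "
      f"electricity: {electricity_kwh/1e9:.0f} B kWh")
```

So the testimony implies farm use of roughly 4 billion gallons of gasoline and diesel, 2 billion gallons of propane, and 50 billion kWh of electricity per year.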


Mr. COLLINS.  Early in my career, we used to always say that truck transportation was 3 times as expensive as rail, and rail was 3 times as expensive as barge. So if rail or barge wasn’t available, you did truck. If it was between rail and barge, you did barge. But that is not so true anymore. Because of the high energy prices, because of the demand, because of an economy that grew at 3.8% last quarter, there has just been tremendous demand for all modes of transportation.


As far as farmers that would be exiting agriculture or unable to finance their operations,  I can’t answer that question. There are too many factors that determine whether someone is going to go out of business or not. You can’t take a change in energy costs in 1 year and translate that into somebody leaving the business. American agriculture is incredibly diverse. People have tremendous sources of income outside of farming. Farm income accounts for 13 percent of total household income of all 2.1 million farms, so they have other sources of income to draw on if they wanted to stay in business.


SENATOR TOM HARKIN, IOWA.  Our inland waterways transport 16% of our goods, at 2% of the cost of fuel usage. So it is very efficient, very effective.


Senator TALENT.  I think the ability of our producers to continue to produce the safest and most abundant and highest quality food supply in the world is not just an economic issue. It is a national security issue. I don’t want to be in a position where we are importing food the way we import oil. And part of that means, when there is some extraordinary hit on the farm sector, we should ameliorate a little bit some of the costs that they have had to take because of that. I don’t view that from an ideological perspective. For me, that is just a question of trying to protect the food security of the people of the country. To say it is unprecedented is factually incorrect.


Mr. COLLINS.   I think providing a payment for energy price increases that would affect farmers like they affect every other business in America, like every other household in America—would be unprecedented. I think that would be unprecedented. Certainly, in the disasters that you spoke about, we did provide assistance. And those were focused on agriculture and on crop losses; and they were special, localized, specific disasters. We face a $5 billion increase in energy costs in agriculture this year. We are predicting next year we will face a $2 billion increase in interest costs. Interest is an input just like energy is an input. So how do you distinguish covering interest rate increases from energy increases, when this would be a national impact that affects everybody; not just unique to agriculture?


DANIEL T. KELLEY, National Council of Farmer Cooperatives, Normal, Illinois, on behalf of the AG Energy Alliance 


U.S. agriculture and related agribusinesses use natural gas for irrigation, crop drying, food processing, crop protection, and nitrogen fertilizer production.


Since 2002, 36% of the U.S. nitrogen fertilizer industry, which uses natural gas as a raw material, has been either shut down or mothballed. According to the U.S. Department of Agriculture, farmers’ fuel, oil, and electricity expenses increased from $8.6 billion to $11.5 billion between 1999 and 2005. Over that same period, fertilizer expenditures went from $9.9 billion to $11.5 billion. Combined, these expenditure increases represent a $4.5 billion decline in U.S. farmers’ bottom line over that 6-year period.

The U.S. chemical industry has been especially hard hit by high energy prices, since natural gas is needed as a feedstock. Its natural gas costs have increased by $10 billion since 2003, and $40 billion of business has been lost to overseas competitors, who pay much less for natural gas. Chemical companies closed 70 facilities in the United States in 2004 alone, and at least 40 more have been tagged for shutdown. Of the 120 chemical plants being built around the world with price tags of $1 billion or more, only one is being built in the U.S.

Our Nation’s current natural gas crisis has two solutions: first, to increase supply; and second, to reduce demand. The challenge is to find ways to balance our Nation’s dwindling available supply of, and rising demand for, natural gas.
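The expenditure arithmetic in this testimony can be checked directly (figures as quoted, in billions of dollars; the check is mine, not part of the testimony):

```python
# Verify the combined farm expenditure increase cited in the testimony
# (all figures in USD billions, 1999 vs. 2005).
fuel_1999, fuel_2005 = 8.6, 11.5   # fuel, oil, and electricity expenses
fert_1999, fert_2005 = 9.9, 11.5   # fertilizer expenditures

combined_increase = (fuel_2005 - fuel_1999) + (fert_2005 - fert_1999)
print(f"combined increase: ${combined_increase:.1f} billion")
```

The $2.9 billion fuel increase plus the $1.6 billion fertilizer increase does indeed sum to the $4.5 billion cited.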


Congress can adopt measures to ensure potential Federal lands and Outer Continental Shelf areas are open for leasing; that leases and permits are issued promptly; that the appropriate tax and royalty policies are in place; and that the necessary pipeline infrastructure is available to bring supplies to market; while leaving behind as small an environmental impact as possible. The agriculture community believes that it is strategically critical for Congress to remove these production barriers now, to provide new sources of natural gas and oil supplies.




The transportation system in the United States has for many decades been one of the true competitive strengths of U.S. agriculture. For a number of reasons, this asset has turned from a potential strength to a potential weakness. Higher energy costs, congestion on railroads and highways, lack of investment in modernizing and maintaining the inland waterway system, as well as the recent storm-related problems, are combining to sharply escalate the costs of moving agricultural products to market.


The U.S. transportation system serving agriculture, including barges, railroads, and trucks, was running at virtually full capacity at the time Katrina struck the United States. The loss in transport capacity from that storm proved how vulnerable the U.S. is to such disruptions.


Barge transportation is 2.5 times as fuel efficient as rail, and almost nine times as efficient as trucking. So as energy is likely to remain expensive, and energy conservation is a national goal, the time is nigh to begin seriously investing in modernizing the commercial navigation system.
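Taken together, the two quoted ratios also imply a rail-versus-truck comparison, though the testimony does not state one. A quick derivation (my arithmetic):

```python
# Implied fuel-efficiency ratios from the quoted figures.
barge_vs_rail = 2.5    # barge is 2.5x as fuel efficient as rail
barge_vs_truck = 9.0   # barge is almost 9x as fuel efficient as truck

# If barge/rail = 2.5 and barge/truck = 9, then rail/truck = 9 / 2.5.
rail_vs_truck = barge_vs_truck / barge_vs_rail
print(f"implied: rail is ~{rail_vs_truck:.1f}x as fuel efficient as truck")
```

That implied ~3.6x rail-over-truck advantage is roughly consistent with the old "truck is 3 times as expensive as rail" rule of thumb Collins mentions earlier in the hearing.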





America, I would say, is in an energy straitjacket right now.  It will take several years, if not longer, to make significant expansion in energy resources. However, there is one resource that is available to us today, and that is energy efficiency and conservation. This is a resource that we can bring to the market both quickly and cost effectively. And we have seen several examples of those in recent years. In California and New York in 2001, energy efficiency and conservation played a major role in reducing demand and rebalancing energy markets; which avoided major economic losses.


RYAN NEIBUR, ROCKY MOUNTAIN FARMERS UNION, BURLINGTON, COLORADO.   The price of natural gas has increased 215% in the last 3 years. This increase has raised my cost of irrigation per crop year from $50 an acre in 2003, to $158 expected in 2006. At this rate, farmers will not be able to afford irrigation, and will be forced to dry-land farm in an area that has been in a drought for 5 years. In my situation, dry-land farming irrigated ground is not an option with my bank.


Natural gas is the main ingredient used to make anhydrous ammonia and liquid nitrogen. In 2003, we paid $295 a ton, compared to $495 a ton in 2005. In the production of our corn crop, this price increase translates into a cost-per-acre change from $37 per acre in 2003 to $62 an acre in 2005; almost doubling the cost.

In December 2003, I paid $1.10 a gallon for farm fuel. In October 2005, I paid $2.85 a gallon for the same farm fuel; an increase of over 155 percent. On my farm, fuel expense has gone from $60,700 in 2004 to over $135,000 in 2005. On a per-acre basis, it is extremely scary: fuel cost for harvesting corn was $9.80 per acre in 2004, and over $22 per acre in 2005. Remember, the price of corn has not increased; nor has the yield. Farmers and ranchers are in a situation that does not allow us to pass on these additional costs as a surcharge, which other industries, such as truck lines and airlines, are able to do.
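Neibur’s percentages can be reproduced from his quoted prices (my arithmetic, not part of the testimony):

```python
# Reproduce the percentage increases from the quoted farm figures.
fuel_2003, fuel_2005 = 1.10, 2.85      # $/gal, Dec 2003 vs Oct 2005
fuel_pct = (fuel_2005 / fuel_2003 - 1) * 100
print(f"farm fuel increase: {fuel_pct:.0f}%")   # ~159%, i.e. "over 155 percent"

# The per-acre harvest fuel cost rose even faster than the price alone:
harvest_2004, harvest_2005 = 9.80, 22.0         # $/acre for corn harvest
print(f"harvest fuel cost ratio: {harvest_2005 / harvest_2004:.2f}x")
```

The ~2.2x jump in per-acre harvest cost is slightly less than the ~2.6x fuel price ratio, consistent with 2004 harvest fuel having been bought at prices somewhat above the December 2003 low.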

As a farmer, I have no means by which to pass on the higher costs of energy. And it seems that Congress should consider approving some type of mechanism to help farmers and ranchers offset these higher costs.

NFU has been a longtime advocate for renewable fuel standards and renewable bio-based fuels. And we believe that more efforts need to be made to produce fuel and energy from our farms.

[ Scorecard: energy dependence 3 times ]





Doomsday: Will peak phosphate get us before global warming?

Price, Ed.  July 22, 2013. Doomsday: Will Peak Phosphate Get us Before Global Warming?

Although climate change catches the headlines, it is not the only doomsday scenario out there. A smaller but no less fervent band of worriers think that peak phosphate—a catastrophic decline in output of an essential fertilizer—will get us first.

One of the worriers is Jeremy Grantham of the global investment management firm GMO. Grantham foresees a coming crash of the earth’s population from a projected 10 billion to no more than 1.5 billion. He thinks the rest of humanity will starve to death because we are running out of phosphate fertilizer. This post on Business Insider from late last year provides an array of alarming charts to back up his warning.

Foreign Policy agrees that phosphate shortages are a potential threat. “If we fail to meet this challenge,” write contributors James Elser and Stuart White, “humanity faces a Malthusian trap of widespread famine on a scale that we have not yet experienced. The geopolitical impacts of such disruptions will be severe, as an increasing number of states fail to provide their citizens with a sufficient food supply.”

What is going on here? Is this really “the biggest problem we’ve never heard of,” as Elser puts it? Or are phosphate shortages something that global markets can cope with? Let’s take a closer look.

Why we need phosphates and why we are in trouble if they run out

The element phosphorus is as essential to life as carbon or oxygen. It forms part of the structure of cell walls and DNA, without which no plant or animal can exist. Phosphates are phosphorus in chemical forms that are available to plants. Some phosphates occur naturally in the soil as the result of weathering of rocks, but since the dawn of agriculture, farmers have added phosphate fertilizers to increase crop production. Manure, the traditional source, still accounts for about 15 percent of all phosphates used in agriculture, but since the mid-twentieth century, most such fertilizer has come from phosphate rock.

What we appear to be running out of are deposits of phosphate rock that can be mined at reasonable cost with today’s technology. Up to now, the United States has been a big producer, but its reserves are declining. China has a lot, but its domestic use is soaring and it is not a big exporter. North Africa has the biggest reserves, but some of them are in politically unstable regions like the Western Sahara.

The following widely reproduced diagram from a 2009 paper in Global Environmental Change depicts the peak phosphorus hypothesis in the form of a “Hubbert curve” that shows production declining at an accelerating rate after hitting a maximum around 2035. After that, say peak phosphate proponents, we are in big trouble.

Peak Phosphorus
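The “Hubbert curve” in that diagram is the derivative of a logistic depletion model: production rises, peaks when roughly half the recoverable resource is gone, then declines symmetrically. A minimal sketch (the 2035 peak year is from the paper; the other parameters are illustrative placeholders):

```python
import math

def hubbert_production(year, peak_year=2035, urr=1.0, width=25.0):
    """Annual production under a Hubbert (logistic-derivative) model.

    urr   -- ultimately recoverable resource (arbitrary units)
    width -- controls how sharply production rises and falls
    """
    x = math.exp(-(year - peak_year) / width)
    return (urr / width) * x / (1.0 + x) ** 2

# Production is maximal at the peak year and symmetric around it:
p_peak = hubbert_production(2035)
print(hubbert_production(2020) < p_peak)                                  # True
print(math.isclose(hubbert_production(2025), hubbert_production(2045)))   # True
```

The critique that follows amounts to doubting the model's shape: price responses on both the demand and supply sides tend to flatten the downslope into a plateau rather than a symmetric decline.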

Can the market save us?

Yes, a shortage of phosphates could spell trouble, but don’t forget about markets. Adjusting to shortages is just what markets are for. As economists see it, depleting a resource like phosphate rock is supposed to cause its price to rise. As the price rises, two things are supposed to happen. First, users are supposed to figure out ways to get by with less, and second, producers are supposed to find new sources of supply. Will this happen in the case of phosphates, or do they have unique properties that will prevent markets from working their magic?

Some think the latter. For example, the authors of the peak phosphorus diagram write that:

“a key difference between peak oil and peak phosphorus, is that oil can be replaced with other forms of energy once it becomes too scarce. But there is no substitute for phosphorus in food production. It cannot be produced or synthesized in a laboratory. Quite simply, without phosphorus, we cannot produce food.”

Fortunately, the biological impossibility of substituting some other element for phosphorus in food production is not enough to thwart the operation of supply and demand in the phosphate market. One sign that the market is working is that phosphate prices are already rising. As the following chart shows, the U.S. prices of two of the most commonly used phosphate fertilizers soared in the early 2000s. Along with the prices of many other commodities, they dropped back from their peaks after the global financial crisis, but they are heading up again as the economy recovers.

Phosphate Fertilizers

The price increases have already had an impact on phosphate use. As the next chart shows, despite rising farm output, the growth rate of phosphate fertilizer use has slowed over time. The question for the future is whether it is technically feasible to increase food output further while actually reducing phosphate use.

Phosphate Use

Experts appear to think the answer is yes. A report published in Environmental Research Letters estimates that improvements in farm management practices and reductions in consumer waste could cut the phosphates needed to produce the present U.S. farm output by half, even with today’s technologies. In the future, even greater reductions may be possible. According to Roberto Gaxiola of Arizona State University, generations of phosphate fertilizer use have reduced the efficiency of phosphorus uptake by domesticated crop plants. His experiments indicate that selective breeding and genetic engineering can produce plants that can flourish with much lower phosphorus use.

There are significant developments on the supply side, as well. Michael Mew of the Fertecon Research Center notes that producers are already learning how to upgrade lower quality phosphate rock reserves and are modifying processing plants to accept lower quality inputs. Also, he notes that increasing vertical integration of the industry has resulted in a reduction in transportation costs. Those cost savings slow the rate of price increase and give more time for supply and demand to adjust.

Furthermore, although it is true that we cannot create or synthesize phosphorus, we can recover usable phosphorus from waste streams, including urban sewage. As this source explains, existing systems already remove phosphorus from sewage in order to preserve water quality in the rivers and streams into which they discharge treated waste. Given the low prices for phosphate that prevailed until recently, it did not pay to recover that phosphorus in usable forms. Much of it has ended up as sludge buried in landfills. However, several methods could recover a high percentage of the phosphorus from wastewater. At some price, doing so will become a profitable alternative to producing phosphate fertilizers from increasingly low-grade phosphate rock. It may even become worthwhile to mine phosphate from sewage sludge buried in old landfills.

The bottom line

The problems posed by depletion of finite supplies of high-grade phosphate rock are not trivial. However, it is highly misleading to forecast a sharp peak of phosphate fertilizer production in the near future, let alone to predict that mass starvation and population collapse lie on the downslope of the curve. The fact that there are no substitutes for phosphorus when it comes to building DNA or cell walls does not mean that markets are incapable of managing increasing scarcity.

What does seem likely is a period of continued high or rising phosphate prices, which will trigger three reactions. First, higher prices will make it economical to process ever-lower grades of phosphate rock. Second, they will spur changes in farm management and development of improved crop varieties; these in turn will accelerate incipient trends toward increased food output per unit of phosphate input. Third, higher prices will provide incentives for improved recycling of phosphorus from waste streams.

Putting all this together, Michael Mew dismisses the peak phosphate hypothesis. Instead, he foresees a phosphate plateau as higher prices cause historical growth rates to level off gradually.

Phosphate Production

Such a phosphate plateau does not preclude the need for changes in how people live and eat. It could well mean the relative price of food will rise over time, something that could cause hardship for many of the world’s poor. Furthermore, the price of phosphorus-intensive meat is likely to rise relative to those of other foods, making it unrealistic for the world’s emergent middle classes ever to attain the kind of meat-rich diet to which residents of today’s wealthy countries have become accustomed—a diet that, in the age of obesity, is sometimes less of a blessing than a curse.

When all is said and done, a plateau is not a cliff. There is no phosphate doomsday on the horizon.


Sand mines used to frack oil & gas are destroying the best topsoil in the Midwest

Nancy C. Loeb. May 23, 2016. The Sand Mines That Ruin Farmland. New York Times.

Chicago — While the shale gas industry has been depressed in recent years by low oil and gas prices, analysts are predicting that it will soon rebound. Many of the environmental hazards of the gas extraction process, called hydraulic fracturing or fracking, are by now familiar: contaminated drinking water, oil spills and methane gas leaks, exploding rail cars and earthquakes.

A less well-known effect is the destruction of large areas of Midwestern farmland resulting from one of fracking’s key ingredients: sand.

Fracking involves pumping vast quantities of water and chemicals into rock formations under high pressure, but the mix injected into wells also includes huge amounts of “frac sand.” The sand is used to keep the fissures in the rock open — acting as what drilling engineers call a “proppant” — so that the locked-in oil and gas can escape.

Illinois, Wisconsin and Minnesota are home to some of the richest agricultural land anywhere in the world.

But this fertile, naturally irrigated farmland sits atop another resource that has become more highly prized: a deposit of fine silica sand known as St. Peter sandstone. This particular sand is valued by the fracking industry for its high silica content, round grains, uniform grain size and strength. These qualities enable the St. Peter sand to withstand the intensity of fracking, and improve the efficiency of drilling operations.

In the Upper Midwest, this sandstone deposit lies just below the surface. It runs wide but not deep. This makes the sand easy to reach, but it also means that to extract large quantities, mines have to be dug across hundreds of acres.

At the end of 2015, there were 129 industrial sand facilities — including mines, processing plants and rail heads — operating in Wisconsin, up from just five mines and five processing plants in 2010. At the center of Illinois’s sand rush, in LaSalle County, where I am counsel to a group of farmers that is challenging one mine’s location, The Chicago Tribune found that mining companies had acquired at least 3,100 acres of prime farmland from 2005 to 2014.

In the jargon of the fracking industry, the farmland above the sand is “overburden.” Instead of growing crops that feed people, it becomes berms, walls of subsoil and topsoil piled up to 30 feet high to hide the mines.

But the effects cannot be hidden indefinitely. These mines are destroying rural communities along with the farmland. Homesteads and small towns are being battered by mine blasting, hundreds of diesel trucks speed down rural roads dropping sand along the way, stadium lighting is so bright it blots out the night sky, and 24-hour operations go on within a few hundred feet of homes and farms. As a result, some farmers are selling and moving away, while for those determined to stay, life is changed forever.

Quality of life is not their only concern. Silica is a human carcinogen and also causes lung disease, including silicosis. Because of its dangers, silica is heavily regulated in the workplace, but there are generally no regulations for silica blown around from the sand-mining operations. These mines also use millions of gallons of groundwater every day. Local wells are running dry, and the long-term availability of water for homes and farms is threatened.

Because of the recent slowdown in the fracking industry, many of the sand mines stopped or slowed production, providing temporary respite to these rural communities. But with oil edging back up toward $50 a barrel, and projected to go higher, the Midwest farmlands face a renewed threat.

The sand mines do promise jobs. But it’s shortsighted to rely on a new fracking boom when we’ve already seen how vulnerable the business is to cyclical dips. America’s frac sand industry shrank to about $2 billion last year from $4.5 billion after the price of oil plummeted in 2014. As mines were mothballed or shuttered, hundreds of miners and truckers were laid off.

Even assuming a coming recovery, there may be as few as 20 to 30 jobs in a mine covering hundreds of acres — a mine that may operate for only 20 years. When the sand is exhausted, the mine is a hole in the ground and the jobs are gone. The farms that it replaced provided employment and sustenance for centuries.

There are alternatives to this despoliation. Not all frac sand is buried under prime farmland. Texas, Kansas, Arkansas and Oklahoma all have usable frac sand that is not “burdened” by rich prairie earth, and transportation costs there are often lower.

In the Midwest, we badly need more legal restraints on how frac sand mines operate. People must be protected from blowing silica. Sand piles should be covered and mines set a safe distance from homes, farms, schools and public spaces. At present, such regulations are often lax, and local residents have rarely won the needed protections from local or state governments eager to cash in on the boom.

Groundwater, too, needs stronger safeguards. A good example to follow is LaSalle County, which in 2013 placed a moratorium on new high-capacity wells needed for mining, pending the results of a United States Geological Survey study of the capacity of groundwater supplies to support new mines, funded in part by Northwestern, where I teach.

Unfettered frac sand mining is ruining the rural communities of the Midwest. All people are left with are thousands of acres of holes in the ground in place of what was once rich, productive farmland. That is too high a price to pay.


HSBC bank report predicts another financial crisis in 2018

[ Bill Hill of the Hill’s group predicted in June 2016 (at a forum): “We expect to have reached permanent depression by the end of 2017. The reduction will not hit all nations the same way. The richer Western countries will be able to afford fuels for longer than smaller, poorer countries. But how that will feed back into their general economies is yet unknown. It will definitely have a negative impact, and perhaps a gigantic one: the S&P collapsing, an explosion of corporate bankruptcies, and supply chains breaking. But all in all, we will just have to wait and see. It has been four years since petroleum hit its energy halfway point. We should not have to wait much longer.” ]

Is an Economic Oil Crash Around the Corner? By Nafeez Ahmed, January 2017, Alternet.

A report by HSBC shows that contrary to industry mythology, even amidst the glut of unconventional oil and gas, the vast bulk of the world’s oil production has already peaked and is now in decline, while European government scientists show that the value of energy produced by oil has declined by half within the first 15 years of the 21st century.

The upshot? Welcome to a new age of permanent economic recession driven by ongoing dependence on dirty, expensive, difficult oil—unless we choose a fundamentally different path.

Last September, a few outlets were reporting the counterintuitive findings of a new HSBC research report on global oil supply. Unfortunately, the true implications of the HSBC report were largely misunderstood.

New scientific research suggests that the world faces an imminent oil crunch, which will trigger another financial crisis.

The HSBC research note — prepared for clients of the global bank — found that contrary to concerns about too much oil supply and insufficient demand, the situation was opposite: global oil supply in coming years will be insufficient to sustain rising demand.

Yet the full, striking import of the report, concerning the world’s permanent entry into a new age of global oil decline, was never really explained. The report didn’t just go against the grain of the industry’s hype about “peak demand”; it vindicated what is routinely lambasted by the industry as a myth: peak oil, the concurrent peak and decline of global oil production.

The HSBC report you need to read

Insurge Intelligence obtained a copy of the report in December 2016, and for the first time we are exclusively publishing the entire report in the public interest. Read and/or download the full HSBC report.

Headquartered in London, HSBC is the world’s sixth largest bank, holding assets of $2.67 trillion. So when it produces a research report for its clients, we should listen. Among the report’s most shocking findings is that, “81% of the world’s total liquids production is already in decline.”

Between 2016 and 2020, non-OPEC production will be flat due to declines in conventional oil production, even though OPEC will continue to increase production modestly. This means that by 2017, deliverable spare capacity could be as little as 1% of global oil demand.

This heightens the risk of a major global oil supply shock around 2018 which could “significantly affect oil prices.”

The report asserts that peak demand (the idea that demand will stop growing leaving the world awash in too much supply), while certainly a relevant issue due to climate change agreements and disruptive trends in alternative technologies, is not the most imminent challenge:

“Even in a world of slower oil demand growth, we think the biggest long-term challenge is to offset declines in production from mature fields. The scale of this issue is such that in our view rather there could well be a global supply squeeze some time before we are realistically looking at global demand peaking.”

Under the current supply glut driven by rising unconventional production, falling oil prices have damaged industry profitability and led to dramatic cutbacks in new investments in production. This, HSBC says, will exacerbate the likelihood of a global oil supply crunch from 2018 onwards.

Four Saudi Arabias, anyone?

The HSBC report examines two main datasets from the International Energy Agency and the University of Uppsala’s Global Energy Systems Program in Sweden.

The latter has consistently advocated a global peak oil scenario for many years — the HSBC report confirms the accuracy of this scenario, and shows that the IEA’s data supports it.

The rate and nature of new oil discoveries have declined dramatically over the last few decades, reaching almost negligible levels on a global scale, the report finds. Compare this to the report’s warning that just to keep production flat against increasing decline rates, the world will need to add four Saudi Arabias’ worth of production by 2040. North American production, despite remaining the most promising in terms of potential, will simply not be able to fill this gap.

Business Insider, the Telegraph and other outlets that covered the report last year acknowledged the supply gap, but failed to properly clarify that HSBC’s devastating findings basically forecast the long-term scarcity of cheap oil due to global peak oil, from 2018 to 2040.

The report revises the way it approaches the concept of peak oil — rather than forecasting it as a single global event, the report uses a disaggregated approach focusing on specific regions and producers. Under this analysis, 81% of the world’s oil supply has peaked in production and so now “is post-peak.”

Using a more restrictive definition puts the quantity of global oil that has peaked at 64%. But either way, well over half the world’s global oil supply consists of mature and declining fields whose production is inexorably and irreversibly decreasing:

“If we assumed a decline rate of 5%pa [per year] on global post-peak supply of 74 mbd — which is by no means aggressive in our view — it would imply a fall in post-peak supply of c.38mbd by 2030 and c.52mbd out to 2040. In other words, the world would need to find over four times the size of Saudi Arabia just to keep supply flat, before demand growth is taken into account.”

What’s worse is that when demand growth is taken into account — and the report notes that even the most conservative projections forecast a rise in global oil demand by 2040 of more than 8 mbd above that of 2015 — then even more oil would be needed to fill the coming supply gap.

But with new discoveries at an all-time low and continuing to diminish, the implication is that oil can simply never fill this gap.

Technological innovation exacerbates the problem

Much trumpeted improvements in drilling rates and efficiency will not make things better, because they only accelerate production in the short term while depleting existing reserves more rapidly. In this case, the report concludes: “the decline-delaying techniques are only masking what could be significantly higher decline rates in the future.”

This does not mean that peak demand should be dismissed as a serious concern. As Michael Bradshaw, professor of global energy at Warwick University’s Sloan Business School, told me for my previous Vice article, any return to higher oil prices will have major economic consequences.

Price spikes, economic recession

Firstly, oil price spikes would have an immediate recessionary effect on the global economy, by amplifying inflation and leading to higher costs for social activity at all levels, driven by the higher underlying energy costs.

Secondly, even as spikes may temporarily return some oil companies to potential profitability, such higher oil prices will drive consumer incentives to transition to cheaper renewable energy technologies like solar and wind, which are already becoming cost-competitive with fossil fuels.

That means a global oil squeeze could end up having a dramatic impact on continued demand for oil, as twin crises of peak oil and peak demand end up intensifying and interacting in unfamiliar ways.

The demise of fossil fuels

But the HSBC report’s specific forecasts of global oil supply and demand are part of a wider story of global net energy decline.

A new scientific research paper authored by a team of European government scientists, published on Cornell University’s arXiv website in October 2016, warns that the global economy has entered a new era of slow and declining growth. This is because the value of energy that can be produced from the world’s fossil fuel resource base is declining inexorably.

The paper—currently under review with an academic journal—was authored by Francesco Meneguzzo, Rosaria Ciriminna, Lorenzo Albanese, and Mario Pagliaro, who collectively conduct research on climate change, energy, physics and materials science at the Italian National Research Council, Italy’s premier government agency for scientific research.

According to HSBC, oil prices are likely to rise and stabilize for some time around the $75 per barrel mark. But the Italian scientists find that this is still too high to avoid destabilizing recessionary effects on the economy.

The Italian study offers a new model combining “the competing dynamics of population and economic growth with oil supply and price,” with a view to evaluate the near-term consequences for global economic growth.

Data from the past 40 years shows that during economic recessions, the oil price tops $60 per barrel, but during economic growth remains below $40 a barrel. This means that prices above $60 will inevitably induce recession. Therefore, the scientists conclude that to avoid recession, “the oil price should not exceed a threshold located somewhat between $40/b [per barrel] and $50/b, or possibly even lower.”

More broadly, the scientists show that there is a direct correlation between global population growth, economic growth and total energy consumption. As the latter has steadily increased, it has literally fueled the growth of global wealth.

But even so, the paper finds that the world is experiencing: “declining average EROIs [Energy Return on Investment] for all fossil fuels; with the EROI of oil having likely halved in the short course of the first 15 years of the 21st century.”

EROI measures the energy value a resource delivers: the quantity of energy extracted divided by the quantity of energy put in to enable the extraction.
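In formula terms, a minimal illustration with made-up numbers (not figures from the Italian paper):

```python
def eroi(energy_out, energy_in):
    """Energy Return on Investment: energy delivered divided by the
    energy spent to extract it (a dimensionless ratio)."""
    return energy_out / energy_in

def net_share(r):
    """Fraction of gross energy output left for society after paying
    the extraction cost, for a given EROI r."""
    return 1.0 - 1.0 / r

# Illustrative only: halving EROI from 20 to 10 doubles the share of
# gross energy consumed by extraction itself (5% -> 10%).
```

This is why a halved oil EROI matters even while total liquids production rises: more of each barrel’s energy is eaten by extraction before it reaches the economy.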

This means that overall, despite total liquids production increasing, as the energy value it generates is declining, the overall costs of extraction are simultaneously increasing. This is acting as an increasing geophysical brake on global economic growth. And it means the more the economy remains dependent on fossil fuels, the more the economy is tied to the recessionary impact of global net energy decline: “The chance of future economic growth matching the current trajectory of the human population is inextricably bound to the wide and growing availability of highly concentrated energy sources enjoying broad applicability to energy end uses.”

The problem is that since the 1980s, the share of oil in the global energy mix has declined. To make up for this, economic growth has increasingly had to rely on clever financial instruments based on debt: in effect, the world is borrowing from the future to sustain our present consumption levels.

In an interview, lead author Francesco Meneguzzo explained:  “Global conventional oil peaked around the year 2005. All the following supply increase was due to unconventional oil exploitation and, since 2009, basically to U.S. shale (tight) oil, which in turn peaked around March, 2015.

“What looks like to be even more important is the fact that global oil supply has failed to keep the pace with the increase in total energy consumption, which ‘natural’ growth requires to be approximately proportional to population increase, leading to the decline of the oil share in the energy mix. While governments have struggled to fuel their economies with ever increasing energy supply, other sources have steadily replaced oil in the energy mix, such as coal in China. Yet, no other conventional source has proved to be a valuable substitute for oil, hence the need for debt in order to replace the vanishing oil share.”

On a business-as-usual trajectory then, the economy can quite literally never recover — unless it transitions to a truly viable new energy source which can substitute for oil.

“In order to avoid the [oil] price affordable by the global economy falling below the extraction cost, debt piling (borrowing from the future) becomes a necessity, yet it is a mere trick to gain some time while hoping for something positive to happen,” said Meneguzzo. “The reality is that debt, basically as a substitute for oil, does not work to produce real wealth, as apparent for example from the decline of the industry value added as a percentage of GDP.”

Where will this end up?

“Recently, debt has started shrinking, basically because it has failed to generate real wealth. Assuming no meaningful (and fast) transition to renewable energy, the economic growth can only deteriorate further and further.”

Basically, this means, Meneguzzo adds, “delocalizing manufacturing to economies using local, cheaper and dirtier energy sources (such as coal in China) as well as lower wages, further shrinking domestic aggregate demand and fueling a downward spiral of deflation and/or debt.”

Is there a way out? Not within the current trajectory: “Unless that debt is immediately used to exploit renewable sources on a massive scale, along with ‘accessories’ such as storage making them as qualified as oil, social and political derangements, even before an economic crash, look to be unavoidable.”

Crisis convergence

Seen in this broader scientific context, the HSBC global oil supply report provides stunning confirmation that, for the most part, global oil production is already post-peak, and that after 2018 this is going to manifest not simply as a global supply shock, but as a world in which cheap, high-quality fossil fuels are increasingly hard to find.

What will this mean? One possible scenario is that by 2018 or shortly thereafter, the world will face a similar convergence of global crises that occurred a decade earlier.

In this scenario, oil price hikes would have a recessionary effect that destabilizes the global debt bubble, which for some years has been higher than pre-2008 crash levels, now at a record $152 trillion.

In 2008, oil price shocks played a key role in creating pre-crisis economic conditions for consumers in which rising living costs helped trigger debt-defaults in housing markets, which rapidly spiraled out of control.

In or shortly after 2018, economic and energy crisis convergence would drive global food prices up, regenerating the contours of the triple crunch we saw ravage the world from 2008 to 2011, the debilitating impacts of which we have yet to recover from.

2018 is likely to be a crunch year for another reason. Jan. 1, 2018 is the date when a host of new regulations are set to come into force, which will “constrain lending ability and prompt banks to only advance money to the best borrowers, which could accelerate bankruptcies worldwide,” according to Bloomberg. Other rules coming into play will require banks to stop using their own international risk assessment measures for derivatives trading.

Ironically, the introduction of similar well-intentioned regulation in January 2008 (through Basel II) laid the groundwork to rupture the global financial architecture, making it vulnerable to that year’s banking collapse.

In fact, two years earlier in July 2006, David Martin, an expert on global finance, presciently forecast that Basel II would interact with the debt bubble to convert a collapse of the housing bubble into a global financial conflagration. Just a month after that warning, I was told by a former senior Pentagon official with wide-ranging high-level access to the U.S. military, intelligence and financial establishment that a global banking collapse was imminent, and would likely occur in 2008.

My source insisted that the event was bound up with the peak of global conventional oil production about two years earlier (which according to the U.K.’s former chief government scientist Sir David King did indeed occur around 2005, even though unconventional oil and gas production has offset the conventional decline so far).

Having first outlined my warning of a 2008 global banking collapse in August 2006, I re-articulated the warning in November 2007, citing Martin’s forecast and my own wider systems analysis at a lecture at Imperial College, London. In that lecture, I predicted that a housing-triggered banking crisis would be sparked in the context of the new era of expensive fossil fuels.

I called it then, and I’m calling it now. Some time after January 2018, we face the probability of a new crisis convergence in global energy, economic and food systems, similar to what occurred in 2008.

Today, we are all supposed to quietly believe that the economy is in recovery, when in fact it is merely transitioning through a fundamental global systemic phase-shift in which the unsustainability of prevailing industrial structures is being increasingly laid bare. The truth is that the cycles of protracted economic crisis are symptomatic of a deeper global systemic process.

One way we can brace ourselves for the next crash is to recognize it for what it is: a symptom of global system failure, and therefore of the inevitable transition to a post-carbon, post-capitalist future. The future we are stepping into simply doesn’t work the way we are accustomed to.

The old, industrial era rules for the dying age of energy and technological super-abundance must be re-written for a new era beyond fossil fuels, beyond endless growth at any environmental cost, beyond debt-driven finance.

This year, we can prepare for the post-2018 resurgence of crisis convergence by planting seeds — however small — for that future in our own lives, and with those around us, from our families, to our communities and wider societies.
Nafeez Ahmed is an investigative journalist and international security scholar. He writes the System Shift column for VICE’s Motherboard, and is the winner of a 2015 Project Censored Award for Outstanding Investigative Journalism for his former work at the Guardian. He is the author of A User’s Guide to the Crisis of Civilization: And How to Save It (2010), and the scifi thriller novel Zero Point, among other books.

Posted in Crash Coming Soon, Decline | Tagged , | Comments Off on HSBC bank report predicts another financial crisis in 2018

Peak coal 2013-2045 — most likely 2025-2030

Dennis Coyne. March 11, 2016. Coal Shock Model.

Coal is an important energy resource, but we do not know the size of the economically recoverable resource that will eventually be recovered. The mainstream view is that there are extensive coal resources that are economically recoverable, but research by Rutledge, Mohr, and Laherrere contradicts this view.

My estimates of the coal URR are based on the work of David Rutledge and Steve Mohr. Recent work by Jean Laherrere has coal URR estimates which are higher than mine: his medium scenario (650 Gtoe) is higher than my high case (630 Gtoe), and his estimates are usually conservative. My estimate may be too conservative, though my medium case (URR = 510 Gtoe) is somewhat higher than the best estimate of Steve Mohr (465 Gtoe), whose work on coal is the best that I have found.

The average of the best estimate of Mohr and Laherrere’s medium case is about 550 Gtoe, a little higher than my medium case and similar to Laherrere’s low case. Based on the recent work by Laherrere, my best estimate would be 560 Gtoe (570 Gtoe is the average of my medium and high cases and 550 Gtoe is the average of the Mohr and Laherrere medium cases, the average of all 4 is 560 Gtoe).

The peak for world coal output will be sooner than most people think: the range is 2013 to 2045, and my estimate is 2025 to 2030, with peak output between 4 and 5 Gtoe/year (2014 output was about 4 Gtoe/year).


The eventual peak in World fossil fuel output is a potentially serious problem for human civilization. Many people have studied this problem, including Jean Laherrere, Steve Mohr, Paul Pukite (aka Webhubbletelescope), and David Rutledge.

I have found Steve Mohr’s work the most comprehensive as he covered coal, oil, and natural gas from both the supply and demand perspective in his PhD Thesis. Jean Laherrere has studied the problem extensively with his focus primarily on oil and natural gas, but with some exploration of the coal resource as well. David Rutledge has studied the coal resource using linearization techniques on the production data (which he calls logit and probit).

Paul Pukite introduced the Shock Model with dispersive discovery which he has used primarily to look at how oil and natural gas resources are developed and extracted over time. In the past I have attempted to apply Paul Pukite’s Shock Model (in a simplified form) to the discovery data found in Jean Laherrere’s work for both oil and natural gas, using the analysis of Steve Mohr as a guide for the URR of my low and high scenarios along with the insight gleaned from Hubbert Linearization.

In the current post I will apply the Shock model to the coal resource, again trying to build on the work of Mohr, Rutledge, Laherrere, and Pukite.

A summary of URR estimates for World coal is below:

The “Laherrere+Rutledge” estimate uses the Rutledge best estimate for the low case and Laherrere’s low and medium cases for the medium and high cases. Laherrere also has a high case of 750 Gtoe for the World coal URR, which seems too optimistic in my opinion. The “high” estimate of Steve Mohr has been reduced from his “Case 3” estimate of 670 Gtoe by 40 Gtoe because I have assumed lignite and black coal resources are lower than his high estimate.

An update of David Rutledge’s estimate using the latest BP data through 2014 gives a URR of about 400 billion tonnes of oil equivalent (Gtoe) for coal. The Rutledge 2009 estimate was about 350 Gtoe.

My initial estimate was in billions of tonnes (Gt) of coal: 800 Gt for the low estimate (a round number near Steve Mohr’s low estimate of 770 Gt) and 1300 Gt for the high estimate (about the same as Steve Mohr’s high estimate); my medium estimate was simply the average of the high and low estimates. I came across Jean Laherrere’s estimate after I had developed my model; surprisingly, his medium estimate is a little higher than my guess, which is usually not the case (for other fossil fuels).

I do not have access to discovery data for coal, but based on World Resource estimates gathered by David Rutledge, most coal resources had been discovered by the 1930s. I developed simple dispersive discovery models with peak discovery around 1900 for each of the three cases. These are rough estimates; all I really know is that coal was discovered over time. The cumulative coal discovery models in Gtoe are shown in the chart below for the low, medium and high URR cases.


In each case about 75% of coal discovery was prior to 1940. Coal resources have been developed very slowly, especially since the discovery of oil and natural gas. As a simplification I assume that the rate at which discovered coal is developed remains constant over time.

A maximum entropy probability density function with a mean time from discovery to first production of 100 years is used to approximate how quickly new proved developed producing reserves are added to any reserves already producing each year. For example a 1000 million tonne of oil equivalent (1 Gtoe) coal discovery would be developed (on average) as shown in the chart below:


Reading from the chart, about 9 Mtoe of new producing reserves would be developed from this 1850 discovery in 1860, and about 5 Mtoe of new producing reserves would be developed in 1920. About half of the 1000 Mtoe discovered in 1850 would have become producing reserves by 1920, so the median time from discovery to producing reserve is about 70 years (the mean is 100 years due to the long tail of the exponential probability density function).
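The lag pdf and the worked example above can be sketched in a few lines. The discovery series here is hypothetical, for illustration only; the actual data behind the model is not reproduced:

```python
import math

MEAN_LAG = 100.0  # mean years from discovery to first production

def lag_pdf(t, mean=MEAN_LAG):
    """Maximum-entropy (exponential) pdf of the discovery-to-production lag."""
    return math.exp(-t / mean) / mean if t >= 0 else 0.0

def new_producing_reserves(discoveries, year):
    """Sum each discovery's contribution of new producing reserves in a
    given year -- the 'accounting exercise' done in the spreadsheet."""
    return sum(amount * lag_pdf(year - dy) for dy, amount in discoveries.items())

# A hypothetical 1000 Mtoe discovery in 1850 yields roughly 9 Mtoe of new
# producing reserves in 1860 (10-year lag) and roughly 5 Mtoe in 1920
# (70-year lag); the median lag is 100 * ln(2), about 69 years.
disc = {1850: 1000.0}
```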

The model takes all the discoveries for each year and applies the probability density function (pdf) above to each year’s discoveries (the chart shows the pdf multiplied by 1000, so the new producing reserves appear in Mtoe). Then the new producing reserves from each year’s discoveries are simply added together in a spreadsheet; not complicated, just an accounting exercise. The new producing reserves curve (when everything is added up) is shown below for the medium URR case (510 Gtoe):


Each year new producing reserves are added to the pool of producing reserves while some of these reserves are produced and become fossil fuel output. This is indicated schematically below:


If the fossil fuel output is less than the new producing reserves added in any year, then the producing reserves increase during that year; if the reverse is true, they decrease.

The fossil fuel output divided by the producing reserves is called the extraction rate.
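The reserve accounting described above can be sketched as a simple loop. This is an assumed form with made-up numbers, not the post’s actual spreadsheet:

```python
def project_output(initial_reserves, new_producing, extraction_rates):
    """Each year: output = extraction_rate * producing_reserves, then the
    producing reserves gain that year's new producing reserves and lose
    that year's output."""
    reserves = initial_reserves
    outputs = []
    for added, rate in zip(new_producing, extraction_rates):
        output = rate * reserves
        reserves += added - output
        outputs.append(output)
    return outputs, reserves

# Illustrative: 100 Gtoe of producing reserves, 10 Gtoe/year of new producing
# reserves, 10% extraction rate -> output and reserves hold steady at
# 10 Gtoe/year and 100 Gtoe. Raise the rate and reserves start to fall.
```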

Using data from David Rutledge for fossil fuel output to 1980 and data from BP’s Statistical Review of World Energy from 1981 to 2014, I extrapolated the extraction rate trend from 2000 to 2014 to estimate future coal output. The chart below shows the discovery curve, new producing reserves curve, and the output curve for the scenario with a URR of 510 Gtoe.


Note that when new producing reserves exceed output, the producing reserves increase (up to 1986); after 1993 output is higher than the new producing reserves added each year, so producing reserves start to decrease. Producing reserves are shown in the following chart for the medium scenario (URR=510 Gtoe).


The fall in producing reserves combined with increased World output of coal from 2000 to 2013 required an increase in extraction rates from 1.5% to 2.9%. I assume after 2014 that this increase in extraction rates continues at a similar rate until reaching 4% in 2026 and then extraction rates gradually flatten, reaching 5.1% in 2070.

Clearly I do not know the future extraction rate; this is an estimate assuming recent trends continue. For this scenario, with a coal URR of 510 Gtoe, output peaks in 2026 at about 4250 Mtoe/year.


For the low and high URR cases the details of the analysis are covered at the end of the post. The extraction rate trend from 2000 to 2014 was also extended until a peak was reached, and then the increase in extraction rates was assumed to lessen until a constant rate of extraction was reached.

The three scenarios (low, medium, and high) are presented in the chart below.


The low scenario peaks in 2013 at about 4 Gtoe/a, the medium scenario peaks in 2025 at about 4.3 Gtoe/a, and the high scenario peaks in 2045 at about 4.9 Gtoe/a. Note that the medium scenario is not my best estimate; it is simply a scenario between the possible low and high URR cases. Reality might fall on any path between the high and low scenarios, depending on the eventual URR and extraction rates in the future.

A blog post by Luis de Sousa covered Jean Laherrere’s estimate of future coal output with URR between 550 Gtoe and 750 Gtoe.


For comparison, I have adjusted my chart (shown above) to have a similar scale as Jean Laherrere’s chart.

Note that only the two higher scenarios in my chart can be roughly compared with the lower two scenarios in Laherrere’s chart (510 compared with 550 Gtoe and 630 compared with 650 Gtoe). My scenarios peak at higher output at a later year and decline more steeply as a result.

The chart below is Steve Mohr’s medium independently dynamic scenario, where supply responds to coal demand.


The chart above, labelled C Case 2, is figure 5-8 from page 69 of Steve Mohr’s PhD dissertation; the peak output is 210 EJ/year in 2019 (from Table 5-7 on page 71), and Case 2 has a URR of 19.4 ZJ or 465 Gtoe (ZJ = zettajoule = 1E21 J). My medium scenario (URR of 21.3 ZJ) has a lower peak output of 180 EJ/year, which occurs 6 years later than Mohr’s scenario. (1 Gtoe = 41.868 EJ = 4.1868E-2 ZJ.)
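The unit conversions quoted above check out; a quick sketch:

```python
GTOE_TO_EJ = 41.868  # 1 Gtoe = 41.868 EJ; 1 ZJ = 1000 EJ = 1e21 J

def gtoe_to_zj(gtoe):
    """Convert gigatonnes of oil equivalent to zettajoules."""
    return gtoe * GTOE_TO_EJ / 1000.0

# Mohr's Case 2: 465 Gtoe ~= 19.47 ZJ (the 19.4 ZJ quoted)
# The medium scenario: 510 Gtoe ~= 21.35 ZJ (the 21.3 ZJ quoted)
```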

It is interesting that Jean Laherrere’s larger URR scenario (550 Gtoe) has a peak of 4 Gtoe/year, while Mohr’s smaller URR (465 Gtoe) has a peak of 5 Gtoe/year. Mohr’s scenario was created in 2010 before the 2014 slowdown in Chinese coal consumption and he may have assumed that China and India would resume their rapid increase in coal consumption from 2010 to 2025. Jean Laherrere’s scenario was created in 2015 and in his 550 Gtoe scenario he may assume that the recent decrease in World coal output (in 2014) will continue in the future.

My medium scenario (510 Gtoe) is between Mohr’s medium (case 2) scenario and Laherrere’s low scenario. I have created two new scenarios using a URR of 510 Gtoe which match the peak output of Laherrere’s 550 Gtoe scenario and Mohr’s 465 Gtoe scenario. I have also created a “plateau” scenario with URR=510 Gtoe with World output remaining at the 2014 level until 2025. The various scenarios are presented in the chart below.


The extraction rates in the 4 different 510 Gtoe scenarios can be compared in the chart that follows.


Generally, a higher peak in output leads to steeper annual decline rates; the chart below compares annual decline rates for the 4 different 510 Gtoe URR scenarios.


Works Cited

  • De Sousa, Luis. “Peak Coal in China and the World, by Jean Laherrère.” Web. 11 March 2016.
  • Mohr, Steve. Projection of World Fossil Fuel Production with Supply and Demand Interactions. PhD thesis, 2010. Web. 11 March 2016.
  • Oil Conundrum. Web. 11 March 2016.
  • Rutledge, David. “Estimating long-term world coal production with logit and probit transforms.” International Journal of Coal Geology 85 (2011): 23–33. Web. 11 March 2016.

Appendix with details of Low and High cases

With links to Excel files at end of appendix

Low case-URR=390 Gtoe





High Case- URR=630 Gtoe





Further reading

Posted in Coal, Peak Coal | Tagged | Comments Off on Peak coal 2013-2045 — most likely 2025-2030

Why Nuclear Power is not an alternative to fossil fuels

[ Economic reasons are the main hurdle to new nuclear plants now, with capital costs so high it’s almost impossible to get a loan, especially when natural gas is so much cheaper and less risky. But there are other reasons nuclear power is in trouble as well. Far more plants are in danger of closing than are being built (37 or more may close).

This is a liquid transportation fuels crisis. The Achilles heel of civilization is our dependency on trucks of all kinds, which run on diesel fuel because diesel engines are far more powerful than steam, gasoline, electric or any other engine on earth (Vaclav Smil. 2010. Prime Movers of Globalization: The History and Impact of Diesel Engines and Gas Turbines. MIT Press). Billions of trucks (and equipment) are required to keep going the supply chains that every person and business on earth depends on, as well as mining, agriculture, road construction, logging and so on. Since trucks can’t run on electricity, anything that generates electricity is not a solution, nor is it likely that the electric grid can ever be 100% renewable (read “When Trucks Stop Running”; this can’t be explained in a sound-bite), or that we could replace billions of diesel engines in the short time left. According to a study for the Department of Energy, society would need to prepare for the peaking of world oil production 10 to 20 years ahead of time (Hirsch 2005). But conventional oil peaked in 2005 and has been on a plateau since then. Here we are 12 years later, totally unprepared: the public is still buying gas guzzlers whenever oil prices drop, and freeway speed limits are still over 55 mph.

Alice Friedemann  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts:  KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report ]

Nuclear power costs too much

U.S. nuclear power plants are old and in decline. By 2030, U.S. nuclear power generation might be the source of just 10% of electricity, half of production now, because 38 reactors producing a third of nuclear power are past their 40-year life span, and another 33 reactors producing a third of nuclear power are over 30 years old. Although some will have their licenses extended, 37 reactors that produce half of nuclear power are at risk of closing because of economics, breakdowns, unreliability, long outages, safety, and expensive post-Fukushima retrofits (Cooper 2013; see also “Nuclear power is too expensive, 37 costly reactors predicted to shut down” and “A third of nuclear reactors are going to die of old age in the next 10-20 years”).

New reactors are not being built because it takes years to get permits and $8.5–$20 billion in capital must be raised for a new 3400 MW nuclear power plant (O’Grady, E. 2008. Luminant seeks new reactor. London: Reuters.). This is almost impossible since a safer 3400 MW gas plant can be built for $2.5 billion in half the time. What utility wants to spend billions of dollars and wait a decade before a penny of revenue and a watt of electricity is generated?

In the USA there are 104 nuclear plants (largely constructed in the 1970s and 1980s) contributing 19% of our electricity.  Even if all operating plants receive license renewals to run for 60 years, it is unlikely they can be extended another 20 years beyond that, so starting around 2028 plants will begin closing for good, and by 2050 nearly all nuclear plants will be out of business.

Joe Romm “The Nukes of Hazard: One Year After Fukushima, Nuclear Power Remains Too Costly To Be A Major Climate Solution” explains in detail why nuclear power is too expensive, such as:

  • New nuclear reactors are expensive. Recent cost estimates for individual new plants have exceeded $5 billion (for example, see Scroggs, 2008; Moody’s Investor’s Service, 2008).
  • New reactors are intrinsically expensive because they must be able to withstand virtually any risk that we can imagine, including human error and major disasters
  • Based on a 2007 Keystone report, we’d need to add an average of 17 plants each year, while building an average of 9 plants a year to replace those that will be retired, for a total of one nuclear plant every two weeks for four decades — plus 10 Yucca Mountains to store the waste
  • Before 2007, price estimates of $4,000/kW for new U.S. nukes were common, but an October 2007 Moody's Investors Service report, "New Nuclear Generation in the United States," concluded, "Moody's believes the all-in cost of a nuclear generating facility could come in at between $5,000 – $6,000/kw."
  • That same month, Florida Power and Light, “a leader in nuclear power generation,” presented its detailed cost estimate for new nukes to the Florida Public Service Commission. It concluded that two units totaling 2,200 megawatts would cost from $5,500 to $8,100 per kilowatt – $12 billion to $18 billion total!
  • In 2008, Progress Energy informed state regulators that the twin 1,100-megawatt plants it intended to build in Florida would cost $14 billion, which "triples estimates the utility offered little more than a year ago." That would be more than $6,400 a kilowatt.  (And that doesn't even count the 200-mile, $3 billion transmission system the utility needs, which would bring the price up to a staggering $7,700 a kilowatt.)
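The per-kilowatt figures in these bullets are easy to check; here is a quick sketch using only the numbers quoted above:

```python
# Back-of-envelope check of the per-kilowatt figures quoted in the bullets above.
# All inputs come from the bullet points themselves.

def cost_per_kw(total_dollars, capacity_mw):
    """Capital cost in dollars per kilowatt of capacity."""
    return total_dollars / (capacity_mw * 1000)

# Florida Power & Light: 2,200 MW at $5,500 to $8,100 per kW
fpl_low = 5500 * 2200 * 1000    # $12.1 billion
fpl_high = 8100 * 2200 * 1000   # $17.8 billion

# Progress Energy: twin 1,100 MW units for $14 billion,
# plus the $3 billion transmission system
progress = cost_per_kw(14e9, 2200)       # ≈ $6,364/kW
with_lines = cost_per_kw(17e9, 2200)     # ≈ $7,727/kW, the "staggering $7,700"

print(round(fpl_low / 1e9, 1), round(fpl_high / 1e9, 1))  # → 12.1 17.8
print(round(progress), round(with_lines))                  # → 6364 7727
```

The dollar totals and per-kilowatt rates in the bullets are internally consistent with each other.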

Extract from Is Nuclear Power Our Energy Future, Or in a Death Spiral? March 6th, 2016, By Dave Levitan, Ensia:

In general, the more experience accumulated with a given technology, the less it costs to build. This has been dramatically illustrated with the falling costs of wind and solar power. Nuclear, however, has bucked the trend, instead demonstrating a sort of "negative learning curve" over time.

According to the Union of Concerned Scientists, the actual costs of 75 of the first nuclear reactors built in the U.S. ran over initial estimates by more than 200 percent. More recently, costs have continued to balloon. Again according to UCS, the price tag for a new nuclear power plant jumped from between US$2 billion and US$4 billion in 2002 all the way to US$9 billion in 2008. Put another way, the price shot from below US$2,000 per kilowatt in the early 2000s up to as high as US$8,000 per kilowatt by 2008.

Steve Clemmer, the director of energy research and analysis at UCS, doesn’t see this trend changing. “I’m not seeing much evidence that we’ll see the types of cost reductions [proponents are] talking about. I’m very skeptical about it — great if it happens, but I’m not seeing it,” he says.

Some projects in the U.S. seem to face delays and overruns at every turn. In September 2015, a South Carolina effort to build two new reactors at an existing plant was delayed for three years. In Georgia, a January 2015 filing by plant owner Southern Co. said that its additional two reactors would jump by US$700 million in cost and take an extra 18 months to build. These problems have a number of root causes, from licensing delays to simple construction errors, and no simple solution to the issue is likely to be found.

In Europe the situation is similar, with a couple of particularly egregious examples casting a pall over the industry. Construction began for a new reactor at the Finnish Olkiluoto 3 plant in 2005 but won’t finish until 2018, nine years late and more than US$5 billion over budget. A reactor in France, where nuclear is the primary source of power, is six years behind schedule and more than twice as expensive as projected.

“The history of 60 years or more of reactor building offers no evidence that costs will come down,” Ramana says. “As nuclear technology has matured costs have increased, and all the present indications are that this trend will continue.”

Nuclear plants require huge grid systems, since they're far from energy consumers. The Financial Times estimates that $10,000 billion (ten trillion dollars) would need to be invested worldwide in electric power systems over the next 30 years (Hoyos 2003).

In summary, investors aren’t going to invest in new reactors because:

  • of the billions in liability after a meltdown or accident
  • there may only be enough uranium left to power existing plants
  • the cost per plant ties up capital too long (it can take 10 billion dollars over 10 years to build a nuclear power plant)
  • the costs of decommissioning are very high
  • properly dealing with waste is expensive
  • There is no place to put waste — in 2009 Secretary of Energy Chu shut down Yucca Mountain and there is no replacement in sight.

Nor will the U.S. government pay for new reactors, given that public opinion is against it: in an E&E News poll, 72% said they were unwilling to have the government pay for nuclear power through billions of dollars in new federal loan guarantees for new reactors.

Cembalest, an analyst at J.P. Morgan, wrote, "In some ways, nuclear's goose was cooked by 1992, when the cost of building a 1 GW plant rose by a factor of 5 (in real terms) from 1972" (Cembalest 2011).

Nuclear power depends on fossil fuels to exist (Ahmed 2017)

“One extensive study finds that the construction, mining, milling, transporting, refining, enrichment, waste reprocessing/disposal, fabrication, operation and decommissioning processes of nuclear power are heavily dependent on fossil fuels (Pearce 2008). This raises serious questions about the viability of nuclear power in about two decades’ time, when hydrocarbon resources are likely to be well past their production peaks.

Further, the study concludes that nuclear power is simply not efficient enough to replace fossil fuels, an endeavor which would require nuclear production to increase by 10.5% every year from 2010 to 2050, an “unsustainable prospect”. This large growth rate requires a “cannibalistic effect”, whereby nuclear energy itself must be used to supply the energy to construct future nuclear power plants. The upshot is that the books cannot be balanced, as the tremendous amounts of energy necessary for mining and processing uranium ore, building and operating the power plant, and so on, cannot be offset by output in a high growth scenario. In particular, growth limits are set by the grade of uranium ore available, and high-grade uranium is predicted to become rapidly depleted in coming decades, leaving largely low-grade ore falling below 0.02% (Pearce 2008)”.

Peak Uranium

Energy experts warn that an acute shortage of uranium is going to hit the nuclear energy industry. Dr. Yogi Goswami, co-director of the Clean Energy Research Centre at the University of Florida, warns that proven reserves of uranium will last less than 30 years. By 2050, all proven and undiscovered reserves of uranium will be exhausted.  Current nuclear plants consume around 67,000 tonnes of high-grade uranium per year. With present world uranium reserves of 5.5 million tons, we have enough to last 42 years.  If more nuclear plants are built, then we have less than 30 years left (Coumans 2010).
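The two lifetimes quoted above can be reconciled: simple division of the stated reserves by current consumption gives roughly 82 years, while the 42-year figure follows if demand keeps growing. A quick sketch (the 3%/year growth rate is an assumption added here for illustration, not a figure from the article):

```python
# Rough check of the uranium-reserve lifetimes quoted above.
# Inputs from the text: 5.5 million tonnes of reserves, 67,000 tonnes/year consumed.
# The 3%/year demand growth rate is an illustrative assumption, not from the article.

RESERVES_T = 5.5e6      # tonnes of uranium reserves
USE_T_PER_YR = 67_000   # tonnes consumed per year by current plants

def years_until_exhausted(growth_rate):
    """Years until reserves run out if demand grows at growth_rate per year."""
    remaining, use, years = RESERVES_T, USE_T_PER_YR, 0
    while remaining > 0:
        remaining -= use
        use *= 1 + growth_rate
        years += 1
    return years

print(years_until_exhausted(0.0))   # → 83, i.e. about 82 years at flat demand
print(years_until_exhausted(0.03))  # → 43, close to the article's 42 years
```

So the 42-year figure is consistent with the 5.5-million-ton reserve estimate only if consumption grows by roughly 3% per year, which is in line with the build-out plans described below.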

Uranium production peaked in the 1980s, but supplies continued to meet demand because weapons decommissioned after the Cold War were converted to commercial fuel. Those sources are now drying up, and a new demand-driven peak may be on the horizon.

The only way we could extend our supplies of uranium is to build breeder reactors.  But we don’t have any idea how to do that and we’ve been trying since the 1950s.

China switched on its 19th nuclear power reactor as it rushes to increase nuclear generation. The country plans to switch on 8.64 gigawatts of nuclear generating capacity in 2014 as compared to 3.24 gigawatts of new capacity in 2013. The availability of uranium for China’s nuclear industry is becoming an issue. Beijing may have to import some 80 percent of its uranium by 2020, as compared to the current 60 percent.

There may not even be enough uranium to power existing plants

Source: Colorado Geological Survey


Nuclear power is Way too Dangerous

In 2016, the top journal Science, reporting on a National Academy of Sciences study of lessons learned from Fukushima, said that a nuclear spent-fuel fire at Peach Bottom in Pennsylvania could force 18 million people to evacuate.  This is because there's still nowhere to put nuclear waste, so it's stored in pools of water on-site that are not under the containment dome but open to the air, making them a prime target for terrorists at over 100 locations.  If electric power were ever down for more than 10 days, whether from a natural disaster, an electromagnetic pulse from a nuclear weapon or solar flare, or any other cause, these nuclear pools could catch fire and spew out radiation over many square miles, forcing millions of people to evacuate.  Also see: Shocking state of world's riskiest nuclear waste sites

The dangers of nuclear waste are the main reason California and many other states won't allow new nuclear power plants to open. To find out more about the dangers of nuclear waste and why we have nowhere to store it, read my book review of "Too Hot to Touch".

Greenpeace has a critique of nuclear power called Nuclear Reactor Hazards (2005) which makes the following points:

  1. As nuclear power plants age, components become embrittled, corroded, and eroded. This can happen at a microscopic level which is only detected when a pipe bursts. As a plant ages, the odds of severe incidents increase. Although some components can be replaced, failures in the reactor pressure vessel would lead to a catastrophic release of radioactive material. The risk of a nuclear accident grows significantly each year after 20 years. The average age of power plants now, world-wide, is 21 years.
  2. In a power blackout, if the emergency backup generators don't kick in, there is the risk of a meltdown. This happened in Sweden at the Forsmark power station in 2006. A former director said, "It was pure luck that there was not a meltdown. Since the electricity supply from the network didn't work as it should have, it could have been a catastrophe." Another few hours and a meltdown could have occurred. It should not surprise anyone that power blackouts will become increasingly common and long-lasting as energy declines.
  3. 3rd generation nuclear plants are pigs wearing lipstick – they’re just gussied up 2nd generation — no safer than existing plants.
  4. Many failures are due to human error, and that will always be the case, no matter how well future plants are designed.
  5. Nuclear power plants are attractive targets for terrorists now and future resource wars. There are dozens of ways to attack nuclear and reprocessing plants. They are targets not only for the huge number of deaths they would cause, but as a source of plutonium to make nuclear bombs. It only takes a few kilograms to make a weapon, and just a few micrograms to cause cancer.

If Greenpeace is right about risks increasing after 20 years, then there's bound to be a meltdown incident within ten years, which would make it almost impossible to raise capital. (And indeed there was: Fukushima had a meltdown in 2011.)

It’s already hard to raise capital, because the owners want to be completely exempt from the costs of nuclear meltdowns and other accidents. That’s why no new plants have been built in the United States for decades.

The Energy Returned on Energy Invested may be too low for investors as well. When you consider the energy required to build a nuclear power plant, which requires tremendous amounts of cement, steel pipe, and other infrastructure, it could take a long time for the returned energy to pay back the energy invested. The construction of 1970s U.S. nuclear power plants required 40 metric tons of steel and 190 cubic meters of concrete per average megawatt of electricity generating capacity (Peterson 2003).

The amount of greenhouse gases emitted during construction is another reason many environmentalists have turned away from nuclear power.

The costs of treating nuclear waste have skyrocketed. An immensely expensive treatment plant to clean up the Hanford site went from a projected cost of $4.3 billion in 2000 to $12.2 billion today. If the treatment plant is ever finished, it will be twelve stories high and four football fields long (Dininny 2006).

Nuclear power plants take too long to build

It often takes 10 years to build a nuclear power plant: years to get licensed and fabricate components, then another 4 to 7 years of actual construction. That's too long for investors, who want far more immediate returns. Techno-optimists argue that some new-fangled kind of reactor could be built more quickly.  But the public is afraid of reactors (rightly so), so construction is bound to go slowly as protestors demand stringent inspections every step of the way.  The public is also concerned about long-term nuclear waste storage.  So even a small, simple reactor would have many hurdles to overcome.

Financial markets are wary of investing in new nuclear plants until it can be demonstrated they can be constructed on budget and on schedule. No nuclear plants have been built in the United States for decades, but unpleasant memories remain: construction of some of the currently operating plants was plagued by substantial cost overruns and delays. There is also a significant gap between when construction begins and when any return on investment is realized.

A crisis will harden public opinion against building new Nuclear Power Plants

I wrote this section before the Fukushima disaster, and there will be more disasters as aging nuclear power plants, extended beyond their design lifetimes and pushed to produce electricity full-tilt, succumb to the many hazards detailed in the Greenpeace International report "Nuclear Reactor Hazards".  It's only a matter of time before one of our aging reactors melts down.  When that happens, the public will fight the development of more nuclear power plants.  Besides aging, a disaster could be triggered by natural disasters, failure of the electric grid, more frequent and severe flooding, drought, and unstable weather from climate change, or by lack of staffing as older workers retire with too few trained engineers to replace them.

Even Edward Teller, father of the hydrogen bomb, thought Nuclear Power Plants were dangerous and should be put underground for safety in case of a failure and to make clean-up easier.

Five of the six reactors at the Fukushima plant in Japan were Mark 1 reactors. Thirty-five years ago, Dale G. Bridenbaugh and two of his colleagues at General Electric quit after they became convinced that the Mark 1 nuclear reactor design they were reviewing was so flawed it could lead to a devastating accident (Mosk).

Nuclear power plants are extremely attractive targets for terrorists and in a war.  Uranium is not only stored in the core, but the “waste” area near the plant, providing plenty of material for “dirty” or explosive atom bombs.

For details, read the original document or my summary of the Greenpeace report.

EROEI and decommissioning

See: Decommissioning a nuclear reactor

The energy to build, decommission, dispose of wastes, and so on may be more than the plant will ever generate: a negative Energy Returned on Energy Invested (EROEI).  A review by Charles Hall et al. of net-energy studies of nuclear power found the data to be "idiosyncratic, prejudiced, and poorly documented," and concluded the most reliable EROEI information was too old to be useful (results ranged from 5 to 8:1). Newer data was either unjustifiably optimistic (15:1 or more) or pessimistic (low, even less than 1:1).  One of the main reasons EROEI is low is the enormous amount of energy used to construct nuclear power plants, which also creates a great deal of GHG emissions.


“To produce enough nuclear power to equal the power we currently get from fossil fuels, you would have to build 10,000 of the largest possible nuclear power plants. That’s a huge, probably nonviable initiative, and at that burn rate, our known reserves of uranium would last only for 10 or 20 years.” (Goodstein). Are there enough sites for 10,000 plants near water for cooling, yet not so low-lying that rising sea levels would flood them or droughts remove their cooling water?


Nuclear power has been unpopular for such a long time that there aren't enough nuclear engineers, plant operators and designers, or manufacturing companies to scale up quickly (Torres 2006).  The number of American Society of Mechanical Engineers (ASME) nuclear certificates held around the world fell from 600 in 1980 to 200 in 2007. There is also an insufficient supply of people with the requisite education or training at a time when vendors, contractors, architects, engineers, operators, and regulators will be seeking to build up their staffs. In addition, 35% of the staff at U.S. nuclear utilities are eligible for retirement in the next 5–10 years.

There could be shortages in certain parts and components (especially large forgings), as well as in trained craft and technical personnel, if nuclear power expands significantly worldwide.

There are fewer suppliers of nuclear parts and components now than in the past.

Nuclear Proliferation & terrorism targets

Can we really prevent crazed dictators from using plutonium and other wastes to wage war for the next 30,000 years?  Even if a nuclear bomb is beyond the capabilities of future societies, the waste could be used to make a dirty bomb. Meanwhile, reactors make good targets for terrorists who do have the money to hire scientists to help them make a nuclear bomb from stolen uranium or plutonium.


Nuclear plants must be built near water for cooling, and they use a tremendous amount of it. Scientists are certain that global warming will raise sea levels; about half of existing power plants would be flooded.  Climate change will also bring longer and more severe droughts, with the potential to leave too little water to cool plants down, and more severe storms will bring more hurricanes and tornadoes.


Never underestimate NIMBYism, which is already preventing nuclear power plants from being built. The political opposition to building thousands of nuclear plants will be impossible to overcome.

No good way to store the energy

One of the most critical needs for power is a way to store it. Utility-scale storage batteries have not been invented despite decades of research, and only enough materials exist on earth to build NaS batteries at a cost of over $44 trillion, which would take up 945 square miles of real estate (Friedemann 2015).

A great deal of the electric power generated would need to be used to replace the billions of combustion engine machines and vehicles rather than providing heat, cooling, cooking power and light to homes and offices. It takes decades to move from one source of power to another. It’s hard to see how this could be accomplished without great hardship and social chaos, which would slow the conversion process down. Desperation is likely to lead to stealing of key components of the new infrastructure to sell for scrap metal, as is already happening in Baltimore where 30-foot tall street lights are being stolen (Gately 2005).

Related posts:  Energy Storage

Breeder reactors. You’d need 24,000 Breeder Reactors, each one a potential nuclear bomb (Mesarovic)

  • We’ve known since 1969 that we needed to build breeder reactors to stretch the lifetime of radioactive material to tens of thousands of years, and to reduce the radioactive wastes generated, but we still don’t know how to do this. (NAS)
  • If we ever do succeed, these reactors are much closer to being bombs than conventional reactors – the effects of an accident would be catastrophic economically and in the number of lives lost if it failed near a city (Wolfson).
  • The by-product of the breeder reaction is plutonium. Plutonium 239 has a half-life of 24,000 years. How can we guarantee that no terrorist or dictator will ever use this material to build a nuclear or dirty bomb during this time period?

Assume, as the technology optimists want us to, that in 100 years all primary energy will be nuclear. Following historical patterns, and assuming a not unlikely quadrupling of population, we will need, to satisfy world energy requirements, 3,000 “nuclear parks” each consisting of, say, 8 fast-breeder reactors. These 8 reactors, working at 40% efficiency, will produce 40 million kilowatts of electricity collectively. Therefore, each of the 3,000 nuclear parks will be converting primary nuclear power equivalent to 100 million kilowatts thermal. The largest nuclear reactors presently in operation convert about 1 million kilowatts (electric), but we will give progress the benefit of doubt and assume that our 24,000 worldwide reactors are capable of converting 5 million kilowatts each. In order to produce the world’s energy in 100 years, then, we will merely have to build, in each and every year between now and then, 4 reactors per week! And that figure does not take into account the lifespan of nuclear reactors. If our future nuclear reactors last an average of thirty years, we shall eventually have to build 2 reactors per day to replace those that have worn out.  By 2025, sole reliance on nuclear power would require more than 50 major nuclear installations, on the average, in every state in the union.
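The build-rate arithmetic in the passage above checks out, using only the figures it quotes:

```python
# Verify the Mesarovic build-rate arithmetic quoted above.
# All figures come from the passage itself.

parks = 3000
reactors_per_park = 8
reactors = parks * reactors_per_park        # 24,000 reactors worldwide
build_years = 100

per_week = reactors / build_years / 52      # new-build rate over a century
per_day = reactors / 30 / 365               # steady-state replacement, 30-year lifespan

print(reactors, round(per_week, 1), round(per_day, 1))  # → 24000 4.6 2.2
```

That is roughly the "4 reactors per week" for a century of construction and "2 reactors per day" of steady-state replacement that the passage describes.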

For the sake of this discussion, let us disregard whether this rate of construction is technically and organizationally feasible in view of the fact that, at present, the lead time for the construction of much smaller and simpler plants is seven to ten years. Let us also disregard the cost of about $2000 billion per year — or 60 percent of the total world output of $3400 billion — just to replace the worn-out reactors and the availability of the investment capital. We may as well also assume that we could find safe storage facilities for the discarded reactors and their irradiated accessory equipment, and also for the nuclear waste. Let us assume that technology has taken care of all these big problems, leaving us only a few trifles to deal with.

In order to operate 24,000 breeder reactors, we would need to process and transport, every year, 15 million kilograms (16,500 tons) of plutonium-239, the core material of the Hiroshima atom bomb. Only 10 pounds are needed to construct a bomb.  If inhaled, just ten micrograms (.00000035 ounce) of plutonium-239 is likely to cause fatal lung cancer. A ball of plutonium the size of a grapefruit contains enough poison to kill nearly all the people living today. Moreover, plutonium-239 has a radioactive life of more than 24,000 years. Obviously, with so much plutonium on hand, there will be a tremendous problem of safeguarding the nuclear parks — not one or two, but 3000 of them. And what about their location, national sovereignty, and jurisdiction? Can one country allow inadequate protection in a neighboring country, when the slightest mishap could poison adjacent lands and populations for thousands and thousands of years? And who is to decide what constitutes adequate protection, especially in the case of social turmoil, civil war, war between nations, or even only when a national leader comes down with a case of bad nerves. The lives of millions could easily be beholden to a single reckless and daring individual.


Ahmed, Nafeez. 2017. Failing States, Collapsing Systems BioPhysical Triggers of Political Violence. Springer.

Cembalest, M. 21 Nov 2011. Eye on the Market: The quixotic search for energy solutions. J.P. Morgan.

Coumans, C.  4 Sep 2010. Uranium reserves to be over by 2050. Deccan Chronicle

Dininny, S. 7 Sep 2006. Cost for Hanford waste treatment plant grows to $12.2 billion. The Olympian / Associated Press.

Friedemann, A. 2015. When Trucks stop running: Energy and the Future of Transportation. Springer.

Gately, G. 25 Nov 2005. Light poles vanishing — believed sold for scrap by thieves 130 street fixtures in Baltimore have been cut down. New York Times.

Goodstein, D. April 29, 2005. Transcript of The End of the Age of Oil talk

(Greenpeace) H. Hirsch, et al. 2005. Nuclear Reactor Hazards: Ongoing Dangers of Operating Nuclear Technology in the 21st Century

Heinberg, Richard. September 2009. Searching for a Miracle. “Net Energy” Limits & the Fate of Industrial Society. Post Carbon Institute.

Hirsch, R. L., et al. February 2005. Peaking of World Oil Production: Impacts, mitigation, & risk management. Department of Energy.

Hoyos, C. 19 OCT 2003 Power sector 'to need $10,000 bn in next 30 years'. Financial Times.

Mesarovic, Mihajlo, et al. 1974. Mankind at the Turning Point.  The Second Club of Rome Report.  E.P. Dutton, 1974 pp. 132-135

Mosk, M. 15 Mar 2011. Fukushima: Mark 1 Nuclear Reactor Design Caused GE Scientist To Quit In Protest. ABC World News.

(NAS) “It is clear, therefore, that by the transition to a complete breeder-reactor program before the initial supply of uranium 235 is exhausted, very much larger supplies of energy can be made available than now exist. Failure to make this transition would constitute one of the major disasters in human history." National Academy of Sciences. 1969. Resources & Man. W.H.Freeman, San Francisco. 259.

Peterson, P. 2003. Will the United States Need a Second Geologic Repository? The Bridge 33 (3), 26-32.

Pearce, J. M. 2008. Thermodynamic Limitations to nuclear energy deployment as a greenhouse gas mitigation technology. International Journal of Nuclear Governance, Economy and Ecology 2(1): 113.

Torres, M. 2006. “Uranium Depletion and Nuclear Power: Are We at Peak Uranium?”

Wolfson, R. 1993. Nuclear Choices: A Citizen's Guide to Nuclear Technology. MIT Press

To see what plants are open, closing, or being built (excel):

United States Nuclear Regulatory Commission 2014-2015 Information Digest. Nuclear materials, radioactive waste, nuclear reactors, nuclear security.


Civilization goes over the net energy cliff in 2022 — just 6 years away

[ Below are excerpts from 3 posts by Louis Arnoux (see the full versions here) and a 1-hour video explaining the Hills Group report here.  Since then I've been researching the Hills Group report; I didn't know enough math to make sense of it, so I asked many top-notch scientists what they thought, and most said it was no good, though its projections roughly match estimates in other peer-reviewed papers. 

Update: On Feb 2, 2017 I heard that the Hills Group hopes to publish their paper in a refereed academic journal.  I’ll report on their publication and reactions to it if it is published.  Meanwhile, if you’re curious, here are some links where you can see their report and Bill Hill explaining it on an on-going forum yourself:

  1. The Etp Model Q&A 2017 to present
  2. The Etp Model Q&A pt.6 December 2016 to January 2017
  3. The Etp Model Q&A pt. 5 November 2016 to December 2016
  4. The Etp Model Q & A pt. 4 October 2016 to Nov 2016

The Hills Group models predicted the decline in oil prices before it began in 2014, and as far as I know, they are the only ones who predicted this.

Their models conclude that the age of oil ends for most of us around 2030, though really 2022, since the 2030 figure assumes total energy efficiency. 

One aspect of their theory I like is that Bill Hill says it is testable.  So here are some of the claims made in their paper and on the Etp Model Q&A forum. We should see how accurate their model is fairly soon:

We expect to have reached permanent depression by the end of 2017 (prediction made June 2016).

The 2012 energy half way point, as set out by the Etp Model, marked the point where much of world oil production started being better off without oil than with it. That conversion will be complete by no later than 2030.

Conventional crude production will fall to 44 mb/d by 2030, after that it goes into catastrophic decline.

Our analysis indicates that it will probably be in the range of 15 to 20 years after that when the majority of petroleum production will cease.  The oil age is coming to an end. The Etp Model provides a very important time line; one that informs us that we have at most 14 years to put into place an alternate energy system; one beyond oil. Past that point the world will have fallen into such a deep depression that it will no longer be able to help itself.

Things are a lot worse than oil producers are admitting. The Etp Model indicates that in the present price environment only about 35% of the world's producers are making money over their full life-cycle costs. Their desperation for cash ensures that production will not decline until many of them start to fail. The energy dynamics of the situation point to falling prices until at least 2020. By then much of the world's petroleum production capacity will be gone forever!

Damage is being inflicted on the industry that will never be repaired. CapEx is being cut everywhere in the industry, and future development is likely to never fully recover. The Etp Model indicates that only about an additional 320 Gb will now ever be extracted. In 2012 petroleum contributed $6.22 trillion to the $16.16 trillion GDP of the US. That contribution will fall by more than half during the next decade.

Very low priced oil is a catastrophe for the petroleum industry, and the world. The oil age might have staggered forward for another 14 to 15 years, but now it looks like it might all come unglued over the next 5 or 6 years.

The industry’s net worth is now declining by 24% per year. If the price decline continues, as expected, trillions of dollars will be lost to bond and equity holders over the next few years. Pension funds and sovereign wealth funds will be hit particularly hard.

EROEI.  When the Petroleum Production System reaches an EROEI of 6.9:1 it will have reached its theoretical limit and be over with.  Here are some EROEIs from the past: 1945: 167; 1980: 30.4; 2014: 9.1; 2015: 8.9.

 At 6.9:1 it will have reached its theoretical limit, where the PPS (Petroleum Production System) reaches the "dead state". That point depends on accumulated production, which has increased at a very consistent rate for the last 100 years. The accumulated production has followed Hubbert's curve almost exactly; by 2009 it had deviated from that curve by only 0.04 Gb. In other words, the amount remaining to be extracted is a function of how much has already been removed. Any amount beyond 1,780 Gb will remain in the ground, as it will no longer be able to act as an energy source.
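As a purely illustrative check (this is a naive exponential extrapolation of the last two data points above, not the Etp Model itself, which is derived thermodynamically), one can ask when EROEI would hit the 6.9:1 dead-state limit if it kept falling at the 2014–2015 rate:

```python
import math

# Naive extrapolation, NOT the Etp Model: if EROEI keeps falling at the
# 2014-to-2015 rate (9.1 → 8.9, about 2.2% per year), when does it reach
# the 6.9:1 "dead state" limit quoted above?

eroei_2014, eroei_2015, limit = 9.1, 8.9, 6.9
annual_factor = eroei_2015 / eroei_2014                       # ≈ 0.978 per year
years = math.log(limit / eroei_2015) / math.log(annual_factor)
print(round(2015 + years))  # → 2026, i.e. the mid-2020s on this simple trend
```

A constant-percentage decline is the gentlest plausible assumption; the Etp Model's own thermodynamic accounting is steeper, which is presumably why the Hills Group's dates come in earlier.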

Saudi Arabia

When Ghawar will start to collapse has been the subject of heated discussion for a very long time. Its water cut, as reported by Aramco reservoir engineers, and the fact that they have been drilling horizontal wells to skim the last few feet off the top of the oil column, suggest that it probably won't be long in coming. A better indication is probably the price. The Affordability Curve gives a pretty good indication as to what is likely to transpire, and The Price of Oil puts the maximum affordability at:

2015: $77.28    2016: $65.94    2017: $54.18    2018: $41.16    2019: $26.88

It looks like sometime between 2018 and 2019 the Saudis will no longer be able to cover their lifting costs. Once that happens their production will collapse, and they will likely break the peg. My WAG (wild-ass guess) would be sometime in that time frame. Of course, the Iranians may decide to blow the crap out of them at any time, and that would put a real crimp in their production. It looks like the best-case scenario is 2 to 3 years before Saudi Arabia implodes.
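A rough way to see where the affordability series above is heading is a simple linear fit through the five quoted prices. This is purely illustrative, since the actual affordability curve is not linear, but the extrapolated zero lands in the early 2020s, consistent with the time frames discussed here.

```python
# Maximum affordability figures quoted above.
years = [2015, 2016, 2017, 2018, 2019]
prices = [77.28, 65.94, 54.18, 41.16, 26.88]

# Ordinary least-squares line through the five points.
n = len(years)
mean_y = sum(years) / n
mean_p = sum(prices) / n
slope = sum((y - mean_y) * (p - mean_p) for y, p in zip(years, prices)) \
        / sum((y - mean_y) ** 2 for y in years)
intercept = mean_p - slope * mean_y

# Year at which the fitted affordability line reaches $0/barrel.
zero_year = -intercept / slope
print(f"trend: {slope:.2f} $/yr; fitted line hits $0 around {zero_year:.0f}")
```

The fitted slope is roughly minus $12.56 per year, putting the zero crossing around 2021.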

Shale / Light Tight Oil

U.S. LTO production will not start to decline because of a lack of drilling opportunities, lack of funds (the FED has their back), or because of high well decline rates. It will decline when it runs out of buyers for it. That will happen in the next couple of years.

It now requires about 74,000 BTU to extract, process, and distribute a gallon of petroleum. Only the lower API fractions have an energy content that is sufficient to provide a surplus of energy after their process energy is subtracted.

The energy dynamics imply that once conventional crude is depleted, alternative liquid fuels will not be able to sustain enough of the economy needed to produce them, or to provide for their own demand. Shale is a good example of this phenomenon. Most shale oil is incapable of driving the economy, and its only use is as a feedstock for other processes.

Civilization is likely to experience something resembling a brownout. Voltage drops until the motors grind to a halt and burn up. Imagine billions of people milling around trying to figure out why things are running slower and slower. Not much has yet fully stopped working, but nothing is working quite right!

Petroleum is providing just enough energy at this point in time to keep what is running going. If any additional load is placed on the system, like having to bail out the banks again, a good-sized war, or even some natural disaster, something is going to burn up. Maybe a big chunk of the health care system, the consumer economy, or the petroleum industry, but something will no longer be maintainable. The world no longer has the extra energy to expend on anything but what it is presently using. The danger is that when it starts it could cascade into a blackout!

For the average barrel of oil this may happen in 2022 — just 6 years away.

So by 2022 half the oil industry is likely to be out of business. Oil production won’t end — there will still be “above average” barrels produced, but dramatically less and less as we fall over the energy cliff, with the tail end around 2095.

  • The rapid end of the Oil Age began in 2012 and will be over within some 10 years. By 2022 the number of service stations in the US will have shrunk by 75%.
  • The critical parameter to consider is not the million barrels produced per day, but the net energy from oil per head of global population, since when this gets too close to nil we must expect complete social breakdown, globally.
  • We are in an unprecedented situation.  As stressed by Tainter, no previous civilization has ever managed to survive the kind of predicament we are in.  However, the people living in those civilizations were mostly rural and had a safety net, in that their energy source was 100% solar, photosynthesis for food, fiber and timber – they always could keep going even though it may have been under harsh conditions.  We no longer have such a safety net; our entire food systems are almost completely dependent on the net energy from oil that is in the process of dropping to the floor and our food supply systems cannot cope without it.

Or, for an easier read, look at this short summary of Dr. Alister Hamilton’s talk “Brexit, Oil and the World Economy” here, and view the hour-long video here on YouTube.


Louis Arnoux. July 12, 2016. Some reflections on the Twilight of the Oil Age – part I.


Since at least the end of 2014 there has been increasing confusion about oil prices, whether so-called “Peak Oil” has already happened or will happen in the future and when, matters of EROI (or EROEI) values for current energy sources and for alternatives, climate change and the phantasmatic 2°C warming limit, and the feasibility of shifting rapidly to renewable or sustainable sources of energy supply. Overall, it matters a great deal whether a reasonable time horizon to act is, say, 50 years, i.e. in the main the troubles that we are contemplating are taking place way past 2050, or if we are already in deep trouble and the timeframe to try and extricate ourselves is some 10 years. Answering this kind of question requires paying close attention to system boundary definitions and scrutinizing all matters taken for granted.

It took over 50 years for climatologists to be heard and for politicians to reach the Paris Agreement re climate change (CC) at the close of the COP21, late last year. As you no doubt can gather from the title, I am of the view that we do not have 50 years to agonise about oil. In the three sections of this post I will first briefly take stock of where we are oil wise; I will then consider how this situation calls upon us to do our utter best to extricate ourselves from the current prevailing confusion and think straight about our predicament; and in the third part I will offer a few considerations concerning the near term, the next ten years – how to approach it, what cannot work and what may work, and the urgency to act, without delay.

Part 1 – Alice looking down the end of the barrel

In his recent post, Ugo contrasted the views of the Doomstead Diner‘s readers with those of energy experts regarding the feasibility of replacing fossil fuels within a reasonable timeframe. In my view, the Doomstead’s guests had a much better sense of the situation than the “experts” in Ugo’s survey. To be blunt, along current prevailing lines we are not going to make it. I am not just referring here to “business-as-usual” (BAU) parties holding on for dear life to fossil fuels and nukes. I also include all current efforts at implementing alternatives and combating CC. Here is why.

The energy cost of system replacement

What a great number of energy technology specialists miss are the challenges of whole system replacement – moving from fossil-based to 100% sustainable over a given period of time. Of course, the prior question concerns the necessity or otherwise of whole system replacement. For those of us who have already concluded that this is an urgent necessity, if only due to Climate Change, no need to discuss this matter here. For those who maybe are not yet clear on this point, hopefully, the matter will become a lot clearer a few paragraphs down.

So coming back for now to whole system replacement, the first challenge most remain blind to is the huge energy cost of whole system replacement in terms of both the 1st principle of thermodynamics (i.e. how much net energy is required to develop and deploy a whole alternative system, while the old one has to be kept going and be progressively replaced) and also concerning the 2nd principle (i.e. the waste heat involved in the whole system substitution process). The implied issues are to figure out first how much total fossil primary energy is required by such a shift, in addition to what is required for ongoing BAU business and until such a time when any sustainable alternative has managed to become self-sustaining, and second to ascertain where this additional fossil energy may come from.

The end of the Oil Age is now

If we had a whole century ahead of us to transition, it would be comparatively easy. Unfortunately, we no longer have that leisure since the second key challenge is the remaining timeframe for whole system replacement. What most people miss is that the rapid end of the Oil Age began in 2012 and will be over within some 10 years. To the best of my knowledge, the most advanced material in this matter is the thermodynamic analysis of the oil industry taken as a whole system (OI) produced by The Hill’s Group (THG) over the last two years or so.

THG are seasoned US oil industry engineers led by B.W. Hill. I find their analysis elegant and rock hard. For example, one of its outputs concerns oil prices: over a 56-year period, its correlation with historical data is 0.995. In consequence, they began to warn in 2013 about the oil price crash that started in late 2014. In what follows I rely on THG’s report and my own work.

Three figures summarize the situation we are in rather well, in my view.

Figure 1 – End Game


For purely thermodynamic reasons net energy delivered to the globalized industrial world (GIW) per barrel by the oil industry (OI) is rapidly trending to zero. By net energy we mean here what the OI delivers to the GIW, essentially in the form of transport fuels, after the energy used by the OI for exploration, production, transport, refining and end products delivery have been deducted.

However, things break down well before reaching “ground zero”; i.e. within 10 years the OI as we know it will have disintegrated. Actually, a number of analysts from entities like Deloitte or Chatham House, reading financial tea leaves, are progressively reaching the same kind of conclusions.[1]

The Oil Age is finishing now, not in a slow, smooth, long slide down from “Peak Oil”, but in a rapid fizzling out of net energy. This is now combining with things like climate change and the global debt issues to generate what I call a “Perfect Storm” big enough to bring the GIW to its knees.

In an Alice world

Under the prevailing paradigm, there is no known way to exit from the Perfect Storm within the emerging time constraint (available time has shrunk by one order of magnitude, from 100 to 10 years). This is where I think that Doomstead Diner’s readers are guessing right. Many readers are no doubt familiar with the so-called “Red Queen” effect illustrated in Figure 2 – to have to run fast to stay put, and even faster to be able to move forward. The OI is fully caught in it.

Figure 2 – Stuck on a one track to nowhere


The top part of Figure 2 highlights that, due to declining net energy per barrel, the OI has to keep running faster and faster (i.e. pumping oil) to keep supplying the GIW with the net energy it requires. What most people miss is that due to that same rapid decline of net energy/barrel towards nil, the OI can’t keep “running” for much more than a few years – e.g. B.W. Hill considers that within 10 years the number of petrol stations in the US will have shrunk by 75%.

What people also neglect, depicted in the bottom part of Figure 2, is what I call the inverse Red Queen effect (1/RQ). Building an alternative whole system takes energy that to a large extent initially has to come from the present fossil-fueled system. If the shift takes place too rapidly, the net energy drain literally kills the existing BAU system.[2] The shorter the transition time the harder is the 1/RQ. 

I estimate the limit growth rate for the alternative whole system at 7% per year. So growth rates for solar and wind, well above 20% and in some cases over 60%, are not viable globally. However, the kind of growth rates, in the order of 35%, that would be required for a very short transition under the Perfect Storm time frame are even less viable. As the last part of Figure 2 suggests, there is a way out by focusing on current huge energy waste, but presently this is the road not taken.
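The inverse Red Queen effect can be caricatured with one line of arithmetic: if building a unit of alternative capacity costs the equivalent of several years of that unit's own output, then the annual energy drain on the existing system scales directly with the growth rate. The 8-year energy-payback figure below is my assumption for illustration only; the 7% and 35% growth rates are the ones discussed above.

```python
def energy_drain(growth_rate, capacity=1.0, payback_years=8.0):
    """Energy invested in new-build each year, expressed as a multiple
    of the existing fleet's annual output.

    payback_years is an assumed embodied-energy figure (years of a
    unit's own output needed to build it), not a number from the text.
    """
    return growth_rate * capacity * payback_years

# Above 1/payback_years (here 12.5%/yr) the build-out consumes more
# energy than the fleet itself produces, so the deficit must be drawn
# from the existing fossil system: the 1/RQ drain.
for g in (0.07, 0.35):
    print(f"growth {g:.0%}: new-build drain = {energy_drain(g):.2f} x fleet output")
```

Under these assumed numbers, 7% growth absorbs about half the fleet's own annual output, while 35% growth absorbs nearly three times it, which is the qualitative sense in which very fast transitions "kill" the system funding them.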

On the way to Olduvai

In my view, given that nearly everything within the GIW requires transport and that said transport is still about 94% dependent on oil-derived fuels, the rapid fizzling out of net energy from oil must be considered the defining event of the 21st century – it governs the operation of all other energy sources, as well as that of the entire GIW. Therefore the critical parameter to consider is not the absolute amount of oil produced (as even “peak oilers” do), such as millions of barrels per year, but net energy from oil per head of global population, since when this gets too close to nil we must expect complete social breakdown, globally.

The overall picture, as depicted in Figure 3, is that of the “Mother of all Senecas” (to use Ugo’s expression).  It presents net energy from oil per head of global population.[3] The Olduvai Gorge as a backdrop is a wink to Dr. Richard Duncan’s scenario (he used barrels of oil equivalent, which was a mistake) and to stress the dire consequences if we do reach the “bottom of the Gorge” – a kind of “postmodern hunter-gatherer” fate.

Oil has been in use for thousands of years, in limited fashion, at locations where it seeped naturally or where small wells could be dug by hand. Oil sands began to be mined industrially in 1745 at Merkwiller-Pechelbronn in northeastern France (the birthplace of Schlumberger). From such very modest beginnings to a peak in the early 1970s, the climb took over 220 years. The fall back to nil will have taken about 50 years.

The amazing economic growth in the three post-WWII decades was actually fueled by a 321% growth in net energy/head. The peak of 18 GJ/head around 1973 was in the order of some 40 GJ/head for those who actually had access to oil at the time, i.e. the industrialized fraction of the global population.

Figure 3 – The “Mother of all Senecas”

In 2012 the OI began to use more energy per barrel in its own processes (from oil exploration to transport fuel deliveries at the petrol stations) than what it delivers net to the GIW. We are now down below 4 GJ/head and dropping fast.

This is what is now actually driving the oil prices: since 2014, through millions of trade transactions (functioning as the “invisible hand” of the markets), the reality is progressively filtering that the GIW can only afford oil prices in proportion to the amount of GDP growth that can be generated by a rapidly shrinking net energy delivered per barrel, which is no longer much. Soon it will be nil. So oil prices are actually on a downtrend towards nil.

To cope, the OI has been cannibalizing itself since 2012. This trend is accelerating but cannot continue for very long. Even mainstream analysts have begun to recognize that the OI is no longer replenishing its reserves. We have entered fire-sale times, as shown by the recent announcements by Saudi Arabia (whose main field, Ghawar, is probably over 90% depleted) that it will sell part of Aramco and make a rapid shift out of a near-100% dependence on oil and towards “solar”.

Given what Figures 1 to 3 depict, it should be obvious that resuming growth along BAU lines is no longer doable, and that incurring ever more debt that can never be reimbursed is no longer a solution, not even short-term.

Part 2 – Inquiring into the appropriateness of the question

Let’s acknowledge it: the situation we are in is complex. As many commentators like to state, there is still plenty of oil, coal, and gas left “in the ground”. Since 2014, debates have been raging concerning the assumed “oil glut”, how low oil prices may go, how high prices may rebound as demand possibly picks up and the “glut” vanishes, and, in the face of all this, what may or may not happen regarding “renewables”. However, my Part 1 data have indicated that most of what’s left in terms of fossil fuels is likely to stay where it is, underground, because this is what thermodynamics dictates.

We can now venture a little bit further if we keep firmly in mind that the globalized industrial world (GIW), and by extension all of us, do not “live” on fossil resources but on net energy delivered by the global energy system; and if we also keep in mind that, in this matter, oil-derived transport fuels are the key since, without them, none of the other fossil and nuclear resources can be mobilized and the GIW itself can’t function.

In my experience, most often, when faced with such a broad spectrum of conflicting views, especially involving matters pertaining to physics and the social sciences, the lack of agreement is indicative that the core questions are not well formulated. Physicist David Bohm liked to stress: “In scientific inquiries, a crucial step is to ask the right question. Indeed each question contains presuppositions, largely implicit. If these presuppositions are wrong or confused, the question itself is wrong, in the sense that to try to answer it has no meaning. One has thus to inquire into the appropriateness of the question.”

Here it is important, in terms of system analysis, to differentiate between the global energy industry (GEI) and the GIW. The GEI bears the brunt of thermodynamics directly, and within the GEI the oil industry (OI) is key: as seen in Part 1, it is the first to reach the thermodynamic limit of resource extraction, and it conditions the viability of the GEI’s other components – in their present state and within the remaining timeframe, they can’t survive the OI’s eventual collapse. On the other hand, the GIW is impacted by thermodynamic decline with a lag, in the main because it is buffered by debt – so that by the time the impact of the thermodynamic collapse of the OI becomes undeniable it’s too late to do much about it.

At the micro level, debt can be “good” – e.g. a company borrows to expand and then reimburses its debt, etc… At the macro level, it can be, and has now become, lethal, as the global debt can no longer be reimbursed (I estimate the energy equivalent of current global debt, from states, businesses, and households to be in the order of some 10,700 EJ, while current world energy use is in the order of 554 EJ; it is no longer doable to “mind the gap”).
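The mismatch can be put in units of time using the two figures just quoted, as a minimal arithmetic check:

```python
# Author's estimates: energy equivalent of global debt vs. annual world energy use.
debt_energy_eq_ej = 10_700   # EJ (states, businesses, and households)
world_use_ej_per_year = 554  # EJ per year

years_of_use = debt_energy_eq_ej / world_use_ej_per_year
print(f"global debt ~ {years_of_use:.1f} years of current world energy use")
```

On these figures, repaying the debt in energy terms would take on the order of 19 years of total world energy use with nothing left over for anything else, which is the sense in which it "can no longer be reimbursed".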

Crude oil prices are dropping to the floor

Figure 4 – The radar signal for an Oil Pearl Harbor


In brief, the GIW has been living on ever growing total debt since around the time net energy from oil per head peaked in the early 1970s. The 2007-08 crisis was a warning shot. Since 2012, we have entered the last stage of this sad saga – when the OI began to use more energy within its own production chains than what it delivers to the GIW. From this point onwards retrieving the present financial fiat system is no longer doable.

This 2012 point marked a radical shift in price drivers.[4] Figure 4 combines the analyses of THG (The Hill’s Group) and mine. In late 2014 I saw the beginning of the oil price crash as a signal on a radar screen. Being well aware that EROEIs for oil and gas combined had already passed below the minimum threshold of 10:1, I understood that this crash was different from previous ones: prices were on their way right down to the floor. I then realized that THG had anticipated this trend months earlier, and that their analysis was robust and was being corroborated by the market there and then.

Until 2012, the determining price driver was the total energy cost incurred by the OI. Until then the GIW could more or less happily sustain the translation of these costs into high oil prices, around or above $100/bbl. This is no longer the case. Since 2012, the determining oil price driver is what the GIW can afford to pay in order to still be able to generate residual GDP growth (on borrowed time) under the sway of a Red Queen that is running out of thermodynamic “breath”. I call the process we are in an “Oil Pearl Harbor”, taking place in a kind of eerie slow motion. This is no longer retrievable. Within roughly ten years the oil industry as we know it will have disintegrated. The GIW is presently defenseless in the face of this threat.

The Oil Fizzle Dragon-King

Figure 5 – The “Energy Hand”


To illustrate how the GEI works I often compare its energy flows to the five fingers of the one hand: all are necessary and all are linked (Figure 5). Under the Red Queen, the GEI is progressively losing its “knuckles” one by one, like a kind of unseen leprosy – unseen as yet because of the debt “veil” that hides the progressive losses, and more fundamentally because of what I refer to at the bottom of Figure 5, namely that we are in what I call the Oil Fizzle Dragon-King.

A Dragon-King (DK) is a statistical concept developed by Didier Sornette of the Swiss Federal Institute of Technology, Zurich, and a few others to differentiate high probability and high impact processes and events from Black Swans, i.e. events that are of low probability and high impact. I call it the Oil Fizzle because what is triggering it is the very rapid fizzling out of net energy per barrel. It is a DK, i.e. a high probability, high impact unexpected process, purely because almost none of the decision-making elites is familiar with the thermodynamics of complex systems operating far from equilibrium; nor are they familiar with the actual social workings of the societies they live in. Researchers have been warning about the high likelihood of something like this at least since the works of the Meadows in the early 1970s.[5]

The Oil Fizzle DK is the result of the interaction between this net energy fizzling out, climate change, debt and the full spectrum of ecological and social issues that have been mounting since the early 1970s – as I noted on Figure 1, the Oil Fizzle DK is in the process of whipping up a “Perfect Storm” strong enough to bring the GIW to its knees. The Oil Pearl Harbor marks the Oil Fizzle DK getting into full swing.

To explain this further, with reference to Figure 5, oil represents some 33% of global primary energy use (BP data). Fossil fuels represented some 86% of total primary energy in 2014. However, coal, oil, and gas are not like three boxes neatly set side by side from which energy is supplied magically, as most economists would have it.

In the real world (i.e. outside the world economists live in), energy supply chains form networks, rather complex ones.  For example, it takes electricity to produce many products derived from oil, coal, and gas, while electricity is generated substantially from coal and gas, and so on.  More to the point, as noted earlier, because 94% of all transport is oil-based, oil stands at the root of the entire, complex, globalized set of energy networks.  Coal mining, transport, processing, and use depend substantially on oil-derived transport fuels; ditto for gas.[6]   The same applies to nuclear plants. So the thermodynamic collapse of the oil industry, that is now underway, not only is likely to be completed within some 10 years but is also in the process of triggering a falling domino effect (aka an avalanche, or in systemic terms, a self-organising criticality, a SOC).

Presently, and for the foreseeable future, we do not have substitutes for oil derived transport fuels that can be deployed within the required time frame and that would be affordable to the GIW. In other words, the GIW is falling into a thermodynamic trap, right now. As B. W. Hill recently noted, “The world is now spending $2.3 trillion per year more to produce oil than what is received when it is sold. The world is now losing a great deal of money to maintain its dependence on oil.”

In the longer run, the end effect of the Oil Fizzle DK is likely to be an abrupt decline of GHG emissions.

However, the danger I see is that meanwhile the GEI, and most notably the OI, is not going to just “curl up and die”. I think we are in a “die hard” situation. Since 2012, we have already been seeing what I call a Big Mad Scramble (BMS) by a wide range of GEI actors that try to keep going while they still can, flying blind into the ground. The eventual outcome is hard to avoid with a GEI operating at only about 12% energy efficiency, i.e. some 88% of current primary energy use is wasted. The GIW’s agony is likely to result in a big burst of GHG emissions while net energy fizzles out. The high danger is that the old quip will eventuate on a planetary scale: “the operation was successful but the patient died”… Hence my call for “inquiring into the appropriateness of the question” and for systemic thinking. We are in deep trouble. We can’t afford to get this wrong.

Part 3 – Standing slightly past the edge of the cliff

At least since the early 1970s and the Meadows’ work, we have known that the globalized industrial world (GIW) is on a self-destructive path, aka BAU (Business as usual). We now know that we are living through the tail end of this process, the end of the Oil Age, precipitating what I have called the Oil Fizzle Dragon-King, Seneca style, that is, after a slow, relatively smooth climb (aka “economic growth”) we are at the beginning of an abrupt fall down a thermodynamic cliff.

The chief issue is whole system change. This means thinking in whole systems terms where the thermodynamics of complex systems operating far from equilibrium is the key.  Understanding the situation requires moving repeatedly from the particulars, the details, to the whole system, improving our understanding of the whole and from this going back to the particulars, improving our understanding of them, going back to considering the whole, and so on.

Whole system replacement, i.e. going 100% renewable, requires a huge energy embodiment that is not feasible.  Having the “Energy Hand” in mind (Figure 5), where does this required energy come from in a context of sharp decline of net energy from oil and the Red Queen effect, and, concerning renewables, inverse Red Queen/cannibalization effects?

Solely considering the performances and cost of this or that alternative energy technology won’t suffice.  Short of addressing the complexities of whole system replacement, the situation we are in is some kind of “Apocalypse now”.  The chief challenge I see is thus how to shift safely, with minimal loss of life (substantial loss of life there will be; this has become unavoidable), from fossil-BAU (and nuclear) …

We currently have some 17 TW of power installed globally (mostly fossil with some nuclear), i.e. about 2.3 kW/head, but with some 4 billion people who at best are grossly energy stressed, many of whom have no access to electricity at all and only limited transport, in a context of an efficiency of global energy systems in the order of 12%.[9]

Going “green” and surviving it (i.e. avoiding the inverse Red Queen effect) means increasing our Energy Hand from 17 TW to 50 TW (as a rough order of magnitude), with efficiencies shifting from 12% to over 80%.
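Taking the figures above at face value (17 TW installed at roughly 12% end-use efficiency, versus a target of some 50 TW at over 80%), the jump in useful power can be made explicit. The population figure is my own round assumption:

```python
population = 7.4e9  # approximate mid-2016 world population (my assumption)

current_tw, current_eff = 17.0, 0.12  # installed power and efficiency today, per the text
target_tw, target_eff = 50.0, 0.80    # the "green" target sketched in the text

print(f"installed per head today : {current_tw * 1e12 / population / 1e3:.1f} kW/head")
print(f"useful power today       : {current_tw * current_eff:.1f} TW")
print(f"useful power at target   : {target_tw * target_eff:.1f} TW")
```

In useful-power terms the target is roughly a twenty-fold increase, which is why the efficiency shift, not just the added capacity, dominates the arithmetic.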

It should be clear that under this predicament something would have to give: some of us would get even more energy stressed and die, or, as the Chinese and Indians have been doing, we would use much more of the remaining fossil resources, but then this would accelerate global warming and many other nasties.

Whole system replacement (in a “do or die” mode) requires considering whole production chain networks, from mining the ores, through making the metals, cement, etc., to making the machines, to using them to produce the stuff we require to go 100% sustainable. Given the very short time window, we can’t afford to get the way out wrong – we have hardly enough time for one go at it.

Remaining time frame

We no longer have 35 years (say, up to around 2050).  We have at best 10 years, not to debate and agonize but to actually do, with the next three years being key.  The thermodynamics on this, summarized in Part 1, is rock hard.  This time-frame, combined with the Oil Pearl Harbor challenge and the inverse Red Queen constraints, means in my view that none of the current “doings” renewable-wise can cut it.

Weak links

Notwithstanding its apparent power, the GIW is in fact extremely fragile.  It embodies a number of very weak links in its networks.  I have highlighted the oil issue, an issue that defines the overall time frame for dealing with “Apocalypse now”.  In addition to that and to climate change, there are a few other challenges that have been variously put forward by a range of researchers in recent years, such as fresh water availability, massive soil degradation, trace pollutants, degradation of life in oceans (about 99% of life is aquatic), staple food threats (e.g. black stem rust, wheat blast, ground level ozone, etc.), loss of biodiversity and 6th mass extinction, all the way to Joseph Tainter’s work concerning the links between energy flows, power (in TW), complexity and overshoot to collapse.[11]  

These weak links are currently in the process of breaking or are about to break, the breaks forming a self-reinforcing avalanche (SOC) or Perfect Storm.  All have the same key time-frame of about 10 years as an order of magnitude for acting.  All require a fair “whack” of energy as a prerequisite to handling them (the “whack” being a flexible and elastic unit of something substantial that usually one does not have).

Cognitive failure

The “Brexit” saga is perhaps the latest large-scale demonstration of cognitive failure in a very long series.  That is to say, the failure on the part of decision-making elites to make use of available knowledge, experience, and expertise to tackle effectively challenges within the time-frame required to do so.

Cognitive failure is probably most blatant, but largely remaining unseen, concerning energy, the Oil Fizzle DK and matters of energy returns on energy investments (EROI or EROEI).  What we can observe is a triple failure of BAU, but also of most current “green” alternatives (Figure 7): (1) the BAU development trajectory since the 1950s failed; (2) there has been a failure to take heed of over 40 years of warnings; and (3) there has been a failure to develop viable alternatives.

Figure 8 – The necessity of very high EROIs

  • With an EROI of 1.1 :  1   at the production well we can pump oil out and look at it…that’s all – there is no spare energy to do anything else with it
  • 1.2 : 1    We can refine crude oil into diesel fuel…and that’s all
  • 1.3 : 1    We can dispatch the diesel to a service station…and that’s all
  • 3 : 1        We can run a truck with it as well as enough spare energy to build and maintain the truck, roads, and bridges…and that’s all
  • 5 : 1        We can put something in the truck and deliver it…and that’s all
  • 8 : 1        We can provide a living to the oil field worker, the refinery worker, the truck driver, and the farmer…and that’s all
  • 10 : 1      You may have minimal health care, some education…and that’s all
  • 20 : 1      You may have the basic set of consumer items such as refrigerators, stoves, radios, TV, a small car…and that’s all
  • 30 : 1      Or higher – you can have a prosperous lifestyle and the spare energy to deal with ecological issues and to invest in a secure energy future
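One way to see why this ladder is a cliff rather than a slope is the simple accounting identity that an EROI of E leaves a fraction 1 - 1/E of gross energy for society. (This is ordinary EROI bookkeeping, not the Hill's Group's full exergy analysis.)

```python
def net_fraction(eroi):
    """Fraction of gross energy left for society after the energy
    spent obtaining it (simple 1 - 1/EROI accounting)."""
    return 1.0 - 1.0 / eroi

# The net fraction barely moves between 30:1 and 10:1, then collapses:
for eroi in (30, 20, 10, 5, 3, 1.1):
    print(f"EROI {eroi:>4}:1 -> {net_fraction(eroi):.1%} net to society")
```

Between 30:1 and 10:1 the net fraction only falls from about 97% to 90%, but below 10:1 it drops off rapidly, reaching roughly 9% at 1.1:1, which is the thermodynamic cliff the ladder above describes.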

This is expanded from similar attempts by Jessica Lambert et al. to highlight what sliding down the thermodynamic cliff entails.  Charles Hall has shown that a production EROI of 10:1 corresponds roughly to an end-user EROI of 3.3:1 and is the bare minimum for an industrial society to function.[15]  In sociological terms, for 10:1 think of North Korea.  As shown on Figure 7, currently I know of no alternative, whether unconventional-fossil-based, nuclear, or “green” technologies, with production EROIs (i.e. equivalent to the well-head EROI for oil) above 20:1; most remain below 10:1.  I do think it feasible to go back above 30:1, in 100% sustainable fashion, but not along prevalent modes of technology development, social organization, and decision-making.

We are in an unprecedented situation.  As stressed by Tainter, no previous civilization has ever managed to survive the kind of predicament we are in.  However, the people living in those civilizations were mostly rural and had a safety net, in that their energy source was 100% solar, photosynthesis for food, fiber and timber – they always could keep going even though it may have been under harsh conditions.  We no longer have such a safety net; our entire food systems are almost completely dependent on the net energy from oil that is in the process of dropping to the floor and our food supply systems cannot cope without it.

Arnoux responds to readers’ comments:

It is important not to confuse EROI (or EROEI) at the wellhead with that of the whole system up to the end-users. The Hill’s Group has shown that the EROI as they define it passed below the critical viability level of 10:1 around 2010, and that on current dynamics it will be about 6.89:1 by circa 2030, by which time no net energy per barrel will reach end-users (assuming there is still an oil industry at that point, which a number of us consider most unlikely, at least not the oil industry as we presently know it). Net energy here means what is available to end-users, typically to go from A to B, after the energy lost as waste heat (2nd law) and the energy used by the oil industry have been fully deducted; as such it cannot be run backward directly to evaluate an EROI.

We are considering the whole system, from oil exploration to end-users. The matter is that relative to the early stages in the development of the oil industry, the total energy costs of producing the energy reaching end-users has been increasing steadily barrel after barrel and we are now getting close to a point when no significant energy will reach end-users. We expect that the industry will breakdown well before this critical point is reached.

The idea of collapse remains taboo in numerous circles and, understandably, is rather unpalatable. However, awareness of the dangers appears to be progressing rapidly, notably among very wealthy people, who now constitute a booming market segment for underground luxury bunkers where, as the marketing goes, they could survive five years without returning to the surface in case of heavy turmoil…

In energy matters inequality is prevalent. Some regions are likely to retain access to residual net energy from oil longer than others, to the detriment of those others, and this isn’t shaping up as a nice and smooth affair. Prof. Michael Klare has spoken of a global “30 Year War” (Klare, Michael, 2011, “The New Thirty Years War”, in European Energy Review, 5 September). However, war requires a lot of oil-based energy, so war is likely to accelerate thermodynamic collapse dynamics. For example, in the Middle East a number of researchers have noted how years of drought and the displacement of about 1 million farmers to Syrian cities contributed to the present tragedy. Few realize, however, that another factor contributing to turmoil in the region is the competition between two sets of pipeline projects and related political and military interests, one focused on Iran and the other on KSA, to link those areas to the Mediterranean. It is not possible to read a crystal ball at the regional level. It is likely that if mistakes can be made and atrocities committed, they will take place… All in all, however, I tend to agree with B. W. Hill that globally the tail end of the Oil Fizzle process is most unlikely to extend beyond 2030.

You ask, “how are they to be convinced to abandon their investments prior to catastrophic collapse?” It’s clear to me that they are not going to be convinced; there is no point in trying, and above all no time left to do so. I have come to think that those who cling to BAU for dear life do not have much prospect of lasting long, simply because they are no longer within a viable thermodynamic space. On the other hand, there are millions currently innovating and doing their utmost to stay within, or come back to, such a space. They do so mostly flying blind, mostly without enquiring into the appropriateness of the questions they ask, which makes their lives a lot harder and riskier. As a result many will end up outside the viable space and vanish; however, given the numbers, I think that statistically quite a number will manage to live within that space and evolve new ways, probably enough for one or more new kind(s) of civilization(s).

For over a century the ratio of gold to oil has remained in a narrow range of 1g to 6g of gold per barrel of sweet crude. Gold, being an age-old monetary means that goes by weight and is not subject to inflation and other vagaries, can be used as a fixed metric not amenable to much manipulation (as fiat currencies and price indices are). This ratio is presently close to 1.04g/bbl. However, as we have seen, the GIW does not “live” on crude but on net energy from crude, essentially in the form of transport fuels. Currently the net energy that reaches end-users is about 16% of the gross energy in an average barrel of sweet crude (it was about 70% in 1920). This gives a present shadow price of about US$277/bbl, a highly unpalatable figure for the GIW’s operations (or 6.5g of gold/bbl). Of course, as net energy keeps dropping, a time will come, very soon, when after a burst the shadow price also drops to the floor (x times zero equals zero). Put another way, gold and oil began to diverge in 2014. All currencies have been dropping against gold since 1971. The stable gold-oil relationship is breaking down because the fundamental was never the crude barrel itself but the amount of net energy able to “power growth”; since 2012 that has been fizzling out.

I am saying that when 1 barrel of sweet crude trades at US$44 (actually, as I write, it’s at about $43 and a bit), the GIW has access to only 16% of the energy it contains, so the net financial impact for the GIW as a whole is, yes, $277/bbl equivalent. The GIW can’t make money with the full barrel, only with the 16% residual, so it all happens as if it were attempting to “grow” at a basic cost of $277/bbl, which these days is quite a challenge. Even adjusting for inflation, at the time of the 1978-79 crisis (based on BP inflation-adjusted price data), with some 56% net energy available to end-users, the shadow price was around US$188/bbl equivalent, and back then the situation was dire. In New Zealand we had carless days… So now at $277/bbl? The main difference I see is that now the GIW lives fully on debt, with central banks “printing money” like there is no tomorrow, which is probably correct: there is no tomorrow for the GIW in this fashion. We are at the stage where thermodynamics comes home to roost.
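[ A back-of-envelope reconstruction of Arnoux’s “shadow price” arithmetic, my own sketch rather than his exact model: if only a fraction f of a barrel’s gross energy reaches end-users, the effective cost per barrel of useful energy is the market price divided by f.

```python
# Reconstruction of the shadow-price arithmetic (illustrative, not Arnoux's model):
# effective cost per barrel of *net* energy = market price / net-energy fraction.
def shadow_price(market_price_usd, net_fraction):
    """Effective price per barrel of net energy delivered to end-users."""
    return market_price_usd / net_fraction

# 2016: ~$44/bbl crude with ~16% net energy reaching end-users.
print(round(shadow_price(44.0, 0.16)))   # ~275, close to the $277/bbl quoted

# 1978-79: ~56% net energy. The $188/bbl shadow price quoted implies an
# inflation-adjusted crude price of roughly $105/bbl (188 x 0.56) --
# an inference on my part, not a figure from the text.
print(round(shadow_price(105.0, 0.56)))
```

The small gap between 275 and the quoted 277 presumably reflects a slightly different input price or net-energy fraction in the original calculation. ]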

In practice, no one but businesses from the oil industry buys crude oil. End-users buy transport fuels, plastics, etc. Now, in the main, transport fuels are used to generate economic activity. No one can generate as much economic activity per barrel now, with only 16% net energy available to do so, as in, say, 1920, when about 70% was available. So, after quite a bit of speculation up and down by traders who by and large have no clue what is going on, the price of crude progressively adjusts in proportion to the economic activity that can be generated downstream. The globalised industrial world (GIW), taken as a whole, cannot afford to pay more for its fuel than the amount of economic “growth” it can generate with it, not for long anyway. The consequence, however, is that the GIW decelerates in proportion, which is what we are observing.



[1] See for example, Stevens, Paul, 2016, International Oil Companies: The Death of the Old Business Model, Research Paper, Energy, Environment and Resources, Chatham House; England, John W., 2016, Short of capital? Risk of underinvestment in oil and gas is amplified by competing cash priorities, Deloitte Center for Energy Solutions, Deloitte LLP. The Bank of England recently commented: “The embattled crude oil and natural gas industry worldwide has slashed capital spending to a point below the minimum required levels to replace reserves — replacement of proved reserves in the past constituted about 80 percent of the industry’s spending; however, the industry has slashed its capital spending by a total of about 50 percent in 2015 and 2016. According to Deloitte’s new study [referred to above], this underinvestment will quickly deplete the future availability of reserves and production.”

[2] This effect is also referred to as “cannibalizing”. See for example, J. M. Pearce, 2009, Optimising Greenhouse Gas Mitigation Strategies to Suppress Energy Cannibalism, 2nd Climate Change Technology Conference, May 12-15, Hamilton, Ontario, Canada. However, in the oil industry, and more generally the mining industry, cannibalism usually refers to what companies do when they are reaching the end of exploitable reserves: they cut down on maintenance, sell assets at a discount, or acquire assets from companies gone bankrupt, in order to try to survive a bit longer. Presently there is much asset disposal going on in the shale oil and gas patches, and likewise among majors: Lukoil, BP, Shell, Chevron, etc. Between spending cuts and asset disposals the amounts involved are in the $1 to $2 trillion range.

[3] This graph is based on THG’s net energy data, BP oil production data and UN demographic data.

[4] As THG have conclusively clarified, see

[5] The Meadows’ original work has been amply corroborated over the ensuing decades. See for example, Donella Meadows, Jorgen Randers, and Dennis Meadows, 2004, A Synopsis: Limits to Growth: The 30-Year Update, The Donella Meadows Institute; Turner, Graham, 2008, A Comparison of the Limits to Growth with Thirty Years of Reality, Socio-Economics and the Environment in Discussion, CSIRO Working Paper Series 2008-09; Hall, Charles A. S. and Day, John W, Jr, 2009, “Revisiting the Limits to Growth After Peak Oil” in American Scientist, May-June; Vuuren, D.P. van and Faber, Albert, 2009, Growing within Limits, A Report to the Global Assembly 2009 of the Club of Rome, Netherlands Environmental Assessment Agency; and Turner, Graham, M., 2014, Is Global Collapse Imminent? An Updated Comparison of The Limits to Growth with Historical Data, MSSI Research Paper No. 4, Melbourne Sustainable Society Institute, The University of Melbourne.

[6] Although there is a drive to use more and more liquefied natural gas for gas tankers and ordinary ship fuel bunkering.

[7] Dellingpole, James, 2013, “The dirty secret of Britain’s power madness: Polluting diesel generators built in secret by foreign companies to kick in when there’s no wind for turbines – and other insane but true eco-scandals”, in The Daily Mail, 13 July.

[8] As another example, Axel Kleidon has shown that extracting energy from wind (as well as from waves and ocean currents) on any large scale would have the effect of reducing overall free energy usable by humankind (free in the thermodynamic sense, due to the high entropy levels that these technologies do generate, and as opposed to the direct harvesting of solar energy through photosynthesis, photovoltaics and thermal solar, that instead do increase the total free energy available to humankind) – see Kleidon, Axel, 2012, How does the earth system generate and maintain thermodynamic disequilibrium and what does it imply for the future of the planet?, Max Planck Institute for Biogeochemistry, published in Philosophical Transaction of the Royal Society A,  370, doi: 10.1098/rsta.2011.0316.

[9] E.g. Murray and King, Nature, 2012.

[10] This label is a wink to the Sea People who got embroiled in the abrupt end of the Bronze Age some 3,200 years ago, in that same part of the world currently bitterly embroiled in atrocious fighting and terrorism, aka MENA.

[11] Tainter, Joseph, 1988, The Collapse of Complex Societies, Cambridge University Press; Tainter, Joseph A., 1996, “Complexity, Problem Solving, and Sustainable Societies”, in Getting Down to Earth: Practical Applications of Ecological Economics, Island Press, and Tainter, Joseph A. and Crumley, Carole, “Climate, Complexity and Problem Solving in the Roman Empire” (p. 63), in Costanza, Robert, Graumlich, Lisa J., and Steffen, Will, editors, 2007, Sustainability or Collapse, an Integrated History and Future of People on Earth, The MIT Press, Cambridge, Massachusetts and London, U.K., in cooperation with Dahlem University Press.

[12] See for example Armour, Kyle, 2016, “Climate sensitivity on the rise”, 27 June.

[13] For a good overview, see Spratt, David, 2016, Climate Reality Check, March.

[14] For example, Jacobson, Mark M. and Delucchi, Mark A., 2009, “A path to Sustainability by 2030”, in Scientific American, November.

[15] Hall, Charles A. S. and Klitgaard, Kent A., 2012, Energy and the Wealth of Nations, Springer; Hall, Charles A. S., Balogh, Stephen, and Murphy, David J. R., 2009, “What is the Minimum EROI that a Sustainable Society Must Have?” in Energies, 2, 25-47; doi:10.3390/en20100025. See also Murphy, David J., 2014, “The implications of the declining energy return on investment of oil production” in Philosophical Transactions of the Royal Society A, 372: 20130126.

[16] Joseph Tainter, 2011, “Energy, complexity, and sustainability: A historical perspective”, Environmental Innovation and Societal Transitions, Elsevier


Peak Uranium by Ugo Bardi from Extracted: How the Quest for Mineral Wealth Is Plundering the Planet

Figure 1. Cumulative uranium consumption by IPCC model 2015-2100 versus measured and inferred uranium resources

[ Figure 1 shows that the next IPCC report counts heavily on nuclear power to keep warming below 2.5°C.  The black line represents how many million tonnes of reasonably assured and inferred resources under $260 per kg remain (2016 IAEA Red Book). Clearly most of the IPCC models are unrealistic.  The IPCC greatly exaggerates the amounts of oil and coal reserves as well. Source: David Hughes (private communication)

This is an extract of Ugo Bardi’s must read “Extracted” about the limits of production of uranium.

Many well-meaning citizens favor nuclear power because it doesn’t emit greenhouse gases.  The problem is that the Achilles heel of civilization is our dependence on trucks of all kinds, which run on diesel fuel because diesel engines transformed our civilization with their ability to do heavy work better than steam, gasoline, or any other kind of engine.  Trucks keep the supply chains going that every person and business on earth depends on, from food to the materials and construction of the roads they run on, as well as mining, agriculture, construction, and logging.

Nuclear power plants are not a solution, since trucks can’t run on electricity, so anything that generates electricity is not a solution; nor is it likely that the electric grid can ever be 100% renewable (read “When Trucks Stop Running”; this can’t be explained in a sound-bite).  And we certainly aren’t going to be able to replace a billion diesel trucks and pieces of equipment with something else by the time the energy crunch hits.  There is nothing else.

Alice Friedemann  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report ]

Bardi, Ugo. 2014. Extracted: How the Quest for Mineral Wealth Is Plundering the Planet. Chelsea Green Publishing.

Although there is a rebirth of interest in nuclear energy, there is still a basic problem: uranium is a mineral resource that exists in finite amounts.

Even as early as the 1950s it was clear that the known uranium resources were not sufficient to fuel the “atomic age” for a period longer than a few decades.

That gave rise to the idea of “breeding” fissile plutonium fuel from the more abundant, non-fissile isotope 238 of uranium. It was a very ambitious idea: fuel the industrial system with an element that doesn’t exist in measurable amounts on Earth but would be created by humans expressly for their own purposes. The concept gave rise to dreams of a plutonium-based economy. This ambitious plan was never really put into practice, though, at least not in the form that was envisioned in the 1950s and ’60s. Several attempts were made to build breeder reactors in the 1970s, but the technology was found to be expensive, difficult to manage, and prone to failure. Besides, it posed unsolvable strategic problems in terms of the proliferation of fissile materials that could be used to build atomic weapons. The idea was thoroughly abandoned in the 1970s, when the US Senate enacted a law that forbade the reprocessing of spent nuclear fuel.

A similar fate was encountered by another idea that involved “breeding” a nuclear fuel from a naturally existing element—thorium. The concept involved transforming the 232 isotope of thorium into the fissile 233 isotope of uranium, which then could be used as fuel for a nuclear reactor (or for nuclear warheads). The idea was discussed at length during the heyday of the nuclear industry, and it is still discussed today; but so far nothing has come of it, and the nuclear industry is still based on mineral uranium as fuel.

Today, the production of uranium from mines is insufficient to fuel the existing nuclear reactors. The gap between supply and demand for mineral uranium was as large as almost 50% from 1995 to 2005, though it has gradually narrowed in recent years.

The U.S. mined 370,000 metric tons over the past 50 years, peaking in 1981 at 17,000 tons/year.  Europe peaked in the 1990s after extracting 460,000 tons.  Today nearly all of the 21,000 tons/year needed to keep European nuclear plants operating is imported.

The European mining cycle allows us to determine how much of the originally estimated uranium reserves could be extracted versus what actually happened before it cost too much to continue. Remarkably in all countries where mining has stopped it did so at well below initial estimates (50 to 70%). Therefore it’s likely ultimate production in South Africa and the United States can be predicted as well.

Table 1. European uranium mining cycles: actual extraction versus originally estimated reserves.

The Soviet Union and Canada each mined 450,000 tons. By 2010 global cumulative production was 2.5 million tons.  Of this, 2 million tons has been used, and the military had most of the remaining half a million tons.

The most recent data available show that mineral uranium now accounts for about 80% of demand.  The gap is filled by uranium recovered from military stockpiles and from the dismantling of old nuclear warheads.

This turning of swords into plows is surely a good idea, but old nuclear weapons and military stocks are a finite resource and cannot be seen as a definitive solution to the problem of insufficient supply. With the present stasis in uranium demand, it is possible that the production gap will be closed in a decade or so by increased mineral production. However, prospects are uncertain, as explained in “The End of Cheap Uranium.” In particular, if nuclear energy were to see a worldwide expansion, it is hard to see how mineral production could satisfy the increasing uranium demand, given the gigantic investments that would be needed, which are unlikely to be possible in the present economically challenging times.

At the same time, the effects of the 2011 incident at the Fukushima nuclear power plant are likely to negatively affect the prospects of growth for nuclear energy production, and with the concomitant reduced demand for uranium, the surviving reactors may have sufficient fuel to remain in operation for several decades.

It’s true that there are large quantities of uranium in the Earth’s crust, but only a limited number of deposits are concentrated enough to be profitably mined. If we tried to extract the less concentrated deposits, the mining process would require far more energy than the mined uranium could ultimately produce [negative EROI].

Modeling Future Uranium Supplies


Table 2. Uranium supply and demand to 2030


Michael Dittmar used historical data for countries and individual mines to create a model projecting how much uranium will likely be extracted from existing reserves in the years to come. The model is purely empirical and is based on the assumption that mining companies, when planning the extraction profile of a deposit, plan their operations to coincide with the average lifetime of the expensive equipment and infrastructure it takes to mine uranium—about a decade.

Gradually the extraction becomes more expensive as some equipment has to be replaced and the least costly resources are mined. As a consequence, both extraction and profits decline. Eventually the company stops exploiting the deposit and the mine closes. The model depends on both geological and economic constraints, but the fact that it has turned out to be valid for so many past cases shows that it is a good approximation of reality.

This said, the model assumes the following points:

  • Mine operators plan to operate the mine at a nearly constant production level on the basis of detailed geological studies and to manage extraction so that the plateau can be sustained for approximately 10 years.
  • The total amount of extractable uranium is approximately the achieved (or planned) annual plateau value multiplied by 10.

Applying this model to well-documented mines in Canada and Australia yields remarkably accurate results. For instance, in one case the model predicted a total production of 319 ± 24 kilotons, very close to the 310 kilotons actually produced. So we can be reasonably confident that it can be applied to today’s larger currently operating and planned uranium mines. Considering that achieved plateau production in past operations was usually smaller than planned, this model probably overestimates future production.

Table 2 summarizes the model’s predictions for future uranium production, comparing those findings against forecasts from other groups and against two different potential future nuclear scenarios.

As you can see, the forecasts obtained by this model indicate substantial supply constraints in the coming decades—a considerably different picture from that presented by the other models, which predict larger supplies.

The WNA’s 2009 forecast differs from our model mainly by assuming that existing and future mines will have a lifetime of at least 20 years. As a result, the WNA predicts a production peak of 85 kilotons/year around the year 2025, about 10 years later than in the present model, followed by a steep decline to about 70 kilotons/year in 2030. Despite being relatively optimistic, the forecast by the WNA shows that the uranium production in 2030 would not be higher than it is now. In any case, the long deposit lifetime in the WNA model is inconsistent with the data from past uranium mines. The 2006 estimate from the EWG was based on the Red Book 2005 RAR (reasonably assured resources) and IR (inferred resources) numbers. The EWG calculated an upper production limit based on the assumption that extraction can be increased according to demand until half of the RAR or at most half of the sum of the RAR and IR resources are used. That led the group to estimate a production peak around the year 2025.

Assuming all planned uranium mines are opened, annual mining will increase from 54,000 tons/year to a maximum of 58 ± 4 thousand tons/year in 2015. [ Bardi wrote this before the 2013 and 2014 figures were known: 2013 was 59,673 tons (the highest total) and 2014 was 56,252 tons. ]

Declining uranium production will make it impossible to obtain a significant increase in electrical power from nuclear plants in the coming decades.
