Bodhi Paul Chefurka: Carrying capacity, overshoot and sustainability

Preface. This is a post written by Bodhi Paul Chefurka in 2013 at his blog here. I don’t understand his ultimate sustainable carrying capacity based on hunter-gatherers. Why will agriculture go away? But the rest of the article is spot on.

Alice Friedemann, author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report


Ever since the writing of Thomas Malthus in the early 1800s, and especially since Paul Ehrlich’s publication of “The Population Bomb”  in 1968, there has been a lot of learned skull-scratching over what the sustainable human population of Planet Earth might “really” be over the long haul.

This question is intrinsically tied to the issue of ecological overshoot so ably described by William R. Catton Jr. in his 1980 book “Overshoot: The Ecological Basis of Revolutionary Change”. How much have we already pushed our population and consumption levels above the long-term carrying capacity of the planet?

In this article I outline my current thoughts on carrying capacity and overshoot, and present five estimates for the size of a sustainable human population.

Carrying Capacity

“Carrying capacity” is a well-known ecological term that has an obvious and fairly intuitive meaning: “The maximum population size of a species that the environment can sustain indefinitely, given the food, habitat, water and other necessities available in the environment.”

Unfortunately that definition becomes more nebulous and controversial the closer you look at it, especially when we are talking about the planetary carrying capacity for human beings. Ecologists will claim that our numbers have already well surpassed the planet’s carrying capacity, while others (notably economists and politicians…) claim we are nowhere near it yet!
This confusion may arise because we tend to conflate two very different understandings of the phrase “carrying capacity”. For this discussion I will call these the “subjective” and “objective” views of carrying capacity.
The subjective view is carrying capacity as seen by a member of the species in question. Rather than coming from a rational, analytical assessment of the overall situation, it is an experiential judgement.  As such it tends to be limited to the population of one’s own species, as well as having a short time horizon – the current situation counts a lot more than some future possibility.  The main thing that matters in this view is how many of one’s own species will be able to survive to reproduce. As long as that number continues to rise, we assume all is well – that we have not yet reached the carrying capacity of our environment.

From this subjective point of view humanity has not even reached, let alone surpassed the Earth’s overall carrying capacity – after all, our population is still growing.  It’s tempting to ascribe this view mainly to neoclassical economists and politicians, but truthfully most of us tend to see things this way.  In fact, all species, including humans, have this orientation, whether it is conscious or not.

Species tend to keep growing until outside factors such as disease, predators, food or other resource scarcity – or climate change – intervene.  These factors define the “objective” carrying capacity of the environment.  This objective view of carrying capacity is the view of an observer who adopts a position outside the species in question. It’s the typical viewpoint of an ecologist looking at the reindeer on St. Matthew Island, or at the impact of humanity on other species and its own resource base.

This is the view that is usually assumed by ecologists when they use the naked phrase “carrying capacity”, and it is an assessment that can only be arrived at through analysis and deductive reasoning.  It’s the view I hold, and its implications for our future are anything but comforting.

When a species bumps up against the limits posed by the environment’s objective carrying capacity, its population begins to decline. Humanity is now at the uncomfortable point when objective observers have detected our overshoot condition, but the population as a whole has not recognized it yet. As we push harder against the limits of the planet’s objective carrying capacity, things are beginning to go wrong.  More and more ordinary people are recognizing the problem as its symptoms become more obvious to casual onlookers. The problem is, of course, that we’ve already been above the planet’s carrying capacity for quite a while.
One typical rejoinder to this line of argument is that humans have “expanded our carrying capacity” through technological innovation.  “Look at the Green Revolution!  Malthus was just plain wrong.  There are no limits to human ingenuity!”  When we say things like this, we are of course speaking from a subjective viewpoint. From this experiential, human-centric point of view, we have indeed made it possible for our environment to support ever more of us. This is the only view that matters at the biological, evolutionary level, so it is hardly surprising that most of our fellow species-members are content with it.

The problem with that view is that every objective indicator of overshoot is flashing red.  From the climate change and ocean acidification that flows from our smokestacks and tailpipes, through the deforestation and desertification that accompany our expansion of human agriculture and living space, to the extinctions of non-human species happening in the natural world, the planet is urgently signalling an overload condition.

Humans have an underlying urge towards growth, an immense intellectual capacity for innovation, and a biological inability to step outside our chauvinistic, anthropocentric perspective.  This combination has made it inevitable that we would land ourselves and the rest of the biosphere in the current insoluble global ecological predicament.


When a population surpasses its carrying capacity it enters a condition known as overshoot.  Because the carrying capacity is defined as the maximum population that an environment can maintain indefinitely, overshoot must by definition be temporary.  Populations always decline to (or below) the carrying capacity.  How long they stay in overshoot depends on how many stored resources there are to support their inflated numbers.  Resources may be food, but they may also be any resource that helps maintain their numbers.  For humans one of the primary resources is energy, whether it is tapped as flows (sunlight, wind, biomass) or stocks (coal, oil, gas, uranium etc.).  A species usually enters overshoot when it taps a particularly rich but exhaustible stock of a resource.  Like fossil fuels, for instance…
Population growth in the animal kingdom tends to follow a logistic curve.  This is an S-shaped curve that starts off low when the species is first introduced to an ecosystem, at some later point rises very fast as the population becomes established, and then finally levels off as the population saturates its niche. 
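The S-shaped curve described above is the classic logistic model. The sketch below is purely illustrative, not a demographic estimate: the growth rate `r`, carrying capacity `K`, and starting population are arbitrary assumed values chosen only to show the shape of the curve.

```python
# Logistic growth: each step, the population grows by r * P * (1 - P / K),
# so growth is near-exponential when P is small and levels off as P nears K.
def logistic_trajectory(p0, r, K, steps):
    """Return the population at each time step under discrete logistic growth."""
    pops = [p0]
    for _ in range(steps):
        p = pops[-1]
        pops.append(p + r * p * (1 - p / K))
    return pops

# Illustrative values: start tiny, grow at 10% per step toward a niche of 1000.
traj = logistic_trajectory(p0=1.0, r=0.1, K=1000.0, steps=200)
```

Plotting `traj` gives the S-curve: slow establishment, a steep middle, and a plateau as the population saturates its niche near `K`.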
Humans have been pushing the envelope of our logistic curve for much of our history. Our population rose very slowly over the last couple of hundred thousand years, as we gradually developed the skills we needed in order to deal with our varied and changeable environment, particularly language, writing and arithmetic. As we developed and disseminated those skills our ability to modify our environment grew, and so did our growth rate.
If we had not discovered the stored energy resource of fossil fuels, our logistic growth curve would probably have flattened out some time ago, and we would be well on our way to achieving a balance with the energy flows in the world around us, much like all other species do.  Our numbers would have settled down to oscillate around a much lower level than today, much as hunter-gatherer populations probably did tens of thousands of years ago.

Unfortunately, our discovery of the energy potential of coal created what mathematicians and systems theorists call a “bifurcation point”, better known as a tipping point. This is a point at which some influence pushes a system off one path and onto another.  The unfortunate fact of the matter is that bifurcation points are generally irreversible.  Once past such a point, the system can’t go back to a point before it.

Given the impact that fossil fuels had on the development of world civilization, their discovery was clearly such a fork in the road.  Rather than flattening out politely as other species’ growth curves tend to do, ours kept on rising.  And rising, and rising. 

What is a sustainable population level?

Now we come to the heart of the matter.  Okay, we all accept that the human race is in overshoot.  But how deep into overshoot are we?  What is the carrying capacity of our planet?  The answers to these questions, after all, define a sustainable population.

Not surprisingly, the answers are quite hard to tease out.  Various numbers have been put forward, each with its set of stated and unstated assumptions – not the least of which is the assumed standard of living (or consumption profile) of the average person.  For those familiar with Ehrlich and Holdren’s I=PAT equation, if “I” represents the environmental impact of a sustainable population, then for any population value “P” there is a corresponding value for “AT”, the level of Activity and Technology that can be sustained for that population level.  In other words, the higher our standard of living climbs, the lower our population level must fall in order to be sustainable. This is discussed further in an earlier article on Thermodynamic Footprints.

To get some feel for the enormous range of uncertainty in sustainability estimates we’ll look at five assessments, each of which leads to a very different outcome.  We’ll start with the most optimistic one, and work our way down the scale.

The Ecological Footprint Assessment

The concept of the Ecological Footprint was developed in 1992 by William Rees and Mathis Wackernagel at the University of British Columbia in Canada.

The ecological footprint is a measure of human demand on the Earth’s ecosystems. It is a standardized measure of demand for natural capital that may be contrasted with the planet’s ecological capacity to regenerate. It represents the amount of biologically productive land and sea area necessary to supply the resources a human population consumes, and to assimilate associated waste. As it is usually published, the value is an estimate of how many planet Earths it would take to support humanity with everyone following their current lifestyle.

It has a number of fairly glaring flaws that cause it to be hyper-optimistic. The “ecological footprint” is basically for renewable resources only. It includes a theoretical but underestimated factor for non-renewable resources.  It does not take into account the unfolding effects of climate change, ocean acidification or biodiversity loss (i.e. species extinctions).  It is intuitively clear that no number of “extra planets” would compensate for such degradation.

Still, the estimate as of the end of 2012 is that our overall ecological footprint is about “1.7 planets”.  In other words, there is at least 1.7 times too much human activity for the long-term health of this single, lonely planet.  To put it yet another way, we are 70% into overshoot.

It would probably be fair to say that by this accounting method the sustainable population would be (7 / 1.7) or about four billion people at our current average level of affluence.  As you will see, other assessments make this estimate seem like a happy fantasy.
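The arithmetic behind that estimate is a single division; a minimal check using the article’s own figures:

```python
population = 7.0e9     # world population at the time of writing
planets_used = 1.7     # ecological footprint estimate, end of 2012

# If current activity needs 1.7 planets, one planet sustains 1/1.7 of it:
sustainable = population / planets_used
print(round(sustainable / 1e9, 1))  # -> 4.1 (billion people)
```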

The Fossil Fuel Assessment

The main accelerant of human activity over the last 150 to 200 years has been fossil fuel.  Before 1800 there was very little fossil fuel in general use, with most energy being derived from wood, wind, water, animal and human power. The following graph demonstrates the precipitous rise in fossil fuel use since then, and especially since 1950.

This information was the basis for my earlier Thermodynamic Footprint analysis.  That article investigated the influence of technological energy (87% of which comes from fossil fuels) on human planetary impact, in terms of how much it multiplies the effect of each “naked ape”. The following graph illustrates the multiplier at different points in history:

Fossil fuels have powered the increase in all aspects of civilization, including population growth.  The “Green Revolution” in agriculture that was kicked off by Nobel laureate Norman Borlaug in the late 1940s was largely a fossil fuel phenomenon, relying on mechanization, powered irrigation and synthetic fertilizers derived from fossil fuels. This enormous increase in food production supported a swift rise in population numbers, in a classic ecological feedback loop: more food (supply) => more people (demand) => more food => more people etc…

Over the core decades of the Green Revolution from 1950 to 1980 the world population almost doubled, from fewer than 2.5 billion to over 4.5 billion.  The average population growth over those three decades was 2% per year.  Compare that to 0.5% from 1800 to 1900; 1.0% from 1900 to 1950; and 1.5% from 1980 until now:

This analysis makes it tempting to conclude that a sustainable population might look similar to the situation in 1800, before the Green Revolution, and before the global adoption of fossil fuels: about 1 billion people living on about 5% of today’s global average energy consumption.

It’s tempting (largely because it seems vaguely achievable), but unfortunately that number may still be too high.  Even in 1800 the signs of human overshoot were clear, if not well recognized:  there was already widespread deforestation through Europe and the Middle East; and desertification had set into the previously lush agricultural zones of North Africa and the Middle East.

Not to mention that if we did start over with “just” one billion people, an annual growth rate of a mere 0.5% would put the population back over seven billion in just 400 years.  Unless the growth rate can be kept down very close to zero, such a situation is decidedly unsustainable.
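That 400-year figure follows from ordinary compound growth; a quick check:

```python
pop = 1.0e9    # a hypothetical post-decline "restart" population
rate = 0.005   # a mere 0.5% annual growth
years = 400

# Compound growth: population multiplies by (1 + rate) each year.
final = pop * (1 + rate) ** years
print(round(final / 1e9, 2))  # -> 7.35 (billion), back over seven billion
```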

The Population Density Assessment

There is another way to approach the question.  If we assume that the human species was sustainable at some point in the past, what point might we choose and what conditions contributed to our apparent sustainability at that time?

I use a very strict definition of sustainability.  It reads something like this: “Sustainability is the ability of a species to survive in perpetuity without damaging the planetary ecosystem in the process.”  This principle applies only to a species’ own actions, rather than uncontrollable external forces like Milankovitch cycles, asteroid impacts, plate tectonics, etc.

In order to find a population that I was fairly confident met my definition of sustainability, I had to look well back in history – in fact back into Paleolithic times.  The sustainability conditions I chose were: a very low population density and very low energy use, with both maintained over multiple thousands of years. I also assumed the populace would each use about as much energy as a typical hunter-gatherer: about twice the daily amount of energy a person obtains from the food they eat.

There are about 150 million square kilometers, or 60 million square miles of land on Planet Earth.  However, two thirds of that area is covered by snow, mountains or deserts, or has little or no topsoil.  This leaves about 50 million square kilometers (20 million square miles) that is habitable by humans without high levels of technology.

A typical population density for a non-energy-assisted society of hunter-forager-gardeners is between 1 person per square mile and 1 person per square kilometer. Because humans living this way had settled the entire planet by the time agriculture was invented 10,000 years ago, this number pegs a reasonable upper boundary for a sustainable world population in the range of 20 to 50 million people.

I settled on the average of these two numbers, 35 million people.  That was because it matches known hunter-forager population densities, and because those densities were maintained with virtually zero population growth (less than 0.01% per year) during the 67,000 years from the time of the Toba super-volcano eruption in 75,000 BC until 8,000 BC (Agriculture Day on Planet Earth).

If we were to spread our current population of 7 billion evenly over 50 million square kilometers, we would have an average density of 140 per square kilometer.  Based just on that number, and without even considering our modern energy-driven activities, our current population is at least 140 times too big to be sustainable. To put it another way, we are now 14,000% into overshoot based on our raw population numbers alone.
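These figures can be checked directly. This is a sketch using the article’s own numbers; the “at least” qualifier comes from comparing against the most generous end of the sustainable density range:

```python
population = 7.0e9
habitable_km2 = 50e6   # habitable land area, from the article

density = population / habitable_km2
print(density)  # -> 140.0 people per square kilometer

# Sustainable range from the stated densities: 1 person per square mile
# (~20 million people on 20 million sq mi) up to 1 person per square km
# (~50 million people on 50 million sq km).
print(population / 50e6)  # -> 140.0 times too big (generous bound)
print(population / 20e6)  # -> 350.0 times too big (strict bound)
```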

As I said above, we also need to take the population’s standard of living into account. Our use of technological energy gives each of us the average planetary impact of about 20 hunter-foragers.  What would the sustainable population be if each person kept their current lifestyle, which is given as an average current Thermodynamic Footprint (TF) of 20?

We can find the sustainable world population number for any level of human activity by using the I = PAT equation mentioned above.

  • We decided above that the maximum hunter-forager population we could accept as sustainable would be 35 million people, each with a Thermodynamic Footprint of 1.
  • First, we set I (the allowable total impact for our sustainable population) to 35, representing those 35 million hunter-foragers.
  • Next, we set AT to be the TF representing the desired average lifestyle for our population.  In this case that number is 20.
  • We can now solve the equation for P.  Using simple algebra, we know that I = P x AT is equivalent to P = I / AT.  Using that form of the equation we substitute in our values, and we find that P = 35 / 20.  In this case P = 1.75.
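The same substitution, written out as a tiny calculation (units are millions of people, with a Thermodynamic Footprint of 1 per hunter-forager):

```python
I = 35.0    # allowable total impact: 35 million hunter-foragers at TF = 1
AT = 20.0   # average Thermodynamic Footprint of a person today

# I = P * AT  =>  P = I / AT
P = I / AT
print(P)  # -> 1.75, i.e. about 1.75 million people
```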

This number tells us that if we want to keep the average level of per-capita consumption we enjoy in today’s world, we would enter an overshoot situation above a global population of about 1.75 million people. By this measure our current population of 7 billion is about 4,000 times too big and active for long-term sustainability. In other words, by this measure we are now 400,000% into overshoot.

Using the same technique we can calculate that achieving a sustainable population with an American lifestyle (TF = 78) would permit a world population of only 650,000 people – clearly not enough to sustain a modern global civilization. 

For the sake of comparison, it is estimated that the historical world population just after the dawn of agriculture in 8,000 BC was about five million, and in Year 1 was about 200 million.  We crossed the upper threshold of planetary sustainability in about 2000 BC, and have been in deepening overshoot for the last 4,000 years.

The Ecological Assessments

As a species, human beings share much in common with other large mammals.  We breathe, eat, move around to find food and mates, socialize, reproduce and die like all other mammalian species.  Our intellect and culture, those qualities that make us uniquely human, are recent additions to our essential primate nature, at least in evolutionary terms.

Consequently it makes sense to compare our species’ performance to that of other, similar species – species that we know for sure are sustainable.  I was fortunate to find the work of American marine biologist Dr. Charles W. Fowler, who has a deep interest in sustainability and the ecological conundrum posed by human beings.  The following two assessments are drawn from Dr. Fowler’s work.

First assessment

In 2003, Dr. Fowler and Larry Hobbs co-wrote a paper titled “Is humanity sustainable?” that was published by the Royal Society.  In it, they compared a variety of ecological measures across 31 species including humans. The measures included biomass consumption, energy consumption, CO2 production, geographical range size, and population size.

It should come as no great surprise that in most of the comparisons humans had far greater impact than other species, even to a 99% confidence level.  The only measure in which we matched other species was in the consumption of biomass (i.e. food).

When it came to population size, Fowler and Hobbs found that there are over two orders of magnitude more humans than one would expect based on a comparison to other species – 190 times more, in fact.  Similarly, our CO2 emissions outdid other species by a factor of 215.

Based on this research, Dr. Fowler concluded that there are about 200 times too many humans on the planet.  This brings up an estimate for a sustainable population of 35 million people.
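The step from Fowler’s ratio to a population figure is again a single division, here using his rounded factor of 200:

```python
population = 7.0e9
excess_factor = 200   # Fowler's rounded "200 times too many humans"

sustainable = population / excess_factor
print(round(sustainable / 1e6))  # -> 35 (million people)
```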

This is the same as the upper bound established above by examining hunter-gatherer population densities.  The similarity of the results is not too surprising, since the hunter-gatherers of 50,000 years ago were about as close to “naked apes” as humans have been in recent history.

Second assessment

In 2008, five years after the publication cited above, Dr. Fowler wrote another paper entitled “Maximizing biodiversity, information and sustainability.”  In this paper he examined the sustainability question from the point of view of maximizing biodiversity.  In other words, what is the largest human population that would not reduce planetary biodiversity?

This is, of course, a very stringent test, and one that we probably failed early in our history by extirpating mega-fauna in the wake of our migrations across a number of continents.

In this paper, Dr. Fowler compared 96 different species, and again analyzed them in terms of population, CO2 emissions and consumption patterns.

This time, when the strict test of biodiversity retention was applied, the results were truly shocking, even to me.  According to this measure, humans have overpopulated the Earth by almost 700 times.  In order to preserve maximum biodiversity on Earth, the human population may be no more than 10 million people – each with the consumption of a Paleolithic hunter-forager.



As you can see, the estimates for a sustainable human population vary widely – by a factor of 400 from the highest to the lowest.

The Ecological Footprint doesn’t really seem intended as a measure of sustainability.  Its main value is to give people with no exposure to ecology some sense that we are indeed over-exploiting our planet.  (It also has the psychological advantage of feeling achievable with just a little work.)  As a measure of sustainability, it is not helpful.

As I said above, the number suggested by the Thermodynamic Footprint or Fossil Fuel analysis isn’t very helpful either – even a population of one billion people without fossil fuels had already gone into overshoot.

That leaves us with three estimates: two at 35 million, and one of 10 million.

I think the lowest estimate (Fowler 2008, maximizing biodiversity), though interesting, is out of the running in this case, because human intelligence and problem-solving ability makes our destructive impact on biodiversity a foregone conclusion. We drove other species to extinction 40,000 years ago, when our total population was estimated to be under 1 million.

That leaves the central number of 35 million people, confirmed by two analyses using different data and assumptions.  My conclusion is that this is probably the largest human population that could realistically be considered sustainable.

So, what can we do with this information?  It’s obvious that we will not (and probably cannot) voluntarily reduce our population by 99.5%.  Even an involuntary reduction of this magnitude would involve enormous suffering and a very uncertain outcome.  In fact, it’s close enough to zero that if Mother Nature blinked, we’d be gone.

In fact, the analysis suggests that Homo sapiens is an inherently unsustainable species.  This outcome seems virtually guaranteed by our neocortex, by the very intelligence that has enabled our rise to unprecedented dominance over our planet’s biosphere.  Is intelligence an evolutionary blind alley?  From the singular perspective of our own species, it quite probably is. If we are to find some greater meaning or deeper future for intelligence in the universe, we may be forced to look beyond ourselves and adopt a cosmic, rather than a human, perspective.


How do we get out of this jam?

How might we get from where we are today to a sustainable world population of 35 million or so?  We should probably discard the notion of “managing” such a population decline.  If we can’t get our population to simply stop growing, an outright reduction of over 99% is simply not in the cards.  People seem virtually incapable of taking these kinds of decisions in large social groups.  We can decide to stop reproducing, but only as individuals or (perhaps) small groups. Without the essential broad social support, such personal choices will make precious little difference to the final outcome.  Politicians will by and large not even propose an idea like “managed population decline”  – not if they want to gain or remain in power, at any rate.  China’s brave experiment with one-child families notwithstanding, any global population decline will be purely involuntary.


A world population decline would (will) be triggered and fed by our civilization’s encounter with limits.  These limits may show up in any area: accelerating climate change, weather extremes, shrinking food supplies, fresh water depletion, shrinking energy supplies, pandemic diseases, breakdowns in the social fabric due to excessive complexity, supply chain breakdowns, electrical grid failures, a breakdown of the international financial system, international hostilities – the list of candidates is endless, and their interactions are far too complex to predict.

In 2007, shortly after I grasped the concept and implications of Peak Oil, I wrote my first web article on population decline: Population: The Elephant in the Room.  In it I sketched out the picture of a monolithic population collapse: a straight-line decline from today’s seven billion people to just one billion by the end of this century.
As time has passed I’ve become less confident in this particular dystopian vision.  It now seems to me that human beings may be just a bit tougher than that.  We would fight like demons to stop the slide, though we would potentially do a lot more damage to the environment in the process.  We would try with all our might to cling to civilization and rebuild our former glory.  Different physical, environmental and social situations around the world would result in a great diversity in regional outcomes.  To put it plainly, a simple “slide to oblivion” is not in the cards for any species that could recover from the giant Toba volcanic eruption in just 75,000 years.

Or Tumble?

Still, there are those physical limits I mentioned above.  They are looming ever closer, and it seems a foregone conclusion that we will begin to encounter them for real within the next decade or two. In order to draw a slightly more realistic picture of what might happen at that point, I created the following thought experiment on involuntary population decline. It’s based on the idea that our population will not simply crash, but will oscillate (tumble) down a series of stair-steps: first dropping as we puncture the limits to growth; then falling below them; then partially recovering; only to fall again; partially recover; fall; recover… 

I started the scenario with a world population of 8 billion people in 2030. I assumed each full cycle of decline and partial recovery would take six generations, or 200 years.  It would take three generations (100 years) to complete each decline and then three more in recovery, for a total cycle time of 200 years. I assumed each decline would take out 60% of the existing population over its hundred years, while each subsequent rise would add back only half of the lost population. 

In ten full cycles – 2,000 years – we would be back to a sustainable population of about 40-50 million. The biggest drop would be in the first 100 years, from 2030 to 2130, when we would lose a net 48 million people per year. Even that is only a loss of about 0.9% per year. Compared to our net growth today of 1.1%, that’s easily within the realm of the conceivable, and not necessarily catastrophic – at least to begin with.
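The first-cycle numbers can be checked from the stated assumptions. This is a sketch that treats the 100-year decline as a smooth exponential; the average-loss figure is the simple mean over the phase:

```python
start = 8.0e9           # assumed world population in 2030
loss_fraction = 0.60    # 60% lost over each 100-year decline phase
years = 100

# Average people lost per year over the whole decline phase:
avg_loss = start * loss_fraction / years
print(round(avg_loss / 1e6))  # -> 48 (million people per year)

# Equivalent constant annual decline rate: (1 - r)^100 = 0.40
annual_rate = 1 - (1 - loss_fraction) ** (1 / years)
print(round(annual_rate * 100, 2))  # -> 0.91 (% per year)
```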

As a scenario it seems a lot more likely than a single monolithic crash from here to under a billion people.  Here’s what it looks like:

It’s important to remember that this scenario is not a prediction. It’s an attempt to portray a potential path down the population hill that seems a bit more probable than a simple, “Crash! Everybody dies.”

It’s also important to remember that the decline will probably not happen anything like this, either. With climate change getting ready to push humanity down the stairs, and the strong possibility that the overall global temperature will rise by 5 or 6 degrees Celsius even before the end of that first decline cycle, our prospects do not look even this “good” from where I stand.

Rest assured, I’m not trying to present 35 million people as some kind of “population target”. It’s just part of my attempt to frame what we’re doing to the planet, in terms of what some of us see as the planetary ecosphere’s level of tolerance for our abuse. 

The other potential implicit in this analysis is that if we did drop from 8 to under 1 billion, we could then enter a population free-fall. As a result, we might keep falling until we hit the bottom of Olduvai Gorge again. My numbers are an attempt to define how many people might stagger away from such a crash landing.  Some people seem to believe that such an event could be manageable.  I don’t share that belief for a moment. These calculations are my way of getting that message out.

I figure if I’m going to draw a line in the sand, I’m going to do it on behalf of all life, not just our way of life.

What can we do? 

To be absolutely clear, after ten years of investigating what I affectionately call “The Global Clusterfuck”, I do not think it can be prevented, mitigated or managed in any way.  If and when it happens, it will follow its own dynamic, and the force of events could easily make the Japanese and Andaman tsunamis seem like pleasant days at the beach.

The most effective preparations that we can make will all be done by individuals and small groups.  It will be up to each of us to decide what our skills, resources and motivations call us to do.  It will be different for each of us – even for people in the same neighborhood, let alone people on opposite sides of the world.

I’ve been saying for a couple of years that each of us will do whatever we think is appropriate to the circumstances, in whatever part of the world we can influence. The outcome of our actions is ultimately unforeseeable, because it depends on how the efforts of all 7 billion of us converge, co-operate and compete.  The end result will be quite different from place to place – climate change impacts will vary, resources vary, social structures vary, values and belief systems are different all over the world. The best we can do is to do our best.

Here is my advice: 

  • Stay awake to what’s happening around us.
  • Don’t get hung up by other people’s “shoulds and shouldn’ts”.
  • Occasionally re-examine our personal values.  If they aren’t in alignment with what we think the world needs, change them.
  • Stop blaming people. Others are as much victims of the times as we are – even the CEOs and politicians.
  • Blame, anger and outrage are pointless. They waste precious energy that we will need for more useful work.
  • Laugh a lot, at everything – including ourselves.
  • Hold all the world’s various beliefs and “isms” lightly, including our own.
  • Forgive others. Forgive ourselves. For everything.
  • Love everything just as deeply as you can.

That's what I think might be helpful. If we get all that personal stuff right, then doing the physical stuff about food, water, housing, transportation, energy, politics and the rest of it will come easy – or at least a bit easier. And we will have a lot more fun doing it.

I wish you all the best of luck!
Bodhi Paul Chefurka
May 16, 2013


Posted in Overshoot, Paul Chefurka, Population

Gravity energy storage

Preface. This is interesting, but not commercial. And as my book "When Trucks Stop Running" explains, trucks are the basis of civilization, and can't run on electric batteries or overhead wires. Even if they could, I explained why a 100% renewable energy grid was impossible, especially because you need 30 days of storage to ride out seasonal shortages of wind and solar. And even if I were wrong, oil decline is likely to begin within 10 years, so we'll be stuck with whatever solutions are commercial at the time.



Deign, J. 2019. Energy vault funding breathes life into gravity storage.

The speculative field of gravity-based energy storage got a boost recently with news of a strategic investment and new patents.

Swiss-U.S. startup Energy Vault, one of the most high-profile gravity storage players to date, secured financial backing from Cemex Ventures, the corporate venture capital unit of the world's second-largest building materials company, along with a pledge to help with deployment through Cemex's "strategic network."

Meanwhile, the University of Nottingham and the World Society of Sustainable Energy Technologies confirmed the filing of patent applications for a concept called EarthPumpStore, which uses abandoned mines as gravity storage assets.

Implementing the technology across 150,000 disused open-cast mines in China alone could deliver an estimated storage capacity of 250 terawatt-hours, the University of Nottingham said in a press note.

MY NOTE: For scale, China generates roughly 19 TWh of electricity a day (about 7,000 TWh a year in 2018), so 250 TWh of storage would amount to roughly two weeks of national generation. That assumes all 150,000 abandoned mines could actually be converted, which seems unlikely.

The announcements indicate growing interest in a class of energy storage concepts that appear seductively simple but have yet to gain widespread acceptance.

Most gravity storage concepts are based on the idea of using spare electricity to lift a heavy block, so the energy can be recovered when needed by letting the weight drop down again.

In the case of Energy Vault, the blocks are made of concrete and are lifted up by cranes 33 stories high. EarthPumpStore, meanwhile, envisages pulling containers filled with compacted earth up the sides of open-cast mines.
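For a sense of scale, the recoverable energy in a raised mass is just gravitational potential energy, E = mgh, derated for conversion losses. A quick sketch with illustrative numbers (my own assumptions, not Energy Vault's actual specifications):

```python
G = 9.81  # gravitational acceleration, m/s^2

def stored_kwh(mass_kg: float, height_m: float, efficiency: float = 0.9) -> float:
    """Electricity (kWh) recoverable by lowering a raised mass."""
    joules = mass_kg * G * height_m * efficiency  # E = m*g*h, minus losses
    return joules / 3.6e6  # 1 kWh = 3.6e6 joules

# A 35-tonne concrete block raised 100 m yields roughly:
print(round(stored_kwh(35_000, 100), 1))  # ≈ 8.6 kWh
```

In other words, one block and one lift hold on the order of a single household's daily electricity use, which is why these designs stack thousands of blocks and why pumped hydro's enormous water masses remain hard to beat.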

Gravity is also the force underpinning pumped hydro, the most widespread and cost-effective form of energy storage in the world. But pumped hydro development is slow and costly, requiring sites with specific topographical characteristics and often involving significant permitting hurdles.

The proponents of newer gravity storage options claim that installation and deployment of their technology is quicker, easier and cheaper.

The University of Nottingham, for example, estimates EarthPumpStore would cost about $50 per installed kilowatt-hour, compared to $200 for pumped hydro and $400 for battery storage.

The university also said EarthPumpStore could achieve a round-trip efficiency of more than 90 percent, compared to between 50 percent and 70 percent for pumped hydro, plus an energy storage density up to eight times higher. Other sources have made similar claims. 
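Round-trip efficiency translates directly into delivered energy. A trivial sketch of what those quoted figures mean per 100 MWh sent into storage (the percentages are the ones above; the function is just arithmetic):

```python
def delivered_mwh(input_mwh: float, round_trip_eff: float) -> float:
    """Energy recovered after one full charge/discharge cycle."""
    return input_mwh * round_trip_eff

claims = {
    "EarthPumpStore (claimed)": 0.90,
    "Pumped hydro (low end)": 0.50,
    "Pumped hydro (high end)": 0.70,
}
for name, eff in claims.items():
    out = delivered_mwh(100, eff)
    print(f"{name}: {out:.0f} MWh delivered, {100 - out:.0f} MWh lost")
```

At 50% efficiency you must generate twice the energy you get back; at a claimed 90% the penalty is small, which is the heart of the cost argument.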

In 2017, for example, a study by Imperial College London for the gravity storage technology developer Heindl Energy concluded that Heindl’s concept could achieve a levelized cost of storage of $148 per megawatt-hour, compared to $206 for pumped hydro.

“Based on the given data, gravity storage is most cost-efficient for bulk electricity storage, followed by pumped hydro and compressed air energy storage,” the research concluded. 

Given gravity storage’s apparent simplicity and cost-effectiveness, it is curious that the concept hasn’t taken off. One of the first companies to emerge with a gravity-based idea was Advanced Rail Energy Storage (ARES), a Santa Barbara-based firm that was founded in 2010.

ARES plans to hoist railcar-based weights up a hillside, and in 2016 finally got U.S. Bureau of Land Management approval for a proposed 50-megawatt, 12.5-megawatt-hour project in Nevada. At the time, ARES was expecting the project to be up and running in early 2019.

However, as of last August the company was still securing permits and pushed its go-live date back to 2020. Other gravity storage hopefuls seem to be making equally slow progress, although last year saw two U.K. companies getting funding.

Energy SRS, a collaboration of five U.K. firms and the University of Bristol, got £727,000 (about $922,000 at today’s exchange rate) from the government research and innovation body Innovate U.K.

The funding was for a prototype, which Energy SRS is hoping to scale up by 2020. Meanwhile, another startup, Gravitricity, got a separate Innovate U.K. grant, of £650,000 ($824,000 today), to build a 250-kilowatt prototype of its mineshaft-based gravity concept.

Gravitricity is also aiming for full-scale implementation next year.

Daniel Finn-Foley, principal analyst at Wood Mackenzie Power & Renewables, said concerns over the safety, scalability and round-trip efficiency of lithium-ion batteries could lead to growing interest in alternatives such as gravity storage.

“It could be a key technology in the long term as states continue to mandate carbon-free energy,” he said. “I doubt the 100 percent vision will be solved by dropping lithium-ion batteries everywhere, so seeing new technologies emerge will be key.”

Posted in Energy Storage, Research

Peak Stainless Steel

This study shows that there is a significant risk that stainless steel production will reach its maximum capacity around 2055 because of declining nickel production, though recycling and the use of other alloys can compensate somewhat, on a very small scale.

The model in this study assumes business as usual for metal production and fossil fuel supplies (though the authors note that energy limitations are likely in the future, which will limit mining). If oil begins to decline within 10 years, as many think, shortages of stainless steel and everything else will happen before 2055.

There are two kinds of steel: stainless, which resists corrosion and is more ductile and tough, and regular steel, also known as mild or carbon steel.

By weight, stainless steel is the fourth largest metal produced, after carbon steel, cast iron, and aluminum.

But stainless steel is limited by the alloying metals manganese (Mn), chromium (Cr) and nickel (Ni), which have limited reserves.

There are over 150 grades of stainless steel which is used for cutlery, cookware, zippers, construction, autos, handrails, counters, shipping containers, medical instruments and equipment, transportation of chemicals, liquids, and food products, harsh environments with high heat and toxic substances, off-shore oil rigs, wind, solar, geothermal, hydropower, battleships, tanks, submarines, and too many other products to name.



Sverdrup, H. U., et al. 2019. Assessing the long-term global sustainability of the production and supply for stainless steel. Biophysical economics and resource quality.

The extractable amounts of nickel are modest, and this puts a limit on how much stainless steel of different qualities can be produced. Nickel is the key limiting element for stainless steel production.

This study shows that there is a significant risk that stainless steel production will reach its maximum capacity around 2055 and slowly decline after that. The model indicates that stainless steel of the Mn–Cr–Ni type will reach a production peak around 2040, with production declining after 2045 because of nickel supply limitations.

For making high-quality stainless steel, four metals are essential and regularly used, assisted by specialty metals for special properties:

  • Iron for bulk of the stainless steel material
  • Chromium for corrosion resistance
  • Manganese for removing impurities and adding strength and workability
  • Nickel for corrosion resistance, temperature resistance and hardness
  • Molybdenum, cobalt, vanadium and niobium for strength, hardness, corrosion resistance and temperature resistance. Small amounts of nitrogen, phosphorus, silicon or aluminium are sometimes added to these alloys to fine-tune the properties of the material.

For stainless steels, vanadium (which occurs as a contaminant in almost all iron ore) is used for toughness and strength; tungsten, tantalum and niobium for extra hardness and high-temperature resistance; and cobalt for corrosion prevention. World production of stainless steel typically contains 5–12% manganese, 10–18% chromium, 3–5% nickel and 0.1% molybdenum on average.
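A rough mass balance shows why nickel, rather than iron, is the binding constraint. The nickel fraction comes from the composition above; the world production figures are my own ballpark assumptions for scale, not numbers from the paper:

```python
stainless_mt = 50.0   # world stainless steel output, ~50 Mt/yr (assumed)
ni_fraction = 0.04    # 3-5% nickel on average (from the text); use 4%
ni_mined_mt = 2.3     # world nickel mine production, ~2.3 Mt/yr (assumed)

ni_in_stainless = stainless_mt * ni_fraction        # nickel locked into stainless
share_of_mine_supply = ni_in_stainless / ni_mined_mt

print(f"Nickel in stainless: {ni_in_stainless:.1f} Mt/yr "
      f"({share_of_mine_supply:.0%} of mine supply)")
```

Even with these rough figures, stainless steel alone claims most of the nickel mined each year, so any decline in nickel supply bites stainless production almost immediately.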

Nickel is an important component of high-quality stainless steel (46% of supply); it is also used in nonferrous alloys and super-alloys (34%) and electroplating (14%), with 6% going to other uses. No full replacement for nickel exists, although chromium may take over some of nickel's functions in an alloy, and cobalt, molybdenum and niobium can perform other alloying functions.

“Could even metals like iron, or manganese or chromium run out if we looked far enough into the future?”

Running their model until 3800 with business-as-usual figures, "a critical time occurs around 2500 AD. Then most metal resources will have been depleted. Iron will be in abundant supply per person until about 2450, but then a sharp decline sets in. The same happens to manganese and chromium, which are sufficient until about 2500, and then the final decline comes, whereas the supply of nickel will be a trickle after 2300."


Posted in Important Minerals, Infrastructure

Medicare for All?

Preface.  This is a 3-page review of a 34-page overview Congressional Budget Office report requested by congress on establishing a single-payer health care system. 

IMHO, I don’t see how this can possibly happen.  How can a dysfunctional congress deal with such a complex undertaking, let alone ignore powerful insurance, hospitals, and health care provider lobbyists? Haven’t we learned anything from both Clinton & Obama’s attempts to reform health care with a public option?

Also, although Medicare is seen as a single payer system, many analysts disagree, since “private insurers play a significant role in delivering Medicare benefits outside the traditional Medicare program.” 

Peak oil and health care

But the biggest stumbling block of all is that it really does look like we're on the cusp of peak oil. The 2019 BP Statistical Review of World Energy showed that 98% of global oil production growth in 2018 came from U.S. fracking, and we're nowhere near "peak demand": consumption grew by 3.1 million barrels per day (bpd) to a new record of 99.8 million bpd (Rapier 2019). Since what really matters is peak diesel to keep trucks running, we may already be past peak diesel, because fracked oil is far better suited to making plastics than transportation fuel.

So take good care of yourself. There will be far less health care in the future, and eventually nothing but what your local community provides.

[Figure: Components of a single-payer system]



CBO. 2019. Key design components and considerations for establishing a single-payer health care system.  United States Congressional Budget Office.

The report does not address all of the issues involved in designing, implementing, and transitioning to a single-payer system, nor does it analyze the budgetary effects of any specific proposal.


  • 29 million people under age 65 were uninsured (11% of the population).
  • 243 million people under age 65 had health insurance: 160 million through an employer and 69 million via Medicaid and the Children's Health Insurance Program.

Some of the key design considerations for policymakers interested in establishing a single-payer system include the following:

  • How would the government administer a single-payer health plan?
  • Who would be eligible for the plan, and what benefits would it cover?
  • What cost sharing, if any, would the plan require?
  • What role, if any, would private insurance and other public programs have?               
  • Which providers would be allowed to participate, and who would own the hospitals and employ the providers?
  • How would the single-payer system set provider payment rates and purchase prescription drugs?
  • How would the single-payer system contain health care costs?
  • How would the system be financed?

Establishing a single-payer system would be a major undertaking that would involve substantial changes in the sources and extent of coverage, provider payment rates, and financing methods of health care in the United States.

Although a single-payer system could substantially reduce the number of people who lack insurance, the change in the number of people who are uninsured would depend on the system’s design. For example, some people (such as noncitizens who are not lawfully present in the United States) might not be eligible for coverage under a single-payer system and thus might be uninsured.

Single-Payer Health Care Systems

Although single-payer systems can have a variety of different features and have been defined in many ways, health care systems are typically considered single-payer systems if they have these four key features:

  • The government entity (or government-contracted entity) operating the public health plan is responsible for most operational functions of the plan, such as defining the eligible population, specifying the covered services, collecting the resources needed for the plan, and paying providers for covered services.
  • The eligible population is required to contribute toward financing the system.
  • The receipts and expenditures associated with the plan appear in the government's budget.
  • Private insurance, if allowed, generally plays a relatively small role and supplements the coverage provided under the public plan.

In the United States, the traditional Medicare program is considered an example of an existing single-payer system for elderly and disabled people, but analysts disagree about whether the entire Medicare program is a single-payer system because private insurers play a significant role in delivering Medicare benefits outside the traditional Medicare program.

Questions and complexities

  • Could people opt out?
  • Which services would the system cover, and would it cover long-term services and supports?
  • How would the system address new treatments and technologies?
  • What cost sharing, if any, would the plan require?
  • How would the system purchase and determine the prices of prescription drugs?
  • Would the government finance the system through premiums, cost sharing, taxes, or borrowing?
  • How would the system pay providers and set provider payment rates?
  • What role would private health insurance have?
  • Who would own the hospitals and employ the providers?

Differences Between Single-Payer Health Care Systems and the Current U.S. System

Establishing a single-payer system in the United States would involve significant changes for all participants— individuals, providers, insurers, employers, and manufacturers of drugs and medical devices—because a single-payer system would differ from the current system in many ways, including sources and extent of coverage, provider payment rates, and methods of financing. Because health care spending in the United States currently accounts for about one-sixth of the nation’s gross domestic product, those changes could significantly affect the overall U.S. economy.

Although policymakers could design a single-payer system with an intended objective in mind, the way the system was implemented could cause substantial uncertainty for all participants. That uncertainty could arise from political and budgetary processes, for example, or from the responses of other participants in the system.

The transition toward a single-payer system could be complicated, challenging, and potentially disruptive. To smooth that transition, features of the single-payer system that would cause the largest changes from the current system could be phased in gradually to minimize their impact. Policymakers would need to consider how quickly people with private insurance would switch their coverage to the new public plan, what would happen to workers in the health insurance industry if private insurance was banned entirely or its role was limited, and how quickly provider payment rates under the single-payer system would be phased in from current levels.

Coverage. In a single-payer system that achieved universal coverage, everyone eligible would receive health insurance coverage with a specified set of benefits regardless of their health status. Under the current system, CBO estimates, an average of 29 million people per month—11% of U.S. residents under age 65—were uninsured in 2018. Most (or perhaps all) of those people would be covered by the public plan under a single-payer system, depending on who was eligible.

A key design choice is whether noncitizens who are not lawfully present would be eligible. An average of 11 million people per month fell into that category in 2018, and they might not have health insurance under a single-payer system if they were not eligible for the public plan. About half of those 11 million people had health insurance in 2018.

In 2018, a monthly average of about 243 million people under age 65 had health insurance. About two-thirds of them, or an estimated 160 million people, had health insurance through an employer. Roughly another quarter of that population, or about 69 million people, are estimated to have been enrolled in Medicaid or the Children’s Health Insurance Program (CHIP).

Currently, national health care spending—which totaled $3.5 trillion in 2017—is financed through a mix of public and private sources, with private sources such as businesses and households contributing just under half that amount and public sources contributing the rest (in direct spending as well as through forgone revenues from tax subsidies). Shifting such a large amount of expenditures from private to public sources would significantly increase government spending and require substantial additional government resources. The amount of those additional resources would depend on the system’s design and on the choice of whether or not to increase budget deficits. Total national health care spending under a single-payer system might be higher or lower than under the current system depending on the key features of the new system, such as the services covered, the provider payment rates, and patient cost-sharing requirements.

A single-payer system would probably have lower administrative costs than the current system—following the example of Medicare and of single-payer systems in other countries—because it would consolidate administrative tasks and eliminate insurers' profits. Moreover, unlike private insurers, which can experience substantial enrollee turnover over time, a single-payer system without that turnover would have a greater incentive to invest in measures to improve people's health and in preventive measures that have been shown to reduce costs. Whether the single-payer plan would act on that incentive is unknown.

An expansion of insurance coverage under a single-payer system would increase the demand for care and put pressure on the available supply of care.

A single-payer system would affect other sectors of the economy that are beyond the scope of this report. For example, labor supply and employees’ compensation could change because health insurance is an important part of employees’ compensation under the current system.


Rapier, R. 2019. The U.S. accounted for 98% of global oil production growth in 2018. Forbes

Posted in Health

Cheddar Power

Preface. Oh how I love cheddar. When I hear that someone is a vegan I stare in disbelief. A life without cheese is a life not worth living, especially a life without cheddar. As a perpetually hungry child, if Mom was in the front room, I’d dash to the back of the house and get cheddar out of the refrigerator and slice off a small piece of cheese. If there is a substitute for oil, oh please let it be cheese!



Paraskova, T. 2019. Cheddar To The Rescue? UK Company Uses Cheese To Power 4,000 Homes.

Say Cheese

A UK dairy in Yorkshire has signed an agreement with a local biogas plant to supply it with a by-product of cheese-making that would be turned into thermal power to heat homes in the area.

The Wensleydale Creamery, which produces the Yorkshire Wensleydale cheese, makes 4,000 tons of cheese every year at its dairy in Hawes in the heart of the Yorkshire Dales.

The company has struck a deal with specialist environment fund manager Iona Capital, under which an Iona biogas plant will produce more than 10,000 MWh of energy per year from whey—a by-product of cheese making, Wensleydale Creamery said on Monday.

Under the deal, Wensleydale Creamery will provide Iona Capital’s Leeming Biogas plant in North Yorkshire with leftover whey from the process of cheese making. The plant will process and turn the whey into “green gas” via anaerobic digestion that will produce thermal power sufficient to heat 800 homes a year.
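As a sanity check on the article's figures, 10,000 MWh spread across 800 homes works out to 12.5 MWh per home per year, which is roughly in line with a typical UK household's annual heating demand (about 12 MWh of gas, by my assumption, not a figure from the article):

```python
annual_output_mwh = 10_000  # biogas energy from whey per year (from the article)
homes_heated = 800          # homes heated by the Leeming plant (from the article)

per_home_mwh = annual_output_mwh / homes_heated
print(f"{per_home_mwh:.1f} MWh per home per year")  # 12.5
```

So the plant's numbers are internally consistent, if modest: one creamery's whey heats one village's worth of homes.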

Iona Capital already has nine such renewable energy plants in Yorkshire, which save the equivalent of 37,300 tons of carbon dioxide (CO2) each year.

“Once we have converted the cheese by-product supplied by Wensleydale into sustainable green gas, we can feed what’s left at the end of the process onto neighbouring farmland to improve local topsoil quality. This shows the real impact of the circular economy and the part intelligent investment can play in reducing our CO2 emissions,” Mike Dunn, co-founder of Iona, said in a statement.

“The whole process of converting local milk to premium cheese and then deriving environmental and economic benefit from the natural by-products is an essential part of our business plan as a proud rural business. It is only possible as a result of significant and continued investments in our Wensleydale Creamery at Hawes and to sign this agreement and have the opportunity to convert a valuable by-product of cheese making into energy that will power hundreds of homes across the region will be fantastic for everyone involved,” Wensleydale Creamery’s managing director, David Hartley, said.   

Posted in Far Out

Oil Choke Points vulnerable to war, chaos, terrorism, piracy

Preface. The U.S., thanks to fracking (which is likely to peak by 2025), produces half of the 20 million barrels of oil it uses per day. The other half of our oil is imported, with 45% of imports (4.5 million barrels per day) coming from the Middle East (IEA 2019). That's a lot of oil, but Japan, South Korea, China, and other nations are far more dependent on Middle Eastern oil than the U.S. is (Klug 2019). Yet if manufacturing fails abroad from oil shortages, the consequent financial crash will affect the U.S. as well, and after fracking begins to decline (at 80% over 3 years), the U.S. will become much more dependent on OPEC countries, which hold 82% of the world's crude oil reserves; 65% of these reserves are in the Middle East, led by Saudi Arabia, Iran, Iraq, Kuwait and the UAE (Newman 2018).


[Figure: Global oil chokepoints map]

Source: Ballout, D. 2013. Choke Points: Our energy access points. Oil Change.

Global oil chokepoints, 2018 (million barrels per day):

  • Hormuz: 16.8
  • Malacca: 15.7
  • Bab-el-Mandeb: 4.8
  • Suez Canal: 4.6
  • Bosphorus: 2.7
  • Panama Canal: 0.8
  • Other: 7.4
  • Total seaborne oil: 52.9

Source: Lloyd's List Intelligence
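The table's flows can be turned into shares of total seaborne oil; Hormuz and Malacca alone carry over 60% of it. (The individual items sum to 52.8 Mb/d, so the stated 52.9 total reflects rounding.)

```python
# 2018 chokepoint flows, million barrels per day (from the table above)
flows_mbd = {
    "Hormuz": 16.8, "Malacca": 15.7, "Bab-el-Mandeb": 4.8, "Suez Canal": 4.6,
    "Bosphorus": 2.7, "Panama Canal": 0.8, "Other": 7.4,
}
total = sum(flows_mbd.values())
for name, mbd in sorted(flows_mbd.items(), key=lambda kv: -kv[1]):
    print(f"{name:14s} {mbd:5.1f} Mb/d  {mbd / total:6.1%}")
```

Hormuz alone comes out to about 32% of seaborne oil, consistent with the 35% figure quoted later in this post.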

Asian nations will be most affected if war closes the Strait of Hormuz

[Figure: Hormuz Strait producers and consumers of oil]


60 percent of America's oil arrives on vessels.

Because these straits are so narrow, all of them can be disrupted with relative ease by piracy, terrorism, war, or shipping accidents, affecting the supply of oil and world prices.

The strong dependence of the U.S. economy on oil has made it of the utmost importance to guarantee the free flow of shipments through these straits. For this reason, policing has fallen mainly on the U.S. Navy, with the help of some allies. Maritime patrolling is estimated to cost the U.S. between $68 and $83 billion per year, or 12 to 15 percent of conventional military spending. The rest of the world has benefited freely from this security provided by the U.S.

As long as the U.S. economy continues to depend heavily on oil, it will have to bear these high maritime patrolling costs to guarantee the free flow of oil through these critical straits.

According to John Hofmeister, former president of Shell Oil, “With respect to the choke points, the three most serious are the Suez Canal, the Hormuz Straits, which is separating Yemen from Oman and Iran, and the Straits of Malacca, which is between Malaysia and Indonesia. These choke points carry enormous amounts of crude oil. Matt Simmons used to speak of the Straits of Hormuz as, we live one day away from an oil Pearl Harbor. In other words, those Straits of Hormuz transport between 20 and 25% of daily consumption of global oil, and were they to be shut in, the world would be in a panic overnight if it were not possible to pass oil.

The Straits of Hormuz watch about 20 to 25% of the world's daily crude oil production move through them, and if the world were to lose that amount of oil because of a shutdown in the Straits, I think that the immediate impact on crude oil prices would be to not just double but even triple the current crude oil price, because of the panic that would set in in terms of future contracting. There might be a slight delay to see how long it might take to clean up the mess that might be created there, but it is such a critical pinch point and there is so much of that oil that goes both east and west that it is not only energy security for the United States, it is energy security for the world's second largest economy, China. And so the consequence would be dramatic. Five dollars would look cheap in terms of a gasoline price in the event of the Straits of Hormuz being shut in." (Serial No. 112-4).

Attack on Abqaiq

The National Commission on Energy Policy and Securing America's Energy Future conducted a simulation called Oil Shock Wave to explore the potential security and economic consequences of an oil supply crisis. The scenario began by assuming that political unrest in Nigeria, combined with unseasonably cold weather in North America, contributed to an immediate global oil supply shortfall. The simulation then assumed that three terrorist attacks occurred at important ports and processing plants in Saudi Arabia and Alaska, which sent oil prices immediately soaring to $123 a barrel, and to $161 a barrel six months later. At these prices, the country goes into a recession and millions of jobs are lost as a result of sustained oil prices. This simulation almost became reality with the failed attack on Abqaiq in Saudi Arabia in February 2006. Had the attack been successful, it would have removed 4 to 6 million barrels per day from the global market, sending prices soaring around the world, and would likely have had a devastating impact on our economy. (Indiana Senator Evan Bayh, U.S. Senate, March 7, 2006. Energy independence S. HRG. 109-412)

[Figure: Middle East countries: chaos and regional wars]

Keating, J. July 17, 2014. The Middle East Friendship Chart.


Ras Tanura port in Saudi Arabia: 10% of the world’s oil

Maass, P. August 21, 2005. The Breaking Point. The New York Times.

Saudi Arabia had 22% of the world's oil reserves in 2005. The largest oil terminal in the world is Ras Tanura, on the east coast of Saudi Arabia along the Persian Gulf. Ras Tanura is the funnel through which nearly 10 percent of the world's daily supply of petroleum flows. In the control tower, you are surrounded by more than 50 million barrels of oil, yet not a drop can be seen.

As Aref al-Ali, my escort from Saudi Aramco, the giant state-owned oil company, pointed out, "One mistake at Ras Tanura today, and the price of oil will go up." This has turned the port into a fortress; its entrances have an array of gates and bomb barriers to prevent terrorists from cutting off the black oxygen that the modern world depends on. Yet the problem is far greater than the brief havoc that could be wrought by a speeding zealot with 50 pounds of TNT in the trunk of his car. Concerns are being voiced by some oil experts that Saudi Arabia and other producers may, in the near future, be unable to meet rising world demand. The producers are not running out of oil, not yet, but their decades-old reservoirs are not as full and geologically spry as they used to be, and they may be incapable of producing, on a daily basis, the increasing volumes of oil that the world requires. "One thing is clear," warns Chevron, the second-largest American oil company, in a series of new advertisements, "the era of easy oil is over."

If consumption begins to exceed production by even a small amount, the price of a barrel of oil could soar to triple-digit levels. This, in turn, could bring on a global recession, a result of exorbitant prices for transport fuels and for products that rely on petrochemicals — which is to say, almost every product on the market. The impact on the American way of life would be profound: cars cannot be propelled by roof-borne windmills. The suburban and exurban lifestyles, hinged to two-car families and constant trips to work, school and Wal-Mart, might become unaffordable or, if gas rationing is imposed, impossible.

Ghawar is the treasure of the Saudi treasure chest. It is the largest oil field in the world and has produced 55 billion barrels of oil the past 50 years, more than half of Saudi production in that period. The field currently produces more than five million barrels a day, about half of the kingdom’s output. If Ghawar is facing problems, then so is Saudi Arabia and, indeed, the entire world.

Simmons found that the Saudis are using increasingly large amounts of water to force oil out of Ghawar. “Someday the remarkably high well flow rates at Ghawar’s northern end will fade, as reservoir pressures finally plummet. Then, Saudi Arabian oil output will clearly have peaked.” Simmons says that there are only so many rabbits technology can pull out of its petro-hat.

Strait of Hormuz

Any military action in the Strait of Hormuz would knock out oil exports from OPEC’s biggest producers, cut off the oil supply to Japan and South Korea, and hit the booming economies of the Gulf states.

Roger Stern, a professor at the University of Tulsa’s National Energy Policy Institute, estimates that the U.S. has spent $8 trillion protecting oil resources in the Persian Gulf since 1976, when the Navy first began increasing its military presence in the region following the first Arab oil embargo. We did this because we feared oil supplies would run out, and that the Soviets would march to the Persian Gulf to seize oil when they ran low themselves (Stern). The U.S. imports very little of this oil, yet Japan, Europe, India, and the other nations that depend on it pay us nothing for this protection. Admiral Greenert plans to shift 10% of our navy from the East Coast to the Pacific Coast to protect the South China Sea (from China).

Here are some key facts on what passes through the international waterway and some of the direct economic consequences of any attack on merchant shipping.

  • 20 percent of the world’s oil traded worldwide (35% of all seaborne oil), and 20% of the global liquefied natural gas (EIA).
  • 2.9 billion deadweight tons of shipping pass through the strait every year.
  • Crude oil exported through the Strait rose to 750 million tons in 2006.
  • 27 percent of transits carry crude on oil tankers, rising to 50 percent if petroleum products, natural gas and Liquefied Petroleum Gas transits are included.
  • Transits for dry commodities like grains, iron ore and cement account for 22 percent of transits.
  • Container trade accounts for 20% of transits, carrying finished goods to Gulf countries.

Oil exports passing through Hormuz:  (2006 figures)

  • Saudi Arabia — 88 percent
  • Iran — 90 percent
  • Iraq — 98 percent
  • UAE — 99 percent
  • Kuwait — 100 percent
  • Qatar — 100 percent

Top 10 importers of crude oil through Hormuz (2006 figures)

  • Japan — Takes 26% of crude oil moving through the strait (shipments meet 85% of country’s oil needs)
  • Republic of Korea — 14 percent (meets 72 percent of oil needs)
  • United States — 14 percent (meets 18 percent of oil needs)
  • India — 12 percent (meets 65 percent of oil needs)
  • Egypt — 8 percent (N.B. most transhipped to other countries)
  • China — 8 percent (meets 34 percent of oil needs)
  • Singapore — 7 percent
  • Taiwan — 5 percent
  • Thailand — 3 percent
  • Netherlands — 3 percent (Source: Lloyd’s Marine Intelligence Unit)

U.S. Energy Security Strait of Hormuz Threat – All OPEC imports from the Persian Gulf region are shipped via marine tankers through the Strait of Hormuz.  Iran’s developing nuclear arms program, which Israel perceives as a threat (and could strike first over, whether or not the threat is verified), together with U.S. sanctions aimed at curtailing that program, has led Iran to threaten to shut down the Strait of Hormuz in retaliation.  If Iran carried out that threat, the U.S. would immediately lose about 2.2 MBD of crude oil imports, or almost 12% of current total petroleum supplied (consumed); far greater than the 4% lost during the 1973 Arab OPEC oil embargo.
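As a rough sketch, the ~12% figure above follows from dividing the 2.2 MBD of Hormuz-transiting imports by total U.S. petroleum supplied; the roughly 19 MBD total used below is our assumption for the period, not a number stated in the text:

```python
# Rough check of the import-loss share quoted in the text.
persian_gulf_imports_mbd = 2.2   # crude imports through Hormuz (from the text)
us_total_supplied_mbd = 18.9     # assumed total U.S. petroleum supplied per day

share = persian_gulf_imports_mbd / us_total_supplied_mbd
print(f"Share of U.S. supply lost: {share:.1%}")  # about 12%, matching the text
```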

Strait of Hormuz Shutdown Impacts – The impact of losing all Persian Gulf imports could be substantial.  Not only would the U.S. suffer a very quick loss of 2.2 MBD of imports, but the impact on world markets could also be devastating (up to 20% of all world market crude oil supplies currently flow through the Strait).  World oil prices could roughly double almost overnight, sending world energy markets and economies into chaos.  While U.S. and UN conventional military forces should be able to take on and neutralize Iran’s conventional forces, it is Iran’s small, independent, unconventional forces that likely pose the greatest and longest-term threat to Persian Gulf shipping and regional OPEC oil infrastructure.

Other Chokepoints (EIA)

  1. Strait of Malacca with 17% of the world’s oil, most of it headed to China, Japan and South Korea.
  2. Suez Canal / SUMED pipeline with 5% of world oil, key routes for oil destined for Europe and North America. A potential threat is the growing unrest in Egypt since their revolution in 2011.
  3. Bab el-Mandab, which could keep tankers from the Persian Gulf from reaching the Suez Canal and SUMED pipeline.
  4. Turkish Straits. Increased oil exports from the Caspian Sea region make the Turkish Straits, which supply Western and Southern Europe, one of the most dangerous choke points in the world.
  5. Danish Straits, an increasingly important route for Russian oil to Europe.

Also read:

Sullivan’s testimony in: HR 112–24. March 31, 2011. Rising oil prices and dependence on hostile regimes: the urgent case for Canadian oil. U.S. House of Representatives. 102 pages

Brooks, G. Allen. March 21, 2014. Musings: The Challenges Facing Saudi Arabia Include More Than Oil.


EIA. 2012 U.S. Energy Information Administration “World Oil Transit Chokepoints”.

EIA. 2019. How much petroleum does the United States import and export? U.S. Energy Information Administration.

FACTBOX: Strait of Hormuz: economic effects of disruption. Jan 7, 2008. Stefano Ambrogi. Reuters

Klug, F. 2019. Middle East attack jolts oil-import dependent Asia. Washington Post.

Miller, J. August 20, 2013. What are the Largest Risks to U.S. Energy Security?

Newman, N. 2018. Middle East Leads Global Supply of Conventional Oil. Rigzone.

Serial No. 112-4. February 10, 2011. The effects of middle east events on U.S. Energy markets. House of Representatives, subcommittee on energy and power 112th congress. 231 pages

Stern, R. 2010. United States cost of military force projection in the Persian Gulf, 1976–2007. EnergyPolicy.



Wood, the fuel of preindustrial societies, is half of EU renewable energy

Source: Ben Adler. Aug 25, 2014. Europe is burning our forests for “renewable” energy.
Wait, what?

Preface: By far the largest so-called renewable fuel used in Europe is wood. In its various forms, from sticks to pellets to sawdust, wood (or to use its fashionable name, biomass) accounts for about half of Europe’s renewable-energy consumption.

Although Finland is the most heavily forested country in Europe, with 75% of its land covered in woods, it may not have enough biomass to replace coal when all coal plants are shut down by 2029.  Much of its land has no roads or navigable waterways, so imports would be cheaper than using its own forests (Karagiannopoulos 2019).

Vaclav Smil, in his 2013 book “Making the Modern World: Materials and Dematerialization” states: “Straw continues to be burned even in some affluent countries, most notably in Denmark where about 1.4 Mt of wheat straw (nearly a quarter of the total harvest) is used for house heating or even in centralized district heating and electricity generation.”

There are three articles about wood below. Some other wood energy reports:

2016:  Forests in southern states are disappearing to supply Europe with energy. In the past 60 years, the southern U.S. lost 33 million acres of forests even though biomass is not carbon neutral. Salon

2016: Japan is now turning to burning wood to generate electric power because of fewer nuclear power plants after Fukushima



1. The Economist. April 6, 2013. Wood: The fuel of the future. Environmental lunacy in Europe.

Which source of renewable energy is most important to the European Union? Solar power, perhaps? (Europe has three-quarters of the world’s total installed capacity of solar photovoltaic energy.) Or wind? (Germany trebled its wind-power capacity in the past decade.) The answer is neither.

By far the largest so-called renewable fuel used in Europe is wood.

In its various forms, from sticks to pellets to sawdust, wood (or to use its fashionable name, biomass) accounts for about half of Europe’s renewable-energy consumption.

In some countries, such as Poland and Finland, wood meets more than 80% of renewable-energy demand. Even in Germany, home of the Energiewende (energy transformation) which has poured huge subsidies into wind and solar power, 38% of non-fossil fuel consumption comes from the stuff. After years in which European governments have boasted about their high-tech, low-carbon energy revolution, the main beneficiary seems to be the favored fuel of pre-industrial societies.

The idea that wood is low in carbon sounds bizarre. But the original argument for including it in the EU’s list of renewable-energy supplies was respectable. If wood used in a power station comes from properly managed forests, then the carbon that billows out of the chimney can be offset by the carbon that is captured and stored in newly planted trees. Wood can be carbon-neutral. Whether it actually turns out to be is a different matter. But once the decision had been taken to call it a renewable, its usage soared.

In the electricity sector, wood has various advantages. Planting fields of windmills is expensive but power stations can be adapted to burn a mixture of 90% coal and 10% wood (called co-firing) with little new investment. Unlike new solar or wind farms, power stations are already linked to the grid. Moreover, wood energy is not intermittent as is that produced from the sun and the wind: it does not require backup power at night, or on calm days. And because wood can be used in coal-fired power stations that might otherwise have been shut down under new environmental standards, it is extremely popular with power companies.

Money grows on trees

The upshot was that an alliance quickly formed to back public subsidies for biomass. It yoked together greens, who thought wood was carbon-neutral; utilities, which saw co-firing as a cheap way of saving their coal plants; and governments, which saw wood as the only way to meet their renewable-energy targets. The EU wants to get 20% of its energy from renewable sources by 2020; it would miss this target by a country mile if it relied on solar and wind alone.

The scramble to meet that 2020 target is creating a new sort of energy business. In the past, electricity from wood was a small-scale waste-recycling operation: Scandinavian pulp and paper mills would have a power station nearby which burned branches and sawdust. Later came co-firing, a marginal change. But in 2011 RWE, a large German utility, converted its Tilbury B power station in eastern England to run entirely on wood pellets (a common form of wood for burning industrially). It promptly caught fire.

Undeterred, Drax, also in Britain and one of Europe’s largest coal-fired power stations, said it would convert three of its six boilers to burn wood. When up and running in 2016 they will generate 12.5 terawatt hours of electricity a year. This energy will get a subsidy, called a renewable obligation certificate, worth £45 ($68) a megawatt hour (MWh), paid on top of the market price for electricity. At current prices, calculates Roland Vetter, the chief analyst at CF Partners, Europe’s largest carbon-trading firm, Drax could be getting £550m a year in subsidies for biomass after 2016—more than its 2012 pretax profit of £190m.
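Vetter’s subsidy estimate multiplies straight out of the figures in the paragraph above (a simple arithmetic sketch, nothing more):

```python
# Sanity-check of the Drax subsidy estimate quoted in the article.
generation_twh = 12.5        # planned biomass output per year, terawatt hours
subsidy_gbp_per_mwh = 45     # renewable obligation certificate, per MWh

mwh_per_year = generation_twh * 1e6          # 1 TWh = 1,000,000 MWh
subsidy_gbp = mwh_per_year * subsidy_gbp_per_mwh
print(f"Annual subsidy: roughly £{subsidy_gbp / 1e6:.0f}m")  # consistent with "£550m"
```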

With incentives like these, European firms are scouring the Earth for wood. Europe consumed 13m tonnes of wood pellets in 2012, according to International Wood Markets Group, a Canadian company. On current trends, European demand will rise to 25m-30m a year by 2020.

Europe does not produce enough timber to meet that extra demand. So a hefty chunk of it will come from imports. Imports of wood pellets into the EU rose by 50% in 2010 alone and global trade in them (influenced by Chinese as well as EU demand) could rise five- or sixfold from 10m-12m tonnes a year to 60m tonnes by 2020, reckons the European Pellet Council. Much of that will come from a new wood-exporting business that is booming in western Canada and the American south. Gordon Murray, executive director of the Wood Pellet Association of Canada, calls it “an industry invented from nothing”.

Prices are going through the roof. Wood is not a commodity and there is no single price. But an index of wood-pellet prices published by Argus Biomass Markets rose from €116 ($152) a tonne in August 2010 to €129 a tonne at the end of 2012. Prices for hardwood from western Canada have risen by about 60% since the end of 2011.

This is putting pressure on companies that use wood as an input. About 20 large saw mills making particle board for the construction industry have closed in Europe during the past five years, says Petteri Pihlajamaki of Poyry, a Finnish consultancy (though the EU’s building bust is also to blame). Higher wood prices are hurting pulp and paper companies, which are in bad shape anyway: the production of paper and board in Europe remains almost 10% below its 2007 peak. In Britain, furniture-makers complain that competition from energy producers “will lead to the collapse of the mainstream British furniture-manufacturing base, unless the subsidies are significantly reduced or removed”.

But if subsidising biomass energy were an efficient way to cut carbon emissions, perhaps this collateral damage might be written off as an unfortunate consequence of a policy that was beneficial overall. So is it efficient? No.

Wood produces carbon twice over: once in the power station, once in the supply chain. The process of making pellets out of wood involves grinding it up, turning it into a dough and putting it under pressure. That, plus the shipping, requires energy and produces carbon: 200kg of CO2 for the amount of wood needed to provide 1MWh of electricity.

This decreases the amount of carbon saved by switching to wood, thus increasing the price of the savings. Given the subsidy of £45 per MWh, says Mr Vetter, it costs £225 to save one tonne of CO2 by switching from gas to wood. And that assumes the rest of the process (in the power station) is carbon neutral. It probably isn’t.
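The £225-per-tonne figure can be reproduced if one assumes the displaced gas plant emits about 0.4 tonnes of CO2 per MWh (our assumption, not stated in the article), so that the pellet supply chain’s 200 kg/MWh halves the net saving:

```python
subsidy_gbp_per_mwh = 45.0   # subsidy per MWh of biomass electricity (article)
gas_emissions_t = 0.4        # tCO2 per MWh from gas generation (assumed here)
pellet_chain_t = 0.2         # tCO2 per MWh emitted in the pellet supply chain (article)

net_saving_t = gas_emissions_t - pellet_chain_t   # tCO2 actually saved per MWh
cost_per_tonne = subsidy_gbp_per_mwh / net_saving_t
print(f"£{cost_per_tonne:.0f} per tonne of CO2 avoided")  # £225
```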

A fuel and your money

Over the past few years, scientists have concluded that the original idea—carbon in managed forests offsets carbon in power stations—was an oversimplification. In reality, carbon neutrality depends on the type of forest used, how fast the trees grow, whether you use woodchips or whole trees and so on. As another bit of the EU, the European Environment Agency, said in 2011, the assumption “that biomass combustion would be inherently carbon neutral…is not correct…as it ignores the fact that using land to produce plants for energy typically means that this land is not producing plants for other purposes, including carbon otherwise sequestered.”

Tim Searchinger of Princeton University calculates that if whole trees are used to produce energy, as they sometimes are, they increase carbon emissions compared with coal (the dirtiest fuel) by 79% over 20 years and 49% over 40 years; there is no carbon reduction until 100 years have passed, when the replacement trees have grown up. But as Tom Brookes of the European Climate Foundation points out, “we’re trying to cut carbon now; not in 100 years’ time.”

In short, the EU has created a subsidy which costs a packet, probably does not reduce carbon emissions, does not encourage new energy technologies—and is set to grow like a leylandii hedge.

2. ZME Science. August 2015. The UK plans to build the world’s largest wood-burning power plant.

The United Kingdom has announced plans to build the world’s largest biomass power plant. The Tees Renewable Energy Plant (REP) will be located in the Port of Teesside, Middlesbrough and it will have a capacity of 299 MW. While the plant is designed to be able to function on a wide range of biofuels, its main intended power sources are wood pellets and chips, of which the plant is expected to use more than 2.4 million tons a year. The feedstock will be sourced from certified sustainable forestry projects developed by the MGT team and partners in North and South America, and the Baltic States, and supplied to the project site by means of ships.

Wood pellets, which are low in sulphur and chlorine, will be primarily used to fuel the plant.

A biomass power plant of this type is referred to as a combined heat and power (or CHP) plant. It will generate enough renewable energy to supply its own operations and commercial and residential utility customers in the area.

Investment in the renewable project is estimated to reach £650m ($1 billion), which will be partly funded through aids from the European Commission, and construction works would create around 1,100 jobs. Environmental technology firm Abengoa, based in Spain, along with Japanese industry giant Toshiba will be leading the project for their client, MGT Teesside, subsidiary to the British utility MGT Power.

The feedstock will be burned to generate steam at 565°C that will drive a steam turbine, which will rotate the generator to produce electricity. The generated power will be conveyed to the National Grid. The exhaust steam from the steam turbine plant will be condensed by air-cooled condensers (ACCs) and re-used, whereas the flue gases from the circulating fluidized bed (CFB) boiler will be discharged via the exhaust stack.

Nitrogen dioxide (NO2) emissions will be minimized by using capture technology, fabric filters will reduce emissions of particulate matter or dust, and sulphur dioxide (SO2) emissions will be reduced by checking the sulphur content of the fuel feed and by limestone injection into the boiler.


Karagiannopoulos, L. 2019. Finland Will Need to Import Biomass to Keep Warm. Reuters.


Hydrogen, the Homeopathic Energy Crisis Remedy

PetroRabigh hydrogen production unit, Saudi Arabia.

Preface.  Hydrogen is the dumbest, most ridiculous possible energy resource. Far more energy is required to make, store, and deliver it than you ever get back, since hydrogen is not an energy source at all, only a carrier.

In a hydrogen fuel cell truck, turning hydrogen back into electricity is only 24.7% efficient, so roughly three units of energy are lost for every unit delivered to the wheels (.84 natural gas production x .67 H2 onboard reforming x .54 fuel cell efficiency x .84 electric motor and drivetrain efficiency x .97 rolling resistance) (DOE 2011).
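The efficiency chain above multiplies out as follows, a simple sketch of the quoted DOE 2011 stage factors:

```python
# Well-to-wheel efficiency chain for a hydrogen fuel-cell truck
# (stage efficiencies as quoted from DOE 2011 in the text).
stages = {
    "natural gas production":      0.84,
    "onboard H2 reforming":        0.67,
    "fuel cell":                   0.54,
    "electric motor + drivetrain": 0.84,
    "rolling resistance":          0.97,
}

efficiency = 1.0
for stage, eff in stages.items():
    efficiency *= eff   # each stage throws away its share of the energy

print(f"Overall efficiency: {efficiency:.1%}")  # just under 25%
```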

It’s also highly explosive. Hydrogen has a lower ignition energy than gasoline or natural gas, which means it can ignite more easily. Because hydrogen burns with a nearly invisible flame, special flame detectors are required.  It requires 12 times less energy to ignite than gasoline vapor, so heat sources or the smallest of sparks can turn hydrogen into a bomb. 

On June 11, 2019 a hydrogen refueling station in Norway exploded (Huang 2019).  Toyota and Hyundai were so alarmed they stopped sales of fuel-cell vehicles (FCV) in Norway.  A week earlier, a plant of Air Products & Chemicals, which makes hydrogen for FCVs in California, exploded, shaking buildings up to 5 miles away (Pena 2019, Woodrow 2019). Since there are so few hydrogen refueling stations around the world: “With this explosion, the average number of explosions per station will probably be orders of magnitude higher than for any other fuel. Not a good thing if you intend to promote FCVs” (Kane 2019).


Alice Friedemann. 2008. “The Hydrogen Economy. Savior of Humanity or an Economic Black Hole?” Skeptic 14:48-51.

Skeptics scoff at perpetual motion, free energy, and cold fusion, but what about energy from hydrogen? Before we invest trillions of dollars in a hydrogen economy, we should examine the science and pseudoscience behind the hydrogen hype. Let’s begin by taking a hydrogen car out for a spin.

Although the Internal Combustion Engine (ICE) in your car can burn hydrogen, the hope is that someday fuel cells, which are based on electrochemical processes rather than combustion (which converts heat to mechanical work), will become more efficient and less polluting than ICEs.1 Fuel cells were invented in 1839 by William Grove, before combustion engines. But the ICE won the race by using abundant and inexpensive gasoline, which is easy to transport and pour, and very high in energy content.2


Unlike gasoline, hydrogen isn’t an energy source — it’s an energy carrier, like a battery. You have to make hydrogen and put energy into it, both of which take energy. Hydrogen has been used commercially for decades, so we already know how to do this. There are two main ways to make hydrogen: using natural gas as both the source and the energy to split hydrogen from the carbon in natural gas (CH4), or using water as the source and renewable energy to split the hydrogen from the oxygen in water (H2O).

1) Making Hydrogen from Fossil Fuels. Currently, 96% of hydrogen is made from fossil fuels, mainly for oil refining and partially hydrogenated oil.3 In the United States, 90% is made from natural gas, with an efficiency of 72%,4 which means you lose 28% of the energy contained in the natural gas to make it (and that doesn’t count the energy it took to extract and deliver the natural gas to the hydrogen plant).

Hydrogen from water using electrolysis is 12 times more costly than natural gas, so no wonder “renewable” hydrogen from water is only made when an especially pure hydrogen is required, mainly by NASA for rocket fuel.

One of the main arguments made for switching to a “hydrogen economy” is to prevent global warming that has been attributed to the burning of fossil fuels. When hydrogen is made from natural gas, however, nitrogen oxides are released, which are 58 times more effective in trapping heat than carbon dioxide.5 Coal releases large amounts of CO2 and mercury. Oil is too powerful and useful to waste on hydrogen — it is concentrated sunshine brewed over hundreds of millions of years. A gallon of gas represents about 196,000 pounds of fossil plants, the amount in 40 acres of wheat.6

Natural gas as a source for hydrogen is too valuable. In the U.S. about 34% is used to generate electricity and balance wind and solar, 30% is used in manufacturing, 30% to heat homes and buildings, and another 3-5% to create fertilizer as both a feedstock and energy source. This has led to a many-fold increase in crop production, allowing billions more people to be fed who otherwise wouldn’t be.7,8

We simply don’t have enough natural gas left to make a hydrogen economy happen from this nonrenewable, finite source. Extraction of natural gas is declining in North America.9  Although fracked natural gas has temporarily been a stopgap for the decline of conventional natural gas, the International Energy Agency estimates that fracked gas production will peak as soon as 2023 (IEA 2018).  Alternatively we could import Liquefied Natural Gas (LNG), but it would take at least a decade to set up LNG ships and shoreline facilities at a cost of many billions of dollars. Making LNG is so energy intensive that it would be economically and environmentally insane to use it as a source of hydrogen.10

2) Making Hydrogen from Water. Only 4% of hydrogen is made from water via electrolysis. It is done when the hydrogen must be extremely pure. Since most electricity comes from fossil fuels in electricity generating plants that are 30% efficient, and electrolysis is 70% efficient, you end up using nearly five units of energy to create one unit of hydrogen energy: 70% * 30% = 21% efficiency.11

Sure, renewables could generate the electricity, but only about 6.6% of power comes from wind, and 1.6% from solar (EIA 2019).

Producing hydrogen by using fossil fuels as a feedstock or an energy source defeats the purpose, since the whole point is to get away from fossil fuels. The goal is to use renewable energy to make hydrogen from water via electrolysis. When the wind is blowing, current wind turbines can perform at 30–40 percent efficiency, producing hydrogen at an overall rate of 25 percent efficiency — 3 units of wind energy to get 1 unit of hydrogen energy. The best solar cells available on a large scale have an efficiency of ten percent, or 9 units of energy to get 1 hydrogen unit of energy. If you use algae making hydrogen as a byproduct, the efficiency is about .1 percent.12 No matter how you look at it, producing hydrogen from water is an energy sink. If you want a more dramatic demonstration, please mail me ten dollars and I’ll send you back a dollar.
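The electricity-to-hydrogen arithmetic in the last few paragraphs can be sketched with the article’s round numbers; electrolysis is assumed 70% efficient throughout, and the wind figure below takes the middle of the 30–40 percent range:

```python
# Overall efficiency of making hydrogen from electricity, for two of the
# pathways discussed in the text (round numbers from the article).
electrolysis = 0.70   # fraction of electrical energy ending up in the H2

pathways = {
    "fossil-fuel power plant (30%)": 0.30,
    "wind turbine (~35%)":           0.35,
}

for name, source_eff in pathways.items():
    overall = source_eff * electrolysis
    print(f"{name}: {overall:.0%} overall, "
          f"{1 / overall:.1f} units of energy in per unit of hydrogen out")
```

This reproduces the ~21% grid-electrolysis figure and the ~25% wind-to-hydrogen figure in the text.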

Hydrogen can be made from biomass, but there are numerous problems:

  1. it’s very seasonal;
  2. it contains a lot of moisture, requiring energy to store and dry it before gasification;
  3. there are limited supplies;
  4. the quantities are not large or consistent enough for large-scale hydrogen production;
  5. a huge amount of land is required because even cultivated biomass in good soil has a low yield — 10 tons per 2.4 acres;
  6. the soil will be degraded from erosion and loss of fertility if stripped of biomass;
  7. any energy put into the land to grow the biomass, such as fertilizer and planting and harvesting, will add to the energy costs;
  8. the delivery costs to the central power plant must be added; and
  9. it is not suitable for pure hydrogen production.13

Putting Energy into Hydrogen

No matter how it’s been made, hydrogen is a poor store of energy: it has the lowest energy density by volume of any fuel.14 At room temperature and pressure, hydrogen takes up three thousand times more space than gasoline containing an equivalent amount of energy.15 To make it usable, it must be compressed or liquefied. Compressing hydrogen to the necessary 10,000 psi is a multi-stage process that costs an additional 15 percent of the energy contained in the hydrogen.

If you liquefy it, you will be able to get more hydrogen energy into a smaller container, but you will lose 30–40 percent of the energy in the process. Handling it requires extreme precautions because it is so cold — minus 423 F. Fueling is typically done mechanically with a robot arm.16
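Taking the “packaging” loss figures above at face value (and using the midpoint of the 30–40 percent liquefaction range, our choice), the energy left in the fuel after preparation is:

```python
# Energy remaining after compressing or liquefying hydrogen
# (loss figures as quoted in the text; liquefaction midpoint assumed).
compression_loss = 0.15    # multi-stage compression to 10,000 psi
liquefaction_loss = 0.35   # midpoint of the quoted 30-40% range

for name, loss in [("compressed", compression_loss),
                   ("liquefied", liquefaction_loss)]:
    print(f"{name}: {1 - loss:.0%} of the hydrogen's energy remains")
```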


For the storage and transportation of liquid hydrogen, you need a heavy cryogenic support system. The tank is cold enough to cause plugged valves and other problems. If you add insulation to prevent this, you will increase the weight of an already very heavy storage tank, adding additional costs to the system.17

Let’s assume that a hydrogen car can go 55 miles per kg.18 A tank that can hold 3 kg of compressed gas will go 165 miles and weigh 400 kg (882 lbs).19 Compare that with a Honda Accord fuel tank that weighs 11 kg (25 lbs), costs $100, and holds 17 gallons of gas. The overall weight is 73 kg (161 lbs, or 8 lbs per gallon). The driving range is 493 miles at 29 mpg. Here is how a hydrogen tank stacks up against a gas tank in a Honda Accord (last column is cost):

|          | Amount of fuel   | Tank weight with fuel | Driving range | Cost   |
|----------|------------------|-----------------------|---------------|--------|
| Hydrogen | 3 kg @ 3,000 psi | 400 kg                | 165 miles     | $2,000 |
| Gasoline | 17 gallons       | 73 kg                 | 493 miles     | $100   |

According to the National Highway Safety Traffic Administration (NHTSA), “Vehicle weight reduction is probably the most powerful technique for improving fuel economy. Each 10 percent reduction in weight improves the fuel economy of a new vehicle design by approximately eight percent.”

The more you compress hydrogen, the smaller the tank can be. But as you increase the pressure, you also have to increase the thickness of the steel wall, and hence the weight of the tank. Cost increases with pressure. At 2000 psi, it is $400 per kg. At 8000 psi, it is $2100 per kg.20 And the tank will be huge — at 5000 psi, the tank could take up ten times the volume of a gasoline tank containing the same energy content.

Hydrogen storage systems are also heavy. According to Rosa Young, a physicist and vice president of advanced materials development at Energy Conversion Devices in Troy, Michigan: “A metal hydride storage system that can hold 5 kg of hydrogen, including the alloy, container, and heat exchangers, would weigh approximately 300 kg (661 lbs), which would lower the fuel efficiency of the vehicle.”21

Fuel cells are also expensive. In 2003, they cost $1 million or more. At this stage, they have low reliability, need a much less expensive catalyst than platinum, can clog and lose power if there are impurities in the hydrogen, don’t last more than 1000 hours, have yet to achieve a driving range of more than 100 miles, and can’t compete with electric hybrids like the Toyota Prius, which is already more energy efficient and low in CO2 generation than projected fuel cells.22

Hydrogen is the Houdini of elements. As soon as you’ve gotten it into a container, it wants to get out, and since it is the lightest of all gases, it takes a lot of effort to keep it from escaping. Storage devices need a complex set of seals, gaskets, and valves. Liquid hydrogen tanks for vehicles boil off at 3–4 percent per day.23
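The boil-off compounds day after day; a quick sketch at an assumed 3.5%/day (the midpoint of the 3–4 percent range quoted above):

```python
# Cumulative boil-off from a liquid-hydrogen vehicle tank,
# assuming 3.5% of the remaining fuel is lost each day.
daily_boiloff = 0.035

remaining = 1.0
for day in range(1, 15):          # two weeks of parking
    remaining *= (1 - daily_boiloff)

print(f"After two weeks: {remaining:.0%} of the fuel is left")  # about 61%
```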

Hydrogen also tends to make metal brittle.24 Embrittled metal can create leaks. In a pipeline, it can cause cracking or fissuring, which can result in potentially catastrophic failure.25 Making metal strong enough to withstand hydrogen adds weight and cost. Leaks also become more likely as the pressure grows higher. It can leak from un-welded connections, fuel lines, and non-metal seals such as gaskets, O-rings, pipe thread compounds, and packings. A heavy-duty fuel cell engine may have thousands of seals.26 Hydrogen has the lowest ignition point of any fuel, 20 times less than gasoline. So if there’s a leak, it can be ignited by any number of sources.27  And an odorant can’t be added because of hydrogen’s small molecular size (SBC).

Worse, leaks are invisible — sometimes the only way to know there’s a leak is poor performance.


One barrier to hydrogen is pipelines. There are currently 700 miles of hydrogen pipelines in operation, compared with 1 million miles of natural gas pipelines. To move to nationwide use of hydrogen, safe and effective pipelines have to be developed, along with tests for the metal degradation that hydrogen embrittlement is likely to cause.

Working very closely with State weights and measures organizations, NIST has long maintained the standard for ensuring that consumers actually receive a gallon of gas every time they pay for one. Now NIST researchers are incorporating the properties of hydrogen in standards that will support the development of hydrogen as a fuel in vehicles. One of the challenges in the use of hydrogen as a vehicle fuel is the seemingly trivial matter of measuring fuel consumption. Consumers and industry are accustomed to high accuracy when purchasing gasoline. Refueling with hydrogen is a problem because there are currently no mechanisms to ensure accuracy at the pump. Hydrogen is dispensed at a very high pressure, at varying degrees of temperature and with mixtures of other gases (S.HRG. 110-1199)


Canister trucks ($250,000 each) can carry enough fuel for 60 cars.28 These trucks weigh 40,000 kg, but deliver only 400 kg of hydrogen. For a delivery distance of 150 miles, the delivery energy used is nearly 20 percent of the usable energy in the hydrogen delivered. At 300 miles, that is 40 percent. The same size truck carrying gasoline delivers 10,000 gallons of fuel, enough to fill about 800 cars.29
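Scaling the truck-delivery figures linearly with distance, as the text implies, gives:

```python
# Energy spent delivering hydrogen by canister truck, scaling the text's
# figure (~20% of the delivered hydrogen energy per 150 miles) linearly.
energy_fraction_per_150mi = 0.20

for miles in (150, 300):
    fraction = energy_fraction_per_150mi * miles / 150
    print(f"{miles} miles: ~{fraction:.0%} of the usable hydrogen energy "
          f"is consumed by delivery")
```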

Another alternative is pipelines. The average cost of a natural gas pipeline is one million dollars per mile, and we have 200,000 miles of natural gas pipeline, which we can't reuse because its metal would become brittle and leak, and its diameter is wrong for maximizing hydrogen throughput. Building a similar infrastructure to deliver hydrogen would cost $200 billion. The major operating cost of hydrogen pipelines is compressor power and maintenance.30 Compressors in the pipeline keep the gas moving, using hydrogen energy to push the gas forward. After 620 miles, 8 percent of the hydrogen has been used to move it through the pipeline.31
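If that 8-percent-per-620-miles figure compounds over distance, the fraction of hydrogen surviving a long pipeline run falls off geometrically. A small sketch, my own extrapolation rather than a figure from the cited source:

```python
# Compressor losses along a hydrogen pipeline: the article cites an 8%
# loss over 620 miles; here that rate is assumed to compound per segment.
def hydrogen_remaining(miles, loss_per_segment=0.08, segment_miles=620):
    return (1 - loss_per_segment) ** (miles / segment_miles)

for d in (620, 1240, 1860):
    print(f"{d:>5} miles: {hydrogen_remaining(d):.1%} of the hydrogen remains")
```

Over a transcontinental distance, roughly a fifth of the hydrogen would be consumed just moving the rest.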

How much electricity would we need to make hydrogen for light-duty vehicles (Post 2017)?


At some point along the chain of making, putting energy in, storing, and delivering the hydrogen, we will have used more energy than we can get back, and this doesn’t count the energy used to make fuel cells, storage tanks, delivery systems, and vehicles.32 When fusion can make cheap hydrogen, when reliable long-lasting nanotube fuel cells exist, and when light-weight leak-proof carbon-fiber polymer-lined storage tanks and pipelines can be made inexpensively, then we can consider building the hydrogen economy infrastructure. Until then, it’s vaporware. All of these technical obstacles must be overcome for any of this to happen.33 Meanwhile, the United States government should stop funding the Freedom CAR program, which gives millions of tax dollars to the big three automakers to work on hydrogen fuel cells. Instead, automakers ought to be required to raise the average overall mileage their vehicles get — the Corporate Average Fuel Economy (CAFE) standard.34

At some time in the future the price of oil and natural gas will increase significantly due to geological depletion and political crises in extracting countries. Since the hydrogen infrastructure will be built using the existing oil-based infrastructure (i.e. internal combustion engine vehicles, power plants and factories, plastics, etc.), the price of hydrogen will go up as well — it will never be cheaper than fossil fuels. As depletion continues, factories will be driven out of business by high fuel costs35,36,37 and the parts necessary to build the extremely complex storage tanks and fuel cells might become unavailable.

The laws of physics mean the hydrogen economy will always be an energy sink. Hydrogen's properties require you to spend more energy than you can get back: you must split water's hydrogen-oxygen bonds, move heavy cars, prevent leaks and embrittled metal, and transport the hydrogen to its destination. It doesn't matter if all of these problems are solved, or how much money is spent. You will use more energy to create, store, and transport hydrogen than you will ever get out of it.

Any diversion of declining fossil fuels to a hydrogen economy subtracts that energy from other possible uses, such as planting, harvesting, delivering, and cooking food, heating homes, and other essential activities. According to Joseph Romm, a Department of Energy official who oversaw research on hydrogen and transportation fuel cell research during the Clinton Administration: “The energy and environmental problems facing the nation and the world, especially global warming, are far too serious to risk making major policy mistakes that misallocate scarce resources.”38


  1. Thomas, S. and Zalbowitz, M. 1999. Fuel cells — Green power. Department of Energy, Los Alamos National Laboratory, 5.
  2. Pinkerton, F. E. and Wicke, B.G. 2004. “Bottling the Hydrogen Genie,” The Industry Physicist, Feb/Mar: 20–23.
  3. Jacobson, M. F. September 8, 2004. “Waiter, Please Hold the Hydrogen.” San Francisco Chronicle, 9(B).
  4. Hoffert, M. I., et al. November 1, 2002. “Advanced Technology Paths to Global Climate Stability: Energy for a Greenhouse Planet.” Science, 298, 981–987.
  5. Union of Concerned Scientists. How Natural Gas Works.
  6. Kruglinski, S. 2004. “What’s in a Gallon of Gas?” Discover, April, 11.
  7. Fisher, D. E. and Fisher, M. J. 2001. “The Nitrogen Bomb.” Discover, April, 52–57.
  8. Smil, V. 1997. “Global Population and the Nitrogen Cycle.” Scientific American, July, 76–81.
  9. Darley, J. 2004. High Noon for Natural Gas: The New Energy Crisis. Chelsea Green Publishing.
  10. Romm, J. J. 2004. The Hype About Hydrogen: Fact and Fiction in the Race to Save the Climate. Island Press, 154.
  11. Ibid., 75.
  12. Hayden, H. C. 2001. The Solar Fraud: Why Solar Energy Won’t Run the World. Vales Lake Publishing.
  13. Simbeck, D. R., and Chang, E. 2002. Hydrogen Supply: Cost Estimate for Hydrogen Pathways — Scoping Analysis. Golden, Colorado: NREL/SR-540-32525, Prepared by SFA Pacific, Inc. for the National Renewable Energy Laboratory (NREL), DOE, and the International Hydrogen Infrastructure Group (IHIG), July, 13.
  14. Ibid., 14.
  15. Romm, 2004, 20.
  16. Ibid., 94–95.
  17. Phillips, T. and Price, S. 2003. “Rocks in your Gas Tank.” April 17. Science at NASA.
  18. Simbeck and Chang, 2002, 41.
  19. Amos, W. A. 1998. Costs of Storing and Transporting Hydrogen. National Renewable Energy Laboratory, U.S. Department of Energy, 20.
  20. Simbeck and Chang, 2002, 14.
  21. Valenti, M. 2002. “Fill’er up — With Hydrogen.” Mechanical Engineering Magazine, Feb 2.
  22. Romm, 2004, 7, 20, 122.
  23. Ibid., 95, 122.
  24. El kebir, O. A. and Szummer, A. 2002. “Comparison of Hydrogen Embrittlement of Stainless Steels and Nickel-base Alloys.” International Journal of Hydrogen Energy #27, July/August 7–8, 793–800.
  25. Romm, 2004, 107.
  26. Fuel Cell Engine Safety. December 2001. College of the Desert
  27. Romm, J. J. 2004. Testimony for the Hearing Reviewing the Hydrogen Fuel and FreedomCAR Initiatives Submitted to the House Science Committee. March 3.
  28. Romm, 2004. The Hype About Hydrogen, 103.
  29. Ibid., 104.
  30. Ibid., 101–102.
  31. Bossel, U. and Eliasson, B. 2003. “Energy and the Hydrogen Economy.” Jan 8.
  32. Ibid.
  33. National Hydrogen Energy Roadmap Production, Delivery, Storage, Conversion, Applications, Public Education and Outreach. November 2002. U.S. Department of Energy.
  34. Neil, D. 2003. “Rumble Seat: Toyota’s Spark of Genius.” Los Angeles Times. October 15.
  35. Associated Press, 2004. “Oil Prices Raising Costs of Offshoots.” July 2.
  36. Abbott, C. 2004. “Soaring Energy Prices Dog Rosy U.S. Farm Economy.” Forbes, Reuters News Service. May 24.
  37. Schneider, G. 2004. “Chemical Industry in Crisis: Natural Gas Prices Are Up, Factories Are Closing, And Jobs Are Vanishing.” Washington Post, 1(E). March 17.
  38. Romm, 2004. The Hype About Hydrogen, 8.

References added after this was published in Skeptic Magazine

DOE. 2011. Advanced technologies for high efficiency clean vehicles. Vehicle Technologies Program. Washington DC: United States Department of Energy.

EIA. 2019. Table 7.2a Electricity net generation total (all sectors). U.S. Energy Information Administration.

Huang, E. 2019. A hydrogen fueling station explosion in Norway has left fuel-cell cars nowhere to charge. Quartz.

IEA. 2018. World Energy Outlook. International Energy Agency.

Kane, M. 2019. Hydrogen fueling station explodes: Toyota & Hyundai halt fuel cell car sales.

Pena, L. 2019. Hydrogen explosion shakes Santa Clara neighborhood. ABC News.

Post, W. March 6, 2017. The Hydrogen Economy Will Be Highly Unlikely.

SBC. February 2014. Hydrogen-based energy conversion. SBC Energy Institute.

S.HRG. 110-1199. June 24, 2008. Climate change impacts on the transportation sector. Senate Hearing.

Woodrow, M. 2019. Bay Area experiences hydrogen shortage after explosion. ABC News.


Pumped Hydro Storage (PHS)

Preface. This is the only commercial way to store energy now (CAES hardly counts: there is just one plant, and the salt domes needed for more exist in only 5 states). Though of course hydropower is also limited to a few states, with 10 states holding 80% of it, and PHS needs sites far above an existing body of water. There are very few places where this could be done.



Pumped hydro storage stores energy by using electrically powered pumps to move water at night from a lower level uphill to a reservoir above.

During daylight hours when electricity demand is higher, the water is released to flow back downhill to spin electrical turbines. Locations must have both high elevation and space for a reservoir above an existing body of water.

Pumped hydro uses roughly 20–30% more energy than it produces, since more electricity is required to pump the water uphill than is generated when it flows back down. Nonetheless, pumped hydro enables load shifting, and is important for balancing wind and solar power.
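That "20–30% more energy in than out" phrasing translates directly into a round-trip efficiency figure, a conversion worth making explicit:

```python
# Converting "uses x% more energy than it produces" into round-trip
# efficiency: if pumping takes (1 + x) units of energy per unit
# returned, efficiency is 1 / (1 + x).
def round_trip_efficiency(extra_input_fraction):
    return 1 / (1 + extra_input_fraction)

print(f"20% extra input -> {round_trip_efficiency(0.20):.0%} efficient")
print(f"30% extra input -> {round_trip_efficiency(0.30):.0%} efficient")
```

So the quoted range corresponds to roughly 77–83% round-trip efficiency.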

Appearances can be deceiving: Pumped hydro is not a Rube Goldberg scheme. Many of you have used a kilowatt-hour or two of pumped hydro yourself. PHS accounts for over 98% of what little energy storage currently exists in the United States, and is the only kind of commercial storage that can provide sustained power over 12 hours (typically, the other 12 hours are spent pumping the water back up).

Existing PHS facilities deliver terawatt-hours of energy annually, but account for less than 2% of annual U.S. power generation. In 2018, the United States had 22.9 gigawatts (GW) of pumped storage hydroelectric generating capacity, compared with 79.9 GW of conventional hydroelectric capacity. This isn't likely to increase much, since, as with hydroelectric dams, there are few places to put PHS. Only two have been built since 1995, for a grand total of 43 in the U.S., with most of the technically attractive sites already used (Hassenzahl 1981).

Most were built between 1960 and 1990; nearly half of the pumped storage capacity still in operation was built in the 1970s (EIA 2019).

Existing U.S. PHS has about 22 GW of capacity, with the potential for another 34 GW across 22 states, though high costs and environmental issues will prevent many from being built. Additionally, saltwater PHS could be built above the ocean along the West coast, but so far the high cost, shorter lifespan due to saltwater corrosion, distance from the grid, and concerns about salt seepage into the soil have prevented development. Underground caverns and floating sea walls are other possibilities, but aren't commercial yet.

PHS has a very low energy density. Storing the energy contained in just one gallon of gasoline requires pumping over 55,000 gallons of water up the height of Hoover Dam, which is 726 feet (CCST 2012).
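The physics behind that comparison is ordinary gravitational potential energy, E = mgh. A lossless back-of-envelope version, using my own constants rather than CCST's assumptions, gives a same-order but smaller figure; the larger CCST number presumably folds in pumping and generation losses and its own usable-energy assumptions:

```python
# Ideal (lossless) water needed to store one gallon of gasoline's thermal
# energy at Hoover Dam height, via E = m*g*h. My constants, not CCST's.
GALLON_WATER_KG = 3.785
G = 9.81                      # m/s^2
HEIGHT_M = 726 * 0.3048       # 726 ft, about 221 m
GASOLINE_GALLON_J = 121e6     # ~121 MJ thermal energy per gallon

joules_per_gallon_lifted = GALLON_WATER_KG * G * HEIGHT_M
ideal_gallons = GASOLINE_GALLON_J / joules_per_gallon_lifted
print(f"~{ideal_gallons:,.0f} gallons before any conversion losses")
```

Even the lossless figure is tens of thousands of gallons of water per gallon of gasoline, which is the low-energy-density point being made.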

In 2011, pumped hydro storage produced 23 TWh of electricity across the U.S. However, those plants consumed 29 TWh moving water uphill, a net loss of 6 TWh.
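Those 2011 figures are consistent with the 20–30% overhead quoted earlier, as a quick check shows:

```python
# Checking the 2011 figures: 29 TWh consumed pumping vs 23 TWh generated.
pumped_in_twh = 29
generated_twh = 23

net_loss_twh = pumped_in_twh - generated_twh    # 6 TWh, as stated
efficiency = generated_twh / pumped_in_twh      # ~79% round trip
overhead = pumped_in_twh / generated_twh - 1    # ~26% more in than out

print(f"net loss: {net_loss_twh} TWh, efficiency: {efficiency:.0%}, "
      f"overhead: {overhead:.0%}")
```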

So, how many PHS units would it take to give the U.S. that one day of electricity storage, 11.12 TWh? Over 365 days, our 43 existing pumped hydro plants produced only about two days' worth (23 TWh). Thus, the U.S. would need more than 7,800 additional plants (365/2 × 43). Rube Goldberg, I can imagine what you would make of this.
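That plant count can be reproduced as a back-of-envelope calculation, rounding annual output to two days' worth of demand as the text does:

```python
# 43 existing plants generate 23 TWh/year, about two days' worth of
# U.S. electricity demand (11.12 TWh/day). Scaling that fleet up until
# its average daily output equals one day's demand:
existing_plants = 43
annual_output_twh = 23
daily_demand_twh = 11.12

days_covered_per_year = annual_output_twh / daily_demand_twh  # ~2.07 days
total_plants = existing_plants * 365 / 2                      # ~7,848 (rounding to 2 days)
additional_plants = total_plants - existing_plants            # more than 7,800

print(f"{days_covered_per_year:.2f} days covered per year; "
      f"{additional_plants:,.0f} additional plants needed")
```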


CCST. 2012. California’s energy future: electricity from renewable energy and fossil fuels with carbon capture and sequestration. California: California Council on Science and Technology.

Hassenzahl, W.V. ed. 1981. Mechanical, thermal, and chemical storage of energy. London: Hutchinson Ross.


The 10 countries with the most endangered species in the world

I don’t know whether to go to these countries to see these beautiful creatures before they’re extinct, or to spend my money on countries like Costa Rica and Tanzania that have set aside a quarter or more of their land to preserve biodiversity.

An excessive number of people, using half the planet's land and what it produces, is what's driving extinction. It is interesting how many of the nations where species are about to go permanently extinct don't allow abortion and make birth control difficult to get. So I've added whether each nation allows abortion and has birth control to the statistics.

One of the first acts of the Trump administration in January 2017 was to cut the funding for abortions and contraception, which has made it hard for hundreds of thousands of women to get birth control.



Madden, D. 2019. Ranked: the ten countries with the most endangered species in the world. Forbes.

Industry, pollution, agriculture, deforestation, air travel and decreasing habitats are conspiring to make it very hard for thousands of species to survive, let alone flourish. And that truth stretches to every corner of the world, be it forest, mountain, reef, ocean, city or savannah.

The International Union for Conservation of Nature (IUCN) Red List has been the world’s foremost information source on the global conservation status of animal, fungi and plant species since 1964. It currently lists an astounding 27,000 species as at risk of extinction, an even more astounding 27% of all the species it has assessed.

  • 40% of all amphibians
  • 34% of conifers
  • 33% of reef corals
  • 31% of sharks and rays
  • 27% of crustaceans
  • 25% of mammals
  • 14% of birds

#1 Mexico: 665 endangered species

71 birds, 96 mammals, 98 reptiles, 181 fish, 219 amphibians

Why? Mexico has one of the highest deforestation rates in the world, clearing farmland to feed an ever-growing population that may double by 2050. Contributing factors: abortion remains restricted in most states and was not decriminalized until 2007, and contraceptives were prohibited until the late 1960s (Wiki 2019).

#2 Indonesia: 583

191 mammals, 160 birds

Contraception is only available on the black market and abortion in back alley clinics for many women. A legal abortion is hard to obtain (GI 2008)

#3 Madagascar: 553  

Abortion is illegal.

#4 India: 542  

Despite six decades of family-planning promotion, the contraceptive prevalence rate in India remains poor, particularly in the three North Indian states where 18 percent of the population lives.

#5 Colombia: 540

Abortion is allowed only in cases of rape or incest, or when the mother's life is at risk, and even then it is hard to obtain. But birth control is available.

#6 USA 475  

#7 Ecuador: 436  

Abortion is allowed only if the mother's life is at risk; it is illegal even in cases of rape, incest, and severe fetal impairment. But birth control is available.

#8 China: 435  

#9 Brazil: 414

Abortion is prohibited in all circumstances, though a woman who was raped or whose life is in danger won’t go to jail.  Birth control is legal.

#10 Peru: 385

Abortion is allowed only if the mother's life is at risk. A woman who has an illegal abortion may spend up to 2 years in prison, and the person who performed it 1 to 6 years. Birth control is available, though the morning-after pill is hard to get, and 25% of those pills were discovered to be fake.


GI. 2008. Abortion in Indonesia. Guttmacher Institute.

Wiki. 2019. Abortion in Mexico and Women in Mexico.
