Book review of “Deer Hunting with Jesus”: the best book on why people vote for Trump

Preface. Joe Bageant grew up in poor, conservative Winchester, Virginia, which is like tens of thousands of other small towns in America. He is one of the few who escaped and got a college education. When he retired there in 1999, he knew hundreds of people, and he gives readers a visceral, gut-level understanding of what life is like in Republican Bible Belt territory. He paints vivid portraits of the locals he knows and cares about, the feudal economics that keep them poor, how Christian fundamentalism is woven into their lives, and why they vote against their own interests. Best of all, he’s funny.



Population growth creates climate crisis, says environmental scientist

Preface. In the 70s and the first half of the 80s, population and immigration were on the platforms of ALL environmental groups. Today in the mainstream media and most environmental groups, anyone who raises these issues is dismissed as a nasty racist who hates black and brown people. There is never an interview with an ecologist, or a discussion of limits to growth. Certainly the IPCC has no models of how a smaller population would affect climate change, though it obviously would: deforestation, wetland and biodiversity loss, and all the other existential crises occur to feed ever more people, driving ever more cars, burning ever more fossil fuels to grow food, refrigerate, cook, heat, and cool, and to manufacture ever more goods for ever more people.

The Sierra Club was instrumental in making population and immigration taboo and politically incorrect: David Gelbaum gave them over $100 million in exchange for no longer taking a position on these issues (Weiss KR (2004) The Man Behind the Land. Los Angeles Times), and, according to Wikipedia, another $100 million since then.

I did a Google search on population growth and climate change and looked at hundreds of results. I found three articles making the connection, only one of them mainstream; you have to go back to 2009 to start seeing articles on it. The dozen or so mainstream articles that do mention both topics deny there is a connection: how dare anyone suggest such a racist thing. It is rather like the argument pro-gun advocates use to deny the need for gun control: “Guns don’t kill people, people kill people.”

Meanwhile, without worldwide family planning offering free contraception and abortion, taxes on more than one child, and other voluntary, non-harsh measures, we come ever closer to Mother Nature solving the problem for us with drought, heat waves, and more, which has been the death of many civilizations in the past. With both conventional and unconventional oil production peaking in 2018, declining oil will be the coup de grace. For now we can buy our way out of trouble and stay alive with help from fossil fuels: giant ships and spotter planes to catch the last fish in the Antarctic, air-conditioning in heat waves, refrigeration, and industrially grown food for 8 billion people with gigantic tractors and combines.

Alice Friedemann, www.energyskeptic.com. Author of Life After Fossil Fuels: A Reality Check on Alternative Energy; When Trucks Stop Running: Energy and the Future of Transportation; Barriers to Making Algal Biofuels; and Crunch! Whole Grain Artisan Chips and Crackers. Women in ecology. Podcasts: WGBH, Financial Sense, Jore, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 & 278, Peak Prosperity. Index of best energyskeptic posts

***

UGC (2024) Population Growth. Understanding Climate Change, University of California, Berkeley

Population growth is the increase in the number of humans on Earth. For most of human history our population size was relatively stable. But with innovation and industrialization, energy, food, water, and medical care became more available and reliable. Consequently, global human population rapidly increased, and continues to do so, with dramatic impacts on global climate and ecosystems. We will need technological and social innovation to help us support the world’s population as we adapt to and mitigate climate and environmental changes.

Human population growth impacts the Earth system in a variety of ways, including:

  • Increasing the extraction of resources from the environment. These resources include fossil fuels (oil, gas, and coal), minerals, trees, water, and wildlife, especially in the oceans. The process of removing resources, in turn, often releases pollutants and waste that reduce air and water quality, and harm the health of humans and other species.
  • Increasing the burning of fossil fuels for energy to generate electricity, and to power transportation (for example, cars and planes) and industrial processes.
  • Increasing freshwater use for drinking, agriculture, recreation, and industrial processes. Freshwater is extracted from lakes, rivers, the ground, and man-made reservoirs.
  • Increasing ecological impacts on environments. Forests and other habitats are disturbed or destroyed to construct urban areas, including homes, businesses, and roads to accommodate growing populations. Additionally, as populations increase, more land is used for agricultural activities to grow crops and support livestock. This, in turn, can decrease species populations, geographic ranges, and biodiversity, and alter interactions among organisms.
  • Increasing fishing and hunting, which reduces species populations of the exploited species. Fishing and hunting can also indirectly increase numbers of species that are not fished or hunted if more resources become available for the species that remain in the ecosystem.
  • Increasing the transport of invasive species, either intentionally or by accident, as people travel and import and export supplies. Urbanization also creates disturbed environments where invasive species often thrive and outcompete native species. For example, many invasive plant species thrive along strips of land next to roads and highways.
  • The transmission of diseases. Humans living in densely populated areas can rapidly spread diseases within and among populations. Additionally, because transportation has become easier and more frequent, diseases can spread quickly to new regions.

***

Linden E (2022) The Climate Challenge of the World’s Population Hitting 8 Billion. Time magazine.

Global population surpassed 8 billion this week, a shocking milestone because back in the 1990s this threshold was not expected to be breached until 2050. Whether you’re a dour Malthusian or a technological optimist, one thing is undeniable: The 2.7 billion people added to global population since 1990 makes the task of averting a climate catastrophe vastly more challenging than it was when global warming first arose as a mainstream concern.

Getting to zero net emissions in 1990—when fossil fuels were putting 22.4 billion metric tons of greenhouse gas emissions (GHG) into the atmosphere—was hard enough. Now, we have to eliminate those emissions along with roughly 14 billion tons of annual GHG emissions resulting from population growth.

One of the actions to take is family planning, which until now has been largely absent from the conversation around global warming. Most of the expected 2 billion additional people will be born in the poorer nations. These nations burn fewer fossil fuels, but all aspire to raise their standard of living, which, given today’s energy mix, means more GHG emissions per capita. Even without economic growth, that population increase would mean roughly four billion additional metric tons of CO2 going into the atmosphere each year. That’s about a 10% increase, and, as of today, the world has never been able to voluntarily reduce annual emissions.
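As a sanity check on Linden’s arithmetic, here is a quick back-of-the-envelope sketch (the ~2 tonnes of CO2 per person per year and the ~40 Gt/year global total are my illustrative assumptions, not figures from the article):

```python
# Back-of-the-envelope check of the population-emissions arithmetic above.
# ASSUMPTIONS (mine, for illustration): ~2 t CO2/person/year, roughly typical
# of poorer nations, and ~40 Gt CO2/year of current global emissions.
added_people = 2e9            # the expected ~2 billion additional people
per_capita_tonnes = 2.0       # assumed annual CO2 per person (tonnes)
current_global_tonnes = 40e9  # assumed current annual global CO2 (tonnes)

added = added_people * per_capita_tonnes
print(f"Added emissions: {added / 1e9:.0f} Gt CO2/year")          # ~4 Gt
print(f"Relative increase: {added / current_global_tonnes:.0%}")  # ~10%
```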

Population should be part of climate discussions, but I cannot remember a time when family planning has been featured in international efforts. Yes, it’s a hot-button topic in many of the emerging nations, many of which take affront when the rich nations ask them to stabilize their numbers. But its absence from the agenda of last week’s COP27 is a tell that the Conference of the Parties process is not a serious effort to really tackle the risk of climate change.

Population growth is the elephant in the room for climate change, but it is also the elephant in the room for ecological issues such as tropical deforestation, desertification, the extinction crisis, and the destabilizing of earth’s life support systems on land and in the oceans; demographic issues such as involuntary migration and fresh water and food insecurity; and political issues such as civil unrest and state failure. Slowing population growth will reduce pressures on all of these issues and threats.

Population growth is a fraught issue. In the last few decades, a major driver to limit family size has been the demographic shift towards urban areas. In cities, additional kids become a liability because of the higher costs of housing and food. This shows that people can change attitudes towards family size quite rapidly, given incentives and access to family planning. For governments, the incentive should be the prospect of a climate Hell if population continues to increase by several hundred million people every decade. Many emerging nations have made great strides in lowering infant mortality, but, all too often efforts on maternal and infant health are not coupled with access to family planning, which is one reason why human numbers surpassed 8 billion two decades ahead of schedule.

Laubichler M (2022) Population growth, climate change create an ‘Anthropocene engine’ that’s changing the planet. Salon.com

BMJ (2021) Human population growth is the root cause of climate change. British Medical Journal. doi: https://doi.org/10.1136/bmj.n2386 

Behind a paywall for me, but I found this summary: “‘Human population growth is the root cause of climate change’ is a 2021 letter and comment in The BMJ by Jonathan Austen. The letter argues that population growth and increased consumption are the main causes of climate change. It suggests that population growth could be addressed through financial incentives for smaller families and free access to contraception. The letter also claims that population stability would lead to less deforestation and construction, which would have a significant impact on climate change.”

(2024)  South Carolina’s population growth creates climate crisis, says environmental scientist

South Carolina is growing, but not all growth is good. At least that is what Leon Kolankiewicz says. He is an environmental scientist with NumbersUSA and lead author of “From Sea to Sprawling Sea,” an environmental impact study that explores how U.S. population growth has driven rural land loss across four decades.

“You are making it very difficult to achieve your climate goals by increasing the number of energy consumers; it just doesn’t work,” he said. From 1982 to 2017, 35 years, South Carolina lost 2,126 square miles to what Kolankiewicz described as urban sprawl: the loss of rural land to urbanized development. “This pace of development, rural land loss, is accelerating,” Kolankiewicz said. “So that, quite understandably, has a lot of South Carolinians concerned or even upset.”

Though politicians, including Gov. Henry McMaster, have praised South Carolina’s population growth and paraded it around as proof of a “booming” economy, environmentalists like Kolankiewicz are concerned that urban sprawl, brought about by an increase in population, can steer areas like the Palmetto State, and the United States, into an existential crisis. “We face an issue of how human beings are going to live when there are 330 million of us in this country,” Kolankiewicz said. “We can’t keep doing that. It is unsustainable. You’re robbing Peter to pay Paul.” Losing rural lands to urban sprawl can cripple the environment: wildlife loses natural grazing land, farmers lose farmland, and deforestation only adds to dramatic drops in air quality, Kolankiewicz explained.

In his study, he contends that even if the loss of habitat and farmland continues at the lower rate of the 2002 to 2017 period, the average destruction of 1,200 square miles per year across the United States would be unsustainable for a country that desires the continued capability of food independence and stewardship of the animal and plant life currently living within its borders. “You can’t have growth in any object or entity in a finite system,” Kolankiewicz said. “Neither the United States, the biosphere, nor the world as a whole was growing in terms of resources to accommodate ever-increasing human demands.”


***

Scientific American (2009) Does Population Growth Impact Climate Change? Does the rate at which people are reproducing need to be controlled to save the environment?

No doubt human population growth is a major contributor to global warming, given that humans use fossil fuels to power their increasingly mechanized lifestyles. More people means more demand for oil, gas, coal and other fuels mined or drilled from below the Earth’s surface that, when burned, spew enough carbon dioxide (CO2) into the atmosphere to trap warm air inside like a greenhouse.

According to the United Nations Population Fund, human population grew from 1.6 billion to 6.1 billion people during the course of the 20th century. (Think about it: It took all of time for population to reach 1.6 billion; then it shot to 6.1 billion over just 100 years.) During that time emissions of CO2, the leading greenhouse gas, grew 12-fold. And with worldwide population expected to surpass nine billion over the next 50 years, environmentalists and others are worried about the ability of the planet to withstand the added load of greenhouse gases entering the atmosphere and wreaking havoc on ecosystems down below.

Developed countries consume the lion’s share of fossil fuels. The United States, for example, contains just five percent of world population, yet contributes a quarter of total CO2 output. But while population growth is stagnant or dropping in most developed countries (except for the U.S., due to immigration), it is rising rapidly in quickly industrializing developing nations. According to the United Nations Population Fund, fast-growing developing countries (like China and India) will contribute more than half of global CO2 emissions by 2050, leading some to wonder if all of the efforts being made to curb U.S. emissions will be erased by other countries’ adoption of our long-held over-consumptive ways.

“Population, global warming and consumption patterns are inextricably linked in their collective global environmental impact,” reports the Global Population and Environment Program at the non-profit Sierra Club. “As developing countries’ contribution to global emissions grows, population size and growth rates will become significant factors in magnifying the impacts of global warming.”

According to the Worldwatch Institute, a nonprofit environmental think tank, the overriding challenges facing our global civilization are to curtail climate change and slow population growth. “Success on these two fronts would make other challenges, such as reversing the deforestation of Earth, stabilizing water tables, and protecting plant and animal diversity, much more manageable,” reports the group. “If we cannot stabilize climate and we cannot stabilize population, there is not an ecosystem on Earth that we can save.”

CONTACTS: United Nations Population Fund, www.unfpa.org; Sierra Club’s Global Population and Environment Program, www.sierraclub.org/population; Worldwatch Institute, www.worldwatch.org.

 


Book review of “Democracy in Chains”, the history of how extremist Republicans stealthily stole our Democracy

Preface. I can’t do justice to this book in a book review (so buy it), especially the history of how the right-wing libertarians came to be so powerful, their huge influence on Congress, the judiciary, and the laws enacted, and how this was done with great stealth. At the heart of what they want to do is change the U.S. Constitution in ways that would benefit the super rich and harm everyone else. They’d do this by putting even more locks and bolts on the Constitution to block any change.

As it is, the Constitution already has a lot of locks. It restrains what the people can do to a degree not seen in any other democratic nation. It has too many checks and balances, too much veto power, and it gives vast power to rural states, which tend to be conservative, by giving them more votes than populous states. For instance, Wyoming and California both have 2 senators, but California is 70 times more populated, so a Californian’s vote carries 70 times less weight than a Wyoming resident’s vote.



Net Energy Cliff & the Collapse of Civilization

Energy Cliff. The remaining oil is poor quality, and the energy needed to get this often remote oil is so great that more and more energy (blue) goes into oil production itself, leaving far less (the grey area) available to fuel the rest of civilization. Source: David Murphy (22 June 2009) The Net Hubbert Curve: What Does It Mean? theoildrum.com

One of the reasons the actual amount of oil drops off so fast is that the EROI is so low: a huge amount of the oil has to go back into getting more oil rather than being delivered to society, as it was on the upside. I just found this really good, short, well-written, and illustrated explanation of EROI that you should read first if you aren’t familiar with EROI, or even if you are, as a reminder:

Stuart McMillen (2017) Diminishing returns: understanding ‘net energy’ and ‘EROEI’.

Oil is the master resource that makes all other activities and goods possible, including coal, natural gas, transportation, agriculture, mining, manufacturing, and the 500,000 products made OUT OF petroleum, often using oil (or natural gas) as the energy source as well. Think PLASTICS (especially with fracked shale oil, which is so light it yields some gasoline but not much kerosene or diesel).

This is the scariest chart I’ve ever seen. It shows civilization is likely to crash within the next 20-30 years. I thought the oil depletion curve would be symmetric (blue), but this chart reveals it’s more likely to be a cliff (gray) when you factor in Energy Returned on Energy Invested (EROEI). And ironically, do you know what saved us? FRACKED OIL & NATURAL GAS. Since 2005. But the last oil basin, the Permian, shows signs of slowing down, and fracked wells decline by 80% in three years (versus 6% a year for the 500 giant oil fields we get half of all oil from, most of them in the Middle East). Fracked oil accounted for 96% of the slight rise in oil production since world conventional oil peaked in 2008. And don’t forget, world conventional AND unconventional oil production peaked in 2018.

The gray represents the actual (net) energy after you subtract out the much higher amount of energy (blue) needed to get and process the remaining nasty, distant, low-quality, and difficult to get at oil.  We’ve already gotten the high-quality, easy oil.

Before peaking in 2006, the world production of conventional petroleum grew exponentially at 6.6% per year between 1880 and 1970.  Although Hubbert drew symmetric rising and falling production curves, the declining side may be steeper than a bell curve, because the heroic measures we’re taking now to keep production high (i.e. infill drilling, horizontal wells, enhanced oil recovery methods, etc.), may arrest decline for a while, but once decline begins, it will be more precipitous (Patzek 2007).

Clearly you can’t “grow” the economy without increasing supplies of energy.  You can print all the money or create all the credit you want, but try stuffing those down your gas tank and see how far you go.  Our financial system depends on endless growth to pay back debt, so when it crashes, there’s less credit available to finance new exploration and drilling, which guarantees an oil crisis further down the line.

Besides financial limits, there are political limits, such as wars over remaining resources.

For a little while you can fix broken infrastructure and still plant, harvest, and distribute food, maintain and operate drinking water and sewage treatment plants, pump water from running-dry aquifers like the Ogallala which grows 1/4 of our food, but at some point it will be hard to provide energy to all food and infrastructure.

The entire world is competing for the steep grey area of oil that’s left, most of which is in the Middle East.

Hubbert thought nuclear energy would fill in for fossil fuels

Gail Tverberg at ourfiniteworld writes “Hubbert only made his forecast of a symmetric downslope in the context of another energy source fully replacing oil or fossil fuels, even before the start of the decline. For example, looking at his 1956 paper, Nuclear Energy and the Fossil Fuels, we see nuclear taking over before the fossil fuel decline”.

The Power of Exponential Growth: Every 10 years we have burned more oil than all previous decades.

[Figure: the power of exponential growth. At roughly 7% annual growth in oil consumption, each decade we burn more oil than in all previous decades combined.]
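To see why, here is a minimal sketch assuming the roughly 7% annual growth rate of the figure: at that rate consumption doubles about every 10 years, so each decade’s consumption roughly matches or exceeds everything burned before it:

```python
import math

# At ~7%/year growth, the doubling time is ln(2)/0.07, about 10 years.
r = 0.07
print(f"Doubling time: {math.log(2) / r:.1f} years")

# Annual consumption in arbitrary units, starting at 1.0 and growing 7%/year.
consumption = [(1 + r) ** year for year in range(50)]
for decade in range(1, 5):
    this_decade = sum(consumption[decade * 10:(decade + 1) * 10])
    all_prior = sum(consumption[:decade * 10])
    print(f"Decade {decade}: {this_decade:.0f} burned vs "
          f"{all_prior:.0f} in all prior years")
```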

Another way of looking at this is what systems ecologists call Energy Returned on Energy Invested (EROEI). In the USA in 1930 an “investment” of the energy in 1 barrel of oil produced another 100 barrels of oil, or an EROEI of 100:1.   That left 99 other barrels to use to build roads, bridges, factories, homes, libraries, schools, hospitals, movie theaters, railroads, cars, buses, trucks, computers, toys, refrigerators – any object you can think of, and 500,000 products use petroleum as a feedstock (see point #6).  By 1970 EROEI was down to 30:1 and in 2000 it was 11:1 in the United States.

Charles A. S. Hall, who has studied EROEI for most of his career and published in Science and other top peer-reviewed journals, believes that society needs an EROEI of at least 12 or 13:1 to maintain our current level of civilization.

Because we got the easy oil first, we have used up 73% of the net energy that will ever be available, since the remaining half of the reserves require so much energy to extract.
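The cliff shape follows from simple arithmetic: society’s share of gross energy is 1 - 1/EROEI. A minimal sketch shows how little changes between 100:1 and 20:1, and how fast the net share collapses below roughly 10:1:

```python
# Net energy available to society as a function of EROEI.
# net share = 1 - 1/EROEI: nearly flat at high EROEI, a cliff at low EROEI.
for eroei in [100, 30, 20, 11, 8, 5, 3, 2, 1.5, 1]:
    net_share = 1 - 1 / eroei
    print(f"EROEI {eroei:>5}:1 -> {net_share:6.1%} of gross energy for society")
```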

Some other reasons why the cliff may be even steeper

It’s not our oil

Nearly all of the good, high-quality, cheap, sweet oil is in the Middle East. Most of the remaining oil will need vast amounts of fresh water to get it out, but there is very limited fresh water in these countries. The refineries and other extraction infrastructure are also easy targets for terrorists and wars.

Export Land Model

Oil-producing countries are using more and more of their own (declining) oil as population and industry grow within their own nations, and they too need to use more and more energy to get at their difficult oil. This results in a chart similar to the net energy cliff: suddenly there will hardly be any oil to buy on the world markets. See Jeffrey Brown’s article “The Export Capacity Index” (one of his statistics: at the current rate of increasing oil imports in India and China, these two countries alone would be importing 100% of available exported oil within 18 years).
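A minimal simulation of the Export Land Model dynamic, with illustrative numbers in the spirit of Brown’s original example (a country producing 2 Mb/d and consuming half of it, with production falling 5% a year and consumption growing 2.5% a year; these inputs are assumptions for the sketch, not his published data):

```python
# Export Land Model sketch: exports = production - domestic consumption.
# Exports collapse far faster than production itself declines.
production, consumption = 2.0, 1.0   # Mb/d (illustrative)
year = 0
while production > consumption:
    print(f"Year {year}: production {production:.2f} Mb/d, "
          f"net exports {production - consumption:.2f} Mb/d")
    production *= 0.95     # production declines 5%/year
    consumption *= 1.025   # domestic consumption grows 2.5%/year
    year += 1
# Net exports hit zero after ~9 years, while production is still ~63% of its start.
print(f"Year {year}: net exports gone; production {production:.2f} Mb/d")
```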

Technology

As we improve our technology to get at the remaining oil, we make the cliff on the other side even steeper as we get oil now that would have been available to future generations.

Investments won’t be made because the payback times will lengthen

Since what remains is increasingly difficult and expensive to find, develop, and extract, investment payback periods lengthen, eventually to impossibly long periods, or to periods that approach the useful life of the capital investment (effectively the same limit in the financial dimension as an EROEI of 1). So it doesn’t matter how much might theoretically be underground; the only thing that matters is how much is economically feasible to recover, and that is going to be considerably less than 100% of what is theoretically and technically possible to recover.

Energy is becoming impossibly expensive, as you can see in these photos of the tallest structure ever moved by mankind, a Norwegian offshore natural gas platform.

Exponential growth of population

This makes whatever oil we have left run out even faster.

Less oil obtained than could have been

Projects maximize return on investment rather than the recovery of every last drop of possible oil. Making money matters so much that a lot of offshore Gulf oil that could have been recovered with slower extraction remains in the ground, because our financial system rewards getting it out as fast as possible for short-term gratification. But hey! That’s less carbon dioxide and global warming, so in a totally unintended orgy of insatiable greed the “there are no limits to growth” billionaires have ironically helped save the planet.

Flow Rate: An 8% or higher decline rate is likely

According to the IEA, the world decline rate is 8.5%, offset to about 4.5% by enhanced oil recovery (which takes energy). Höök found that the giant oil fields’ average decline rate increases by 0.15% a year once they are in decline, with smaller fields declining at higher rates still, and fracked wells fastest of all, at 80% over three years. That’s too fast for civilization to cope with.

Welcome to net zero!  Great news since so many think that climate change is the ONLY problem.

Oil Chokepoints

And the problem isn’t just geological. There are several critical areas of the world where the flow of oil could be stopped by war or terrorism.

Wars, cyber-attacks, nuclear war, social chaos

By 2024, if not sooner, the unequal distribution of the remaining oil, starvation-generated riots and pillaging, and collapsing economies may have triggered wars, massive migrations, and social chaos.

Shale oil and natural gas cannot prevent the cliff. Martin Payne explains: “Shale oil plays give us a temporary reprieve from what Bob Hirsch called the severe consequences of not taking enough action proactively with respect to peak oil. Without unconventional oil, what we wind up with is essentially Hubbert’s cliff instead of Hubbert’s rounded peak.” But this won’t last: “Conventional oil, which was found in huge quantities in giant fields in the 40s and 50s; well, those giant fields had huge reserves and high porosities and permeabilities, meaning they would flow at very high rates for decades. This is in contrast to a relative few shale oil plays which have very low porosity and permeability and which must be hydraulically fractured to flow. Conventional oil is just a different animal than unconventional oil; some unconventional oil wells have high initial rates of production, but all of these wells have high decline rates. Hubbert anticipated a lot of incremental efforts by the industry to make the right-hand or decline side of his curve a more gradual curve rather than a sharp drop” (Andrews 2013).
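For comparison with the roughly 6% annual decline of the giant fields, an 80% drop over three years works out to roughly a 40% constant annual decline; a quick conversion (my arithmetic, not a figure from the interview):

```python
# Convert "fracked wells decline 80% over 3 years" to an annual rate:
# solve (1 - d)^3 = 0.20 for d.
remaining_after_3_years = 0.20
annual_decline = 1 - remaining_after_3_years ** (1 / 3)
print(f"Equivalent constant annual decline: {annual_decline:.1%}")  # ~41.5%/year
```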

If any of these wars involve nuclear bombs, then at least a billion people will die.

The unrest has certainly curtailed the ability of oil companies to drill.

Even farmers may stop growing crops once city residents and roaming militias harvest whatever is grown (i.e. Africa as described in Parenti’s “Tropic of Chaos: Climate Change and the New Geography of Violence”).

Cyberattacks from China, Russia, and elsewhere have brought down the electric grid in the USA to prevent US military forces from trying to grab the remaining Saudi and Iraqi oil. The armed forces will be too busy trying to maintain order in the USA to venture abroad; nor could they go even if they wanted to, because Chinese and Russian drone attacks will have destroyed all of the United States’ oil refineries, and we will have retaliated against theirs, so they won’t be able to refine oil either. We’ve also cyberattacked their electric grids. Most major cities have no sewage treatment or clean water. Nuclear power plants are melting down.

There’s no substitute for oil  

Coal: why it can’t easily substitute for oil

“Peak is dead” and the future of oil supply:

Steve Andrews (ASPO): You mention in your paper that natural gas liquids can’t fully substitute for crude oil because they contain about a third less energy per unit volume and only one-third of that volume can be blended into transportation fuel. In terms of the dominant use of crude oil, the transportation sector, how significant is the ongoing increase in NGLs vs. the plateau in crude oil?

Richard G. Miller: The role of NGLs is a bit curious. You can run a car on them if you want, but they’re not a drop-in substitute for liquid oil. You can convert vehicle engines in fleets to run on liquefied gas; it’s probably better thought of as a fleet fuel. But it’s not a substitute for oil for my car. By and large, raising NGL production is not a substitute for making up a loss of liquid crude.

The only way I can see this being prevented, or the end of oil delayed a few years, is if a government has already developed effective bio-weapons and doesn’t care if its own population suffers as well.

I feel crazy to have just written this very dire paragraph with just a few of the potential consequences, but the “shark-fin” curve made me do it!

Even though I’ve been reading and writing about peak everything since 2001, and the rise and fall of civilizations for 40 years, it is hard for me to believe a crash could happen so fast.  It is hard to believe there could ever be a time that isn’t just like now.  That there could ever be a time when I can’t hop into my car and drive 10,000 miles.

I can imagine the future all too well, but it is so hard to believe it.

Believe it.

References

Andrews S (2013) Interview with Martin Payne: Is Peak Oil Dead? ASPO-USA Peak Oil Review, 29 July 2013.

Patzek T (2007) How can we outlive our way of life? 20th Round Table on Sustainable Development of Fuels, OECD headquarters.

Alice Friedemann, www.energyskeptic.com. Author of When Trucks Stop Running: Energy and the Future of Transportation (2015, Springer); Barriers to Making Algal Biofuels; and Crunch! Whole Grain Artisan Chips and Crackers. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report


Giant oil field decline rates, peak oil, & reserves

Preface. Of the roughly 47,500 oil fields in the world, 507 of them, about 1%, are giant oil fields holding nearly two-thirds of all the oil that has ever been, or ever will be, produced, with the largest 100 giants, the “elephants,” providing nearly half of all oil today. Since the 1960s, the world has consumed more oil than it has discovered, and the average size of new oil fields has declined, leaving us heavily dependent on the original giant oil fields discovered over 50 years ago (Aleklett et al. 2012).

Since giant oil fields dominate oil production, the rate at which they decline is a good predictor of future world decline rates. In 2007, the 261 giants past their plateau phase were declining at an average rate of 6% a year. Their decline rate will continue to increase by 0.15% a year, to 6.15%, 6.3%, 6.45% and so on. By 2030 these giants, and the other giants joining them, will be declining at an average rate of over 9% a year (Höök 2009; IEA 2010). At this ever-increasing rate, it will take just 16 years to be down to 10% of the oil that existed at peak production.

Dittmar (2016) estimates an annual production decline of 6 ± 1% which implies that after ten years, production will be only 54 ± 6% of what it was at the beginning of those ten years.
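Dittmar’s compounding is easy to verify: a constant annual decline leaves (1 - rate)^10 of production after ten years:

```python
# A constant 6 ± 1% annual decline compounds to roughly 54 ± 6% after 10 years.
for rate in (0.05, 0.06, 0.07):
    remaining = (1 - rate) ** 10
    print(f"{rate:.0%}/year for 10 years -> {remaining:.0%} of production remains")
# Output: 5% -> 60%, 6% -> 54%, 7% -> 48%, i.e. 54 ± 6%.
```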

Sooner or later, this math dictates supply shortages (Hallock et al. 2014).

World peak crude oil production happened in November 2018. It won’t be long until decline rates reach 6% and increase by 0.15% a year. That’s great news for climate change, because oil is the master resource that makes all others possible, including coal and natural gas. Hip hip hooray, we’re on the way to net zero.

The first post below is not only a summary of remaining reserves, but of the New World Order of Russia, Iran, Iraq, and China, which control a quarter of the remaining conventional oil, with the Russian attack on Ukraine the first major resource war. If you look at Art Berman’s post online, you’ll see that Venezuela has 18% of world reserves and Canada 10%, but these are heavy oil and tar sands respectively. They have an EROI of 4 to 6 at best, so if an EROI of at least 7, 10, or even 14 is required, they will fall off the net energy cliff and remain unexploited. Even while they can still be obtained, the flow rate is very slow, since they’re nasty, gunky, and energy-intensive to refine.

By the way, if you’re curious about what oil looks like, see 14 photos here.

Alice Friedemann, www.energyskeptic.com. Author of When Trucks Stop Running: Energy and the Future of Transportation (2015, Springer); Barriers to Making Algal Biofuels; and Crunch! Whole Grain Artisan Chips and Crackers. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

***

Berman A (2024) The oil and energy macro.

U.S. oil reserves in 2022 exceeded 48 billion barrels, with tight oil accounting for 27 billion barrels of it (56%). U.S. reserves are about half of Russia’s, one-third of Iraq’s, and about one-fifth of Iran’s and Saudi Arabia’s (Figure 2). The U.S. holds roughly 3% of the world’s reserves compared to Iraq’s 9%, Iran’s 12%, and Saudi Arabia’s 15%. Countries in the Persian Gulf have almost half of the world’s oil, and 42% of the world’s remaining proved reserves are in just four countries: Saudi Arabia, Iran, Iraq, and the United Arab Emirates. Iraq is now a vassal state of Iran—an enemy of the U.S.—and, together, they have more than 20% of the oil that’s left. Add Russia and our principal enemies control a quarter of the world’s oil.

Those are terrible odds. U.S. foreign policy after World War II was founded on oil security from the Middle East. The last four energy-blind U.S. presidents managed to undo that. One of those two will be the next president of the United States. Most people know that the wars in Ukraine and in the Middle East are serious but think of them in parochial terms—that they developed out of long-standing feuds between people who have always been at each other’s throats.

Russia and China are aligned with Iran. Iran and China have signed 20 agreements covering various areas including trade, transportation, and information technology. China buys almost all of the oil that Iran is able to export. Iran has been supplying drones and other military equipment to Russia in its war on Ukraine.

Iran is an oil superpower. Combined with its vassal state Iraq, it holds 21% of the world’s oil reserves, more than Saudi Arabia or Venezuela (Figure 2 above). Its Houthi allies attacked Saudi Arabia’s main refinery complex in 2019, and have been disrupting trade in the Red Sea and Suez Canal since November 2023. Almost 9 million barrels of oil per day (mmb/d) pass through the Canal and the Bab al-Mandab Strait. As I wrote in a recent post, almost everything in the Middle East is about oil. The Houthis recently expanded their range, attacking shipping in the Indian Ocean.

Energy lies at the heart of global power struggles and the downward arc of economic prosperity. The West remains largely energy-blind but those pushing for a new world order are not. In this high-stakes game, Iran, Russia, and China stand as formidable players, acutely aware of the pivotal role oil plays in shaping the future landscape of power and influence.

Iran’s ambitions in the Middle East, Russia’s maneuvers in Eastern Europe and the Middle East, and China’s ascent in the Asia-Pacific region and beyond—all hinge on their dominance of energy resources and production capabilities. Their strategic calculus revolves around securing access to vital energy reservoirs and asserting control over critical supply routes.

We are at the beginning of the end of the Oil Age. Europe’s energy crisis, the war in Ukraine, and rise of Iran as the dominant power in the Middle East are all part of a struggle to dominate remaining fossil resources as well as new energy sources.

Many Americans, including some of our leaders, have the peculiar idea that the United States dominates world energy and that its military might is as formidable in global events as it was 75 years ago. I don’t think so. While some celebrate a new high in U.S. oil reserves, I worry that its 3% of remaining world supply is a drop in the bucket.

The nations working toward a new world order have a plan. What is ours?

Excerpts from: Höök M, Hirsch R, Aleklett K (June 2009) Giant oil field decline rates and their influence on world oil production. Energy Policy 37(6): 2262-2272.

Future trends in giant oil field decline rates

Figure 7. The decline rate of existing oil production within total world oil production will increase with time. This is because more fields go into decline as time goes on. To illustrate, consider three identical oil fields that start to decline at three different times, each with a 12% decline rate, similar to the average 13% found typical for Norway. In this example, the overall decline rate starts at zero and progressively increases.
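A minimal simulation of the Figure 7 idea (the plateau level, decline start years, and 12% rate are illustrative):

```python
# Three identical fields on plateau at 1.0 (arbitrary units) that enter a
# 12% annual decline in years 5, 10, and 15. The aggregate decline rate of
# total production starts at zero and ratchets up as each field tips over.
def field_output(year, decline_start, rate=0.12, plateau=1.0):
    if year < decline_start:
        return plateau
    return plateau * (1 - rate) ** (year - decline_start)

starts = [5, 10, 15]
prev = None
for year in range(21):
    total = sum(field_output(year, s) for s in starts)
    if prev is not None:
        print(f"Year {year:2d}: total {total:.2f}, "
              f"aggregate decline {(prev - total) / prev:5.1%}")
    prev = total
```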

Analysis of our 331 giant field dataset going back in time showed that the world average decline rate was near 0% until roughly 1960, when overall decline began to increase, as more and more giant oil fields left the plateau phase. Thereafter, production from new giants failed to compensate for the declines of existing giant production. The average decline rate of the giant oilfields was found to increase by around 0.15% per year. Extrapolating the 1960-2005 trend yielded an average decline rate of nearly 10% by 2030 (Figure 10). This giant field trend should have a strong influence on the future global decline rate in oil production.

Non-OPEC decline rate in 2030 is likely to be 11%, OPEC decline rate only 8% for reasons further explained below.

The production-weighted decline rate has behaved somewhat differently from the average decline rate, especially since 1985. The reason for this change is the introduction of new technologies, most notably horizontal drilling and fracturing techniques, in many major fields in the former Soviet Union and the Middle East. Using new technologies, it was possible to halt the decline in many giants and keep production stable for some time. Eventually the average and the production-weighted declines must follow each other. The currently stable production-weighted decline cannot be expected to continue far into the future, once technology-enhanced fields reach the final onset of decline. The world has had a false sense of security, temporarily created by decline-delaying technology introduced in underdeveloped fields.

Figure 10: The decline rate of the world’s giant oilfields. The trend toward an increasing average decline rate is very clear, and is explained by ever-decreasing volumes of newly discovered giants and declining production from new giant fields. The time period 1973-1982 was disregarded since production was deliberately reduced during that period by OPEC. The divergence between the two decline rates after 1985 is caused by the introduction of new technology and the revival of giant fields primarily in the Middle East and Russia.

As more giant fields go into decline in the future, the average decline rates for all giants will increase.  Since the 1970s, the share of giant oilfields in decline has increased, showing the overall maturity and lack of new fields brought into production. Many giants have been in production for many decades without reaching the onset of decline, but sooner or later they will eventually do so. By 2030 one can expect 80% of total giant oilfield production to come from fields in decline, if one extrapolates the trend since 1985. The share could be even higher when important OPEC “super giants”, such as Ghawar or Safaniyah, leave the plateau phase in the intervening years.

The most important contributors to the world’s total oil production are the giant oil fields. Using a comprehensive database of giant oil field production, the average decline rates of the world’s giant oil fields are estimated. Separating subclasses was necessary, since there are large differences between land and offshore fields, as well as between non-OPEC and OPEC fields. The evolution of decline rates over past decades includes the impact of new technologies and production techniques and clearly shows that the average decline rate for individual giant fields is increasing with time. These factors have significant implications for the future, since the most important world oil production base– giant fields – will decline more rapidly in the future, according to our findings. Our conclusion is that the world faces an increasing oil supply challenge, as the decline in existing production is not only high now but will be increasing in the future.

It is well known that oil production from many oil fields worldwide is in decline and that more fields transition into decline each year. In roughly mid 2004, total world oil production ceased to expand. Instead, new production has only succeeded in keeping world oil production relatively flat (Figure 1).

Figure 1: World liquid fuels production from January 2001 to November 2008. Since mid-2004, production has stayed within a 4% fluctuation band, which indicates that new production has only been able to offset the decline in existing production. Source: EIA (2009)

A recent analysis by Cambridge Energy Research Associates estimated that the weighted decline of production from all existing world oil fields was roughly 4.5% in 2006 (CERA, 2007), which is in line with the 4-6% range estimated by ExxonMobil (2004). However, Andrew Gould, CEO of Schlumberger, stated that an accurate average decline rate is hard to estimate, but an overall figure of 8% is not an unreasonable assumption (Schlumberger, 2005). T. Boone Pickens (2008), agreed with Gould in recent testimony before the US Senate Committee on Energy and Natural Resources. Duroc-Danner (2009) gives a blended average decline rate for oil and gas today of about 6%. The International Energy Agency (IEA) came to the conclusion that the average production-weighted decline rate worldwide was 6.7% for post-peak fields (IEA, 2008), which means that the overall decline rate would be less, since many fields are not yet in decline.

Giant oil fields are the world’s largest. There are two ways to define a giant oil field. One is based on ultimately recoverable resources (URR), and the second is based on maximum oil production level. The URR definition considers giants to have more than 0.5 Gb of ultimately recoverable resources.

The production definition assumes a production of more than 100,000 barrels per day (b/d) for more than one year (Simmons, 2002). In this analysis we consider the world’s conventional oil fields, regardless of location, e.g. shallow or deep water, the Arctic, etc. Conventional oil fields refer to reservoirs that dominantly allow oil to be recovered as a free-flowing dark to light-colored liquid (Speight, 2007). Consequently, heavier crude oils that require special production methods are excluded.

Using our definition of giant oil fields, we find that roughly 500 (about 1% of the total number of world oil fields) are classified as giants. Their contribution to world oil production was over 60% in 2005, with the 20 largest fields alone responsible for nearly 25% (Figure 2). Giant fields represent roughly 65% of the global ultimate recoverable conventional oil resources (Robelius, 2007). Many studies have pointed out the importance of giant oil fields, for instance Campbell (1991), Hirsch (2008), Meng and Bentley (2008).

Figure 2: World crude oil production from 1925 to 2005. The dominance of the giant oil fields can clearly be seen. Modified from Robelius (2007)

The overall production from giant fields is declining, because a majority of the largest giant fields are over 50 years old, and fewer and fewer new giants have been discovered since the decade of the 1960s (Figure 3). The average contribution from an individual giant oilfield to world production is less than 1%. Thus, with few exceptions, e.g., Ghawar, the contribution from a single field is generally small compared to the total. On this basis, our approach is to estimate collective behaviors.

Figure 3: Discovery trends for giant oil fields in both number and annual discovered volume, based on the most optimistic, backdated URR values. Modified from Robelius (2007)

Fields with production over 100,000 b/d were included in our analysis, but they numbered only 20 fields in our total. Production profiles of giant fields generally have a long plateau phase, rather than the sharp “peak” often seen in smaller fields. The end of the plateau phase is the point where production enters the decline phase. We adopted the end-of-plateau as the point where production lastingly leaves a 4% fluctuation band.

Each field is assumed to have a constant decline rate, and the production for an individual oil field fluctuates around some average value over time. Examples of how well this approximation agrees with actual production in a few cases are shown in Figures 4 and 5.

Figure 4: The production curves of the land-based US giant Prudhoe Bay and the giant UK Thistle offshore field. The approximately exponential average decline rate is clearly seen in these two well-behaved fields.

In some cases, wars, sabotage and politically motivated shut downs are evident, making these fields more difficult to describe in a simple manner. Many OPEC fields have been on a plateau for a very long time, and some were even mothballed for various periods. Nonconforming fields, such as Eldfisk, Ekofisk and some fields in Nigeria, were treated separately, and production disturbances such as guerrilla attacks or accidents were disregarded (Figure 5 not shown).

Decline curves can be made much more detailed and complicated (hyperbolic etc.), so the simple exponential model used here should be seen as a simplified model for future outlooks. One disadvantage of the exponential decline curve is that it tends to underestimate tail production, which usually flattens out to a harmonic decline. However, the production levels far out in the tail region are generally very low compared to the plateau level, so this approximation is a minor problem for the model. In the near and medium term future the exponential decline curve is a suitable tool for realistic outlooks.
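To make the tail effect concrete, here is a small sketch contrasting an Arps-style exponential decline with the slower harmonic tail mentioned above (the initial rate and decline constant are illustrative assumptions):

```python
import math

# Exponential decline q0*exp(-D*t) vs harmonic decline q0/(1 + D*t).
# Both start at the same rate; the exponential underestimates the tail.
q0, D = 100.0, 0.10   # initial rate (units/year) and decline constant (1/year)
for t in (5, 10, 20, 40):
    q_exponential = q0 * math.exp(-D * t)
    q_harmonic = q0 / (1 + D * t)
    print(f"t = {t:>2} yr: exponential {q_exponential:6.1f}, "
          f"harmonic {q_harmonic:6.1f}")
```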

Some fields can also show complex behavior with several exponential decline phases or even production collapses, where the decline rate can double in the end stage of the field’s life. In other cases the introduction of new technology can revive a field and significantly dampen the decline temporarily. This was the case in some Russian fields, which were reworked after the fall of the Soviet Union. However, Höök et al. (2009) found that such events are likely to result in higher decline rates later on, compensating for the temporary decrease in decline rate.

Average decline rate

The Uppsala giant field database includes 331 giant oil fields with a combined estimated URR of over 1130 Gb, using estimates adopted by Robelius (2007). 214 fields are land-based (about 65% of the total), while 117 are offshore installations (about 35%). To calculate the decline rate of giants that were in decline as of the end of 2005, we considered only the 261 fields classified as post-plateau and in decline. Of these, 170 were land-based and 91 offshore. IEA (2008) gives an average depletion factor, defined as cumulative production divided by initial 2P reserves, of 48% for their super-giants and giants. Höök et al. (2009) found that most giant fields leave the plateau phase and reach the onset of decline when around 40% of the URR has been produced, and combined with IEA’s average depletion factor, it is not surprising that the majority of the fields are categorized as in decline.

An average annual decline rate for the world’s giant oil fields is seen to be roughly -6.5%, which is in line with the average observed decline rate worldwide of -6.5% and the -5.8% production-weighted average annual decline rate obtained by IEA (2008). The agreement with the 5.8% production-weighted annual decline for large fields obtained by CERA (2007) is good.

Decline rates of land vs offshore Giant fields

  • 170 land: mean -4.9%, median -4.4%, production-weighted -3.9%
  • 91 offshore: mean -9.4%, median -9.0%, production-weighted -9.7%

Land-based fields decline much slower than offshore fields, as expected and in agreement with both IEA (2008) and CERA (2007). The reason for this difference is generally the higher production capabilities built into offshore installations in order to repay expensive investments as soon as possible. Also, a significant number of land-based fields started to decline far back in time, before the introduction of modern production techniques such as water injection and other pressure managing methods. In comparison, there were virtually no deepwater offshore fields before 1970. The offshore group can be even further divided into shelf and deepwater fields, where deepwater fields tend to decline faster and shelf fields somewhat slower.

Decline rates of OPEC versus NON-OPEC giant fields

OPEC Fields

  • 97 all: mean -4.8%  median -4.1%   production weighted -3.4%
  • 73 land: mean -3.8%  median -3.8% production weighted -2.8%
  • 24 offshore: mean -7.7%  median -6.1% production weighted -7.5%

NON-OPEC Fields

  • 164 all: mean -7.5%  median -6.3%   production weighted -7.1%
  • 97 land: mean -5.7%  median -4.7% production weighted -5.2%
  • 67 offshore: mean -10%  median -9.4% production weighted -10.3%

The average decline of all non-OPEC giant fields is above 7%, indicating that non-OPEC production is dropping relatively rapidly.

The OPEC quota system appears to have been quite efficient at moderating production and maintaining the longevity of fields, instead of extracting oil rapidly with an accompanying high decline rate.

Many of the low decline rates can be found in fields in the US that peaked prior to 1970s. In fact most of the non-OPEC land group is dominated by the US giant fields. The non-OPEC offshore group is dominated by giant fields in the North Sea. Many fields, both giant fields and smaller ones, in the North Sea, show a high decline rate, for instance in Norway and the UK (Zittel, 2001; Höök and Aleklett, 2008).

In comparison to the non-OPEC group, the OPEC group generally displays lower decline rates. One quite intriguing detail is that OPEC fields tend to exit the plateau phase at a lower percentage of their URR volumes. This is an explanation for the lower decline rate. Instead of a prolonged plateau, a longer decline phase with less annual decrease has generally been favored as a production strategy compared to non-OPEC. This is examined in greater detail in another study (Höök et al, 2009). An alternative explanation is that OPEC URR estimates are exaggerated, as claimed by former Aramco vice-president Sadad al Husseini (2007).

Evolution of decline rate in time

It is useful to consider the historical evolution of field decline rates, since many giant fields are old and passed into decline before much of modern oil field technology was developed and implemented. The year that fields left plateau production was used to form subgroups, e.g., if a field started to decline in 1950-1959, it is included in the 1950s group and so on. We believe that the year of the onset of production decline is of greater importance because it better reflects the impacts of improved technology and alternate production strategies.

From this data, it is evident that average decline rates for giant oil fields as a group are increasing with time, even though individual field decline rates are essentially constant once a field has reached the onset of decline.

An important conclusion is that higher decline rates must be applied to giant fields that enter decline in the future. Prolonged plateau levels and increased depletion made possible by new and improved technology result in generally higher decline rates.

Detailed case studies of giant oilfields suggest that technology can extend the plateau phase, but at the expense of more pronounced declines in later years (Gowdy and Juliá, 2007). Our findings verify their conclusion.

Since a large number of important giants are subject to enhanced production methods, such as water flooding, gas injection, fracturing or other measures, it is reasonable to expect relatively higher declines after those fields depart their plateau phase.

Comprehensive discussion on development of mature oilfields and a few examples of utilization in giant fields can be found in Babadagli (2007). The collapse of production from the Cantarell field in Mexico, which was extensively subjected to technologies aimed at increasing production, meant that the field declined even faster than the government’s pessimistic scenarios (Luhnow, 2007).

For all offshore fields, a clear trend towards higher decline rates over time was found. For all land-based fields, the trend is not as clear but is directionally similar. Separating OPEC and non-OPEC fields reveals larger differences. Data for the decade of the 2000s are limited because less declining field data was available as of the end of 2005.

The non-OPEC land group shows an inclination towards somewhat higher decline rates. The increasing decline rate trend for non-OPEC offshore giant oil fields is much clearer. The trends for the land-based fields deviate in the 1970s and 1980s, which is probably due to the twin oil crises that occurred during those decades. For the OPEC group, a tendency towards lower decline rates is observed and can be explained by the fact that fields were taken from their plateau phase for political reasons, resulting in subsequent less steep decline rates.

Land-based giant fields, by decade of decline onset:

| Group | Decade | # fields | Mean % | Median % | Production-weighted % |
|-----------|----------|----------|--------|----------|------------------------|
| OPEC | pre-1960 | 10 | -4.3 | -4.7 | -4.0 |
| non-OPEC | pre-1960 | 13 | -4.2 | -3.8 | -4.3 |
| OPEC | 1960s | 8 | -4.9 | -5.5 | -5.9 |
| non-OPEC | 1960s | 10 | -5.3 | -5.4 | -6.0 |
| OPEC | 1970s | 36 | -2.9 | -3.0 | -2.2 |
| non-OPEC | 1970s | 36 | -5.5 | -4.7 | -4.9 |
| OPEC | 1980s | 5 | -2.8 | -3.0 | -1.9 |
| non-OPEC | 1980s | 20 | -4.8 | -4.1 | -4.6 |
| OPEC | 1990s | 12 | -5.1 | -5.1 | -4.0 |
| non-OPEC | 1990s | 16 | -8.2 | -6.3 | -6.5 |
| OPEC | 2000s | 2 | -9.8 | -9.8 | -10.2 |
| non-OPEC | 2000s | 2 | -11.5 | -11.5 | -9.9 |

Offshore giant fields, by decade of decline onset:

| Group | Decade | # fields | Mean % | Median % | Production-weighted % |
|-----------|----------|----------|--------|----------|------------------------|
| OPEC | pre-1960 | 0 | | | |
| non-OPEC | pre-1960 | 0 | | | |
| OPEC | 1960s | 0 | | | |
| non-OPEC | 1960s | 2 | -2.8 | -2.8 | -3.7 |
| OPEC | 1970s | 7 | -4.7 | -3.9 | -4.3 |
| non-OPEC | 1970s | 10 | -6.8 | -6.5 | -7.9 |
| OPEC | 1980s | 3 | -5.2 | -6.3 | -6.2 |
| non-OPEC | 1980s | 13 | -8.5 | -7.5 | -9.3 |
| OPEC | 1990s | 12 | -8.2 | -7.0 | -7.9 |
| non-OPEC | 1990s | 24 | -11.5 | -12.1 | -11.5 |
| OPEC | 2000s | 2 | -19.6 | -19.6 | -20.8 |
| non-OPEC | 2000s | 18 | -11.7 | -11.8 | -10.3 |

Future global production

Without good data on a large fraction of the world's oil fields, an accurate estimate of future global oil production cannot be developed. While various databases exist, all include approximations and estimates, so none are fully definitive. Nevertheless, a number of factors can provide insights into what might evolve. First, the world's giant oil fields are the dominant contributors to total world oil production. Second, the decline of smaller fields is equal to or greater than that of the giants (see for instance IEA, 2008; CERA, 2007). A detailed study of Norwegian fields showed that giants declined at an average of 13%, while the small fields, condensate, and NGL declined at 20% or more (Höök and Aleklett, 2008).

A small field requires fewer wells to fully develop; hence it is more easily depleted. A large field requires many more wells, often widely separated, so it is typically depleted more slowly. High depletion rates, which are common in small fields, have been shown to correlate strongly with high decline rates (Höök et al., 2009). Thus, giant oil field decline rates are useful for estimating the likely average world decline. Accordingly, we believe that the decline in existing production, both for giants and other fields, will average at least 6.5%, or 5.5% if production-weighted.

The findings of Höök et al. (2009) indicate that the decline rates are significant only in the first digit. Consequently, we use 6% for our production outlook to reflect uncertainty. By comparison, in a field-by-field study of predominantly giant fields, IEA (2008) derived an average worldwide decline rate for all oil fields of 6.7%. IEA (2008) stated that field size was a large determinant of decline behavior, noting that large fields decline more slowly than small fields. This reasoning also supports their expected increases in future decline rates as the world moves towards generally smaller oilfields.

The exact annual increase in the world average decline rate is difficult to estimate and would require a more comprehensive database than was available to us. Accordingly, the value of 0.15% per year derived here should be taken as a rough estimate. The important point, however, is that the average decline rate in existing production is clearly increasing with time, as is the contribution from declining fields. Consequently, "we must run faster and faster just to stand still."

Using our 6% production-weighted average decline and extrapolation of the contribution from declining fields, one can create a future outlook for world crude oil production. By incorporating the increasing average decline, another possible future can be envisioned. The difference between using a constant decline rate and a growing decline is as much as 7 Mb/d by 2030 (Figure 13).

Figure 13: Historical world oil production along with a crude oil forecast using the reference scenario from IEA World Energy Outlook 2008. A constant decline rate of existing production of 6%, combined with an increasing share of fields in decline, is displayed as one possibility. Our other scenario is a case with increasing average decline. The IEA WEO 2008 forecast for fields in production (FIP) is compared to our own estimates of reasonable decline rates and the contribution from declining fields. The IEA forecast is reasonable in the near term, but towards 2030 it seems optimistically biased. Using a constant decline rate rather than an increasing one can mean a difference of as much as 7 Mb/d of production capacity by 2030.
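To make the constant-versus-increasing comparison concrete, here is a minimal sketch of the arithmetic in Python. The 50 Mb/d of production assumed to be in decline is a hypothetical round number, not the paper's dataset; the 6% rate and the 0.15 percentage-point annual increase are the figures derived above.

```python
# Minimal sketch (hypothetical starting volume): existing production under
# a constant 6% annual decline versus a decline rate that grows by 0.15
# percentage points per year, as derived in the text.

def project(p0, years, base_rate=0.06, growth=0.0):
    """Apply an annual decline that starts at base_rate and rises by
    `growth` (as a fraction) each successive year."""
    p = p0
    for t in range(years):
        p *= 1.0 - (base_rate + growth * t)
    return p

p0, years = 50.0, 22   # Mb/d assumed in decline; horizon 2008 to 2030

constant = project(p0, years)                 # flat 6% per year
rising   = project(p0, years, growth=0.0015)  # 6% rising 0.15 pp per year

print(f"constant 6%/yr: {constant:.1f} Mb/d remaining by 2030")
print(f"rising decline: {rising:.1f} Mb/d remaining by 2030")
print(f"gap:            {constant - rising:.1f} Mb/d")
```

On these toy numbers the gap is roughly 4 Mb/d; the paper's 7 Mb/d figure also counts the growing share of fields entering decline, which this sketch omits.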


Conclusion

Based on a comprehensive database of giant oil field production data, we estimated the average decline rates of the world’s giant oil fields that are beyond their plateau phase. Since there are large differences between land and offshore fields and non-OPEC and OPEC fields, separation into different subclasses was necessary. In order to obtain a realistic forecast of future giant field decline rates, the subclasses were treated separately to better reflect their different behaviours. Thus, our average total decline rate for post-plateau giant fields of 6.5% and CERA’s overall 6.3% are in good agreement, and our 5.5% production-weighted giant field decline rate compares reasonably with IEA’s 6.5% and CERA’s 5.8%. Offshore fields decline faster than land fields, and OPEC fields decline slower than non-OPEC fields. There are small differences in the data sets and definitions between the studies, but the results from these three studies can be considered approximately equivalent.

The evolution of decline rates over time includes the impact of new technologies and production techniques and clearly shows that average decline rates are increasing. This verifies the results of Gowdy and Juliá (2007). Furthermore, prolonged plateau levels come at the cost of higher subsequent decline rates. This conclusion is in line with the findings of IEA (2008) regarding trends in average decline. The trends in average and production-weighted decline rates indicate that technology transfer, primarily to the Middle East, was able to dampen the decline in many highly productive fields (Figures 10, 11 and 12). This cannot continue, and ultimately production-weighted decline must approach the average decline rate, as in the case of Norway (Figure 9). How the production-weighted decline will behave in the future is difficult to estimate, but an increase appears inevitable. Future decline rates of giant fields that have not yet left the plateau phase can be expected to be higher than those of fields now in decline. This is in line with a recent statement by Petrobras downstream director Paulo Roberto Costa (2008) that mature fields decline at around 10%. The crash of the Cantarell field in Mexico and the experiences of the North Sea giants are vivid examples of what can happen to other giant oilfields in the future.

These findings have large implications for the future, since giant oilfields, the most important base of world oil production, will decline more rapidly. In the extreme, a potential 10% annual decline in Ghawar would be very challenging to compensate for and would create severe problems for Saudi Arabia and the world. The future behaviour of the remaining giants, especially within OPEC, will be a key factor in future oil supply. Because the giants have such a large influence, their decline behaviour can be used to estimate decline rates for world oil production. Many studies have shown that smaller fields, condensate, and NGL decline at least as fast as giant oilfields once the onset of decline is reached (CERA, 2007; Höök and Aleklett, 2008; IEA, 2008). Consequently, there is a strong basis for using giant oilfields to set a floor for future decline rate assumptions.

In conclusion, this analysis shows that the average decline rate of the giant oil fields has been increasing with time, reflecting the fact that more and more fields enter the decline phase and fewer and fewer new giant fields are being found. The increase is partly due to new technologies that have temporarily maintained production at the expense of more rapid subsequent decline. Growing average decline rates have also been noted by IEA (2008). The difference between using a constant decline in existing production and an increasing decline rate is significant and could mean a difference of as much as 7 Mb/d by 2030 (Figure 13). By 2030, production from fields currently on stream could have decreased by over 50%, in agreement with IEA (2008). The struggle to maintain production and compensate for the decline in existing production will become harder and harder. Our conclusion is that the world will face an increasing oil supply challenge, as the decline in existing production is not only high but also increasing.

More details about the above

IEA (2008) uses a similar classification in their study. IEA classifies "super-giants" as fields with more than 5 Gb of initial 2P reserves, "giants" as fields with more than 500 million barrels of initial 2P reserves, and "large fields" as those with initial reserves of more than 100 million barrels. In IEA (2008), super-giants are treated as a subclass, while this study includes them in the giant category. In total, IEA covers 317 super-giant and giant fields, which makes their data set similar to ours. Their production-weighting uses the cumulative production of each field in the averaging.

CERA (2007) did not treat giant oil fields in the same way as this study. Instead, their study was performed on a set of "large fields", classified as fields with more than 300 million barrels of original 2P reserves of oil and condensate. In total, their dataset represents 1,155 billion barrels of original 2P reserves in place, accounting for approximately half of current annual global production. Consequently, their dataset is deemed approximately equal to the one used in this study. For production-weighting, CERA (2007) uses each field's latest oil and condensate production in the averaging.

Because of financial and practical differences between land-based and offshore fields, a division is needed to establish a comprehensive picture of how each subclass behaves. A further division into OPEC-fields and non-OPEC fields was also made to better reflect the potentially different behaviors of giants managed with no political restrictions on production and those sometimes limited by quota systems.

OPEC controls or formerly controlled 143 of the fields in our database. Gabon is no longer a member of OPEC, but its fields were classified as OPEC fields because they were previously subject to the quota system. Ecuador suspended its OPEC membership but recently rejoined the organization. In total, OPEC has 104 land and 39 offshore fields in our database.

Outside of OPEC, 190 fields were considered: 110 onshore and 78 offshore. The North Sea, Russia, and the US were the most important regions within the non-OPEC group.

The decline rate of a field is affected by the introduction of new technology, investments, changes in strategy, and other factors affecting production. The statistical uncertainty is difficult to estimate, since production data embody political influences, differences in definitions, reporting practices, and many other parameters, making conventional statistical error estimates hard to apply. Traditional statistical analysis, based on the assumption that production data measure approximately the same thing, yields standard deviations of around 5% and may be seen as a rough attempt to put a number on the inaccuracy (Höök et al., 2009). By comparison, neither IEA (2008) nor CERA (2007) provides any uncertainty estimates, so it is hard to judge the statistical variation in their results.

Because the number of fields is so large, our approach provided reasonable statistics: mean, median, and production-weighted values for the giant oil fields as a group. The production-weighted values were created by weighting each field's decline rate by its peak or plateau production level, giving greater importance to fields with high production. The production-weighted decline is lower than the mean value because high-production fields tend to be larger and to decline more slowly than the rest.
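As an illustration of the three averages, here is a minimal sketch with hypothetical fields; the weighting follows the description above, using each field's peak or plateau production level.

```python
# Hypothetical fields: (peak/plateau production in kb/d, decline rate %/yr).
from statistics import mean, median

fields = [
    (1200, -3.5),   # large field, gentle decline
    (450,  -6.0),
    (300,  -9.0),   # small field, steep decline
]

rates = [r for _, r in fields]
weighted = sum(p * r for p, r in fields) / sum(p for p, _ in fields)

print(f"mean:                {mean(rates):.1f}%")    # -6.2%
print(f"median:              {median(rates):.1f}%")  # -6.0%
print(f"production-weighted: {weighted:.1f}%")       # -4.9%
# The large, slowly declining field pulls the weighted rate to a less
# steep value than the simple mean, exactly the pattern described above.
```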

References

Aleklett, K., et al. 2012. Peeking at peak oil. Berlin: Springer

al-Husseini, S.I., 2007, Long-Term Oil Supply Outlook: Constraints on Increasing Production Capacity, presentation held at Oil and Money Conference, London, 30 October 2007, see also: http://www.boell-meo.org/download_en/saudi_peak_oil.pdf

Arps, J.J, (1945); Analysis of Decline Curves, Transactions of the American Institute of Mining, Metallurgical and Petroleum Engineers, 1945, 160, 228-247.

AAPG (1970); Geology of giant petroleum fields, Memoir 14, 1970
AAPG (1980); Giant oil and gas fields of the decade 1968-1978, Memoir 30 / 1980
AAPG (1992); Giant oil and gas fields of the decade 1978-1988, Memoir 54 / 1992
AAPG (2003); Giant oil and gas fields of the decade 1990-1999, Memoir 78 / 2003
AAPG (2005); Saudi Arabia’s Ghawar Field – The Elephant of All Elephants, AAPG Explorer, Jan 2005

Aleklett, K. (2006); Oil production limits mean opportunities, conservation. Oil & Gas Journal, Volume 104, Issue 31, 21 Aug 2006

Babadagli, T. (2007); Development of mature oil fields — A review, Journal of Petroleum Science and Engineering, Volume 57, Issues 3-4, June 2007, Pages 221-246

Barnes, R.J, Linke, P., Kokossis, A. (2002); Optimisation of oilfield development production capacity, Computer Aided Chemical Engineering, Volume 10, 2002, Pages 631-636

Doublet, L.E, Pande, P.K, McCollom, T.J, Blasingame, T.A. (1994); Decline Curve Analysis Using Type Curves–Analysis of Oil Well Production Data Using Material Balance Time: Application to Field Cases, Society of Petroleum Engineers paper presented at the International Petroleum Conference and Exhibition of Mexico, 10-13 October 1994, Veracruz, Mexico, SPE paper 28688-MS, 24 p.

Duroc-Danner, B.J., 2009, Game-Changing Technology: The Key to Tapping Mature Reservoirs, World Energy. Vol. 5, Issue 1, February 9, 2009.

CERA (2007); Finding the Critical Numbers: What Are the Real Decline Rates for Global Oil Production? Private report written by Peter M. Jackson and Keith M. Eastwood

Campbell, C., (1991); The Golden Century of Oil, 1950-2050: The Depletion of a Resource, first edition, Springer, 29 October 1991, 368 p.

Costa, Paulo Roberto (2008): Statement in an interview in Valor Economico on 27 October 2008, available from: http://rigzone.com/news/article.asp?a_id=68365

Dittmar, M. 2016. Regional Oil Extraction and Consumption: A Simple Production Model for the Next 35 years Part I. Biophysical Econ Resource Quality.

EIA, 2009, January 2009 International Petroleum Monthly, see also: http://www.eia.doe.gov/emeu/ipsr/supply.html

ExxonMobil (2004); A Report on Energy Trends, Greenhouse Gas Emissions, and Alternative Energy, information to share holders, February 2004

Gowdy, J., Juliá, R. (2007); Technology and petroleum exhaustion: Evidence from two mega-oilfields, Energy, Volume 32, Issue 8, August 2007, Pages 1448-1454

Hallock, J. L., Jr, et al. 2014. Forecasting the limits to the availability and diversity of global conventional oil supply: validation. Energy 64:130–153.

Hirsch, Robert (2008); Mitigation of maximum world oil production: Shortage scenarios, Energy Policy, Volume 36, Issue 2, February 2008, Pages 881-889

Höök, M., Aleklett, K. (2008); A decline rate study of Norwegian Oil Production, Energy Policy, Volume 36, Issue 11, November 2008, Pages 4262-4271

Höök, M., Söderbergh, B., Jakobsson, K., Aleklett, K. (2009); The evolution of giant oil field production behaviour, Natural Resources Research, Volume 18, Number 1, March 2009, Pages 39-56

IEA (2008); World Energy Outlook 2008, available from: http://www.worldenergyoutlook.org/

IEA. 2010. World energy outlook 2010, 116. International Energy Agency.

Luhnow, D. (2007); Mexico’s Oil Output Cools, Wall Street Journal, Monday 29 January 2007. Available from: http://www.rigzone.com/news/article.asp?a_id=40538
Meng, Q.A, Bentley, R.W. (2008); Global oil peaking: Responding to the case for ‘abundant supplies of oil’, Energy, Volume 33, Issue 8, August 2008, Pages 1179-1184

Nehring, R., (1978); Giant Oil Fields and World Oil Resources, Report prepared by the RAND Corporation for the Central Intelligence Agency, June 1978.

Ortíz-Gómez, A. Rico-Ramirez, V., Hernández-Castro, S. (2002); Mixed-integer multiperiod model for the planning of oilfield production, Computers & Chemical Engineering, Volume 26, Issues 4-5, 15 May 2002, Pages 703-714

Palsson, B. Davies, D.R. Todd A.C. and Somerville, J.M. (2003); The Water Injection Process A Technical and Economic Integrated Approach, Chemical Engineering Research and Design, Volume 81, Issue 3, March 2003, Pages 333-341

Pickens, T. Boone (2008); testimony to the US Senate Committee on Energy and Natural resources, 17 June 2008, see also: http://energy.senate.gov/public/_files/PickensTestimony061708.doc

Robelius, F., (2007); Giant Oil Fields – The Highway to Oil: Giant Oil Fields and their Importance for Future Oil Production. Doctoral thesis from Uppsala University

Simmons, M., (2002); The World’s Giant Oilfields, white paper from January 9, 2002, available from: http://www.simmonsco-intl.com/files/giantoilfields.pdf

Schlumberger (2005); Schlumberger Chairman and CEO Andrew Gould addressed the oil and gas investment community at the 33rd Annual Howard Weil Energy Conference on April 4, 2005 in New Orleans, Louisiana, USA. Available from: http://www.slb.com/content/news/presentations/2005/20050404_agould_neworleans.asp

Speight, J., 2008, Synthetic Fuels Handbook: Properties, Process, and Performance, McGraw-Hill Professional, 2008, 422 p

Zittel, W., (2001); Analysis of the UK Oil Production, a contribution to ASPO (the Association for the Study of Peak Oil & Gas), 22 February 2001, available online from: http://www.peakoil.net/Publications/06_Analysis_of_UK_oil_production.pdf

Posted in Flow Rate, How Much Left, Peak Oil | Tagged , , , , , , , | Comments Off on Giant oil field decline rates, peak oil, & reserves

Book review of “Surveillance State”

Preface. My next book will be about the many reasons the electric grid can't stay up: energy storage isn't scaling up; heavy-duty transportation still depends on diesel (despite a few $400,000-$500,000 electric trucks subsidized in California and New York by clean-air boards); the natural gas needed to balance wind and solar, follow load, and provide baseload power is finite; all renewables are made with fossil fuels at every step of their life cycle, so they can't outlive fossils; 100% recycling is impossible (and also requires fossil fuels); nuclear power doesn't scale; manufacturing requires fossil fuels (see "Life After Fossil Fuels"); and much more. Meanwhile, what electricity exists is being consumed by AI, the Cloud, and Bitcoin far faster than the grid is growing.

So this isn’t something to worry about. When the grid goes down permanently, so will electronic surveillance.  But I added this post anyhow because I thought it was interesting. China hopes to nip rebellion in the bud before a revolution takes place by their excessive monitoring and to show that their system is better than democracy. They can respond quickly to the reasons for the unrest. The history of surveillance goes way back. Nations have inventoried their people and resources for millennia (for taxation and more).  The extent that China can monitor people and how they use the information is scary.  Perhaps a glimpse of your own future.  By 2019, Chinese surveillance systems had expanded across the globe. Huawei had installed Safe City systems in 700 cities across more than 100 countries and regions. ZTE in 160 cities spread across 45 countries.

Continue reading

Posted in Political Books | Tagged , | 1 Comment

How Christians were manipulated by Pat Robertson into becoming right-wing Republicans

In 1984, Pat Robertson's evangelical Christian show the "700 Club" raised $248 million

This is a book review of "The Gospel of Self: How Jesus Joined the GOP," written by an evangelical Christian for evangelical Christians to show them how he, as the senior producer of Pat Robertson's "The 700 Club," duped them out of their morals and money.

Pat Robertson played a key role in shifting the GOP to the far right. He established the Christian Broadcasting Network (CBN) in 1960, 28 years before Rush Limbaugh's national radio show and 36 years before Fox "News". Pat Robertson is in many ways to blame for Trumpism today.

Continue reading

Posted in Critical Thinking, Critical Thinking and Scientific Literacy, Human Nature, Pat Robertson, Politics | Tagged , , , , , | 1 Comment

Hydrogen: The dumbest & most impossible renewable

PetroRabigh hydrogen production unit, Saudi Arabia. Source: tradearabia.com

Preface. This post originally appeared in Skeptic Magazine in 2008 as “The Hydrogen Economy. Savior of Humanity or an Economic Black Hole?” I’ve updated it quite a bit since then.

Hydrogen is the dumbest, most ridiculous energy alternative. It is nowhere near renewable because hydrogen is not an energy source: like a battery, energy has to be forced into it, and you lose even more energy converting it back to electricity. It has the worst energy return of any alternative, because far more energy is used than you ever get back. Think about it: first you have to split hydrogen from natural gas, or use many times more energy than that to electrolyze it from water; then compress or liquefy it; then build incredibly expensive and short-lived steel containers and pipelines, because hydrogen degrades and embrittles them; and finally deliver the hydrogen to non-existent hydrogen vehicles. Fuel cell technology is far from commercial.

It’s also highly explosive. Hydrogen requires 12 times less energy to ignite than gasoline vapor, so heat sources or the smallest of sparks can turn hydrogen into a bomb.  

Nor can hydrogen replace fossil fuels in manufacturing (see Chapter 9, "Manufacturing Uses Over Half of All Fossil Energy," in Life After Fossil Fuels: A Reality Check on Alternative Energy). There are no commercial ways to use hydrogen, or electricity, to replace fossil fuels in manufacturing, so with peak oil likely having happened in 2018, there's not much time left to invent them, and it would take many years and trillions of dollars to scale them up to commercial levels, if it can be done at all. It probably can't, because the carbon in coal or charcoal is essential. Although hydrogen direct reduction can take pre-heated iron ore and convert it into direct reduced iron (H2 DRI), also known as sponge iron, carbon from coal or charcoal is still needed to turn it into steel, even in an electric arc furnace plant. If just half of China's steel were produced this way, the electricity required just to produce the H2 would be over 200 TWh, requiring over 150 GW of solar capacity dedicated to this process alone, nearly equal to the total installed solar capacity of Europe today.


Even more reasons why H is dumb (the cost, EROI and more): Lesser JA (2024) Green Hydrogen: A Multibillion-Dollar Energy Boondoggle. Manhattan Institute.

The heavily subsidized eight hydrogen production centers funded by the Inflation Reduction Act and other government funds are having a hard time finding customers for the hydrogen they plan to make. The only customers that might exist are the ones already using hydrogen for refining petroleum, chemicals, and fertilizer. But convincing them to switch to clean H2 is not an easy sell: it will cost them money. Buyers need assurance that the amount needed will be produced and delivered, which would require as-yet non-existent pipelines and storage facilities and the breaking of long-standing contracts. And the subsidies for hydrogen that costs three times or more as much would need to last for decades. At least a quarter of hydrogen is made onsite as a byproduct and is used onsite, so there is no reason to buy that hydrogen elsewhere. The switch is difficult for many other complex economic reasons: St. John J (2024) Clean hydrogen has a serious demand problem. Experts worry that U.S. policies to make clean hydrogen cheaper aren't enough to convince today's dirty hydrogen users to make the switch. Canary Media.


* * *

Alice Friedemann. 2008. “The Hydrogen Economy. Savior of Humanity or an Economic Black Hole?” Skeptic 14:48-51.

Skeptics scoff at perpetual motion, free energy, and cold fusion, but what about energy from hydrogen? Before we invest trillions of dollars in a hydrogen economy, we should examine the science and pseudoscience behind the hydrogen hype. Let’s begin by taking a hydrogen car out for a spin.

Although the Internal Combustion Engine (ICE) in your car can burn hydrogen, the hope is that someday fuel cells, which are based on electrochemical processes rather than combustion (which converts heat to mechanical work), will become more efficient and less polluting than ICEs.1 Fuel cells were invented by William Grove in 1839, before combustion engines existed. But the ICE won the race by using abundant and inexpensive gasoline, which is easy to transport and pour, and very high in energy content.2

Production

Unlike gasoline, hydrogen isn’t an energy source — it’s an energy carrier, like a battery. You have to make hydrogen and put energy into it, both of which take energy. Hydrogen has been used commercially for decades, so we already know how to do this. There are two main ways to make hydrogen: using natural gas as both the source and the energy to split hydrogen from the carbon in natural gas (CH4), or using water as the source and renewable energy to split the hydrogen from the oxygen in water (H2O).

1) Making Hydrogen from Fossil Fuels. Currently, 99.9% of hydrogen is made from fossil fuels, mainly for oil refining, ammonia (fertilizer), and partially hydrogenated oil (IEA 2019, 3). In the United States, 90% is made from natural gas, with an efficiency of 72%,4 which means you lose 28% of the energy contained in the natural gas to make it (and that doesn't count the energy it took to extract and deliver the natural gas to the hydrogen plant).

Hydrogen from water using electrolysis is 12 times more costly than natural gas, so it's no wonder "renewable" hydrogen from water is made only when especially pure hydrogen is required, mainly by NASA for rocket fuel.

The best element for electrolytic production is iridium, which is extremely expensive and rarer than even platinum. No other element comes close as an alternative. Iridium is a byproduct of mining for platinum and palladium; it is never mined for its own sake, and its supply is not growing (Böhm et al. 2019, Hobson 2021).

One of the main arguments made for switching to a “hydrogen economy” is to prevent global warming that has been attributed to the burning of fossil fuels. When hydrogen is made from natural gas, however, nitrogen oxides are released, which are 58 times more effective in trapping heat than carbon dioxide.5 Coal releases large amounts of CO2 and mercury. Oil is too powerful and useful to waste on hydrogen — it is concentrated sunshine brewed over hundreds of millions of years. A gallon of gas represents about 196,000 pounds of fossil plants, the amount in 40 acres of wheat.6

Natural gas as a source for hydrogen is too valuable. In the U.S. about 34% is used to generate electricity and balance wind and solar, 30% is used in manufacturing, 30% to heat homes and buildings, and another 3-5% to create fertilizer as both a feedstock and energy source. This has led to a many-fold increase in crop production, allowing 4+ billion more people to be alive.7,8

We simply don’t have enough natural gas left to make a hydrogen economy happen from this nonrenewable, finite source. Extraction of natural gas is declining in North America.9  Although fracked natural gas has temporarily been a stopgap for the decline of conventional natural gas, the International Energy Agency estimates that fracked gas production will peak as soon as 2023 (IEA 2018).  Alternatively we could import Liquefied Natural Gas (LNG), but it would take at least a decade to set up LNG ships and shoreline facilities at a cost of many billions of dollars. Making LNG is so energy intensive that it would be economically and environmentally insane to use it as a source of hydrogen.10

2) Making Hydrogen from Water. Only 4% of hydrogen is made from water via electrolysis. It is done when the hydrogen must be extremely pure. Since most electricity comes from fossil fuels in electricity generating plants that are 30% efficient, and electrolysis is 70% efficient, you end up using nearly five units of energy to create one unit of hydrogen energy: 70% * 30% = 21% efficiency.11

Hydrogen made from fresh water competes with agriculture and drinking water, so ideally hydrogen would be made from more abundant seawater. But that would require expensive purification and desalination, since electrolysis turns chloride ions into toxic chlorine gas and degrades the equipment. New technologies, such as membranes, catalysts, and electrode materials, need to be invented (Tong 2020).

Sure, renewables could generate the electricity, but only about 6.6% of power comes from wind, and 1.6% from solar (EIA 2019).

Producing hydrogen by using fossil fuels as a feedstock or an energy source defeats the purpose, since the whole point is to get away from fossil fuels. The goal is to use renewable energy to make hydrogen from water via electrolysis. When the wind is blowing, current wind turbines can perform at 30–40 percent efficiency, producing hydrogen at an overall rate of 25 percent efficiency — 3 units of wind energy lost for every unit of hydrogen energy delivered. The best solar cells available on a large scale have an efficiency of ten percent, or 9 units of energy to get 1 unit of hydrogen energy. If you use algae making hydrogen as a byproduct, the efficiency is about 0.1 percent.12 No matter how you look at it, producing hydrogen from water is an energy sink. If you want a more dramatic demonstration, please mail me ten dollars and I'll send you back a dollar.
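A back-of-envelope check of these chains, multiplying the stage efficiencies quoted above (a sketch using the article's round numbers; the reciprocal gives "units in per unit out"):

```python
def chain(*stages):
    """Overall efficiency of a sequence of conversion steps."""
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

for name, stages in [
    ("fossil plant (30%) + electrolysis (70%)", (0.30, 0.70)),
    ("wind turbine (35%) + electrolysis (70%)", (0.35, 0.70)),
]:
    eff = chain(*stages)
    print(f"{name}: {eff:.0%} overall, about {1/eff:.1f} units in per unit out")
```

The first chain reproduces the 21% figure above; the second gives roughly 25%, or about 4 units of wind energy in per unit of hydrogen out (equivalently, 3 units lost for every unit delivered).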

Hydrogen can be made from biomass, but there are numerous problems:

  1. it’s very seasonal;
  2. it contains a lot of moisture, requiring energy to store and dry it before gasification;
  3. there are limited supplies;
  4. the quantities are not large or consistent enough for large-scale hydrogen production;
  5. a huge amount of land is required because even cultivated biomass in good soil has a low yield — 10 tons per 2.4 acres;
  6. the soil will be degraded from erosion and loss of fertility if stripped of biomass;
  7. any energy put into the land to grow the biomass, such as fertilizer and planting and harvesting, will add to the energy costs;
  8. the delivery costs to the central power plant must be added; and
  9. it is not suitable for pure hydrogen production.13

Putting Energy into Hydrogen

No matter how it's been made, hydrogen holds almost no energy by volume: it is the least energy-dense fuel on earth.14 At room temperature and pressure, hydrogen takes up three thousand times more space than gasoline containing an equivalent amount of energy.15 To give hydrogen a usable energy density, it must be compressed or liquefied. Compressing hydrogen to the necessary 10,000 psi is a multi-stage process that costs an additional 15 percent of the energy contained in the hydrogen.

If you liquefy it, you will be able to get more hydrogen energy into a smaller container, but you will lose 30–40 percent of the energy in the process. Handling it requires extreme precautions because it is so cold — minus 423 F. Fueling is typically done mechanically with a robot arm.16
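A rough ledger of this packaging step (a sketch: the loss percentages are the article's; the 33.3 kWh/kg lower heating value of hydrogen is a standard figure, not from the article):

```python
LHV_H2 = 33.3  # kWh of contained energy per kg of hydrogen (standard LHV)

for label, loss in [("compression to 10,000 psi", 0.15),
                    ("liquefaction (low end)",    0.30),
                    ("liquefaction (high end)",   0.40)]:
    overhead = LHV_H2 * loss  # energy spent packaging each kilogram
    print(f"{label}: {overhead:.1f} kWh extra per kg, "
          f"{1 + loss:.2f} kWh in per kWh of hydrogen energy out")
```

So even before production losses, every kWh of compressed hydrogen costs about 1.15 kWh, and every kWh of liquid hydrogen 1.3 to 1.4 kWh.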

BloombergNEF estimates that generating enough green hydrogen to meet just 25% of our energy needs would require more electricity than the world generates today from all sources combined, plus an investment of $11 trillion in production, storage, and transportation infrastructure (Petrova 2020).

Storage

You may stumble on stories about storing hydrogen in underground salt caverns. This is how the lone Compressed Air Energy Storage (CAES) electricity generation facility stores compressed air. But only three Gulf Coast states and a tiny part of Utah have salt formations suitable for very large underground caverns at depths of 1,650-4,250 feet. CAES has yet to be deployed in rock salt, aquifers, or abandoned rock mines because these formations are less likely to be airtight, and hydrogen, the smallest element, is even more likely than air to escape.

For the storage and transportation of liquid hydrogen, you need a heavy cryogenic support system. The tank is cold enough to cause plugged valves and other problems. If you add insulation to prevent this, you will increase the weight of an already very heavy storage tank, adding additional costs to the system.17

Let’s assume that a hydrogen car can go 55 miles per kg.18 A tank that can hold 3 kg of compressed gas will go 165 miles and weigh 400 kg (882 lbs).19 Compare that with a Honda Accord fuel tank that weighs 11 kg (25 lbs), costs $100, and holds 17 gallons of gas. The overall weight is 73 kg (161 lbs, or 8 lbs per gallon). The driving range is 493 miles at 29 mpg. Here is how a hydrogen tank stacks up against a gas tank in a Honda Accord (last column is cost):

| Fuel | Amount of fuel | Tank weight with fuel | Driving range | Cost |
|----------|------------------|----------------------|---------------|--------|
| Hydrogen | 3 kg @ 3,000 psi | 400 kg | 165 miles | $2,000 |
| Gasoline | 17 gallons | 73 kg | 493 miles | $100 |

According to the National Highway Safety Traffic Administration (NHTSA), “Vehicle weight reduction is probably the most powerful technique for improving fuel economy. Each 10 percent reduction in weight improves the fuel economy of a new vehicle design by approximately eight percent.”

The more you compress hydrogen, the smaller the tank can be. But as you increase the pressure, you also have to increase the thickness of the steel wall, and hence the weight of the tank. Cost increases with pressure. At 2000 psi, it is $400 per kg. At 8000 psi, it is $2100 per kg.20 And the tank will be huge — at 5000 psi, the tank could take up ten times the volume of a gasoline tank containing the same energy content.
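Dividing driving range by full-system weight, straight from the table above, shows how badly the tank weight hurts (a sketch using only the article's numbers):

```python
systems = {
    #                          (driving range, miles; system weight, kg)
    "hydrogen (3 kg @ 3,000 psi)": (165, 400),
    "gasoline (17 gallons)":       (493, 73),
}

for name, (miles, kg) in systems.items():
    print(f"{name}: {miles / kg:.2f} miles per kg of fuel system")
# hydrogen ~0.41 mi/kg versus gasoline ~6.75 mi/kg: a roughly 16-fold gap,
# which the NHTSA weight rule quoted above compounds further.
```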

Hydrogen storage systems are heavy. According to Rosa Young, a physicist and vice president of advanced materials development at Energy Conversion Devices in Troy, Michigan: "A metal hydride storage system that can hold 5 kg of hydrogen, including the alloy, container, and heat exchangers, would weigh approximately 300 kg (661 lbs), which would lower the fuel efficiency of the vehicle."21

Fuel cells are also expensive. In 2003, they cost $1 million or more. At this stage, they have low reliability, need a much less expensive catalyst than platinum, can clog and lose power if there are impurities in the hydrogen, don’t last more than 1000 hours, have yet to achieve a driving range of more than 100 miles, and can’t compete with electric hybrids like the Toyota Prius, which is already more energy efficient and low in CO2 generation than projected fuel cells.22

Hydrogen fuel cells need a lot of platinum — far more than can be obtained if they were to become commercial (41).

Hydrogen is the Houdini of elements. As soon as you’ve gotten it into a container, it wants to get out, and since it is the lightest of all gases, it takes a lot of effort to keep it from escaping. Storage devices need a complex set of seals, gaskets, and valves. Liquid hydrogen tanks for vehicles boil off at 3–4 percent per day.23
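Boil-off compounds daily, so an idle vehicle steadily empties its own tank. A quick calculation at the 3-4 percent per day rate just cited:

```python
# Fraction of liquid hydrogen remaining after n idle days at a given
# daily boil-off rate (compounded).
for daily_rate in (0.03, 0.04):
    line = ", ".join(f"day {d}: {(1 - daily_rate) ** d:.0%}"
                     for d in (7, 14, 30))
    print(f"boil-off {daily_rate:.0%}/day -> {line}")
```

At those rates, a car parked for a month retains only about 30-40 percent of its fuel.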

Hydrogen also tends to make metal brittle.24 Embrittled metal can create leaks. In a pipeline, it can cause cracking or fissuring, which can result in potentially catastrophic failure.25 Making metal strong enough to withstand hydrogen adds weight and cost.

Leaks also become more likely as the pressure grows higher. It can leak from un-welded connections, fuel lines, and non-metal seals such as gaskets, O-rings, pipe thread compounds, and packings. A heavy-duty fuel cell engine may have thousands of seals.26 Hydrogen has the lowest ignition point of any fuel, 20 times less than gasoline. So if there’s a leak, it can be ignited by any number of sources.27  And an odorant can’t be added because of hydrogen’s small molecular size (SBC).

Worse, leaks are invisible — sometimes the only way to know there’s a leak is poor performance.

Hydrogen leakage increases climate change

New reports show that hydrogen leaks have warming effects 11 times worse than those of CO2 over a 100-year timespan, by extending the atmospheric lifetime of methane, increasing tropospheric ozone, and adding water vapor. Leaks occur at all stages, up to 10% overall across electrolysis, compression or liquefaction, pipelines, and storage (Warwick 2022, FNC 2022).

Hydrogen emissions from the individual elements as a percentage of the hydrogen produced, transported or used within that element. Source FNC (2022)

 

And hydrogen can be bad for your health. Burning hydrogen in power plants can produce lung-damaging nitrogen oxides as a byproduct (Chediak 2022).

Distribution

One barrier to hydrogen is pipelines. There are currently 700 miles of hydrogen pipelines in operation, compared with 1 million miles of natural gas pipelines. To move to nationwide use of hydrogen, safe and effective pipelines have to be developed, along with methods for testing the degradation of pipeline metals that hydrogen embrittlement is likely to cause.

According to former secretary of energy Steven Chu (39), hydrogen seeps into metal and embrittles it, and is a materials problem that has not been solved for decades and may never be solved.  

Nor can hydrogen piggyback on natural gas pipelines at less dangerous concentrations of 5 to 15% — it can’t be extracted at the other end until pressure swing adsorption membranes are invented, and all impurities taken out so that fuel cells aren’t damaged.  Even if invented, renewable wind or solar would only be able to create excess electricity to make hydrogen for just part of the year. Pipeline operators won’t want to install and remove expensive extraction equipment seasonally (40).

If hydrogen is added to natural gas pipelines, existing compressor stations don't work well with it, and home gas appliances can handle at best a mix of 80% natural gas and 20% hydrogen. Blending hydrogen into gas pipelines for use in buildings and power generation would raise customer costs, increase air pollution, and create safety risks from explosions and fires caused by leaks, while minimally reducing greenhouse gases. Hydrogen also carries less energy per unit volume than natural gas, so a 20% hydrogen blend would provide only a 6% to 7% reduction in greenhouse gas emissions, at a much higher cost (Chediak 2022).
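The 6-7% figure follows from hydrogen's low volumetric energy density. A quick check (the heating values are standard round numbers, not from the article):

```python
# Lower heating values in MJ per normal cubic metre (standard figures).
LHV_H2, LHV_NG = 10.8, 35.8

h2_vol = 0.20  # 20% hydrogen by volume
h2_energy = h2_vol * LHV_H2
ng_energy = (1 - h2_vol) * LHV_NG

share = h2_energy / (h2_energy + ng_energy)
print(f"hydrogen's share of delivered energy: {share:.1%}")  # about 7%
# Even if the hydrogen were entirely emission-free, only ~7% of the
# energy burned is displaced, matching the 6-7% reduction quoted above.
```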

And get this: no matter how little hydrogen is mixed with natural gas in pipelines, it will degrade the "mechanical properties of most metals. Fatigue is accelerated over 30 times. There is no threshold below which hydrogen effects can be ignored. Even small amounts of hydrogen have large effects" (Ronevich and San Marchi 2019).

One of the challenges in the use of hydrogen as a vehicle fuel is the seemingly trivial matter of measuring fuel consumption. Refueling with hydrogen is a problem because there are currently no mechanisms to ensure accuracy at the pump. Hydrogen is dispensed at a very high pressure, at varying degrees of temperature and with mixtures of other gases (S.HRG. 110-1199)

Transport

Canister trucks ($250,000 each) can carry enough fuel for 60 cars.28 These trucks weigh 40,000 kg, but deliver only 400 kg of hydrogen. For a delivery distance of 150 miles, the delivery energy used is nearly 20 percent of the usable energy in the hydrogen delivered. At 300 miles, that is 40 percent. The same size truck carrying gasoline delivers 10,000 gallons of fuel, enough to fill about 800 cars.29
 
Another alternative is pipelines. The average cost of a natural gas pipeline is one million dollars per mile, and we have 200,000 miles of natural gas pipeline, which can't be reused for hydrogen: the metal would embrittle and leak, and the diameters are wrong for maximizing hydrogen throughput. Building a similar infrastructure to deliver hydrogen would cost $200 billion. The major operating cost of hydrogen pipelines is compressor power and maintenance.30 Compressors keep the gas moving, using hydrogen energy to push the gas forward. After 620 miles, 8 percent of the hydrogen has been used to move it through the pipeline.31
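Pulling the truck and pipeline figures above into one sketch (the truck overhead is treated as linear in distance, an assumption; the pipeline loss compounds the 8%-per-620-miles figure):

```python
def truck_overhead(miles):
    """Fraction of delivered hydrogen energy spent on truck delivery,
    scaled from ~20% at 150 miles (an assumed linear model)."""
    return 0.20 * (miles / 150.0)

def pipeline_delivered(miles):
    """Fraction of injected hydrogen surviving compressor consumption,
    compounding 8% per 620 miles."""
    return (1 - 0.08) ** (miles / 620.0)

for d in (150, 300, 620):
    print(f"{d:3d} miles: truck overhead ~{truck_overhead(d):.0%}, "
          f"pipeline delivers ~{pipeline_delivered(d):.0%} of input")
```

Pipelines are far less lossy per mile, but only if the embrittlement and capital-cost problems described above are solved.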

 

Another option is shipping hydrogen, but "tell that to the people who'll have to ship it across the globe at hyper-cold temperatures close to those in outer space". There are three prototype ships in experimental stages, with many technical challenges to overcome. The hydrogen has to be chilled to minus 253 degrees Celsius (-423 F), just 20 degrees above absolute zero, the coldest possible temperature, to keep it in liquid form and keep the vessel from cracking. That's even colder than liquefied natural gas (-160 C, -260 F). To prevent embrittlement, new high-strength steel, super insulation, and welding techniques will be necessary. The costs will be extremely high, though none of the prototype makers were willing to say what they might be (Saul 2021).

How much electricity would we need to make hydrogen for light-duty vehicles (Post 2017)?

Conclusion

At some point along the chain of making, putting energy in, storing, and delivering the hydrogen, we will have used more energy than we can get back, and this doesn’t count the energy used to make fuel cells, storage tanks, delivery systems, and vehicles.32 When fusion can make cheap hydrogen, when reliable long-lasting nanotube fuel cells exist, and when light-weight leak-proof carbon-fiber polymer-lined storage tanks and pipelines can be made inexpensively, then we can consider building the hydrogen economy infrastructure. Until then, it’s vaporware. All of these technical obstacles must be overcome for any of this to happen.33 Meanwhile, the United States government should stop funding the Freedom CAR program, which gives millions of tax dollars to the big three automakers to work on hydrogen fuel cells. Instead, automakers ought to be required to raise the average overall mileage their vehicles get — the Corporate Average Fuel Economy (CAFE) standard.34

At some time in the future the price of oil and natural gas will increase significantly due to geological depletion and political crises in extracting countries. Since the hydrogen infrastructure will be built using the existing oil-based infrastructure (i.e. internal combustion engine vehicles, power plants and factories, plastics, etc.), the price of hydrogen will go up as well — it will never be cheaper than fossil fuels. As depletion continues, factories will be driven out of business by high fuel costs35,36,37 and the parts necessary to build the extremely complex storage tanks and fuel cells might become unavailable.

The laws of physics mean the hydrogen economy will always be an energy sink. Hydrogen's properties require you to spend more energy than you can earn, because you must overcome water's hydrogen-oxygen bond, move heavy cars, prevent leaks and brittle metals, and transport hydrogen to its destination. It doesn't matter if all of these problems are solved, or how much money is spent: you will use more energy to create, store, and transport hydrogen than you will ever get out of it.

Any diversion of declining fossil fuels to a hydrogen economy subtracts that energy from other possible uses, such as planting, harvesting, delivering, and cooking food, heating homes, and other essential activities. According to Joseph Romm, a Department of Energy official who oversaw hydrogen and transportation fuel cell research during the Clinton Administration: "The energy and environmental problems facing the nation and the world, especially global warming, are far too serious to risk making major policy mistakes that misallocate scarce resources."38

Related articles

See posts in categories hydrogen, manufacturing, electric trucks, batteries (vehicles also need an electric battery), and When trucks stop running.

References from Skeptic magazine:

  1. Thomas, S. and Zalbowitz, M. 1999. Fuel cells — Green power. Department of Energy, Los Alamos National Laboratory, 5. www.lanl.gov/orgs/mpa/mpa11/Green%20Power.pdf
  2. Pinkerton, F. E. and Wicke, B.G. 2004. “Bottling the Hydrogen Genie,” The Industry Physicist, Feb/Mar: 20–23.
  3. Jacobson, M. F. September 8, 2004. “Waiter, Please Hold the Hydrogen.” San Francisco Chronicle, 9(B).
  4. Hoffert, M. I., et al. November 1, 2002. “Advanced Technology Paths to Global Climate Stability: Energy for a Greenhouse Planet.” Science, 298, 981–987.
  5. Union of Concerned Scientists. How Natural Gas Works. www.ucsusa.org/clean_energy/renewable_energy/page.cfm?pageID=84
  6. Kruglinski, S. 2004. “What’s in a Gallon of Gas?” Discover, April, 11. http://discovermagazine.com/2004/apr/discover-data/
  7. Fisher, D. E. and Fisher, M. J. 2001. “The Nitrogen Bomb.” Discover, April, 52–57.
  8. Smil, V. 1997. “Global Population and the Nitrogen Cycle.” Scientific American, July, 76–81.
  9. Darley, J. 2004. High Noon for Natural Gas: The New Energy Crisis. Chelsea Green Publishing.
  10. Romm, J. J. 2004. The Hype About Hydrogen: Fact and Fiction in the Race to Save the Climate. Island Press, 154.
  11. Ibid., 75.
  12. Hayden, H. C. 2001. The Solar Fraud: Why Solar Energy Won’t Run the World. Vales Lake Publishing.
  13. Simbeck, D. R., and Chang, E. 2002. Hydrogen Supply: Cost Estimate for Hydrogen Pathways — Scoping Analysis. Golden, Colorado: NREL/SR-540-32525, Prepared by SFA Pacific, Inc. for the National Renewable Energy Laboratory (NREL), DOE, and the International Hydrogen Infrastructure Group (IHIG), July, 13. www.nrel.gov/docs/fy03osti/32525.pdf
  14. Ibid., 14.
  15. Romm, 2004, 20.
  16. Ibid., 94–95.
  17. Phillips, T. and Price, S. 2003. “Rocks in your Gas Tank.” April 17. Science at NASA. http://science.nasa.gov/headlines/y2003/17apr_zeolite.htm
  18. Simbeck and Chang, 2002, 41.
  19. Amos, W. A. 1998. Costs of Storing and Transporting Hydrogen. National Renewable Energy Laboratory, U.S. Department of Energy, 20. www.eere.energy.gov/hydrogenandfuelcells/pdfs/25106.pdf
  20. Simbeck and Chang, 2002, 14.
  21. Valenti, M. 2002. "Fill'er up — With Hydrogen." Mechanical Engineering Magazine, Feb 2. www.memagazine.org/backissues/membersonly/feb02/features/fillerup/fillerup.html
  22. Romm, 2004, 7, 20, 122.
  23. Ibid., 95, 122.
  24. El kebir, O. A. and Szummer, A. 2002. “Comparison of Hydrogen Embrittlement of Stainless Steels and Nickel-base Alloys.” International Journal of Hydrogen Energy #27, July/August 7–8, 793–800.
  25. Romm, 2004, 107.
  26. Fuel Cell Engine Safety. December 2001. College of the Desert www.eere.energy.gov/hydrogenandfuelcells/tech_validation/pdfs/fcm06r0.pdf
  27. Romm, J. J. 2004. Testimony for the Hearing Reviewing the Hydrogen Fuel and FreedomCAR Initiatives Submitted to the House Science Committee. March 3. http://gop.science.house.gov/hearings/full04/mar03/romm.pdf
  28. Romm, 2004. The Hype About Hydrogen, 103.
  29. Ibid., 104.
  30. Ibid., 101–102.
  31. Bossel, U. and Eliasson, B. 2003. “Energy and the Hydrogen Economy.” Jan 8. www.methanol.org/pdf/HydrogenEconomyReport2003.pdf
  32. Ibid.
  33. National Hydrogen Energy Roadmap Production, Delivery, Storage, Conversion, Applications, Public Education and Outreach. November 2002. U.S. Department of Energy. www.eere.energy.gov/hydrogenandfuelcells/pdfs/national_h2_roadmap.pdf
  34. Neil, D. 2003. “Rumble Seat: Toyota’s Spark of Genius.” Los Angeles Times. October 15. www.latimes.com/la-danneil-101503-pulitzer,0,7911314.story
  35. Associated Press, 2004. “Oil Prices Raising Costs of Offshoots.” July 2. www.tdn.com/articles/2004/07/02/biz/news03.prt
  36. Abbott, C. 2004. “Soaring Energy Prices Dog Rosy U.S. Farm Economy.” Forbes, Reuters News Service. May 24.
  37. Schneider, G. 2004. “Chemical Industry in Crisis: Natural Gas Prices Are Up, Factories Are Closing, And Jobs Are Vanishing.” Washington Post, 1(E). March 17. www.marshall.edu/cber/media/040317-WP-chemical.pdf
  38. Romm, 2004. The Hype About Hydrogen, 8.
  39. Chu, S. June 23, 2020. Stanford Global energy dialogue series. Stanford University. Video: https://www.youtube.com/watch?v=-9SPglLg0W0&feature=youtu.be
  40. Chatsko M. 2020. Does the hydrogen economy have a pipeline problem? The Motley Fool.
  41. Chatsko M. 2020. Will platinum doom hydrogen cars? The Motley Fool.

References added after publication in Skeptic Magazine

Blain L (2021) Powerpaste packs clean hydrogen energy in a safe, convenient gray goop. New Atlas.

Böhm D, Beetz M, Maximilian Schuster M et al (2019) Efficient OER Catalyst with Low Ir Volume Density Obtained by Homogeneous Deposition of Iridium Oxide Nanoparticles on Macroporous Antimony-Doped Tin Oxide Support. Advanced Functional Materials, 2019; 1906670 DOI: 10.1002/adfm.201906670

Chediak M (2022) Hydrogen is every U.S. gas utility’s favorite future savior. Bloomberg.

DOE. 2011. Advanced technologies for high efficiency clean vehicles. Vehicle Technologies Program. Washington DC: United States Department of Energy.

EIA. 2019. Table 7.2a Electricity net generation total (all sectors). U.S. Energy Information Administration.

FNC (2022) Fugitive hydrogen emissions in a future hydrogen economy. Frazer-Nash consultancy.
https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1067137/fugitive-hydrogen-emissions-future-hydrogen-economy.pdf

Hobson P (2021) Tight supply and hydrogen hopes drive iridium up 160%. Reuters.  https://www.reuters.com/article/us-precious-iridium/tight-supply-and-hydrogen-hopes-drive-iridium-up-160-idUSKBN2AC1DG

Huang, E. 2019. A hydrogen fueling station explosion in Norway has left fuel-cell cars nowhere to charge. Quartz.

IEA. 2018. World Energy Outlook. International Energy Agency.

IEA (2019) The future of hydrogen. International Energy Agency.

Kane, M. 2019. Hydrogen fueling station explodes: Toyota & Hyundai halt fuel cell car sales. insideevs.com.

Pena, L. 2019. Hydrogen explosion shakes Santa Clara neighborhood. ABC News.

Petrova M (2020) Green hydrogen is gaining traction, but still has massive hurdles to overcome. CNBC

Post, W. March 6, 2017. The Hydrogen Economy Will Be Highly Unlikely. energycollective.com

Ronevich J, San Marchi C (2019) Hydrogen effects on pipeline steels and blending into natural gas. Sandia National Laboratories, H2FC hydrogen and fuel cells program, American Gas association sustainable growth commitee, November 5th, 2019

Saul J (2021) Too Cold to Handle? Race is on to pioneer shipping of hydrogen. Reuters.

SBC. February 2014. Hydrogen-based energy conversion. SBC Energy Institute.

S.HRG. 110-1199. June 24, 2008. Climate change impacts on the transportation sector. Senate Hearing.

Tong W, et al. 2020. Electrolysis of low-grade and saline surface water. Nature Energy.

Warwick N et al (2022) Atmospheric implications of increased hydrogen use. www.gov.uk
https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1067144/atmospheric-implications-of-increased-hydrogen-use.pdf

Woodrow, M. 2019. Bay Area experiences hydrogen shortage after explosion. ABC News.

Posted in Alternative Energy, Electric & Hydrogen trucks impossible, Energy, Hydrogen, Pipeline, Ships and Barges, Trucks | Tagged , , , | 5 Comments

Why large projects fail. Especially Renewable Energy

Megaprojects over $1 billion in order of likelihood to go over budget and timeline

Preface. There are many reasons why big projects go over their budget and timeline and generate fewer benefits than promised. Only 8.5% of projects hit the mark on both cost and time, and a minuscule 0.5% nail all three: completed on time, on budget, and delivering the promised benefits. To put that another way, 91.5% of projects go over budget, over schedule, or both; and 99.5% go over budget, over schedule, under benefits, or some combination of these.

This is a huge problem for renewables. Nuclear above all: it ranks #1 among the 25 project types most likely to take much longer and cost more than predicted. Not just the existing light water reactors (LWR), but also the new-fangled "advanced" small nuclear reactors (hardly "advanced", since many of the designs have been around for 70 years and have yet to succeed).

Fusion, hydropower dams, the California electric rail, and pumped hydro storage are also expensive and complex. So are utility-scale batteries; the electric grid expansion needed to charge batteries, heat homes, and do everything else for that matter; the large solar and wind farms to generate the electricity; hydrogen infrastructure; and compressed air energy storage.

If peak oil was in 2018, then we don't have the luxury of time. Electrifying everything would require tripling the amount of power we already generate; probably 10 times more, given that electricity would do EVERYTHING; or 20 times more, since months of electricity would need to be stored to cope with seasonal wind and solar.

Other causes of project failures are a lack of experience, not being able to borrow the billions to construct a project, getting permits and fighting NIMBYism, being overly optimistic, politics, Black Swans (the longer a project takes, the more likely a black swan will occur), underestimating the cost in order to get funding, not enough planning, poor management, supply chain failures, bad weather, natural disaster (earthquake etc), and much more.


***

Flyvbjerg B (2023) How Big Things Get Done: The Surprising Factors That Determine the Fate of Every Project, from Home Renovations to Space Exploration and Everything In Between. Crown Currency.

Nuclear power plants are one of the worst-performing project types in my database, with an average cost overrun of 120% in real terms and schedules running 65% longer than planned. Even worse, they are at risk of fat-tail extremes for both cost and schedule, meaning they may go 20 or 30% over budget. Or 200 or 300%. Or 500%. Or more. There is almost no limit to how bad things can get.

Nuclear power (from table below)

| Project Type | (A) Mean Cost Overrun (%) | (B) % of Projects in Tail (>= 50% Overrun) | (C) Mean Overrun of Projects in Tail (%) |
|---------------|---------------------------|---------------------------------------------|-------------------------------------------|
| Nuclear Power | 120 | 55 | 204 |

You build only one thing. By definition, that thing is one of a kind. To put that in the language of tailors, it is bespoke: no standard parts, no commercial off-the-shelf products, no simple repetition of what was done last time. And that translates into slow and complex. Nuclear power plants, for one, are the products of a staggering number of bespoke parts and systems that must all work, and work together, for the plant as a whole to work.

You can’t build a nuclear power plant quickly, run it for a while, see what works and what doesn’t, then change the design to incorporate the lessons learned. It’s too expensive and dangerous. That means that experimentation is out. You have no choice but to get it right the first time.

Second, there’s a problem with experience. If you are building a nuclear power plant, chances are that you haven’t done much of that before for the simple reason that few have been built and each takes many years to complete, so opportunities to develop experience are scarce. Yet with no experimentation and little experience, you still have to get it exactly right the first time. That’s difficult, if not impossible.

Even if you do have some experience building nuclear power plants, you probably won’t have experience building this particular nuclear plant because, with few exceptions, each plant is specifically designed for a specific site, with technology that changes over time.

The new “advanced” small modular reactors (SMRs) are being sold as modular and easy to repeat.  NOT TRUE. There are dozens of designs, many of which have never succeeded, and it will take the NRC a long time to get up to speed on so many options.

Here’s an example of a new reactor design that didn’t work out.  In 1983, the government of Japan launched a new project that was as promising as it was enormous. Its name was Monju, meaning “wisdom.” When completed, Monju would be both a nuclear power plant churning out electricity for consumers and a fast-breeder reactor, a new type of nuclear plant that would produce fuel for the nuclear industry. For a nation long threatened by energy insecurity, Monju was designed to deliver a better future. Construction started in 1986. It finished almost a decade later, in 1995. But a fire immediately shut down the facility. An attempt to cover up the accident became a political scandal that kept the plant closed for years.

In 2000, the Japan Atomic Energy Agency announced that the plant could restart. Japan’s supreme court finally authorized the restart in 2005. Operations were scheduled to begin in 2008, but they were postponed until 2009. Test runs started in 2010, with full operations scheduled to commence, for the first time, in 2013. But in May 2013, maintenance flaws were discovered on some 14,000 components of the plant, including critical safety equipment. The restart was halted. Further violations of safety protocols were uncovered. Japan’s Nuclear Regulation Authority declared Monju’s operator to be unqualified. At that point, the government had spent $12 billion, and the estimated cost of finally restarting Monju and operating it for ten years was another $6 billion—at a time when the 2011 Fukushima disaster had turned popular opinion against nuclear power. In 2016, the government announced that Monju would be permanently closed.

Decommissioning Monju is expected to take another 30 years and cost a further $3.4 billion. If that forecast proves more accurate than the rest, the project will have taken 60 years, cost more than $15 billion, and produced zero electricity.[5]

Lacking experimentation and experience, what you learn as you proceed is that the project is more difficult and costly than you expected.

Obstacles that were unknown are encountered. Solutions thought to work don’t. And you cannot make up for it by tinkering or starting over with revised plans. Operations experts call this “negative learning”: The more you learn, the more difficult and costly it gets.

Third, there’s the financial strain. A nuclear power plant must be completely finished before it can generate any electricity. So all the money that is poured into the plant produces nothing for the entire time it takes you to get to the ribbon-cutting ceremony—which, given the bespokeness, the complexity, the lack of experimentation, your lack of experience, the negative learning, and the need to get everything right the first time, will probably be a very long time. All this is reflected in the dreadful performance data for nuclear power plants.

Finally, don’t forget black swans. All projects are vulnerable to unpredictable shocks, with their vulnerability growing as time passes. So the fact that the delivery of your one huge thing will take a very long time means that it is at high risk of being walloped by something you cannot possibly anticipate. That’s exactly what happened to Monju. More than a quarter century after the project was launched, when the plant was still not ready to go, an earthquake caused a tsunami that struck the nuclear plant at Fukushima, producing the disaster that turned public opinion against nuclear power and finally convinced the Japanese government to pull the plug on Monju.…

It’s important to note that modularity is a matter of degree. The Empire State Building wasn’t modular to the extent that a Lego model of the Empire State Building is modular, but its floors were designed to be as similar as possible, with many being identical, which meant that workers often repeated work, which helped them learn and work faster. Similarly, the construction of the Pentagon was accelerated by keeping the five sides of the building identical. Following this line of reasoning, I advised a company building a large nuclear power plant to duplicate exactly what it had done in building a recent plant, not because the earlier plant had been a great success but because even that degree of repetition would boost them up the learning curve.

California High-Speed Rail

In 2019, California’s governor announced that the state would complete only part of the route: the 171-mile section between the towns of Merced and Bakersfield, in California’s Central Valley, at an estimated cost of $23 billion. But when that inland section is completed, the project will stop. It will be up to some future governor to decide whether to launch the project again and, if so, figure out how to get the roughly $80 billion—or whatever the number will be by then—to extend the tracks and finally connect Los Angeles and San Francisco.

Consider that the cost of the line between only Merced and Bakersfield is the same as or more than the annual gross domestic product of Honduras, Iceland, and about a hundred other countries. And that money will build the most sophisticated rail line in North America between two towns most people outside California have never heard of. It will be—as critics put it—the “bullet train to nowhere.”

Look at California High-Speed Rail. When it was approved by voters and construction started, there were lots of documents and numbers that may have superficially resembled a plan. But there was no carefully detailed, deeply researched, and thoroughly tested program, which is to say that there was no real plan.

When the project got under way, it could at best be described as a “vision” or an “aspiration.”

It’s a safe bet that when California governor Gavin Newsom decided not to scrap the California High-Speed Rail project, only curtail it, he and his aides were thinking very carefully about how sunk costs would weigh on the public mind, and they knew that scrapping the project would be interpreted by the public as “throwing away” the billions of dollars already spent.

I expect that California’s “bullet train to nowhere” will one day be the standard textbook illustration of the sunk-cost fallacy and escalation of commitment.

By its impressive detail, such analysis may give the false idea that the overall plan is stronger than it is, like a beautiful facade with no structure behind it. Governments and bureaucratic corporations are good at churning out this sort of analysis. It is a major reason why the California High-Speed Rail project was able to spend more than a decade “in planning” before construction began, producing impressive quantities of paper and numbers without delivering a plan worthy of the name.

One big driver is politics. The ambition to be the first, the biggest, the tallest, or some other superlative is another.

Politicians everywhere know that awarding contracts to domestic companies is a good way to make influential friends and win public support with promises of jobs, even if the domestic company will not perform as well as its foreign competitor because it is less experienced. When this happens—and it happens routinely—those responsible put other interests ahead of achieving the project’s goal.

Big projects involve big money and big self-interest. And since “who gets what” is the core of politics, there is politics in every big project, whether public or private.

This fact helps explain why the California High-Speed Rail project became such a mess. There is no real high-speed rail in the United States, which suggests how much experience US companies have building it. When California started to seriously consider this type of rail, foreign companies with lots of experience—notably SNCF, the French National Railway Company—set up offices in California in the hope of landing a sole-source contract or at least being a major partner in the project’s development. But the state decided not to go that way. Instead, it hired a large number of mostly inexperienced, mostly US contractors and oversaw them with managers who also had little or no experience with high-speed rail.[2]

Ticket price estimates were provided in a range of scenarios, with a low of $68 and a high of $104. The total cost of the project was estimated at between $32.785 billion and $33.625 billion. See California High-Speed Rail Authority, Financial Plan (Sacramento: California High-Speed Rail Authority, 1999); California High-Speed Rail Authority, California High-Speed Train Business Plan (Sacramento: California High-Speed Rail Authority, 2008); Safe, Reliable High-Speed Passenger Train Bond Act for the 21st Century, AB-3034, 2008, https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=200720080AB3034.

How do visions become plans that deliver successful projects? Not like this.

PSYCHOLOGY

One driver is psychology. In any big project—meaning a project that is considered big, complex, ambitious, and risky by those in charge—people think, make judgments, and make decisions. And where there are thinking, judgment, and decisions, psychology is at play; for instance, in the guise of optimism.

Another driver is power. In any big project people and organizations compete for resources and jockey for position.

In every big project, there are blizzards of numbers generated at different stages by different parties. Finding the right ones—those that are valid and reliable—takes skill and work. Even trained scholars get it wrong.[4] And it doesn’t help that big projects involve money, reputations, and politics. Those who have much to lose will spin the numbers, so you cannot trust them. That’s not fraud. Or rather, it’s not usually fraud; it’s human nature. And with so many numbers to choose from, spinning is a lot easier than finding the truth.

This is a serious problem. Projects are promised to be completed by a certain time, at a certain cost, with certain benefits produced as a result—benefits being things such as revenues, savings, passengers moved, or megawatts of electricity generated. So how often do projects deliver as promised? That is the most straightforward question anyone could ask. But when I started to look around in the 1990s, I was stunned to discover that no one could answer it. The data simply hadn’t been collected and analyzed. That made no sense when trillions of dollars had been spent on the giant projects increasingly being called megaprojects—projects with budgets in excess of $1 billion.

Our database started with transportation projects: the Holland Tunnel in New York; the BART system in San Francisco; the Channel Tunnel in Europe; bridges, tunnels, highways, and railways built throughout the twentieth century. It took five years, but with my team I got 258 projects into the database, making it the biggest of its kind at the time.[5]

“Project estimates between 1910 and 1998 were short of the final costs an average of 28 percent,” according to The New York Times, summarizing our findings. “The biggest errors were in rail projects, which ran, on average, 45 percent over estimated costs [in inflation-adjusted dollars]. Bridges and tunnels were 34 percent over; roads, 20 percent. Nine of 10 estimates were low, the study said.”[7] The results for time and benefits were similarly bad.

Measured differently—from an earlier date and including inflation—the numbers are much worse.

I worked with McKinsey, and indeed we found that IT disasters were even worse than transportation disasters. We shifted our research to mega-events such as the Olympic Games and got the same result. Big dams? Same again. Rockets? Defense? Nuclear power? The same. Oil and gas projects? Mining? Same. Even something as common as building museums, concert halls, and skyscrapers fit the pattern. I was astonished.[10] And the problem wasn’t limited to any country or region; we found the same pattern all over the world.[11] The famously efficient Germans have some remarkable examples of bloat and waste, including Berlin’s new Brandenburg Airport, which was years delayed and billions of euros over budget. Even Switzerland, the nation of precise clocks and punctual trains, has its share of embarrassing projects; for instance, the Lötschberg Base Tunnel, which was completed late and with a cost overrun of 100 percent.

The pattern was so clear that I started calling it the “Iron Law of Megaprojects”: over budget, over time, under benefits, over and over again.[13]

The probability that any big project will blow its budget and schedule and deliver disappointing benefits is very high and very reliable.

The database that started with 258 projects now contains more than 16,000 projects from 20-plus different fields in 136 countries on all continents except Antarctica, and it continues to grow.

Only 8.5 percent of projects hit the mark on both cost and time. And a minuscule 0.5 percent nail all three: cost, time, and benefits.

Or to put that another way, 91.5 percent of projects go over budget, over schedule, or both. And 99.5 percent of projects go over budget, over schedule, under benefits, or some combination of these.

You have assumed that if you are hit with a cost overrun, it will be somewhere around the mean, or 62 percent. Why did you assume that? Because it would be true if cost overruns followed what statisticians call a “normal distribution,” the famous bell curve.

But the “normal” distribution isn’t the only sort of distribution that exists.  Other distributions are called “fat-tailed” because, compared with normal distributions, they contain far more extreme outcomes in their tails. Wealth, for example, is fat-tailed. At the time of writing, the wealthiest person in the world is 3,134,707 times wealthier than the average person.

If human height followed the same distribution as human wealth, the tallest person in the world would not be 1.6 times taller than the average person; he would be 3,311 miles (5,329 kilometers) tall, meaning that his head would be thirteen times farther into outer space than the International Space Station.  Another example of a fat-tailed distribution is the cost of natural disasters, which keeps rising in severity.

Are project outcomes distributed “normally,” or do they have fat tails? My database revealed that information technology projects have fat tails. To illustrate, 18% of IT projects have cost overruns above 50% in real terms. And for those projects the average overrun is 447%! That’s the average in the tail, meaning that many IT projects in the tail have even higher overruns than this. Information technology is truly fat-tailed!

So are nuclear storage projects. And the Olympic Games. And nuclear power plants. And big hydroelectric dams. As are airports, defense projects, big buildings, aerospace projects, tunnels, mining projects, high-speed rail, urban rail, conventional rail, bridges, oil projects, gas projects, and water projects. Most project types have fat tails. How “fat” their tails are—how many projects fall into the extremes and how extreme those extremes are—does vary. I’ve cited them in order, from fattest to least fat (but still fat)—or, if you prefer, from most at risk of terrifying overruns to less at risk (but still very much at risk).

Most project types are not only at risk of coming in late, going over budget, and generating fewer benefits than expected. They are at risk of going disastrously wrong. That means you may not wind up 10% over budget; you may go 100% over. Or 400%. Or worse. These are black swan outcomes, and the project types at risk of them are called “fat-tailed.” They include nuclear power plants, hydroelectric dams, information technology, tunnels, major buildings, aerospace, and many more.
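As a concrete illustration of the measures used in the table below (mean overrun, share of projects in the tail, and mean overrun within the tail), here is a minimal sketch in Python. The overrun figures are invented for illustration; they are not Flyvbjerg's data.

```python
# Illustration only: hypothetical cost overruns (%) for ten projects.
overruns = [-5, 0, 10, 25, 30, 60, 80, 150, 447, 900]

TAIL_THRESHOLD = 50  # "in the tail" = cost overrun of 50% or more

tail = [x for x in overruns if x >= TAIL_THRESHOLD]

mean_overrun = sum(overruns) / len(overruns)      # (a) mean cost overrun
share_in_tail = 100 * len(tail) / len(overruns)   # (b) % of projects in tail
mean_tail_overrun = sum(tail) / len(tail)         # (c) mean overrun in tail

print(f"(a) Mean overrun: {mean_overrun:.0f}%")
print(f"(b) Share of projects in tail: {share_in_tail:.0f}%")
print(f"(c) Mean overrun in tail: {mean_tail_overrun:.0f}%")
```

In this made-up sample, a few extreme projects drag the mean far above the typical outcome, which is exactly why the mean is a poor anchor for fat-tailed project types.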

The table shows cost overruns for 25 project types covering 16,000-plus projects. Overrun is measured as (a) mean cost overrun, (b) percentage of projects in the upper tail (defined as >= 50 percent overrun), and (c) mean overrun of projects in the tail, all in real terms. The numbers in the table are base rates for cost risk in project management. For example, if you’re planning to host the Olympic Games, your base rate (expected value) for cost overrun will be 157 percent, with a 76 percent risk of ending up in the tail with an expected overrun of 200 percent, and substantial further risk of overrun above this.

We see from the table that the base rates are very different for different project types, for both mean risk and tail risk. The highest mean risk is found for nuclear storage, at 238 percent, while the lowest is found for solar power, at 1 percent. The highest risk of ending up in the tail is held by the Olympics, at 76 percent, while the highest mean overrun in the tail is found for IT projects, at 447 percent. Differences in base rates must be taken into account when planning and managing projects but often are not. Frequently, empirical base rates are not considered at all.

Cost overrun was calculated not including inflation and baselined as late in the project cycle as possible, just before the go-ahead (final business case at final investment decision). This means that the numbers in the table are conservative. If inflation had been included and early business cases used as the baseline, cost overrun would be much higher, sometimes several times higher.
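As a sketch of how a planner might apply these base rates, the snippet below uplifts a hypothetical budget using the Olympics figures quoted above. The $5 billion budget and the simple uplift formula are my illustration, not a method spelled out in the book.

```python
# Sketch: de-bias a budget estimate with an empirical base rate.
# Base rates quoted in the text for the Olympics: 157% mean overrun,
# 76% chance of landing in the tail, 200% mean overrun once there.

def uplift(estimate: float, overrun_pct: float) -> float:
    """Return the estimate raised by the base-rate cost overrun."""
    return estimate * (1 + overrun_pct / 100)

budget = 5_000_000_000  # hypothetical $5 billion Olympics budget

print(f"Expected final cost: ${uplift(budget, 157):,.0f}")
print(f"If it lands in the tail (76% risk): ${uplift(budget, 200):,.0f} or more")
```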

What do fat-tailed outcomes look like? Boston’s “Big Dig”—replacing an elevated highway with a tunnel, with construction started in 1991—put the city through the wringer for 16 years and cost more than triple what it was supposed to.

NASA’s James Webb Space Telescope, which is now almost a million miles from Earth, was forecast to take 12 years but required 19 to complete, while its final cost of $8.8 billion was an astronomical—forgive me—450% over budget.

My data show that smaller projects are susceptible to fat tails, too. Moreover, fat-tailed distributions, not normal distributions, are typical within complex systems, both natural and human, and we all live and work within increasingly complex systems, which means increasingly interdependent systems. Cities and towns are complex systems.

Markets are complex systems. Energy production and distribution are complex systems. Manufacturing and transportation are complex systems. Debt is a complex system. So are viruses. And climate change. And globalization. On and on the list goes. If your project depends on other people and many parts, it is all but certain that your project is embedded in complex systems.

That describes projects of all types and scales, all the way down to home renovations.

Nassim Nicholas Taleb famously dubbed low-probability, high-consequence events “black swans.” Disastrous project outcomes such as these can end careers, sink companies, and inflict a variety of other carnage. They definitely qualify as black swans.

Ten examples of fat-tailed phenomena, and the measure on which each is fat-tailed:

1. Earthquakes: magnitude on the Richter scale
2. Cybercrime: financial loss
3. Wars: battle deaths per capita of the nations involved
4. Pandemics: number of deaths
5. IT procurement: % size of cost overrun
6. Floods: volume of water
7. Bankruptcies: % of firms per year per industry
8. Forest fires: size of area affected
9. Olympic Games: % size of cost overrun
10. Blackouts: number of customers affected

Major corporations on the hook for runaway projects may be able to keep things going by borrowing more and more money. Governments can also pile up debt. Or raise taxes.

Projects that fail tend to drag on, while those that succeed zip along and finish.

Think of the duration of a project as an open window. The longer the duration, the more open the window. The more open the window, the more opportunity for something to crash through and cause trouble, including a big, bad black swan. What could that black swan be? Almost anything. It could be something dramatic, like an election upset, a stock market collapse, or a pandemic.

Events such as these may be extremely unlikely on any given day, month, or year. But the more time that passes from the decision to do a project to its delivery, the greater the probability that one or more of them will happen.
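A back-of-the-envelope illustration of why duration drives risk: if a disruptive event has a small, independent probability p of striking in any given year, the chance of at least one strike over an n-year project is 1 - (1 - p)^n. The 2%-per-year figure below is invented for the example.

```python
# Sketch: cumulative probability of at least one rare disruptive event,
# assuming a constant, independent 2% chance per year (an invented number).
p_per_year = 0.02

for years in (1, 5, 10, 20):
    p_at_least_one = 1 - (1 - p_per_year) ** years
    print(f"{years:>2} years: {p_at_least_one:.0%} chance of at least one event")

# Stretching delivery from 5 years to 20 more than triples the exposure.
```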

Consider the gusts that, at just the wrong moment, pushed the bow of the Ever Given, a giant container ship, into a bank of the Suez Canal. The ship got stuck and couldn’t be budged for six days, blocking the canal, halting hundreds of ships, freezing an estimated $10 billion in trade each day, and sending shocks rippling through global supply chains. The people and projects that suffered as a result of those supply-chain troubles may never have realized it, but the cause of their trouble was ultimately strong winds in a faraway desert.[23]   A complex systems theorist might describe what happened by saying that the dynamic interdependencies among the parts of the system—the wind, the canal, the ship, and supply chains—created strong nonlinear responses and amplification.

Minor changes combined in a way to produce a disaster. In complex systems, that happens so often that the Yale sociologist Charles Perrow called such events “normal accidents.”[24]  Growing complexity and interdependency may make such outcomes more likely in today’s world. From the dramatic to the mundane to the trivial, change can rattle or ruin a project—if it occurs during the window of time when the project is ongoing. Solution? Close the window by speeding up the project and bringing it to a conclusion faster.

BUT CONSTRUCTING TOO FAST IS A PROBLEM

When the shipping magnate Mærsk Mc-Kinney Møller, who paid for the Copenhagen Opera House, asked its architect, Henning Larsen, how long construction would take, Larsen said five years. “You’ll get four!” Møller curtly responded. But the cost of that haste was terrible, and not only in terms of cost overruns. Larsen was so appalled by the completed building that he wrote a whole book to clear his reputation.

In 2021, after an overpass collapsed beneath a metro train in Mexico City, three independent investigations concluded that rushed, shoddy work was to blame.

The New York Times did its own investigation and concluded that the city’s insistence that construction be completed before the city’s powerful mayor was due to leave office had been a key contributing cause of the collapse. “The scramble led to a frenzied construction process that began before a master plan had been finalized and produced a metro line with defects from the start,” the Times concluded.

On project after project, rushed, superficial planning is followed by a quick start that makes everybody happy because shovels are in the ground. But inevitably, the project crashes into problems that were overlooked or not seriously analyzed and dealt with in planning. People run around trying to fix things. More stuff breaks. There is more running around.

PLANNING SHOULD BE SLOW

Most planning is done with computers, paper, and physical models, meaning that planning is relatively cheap and safe. Barring other time pressures, it’s fine for planning to be slow. Delivery is another matter. Delivery is when serious money is spent and the project becomes vulnerable as a consequence.  Planning is a safe harbor; delivery is venturing across the storm-tossed seas. This is a major reason why, at Pixar—the legendary studio that created Toy Story, Finding Nemo, The Incredibles, Soul, and so many other era-defining animated movies—directors are allowed to spend years in the development phase of a movie. A Pixar movie usually goes through the cycle from script to audience feedback eight times. The amount of change between versions one and two “is usually huge,” said the director Pete Docter. “Two to three is pretty big. And then, hopefully, as time goes on, there are enough elements that work that the changes become smaller and smaller.”

Cultivating ideas and innovations takes time. Spotting the implications of different options and approaches takes more time. Puzzling through complex problems, coming up with solutions, and putting them to the test take still more time. Planning requires thinking—and creative, critical, careful thinking is slow.

SOMERVELL CHOSE A HORRIBLE SPOT FOR THE PENTAGON. Why didn’t he realize that there was a much better site available before he sought and got approval for the original design? Why didn’t any of those who approved the plan spot the flaw? Because Somervell’s plan was so absurdly rushed and superficial that no one had even looked for other sites, much less considered their relative merits carefully. They had all treated the first suitable site identified as the only suitable site. Somervell fought back. He criticized the critics, insisted that only the Arlington Farm site was suitable, and drove his staff even harder to start digging a hole. Critics urged the president to intervene. He did, telling Somervell to switch sites. Incredibly, Somervell still wouldn’t quit.

Somervell presented his plan to the secretary of war, a congressional subcommittee, and the White House cabinet, including the president. Each time, so few probing questions were asked that the blatant flaws in the plan were not revealed. And each time, the plan was quickly approved. Somervell’s superiors simply didn’t do their jobs. The fact that the flaw was eventually exposed and the plan stopped—in part because one of the determined critics of the plan happened to be the president’s uncle—only underscores their dereliction of duty and the arbitrariness of the decision.

Purposes and goals are not carefully considered. Alternatives are not explored. Difficulties and risks are not investigated. Solutions are not found. Instead, shallow analysis is followed by quick lock-in to a decision that sweeps aside all the other forms the project could take. “Lock-in,” as scholars refer to it, is the notion that although there may be alternatives, most people and organizations behave as if they have no choice but to push on. I call such premature lock-in the “commitment fallacy.”

The only thing that is truly unusual about the Pentagon story is that a group of politically connected critics managed to expose the flaws in Somervell’s plan after it had been approved and get the project moved to another site—the one where the Pentagon is today.

One cause is what I call “strategic misrepresentation,” the tendency to deliberately and systematically distort or misstate information for strategic purposes. If you want to win a contract or get a project approved, superficial planning is handy because it glosses over major challenges, which keeps the estimated cost and time down, which wins contracts and gets projects approved.

But as certain as the law of gravity, challenges ignored during planning will eventually boomerang back as delays and cost overruns during delivery. By then the project will be too far along to turn back. Getting to that point of no return is the real goal of strategic misrepresentation. It is politics, resulting in failure by design.

A second cause is psychology; Kahneman emphasizes psychology where I emphasize politics. Which factor is most important depends on the character of decisions and projects. In Kahneman’s laboratory experiments, the stakes are low. There is typically no jockeying for position, no competition for scarce resources, no powerful individuals or organizations, no politics of any kind. The closer a project is to that situation, the more individual psychology will dominate.  As projects get bigger and decisions more consequential, the influence of money and power grows, and powerful individuals and organizations make the decisions.

We are a deeply optimistic species. That makes us an overconfident species. The large majority of car drivers say that their driving skills are above average. Unchecked, optimism leads to unrealistic forecasts, poorly defined goals, better options ignored, problems not spotted and dealt with, and no contingencies to counteract the inevitable surprises. Yet, as we will see in later chapters, optimism routinely displaces hard-nosed analysis in big projects, as in so much else people do.

One of the basic insights of modern psychology is that quick and intuitive “snap judgments” are the default operating system of human decision making—“System One,” to use the term coined by the psychologists Keith Stanovich and Richard West and made famous by Kahneman.

Conscious reasoning is a different system: System Two. A key difference between Systems One and Two is speed.  To generate snap judgments, the brain can’t be overly demanding about information. Instead, it proceeds on the basis of what Kahneman calls “WYSIATI” (What You See Is All There Is), meaning an assumption that whatever information we have on hand is all the information available to make the decision.

When we have a strong intuitive judgment we seldom subject it to slow, careful, critical scrutiny. We just go with it. Emotions such as anger, fear, love, or sadness may inspire rash conclusions as well. We all know—at least when we’re thinking coolly—that strong emotions are not necessarily logical or supported by evidence and are therefore an unreliable basis for judgment.

But the intuitive judgments generated by System One are not experienced as emotions. They simply “feel” true. With the truth in hand, it seems perfectly reasonable to act on it.  This is what makes optimism bias so potent: think of small-business owners who are sure they will avoid the fate of most small-business owners.

It’s important to recognize that quick, intuitive judgment often works remarkably well; that’s why it is our default. But planning a big project is just not suited to the sort of quick-and-intuitive decision making that comes naturally to us. All too often we apply it anyway—because it comes naturally to us.

Forty years ago, Kahneman and Tversky showed that people commonly underestimate the time required to complete tasks even when there is information available that suggests the estimate is unreasonable. They called this the “planning fallacy.”

The physicist and writer Douglas Hofstadter mockingly dubbed it “Hofstadter’s Law”: “It always takes longer than you expect, even when you take into account Hofstadter’s Law.”

You expect to get downtown on a Saturday night within 20 minutes, but it takes 40 minutes instead and now you’re late—just like last time. Incredibly, in one study, when people said there was a 99% probability that they would be done by a given time—that it was virtually certain—only 45% actually finished by then.

For us to be so consistently wrong, we must consistently ignore experience. And we do, for various reasons. When we think of the future, the past may simply not come to mind and we may not think to dig it up because it’s the present and future we are interested in. If it does surface, we may think “This time is different” and dismiss it. Or we may just be a little lazy and prefer not to bother, a preference well documented in Kahneman’s work. We all do this. Think of taking work home on the weekend. It’s a safe bet that what you will get done is less than what you planned.

Using a best-case scenario as the basis for an estimate is a really bad idea because the best case is seldom the most likely way the future will unfold. It’s often not even probable. There is an almost infinite number of things that could pop up during the weekend and eat into your work time: illness, an accident, insomnia, a phone call from an old friend, a family emergency, broken plumbing, and on down the list.

If this sort of casual forecasting seems miles removed from the cost and time estimates of big projects, think again. It’s common for such estimates to be made by breaking a project down into tasks, estimating the time and cost of each task, then adding up the totals.

Experience—in the shape of past outcomes of similar projects—is often ignored, and little or no careful consideration is given to the many ways the forecasts could be knocked off-kilter. These are effectively forecasts based on best-case scenarios.
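A toy simulation shows why summing per-task estimates yields a best-case total: individual task delays are skewed upward, so the total rarely comes in at or under plan. The task counts and the right-skewed distribution below are my invention for illustration.

```python
# Toy illustration: per-task estimates sum to a plan that real outcomes
# rarely beat, because task delays are skewed upward. Numbers invented.
import random

random.seed(42)

TASKS = 10
ESTIMATE_PER_TASK = 5.0                   # planned days per task
PLANNED_TOTAL = TASKS * ESTIMATE_PER_TASK

def simulate_project() -> float:
    total = 0.0
    for _ in range(TASKS):
        # Tasks rarely finish early but can run badly late (right-skewed).
        total += ESTIMATE_PER_TASK * random.lognormvariate(0.0, 0.5)
    return total

runs = [simulate_project() for _ in range(10_000)]
on_plan = sum(r <= PLANNED_TOTAL for r in runs) / len(runs)

print(f"Planned total: {PLANNED_TOTAL:.0f} days")
print(f"Average simulated total: {sum(runs) / len(runs):.0f} days")
print(f"Share of runs finishing on plan: {on_plan:.0%}")
```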

“Speed matters in business,” notes one of Amazon’s famous leadership principles, written by Jeff Bezos.

Notice, however, that Bezos carefully limited the bias for action to decisions that are “reversible.” That caveat makes it inapplicable to many decisions on big projects, because they are so difficult or expensive to reverse.

When this bias for action is generalized into the culture of an organization, the reversibility caveat is usually lost. What’s left is a slogan: “Just do it!” Managers feel more productive executing tasks than planning them.

People in power, which includes executives deciding about big projects, prefer to go with the quick flow of availability bias, as opposed to the slow effort of planning. Executives everywhere will recognize that attitude. It emerges from a desire to get going on a project, to see work happening, to have tangible evidence of progress.

Planning is working on the project. Progress in planning is progress on the project, often the most cost-effective progress you can achieve.

THE POLITICS OF UNDERESTIMATION

“In France, there is often a theoretical budget that is given because it is the sum that politically has been released to do something. In three out of four cases this sum does not correspond to anything in technical terms. This is a budget that was made because it could be accepted politically. The real price comes later. The politicians make the real price public where they want and when they want.” That’s a long way of saying that estimates aren’t intended to be accurate; they are intended to sell the project. In a word, they are lies.

“News that the Transbay Terminal is something like $300 million over budget should not come as a shock to anyone,” wrote Willie Brown, a former San Francisco mayor and California state assemblyman. “We always knew the initial estimate was under the real cost. Just like we never had a real cost for the Central Subway or the Bay Bridge or any other massive construction project. So get off it. In the world of civic projects, the first budget is really just a down payment. If people knew the real cost from the start, nothing would ever be approved.” With contracts signed, the next step is to get shovels in the ground. Fast. “The idea is to get going,” concluded Brown. “Start digging a hole and make it so big, there’s no alternative to coming up with the money to fill it in.”

Needless to say, Brown was retired from politics when he wrote that.

The editor of a major US architecture and design magazine even rejected an article I pitched about strategic misrepresentation on the grounds that lying about projects is so routine that his readers take it for granted, and thus the article wouldn’t be newsworthy.

Rushed, superficial planning is no problem for the creation of a lowball estimate. In fact, it can be enormously helpful. Problems and challenges overlooked are problems and challenges that won’t increase the estimate.

It also helps to express ironclad confidence in estimates, as Montreal mayor Jean Drapeau did when he promised that the 1976 Olympic Games would cost no more than what was budgeted. Future embarrassment awaits when you talk like that. But that’s for later. After you get what you want. And maybe after you’ve retired.

Consider the film director Elia Kazan—then retired, naturally—who explained how he had gotten Columbia Pictures to fund a movie he wanted to make in the late 1940s: “Get the work rolling, involve actors contractually, build sets, collect props and costumes, expose negative, and so get the studio in deep. Once money in some significant amount had been spent, it would be difficult for Harry [Cohn, president of Columbia Pictures] to do anything except scream and holler.”

Today, Michael Cimino’s Heaven’s Gate is famous in Hollywood, but not in a good way. It ultimately cost five times the initial estimate and opened a year late. The critical reaction was so savage that Cimino withdrew the film, reedited it, and released it again six months later. It bombed at the box office. And United Artists, the studio that made it, was wiped out.

One element that is central to any account is the “sunk cost fallacy.” Money, time, and effort previously spent to advance a project are lost in the past. You can’t get them back. They are “sunk.” Logically, when you decide whether to pour more resources into a project, you should consider only whether doing so makes sense now. Sunk costs should not be a factor in your thinking—but they probably will be because most people find it incredibly hard to push them out of their minds. And so people commonly “throw good money after bad,” to use the old phrase.

Imagine friends who have bought tickets to a game. On the day of the game, there is a big snowstorm. The higher the price the friends paid for the tickets—their sunk costs—the more likely they are to brave the blizzard and attempt driving to the game, investing more time, money, and risk. “Driving into the blizzard” is particularly common in politics. Sometimes that’s because politicians themselves fall for it. But even politicians who know better understand that the public is likely to be swayed by sunk costs, so sticking with a fallacy is politically safer than making a logical decision.

We should be careful not to see psychology and politics as separate forces; they can be mutually reinforcing and typically are in big projects. When they align in favor of a superficial plan and a quick start, that’s probably what will occur—with predictable consequences.

Good planning explores, imagines, analyzes, tests, and iterates. That takes time. Thus, slow is a consequence of doing planning right, not a cause. The cause of good planning is the range and depth of the questions it asks and the imagination and the rigor of the answers it delivers. Notice that I put “questions” before “answers.” It’s self-evident that questions come before answers. Or rather, it should be self-evident.

Projects routinely start with answers, not questions. What the architect Frank Gehry means by questioning isn’t doubting or criticizing, much less attacking or tearing down. He means asking questions with an open-minded desire to learn. It is, in a word, exploration. “You’re being curious,” he says. That’s the opposite of the natural inclination to think that What You See Is All There Is (WYSIATI). The whole conversation starts with a simple question: “Why are you doing this project?” Few projects start this way. All should. As brutal as the experience was, the process of building the Disney Concert Hall taught Gehry a host of lessons that he used in building the Guggenheim Bilbao and has used in projects ever since. Who has power, and who doesn’t? What are the interests and agendas at work? How can you bring on board those you need and keep them there? How do you maintain control of your design? These questions are as important as aesthetics and engineering to the success of a project.

Bezos noted that when a project is successfully completed and ready to be publicly announced, the conventional last step is to have the communications department write two documents. One is a very short press release (PR) that summarizes what the new product or service is and why it is valuable for customers. The other is a “frequently asked questions” (FAQ) document. To pitch a new project at Amazon, you must first write a PR and FAQ. The language of both documents must be plain. With language like that, flaws can’t be hidden behind jargon, slogans, or technical terms. Projects are pitched in a one-hour meeting with top executives. Amazon forbids PowerPoint presentations and all the usual tools of the corporate world, so copies of the PR/FAQ are handed around the table and everyone reads it, slowly and carefully, in silence. Then they share their initial thoughts, with the most senior people speaking last to avoid prematurely influencing others. Finally, the writer goes through the documents line by line, with anyone free to speak up at any time. This discussion of the details is the critical part of the meeting. The writer of the PR/FAQ then takes the feedback into account, writes another draft, and brings it back to the group. The same process unfolds again. And again. And again. It ensures that the concept that finally emerges is seen with equal clarity in the minds of everyone.

But no process is foolproof. When Jeff Bezos dreamed up the idea of a 3D phone display that would allow control by gestures in the air (the hand-waving again!), he fell in love with the concept. Later, he effectively co-authored the PR/FAQ that would launch the Amazon Fire Phone project. When the Fire Phone was released in June 2014 at a price of $200, it didn’t sell. The price was cut in half. Then the phone was free. Amazon couldn’t give it away. A year later, it was discontinued and hundreds of millions of dollars were written off. “It failed for all the reasons we said it was going to fail—that’s the crazy thing about it,” said a software engineer. Asking “Why?” can work only where people feel free to speak their minds and the decision makers really listen.

Robert Caro is the greatest living American biographer, famous for long, deeply researched, massively complex books. “What is this book about?” he asks himself. “What is its point?” He forces himself to “boil the book down to three paragraphs, or two, or one.” These paragraphs express the narrative theme with perfect simplicity. But don’t mistake simple for easy. Caro writes a draft and throws it out. Then another. And another, in seemingly endless iterations. This can go on for weeks as he compares his summary with his voluminous research. “The whole time, I’m saying to myself, ‘No, that’s not exactly what you’re trying to do in this book.’” When he finally has his precious paragraphs, he pins them to the wall behind his desk, where it is literally impossible for him to lose sight of his goal.

The construction of the Sydney Opera House was an outright fiasco. Setbacks piled up. Costs exploded. Scheduled to take five years to build, it took fourteen. The final bill was 1,400 percent over the estimate, one of the largest cost overruns for a building in history. A bad plan is one that applies neither experimentation nor experience. The plan for the Sydney Opera House was very bad. The architect Jørn Utzon’s competition entry was so sparse that it didn’t even satisfy all the technical requirements set by the organizers, but his simple sketches were indisputably brilliant—perhaps too brilliant. They mesmerized the jury and swept objections aside. The principal mystery lay in the curved shells at the heart of Utzon’s vision. They were beautiful on two-dimensional paper, but what three-dimensional constructs would enable them to stand? What materials would they be made of? How would they be built? None of that had been figured out. Utzon hadn’t even consulted engineers.

In bad planning, it is routine to leave problems, challenges, and unknowns to be figured out later. In many projects, the problem is never solved.

We all know that experience is valuable.

All else being equal, an experienced carpenter is a better carpenter than an inexperienced one. It should also be obvious that in the planning and delivery of big projects experience should be maximized whenever possible—by hiring the experienced carpenter, for example. It shouldn’t be necessary to say that. But it is necessary to say it—loudly and insistently—because, as we shall see, big projects routinely do not make maximum use of experience. In fact, experience is often aggressively marginalized. When the Canadian government decided it wanted to buy two icebreakers, it didn’t buy them from manufacturers in other countries that were more experienced. Rather than give the contracts to one company so that it could build one ship, learn from the experience, and deliver the second ship more efficiently, it gave one contract to one company and the other to another company. So why do it? One company is in a politically important region in Quebec, the other in a politically important region in British Columbia.

The desire to do what has never been done before can be admirable, for sure. But it can also be deeply problematic. First-mover advantage is greatly overstated. In a watershed study, researchers compared the fates of “pioneer” companies that had been the first to exploit a market and “settlers” that had followed the pioneers into the market. Drawing on data from five hundred brands in fifty product categories, they found that almost half of pioneers failed, compared to 8 percent of settlers. The surviving pioneers took 10 percent of their market, on average, compared to 28 percent for settlers.

Consider Seattle’s State Route 99 tunnel. A decade ago, when Seattle announced that it would bore a tunnel under its waterfront to replace a highway on the surface, it was far from a first, so there was plenty of relevant experience to draw on. But Seattle decided that its tunnel would be the biggest of its kind in the world, with enough room for two decks, each with two lanes of traffic. And to bore the world’s biggest tunnel, you need the world’s biggest boring machine. Such a machine, by definition, hadn’t been built and used before; Seattle’s would be the first. It cost $80 million, more than double the price of a standard boring machine. After boring a thousand feet of a tunnel that would be nine thousand feet long, the borer broke down and became the world’s biggest cork in a bottle. Extracting it from the tunnel, repairing it, and getting it back to work took two years and cost another $143 million.

Project planners should prefer highly experienced technology for the same reason house builders should prefer highly experienced carpenters. But we often don’t see technology this way. Too often, we assume that newer is better.

Olympic Games. Since 1960, the total cost of hosting the Games—six weeks of competitions, including the Paralympics, held once every four years—has grown spectacularly and is now in the tens of billions of dollars. Every Olympic Games since 1960 for which data are available, summer and winter, has gone over budget. The average overrun is 157 percent in real terms. Only nuclear waste storage has higher cost overruns of the twenty-plus project categories my team and I study. There is no permanent host for the Games. Instead, the International Olympic Committee (IOC) calls on cities to bid for each Games. The IOC prefers to move the Games from region to region, continent to continent, because that’s an excellent way to promote the Olympic brand and therefore serves the interests of the IOC—which is to say, it is excellent politics.

Testing is all the more critical for those very rare big projects—such as finding solutions to the climate crisis, getting people to Mars, or permanently storing nuclear waste—that must do what has never been done before because that is the heart of the project. They start with a deep deficit of experience.

How did Hong Kong’s MTR Corporation know that its delivery of a high-speed rail line was failing? By the schedule and budget slipping. But slippage was measured against MTR’s forecasts of how long and how costly the various stages of the project would be. If those forecasts were fundamentally unrealistic, a team expected to meet them would fail no matter what it did. The delivery would be doomed before it started. That should be obvious. But when things go wrong and people get desperate, the obvious is often overlooked.

Many small probabilities added together equal a large probability that at least some of those nasty surprises will actually come to pass. Your forecast did not account for that. That means your forecast, which seemed perfectly reasonable and reliable, was actually a highly unrealistic, things-will-go-according-to-plan, best-case scenario.

And things almost never go according to plan. On big projects, they don’t even come close.

You may think, as most people do, that the solution is to look more closely at your kitchen renovation, identify all the things that could possibly go wrong, and work them into your forecast. It’s not. Spotting ways that things can go wrong is important because it enables you to reduce, eliminate, or mitigate risks, as I’ll discuss below. But it won’t get you the foolproof forecast you want. No matter how many risks you can identify, there are always many more that you can’t. They are the “unknown unknowns.” Instead, you need to start over with a different perspective: see your project as one in a class of similar projects already done, as “one of those.” Use data from that class—about cost, time, benefits, or whatever else you want to forecast—as your anchor. Then adjust up or down, if necessary, to reflect how your specific project differs from the mean in the class. That’s it. It couldn’t be simpler.

Shoddy forecasting is the bread and butter of countless corporations. They do not want the people who authorize projects and pay the bills to have a more accurate picture of what projects will cost and how long they will take.

Kahneman writes about a time when he and some colleagues set out to compose a textbook together. They all agreed that it would take roughly two years. But when Kahneman asked the only member of the group with considerable experience in producing textbooks how long it usually takes, that expert said he couldn’t recall any project taking less than seven years. The textbook was ultimately finished eight years later.

In the kitchen renovation example, I took for granted that you have data on kitchen renovations that will allow you to calculate the average cost. But you probably don’t, and you’ll struggle to find them. So gather your own: look around for others who have done a kitchen renovation in, say, the past five to ten years. Ask friends, family, co-workers. Kitchen renovations are common, so let’s say you come up with fifteen such projects. Get the total cost of each, add them, divide by fifteen. That’s your anchor. Even simpler and even more accurate, you could get the percentage cost overrun for each renovation and average those, as in the sketch below.
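Here is a minimal sketch of that anchoring arithmetic, with invented figures standing in for the fifteen renovations you gathered:

```python
# Reference-class forecasting sketch. All figures invented for illustration.
# Each pair: (estimated cost, actual cost) of a past kitchen renovation.
past_renovations = [
    (30_000, 42_000), (25_000, 31_000), (40_000, 55_000), (20_000, 24_000),
    (35_000, 52_000), (28_000, 36_000), (45_000, 61_000), (22_000, 29_000),
    (33_000, 40_000), (27_000, 38_000), (50_000, 72_000), (24_000, 30_000),
    (38_000, 49_000), (26_000, 35_000), (31_000, 44_000),
]

# Average percentage overrun across the class: this is your anchor.
overruns = [(actual - est) / est for est, actual in past_renovations]
anchor = sum(overruns) / len(overruns)

my_estimate = 35_000
forecast = my_estimate * (1 + anchor)  # adjust your naive estimate upward

print(f"Class average overrun: {anchor:.0%}")
print(f"Naive estimate ${my_estimate:,} -> anchored forecast ${forecast:,.0f}")
```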

In 2004, I got a call from Anders Bergendahl, the Swedish official in charge of decommissioning nuclear power plants. He needed a reliable estimate of how much it would cost to decommission Sweden’s fleet of nuclear power plants, which would take decades, and safely store nuclear waste, which would last for centuries. Sweden’s nuclear industry would be asked to pay into a fund to cover those costs, so the government needed to know how much the industry should pay.  Sweden would be the first country to carry out a planned decommissioning of a fleet of reactors. “I can’t help,” I said.

But he had noticed a strange thing when he had compared the consultants’ report with an academic book in which my team and I had documented the cost risk for transportation infrastructure such as roads, bridges, and rail lines. According to our book, the cost risk was higher for that very ordinary sort of infrastructure.

Only a minority of the many project types in my database are “normally” distributed. The rest—from the Olympic Games to IT projects to nuclear power plants and big dams—have more extreme outcomes in the tails of their distributions. With these fat-tailed distributions, the mean is not representative of the distribution and therefore is not a good estimator for forecasts. What does this mean in practice? Ideally, you’d always want to know whether you’re facing a fat-tailed distribution or not.

My team and I were asked to do just that for High Speed 2, or HS2, a $100 billion–plus high-speed rail line that will run from London to northern England, if and when it is completed. Using our database, we first explored the cost distribution of comparable high-speed rail projects around the world. Sure enough, the distribution had a fat tail. High-speed rail is risky business, as we saw in Hong Kong. So we zeroed in on the projects in the tail and investigated what exactly had made each project blow up. The answers were surprisingly simple. The causes had not been “catastrophic” risks such as terrorism, strike actions, or other surprises; they had been standard risks that every project already has on its risk register. We identified roughly a dozen of those and found that projects were undone by the compound effects of these on a project already under stress. We found that projects seldom nosedive for a single reason.

One of the most common sources of trouble for high-speed rail was archaeology. In many parts of the world, and certainly in England, construction projects are built on layers of history. The moment a project starts digging, there’s a good chance it will uncover relics of the past.

We also found that early delays in procurement and political decisions correlated with black swan blowouts in the HS2 reference class. Interestingly, early delays are not seen as a big deal by most project leaders. They figure they have time to catch up. But that’s dead wrong. Early delays cause chain reactions throughout the delivery process; the later a delay comes, the less remaining work there is for it to disrupt.

We had ten more items on the list of causes of high-speed rail black swans, including late design changes, geological risks, contractor bankruptcy, fraud, and budget cuts.

Projects that run into trouble and end in miserable failure are soon forgotten because most people aren’t interested in miserable failures; projects that run into trouble but persevere and become smash successes are remembered and celebrated.

Someone could note, for example, that Steve Jobs, Bill Gates, and Mark Zuckerberg all dropped out of university and conclude that a key to success in information technology is to leave school. What’s missing, and what makes that odd conclusion possible, are the dropouts who went nowhere in information technology and are ignored. That’s survivorship bias.

I know this conclusion is not emotionally satisfying. How could it be? The rare exceptions that the economist Albert O. Hirschman incorrectly thought were typical are, almost by definition, fantastic projects that make irresistible stories. They follow the perfect Hero’s Journey, with a narrative arc from great promise to near ruin to an even greater accomplishment and celebration. We seem hardwired to love such stories. We crave them in all cultures and times. So there will always be authors telling those stories. Like Hirschman. Or Malcolm Gladwell.

Then there’s the Hoover Dam. A soaring structure that awes tourists as much today as it did when it was finished in 1936, the Hoover Dam was a gigantic project built in what was a remote, dusty, dangerous location. Yet it came in under budget and ahead of schedule. In the annals of big projects, it is a legend. In large part, that triumph was owed to Frank Crowe, the engineer who managed the project. Before tackling the Hoover Dam, Crowe had spent a long career building dams across the western United States and over those many years had developed a large and loyal team that followed him from project to project. The experience contained within that team was profound. So were the mutual trust, respect, and understanding.

The value of experienced teams cannot be overstated, yet it is routinely disregarded. A Canadian hydroelectric dam I consulted on is one of countless examples. It went ahead under the direction of executives who had zero experience with hydroelectric dams. Why? Because executives with experience were difficult to find. How hard can it be to deliver a big project? the owners pondered. The oil and gas industry delivers big projects. A hydroelectric dam is a big project. Ergo, executives from oil and gas companies should be able to deliver a dam.

Or so the owners reasoned—and hired oil and gas executives to build the dam. The reader will not be surprised to learn that in sharp contrast to the Hoover Dam, this project turned into a fiasco that threatened the economy of a whole province.

The word deadline comes from the American Civil War, when prison camps marked a boundary line and any prisoner who crossed it was shot.

Digital simulation enabled the second strategy: a radically different approach to construction. Instead of sending materials to a worksite to be measured, cut, shaped, and welded into buildings—the conventional way since the building of the pyramids—the materials were sent to factories, which used the detailed, precise digital specifications sent to them to manufacture components. Then the components were sent to the worksite to be assembled. To the untrained eye, T5 would have looked like a conventional construction site, but it wasn’t; it was an assembly site. The importance of this difference cannot be overstated, and every large construction site will need to follow suit if construction is going to make it into the twenty-first century.

By giving companies only positive incentives to perform well, including bonuses for meeting and beating benchmarks, the contract ensured that the interests of the many different companies working on the project were not pitted against one another.

There are five project types that are not fat-tailed: solar power, wind power, fossil thermal power (power plants that generate electricity by burning fossil fuels), electricity transmission, and roads. THIS MEANS THEY ARE TYPICALLY DELIVERED ON TIME AND ON BUDGET.

The problem isn’t nuclear power; many other project types have track records only somewhat less bad.

Manufacturing in a factory and assembling on-site is far more efficient than traditional construction because a factory is a controlled environment designed to be as efficient, linear, and predictable as possible.

Bad weather routinely wreaks havoc on outdoor construction, while production in a factory proceeds regardless of the elements.

Modularity can do astonishing things. When the Covid pandemic first emerged in China in January 2020, a company that makes modular housing modified an existing room design and cranked out units in a factory. Nine days later, a 1,000-bed hospital with 1,400 staff opened in Wuhan, ground zero of the outbreak.

So what sets the fortunate five apart? They are all modular to a considerable degree, some extremely so. Solar power? It’s born modular, with the solar cell as the basic building block. In a factory, put multiple solar cells together onto a panel. Ship and install the panel. Install another, and wire them together. Add another panel. And another until you have an array. Keep adding arrays until you are generating as much electricity as you like.
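
The "keep adding" logic is plain arithmetic, which is exactly what makes it forecastable. A back-of-envelope sketch; the panel wattage, array size, and target are all assumed values, not figures from the book:

# Back-of-envelope modular scaling; every figure is an assumption.
PANEL_WATTS = 400         # one factory-made panel
PANELS_PER_ARRAY = 2_500  # one array = 1 MW of panels
TARGET_MW = 200           # desired size of the solar farm

panels = TARGET_MW * 1_000_000 // PANEL_WATTS
arrays = -(-panels // PANELS_PER_ARRAY)  # ceiling division
print(f"{panels:,} panels grouped into {arrays} one-megawatt arrays")

Because the farm is just the same small unit repeated, cost and schedule estimates scale linearly, with few nasty surprises.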

Wind power? Also extremely modular. Modern wind turbines consist of four basic factory-built elements assembled on-site: a base, a tower, the "head" (nacelle) that houses the generator, and the blades that spin. Snap them together, and you have one turbine. Repeat this process again and again, and you have a wind farm.

Fossil thermal power? Look inside a coal-burning power plant, say, and you'll find that it's pretty simple, consisting of a few basic factory-built elements assembled to make a big pot of water boil and drive a turbine. It's modular, much as a modern truck is modular. The same goes for oil- and gas-fired plants.

Electricity transmission? Parts made in a factory are assembled into towers, and factory-made wires are strung between them. Repeat. Or manufactured cables are buried in the ground, section by section. Repeat again.

Roads? A multibillion-dollar freeway consists of several multimillion-dollar freeway sections strung together. Repeat, repeat, repeat.

At one extreme—the terrifying place where no one wants to be—we find storage of nuclear waste, hosting the Olympic Games, construction of nuclear power plants, building information technology systems, and constructing hydroelectric dams. They are all classic “one huge thing” projects.

Wind power must grow eleven-fold. Solar power must grow a mind-boggling twenty-fold.[37]

EMPIRE STATE BUILDING: WHY IT WAS A SUCCESS

The Empire State Building would be slim, straight, and stretching higher into the sky than any other building on the planet.

In a 1931 publication, the corporation boasted that before any work had been done on the construction site "the architects knew exactly how many beams and of what lengths, even how many rivets and bolts would be needed. They knew how many windows Empire State would have, how many blocks of limestone, and of what shapes and sizes, how many tons of aluminum and stainless steel, tons of cement, tons of mortar. Even before it was begun, Empire State was finished entirely—on paper."

The Empire State Building was officially opened by President Herbert Hoover, exactly as scheduled.

The Empire State Building had been estimated to cost $50 million. It actually cost $41 million ($679 million in 2021): $9 million, or 18 percent, under budget. I call the pattern followed by the Empire State Building and other successful projects "Think slow, act fast."

Another factor was Lamb's insistence that the project use existing, proven technologies "in order to avoid the uncertainty of innovative methods." That included avoiding "hand work" whenever possible and replacing it with parts designed "so that they could be duplicated in tremendous quantity with almost perfect accuracy," Lamb wrote, "and brought to the building and put together like an automobile on the assembly line." Lamb minimized variety and complexity, including in floor designs, which were kept as similar as possible. As a result, construction crews could learn by repeating. In other words, the workers didn't build one 102-story building; they built 102 one-story buildings.

The general contractors who erected the giant were Starrett Brothers and Eken, "a company with a proven track record of efficiency and speed in skyscraper construction," noted historian Carol Willis.

it wasn’t the first time Lamb had designed the building. In Winston-Salem, North Carolina, the Reynolds Building, once the headquarters of the R. J. Reynolds Tobacco Company, is an elegant art deco structure that looks remarkably like a smaller, shorter Empire State Building.

DENMARK TRAIN TUNNEL UNDERWATER FAILURE

The government announced the Great Belt project. It comprised two bridges, one of which would be the world's longest suspension bridge, to connect two of the country's bigger islands, including the one Copenhagen sits on. There would also be an underwater tunnel for trains, the second longest in Europe, which would be built by a Danish-led contractor. That was notable because Danes had little experience boring tunnels. Things went wrong from the start. First there was a yearlong delay in delivering the four giant tunnel-boring machines. Then, as soon as the machines were in the ground, they proved to be flawed and needed redesign, delaying work another five months. Finally, the big machines started slowly chewing their way under the ocean floor.

Up above, the bridge builders brought in a massive oceangoing dredger to prepare their worksite.[1] To do its work, the dredger stabilized itself by lowering giant support legs into the seafloor. When the work was done, the legs were lifted, leaving deep holes. By accident, one of the holes happened to be on the projected path of the tunnel. Neither the bridge builders nor the tunnelers saw the danger.

When one of the boring machines reached that stretch, seawater poured in. The machine and the whole tunnel flooded, and so did a parallel tunnel and the boring machine within it.

The salt water in the tunnel was like acid to the machines' metal and electronics. Engineers on the project told me at the time that it would be cheaper to abandon the tunnel and start again than to pull out the borers, drain the tunnel, and repair the damage.

But politicians overrode them because an abandoned tunnel would be too embarrassing. Inevitably, the whole project came in very late and way over budget. The actual overrun was 55 percent, and 120 percent for the tunnel alone (in real terms, measured from the final investment decision).

 

Posted in An Index of Best Energyskeptic Posts, Infrastructure & Fast Crash, Infrastructure Books, Interdependencies, Supply Chains | Tagged , , , , | Comments Off on Why large projects fail. Especially Renewable Energy

Menhaden: the fish at the bottom of the ocean food web

Preface. Menhaden are being overfished because it is very profitable and the industry has been powerful enough to prevent reform for over 20 years.

Even more money could be made by more people if there were a ten-year moratorium on catching them, because they are the main fish at the bottom of the food chain converting phytoplankton into food for other species on the East Coast of the U.S. If their numbers could be brought back to historic levels, the populations of bluefin tuna, striped bass, redfish, bluefish, and humpback whales, to name just a few of the 79 species that feed on menhaden, would rebound, creating many commercial and sport-fishing jobs.

Continue reading

Posted in Biodiversity Loss, Fisheries, Jobs and Skills, Starvation | Tagged , , , , | 6 Comments