Summary of German Armed Forces Peak Oil Study

[This is a summary I first published in 2011. It’s important, so I’ve re-posted it today. According to Der Spiegel, this study was leaked and not meant for publication. The document concludes that the public must be made aware of peak oil implications so that preparation for the peak is accelerated immediately. My comments are inside brackets. This is a small subset of the overall study; you may want to read it in its entirety.

Looks like they predicted the election of Trump:

“…people will experience a lowering of living standards due to an increase in unemployment and the cost of oil for their vehicles. Studies reveal that only continuous improvement of individual living conditions provides the basis for tolerant and open societies.  Setbacks in economic growth can lead to an increase in the number of votes for extremist and nationalistic parties.”

If you’re wondering about the silence from the U.S. government and media on peak oil, it’s probably because they’re afraid acknowledging it would cause stock markets to crash worldwide. Congress is certainly aware of the problem (see the senate and house hearings here). If preparation is being made, it’s being done in top-secret departments of homeland security and the military (e.g., rationing, refugee camps to stop mass migrations from overwhelming areas that are still ecologically viable, etc.).

Alice Friedemann, www.energyskeptic.com, Women in ecology, author of “Life After Fossil Fuels: A Reality Check on Alternative Energy” (2021), “When Trucks Stop Running: Energy and the Future of Transportation” (2015), “Barriers to Making Algal Biofuels,” and “Crunch! Whole Grain Artisan Chips and Crackers.”  Podcasts: Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity]

***

BTC (Bundeswehr Transformation Centre, Future Analysis Branch). November 2010. Armed Forces, Capabilities and Technologies in the 21st Century: Environmental Dimensions of Security. Sub-study 1, Peak Oil: Security Policy Implications of Scarce Resources. 112 pages. English version.

This study is intended to explain the potential security policy consequences, risks, and cascade effects that may arise from peak oil. It addresses finite resources and their potential security policy implications, as well as climate change and demography.

In the past, numerous conflicts were linked to raw material deposits, and the literature on this subject is extensive. Usually, resource conflicts have been restricted to specific regions and have been of limited relevance to international security policy. With global peak oil, this could change: 1) a global lack of oil could represent a systemic risk, because oil’s versatility as a source of energy and as a chemical raw material means that virtually every social subsystem would be affected by a shortage; and 2) the concentration of oil deposits and transport infrastructure in the “Strategic Ellipse” could result in a shift in geopolitical power.

The Strategic Ellipse contains 74% of the world’s oil reserves and 70% of the world’s natural gas reserves.

It is a fact that oil is finite and that production will peak. Since this study is mainly focused on understanding cause-effect relations following such a peak, it is not necessary to specify a precise point in time. Some institutions claim that peak oil will occur as early as around 2010. [My comment: conventional oil peaked in 2005 (see peak oil posts and the Science magazine article “Peak Oil Production May Already Be Here,” Mar 25, 2011).]

Today approximately 90% of all industrially manufactured products depend on the availability of oil. Oil is not only the source material for producing fuels and lubricants but is also used as hydrocarbon for most plastic. It is one of the most important raw materials in the production of many different products such as pharmaceuticals, dyes and textiles. As the source material for various types of fuels, oil is a basic prerequisite for the transportation of large quantities of goods over long distances. Alongside information technology, container ships, trucks and aircraft form the backbone of globalization. Oil-based mobility also significantly influences our lifestyle, both regionally and locally. For example, living in suburbs several miles from their workplace would be impossible for many people without a car.

A considerable increase in oil price would pose a systemic risk because the availability of relatively affordable oil is crucial for the functioning of large parts of the economic and social systems. For some subsystems, such as worldwide goods shipping or individual transportation, the importance of oil is obvious.

Many challenges arise when oil supplies run short. Consider North Korea. After the Korean War, the USSR helped North Korea develop modern agriculture. When the USSR collapsed, the flow of cheap oil suddenly dried up. Agricultural machines had to be put out of service. A return to traditional cultivation methods was aggravated by over-fertilized land, and the proportion of people employed in agriculture was raised from 25% to 35% to compensate for the loss of 80% of the agricultural machines. Despite this, harvests dropped by 60% between 1989 and 1998.

[Pages 13 and 14 describe how Great Britain, the USA, Russia, India, and China have declared energy as essential to their prosperity, competitiveness, and affecting national security, and the actions they are taking to secure oil.]

It can therefore be stated that against the backdrop of the ever-decreasing availability of fossil fuels, the challenge of ensuring long-term energy supply is reflected in national strategies worldwide, leaving no doubt as to the vital importance attached to this issue. In this context, the fact that energy supply aspects occupy an increasingly important place in the national security strategy documents of various countries is likely to have consequences on the nature of future energy relations. These strategy documents emphasize a peaceful method of securing energy supplies.

But competition and conflict over scarce oil resources are likely to arise at some point.

The German Government defines energy security as a “secure, sustainable and competitive supply of energy”. Energy crises are lasting imbalances between supply and demand, which provoke price jumps and have negative effects on the economies concerned. Energy security policy therefore aims at preventing energy supply shortages or supply disruptions.

90% of all oil imports to Germany come from countries that had already exceeded their national production peaks during the study’s period of review. It is very likely peak oil has already occurred for Russia, Norway, and Great Britain. These three countries supply 60% of Germany’s total oil import volume. Great Britain, from which Germany receives 10% of its oil imports, is already a net oil importer and can only export oil to Germany after having previously imported it from third countries.

About 90% of all oil-producing countries have exceeded their peak or are likely to reach it by 2015. Brazil and Angola may be able to increase production. Saudi Arabia is likely to be in decline. This is extremely relevant because the point at which global peak oil occurs is likely to be determined primarily by Saudi Arabia’s production potential. In a worst-case scenario, even a dominant oil power such as Saudi Arabia could cease to function as a swing producer.

Scenarios

Oil producing nations are likely to use their power to aggressively enforce political, economic, or ideological objectives for their own interests, such as Russia’s gas disputes with Ukraine.

Countries are also likely to stop exporting as much oil to preserve oil resources for future generations. The more obvious the actual scarcity of oil, the more expensive oil would become and thus the greater the profits of producer countries. The calculus of “political peaking” would become all the more understandable. Political peaking would further aggravate peak oil-induced supply shortage and related price increases.

If oil is sold to a producing nation’s citizens at below-market prices to improve their lives, export quantities will dwindle even further.

In the past, countries have relied on many sources of oil, but as oil increasingly becomes available only from the Strategic Ellipse, diversifying sources will no longer be possible, and the nations within the ellipse will become ever more important.

Nations will woo oil-producing nations by acting as trade partners, investors, suppliers of technology and weapons, lenders, “development aid workers”, etc. China does not leave energy supply to the markets, but already tries to place it under government control. It also supports the foreign operations of its national oil companies through regional, broad-based, and intensified energy diplomacy. The Chinese commitment in Africa is an example of the country’s attempts to position itself for sustainably securing its national resource supply. In addition, Chinese oil companies have been trying for several years to obtain licences for a share of reserves in the US, and lately they have succeeded: on 12 October 2010, for example, the Chinese oil giant CNOOC bought into the Texas reserves of Chesapeake Energy for several billion dollars.

Reshaping supply relationships after global peak oil.  In light of peak oil, the share of oil traded on the global, freely accessible oil market might decrease in favor of oil traded via bilateral agreements, replacing a free market with private contracts.  Oil producing nations might demand nuclear material in exchange for oil.

Increased importance of oil infrastructure.  When peak oil is exceeded, transport infrastructure will become even more important. Global transportation routes, via which oil is distributed by supertanker or long pipeline sections, are difficult to protect and provide easy targets for interrupting the oil supply, increasing the incentive to sabotage energy infrastructure. Since a major part of the oil reserves remaining after peak oil is concentrated in the Strategic Ellipse, the oil infrastructure in this region will become increasingly important to many countries. Interrupting these energy infrastructures would be an easy and worthwhile target: a comparatively huge amount of damage with global political and economic implications could be caused with very few resources and at low risk. The series of attacks in Nigeria already shows these tendencies. Gas and electricity infrastructure, as a partial substitute for oil, will require increased protection.

Environmental problems, war.  Conventional deposits cause much less environmental damage than non-conventional oil resources such as tar sands and deep-sea oil. Accidents in the Arctic could have severe consequences for its complex ecosystem.

War.  Ownership of the Arctic isn’t settled, which could lead to conflict. The strategic significance of securing resources and the exploration of new and controversial oil-producing areas may increase the probability of a further build-up of military arsenals to enforce claims. Efforts to expand military capacities for the protection of nations’ own claims on the Arctic can already be seen today. Similar considerations apply to international waters. The growing possibility of deep-sea resource exploration would increasingly bring unsettled territorial claims to the fore as a potential cause of conflict, as can currently be seen in the territorial disputes over the South China Sea. With the exploitation of high-sea deposits, the significance of blue-water navies would also increase.

Natural gas (NG) as an extension of the oil era

NG is seen as a substitute for oil in many fields and is expected to last longer than oil.  [Note: in the USA, reserves of NG have been greatly exaggerated.]

Natural gas will therefore be one of the most important fossil fuels of the future and will have to replace oil to a considerable extent. NG cannot simply be shipped; it must be transported as gas via pipeline or, after compression or liquefaction (liquefied natural gas, LNG), with special-purpose tankers. Pipeline systems, which currently carry the major part of natural gas produced to consumers, are regionally restricted. Instead of one world market for natural gas, there are several regional markets with limited numbers of suppliers.

Pipelines that carry NG span countries as well as political, economic, and cultural regions, which is likely to lead to conflict over routes and construction, and to a need for increased protection of the pipelines.

Nuclear proliferation.  Nations may electrify their energy infrastructure to make use of more nuclear plants, but that will increase the likelihood of accidents, which could have dramatic ecological consequences globally. This is even more of a problem in nations with weak institutions and limited technological competence. Uranium mining is environmentally destructive, a great deal of water is required to cool nuclear power plants, and dismantling old plants and disposing of waste are further negative factors. Expansion of nuclear energy increases the odds of nuclear weapons getting into the wrong hands. Oil producers are likely to demand nuclear material in exchange for oil, and terrorist groups or organized crime would have access to more nuclear waste and material.

Biofuels

This section basically says that peak oil is going to cause food prices to rise, leading to food crises and instability.  Food prices will go up even more if crops are grown to create biofuels because that displaces food crops.

Excessive biomass production without sustainable agricultural solutions would exacerbate the impact of climate change. A more intensive agriculture, especially with high yield crops grown as monocultures, will have additional negative effects especially on those regions that are already facing acute water shortages.  The degradation of soil due to erosion, compression, salinization and desertification may progress considerably. With the destruction of intact eco-systems and the loss of biodiversity the natural regeneration potential of the biosphere would decrease on a local and global level. Without sustainable solutions the rapidly growing production of renewable energy raw materials could intensify economic and ecological crises in many regions of the world.

[Growing plants for fuel or electricity is never going to work, it doesn’t scale up, has low to negative EROI, and so on:  Peak Soil: Why Cellulosic and other Biofuels are Not Sustainable and a Threat to America’s National Security ]

Coal.  Coal will last longer than oil but is also finite. If technologies for climate-friendly coal power generation (carbon capture and storage (CCS), etc.) are not used globally, the CO2 concentration in the atmosphere will increase considerably and accelerate climate change. The same is true for coal liquefaction, which with the current state of the art is energetically inefficient and harmful to the climate. The setup of such plants will involve high economic and political costs; complex planning and approval procedures and negative impacts on the environment may pose potential obstacles. In the face of a global oil shortage, nations may resort to low-quality, high-polluting coal. Coal liquefaction is a “last resort” for supplying industry, transport systems, and armed forces with fuel (as in Germany in WWII).

Other forms of energy.  Nations will try to develop other forms of energy, but hardly any single region has conditions favorable to developing sun, wind, geothermal, and biomass all at once. Developing these alternatives depends on building a complex electrical infrastructure. This infrastructure also needs to be protected and must operate across the borders of nations and different cultural groups, so building it is far more than a technological or economic challenge; it requires a long-term stable economic and political environment.

Societal risks of peak oil

  1. Economic collapse
  2. Transportation restricted
  3. Erosion of confidence in state institutions

There are not sufficient alternatives to oil for transportation, so when oil grows short, there are likely to be extreme restrictions for private vehicles, especially in suburbs, resulting in a “mobility crisis” that would make the economic crisis much worse.

Scarce or expensive oil would drive up the cost of all goods.  The current international movement of goods has largely been made possible by technological progress in freight traffic (container ships, trucks, cooling systems), all based on fossil fuels. Switching all modes of transport to alternative energy sources would be far more complex than simply continuing with today’s common means of transportation and technology, so mobility is likely to remain fossil-fuel-based for a long time. Oil shortages could lead to bottlenecks in delivering food and other life-sustaining essential goods.

After peak oil, there would be significant differences from past food shortages:

  • The crisis would concern all food traded over long distances, not just single regions or products. Regions that are structurally already at risk today would be particularly affected (see figure 6).
  • Crop yields also depend on oil. A lack of machines, oil-based fertilizers, and other yield-increasing chemicals would therefore have a negative effect on crop production.
  • The increase in food prices would be long-term.
  • Competition between the use of farmland for food production and for producing biofuels could worsen food shortages and crises.

Even countries with good food production could experience social unrest if food is distributed inefficiently or unfairly.

Economic collapse

Oil is used directly or indirectly in the production of 90% of industrial goods, so a shortage of oil would affect the entire economic system.  All prices would go up.  Unemployment would go up.

[The paper mentions that other types of jobs would become available, but going from making cars to buggies and breeding enough horses and mules to pull the buggies will take a lot of time.  Rickshaws would be a better analogy, since we can’t go back to horses given we don’t have enough land to feed them].

There are no post-fossil societies to look at for ideas about how to succeed in making this transition, this is a completely novel situation.  [Not true, look at what’s happened in North Korea and Cuba]

Loss of confidence.  After oil shortages, people will experience a lowering of living standards due to an increase in unemployment and in the cost of oil for their vehicles. Studies reveal that only continuous improvement of individual living conditions provides the basis for tolerant and open societies. Setbacks in economic growth can lead to an increase in the number of votes for extremist and nationalistic parties.

3.2 The Systemic Risk of Exceeding the Tipping Point.

The transmission channels of an oil price shock involve diverse and interdependent economic structures and infrastructures, some of which are of vital importance, so its consequences are not entirely predictable. Initially, the extent of these consequences will be measurable, though not exclusively, as reduced growth of the global economy.

Tipping points are characterized by the fact that when they are reached, a system no longer responds to changes proportionally, but chaotically. The term “tipping processes” is used in the field of climate research: at such a point, a minor change in conditions has a drastic effect on an ecosystem. At first glance, it seems obvious that a phase of slowly declining oil production would lead to an equally slowly declining economic output, and that peak oil would bring about a decline in global prosperity for a certain length of time. Economies, however, move within a narrow band of relative stability. Within this band, economic fluctuations and other shocks are possible, but the functional principles remain unchanged and provide for new equilibria within the system. Outside this band, the system responds chaotically.
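
[To make the study’s “band of stability” idea concrete, here is a toy numerical sketch of my own, not from the study: output falls roughly in proportion to oil supply inside the band, but once an assumed tipping point is crossed, feedbacks make the response disproportionate. The 0.5 elasticity and 10% threshold are hypothetical, chosen only for illustration.]

```python
# Toy illustration only: a system that responds proportionally inside a
# "band of stability" and disproportionately beyond a tipping point.

def output_decline(oil_decline_pct, tipping_point_pct=10.0):
    """Return % decline in economic output for a % decline in oil supply."""
    proportional = 0.5 * oil_decline_pct              # linear regime
    if oil_decline_pct <= tipping_point_pct:
        return proportional
    overshoot = oil_decline_pct - tipping_point_pct   # past the tipping point
    return min(100.0, proportional + overshoot ** 2)  # nonlinear regime

for decline in (2, 5, 10, 12, 15, 20):
    print(f"oil supply -{decline:>2}% -> output {-output_decline(decline):.1f}%")
```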

How this might occur:

  1. After peak oil, alternative fuels will not compensate for the shortfall, leading to a loss of confidence in the markets.
  2. Increasing oil prices will reduce consumption and economic output, leading to recession.
  3. Higher transportation costs will raise the prices of all traded goods. Trade volumes would decrease, and some nations would no longer be able to afford to import food.
  4. National budgets would be devoted to securing food and dealing with unemployment, leaving little funding to invest in oil substitutes and green technology, while the recession kept cutting tax revenues.

In the medium term, the global economic system and all market-oriented economies would collapse.

  1. Corporations would realize that the contraction will go on for a long time.
  2. Tipping point: In an economy shrinking over an indefinite period, savings would not be invested because companies would not be making any profit.  For an indefinite period, companies would no longer be in a position to pay borrowing costs or to distribute profits to investors. The banking system, stock exchanges and financial markets could collapse altogether. In theory, there are industries that could profit from the situation. The oil industry or companies in the green-tech sector would certainly have an increasing demand for capital. Given the companies’ environment, in particular the dependence of these industries on (international) value chains and infrastructures, as well as the dramatically changing conditions on the demand side, it would be implausible to expect “islands of stability” which continue to exist on a “micro level”.
  3. Financial markets are the backbone of global economy and an integral component of modern societies. All other subsystems have developed hand in hand with the economic system.  A completely new system state would materialize.

Other likely consequences

Banks left with no commercial basis. Banks would not be able to pay interest on deposits as they would not be able to find creditworthy companies, institutions or individuals. As a result, they would lose the basis for their business.

Loss of confidence in currencies. Belief in the value-preserving function of money would dwindle. This would initially result in hyperinflation and black markets, followed by a barter economy at the local level.

Collapse of value chains. The division of labor and its processes are based on the possibility of trade in intermediate products. It would be extremely difficult to conclude the necessary transactions without a functioning monetary system.

Collapse of unpegged currency systems. If currencies lose their value in their country of origin, they can no longer be exchanged for foreign currencies. International value-added chains would collapse as well.

Mass unemployment. Modern societies are organized on a division-of-labor basis and have become increasingly differentiated in the course of their histories. Many professions are solely concerned with managing this high level of complexity and no longer have anything to do with the immediate production of consumer goods. The reduction in the complexity of economies that is implied here would result in a dramatic increase in unemployment in all modern societies.

National bankruptcies. In the situation described, state revenues would evaporate. (New) debt options would be very limited, and the next step would be national bankruptcies.

Collapse of critical infrastructures. Neither material nor financial resources would suffice to maintain existing infrastructures. Infrastructure interdependencies, both internal and external with regard to other subsystems, would worsen the situation.

Famines. Ultimately, production and distribution of food in sufficient quantities would become challenging.

The developments shown here make it clear that it is essential to secure the supply of energy to the economic cycle in sufficient quantities to enable positive economic growth. A contraction in economic activity over an indefinite period of time represents a highly unstable state that will cause the system to collapse. It is hardly possible to estimate the security risks that such a development would involve.

Economic collapse in one country could “infect” other nations, given the highly interdependent relations between nations in the global economy.

In complex systems, less energy can lead to collapse.

War.  Oil shortages are likely to be seen by importing nations as a national security issue leading to conflict, which could also emerge over renewable energy resources.

Effects on armed forces

In the long run, not only all societies and economies worldwide but armed forces as well will be faced with the various and difficult challenges of transformation towards a “post-fossil” age.

Implications for Germany: A markedly reduced mobility of the German Armed Forces would have various consequences, not only for the available equipment and training, but also for their (global) power projection and intervention capabilities. Given the size and complexity of many transport and weapon systems, as well as the high standards set for qualities like robustness in operation, alternative energy and propulsion systems would hardly be available to the necessary extent in the short term. One of the first consequences to be expected would be further cutbacks in the use of large weapon systems for training purposes in all services, raising the need for more “virtualized” training. However, effects on current and planned missions would most likely be even more severe. Deployment to the theater of operations, the operation of bases, and the mission itself are considerably more energy- and above all fuel-intensive than the mere upkeep of armed forces. Rapid operations of highly mobile forces, which are regularly deployed by air, would be particularly affected, as would air force missions, placing severe restrictions on these types of operations. Alternative solutions for deployment, such as increased rail transport or markedly more efficient transport of equipment, supplies, or even personnel by ship, are unlikely to provide full substitution, despite already being common practice. Especially for movement from railway stations and seaports into the operations area (“the last mile”), and for deployments within theaters lacking access to sea or railway, combustion-engine-based propulsion systems will not be easily substitutable. The same applies to tactical mobility.

In order to prevent a restriction of capabilities and deployment options of the Bundeswehr, alternative solutions to oil-based fuels would be necessary in the short term. While these solutions, such as coal liquefaction or in some cases natural gas liquefaction, are possible and conceivable in principle, they would entail considerable political and economic efforts. They would require considerable investments and radical industrial policy decisions. Considering the challenges society as a whole would face as a result of peak oil, it seems unlikely that this could be accomplished even in case of an emergency. Moreover, worldwide (re-)initiation particularly of coal and gas liquefaction would further expedite both the shortage of fossil fuels and climate change. Even though cooperation in international alliances may hold benefits when it comes to technologies or coal and gas reserves, it would turn coal and gas into even more important and strategic resources and make their national exploitation a priority. Especially coal could potentially become a “strategic reserve” for Germany. Besides ensuring the availability of alternative fuel solutions such as liquid coal or gas at least in technological terms, building up large strategic reserves of fuel for all kinds of Bundeswehr vehicles, ships and aircraft should be considered in order to bridge supply shortages for an extended period of time if necessary.

Conclusion

When considering the consequences of peak oil, no everyday experiences and only few historical parallels are at hand. It is therefore difficult to imagine how significant the effects of being gradually deprived of one of civilization’s most important energy sources will be.

Psychological barriers cause indisputable facts to be blanked out and lead to an almost instinctive refusal to look into this difficult subject in detail. Peak oil, however, is unavoidable. This study shows the existence of a very serious risk that a global transformation of economic and social structures, triggered by a long-term shortage of important raw materials, will not take place without frictions regarding security policy. The disintegration of complex economic systems and their interdependent infrastructures has immediate and in some cases profound effects on many areas of life, particularly in industrialized countries.

While it is possible to identify specific risks, the majority of the challenges we are facing are still unknown.

Even if the developments described in this study do not occur as depicted, it is still necessary and sensible to prepare for peak oil. The time factor may be decisive for a successful transformation towards post-fossil societies. In order to accelerate democratic decision processes, it is necessary to embed the dangers of an eroding resource basis in the public mind; this is the only way to develop the problem awareness necessary for setting the course in advance. In general, decentralized solutions can be encouraged by centralized agencies, but not developed and implemented by them.


Nitrogen fertilizer poses significant threats to humans and the environment

[Figure: nitrogen flows in agriculture]

NRC. 2015. A Framework for Assessing Effects of the Food System. National Research Council, National Academies Press. 19 pages.  

Nitrogen (N) is essential for agricultural productivity, but in its more reactive forms, it can pose significant threats to humans and the environment. Quantifying the abundance of nitrogen in different chemical forms and understanding its pathways through soil, air, water, plants, and animals under different management scenarios are essential to minimize threats to human health and environmental quality. Nonetheless, studying multiple forms of nitrogen in the environment presents many challenges and calls for the use of a systems analysis framework.

Nitrogen (N) is the most limiting element for plant growth in many ecosystems, despite being the most plentiful element in the earth’s atmosphere. In its most abundant form, gaseous dinitrogen (N2), N is unavailable to most organisms. However, following transformation to other forms, especially nitrate (NO3–) and ammonium (NH4+), N becomes highly reactive in the biosphere and can be highly mobile in water and air. Nitrogen is a key component of proteins in both plants and animals, including the enzymes responsible for photosynthesis and other critical biological reactions, and the muscles used for movement and other body functions. Consequently, most crops, especially cereals, require sizable supplies of N to yield well, and livestock and poultry need a diet rich in N to produce large quantities of milk, eggs, and meat. Agriculture now uses more reactive N than does any other economic sector in the United States  and is also the sector responsible for the greatest losses of reactive N to the environment, where N has multiple unintended consequences, including threats to human health, degradation of air and water quality, and stress on terrestrial and aquatic organisms. Because reactive N strongly affects crop production and farm profitability, as well as human health and environmental quality, managing N efficiently and in an environmentally harmonious manner is a critically important component of agricultural sustainability.

Mineral N fertilizers produced through the Haber-Bosch process constitute the single greatest source of reactive N introduced into the United States, with about 11 teragrams (Tg) of fertilizer N being used in U.S. agriculture each year. (A teragram is 1 billion kilograms, equivalent to 2,204,622,622 pounds or 1,102,311 short tons.)
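
[A quick arithmetic check of these conversions, my own, not from the report:]

```python
# Sanity-check the teragram conversions quoted above.
KG_PER_TG = 1e9            # 1 teragram = 1 billion kilograms
LB_PER_KG = 2.20462262     # pounds per kilogram
LB_PER_SHORT_TON = 2000.0  # pounds per short ton

pounds = KG_PER_TG * LB_PER_KG
short_tons = pounds / LB_PER_SHORT_TON
print(f"1 Tg = {pounds:,.0f} lb = {short_tons:,.0f} short tons")
# 1 Tg = 2,204,622,620 lb = 1,102,311 short tons
print(f"11 Tg of fertilizer N ~= {11 * short_tons / 1e6:.1f} million short tons/yr")
```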

Mineral forms of N fertilizer are energetically expensive to synthesize (57 MJ fossil energy/kg N) and sensitive to increases in the price of natural gas used in their production.

Thus, the fact that typically only 40% to 60% of applied N fertilizer is absorbed by crop plants implies large agronomic, economic, and energetic inefficiencies, as well as a large potential for excess N to move downstream and downwind from crop fields.
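
[Combining the figures above, a back-of-envelope sketch of the energy at stake, my arithmetic, not the report’s:]

```python
# Rough energy embodied in U.S. fertilizer N, and the share lost to
# incomplete crop uptake, using the figures quoted above.
FERTILIZER_N_TG = 11.0   # Tg of fertilizer N used per year
MJ_PER_KG_N = 57.0       # fossil energy to synthesize 1 kg of mineral N
KG_PER_TG = 1e9
MJ_PER_PJ = 1e9          # 1 petajoule = 10^9 megajoules

total_pj = FERTILIZER_N_TG * KG_PER_TG * MJ_PER_KG_N / MJ_PER_PJ
print(f"Energy embodied in fertilizer N: ~{total_pj:,.0f} PJ/yr")  # ~627 PJ

for uptake in (0.40, 0.60):  # the 40-60% uptake range quoted above
    lost_tg = FERTILIZER_N_TG * (1 - uptake)
    lost_pj = total_pj * (1 - uptake)
    print(f"at {uptake:.0%} uptake: {lost_tg:.1f} Tg N unabsorbed, "
          f"~{lost_pj:,.0f} PJ of embodied energy")
```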

The exact fate of N fertilizer is heavily dependent on farm management decisions influencing N cycle processes, including crop selection, irrigation management, and the rate, formulation, placement, and timing of fertilizer applications. The fate of fertilizer N also can be highly dependent on weather conditions, especially precipitation patterns.

In addition to the application of mineral fertilizers, N may enter crop fields by several other pathways. Biological fixation of atmospheric N2 by microbes associated with the roots of leguminous crops like soybean and alfalfa (symbiotic fixation) adds about 8 Tg N per year to U.S. agroecosystems (EPA, 2011). Additional pathways by which reactive N is introduced into agroecosystems include lightning, fixation by nonsymbiotic microbes living in soil, and atmospheric deposition. The former two processes are responsible for adding only small quantities of N; the latter input can be locally important.

About 6.8 Tg of N is present in manure produced each year in the United States, but of that quantity, only 0.5 to 1.3 Tg N is applied to cropland and 3.7 Tg N is deposited on pastures and rangelands, indicating that a substantial proportion of manure N is not recycled effectively.
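
[A simple mass balance on these manure figures, my arithmetic:]

```python
# How much manure N is returned to farmland, per the figures above?
MANURE_TOTAL_TG = 6.8   # Tg N in manure produced each year
PASTURE_TG = 3.7        # Tg N deposited directly on pastures/rangelands

for cropland_tg in (0.5, 1.3):  # low and high ends of the cropland range
    returned = cropland_tg + PASTURE_TG
    unrecycled = MANURE_TOTAL_TG - returned
    print(f"cropland {cropland_tg} Tg: {unrecycled:.1f} Tg "
          f"({unrecycled / MANURE_TOTAL_TG:.0%}) of manure N not recycled")
# Roughly 1.8 to 2.6 Tg (26-38%) of manure N is not returned to farmland.
```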

Moreover, manure application rates vary greatly among fields, with most fields receiving none and some receiving high rates.

Consequently, excessive concentrations of nutrients, especially phosphorus and N, can occur in the vicinity of concentrated animal feeding operations and can lead to water pollution.

Nitrogen can be lost from soil via:

  1. Leaching, runoff, and denitrification, which are critical components of agroecosystem N dynamics, farm profitability, and environmental quality.
  2. Gaseous ammonia emitted from fertilizer and manure applied to the soil.
  3. Senescing crops.
  4. Erosion of topsoil and the organic forms of N it contains.
  5. Harvesting large amounts of crop residue, which can deplete soil organic matter; the resulting lack of protective soil cover may also increase N losses through erosion and runoff.

The magnitudes of these losses are highly variable in space and time, and they are strongly influenced by weather conditions and management practices.

Human Health and Environmental Concerns

Reactive N released from agroecosystems is responsible for a number of adverse public health and environmental effects. Four of the most salient effects for the United States are noted here.

Drinking water contamination

Nitrate coming from farmland is an important contaminant of drinking water in many agricultural regions (EPA, 2011). It constitutes a potential health threat due to its ability to (1) induce methemoglobinemia, a condition in which the oxygen-carrying capacity of blood is inhibited; (2) promote endogenous formation of N-nitroso compounds, which are carcinogens and teratogens; and (3) inhibit iodine uptake, thereby inducing hypertrophic changes in the thyroid.

These health concerns are not restricted to members of the farm population. Nitrate contamination of surface water is common in the Corn Belt and is a recurrent challenge to cities such as Des Moines, Iowa, which draws drinking water from the Raccoon and Des Moines Rivers, both of which drain intensively farmed areas. After repeatedly violating the U.S. Environmental Protection Agency’s (EPA’s) drinking water standard of 10 mg/L for nitrate-nitrogen, and challenged by increasing levels of nitrate in its source water, the Des Moines Water Works constructed the largest ion exchange nitrate removal facility in the world in 1991. The need for this facility, which provides service to 500,000 people, has not abated, as record high levels of nitrate were encountered in Des Moines’ drinking water sources in 2013.

Nitrate also poses a significant threat to groundwater used for drinking water. A recent report focusing on the Tulare Lake Basin and Salinas Valley of California, which together contain 40% of the state’s irrigated cropland and more than 50% of its dairy cattle, found that nitrate poses a significant threat to the health of rural communities dependent on well water, with nearly 1 in 10 people in the two regions now at risk. The report identified agricultural fertilizers and animal wastes as the largest sources of nitrate in groundwater in the areas investigated.

Eutrophication and hypoxia

Reactive N in water draining from agricultural regions can be responsible for eutrophication of freshwater bodies and hypoxia in coastal waters. High levels of N in water stimulate harmful algal blooms, leading to suppression of desired aquatic vegetation, and when the algae die, their subsequent decomposition by bacteria leads to large reductions in dissolved oxygen concentrations, with concomitant reductions in populations of shellfish, game fish, and commercial fish. Eutrophication and hypoxia effects are often spatially separated from their causes. For example, an estimated 71% of the N entering the northern Gulf of Mexico, the largest hypoxic zone in the United States and the second largest hypoxic zone worldwide, comes from croplands, rangelands, and pastures upstream in the Mississippi River Basin, with 17% of the total N load coming from Illinois, 11% from Iowa, and 10% from Indiana. Thus, because of the mobility of reactive N, agricultural practices and land uses in one region can affect water quality, recreational activities, and economic sectors like fisheries hundreds of miles downstream.

Greenhouse gas loading

Agricultural practices, principally fertilizer use, are responsible for about 74% of U.S. emissions of nitrous oxide (N2O), a greenhouse gas with a global warming potential 300-fold greater than that of carbon dioxide. Although the agricultural sector is responsible for only 6.3% of total U.S. greenhouse gas emissions, it is notable that agricultural emissions can offset efforts to use agricultural systems to mitigate climate change by sequestering carbon dioxide or providing alternative energy sources. Nitrous oxide emissions from agriculture also are notable as illustrations of how practices taking place locally on farmlands can have global scale effects.
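
[To illustrate what a 300-fold global warming potential means in practice, here is a small sketch; the tonnage is hypothetical, since this summary does not give the U.S. N2O emission mass:]

```python
# CO2-equivalence arithmetic for N2O. The emission quantity below is
# hypothetical, chosen only to show how the 300x GWP factor is applied.
GWP_N2O = 300.0             # global warming potential relative to CO2
hypothetical_n2o_tg = 1.0   # assume 1 Tg of N2O emissions (illustrative)

co2e_tg = hypothetical_n2o_tg * GWP_N2O
print(f"{hypothetical_n2o_tg:.0f} Tg N2O ~= {co2e_tg:,.0f} Tg CO2-equivalent")
# Even small N2O tonnages therefore carry a large climate footprint.
```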

Ecological and human health effects of ammonia and other NHx-N emissions

In 2002, the United States emitted 3.1 Tg of N into the atmosphere as ammonia and other NHx-N compounds, with agricultural practices, principally manure and fertilizer management, estimated to be responsible for 84% of that total. Most of these emissions are deposited within 1,000 km downwind as ammonia or ammonium in rainwater and aerosols. Ammonia emissions can lead to the formation of fine inorganic particulate matter (PM2.5) as ammonium-sulfate-nitrate salts, which are a factor in premature human mortality.

Deposition of reactive N from the atmosphere can acidify soils and waters and alter plant and soil community composition in grasslands and forests, leading to reductions in overall biological diversity and increases in the abundance of certain weedy species. Like the movement of reactive N in water from agricultural regions to coastal ecosystems, the aerial movement and deposition of NHx-N compounds illustrates that agriculture’s impact on the environment can extend into other ecosystems that may be located considerable distances from farmlands.

Using models of ammonia sources and transport and PM2.5 formation and deposition, Paulot and Jacob (2014) calculated the quantities of atmospheric ammonia and PM2.5 that are related to U.S. food exports and the associated impacts of these pollutants on human health. They concluded that over the study period of 2000 to 2009, 5,100 people died annually due to these emissions, incurring a cost of $36 billion. This value greatly exceeded the net value of the exported food ($23.5 billion per year). The investigators noted that these human health and economic costs indicated “extensive negative externalities,” and that taking into account other environmental impacts of agriculture, such as eutrophication, loss of biodiversity, and greenhouse gas emissions, would further diminish the value of agricultural production and exports.
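
[Two implied figures from these numbers, my own division; the paper’s methodology is more involved:]

```python
# Implied cost per death and externality ratio from the Paulot and Jacob
# (2014) figures quoted above (simple division, not the paper's method).
deaths_per_year = 5_100
health_cost_usd = 36e9     # annual cost attributed to ammonia-related PM2.5
export_value_usd = 23.5e9  # annual net value of exported food

print(f"implied cost per death: ${health_cost_usd / deaths_per_year:,.0f}")
# ~$7.1 million, in line with common "value of a statistical life" figures
print(f"health cost vs. export value: {health_cost_usd / export_value_usd:.2f}x")
# ~1.53x: the estimated externality exceeds the net value of the exports
```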

Policy and Educational Considerations

Environmental quality and human health concerns related to the use of N for crop production have important policy dimensions.

In an analysis of 29 watersheds covering 28% of the United States, Broussard et al. (2012) noted that increases in federal farm program payments were significantly correlated with greater dominance of cropland by corn and soybean, more expansive fertilizer applications, and higher riverine nitrate concentrations.

They suggested that federal farm policies, expressed through farm payments, are a potent policy instrument that affects land-use decisions, cropping patterns, and water quality. Based on focus group interviews with farmers and residents of the Wells Creek and Chippewa River watersheds in Minnesota, Boody et al. (2005) noted that recent federal programs have encouraged the production of a narrow set of commodity crops while discouraging diversified agriculture and conservation efforts that better protect environmental quality. Similarly, Nassauer (2010, p. 190) observed that “for more than 50 years, production subsidies have vastly exceeded conservation spending—by almost ten times today—and this ratio has been clearly understood by farmers making production decisions.”

Consequently, fewer opportunities exist for reducing N emissions to air and water from arable croplands through the increased use of conservation buffer strips and grasslands, reconstructed wetlands, and diversified cropping systems that include hay and other non-commodity crops.

Federal energy policies that have promoted ethanol production from corn grain have been linked to reactive N emissions. Donner and Kucharik (2008) used process-based models to simulate hydrological and nutrient fluxes in the Mississippi River Basin under different corn production scenarios. They found that the increase in corn cultivation required to meet the federal goal of producing 15 to 36 billion gallons of renewable fuels by the year 2022 would increase average annual discharge of dissolved inorganic N into the Gulf of Mexico by 10 to 34 percent.

 


Water resources infrastructure deteriorating

[ Water infrastructure is interdependent with other essential infrastructure: if dams or levees fail, agriculture and electric power suffer, and towns and homes are flooded. If ocean ports and inland waterways aren’t maintained and dredged, shipping, by far the most efficient form of transportation, is lost, and energy is wasted on less efficient rail and trucks.

What follows are a few excerpts from this 121 page National Research Council report.

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer) and “Crunch! Whole Grain Artisan Chips and Crackers.” Podcasts: KunstlerCast 253, KunstlerCast 278, Peak Prosperity]

NRC. 2012. Corps of Engineers Water Resources Infrastructure: Deterioration, Investment, or Divestment? National Research Council, National Academies Press.

The U.S. Army Corps of Engineers (Corps) has major federal responsibilities for supporting flood risk management activities in communities across the nation, ensuring navigable channels on the nation’s waterways, and restoring aquatic ecosystems. The Corps also has authorities to provide water supply, protect and maintain beaches, generate hydroelectric power, support water-based recreation, and ensure design depths in the nation’s ports, harbors, and associated channels. The Corps is the federal government’s largest producer of hydropower and a leading provider of outdoor recreation areas and facilities. The Corps of Engineers also regulates alteration of wetlands.

To meet its responsibilities in these various sectors, the Corps of Engineers has built incrementally what now comprises an extensive water resources management infrastructure that includes approximately 700 dams, 14,000 miles of levees in the federal levee system, and 12,000 miles of river navigation channel and control structures.

This infrastructure has been developed over the course of more than a century, most of it on an individual project basis, within varying contexts of system planning. From a macroscale perspective, the water resources infrastructure of the nation is largely “built out.”  New water projects of course will be constructed in the future, but given that most of the nation’s major river and coastal systems have been developed, there are reduced opportunities for new water resources infrastructure construction.

Ecosystem restoration was added as a primary missions area for the Corps in 1996 and has been the main focus of new construction. Large portions of the Corps’ water resources infrastructure were built in the first half of the twentieth century and are experiencing various stages of decay and disrepair. Project maintenance and rehabilitation are thus high priority needs for Corps water infrastructure. Funding streams in the U.S. federal budget over the past 20 years consistently have been inadequate to maintain all of this infrastructure at acceptable levels of performance and efficiency. In instances where the Corps shares maintenance responsibilities with a nonfederal partner (e.g., many of the flood risk management projects built by the Corps), local or state funds are less available than in recent past years. The water resources infrastructure of the Corps of Engineers thus is wearing out faster than it is being replaced or rehabilitated. Estimated to have a value of $237 billion in the 1980s, the estimated value of that infrastructure today is approximately $164 billion.

ASSETS. The Corps’ primary civil works mission areas are navigation, flood risk management, and ecosystem restoration. The Corps also has authorities, responsibilities, and programs for hydropower generation, harbors and ports, recreation, and coastal and beach protection.

The Corps’ original involvement in national water resources planning dates back to the nineteenth century and its work in ensuring navigable rivers. In the twentieth century, after the 1927 Mississippi River flooding and the resulting damages, the Corps became involved in flood damage reduction.

The inland navigation system presents an especially formidable challenge and a set of difficult choices. There are limited options and stark realities, including:

  • Funding from Congress for project construction and rehabilitation has been declining steadily.
  • Lockage fees on users/direct beneficiaries could be implemented. These are resisted by users and others.
  • Parts of the system could be decommissioned or divested and the extent of the system decreased.
  • The status quo is a likely future path, but it will entail continued deterioration of the system and eventual, significant disruptions in service. It also implies that the system will be modified by deterioration, rather than by plan.

Introduction

The Corps of Engineers has constructed, operates, and maintains a vast national water resources infrastructure, with various facilities in all 50 U.S. states. The traditional mission areas of the Corps were flood control and navigation enhancement, and the agency has constructed tens of thousands of miles of levees, hundreds of locks and dams for navigation, and dams for multiple purposes, including hydroelectric power generation.

The Corps has constructed channel control structures along hundreds of miles of rivers and along the intracoastal waterways of the southern and eastern United States.

The Corps also has important responsibilities in ensuring navigable depths in the nation’s ports and harbors.

Corps water resources infrastructure affects river flows and levels on many of the nation’s large river systems, including the Columbia, Missouri, Mississippi, and Ohio Rivers.

Much of the Corps of Engineers water resources infrastructure was constructed many decades ago. Approximately 95% of the dams managed by the Corps are more than 30 years old, and 52% have reached or exceeded their 50-year project lives.

Similar statistics can be cited for Corps levees, hydropower, and other facilities. This deterioration of Corps water resources infrastructure is a microcosm of larger national trends in the deteriorating condition of major infrastructure, including highways, bridges, roads, airports, and drinking water and wastewater treatment facilities. Degradation of U.S. infrastructure has been discussed in many fora, such as the well-known annual infrastructure “report cards” issued by the American Society of Civil Engineers. In addition to the aging of Corps water infrastructure in particular, federal resources for major rehabilitation have decreased. Since the mid-1980s, the constant-dollar estimate of the net capital stock of the Corps’ civil works projects has decreased from roughly $237 billion to about $164 billion.
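
[Expressed as an average annual rate, the decline looks like this; my arithmetic, assuming 1985 and the 2012 report year as endpoints:]

```python
# Annualized decline of the Corps' net capital stock (constant dollars),
# assuming mid-1980s = 1985 and the report year 2012 as endpoints.
start_usd, end_usd = 237e9, 164e9
years = 2012 - 1985

total_decline = 1 - end_usd / start_usd
annual_rate = (end_usd / start_usd) ** (1 / years) - 1
print(f"total decline: {total_decline:.0%}")            # ~31%
print(f"average annual change: {annual_rate:+.2%}/yr")  # ~ -1.4%/yr
```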

The Monongahela River, which flows from West Virginia to Pittsburgh, where it joins the Allegheny River to form the Ohio River, was one of the nation’s first inland waterways to have a lock and dam infrastructure installed to aid river navigation. Construction of the first locks and dams was initiated in 1837 by the Commonwealth of Pennsylvania. The federal government also constructed locks and dams in the Monongahela, and in the late nineteenth century the federal government took over the entire system. The present navigation system comprises nine locks and dams and was constructed by the Corps of Engineers beginning in 1902. Locks and Dams 2, 3, and 4 in the Lower Monongahela River, just south of Pittsburgh, are the three oldest currently operating navigation facilities on the river and experience the largest volume of commercial traffic for the river.

The U.S. Army Corps of Engineers constructed, operates, and maintains a vast water resources infrastructure across the United States that includes dams, levees, and coastal barriers for flood risk management, locks and dams for inland navigation, ports and harbors, and hydropower generation facilities. Much of this infrastructure exhibits considerable maintenance and rehabilitation needs. Federal investments in civil works infrastructure for water management have been declining since the mid-1980s, and today there are considerable deferred rehabilitation and maintenance needs.

The Corps has constructed, and operates and maintains, a large portion of the infrastructure that supports the nation’s commercial inland waterways and its ports and harbors. Corps-maintained waterways and ports support commercial navigation in 41 U.S. states. In considering the current state of the Corps’ navigation infrastructure and its options for rehabilitating and upgrading that infrastructure, it is important to recognize several distinctions between infrastructure for inland navigation and that for harbors and ports. Important differences between these systems in terms of taxation, public and private funding and facilities ownership, companies that use the facilities, and other factors will affect the direction of future infrastructure rehabilitation and upgrades.

Inland Navigation

The commercial inland navigation system includes roughly 12,000 miles of maintained river channels and 191 lock sites with 238 navigation lock chambers. Figure 3-1 shows the scope of the Corps-maintained inland waterways system. The U.S. inland navigation system is used to ship bulk commodities such as corn and soybeans, coal, fertilizer, fuel oil, scrap metal, and aggregate (sand and gravel). Some of this cargo may transit nearly the entire length of the system. For example, corn and soybeans are shipped from across the Midwestern United States down the Ohio, Illinois, and Mississippi Rivers to the Port of New Orleans, then exported. By contrast, some portions of the system are used primarily for local transport. For example, of total commodity tonnage shipped on the Missouri River between 1994 and 2006, 83 percent was estimated to originate and/or terminate in the state of Missouri, with 84 percent of the shipments consisting of sand and gravel (GAO, 2009). The Atlantic and Gulf Intracoastal waterways also provide commercial transportation corridors. All portions of the inland navigation system also serve recreational uses, but it is commercial traffic that primarily justifies and helps fund the system. The system is used primarily by U.S.-based domestic shipping companies. Lock and dam facilities on the inland navigation system are federally owned, operated, maintained, and rehabilitated.

Some portions of the Atlantic and Gulf Intracoastal Waterways, however, are operated and maintained by the states they border. There have been major changes to the U.S. economy, patterns of trade, and other cargo transportation alternatives since much of the inland navigation system was constructed several decades ago. Before the nation had its currently extensive rail and highway systems, “inland waterways were a primary means of transporting bulk goods” (Stern, 2012). Today, alternative modes for shipping inland navigation goods—namely, roads and rail—are in a more advanced state of development than during the period when the lock and dam projects were constructed. Although they remain important transportation modes for some sectors in some areas, “inland waterways are a relatively small part of the nation’s overall freight transportation network”. The topics of relative costs, energy uses and efficiencies, and environmental impacts of rail, road, and barge transport make for lively debate among users of these respective modes.

Another important aspect of the inland navigation system is that its locks and dams create extensive upstream navigation pools. These navigation pools often affect river ecosystems up- and downstream for tens of miles. The inland navigation system thus affects many public resources and many private system users beyond commercial cargo carriers. There are impacts on floodplain lands overseen by federal government agencies (such as the U.S. Fish and Wildlife Service), private landowners, and recreational users, including boaters and anglers. The navigation pools are sources of both beneficial and negative effects.

Ports and Harbors

The Corps of Engineers maintains 926 coastal, Great Lakes, and inland harbors (Figure 3-2). U.S. harbors and ports operate in a setting very different from the inland navigation system. For example, U.S. harbors and ports handle a wider variety and higher volume and value of cargo than does the inland navigation system. Many more shippers use U.S. harbor and port facilities compared to the inland navigation system, and these shippers include both U.S. domestic and international companies.

The harbors and ports generally are operated as public-private partnerships, and do not depend on direct federal resources. Corps responsibilities in ports and harbors are focused on dredging to maintain desired navigation and docking depths. The Corps also maintains wave/surge protection structures at some ports and harbors. This division of responsibilities and limited role for the federal government allows harbors and ports to pursue a broader range of partnerships and financing options.

There are generally fewer cost-effective alternatives to maritime transport for intercontinental or trans-ocean shipment for larger, heavier bulk goods such as coal and petroleum. This provides strong incentives for all port and harbor users and beneficiaries to be interested in port and harbor maintenance.

The Olmsted Locks and Dam project will replace 1920s-era Locks and Dams 52 and 53, the first two on the Ohio River above the confluence with the Mississippi River. These two aged facilities handle about 90 million tons of cargo annually, the highest cargo tonnage in the entire inland waterways system. Completion of the Olmsted project, first authorized in the Water Resources Development Act of 1988, is the highest priority inland waterways project for the Corps of Engineers. The project is located about 20 miles upstream of the Mississippi River, near Olmsted, Illinois, and includes two 110-foot-wide by 1,200-foot-long lock chambers and a 2,500-foot dam with a navigable pass located near the Illinois shoreline. When the project was authorized in 1988, the estimated cost was $775 million and the estimated completion date was 2000, but subsequent design changes, dam construction difficulties, and inadequate, start-stop funding have increased the cost estimate to $3.1 billion and extended the projected completion date to 2024. The twin 1,200-foot locks were completed in 2002 at a total cost of approximately $430 million, including the costs of the cofferdam and approach walls. The contract for the dam was awarded in 2004 and construction commenced in 2005. In 2004, the total project cost estimate was revised to $1.4 billion and the completion date to 2014; by 2011, the cost estimate was revised to $2.1 billion and the completion date to 2018; and in March 2012 budget hearings the Corps revised the cost estimate to $3.1 billion and the completion date to 2024.

In summary, the inland navigation system relies more heavily on federal support for major maintenance than do ports and harbors, which depend more on fees from private shippers and investments from state and local governments. In an era of steady reduction of federal investments in civil works infrastructure, these distinctions may have sobering implications for prospects of future inland navigation infrastructure repairs and upgrades.

Infrastructure Status – Inland Navigation

Large portions of the inland navigation infrastructure were constructed in the first half of the twentieth century. Many dams on the Ohio River, for example, were built in the early 1900s, some of them more than one hundred years ago. The Upper Mississippi River 9-foot channel navigation project was authorized in the Rivers and Harbors Act of 1930 and completed by 1940. The Missouri River main-stem dams were authorized with passage of the 1944 Flood Control Act, and the Missouri River Bank Stabilization and Navigation Project (BSNP) was authorized in the 1945 Rivers and Harbors Act. Although officially completed in 1981, many revetments and other BSNP channel works were built during the 1950s and 1960s. Much of this navigation infrastructure is nearing the end of (or has exceeded) its design life and is in various states of disrepair. Investments in routine maintenance, upgrades, and rehabilitation for the infrastructure have lagged since the mid-1980s.

Prospects for Decommissioning

Decommissioning of a dam entails full or partial removal of an existing dam and its associated facilities, or significant changes to its operations. In the United States, the process of dam decommissioning includes many of the same considerations as project construction.

Dam decommissioning is not a simple process, nor is it without costs. Especially for larger dams, substantial advance planning is required.

Free-flowing rivers transport and remobilize sediments, especially during high flows associated with spring snowmelt and storm events. Deposition of these sediments on flood plains and coastal wetlands renews sediment lost through erosion and maintains the high productivity of these ecosystems, as described in the “flood pulse” concept. In turn, floodplain ecosystems attenuate floods, decreasing the magnitude of peak flows downstream, and coastal wetlands protect coastal communities from storm surges. In addition, riparian zones and floodplains provide critical habitats for aquatic biota and migratory birds.

The naturally varied hydrologic regime of free-flowing rivers provides a benefit in terms of maintaining aquatic biodiversity, especially in sustaining populations of endangered fish. Flow regulation can impair the survival of native fish by causing large daily variations in downstream flow to meet power demands and by creating barriers to upstream migration of salmon and steelhead.

Transportation Mode Alternatives

There often are alternative transport modes for the cargo that is shipped on the inland navigation system, the primary alternatives being rail and truck. For example, roughly one-third of U.S. grain exports today are shipped via rail to Portland, Oregon, where the grain is transferred to ocean cargo ships. U.S. freight rail carriers have in many cases upgraded and modernized their fleets in recent decades and have become more energy efficient.

Economic Efficiency and Future Infrastructure Investments

The funding needed to repair and upgrade the entire U.S. inland navigation system to safe and reliable condition will not be available in the near future. Clearly, the future U.S. inland navigation system will be different from the system of 50-plus years ago.

There are some claims that reduced barge traffic would in turn lead to reduced exports and increased reliance on alternative modes that are less fuel efficient and cause more air pollution. However, there has been little research on modal substitution for different product shipments on the inland waterways system.

FLOOD RISK MANAGEMENT

The Corps of Engineers has constructed an extensive infrastructure designed to manage flood risks along rivers and also infrastructure to protect against surges from coastal storms. The Corps has built approximately 11,750 miles of riverine levees across the nation and provides shoreline protection for hundreds of miles of U.S. coastlines. Many of the Corps’ approximately 700 dams also serve flood control purposes. Like its navigation infrastructure, a large portion of the Corps of Engineers’ levees and other protective structures was constructed in the first half of the twentieth century or earlier and faces many similar maintenance, rehabilitation, upgrade, and funding issues.

Infrastructure Status – Dams

The Corps of Engineers today owns and operates approximately 700 dams. These dams range in size and purpose from large multipurpose projects to waterways navigation dams. Not all of these dams serve flood control purposes. Navigation dams on the upper Mississippi and Ohio Rivers, for example, were not designed for flood protection and do not provide such benefits. Corps dams that provide flood risk reduction almost always support multiple purposes, such as hydroelectric power generation, water supply, and recreation. Approximately 95 percent of the dams managed by the Corps are more than 30 years old, and 52 percent have reached or exceeded their nominal 50-year project lives. Half of the Corps’ dam portfolio is actionable for rehabilitation, with potential requirements exceeding $20 billion. These dams are widely spread across the nation and exhibit varying degrees of deficiency and life-safety risk.

SACRAMENTO-SAN JOAQUIN DELTA LEVEE SYSTEM: RISKS AND REHABILITATION

California’s Central Valley, one of the nation’s most productive agricultural regions, is drained by the Sacramento River flowing from the north and the San Joaquin River flowing from the south. These rivers converge in the Sacramento-San Joaquin Delta before flowing to Suisun Bay and eventually to the San Francisco Bay and the Pacific Ocean. The Delta region comprises about 738,000 acres of land in six counties. Once dominated by islands, wetlands, and riparian forests, the Delta has been completely reconfigured for agriculture. Beginning in the 1850s, levees were constructed along the Sacramento and San Joaquin Rivers, and many of their tributaries, to make the land usable for both human settlement and agriculture.

The Central Valley today has one of the nation’s most extensive levee systems, with approximately 1,600 miles of federal levees and an equal length of nonfederal levees. The Delta region includes approximately 1,100 miles of levees, of which 385 miles are incorporated into federal flood control projects, mostly along the main-stem Sacramento and San Joaquin Rivers. The 700-plus miles of nonfederal levees, many of which line channels rather than rivers and prevent tidal inflows, generally do not meet the same design standards as the federal levees. Unlike river levees, which experience only periodic water loading during floods, many Delta levees bear constant water loading. The aging Delta levee system is fragile and undergoing failure.

There have been many Delta levee breaches, and there is great concern about multiple levee failures in the event of an earthquake or large storm. The City of Sacramento, now a major urban area with a population of approximately 500,000, is at substantial risk of a catastrophic flood event.

Hydropower Infrastructure Status

The Corps operates more hydropower projects than any other entity in the nation: 75 power plants with a total rated capacity of 20,500 megawatts (MW). In addition, there are another 90 nonfederal hydropower plants located at Corps dams, with a total capacity of 2,300 MW.

As in its other mission areas, the Corps’ hydropower facilities face the challenges of aging infrastructure and limited access to sources of revenue for adequate maintenance and repair.

Through its 75 hydropower plants and installed generation capacity of 20,500 megawatts (MW), the Corps owns and operates approximately one-fourth of the nation’s hydropower capacity. Most of its generating capacity is in the Federal Columbia River Power System (FCRPS), with much of the remaining capacity in its Missouri River dams.

Average annual energy generation from Corps projects is approximately 70 billion kWh (worth approximately $5 billion at current wholesale prices for power), and annual revenue to the U.S. Treasury from Corps hydropower sales is in the range of $2 billion to $3 billion per year. This represents over half the size of the entire Corps annual appropriation. As of 2010, the median age of all Corps hydropower projects was 47 years, and 90% of the projects were 34 years old or older. Given the ages of the facilities, operation, maintenance, and rehabilitation (OMR) needs and failure rates are increasing, along with associated decreases in performance. As an example, total hours of forced outages across all Corps hydropower projects have been increasing steadily since at least 1999 (Figure 3-7).
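As a quick sanity check on these figures, here is a minimal sketch; the generation, dollar value, and capacity numbers come from the text, while the implied price and capacity factor are derived values, not stated in the source:

```python
# Back-of-envelope check of the Corps hydropower figures quoted above.
annual_generation_kwh = 70e9   # ~70 billion kWh per year (from text)
stated_value_usd = 5e9         # ~$5 billion at wholesale (from text)
capacity_mw = 20_500           # total rated capacity (from text)

implied_price = stated_value_usd / annual_generation_kwh        # $/kWh
capacity_factor = annual_generation_kwh / (capacity_mw * 1_000 * 8_760)

print(f"Implied wholesale price: ${implied_price:.3f}/kWh (~${implied_price * 1000:.0f}/MWh)")
print(f"Implied fleet capacity factor: {capacity_factor:.0%}")
# -> about $0.071/kWh (~$71/MWh) and ~39%, plausible for hydropower
```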

In an era of heightened interest in energy policies and sources, electricity generation from Corps hydropower projects has been decreasing steadily as a result of insufficient equipment maintenance and rehabilitation. Total electric power generation from Corps hydropower projects decreased from 73.6 TWh in 2000 to 61.7 TWh in 2008, a decrease of 16%. At some Corps hydropower projects, none of the original equipment has been replaced since the facilities were constructed 30 or more years ago. Annual budgets for repairs and upgrades of most Corps hydropower equipment have been inadequate for a long time. This has resulted in degraded infrastructure and less efficient operation.

Although there is much interest in increasing domestic hydropower production, there are challenges confronting hydropower production beyond just finding the resources to replace, rehabilitate, and upgrade equipment. The fate of hydropower is entwined with the opposition to large dams based on economic, social, and environmental factors. Dams change river flows and the fish runs that depend on them, alter water chemistry, change riverine landscapes, and inundate large areas that can include scenic canyons and valleys. There is growing interest in dam removal in the United States, which could affect some Corps hydropower projects in the future, although likely not the largest projects. In addition, climate change adds concerns about reliability and predictability of hydropower development. Hydropower production also faces increasing competition for use of the water and for reservoir storage space. Many Corps dams and reservoirs are part of multiple-purpose projects, so that hydropower must compete with other uses such as flood protection, irrigation, water supply, efforts to protect fish, and efforts to restore aquatic ecosystems.

In order to realize the full potential of installed hydropower generation capacity at Corps projects, new approaches to funding OMR for hydropower must be developed and made possible through legislation. As noted in Sale (2010), the federal power marketing administrations (PMAs) are required by law to sell federal hydropower at rates that usually are significantly below market rates. These sales occur under long-term contracts that cannot easily be changed. The primary customers and beneficiaries of this power pressure the federal power producers to keep operation and maintenance costs as low as possible so as to keep power rates low.

Much of the existing water resources infrastructure of the Corps of Engineers, which is primarily in the mission areas of navigation, flood risk management, and hydropower production, is quite aged and has not been adequately maintained. Funding needs for the repair and rehabilitation of this infrastructure are substantial, and it is clear from the long-term trend of declining funding from Congress for new construction and rehabilitation that new infusions of funding will not be available in the short term. Parts of the infrastructure are failing, and parts are being taken out of service because of lack of funding. Corps of Engineers infrastructure has a range of OMR needs: lock repair, dam safety, levee monitoring and maintenance, port deepening, and hydropower facility maintenance and upgrades.

Inland Navigation

The inland navigation system presents an especially formidable challenge and a set of difficult choices. There are stark realities and limited options, including:

  • Funding from Congress for project construction and rehabilitation has been declining steadily.
  • Lockage fees on users/direct beneficiaries could be implemented, but these are resisted by users and others.
  • Parts of the system could be decommissioned or divested and the extent of the system decreased.
  • The status quo is a likely future path, but it will entail continued deterioration of the system and eventual, significant disruptions in service. It also implies that the system will be modified by deterioration, rather than by plan.

The national water infrastructure is largely “built out.” Compared to an earlier era, there are fewer opportunities and only a limited number of undeveloped or appropriate sites for new water resources infrastructure. New water projects will be constructed in the future, but the nation’s water resources infrastructure needs increasingly are in the areas of existing project operations, maintenance, and rehabilitation. In some instances, full project replacement may be needed. As new construction has declined since 1980, so too has the Corps civil works budget and hence funds available for OMR.

Without sufficient funding to address its many OMR needs, Corps of Engineers water resources infrastructure is not being adequately maintained and rehabilitated. Its future state thus will depend on actions taken, or not taken, in the near future. There is no single, obvious path forward for alternative funding mechanisms that might be used to fully maintain and upgrade existing Corps infrastructure. The different parts of the Corps water resources infrastructure—inland navigation, flood risk management, hydropower, and ports and harbors—are governed by different laws and have different sources of revenue.

 


Nuclear power in the U.S. is dead, reactors shutting down – not built

More nuclear power reactors are shutting down than being built

Preface. This article focuses on reactors being shut down; other posts discuss why they’re not being constructed, despite the intense and well-funded efforts of the nuclear lobby.

Since this article was published in 2013, 12 of the 37 at-risk plants Cooper listed have been or are scheduled to close (closing date): Clinton, Davis-Besse (5/31/2020), Duane Arnold (2020), Ft. Calhoun (10/24/2016), Indian Point 2 (4/30/2020) & 3 (4/30/2021), Oyster Creek (9/17/2018), Palisades (2022), Perry (5/31/2021), Pilgrim (6/1/2019), Quad Cities (6/1/2018), Three Mile Island (September 2019), Vermont Yankee (12/29/2014). And four more Cooper didn’t list are scheduled to shut down: San Onofre 2 & 3 (6/12/2013), Beaver Valley 1 (5/31/2021), and Beaver Valley 2 (10/31/2021).

And not long before Cooper published this article, Kewaunee (05/07/2013) and Crystal River (02/20/2013) were closed for financial reasons. Here are the remaining plants Cooper listed that aren’t scheduled to close: Browns Ferry, Callaway, Calvert Cliffs, Comanche Peak, Cook, Cooper, Fermi, Hope Creek, LaSalle, Limerick, Millstone, Monticello, Nine Mile Point, Point Beach, Prairie Island, Robinson, Seabrook, Sequoyah, South Texas, Susquehanna, Turkey Point, Wolf Creek.

FitzPatrick & Ginna (NY) were to have closed 1/27/2017, but Governor Cuomo pushed through a $7.6 billion ratepayer-funded subsidy to extend operations until 2029.

But hold the press — the nuclear lobby is succeeding in getting extensions of licenses beyond the 60-year lifetime. Six reactors have recently extended their operating licenses another 20 years, to 80 total years of operation, and approximately 19 other reactors are pursuing similar extensions. And there is pressure to extend their lives to 100 years because of billions in cost savings. Diablo Canyon 1 & 2 were scheduled to shut down in 2024-2025, but in 2022 their lives were extended another 20 years.

Nuclear power plants need more cooling water than coal or natural gas plants: 720 gallons per megawatt-hour, versus about 500 gallons for coal and 190 gallons for natural gas. So climate change may shut down additional plants due to drought, as lake and stream levels drop or the water gets too hot to cool the plant, and new nuclear power plants may not get approved in the future with that in mind (Kaufman 2018).
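To put those per-MWh figures in perspective, here is a minimal sketch; the 1 GW plant size and one-day window are illustrative assumptions, not from Kaufman:

```python
# Daily cooling-water consumption for a hypothetical 1 GW plant at full
# output, using the per-MWh figures quoted above (Kaufman 2018).
water_gal_per_mwh = {"nuclear": 720, "coal": 500, "natural gas": 190}

plant_mw = 1_000   # assumed plant size (illustrative, not from source)
hours = 24         # one day at full output

for fuel, gallons in water_gal_per_mwh.items():
    daily = gallons * plant_mw * hours
    print(f"{fuel:>11}: {daily / 1e6:.1f} million gallons per day")
# -> nuclear ~17.3, coal ~12.0, natural gas ~4.6 million gallons/day
```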

Reactors under construction that have been shut down

After spending $9 billion on the two reactors of the Virgil C. Summer Nuclear Generating Station, with only 40% completion and an expected final price tag of $25 billion, the project was abandoned in 2017 (Plumer). The only new nuclear plant being built in the U.S. now is in Georgia.

Another plant that was under construction, at the Savannah River Site nuclear weapons complex near Aiken, South Carolina, is being shut down after going billions over budget. The project, known as MOX, was touted as a way to get rid of excess U.S. weapons-grade plutonium and provide jobs as part of an arms-control agreement with Russia. But it recently was projected to cost at least $17 billion to complete, about three times original projections (Fretwell 2018).

Cooper leaves out the cost of nuclear waste storage, which makes the economics of nuclear plants even worse than in the article below (see his testimony before the Nuclear Regulatory Commission).

One of the costs Cooper mentions is post-Fukushima upgrades. Five years after the accident at Fukushima in Japan resulted in three reactor meltdowns, the global nuclear industry is spending $47 billion on safety enhancements mandated after the accident revealed weaknesses in plant protection from earthquakes and flooding. The median cost per nuclear power reactor is $46.7 million (Platts).

The only nuclear power reactors under construction are Georgia Power’s Vogtle reactors, initially estimated to cost $14 billion but now $27 billion (Amy 2021). The first reactors at the plant, begun in the 1970s, took a decade longer to build than planned and cost 10 times more than expected. The new units were expected to be running in 2016; now it’s unlikely they’ll be ready even in 2022.

In France, a new plant is running around six years behind schedule and likely to cost around $8 billion more than planned. Even keeping old reactors running may not make financial sense. In California, for example, extending the life of the Diablo Canyon plant would require new cooling towers that cost around $8 billion. It may also need billions in earthquake retrofits, because engineers realized after the project was built that it’s on a fault line (Peters). 2016 update: this is one of the reasons they’re going to be shut down.

There are only 61 commercially operating nuclear power plants left (of 90) in the United States today.

Alice Friedemann, www.energyskeptic.com. Author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, “When Trucks Stop Running: Energy and the Future of Transportation”, “Barriers to Making Algal Biofuels”, and “Crunch! Whole Grain Artisan Chips and Crackers”. Women in ecology. Podcasts: WGBH, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 & 278, Peak Prosperity. Index of best energyskeptic posts

***

Cooper M (2013) Renaissance in reverse: competition pushes aging U.S. nuclear reactors to the brink of economic abandonment. Institute for Energy and the Environment, Vermont Law School.

Although Wall Street analysts expressed concerns about the economic viability of the aging nuclear fleet in the U.S., the recent early retirements of 4 nuclear reactors have sent a shock wave through the industry. One retirement was purely economic (Kewaunee, 1 reactor); three were based on the excessive cost of repairs (Crystal River, 1 reactor; San Onofre, 2 reactors).

In addition, the cancellation of 5 large uprates (Prairie Island, 1 reactor; LaSalle, 2 reactors; Limerick, 2 reactors), four of them by the nation’s largest nuclear utility, suggests a broad range of operational and economic problems.

These early retirements and decisions to forego uprates magnify the importance of the fact that the “nuclear renaissance” has failed to produce a new fleet of reactors in the U.S.

With little chance that the cost of new reactors will become competitive with low carbon alternatives in the time frame relevant for old reactor retirement decisions, a great deal of attention will shift to the economics of keeping old reactors online, increasing their capacity and/or extending their lives.

The purpose of the paper is not to predict which reactors will be the next to retire, but to explain why we should expect more early retirements. It does so by offering a systematic framework for evaluating the factors that place reactors at risk of early retirement.

  • It extracts 11 risk factors from the Wall Street analysis and identifies three dozen reactors that exhibit four or more of the risk factors (see Exhibit ES-1).
  • It shows that the poor performance of nuclear reactors that is resulting in early retirements today has existed throughout the history of the commercial nuclear sector in the U.S. The problems are endemic to the technology and the sector.
  • It demonstrates that the key underlying economic factors – rising costs of an aging fleet and the availability of lower cost alternatives – are likely to persist over the next couple of decades, the relevant time frame for making decisions about the fate of aging reactors.

While the purpose of the Wall Street analyses is to advise and caution investors about utilities that own the aging fleet of at-risk reactors, my purpose is to inform policymakers about and prepare them for the likelihood of early retirements.           

RISK FACTORS (each of the reactors listed above exhibits 4 to 9 of these):

  1. ECONOMIC: Cost, Small, Old, Standalone, Merchant, <20 years of license remaining without extension, <25 years with extension
  2. OPERATIONAL: Broken, Reliability, Long-term-outage
  3. SAFETY: Many issues, Fukushima Retrofit

[Exhibit ES-1 (Cooper 2013): tables of reactors at risk of retiring]

Sources and Notes: Credit Suisse, “Nuclear… The Middle Age Dilemma? Facing Declining Performance, Higher Costs, Inevitable Mortality,” February 19, 2013; UBS Investment Research, “In Search of Washington’s Latest Realities (DC Field Trip Takeaways),” February 20, 2013; Platts, January 9, 2013, “Some Merchant Nuclear Reactors Could Face Early Retirement: UBS,” reporting on a UBS report for shareholders; Moody’s, “Low Gas Prices and Weak Demand are Masking US Nuclear Plant Reliability Issues,” Special Comment, November 8, 2012; David Lochbaum, “Walking a Nuclear Tightrope: Unlearned Lessons of Year-Plus Reactor Outages,” September 2006, and “The NRC and Nuclear Power Plant Safety” reports for 2011 and 2012, plus the UCS tracker; NRC reactor pages.

Operational Factors: Broken/Reliability (Moody’s for broken and reliability); Long-Term Outages (Lochbaum, supplemented by Moody’s; o = current, x = past); Near Miss (Lochbaum 2012); Fukushima Retrofit (UBS, Field Trip, 2013).

Economic Factors: Cost, Wholesale markets (Credit Suisse); Age (Moody’s and NRC reactor pages, with oldest unit: X = as old or older than Kewaunee, i.e. 1974 or earlier commissioning; O = commissioned 1975-1979, i.e. other pre-TMI); Small (Moody’s and NRC reactor pages, less than 700 MW at commissioning); Stand Alone (Moody’s and NRC reactor pages); Short License (Credit Suisse and NRC reactor pages). Some of the characteristics are site specific, some are reactor specific.

The reactors at a specific plant can differ by age, size, technology and the current safety issues they face. Historically, in some cases there were long outages at one, but not all of the reactors at a plant. Similarly, there are numerous examples of a single reactor being retired early at a multi-reactor site. Given the complexity of an analysis of individual reactors across the eleven risk factors and the fact that unique precipitating events are the primary cause of early retirements, I count only one potential reactor retirement per plant.

If anything goes wrong, any of these reactors could be retired early. The precipitating event could be a further deterioration of the economics, or it could be mechanical or safety related problems, as indicated on the right side of the table. The market will operate faster in the case of merchant reactors, but economic pressures have become so severe that regulators have been forced to take action as well. The same factors call into question the economic value of license extensions and reactor uprates where they require significant capital outlays.

Reviewing the Wall Street analyses, it is possible to parse through the long list of reactors at risk and single out some that face particularly intense challenges:

  • Palisades (repair impending, local opposition)
  • Ft. Calhoun (outage, poor performance)
  • Nine Mile Point (site size saves it, existing contract)
  • FitzPatrick (high cost)
  • Ginna (single unit with negative margin, existing contract)
  • Oyster Creek (already set to retire early)
  • Vermont Yankee (tax and local opposition)
  • Millstone (tax reasons)
  • Clinton (selling into tough market)
  • Indian Point (license extension, local opposition)

A couple of other reactors that are afflicted by a large number of these factors (Davis-Besse, Pilgrim) could also be particularly vulnerable.

The lesson for policy makers in the economics of old reactors is clear and it reinforces the lesson of the past decade in the economics of building new reactors. Nuclear reactors are simply not competitive. They are not competitive at the beginning of their life cycle, when the build/cancel decision is made, and they are not competitive at the end of their life cycles, when the repair/retire decision is made. They are not competitive because the U.S. has the technical ability and a rich, diverse resource base to meet the need for electricity with lower cost, less risky alternatives. Policy efforts to resist fundamental economics of nuclear reactors will be costly, ineffective and counterproductive.

INTRODUCTION: THE CHALLENGE OF AN AGING FLEET

Over the last decade, as nuclear advocates touted a “nuclear renaissance,” they made extremely optimistic claims about nuclear reactor costs to convince policymakers and regulators that new nuclear reactors would be cost competitive with other options for meeting the need for electricity. These economic analyses rested on two broad categories of claims about nuclear reactors.

(1) New nuclear reactors could be built quickly and at relatively low cost.

(2) New nuclear reactors would run at very high levels of capacity for long periods of time with very low operating costs.

Dramatically escalating construction cost estimates and severe construction difficulties and delays in virtually all market economies where construction of a handful of new nuclear reactors was undertaken have proven the first set of assumptions wrong. Recent decisions to retire aging reactors early remind us that the second set of assumptions was never true of the first cohort of commercial nuclear reactors and call into question the extremely optimistic assumptions about the operation of future nuclear reactors.

The Energy Information Administration (EIA) recently noted that in the current market, if aging reactors are in need of significant repair, it may not be worthwhile to do so. As the EIA put it, “Lower Power Prices and Higher Repair Costs Drive Nuclear Retirements.”

However, the problem is more profound than that. It is not only old, broken reactors that are at risk of retirement. As old reactors become more expensive to operate, they may become uneconomic to keep online in current market conditions. Indeed, the first reactor retired in 2013 (Kewaunee) was online and had just had its license extended for 20 years, but its owners concluded it could not compete and would yield losses in the electricity market over the next two decades, so they chose to decommission it. Things have gotten so bad in the aging nuclear fleet in the U.S. that Wall Street analysts have begun to issue reports with titles like “Nuclear… the Middle Age Dilemma? Facing Declining Performance, Higher Costs and Inevitable Mortality,” “Some Merchant Nuclear Reactors Could Face Early Retirement: UBS” and “Low Gas Prices and Weak Demand are Masking US Nuclear Plant Reliability Issues.”

As has been the case throughout the history of the commercial nuclear sector in the U.S., the primary obstacle to nuclear power is economic and it is critically important to cut through the hype and hyperbole on both sides of the nuclear debate to reach sound economic conclusions.

In half of the U.S., the price of electricity is set in a wholesale market. In these areas, the wholesale price, which is what all generators earn, is driven primarily by the fuel cost of running the last plant that needs to be operated to make sure supply is adequate to meet demand. This is the price that “clears” the market. In most regions of the nation, the price is set by natural gas, with coal playing that role in some places. In those areas of the U.S. where the wholesale price of electricity is set by the market, prices have been declining dramatically.

Over the past half-decade, the market clearing price has been declining. Fuel costs have been falling, driven by a dramatic decline in natural gas prices. At the same time, demand for electricity has been declining due to the increasing efficiency of electricity-consuming equipment and consumer durables. Moreover, the increase in renewable generation, which has the lowest (zero) fuel cost and therefore always runs when it is available, has lowered the demand for fossil-fired generation. This means that the market clears with more efficient (lower cost) plants, which lowers the market clearing price even further.

For consumers this is a very beneficial process; for producers not so much, since the prices they receive are declining.
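A toy sketch of this merit-order mechanism; all plant names, costs, and capacities below are hypothetical, and only the price-setting logic comes from the text:

```python
# Toy merit-order dispatch: the clearing price equals the fuel cost of
# the last (most expensive) plant needed to meet demand.
plants = [                    # (name, fuel cost $/MWh, capacity MW)
    ("wind",     0, 20),
    ("nuclear",  7, 30),
    ("coal",    25, 30),
    ("gas",     35, 40),
]

def clearing_price(demand_mw):
    supplied = 0
    for name, fuel_cost, capacity in sorted(plants, key=lambda p: p[1]):
        supplied += capacity
        if supplied >= demand_mw:
            return fuel_cost  # the marginal plant sets the price
    raise ValueError("demand exceeds total capacity")

print(clearing_price(95))     # gas is on the margin -> $35/MWh
plants[0] = ("wind", 0, 35)   # add 15 MW of zero-fuel-cost wind
print(clearing_price(95))     # coal now clears the market -> $25/MWh
```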

Old nuclear reactors are particularly hard hit by this market development. With prices set by fuel costs, all of the other costs of nuclear generation must be paid for out of the difference between the fuel costs of the reactor and the market clearing price. This is called the “quark” spread. A nuclear reactor is paid the market clearing price, which it must use to pay its own fuel costs, while the remainder must cover its other costs.

While nuclear fuel costs are low (although they have been rising), non-fuel operation and maintenance costs and ongoing capital costs are high, because of the complex technology needed to control a very volatile fuel. As reactors age, these non-fuel operating costs and ongoing capital additions rise.

With “quark” spreads falling, and operating costs rising, the funds available may no longer cover the other costs, or yield a rate of profit that satisfies the reactor owner.

Old reactors are pushed to the edge. If a reactor is particularly inefficient (has high operating costs), needs major repairs, or requires a safety retrofit, it can easily be pushed over that edge. The problem for old nuclear reactors has become acute: at precisely the moment that quark spreads are declining, the non-fuel operating costs of old reactors are rising.
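A minimal sketch of this margin logic; the quark-spread formula is from the text, but the specific prices and costs below are hypothetical, loosely echoing the Credit Suisse ranges cited later:

```python
# The "quark spread" is the market clearing price minus the reactor's
# own fuel cost; non-fuel O&M and ongoing capital additions must be
# paid out of this spread.
def quark_spread(market_price, fuel_cost):
    """Margin available to cover non-fuel costs, $/MWh."""
    return market_price - fuel_cost

def stays_economic(market_price, fuel_cost, nonfuel_om, capital_additions):
    """True if the spread covers non-fuel O&M plus ongoing capital."""
    return quark_spread(market_price, fuel_cost) >= nonfuel_om + capital_additions

# A decade ago: a ~$40/MWh spread comfortably covered ~$25/MWh of costs.
print(stays_economic(market_price=47, fuel_cost=7, nonfuel_om=20, capital_additions=5))  # True
# Today: a ~$25/MWh spread against ~$30/MWh of rising costs.
print(stays_economic(market_price=32, fuel_cost=7, nonfuel_om=25, capital_additions=5))  # False
```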

In the analysis that first sounded the alarm about early retirements of specific reactors, UBS explained the situation as follows:

Following Dominion’s recent announcement to retire its Kewaunee nuclear plant in Wisconsin in October, we believe the plant may be the figurative canary in the coal mine. Despite substantially lower fuel costs than coal plants, fixed costs are approximately 4-5x times higher than coal plants of comparable size and may be higher for single-unit plants. Additionally, maintenance capex of ~$50/kW-yr, coupled with rising nuclear fuel capex, further impede their economic viability … We believe 2013 will be another challenging year for merchant nuclear operators, as NRC requirements for Fukushima-related investments become clearer in the face of substantially reduced gas prices. While the true variable cost of dispatching a nuclear plant remains exceptionally low (and as such will continue to dispatch at most hours of the day no matter what the gas price), the underlying issue is that margins garnered during dispatch are no longer able to sustain the exceptionally high fixed cost structures of operating these units. Nuclear units… have continued to see rising fuel and cost structures of late, with no anticipation for this to abate. Moreover, public policy initiatives, such as Fukushima-related retrofits and mandates to reduce once- through cooling (potentially requiring cooling towers/screens for some units) and new taxes on others (Vermont Yankee, Dominion’s Millstone) have further impeded the economics of nuclear.

The problem is not a figment of the imagination of Wall Street analysts or confined to a small number of individual reactors. It is widespread, as demonstrated by the behavior of Exelon, the largest nuclear utility in the U.S. with ownership of one-quarter of all U.S. reactors. Exelon was also a big supporter of wind power, until the economics of old nuclear reactors began to deteriorate. Exelon then launched a campaign against subsidies for wind power, because the rich wind resource in the Midwest had begun to back out expensive gas. Market clearing prices declined reducing the margins that its nuclear fleet enjoyed. Exelon’s campaign against wind was sufficiently vigorous to get it kicked off the board of the American Wind Energy Association.

After decades of arguing that nuclear is the ideal low (fuel) cost, always-on source of power and touting the benefits of free markets in electricity, Exelon is proposing to reduce its output of nuclear power to drive up the market clearing price. Since withholding supply for the purpose of increasing prices is frowned upon (indeed would be a violation of the antitrust laws if they applied), it has to negotiate with the Independent System Operator to reduce output. These acts of desperation clearly suggest that the economics of old reactors are very dicey.

The pressure is magnified because the cost of operating old reactors is rising. Credit Suisse estimates that in the period when “quark” spreads were falling from $40/MWH to $20-$30/MWH, the operating costs of nuclear reactors were rising to the range of $25-$30/MWH. The resulting margins are razor thin, if not negative. The primary drivers of cost increases are non-fuel O&M and fuel costs, which have increased about $10/MWH. Thus declining wholesale prices account for about two-thirds of the shrinking margin and rising costs account for one-third.

Risk Factors

The economics of individual reactors will be affected by the size and condition of the reactor and the market into which it sells power. Credit Suisse points out that the merchant generators face the greatest challenges and concludes that “the challenge of upward cost inflation/weak plant profitability will likely put pressure on smaller, more marginal plants that could weigh on nuclear’s market share.”

Outages

The Credit Suisse analysis did not stop with operating costs, but went on to identify another important factor that affects aging nuclear reactors: outages. A nuclear reactor only receives the wholesale price and earns the “quark” spread while it is operating. Credit Suisse noted that 2011 and 2012 were years of heavy outages.

The largest part of the increase in outages was driven by large reactors down with operational problems (Crystal River, San Onofre, and Fort Calhoun), although extended outages for uprates also played a part (Turkey Point, St. Lucie). The reactors with the longest outages, facing substantial repair costs, Crystal River and San Onofre, have since been retired.

Moody’s has also expressed concern about reliability from a different point of view. When reactors are offline, the owners not only lose whatever margin they could have earned, they must also replace the power. In addition to costing the utility cash income, this will increase the demand for power in the market and push up the market clearing price. However, in the opinion of Moody’s, in the current supply and demand context, the availability of low cost natural gas is “masking” the seriousness of that problem. Moody’s worries that if the outages continue, the cost of replacement power will rise substantially. Moody’s highlights the fact that after Crystal River and San Onofre, whose outages led to early retirements, the longest ongoing outage is Fort Calhoun, now in unplanned outage for over two years. It has been beset with multiple issues and is under close scrutiny by the NRC.

The load factor – the percentage of the year a reactor is online producing power – is an important determinant of its economic performance. The average load factor is not only 4 percentage points lower for the oldest reactors, but the standard deviation is almost twice as high. In a market where margins are so thin, a 4-percentage-point difference in load factor is an important loss of revenue, and the much higher standard deviation represents significant uncertainty. Age and reliability matter, and they go hand in hand.
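For a sense of scale, a sketch with hypothetical numbers; only the 4-point load-factor gap comes from the text, while the reactor size and margin are assumptions:

```python
# Annual margin for a hypothetical reactor at two load factors,
# 4 percentage points apart.
capacity_mw = 900
hours_per_year = 8_760
spread_per_mwh = 5   # assumed thin margin over fuel cost, $/MWh

for load_factor in (0.90, 0.86):
    mwh = capacity_mw * hours_per_year * load_factor
    print(f"load factor {load_factor:.0%}: margin ${mwh * spread_per_mwh / 1e6:.1f} million/year")
# -> the 4-point gap costs ~$1.6 million/year of an already thin margin
```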

Asset Life

Age affects more than the level and uncertainty of the load factor. It is a primary determinant of remaining life. While many reactors have sought and received license extensions, a number of the older reactors have not. This means that capital expenditures may have to be recovered over a shorter period of time. To the extent that there are capital costs associated with keeping these reactors online, the short life may make it difficult to recover those costs where margins are thin. “Even assuming licenses are extended, 11 merchant nuclear units have a maximum useful life of less than 20 years… We worry whether plants will see the full 60 years as thin margins and big capex are too hard to cover.”

The analysis of the economics of aging reactors identifies a number of other characteristics that appear to reduce the economic viability of aging reactors. Small units that stand alone – geographically or organizationally – are believed to have higher costs and therefore are more vulnerable in the current market environment. Both of these factors generally reflect economies of scale since operating costs are spread across a smaller amount of capacity and output. Large, multi-unit sites integrated into corporate fleets of reactors can share indivisible costs. The retirement of Kewaunee underscores the fact that the economic benefits of being part of a fleet of reactors are dependent on the geographic location of the reactors as well.

Companies that operate multiple units are often better able to generate economies of scale and benefit from the breadth of experience housed in their nuclear operations. They are in a better position to share best practices among their own fleets and to compete for talent in this highly specialized field. Because of these advantages, a number of single-unit nuclear plant operators have decided to contract out all or part of the management of their nuclear operations to one of the more experienced companies in the field.

Regulated Reactors

Credit Suisse presents a similar analysis for regulated reactors, noting that “deregulated market prices are somewhat less relevant but we think… illustrate the challenges to economics of regulated nuclear as well.” Market economics may not rule in these cases, but these reactors exhibit similar difficulties. Using Kewaunee economics as the dividing line (cash flow of about $9/MWH), there are almost two dozen regulated reactors with challenging economics. In this group are retirements (San Onofre), canceled uprates (Prairie Island), and a long-term outage (Fort Calhoun). We find seven standalone assets, eight reactors with less than 20 years remaining on their licenses, and half a dozen small reactors (700 MW or less). There are 14 reactors that have two or more of these characteristics. Thus, in terms of basic economics, there are three dozen reactors that are on the razor’s edge.

CAPEX WILDCARDS

The above analysis describes the “normal” process of operating an aging fleet in the context of an energy economy in which low cost resources are available to meet needs. With the economic viability of an increasing number of reactors coming into question, the possibility of the need for significant capital expenditures becomes quite ominous. The prudence of making major expenditures to meet safety concerns, repair breakage and install technologies to increase output (uprates) is called into question. While there is a tendency to treat these as extraordinary events, they are frequent enough to merit consideration as part and parcel of the nuclear economic equation.

The commercial nuclear industry has historically had difficulty executing major construction projects, and that problem afflicts aging reactors. The retirements of Crystal River and San Onofre were precipitated by repair/upgrade projects that failed badly, resulting in the need for major repairs. The Florida uprates had substantial cost overruns. The Monticello life extension and uprate have experienced cost overruns of over 80 percent.

The response of the executives responsible for the Monticello uprate is revealing:

“It is a large complex project with many intricate components that required changes from the original plans,” Xcel’s chief nuclear officer, Timothy O’Connor, said in recent written testimony submitted to state regulators… O’Connor… testifies that other reactor projects – Grand Gulf in Mississippi, Turkey Point and St. Lucie in Florida and Watts Bar in Tennessee – also experienced cost overruns, in one case double the original estimate.

Defending uprate cost overruns by pointing out that everyone else is suffering the same problem is more an indictment of the industry than a defense of the utility. In fact, the severe contemporary execution risk of keeping old reactors online or increasing their output has started to look a lot like the contemporary (and historical) execution risk of building new reactors. Of almost three dozen uprates approved since 2009, over half have been abandoned, cancelled, or put on hold. Half of those that have moved forward have suffered major cost overruns.

The major uprates that have been proposed, and in a number of cases cancelled or abandoned, generally have cost estimates in the range of $1800 to $3500 per kW. Actual costs have been much higher, in the range of $3400 to $5800/kW. These high actual costs of the uprates are three to four times as much as new advanced combined cycle gas plant costs. Even the initial cost estimates were almost twice as high. Since the reactors being proposed for uprates are still old reactors, they are likely to have significant operating costs, although the uprates may improve their performance. With new gas plants being more efficient, as well, and having much lower capital costs and short lead times, it may well be that choosing between an uprate and a new gas plant has become a very close call. This explains the mixed record of major uprates in the past half-decade.
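A quick look at the arithmetic behind that comparison; the uprate cost ranges are from the text, but the combined-cycle gas costs are inferred from the stated ratios, so treat them as assumptions:

```python
# Uprate costs vs. new combined-cycle gas capacity, in $/kW.
uprate_estimates = (1800, 3500)  # proposed cost estimates (from text)
uprate_actuals = (3400, 5800)    # actual costs (from text)
gas_cc = (1000, 1500)            # assumed new gas plant cost range

for actual, gas in zip(uprate_actuals, gas_cc):
    print(f"uprate ${actual}/kW vs gas ${gas}/kW -> {actual / gas:.1f}x")
# -> 3.4x and 3.9x: "three to four times" the cost of a new gas plant.
# Even the low initial estimate (1800/1000) was ~1.8x, "almost twice as high".
```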

Since uprates represent the largest capital projects most reactors will witness and most nuclear utilities will undertake in the mid-term, the poor performance is telling. These uprates are afflicted by the same flaws as new builds, past and present: cost overruns, delays, declining demand, and low-cost alternatives.

Safety, Spent Fuel and the Fukushima Effect

One factor to which UBS devotes a great deal of attention, but Credit Suisse does not mention, is safety related costs.

“Among our greatest concerns for the US nuclear portfolio into 2013 is the risk of greater Fukushima-related costs. While expectations around the need of hardened vents differ, we see cost risks of up to $30-40 Mn/per unit under a worst case scenario; while other estimates suggest costs range in the $15 Mn ballpark. Notably, PPL estimates Fukushima-related costs of $50-60 Mn, excluding vents for its 1.6 GW Susquehanna unit.”

Safety concerns surrounding spent fuel are presently holding up the license extension for a dozen reactors as the NRC deals with a court challenge to its “waste confidence” finding. Fukushima and the “waste confidence” ruling remind investors that nuclear power has a unique set of risks that may weigh on economic decisions.

In a major post-Fukushima analysis of the nuclear sector, UBS called it a “tail risk.” This is an event that may have a very low probability, but which can have a huge impact on the value of an investment. It has come to be identified more popularly as a “black swan.”

In my earlier analysis of the impact of Fukushima, I cited an estimate of the potential costs that ran to a quarter of a trillion dollars. Tokyo Electric Power Company is seeking public funds to help it pay for its current estimate of costs, which is $137 billion. The number has been rising steadily and there is some question about whether the victims are being fully compensated.

The estimate of $137 billion, if that is the final cost, underscores several important points about nuclear safety and nuclear costs. First, the disaster bankrupted the company. Its stock collapsed and it has been taken over by the government. If only $137 billion can bankrupt the 4th largest utility in the world, the “tail risk” associated with nuclear reactor ownership should get the attention of investors. Second, the economic impact of nuclear accidents does not flow from the public health effects, but from the disruption of the affected community. The most immediate impact of nuclear accidents may not be the deaths they cause, but the disruption of the economy and social life of a large surrounding area and the psychological despair that follows. I have shown that Fukushima deserves the attention it gets in both the historical and contemporary contexts, but there is a larger lesson here. Safety is an evolving concept in nuclear power because the power source is so volatile and dangerous and the technology to control it becomes extremely complex. Over time, external challenges and internal weaknesses are revealed. The threats to public health and safety cannot be ignored. Responding to them becomes particularly costly for existing reactors, since retrofits are difficult. As older reactors become farther and farther out of sync with the evolving understanding of safety, the challenge grows.

REACTORS AT RISK

Turning to the future, there are a significant number of reactors – a third of the fleet – that exhibit the characteristics that put reactors at risk of negative developments. Exhibit III-6 summarizes the risk factors faced by over three dozen aging reactors. The first six factors – cost, small size, old, standalone, selling into a wholesale market, and short cost recovery periods – reflect the economic dimension. The next five risk factors involve operational factors (broken, reliability, and long-term outage) and safety factors (multiple safety issues and Fukushima retrofits). These reflect the operational/repair dimension of the analysis. The first three reactors evaluated have been retired early, and they highlight the two different types of factors that create risk. Kewaunee epitomizes the purely economic factors. Crystal River and San Onofre epitomize the repair/outage factors. I have only included reactors that exhibit at least three of the risk factors as identified in the sources cited.

The list is long and not intended as a prediction of which reactors are “the next to go.” The historical analysis shows that it is generally a combination of factors that leads to the retirement decision. However, the vulnerability of large numbers of reactors suggests that there will be future early retirements and uprates will be slow to come.

The analysis is primarily economic, as indicated on the left side of the table, and all of the reactors listed have significant economic issues. As noted above, if anything goes wrong, any of these reactors could be retired early, whether the precipitating event is a further deterioration of the economics or a mechanical or safety-related problem, as indicated on the right side of the table.

THE HISTORICAL EXPERIENCE OF U.S. COMMERCIAL NUCLEAR REACTORS

The dire straits in which a significant part of the U.S. commercial nuclear fleet finds itself are not an aberration or a sudden shift in prospects. They are part and parcel of the history of the industry in the U.S. In fact, the quiet period of high performance in the late 1990s and early 2000s is the exception rather than the rule. With the memory of the huge cost overruns of the 1970s and 1980s fading, the quiet period of the 1990s played an important part in creating the misimpression that new reactors would just hum along. This contributed to the misleading economic analysis on which the “nuclear renaissance” relied during its early hype cycle.

The assumption that nuclear reactors hum along, once they are proposed or even online, is not consistent with the U.S. experience. About half of all reactors ordered or docketed at the Nuclear Regulatory Commission were cancelled or abandoned. Of those that were completed and brought online, 15% were retired early, 23% had extended outages of 1 to 3 years, and 6% had outages of more than 3 years. In other words, more than one-third of the reactors that were brought online did not just hum along. Another 11% were turnkey projects, which had large cost overruns and whose economics were unknown.

Outages and Early Retirements

The magnitude of long outages and early retirements is sufficient to require that they be incorporated into the economic analysis of nuclear power. The pattern across time reinforces the observation that the high level of performance in the late 1990s/early 2000s was an exception rather than the rule. After a large number of reactors came online, there were a significant number of outages in the early 1980s. Again in the 1990s there were a significant number of outages and retirements. The lull in problems in the late 1990s and early 2000s has been followed by a sharp increase in problems.

Ultimately, since the start of the commercial industry, over one-quarter of all U.S. reactors have had outages of more than one year. There are three causes of these outages:

  1. Replacement—to refresh parts that have worn out
  2. Retrofit—to meet new standards that are developed as the result of new knowledge and operating experience (e.g. beyond-design events)
  3. Recovery—necessitated by breakage of major components

The average cost of an outage (in 2005 dollars), even before the most recent outages, was more than $1.5 billion, with the highest cost topping $11 billion. The costs of the recent outages that led to early retirement in Crystal River and San Onofre run into the billions.

The occurrence of outages has a strong correlation with retirement, as does the occurrence of a second outage. Early retirement reactors are typically older and smaller. The early retired reactors were brought online before the agency (originally the Atomic Energy Commission) began to adopt and enforce vigorous safety regulation. They are not worth repairing or keeping online when new safety requirements are imposed, or when the reactors are in need of significant repair. Outages exhibit similar relationships.

The larger the number of rules in place when construction was initiated, the less likely there was to be an outage or an early retirement. The larger the increase in rules during construction, the greater the likelihood of an outage. While the industry interprets the existence and change of rules as an expensive nuisance, I have shown that they reflect strong concerns about safety that were triggered by the extremely poor safety record of the industry in its early years. The older reactors experienced more outages and needed more retrofits to get back or stay online. They were built before performance was regulated, generally performed poorly, and suffered the outage and retirement consequences.

Qualitatively, the decision to retire a reactor early usually involves a combination of factors such as major equipment failure, system deterioration, repeated accidents, and increased safety requirements. Economics is the most frequent proximate cause, and safety is the most frequent factor that triggers the economic reevaluation. Although popular opposition “caused” a couple of early retirements (a referendum in the case of Rancho Seco; state and local government in the case of Shoreham), this was far from the primary factor, and in some cases local opposition clearly failed (referenda failed to close Trojan or Maine Yankee). External economic factors, such as declining demand or more cost-competitive resources, can render existing reactors uneconomic on a stand-alone basis or (more often) in conjunction with one of the other factors.

Performance: Load Factors and Operating Costs

The increasing problems faced by aging nuclear reactors are reflected in the load factor. The average load factor for the nuclear industry throughout its history of commercial operation in the United States has been less than 75%. While it is true that over the decade from the late 1990s through the end of the 2000s the load factor was 90%, it is also true that it took 20 years to get to that level and the industry has recently fallen below it.

This is the source of concern expressed by the Wall Street analysts about the aging fleet, but it also raises an important point about new reactors. New technologies require shake out periods and the more complex they are, the longer the period. The assumption of a 90% load factor for new builds is highly suspect.

Moreover, the calculation of load factors overestimates the actual load factor because the denominator includes only reactors that are operable. Reactors that have been retired early or are on long-term outage (not in service for the entire year) are not included in the analysis. I show an adjusted load factor that includes the long-term outages and early retirements in the denominator, assuming that all the early retirements were reactors that would still have been online but for the difficulties that shut them down. This adjustment is substantial: when early retirements and long-term outages of more than a year are taken into account, the load factor has been about 70%.
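To make the denominator correction concrete, here is a minimal sketch in Python; all figures are illustrative placeholders, not the actual fleet data:

```python
# Minimal sketch of the adjusted load factor described above.
# All numbers are illustrative placeholders, not the actual fleet data.

HOURS_PER_YEAR = 8760

def load_factor(generation_mwh, capacity_mw):
    """Actual generation divided by maximum possible generation."""
    return generation_mwh / (capacity_mw * HOURS_PER_YEAR)

operable_mw = 90_000                              # "operable" capacity (assumed)
generation = operable_mw * HOURS_PER_YEAR * 0.90  # a reported 90% load factor

long_term_outage_mw = 8_000   # capacity on >1-year outage (assumed)
early_retired_mw = 12_000     # retired early but expected online (assumed)

reported = load_factor(generation, operable_mw)
adjusted = load_factor(generation,
                       operable_mw + long_term_outage_mw + early_retired_mw)
print(f"reported: {reported:.0%}, adjusted: {adjusted:.0%}")  # 90% vs ~74%
```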

Operating costs appear to exhibit a long-term pattern similar to that of load factors: a long period of rising costs, then a period of modest decline and relative stability. In the past decade, however, costs have begun to rise again.

What we can say about the recent past is that in a short period of time the industry has experienced the full complement of bad things that can happen to old reactors: purely economic retirement, broken reactors, an uprate that turned into a broken plant and an early retirement, large cost overruns for new builds and uprates, and abandonment of uprates. We can also identify the circumstances that brought these negative events about and show that they are not short-term aberrations but are consistent with the long-term history of the industry.

The key question is: will the price of alternatives keep the economic pressure on the margins of aging reactors with rising costs?

Natural Gas Cost History and Trends

Predicting long-term natural gas prices has been described as a perilous undertaking, but a consensus has emerged among most reasonable analysts that a significant period of low gas prices is upon us. Projecting prices out 50 years may be very risky, but 20 years is less so, and that is the relevant time frame for aging reactors. Exelon’s battle with wind, its efforts to move the market clearing prices, its decision to cancel the uprates at Limerick and LaSalle, and its earlier decision to abandon plans to build a new reactor all reflect the very challenging economics that nuclear faces in today’s market. Those economics are driven by a belief that gas prices are likely to remain low for the relevant economic time frame. John Rowe, CEO of Exelon, has been adamant in this regard.

Traders on the NYMEX agree with Rowe, who notes that analysts do not see a high gas price over the next several decades.

As we have seen, wind power plays a role by shifting the supply curve in such a way that it lowers the market clearing price. As wind is added to meet long-term needs, it has this short-term effect.

Rowe also notes that there are renewables that will compete with nuclear in the next decade – “But, as I look, I think wind and solar do become more economic, wind much the first. Nuclear plants may become economic again but not in the next decade.” Longer-term cost trends support Rowe’s observation that alternatives to nuclear power beyond gas are becoming more attractive options. In contrast to nuclear reactor construction costs and cost estimates that have been rising dramatically, several of the alternatives are exhibiting reductions in cost, driven by technological innovation, learning by doing, and economies of scale.

There is certain to be a great debate about how much the reduction in electricity consumption reflects the recession, but there is no doubt that increasing efficiency will change the trajectory of demand. With new building codes and appliance efficiency standards, per capita energy consumption will decline significantly over the next two decades. New building codes call for a 30% reduction in energy consumption in new building designs, and since the oldest, least efficient buildings are likely to be replaced, the effect will be larger than that; the building stock changes slowly, however. Appliance efficiency standards have been raised in recent years, and the Obama administration has announced a program to raise standards on many appliances in the range of 20 to 30%. Since the life cycle of appliances is much shorter than that of buildings, over the course of two decades most appliances will be replaced by more efficient models. The decline will offset increases in population and GDP, resulting in, at best, flat aggregate demand, as the sketch below illustrates. The debate over climate change has also placed great emphasis on improving efficiency and using renewables.
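As a rough numerical illustration of how declining per-capita consumption can offset growth, consider the following sketch; the growth and decline rates are assumptions chosen for the example, not figures from the testimony:

```python
# Illustration of flat aggregate demand. Rates are assumed, not the testimony's.
pop_growth = 0.009           # ~0.9%/yr population growth (assumed)
per_capita_change = -0.010   # ~1%/yr efficiency-driven decline (assumed)

demand = 1.0
for _ in range(20):          # two decades
    demand *= (1 + pop_growth) * (1 + per_capita_change)
print(f"aggregate demand after 20 years: {demand:.2f}x")  # ~0.98x, i.e. flat
```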

With aggregate demand likely to be flat, at best, and renewable costs falling and output rising, the downward pressure on market clearing prices is likely to continue. It appears likely that the pressures on the market clearing price will continue for the period in which decisions about retiring aging nuclear reactors will be made.

CONCLUSION

Nuclear economics have always been marginal at best. The first cohort of commercial reactors was much more costly than the available alternatives, but those reactors were forced online by a regulatory system that did not have a market to look to, or care to do so even if one existed. It can be argued that the locomotive that pulled half the nation toward restructuring and much greater reliance on market signals was the reaction against the excessive costs of nuclear power. Some advocates of restructuring loudly declared restructuring would prevent another nuclear fiasco.

Ironically, it appears that an unintended consequence of the shift toward markets will be to force the early retirement of the very reactors that a market never would have allowed to be built in the first place. While half the country does not rely on markets to set the price of electricity, the presence of markets across the country sends strong signals to regulators that keeping aging reactors online, especially if they need repairs or retrofits, does not make economic sense. Thus, although the outcome is ironic in the long sweep of nuclear history in the U.S., it is perfectly consistent with the fundamental economics of nuclear power throughout that history. While the purpose of the Wall Street analysis is to advise and caution investors about utilities that own the aging fleet of at-risk reactors, my purpose is to inform policymakers about, and prepare them for, the likelihood of early retirements. By explaining the economic causes of early retirements, I hope to equip policymakers to make economically rational responses to those retirements (or the threat of retirement).

Economic reality has slammed the door on nuclear power.

  • In the near term, old reactors are uneconomic because lower-cost alternatives have squeezed their cash margins to the point where they no longer cover the cost of nuclear operation.
  • In the mid-term, things get worse: the older reactors get, the less viable they become.
  • In the long term, new reactors are uneconomic because there are numerous low-carbon alternatives that are less costly and less risky.

The lesson for policymakers in the economics of old reactors is clear, and it reinforces the lesson of the past decade in the economics of building new reactors. Nuclear reactors are simply not competitive. They have never been competitive at the beginning of their life cycle, when the build/cancel decision is made, and they are not competitive at the end of their life cycle, when the repair/retire decision is made. They are not competitive because the U.S. has the technical ability and a rich, diverse resource base to meet the need for electricity with lower-cost, less risky alternatives. Policy efforts to resist the fundamental economic reality of nuclear power will be costly, ineffective, and counterproductive.

About Mark Cooper: I am a Senior Fellow for Economic Analysis at the Institute for Energy and the Environment at Vermont Law School. A copy of my curriculum vitae is attached. I am an expert in the field of economic and policy analysis with a focus on energy, technology, and communications issues. For over thirty years I have analyzed the economics of energy production and consumption on behalf of consumer organizations and public interest groups, focusing in the past four years on the cost of the alternative resources available to meet electricity needs over the next several decades. My analyses are presented in a series of articles (1), reports (2), and testimonies before state regulatory agencies and state and federal legislatures. I have served as an expert witness in several regulatory proceedings involving electricity and nuclear reactors, starting with proceedings before the Mississippi Public Service Commission almost 30 years ago regarding the proposed Grand Gulf II nuclear reactor and including proceedings before the Florida and South Carolina commissions regarding the proposed reactors in those states.

(1) Cooper, Mark. “The Only Thing that is Unavoidable About Nuclear Power is its High Cost,” Corporate Knights, forthcoming; “Nuclear Safety and Affordable Reactors: Can We Have Both?,” Bulletin of the Atomic Scientists, 2012; “Nuclear Safety and Nuclear Economics, Fukushima Reignites the Never-Ending Debate: Is Nuclear Power Not Worth the Risk at Any Price?,” Symposium on the Future of Nuclear Power, University of Pittsburgh, March 27-28, 2012; “Post-Fukushima Case for Ending Price Anderson,” Bulletin of the Atomic Scientists, October 2011; “The Implications of Fukushima: The US Perspective,” Bulletin of the Atomic Scientists, July/August 2011, 67: 8-13.

(2) Public Risk, Private Profit, Ratepayer Cost, Utility Imprudence: Advanced Cost Recovery for Reactor Construction Creates Another Nuclear Fiasco, Not a Renaissance, March 2013; Fundamental Flaws in SCE&G’s Comparative Economic Analysis, October 1, 2012; Policy Challenges of Nuclear Reactor Construction: Cost Escalation and Crowding Out Alternatives, September 2010; All Risk, No Reward, December 2009; The Economics of Nuclear Reactors: Renaissance or Relapse, June 2009; Climate Change and the Electricity Consumer: Background Analysis to Support a Policy Dialogue, June 2008.



Nuclear Regulatory Commission accused of putting millions of lives and trillions of dollars at risk

Spent nuclear fuel pool. Source: Recent Sandia International Used Nuclear Fuel Management Collaborations. 2015. energy.sandia.gov

[ Edwin Lyman and his co-authors in Science magazine have accused the Nuclear Regulatory Commission (NRC) of putting millions of American lives at risk, due to “pressure from the nuclear utilities and a Congress sympathetic to the utilities’ complaints of overregulation. This is the well-known phenomenon of “regulatory capture.” Former U.S. Senator Pete Domenici described how he curbed the NRC’s regulatory reach by threatening to cut its budget by one-third.”

Here’s why you should care: studies of Fukushima have shown that spent nuclear fuel pools are outside of the containment area and could catch fire if the water boils off.  In the case of the Peach Bottom nuclear power plant in Pennsylvania, from 4 million (1) to 18 million people (3) would have to evacuate, for many years.  It’s recently been learned that this almost happened at Fukushima, and if it had, an additional 1.6 to 35 million people would have had to evacuate.

It would only cost $5 billion to prevent this from happening ($50 million per plant). That’s cheap compared to the trillions of dollars in damages a spent nuclear fuel pool fire could cause (9).

I know most people don’t want to hear that yet another dire problem exists, but consider giving this your attention.  It’s up to you to do something about NRC regulatory capture, which the authors of this article write “will be dealt with only when pressure from the concerned public outweighs that from the nuclear industry”.

Here’s another reason to care: you are going to pay for it.  “If a spent fuel–pool fire were to occur, under the Price-Anderson Act of 1957, the nuclear industry would be liable only for damages up to $13.6 billion, leaving the public to deal with damages exceeding that amount (15). A fire in a dense-packed fuel pool could cause trillions of dollars in damages.”

While you’re at it, try to reopen Yucca Mountain and other storage facilities for nuclear waste so that it doesn’t poison future generations for hundreds of thousands of years after fossil fuels no longer power civilization.  Our descendants won’t be able to store nuclear waste safely once they go back to becoming a wood and muscle-powered society again.

The nuclear spent fuel pool water will boil off someday.  Fossil fuels are about to decline faster than governments can cope with, and two-thirds of electricity is still generated with fossil fuels.  Not to mention natural disasters, an electromagnetic pulse from a solar or nuclear event, a financial crisis, terrorism, and so on.

If you’d like to know more about spent nuclear pool fires, I have several posts here from Science Magazine, the National Academy of Sciences, and other sources here and sciencedaily has an excellent article here.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report ]

Edwin Lyman, Michael Schoeppner, Frank von Hippel. 26 May 2017. Nuclear safety regulation in the post-Fukushima era.  Science  356: 808-809   DOI: 10.1126/science.aal4890

The March 2011 Fukushima Daiichi nuclear accident prompted regulators around the world to take a hard look at their requirements for protecting nuclear plants against severe accidents. In the United States, the Nuclear Regulatory Commission (NRC) ordered a “top-to-bottom” review of its regulations, and ultimately approved a number of safety upgrades. It rejected other risk-reduction measures, however, using a screening process that did not adequately account for impacts of large-scale land contamination events.

Among rejected options was a measure to end dense packing of 90 spent fuel pools, which we consider critical for avoiding a potential catastrophe much greater than Fukushima.

Unless the NRC improves its approach to assessing risks and benefits of safety improvements—by using more realistic parameters in its quantitative assessments and also taking into account societal impacts—the United States will remain needlessly vulnerable to such disasters.

Spent nuclear fuel must be cooled in water-filled pools immediately after discharge from reactors. After cooling for a few years, transfer of spent fuel to air-cooled dry storage casks becomes practical. In the United States, the NRC allows spent fuel to remain in pool storage until a geologic repository for spent fuel becomes available.

To minimize storage costs, utilities pack the pools as densely as possible, and only when they are full do they make space for newly discharged hot fuel by buying dry casks to store the fuel that has cooled the longest. Dense-packed spent fuel would be susceptible to catching fire if an accident or terrorist attack caused a loss of the pool’s cooling water.

Oxidation by steam of a small fraction of the zirconium in the fuel cladding would liberate sufficient hydrogen gas to potentially cause an explosion and destruction of the building covering the pool. Explosions of hydrogen gas generated by steam reactions with uncovered reactor cores destroyed buildings covering three Fukushima Daiichi reactors, exposing their fuel pools to the environment (see the photo).

Fortunately, in Fukushima, the spent fuel remained covered with water. For almost a month, however, Tokyo Electric Power Company overestimated the water level in the densely packed spent fuel pool of unit 4 and did not add enough water to keep up with the rate of evaporation. A month after the earthquake, when the utility finally measured the water level directly, it had fallen from 7 to 2 meters above the top of the stored fuel. Fortuitously, water had leaked into pool 4 from the adjacent reactor cavity—which does not ordinarily contain water—keeping the spent fuel covered and preventing a fire (1).

Cesium-137, with its thirty-year half-life, was the main radioactive contaminant that forced relocation of large populations following the Chernobyl and Fukushima accidents.

NRC contractors at Sandia National Laboratories estimated that, had there been a fire in pool 4, 100 times as much cesium-137 would have been released to the atmosphere as actually leaked from the damaged Fukushima reactors (2). If that had happened, depending on weather conditions, cesium-137 contamination would have forced long-term relocation of between 1.6 million and 35 million people, instead of 150,000, from Japan’s east coast (3).

After Fukushima, the NRC evaluated whether to require nuclear power plants to reduce the risk of a catastrophic spent fuel fire by transferring fuel that had cooled for >5 years from pools to safer dry storage casks. The NRC had two primary options for imposing a “backfit” such as this on already-licensed nuclear power plants. The first was to declare that the change was needed to provide “adequate protection” of public health and safety.

The National Research Council estimated that if a spent nuclear fuel fire happened at the Peach Bottom nuclear power plant in Pennsylvania, nearly 3.5 million people would need to be evacuated and 12,000 square miles of land would be contaminated.  In its own technical evaluation, the NRC estimated that, for a typical U.S. Mark I boiling-water reactor such as Peach Bottom, a spent fuel fire in a dense-packed pool would require relocation of 4.1 million people from an area of 9,500 square miles (24,500 km2), 50 times the corresponding values for a fire in a low-density pool (4). [My comment: A Princeton University study that looked at the same scenario concluded it was more likely that 18.1 million people would need to be evacuated and 39,000 square miles of land contaminated (3).]

However, neither this finding nor a broader regulatory analysis of all U.S. plants persuaded the NRC to change its view that high-density pool storage provides “adequate protection” according to its interpretation of the Atomic Energy Act, which does not provide criteria for determining adequate protection (5).

Given this decision, under the NRC’s self-imposed rules, the backfit could be adopted only if the monetary value of the resulting reduction in risk to the public were to exceed the cost of implementation and the increase in safety were “substantial” (6). In the decades since NRC adopted this “backfit rule,” it has based determinations increasingly on quantitative assessments of risk, defined as the product of probability and consequences. The quality of these complex calculations depends strongly on the validity of the input assumptions, and they typically have large uncertainties that the NRC fails to fully account for in its regulatory decisions. These characteristics also introduce opportunities for the NRC to produce risk assessments that justify, rather than inform, its decisions. In any case, no matter how large the consequences of an accident, if the NRC estimates a low enough probability, the risk will be too low to justify major expenditures on mitigation.

Thus, although the NRC backfit analysis found that the huge quantity of fission products released by a dense-packed pool fire could be dramatically reduced by lowering the fuel density, it estimated that the probability of a fire resulting in a large release would be small—on the order of 4 × 10−6 per pool per year, although with a large uncertainty (7 × 10−7 to 3 × 10−5).

The NRC’s cost-benefit analysis did not account for the possibility of a terrorist attack, which cannot be quantified but should not be ignored (7). In addition, the NRC made a series of assumptions that tended to minimize the estimated health and economic consequences of a high-density release. After making these assumptions and ignoring uncertainties, the NRC found that the probability-weighted benefits to the public from transferring spent fuel to passively air-cooled dry cask storage did not justify the estimated cost of $5 billion to the nuclear utilities (about $50 million per reactor).
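To see the arithmetic at stake, here is a back-of-the-envelope expected-value sketch using figures quoted in this article; the damage total and the remaining-life horizon are our assumptions for illustration, not NRC inputs:

```python
# Back-of-the-envelope expected-value check. Probability and costs are
# quoted in the article; the damage total and horizon are assumptions.
fire_prob = 4e-6             # NRC central estimate, per pool per year
pools = 90                   # dense-packed pools cited above
damages = 2e12               # "trillions of dollars"; $2 trillion assumed
mitigation_cost = 5e9        # $5 billion total (about $50 million per reactor)
years = 20                   # assumed remaining operating life

expected_damages = fire_prob * pools * damages * years
print(f"expected damages ${expected_damages/1e9:.0f}B "
      f"vs mitigation cost ${mitigation_cost/1e9:.0f}B")
# ~$14B vs $5B: with these inputs the probability-weighted benefit already
# exceeds the cost, before counting terrorism risk or the uncertainty range.
```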

A recent National Academy of Sciences study (on which author F.v.H. served) found that the NRC cost-benefit analysis—unreasonably, in our view—excluded accident consequences beyond 50 miles and underestimated consequences in a number of other ways (4). In response to a petition by the state of New York, the NRC acknowledged that its assumption in such calculations, that virtually all the relocated population could return home within less than a year, was inconsistent with the experience in Japan, where some of the relocated population is just beginning to return after 6 years (8). NRC computer output made public as a result of the New York hearing also showed that the NRC analysis assumed radiation dose standards for population relocation that were much less restrictive than those recommended by the Environmental Protection Agency (EPA) or those that were applied by Soviet and Japanese authorities after the Chernobyl and Fukushima accidents. If the EPA guidance were followed, we estimate that the average area evacuated as a result of a spent fuel fire in a densely packed pool at the Peach Bottom plant would increase about 3-fold (3). Correcting for the above errors would have made the NRC’s central estimates of the benefits of expedited transfer of spent fuel to dry cask storage greater than the costs (9).

The NRC argues that, even if the benefits of a backfit exceed its costs, it should be subjected to a “safety goal screening” to determine whether the safety enhancement is “substantial.” The screening criteria set limits on the health risks from accidents to individuals living close to nuclear plants. The NRC’s analysis met these limits by assuming a rapid and long-duration evacuation of these close-in areas. For the small doses that the NRC staff estimated that members of the public would incur after returning to their decontaminated towns, the health risk would be less than the NRC’s safety goals as long as the frequency of a spent fuel fire in the United States was less than once every 4 years (3).

Yet health risks to individuals are not synonymous with societal risks. When its safety goal policy was first developed in the 1980s, the NRC considered but rejected including a “societal risk” in addition to an individual health–risk threshold for regulatory action (10). The NRC’s failure to adopt such a criterion has long been criticized. Imposing a reasonable constraint on the cumulative societal impact of accidents would compel the NRC to lower the risk of a large-scale land contamination event that could drive millions from their homes and businesses for years (11, 12). The psychological trauma and economic cost of even one such event would be unacceptable. In our view, if the NRC were to use more realistic quantitative assessments and give weight to societal impacts, a requirement to expedite transfer of spent fuel to dry casks would be justified.

The NRC’s skewed approach to nuclear reactor safety regulation appears to be in part a result of pressure from the nuclear utilities and a Congress sympathetic to the utilities’ complaints of overregulation. This is the well-known phenomenon of “regulatory capture.” Former U.S. Senator Pete Domenici described how he curbed the NRC’s regulatory reach by threatening to cut its budget by one-third. He believed that, partly in response to this pressure, the NRC committed to adopting “risk-informed regulation” (13). Risk-informed regulation would be legitimate if the underlying methodology and data were sound and uncertainties were properly accounted for. But the NRC relied on flawed calculations and ignored their uncertainties when it rejected expedited transfer of spent fuel from pool storage.

Many in Congress are opposed to additional costly regulations, fearing that more nuclear power plants will become unprofitable and shut down. Recently, chairs of the NRC’s Senate oversight committee and subcommittee insisted on “strict application and adherence to the Backfit Rule” (14). If a spent fuel–pool fire were to occur, however, under the Price-Anderson Act of 1957, the nuclear industry would be liable only for damages up to $13.6 billion, leaving the public to deal with damages exceeding that amount (15). A fire in a dense-packed fuel pool could cause trillions of dollars in damages (9).

To reduce the risk and invest in infrastructure, Congress could consider allocating $5 billion for casks to store spent fuel. The federal government is already reimbursing nuclear utilities almost $1 billion per year for casks needed to store older spent fuel because the Department of Energy has not fulfilled its commitment to remove the fuel to an underground repository or interim storage site (16, 17). States also could act to reduce the risk. As part of its policy to reduce fossil fuel use, New York recently decided to mandate subsidies totaling about $500 million per year for continued operation of four nuclear power reactors (18). Illinois has adopted a similar policy, and other states are considering the same. States could condition such subsidies on agreements by utilities to end dense-packing of spent fuel pools.

The larger problem of NRC regulatory capture will be dealt with, however, only when pressure from the concerned public outweighs that from the nuclear industry.

References and Notes

  1. 2016. Nuclear and Radiation Studies Board, Lessons Learned from the Fukushima Nuclear Accident for Improving Safety of U.S. Nuclear Plants: Phase 2. National Academy Press, Washington, DC, chap. 2.
  2. Gauntt et al. 2012. Fukushima Daiichi Accident Study. SAND2012-6173, Sandia National Laboratories, Albuquerque, NM: 176–199.
  3. F. N. von Hippel, M. Schoeppner. 2016. Economic losses from a fire in a dense-packed U.S. spent fuel pool. Science and Global Security 24: 141.
  4. 2016. Nuclear and Radiation Studies Board, Lessons Learned from the Fukushima Nuclear Accident for Improving Safety of U.S. Nuclear Plants: Phase 2. National Academy Press, Washington, DC, chap. 7.
  5. 2014. Commission Response Sheet, COMSECY-13-0030, Staff evaluation and recommendation for Japan lessons learned—Tier 3 issue on expedited transfer of spent fuel. Nuclear Regulatory Commission.
  6. NRC, Staff evaluation and recommendation for Japan lessons learned—Tier 3 issue on expedited transfer of spent fuel (COMSECY-13-0030, NRC, 2013).
  7. 2004. National Research Council, Safety and Security of Commercial Spent Nuclear Fuel Storage. National Academies Press, Washington, DC: 34–35.
  8. In the matter of Entergy Nuclear Operations, Inc. (Indian Point Nuclear Generating Units 2 and 3), Docket nos. 50-247-LR and 50-286-LR (NRC, 2016).
  9. F. N. von Hippel, M. Schoeppner, Sci. Glob. Secur. 25, 10.1080/08929882.2017.1318561 (2017).
  10. Okrent. 17 April 1987. The safety goals of the U.S. Nuclear Regulatory Commission. Science 236: 296.
  11. Bier, M. Corradini, R. Youngblood, C. Roh, S. Liua. June 2014. Proceedings of the 12th Conference on Probabilistic Safety Assessment and Management, Honolulu, Hawaii, paper 199_1 (International Association for Probabilistic Safety Assessment and Management).
  12. Denning, V. Mubayi, Risk Anal. 37, 160 (2016).
  13. Domenici. 2004. A Brighter Tomorrow: Fulfilling the Promise of Nuclear Energy (Rowman & Littlefield, Lanham, MD), chap. 5.
  14. Inhofe, J. and S. M. Capito. 21 December 2016. Letter to the Chairman of the NRC.
  15. 2014. Nuclear Energy Institute, Price-Anderson Act provides effective liability insurance at no cost to the public, [NEI fact sheet] (NEI, Washington, DC); http://bit.ly/2oUniNh.
  16. 2015. FY 2015 DOE agency financial report, U.S. Department of Energy.
  17. 2015. FY 2016 DOE agency financial report, U.S. Department of Energy.
  18. McGeehan. 1 August 2016. New York Times.

Theo Henckens: do we need mining quotas to prevent mineral depletion?

Preface: Ugo Bardi writes: “Currently, the problem of resource depletion is completely missing from the political debate. There has to be some reason why some problems tend to disappear from the public’s radar as they become worse. Unfortunately, the depletion problem won’t go away because the public is not interested in it. I discussed depletion in depth in my 2014 book “Extracted” and now Theo Henckens updates the situation with this post based on his PhD dissertation “Managing Raw Materials Scarcity: Safeguarding the availability of geologically scarce mineral resources for future generations” (16 October 2016, University of Utrecht, The Netherlands). The full dissertation can be downloaded via the link http://dspace.library.uu.nl/handle/1874/339827.” (UB)

Theo Henckens. Jan 3, 2017. An update on mineral depletion: do we need mining quotas?  Cassandra’s legacy.

To ensure that sufficient zinc, molybdenum and antimony are available for our great-grandchildren’s generation, we need an international mineral resources agreement.

Molybdenum is essential for the manufacture of high-grade stainless steels, but at present molybdenum is hardly recycled. Yet unless reuse of molybdenum is dramatically increased, the extractable reserves of molybdenum on Earth will run out in about eighty years from now. The extractable reserves of antimony, a mineral used to make plastics more heat-resistant, will run out within thirty years.

For more than a century, the use of mineral resources increased exponentially, at an average rate of between 3 and 4% annually. Can this go on, given the limited amounts of mineral resources in the earth’s crust?

TRENDS IN THE ANNUAL EXTRACTION OF SEVEN COMMODITIES [figure]

Which raw materials or minerals are scarce?  A mineral’s scarcity is expressed as the number of years for which its extractable amount in the Earth’s crust is sufficient to meet anticipated demand. This exhaustion period is estimated from the annual use of the mineral. I calculated the ratio between the extractable amount and the annual consumption for 65 mineral resources. My calculation is based on what is considered to be maximally extractable from the Earth’s crust. These “Extractable Global Resources” are derived from a 2011 study by the International Resource Panel of UNEP (the United Nations Environment Programme). For the annual use of mineral resources I have assumed an annual growth of 3% until 2050, after which I assume that extraction stabilizes. The table below shows the top ten scarcest mineral resources; a sketch of the calculation follows the table.

TOP TEN SCARCE MINERAL RESOURCES
(exhaustion period, in years, of remaining extractable resources; important applications)

  • Antimony: 30 years (flame retardants)
  • Gold: 40 years (electronic components)
  • Zinc: 80 years (corrosion protection)
  • Molybdenum: 80 years (high-grade steels)
  • Rhenium: 100 years (high-quality alloys)
  • Copper: 200 years (electricity grid)
  • Chromium: 200 years (stainless steels)
  • Bismuth: 200 years (pharmaceuticals and cosmetics)
  • Boron: 200 years (glass wool)
  • Tin: 300 years (tins, brass)

What is a sustainable extraction rate?

In my dissertation I have defined a sustainable extraction rate as follows: “The extraction of a mineral resource is sustainable if a world population of nine billion people can be provided with that mineral resource during a period of a thousand years, supposing that the average use per world citizen is equally divided over the countries of the world.” Strictly speaking, the concept of sustainability applies only to an activity that can continue forever; concerning the extraction of mineral resources, I consider a thousand years a reasonable approximation. This is arbitrary, of course, but 100 years is too short: in that case we would accept that our grandchildren would be confronted with exhausted mineral resources.

A sensitivity analysis reveals that even if we assume that the extractable reserves in the Earth’s crust are ten times higher than the already optimistic assumption of the UNEP International Resource Panel, the use of antimony, gold, zinc, molybdenum, and rhenium in industrialized countries would still have to be hugely reduced in order to preserve enough of these raw materials for future generations. This is particularly so if we want these resources to be more fairly shared among countries and people than is currently the case. There are also environmental and energy limits to the ever deeper and more remote search for ever lower concentrations of minerals. If we want to stretch all the exhaustion periods in the table to 1000 years, then the extraction of antimony must be reduced by 96%, that of zinc by 82%, that of molybdenum by 81%, that of copper by 63%, that of chromium by 57%, and that of boron by 44%, compared to the quantities extracted in 2010. These reduction percentages are high, and the question is whether such cuts are feasible. Moreover, would the price mechanism not lead to a timely and sufficient reduction in the extraction of scarce mineral resources?
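The arithmetic behind these percentages is simple: flat use over a 1000-year horizon means a sustainable annual use of one-thousandth of the extractable amount. A minimal sketch, with an illustrative input rather than the dissertation’s data:

```python
# Required extraction cut to stretch a resource over a 1000-year horizon.
def required_reduction(resource, current_annual_use, horizon=1000):
    sustainable_use = resource / horizon        # flat use over the horizon
    return 1 - sustainable_use / current_annual_use

# A mineral whose extractable resource is 40x current annual use (illustrative):
print(f"{required_reduction(40.0, 1.0):.0%}")   # 96% reduction needed
```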

The price mechanism fails.  One would suppose that the general price mechanism would work: the price of a relatively scarce mineral resource should rise faster than the price of a relatively abundant one.

TRENDS IN THE REAL PRICE OF SCARCE AND NON-SCARCE MINERALS IN THE UNITED STATES 1900-2015*

* The minerals have been classified according to their scarcity. The scarce raw materials in the figure are antimony, zinc, gold, molybdenum and rhenium. The moderately scarce raw materials are tin, chromium, copper, lead, boron, arsenic, iron, nickel, silver, cadmium, tungsten and bismuth. The non-scarce raw minerals are aluminum, magnesium, manganese, cobalt, barium, selenium, beryllium, vanadium, strontium, lithium, gallium, germanium, niobium, the platinum-group metals, tantalum and mercury.

My research makes clear that the price of scarce mineral resources has not risen significantly faster than that of abundant minerals. I demonstrate in my dissertation that, so far, the geological scarcity of minerals has not affected their price trends. The explanation might be that the London Metal Exchange looks ahead a maximum of only ten years and that mining companies plan no more than about thirty years ahead. But we must look much further ahead if we are to preserve scarce resources for future generations.

Eventually, the price of the scarcest minerals will rise, but probably not until their reserves are almost exhausted and little remains for future generations.

Technological opportunities are not being exploited. Are the conclusions I reach over-pessimistic? After all, when the situation becomes dire, we can expect recycling and material efficiency to increase. The recycling of molybdenum can be greatly improved by selectively dismantling appliances, improving the sorting of scrap metal, and designing products from which molybdenum can be more easily recycled. Alternative materials with the same properties as scarce minerals can be developed; antimony as a flame retardant can be replaced fairly easily by other flame retardants. Scarcity will drive innovation.

30 to 50% of zinc is already being recycled from end-of-life products, and although it is technologically possible to increase this percentage, this is barely happening. Almost no molybdenum is recycled. Recycling is not increasing because the price mechanism is not working for scarce minerals. In the absence of sufficient financial market pressure, how can technological solutions for recycling and substitution be stimulated?

What should happen?  I argue that what is needed is an international agreement: by limiting the extraction of scarce minerals stepwise, scarcity will be artificially increased – in effect, simulating exhaustion and unleashing market forces. This could be done by determining an annual extraction quota, beginning with the scarcest minerals. Such an international mineral resources agreement should secure the sustainable extraction of scarce resources and the legitimate right of future generations to a fair share of these raw materials. This means that agreement should be reached on reducing the extraction of scarce mineral resources, from 96 percent for antimony to 82 percent for zinc and 44 percent for boron, compared to the use of these minerals in 2010. In effect, such an agreement would entail putting into practice the normative principles that were agreed on long ago relating to the sustainable use of non-renewable raw materials, such as the Stockholm Declaration (United Nations, 1972), the World Charter for Nature (UN, 1982), and the Earth Charter (UNESCO, 2000). These sustainability principles were recently reconfirmed in the implementation report of Agenda 21 for Sustainable Development (United Nations, 2016).

Financial compensation for countries with mineral resources.  Countries that export the scarce minerals will be reluctant to voluntarily cut back extraction because they would lose revenue. They should therefore receive financial compensation. The compensation scheme should ensure that the income of the resource countries does not suffer. In exchange, user countries will become owners of the raw materials that are not extracted, but remain in the ground. An international supervisory body should be set up for inspection, monitoring, evaluation and research.

Not a utopian idea.  In my dissertation, I set out the case for operationalizing the fundamental principles for sustainable extraction of raw materials, which have been agreed in various international conferences and confirmed by successive conferences of the United Nations. The climate agreement, initially thought to be a utopian idea, has become reality, so there is no reason why a mineral resources agreement should not follow.

Antimony: More than 50% of the antimony annually sold is used in flame retardants, especially in plastics for electrical and electronic equipment. A third of this equipment currently contains antimony. In addition, more than a quarter of antimony sold annually is used in lead batteries. In principle, antimony in its application as a flame retardant can largely be replaced by other types of flame retardants and antimony containing lead batteries can be replaced by non-antimony containing batteries.
Gold: In addition to its use in jewelry and as security for paper money, gold is especially used in high-quality switches, connectors and electronic components.
Zinc: The main application of zinc is as a coating on another metal to protect it against corrosion. Other applications include brass, zinc gutters, rubber tires and as a micro-nutrient in swine feed.
Molybdenum: Almost 80% of the volume of molybdenum extracted per annum is used to manufacture high-grade steels that are mainly used in constructions exposed to extreme conditions such as high temperatures, salt water and aggressive chemicals. There are very few substitutes for the current applications of molybdenum, and molybdenum is difficult, though not impossible, to recycle.
Rhenium: Rhenium is mainly used in high-quality alloys, to enable them to withstand extreme temperatures. It is also used in catalysts, to give gasoline a higher octane number.
Rare Earth Metals: Scarce mineral resources should not be confused with the Rare Earth Metals that are mainly mined in China. The Rare Earth Metals are seventeen chemical elements with exotic names, such as praseodymium, dysprosium and lanthanum. The name “Rare Earths” dates from the early nineteenth century. Rare Earths are geologically not scarce, at least not if you compare their extractable global resources with their current annual usage. But of course, that could change in the future.

How reasonable are oil production scenarios from public agencies?

[Figure: EIA and IEA conventional oil production projections to 2030.]

[So far both the U.S. Energy Information Administration (EIA) and the International Energy Agency (IEA) are on target in their predictions. In 2014 (the last year for which there is data), world production of crude oil and lease condensate was 77.833 million barrels/day (mb/d) and NGL 10.133 mb/d.

But the production line may not keep going up until 2030: this paper criticizes the models and methods the EIA and IEA use to project growth to 2030.  Much of the paper requires high mathematical literacy, so I’ve left most of that out – do read the paper in its entirety if you are good at math. ]

Excerpts from:

Jakobsson, K., Söderbergh, B., Höök, M. & Aleklett, K. How reasonable are oil production scenarios from public agencies? Energy Policy, 2009, Vol. 37, Issue 11: 4809-4818, 23 pages 

Abstract. According to the long-term scenarios of the International Energy Agency (IEA) and the Energy Information Administration (EIA), conventional oil production is expected to grow until at least 2030. The EIA has published results from a resource-constrained production model which ostensibly supports such a scenario. The model is here described and analyzed in detail. It is shown that the model, although sound in principle, has been misapplied due to a confusion of resource categories. A correction of this methodological error reveals that the EIA’s scenario requires rather extreme and implausible assumptions regarding future global decline rates. This result puts into question the basis for the conclusion that global “peak oil” will not occur before 2030.

Introduction

For good policymaking, it is important to have good scenarios of the future. A good scenario, we take as evident, is one that builds on reasonable assumptions in the light of past experience. As for global oil production, the most widely cited and authoritative long term scenarios are those published by the International Energy Agency (IEA) of the OECD, and the Energy Information Administration (EIA) of the U.S. Department of Energy. According to the latest available scenarios of conventional crude oil and natural gas liquids supply (as of September 2008), IEA expects production to grow by 1.1% annually, reaching 105 million barrels per day (Mb/d) in 2030; while EIA projects a 1.0% annual increase to 103 Mb/d in 2030. In other words, these two agencies present virtually identical pictures of the future. There will be no peak oil before 2030.
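These growth figures can be sanity-checked with compound growth; the ~82 Mb/d base level for 2007 is our assumption for the example, not a number taken from the agencies’ scenario documents:

```python
# Compound-growth check of the IEA scenario. The 2007 base level is an
# assumption for illustration, not taken from the agencies' documents.
base_2007 = 82.0             # Mb/d, assumed starting level
growth = 0.011               # 1.1% per year (IEA)
production_2030 = base_2007 * (1 + growth) ** (2030 - 2007)
print(f"{production_2030:.0f} Mb/d in 2030")  # ~105 Mb/d
```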

It is relevant, then, to ask how solid the assumptions behind these scenarios really are. EIA’s first long term oil supply scenarios, which must be interpreted as the methodological basis for the optimistic 2030 outlook, were published in 2000 (Wood and Long) and were criticized for being flawed and overly optimistic by Bentley (2002) among others. Four years later, EIA published a new version, virtually identical to the original, stating that “nothing has happened, nor has any new information become available, that would significantly alter the results.” (Wood et al., 2004) The debate on the issue is, in other words, not yet closed. The two main conclusions of this paper are that:

  • EIA has taken a generally sound forecasting model and implemented it in a seriously flawed way.
  • A correct implementation of the model, using the same assumptions as EIA, indicates that the official scenarios are not reasonable, since oil production can be expected to decline before 2030, possibly in the immediate future.

It should be stressed that the purpose has not been to pin down the exact date of peak oil, but to examine the validity of current forecasts and their underlying assumptions. Transparency has been a high priority, since much of the debate around peak oil appears to stem either from reference to contradictory (and sometimes proprietary) reserve data or from ambiguous use of common terms such as reserves, resources, and depletion.

All data referred to in this paper is available in the public domain, and terms are unambiguously defined to the largest extent possible. All references to “oil” mean conventional crude oil, not including natural gas liquids (NGLs), tar sands, extra-heavy oil, biofuels, or synthetic crude. The volumetric unit of measure is the barrel, which equals 159 liters or 42 U.S. gallons. The abbreviations Gb (billion barrels) and Mb (million barrels) are occasionally used.

Merely presenting a critique of EIA’s forecasting methodology requires only a short paper. The reader who only looks for this specific critique can jump directly to section 6. However, neither EIA, nor any other forecaster, has explained why the applied forecasting method is relevant to begin with. Since we have found the model applied by EIA to be very useful when implemented correctly, especially for field-by-field modeling, we believe that such an explanation is called for. We will therefore first argue for the use of resource constrained modeling in general, describe how this particular model is implemented and point to empirical evidence which justifies its use.

A defense of resource constrained modeling

The forecasting method applied by EIA in Wood et al. (2004), which we in the following will refer to as the Maximum Depletion Rate Model, can be characterized as resource constrained in the sense that the amount of oil in the ground ultimately puts a limit to the rate of production.

The application of resource constrained models is still surrounded by controversy. As the geologist Hubbert put it, the production of a fixed resource must start at zero and also decline to zero, after passing through one or several maxima. (Hubbert, 1956) The peaking phenomenon is thus a trivial consequence of oil’s finite nature.

A meticulous observer could add that no resource is literally exhaustible (Houthakker, 2002). However, this merely implies that production drops to insignificance without ever becoming identically zero, hardly a distinction of much practical interest. Adelman (1990) has questioned Hubbert’s fundamental assumption by stating that the amount of mineral in the earth is an irrelevant, non-binding constraint on production. This is true in the very limited sense that we will never recover every last drop of oil from the earth. Non-geological circumstances, as yet undefined, will limit the actual global recovery factor. However, the recovery factor being undefined is not to say it is unlimited: it must end up between zero and 100 percent of the earth’s abundance (which in itself is perfectly well defined, although not exactly known). Thus, the fact that the amount of recoverable oil is “undefined” and unknown cannot be an argument against the existence of a production peak.

Watkins (2006), following a similar line of argument, has suggested an agnostic view on whether technology and new knowledge will forever beat the depletion of oil. But such agnosticism is only defensible if we refute the original assumption that oil is finite.

Accepting the peaking concept in principle is one thing; agreeing on a good predictive method is another. Hubbert’s approach to the forecasting problem was to estimate an ultimate recovery from the discovery trend and assume that production would follow a symmetrical bell-shaped curve. The peak would then occur when 50% of the oil had been produced. Watkins (2006) has argued that asking a Hubbert curve to handle an economic commodity such as oil is like asking a eunuch to sire a family. Admittedly, the Hubbert curve does not explicitly involve economic variables and provides no explanation for the resulting production pattern. There is no particular reason to believe that the peak would occur at a 50% depletion level, or that the production profile would be symmetrical (Bardi, 2005). The Hubbert curve should therefore be seen as a strictly empirical rule-of-thumb rather than as a rigorous scientific hypothesis. The question is: what is the alternative to an empirical rule-of-thumb?

Constructing a formal model that includes economic variables is notoriously difficult, as Lynch (2002) describes. He even suggests that it is necessary to resort to simple extrapolation as a forecasting method. Simon (1996) takes a similar position when he states that the “economist’s approach” consists of extrapolating trends of past costs into the indefinite future. Simon’s conclusion is that since the cost of oil has generally declined over a long period, it must continue to do so indefinitely.

The first counterargument to this way of reasoning concerns the interpretation of empirical data: is there really a contradiction between, on the one hand, long periods of declining cost and, on the other hand, an ultimate production peak followed by increasing cost? As Reynolds (1999) has shown, there is not necessarily any such contradiction, given that technology is improving and that the producers are uncertain about the actual size of the resource.

The second counterargument concerns the general use of extrapolation as a forecasting method. In a certain sense, all science must be extrapolative. The issue, then, is what to extrapolate. Drawing a declining discovery curve into the future would be consistent both with past experience in oil provinces and with the assumption that there is a finite amount of oil to discover (which does not mean that it is valid to forecast future discoveries by extrapolation, due to continuous reserve growth in existing fields). Extrapolating an increasing production curve indefinitely would fail on the second point. Of course, extrapolating an increasing production trend may work most of the time, and in a short term perspective. But predicting an unavoidable trend shift such as peak oil, by first assuming that no trend shifts exist, is clearly an approach bound to failure.

Any useful production model must incorporate the fact that oil is a resource only existing in a finite amount. While it would be desirable to explicitly model the additional influence of economic and other variables, no one has been able to show that it improves the performance of a resource constrained model to an extent that would justify the increased complexity. There is no denying that factors such as oil price matter. The question is: how much do they matter, and can their impact be quantified with any accuracy? If not, then simple resource constrained models are the only tools available for forecasting. Although they may be too simple to accurately predict the exact date of peak oil, they have the potential to distinguish between reasonable and unreasonable scenarios. From a long term policy planning perspective, this should still be valuable information.

Model description

The Maximum Depletion Rate Model (MDRM), as we have chosen to label it, is a resource-constrained production model. It does not assume, like the Hubbert curve, that production growth and decline are symmetrical, or that the production peak occurs at the depletion midpoint, but it does assume that there is a limit to the rate at which the remaining resource can be extracted. The MDRM has been used to forecast global oil production for at least 30 years. However, no forecaster has actually described the foundations of the model, its strengths and caveats, what choices can be made, and how they affect the result.

The resource-production ratio (R/P) denotes the ratio of the resource base at the beginning of the year in question to the annual production. A central assumption of the MDRM is the existence of a minimum ratio (R/P)min which constrains production in relation to the available resource base. In other words, only a certain fraction of the remaining resource can be produced during one year.

Resource base

Most of the debate around oil production scenarios stems from ambiguous or disputed assumptions concerning the resource base. Applying the MDRM or publishing an R/P figure without clearly stating the underlying resource base is meaningless, since the result is impossible to interpret. Comparisons of studies are irrelevant unless the same type of resource base is used. We will come back to this point when we discuss the way in which EIA uses the model.

An important distinction regarding the resource base is that between fixed and non-fixed resource numbers. A fixed resource base is a “best estimate” of R0 used throughout both computations of historical R/P values and forecasts. Ultimate recoverable resource (URR) is a widely used notation for such a figure, but we will here use the synonym estimated ultimate recovery (EUR) to emphasize that a static resource base is always an estimate. When EUR refers to a region, it should include an estimate of oil yet undiscovered. The weakness of an EUR is that it can only be validated in hindsight, at the point where a forecast is no longer needed. The simplest way to handle this inherent uncertainty is to use a range of possible EUR numbers, a range which should narrow as production proceeds. The great advantage of using a fixed resource base is its simplicity.

When a non-fixed resource base is used, the initial resource estimate R0 is continuously updated through resource revisions. A non-fixed resource base is appealing from a theoretical perspective, since it realistically implies that the amount of undiscovered oil is irrelevant for current production. Since resource estimates are dynamic and subject to economic factors, it has been argued that using a fixed resource base is a methodological error (Lynch, 2002). Unfortunately there are significant drawbacks of using a non-fixed resource base at a global level. The main disadvantage is the limited availability of reliable and comparable reserve data.

In practice, the most widely used data is the “proved reserves” compiled annually by the Oil & Gas Journal. However, these numbers have not been evaluated according to consistent criteria. While U.S. reserves are reported conservatively (so-called 1P reserves), most reserves, particularly within OPEC countries, probably are not. Taken together, the public reserve data is a mixed bag of inconsistent reserve figures of little value for forecasting purposes.

It has been suggested that the “proved + probable” (2P) reserves are more suitable for forecasting (Bentley et al., 2007). 2P reflects the amount of oil that can be produced from discovered fields with at least 50% probability. The 2P reserves for discovered fields should, ideally, not grow with time on average. For a more detailed account of reserve definitions, we refer to SPE (2007). Unfortunately 2P reserves are generally not publicly accessible; most of the data can only be obtained from industry databases at considerable cost.

Another disadvantage with a non-fixed resource base is that future resource revisions must be forecasted, perhaps several decades into the future, in order to construct scenarios. The Workshop on Alternative Energy Strategies (WAES, 1977), which used proved reserves as a non-fixed resource base, assumed that the reserves would grow by 10 to 20 Gb/y until the year 2000 and subsequently approach a global EUR of either 1600, 2000, or 3000 Gb. Using a non-fixed resource base is thus not automatically a way to avoid an EUR figure. While it is possible to model a scenario where reserves grow at an undiminished rate, such an approach does not make much sense in a world with a finite amount of oil. If the resource base is not allowed to grow indefinitely, then an ultimate EUR must be assumed at least implicitly.

We recommend the use of a fixed resource base for the sake of simplicity. The theoretical advantage of a non-fixed resource base is in reality diminished due to the lack of good global data and the need to forecast annual reserve additions as well as an implicit EUR.

A fixed resource base postpones the peak, but it also results in a steeper decline.

[Figure: fixed vs. non-fixed resource base (Jakobsson 2009)]

Minimum R/P ratio.  Due to the large number of influencing factors (geological, technological, and economic), there is no universal (R/P)min applicable to all fields or all regions. After the onset of decline, the (R/P)min can be estimated directly from the observed decline rate, but in the pre-decline phase it is necessary to draw analogies from fields or regions with similar geological and technological conditions. Estimating (R/P)min unavoidably involves an element of personal judgment, so it is advisable to use a range of possible values rather than a single point estimate. All else equal, a lower (R/P)min postpones the peak and makes the subsequent decline steeper.

Default production curve.  Production is not geologically constrained as long as R/P is higher than the assumed (R/P)min, so the default curve can be defined fairly arbitrarily. The two simplest options are to hold production at a constant plateau (typical of individual fields, where the plateau is set by technical capacity) or to let it grow at a constant rate (more suitable for regions). All else equal, a higher default production rate brings the peak forward, since the resource base is depleted more rapidly.
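As a sketch of how a default curve interacts with the constraint, consider a single field with a plateau default (our illustrative reconstruction; the field parameters are hypothetical):

```python
# Sketch: MDRM for a single field with a plateau default curve.
# The field produces at its technical plateau until the R/P ratio
# falls to (R/P)min; thereafter production is capped at R / (R/P)min,
# which yields an approximately exponential decline.

def field_profile(eur_gb, plateau_gb, rp_min, years):
    remaining = eur_gb
    profile = []
    for _ in range(years):
        production = min(plateau_gb, remaining / rp_min)  # geological cap
        remaining -= production
        profile.append(round(production, 3))
    return profile

# Hypothetical field: EUR 3.0 Gb, plateau 0.2 Gb/y, (R/P)min of 7
# (roughly the value fitted to the large Norwegian fields discussed below).
print(field_profile(3.0, 0.2, 7, 15))  # plateau for ~9 years, then decline
```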

Calculating R/P for a region.  It is impossible to give an analytical formula for the temporal distribution of fields, since the timing of production from a particular field is determined by the year of discovery, available extraction technology, administrative barriers, macroeconomic circumstances, and other factors. The estimation of the regional minimum, (R/P)reg,min, must therefore rely on empirical experience of how regions generally develop.

Empirical support for the MDRM

The MDRM (using a fixed resource base) fits well with the observed production behavior of individual wells and fields. Arps (1944) described how production unconstrained by capacity limits can be empirically fitted to a hyperbolic decline function.
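Arps’s standard hyperbolic form, as given in the decline-curve literature, is

$$q(t) = \frac{q_i}{\left(1 + b\,D_i\,t\right)^{1/b}}, \qquad 0 \le b \le 1,$$

where $q_i$ is the initial production rate, $D_i$ the initial decline rate, and $b$ the decline exponent. The limit $b \to 0$ gives exponential decline, $q(t) = q_i e^{-D_i t}$, and $b = 1$ gives harmonic decline.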

Assuming that there exists an (R/P)min is equivalent to assuming exponential decline. The simplicity of the exponential function has made it a popular forecasting tool within the field of reservoir engineering called decline curve analysis. Under certain ideal reservoir conditions, the exponential behavior can be derived from physical reservoir variables (Fetkovich, 1980). When reservoir conditions are not ideal, the tail-end production tends to resemble a hyperbolic or harmonic decline.
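The equivalence follows in one line. If post-peak production is held at the maximum the constraint allows, $P(t) = R(t)/\tau$ with $\tau = (R/P)_{min}$, then

$$\frac{dR}{dt} = -P(t) = -\frac{R(t)}{\tau} \;\Rightarrow\; R(t) = R_{peak}\,e^{-t/\tau}, \qquad P(t) = \frac{R_{peak}}{\tau}\,e^{-t/\tau},$$

i.e. exponential decline at the constant rate $1/\tau$. An $(R/P)_{min}$ of 30, for example, implies a decline rate of $1/30 \approx 3.3\%$ per year, the figure that reappears in the world scenarios below.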

Statfjord, Ekofisk, Oseberg and Gullfaks are to date the four largest Norwegian fields in terms of original recoverable resources. Since Statfjord, Oseberg and Gullfaks are long past their respective peaks and are estimated to be more than 90% depleted, their EURs are reasonably certain. Ekofisk is estimated to be more than 70% depleted, but its EUR has been revised upwards considerably in the past. All four fields are produced with water and/or gas injection, though at Ekofisk only on a large scale since 1987. Figure 5 shows the production curves together with the fitted (R/P)min. An (R/P)min of 6-7 has been attainable in three of the four cases, and the model fits the observed decline well. In the case of Ekofisk, subsidence of the seafloor and compaction of the reservoir have led to production difficulties, but also to a substantially increased recovery factor (Nagel, 2001). Such exceptional conditions cannot be captured by the MDRM.

Macro level.  We cannot assume a priori that a model which fits individual fields well will also be useful at the regional level. Applying a model to aggregated data always entails a loss of information. Whenever possible, we would recommend a field-by-field approach to avoid aggregation, but that is not a feasible strategy for a global scenario. Fortunately, evidence suggests that the MDRM is reasonably consistent with observed production profiles even for larger regions, which justifies its use as a global model as well.

Brandt (2007) used production data from 139 oil producing regions of various sizes, from U.S. state level to continents, in order to test the goodness-of-fit of three simple growth/decline models: Hubbert, linear, and exponential. Brandt also allowed for asymmetrical growth/decline patterns. Several of the results are relevant in this context:

  • In 74 of the 139 regions (53%), both a growth and a decline rate could be estimated.
  • Asymmetrical models generally had a better fit than symmetrical ones, even when adjusting for the increased complexity.
  • The Hubbert model (symmetrical or asymmetrical) had the best fit in 19 of the 74 cases (26%), the linear model in 16 cases (22%), and the exponential model in 32 cases (43%). In 7 cases there was no clear best-fitting model.
  • There is no evidence that the Hubbert model fits larger regions better than smaller ones.
  • Regions generally have a slower decline rate than growth rate. The mean decline rate in the 74 regions was 4.1%, a number inflated by a few extreme cases, since three quarters of the regions showed a rate below 3.8%. The median rate was 2.6%, while the rate weighted for cumulative production was merely 1.9%, which indicates that larger regions tend to decline more slowly than smaller ones.

The exponential growth/decline model, which was the single best-fitting of the models tested, is consistent with the MDRM. Brandt’s results can therefore be taken as an indication that the MDRM is at least as good as other simple resource constrained models at a regional level. The observed decline rates suggest that larger region size is related to slower decline. This is consistent with the discussion about regional (R/P)min in section 3.3. The observed decline rates should be interpreted with some caution, since it is not certain that they will remain unchanged in the future.

Sensitivity and uncertainty

Since R/P is a function of both the production rate and the remaining resource base, uncertainty in either of these parameters necessarily reduces the reliability of the estimated R/P. In practice, the resource base is the major source of uncertainty. A complicating circumstance is that R/P is disproportionately sensitive to uncertainty in the resource base. Figure 6 illustrates how revisions in a fixed resource base affect the estimated R/P at different depletion levels. The problem does not occur when a non-fixed resource base is used, since current resource revisions do not affect past years’ R/P.

Assume, for example, that the depletion level (based on the original estimate of R0) at the end of year t-1 is 50%, while the estimated R/P at year t is 20. If the estimated R0 is then adjusted 10% upwards, the R/P is altered by a factor of 1.2. The new R/P for year t thus becomes 1.2*20=24.
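In general, if the original estimate $R_0$ is revised upward by a fraction $f$ at depletion level $d$, the remaining resources change from $(1-d)R_0$ to $(1+f-d)R_0$, so the estimated R/P is multiplied by

$$\frac{(1+f-d)\,R_0}{(1-d)\,R_0} = 1 + \frac{f}{1-d}.$$

With $f = 0.1$ and $d = 0.5$ this reproduces the factor of 1.2 in the example; at $d = 0.9$ the same 10% revision would double the R/P.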

At high depletion levels the R/P is very sensitive to even small uncertainties in the resource base. For this reason, one should not put too much weight on the R/P at high levels of depletion. The absolute level of production is usually low at this stage in any case, so a precise estimate of R/P is not necessary. What matters most is the R/P at peak/end-of-plateau, which typically occurs when the depletion level is 30-60% (see figure 7).

EIA’s long-term world oil supply scenarios

The conclusion of the 2004 EIA report is that peak oil is not imminent but is very likely to occur within a century. Twelve scenarios are generated from combinations of three resource estimates and four alternative growth rates until peak (0-3%). The resource estimates are the low (2248 Gb), medium (3003 Gb) and high (3896 Gb) scenarios from the World Petroleum Assessment of the U.S. Geological Survey (USGS, 2000), which estimated the amount of conventional oil and NGL that would be made available through new discoveries and reserve appreciation until 2025 in the world exclusive of the U.S. To these figures, EIA added its own resource estimates for the U.S. Subtracting USGS’s estimates from the total indicates that the range of the U.S. resource base, according to EIA, is 324-360 Gb, with 344 Gb as a mean estimate. Regarding post-peak decline behavior, only one scenario is considered, motivated as follows:

“EIA selected an R/P ratio of 10 as being representative of the post-peak production experience. The United States, a large, prolific, and very mature producing region, has an R/P ratio of about 10 and was used as the model for the world in a mature state” (Wood et al., 2004).

The result is summarized in figure 8. The peak dates are spread out over the time span 2021-2112, but 2037 is pointed out as a reference case.

[Figure 8: EIA’s twelve long-term world oil supply scenarios (Jakobsson 2009)]

There are indeed significant uncertainties regarding the resource base and future demand growth. The result is thus a combined scenario and sensitivity analysis. It is however striking that no alternative decline behaviors have been considered. There can only be two justifiable arguments for their absence: (1) there is virtually no uncertainty in the assumed decline behavior; (2) although there are uncertainties, they have no significant impact on the timing of the peak. Both these potential arguments are unfounded.

The resource base for the forecast consists of estimates of global EUR. EIA does not explicitly state that it uses a fixed resource base, but this appears to be the case, since no resource growth rate is assumed. Yet when EIA points to the U.S. as an analogous case, it refers to the R/P based on proved reserves, a resource base which is non-fixed and has grown considerably in the past. It is true that the U.S. R/P based on proved reserves has fluctuated around 10 for several decades, but that is irrelevant in this context. A relevant analogy would be the U.S. (R/P)min computed from the same kind of resource base that EIA assumes in its forecast (324-360 Gb EUR), which would yield an (R/P)min of around 70 rather than 10 (see figure 9).
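The practical difference is large, because under the MDRM the post-peak decline rate is the reciprocal of the assumed $(R/P)_{min}$:

$$(R/P)_{min} = 10 \;\Rightarrow\; \text{decline} \approx 10\%/\text{yr}, \qquad (R/P)_{min} = 70 \;\Rightarrow\; \text{decline} \approx 1.4\%/\text{yr}.$$

EIA’s choice of 10 therefore builds in a late peak followed by the steep decline criticized at the end of this summary, whereas a value near 70 moves the peak much earlier but makes the subsequent decline far gentler.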

The resource base

Another factor not related to EIA’s methodology, but still crucial for the result, is the resource estimates generated by the U.S. Geological Survey. The mean estimate implies that the world’s total reserves (outside the U.S.) will grow by 1261 Gb during the period 1996-2025: 649 Gb from new discoveries of conventional oil and 612 Gb from reserve growth in known fields. The majority of other recent estimates of global EUR fall within the span set by the high and low estimates of USGS (NPC, 2007). However, an evaluation of the petroleum assessment (Klett et al., 2007) indicates that between 1996 and 2003 (27% of the assessment period) only 69 Gb (11% of the 649 Gb) was discovered. It therefore seems justified to conclude that USGS’s mean and high resource estimates are unconfirmed and may be over-optimistic; the current discovery rate indicates that the low estimate (334 Gb until 2025) should be considered more likely. The reserve growth, on the other hand, was of the expected magnitude (171 Gb, or 28%). Almost half of this reserve growth occurred in the Middle East and North Africa. Since USGS did not publish reserve growth estimates for individual regions, it is impossible to determine whether this result actually validates the estimation method or is merely a coincidence.

Alternative world oil scenarios

Our world oil supply scenarios are applications of the MDRM, like those of EIA, but with some important modifications. While EIA used an invalid (R/P)min and did not examine how different values affected the result, it is shown here that the assumed (R/P)min has a dramatic impact on the timing of the peak. The scenarios use values of (R/P)min ranging from 70 down to 30. The upper limit is what has been observed historically in the U.S.; the lower limit implies that the world would have a 3.3% decline rate, which appears rather extreme given the regional decline rates presented by Brandt (2007) and the tendency of larger regions to decline more slowly than smaller ones. A reasonable guess is that the actual value will be within the range 70-50.

For each (R/P)min scenario there are three alternative EUR estimates. We use the same low (2248 Gb), mean (3003 Gb) and high (3896 Gb) estimates as EIA for the sake of comparability, although the mean and high estimates seem rather optimistic in the light of current discovery rates.

Since the main purpose of the study is to examine the realism of official production forecasts, the demand growth is always assumed to be 1% annually, which is the demand growth rate that EIA and IEA project until 2030. The official forecasts include both crude oil and NGLs, while our scenarios only concern crude oil in order to be comparable with Wood et al. (2004). We have made the simplifying assumption that crude oil and NGLs will grow proportionately.

The world production in 2007 was 26.7 Gb according to EIA statistics. We estimate the cumulative production at the end of 2007 to be 1012 Gb based on the USGS Petroleum Assessment and more recent production figures from EIA.
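A minimal Python sketch of this scenario logic, assuming the mechanics described above (our reconstruction, not the authors’ code; the parameter values are those quoted in the text):

```python
# Sketch: MDRM world crude oil scenario.
# Production follows 1%/yr demand growth until the R/P constraint binds;
# thereafter it is capped at remaining / (R/P)min, giving exponential decline.

def world_scenario(eur_gb, cumulative_gb, production_gb, rp_min, years=60):
    remaining = eur_gb - cumulative_gb
    production = production_gb
    profile = []
    for _ in range(years):
        demand = production * 1.01                    # 1%/yr demand growth
        production = min(demand, remaining / rp_min)  # geological cap
        remaining -= production
        profile.append(production)
    return profile

# Base year 2007: production 26.7 Gb, cumulative production 1012 Gb;
# EUR set to the USGS mean estimate of 3003 Gb.
for rp_min in (70, 50, 30):
    profile = world_scenario(3003, 1012, 26.7, rp_min)
    peak_year = 2008 + profile.index(max(profile))
    print(f"(R/P)min = {rp_min}: peak around {peak_year}")
```

With the low EUR of 2248 Gb and an (R/P)min of 70, the cap (2248 − 1012)/70 ≈ 17.7 Gb falls below 2007 production from the very first year, which is the immediate collapse noted below.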

The result is shown in figure 10 and summarized in table 2. The projected peak dates range from 2007 to 2054.

Assuming EUR = 2248 Gb and (R/P)min = 70, production collapses immediately, which indicates that at least one of the parameters is incorrect.

The significant result, however, is that sustained production growth to 2030 requires either an (R/P)min of 30 together with an EUR of at least 3003 Gb, or an EUR of 3896 Gb. It thus appears likely that crude oil production will start to decline before 2030. An imminent peak in production cannot be ruled out.

[Figure 10: world crude oil production scenarios (Jakobsson 2009)]

[Table 2: projected peak dates and production levels (Jakobsson 2009)]

The present study has indicated that:

  1. Resource constrained models are presently the only feasible tools for long term oil production scenarios.
  2. The best way to account for uncertainty is to use a range of values for all relevant parameters.
  3. The Maximum Depletion Rate Model (MDRM) is consistent with empirical experience at the field level, and is at least as good as other resource constrained models at a regional level. It is therefore reasonable to use it for global scenarios.
  4. Using a fixed resource base (EUR estimates) yields exponential decline behavior and is preferable to a non-fixed resource base from a practical point of view.
  5. EIA has constructed unreasonable world scenarios by making an invalid analogy between R/P based on “proved reserves” and R/P based on EUR.
  6. A correct implementation of the model shows that crude oil production may start to decline well before 2030. An imminent peak cannot be ruled out.
  7. The result calls into question the reasonableness of EIA’s and IEA’s official production forecasts, which assume that oil production will grow by 1% annually at least until 2030.

In the peak oil debate, analysts who downplay the possibility of an early peak are usually labeled “optimists”. This title we would like to claim for ourselves. In our view, optimism means to always have a constructive attitude after a sober look at the facts at hand, not merely hope for the best scenario to come about. An early production peak followed by a gentle decline should provide good opportunities for an orderly transition from today’s oil dependent economy to a more sustainable one. It should definitely not be interpreted as a doomsday scenario, but rather as a cause for cautious optimism. EIA’s high-peak-steep-decline scenarios, on the other hand, would make an orderly transition extremely difficult and likely have catastrophic consequences for the economy.

References

  • Adelman, M.A., 1990. Mineral depletion, with special reference to petroleum. Review of Economics & Statistics, 1-10.
  • Arps, J.J., 1944. Analysis of decline curves (Technical Publication no. 1758). American Institute of Mining and Metallurgical Engineers.
  • Bardi, U., 2005. The mineral economy: a model for the shape of oil production curves. Energy Policy 33, 53-61.
  • Bentley, R.W., 2002. Global oil & gas depletion: an overview. Energy Policy 30, 189-205.
  • Bentley, R.W., Mannan, S.A., Wheeler, S.J., 2007. Assessing the date of the global oil peak: The need to use 2P reserves. Energy Policy 35, 6364-6382.
  • Brandt, A.R., 2007. Testing Hubbert. Energy Policy 35, 3074-3088.
  • Energy Information Administration (EIA), 2008a. International Energy Outlook 2008. Energy Information Administration, Washington, DC.
  • Energy Information Administration (EIA), 2008b. Petroleum Navigator. Energy Information Administration.
  • Fetkovich, M.J., 1980. Decline curve analysis using type curves. Journal of Petroleum Technology June 1980, 1065-1077.
  • Flower, A.R., 1978. World oil production. Scientific American 238, 42-49.
  • Houthakker, H.S., 2002. Are Minerals Exhaustible? Quarterly Review of Economics and Finance 42, 417-421.
  • Hubbert, M.K., 1956. Nuclear energy and the fossil fuels. Shell Development Company, Exploration and Production Research Division, Houston.
  • International Energy Agency (IEA), 2007. World Energy Outlook 2007. OECD/International Energy Agency, Paris.
  • Klett, T.R., Gautier, D.L., Ahlbrandt, T.S., 2007. An Evaluation of the USGS World Petroleum Assessment 2000 – Supporting Data. U.S. Geological Survey Open-File Report 2007-1021.
  • Lynch, M.C., 2002. Forecasting oil supply: theory and practice. The Quarterly Review of Economics and Finance 42, 373-389.
  • Nagel, N.B., 2001. Compaction and subsidence issues within the petroleum industry: From Wilmington to Ekofisk and beyond. Physics and Chemistry of the Earth, Part A: Solid Earth and Geodesy 26, 3-14.
  • National Petroleum Council (NPC), 2007. Facing the Hard Truths about Energy. National Petroleum Council, Washington, D.C.
  • Norwegian Petroleum Directorate (NPD), 2008. Fact Pages. Norwegian Petroleum Directorate.
  • Reynolds, D.B., 1999. The mineral economy: how prices and costs can falsely signal decreasing scarcity. Ecological Economics 31, 155-166.
  • Simon, J., 1996. The Ultimate Resource 2, 2 ed. Princeton University Press, Princeton.
  • Society of Petroleum Engineers (SPE), 2007. Petroleum Resources Management System. Society of Petroleum Engineers.
  • United States Geological Survey (USGS), 2000. World Petroleum Assessment 2000. USGS.
  • Watkins, G.C., 2006. Oil scarcity: What have the past three decades revealed? Energy Policy 34, 508-514.
  • Wood, J.H., Long, G.R., 2000. Long Term World Oil Supply (A Resource Based / Production Path Analysis). Energy Information Administration.
  • Wood, J.H., Long, G.R., Morehouse, D.F., 2004. Long-Term World Oil Supply Scenarios: The Future is Neither as Bleak or Rosy as Some Assert. Energy Information Administration.
  • Workshop on Alternative Energy Strategies (WAES), 1977. Energy Supply to the Year 2000: Global and National Studies. The MIT Press, Cambridge, Mass.

Why did the environmental movement drop the issue of overpopulation?

[ This is most of the 27-page report. Beck and Kolankiewicz wrote this to explain why the environmental movement abandoned the goal of keeping population within the carrying capacity of U.S. resources. Systems ecologists such as Paul Ehrlich, David Pimentel and others estimate the U.S. can support about 100 to 150 million people without fossil fuels. That was the population during the Great Depression, when 1 in 4 Americans were farmers, yet even so, many people were hungry.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity, XX2 report ]

Roy Beck & Leon Kolankiewicz. The Environmental Movement’s Retreat from Advocating U. S. Population Stabilization (1970-1998): A First Draft of History.

The years surrounding 1970 marked the coming of age of the modern environmental movement. As that movement enters its fourth decade, perhaps the most striking change is the virtual abandonment by national environmental groups of U.S. population stabilization as an actively pursued goal.

Population Issues and the 1970-Era Environmental Movement

How did the American environmental movement change so radically?

Around 1970, U.S. population and environmental issues were widely and publicly linked. In environmental “teach-ins” across America, college students of the time heard repetitious proclamations on the necessity of stopping U.S. population growth in order to reach environmental goals, and the most public of reasons for engaging population issues was to save the environment. The nation’s best-known population group, Zero Population Growth (ZPG), founded by biologists concerned about the catastrophic impacts of ever more human beings on the biosphere, was also outspokenly an environmental group. And many of the nation’s largest environmental groups had adopted, or were considering, “population control” as a major plank of their environmental prescriptions for America.

As Stewart Udall (Secretary of the Interior during the Kennedy and Johnson administrations) wrote in The Quiet Crisis: “Dave Brower [executive director of the Sierra Club] expressed the consensus of the environmental movement on the subject in 1966 when he said, ‘We feel you don’t have a conservation policy unless you have a population policy.'”1 Brower encouraged Stanford University biologist and ZPG cofounder Paul Ehrlich to write The Population Bomb, published in 1968, which surpassed even Rachel Carson’s landmark work, Silent Spring, to become the best-selling ecology book of the 1960s. 2 Ehrlich’s polemic echoed and amplified population concerns earlier raised by two widely read books, both published in 1948: Our Plundered Planet, by Fairfield Osborn, chairman of the Conservation Foundation, and Road to Survival, by William Vogt, a former Audubon Society official who later became the national director of Planned Parenthood.3

The seeming consensus among leaders of the nascent environmental movement was paralleled, and bolstered, by widespread agreement among influential researchers and scholars in the natural sciences throughout the 1960s and 1970s.4 The importance attached to each country’s stopping its own population growth was not confined to the United States. In 1972, Great Britain’s leading environmental magazine, The Ecologist, published the hard-hitting Blueprint for Survival, supported by thirty-four distinguished biologists, ecologists, doctors, and economists, including Sir Julian Huxley, Peter Scott, and Sir Frank Fraser Darling. With regard to population, the Blueprint stated: “First, governments must acknowledge the problem and declare their commitment to ending population growth; this commitment should also include an end to immigration.” 5

Organizers of the first Earth Day in 1970 noted that U.S. population growth was a central theme.6 The nationwide celebration revealed a massive popular groundswell that helped spur Congress and the Nixon, Ford, and Carter administrations to enact a host of sweeping environmental laws and to create a federal bureaucracy to implement and enforce those and others that had been pushed through in the 1960s. Two months after Earth Day, the First National Congress on Optimum Population and Environment convened in Chicago.7 Religious groups, especially the United Methodist Church and the Presbyterian Church, urged for ethical and moral reasons that the federal government adopt policies that would lead to a stabilized U.S. population. President Nixon addressed the nation about the problems it would face if U.S. population growth continued unabated. On January 1, 1970, the president signed into law the National Environmental Policy Act (NEPA),8 often referred to as the nation’s “environmental Magna Carta.”9 In Title I of the act, the “Declaration of National Environmental Policy” began: “The Congress, recognizing the profound impact of man’s activity on the interrelations of all components of the environment, particularly the profound influences of population growth.”10

Later in 1970, President Nixon and Congress jointly appointed environmental, labor, business, academic, demographic, population, and political representatives to a bipartisan Commission on Population Growth and the American Future, chaired by John D. Rockefeller III. Among its findings in 1972 was that it would be difficult to reach the environmental goals being established at the time unless the United States began stopping its population growth. Rockefeller wrote that “gradual stabilization of our population through voluntary means would contribute significantly to the nation’s ability to solve its problems.” 11

The Sierra Club, for example, in 1969 urged “the people of the United States to abandon population growth as a pattern and goal; to commit themselves to limit the total population of the United States in order to achieve a balance between population and resources; and to achieve a stable population no later than the year 1990.” 12

A large coalition of environmental groups in 1970 endorsed a resolution stating that “population growth is directly involved in the pollution and degradation of our environment-air, water, and land-and intensifies physical, psychological, social, political and economic problems to the extent that the well-being of individuals, the stability of society and our very survival are threatened.” The same groups committed themselves to “find, encourage and implement at the earliest possible time” the policies and attitudes that would bring about the stabilization of the U.S. population.13

Most of that interest had disappeared by 1998, however, and not because population growth had stopped or the problems it caused had been solved.

The Missing Issue in 1998

When the Society for Environmental Journalists held its annual conference in Chattanooga, Tennessee, in October 1998, urban sprawl was a recurring theme. And no wonder: U.S. population growth was every bit as potent a force in 1998 as it had been in 1970. Some 2.5 million Americans were being added each year, a rate faster than in some Third World countries and ten times faster than in Europe. It was a volume of growth nearly matching that of the Baby Boom years that had helped trigger the 1970-era environmental/population movement. The Earth Day 1970 vision of a stabilized American population within a generation had never materialized.

Yet population growth was strangely missing from most reporting on sprawl and from a popular session in which a panel of newspaper reporters and editors discussed their expansive coverage of the problems from, the causes of, and the solutions to urban sprawl in different parts of the country. The panelists talked about problematic zoning, planning, and lifestyle choices, but not about the 25 million new residents added each decade, or the sheer amount of space required for their housing, worksites, schools, roads, recreation facilities, shopping centers, and other infrastructure. When challenged from the audience, all the panelists agreed that urban sprawl would be far less destructive without the massive population growth that was occurring in America. And they agreed that urban life and environmental losses would be immensely different if some 70 million people had not been added to the U.S. population since 1970.

In the late 1990s, as in 1970, the problems stemming from U.S. population growth were huge news. But the underlying population growth itself and its causes were barely being mentioned.

Journalists tend to look to competing interest groups to define the issues they cover. Business groups always have defined one end of the growth issue spectrum as they pushed for ever more population growth. At one time, environmental groups defined the other end by calling for no growth. By 1998, however, environmental groups no longer emphasized population growth as something a nation could choose or reject. When interviewed about sprawl, environmental leaders did not mention the population factor.

That was reflected in the back of the Chattanooga hotel room where the sprawl panel took place. There, a representative from the national Sierra Club headquarters had placed a display of literature from the Club’s major new campaign against urban sprawl. The highly publicized, multimillion-dollar campaign mentioned population growth only in passing, and then only to minimize its role. None of the materials suggested stabilizing the U.S. population as one part of the solution to urban sprawl. The Sierra campaign instead focused its advocacy on creating more regulation and management of U.S. growth to ameliorate its adverse effects on the environment. It assumed, and tacitly accepted, that the U.S. population would never stop growing.

Professor T. Michael Maher of the University of Southwestern Louisiana conducted a study of news coverage of urban sprawl, endangered species, and water shortages, all issues profoundly affected by population growth. In a random sample of 150 stories on those issues, he found only one that mentioned that part of the solution might be to try to stabilize the U.S. population.14

The journalists told Maher they were uncomfortable raising the population issue on their own. With the business and political establishments continuing to push for “more growth” and the environmental establishment now pushing for “smart growth,” the special-interest groups had defined a spectrum for the media that excluded “no growth” and “greatly reduced growth” from the range of available, acceptable options. Maher studied the membership materials for the nation’s environmental groups and discovered:

“Population is off the agenda for the purported leaders of the environmental movement.” 15

The authors have chosen 1998 as the end of the period analyzed here because that was the year the environmental movement erupted in a highly public battle over U.S. population issues. After more than two decades of dwindling interest in population issues, many of the old environmental guard from the 1970 era openly challenged the national leadership of two influential organizations, the Sierra Club and Zero Population Growth, to put U.S. population stabilization, and the reduction in immigration levels it entailed, back on the agenda. The Sierra Club and ZPG, once so outspoken in the 1970s on the urgency of U.S. stabilization, had each changed their policies in the two years prior to 1998 to dissociate themselves from this cause.

In 1998, the national Sierra Club leadership defeated those who tried to return their organization to its earlier pro-stabilization policy, which advocated both lower fertility and lower immigration. 16

Reviewing the Rejected “Foundational Formula” of 1970-Era Environmentalism

The retreat from stabilization advocacy by environmental groups in the 1990s directly contradicted the conclusion of the President’s Council on Sustainable Development in 1996. Established by President Clinton to follow through on the UN Conference on Environment and Development in Rio de Janeiro (the “Earth Summit”), the council acknowledged the integral relationship between a stable population and sustainable development, observing that “clearly, human impact on the environment is a function of both population and consumption patterns” and declaring the need to “move toward stabilization of the U.S. population.”17

Such thinking was central to the environmental activism of the 1970 era because most environmentalists’ view of environmental quality was deeply shaped by what we will call here the “Foundational Formula” of the movement. That formula expressed the movement’s understanding of the problem it was tackling and of how to solve it. The 1990s environmental movement was fundamentally different from the 1970-era movement in that it had largely abandoned that Foundational Formula.

There are several ways of expressing the environmental impacts of humanity. One of the best known is the I=PAT equation offered by biologist Paul Ehrlich and physicist John Holdren: Environmental Impact (I) equals Population size (P) times Affluence, or consumption per person (A), times Technology, or damage per unit of consumption (T). 18

One doesn’t have to work with the Foundational Formula much to realize that changes in the Individual Impact and changes in the Population Size factor have roughly equal power over improving or deteriorating Total Environmental Impact. For example:

Increasing the Individual Impact by 30 percent while holding Population Size constant would have a tremendously deleterious effect on, say, a polluted bay. And so would increasing Population Size by 30 percent while holding Individual Impact constant. It really doesn’t matter which factor is increased; the bay feels similar pain.
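The symmetry is simply the multiplicative form of the formula (our restatement of the report’s point):

$$I = P \times A \times T, \qquad (1.3\,P)\,A\,T = P\,(1.3\,A)\,T = 1.3\,(P A T),$$

so a 30 percent rise in either factor, with the others held constant, raises total impact by the same 30 percent.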

While most environmental groups averted their attention from the population issue, the U.S. population soared by more than 33 percent (nearly 70 million people) between 1970 and 1998, mostly because of increased immigration. The Census Bureau projects that, under current immigration policies, U.S. population will grow by yet another 50 percent over the next fifty years.

The worsening of the Population Size factor had in many respects negated the improvements in the Individual Impact factor. For instance, America’s lakes, rivers, and streams were to have become “fishable and swimmable,” according to the 1972 Clean Water Act. But after more than half a trillion dollars spent controlling water pollution (costs passed on to consumers and taxpayers), around 40 percent of U.S. surface waters still weren’t fishable and swimmable in the mid-1990s.20 The nation has more nitrogen oxide (a smog precursor) and more carbon dioxide (a greenhouse gas) emissions than thirty years ago, more endangered species and fewer wetlands.21

As Congress numerous times debated and approved policies that increased Population Size substantially, the major environmental groups stood silent.

Cause #1: U.S. Fertility Dropped Below Replacement-Level Rate in 1972

In 1972, the U.S. Total Fertility Rate fell below the 2.1 births per woman that marks the replacement-level fertility rate. By 1976, fertility had hit an all-time low of 1.7 and hovered just above that for years. A common remembrance of aging population activists is the night in 1973 when TV broadcasters announced that the 1972 U.S. fertility rate had reached zero population growth. The American people apparently were profoundly confused by this announcement, with many believing the U.S. population problem had been solved. (In fact, because of what demographers call “population momentum,” it takes a country up to seventy years after the replacement-level fertility rate is reached to actually stop growing. But by 1972, the fertility rate had indeed declined to a level low enough to eventually produce zero population growth, as long as immigration remained reasonably low.)

With zero population growth supposedly achieved (or at least approached), many people in the population movement may have felt their activism was no longer needed. Americans had reduced the size of the average family as far as was necessary; on average they were living up to the rallying cry of “stop at two.” Many activists shifted their former population energies into feminism, other aspects of conservation and environmentalism, or moved on to other pursuits altogether. “Full-Formula” environmentalism that dealt with both the Individual Impact and Population Size factors shrank to a small core constituency as quickly as it had burst into a mass popular movement. The population committees of environmental groups lost popularity and significance or disbanded altogether.

The neglect of the population issue within organizations surely influenced new employees as they came on board during this period. Many of them probably never heard of the “full-Formula” environmental approach. They worked only on the Individual Impact side of the Formula. Many had little background in the natural sciences, resource conservation, or analytical/quantitative fields. To them, population advocacy may have looked like an external issue that could easily be left to external groups to handle.

Perhaps another factor was at work as well. The overwhelmingly non-Hispanic, white leadership of the environmental movement may have felt it was defensible to address population growth as long as the great bulk of that growth came from non-Hispanic whites, which it did during the Baby Boom. But the situation changed dramatically after 1972. From that year forward, the fertility of non-Hispanic whites was below the replacement rate, while that of black Americans and Latinos remained well above it.22 To talk of fertility reductions after 1972 was to draw disproportionate attention to nonwhites. Certain minorities and their spokespersons, with long memories of disgraceful treatment by the white majority and acutely aware of their comparative powerlessness in American society, were deeply suspicious of possible hidden agendas in the population stabilization movement. As the Reverend Jesse Jackson told the Rockefeller Commission, “our community is suspect of any programs that would have the effect of either reducing or levelling off our population growth. Virtually all the security we have is in the number of children we produce.”23 And Manuel Aragon, speaking in Spanish, declared to the Commission: “what we must do is to encourage large Mexican American families so that we will eventually be so numerous that the system will either respond or it will be overwhelmed.”24

By the 1990s, a majority of the nation’s growth stemmed from sources other than non-Hispanic whites (especially Latin American and Asian immigrants and their offspring). Environmentalist leaders, proud and protective of their claim to the moral high ground, may have been reluctant to jeopardize it by venturing into the political minefield of the nation’s volatile racial/ethnic relations through appearing to point fingers at “outsiders,” “others,” or “people of color” as responsible for America’s ongoing problem with population growth.

Cause #2: Abortion and Contraceptive Politics Created Organized Opposition

In June 1960, the Food and Drug Administration approved oral contraceptives for sale. By the late 1960s, the Vatican and American Catholic leadership were engaged in a major counterattack on the growing use of contraceptives in the United States. They focused a considerable amount of their ire on groups advocating population control. Their focus made a certain sense from their point of view. Most population and environmental groups that called for stabilization also made explicit calls, not for abstinence or celibacy, but rather for more availability of reliable, safe contraceptives and sex education. Many of them also called for the legalization of abortion.

Then in 1973, in Roe v. Wade, the U.S. Supreme Court legalized abortion. That set off a much more intense campaign by the Catholic Church, and increasingly by conservative Protestants, against the whole of the population movement.

Abortion had been something of a minor issue within the population stabilization movement but was included because of the thought that fertility might not be brought to replacement level without the availability of abortion. As it turned out, legalized abortion was not a necessary component to reach replacement-level fertility. America reached its stabilization fertility goal the year before the Supreme Court legalized abortion.

But to the Catholic hierarchy and the pro-life movement, the legalized abortion and population stabilization causes have been inextricably linked. In the 1990s, it was still difficult for a pro-stabilization person or group to get a hearing among many Catholic and pro-life groups without being automatically considered an abortion apologist.

A number of leaders of philanthropic foundations and politicians involved with population efforts in the 1970s have said that active measures by Catholic bishops and the Vatican were the greatest barrier to moving population measures forward and to setting a national population policy. Congressman James Scheuer was a member of the 1972 Commission on Population Growth and the American Future. In 1992, he wrote that “the Vatican and others blocked any reasonable discussion of population problems.”26 This opposition applied both nationally and internationally. In a 1993 interview, Milton P. Siegel, assistant director general of the World Health Organization from 1946 to 1970, indicated that “one way or another, sometimes surreptitiously, the Catholic church used its influence to defeat, if you will, any movement toward family planning or birth control.”27

As population activists reported on the Catholic activism and criticized it, the population movement began to be tarred as anti-Catholic. Environmental groups seeking membership, funds, and support from a wide spectrum of Americans had good reason to stay out of population issues altogether rather than risk offending their own current and potential members who also were members of the largest religious denomination in America. Environmental groups with Catholic board members were known to use them as reasons for not being more involved in population issues.

Roman Catholic opposition, both from the Vatican itself and from American Catholic leaders, apparently played a key role in pressuring government policymakers as well. On May 5, 1972, gearing up for his reelection campaign, President Nixon publicly disavowed the recommendations of his own Commission on Population Growth and the American Future, which U.S. Catholic bishops had blasted for its permissive attitude toward contraception and abortion.28 Evidently still concerned about overpopulation, however, Nixon ordered a study in April 1974 of the national security implications of population growth.29 When the study was released in 1975, President Gerald Ford endorsed the findings of National Security Study Memorandum 200 (NSSM 200). The report strongly stated that exploding populations in the Third World would threaten the security of the United States. These threats would come from the destabilization of those countries’ economic, political, and ecological systems. Besides recommending helping those nations curb their population growth, NSSM 200 called on the United States to provide world leadership in population control by seeking to attain stabilization of its own population by the year 2000.

Although President Ford endorsed NSSM 200, nothing ever came of it. Historians will want to study the literature that has since made the case that NSSM 200 was never implemented because of meetings between Vatican officials and U.S. government officials of Roman Catholic background, as well as a systematic campaign of pressure by the U.S. Conference of Catholic Bishops. “American policy [toward support of international family planning programs] was changed as a result of the Vatican’s not agreeing with our policy,” President Reagan’s ambassador to the Vatican told Time magazine.30 How much pressure was actually exerted is an important question to resolve.


Cause #3: Emergence of Women’s Issues as Priority Concern of Population Groups

Another likely reason environmental groups did not fully engage U.S. population issues in the 1980s and 1990s was that the groups that specialized in population issues drifted away from population stabilization and environmental protection as primary reasons for being.

By the 1990s, for example, Planned Parenthood no longer played any role in advocating for U.S. population stabilization to protect the environment. Its focus had narrowed to making sure that women had full access to the whole range of options concerning fertility and births. That had always been a primary mission of Planned Parenthood, but one of the major purposes of empowering women had once been to reduce U.S. population growth.

To understand these shifts, historians will need to look at the differing roots of the 1970-era population movement. While one root included people with high environmental consciousness, several roots did not. Many of the early population leaders were primarily concerned about health issues; others about development issues. Still others were predecessors of the modern feminist movement. The environmentalist angle tended to be pushed out front during the late 1960s as environmentalism reached mass popularity. But as environmentalists abandoned population issues in the 1970s, the population groups more and more de-emphasized their environmental motives. By the 1990s, some of the groups actually opposed helping the environment through population stabilization or reduction efforts. Christian Science Monitor correspondent George Moffett observed: “Women’s groups complain that overstating the consequences of rapid population growth has created a crisis atmosphere in some countries, which has led to human rights violations in the name of controlling fertility.” 31

This was strikingly evident at the 1994 UN International Conference on Population and Development in Cairo, Egypt. As Catholic lay theologian George Weigel observed, “Over the long haul . . . the most significant development at the Cairo Conference may have been a shift in controlling paradigms: from ‘population control’ to ‘the empowerment of women.’”32 “The Cairo Programme contains hundreds of recommendations about women’s rights and other social issues but almost none about population,” wrote former Deputy Assistant Secretary of State for Environment and Population Affairs Lindsey Grant.33 The long international document from Cairo made no mention of the connections between population growth and the environmental ills of countries with growing populations.

This shift away from an overriding concern with population and environmental limits may be seen most importantly in the group Zero Population Growth (ZPG). In 1968, Paul Ehrlich’s The Population Bomb ignited a national movement. Zero Population Growth was founded that same year to take advantage of the incredible publicity the book generated.

Hundreds of ZPG chapters sprang up overnight. ZPG’s first leaders were described as all being pro-environmentalist, pro-choice, and pro-family planning. In the beginning, ZPG had a motto, “Zero Population Growth is our name and our mission.” There were several large organizations dealing with population growth in other countries. But ZPG’s primary mission was explicitly to stabilize the U.S. population, according to members of the early ZPG boards of directors.34 That remained the stated mission through the 1980s.

In the 1970s, ZPG’s population policy recommendations covered every contributor to U.S. population growth. They included stands on contraceptives, sex education for teenagers, equality for women, abortion, opposition to illegal immigration, and a proposal to reduce legal immigration from about 400,000 a year to 150,000 a year by 1985 in order to reach zero population growth by 2008.35

ZPG started the modern immigration-reduction movement in the 1970s. After American fertility fell below the replacement-level rate, the ZPG board recognized that immigration was rising rapidly and would soon negate all the benefits of lower fertility. Even though immigration seemed separate from the family planning issues that had dominated precursor population organizations, ZPG tackled it squarely because it related to the issue of U.S. population stabilization, which was deemed essential to the health of the American environment. By the late 1970s, the ZPG leaders who were the most interested in immigration issues spun off a new organization called the Federation for American Immigration Reform (FAIR). Their idea was that FAIR would take no stand on abortion and other controversial family planning issues in order to attract a wider constituency that would work for immigration reform not only for environmental reasons, but for economic relief for the working poor and taxpayers, for social cohesion, and for national security.


The ZPG leaders who left the ZPG board for FAIR also happened to be most of the people with the greatest interest in population as an environmental issue. That meant that those remaining on the board were more inclined toward the type of population movement rooted in family planning and women’s issues. While ZPG continued to have policies on U.S. stabilization and the environment, and to produce some outstanding educational materials, these policies and programs got less and less staff and board attention as the 1980s progressed. New staff were hired less on the basis of their environmental expertise and commitment and more because of their commitment to women’s issues.

By 1996, ZPG was focused overwhelmingly on global population issues from the women’s empowerment perspective. A secondary focus was excessive consumption by Americans.36 The board removed the word “stabilize” from much of its literature and its Mission Statement. On October 25, 1997, the ZPG board substituted “slowing” for “stopping” so that it then advanced a goal of merely “slowing” U.S. and world population growth. ZPG’s president Judith Jacobsen wrote in the newsletter ZPG Reporter that the reason ZPG didn’t support creating U.S. policies to reduce domestic population growth was that population problems in Third World countries needed to be resolved first. She said that the “Cairo Conference taught us that changing the conditions of women’s lives is the most powerful answer” for population problems. She then gave a long list of ZPG’s essential commitments, none of which were population stabilization or environmental protection.37

Thus, just before its thirtieth anniversary, ZPG had severed its goals from its name and its founding mission-zero population growth. Also abandoned as a central concern was the protection of the American environment, which had been at the heart of ZPG’s founding. ZPG had not necessarily turned anti-environment or anti-stabilization, but it had evolved into an organization with different priorities.

Cause #4: Schism Between the Conservationist and New-Left Roots of the Movement

Two of the roots go back a century: (1) The wilderness preservation movement was exemplified by John Muir, the National Parks, and, later, Wilderness Areas.38 (2) The resource conservation movement was exemplified by President Theodore Roosevelt, his chief forester Gifford Pinchot, and the National Forests.39 A third root of the modern environmental movement is much younger. It was an outgrowth of what was called New-Left politics with, in some cases, a strong strain of socialism, as espoused by its guru of the 1970 era, Barry Commoner. This root was given its greatest impetus with the 1962 publication of Silent Spring by naturalist Rachel Carson. Although Carson was deeply concerned about the unforeseen effects of pesticides and other man-made poisons being released indiscriminately into the natural environment, this third root of modern environmentalism came to focus more on urban and health issues such as air, water, and toxic contamination, as they affected the human environment. Commoner, in fact, criticized conservationists for putting wildlife ahead of human health. As journalist Mark Dowie writes: “The central concern of the new movement is human health. Its adherents consider wilderness preservation and environmental aesthetics worthy but overemphasized values. They are often derided by antitoxic activists as bourgeois obsessions.” 40

Having much in common with the emerging Green parties of Europe (social justice, peace, and ecology), the new “greens” of America joined with the wilderness preservationists and resource conservationists as the modern environmental movement was born in the 1960s. But the New Left greens held opposite views on population from those of most preservationists and conservationists. In his influential 1971 book The Closing Circle and elsewhere, Barry Commoner minimized the role of population as a cause of environmental problems. Commoner said the problems attributed to population growth were actually caused by unfair distribution of resources and by profitable technologies. Environmental degradation could be rectified by changing economic systems.41

Wilderness advocate and popular southwestern author Edward Abbey spoke for many when he said that “growth for the sake of growth is the ideology of the cancer cell.” 42

It appears that the New Left greens tried to keep population issues off the Earth Day 1970 agenda. They lost. Conservationists and preservationists succeeded in retaining their fundamental tenet that there could be no long-term environmental preservation without limiting human numbers. The college students and young adults who were rushing into the movement at the time may have been more temperamentally inclined toward the antiwar, antiestablishment New Left greens, but the young new environmentalists, armed with millions of dog-eared copies of The Population Bomb, seemed overwhelmingly to accept the old-line conservationists’ assessment of population. Most of the new, more liberal environmental groups formed at the time rejected the New Left’s opposition to fighting never-ending population growth and joined with the conservationists on their population stances.

But the New Left wing of environmentalism reversed its losses in the 1990s, according to Earth First! co-founder Dave Foreman, one of the most publicized and aggressive players in the first twenty years of U.S. environmentalism.43 He said the New Left wing, which he called “Progressive Cornucopians,” established its antistabilization view as the dominant one among the national staffs and boards of many groups, including the Sierra Club.

On the winning side of the 1990s population policy conflict were people like Brad Erickson, coordinator of the Political Ecology Group (PEG), which played a key role in helping the Sierra Club board abandon its proscriptive population stabilization policy in 1996 and then fight off the pro-stabilization Sierra members in 1998.44 Erickson said the fight was a replay of the one at Earth Day 1970, which the New Left greens lost.45 He said the plan of the New Left greens in the 1960s had been to use the environmental issue as one of several they hoped would bloom into a full manifestation of a progressive movement far beyond the confines of traditional American economics and culture. But conservationists hijacked Earth Day, forced their population issues into it and the movement, and have limited the effectiveness of environmentalism ever since, Erickson explained. This view is shared by author Mark Dowie, who argues that population stabilization and immigration reform have retarded the transformation of conservation and preservation-oriented environmentalism into a movement for “environmental justice.”

Cause #5: Immigration, Protected by “Political Correctness,” Became the Chief Cause of U.S. Growth

Modifications in immigration law in 1965 inadvertently started a chain migration through extended family members that began to snowball during the 1970s. At the very time that American fertility fell to a level that would allow population stabilization within a matter of decades, immigration levels were rising rapidly.

By the 1980s, annual immigration had more than doubled and was running above 500,000 a year. By the 1990s, annual average legal immigration had surpassed a million. And that didn’t even include a net addition of 200,000 to 500,000 illegal aliens each year. By the end of the 1990s, immigrants and their offspring were contributing nearly 70 percent of U.S. population growth.47

If immigration and immigrant fertility had been at replacement-level rates since 1972, as native-born fertility had been, the United States would never have grown above 250 million.48 Instead, U.S. population passed 270 million before the turn of the century. And the Census Bureau projected that current immigration and immigrant fertility were powerful enough to push the United States past 400 million soon after the year 2050, on the way past a billion.

The most aggressive group was Zero Population Growth-before it shifted away from being an environmental organization. A 1977 Washington Post story revealed the public way ZPG confronted immigration.49 Under the headline “Anti-Immigration Campaign Begun,” the story began: “The Zero Population Growth foundation is launching a nationwide campaign to generate public support for sharp curbs on both legal and illegal immigration to the United States.” It quoted Melanie Wirken, ZPG’s Washington lobbyist, saying the group favored a “drastic reduction in legal immigration” from levels that were then averaging about 400,000 a year. The article reported that ZPG was adding another lobbyist so that Wirken could devote all of her time to immigration issues.

The Sierra Club urged the federal government to conduct a thorough examination of U.S. immigration policies and their impact on U.S. population trends and how those trends affected the nation’s environmental resources. “All regions of the world must reach a balance between their populations and resources,” the Club added.50 Then in 1980, the Sierra Club testified before Father Hesburgh’s Select Commission on Immigration and Refugee Policy: “It is obvious that the numbers of immigrants the United States accepts affects our population size and growth rate. It is perhaps less well known the extent to which immigration policy, even more than the number of children per family, is the determinant of future numbers of Americans.” The Club said it is an “important question how many immigrants the United States wants to accept and the criteria we choose as the basis for answering that question.” In 1989, the Sierra Club National Population Committee declared that “immigration to the U.S. should be no greater than that which will permit achievement of population stabilization in the U.S.,” a policy confirmed by the Club’s Conservation Coordinating Committee.51 The immigration-reduction advocacy of the Sierra Club and ZPG beginning in the 1970s was affirmed in the Global 2000 Report to the President in 1981, which stated that the federal government should “develop a U.S. national population policy that includes attention to issues such as population stabilization, and . . . just, consistent, and workable immigration laws.”52 It was reaffirmed in the 1996 report of the Population and Consumption Task Force of the President’s Council on Sustainable Development. The task force concluded: “This is a sensitive issue, but reducing immigration levels is a necessary part of population stabilization and the drive toward sustainability.”53

The environmental movement of the late 1990s was willing to miss those environmental goals (and newer ones) for the sake of protecting a level of immigration that was four times higher than the traditional level before the first Earth Day. What was it about the immigration issue that made environmental groups, by and large, meekly acquiesce to a level of immigration that clashed head-on with the fundamental goal of population stabilization? Years of pondering this question have led the authors to the conclusion that, of all the factors involved in the environmental movement’s retreat from U.S. population stabilization, the growing demographic influence of immigration is the single most important one.

Historians will find much to consider in the following possible explanations for the groups’ avoidance of immigration numbers:

  • Fear that immigration reduction would alienate “progressive” allies and be seen as racially insensitive

The primary lens through which most environmental leaders in the 1980s and 1990s seemed to view immigration was not an environmental or labor paradigm but a racial one. According to this paradigm, immigration often appeared to be about nonwhite people moving into a mostly white country, just as whites themselves had done to indigenous Native Americans in previous centuries. To propose reductions in immigration was not seen as reducing labor competition or population growth but as trying to protect the majority status of America’s white population. It was seen as rejecting nonwhite immigrants.54

Australian sociologist Katherine Betts, writing about the “new class” in her own country, put it this way: “The concept of immigration control has become contaminated in the minds of the new class by the ideas of racism, narrow self-seeking nationalism, and a bigoted preference for cultural homogeneity. . . . Their enthusiasm for anti-racism and international humanitarianism is often sincere but there are also social pressures supporting this sincere commitment and making apostasy difficult.” And later: “Ideologically correct attitudes to immigration have offered the warmth of in-group acceptance to supporters and the cold face of exclusion to dissenters.”56 Similar analysis in the United States suggests that it is “politically incorrect” to talk of reducing immigration.

Taboos against challenging immigration policies are enforced by a “political correctness” that often is based on honorable sentiments tied to an individual’s personal connections to immigration. These sentiments are usually strongest among those with the most direct, and recent, immigrant experiences in their immediate families, i.e., those whose spouses, parents, grandparents, or aunts or uncles immigrated to the United States. Sensitivity is heightened still more for those who feel a strong personal identity as members of ethnic groups, such as Irish, Italian, Greek, Slavic, Chinese, Japanese, or Jewish, whose members once fled persecution in other countries or who may have met with discrimination in this country. Even when such a person does recognize that U.S. population growth is problematic, and that immigration is a major contributor to it, he or she may well reason that it would be hypocritical, as a descendant of immigrants and indirect beneficiary of a generous immigration policy, to “close the door” even partially on any prospective immigrant. Dealing with immigration can become almost physically sickening for such people, who feel they must make a choice between environmental protection and their view of themselves as part of an immigrant ethnic group. (For such Americans, their own ethnic group’s experiences seem to obscure the fact that more than 90 percent of present immigrants are not fleeing persecution or starvation but are simply seeking greater material prosperity.) Thus, the response of these Americans to the population dilemma may have more to do with their sense of ethnicity than with any scientific analysis of environmental challenges.

One of the main reasons the Sierra Club leadership gave in 1998 for avoiding the immigration issue was that they dared not risk appearing to be racially insensitive. Executive Director Carl Pope acknowledged that the official endorsers of the referendum trying to confront immigration numbers did not have racially questionable motives. Rather, he admitted, they were esteemed Sierrans and environmental scholars, with distinguished records of environmental service to their country. In fact, Pope said, he used to agree with them that immigration should be cut for environmental reasons. But he changed his mind because he didn’t believe it possible to conduct a public discussion about immigration cuts without stirring up racial passions: “While it is theoretically possible to have a non-racial debate about immigration, it is not practically possible for an open organization like the Sierra Club to do so. . . . [Recent history in California has] caused me to change my view of whether it is possible for the Sierra Club to deal with the immigration issue in a way which would not implicate us in ethnic or racial polarization.” 57 Pope acknowledged that it was the opponents of stabilization who were injecting race into the discussion by publicly “lambasting the club as racist.” But the Sierra Club, he insisted, could not subject itself to those kinds of epithets merely in order to confront the full issue of U.S. population growth.58

ZPG’s president, Judith Jacobsen, addressed the racial issues in a letter to members:

“ZPG is already explicitly committed to building bridges to communities of color and working on immigrants’ rights as part of our long-held goal of improving the success of the population movement by expanding it to include a broad spectrum of American diversity. A policy to reduce legal immigration now would make this work impossible. We want ZPG to strengthen our ties to communities of color, not jeopardize them. In this way, we can build relationships, listen and refine our immigration policy and strategy as the public debate evolves.”59 Jacobsen said the ZPG board voted to take no position on reduction of immigration, “with full knowledge of immigration’s important role in the U.S. population growth, both today and in the future.”

Then-ZPG President Dianne Dillon-Ridgely dismissed any concern about immigration’s contribution to the country’s population growth as illegitimate.60

ZPG Executive Director Peter Kostmayer, when questioned about immigration at a public meeting, told the audience: “Let me be frank. You are a wealthy, middle-class community, and if you concentrate on the issue of immigration as a way of controlling population, you won’t come off well. It just doesn’t work. The population movement has an unhappy history in this regard.”61 About the same time, in a handwritten note to a ZPG member inquiring about the group’s immigration stance, Kostmayer wrote, “It would be so, so counterproductive to be perceived as antiimmigrant.”62

  • The transformation of population and environment into global issues needing global solutions

In 1970, population growth often was discussed in terms of its threat to local or national environmental resources-in countries all over the world. The argument often went something like this: The cultures, traditions, religions, economies, health care, tax structures, and laws of each country create incentives for high birth rates. Each country has to make its own changes to bring down those birth rates to protect its own environmental resources, but nations also must act cooperatively in international efforts to provide financial and technical assistance to those nations requesting them. Because some of the problems of overpopulation are indeed global, each nation has a stake in every other nation moving toward population stabilization.

By the 1990s, most environmental groups’ comments about population growth treated it almost exclusively as a global problem. Population growth rarely was described as a threat to localized environmental resources such as specific watersheds, landscapes, species’ habitats, estuaries, and aquifers. Rather, population growth usually was linked to global (or worldwide) environmental problems such as biodiversity losses, climate change, and the decline of the oceans.63

Under the new thinking, the population size of individual nations was not nearly as important as the size of the total global population. Certain top leaders of the environmental groups said this was a significant reason they no longer saw U.S. population stabilization per se as a priority goal. They especially lost interest in U.S. stabilization when in the 1990s long-term U.S. population growth was being driven almost entirely by people in other countries moving to the United States and having their above-replacement-level number of babies in America. In the ascendant “global” view, this migration wasn’t important because it was merely shifting the growth from one part of the globe to another; the global problem was not increasing because of it, they reasoned. The Sierra Club’s Carl Pope said: “I seriously doubt that anyone is in a position to calculate exactly which changes in immigration policy would minimize GLOBAL environmental stress.”

Executive Director Pope wrote in Asian Week that overpopulation and its effects on the environment are “fundamentally global problems; immigration is merely a local symptom. . . . Erecting fences to keep people out of this country does nothing to fix the planet’s predicament. It’s the equivalent of rearranging the deck chairs on the Titanic.” 66

  • Influence of human rights organizations

The influence of human rights groups and philosophies on environmental leaders may be another part of the explanation for why environmental groups were not willing to work for population stabilization in the 1990s. Michael Hanauer, a ZPG leader in the Boston chapter, who resigned from ZPG’s national board in 1998, pointed out that environmental groups no longer dealt with U.S. stabilization because “much of their roots, associations, history, knowledge, empathies and even networking was within the human rights movement. Offending these groups was not in the cards.” 67

Throughout the U.S. human rights community had arisen various concepts of the human right of poor workers to cross national borders if they could improve their economic condition by taking jobs in another country. Most U.S. human rights organizations, with the American Civil Liberties Union the best-known example, actively lobbied against any reductions in immigration. Dozens of human rights organizations were formed specifically to advocate for the rights of immigrants and for immigration.

Environmental leaders in the 1990s increasingly worked in coalitions with the human rights groups, especially on international environmental, trade, and development issues and on antitoxics crusades. It appears that people moved easily back and forth between human rights and environmental jobs.

In the 1990s, for example, Sierra Club officials began to appoint people from human rights organizations to the Club’s National Population Committee. These individuals came from organizations which argued that population growth is not a cause of problems in the United States or the rest of the world. They had opposed stabilization efforts before being appointed, and they were among the most aggressive leaders in working to change the Sierra Club’s pro-stabilization policy and in fighting the referendum that failed to reestablish the policy.68

While the agendas of the human rights and environmental groups should not be seen as fundamentally at odds with each other, they nonetheless are not the same. The human rights agenda is about protecting freedoms and rights of individuals here and now. The environmental agenda since the inauguration of the conservation and preservation movements a century ago, and since its rejuvenation and reorientation in the 1960s, has been about protecting the natural and human environments, now and in perpetuity.

The human rights agenda is by necessity oriented toward the immediate needs of individuals. The environmental agenda has often also dealt with immediate threats but just as often works for goals that are far into the future.69

Human rights work is about people getting their full share of rights; its ideal is freedom. Environmental work is often about asking or forcing people to restrain their rights and freedoms in order to protect the natural world from human actions, so that people who are not yet born might someday be able to enjoy and prosper in a healthy, undiminished environment. The fact that human rights work and environmental work involve tensions between goals and philosophy does not mean that either of them must be seen as wrong or right.

  • Triumph of ethics of globalism over ethics of nationalism/internationalism

Globalism refers to the elimination of the sovereign nation-state as a locus of community, loyalty, economy, laws, culture, and language. The heart of the difference between globalism and nationalism is an ethical viewpoint: whether a community has the right, or even the responsibility, to give priority attention to the members of its own community over people outside the community.71

That relates to whether a nation has the right to protect its own environmental resources before it succeeds in helping some other country to preserve its environment. Is it ethical to stabilize the population of one’s own country when other countries are still growing? Is it ethical to bar a human being who is alive today from immigrating and advancing economically if the reason for barring the immigration is to preserve the natural resources of the target country for the benefit of human beings not yet born?

The ethical basis of nationalism is the idea of a community in which every member has a certain responsibility for everybody else in that community. The highest priority of a national government under the nationalist ethic is the members of that community. This has been the dominant ethical principle in the United States and most other nations, in which the national government is expected to establish laws and regulations concerning trade, labor, capital, civil rights, and the environment based primarily on their effects on the people of its own nation.

The globalist ethic that we describe here is less communitarian and more individualistic. It gives a higher ethical value to the freedom of an individual (and by extension, the corporate bodies owned by individuals) to act with fewer or no restrictions by national governments. This ethic similarly unleashes workers around the world to cross borders to work in ways that maximize their incomes and unleashes corporations to move capital, goods, and labor in ways that maximize their profits.72 Under a globalist ethic, immigration policy should not be used to protect America’s poor if it blocks the economic improvement of even poorer workers from other countries.

One of the most common arguments by environmental opponents of U.S. population stabilization in 1998 was that it would be unethical to protect U.S. environmental resources and achieve U.S. population stabilization at the expense of workers and their families from other nations who would not be allowed to move here to better their lives. Another major argument was that stabilizing the U.S. population merely protected U.S. ecosystems at the expense of ecosystems in other countries where population would be higher because people weren’t allowed to emigrate.73

Under the more globalist ethic of the 1998 Sierra leaders (and the leaders of many other environmental organizations who publicly or privately supported them), it was seen as both selfish and futile for the United States to stabilize its own population before the rest of the world does. In fact, some leaders suggested that even when the rest of the world’s population does stabilize, Congress should not reduce immigration and U.S. population growth so long as some countries remain poor. Only when socioeconomic conditions in the rest of the world are high enough that foreign workers no longer want to move to the United States should this country be allowed to stabilize its population.

There is little sign that the leaders of those groups or their members did any calculations as to what it would take to achieve such grand goals of eliminating global poverty-or whether there was any practicality at all in the thought of raising the living standards of more than 4 billion impoverished Third World citizens high enough that they would not want to immigrate to the United States.

Daniel Quinn, author of Ishmael (something of a cult favorite among environmentalists), observed: “We have encouraged people to think that all we have to do to end our population expansion is to end economic and social injustice all over the world. This is a will-o’-the-wisp because these are things that people have been striving to do for thousands of years without doing them. And why we think that this will be doable in the next few years is quite bizarre to me. They don’t recognize any of the biological realities involved.”76 The most likely scenario, according to geopolitical elder statesman George F. Kennan, is that current quadrupled immigration to the United States will decline naturally “only when the levels of overpopulation and poverty in the United States are equal to those of the countries from which these people are now anxious to escape.”77

  • Fear of demographic trends

Still another reason environmental groups didn’t want to tackle immigration numbers to slow U.S. population growth may have been their fear of changing demographics. As the population of foreign-born Americans and their children rose ever higher, they became an increasingly powerful political bloc that many environmental leaders feared could thwart environmentalist initiatives and legislation if it perceived environmental groups to be hostile to immigration.

Particularly in California, where the foreign born and their children already comprised more than a third of the population, Sierra leaders worried aloud not only that advocating U.S. population stabilization might cost them the support of immigrants and their friends and families, but also that sensitive political alliances with ethnic politicians could be jeopardized. The executive director of the California League of Conservation Voters, an organization immersed in state politics, pleaded with the Sierra Club not to “commit suicide over the immigration issue. This is something the environmental community cannot afford.”78

In this fearful way of thinking, advocacy of immigration reduction to stabilize the population and protect the environment can only be seen by those ethnic or racial minorities whose numbers are significantly augmented by immigration as an attempt to prevent them from becoming a majority of the population in California during the next few years, and of the country later in the next century. Having their future power thus threatened by environmentalists, these groups would insist that their elected officials vote against environmental protection measures, according to the demographic-fear scenario.

There were some reasons for the environmental leaders to have adopted such a belief during the Sierra Club’s referendum campaign. They heard from some self-appointed immigrant spokespersons who made the threat of retaliation. And Sierra leaders may have drawn similar conclusions from a contingent of California Democratic state-level politicians, many of them Latinos, who directly challenged the Club to defeat the immigration-reduction referendum. “A position by the club to further limit immigration would be considered immigrant bashing by many elected officials of color,” wrote Santos Gomez (an appointed member of the Club’s National Population Committee) in a newspaper op-ed piece.80

If immigrants did retaliate, that would be something “the environmental community cannot afford.” It would not be a question of whether the environment could afford another doubling of the U.S. population but whether the environmental community could afford immigrant retaliation if environmentalists tried to stop the doubling. Protection of environmental institutions may have been placed ahead of protection of the environment itself.

  • The power of money

There were many observers, and players, in the 1990s who suggested that the shifts in population emphasis had more to do with the funding of environmental groups than with any other factor.

With scores of environmental groups competing with each other for members and donors, each needed special programs and actions to distinguish itself, and programs that could yield short-term victories to tout to donors. Even under very favorable circumstances, a campaign for U.S. population stabilization cannot achieve its goal for several decades, and the benefits are not easily seen at first, while many other environmental crusades bring about faster, more tangible results. Stabilizing population doesn’t improve the environment; rather, it keeps environmental conditions from growing worse. You can’t photograph the bad things that you prevented, because they didn’t happen. Which direct-mail package is likely to raise more money: newspaper clippings about forcing the removal of a dam, cleaning up smog, and establishing a park, or a headline stating that the rate of population growth declined incrementally from the previous year? How much of this kind of thinking occurred inside the environmental groups?

The 1998 edition of the catalogue Environmental Grantmaking Foundations82 listed 180 foundations that specified population as an area of environmental gift-giving. Yet these and most other foundations interested in underwriting population programs had a distinctly global perspective and were focused on family planning, women’s empowerment, and reproductive health issues. The experience of the 1990s showed that fewer than ten foundations in the entire country were willing and able to significantly fund nonprofit groups with a clear U.S. population stabilization agenda.

Then there is the possibility that corporate donors actively steered groups away from population issues. In his book Living Within Limits, Garrett Hardin asserted that the corporate and philanthropic foundations that funded the twentieth anniversary of Earth Day in 1990 let it be known that they would not look kindly on the event having a population emphasis.83 So in contrast to Earth Day 1970, there was none.

It may be that the greatest fear that corporations had of environmental groups was not the environmental regulations they advocated but a cutoff of the U.S. population growth that fueled ever-expanding consumer markets, land development, and construction. In addition, those same forces had an intense self-interest in a growing labor pool to keep the cost of labor down. Corporate leaders knew that U.S. population growth would eventually come to a halt without continued high immigration. How many of those leaders had influence over corporate and foundation philanthropy to environmental groups? “As baby boomers age and domestic birthrates stagnate, only foreign-born workers will keep the labor pool growing. . . . Economic dynamism, in other words, will depend on a continuing stream of foreign-born workers,” opined an article in Business Week.84

During the Sierra Club battle over population policy in 1998, Sierra leaders warned that foundations and major individual donors had said that they would withdraw hundreds of thousands of dollars in previously pledged grants if the members of the Club took a stand in favor of reducing immigration.85

The Sierra Club national board also found itself in the previously unheard-of position of being endorsed by the Home Builders Association of Northern California. This development group applauded the position of the Sierra Club board to accept the current immigration level, which is projected to force California’s home-needing population to 50 million by 2025.86

Three well-endowed foundations (Pew, Turner, and Rockefeller) gave grants in support of a book whose very title, Beyond the Numbers: A Reader on Population, Consumption, and the Environment,87 reveals a shift away from sheer numbers of people as the primary concern. And in November 1995, in Washington, D.C., the Pew Global Stewardship Initiative co-sponsored a one-day “Roundtable Discussion on Global Migration, Population, and the Environment” with the nation’s main coalition supporting high immigration numbers (the National Immigration Forum). According to Mark Krikorian of the Center for Immigration Studies, who was present, this meeting was “clearly an attempt to keep environmental groups from going off the reservation and supporting immigration cuts then being debated in Congress.”88

Historians need to explain how an environmental issue as fundamental as U.S. population growth could have moved from center-stage within the American environmental movement to virtual obscurity in just twenty years. For the American environment itself, the ever-growing demographic pressures ignored by the environmental establishment showed no signs of abating on their own as the nation prepared to enter the twenty-first century.

References

1. Stewart L. Udall, The Quiet Crisis and the Next Generation (Salt Lake City, 1963, 1988), 239.
2. Paul R. Ehrlich, The Population Bomb (New York, 1968); Rachel L. Carson, Silent Spring (Boston, 1962).
3. Stephen Fox, John Muir and His Legacy: The American Conservation Movement, 1890–1975 (Boston, 1981).
4. Examples include the University of Georgia’s Eugene P. Odum, a leading ecologist and author of the textbook Fundamentals of Ecology (Philadelphia, 1971); the University of California–Davis’ Kenneth E. F. Watt, a pioneering systems modeler and author of Principles of Environmental Science (New York, 1973); the Conservation Foundation’s Raymond Dasmann, a zoologist and author of The Destruction of California (New York, 1965); the University of California–Berkeley’s Daniel B. Luten, a chemist, natural resource specialist, and author of Progress Against Growth (1986); and the University of California–Santa Barbara’s Garrett Hardin, a human ecologist, president of the Pacific Division of the American Association for the Advancement of Science, and author of the most reprinted article ever—“The Tragedy of the Commons”—in the prestigious journal Science (13 December 1968).
5. Edward Goldsmith, Robert Allen, Michael Allaby, John Davoll, and Sam Lawrence, A Blueprint for Survival (New York, 1972), 48. The authors were all editors of The Ecologist.
6. Gaylord Nelson, personal communication, 1998. Former U.S. Senator Nelson is widely credited as the founder of Earth Day.
7. Doug LaFollette et al., “U.S. Sustainable Population Policy Project—Planning Document,” unpublished, 20 June 1998. Doug LaFollette is Secretary of State of Wisconsin.
8. PL 91-190; 83 Stat. 852, 42 U.S.C. 4321.
9. R. B. Smythe, “The Historical Roots of NEPA,” in Environmental Policy and NEPA: Past, Present, and Future, ed. Ray Clark and Larry Canter (Boca Raton, 1997), 12.
10. 42 U.S.C. 4331.
11. Commission on Population Growth and the American Future, Population and the American Future (Washington, D.C., 1972). Excerpt above from transmittal letter.
12. Sierra Club Board of Directors policy adopted, 3–4 May 1969.
13. Resolution sponsored and circulated by ZPG; adopted by the Sierra Club on 4 June 1970.
14. T. Michael Maher, “How and Why Journalists Avoid the Population-Environment Connection,” Population and Environment 18.4 (1997).
15. T. Michael Maher, personal communication with the author, 1998.
16. Dirk Olin, “Divided We Fall? The Sierra Club’s debate over immigration may be just the beginning,” Outside 23 (July 1998).
17. President’s Council on Sustainable Development, Sustainable America: A New Consensus for Prosperity, Opportunity, and a Healthy Environment (Washington, D.C., 1996). The council included representatives of a wide range of interests and backgrounds, including environmentalists, population activists, women’s groups, minorities, academics, and business leaders, as well as cabinet-level federal officials. Quotes from chapter 6 and chapter 1, respectively.
18. Paul R. Ehrlich and John P. Holdren, “Impact of Population Growth,” Science 171 (1971), 1212–17.
19. Mathis Wackernagel and William Rees, Our Ecological Footprint: Reducing Human Impact Upon the Earth (Philadelphia, 1996).
20. Council on Environmental Quality, Environmental Quality: 25th Anniversary Report (Washington, D.C., 1997).
21. Ibid.
22. In 1970, the “black and other” Total Fertility Rate (TFR) was 3.0 (National Center for Health Statistics, Historical Statistics of the United States: Colonial Times to 1970 [1976]). By 1997, black fertility had fallen to 2.2, slightly above the general population’s replacement rate of 2.1. Overall Hispanic fertility even in 1997 stood at 3.0, well above replacement level. That of Mexican-born women residing in the U.S. was 3.3 (National Center for Health Statistics, National Vital Statistics Report, vol. 47, no. 18, 1999)—actually higher than the fertility rate of women in Mexico itself (2.9 in 1998 according to the U.S. Census Bureau at http://www.census.gov/cgibin/ipc/idbsum).
23. See note 11 above, pp. 72–73.
24. See note 11 above, p. 72.
25. According to the National Center for Health Statistics, the TFR of non-Hispanic white females was 1.8 in 1997 (compared to 2.1 for replacement level). Using Census Bureau data, it can be calculated that in 1970, non-Hispanic whites comprised 83 percent of the U.S. population and accounted for approximately 78 percent of the births. By 1994, non-Hispanic whites comprised 74 percent of the population and accounted for 60 percent of the births. With immigration included (approximately 90 percent of which originates from non-European sources), the non-Hispanic white share of current population growth drops well below 50 percent. According to medium projections of the Census Bureau and the National Research Council of the National Academy of Sciences, non-Hispanic whites will account for 6 percent of the nation’s population growth between 1995 and 2050, blacks for 18 percent, Asians for 20 percent, and Hispanics for 54 percent (James P. Smith and Barry Edmonston, eds., The New Americans: Economic, Demographic, and Fiscal Effects of Immigration [Washington, D.C., 1997], table 3.7). By 2050, non-Hispanic whites are projected to have declined to 51 percent of the U.S. population from 87 percent in 1950 (table 3.10, The New Americans).
26. James Scheuer, “A Disappointing Outcome: United States and World Population Trends Since the Rockefeller Commission,” The Social Contract 2.4 (1992).
27. “The Vatican and World Population Policy: An Interview with Milton P. Siegel,” The Humanist (March–April 1993).
28. David Simcox, “Twenty Years Later: A Lost Opportunity,” The Social Contract 2.4 (1992).
29. Stephen D. Mumford, The Life and Death of NSSM 200: How the Destruction of Political Will Doomed a U.S. Population Policy (Research Triangle Park, N.C., 1996).
30. Carl Bernstein, “The Holy Alliance,” Time, 24 February 1992.
31. George D. Moffett, Critical Masses: The Global Population Challenge (New York, 1994), 190.
32. George Weigel, “What Really Happened at Cairo, and Why,” in The Nine Lives of Population Control, ed. Michael Cromartie (Washington, D.C., 1995), 145.
33. Lindsey Grant, “Multiple Agendas and the Population Taboo,” Focus 7.3 (1997); reprinted from chapter 16 of Juggernaut: Growth on a Finite Planet (Santa Ana, Calif., 1996).
34. Judy Kunofsky, post to on-line Sierra Club population forum, 1997. Dr. Kunofsky was on the ZPG Board of Directors from 1972 to 1984 and was president from 1977 to 1980; Joyce Tarnow, personal communication, 1998. Tarnow is president of Floridians for a Sustainable Population.
35. Celia Evans Miller and Cynthia P. Green, “A U.S. Population Policy: ZPG’s Recommendations,” Zero Population Growth policy paper, 1976.
36. Alan Kuper, “ZPG or ZCG?” e-mail to list, 10 April 1999. Kuper, a long-time Sierra member and one of the population activists who spearheaded the 1998 referendum, pointed out that seven out of ten questions on ZPG’s latest Earth Day quiz related to consumption. “Based on what I have, I’d say ZPG is promoting in classrooms across the US, reduction in consumption more than reduction in numbers.”
37. Judith Jacobsen, “President’s Message,” ZPG Reporter, February 1998.
38. Roderick Nash, Wilderness and the American Mind (New Haven, 1973 rev. ed. [1967]).
39. Samuel P. Hays, Conservation and the Gospel of Efficiency: The Progressive Conservation Movement, 1890–1920 (Cambridge, Mass., 1959, 1969); Douglas H. Strong, Dreamers and Defenders—American Conservationists (Lincoln, Neb., 1971, 1988).
40. Mark Dowie, Losing Ground: American Environmentalism at the Close of the Twentieth Century (Cambridge, Mass., 1995), 127.
41. Barry Commoner, The Closing Circle (New York, 1971).
42. James R. Hepworth and Gregory McNamee, Resist Much, Obey Little: Remembering Ed Abbey (San Francisco, 1996), quote at p. 104; John F. Rohe, A Bicentennial Malthusian Essay: Conservation, Population, and the Indifference to Limits (Traverse City, Mich., 1997).
43. Dave Foreman, “Progressive Cornucopianism,” Wild Earth 7.4 (1998).
44. A 1998 fundraising letter from PEG claimed that “Sierra grassroots leaders told us that ‘The Sierra Club would not have won this vote without PEG,’” an assessment that PEG’s adversaries would probably agree is not far off the mark.
45. Brad Erickson, personal interview, May 1998.
46. See note 40 above.
47. Steven A. Camarota, “Immigrants in the United States—1998: A Snapshot of America’s Foreign-born Population,” Backgrounder (Washington, D.C., 1999).
48. Poster Project for a Sustainable U.S. Environment, 1998. Based on Census Bureau data.
49. Susan Jacoby, “Anti-Immigration Campaign Begun,” Washington Post, 8 May 1977.
50. Sierra Club Board of Directors, “U.S. Population Policy and Immigration,” adopted 6–7 May 1978.
51. Sierra Club Population Report (Spring 1989).
52. Gerald O. Barney, “Global Future: Time to Act,” in The Global 2000 Report to the President, a report prepared for President Carter by the Council on Environmental Quality and U.S. Department of State, 1981, p. 11.
53. President’s Council on Sustainable Development, Population and Consumption Task Force Report (Washington, D.C., 1996).
54. Emil Guillermo, “The Sierra Club’s Nativist Faction,” San Francisco Examiner, 17 December 1997.
55. Robert Reich, The Work of Nations: A Blueprint for the Future (New York, 1991).
56. Katherine Betts, The Great Divide: Immigration Politics in Australia (Sydney, 1999), 5, 29.
57. Carl Pope, on-line post to Sierra members, 1997.
58. Club leaders appeared unaware of or unimpressed by the numerous surveys over the years which have indicated that majorities of most minorities favor reduced immigration levels. For instance, in a February 1996 Roper poll, 73 percent of blacks and 52 percent of Hispanics favored cutting immigration to 300,000 or fewer annually. The 1993 Latino National Political Survey, largest ever done of this ethnic group in the United States, found that 7 in 10 Latino respondents—higher than the percentage of “Anglos”—thought there were “too many immigrants.” A Hispanic USA Research Group poll (1993) found that three-quarters of Hispanics believed fewer immigrants should be admitted. A majority of Asian-American voters in California cast ballots in favor of Proposition 187 in 1994. Findings such as these should have allayed the Club leadership’s ostensible fears that even a principled stand against (what Club icon David Brower termed) “overimmigration” strictly on environmental grounds would spark a minority backlash. But they did not. It may well be that the Club establishment cared more about the opinions of minority elites and self-appointed “leaders” than they did about rank-and-file minority opinion.
59. See note 37 above.
60. Personal communication from an individual present at the conference, 1999.
61. Georgia C. DuBose, “ZPG official says law, local action can cut population,” The Journal (Martinsburg, W.Va.), 29 March 1998.
62. Peter Kostmayer, letter to ZPG member, 30 March 1998.
63. A prime example of this global view is Al Gore’s 1992 book Earth in the Balance (Boston, 1992). In 1998 Vice-President Gore again explicitly linked population growth to global issues when he touted increased family-planning support as one means of combating global warming.
64. Carl Pope, post to on-line Sierra Club population forum, 16 December 1997.
65. Brock Evans, “The Sierra Club Ballot Referendum on Immigration, Population, and the Environment,” Focus 8.1 (1998). Evans is the executive director of the Endangered Species Coalition, and a former vice-president for National Issues of the National Audubon Society, associate executive director of the Sierra Club, 1981 recipient of the Club’s highest honor (the John Muir Award), and a 1984 candidate for Congress from the state of Washington.
66. Carl Pope, “Think Globally, Act Sensibly—Immigration is not the problem,” Asian Week (San Francisco), 2 April 1998. The irony of using the Titanic analogy to represent overpopulation and immigration is that if the HMS Titanic’s bulkheads had been sealed and reached all the way up (a standard feature in ships nowadays) instead of just part way, the ship might have been saved from sinking because in-rushing ocean water would have been confined to several compartments instead of spilling over the top of each bulkhead into subsequent ones. (The Titanic could flood four compartments and still float. It breached five.) Thus, the opposite conclusion can be drawn from this maritime tragedy, namely, that barriers between distinct nation-states may well be essential to preventing one country’s failure to address overpopulation from becoming the whole world’s failure. Economist and philosopher Kenneth Boulding (author of “The Economics of the Coming Spaceship Earth”), in another of his insightful essays, wrote that what really disturbed him was the possibility of converting the world from a place of many experiments into one giant, global experiment.
67. Michael Hanauer, “Why Domestic Environmental Organizations Won’t Visibly Advocate Domestic Population Stabilization,” draft of unpublished manuscript, 1999.
68. See note 43 above.
69. See note 67 above.
70. See note 67 above.
71. Roy Beck, “Sorting Through Humanitarian Clashes in Immigration Policy,” paper presented at the Annual National Conference on Applied Ethics, California State University at Long Beach, 1997.
72. For more detailed descriptions and critiques of corporate globalism, see Sir James Goldsmith, “Global Free Trade and GATT,” Focus 5.1 (1995), excerpted from his book Le Piege; Herman E. Daly, “Against Free Trade and Economic Orthodoxy,” The Oxford International Review (Summer 1995); idem, “Globalism, Internationalism, and National Defense,” Focus 9.1 (1999); Jerry Mander and Edward Goldsmith, eds., The Case Against the Global Economy: And for a Turn Toward the Local (San Francisco, 1997); and David Korten, When Corporations Rule the World (West Hartford, Conn., and San Francisco, 1995).
73. In a 1998 post to the on-line Sierra Club population forum, Executive Director Carl Pope cited a hypothetical example of 100,000 peasants moving from the Guatemalan highlands to the Peten rainforest (also in Guatemala) versus their moving to Los Angeles, and concluded that the former was worse for the global environment. Similarly, environmental filmmaker and author Michael Tobias (World War III: Population and the Biosphere at the Millennium [Santa Fe, 1994]), when questioned after a 1994 Los Angeles speech on overpopulation, said he would favor relocating people from rapidly-growing tropical countries with high and threatened biodiversity to countries like the United States with less biodiversity, although he admitted this idea was “quixotic.”
74. ZPG Reporter, February 1998.
75. William Branigin, “Sierra Club Votes for Neutrality on Immigration: Population Issue ‘Intensely Debated,’” Washington Post, 26 April 1998; John H. Cushman Jr., “Sierra Club Rejects Move to Oppose Immigration,” New York Times, 26 April 1998.
76. Daniel Quinn and Alan D. Thornhill, “Food Production and Population Growth,” video documentary supported by the Foundation for Contemporary Theology (Houston, 1998).
77. George F. Kennan, Around the Cragged Hill: A Personal and Political Philosophy (New York, 1993).
78. John H. Cushman Jr., “An Uncomfortable Debate Fuels a Sierra Club Election,” New York Times, 5 April 1998.
79. Ben Zuckerman, “Will the Sierra Club Be Hurt If the Ballot Question Passes?” in Population and the Sierra Club: A Discussion of Issues About the Upcoming Referendum, ed. Alan Kuper, Dick Schneider, and Ben Zuckerman (1998), 8-page discussion paper distributed by Sierrans for U.S. Population Stabilization.
80. Santos Gomez, op-ed in San Francisco Chronicle, 17 November 1998.
81. Home Builders Association of Northern California, “Behind the Sierra Club Vote on Curbing Immigration: Do Environmentalists Risk Alienating the Fastest-growing Ethnic Group in California?” HBA News 21.1 (February 1998).
82. Environmental Grantmaking Foundations 1998 (Rochester, N.Y.: Resources for Global Sustainability).
83. Garrett Hardin, Living Within Limits (New York, 1993).
84. Howard Gleckman, “A Rich Stew in the Melting Pot,” Business Week, 31 August 1998.
85. Alan Kuper, personal communication based on meeting with Sierra Club executive director, 1998.
86. See note 81 above.
87. Laurie Ann Mazur, ed., Beyond the Numbers: A Reader on Population, Consumption, and the Environment (Washington, D.C., 1994).
88. Mark Krikorian, personal communication, 1999.

 


Posted in Population | Comments Off on Why did the environmental movement drop the issue of overpopulation?

70 million people may need emergency food in 2017

Emergency food assistance needs unprecedented as Famine threatens four countries. January 25, 2017. Famine Early Warning Systems Network (fews.net)

The Famine Early Warning Systems Network is a leading provider of early warning and analysis on food insecurity. Created by USAID in 1985 to help decision-makers plan for humanitarian crises, FEWS NET provides evidence-based analysis on some 35 countries. Implementing team members include NASA, NOAA, USDA, and USGS, along with Chemonics International Inc. and Kimetrica.


Figure 1. Estimated population in need of emergency food assistance, 2015–2017. Sources: FEWS NET, OCHA, Southern Africa RVAC. Note: Fiscal years run from October 1 through September 30; see Figure 2 for the countries included in these estimates.

In combined magnitude, severity, and geographic scope, the emergency food assistance needs anticipated during 2017 are unprecedented in recent decades. Given persistent conflict, severe drought, and economic instability, FEWS NET estimates that 70 million people across 45 countries will require emergency food assistance this year. Four countries – Nigeria, Somalia, South Sudan, and Yemen – face a credible risk of Famine (IPC Phase 5). In order to save lives, continued efforts to resolve conflict and improve humanitarian access are essential. In addition, given the scale of anticipated need, donors and implementing partners should allocate available financial and human resources to those areas where the most severe food insecurity is likely.

Food insecurity during 2017 will be driven primarily by three factors. Most importantly, persistent conflict is disrupting livelihoods, limiting trade, and restricting humanitarian access across many regions, including the Lake Chad Basin, the Central African Republic, Sudan, South Sudan, the Great Lakes Region, Somalia, Yemen, Ukraine, Syria, Iraq, and Afghanistan. A second important driver is drought, especially the droughts associated with the 2015/16 El Niño and the 2016/17 La Niña. In Southern Africa and the Horn of Africa, significantly below-average rainfall has sharply reduced crop harvests and severely limited the availability of water and pasture for livestock. In Central Asia, snowfall to date has also been below average, potentially limiting the water available for irrigated agriculture during 2017. Finally, economic instability, related to conflict, to a decline in foreign reserves due to low global commodity prices, and to the associated currency depreciation, has contributed to very high staple food prices in Nigeria, Malawi, Mozambique, South Sudan, and Yemen.

As a result of these principal drivers, FEWS NET estimates that 70 million people across 45 countries will face Crisis (IPC Phase 3) or worse acute food insecurity and will require emergency food assistance during 2017 (Figure 2). This marks the second consecutive year of extremely large needs, with the size of the acutely food insecure population roughly 40 percent higher than in 2015 (Figure 1). The countries likely to have the largest acutely food insecure populations during 2017 are Yemen, Syria, South Sudan, and Malawi. Together, these four countries account for roughly one-third of the total population in need of emergency food assistance.

In addition to the sheer size of the food insecure population, a persistent lack of access to adequate food and income over the past three years has left households in the worst-affected countries with little ability to manage future shocks. Given this reduced capacity to cope and the possibility that additional shocks will occur, four countries face a credible risk of Famine (IPC Phase 5) during 2017. In Nigeria, evidence suggests that Famine occurred in 2016 and could be ongoing. In both Yemen and South Sudan, the combination of persistent conflict, economic instability, and restricted humanitarian access makes Famine possible over the coming year. Finally, in Somalia, a failure of the October to December 2016 Deyr rains and a forecast of poor spring rains threaten a repeat of 2011, when Famine led to the deaths of 260,000 Somalis. Emergency (IPC Phase 4), characterized by large food gaps, significant increases in the prevalence of acute malnutrition, and excess mortality among children, is also anticipated in southern areas of Malawi, Zimbabwe, Sudan, and Madagascar if adequate assistance is not provided.


Figure 2. Estimated peak size of the population in need of emergency food assistance during FY2017. Sources: FEWS NET, OCHA, Southern Africa RVAC.

Posted in Peak Food | 6 Comments

No, we’re not going to make ethanol out of CO2 and stop global warming

Preface. In the article below Robert Rapier debunks the hype around a research paper proposing to convert CO2 into ethanol. The researchers were honest and acknowledged that the process is unlikely to be economically viable, but the press spun it into a major breakthrough. Or as Rapier puts it: “The bottom line here is if someone presents a scheme for turning air, water, or carbon dioxide into fuel, it necessarily consumes more energy than it produces. It is an energy sink.”

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (Springer, 2015) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

***

Robert Rapier. Oct 27, 2016.  Ethanol From Carbon Dioxide Is Still A Losing Proposition. energytrendinsider.com

If I told you that I had created a process to extract pure gold from seawater, you might deem it an amazing accomplishment. If I issued a press release stating these facts, it very well could go viral.

In fact, the oceans do contain an estimated 20 million tons of dissolved gold, worth close to a quadrillion dollars at the current spot market price. But you may have noticed that I have omitted a very important fact.

I haven’t mentioned how much it costs to produce a troy ounce of gold using the process I have designed. That seems like an important detail, so I explain that the production cost is only $50,000 or so per ounce (an ounce that today sells for about $1,265), but I am sure that with enough investment dollars — and maybe a few government subsidies — I can get that cost down to something more reasonable. (This is how we subsidize some advanced biofuels, whose production costs are an order of magnitude above what could be considered economical.)

Readers immediately understand the problem. You don’t spend more to produce something than you can sell it for. But change the equation to energy instead of money and people suddenly forget that lesson. Or they fail to recognize that is what is taking place.

That brings me to the point of today’s article, one I’m forced to reiterate often: in the world of energy as in most others, there is no free lunch.

Earlier this month a research paper was published by the Department of Energy’s Oak Ridge National Laboratory (ORNL) called “High-Selectivity Electrochemical Conversion of CO2 to Ethanol using a Copper Nanoparticle/N-Doped Graphene Electrode.” The paper reports on some truly interesting science, and the researchers were measured and cautious in their conclusions.

But something got lost in translation as media outlets sought to portray this as a “holy grail,” “game changer,” “major breakthrough” or “solution to climate change.” The benefits, one story said, were unimaginable. Part of the problem, in my opinion, is that the press release from the Department of Energy was titled Scientists Accidentally Turned CO2 Into Ethanol.

The word “accidental” plays into the misconception people have of how science is done. Many take the romantic view that game-changing, eureka discoveries are merely awaiting the next lucky accident, so when they read this headline the translation becomes something like “New Discovery Solves Climate Change.”

That’s because the public loves its energy miracles. People love the idea of a car that can run on water or the car that gets 400 miles per gallon (which of course GM and Ford suppressed) or the magic pill you can pop in your tank that greatly enhances fuel efficiency. So it isn’t surprising that this kind of story goes viral (in notable contrast to the articles debunking these viral stories.)

In order to understand what’s really going on, let’s consider a fundamental principle of thermodynamics.

If you burn something containing a combination of carbon, hydrogen, and oxygen — e.g., gasoline, ethanol, wood, natural gas — that combustion reaction is going to produce heat, carbon dioxide and water. These are the combustion products.

It is possible to reverse the combustion reaction and convert that water and carbon dioxide back into fuel. But you have to add heat. A lot of heat. How much? More than you can get from burning the fuel in the first place. No new catalyst, and no discovery, accidental or otherwise, can get around that fundamental issue without overturning scientific laws observed and confirmed over 150 years.
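
[To put a number on “a lot of heat” (standard thermochemistry, not a figure from Rapier’s article): burning ethanol releases about 1,367 kJ per mole, so the reverse reaction must absorb at least that much before any real-world losses:

$$\mathrm{C_2H_5OH(l) + 3\,O_2(g) \rightarrow 2\,CO_2(g) + 3\,H_2O(l)}, \qquad \Delta H^{\circ} \approx -1367\ \mathrm{kJ/mol}$$

Run right to left, the sign flips: at least +1367 kJ must be supplied per mole of ethanol re-created, and a practical process pays overpotentials, separation, and purification costs on top of that floor.]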

Given that, what can we say immediately about this process? Going back to the fundamentals of thermodynamics, we can say, without a doubt, that the process consumes more energy than it produces. In other words, producing 1 British thermal unit (BTU) of ethanol will require the initial consumption of more than 1 BTU of energy (and will generate CO2 emissions). The resulting 1 BTU of ethanol would ultimately be consumed. The net effect once the ethanol is consumed is more than 2 BTUs’ worth of emissions per BTU of ethanol produced. Or, to be blunt, unless the process can be run on excess renewable or nuclear power (more on that below), converting carbon dioxide into ethanol would actually worsen net carbon dioxide emissions.
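
[A toy carbon ledger makes the arithmetic concrete. This is a minimal sketch under stated assumptions: the grid carbon intensity and the 2:1 electricity-in-to-fuel-out ratio are made-up round numbers, not measurements of the ORNL process.]

```python
# Toy CO2 ledger for a CO2-to-ethanol plant run on fossil electricity.
# Every constant here is an illustrative assumption, not a measurement.

ETHANOL_LHV = 26.8        # MJ per kg, lower heating value of ethanol
CO2_FROM_BURNING = 1.91   # kg CO2 released per kg ethanol burned (88/46)
GRID_INTENSITY = 0.20     # kg CO2 per MJ of electricity (~720 g/kWh, coal-heavy)
ENERGY_RATIO = 2.0        # assumed MJ of electricity per MJ of ethanol made

electricity = ETHANOL_LHV * ENERGY_RATIO   # MJ needed per kg of ethanol
upstream = electricity * GRID_INTENSITY    # CO2 emitted generating that power
captured = CO2_FROM_BURNING                # CO2 fed into the reactor
released = CO2_FROM_BURNING                # CO2 back out at the tailpipe

net = upstream + released - captured       # net CO2 to the atmosphere
print(f"Net emissions: {net:.1f} kg CO2 per kg of ethanol")  # ~10.7 kg
```

[On a fossil-heavy grid the loop emits several times more CO2 than it captures; only near-zero-carbon surplus power changes the sign, which is exactly the caveat Rapier raises below.]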

Now the researchers involved certainly know this. They actually acknowledged in the paper that the process is unlikely to be economically viable. To my knowledge they haven’t intentionally misled anyone.

But the public has been misled in the retelling of the story. I have heard this research presented as “an efficient way of removing carbon dioxide from the atmosphere.” No, that’s not at all what the researchers claimed. They claimed a Faradaic efficiency in the process of 63%. In other words, 63% of the electricity used in the process was utilized in the reaction. They further said that 84% of what was produced was ethanol. That’s the “high-selectivity” part of the title.

But that says nothing at all about the energy consumption required to remove carbon dioxide from the atmosphere so it can participate in this reaction. That is an enormous energy cost because carbon dioxide exists at only 400 parts per million in the atmosphere. Or in the case of passive removal (which is what plants do by means of photosynthesis), the process is very slow.
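
[For scale, a back-of-the-envelope figure of my own, not one from the paper: the thermodynamic minimum work to separate CO2 from air at 400 ppm follows from the entropy of mixing,

$$W_{\min} = RT \ln\frac{1}{x_{\mathrm{CO_2}}} \approx (8.314\ \mathrm{J/mol\,K})(298\ \mathrm{K})\ln(2500) \approx 19\ \mathrm{kJ/mol} \approx 0.44\ \mathrm{MJ/kg\ CO_2},$$

and published estimates for real direct-air-capture systems run several times to an order of magnitude above this ideal floor, an energy bill charged against the ethanol before the electrochemistry even begins.]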

The high Faradaic efficiency and selectivity also provide little information about the overall energy requirements to turn purified carbon dioxide into purified ethanol, but we already know that it’s more than the energy contained in the ethanol. And it could be a lot more, and that could result in a lot more carbon dioxide emissions.
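
[The 63% figure alone lets us sketch the implied electricity bill. The 12-electron stoichiometry for reducing CO2 to ethanol is standard; the 2.3 V full-cell voltage below is an assumed round number, since a complete cell adds the anode reaction and resistive losses to the cathode potential the paper reports.]

```python
# Back-of-the-envelope electricity cost of electrochemical ethanol at the
# paper's 63% Faradaic efficiency. The cell voltage is an assumption.

F = 96485.0            # Faraday constant, coulombs per mole of electrons
ELECTRONS = 12         # 2 CO2 + 12 H+ + 12 e-  ->  C2H5OH + 3 H2O
MOLAR_MASS = 0.04607   # kg per mole of ethanol
LHV = 26.8             # MJ per kg recovered by burning the ethanol

CELL_VOLTAGE = 2.3     # volts, assumed full-cell value for illustration
FARADAIC_EFF = 0.63    # fraction of charge that ends up in products

energy_per_mol = ELECTRONS * F * CELL_VOLTAGE / 1e6      # MJ per mole
energy_in = energy_per_mol / MOLAR_MASS / FARADAIC_EFF   # MJ per kg ethanol

print(f"Electricity in: {energy_in:.0f} MJ/kg   fuel out: {LHV} MJ/kg")
print(f"Input/output ratio: {energy_in / LHV:.1f}x")     # roughly 3.4x
```

[Under these assumptions the cell consumes more than three times the electrical energy that the ethanol returns when burned, before the capture and purification costs above are even counted.]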

There is a way that a process like this that is an energy sink could be viable, and that would be if you had cheap, surplus energy that might otherwise be wasted. For example, if a wind farm or nuclear plant produced far more electricity than the grid could handle, you could envision dumping the excess power into such a process. That could in theory reduce carbon dioxide emissions, but there are a lot of caveats that would warrant a longer discussion. Such an intermittent process brings up its own set of issues, and then there’s the question of whether that would really be the best use of the surplus energy.

The bottom line here is if someone presents a scheme for turning air, water, or carbon dioxide into fuel, it necessarily consumes more energy than it produces. It is an energy sink.

Now, I need to get back to processing ocean water, just as soon as I finish writing this grant proposal for the process.

Posted in Biofuels, Biomass EROI, Critical Thinking, Far Out, Other Experts | 2 Comments