EROI of Canadian Natural Gas. A peak was reached despite enormous investment

[ Although I’ve extracted much of this paper, it is not complete (there are missing equations, figures, tables, and text), so see the paper for details; it is available online.  I’ve rearranged the order of the paper, and the conclusion is just below the introduction.  Some of the important points include:

  1. Natural gas production in Western Canada peaked in 2001 and remained nearly flat until 2006 despite more than quadrupling the drilling rate.
  2. Canada seems to be one of many counterexamples to the idea that oil and gas production can rise with sufficient investment.
  3. The drilling intensity for natural gas was so high that net energy delivered to society peaked in 2000–2002, while production did not peak until 2006.
  4. The industry consumed all the extra energy it delivered to maintain the high drilling effort.
  5. The inability of a region to increase net energy may be the best definition of peak production. This increase in energy consumption reduces the total energy provided to society and acts as a contracting pressure on the overall economy as the industry consumes greater quantities of labor, steel, concrete and fuel.
  6. It is clear that state-of-the-art conventional oil & natural gas extraction is unable to improve drilling efficiency as fast as depletion is reducing well quality.
  7. This pattern shows the falsehood of the idea that additional investment always results in increased production. During the initial rising EROI phase, flat or falling drilling rates can increase production, and during the falling EROI phase, production can fall despite dramatic increases in investment.
  8. There appears to be a maximum energy investment that can be sustained, which is about 15:1 to 22:1 EROI or 5% to 7% of gross energy. [If this is the case], then economic growth may not be possible if more energy is diverted into the energy producing sector. If this minimum exists, then it places a lower bound EROI on any energy source that is expected to become a major component of societies’ future energy mix.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation, 2015, Springer]

Freise, J. November 3, 2011. The EROI of Conventional Canadian Natural Gas Production. Sustainability 2011, 3, 2080-2104.

Abstract: Canada was the world’s third largest natural gas producer in 2008, with 98% of its gas being produced by conventional, tight gas, and coal bed methane wells in Western Canada.

Natural gas production in Western Canada peaked in 2001 and remained nearly flat until 2006 despite more than quadrupling the drilling rate.

Canada seems to be one of many counterexamples to the idea that oil and gas production can rise with sufficient investment.

This study calculated the Energy Return on Energy Invested and Net Energy of conventional natural gas and oil production in Western Canada by a variety of methods to explore the energy dynamics of the peaking process. All these methods show a downward trend in EROI during the last decade.

Natural gas EROI fell from 38:1 in 1993 to 15:1 at the peak of drilling in 2005.

The drilling intensity for natural gas was so high that net energy delivered to society peaked in 2000–2002, while production did not peak until 2006.

The industry consumed all the extra energy it delivered to maintain the high drilling effort. The inability of a region to increase net energy may be the best definition of peak production. This increase in energy consumption reduces the total energy provided to society and acts as a contracting pressure on the overall economy as the industry consumes greater quantities of labor, steel, concrete and fuel. It appears that energy production from conventional oil and gas in Western Canada has peaked and entered permanent decline.

Introduction

At the start of the 21st century we have a lot of pressing questions about our future energy supply: Can the world maintain its oil production plateau? Can natural gas production grow to replace coal and oil? Is it physically possible to grow the economy using renewable energy sources, or even to transition to renewable energy sources? What ties these questions together is a concept called net energy. It takes an investment of energy (in the form of fuel, steel, labor, and more) to produce energy. The net energy is the amount of surplus after this investment has been paid. This surplus is the energy available to operate the rest of the economy. All of these questions may be asked in a simpler form: Can we do X and still maintain or grow the net energy supply? Thus, insight gained from understanding the energy production of fossil fuels may carry over to understanding the growth (or decline) of renewable energy sources.

Canada’s oil and natural gas industry makes an interesting case study for net energy analysis. The country is a very large petroleum producer and was the world’s third largest natural gas producer in 2008 [1], with most of that production coming from the onshore Western Canadian Sedimentary Basin (WCSB). The country went through a peak in oil production in the 1970s and, despite an increase in drilling, could not return to peak rates. Most recently, natural gas production fell from an eight-year plateau despite a 300% increase in the rate of drilling and an even greater increase in investment.

A net energy analysis of Canadian conventional oil and natural gas provides several things: First, it is a measurement of current conditions. How much net energy is being produced now and what is the trend? Second, it provides insight into the net energy dynamics of the production growth, peak/plateau, and decline for oil and natural gas production. Third, it gives some indication of what net energy levels are needed for an energy system to grow and below which levels cause a peak or decline in the energy system.

Net Energy and the Economy.  It takes energy to produce energy. For natural gas and oil production, energy is consumed as fuel to drive drilling rigs and other vehicles, energy to make the steel in drill and casing pipe, energy to heat the homes of the workers and provide them with food. These energy expenditures make up the cost of producing energy. Net energy is the surplus energy after these costs have been paid.


Figure 1. (a) Energy return on energy invested (EROI) 20:1 energy supply & surplus; (b) contraction caused by fall to 10:1 EROI; and (c) Surplus returned by higher end use efficiency.

As costs rise, the energy sector makes a huge increase in its demand for labor, steel, fuel, and other inputs from society at large, shown by a large increase in the red area. But at the same time, the energy sector provides no additional energy with which to create that extra steel, supply the fuel, or support the labor. Society must then cannibalize other sectors to supply the demands of the energy sector, and the non-energy economy contracts. This non-energy sector contraction would then cause a collapse in demand for energy, returning society to somewhere between A and B.

To help formalize this example, assume Figure 1 shows a theoretical energy source supplying 1 Giga Joule (GJ) of energy. The three columns show three different net energy conditions. Column A shows an energy supply that requires 5% of the gross energy as input energy. It has an EROI of 20:1 and a net energy of 95%. Column B shows the same energy source, but where the cost of producing energy has doubled to consume 10% of the gross energy supply. It has an EROI of 10:1 and a net energy of 90%. The transport, refining, and end use efficiency remain the same and so the final surplus has contracted.
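A minimal sketch of this arithmetic in Python (my own illustration, not from the paper; the function names are invented):

    # EROI and net energy as functions of the input fraction (Figure 1).
    def eroi_from_input_fraction(input_fraction):
        """EROI = gross energy / input energy = 1 / input fraction."""
        return 1.0 / input_fraction

    def net_energy_gj(gross_gj, input_fraction):
        """Surplus left after the energy cost of production is paid."""
        return gross_gj * (1.0 - input_fraction)

    for label, frac in [("Column A", 0.05), ("Column B", 0.10)]:
        print(f"{label}: EROI {eroi_from_input_fraction(frac):.0f}:1, "
              f"net {net_energy_gj(1.0, frac):.2f} GJ of 1 GJ gross")
    # Column A: EROI 20:1, net 0.95 GJ of 1 GJ gross
    # Column B: EROI 10:1, net 0.90 GJ of 1 GJ gross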

Column C represents a society that has adapted to the lower EROI energy source by improving efficiency of use and the surplus has returned. The more efficient a society, the lower the net energy supply it may subsist upon. This last point will be important when examining the difference between the peaks in oil and natural gas.

CONCLUSION: The Current State of Western Canadian Natural Gas and Oil Production.  All three methods show a downward trend in EROI during the last decade (Figure 10): the combined oil and gas industry has fallen from a long-term high EROI of 79:1 (about 1% of energy consumed) to a low of 15:1 (7% of energy consumed).


Figure 10. EROI comparison according to technique.

Natural gas EROI reached an even deeper low of 14:1 (7%) or even 13:1 (8%) with the NEB EUR method.


It is clear that state-of-the-art conventional oil and natural gas extraction is unable to improve drilling efficiency as fast as depletion is reducing well quality. The fact that EROI does not rebound to match prior drilling rates, and that the EUR result shows no rebound, indicates that well quality continues to decline. The small rebound in EROI is an artifact of the rolling average technique of methods one and two.

Conventional oil and gas production in the WCSB has peaked. Falling well quality will likely continue to push costs up or production down.

This pattern shows the falsehood of the idea that additional investment always results in increased production. During the initial rising EROI phase, flat or falling drilling rates can increase production, and during the falling EROI phase, production can fall despite dramatic increases in investment.

There appears to be a maximum energy investment that can be sustained, which is about 15:1 to 22:1 EROI or 5% to 7% of gross energy. This might indicate a minimum EROI that can be supported while the economy grows. The minimum was higher for the oil peak than the natural gas peak and this might have been caused by inexpensive imported oil or because the economy had become more energy efficient (Figure 1 column C) allowing a lower minimum EROI.

The natural gas and oil peaks differed when analyzed using net energy. The oil peak showed gross and net energy peaking in the same year, suggesting that some outside factor was responsible for reducing investment. Natural gas showed a net energy peak before the gross production peak, which suggests that price was not the limiting factor on drilling effort. Instead, from 1996 to 2005 the drilling rate for natural gas quadrupled and expenditures rose even faster despite falling net energy, which in turn suggests that falling net energy was the eventual cause of economic contraction and falling prices.

A peak in net energy may be the best definition of “peak” production. When net energy peaks before gross energy it indicates that price was not the limiting factor in the effort to liberate energy. This is a likely model of world net energy production where less expensive imported energy sources cannot replace existing but declining energy sources.

A rise in EROI appears to be possible only when a new resource or region is being exploited, such as the transition from oil to gas as the primary energy production in the WCSB during the late 1980s. This study has focused on conventional natural gas production and it is very uncertain how exploitation of shale gas reserves will change the energy return.

Wider Implications.  Some wider conclusions about renewable energy are suggested by this net energy study. If there is a maximum level of investment between 5% and 7% of gross energy, then economic growth may not be possible if more energy is diverted into the energy producing sector. If this minimum exists then it places a lower bound EROI on any energy source that is expected to become a major component of societies’ future energy mix. For instance, nuclear power with its low EROI is likely below this level [25,26].

Also, if the maximum level of investment is 7% of output energy consumed and a renewable energy source has an EROI of 20:1, or 5%, then the 2% remaining is the maximum that may be invested into growth of the energy source without causing the economy to decline. This radically reduces the rate at which society may change the energy mix that supports it [27].
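The growth-margin arithmetic above can be made explicit (a sketch of my own using the study's figures; the 7% ceiling is the only input, and the names are invented):

    # Margin left for *growing* an energy source: the ~7% investment ceiling
    # suggested by the study, minus the fraction (1/EROI) needed just to
    # sustain current production.
    MAX_INVESTMENT_FRACTION = 0.07

    def growth_margin(eroi):
        sustaining_fraction = 1.0 / eroi
        return MAX_INVESTMENT_FRACTION - sustaining_fraction

    for eroi in (20, 15, 10):
        print(f"EROI {eroi}:1 -> {growth_margin(eroi):+.1%} of gross output "
              f"available for growth")
    # EROI 20:1 -> +2.0%; EROI 15:1 -> +0.3%; EROI 10:1 -> -3.0% (decline)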

This study does not attempt to estimate the EROI or net energy of shale gas, but some caution is warranted by comparison between these results and some cursory findings for the cost of shale gas. The International Energy Agency’s World Energy Outlook 2009 contained a graph showing the cost of natural gas production in the Barnett Shale (Figure 11). The core (best) counties, Johnson and Tarrant, show the lowest cost while counties outside the core production region show higher costs.

A very rough comparison can be made to the costs in this report. If the royalty amounts are subtracted and inflation adjusted into $2002 values, the Johnson County cost would be $2.94 resulting in an EROI of roughly 15:1 (7% of output consumed). This is not much higher than the lowest EROI values found in the WCSB. All the remaining Barnett Shale costs are much higher. Hill and Hood would have an EROI of 8:1 and Jack and Erath would have an EROI of roughly 5:1 (22% of output energy consumed in extraction). Given the history of the WCSB production peaks, it is hard to see how shale gas production could be much increased with such low net energy values. Shale gas may have a very short lived EROI increase over conventional while the core counties are exploited and then suffer a production collapse as EROI falls rapidly. This would fit the pattern seen with oil and then with natural gas in the WCSB.
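That cost-to-EROI conversion can be reproduced with the study's energy intensity factor (a rough check of my own; it recovers approximately the 15:1 quoted above):

    # Convert a production cost in $(U.S. 2002)/GJ into an EROI using the
    # study's intensity factor of 24 MJ per $(U.S. 2002).
    MJ_PER_USD2002 = 24.0

    def eroi_from_cost(cost_usd2002_per_gj):
        input_mj_per_gj = cost_usd2002_per_gj * MJ_PER_USD2002
        return 1000.0 / input_mj_per_gj   # 1 GJ of output = 1000 MJ

    print(f"Johnson County at $2.94/GJ -> EROI ~{eroi_from_cost(2.94):.0f}:1")
    # ~14:1, i.e. roughly the 15:1 (about 7% of output consumed) cited above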

The IEA WEO 2009 also contains Figure 12, an illustration of a world view that increasing cost will liberate more and more energy for use by society.


Figure 12. Modified from the IEA WEO 2009 [28] with dotted lines added to illustrate concept of net energy reducing the total volume of energy available as resource quality declines.


Conventional gas reservoirs, now peaked in production and shrinking in the WCSB, are seen as the small tip of a huge number of other resources that could be liberated with increasing investment. But falling net energy may prove this view false. If the energy return is too low, production growth may be limited or impossible from many of these energy sources. Much of the energy produced may need to be consumed during extraction. The proper shape of this diagram is likely to be a diamond with non-conventional resources forming a smaller part of the diamond underneath as denoted by the added dotted lines.


Background on the Western Canadian Sedimentary Basin.  Western Canada produced 98% of Canada’s natural gas in 2009 with the majority of that coming from the Western Canadian Sedimentary Basin (WCSB) that underlies most of Alberta, parts of British Columbia, Saskatchewan and the Northwest Territories [7].


Figure 3. Energy Content of Petroleum Production, by type, stacked.

This paper focuses on conventional natural gas, tight natural gas (gas in a low porosity geologic formation that must be liberated via artificial fracturing) and conventional oil production. Western Canadian natural gas production is still largely conventional and so makes a good area of study. In 2008, 55% of marketed natural gas was conventional gas from gas wells, 32% was tight gas, 8% was solution gas from oil wells, 5% coal bed methane (non-conventional), and less than 1% was shale gas [9,10].

The Canadian Gas Potential Committee in 2005 estimated that the WCSB contains 71% of the conventional gas endowment of Canada and that of an original 278 Tcf of marketable natural gas (technically and economically recoverable) 143 Tcf remain [11]. They note: “The majority of the large gas pools have been discovered and a significant portion of the discovered reserves has been produced” and further “62% of the undiscovered potential occurs in 21,100 pools larger than 1 Bcf OGIP. The remaining 38% of the undiscovered potential occurs in approximately 470,000 pools each containing less than 1 Bcf”. To put this in context, the petroleum industry drilled fewer than 200,000 natural gas wells from 1947 to 2009 [7], and so will require at least a doubling of drilling effort to reach the last half of the marketable natural gas.

Results and Discussion.

Method One: EROI and Net Energy of Western Canadian Oil and Gas Production

The Canadian Association of Petroleum Producers (CAPP) maintains records of oil and gas production and expenditures going back to 1947. In theory it is simple to calculate net energy and EROI from this public data. Energy output equals the total production volume of each hydrocarbon produced in a given year (conventional oil, natural gas, natural gas liquids), converted to heat energy equivalents and measured in Giga Joules (GJ). The energy input side is more difficult, as the public data for expenditures is recorded only in Canadian dollars per year and not in energy. An energy intensity factor is used to convert the dollar expenditures into energy. This factor is calculated from Economic Input-Output Life Cycle Assessment (EIO-LCA).

As the energy intensity factor includes wages paid to labor, but energy inputs are not quality corrected, the results are equivalent to EROI society and not EROI Standard [12]. EROI Standard corrects the input energy for quality but excludes labor costs. The energy intensity factor was 24 MJ/$(U.S. 2002), and all expenditures were inflation corrected and converted to U.S. dollars. While the focus of this paper is on natural gas production, this result provides a historical time line to compare with the more limited time series for natural gas only. The results are first plotted as gross energy and net energy alongside the meters drilled per year, as in Figure 4.


Figure 4. Net Energy content of oil and gas produced after invested energy is subtracted, with total meters drilled.

The time period from 1947 to 1956 showed rising production along with a rising drilling rate. From 1956 to 1973 production rose despite no corresponding rise in drilling. From 1973 to 1985 production fell despite a rise in drilling effort. The increased drilling rates were unable to increase gross energy and actually drove down net energy during this period.

In the mid-1980s, energy production once again rose with a falling drilling rate. That trend reversed to rising production with increased drilling. Then, in the year 2000, the petroleum industry showed an initial peak in gross and net energy (see Table 1). The increases in drilling effort that happened after 2000 were unable to increase production and actually drove down net energy (falling EROI). When the drilling rate increased, it drove down net energy. When the drilling rate slowed (as it did after 2006) then production dropped and net energy fell even faster.


Table 1. Annual gross and net energy production of oil, gas, and natural gas liquids.


Plotting the same data as EROI is quite illuminating. Figure 5 shows that the industry underwent a dramatic rise in energy efficiency from the early 1950s until 1973 when it reached a peak in EROI of 79:1. At this peak the industry consumed only the equivalent of 1% of the energy it produced. Then, the industry suffered a tremendous efficiency drop to a low EROI of 22:1 (about 5% of energy production consumed by investment) only 7 years later as the industry more than doubled its drilling rate in an effort to return to the oil production peak.

Another interesting inflection point was 1985, when the industry started a 7-year period in which a reduced drilling rate provided an increase in production. We can see this corresponded to an increase in efficiency as the industry focused on growing natural gas production (see Figure 3). EROI rose to 46:1 (about 2% consumed by investment) by 1992. This fortunate trend was not long lived. Once the drilling rate started to rise, EROI followed a volatile but downward trend to a new low of 15:1 in 2006, when the industry consumed the equivalent of 7% of all the energy it produced. Further, it took a dramatic reduction in drilling, and falling back on the production of older wells, to achieve the small uptick in EROI seen in 2009.


Figure 5. EROI of oil and gas from 1947 to 2009 with meters drilled.

Natural gas from conventional and tight natural gas wells is now the dominant energy source in the WCSB and has just recently peaked. By removing the oil from the net energy and EROI calculations we can gain insight into the energy dynamics of peak natural gas production. The data necessary to separate oil and gas production and expenditures is limited to 1993 to 2009. The details of splitting out both gas expenditures and gas production from the oil data are explained in the methodology (Section 3). The basic method for finding the net energy from natural gas wells alone is very similar to that for oil and natural gas combined. On the energy output side, the difficulty is that oil wells also produce natural gas and NGLs, and the amounts from oil vs. gas wells are not recorded in the CAPP statistics. An NEB report [13] gives the amount of oil-well-associated gas for a limited time series, and this relation was used to estimate the amount of associated gas for the remaining years. On the input side, the expenditures for oil and gas well drilling and production are also intermixed. As drilling is the largest expense, it was assumed that expenditures were directly proportional to the distance drilled. For example, if gas wells accounted for 75% of the meters drilled, then 75% of exploration and development costs were apportioned to natural gas production.

Figure 6 shows the resulting EROI for natural gas wells and displays a variable but downward trend in EROI over the whole data period except for a rebound during 2007 to 2009 when drilling rates fell back to 1998 levels. However, the EROI did not return to 1998 levels along with the drilling rate.


Figure 6. EROI of natural gas wells, with meters drilled.

Table 2 displays the net energy of natural gas well production. The peak for the estimated gross energy from natural gas wells occurred in 2006 at 6.9 e9 GJ, but the peak in net energy happened much sooner. In 2002, net energy peaked at 6.5 e9 GJ. The drilling industry doubled the meters drilled from 2002 to 2005, but could not deliver more net energy to society. The additional industry investment consumed all the extra energy produced, and more.

Table 2. Gross and net energy from natural gas wells.

The first two methods used to estimate EROI suffer an inherent inaccuracy: the output energy of a given year is mostly produced by wells drilled in past years. Figure 7 shows an example of how production from wells drilled each year stacks up to yield the annual production rate. Each colored band represents the natural gas produced from a given year’s wells. The wells drilled from 2003 to 2004 produced the yellow band. It is easy to see from this chart how most of the natural gas produced in 2003 was actually from wells drilled in prior years.


Figure 7. Canadian National Energy Board (NEB) Estimate of natural gas produced by wells drilled each year. From [8].


A well may produce oil or gas for 30 years, but all the expense is applied during the year it was drilled. This mismatch in time scales can cause EROI to spike and dip if the drilling rate moves up and down. A rapid increase in drilling can cause EROI to dip as the investment is booked all at once, but production will take years to arrive. A rapid decrease in drilling will cause investment to suddenly drop, while production from wells from previous years stays high and will result in an EROI spike. These spikes and dips are exactly how the economy experiences the change in energy flows, and so it is perfectly valid to use this technique, but the averaging effect hides how the newest wells are performing.
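A toy simulation (my own construction, not the paper's model) makes the spike-and-dip mechanism concrete: even with identical wells, booking all investment in the drill year makes annual EROI dip when drilling ramps up and spike when drilling falls back:

    DECLINE = 0.30          # assumed per-year decline in each well's output
    WELL_LIFE = 10          # assumed producing life in years
    OUTPUT_YR1 = 100.0      # first-year energy output per well (arbitrary)
    COST_PER_WELL = 10.0    # energy invested per well, booked at drill time

    wells = [10]*5 + [40]*5 + [10]*5   # drilling quadruples, then collapses
    production = [0.0] * len(wells)
    for drill_year, n in enumerate(wells):
        for age in range(WELL_LIFE):
            year = drill_year + age
            if year < len(production):
                production[year] += n * OUTPUT_YR1 * (1 - DECLINE) ** age

    for year, n in enumerate(wells):
        eroi = production[year] / (n * COST_PER_WELL)
        print(f"year {year:2d}: {n:2d} wells drilled, annual EROI {eroi:5.1f}")
    # Annual EROI dips at year 5 (drilling boom) and spikes at year 10
    # (bust), even though well quality never changes.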

One method to reveal current well performance is to attribute the expected full-life production of the well, the Estimated Ultimate Recovery (EUR), against the investment made the year the well was drilled. The Canadian National Energy Board does periodic studies of producing natural gas wells and calculates the EUR for the wells drilled each year [8]. They examined the wells drilled each year, totaled the past production from those wells, and used decline curves to estimate the remaining production of each year’s wells.

In this third method, the NEB calculated EUR was used instead of the annual production statistics for that year. The goal was to try to estimate the EROI of the very latest natural gas wells drilled and thus learn if the natural gas EROI rebound seen with the rolling average method was an artifact of the drop in drilling rate or if the natural gas wells improved in quality. The results are shown in Tables 3 and 4 and Figure 8. Again, the EROI trend is clearly declining. A specific example is to compare 1997 to 2005. Both years have very similar estimated ultimate recovery (EUR), but 2005 had a capital expenditure that was 3 times higher. This strongly suggests that the well prospects worsened over a short time period.

Table 3. Estimated Ultimate Recovery (EUR) and cost per GJ for natural gas wells.

Table 4. Total cost per GJ, Net EUR and EROI for natural gas wells.

Figure 8. EROI using NEB estimates of ultimate recovery, with meters drilled.


The EROI curve in Figure 8 is slightly less volatile than the rolling average technique, but more strikingly, the years 2007 and 2008 do not show the rebound in EROI that the rolling average method displayed. Assuming the NEB estimates for EUR are correct, this result indicates that the rebound was an artifact of the rapidly falling drilling rate on the rolling average and that new wells are performing considerably worse than prior years’ wells.

EROI Boundary

There are many stages to petroleum production: exploration, drilling, gathering and separation, refining, transport of finished products, and the burning of the final fuel. The EROI could be calculated at any of these points in the process, and some studies have looked at the EROI of these various stages [6]. This paper examines the EROI within a boundary that includes the exploration, drilling, and gathering and separating stages, typically referred to as the upstream petroleum industry. This analysis does not include refining, the transport of finished products, or the final usage efficiency. The boundary does include labor costs. These results correspond to EROI society (lower case) as described in the EROI protocol [12].

These results are not quite EROI Standard, which would require quality-correcting the input energy values (not available from the EIO-LCA) and excluding labor costs (which are rolled into the industry statistics and not removable). Care should be taken to match the boundary conditions before comparing these results to other studies.

Method One: EROI and Net Energy of Western Canadian Conventional Oil and Gas Production.  The Canadian Association of Petroleum Producers (CAPP) maintains statistics on oil and natural gas production and oil and gas expenditures going back to 1947 [22] but the expense data is intermingled. This forces us to estimate the EROI of oil and gas together, but doing so provides a historical perspective for the more limited natural gas EROI that will be calculated later. The net energy and EROI of the combined oil and natural gas industry is thus the first result calculated.

Energy Output: Oil and Gas Production Statistics. Records of petroleum production are also maintained by CAPP and published in the annual statistical handbook [22]. The values summed were Western Canadian conventional oil, marketed natural gas, condensates, ethane, butane, propane, and pentanes plus. This paper focuses on conventional production and excludes synthetic oil from tar sands and bitumen production. The provinces and territories included in Western Canada are Alberta, British Columbia, Manitoba, Saskatchewan, and the Northwest Territories. The resulting energy production values are displayed in Figure 3.

Energy Input: Oil and Gas Expenditure Statistics. CAPP also maintains expenditure statistics for the petroleum industry back to 1947 [22]. Statistics are organized by province and major category. Money paid for land acquisition and royalties was excluded, as these do not involve energy expenditure (payments for land and royalties shift who gets to spend the industry profits, not how much energy is expended in extracting the resources). Exploration expense categories include: Geological and Geophysical, Drilling, and Other. Development expenses include: Drilling, Field Equipment, Enhanced Recovery (EOR), Gas Plants, and Other. Operating expenses include: Well and flow lines, Gas Plants, and Other. All expenditures from all categories and provinces were summed into one value for each year.

Inflation Adjustment & Exchange Rate. The Canadian dollar expenditure statistics are nominal and must be inflation corrected to the year 2002 to use the energy intensity factor calculated via EIO-LCA analysis. The inflation adjustment is intended to remove the effect of currency devaluation and was done using the Canadian CPI [23]. The adjusted results were converted into U.S. dollars using the Bank of Canada annual average exchange rate for 2002 of $1.00 (U.S.) to $1.57 (Canadian) [24], and then converted into joules of energy input using the energy intensity factor of 24 MJ/$(U.S. 2002).

Combined Oil and Gas Results and Example. The results are displayed in Table 1, located in Section 2.1. A worked example for the year 2002: invested energy is 361 e6 GJ = $15 e9 × 24 MJ/$(U.S. 2002). Net energy is 9.78 e9 GJ = 10.14 e9 GJ − 0.361 e9 GJ (note the scale change: 361 e6 GJ = 0.361 e9 GJ). EROI is 28 = 10.14 e9 GJ / 0.361 e9 GJ.
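The worked example can be verified in a few lines (my own restatement of the arithmetic above; the variable names are invented):

    MJ_PER_USD2002 = 24.0          # energy intensity factor from the study

    expenditure_usd = 15e9         # 2002 spending in $(U.S. 2002)
    gross_gj = 10.14e9             # 2002 oil, gas, and NGL output in GJ

    invested_gj = expenditure_usd * MJ_PER_USD2002 / 1000.0   # MJ -> GJ
    net_gj = gross_gj - invested_gj
    eroi = gross_gj / invested_gj
    print(f"invested {invested_gj:.2e} GJ, net {net_gj:.2e} GJ, EROI {eroi:.0f}")
    # invested 3.60e+08 GJ (0.36 e9), net 9.78e+09 GJ, EROI 28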

Method Two: Net Energy and EROI of Western Canadian Natural Gas Wells. The method of calculating the EROI and net energy of natural gas wells is very similar to that used for oil and gas combined. Production and expenditure data were taken from the CAPP statistics and converted to units of energy. Oil production and expenditures were removed (as detailed below). The same energy intensity factor, inflation correction, and exchange rate were used as during the petroleum EROI calculation. The same EROI boundary was used, which includes the gas plants, but not refining or transportation.

Natural Gas Production Statistics. The energy from oil production was excluded, but natural gas produced as a byproduct of oil production was included. Natural gas is trapped in solution in the liquid oil and comes out of solution when the pressure drops as the oil is produced. Oil also contains some of the lighter fraction hydrocarbons, such as condensates, propane, etc. The CAPP statistical handbook does not make the distinction between solution gas and non-associated gas. However, the Canadian National Energy Board provided solution gas data from private sources for the years 2000 to 2008 [13]. Solution gas accounts for about 10% of the total marketed natural gas, so it is important that it be removed. For 2000 to 2008 the NEB values were used directly. To extend the solution gas estimates over the whole period of 1993 to 2009, a regression was fit between conventional oil production and the amount of solution gas for the years with data. The linear correlation was high (R = 0.93), and the resulting regression was used to predict the amount of solution gas from conventional oil production for the remaining years. The energy in the lighter hydrocarbons (natural gas liquids) also needed to be apportioned between oil and gas wells, as NGLs are roughly equal to 16% of the energy in the produced natural gas (so about 1.6% of natural gas well gross energy). No public data could be found that suggested a proper ratio, so for this study it was assumed that the ratio of lighter hydrocarbons associated with oil would be the same as the ratio of natural gas associated with the oil. The solution gas ratio was used for each year, and that portion of the total NGLs was removed from the gross energy produced.
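The solution-gas estimate described above amounts to a one-variable linear regression. A sketch follows; the data arrays are placeholders only, since the real series are the NEB solution gas volumes [13] and CAPP conventional oil production [22]:

    import numpy as np

    # Placeholder values for illustration -- substitute the NEB solution gas
    # series (2000-2008) and the matching CAPP oil production figures.
    oil = np.array([30.0, 29.1, 28.5, 27.8, 27.2, 26.4, 25.9, 25.1, 24.6])
    sol_gas = np.array([3.1, 3.0, 2.9, 2.9, 2.8, 2.7, 2.7, 2.6, 2.5])

    slope, intercept = np.polyfit(oil, sol_gas, 1)   # least-squares fit
    r = np.corrcoef(oil, sol_gas)[0, 1]              # paper reports R = 0.93

    def predict_solution_gas(oil_production):
        """Extend the solution gas series to years without NEB data."""
        return slope * oil_production + intercept

    print(f"R = {r:.2f}, estimate at oil = 33.0: {predict_solution_gas(33.0):.2f}")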

Natural Gas Exploration and Development Expenditures. The CAPP expenditure statistics encompass both oil and gas expenditures, so a secondary statistic is needed to estimate how the combined expenditures should be apportioned. The statistics do separate the meters of exploration and development drilling that target oil vs. gas wells. For this study it was assumed that the apportionment of expenditure dollars would be directly related to the meters drilled. This assumption holds only if oil and gas wells have similar costs per meter; as most oil and gas are produced from the same basin, this was assumed to be a reasonable apportionment (as opposed to a case where all the natural gas was onshore and the oil was produced much more expensively offshore). The online version of the CAPP statistical handbook contains only the drilling distance statistics for the current year; copies of data from past handbooks must be requested directly from CAPP for the years 1993 to 2010 [22]. Table 6 lists these hard-to-acquire numbers. As an example, in 2002 the total meters drilled for oil was 0.71 e6 + 4.65 e6 = 5.36 e6 meters and the total for natural gas was 2.63 e6 + 6.02 e6 = 8.65 e6 meters. Natural gas was thus 61.7% of total drilling, and so 61.7% of exploration and development expenditures were apportioned to natural gas wells for 2002. Exactly as in the combined oil and gas method, royalties and land expenditures were removed.
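The 2002 apportionment, restated as a small calculation (the helper is my own; the meterage figures are the ones quoted from Table 6):

    def gas_share(oil_meters, gas_meters):
        """Fraction of drilling, and hence of expenditures, assigned to gas."""
        return gas_meters / (oil_meters + gas_meters)

    oil_m = 0.71e6 + 4.65e6   # 2002 oil exploration + development meters
    gas_m = 2.63e6 + 6.02e6   # 2002 gas exploration + development meters

    print(f"gas wells: {gas_share(oil_m, gas_m):.1%} of meters drilled")
    # -> 61.7%, so 61.7% of 2002 exploration and development expenditures
    # (net of royalties and land) were apportioned to natural gas.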

References and Notes

  1. International Energy Statistics: Natural Gas Production. http://www.eia.gov/cfapps/ipdbproject/IEDIndex3.cfm?tid=3&pid=3&aid=1
  2. Hall, C.A.S.; Powers, R.; Schoenberg, W. Peak Oil, EROI, investments and the economy in an uncertain future. In Biofuels, Solar and Wind as Renewable Energy Systems, 1st ed.; Pimentel, D., Ed.; Springer: Berlin, Germany, 2008; pp. 109-132.
  3. Downey, M. Oil 101, 1st ed.; Wooden Table Press: New York, NY, USA, 2009; p. 452.
  4. Hamilton, J.D. Historical oil shocks. Nat. Bur. Econ. Res. Work. Pap. Ser. 2011, 16790.
  5. Carruth, A.A.; Hooker, M.A.; Oswald, A.J. Unemployment equilibria and input prices: Theory and evidence from the United States. Rev. Econ. Stat. 1998, 80, 621-628.
  6. Hall, C.A.S.; Balogh, S.; Murphy, D.J.R. What is the minimum EROI that a sustainable society must have? Energies 2009, 2, 25-47.
  7. Canada’s Energy Future: Infrastructure changes and challenges to 2020—An Energy Market Assessment October 2009; Technical Report Number NE23-153/2009E-PDF; National Energy Board: Calgary, Alberta, Canada, 2010.
  8. Short-term Canadian Natural Gas Deliverability 2007-2009; 1/2007E; National Energy Board: Calgary, Alberta, Canada, 2007. Available online: http://www.neb-one.gc.ca/clf-nsi/rnrgynfmtn/nrgyrprt/ntrlgs/ntrlgsdlvrblty20072009/ntrlgsdlvrblty20072009-eng.html
  9. Short-term Canadian Natural Gas Deliverability 2007-2009 Appendices; NE2-1/2007-1E-PDF; National Energy Board: Calgary, Alberta, Canada, 2007. Available online: http://www.neb-one.gc.ca/clf-nsi/rnrgynfmtn/nrgyrprt/ntrlgs/ntrlgsdlvrblty20072009/ntrlgsdlvrblty20072009ppndc-eng.pdf
  10. Johnson, M. Energy Supply Team, National Energy Board, 444 Seventh Avenue SW, Calgary, Alberta, T2P 0X8, Canada; Personal communication, 2010.
  11. Natural Gas Potential in Canada – 2005 (CGPC – 2005). Executive Summary; Canadian Natural Gas Potential Committee: Calgary, Alberta, Canada, 2006. Available online: http://www.centreforenergy.com/documents/545.pdf (accessed on October 1, 2010)
  12. Murphy, D.J.; Hall, C.A.S. Order from chaos: A preliminary protocol for determining EROI of fuels. Sustainability 2011, 3, 1888-1907.
  13. 2009 Reference Case Scenario: Canadian Energy Demand and Supply to 2020—An Energy Market Assessment. Appendixes; National Energy Board: Calgary, Alberta, Canada, 2009. Available online: http://www.neb.gc.ca/clf-nsi/rnrgynfmtn/nrgyrprt/nrgyftr/2009/rfrnccsscnr2009ppndc-eng.zip (accessed on September 7, 2010)
  14. Hall, C.; Kaufman, E.; Walker, S.; Yen, D. Efficiency of energy delivery systems: II. Estimating energy costs of capital equipment. Environ. Manag. 1979, 3, 505-510.
  15. Bullard, C. The energy cost of goods and services. Energ. Pol. 1975, 3, 268-278.
  16. Cleveland, C. Net energy from the extraction of oil and gas in the United States. Energy 2005, 30, 769-782.
  17. Hendrickson, C.T.; Lave, L.B.; Matthews, H.S. Environmental Life Cycle Assessment of Goods and Services: An Input-Output Approach; RFF Press: Washington, DC, USA, 2006; p. 272.
  18. Carnegie Mellon University Green Design Institute Economic Input-Output Life Cycle Assessment (EIO-LCA), USA 1997 Industry Benchmark model. Available online: http://www.eiolca.net (accessed on October 1, 2010).
  19. Crude Petroleum and Natural Gas Extraction: 2002, 2002 Economic Census, Mining, Industry Series; EC02-21I-211111; U.S. Census Bureau: Washington, DC, USA, 2004.
  20. Natural Gas Liquid Extraction: 2002, 2002 Economic Census, Mining, Industry Series; EC02-21I-211112; U.S. Census Bureau: Washington, DC, USA, 2004.
  21. Gagnon, N.; Hall, C.A.S.; Brinker, L. A preliminary investigation of energy return on energy investment for global oil and gas production. Energies 2009, 2, 490-503.
  22. Canadian Petroleum Association. Statistical Handbook for Canada’s Upstream Petroleum Industry; Canadian Association of Petroleum Producers: Calgary, Canada, 2010.
  23. Statistics Canada Table 326-0021 Consumer Price Index (CPI), 2005 basket, annual (2002 = 100 unless otherwise noted). Available online: http://www.statcan.gc.ca/start-debut-eng.html (accessed on 20 September 2010).
  24. Annual Average of Exchange Rates 2002. Available online: http://www.cra-arc.gc.ca/tx/ndvdls/fq/xchng_rt-eng.html (accessed on October 23, 2010)
  25. Lenzen, M. Life cycle energy and greenhouse gas emissions of nuclear energy: A review. Energy Convers. Manag. 2008, 49, 2178-2199.
  26. Pearce, J.M. Thermodynamic limitations to nuclear energy deployment as a greenhouse gas mitigation technology. Int. J. Nucl. Govern. Econ. Ecol. 2008, 2, 113-130.
  27. Mathur, J.; Bansal, N.K.; Wagner, H.-J. Dynamic energy analysis to assess maximum growth rates in developing power generation capacity: Case study of India. Energ. Policy 2004, 32, 281-287.
  28. Gas Resources, Technology and Production Profiles, Chapter 11. World Energy Outlook 2009; International Energy Agency: Paris, France, 2009.


Drinking water and sewage treatment use a lot of energy

[ Water treatment (drinking water and sewage) uses tremendous amounts of energy. Some of the statistics from the document “Water & Wastewater Utility energy research roadmap” below are:

  • In 2008 municipal wastewater treatment systems (WWTP) in the United States used approximately 30.2 billion kilowatt hours (kWh) per year, or about 0.8% of total electricity used in the United States.
  • These WWTPs are becoming large energy consumers and they can require approximately 23% of the public energy use of a municipality.
  • About 10-40% of the total energy consumed by wastewater treatment plants is consumed for sludge handling.
  • Desalination consumes 3% of annual electricity consumption in the United States. Future projections estimate this percentage will double to 6% due to higher water demand and more energy-intensive treatment processes.
  • A significant percentage of energy input to a water distribution system is lost in pipes due to friction, pressure and flow control valves, and consumer taps.
  • AWWA estimates that about 20% of all potable water produced in the United States never reaches a customer water meter mostly due to loss in the distribution system. When water is lost through leakage, energy and water treatment chemicals are also lost.
  • In California, agricultural groundwater and surface water pumping is responsible for approximately 60% of the total peak-day electrical demand related to water supply, particularly the energy consumed within Pacific Gas and Electric’s (PG&E) service area. Over 500 megawatts (MW) of electrical demand for water agencies in California is used for providing water and sewer services to customers. Water-related electrical consumption for the State of California is approximately 52,000 gigawatt hours (GWh). Electricity use for pumping is approximately 20,278 GWh, which is 8% of the state’s total electricity use. The remainder is consumed on the customer’s end to heat, pressurize, move, and cool water. (A quick consistency check on these figures follows this list.)
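A quick consistency check on the California figures in the last bullet (my own arithmetic, using only the numbers quoted above):

    pumping_gwh = 20_278          # CA electricity used for water pumping
    pumping_share = 0.08          # stated as 8% of the state total
    water_related_gwh = 52_000    # all water-related electricity in CA

    state_total_gwh = pumping_gwh / pumping_share        # ~253,000 GWh
    water_share = water_related_gwh / state_total_gwh    # ~0.21
    print(f"implied state total ~{state_total_gwh:,.0f} GWh; "
          f"water-related share ~{water_share:.0%}")
    # Water-related use, including customer-side heating, pressurizing,
    # moving, and cooling, is roughly a fifth of California's electricity.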

This paper also looks at ways to save energy, and at extraction of nutrients such as phosphorus — a good idea, since phosphate production may peak as soon as 40 years from now.

As global oil production declines and there isn’t enough energy to run civilization as we know it now, hard choices will need to be made.  First in line is agriculture, which consumes about 15 to 20% of energy in the U.S. to plant, harvest, store, distribute, cook, and so on.

Clean water and sewage treatment are just as important as food.  But drought threatens to increase energy requirements.   “The energy intensity of desalination is at least 5 to 7 times the energy intensity of conventional treatment processes”, so even though only 3% of the population is served by desalination, 18% of electricity used in the municipal water industry is for desalination plants.

But making water systems more energy efficient is trivial compared to trying to maintain and replace our aging water infrastructure, which is falling apart.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report ]

CEC. 2016. Water and wastewater utility energy research road map. California Energy Commission.  135 pages.

Excerpts:

ABSTRACT.  Water and wastewater utilities are increasingly looking for innovative and cost effective energy management opportunities to reduce operating costs, mitigate contributions to climate change, and increase the resiliency of their operations. The Water Research Foundation, the California Energy Commission and the New York State Energy Research and Development Authority jointly funded this project to assess the current state-of-knowledge on energy management, concepts and practices at water and wastewater utilities; understand the issues, trends and challenges to implement  energy projects; identify new opportunities to set a direction for future research; and develop a roadmap for energy research that includes a list of prioritized research, development, and demonstration projects on energy management for water and wastewater utilities.

EXECUTIVE SUMMARY.

The water industry faces challenges associated with escalating energy costs due to increased energy consumption and higher energy unit prices. Increased energy consumption is driven by energy-intensive treatment technologies needed to meet more stringent water quality regulations, growing water demand, pumping over longer distances, and climate change (GWRC, 2008). Moreover, the need for desalinated water to augment water supply shortages and the growth of groundwater augmentation are also anticipated (House, 2007). A study by the Energy Commission estimates that the demand for electricity in the water industry will double in the next decade. The water sector has shown only a limited response in implementing improvements that effectively address sustainability issues, due to insufficient modernization, the presence of numerous regulatory and economic hurdles, and poor integration of energy issues within the water policy decision-making process (Liner and Stacklin, 2013; Rothausen and Conway, 2011).

Energy Management Opportunities in Wastewater Treatment and Water Reuse. Currently, there are over 15,000 municipal wastewater treatment plants (WWTPs), including 6,000 publicly owned treatment works (POTWs) providing wastewater collection and treatment services to around 78% of the United States’ population (Mo and Zhang, 2013; Spellman, 2013). According to the report published by EPRI and the WRF (Arzbaecher et al., 2013) in 2008 municipal wastewater treatment systems in the United States used approximately 30.2 billion kilowatt hours (kWh) per year, or about 0.8% of total electricity used in the United States. These WWTPs are becoming large energy consumers and they can require approximately 23% of the public energy use of a municipality (Means, 2004). Typical wastewater treatment operations have a total average electrical use of 500 to 4,600 kWh per MG treated, which varies depending on the unit operations and their efficiency (Kang et al., 2010; WEF, 2009; GWRC, 2008; NYSERDA, 2008a). Treatment-process power requirements as high as 6,000 kilowatt hours per million gallons (kWh/MG) are required when membrane bioreactors are used in place of activated sludge or extended aeration (Crawford & Sandino, 2010).
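To make the kWh/MG figures concrete, here is a small worked example of my own using the ranges quoted above (the 10 MGD plant size is an arbitrary choice):

    MGD = 10.0   # a mid-size plant treating 10 million gallons per day
    # 500-4,600 kWh/MG is the typical range quoted above; 6,000 kWh/MG is
    # the membrane bioreactor figure.
    for kwh_per_mg in (500, 4600, 6000):
        annual_gwh = MGD * 365 * kwh_per_mg / 1e6
        print(f"{kwh_per_mg:5d} kWh/MG -> {annual_gwh:5.1f} GWh/year")
    # e.g. 4,600 kWh/MG works out to ~16.8 GWh/year for a 10 MGD plant,
    # versus ~1.8 GWh/year at the efficient end of the range.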

Approximately 2,000 million kWh of electricity are consumed annually by wastewater treatment plants in California (Rajagopalan, 2014). Energy use by these utilities is affected by influent loadings and effluent quality goals, as well as process type, size and age (Spellman, 2013). The majority of energy use occurs in the treatment process, for aeration (44%) and pumping (7%) (WEF, 2009). In major Australian WWTPs, the pumping energy for wastewater facilities ranged from 16 to 62% of the energy used for treatment (Kenway et al., 2008). In New York, the wastewater sector uses approximately 25% more electricity on a per unit basis (1,480 kWh/MG) than the national average (1,200 kWh/MG) due to the widespread use of energy intensive activated sludge, as well as compliance with stringent New York State effluent limits, which often require tertiary or other advanced treatment. Additionally, the predominance of combined (storm water and wastewater) sewer systems at the largest facilities, coupled with significant inflow and infiltration, result in extremely large variations in influent flow rates and loading, making efficient operations difficult (Yonkin et al., 2008).

The greatest potential for net positive energy recovery occurs at larger facilities, which are only a small percentage of the treatment works nationwide, but treat a large percentage of the nation’s wastewater. By achieving energy neutrality and eventually energy positive operations at larger facilities, the energy resources in the majority of domestic wastewater can be captured. This principle guided WERF to prepare a program to conduct the research needed to assist treatment facilities over 10 million gallons per day (MGD) to become energy neutral (Cooper et al., 2011). Energy self-sufficiency has been attained at a wastewater plant in Strass, Austria, where the average power usage is approximately 1,000 kWh/million gallon (MG) treated, which is also the approximate electricity generation from the sludge (Kang et al., 2010). The design employs two stages of aerobic treatment, with innovative controls, where biosolids generated in the two stages are thickened and anaerobically digested, with gas recovery and power generation. The centrate from the dewatering operation is treated in a sequencing batch reactor using the DEamMONification (DEMON) process to reduce the recirculation of nutrients to the head of the plant.

The importance of the scale of a facility in understanding the different strategies that may be implementable for the technology or service options available is pointed out in a recent report (AWE and ACEEE, 2013). It is important that energy management best practices are defined with consideration of specific plant size or treatment process. The largest per unit users of energy are, in fact, small water and wastewater treatment plants that treat less than 1 MGD, as well as those that employ an activated sludge with or without tertiary treatment process.

Wastewater treatment facilities have significant electricity demand during periods of peak utility energy prices. An effective energy load management strategy can help wastewater utilities significantly reduce their electricity bills. A number of electrical load management opportunities are available to wastewater utilities (Table 2.1), notably flattening the energy demand curve, particularly during peak pricing periods, and shifting major electrical demand to lower cost tariff blocks (e.g., overnight), for intra-day operations, or from season to season where long- or short-term wastewater or sludge storage is practical (NYSERDA, 2010).

Wastewater treatment facilities have the potential to benefit from electric utility demand response (DR) opportunities, programs and tariffs. Although the use of integrated energy load management systems for wastewater utilities is still in its infancy, some wastewater utilities have begun implementing strategies that provide a foundation for participation in demand response programs. Such implementations are thus far limited to controlling pumping in lift stations of wastewater collection systems at utilities equipped with sufficient storage (Thompson et al., 2008). Wastewater treatment processes may offer other opportunities for shifting treatment loads from peak electricity demand hours to off-peak hours, as part of Demand Management Programs (DMPs), by modulating aeration, backwash pumps, biosolids thickening, dewatering and anaerobic digestion for maximum operation during off-peak periods.

Recently, wastewater utilities such as the Camden County Municipal Utilities Authority have developed computerized process systems that shave the peaks by avoiding simultaneous use of energy-intensive process units to the maximal extent possible, thereby minimizing the peak charge from the energy provider (Horne and Kricun, 2008). In addition, the East Bay Municipal Utilities District has implemented a load management strategy which stores anaerobic digester gas until it can be used for power generation during peak-demand periods. Another opportunity for shifting electrical loads from on-peak to off-peak hours is over-oxygenating stored wastewater prior to a demand response event, then turning off aerators during peak periods without compromising effluent quality (Thompson et al., 2008). For a wastewater facility to successfully implement demand response programs, advanced technologies that enhance efficiency and control equipment are needed, such as comprehensive, real-time demand control from centralized computer control systems that can automatically switch to running onsite power generators during peak demand periods, in accordance with air quality requirements (Thompson et al., 2008).

An interesting opportunity for reducing energy use in municipal wastewater treatment is to improve stormwater management (Lekov, 2010). Adopting stormwater-only treatment in combined sewer overflow (CSO) communities can reduce energy consumption for wastewater treatment systems through reductions in volume at the treatment plant and reductions in the volumes requiring pumping in the combined sewer collection system.

Wastewater utilities are actively working to reduce the energy use of their facilities by increasing efficiency. Energy efficiency is part of the process of reducing energy demand along the path to a net energy neutral wastewater treatment plant. Briefly, wastewater treatment plants can target energy efficiency by replacing or improving their core equipment, through use of variable frequency drives (VFDs), appropriately sized impellers, and energy-saving automation schemes. Efficiency can also be improved at the process level, by implementing low energy treatment alternatives to an activated sludge process or improving process control.

Energy Efficient Equipment. There are numerous types of energy efficient equipment that a wastewater utility can utilize to reduce energy consumption. Common facility-wide plant improvements include upgrade of electric motors and the installation of VFDs in pumps. These modifications can result in substantial energy efficiency because at least 60% of the electrical power fed to a typical wastewater treatment plant is consumed by electric motors (Spellman, 2013). VFDs enable pumps to accommodate fluctuating demand and allow more precise control of processes. VFDs can reduce a pump’s energy use by up to 50% compared to a motor running at constant speed for the same period. Wastewater treatment facilities can also upgrade their heating, cooling, and ventilation systems (HVAC) to improve energy efficiency and reduce energy costs. The latest developments in HVAC equipment can substantially reduce cooling energy use by approximately 30 to 40% and achieve energy efficiency ratios as high as 11.5. The latest air-source heat pumps can reduce heating energy use by about 20 to 35%. Water-source heat pumps also have superior ratings, especially when outside air temperatures drop below 20 degrees Fahrenheit (°F) (15.2 energy efficiency ratio) and can use heat from treated effluent to supply space heating. The Sheboygan Wastewater Treatment Plant reduced its energy consumption by 20% from 2003 solely by implementing energy demand management strategies that targeted efficiency by equipment replacement (e.g., motors, VFDs, blowers, etc.) and scheduling of regular maintenance (Liner and Stacklin, 2013).

Wastewater treatment plants have also recently used advanced sensors and control devices to optimize energy use so that what is supplied meets but does not exceed the actual demand. For example, the adoption of lower dissolved oxygen set-points in the aeration basin can still maintain microbial growth while generating energy savings of 15-20% (Kang et al., 2010). The installation of energy submeters is another important plant improvement that, however, can require high capital investment by a utility. Recent advances in lamps, luminaires, controls, and lighting design provide numerous advantages over traditional lighting systems. Since lighting accounts for 35 to 45% of the energy use of an office building, the installation of high-efficiency alternatives at nearly every plant can dramatically reduce the utility’s operational energy bill. Incentives and rebates are commonly available from electric utilities and other agencies, such as NYSERDA, to support the installation of energy-efficient fixtures and equipment that reduce energy use and its financial impacts.

Aeration is the largest energy user in a typical wastewater treatment plant, so the aeration process should be evaluated when implementing energy reduction programs. Installing automatic dissolved oxygen control enables continuous oxygen level monitoring in the wastewater so that aerators can be turned off when the oxygen demand is met. Depending on the aeration capacity of the wastewater treatment system and the average wastewater oxygen requirement, automated dissolved oxygen control can be the most cost-effective method to optimize aeration energy, achieving energy savings of 25% to 40% compared to manually controlled systems. In addition to automated control systems, the installation of smaller modular, high-efficiency blowers to replace centralized blowers, placing blowers close to the aeration basin to reduce energy losses from friction, and the installation of high-efficiency pulsed air mixers are important efficiency measures to be considered.

About 10-40% of the total energy consumed by wastewater treatment plants goes to sludge handling. Most of this energy is required for the shear force applied in dewatering, for solids drying, and for treatment of high-strength centrate. As an example, in California centrifuges and belt filter presses consume 30,000 kWh/year/MGD (million gallons per day) and 2,000-6,000 kWh/year/MGD, respectively (Rajagopalan, 2014). Many studies have examined sludge dewatering processes and how to improve their efficiency. Recent studies by the Energy Commission have focused on reducing the energy consumption of sludge dewatering by using nanoparticulate additives. By implementing this solution at wastewater treatment plants in California, the state could save an additional 10.5 million kWh per year, a figure that accounts for the costs of energy, polymer and nanoadditives for sludge dewatering, and sludge disposal.
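
To put the cited dewatering figures in perspective, a quick back-of-envelope comparison (the 10 MGD plant size is an assumed example; the per-MGD figures are those cited from Rajagopalan, 2014):

```python
# Annual dewatering energy for an assumed 10 MGD plant, using the cited
# California figures (Rajagopalan, 2014).
CENTRIFUGE = 30_000          # kWh/year per MGD
BELT_PRESS = (2_000, 6_000)  # kWh/year per MGD, low-high range

plant_mgd = 10  # assumed plant size, for illustration
print(f"centrifuge: {CENTRIFUGE * plant_mgd:,} kWh/year")   # 300,000 kWh/year
print(f"belt press: {BELT_PRESS[0] * plant_mgd:,}-{BELT_PRESS[1] * plant_mgd:,} kWh/year")
# belt press: 20,000-60,000 kWh/year
```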

Another innovation directed toward more energy efficient systems is the use of distributed systems in place of the centralized treatment systems historically favored for their economies of scale. Centralized plants are generally located down gradient in urban areas, permitting gravity wastewater flow to the treatment plant, while the demand for reclaimed wastewater generally lies up gradient. This means higher energy demands for pumping the reclaimed wastewater back to the areas in need. These energy costs can be reduced through the use of smaller distributed treatment plants located directly in water-limited areas.

Processes and technologies already in use at wastewater treatment plants include biogas-powered combined heat and power (CHP), thermal conversion of biosolids, renewable energy sources (e.g., solar arrays and wind turbines), and energy recovery both at the head of the wastewater treatment plant and within the treatment process.

Energy recovery from anaerobic digestion with biogas utilization and from biosolids incineration with electricity generation is widespread, but there is potential for further deployment. Of the approximately 837 biogas-generating facilities in the United States, only 35% generate electricity from biogas and only 9% sell electricity back to the grid (Liner and Stacklin, 2013). The low application rate is partly due to the dominance of small wastewater systems in the United States (less than 5 MGD). It is estimated that anaerobic digestion could produce about 350 kWh of electricity for each million gallons of wastewater treated at the plant and save 628 to 4,940 million kWh annually in the United States (Stillwell et al., 2010). The electricity produced by CHP is reliable and consistent, but the installation requires relatively high one-time capital costs. Research shows that recovery of biogas becomes cost-effective for wastewater treatment plants with treatment capacities of at least 5 MGD (Mo and Zhang, 2013; Stillwell et al., 2010). Several wastewater treatment plants, such as the East Bay Municipal Utility District's plant (Oakland, California) and the Strass WWTP (Austria), have become net-positive, energy-generating plants by powering low-emission gas turbines with biogas from co-digestion processes.
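
The scale of the digestion opportunity is easy to work out from the cited 350 kWh per million gallons treated (the plant size below is an assumed example, not from the source):

```python
# CHP electricity from anaerobic digestion at an assumed 10 MGD plant,
# using the cited ~350 kWh per million gallons treated (Stillwell et al., 2010).
KWH_PER_MG = 350
plant_mgd = 10  # assumed plant size
annual_kwh = KWH_PER_MG * plant_mgd * 365
print(f"{annual_kwh:,} kWh/year")  # 1,277,500 kWh/year, roughly 1.3 GWh
```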

Biosolids incineration with electricity generation is an effective energy recovery option that uses multiple-hearth and fluidized-bed furnaces. Both incineration technologies require cleaning of exhaust gases to prevent emissions of odor, particulates, nitrogen oxides, acid gases, hydrocarbons, and heavy metals.

As with biogas-generated electricity, incineration can be used to power a steam-cycle power plant, producing electricity at medium to large wastewater treatment plants where a high volume of solids is produced.

Disadvantages of incineration are high capital investment, high operating costs, difficult operations, and the need for air emissions control (Stillwell et al., 2010). Despite these disadvantages, biosolids incineration with electricity generation is an innovative approach to managing both water and energy. For example, the Hartford Water Pollution Control Facility (Connecticut) is incorporating an energy recovery facility into its furnace upgrade project and anticipates that biosolids incineration will generate 40% of the plant's annual electricity consumption (Stillwell et al., 2010).

Wastewater utilities can now strategically replace incineration with advanced energy recovery technologies (MWH Global, 2014). Like incineration, gasification and pyrolysis can minimize the waste mass left for ultimate disposal after sewage sludge processing, while offering the prospect of greater energy recovery and/or lower operating cost than incineration (MWH Global, 2014). The range of gasification technologies available is large, and further synergies, such as recovering heat for digester and/or thermal hydrolysis process heating, are believed to be available when gasification is combined with digestion or advanced digestion. Pyrolysis offers a further advantage over gasification in that it produces a higher-quality syngas, favoring more effective gas engine/CHP power generation.

Nutrient recovery from wastewater can offset the environmental loads associated with producing the equivalent amount of fertilizers from fossil fuels (Mo and Zhang, 2013). Various nutrient recovery methods have been applied in wastewater treatment processes, including biosolids land application, urine separation, controlled struvite crystallization and nutrient recovery through aqua-species. Biosolids land application involves spreading biosolids on the soil surface or incorporating or injecting them into the soil. Urine separation diverts urine from other wastewater streams so that its nutrients can be recovered; the approach is promising for maximizing nutrient recovery because around 70-80% of the nitrogen and 50% of the phosphorus in domestic wastewater are contained in urine (Maurer et al., 2003).

Although not widely applied, aqua-species such as macroalgae, microalgae, duckweed, crops and wetland plants can, after utilizing the nutrients in wastewater, be harvested and used as fertilizers or animal feed.

While these individual resource recovery methods have been studied, there is a paucity of peer-reviewed articles on the current status and sustainability of the individual methods, as well as on their integration at different scales.

Recently, a few research programs have begun investigating the potential for recovery of nutrients, including carbon, nitrogen and phosphorus, from wastewater treatment processes. A recent report from the Water Environment Research Foundation (WERF), with support from the Commonwealth Scientific and Industrial Research Organization (CSIRO), Resource Recovery from Wastewater: A Research Agenda, summarized and defined future research needs for resource recovery opportunities in the wastewater sector (Burn et al., 2014).

WERF is developing a tool for the implementation and acceptance of resource recovery technologies at WWTPs, with a major focus on extractive nutrient (phosphorus) recovery technologies that offer greater energy efficiency and monetary savings (Latimer, 2014). WERF has prioritized high-profile research on P concentration and recovery opportunities during wastewater treatment. Polyphosphate-accumulating organisms (PAOs) can concentrate P in their cells and drive the concentration and precipitation of struvite, which can be recovered for niche agricultural markets (Burn et al., 2014). The report implies that nitrogen recovery is a lower priority than carbon (through biogas) or phosphorus recovery, unless combined with other recovery opportunities. N recovery is possible through adsorption/ion-exchange, precipitation and stripping processes.

A $26 million ion-exchange pilot facility in New York that concentrated ammonia from the recycle streams (centrate) of anaerobically digested sludge showed that these methods are viable, though not yet as cost-effective as the Haber-Bosch process (Burn et al., 2014).

Treated wastewater can be reused for various beneficial purposes to provide ecological benefits, reduce the demand for potable water and augment water supplies (Mo and Zhang, 2013). Beneficial uses include agricultural and landscape irrigation, toilet flushing, groundwater replenishment and industrial processes (EPA, 2004). Currently, around 1.7 billion gallons per day of wastewater is reused in the US, and this reuse rate is growing by 15% every year (Mo and Zhang, 2013); Florida and California are the pioneering states for water reuse. The level of wastewater treatment required varies depending on the regulatory standards, the technologies used and the water quality characteristics, and some of the treatment processes or schemes utilized can save energy for the same amount of water delivered.
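
At the cited 15% annual growth rate, reuse volumes double roughly every five years; a quick extrapolation (for illustration only; real growth will not stay constant indefinitely):

```python
# Compound growth of the cited ~1.7 billion gallons/day (BGD) reuse volume
# at 15% per year; a simple extrapolation for illustration only.
volume_bgd = 1.7
for year in (0, 5, 10):
    print(f"year {year}: {volume_bgd * 1.15 ** year:.1f} BGD")
# year 0: 1.7, year 5: ~3.4, year 10: ~6.9 (doubling time ~5 years)
```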

Although integrated resource recovery is already practiced, particularly at the community level, related studies are rare. At one WWTP in Florida, onsite energy generation, nutrient recycling and water reuse are combined: CHP is used to generate electricity from the digester gases, biosolids are sold for land application, and part of the treated water is used for agricultural and landscape irrigation. In general, very few studies to date have reviewed integrated energy-nutrient-water recovery at WWTPs, particularly on a national scale (McCarty et al., 2011; Mo and Zhang, 2013; Verstraete et al., 2009), and there are no studies optimizing resource recovery via multiple approaches.

Energy Management Opportunities in Drinking Water and Desalination. Drinking water supply and treatment, including desalination, consumes about 3% of annual electricity consumption in the United States (Boulos and Bros, 2010; EPA, 2012b; Sanders and Webber, 2012; Arzbaecher et al., 2013). This percentage is projected to double to 6% due to higher water demand and more energy-intensive treatment processes (Chaudhry and Shrier, 2010). Estimates indicate that approximately 90% of the electricity purchased by water utilities, or approximately $10 billion per year, is required for pumping water through the various stages of extraction, treatment, and final distribution to consumers (Bunn, 2011; Skeens et al., 2009). Despite recent energy efficiency progress in pumping systems, there has not been any notable impact on existing energy intensity values. Furthermore, the energy use of drinking water utilities, excluding energy used for water heating by residential and commercial users, contributes significantly to an increasing carbon footprint, with an estimated 45 million tons of greenhouse gases (GHG) emitted annually in the United States.

In California, agricultural groundwater and surface water pumping is responsible for approximately 60% of the total peak-day electrical demand related to water supply, particularly within Pacific Gas and Electric's (PG&E) service area. Over 500 megawatts (MW) of electrical demand from water agencies in California is used to provide water and sewer services to customers (House, 2007). Water-related electricity consumption in the State of California is approximately 52,000 gigawatt-hours (GWh) (House, 2007). Electricity use for pumping is approximately 20,278 GWh, about 8% of the state's total electricity use; the remainder is consumed at the customer end to heat, pressurize, move and cool water.
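
Those figures can be cross-checked with a couple of lines of arithmetic (a consistency check on House, 2007, not new data):

```python
# Consistency check on the California figures (House, 2007).
pumping_gwh = 20_278
state_total = pumping_gwh / 0.08   # pumping is ~8% of state electricity use
water_related = 52_000
print(f"implied state total: ~{state_total:,.0f} GWh")                   # ~253,475 GWh
print(f"all water-related end uses: {water_related / state_total:.0%}")  # ~21%
```

So pumping alone is about 8% of the state's electricity, while all water-related end uses together are roughly a fifth.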

To address the challenges associated with poorer quality sources and/or reduced supply, water utilities have been exploiting new water supply options such as seawater and saline groundwater, the use of which is growing about 10% each year. These new water sources require two to ten times more energy per unit of water treated than traditional water treatment technologies.

While previous studies have focused on the energy requirements of water utilities, there is a lack of studies estimating peak electric demand and peak use in the water sector (House, 2007). This limited understanding is compounded by the lack of water demand profiles that can be compared to electric use profiles. Developing water demand profiles is difficult because water is billed by volume and not by time of use, so it is not monitored as closely as electricity (House, 2007). Pricing water in a time-of-use (TOU) structure is still a complicated task for water utilities, but it has the potential to offer large energy savings.

In many cases, successful water efficiency programs reduce the total revenues of water agencies under typical rate structures.

Research is needed to investigate the potential for decoupling investments from revenues in water markets and other financial methods that would make conservation and efficiency programs more attractive and encourage alternative energy supplies. Better valuing of the different qualities and sources of water would also facilitate better choices of water resource applications that take the real cost/value of the supply and quality into consideration.

Energy Efficiency. Estimates indicate that 10 to 30% cost savings are readily achievable by almost all utilities implementing energy efficiency programs or strategies (Leiby and Burke, 2011). In addition to cost savings, improving efficiency brings a number of other benefits, including the potential to reinvest in new infrastructure or programs and reduced pressure on the electrical grid.

Some energy-efficient processes and new technologies for the water treatment and desalination sector are still at the research stage or under development. For example, NeoTech Aqua Solutions, Inc. has developed a new ultraviolet (UV) disinfection technology (D438) that uses one-tenth the energy of the lamps required in conventional UV systems of similar flow. The technology demands less electricity and results in a smaller electrical bill, less maintenance, and a smaller overall carbon footprint.

Estimates of energy efficiency in water supply and drinking water systems, associated economics and related guidelines are lacking.

Energy Efficient Operations and Processes

Energy efficiency can be targeted in water supply and distribution system operations as well as in water treatment. Efficient pump scheduling and network optimization are significant contributors to efficiency practices.

A significant percentage of energy input to a water distribution system is lost in pipes due to friction, pressure and flow control valves, and consumer taps (Innovyze, 2013).
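
The scale of those friction losses can be estimated with the Darcy-Weisbach equation; the sketch below uses assumed pipe dimensions, flow and friction factor purely for illustration (none of these values come from the source):

```python
# Order-of-magnitude friction loss in a distribution main, via Darcy-Weisbach:
# h_f = f * (L/D) * v^2 / (2g). All values below are assumed for illustration.
import math

g = 9.81           # m/s^2
f = 0.02           # assumed Darcy friction factor
L, D = 1000, 0.3   # 1 km of 300 mm pipe (assumed)
Q = 0.1            # flow, m^3/s (assumed)

v = Q / (math.pi * (D / 2) ** 2)        # mean velocity, ~1.4 m/s
h_f = f * (L / D) * v ** 2 / (2 * g)    # head loss, ~6.8 m per km
power_w = 1000 * g * Q * h_f            # rho * g * Q * h_f, watts
print(f"v = {v:.2f} m/s, head loss = {h_f:.1f} m/km, power lost = {power_w/1000:.1f} kW")
```

Because head loss rises with the square of velocity, moving the same volume through a larger main, or smoothing demand peaks, cuts losses sharply.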

The energy intensity (kWh per MG of water treated) of desalination is at least 5 to 7 times that of conventional treatment processes, so even though only about 3% of the population is served by desalination, approximately 18% of the electricity used in the municipal water industry is estimated to go to desalination plants. Due to their lower energy consumption, reverse osmosis (RO) processes are preferred over thermal treatments for domestic water desalination in the United States.
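
The 18% estimate follows directly from those two numbers (a reproduction of the arithmetic in the text, not new data):

```python
# If ~3% of the population is served by desalination at 5-7x the energy
# intensity of conventional treatment, desalination's share of municipal
# water electricity is:
for multiple in (5, 7):
    desal, conventional = 0.03 * multiple, 0.97 * 1.0
    print(f"{multiple}x intensity: {desal / (desal + conventional):.0%}")
# 5x: ~13%, 7x: ~18%, consistent with the estimate in the text
```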

In an RO process, electricity accounts for about 30% of the total cost of desalinated water. Reducing energy consumption is therefore critical for lowering the cost of desalination and for addressing environmental concerns about GHG emissions from the continued use of fossil fuels as the primary energy source for seawater desalination plants.

The feed water to the RO unit is pressurized by a high-pressure feed pump, which supplies the pressure needed to force water through the membrane, exceeding the osmotic pressure and overcoming differential pressure losses through the system.

Typically, an energy recovery device (ERD) in combination with a booster pump is used to recover the pressure remaining in the concentrate and reduce the required size of the high-pressure pump (Stover, 2007; Jacangelo et al., 2013). A theoretical minimum energy is required to exceed the osmotic pressure and produce desalinated water, and this minimum increases as the salinity of the feed water or the recovery increases. For example, the theoretical minimum energy for seawater desalination with 35,000 milligrams per liter (mg/L) of salt and a feed water recovery of 50% is 1.06 kilowatt-hours per cubic meter (kWh/m3) (Elimelech and Phillip, 2011). Actual energy consumption is larger because real plants do not operate as a reversible thermodynamic process.
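
The 1.06 kWh/m3 figure can be reproduced from the reversible-work limit for a batch separation, E_min = pi0 * ln(1/(1-r)) / r, where pi0 is the feed osmotic pressure and r the recovery. The formula is standard dilute-solution thermodynamics rather than something taken from the cited paper, and the ~27.5 bar osmotic pressure below is an assumed round value for seawater:

```python
# Theoretical minimum specific energy for desalination at recovery r,
# in the dilute-solution reversible limit: E_min = pi0 * ln(1/(1-r)) / r.
# pi0 ~ 27.5 bar is an assumed value for 35,000 mg/L seawater;
# 1 kWh/m^3 corresponds to 36 bar.
import math

pi0_kwh_m3 = 27.5 / 36.0

def e_min(r):
    return pi0_kwh_m3 * math.log(1.0 / (1.0 - r)) / r

print(f"{e_min(0.5):.2f} kWh/m^3")  # ~1.06, matching Elimelech and Phillip (2011)
```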

Typically, the total energy requirement for seawater desalination using RO (including pre- and post-treatment) is on the order of 3-6 kWh/m3 (Semiat, 2008; Subramani et al., 2011). More than 80% of the total power used by desalination plants is attributed to the high-pressure feed pumps.
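
A rough energy balance shows why the ERD described above matters so much to those pump numbers (all pressures and efficiencies below are assumed illustrative values, not figures from the source):

```python
# Rough specific energy consumption (SEC) of the RO high-pressure step,
# with and without an energy recovery device (ERD) on the concentrate.
# All values are assumed for illustration. 1 kWh/m^3 = 36 bar.
P = 60.0        # feed pressure, bar (assumed)
r = 0.45        # permeate recovery (assumed)
eta_pump = 0.8  # high-pressure pump efficiency (assumed)
eta_erd = 0.95  # ERD pressure-transfer efficiency (assumed)

sec_no_erd = P / (36 * eta_pump * r)  # whole feed pressurized by the pump
# With an ERD, the concentrate fraction (1 - r) is repressurized nearly free:
sec_erd = (P / (36 * eta_pump)) * (r + (1 - r) * (1 - eta_erd)) / r
print(f"without ERD: {sec_no_erd:.1f} kWh/m^3, with ERD: {sec_erd:.1f} kWh/m^3")
# without: ~4.6 kWh/m^3, with: ~2.2 kWh/m^3
```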

The energy consumption of filtration systems increases due to fouling by nanoparticles, as reported in a study for the Energy Commission (Rosso and Rajagopalan, 2013). For example, flux analysis of microfiltration (MF) membranes with a 200 nanometer (nm) pore size showed that particles between 2.5 and 100 nm contributed more to membrane fouling than cake formation did. Further understanding of the mechanisms of membrane fouling and of pretreatment options with coagulants will offer energy savings opportunities for water and water reclamation utilities.

The American Water Works Association (AWWA) estimates that about 20% of all potable water produced in the United States never reaches a customer water meter, mostly due to losses in the distribution system. When water is lost through leakage, the energy and water treatment chemicals embedded in it are also lost.

REFERENCES

  • ACEEE (American Council for an Energy Efficient Economy). 2005. A Roadmap to Energy in Water and Wastewater Industry. Report # IE054.
  • Adham, S. 2007. Dewatering Reverse Osmosis Concentrate from Water Reuse Applications Using Forward Osmosis. Water Reuse Research Foundation. WRRF # 05-009-1.
  • Arzbaecher, C., K. Parmenter, R. Ehrhard, and J. Murphy. 2013. Electricity Use and Management in the Municipal Water Supply and Wastewater Industries. Denver, Colo.: Water Research Foundation; Palo Alto, Calif.: Electric Power Research Institute.
  • AWE (Alliance for Water Efficiency) and ACEEE (American Council for an Energy Efficient Economy). 2013. Water Energy Nexus Research Recommendations for Future Opportunities. June, 2013.
  • Badruzzaman, M., C. Cherchi, J. Oppenheimer, C.M. Bros, S. Bunn, M. Gordon, V. Pencheva, C. Jay, I. Darcazallie, and J.G. Jacangelo. 2015. Optimization of Energy and Water Quality Management Systems for Drinking Water Utilities. Denver, Colo.: Water Research Foundation, forthcoming.
  • Bollaci, D. 2013. Challenge Projects on Low Energy Treatment Schemes for Water Reuse, Phase 1. Water Environment Research Foundation
  • Boulos, P.F. and C.M. Bros. 2010. Assessing the carbon footprint of water supply and distribution systems. Journal of American Water Works Association, 102 (11).
  • Brandt, M.J., R.A. Middleton, and S. Wang. 2010. Energy Efficiency in the Water Industry: A compendium of Best Practices and Case studies. UK Water Industry Research. WERF #OWSO9C09.
  • Bunn, S. 2011. Optimizing operations holistically for maximum savings, Proceedings of Annual Conference and Exposition 2010 of the American Water Works Association, June 20–24, 2010, Chicago, IL.
  • Burn, S., T. Muster, A. Kaksonen, and G. Tjandraatmadja. 2014. Resource Recovery from Wastewater: A Research Agenda. Water Environment Research Foundation. Report #NTRY2C13.
  • Cantwell, J.L. 2010a. Energy Efficiency in Value Engineering: Barriers and Pathways. Water Environment Research Foundation. WERF # OWSO6R07a.
  • Cantwell, J.L. 2010b. Overview of State Energy Reduction Programs and Guidelines for the Wastewater Sector. Water Environment Research Foundation. WERF # OWSO6R07b.
  • Carlson, S.W., and A. Walburger. 2007. Energy Index Development for Benchmarking Water and Wastewater Utilities. Denver, Colo.: Water Research Foundation.
  • Cath, T.Y., J.E. Drewes, and C.D. Lundin. 2009. A Novel Hybrid Forward Osmosis Process for Drinking Water Augmentation using Impaired Water and Saline Water Sources. Las Cruces, NM: WERC; Denver, Colo.: Water Research Foundation.
  • Chan, C. 2013. Personal communication. Interview on June 6th, 2013. East Bay Municipal Utility District, Oakland, CA.
  • Chandran, K. [N.d.] Development and Implementation of a Process Technology Toolbox for Sustainable Biological Nutrient Removal using Mainstream Deammonification. Water Environment Research Foundation. WERF # STAR_N2R14. Forthcoming.
  • Chang, Y., D.J. Reardon, P. Kwan, G. Boyd, J. Brandt, K.L. Rakness, and D. Furukawa. 2008. Evaluation of Dynamic Energy Consumption of Advanced Water and Wastewater Treatment Technologies. Denver, Colo.: Water Research Foundation.
  • Chaudhry, S., and C. Shrier. 2010. Energy sustainability in the water sector: Challenges and opportunities. Proceedings of Annual Conference and Exposition 2010 of the American Water Works Association, June 20–24, 2010, Chicago, IL.
  • Cherchi, C., M. Badruzzaman, J. Oppenheimer, C.M. Bros, and J.G. Jacangelo. 2015. Energy and water quality management systems for water utility’s operations: A review. Journal of environmental management, 153, 108-120.
  • Conrad, S. [N.d.]. Water and Electric Utility Integrated Planning. Denver, Colo.: Water Research Foundation. Forthcoming.
  • Conrad, S.A., J. Geisenhoff, T. Brueck, M. Volna, and P. Brink. 2011. Decision Support System for Sustainable Energy Management. Denver, Colo.: Water Research Foundation.
  • Cooley, H., and R. Wilkinson. 2012. Implications of Future Water Supply Sources for Energy Demands. Water Reuse Research Foundation. WRRF # 08-16.
  • Cooper, A., C. Coronella, R. Humphries, A. Kaldate, M. Keleman, S. Kelly, N. Nelson, K. O’Connor, S. Pekarek, J. Smith, and Y. Zuo. 2011. Energy Production and Efficiency Research – The Roadmap to Net-Zero Energy. Water Environment Research Foundation Fact Sheet, 2011.
  • Crawford, G.V. 2011a. Sustainable Energy Optimization Tool- Carbon Heat Energy Assessment Plant Evaluation Tool (CHEApet). Water Environment Research Foundation. Report # OWSO4R07c.
  • Crawford, G.V. 2011b. Demonstration of the Carbon Heat Energy Assessment Plant Evaluation Tool (CHEApet). Water Environment Research Foundation. Report # OWSO4R07g.
  • Crawford, G.V. 2010a. Best Practices for Sustainable Wastewater Treatment: Initial Case Study Incorporating European Experience and Evaluation Tool Concept. Water Environment Research Foundation. Report # OWSO4R07a.
  • Crawford, G.V. 2010b. Technology Roadmap for Sustainable Wastewater Treatment Plants in a Carbon-Constrained World. Water Environment Research Foundation. Report # OWSO4R07d.
  • Crawford, G., and J. Sandino. 2010. Energy Efficiency in Wastewater Treatment in North America. WERF, Alexandria, VA.
  • CRS (Congressional Research Service). 2014. Energy-Water Nexus: The Water Sector's Energy Use. A Report.
  • Elimelech, M., and W.A. Phillip. 2011. The future of seawater desalination: Energy, technology, and the environment. Science, 333, 712-717.
  • El-Shafai, S.A., F.A. El-Gohary, F.A. Nasr, N. Peter van der Steen, and H.J. Gijzen. 2007. Nutrient recovery from domestic wastewater using a UASB-duckweed ponds system. Bioresource Technology, 98, 798-807.
  • Environmental KTN (Environmental Knowledge Transfer Network). 2008. Energy Efficient Water & Wastewater Treatment. Stimulating business innovation and environmental protection through the transfer of knowledge. January, 2008.
  • EPA (U.S. Environmental Protection Agency). 2004. Guidelines for Water Reuse. September, 2004.
  • EPA (U.S. Environmental Protection Agency). 2008. Ensuring a Sustainable Future: An Energy Management Guidebook for Wastewater and Water Utilities.
  • EPA (U.S. Environmental Protection Agency). 2012a. Centers for Water Research on National Priorities Related to a Systems View of Nutrient Management Priorities Related to a Systems View of Nutrient Management STAR-H1.
  • EPA (U.S. Environmental Protection Agency). 2012b. National Water Program 2012 Strategy: Response to Climate Change. December 2012.
  • Forrest, A.L., K.P. Fattah, D.S. Mavinic, and F.A. Koch. 2008. Optimizing struvite production for phosphate recovery in WWTP. Journal of Environmental Engineering, 134(5), 395-402.
  • Ghiu, S. 2014. DORIS – Energy Consumption Calculator for Seawater Reverse Osmosis Systems. Denver, Colo.: Water Research Foundation.
  • Griffiths-Sattenspiel, B., and W. Wilson. 2009. The Carbon Footprint of Water. River Network, Portland.
  • GWRC (Global Water Research Coalition). 2008. Water and Energy: Report of the GWRC Research Strategy Workshop.
  • He, C., Z. Liu, and M. Hodgins. 2013. Using Life Cycle Assessment for Quantifying Embedded Water and Energy in a Water Treatment System. Water Research Foundation. WRF #4443.
  • Hernández, E., M.A. Pardo, E. Cabrera, and R. Cobacho. 2010. Energy assessment of water networks, a case study. Proceedings of the Water Distribution System Analysis 2010 Conference (WDSA2010), Tucson, AZ, USA, Sept. 12-15, 2010.
  • Hightower, M., D. Reible, and M. Webber. 2013. Workshop Report: Developing a Research Agenda for the Energy Water Nexus. National Science Foundation. Grant # CBET 1341032.
  • Holt, J.K., H.G. Park, Y.M. Wang, M. Stadermann, A.B. Artyukhin, C.P. Grigoropoulos, A. Noy, and O. Bakajin. 2006. Fast mass transport through sub-2-nanometer carbon nanotubes. Science, 312, 1034 – 1037.
  • Horne, J., and A. Kricun. 2008. Using management systems to reduce energy consumption and energy costs. Proceedings of the Water Environment Federation, 2008(10), 5826-5843.
  • Horne, J., J. Turgeon, and E. Byous. 2011. Energy Self-Assessment Tools and Energy Audits for Water and Wastewater Utilities. A presentation.
  • Horvath, A., and J. Stokes. 2013. Life-cycle energy assessment of alternative water supply systems in California. California Energy Commission. Report # CEC-500-2013-037.
  • House, L. 2007. Water supply-related electricity demand in California. California Energy Commission. Report # CEC-500-2007-114.
  • House, L. 2011. Time-of-use water meter effects on customer water use. California Energy Commission. Report # CEC-500-2011-023.
  • Huxley, D.E., W.D. Bellamy, P. Sathyanarayan, M. Ridens, and J. Mack. 2009. Greenhouse Gas Emission Inventory and Management Strategy Guidelines for Water Utilities. Denver, Colo.: Water Research Foundation.
  • Innovyze, Inc. 2013. Available online at: www.innovyze.com
  • Jacangelo, J., A. Subramani, J. Oppenheimer, M. Badruzzaman. 2013. Renewable Energy Technologies and Energy Efficiency Strategies (Guidebook for Desalination and Water Reuse). Water Reuse Research Foundation. WateReuse-08-13.
  • Jacobs, J., T.A. Kerestes, and W.F. Riddle. 2003. Best Practices for Energy Management. Denver, Colo.: AwwaRF.
  • Jentgen, L.A., S. Conrad, H. Kidder, M. Barnett, T. Lee, and J. Woolschlager. 2005. Optimizing Operations at JEA’s Water System. Denver, Colo.: AwwaRF.
  • Jentgen, L.A., S. Conrad, R. Riddle, E.V. Sacken, K. Stone, W. Grayman, and S. Ranade. 2003. Implementing a Prototype Energy and Water Quality Management System. Denver, Colo.: AwwaRF.
  • Jimenez, J. [N.d.]. Advancing Anaerobic Wastewater and Solids Treatment Processes. Water Environment Research Foundation. WERF # ENER5R12. Forthcoming.
  • Johnson Foundation. 2013. Building Resilient Utilities: How Water and Electric Utilities Can Co-Create Their Futures. Report.
  • Jolly, M., and J. Gillard. 2009. The economics of advanced digestion. In Proceedings of the 14th European Biosolids and Organic Resources Conference and Exhibition—9th–11th November (Vol. 2009).
  • Kärkkäinen, S., and J. Ikäheimo. 2009. Integration of demand side management with variable output DG. Proceedings of the 10th IAEE European conference, September, 2009, Vienna, 7–10.
  • Kang, S.J., K.P. Olmstead, and T. Allbaugh. 2010. A Roadmap to Energy Self-Sufficiency for U.S. Wastewater Treatment Plants. In the Proceedings of the Water Environment Federation Technical Exhibition and Conference (WEFTEC), 2010.
  • Kenway, S.J., A. Priestley, S. Cook, S. Seo, M. Inman, A. Gregory, and M. Hall. 2008. Energy Use in the Provision and Consumption of Urban Water in Australia and New Zealand. Commonwealth Scientific and Industrial Research Organisation, Victoria, Australia; & Water Services Association of Australia, Melbourne, Australia. Available online at: www.clw.csiro.au/publications/waterforahealthycountry/2008/wfhc-urban-waterenergy.pdf.
  • Kilian, R.E. [N.d.]. Co-digestion of Organic Waste – Addressing Operational Side Effects. Water Environment Research Foundation. WERF # ENER9C13. Forthcoming.
  • Kim, Y.J., and J.H. Choi. 2010. Enhanced desalination efficiency in capacitive deionization with an ion-selective membrane. Separation and Purification Technology, 71, 70 – 75.
  • Knapp, J., and G. MacDonald. [N.d.] “Energy Recovery from Pressure Reducing Valve Stations Using In-Line Hydrokinetic Turbines.” Denver, Colo.: Water Research Foundation. Forthcoming.
  • Latimer, R. 2014. Towards a renewable future: assessing resource recovery as a viable treatment alternative. Water Environment Research Foundation. Report #NTRY1R12.
  • Lawson, R., R. Sandra, G. Shreeve, and A. Tucker. 2013. Links and Benefits of Water and Energy Efficiency Joint Learning. Denver, Colo.: Water Research Foundation.
  • Leiby, V., and M.E. Burke. 2011. Energy Efficiency in the North American Water Supply Industry: A Compendium of Best Practices and Case Studies. Denver, Colo.: Water Research Foundation.
  • Lekov, A. 2010. Opportunities for Energy Efficiency and Open Automated Demand Response in Wastewater Treatment Facilities in California – Phase I Report. A Report by the Lawrence Berkeley National Laboratory.
  • Li, B. 2011. Electricity Generation from Anaerobic Wastewater Treatment in Microbial Fuel Cell. Water Environment Research Foundation. WERF # OWSO8C09.
  • Liner, B., and C. Stacklin. 2013. Driving Water and Wastewater Utilities to More Sustainable Energy Management. ASME 2013 Power Conference. American Society of Mechanical Engineers, 2013.
  • Lisk, B., E. Greenberg, and F. Bloetscher. 2013. Case Studies: Implementing Renewable Energy at Water Utilities. Denver, Colo.: Water Research Foundation.
  • Lorand, R.T. 2013. Green Energy Life Cycle Assessment Tool Version 2. Denver, Colo.: Water Research Foundation.
  • Maurer, M., P. Schwegler, and T.A. Larsen. 2003. Nutrients in urine: energetic aspects of removal and recovery. Water Science & Technology, 48(1), 37-46.
  • McCarty, P.L., J. Bae, and J. Kim. 2011. Domestic wastewater treatment as a net energy producer–can this be achieved? Environmental science & technology, 45(17), 7100-7106.
  • McCutcheon, J., R.L. McGinnis, and M. Elimelech. 2005. A novel ammonia-carbon dioxide forward (direct) osmosis desalination process. Desalination, 174, 1 – 11.
  • McGuckin, R., J. Oppenheimer, M. Badruzzaman, A. Contreras, and J.G. Jacangelo. 2013 . Toolbox for Water Utility Energy and Greenhouse Gas Emission Management: An International Review. Denver, Colo.: Water Research Foundation.
  • Means, E. 2004. Water and Wastewater Industry Energy Efficiency: A Research Roadmap. Denver, Colo.: AwwaRF.
  • Mo, W., and Q. Zhang. 2013. Energy-nutrients-water nexus: Integrated resource recovery in municipal wastewater treatment plants. Journal of Environmental Management, 127, 255-267.
  • Monteith, H.D. 2008. State-of-the-Science Energy and Resource Recovery from Sludge. Water Environment Research Foundation. WERF # OWSO3R07.
  • Monteith, H.D. 2011. Life Cycle Assessment Manager for Energy Recovery (LCAMER). Water Environment Research Foundation. Report # OWSO4R07h/f.
  • MWH Global. 2007. Assessment of Energy Recovery Devices for Seawater Desalination. West Basin Ocean Water Desalination Demonstration Facility, Redondo Beach, CA.
  • MWH Global. 2014. The burning question on energy recovery. February 2014. Available at: wwtonline.co.uk.
  • Nerenberg, R., J. Boltz, G. Pizzarro, M. Aybar, K. Martin, and L. Downing. 2013. Challenge Projects on Low Energy Treatment Schemes for Water Reuse, Phase 1. Water Reuse Research Foundation. WateReuse-10-06C, University of Notre Dame.
  • Nikkel, C., E. Marchand, A. Achilli, and A. Childress. 2013. Challenge Projects on Low Energy Treatment Schemes for Water Reuse, Phase 1. Water Reuse Research Foundation WateReuse-10-06B, University of Nevada, Reno.
  • NYSERDA (New York State Energy Research and Development Authority). 2004. Energy Efficiency in Municipal Wastewater Treatment Plants: Technology Assessment. Albany, N.Y.
  • NYSERDA (New York State Energy Research and Development Authority). 2008a. Statewide Assessment of Energy Use by the Municipal Water and Wastewater Sector. NYSERDA. Albany, N.Y.
  • NYSERDA (New York State Energy Research and Development Authority). 2008b. Energy and the Municipal Water and Wastewater Treatment Sector. A presentation for the Genesee/Finger Lakes Regional Planning Council. May 9, 2008.
  • NYSERDA (New York State Energy Research and Development Authority). 2010. Water & wastewater energy management best practices handbook. NYSERDA. Available online at: http://www.nyserda.ny.gov.
  • Papa, F., D. Radulj, B. Karney, and M. Robertson. 2013. Pump Energy Efficiency Field Testing & Benchmarking in Canada. International Conference on Asset Management for Enhancing Energy Efficiency in Water and Wastewater Systems, International Water Association, Marbella, Spain.
  • Parry, D.L. 2014. Co-digestion of Organic Waste Products with Wastewater Solids: Final Report with Economic Model. Water Environment Research Foundation. WERF # OWSO5R07.
  • PLMA (Peak Load Management Alliance). 2002. Demand Response: Principles for regulatory guidance. Jupiter, Fla.: Peak Load Management Alliance, Report.
  • Rajagopalan, G. 2014. The use of novel nanoscale materials for sludge dewatering. California Energy Commission. Report # CEC-500-2014-081.
  • Raucher, R.S., J.E. Cromwell, K. Cooney, P. Thompson, L. Sullivan, B. Carrico, and M. MacPhee. 2008. Risks and Benefits of Energy Management for Drinking Water Utilities. Denver, Colo.: AwwaRF.
  • Reardon, D. [N.d.] Striking the Balance between Nutrient removal in Wastewater Treatment and Sustainability. Water Environment Research Foundation. WERF # NUTR1R06n. Forthcoming.
  • Rosso, D. 2014. Framework for Energy Neutral Treatment for the 21st Century through Energy Efficient Aeration. Water Environment Research Foundation. WERF # INFR2R12.
  • Rosso, D., L.E. Larson, and M.K. Stenstrom. 2010a. Aeration of large-scale municipal wastewater treatment plants: state of the art. CEC-500-2009-076-APF.
  • Rosso, D., S.-Y. Leu, P. Jiang, L.E. Larson, R. Sung, and M.K. Stenstrom. 2010b. Aeration Efficiency Monitoring with Real-Time Off-Gas Analysis. CEC-500-2009-076-APF.
  • Rosso, D., and G. Rajagopalan. 2013. Energy reduction in membrane filtration process through optimization of nanosuspended particle removal. California Energy Commission. Report # CEC-500-2013-132.
  • Rothausen, S.G., and D. Conway. 2011. Greenhouse-gas emissions from energy use in the water sector. Nature Climate Change, 1(4), 210-219.
  • Salveson, A.T. 2013. Challenge Projects on Low Energy Treatment Schemes for Water Reuse, Phase 1. Water Reuse Research Foundation. WateReuse-10-06A, Carollo Engineers.
  • Salveson, A. [N.d.] Evaluation of Innovative Reflectance Based UV for Enhanced Disinfection and Advanced Oxidation. Denver, Colo.: Water Research Foundation. Forthcoming.
  • Sanders, K.T., and M.E. Webber. 2012. Evaluating the energy consumed for water use in the United States. Environmental Research Letters, 7(3), 034034.
  • Sandino, J. 2010. Evaluation of Processes to Reduce Activated Sludge Solids Generation and Disposal. Water Environment Research Foundation. WERF # 05-CTS-3.
  • Seacord, T., J. MacHarg, and S. Coker. 2006. Affordable Desalination Collaboration 2005 Results. Proceedings of the American Membrane Technology Association Conference, Stuart, FL, USA, July 2006.
  • Semiat, R. 2008. Energy issues in desalination processes. Environmental Science and Technology, 42, 8193 – 8201.
  • Senon, C., M. Badruzzaman, A. Contreras, J. Adidjaja, M.S. Allen, and J.G. Jacangelo. [N.d.] Drinking Water Pump Station Design and Operation for Energy Efficiency. Denver, Colo.: Water Research Foundation. Forthcoming.
  • Skeens, B., W. Wood, and N. Spivey. 2009. Water production and distribution real–time energy management. Proceedings of the Awwa DSS Conference, Reno, NV, USA, September.
  • Skerlos, S.J., L. Raskin, N.G. Love, A.L. Smith, L.B. Stadler, and L. Cao. 2013. Challenge Projects on Low Energy Treatment Schemes for Water Reuse, Phase 1. Water Reuse Research Foundation. WateReuse-10-06D, University of Michigan.
  • Spellman, F.R. 2013. Water & Wastewater Infrastructure: Energy Efficiency and Sustainability. CRC Press, Boca Raton, FL.
  • Stillwell, A.S., D.C. Hoppock, and M.E. Webber. 2010. Energy recovery from wastewater treatment plants in the United States: A case study of the energy-water nexus. Sustainability, 2(4), 945-962.
  • Stover, R. 2007. Seawater reverse osmosis with isobaric energy recovery devices. Desalination, 203, 168 – 175.
  • Stover, R., and N. Efraty 2011. Record low energy consumption with Closed Circuit Desalination. International Desalination Association (IDA) World Congress – Perth Convention and Exhibition Center (PCEC), Perth, Western Australia, September 4 – 9, 2011.
  • Subramani, A., M. Badruzzaman, J. Oppenheimer, and J.G. Jacangelo. 2011. Energy minimization strategies and renewable energy utilization for desalination: A review. Water Research, 45, 1907 – 1920.
  • Sui, H., B.G. Han, J.K. Lee, P. Walian, and B.K. Jap. 2001. Structural basis of water specific transport through the AQP1 water channel. Nature, 414, 872 – 878.
  • Tarallo, S. 2014. Utilities of the Future Energy Findings. Water Environment Research Foundation. WERF # ENER6C13.
  • Tarallo, S. [N.d.] Energy Balance and Reduction Opportunities, Case Studies of Energy-Neutral Wastewater Facilities and Triple Bottom Line (TBL) Research Planning Support. Water Environment Research Foundation. WERF # ENER1C12. Forthcoming.
  • Thompson, L., K. Song, A. Lekov, and A. McKane. 2008. Automated Demand Response Opportunities in Wastewater Treatment Facilities. Ernest Orlando Lawrence Berkeley National Laboratory. November 2008.
  • Toffey, B. 2010. Beyond Zero Net Energy: Case Studies of Wastewater Treatment for Energy and Resource Production. AWRA-PMAS Meeting. September 16, 2010.
  • Van Horne, M. [N.d.] Developing Solutions to Operational Side Effects Associated with Co-digestion of High Strength Organic Wastes. Water Environment Research Foundation. WERF # ENER8R13. Forthcoming.
  • Van Paassen, J., W. Van der Meer, and J. Post. 2005. Optiflux: from innovation to realization. Desalination 178, 325-331.
  • Veerapaneni, S., B. Jordan, G. Leitner, S. Freeman, and J. Madhavan. 2005. Optimization of RO desalination process energy consumption. International Desalination Association World Congress, Singapore.
  • Veerapaneni, S.V., B. Klayman, S. Wang, and R. Bond. 2011. Desalination Facility Design and Operation for Maximum Energy Efficiency. Denver, Colo.: Water Research Foundation.
  • Verstraete, W., P. Van de Caveye, and V. Diamantis. 2009. Maximum use of resources present in domestic “used water”. Bioresource technology, 100(23), 5537-5545.
  • Von Meier, A. 1999. Occupational cultures as a challenge to technological innovation. Engineering Management, IEEE Transactions on, 46(1), 101-114.
  • VWEA (Virginia Water Environment Association). 2013. WERF: Research on Sustainable and ReNEW-able Resources – Nutrients, Energy, Water. A Presentation for the VWEA Education Committee, 2013 Annual Seminar. Richmond, VA.
  • Wallis, M.J, M.R. Ambrose, C., and C. Chan. 2008. Climate change: Charting a water course in an uncertain future. Journal of American Water Works Association, 100 (6).
  • WEF (Water Environment Federation). 2009. Energy Conservation in Water and Wastewater Facilities – MOP 32 (WEF Manual of Practice). McGraw-Hill Professional, 2009.
  • WEF (Water Environment Federation). 2012. Energy Roadmap: Driving Water and Wastewater Utilities to More Sustainable Energy Management. October, 2012.
  • Welgemoed, T.J. 2005. Capacitive Deionization Technology: Development and evaluation of an industrial prototype system. University of Pretoria, 2005, Dissertation.
  • Wiesner, M. 2013. Direct Contact Membrane Distillation for Water Reuse Using Nanostructured Ceramic Membranes. Water Reuse Research Foundation. WRRF # 07-05-1.
  • Wilcoxson, D., and M. Badruzzaman. 2013. Optimization of wastewater lift stations for reduction of energy usage and greenhouse gas emissions. Water Environment Research Foundation. Report # INFR3R11.
  • Wilf, M., L. Awerbuch, and C. Bartels. 2007. The Guidebook to Membrane Desalination Technology: Reverse Osmosis, Nanofiltration and Hybrid Systems Process, Design, Applications and Economics. Balaban Desalination Publications.
  • Wilf, M., and C. Bartels. 2005. Optimization of seawater RO systems design. Desalination, 173, 1-12.
  • Wilf, M., and J. Hudkins. 2010. Energy Efficient Configuration of RO Desalination Units. Proceedings of Water Environment Federation Membrane Applications Conference, Anaheim, California.
  • Willis, J.L. 2011. Combined Heat and Power System Evaluation Tool Instruction Manual. Water Environment Research Foundation. WERF # U2R08b.
  • Willis, J., L. Stone, K. Durden, N. Beecher, C. Hemenway, and R. Greenwood. 2012. Barriers to Biogas Use for Renewable Energy. Water Environment Research Foundation. Report # OWSO11C10.
  • Willis, J.L. [N.d.] Identification of Barriers to Energy Efficiency and Resource Recovery at WRRF’s and Solutions to Promote These Practices. Water Environment Research Foundation. WERF # ENER7C13. Forthcoming.
  • Yonkin, M., K. Clubine, and K. O’Connor. 2008. Importance of Energy Efficiency to the Water and Wastewater Sector. Clearwaters.


Nicole Foss: Negative interest rates and the war on cash

Nicole Foss, September 4-8, 2016, theautomaticearth.com

Part 1 is here: Negative Interest Rates and the War on Cash (1)

Part 2 is here: Negative Interest Rates and the War on Cash (2)

Part 3 is here: Negative Interest Rates and the War on Cash (3)

Part 4 is here: Negative Interest Rates and the War on Cash (4)

Nicole Foss: As momentum builds in the developing deflationary spiral, we are seeing increasingly desperate measures to keep the global credit Ponzi scheme from its inevitable conclusion. Credit bubbles are dynamic — they must grow continually or implode — hence they require ever more money to be lent into existence. But that in turn requires a plethora of willing and able borrowers to maintain demand for new credit money, lenders who are not too risk-averse to make new loans, and (apparently effective) mechanisms for diluting risk to the point where it can (apparently safely) be ignored. As the peak of a credit bubble is reached, all these necessary factors first become problematic and then cease to be available at all. Past a certain point, there are hard limits to financial expansions, and the global economy is set to hit one imminently.

Borrowers are increasingly maxed out and afraid they will not be able to service existing loans, let alone new ones. Many families already have more than enough ‘stuff’ for their available storage capacity in any case, and are looking to downsize and simplify their cluttered lives. Many businesses are already struggling to sell goods and services, and so are unwilling to borrow in order to expand their activities. Without willingness to borrow, demand for new loans will fall substantially. As risk factors loom, lenders become far more risk-averse, often very quickly losing trust in the solvency of their counterparties. As we saw in 2008, the transition from embracing risky prospects to avoiding them like the plague can be very rapid, changing the rules of the game very abruptly.

Mechanisms for spreading risk to the point of ‘dilution to nothingness’, such as securitization, seen as effective and reliable during monetary expansions, cease to be seen as such as expansion morphs into contraction. The securitized instruments previously created then cease to be perceived as holding value, leading to them being repriced at pennies on the dollar once price discovery occurs, and the destruction of that value is highly deflationary. The continued existence of risk becomes increasingly evident, and the realisation that that risk could be catastrophic begins to dawn.

Natural limits for both borrowing and lending threaten the capacity to prolong the credit boom any further, meaning that even if central authorities are prepared to pay almost any price to do so, it ceases to be possible to kick the can further down the road. Negative interest rates and the war on cash are symptoms of such a limit being reached. As confidence evaporates, so does liquidity. This is where we find ourselves at the moment — on the cusp of phase two of the credit crunch, sliding into the same unavoidable constellation of conditions we saw in 2008, but on a much larger scale.

From ZIRP to NIRP

Interest rates have remained at extremely low levels, hardly distinguishable from zero, for several years. This zero interest rate policy (ZIRP) is a reflection of both the extreme complacency as to risk during the rise into the peak of a major bubble, and increasingly acute pressure to keep the credit mountain growing through constant stimulation of demand for borrowing. The resulting search for yield in a world of artificially stimulated over-borrowing has led to an extraordinary array of malinvestment across many sectors of the real economy. Ever more excess capacity is being built in a world facing a severe retrenchment in aggregate demand. It is this that is termed ‘recovery’, but rather than a recovery, it is a form of double jeopardy — an intensification of previous failed strategies in the hope that a different outcome will result. This is, of course, one definition of insanity.

Now that financial crisis conditions are developing again, policies are being implemented which amount to an even greater intensification of the old strategy. In many locations, notably those perceived to be safe havens, the benchmark is moving from a zero interest rate policy to a negative interest rate policy (NIRP), initially for bank reserves, but potentially for business clients (for instance in Holland and the UK). Individual savers would be next in line. Punishing savers, while effectively encouraging banks to lend to weaker, and therefore riskier, borrowers, creates incentives for both borrowers and lenders to continue the very behaviour that set the stage for financial crisis in the first place, while punishing the kind of responsibility that might have prevented it.

Risk is relative. During expansionary times, when risk perception is low almost across the board (despite actual risk steadily increasing), the risk premium that interest rates represent shows relatively little variation between different lenders, and little volatility. For instance, the interest rates on sovereign bonds across Europe, prior to financial crisis, were low and broadly similar for many years. In other words, credit spreads were very narrow during that time. Greece was able to borrow almost as easily and cheaply as Germany, as lenders bet that Europe’s strong economies would back the debt of its weaker parties. However, as collective psychology shifts from unity to fragmentation, risk perception increases dramatically, and risk distinctions of all kinds emerge, with widening credit spreads. We saw this happen in 2008, and it can be expected to be far more pronounced in the coming years, with credit spreads widening to record levels. Interest rate divergences create self-fulfilling prophecies as to relative default risk, against a backdrop of fear-driven high volatility.

Many risk distinctions can be made — government versus private debt, long versus short term, economic center versus emerging markets, inside the European single currency versus outside, the European center versus the troubled periphery, high grade bonds versus junk bonds etc. As the risk distinctions increase, the interest rate risk premiums diverge. Higher risk borrowers will pay higher premiums, in recognition of the higher default risk, but the higher premium raises the actual risk of default, leading to still higher premiums in a spiral of positive feedback. Increased risk perception thus drives actual risk, and may do so until the weak borrower is driven over the edge into insolvency. Similarly, borrowers perceived to be relative safe havens benefit from lower risk premiums, which in turn makes their debt burden easier to bear and lowers (or delays) their actual risk of default. This reduced risk of default is then reflected in even lower premiums. The risky become riskier and the relatively safe become relatively safer (which is not necessarily to say safe in absolute terms). Perception shapes reality, which feeds back into perception in a positive feedback loop.

The process of diverging risk perception is already underway, and it is generally the states seen as relatively safe where negative interest rates are being proposed or implemented. Negative rates are already in place for bank reserves held with the ECB and in a number of European states from 2012 onwards, notably Scandinavia and Switzerland. The desire for capital preservation has led to a willingness among those with capital to accept paying for the privilege of keeping it in ‘safe havens’. Note that perception of safety and actual safety are not equivalent. States at the peak of a bubble may appear to be at low risk, but in fact the opposite is true. At the peak of a bubble, there is nowhere to go but down, as Iceland and Ireland discovered in phase one of the financial crisis, and many others will discover as we move into phase two. For now, however, the perception of low risk is sufficient for a flight to safety into negative interest rate environments.

This situation serves a number of short term purposes for the states involved. Negative rates help to control destabilizing financial inflows at times when fear is increasingly driving large amounts of money across borders. A primary objective has been to reduce upward pressure on currencies outside the euro zone. The Swiss, Danish and Swedish currencies have all been experiencing currency appreciation, hence a desire to use negative interest rates to protect their exchange rate, and therefore the price of their exports, by encouraging foreigners to keep their money elsewhere. The Danish central bank’s sole mandate is to control the value of the currency against the euro. For a time, Switzerland pegged their currency directly to the euro, but found the cost of doing so to be prohibitive. For them, negative rates are a less costly attempt to weaken the currency without the need to defend a formal peg. In a world of competitive, beggar-thy-neighbor currency devaluations, negative interest rates are seen as a means to achieve or maintain an export advantage, and evidence of the growing currency war.

Negative rates are also intended to discourage saving and encourage both spending and investment. If savers must pay a penalty, spending or investment should, in theory, become more attractive propositions. The intention is to lead to more money actively circulating in the economy. Increasing the velocity of money in circulation should, in turn, provide price support in an environment where prices are flat to falling. (Mainstream commentators would describe this as an attempt to increase ‘inflation’, by which they mean price increases, to the common target of 2%, but here at The Automatic Earth, we define inflation and deflation as an increase or decrease, respectively, in the money supply, not as an increase or decrease in prices.) The goal would be to stave off a scenario of falling prices where buyers would have an incentive to defer spending as they wait for lower prices in the future, starving the economy of circulating currency in the meantime. Expectations of falling prices create further downward price pressure, leading into a vicious circle of deepening economic depression. Preventing such expectations from taking hold in the first place is a major priority for central authorities.

Negative rates in the historical record are symptomatic of times of crisis when conventional policies have failed, and as such are rare. Their use is a measure of desperation:

First, a policy rate likely would be set to a negative value only when economic conditions are so weak that the central bank has previously reduced its policy rate to zero. Identifying creditworthy borrowers during such periods is unusually challenging. How strongly should banks during such a period be encouraged to expand lending?

However strongly banks are ‘encouraged’ to lend, willing borrowers and lenders are set to become ‘endangered species’:

The goal of such rates is to force banks to lend their excess reserves. The assumption is that such lending will boost aggregate demand and help struggling economies recover. Using the same central bank logic as in 2008, the solution to a debt problem is to add on more debt. Yet, there is an old adage: you can bring a horse to water but you cannot make him drink! With the world economy sinking into recession, few banks have credit-worthy customers and many banks are having difficulties collecting on existing loans.
Italy’s non-performing loans have gone from about 5 percent in 2010 to over 15 percent today. The shale oil bust has left many US banks with over a trillion dollars of highly risky energy loans on their books. The very low interest rate environment in Japan and the EU has done little to spur demand in an environment full of malinvestments and growing government constraints.

Doing more of the same simply elevates the already enormous risk that a new financial crisis is right around the corner:

Banks rely on rates to make returns. As the former Bank of England rate-setter Charlie Bean has written in a recent paper for The Economic Journal, pension funds will struggle to make adequate returns, while fund managers will borrow a lot more to make profits. Mr Bean says: “All of this makes a leveraged ‘search for yield’ of the sort that marked the prelude to the crisis more likely.” This is not comforting but it is highly plausible: barely a decade on from the crash, we may be about to repeat it. This comes from tasking central bankers with keeping the world economy growing, even while governments have cut spending.

Experiences with Negative Interest Rates

The existing low interest rate environment has already caused asset price bubbles to inflate further, placing assets such as real estate ever more beyond the reach of ordinary people, at the same time as hampering those same people attempting to build sufficient savings for a deposit. Negative interest rates provide an increased incentive for this to continue. In locations where rates are already negative, the asset bubble effect has worsened. For instance, in Denmark negative interest rates have added considerable impetus to the housing bubble in Copenhagen, resulting in an ever larger pool of over-leveraged property owners exposed to the risks of a property price collapse and debt default:

Where do you invest your money when rates are below zero? The Danish experience says equities and the property market. The benchmark index of Denmark’s 20 most-traded stocks has soared more than 100 percent since the second quarter of 2012, which is just before the central bank resorted to negative rates. That’s more than twice the stock-price gains of the Stoxx Europe 600 and Dow Jones Industrial Average over the period. Danish house prices have jumped so much that Danske Bank A/S, Denmark’s biggest lender, says Copenhagen is fast becoming Scandinavia’s riskiest property market.

Considering that risky property markets are the norm in Scandinavia, Copenhagen represents an extreme situation:

“Property prices in Copenhagen have risen 40–60 percent since the middle of 2012, when the central bank first resorted to negative interest rates to defend the krone’s peg to the euro.”

This should come as no surprise: there are documented cases where Danish borrowers are paid to take on debt and buy houses (see “In Denmark You Are Now Paid To Take Out A Mortgage”), so between rewarding debtors and punishing savers, this outcome is hardly shocking. Yet it is the negative rates that have made this unprecedented surge in home prices feel relatively benign on broader price levels, since the source of housing funds is not savings but cash, usually cash belonging to the bank.


The Swedish property market is similarly reaching for the sky. Like Japan at the peak of its bubble in the late 1980s, Sweden has intergenerational mortgages, with an average term of 140 years! Recent regulatory attempts to rein in the ballooning debt by reducing the maximum term to a ‘mere’ 105 years have been met with protest:

Swedish banks were quoted in the local press as opposing the move. “It isn’t good for the finances of households as it will make mortgages more expensive and the terms not as good. And it isn’t good for financial stability,” the head of Swedish Bankers’ Association was reported to say.

Apart from stimulating further leverage in an already over-leveraged market, negative interest rates do not appear to be stimulating actual economic activity:

If negative rates don’t spur growth — Danish inflation since 2012 has been negligible and GDP growth anemic — what are they good for?….Danish businesses have barely increased their investments, adding less than 6 percent in the 12 quarters since Denmark’s policy rate turned negative for the first time. At a growth rate of 5 percent over the period, private consumption has been similarly muted. Why is that? Simply put, a weak economy makes interest rates a less powerful tool than central bankers would like.

“If you’re very busy worrying about the economy and your job, you don’t care very much what the exact rate is on your car loan,” says Torsten Slok, Deutsche Bank’s chief international economist in New York.

Fuelling inequality and profligacy while punishing responsible behaviour is politically unpopular, and the consequences, when they eventually manifest, will be even more so. Unfortunately, at the peak of a bubble, it is only continued financial irresponsibility that can keep a credit expansion going and therefore keep the financial system from abruptly crashing. The only things keeping the system ‘running on fumes’ are financial sleight-of-hand, disingenuous bribery and outright fraud. The price to pay is that the systemic risks continue to grow, and with them the scale of the impacts that can be expected when the risk is eventually realised. Politicians desperately wish to avoid those consequences occurring in their term of office, hence they postpone the inevitable at any cost for as long as physically possible.

 

The Zero Lower Bound and the Problem of Physical Cash

 

Central bankers attempting to stimulate the circulation of money in the economy through the use of negative interest rates have a number of problems. For starters, setting a low official rate does not necessarily mean that low rates will prevail in the economy, particularly in times of crisis:

The experience of the global financial crisis taught us that the type of shocks which can drive policy interest rates to the lower bound are also shocks which produce severe impairments to the monetary policy transmission mechanism. Suppose, for example, that the interbank market freezes and prevents a smooth transmission of the policy interest rate throughout the banking sector and financial markets at large. In this case, any cut in the policy rate may be almost completely ineffective in terms of influencing the macroeconomy and prices.

This is exactly what we saw in 2008, when interbank lending seized up due to the collapse of confidence in the banking sector. We have not seen this happen again yet, but it inevitably will as crisis conditions resume, and when it does it will illustrate vividly the limits of central bank power to control financial parameters. At that point, interest rates are very likely to spike in practice, with banks not trusting each other to repay even very short term loans, since they know what toxic debt is on their own books and rationally assume their potential counterparties are no better. Widening credit spreads would also lead to much higher rates on any debt perceived to be risky, which, increasingly, would be all debt with the exception of government bonds in the jurisdictions perceived to be safest. Low rates on high grade debt would not translate into low rates economy-wide. Given the extent of private debt, and the consequent vulnerability to higher interest rates across the developed world, an interest rate spike following the NIRP (negative interest rate policy) period would be financially devastating.

The major issue with negative rates in the shorter term is the ability to escape from the banking system into physical cash. Instead of causing people to spend, a penalty on holding savings in a bank creates an incentive for them to withdraw their funds and hold cash under their own control, thereby avoiding both the penalty and the increasing risk associated with the banking system:

Western banking systems are highly illiquid, meaning that they have very low cash equivalents as a percentage of customer deposits….Solvency in many Western banking systems is also highly questionable, with many loaded up on the debts of their bankrupt governments. Banks also play clever accounting games to hide the true nature of their capital inadequacy. We live in a world where questionably solvent, highly illiquid banks are backed by under capitalized insurance funds like the FDIC, which in turn are backed by insolvent governments and borderline insolvent central banks. This is hardly a risk-free proposition. Yet your reward for taking the risk of holding your money in a precarious banking system is a rate of return that is substantially lower than the official rate of inflation.

In other words, negative rates encourage an arbitrage situation favouring cash. In an environment of few good investment opportunities, increasing recognition of risk and a rising level of fear, a desire for large scale cash withdrawal is highly plausible:

From a portfolio choice perspective, cash is, under normal circumstances, a strictly dominated asset, because it is subject to the same inflation risk as bonds but, in contrast to bonds, it yields zero return. It has also long been known that this relationship would be reversed if the return on bonds were negative. In that case, an investor would be certain of earning a profit by borrowing at negative rates and investing the proceedings in cash. Ignoring storage and transportation costs, there is therefore a zero lower bound (ZLB) on nominal interest rates.

Zero is the lower bound for nominal interest rates if one wants to avoid creating such an incentive structure, but in a contractionary environment, zero is not low enough to make borrowing and lending attractive. This is because, while the nominal rate might be zero, the real rate (the nominal rate minus inflation, which rises as inflation turns negative) can remain high, or perhaps very high, depending on how contractionary the financial landscape becomes. As Keynes observed, attempting to stimulate demand for money by lowering interest rates amounts to ‘pushing on a piece of string’. Central authorities find themselves caught in the liquidity trap, where monetary policy ceases to be effective:

Many big economies are now experiencing ‘deflation’, where prices are falling. In the euro zone, for instance, the main interest rate is at 0.05% but the “real” (or adjusted for inflation) interest rate is considerably higher, at 0.65%, because euro-area inflation has dropped into negative territory at -0.6%. If deflation gets worse then real interest rates will rise even more, choking off recovery rather than giving it a lift.
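The quoted example is just the Fisher relation between nominal rates, inflation and real rates. A minimal sketch of the arithmetic, using the euro-area figures from the quote (the function name and layout are mine, for illustration only):

```python
# Real interest rate from nominal rate and inflation (Fisher relation).

def real_rate(nominal: float, inflation: float) -> float:
    """Exact Fisher relation: (1 + nominal) / (1 + inflation) - 1."""
    return (1 + nominal) / (1 + inflation) - 1

nominal = 0.0005    # 0.05% euro-zone policy rate (from the quote)
inflation = -0.006  # -0.6% inflation, i.e. deflation (from the quote)

print(f"Approximate real rate: {nominal - inflation:.2%}")             # 0.65%
print(f"Exact real rate:       {real_rate(nominal, inflation):.2%}")   # ~0.65%
```

Deflation thus raises the real cost of borrowing even when the nominal rate is pinned near zero, which is precisely the trap described above.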

If nominal rates are sufficiently negative to compensate for the contractionary environment, real rates could, in theory, be low enough to stimulate the velocity of money, but the more negative the nominal rate, the greater the incentive to withdraw physical cash. Hoarded cash would reduce, instead of increase, the velocity of money. In practice, lowering rates can be moderately reflationary, provided there remains sufficient economic optimism for people to see the move in a positive light. However, sending rates into negative territory at a time when pessimism is dominant can easily be interpreted as a sign of desperation, and therefore as confirmation of a negative outlook. Under such circumstances, the incentives to regard the banking system as risky, to withdraw physical cash and to hoard it for a rainy day increase substantially. Not only does the money supply fail to grow, as new loans are not made, but the velocity of money falls as money is hoarded, thereby aggravating a deflationary spiral:

A decline in the velocity of money increases deflationary pressure. Each dollar (or yen or euro) generates less and less economic activity, so policymakers must pump more money into the system to generate growth. As consumers watch prices decline, they defer purchases, reducing consumption and slowing growth. Deflation also lifts real interest rates, which drives currency values higher. In today’s mercantilist, beggar-thy-neighbour world of global trade, a strong currency is a headwind to exports. Obviously, this is not the desired outcome of policymakers. But as central banks grasp for new, stimulative tools, they end up pushing on an ever-lengthening piece of string.
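The ‘velocity of money’ in the quote has a standard definition via the equation of exchange, M × V = P × Q: velocity is nominal GDP divided by the money stock. A quick sketch (the magnitudes below are rough US-style figures chosen for illustration, not official statistics):

```python
# Velocity of money from the equation of exchange M * V = P * Q,
# so V = (P * Q) / M = nominal GDP / money stock.

def velocity(nominal_gdp: float, money_stock: float) -> float:
    return nominal_gdp / money_stock

# Illustrative magnitudes only: money stock growing faster than nominal GDP.
for year, gdp, m2 in [(2008, 14.7e12, 8.0e12), (2015, 18.1e12, 12.0e12)]:
    print(year, round(velocity(gdp, m2), 2))  # 1.84, then 1.51
```

When velocity falls like this, each unit of new money generates less activity, which is why the quote describes policymakers as pushing on an ever-lengthening piece of string.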


Japan has been in the economic doldrums, with pessimism dominant, for over 25 years, and the population has become highly sceptical of stimulation measures intended to lead to recovery. The negative interest rates introduced there (described as ‘economic kamikaze’) have had a very different effect than in Scandinavia, which is still more or less at the peak of its bubble and therefore much more optimistic. Unfortunately, lowering interest rates in times of collective pessimism has a poor record of acting to increase spending and stimulate the economy, as Japan has discovered since its bubble burst in 1989:

For about a quarter of a century the Japanese have proved to be fanatical savers, and no matter how low the Bank of Japan cuts rates, they simply cannot be persuaded to spend their money, or even invest it in the stock market. They fear losing their jobs; they fear a further fall in shares or property values; they have no confidence in the investment opportunities in front of them. So pathological has this psychology grown that they would rather see the value of their savings fall than spend the cash. That draining of confidence after the collapse of the 1980s “bubble” economy has depressed Japanese growth for decades.

Fear is a very sharp driver of behaviour — easily capable of over-riding incentives designed to promote spending and investment:

When people are fearful they tend to save; and when they become especially fearful then they save even more, even if the returns on their savings are extremely low. Much the same goes for businesses, and there are increasing reports of them “hoarding” their profits rather than reinvesting them in their business, such is the great “uncertainty” around the world economy. Brexit obviously only added to the fears and misgivings about the future.

Deflation is so difficult to overcome precisely because of its strong psychological component. When the balance of collective psychology tips from optimism, hope and greed to pessimism and fear, everything is perceived differently. Measures intended to restore confidence end up being interpreted as desperation, and therefore get little or no traction. As such initiatives fail, their failure becomes confirmation of a negative bias, which increases the power of that bias, causing more stimulus initiatives to fail. The resulting positive feedback loop creates and maintains a vicious circle, both economically and socially:

There is a strong argument that when rates go negative it squeezes the speed at which money circulates through the economy, commonly referred to by economists as the velocity of money. We are already seeing this happen in Japan where citizens are clamouring for ¥10,000 bills (and home safes to store them in). People are taking their money out of the banking system to stuff it under their metaphorical mattresses. This may sound extreme, but whether paper money is stashed in home safes or moved into transaction substitutes or other stores of value like gold, the point is it’s not circulating in the economy. The empirical data support this view — the velocity of money has declined precipitously as policymakers have moved aggressively to reduce rates.

Physical cash under one’s own control is increasingly seen as one of the primary escape routes for ordinary people fearing the resumption of the 2008 liquidity crunch, and its popularity as a store of value is increasing steadily, with demand for cash rising more rapidly than GDP in a wide range of countries:

While cash’s use is in continual decline, claims that it is set to disappear entirely may be premature, according to the Bank of England….The Bank estimates that 21pc to 27pc of everyday transactions last year were in cash, down from between 34pc and 45pc at the turn of the millennium. Yet simultaneously the demand for banknotes has risen faster than the total amount of spending in the economy, a trend that has only become more pronounced since the mid-1990s. The same phenomenon has been seen internationally, in the US, eurozone, Australia and Canada….

….The prevalence of hoarding has also firmed up the demand for physical money. Hoarders are those who “choose to save their money in a safety deposit box, or under the mattress, or even buried in the garden, rather than placing it in a bank account”, the Bank said. At a time when savings rates have not turned negative, and deposits are guaranteed by the government, this kind of activity seems to defy economic theory. “For such action to be considered as rational, those that are hoarding cash must be gaining a non-financial benefit,” the Bank said. And that benefit must exceed the returns and security offered by putting that hoarded cash in a bank deposit account. A Bank survey conducted last year found that 18pc of people said they hoarded cash largely “to provide comfort against potential emergencies”.

This would suggest that a minimum of £3bn is hoarded in the UK, or around £345 a person. A government survey conducted in 2012 suggested that the total number might be higher, at £5bn….

…..But Bank staff believe that its survey results understate the extent of hoarding, as “the sensitivity of the subject” most likely affects the truthfulness of hoarders. “Based on anecdotal evidence, a small number of people are thought to hoard large values of cash.” The Bank said: “As an illustrative example, if one in every thousand adults in the United Kingdom were to hoard as much as £100,000, this would account for around £5bn — nearly 10pc of notes in circulation.” While there may be newer and more convenient methods of payment available, this strong preference for cash as a safety net means that it is likely to endure, unless steps are taken to discourage its use.
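The Bank’s illustrative arithmetic is easy to reproduce. A quick check, assuming roughly 50 million UK adults (my assumption; the article does not give a population figure) and the £63bn of notes in circulation cited later in this piece:

```python
# Reproducing the Bank of England's illustrative hoarding arithmetic.

uk_adults = 50_000_000     # assumed adult population, for illustration only
notes_outstanding = 63e9   # GBP of notes in circulation (figure cited below)

# "If one in every thousand adults were to hoard as much as GBP 100,000..."
hoarders = uk_adults // 1000
hoard = hoarders * 100_000
print(f"GBP {hoard / 1e9:.0f}bn hoarded, "
      f"{hoard / notes_outstanding:.0%} of notes in circulation")
# -> GBP 5bn, about 8% ("nearly 10pc") of all notes
```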

PART 2.

Closing the Escape Routes

 

Nicole Foss: History teaches us that central authorities dislike escape routes, at least for the majority, and are therefore prone to closing them, so that control of a limited money supply can remain in the hands of the very few. In the 1930s, gold was the escape route, so gold was confiscated. As Alan Greenspan wrote in 1966:

In the absence of the gold standard, there is no way to protect savings from confiscation through monetary inflation. There is no safe store of value. If there were, the government would have to make its holding illegal, as was done in the case of gold. If everyone decided, for example, to convert all his bank deposits to silver or copper or any other good, and thereafter declined to accept checks as payment for goods, bank deposits would lose their purchasing power and government-created bank credit would be worthless as a claim on goods.

The existence of escape routes for capital preservation undermines the viability of the banking system, which is already over-extended, over-leveraged and extremely fragile. This time cash serves that role:

Ironically, though the paper money standard that replaced the gold standard was originally meant to empower governments, it now seems that paper money is perceived as an obstacle to unlimited government power….While paper money isn’t as big an impediment to government power as the gold standard was, it is nevertheless an impediment compared to a society with only electronic money. Because of this, the more ardent statists favor the abolition of paper money and a monetary system with only electronic money and electronic payments.

We can therefore expect cash to be increasingly disparaged in order to justify its intended elimination:

Every day, a situation that requires the use of physical cash feels more and more like an anachronism. It’s like having to listen to music on a CD. John Maynard Keynes famously referred to gold (well, the gold standard specifically) as a “barbarous relic.” Well the new barbarous relic is physical cash. Like gold, cash is physical money. Like gold, cash is still fetishized. And like gold, cash is a costly drain on the economy. A study done at Tufts in 2013 estimated that cash costs the economy $200 billion. Their study included the nugget that consumers spend, on average, 28 minutes per month just traveling to the point where they obtain cash (ATM, etc.). But this is just the first-order problem with cash. The real problem, which economists are starting to recognize, is that paper cash is an impediment to effective monetary policy, and therefore economic growth.

Holding cash is not risk free, but cash is nevertheless king in a period of deflation:

Conventional wisdom is that interest rates earned on investments are never less than zero because investors could alternatively hold currency. Yet currency is not costless to hold: It is subject to theft and physical destruction, is expensive to safeguard in large amounts, is difficult to use for large and remote transactions, and, in large quantities, may be monitored by governments.

The acknowledged risks of holding cash are understood and can be managed personally, whereas the substantial risks associated with a systemic banking crisis are entirely outside the control of ordinary depositors. The bank bail-in (rescuing the bank with the depositors’ funds) in Cyprus in early 2013 was a warning sign, to those who were paying attention, that holding money in a bank is not necessarily safe. The capital controls put in place in other locations, for instance Greece, also underline that cash in a bank may not be accessible when needed.

The majority of the developed world either already has, or is introducing, legislation to require depositor bail-ins in the event of bank failures, rather than taxpayer bailouts, in preparation for many more Cyprus-type events, but on a very much larger scale. People are waking up to the fact that a bank balance is not considered their money, but is actually an unsecured loan to the bank, which the bank may or may not repay, depending on its own circumstances:

Your checking account balance is denominated in dollars, but it does not consist of actual dollars. It represents a promise by a private company (your bank) to pay dollars upon demand. If you write a check, your bank may or may not be able to honor that promise. The poor souls who kept their euros in the form of large balances in Cyprus banks have just learned this lesson the hard way. If they had been holding their euros in the form of currency, they would have not lost their wealth.


Even in relatively untroubled countries, like the UK, it is becoming more difficult to access physical cash in a bank account or to use it for larger purchases. Notice of intent to withdraw may be required, and withdrawal limits may be imposed ‘for your own protection’. Reasons for the withdrawal may be required, ostensibly to combat money laundering and the black economy:

It’s one thing to be required by law to ask bank customers or parties in a cash transaction to explain where their money came from; it’s quite another to ask them how they intend to use the money they wish to withdraw from their own bank accounts. As one Mr Cotton, a HSBC customer, complained to the BBC’s Money Box programme: “I’ve been banking in that bank for 28 years. They all know me in there. You shouldn’t have to explain to your bank why you want that money. It’s not theirs, it’s yours.”

In France, in the aftermath of terrorist attacks there, several anti-cash measures were passed, restricting the use of cash once obtained:

French Finance Minister Michel Sapin brazenly stated that it was necessary to “fight against the use of cash and anonymity in the French economy.” He then announced extreme and despotic measures to further restrict the use of cash by French residents and to spy on and pry into their financial affairs.

These measures…..include prohibiting French residents from making cash payments of more than 1,000 euros, down from the current limit of 3,000 euros….The threshold below which a French resident is free to convert euros into other currencies without having to show an identity card will be slashed from the current level of 8,000 euros to 1,000 euros. In addition any cash deposit or withdrawal of more than 10,000 euros during a single month will be reported to the French anti-fraud and money laundering agency Tracfin.

Tourists in France may also be caught in the net:

France passed another new Draconian law; from the summer of 2015, it will now impose cash requirements dramatically trying to eliminate cash by force. French citizens and tourists will only be allowed a limited amount of physical money. They have financial police searching people on trains just passing through France to see if they are transporting cash, which they will now seize.

This is essentially the Shock Doctrine in action. Central authorities rarely pass up an opportunity to use a crisis to add to their repertoire of repressive laws and practices.

However, even without a specific crisis to draw on as a justification, many other countries have also restricted the use of cash for purchases:

One way they are waging the War on Cash is to lower the threshold at which reporting a cash transaction is mandatory or at which paying in cash is simply illegal. In just the last few years:

  • Italy made cash transactions over €1,000 illegal;
  • Switzerland has proposed banning cash payments in excess of 100,000 francs;
  • Russia banned cash transactions over $10,000;
  • Spain banned cash transactions over €2,500;
  • Mexico made cash payments of more than 200,000 pesos illegal;
  • Uruguay banned cash transactions over $5,000.

Other restrictions on the use of cash can be more subtle, but can have far-reaching effects, especially if the ideas catch on and are widely applied:

The State of Louisiana banned “secondhand dealers” from making more than one cash transaction per week. The term has a broad definition and includes Goodwill stores, specialty stores that sell collectibles like baseball cards, flea markets, garage sales and so on. Anyone deemed a “secondhand dealer” is forbidden to accept cash as payment. They are allowed to take only electronic means of payment or a check, and they must collect the name and other information about each customer and send it to the local police department electronically every day.

The increasing application of de facto capital controls, when combined with the prevailing low interest rates, already convinces many to hold cash. The possibility of negative rates would greatly increase that likelihood. We are already in an environment of rapidly declining trust, and limited access to what we still perceive as our own funds only accelerates the process in a self-reinforcing feedback loop. More withdrawals lead to more controls, which increase fear and decrease trust, which leads to more withdrawals. This obviously undermines the perceived power of monetary policy to stimulate the economy, hence the escape route is already quietly closing.

In a deflationary spiral, where the money supply is crashing, very little money is in circulation and prices are consequently falling almost across the board, possessing purchasing power provides for the freedom to pursue opportunities as they present themselves, and to avoid being backed into a corner. The purchasing power of cash increases during deflation, even as electronic purchasing power evaporates. Hence cash represents freedom of action at a time when that will be the rarest of ‘commodities’.

Governments greatly dislike cash, and increasingly treat its use, or the desire to hold it, especially in large denominations, with great suspicion:

Why would a central bank want to eliminate cash? For the same reason as you want to flatten interest rates to zero: to force people to spend or invest their money in the risky activities that revive growth, rather than hoarding it in the safest place. Calls for the eradication of cash have been bolstered by evidence that high-value notes play a major role in crime, terrorism and tax evasion. In a study for the Harvard Business School last week, former bank boss Peter Sands called for global elimination of the high-value note.

Britain’s “monkey” — the £50 — is low-value compared with its foreign-currency equivalents, and constitutes a small proportion of the cash in circulation. By contrast, Japan’s ¥10,000 note (worth roughly £60) makes up a startling 92% of all cash in circulation; the Swiss 1,000-franc note (worth around £700) likewise. Sands wants an end to these notes plus the $100 bill, and the €500 note – known in underworld circles as the “Bin Laden”.


Cash is largely anonymous, untraceable and uncontrollable, hence it makes central authorities, in a system increasingly requiring total buy-in in order to function, extremely uncomfortable. They see no legitimate reason to own more than a small amount of it in physical form, since its ownership or use raises the spectre of tax evasion or other illegal activities:

The insidious nature of the war on cash derives not just from the hurdles governments place in the way of those who use cash, but also from the aura of suspicion that has begun to pervade private cash transactions. In a normal market economy, businesses would welcome taking cash. After all, what business would willingly turn down customers? But in the war on cash that has developed in the thirty years since money laundering was declared a federal crime, businesses have had to walk a fine line between serving customers and serving the government. And since only one of those two parties has the power to shut down a business and throw business owners and employees into prison, guess whose wishes the business owner is going to follow more often?

The assumption on the part of government today is that possession of large amounts of cash is indicative of involvement in illegal activity. If you’re traveling with thousands of dollars in cash and get pulled over by the police, don’t be surprised when your money gets seized as “suspicious.” And if you want your money back, prepare to get into a long, drawn-out court case requiring you to prove that you came by that money legitimately, just because the courts have decided that carrying or using large amounts of cash is reasonable suspicion that you are engaging in illegal activity….

….Centuries-old legal protections have been turned on their head in the war on cash. Guilt is assumed, while the victims of the government’s depredations have to prove their innocence….Those fortunate enough to keep their cash away from the prying hands of government officials find it increasingly difficult to use for both business and personal purposes, as wads of cash always arouse suspicion of drug dealing or other black market activity. And so cash continues to be marginalized and pushed to the fringes.

Despite the supposed connection between crime and the holding of physical cash, the places where people are most inclined (and able) to store cash do not conform to the stereotype at all:

Are Japan and Switzerland havens for terrorists and drug lords? High-denomination bills are in high demand in both places, a trend that some politicians claim is a sign of nefarious behavior. Yet the two countries boast some of the lowest crime rates in the world. The cash hoarders are ordinary citizens responding rationally to monetary policy. The Swiss National Bank introduced negative interest rates in December 2014. The aim was to drive money out of banks and into the economy, but that only works to the extent that savers find attractive places to spend or invest their money. With economic growth an anemic 1%, many Swiss withdrew cash from the bank and stashed it at home or in safe-deposit boxes. High-denomination notes are naturally preferred for this purpose, so circulation of 1,000-franc notes (worth about $1,010) rose 17% last year. They now account for 60% of all bills in circulation and are worth almost as much as Serbia’s GDP.

Japan, where banks pay infinitesimally low interest on deposits, is a similar story. Demand for the highest-denomination ¥10,000 notes rose 6.2% last year, the largest jump since 2002. But 10,000 Yen notes are worth only about $88, so hiding places fill up fast. That explains why Japanese went on a safe-buying spree last month after the Bank of Japan announced negative interest rates on some reserves. Stores reported that sales of safes rose as much as 250%, and shares of safe-maker Secom spiked 5.3% in one week.

In Germany too, negative interest rates are considered intolerable, banks are increasingly being seen as risky prospects, and physical cash under one’s own control is coming to be seen as an essential part of a forward-thinking financial strategy:

First it was the news that Raiffeisen Gmund am Tegernsee, a German cooperative savings bank in the Bavarian village of Gmund am Tegernsee, with a population of 5,767, finally gave in to the ECB’s monetary repression, and announced it’ll start charging retail customers to hold their cash. Then, just last week, Deutsche Bank’s CEO came about as close to shouting fire in a crowded negative rate theater, when, in a Handelsblatt Op-Ed, he warned of “fatal consequences” for savers in Germany and Europe — to be sure, being the CEO of the world’s most systemically risky bank did not help his cause.

That was the last straw, and having been patient long enough, the German public has started to move. According to the WSJ, German savers are leaving the “security of savings banks” for what many now consider an even safer place to park their cash: home safes. We wondered how many “fatal” warnings from the CEO of DB it would take, before this shift would finally take place. As it turns out, one was enough….

….“It doesn’t pay to keep money in the bank, and on top of that you’re being taxed on it,” said Uwe Wiese, an 82-year-old pensioner who recently bought a home safe to stash roughly €53,000 ($59,344), including part of his company pension that he took as a payout. Burg-Waechter KG, Germany’s biggest safe manufacturer, posted a 25% jump in sales of home safes in the first half of this year compared with the year earlier, said sales chief Dietmar Schake, citing “significantly higher demand for safes by private individuals, mainly in Germany.”….

….Unlike their more “hip” Scandinavian peers, roughly 80% of German retail transactions are in cash, almost double the 46% rate of cash use in the U.S., according to a 2014 Bundesbank survey….Germany’s love of cash is driven largely by its anonymity. One legacy of the Nazis and East Germany’s Stasi secret police is a fear of government snooping, and many Germans are spooked by proposals of banning cash transactions that exceed €5,000. Many Germans think the ECB’s plan to phase out the €500 bill is only the beginning of getting rid of cash altogether. And they are absolutely right; we can only wish more Americans showed the same foresight as the ordinary German….

….Until that moment, however, as a final reminder, in a fractional reserve banking system, only the first ten or so percent of those who “run” to the bank to obtain possession of their physical cash and park it in the safe will succeed. Everyone else, our condolences.

The internal stresses are building rapidly, stretching economy after economy to breaking point and prompting aware individuals to protect themselves proactively:

People react to these uncertainties by trying to protect themselves with cash and guns, and governments respond by trying to limit citizens’ ability to do so.

If this play has a third act, it will involve the abolition of cash in some major countries, the rise of various kinds of black markets (silver coins, private-label cash, cryptocurrencies like bitcoin) that bypass traditional banking systems, and a surge in civil unrest, as all those guns are put to use. The speed with which cash, safes and guns are being accumulated — and the simultaneous intensification of the war on cash — imply that the stress is building rapidly, and that the third act may be coming soon.

Despite growing acceptance of electronic payment systems, getting rid of cash altogether is likely to be very challenging, particularly as the fear and state of financial crisis that drives people into cash hoarding is very close to reasserting itself. Cash has a very long history, and enjoys greater trust than other abstract means for denominating value. It is likely to prove tenacious, and unable to be eliminated peacefully. That is not to suggest central authorities will not try. At the heart of financial crisis lies the problem of excess claims to underlying real wealth. The bursting of the global bubble will eliminate the vast majority of these, as the value of credit instruments, hitherto considered to be as good as money, will plummet on the realisation that nowhere near all financial promises made can possibly be kept.

Cash would then represent a very much larger percentage of the remaining claims to limited actual resources — perhaps still in excess of the available resources and therefore subject to haircuts. Not only the quantity of outstanding cash, but also its distribution, may not be to central authorities’ liking. There are analogous precedents for altering legal currency in order to dispossess ordinary people trying to protect their stores of value, depriving them of the benefit of their foresight. During the Russian financial crisis of 1998, cash was not eliminated in favour of an electronic alternative, but the currency was reissued, which had a similar effect. People were required to convert their life savings (often held ‘under the mattress’) from the old currency to the new. This was then made very difficult, if not impossible, for ordinary people, and many lost the entirety of their life savings as a result.

 

A Cashless Society?

 

The greater the public’s desire to hold cash to protect themselves, the greater will be the incentive for central banks and governments to restrict its availability, reduce its value or perhaps eliminate it altogether in favour of electronic-only payment systems. In addition to commercial banks already complicating the process of making withdrawals, central banks are actively considering, as a first step, mechanisms to impose negative interest rates on physical cash, so as to make the escape route appear less attractive:

Last September, the Bank of England’s chief economist, Andy Haldane, openly pondered ways of imposing negative interest rates on cash — ie shrinking its value automatically. You could invalidate random banknotes, using their serial numbers. There are £63bn worth of notes in circulation in the UK: if you wanted to lop 1% off that, you could simply cancel half of all fivers without warning. A second solution would be to establish an exchange rate between paper money and the digital money in our bank accounts. A fiver deposited at the bank might buy you a £4.95 credit in your account.
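Haldane’s second option amounts to a penalty exchange rate between paper money and deposit money. A minimal sketch of the mechanism (the 1% penalty comes from the quote; the function is mine, for illustration only):

```python
# An exchange rate between paper money and bank-account money:
# depositing physical cash costs a conversion penalty.

def deposit_credit(cash: float, paper_to_deposit_rate: float) -> float:
    """Account credit received for depositing physical cash."""
    return cash * paper_to_deposit_rate

rate = 0.99  # GBP 1.00 of paper buys GBP 0.99 of deposit money (1% penalty)
print(f"GBP {deposit_credit(5.00, rate):.2f}")  # a GBP 5 note credits GBP 4.95
```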


To put it mildly, invalidating random banknotes would be highly likely to result in significant social blowback, and to accelerate the evaporation of trust in governing authorities of all kinds. It would be far more likely for financial authorities to move toward making official electronic money the standard by which all else is measured. People are already used to using electronic money in the form of credit and debit cards and mobile phone money transfers:

I can remember the moment I realised the era of cash could soon be over. It was Australia Day on Bondi Beach in 2014. In a busy liquor store, a man wearing only swimming shorts, carrying only a mobile phone and a plastic card, was delaying other people’s transactions while he moved 50 Australian dollars into his current account on his phone so that he could buy beer. The 30-odd youngsters in the queue behind him barely murmured; they’d all been in the same predicament. I doubt there was a banknote or coin between them….The possibility of a cashless society has come at us with a rush: contactless payment is so new that the little ping the machine makes can still feel magical. But in some shops, especially those that cater for the young, a customer reaching for a banknote already produces an automatic frown. Among central bankers, that frown has become a scowl.

In some states almost anything, no matter how small, can be purchased electronically. Everything down to, and including, a cup of coffee from a roadside stall can be purchased in New Zealand with an EFTPOS (debit) card, hence relatively few people carry cash. In Scandinavian countries, there are typically more electronic payment options than cash options:

Sweden became the first country to enlist its own citizens as largely willing guinea pigs in a dystopian economic experiment: negative interest rates in a cashless society. As Credit Suisse reports, no matter where you go or what you want to purchase, you will find a small ubiquitous sign saying “Vi hanterar ej kontanter” (“We don’t accept cash”)….A similar situation is unfolding in Denmark, where nearly 40% of the paying demographic use MobilePay, a Danske Bank app that allows all payments to be completed via smartphone.

Even street vendors selling “Situation Stockholm”, the local version of the UK’s “Big Issue”, are able to take payments by debit or credit card.


Ironically, cashlessness is also becoming entrenched in some African countries. One might think that electronic payments would not be possible in poor and unstable subsistence societies, but mobile phones are actually very common in such places, and means for electronic payments are rapidly becoming the norm:

While Sweden and Denmark may be the two nations that are closest to banning cash outright, the most important testing ground for cashless economics is half a world away, in sub-Saharan Africa. In many African countries, going cashless is not merely a matter of basic convenience (as it is in Scandinavia); it is a matter of basic survival. Less than 30% of the population have bank accounts, and even fewer have credit cards. But almost everyone has a mobile phone. Now, thanks to the massive surge in uptake of mobile communications as well as the huge numbers of unbanked citizens, Africa has become the perfect place for the world’s biggest social experiment with cashless living.

Western NGOs and GOs (Government Organizations) are working hand-in-hand with banks, telecom companies and local authorities to replace cash with mobile money alternatives. The organizations involved include Citi Group, Mastercard, VISA, Vodafone, USAID, and the Bill and Melinda Gates Foundation.

In Kenya the funds transferred by the biggest mobile money operator, M-Pesa (a division of Vodafone), account for more than 25% of the country’s GDP. In Africa’s most populous nation, Nigeria, the government launched a Mastercard-branded biometric national ID card, which also doubles up as a payment card. The “service” provides Mastercard with direct access to over 170 million potential customers, not to mention all their personal and biometric data. The company also recently won a government contract to design the Huduma Card, which will be used for paying State services. For Mastercard these partnerships with government are essential for achieving its lofty vision of creating a “world beyond cash.”

Countries where electronic payment is already the norm would be expected to be among the first places to experiment with a fully cashless society as the transition would be relatively painless (at least initially). In Norway two major banks no longer issue cash from branch offices, and recently the largest bank, DNB, publicly called for the abolition of cash. In rich countries, the advent of a cashless society could be spun in the media in such a way as to appear progressive, innovative, convenient and advantageous to ordinary people. In poor countries, people would have no choice in any case.

Testing and developing the methods in societies with no alternatives and then tantalizing the inhabitants of richer countries with more of the convenience to which they have become addicted is the clear path towards extending the reach of electronic payment systems and the much greater financial control over individuals that they offer:

Bill and Melinda Gates Foundation, in its 2015 annual letter, adds a new twist. The technologies are all in place; it’s just a question of getting us to use them so we can all benefit from a crimeless, privacy-free world. What better place to conduct a massive social experiment than sub-Saharan Africa, where NGOs and GOs (Government Organizations) are working hand-in-hand with banks and telecom companies to replace cash with mobile money alternatives? So the annual letter explains: “(B)ecause there is strong demand for banking among the poor, and because the poor can in fact be a profitable customer base, entrepreneurs in developing countries are doing exciting work – some of which will “trickle up” to developed countries over time.”

What the Foundation doesn’t mention is that it is heavily invested in many of Africa’s mobile-money initiatives and in 2010 teamed up with the World Bank to “improve financial data collection” among Africa’s poor. One also wonders whether Microsoft might one day benefit from the Foundation’s front-line role in mobile money….As a result of technological advances and generational priorities, cash’s days may well be numbered. But there is a whole world of difference between a natural death and euthanasia. It is now clear that an extremely powerful, albeit loose, alliance of governments, banks, central banks, start-ups, large corporations, and NGOs are determined to pull the plug on cash — not for our benefit, but for theirs.

Whatever the superficially attractive media spin, joint initiatives like the Better Than Cash Alliance serve their founders, not the public. This should not come as a surprise, but it probably will as we sleepwalk into giving up very important freedoms:

As I warned in We Are Sleepwalking Towards a Cashless Society, we (or at least the vast majority of people in the vast majority of countries) are willing to entrust government and financial institutions — organizations that have already betrayed just about every possible notion of trust — with complete control over our every single daily transaction. And all for the sake of a few minor gains in convenience. The price we pay will be what remains of our individual freedom and privacy.

PART 3.

Promoters, Mechanisms and Risks in the War on Cash

 

Nicole Foss: Bitcoin and other electronic platforms have paved the way psychologically for a shift away from cash, although they have done so by emphasising decentralisation and anonymity rather than the much greater central control which would be inherent in a mainstream electronic currency. The loss of privacy would no doubt be glossed over in any media campaign, as would the risks of cyber-attack and the lack of a fallback for providing liquidity to the economy in the event of a systems crash. Electronic currency is much favoured by techno-optimists, but not so much by those concerned about the risks of absolute structural dependency on technological complexity. The argument regarding greatly reduced socioeconomic resilience is particularly noteworthy, given the vulnerability and potential fragility of electronic systems.

There is an important distinction to be made between official electronic currency — allowing everyone to hold an account with the central bank — and private electronic currency. It would be official currency which would provide the central control sought by governments and central banks, but if individuals saw central bank accounts as less risky than commercial institutions, which seems highly likely, the extent of the potential funds transfer could crash the existing banking system, causing a bank run in much the same way that large-scale cash withdrawals would. As the power of money creation is of the highest significance, and that power is currently in private hands, any attempt to threaten that power would almost certainly be met with considerable resistance from powerful parties. Private digital currency would be more compatible with the existing framework, but would not confer all of the control that governments would prefer:

People would convert a very large share of their current bank deposits into official digital money, in effect taking them out of the private banking system. Why might this be a problem? If it’s an acute rush for safety in a crisis, the risk is that private banks may not have enough reserves to honour all the withdrawals. But that is exactly the same risk as with physical cash: it’s often forgotten that it’s central bank reserves, not the much larger quantity of deposits, that banks can convert into cash with the central bank. Both with cash and official e-cash, the way to meet a more severe bank run is for the bank to borrow more reserves from the central bank, posting its various assets as security. In effect, this would mean the central bank taking over the funding of the broader economy in a panic — but that’s just what central banks should do.

A more chronic challenge is that people may prefer the safety of central bank accounts even in normal times. That would destroy private banks’ current deposit-funded model. Is that a bad thing? They would still have a role as direct intermediators between savers and borrowers, by offering investment products sufficiently attractive for people to get out of the safety of e-cash. Meanwhile, the broad money supply would be more directly under the control of the central bank, whereas now it’s a product of the vagaries of private lending decisions. The more of the broad money supply that was in the form of official digital cash, the easier it would be, for example, for the central bank to use tools such as negative interest rates or helicopter drops.

As an indication that the interests of the private banking system and public central authorities are not always aligned, consider the actions of the Bavarian Banking Association in attempting to avoid the imposition of negative interest rates on reserves held with the ECB:

German newspaper Der Spiegel reported yesterday that the Bavarian Banking Association has recommended that its member banks start stockpiling PHYSICAL CASH. The Bavarian Banking Association has had enough of this financial dictatorship. Their new recommendation is for all member banks to ditch the ECB and instead start keeping their excess reserves in physical cash, stored in their own bank vaults. This is officially an all-out revolution of the financial system where banks are now actively rebelling against the central bank. (What’s even more amazing is that this concept of traditional banking — holding physical cash in a bank vault — is now considered revolutionary and radical.)

There’s just one teensy tiny problem: there simply is not enough physical cash in the entire financial system to support even a tiny fraction of the demand. Total bank deposits exceed trillions of euros. Physical cash constitutes just a small percentage of that sum. So if German banks do start hoarding physical currency, there won’t be any left in the financial system. This will force the ECB to choose between two options:

  1. Support this rebellion and authorize the issuance of more physical cash; or
  2. Impose capital controls.

Given that just two weeks ago the President of the ECB spoke about the possibility of banning some higher denomination cash notes, it’s not hard to figure out what’s going to happen next.
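The mismatch described in the quote is easy to quantify. A sketch with stylised euro-area magnitudes (the text says only “trillions” and “a small percentage”, so the exact figures below are assumptions for illustration):

```python
# How far physical cash would stretch against deposit withdrawals.

deposits = 11e12    # assumed euro-area bank deposits, ~EUR 11tn (illustrative)
banknotes = 1.1e12  # assumed euro banknotes in circulation, ~EUR 1.1tn

print(f"Cash covers about {banknotes / deposits:.0%} of deposits")
# -> ~10%: consistent with the earlier claim that only the first ten or so
#    percent of depositors who 'run' for physical cash could be paid out.
```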

Advantages of official electronic currency to governments and central banks are clear. All transactions are transparent, and all can be subject to fees and taxes. Central control over the money supply would be greatly increased and tax evasion would be difficult to impossible, at least for ordinary people. Capital controls would be built right into the system, and personal spending information would be conveniently gathered for inspection by central authorities (for cross-correlation with other personal data they possess). The first step would likely be to set up a dual system, with both cash and electronic money in parallel use, but with electronic money as the defined unit of value and cash subject to a marginally disadvantageous exchange rate.

The exchange rate devaluing cash in relation to electronic money could increase over time, in order to incentivize people to switch away from seeing physical cash as a store of value, and to increase their preference for goods over cash. In addition to providing an active incentive, the use of cash would probably be publicly disparaged as well as actively discouraged in many ways. For instance, key functions such as tax payments could be designated as payable by electronic remittance only. The point would be to force everyone into the system by depriving them of the choice to opt out. Once all were captured, many forms of central control would be possible, including substantial account haircuts if central authorities deemed them necessary.


The main promoters of cash elimination in favour of electronic currency are Willem Buiter, Kenneth Rogoff, and Miles Kimball.

Economist Willem Buiter has been pushing for the relegation of cash, at least the removal of its status as official unit of account, since the financial crisis of 2008. He suggests a number of mechanisms for achieving the transition to electronic money, emphasising the need for the electronic currency to become the definitive unit of account in order to implement substantially negative interest rates:

The first method does away with currency completely. This has the additional benefit of inconveniencing the main users of currency — operators in the grey, black and outright criminal economies. Adequate substitutes for the legitimate uses of currency, on which positive or negative interest could be paid, are available. The second approach, proposed by Gesell, is to tax currency by making it subject to an expiration date. Currency would have to be “stamped” periodically by the Fed to keep it current. When done so, interest (positive or negative) is received or paid.

The third method ends the fixed exchange rate (set at one) between dollar deposits with the Fed (reserves) and dollar bills. There could be a currency reform first. All existing dollar bills and coin would be converted by a certain date and at a fixed exchange rate into a new currency called, say, the rallod. Reserves at the Fed would continue to be denominated in dollars. As long as the Federal Funds target rate is positive or zero, the Fed would maintain the fixed exchange rate between the dollar and the rallod.

When the Fed wants to set the Federal Funds target rate at minus five per cent, say, it would set the forward exchange rate between the dollar and the rallod, the number of dollars that have to be paid today to receive one rallod tomorrow, at five per cent below the spot exchange rate — the number of dollars paid today for one rallod delivered today. That way, the rate of return, expressed in a common unit, on dollar reserves is the same as on rallod currency.

For the dollar interest rate to remain the relevant one, the dollar has to remain the unit of account for setting prices and wages. This can be encouraged by the government continuing to denominate all of its contracts in dollars, including the invoicing and payment of taxes and benefits. Imposing the legal restriction that checkable deposits and other private means of payment cannot be denominated in rallod would help.
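Buiter’s third mechanism works by setting the dollar/rallod forward rate so that paper dollars earn exactly the target negative rate when expressed in the electronic unit of account. A sketch of the arithmetic for the minus five per cent example in the quote:

```python
# Buiter's rallod scheme: paper dollars depreciate against the rallod
# (the electronic unit of account) at the target negative rate.

def forward_rate(spot: float, target_rate: float) -> float:
    """Dollars paid today to receive one rallod tomorrow."""
    return spot * (1 + target_rate)  # set below spot by the negative rate

spot = 1.00     # dollars per rallod today
target = -0.05  # -5% Federal Funds target
print(f"{forward_rate(spot, target):.2f} dollars per rallod forward")  # 0.95
# Paper loses 5% against the unit of account, matching the -5% on reserves.
```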

In justifying his proposals, he emphasises the importance of combatting criminal activity…

The only domestic beneficiaries from the existence of anonymity-providing currency are the criminal fraternity: those engaged in tax evasion and money laundering, and those wishing to store the proceeds from crime and the means to commit further crimes. Large denomination bank notes are an especially scandalous subsidy to criminal activity and to the grey and black economies.

… over the acknowledged risks of government intrusion in legitimately private affairs:

My good friend and colleague Charles Goodhart responded to an earlier proposal of mine that currency (negotiable bearer bonds with legal tender status) be abolished that this proposal was “appallingly illiberal”. I concur with him that anonymity/invisibility of the citizen vis-a-vis the state is often desirable, given the irrepressible tendency of the state to infringe on our fundamental rights and liberties and given the state’s ever-expanding capacity to do so (I am waiting for the US or UK government to contract Google to link all personal health information to all tax information, information on cross-border travel, social security information, census information, police records, credit records, and information on personal phone calls, internet use and internet shopping habits).

In his seminal 2014 paper “Costs and Benefits to Phasing Out Paper Currency”, Kenneth Rogoff also argues strongly for the primacy of electronic currency and the elimination of physical cash as an escape route:

Paper currency has two very distinct properties that should draw our attention. First, it is precisely the existence of paper currency that makes it difficult for central banks to take policy interest rates much below zero, a limitation that seems to have become increasingly relevant during this century. As Blanchard et al. (2010) point out, today’s environment of low and stable inflation rates has drastically pushed down the general level of interest rates. The low overall level, combined with the zero bound, means that central banks cannot cut interest rates nearly as much as they might like in response to large deflationary shocks.

If all central bank liabilities were electronic, paying a negative interest on reserves (basically charging a fee) would be trivial. But as long as central banks stand ready to convert electronic deposits to zero-interest paper currency in unlimited amounts, it suddenly becomes very hard to push interest rates below levels of, say, -0.25 to -0.50 percent, certainly not on a sustained basis. Hoarding cash may be inconvenient and risky, but if rates become too negative, it becomes worth it.

However, he too notes associated risks:

Another argument for maintaining paper currency is that it pays to have a diversity of technologies and not to become overly dependent on an electronic grid that may one day turn out to be very vulnerable. Paper currency diversifies the transactions system and hardens it against cyber attack, EMP blasts, etc. This argument, however, seems increasingly less relevant because economies are so totally exposed to these problems anyway. With paper currency being so marginalized already in the legal economy in many countries, it is hard to see how it could be brought back quickly, particularly if ATM machines were compromised at the same time as other electronic systems.

A different type of argument against eliminating currency relates to civil liberties. In a world where society’s mores and customs evolve, it is important to tolerate experimentation at the fringes. This is potentially a very important argument, though the problem might be mitigated if controls are placed on the government’s use of information (as is done say with tax information), and the problem might also be ameliorated if small bills continue to circulate. Last but not least, if any country attempts to unilaterally reduce the use of its currency, there is a risk that another country’s currency would be used within domestic borders.

Miles Kimball’s proposals are very much in tune with Buiter and Rogoff:

There are two key parts to Miles Kimball’s solution. The first part is to make electronic money or deposits the sole unit of account. Everything else would be priced in terms of electronic dollars, including paper dollars. The second part is that the fixed exchange rate that now exists between deposits and paper dollars would become variable. This crawling peg between deposits and paper currency would be based on the state of the economy. When the economy was in a slump and the central bank needed to set negative interest rates to restore full employment, the peg would adjust so that paper currency would lose value relative to electronic money. This would prevent folks from rushing to paper currency as interest rates turned negative. Once the economy started improving, the crawling peg would start adjusting toward parity.
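
For concreteness, here is a minimal sketch of how such a crawling peg might be updated period by period. This is an illustration only: the multiplicative update rule, the policy-rate path, and the 2% crawl-back speed are all hypothetical assumptions, not figures from Kimball's proposal.

# A minimal sketch of the crawling-peg idea (hypothetical parameters).
# Electronic dollars are the unit of account; the paper-dollar exchange
# rate depreciates while the policy rate is negative, then crawls back
# toward parity as the economy recovers.

def update_peg(peg: float, policy_rate: float, crawl_back: float = 0.02) -> float:
    """Return next period's value of a paper dollar in electronic dollars."""
    if policy_rate < 0:
        # Paper currency loses value at the (negative) policy rate,
        # so hoarding paper offers no escape from negative rates.
        return peg * (1 + policy_rate)
    # In normal times, the peg crawls back toward 1:1 parity.
    return min(1.0, peg * (1 + crawl_back))

peg = 1.0  # start at parity: 1 paper dollar = 1 electronic dollar
for year, rate in enumerate([-0.03, -0.03, 0.01, 0.02, 0.02], start=1):
    peg = update_peg(peg, rate)
    print(f"year {year}: policy rate {rate:+.2%}, paper dollar worth {peg:.4f} e-dollars")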

This approach views the economy in very mechanistic terms, as if it were a machine where pulling a lever would have a predictable linear effect — make holding savings less attractive and automatically consumption will increase. This is actually a highly simplistic view, resting on the notions of stabilising negative feedback and bringing an economy ‘back into equilibrium’. If it were so simple to control an economy centrally, there would never have been deflationary spirals or economic depressions in the past.

Assuming away the more complex aspects of human behaviour — a flight to safety, the compulsion to save for a rainy day when conditions are unstable, or the natural response to a negative ‘wealth effect’ — leads to a model divorced from reality. Taxing savings does not necessarily lead to increased consumption; in fact, it is far more likely to have the opposite effect:

But under Miles Kimball’s proposal, the Fed would lower interest rates to below zero by taxing away balances of e-currency. This is a reduction in monetary base, just like the case of IOR, and by itself would be contractionary, not expansionary. The expansionary effects of Kimball’s policy depend on the assumption that households will increase consumption in response to the taxing of their cash savings, rather than letting their savings depreciate.

That needn’t be the case — it depends on the relative magnitudes of income and substitution effects for real money balances. The substitution effect is what Kimball has in mind — raising the price of real money balances will induce substitution out of money and into consumption. But there’s also an income effect, whereby the loss of wealth induces less consumption and more savings. Thus, negative interest rate policy can be contractionary even though positive interest rate policy is expansionary.

Indeed, what Kimball has proposed amounts to a reverse Bernanke Helicopter — imagine a giant vacuum flying around the country sucking money out of people’s pockets. Why would we assume that this would be inflationary?

Given that the effect on the money supply would be contractionary, the supposed stimulus effect on the velocity of money (as, in theory, savings turn into consumption in order to avoid the negative interest rate penalty) would have to be large enough to outweigh the contracting money supply. In some ways, modern proponents of electronic money bearing negative interest rates are attempting to copy Silvio Gesell’s early 20th century work. Gesell proposed the use of stamp scrip — money that had to be regularly stamped, at a small cost, in order to remain current. The effect would be for money to lose value over time, so that hoarding currency would make little sense. Consumption would, in theory, be favoured, so money would be kept in circulation.
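
To put a number on Gesell's idea: assuming a stamp costing 1% of face value per month (the rate commonly cited for the Wörgl experiment discussed below, used here purely for illustration), hoarding a note for a year costs roughly 11% of its purchasing power, which is the intended incentive to spend rather than hoard.

# A rough illustration of Gesell-style stamp scrip, assuming a stamp
# of 1% of face value per month (the widely reported Woergl rate;
# treat it as an assumption here).

monthly_stamp = 0.01          # stamp fee as a fraction of face value
value = 1.0                   # purchasing power retained by a hoarded note
for month in range(12):
    value *= (1 - monthly_stamp)
print(f"value retained after 12 months: {value:.3f}")   # ~0.886
print(f"annual cost of hoarding: {1 - value:.1%}")      # ~11.4%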

This idea was implemented to great effect in the Austrian town of Wörgl during the Great Depression, where the velocity of money increased sufficiently to allow a hive of economic activity to develop (temporarily) in the previously depressed town. Despite the similarities between current proposals and Gesell’s model applied in Wörgl, there are fundamental differences:

There is a critical difference, however, between the Wörgl currency and the modern-day central bankers’ negative interest scheme. The Wörgl government first issued its new “free money,” getting it into the local economy and increasing purchasing power, before taxing a portion of it back. And the proceeds of the stamp tax went to the city, to be used for the benefit of the taxpayers….Today’s central bankers are proposing to tax existing money, diminishing spending power without first building it up. And the interest will go to private bankers, not to the local government.

The Wörgl experiment was a profoundly local initiative, instigated at the local government level by the mayor. In contrast, modern proposals for negative interest rates would operate at a much larger scale and would be imposed on the population in accordance with the interests of those at the top of the financial foodchain. Instead of being introduced for the direct benefit of those who pay, as stamp scrip was in Wörgl, it would tax the people in the economic periphery for the continued benefit of the financial centre. As such it would amount to just another attempt to perpetuate the current system, and to do so at a scale far beyond the trust horizon.

As the trust horizon contracts in times of economic crisis, effective organizational scale will also contract, leaving large organizations (both public and private) as stranded assets from a trust perspective, and therefore lacking in political legitimacy. Large scale, top down solutions will be very difficult to implement. It is not unusual for the actions of central authorities to have the opposite of the desired effect under such circumstances:

Consumers today already have very little discretionary money. Imposing negative interest without first adding new money into the economy means they will have even less money to spend. This would be more likely to prompt them to save their scarce funds than to go on a shopping spree. People are not keeping their money in the bank today for the interest (which is already nearly non-existent). It is for the convenience of writing checks, issuing bank cards, and storing their money in a “safe” place. They would no doubt be willing to pay a modest negative interest for that convenience; but if the fee got too high, they might pull their money out and save it elsewhere. The fee itself, however, would not drive them to buy things they did not otherwise need.

People would be very likely to respond to negative interest rates by self-organising alternative means of exchange, rather than bowing to the imposition of negative rates. Bitcoin and other crypto-currencies would be one possibility, as would using foreign currency, using trading goods as units of value, or developing local alternative currencies along the lines of the Wörgl model:

The use of sheep, bottled water, and cigarettes as media of exchange in Iraqi rural villages after the US invasion and collapse of the dinar is one recent example. Another example was Argentina after the collapse of the peso, when grain contracts priced in dollars were regularly exchanged for big-ticket items like automobiles, trucks, and farm equipment. In fact, Argentine farmers began hoarding grain in silos to substitute for holding cash balances in the form of depreciating pesos.

For the electronic money model grounded in negative interest rates to work, all these alternatives would have to be made illegal, or at least hampered to the point of uselessness, so people would have no other legal choice but to participate in the electronic system. Rogoff seems very keen to see this happen:

Won’t the private sector continually find new ways to make anonymous transfers that sidestep government restrictions? Certainly. But as long as the government keeps playing Whac-A-Mole and prevents these alternative vehicles from being easily used at retail stores or banks, they won’t be able to fill the role that cash plays today. Forcing criminals and tax evaders to turn to riskier and more costly alternatives to cash will make their lives harder and their enterprises less profitable.

It is very likely that in times of crisis, people would do what they have to do regardless of legal niceties. While it may be possible to close off some alternative options with legal sanctions, it is unlikely that all could be prevented, or even enough to avoid the electronic system being fatally undermined.

The other major obstacle would be overcoming the preference for cash over goods in times of crisis:

Understanding how negative rates may or may not help economic growth is much more complex than most central bankers and investors probably appreciate. Ultimately the confusion resides around differences in view on the theory of money. In a classical world, money supply multiplied by a constant velocity of circulation equates to nominal growth.

In a Keynesian world, velocity is not necessarily constant — specifically for Keynes, there is a money demand function (liquidity preference) and therefore a theory of interest that allows for a liquidity trap whereby increasing money supply does not lead to higher nominal growth as the increase in money is hoarded. The interest rate (or inverse of the price of bonds) becomes sticky because at low rates, for infinitesimal expectations of any further rise in bond prices and a further fall in interest rates, demand for money tends to infinity.

In Gesell’s world money supply itself becomes inversely correlated with velocity of circulation due to money characteristics being superior to goods (or commodities). There are costs to storage that money does not have and so interest on money capital sets a bar to interest on real capital that produces goods. This is similar to Keynes’ concept of the marginal efficiency of capital schedule being separate from the interest rate. For Gesell the product of money and velocity is effective demand (nominal growth) but because of money capital’s superiority to real capital, if money supply expands it comes at the expense of velocity.

The new money supply is hoarded because as interest rates fall, expected returns on capital also fall through oversupply — for economic agents goods remain unattractive to money. The demand for money thus rises as velocity slows. This is simply a deflation spiral, consumers delaying purchases of goods, hoarding money, expecting further falls in goods prices before they are willing to part with their money….In a Keynesian world of deficient demand, the burden is on fiscal policy to restore demand. Monetary policy simply won’t work if there is a liquidity trap and demand for cash is infinite.
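
The identity underneath this whole discussion is the classical equation of exchange: money supply times velocity equals nominal spending. A toy calculation (the numbers are purely illustrative) shows how hoarding can make an expanding money supply contractionary in exactly the sense Gesell describes.

# A toy check of the quantity-theory identity M*V = nominal spending:
# if new money is hoarded, velocity falls, and nominal demand can
# shrink even as the money supply expands. Numbers are illustrative.

M0, V0 = 100.0, 2.0           # initial money supply and velocity
print("initial nominal demand:", M0 * V0)          # 200.0

M1 = M0 * 1.10                # money supply expanded 10%...
V1 = V0 * 0.80                # ...but hoarding cuts velocity 20%
print("after expansion + hoarding:", M1 * V1)      # 176.0 -- a contraction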

During the era of globalisation (since the financial liberalisation of the early 1980s), extractive capitalism in debt-driven over-drive has created perverse incentives to continually increase supply. Financial bubbles, grounded in the rediscovery of excess leverage, always act to create an artificial demand stimulus, which is met by artificially inflated supply during the boom phase. The value of the debt created collapses as boom turns into bust, crashing the money supply, and with it asset price support. Not only does the artificial stimulus disappear, but a demand undershoot develops, leaving all that supply without a market. Over the full cycle of a bubble and its aftermath, credit is demand neutral, but within the bubble it is anything but neutral. Forward shifting the demand curve provides for an orgy of present consumption and asset price increases, which is inevitably followed by the opposite.

Kimball stresses bringing demand forward as a positive aspect of his model:

In an economic situation like the one we are now in, we would like to encourage a company thinking about building a factory in a couple of years to build that factory now instead. If someone would lend to them at an interest rate of -3.33% per year, the company could borrow $1 million to build the factory now, and pay back something like $900,000 on the loan three years later. (Despite the negative interest rate, compounding makes the amount to be paid back a bit bigger, but not by much.)

That would be a good enough deal that the company might move up its schedule for building the factory. But everything runs aground on the fact that any potential lender, just by putting $1 million worth of green pieces of paper in a vault could get back $1 million three years later, which is a lot better than getting back a little over $900,000 three years later.
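
Kimball's arithmetic can be checked directly. A quick sketch comparing simple and annually compounded repayment at his -3.33% figure over the three-year term from the passage above:

# Checking the arithmetic: $1 million borrowed for three years at
# -3.33%/year. Simple interest gives $900,100 owed; annual compounding
# gives slightly more (~$903,390), matching "a bit bigger, but not by
# much".

principal, rate, years = 1_000_000, -0.0333, 3
simple = principal * (1 + rate * years)
compound = principal * (1 + rate) ** years
print(f"simple interest repayment: ${simple:,.0f}")    # $900,100
print(f"compounded repayment:      ${compound:,.0f}")  # ~$903,390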

This is, however, a short-sighted assessment. Stimulating demand today means a demand undershoot tomorrow. Kimball names long-term price stability as a primary goal, but this seems unlikely. Large scale central planning has a poor track record for success, to put it mildly. It requires the central authority in question to have access to all necessary information in realtime, and to have the ability to respond to that information both wisely and rapidly, or even proactively. It also assumes the ability to accurately filter out misinformation and disinformation. This is unlikely even in good times, thanks to the difficulties of ‘organizational stupidity’ at large scale, and even more improbable in times of crisis.

PART 4

Financial Totalitarianism in Historical Context

Nicole Foss: In attempting to keep the credit bonanza going with their existing powers, central banks have set the global financial system up for an across-the-board asset price collapse:

QE takes the liquidity preference choice out of the hands of the consumers, and puts it into the hands of central bankers, who through asset purchases push up asset prices even if it does so by explicitly devaluing the currency of price measurement; it also means that the failure of NIRP is — by definition — a failure of central banking, and if and when the central bank backstop of any (make that all) asset class (i.e., Q.E.) is pulled away, that asset (make that all) will crash.

It is not just central banking, but also globalisation, which is demonstrably failing. Cross-border freedoms will probably be an early casualty of the war on cash, and its demise will likely come as a shock to those used to a relatively borderless world:

We have been informed by reliable sources that in Germany, Maestro — a multinational debit card service owned by MasterCard, founded in 1992 — is seriously under attack. Maestro cards are obtained from associate banks and can be linked to the card holder’s current account, or they can be prepaid cards. Already we find such cards are being cancelled and new debit cards are being issued.

Why? The new cards cannot be used at an ATM outside of Germany to obtain cash. Any attempt to get cash can only be as an advance on a credit card….This is total insanity and we are losing absolutely everything that made society function. Once they eliminate CASH, they will have total control over who can buy or sell anything.

The same confused, greedy and corrupt central authorities which have set up the global economy for a major bust through their dysfunctional use of existing powers, are now seeking far greater central control, in what would amount to the ultimate triumph of finance over people. They are now moving to tax whatever people have left over after paying taxes. It has been tried before. As previous historical bubbles began to collapse, central authorities attempted to increase their intrusiveness and control over the population, in order to force the inevitable losses as far down the financial foodchain as possible. As far back as the Roman Empire, economically contractionary periods have been met with financial tyranny — increasing pressure on the populace until the system itself breaks:

Not even the death penalty was enough to enforce Diocletian’s price control edicts in the third century.

Rome squeezed the peasants in its empire so hard, that many eventually abandoned their land, reckoning that they were better off with the barbarians.

Such attempts at total financial control are exactly what one would expect at this point. A herd of financial middle men are used to being very well supported by the existing financial system, and as that system begins to break down, losing that raft of support is unacceptable. The people at the bottom of the financial foodchain must be watched and controlled in order to make sure they are paying to support the financial centre in the manner to which it has become accustomed, even as their ability to do so is continually undermined:

An oft-overlooked benefit of cash transactions is that there is no intermediary. One party pays the other party in mutually accepted currency and not a single middleman gets to wet his beak. In a cashless society there will be nothing stopping banks or other financial mediators from taking a small piece of every single transaction. They would also be able to use — and potentially abuse — the massive deposits of data they collect on their customers’ payment behavior. This information is of huge interest and value to retail marketing departments, other financial institutions, insurance companies, governments, secret services, and a host of other organizations….

….So in order to save a financial system that is morally beyond the pale and stopped serving the basic needs of the real economy a long time ago, governments and central banks must do away with the last remaining thing that gives people a small semblance of privacy, anonymity, and personal freedom in their increasingly controlled and surveyed lives. The biggest tragedy of all is that the governments and banks’ strongest ally in their War on Cash is the general public itself. As long as people continue to abandon the use of cash, for the sake of a few minor gains in convenience, the war on cash is already won.

Even if the ultimate failure of central control is predictable, momentum towards greater centralisation will carry forward for as long as possible, until the system can no longer function, at which point a chaotic free-for-all is likely to occur. In the meantime, the movement towards electronic money seeks to empower the surveillance state/corporatocracy enormously, providing it with the tools to observe and control virtually every aspect of people’s lives:

Governments and corporations, even that genius app developer in Russia, have one thing in common: they want to know everything. Data is power. And money. As the Snowden debacle has shown, they’re getting there. Technologies for gathering information, then hoarding it, mining it, and using it are becoming phenomenally effective and cheap. But it’s not perfect. Video surveillance with facial-recognition isn’t everywhere just yet. Not everyone is using a smartphone. Not everyone posts the details of life on Facebook. Some recalcitrant people still pay with cash. To the greatest consternation of governments and corporations, stuff still happens that isn’t captured and stored in digital format….

….But the killer technology isn’t the elimination of cash. It’s the combination of payment data and the information stream that cellphones, particularly smartphones, deliver. Now everything is tracked neatly by a single device that transmits that data on a constant basis to a number of companies, including that genius app developer in Russia — rather than having that information spread over various banks, credit card companies, etc. who don’t always eagerly surrender that data.

Eventually, it might even eliminate the need for data brokers. At that point, a single device knows practically everything. And from there, it’s one simple step to transfer part or all of this data to any government’s data base. Opinions are divided over whom to distrust more: governments or corporations. But one thing we know: mobile payments and the elimination of cash….will also make life a lot easier for governments and corporations in their quest for the perfect surveillance society.

Dissent is increasingly being criminalised, with legitimate dissenters commonly referred to, and treated as, domestic terrorists and potentially subjected to arbitrary asset confiscation:

An important reason why the state would like to see a cashless society is that it would make it easier to seize our wealth electronically. It would be a modern-day version of FDR’s confiscation of privately-held gold in the 1930s. The state will make more and more use of “threats of terrorism” to seize financial assets. It is already talking about expanding the definition of “terrorist threat” to include critics of government like myself.

The American state already confiscates financial assets under the protection of various guises such as the PATRIOT Act. I first realized this years ago when I paid for a new car with a personal check that bounced. The car dealer informed me that the IRS had, without my knowledge, taken 20 percent of the funds that I had transferred from a mutual fund to my bank account in order to buy the car. The IRS told me that it was doing this to deter terrorism, and that I could count it toward next year’s tax bill.

The elimination of cash in favour of official electronic money only would greatly accelerate and accentuate the ability of governments to punish those they dislike, indeed it would allow them to prevent dissenters from engaging in the most basic functions:

If all money becomes digital, it would be much easier for the government to manipulate our accounts. Indeed, numerous high-level NSA whistleblowers say that NSA spying is about crushing dissent and blackmailing opponents, not stopping terrorism. This may sound over-the-top, but remember, the government sometimes labels its critics as “terrorists”. If the government claims the power to indefinitely detain — or even assassinate — American citizens at the whim of the executive, don’t you think that government people would be willing to shut down, or withdraw a stiff “penalty” from, a dissenter’s bank account?

If society becomes cashless, dissenters can’t hide cash. All of their financial holdings would be vulnerable to an attack by the government. This would be the ultimate form of control. Because — without access to money — people couldn’t resist, couldn’t hide and couldn’t escape.

The trust that has over many years enabled the freedoms we enjoy is now disappearing rapidly, and the impact of its demise is already palpable. Citizens understandably do not trust governments and powerful corporations, which have increasingly clearly been acting in their own interests in consolidating control over claims to real resources in the hands of fewer and fewer individuals and institutions:

By far the biggest risk posed by digital alternatives to cash such as mobile money is the potential for massive concentration of financial power and the abuses and conflicts of interest that would almost certainly ensue. Naturally it goes without saying that most of the institutions that will rule the digital money space will be the very same institutions….that have already broken pretty much every rule in the financial service rule book.

They have manipulated virtually every market in existence; they have commodified and financialized pretty much every natural resource of value on this planet; and in the wake of the financial crisis they almost single-handedly caused, they have extorted billions of dollars from the pockets of their own customers and trillions from hard-up taxpayers. What about your respective government authorities? Do you trust them?…

….We are, it seems, descending into a world where new technologies threaten to put absolute power well within the grasp of a select group of individuals and organizations — individuals and organizations that have through their repeated actions betrayed just about every possible notion of mutual trust.

Governments do not trust their citizens (‘potential terrorists’) either, hence the perceived need to monitor and limit the scope of their decisions and actions. The powers-that-be know how angry people are going to be when they realise the scale of their impending dispossession, and are acting in such a way as to (try to) limit the power of the anger that will be focused against them. It is not going to work.

Without trust we are likely to see “throwbacks to the 14th century….at the dawn of banking coming out of the Dark Ages”. It is no coincidence that this period was also one of financial, socioeconomic and humanitarian crises, thanks to the bursting of a bubble two centuries in the making:

The 14th Century was a time of turmoil, diminished expectations, loss of confidence in institutions, and feelings of helplessness at forces beyond human control. Historian Barbara Tuchman entitled her book on this period A Distant Mirror because many of our modern problems had counterparts in the 14th Century.

Few think of the trials and tribulations of 14th century Europe as having their roots in financial collapse — they tend instead to remember famine and disease. However, the demise of what was then the world banking system was a leading indicator for what followed, as is always the case:

Six hundred and fifty years ago came the climax of the worst financial collapse in history to date. The 1930’s Great Depression was a mild and brief episode, compared to the bank crash of the 1340’s, which decimated the human population. The crash, which peaked in A.C.E. 1345 when the world’s biggest banks went under, “led” by the Bardi and Peruzzi companies of Florence, Italy, was more than a bank crash — it was a financial disintegration….a blowup of all major banks and markets in Europe, in which, chroniclers reported, “all credit vanished together,” most trade and exchange stopped, and a catastrophic drop of the world’s population by famine and disease loomed.

As we have written many times before at The Automatic Earth, bubbles are not a new phenomenon. They have inflated and subsequently imploded since the dawn of civilisation, and are in fact an emergent property of civilisational scale. There are therefore many parallels between different historical episodes of boom and bust:

The parallels between the medieval credit crunch and our current predicament are considerable. In both cases the money supply increased in response to the expansionist pressure of unbridled optimism. In both cases the expansion proceeded to the point where a substantial overhang of credit had been created — a quantity sufficient to generate systemic risk that was not recognized at the time. In the fourteenth century, that risk was realized, as it will be again in the 21st century.

What we are experiencing now is simply the same dynamic, but turbo-charged by the availability of energy and technology that have driven our long period of socioeconomic expansion and ever-increasing complexity. Just as in the 14th century, the cracks in the system have been visible for many years, but generally ignored. The coming credit implosion may appear to come from nowhere when it hits, but has long been foreshadowed if one knew what to look for. Watching more and more people seeking escape routes from a doomed financial system, and the powers-that-be fighting back by closing those escape routes, all within a social matrix of collapsing trust, one cannot deny that history is about to repeat itself yet again, only on a larger scale this time.

The final gasps of a bubble economy, such as our own, are about behind-the-scenes securing of access to and ownership of real assets for the elite, through bailouts and other forms of legalized theft. As Frédéric Bastiat explained in 1848,

“When plunder becomes a way of life for a group of men in a society, over the course of time they create for themselves a legal system that authorizes it and a moral code that glorifies it.”

The bust which follows the last attempt to kick the can further down the road will see the vast majority of society dispossessed of what they thought they owned, their ephemeral electronic claims to underlying real wealth extinguished.

The Way Forward

The advent of negative interest rates indicates that the endgame for the global economy is underway. In places at the peak of the bubble, negative rates drive further asset bubbles and create ever greater vulnerability to the inevitable interest rate spike and asset price collapse to come. In Japan, at the other end of the debt deflation cycle, negative rates force people into ever more cash hoarding. Neither one of these outcomes is going to lead to recovery. Both indicate economies at breaking point. We cannot assume that current financial, economic and social structures will continue in their present form, and we need to prepare for a period of acute upheaval.

Using cash wherever possible, rather than succumbing to the convenience of electronic payments, becomes an almost revolutionary act. So do other forms of radical decentralisation, which amount to opting out as much as possible from the path the powers-that-be would have us follow. It is likely to become increasingly difficult to defend our freedom and independence, but if enough people stand their ground, establishing full totalitarian control should not be possible.

To some extent, the way the war on cash plays out will depend on the timing of the coming financial implosion. The elimination of cash would take time, and only in some countries has there been enough progress away from cash that eliminating it would be at all realistic. If only a few countries tried to do so, people in those countries would be likely to use foreign currency that was still legal tender.

Cash elimination would really only work if it were very broadly applied in enough major economies, and if a financial accident could be postponed for a few more years. As neither of these conditions is likely to be fulfilled, a cash ban is unlikely to be viable. Governments and central banks would very much like to frighten people away from cash, but that only underlines its value under the current circumstances. Cash is king in a deflation. The powers-that-be know that, and would like the available cash to end up concentrated in their own hands rather than spread out to act as seed capital for a bottom-up recovery.

Holding on to cash under one’s own control is still going to be a very important option for maintaining freedom of action in an uncertain future. The alternative would be to turn to hard goods (land, tools etc) from the beginning, but where there is a great deal of temporal and spatial uncertainty, this amounts to making all one’s choices up front, and choices based on incomplete information could easily turn out to be wrong. Making such choices up front is also expensive, as prices are currently high. Of course having some hard goods is also advisable, particularly if they allow one to have some control over the essentials of one’s own existence.

It is the balance between hard goods and maintaining capital as liquidity (cash) that is important. Where that balance lies depends very much on individual circumstances, and on location. For instance, in the European Union, where currency reissue is a very real threat in a reasonably short time-frame, opting for goods rather than cash makes more sense, unless one holds foreign currency such as Swiss francs. If one must hold euros, it would probably be advisable to hold German ones (serial numbers begin with X).

US dollars are likely to hold their value for longer than most other currencies, given the dollar’s role as the global reserve currency. Reports of its demise are premature, to put it mildly. As financial crisis picks up momentum, a flight to safety into the reserve currency is likely to pick up speed, raising the value of the dollar against other currencies. In addition, demand for dollars will increase as debtors seek to pay down dollar-denominated debt. While all fiat currencies are ultimately vulnerable in the beggar-thy-neighbour currency wars to come, the US dollar should hold value for longer than most.

Holding cash on the sidelines while prices fall is a good strategy, so long as one does not wait too long.

The risks to holding and using cash are likely to grow over time, so it is best viewed as a short term strategy to ride out the deflationary period, where the value of credit instruments is collapsing. The purchasing power of cash will rise during this time, and previously unforeseen opportunities are likely to arise.

Ordinary people need to retain as much of their freedom of action as possible, in order for society to function through a period of economic seizure. In general, the best strategy is to hold cash until the point where the individual in question can afford to purchase the goods they require to provide for their own needs without taking on debt to do so. (Avoiding taking on debt is extremely important, as financially encumbered assets would be subject to repossession in the event of failure to meet debt obligations.)

One must bear in mind, however, that after price falls, some goods may cease to be available at any price, so some essentials may need to be purchased at today’s higher prices in order to guarantee supply.

Capital preservation is an individual responsibility, and during times of deflation, capital must be preserved as liquidity. We cannot expect either governments or private institutions to protect our interests, as both have been obviously undermining the interests of ordinary people in favour of their own for a very long time. Indeed they seem to feel secure enough of their own consolidated control that they do not even bother to try to hide the fact any longer. [My comment: for example, see September 9, 2016 story Wells Fargo Is in Trouble for Charging Customers Millions for Bogus Accounts]

It is our duty to inform ourselves and act to protect ourselves, our families and our communities. If we do not, no one else will.


Book review of “Too Hot to Touch: The Problem of High-Level Nuclear Waste”

A book review by Alice Friedemann of “Too Hot to Touch: The Problem of High-Level Nuclear Waste” by William M. Alley & Rosemarie Alley. 2013. Cambridge University Press.

[ It is outrageous that on top of climate change and using up fossil fuels for mostly stupid things in a mere 100 years, we’re leaving our descendants with depleted and polluted aquifers and water supplies, eroded topsoil that will take hundreds of years to replace, rising sea levels, and toxic nuclear waste that will last up to a million years, as well as many other industrial and agricultural toxins that corporations bailed out of paying for, shifting the cost to the public in superfunds (look here for the ones closest to you). Kind of like the bank bailout – now the public and government are on the hook after the next failure while financial executives can now afford to buy even more homes and private jets.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report ]

Introduction to Nuclear Waste Disposal

After Yucca Mountain was thrown out as a nuclear waste site in 2009 after 25 years and $10 billion in studies — to help Senate Majority Leader Harry Reid (D-NV) get re-elected in 2010 — there is nowhere to put nuclear waste.  Not much, if anything, is being done to find a new place, and there’s no chance an ideologically divided Congress would agree on a new site anyhow.

Meanwhile, 70,000 tons of spent nuclear reactor fuel and 20,000 giant canisters of defense-related high-level radioactive waste are sitting at 121 sites across 39 states, with another 70,000 tons on the way before nuclear power plants reach their end of life.  All of this waste will expose future generations to radiation for millions of years, and is vulnerable to terrorists, tsunamis, floods, rising sea levels, hurricanes, electric grid outages, earthquakes, tornadoes, and other disasters.

Spent fuel pools in America, at 104 nuclear power plants, store on average 10 times more radioactive fuel than was at Fukushima, with almost no safety features such as backup water-circulation systems and generators.

About 75% of spent fuel in America is being stored in pools, many of them so full they have four times the amount they were designed to hold.

The National Academy of Sciences published a report that stated terrorists could drain the water from spent fuel storage, causing the fuel rods to self-ignite and release large quantities of radioactivity, or they could steal nuclear waste to make a (dirty) bomb.

Not making a choice about where to store nuclear waste is a choice. We will expose future generations to millions of years of toxic radioactive wastes if we don’t clean them up now.

This book gives a complete history of nuclear waste, what to do with it, the many issues involved, and how we arrived at doing nothing. It offers outstanding explanations of difficult topics across many fields (i.e. nuclear science, geology, hydrology, etc.), as well as of the even more difficult political and human issues preventing us from disposing of nuclear wastes in a permanent geological repository.

The goal of anti-nuclear activists has been to prevent a nuclear waste site from happening so that no new nuclear power plants would be built. Many states, such as California, have laws against building new nuclear plants until a waste depository exists.

The thing is, activists never needed to fear new reactors, because the upfront costs are so high, the payback so delayed, and the uninsurable liabilities so large, that investors and utilities haven’t wanted to build nuclear power plants for decades.  Also, uranium reserves are so low there’s only enough left to power existing nuclear plants for a few more decades (Tverberg), and perhaps less than that once the energy crisis hits and the energy to mine and crush millions of tons of ore is used for other purposes.

The only way new plants would ever get built is for the government to build them.  Not going to happen.  America has trillions in debt, hundreds of trillions of unfunded liabilities in the future (i.e. Medicare and other programs), the overall economic system is $600 trillion in debt, and the entire economic system is rotten and corrupt to the core with no reform in sight (see my amazon Fraud & Greed: Wall Street, Banks, & Insurance book list for details).  The final nail in the coffin is Fukushima — even if the government decided to build nuclear power plants, public opposition would be too high.  Not to mention the most dysfunctional Congress in history.

Within the next few years (Hirsch), we will be on the exponentially declining oil curve of Hubbert’s Peak, and it will be too late to move the waste, because our priorities will be rationing oil to agriculture (to grow, harvest, and distribute food), repairing essential infrastructure, home heating and cooling, and emergency services.

Once the energy crisis hits, even if new nuclear plants are begun — which is not a given, since the crisis is in oil, and electricity doesn’t solve that — building would probably stop, because within the next ten years there are very good odds of another nuclear disaster: our plants are old and falling apart.

It’s really bad, much worse than most people realize. I highly recommend the 128 page report by Hirsch called “Nuclear Reactor Hazards Ongoing Dangers of Operating Nuclear Technology in the 21st Century”, or my summary of this paper at energyskeptic “Summary of Greenpeace Nuclear Reactor Hazards”.

I have nothing against nuclear power.  I don’t even see nuclear waste as the most serious kind of waste that needs to be dealt with.

But it is outrageous that we are doing nothing to protect future generations, who will be back to living in the age of wood and helpless to do anything themselves about the nuclear waste we’ve generated.  They’re going to have enough problems to cope with.

Another reason why it is unlikely many nuclear power plants will be built in the future is that they would barely make a dent in the energy crisis.  Alley points out that to both address climate change AND meet the world’s projected energy needs over the next 50 years, we would need to build ALL OF THESE (Pacala):

  • Fuel economy increased for 2 billion cars from 30 to 60 miles per gallon
  • Carbon emissions cut by 25% in buildings and appliances
  • Replace 1,400 gigawatts of coal plants with natural gas plants. These NG plants would require 4 times as much natural gas as is being produced now.
  • Capture and store 80% of CO2 from today’s coal production
  • Use 17% of all of the world’s croplands to produce biofuels (instead of food)
  • Build 2,000,000 windmills on 3% of land in America
  • Build 900 nuclear power plants to replace coal power plants (there are about 450 nuclear power plants globally now)

Plutonium waste

Plutonium waste needs to be kept away from future terrorists and dictators for the next 30,000 years.  But world-wide there’s 490 metric tons of separated plutonium at military and civilian sites, enough to make more than 60,000 nuclear weapons.  Plutonium and highly enriched uranium are located at over 100 civilian reactor plants.

In addition, there’s 1,400 tons of highly enriched uranium world-wide.  A crude nuclear bomb can be made from as little as 40 to 60 kilograms of U-235, so 1,400 tons is enough for roughly 28,000 nuclear bombs (1,400 tons ÷ ~50 kg per bomb).

30,000 Russian nuclear warheads with 100 tons of pure weapons-grade plutonium have been dismantled in Texas & Siberia since 1991, with some of this waste dispersed to Hanford, Savannah River, Los Alamos, and other DOE weapons complexes.

There’s also a huge amount of plutonium in spent fuel from civilian nuclear reactors piling up in the UK, France, Russia, Japan, and other nuclear countries. Although it wouldn’t make as good a bomb as the military plutonium, it can still make a bomb, and certainly a dirty bomb.

A National Academy of Sciences (NAS) study group considered 30 different ways of getting rid of excess plutonium, and in the end said that only 2 of these were worth consideration (both of which would end up in a geologic repository).

The first is to mix plutonium and uranium together (MOX), burn them in a commercial reactor, and generate electricity.  The resulting waste would be too hot to touch, so dangerous that no one could get near it, not even after 50 years.  The containers would be too large to make off with as well.

The second option would be to vitrify plutonium with highly radioactive waste at the Hanford or Savannah River sites, turning it into giant glass logs.  Some of the issues with this were unknown criticality risks, and whether the plutonium could still somehow be recovered for use in weapons.  The Russians were very much against this option because the plutonium would go to waste, instead of generating electricity as in the first option.

A plutonium + uranium (MOX) facility has been under construction since 2007 that’s cost $5 billion so far with no customers willing to burn the MOX fuel (Becker).

Back in 1973, a breeder reactor program that would use plutonium as the fuel consumed half of the total U.S. energy research and development budget. Glenn T. Seaborg was its most passionate promoter because he felt “that he had discovered a new element that would be the salvation of mankind.” He expected the USA to get 70% of its electricity by the year 2000 from plutonium, and the AEC thought there’d be more than 500 breeder reactors by then, and perhaps 2,000 by 2020.

Yet at the same time, the New Yorker magazine in 1973 published an article about how anyone could figure out how to make an atom bomb from unclassified sources if they could get plutonium to build it.  Breeder reactors would create so much plutonium that even the Atomic Energy Commission thought enough would be stolen to create a black market for it.  At that time the West Valley re-processing plant couldn’t account for 2-4% of their plutonium, enough to make several bombs.

President Jimmy Carter, a nuclear engineer, and 21 influential scientists, economists, and politicians were so worried about proliferation of potential bomb material that Carter stopped commercial reprocessing.  President Reagan tried to reverse this by encouraging private industry to take over, but no companies were willing to take the risk.

Reagan’s Secretary of Energy, a former dentist, stirred up controversy when he proposed that the plutonium from the nuclear waste of utilities be extracted to make bombs.  NRC Commissioner Peter Bradford wryly noted that customers would not like to think that every time they turned on their lights they were also helping to make atomic bombs.

France, Russia, Japan, India, and the UK (and soon China) reprocess their nuclear waste one time only (too hard and expensive to do a 2nd time).  They’ve all created more MOX fuel than they can burn, which has led to increasing stockpiles of plutonium (fissilematerials.org).

Spent nuclear fuel

Nuclear waste is one million times more radioactive than the original uranium fuel rods.  If left out in the air, the metal surrounding the nuclear waste would melt or self-ignite, so spent fuel must be immediately put into water to both cool it down and block the radiation.  After a year the heat drops 99%, and five years later by another factor of five, yet even then, it’s still very hot.

Why you should be afraid of nuclear waste

  • The shorter the half-life, the more radiation: thorium-234, with a half-life of 24 days, is more radioactive than uranium-238, with a half-life of 4.5 billion years (see the sketch after this list).
  • A rough rule is that the amount remaining after 10 half-lives is small enough not to worry about.
  • The worst high-level wastes are cesium-137 and strontium-90, which last for hundreds of years, with half-lives of 30 years.  They’re 100 million times more radioactive than uranium. 99% of the radioactivity at the Hanford Nuclear Reservation is due to these 2 isotopes alone
  • Cesium is extremely dangerous because it emits gamma and beta radiation.  It’s both highly reactive and soluble in water, and easily absorbed by plants and animals, where it goes up the food chain. If we breathe, eat, or drink any, it becomes part of our stomach, intestines, liver, spleen, and muscles, where it continues to emit harmful radiation.
  • Strontium is dangerous because it also can get into living organisms, and it’s so similar to calcium that it replaces the calcium in our bones and teeth for years, potentially causing cancer as it emits radiation (as does radium)
  • After cesium-137 and strontium-90 disappear, the worst wastes are the 1% comprised of the transuranics (neptunium, plutonium, americium), plus technetium-99 (half-life 211,100 years) and iodine-129 (half-life 16 million years). Both technetium and iodine are very soluble and mobile in groundwater, which makes them a huge long-term worry — for millions of years.
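
The decay arithmetic behind the first two rules of thumb in the list above is simple enough to verify. A short sketch, using only the half-lives already quoted in the list:

# Activity per atom scales as the decay constant ln(2)/half-life, so a
# shorter half-life means more radiation; and after 10 half-lives only
# 2^-10 (~0.1%) of a nuclide remains.

import math

def relative_activity(half_life_years: float) -> float:
    """Decay constant lambda = ln(2)/t_half, proportional to activity per atom."""
    return math.log(2) / half_life_years

th234 = relative_activity(24 / 365)   # thorium-234: 24-day half-life
u238 = relative_activity(4.5e9)       # uranium-238: 4.5-billion-year half-life
print(f"Th-234 is ~{th234 / u238:.1e}x more active per atom than U-238")  # ~6.8e10

fraction_left = 0.5 ** 10
print(f"fraction remaining after 10 half-lives: {fraction_left:.4%}")     # ~0.0977%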

Millions of curies     Source of radioactive waste

  • 3                  U.S. defense wastes released into environment (as of 1996)
  • 4                  Ocean dumping
  • 50                 Buried low-level waste
  • 100                Chernobyl (1986)
  • 110                Hanford releases to Columbia River, 1944-1971
  • 800                Tanks at Hanford, Savannah River, and Idaho (as of 2006)
  • 1,700              Russian defense wastes released into the environment (as of 1996)
  • 3,000              Uranium mine and mill tailings
  • 40,000             U.S. commercial spent fuel (2010)

If you live anywhere near the Hanford, Savannah River, or Idaho National Laboratory facilities, you may want to read Chapter 5, which is likely to make you want to move away, so this could be a very expensive chapter to read.

Low Level Radioactive Waste (LLRW)

There’s also quite a lot of LLRW, such as uranium mill tailings and medical and hospital wastes, but by far the largest amount is the components of nuclear power plants themselves, which become radioactive over time.  These wastes used to be dumped into big trenches all over the country, and no records were kept.  Finally a decent site, Ward Valley in California (far from populated areas, where no water could carry the wastes away), was found and studied extensively, but activists and politicians prevented it from opening.  So just like the extremely dangerous millions-of-years-long waste sitting at hundreds of nuclear plants around the world, low level waste that is also toxic is waiting for a safer place to be buried.

After decades of studies and being stopped numerous times over six different presidential administrations, one place was finally constructed for long-lived radioactive waste: the Waste Isolation Pilot Plant (WIPP) near Carlsbad, New Mexico.  It does not take spent nuclear fuel, only waste about 1,000 times less radioactive.  This waste will last more than 10,000 years, far longer than any civilization has lasted.

Why not recycle or reprocess the spent fuel?

It seems like such a waste to not do this, since the spent fuel still has 95% of the original uranium as well as some plutonium that’s being “thrown away”.

But it turns out that reprocessing is technologically complex, very expensive, prone to accidents, and quite messy; it yields only very modest savings of uranium (about 15 to 20%), and still doesn’t do much for the waste problem.

Expensive and/or doesn’t work. One of the few plants that reprocessed fuel (near Buffalo, New York) was shut down, and it’s expected to take 40 years and over $5 billion (2006 dollars) to clean it up. A second plant was shut down after $64 million was spent because it never worked, and a third plant never opened after $250 million was spent.

Causes additional waste.  Reprocessing causes the release of gaseous radionuclides that must be captured, plus a lot of transuranic waste – it’s pretty much a wash.

Can only be reprocessed once.  After France creates MOX fuel, it’s so difficult to reprocess again that it’s shipped back to the reprocessing facility for indefinite storage.

Fast (Breeder) reactors aren’t a silver bullet.  We don’t have them despite 62 years of research, and even if we figured out how to make them work, you’d need 16 cycles over 96 years to get a 100-fold mass reduction for just one batch of fuel. We don’t know how to do that yet, and we’d still be stuck with the worst long-lived fission products, which last millions of years and are mobile in groundwater.  President George W. Bush tried to get a fast breeder program started in 2006 (GNEP), but the National Academy of Sciences committee was unanimous in rejecting this program and funding was gutted.

Why not use Fast (Breeder) reactors to make remaining supplies last for millennia and reduce nuclear waste?

Not only would a fast reactor burn more plutonium than it breeds, it would also convert the most toxic remains to shorter-lived radionuclides.

But despite 62 years of research and billions of dollars since the first reactor (Zinn’s EBR-I), not one fast reactor has succeeded on a commercial scale, because they’re expensive, complicated, likely to be shut down for a long time after the slightest malfunction, and take a long time to repair.  The first commercial fast breeder (Enrico Fermi 1 in Michigan) shut down after a partial meltdown and other problems.  Clinch River was stopped in 1983 after cost overruns and worry about nuclear proliferation.

China, India, and Russia haven’t given up, but they’re building prototypes and experimental reactors, which are not at a commercial level yet.

Japan, France, and Germany have stopped their programs:

  • Japan spent $6 billion on the Monju fast-breeder, but it was shut down in 1995, after just one year, when a sodium leak caused a large fire. Japan tried again in 2010, but another accident shut it down. Overall the reactor has only generated electricity for one hour so far. After Fukushima, it’s unlikely Japan will ever try to build a breeder reactor.
  • Germany spent $4 billion on their Kalkar fast reactor, but never put it online.
  • France’s small-scale Phenix was shut down in 2009, and their full-scale prototype, Superphenix, was shut down in 1997 after suffering various disasters — the sodium cooling system had corrosion and leaks, heavy snowfall caused structural damage, and there were other problems.

The history of the search for nuclear waste disposal

Originally, back in 1957, it was thought that the waste would only need to be stored for 600 years or less.  No one had any idea that hundreds of thousands of years of safety would be needed, and it took decades for this understanding to sink in.

M. King Hubbert, credited with being the first scientist to go on record about Peak Oil in the United States in the 1950s, was on the nuclear waste storage committee at the National Academy of Sciences (NAS).  Hubbert wanted the storage to be in the best possible geologic location, but the Atomic Energy Commission fought hard for the wastes to be put in repositories at existing atomic weapons facilities.

The NAS committee felt strongly that no nuclear power plants should be built until a safe place to put nuclear wastes was found.  McCone, the head of the AEC, who’d tried to get 10 Caltech scientists labeled as Communists and fired when they objected to the radioactive fallout from nuclear testing, fought to have their safe waste storage recommendation removed from their report.  They’d written that “none of the major sites at which radioactive wastes are being stored is geologically suited for safe disposal”.   The AEC suppressed the report and disbanded the NAS committee.

Complacency & Secrecy

From the start in 1959, experts at the national laboratories, universities, and industry told the Joint Committee on Atomic Energy that a solution to the waste problem was possible, so Congress dropped this as an issue to worry about until 1975.

Also, the atomic bomb and nuclear business in general were shrouded in secrecy, even politicians were kept out of the loop until the 1970s, when Senator Muskie and others began asking serious questions.

Some of the earliest waste disposal ideas

  • Create an uninhabitable (“dehumanized”) radioactive belt across the entire 38th parallel of the Korean peninsula to prevent Communist attacks from the North, which would also serve as a warning to other nations
  • Drop radioactive waste products over enemy territory
  • Missiles with radioactive waste great enough to kill large populations in big cities
  • Shoot radioactive waste into space, send them to the moon
  • Sink it in the polar ice caps where the heat would make it melt its way through to the bottom of the ice sheet
  • Bury it beneath a remote island. No: possible seismic activity, tsunamis, rising sea levels, NIMBY, etc.
  • Deep well injection, as the oil industry does when injecting salty water to drive oil toward a producing well
  • Rock melting: use an underground nuclear bomb to create a cavity deep underground, fill it with water to cool the waste, then the water would boil off and the rocks above would melt and seal the wastes in

Salt Beds – the Good

It was assumed that salt beds would be safe because they can be hundreds, even thousands, of feet thick under huge areas.  Salt dissolves easily in water, so a thick deposit meant that there hadn’t been groundwater for the millions of years needed to form it; salt beds also tend to be in areas free of earthquakes.  Salt is equal to concrete in radiation shielding, plastic enough to seal up after a fracture, and conducts heat better than rock, which helps solve the issue of overheating from the nuclear waste.

Salt Beds – the Bad

When water gets in, very corrosive saline brines form and migrate toward heat, which would corrode the waste containers. If radionuclides escaped, salt is not good at holding onto them; it’s like teflon.  Ideally you’d want to have waste in a kind of rock that was good at sorption (attachment onto the mineral surfaces), because that can delay or even stop subsurface contaminant movement.

Despite this, the Lyons salt beds were almost used, until it was found that 26 exploratory oil and gas wells had been drilled there and would be hard to plug up; in addition, 175,000 gallons of water had disappeared down them during hydraulic fracturing at a nearby mine, and no one knew where the missing water was.

Drawbacks to ocean disposal

If we wanted to put all the nuclear waste into the ocean, we’d need a volume of water equal to about 5% of the ocean to dilute the waste to safe levels – an amount of water larger than all the fresh water in lakes, rivers, groundwater, glaciers, and the polar ice caps.
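
A rough plausibility check of that comparison, using round figures from standard references (the volumes below are approximations, not figures from the book):

# The oceans hold roughly 1.3 billion km^3 of water; all fresh water
# (lakes, rivers, groundwater, glaciers, ice caps) totals roughly
# 35 million km^3. Both figures are round approximations.

ocean_km3 = 1.3e9
fresh_km3 = 3.5e7

needed_km3 = 0.05 * ocean_km3       # ~5% of the ocean
print(f"water needed to dilute the waste: ~{needed_km3:.1e} km^3")   # ~6.5e7
print(f"all fresh water on Earth:         ~{fresh_km3:.1e} km^3")    # ~3.5e7
print("needed volume exceeds all fresh water:", needed_km3 > fresh_km3)  # True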

  1. Escaped radioactive material would be eaten by plankton and concentrated up the food chain.
  2. Ocean currents will carry escaped contaminants long distances.  A year after the Bikini atoll nuclear test, contaminated water had spread to over 1 million square miles.
  3. Obviously surface waters would be a bad choice; that’s where the fish are. But even in the depths of the Mariana Trench, 7 miles below the surface, it was clear that any nuclear wastes dumped there would eventually make their way back up to the surface.

Despite these drawbacks, the United States dumped low-level waste in 87,000 steel drums, 50 miles off the California coast and in the Atlantic Ocean (where the majority went), between 1946 and 1970.  Meanwhile, 14 European countries were doing the same.  It wasn’t until the early 1960s that the public began to object to ocean dumping, especially as toxic wastes floated to shore and other episodes occurred.  Even France got into the act, dumping quite a bit into the Mediterranean Sea.  Jacques Cousteau was one of the leaders of the anti-dumping movement, which is part of what led to his international fame (even before his underwater films made him well known).

The Soviet Union was by far the biggest dumper, disposing of 16 nuclear submarine reactors and much other waste besides, about twice as much as all other countries combined, because dumping was cheaper and easier.  After the collapse in 1991, power was cut to aging nuclear submarines that weren’t paying their bills, despite the consequences if their reactors weren’t kept cool!  One of the submarines resorted to hauling potatoes to pay its electric bill.

Finally, in 1993, after many other incidents described in the book, 37 nations voted to stop ocean dumping.  Greenpeace has since caught the Japanese secretly dumping wastes, and the ban is hard to enforce, but at least ocean dumping is no longer tolerated.

Seabed floor

  • First proposed in 1973: bury the waste in the clays of the deep-sea floor, so that even if radioactive particles escaped, they would cling tightly to the clay
  • The clays are the least desired real estate on the planet
  • They have low permeability to water
  • They have the plasticity to seal any cracks around a waste container
  • Escaped contaminants aren’t likely to move more than a few meters even after 100,000 years

Possible problems

  • How would the heat affect water and chemical movement within the clay
  • Organisms living in the clays might transport waste to the seafloor
  • Strong currents might carry clay-bound radionuclides to the ocean surface
  • The risks of transporting the wastes not only across land but over the ocean, where accidents are even more likely than on land
  • If there were an accident, the wastes couldn’t be retrieved

In 1986 this idea was abandoned without ever being tested.  When its main proponent, Charles Hollister, died in 1999, the possibility of subseabed disposal died with him.  It might have been the best possible way to go.

Interim site

The federal government was legally obligated to take possession of spent nuclear fuel starting in 1998, and has been sued ever since, for $760 million so far and another $13 billion of future liability costs.  Out of desperation some are proposing an interim site, but of course no state wants one, lest it become the permanent site.  Some Native American tribes were willing to host the waste (since this would pay them well), but the states where the reservations existed (New Mexico, Utah, and Nevada) found ways to prevent that from happening.

Not having a permanent repository, or even an interim site, made it very hard for the nuclear industry to expand nuclear power.

Yucca Mountain

In 1975 it was decided that 6 sites would be studied as possible repositories, but by 1987 only one site was under consideration: Yucca Mountain.  Even in 1976 Yucca Mountain seemed like a good location, since 900 man-years of data collection and interpretation in hydrology, geology, and geophysics had already been done there, and the area was already contaminated with radiation from nuclear testing.

Of course politics played a big role too, since the other 12 possible repository states fought hard to keep from becoming the permanent or interim solution.  Salt: Louisiana, Mississippi, Texas, Utah; Precambrian granite: Michigan, Minnesota, Wisconsin; interim (due to defunct reprocessing facilities): Illinois, New York, South Carolina.

It was clear from the beginning that a 100% guaranteed perfectly safe site was impossible, but opponents began demanding total certainty, an impossible standard.

And then, total disaster: a federal court ruled in 2004 that Yucca Mountain must be shown safe not for 10,000 years, but for one million years.  That is an impossible span of time to grasp, let alone to guarantee safety over.  Consider that just 150 years ago we traveled in horse-drawn carts on muddy tracks, that agriculture was invented 10,000 years ago, and that Homo sapiens reached Europe from Africa 40,000 years ago, only 4% of one million years.

It is impossible to find a site anywhere in the world that’s guaranteed to be safe for 1 million years. Nor is there enough time to do hundreds of studies at other sites.  We don’t have decades to dawdle.  Peak fossil fuels are here (not just oil, but coal and natural gas as well).

To address the million-year challenge at Yucca Mountain, hundreds of scientists spent the next 25 years brainstorming 1,200 generic Features, Events, and Processes (FEPs) that might happen, plus hundreds more specific to the Yucca Mountain site.  Then each FEP, and each combination of FEPs, was analyzed using computer models and scenarios.

Here’s a list of just a few of the FEPs studied.  The details are too complicated to review here; read the book to learn more about the nuances and complexities of these issues:

  • The tectonic setting and susceptibility to earthquakes (pp. 293-296)
  • Future volcanic intrusions and eruptions (pp. 289-293)
  • Upwelling water (pp. 297-301)
  • Fluid inclusions (pp. 301-305)
  • How the hot waste would interact with the host rock, and whether heat from the waste could weaken it
  • Whether fractured rock would allow too much oxygen in and corrode the waste containers, or whether fractures would be a blessing, draining water away from the wastes in a cooler, wetter future
  • Waste packages failing from defective welds
  • Future humans drilling a hole into the repository
  • How much water gets into the mountain, where it goes, and how fast it gets there
  • The temperature and chemical composition of water and wastes as they travel through the rock
  • How climate change would affect rainfall, and what would happen if more rain got down to the repository
  • How long it would take for waste to seep out at springs
  • How fast water moves through the unsaturated zone
  • How fast canisters and spent fuel cladding corrode
  • Whether, once the waste packaging failed, the sorption characteristics of the rock would keep the waste from spreading
  • What the 1,000 years of above-boiling heat generated by the wastes would do to water and the surrounding rocks
  • How quickly exposed radionuclides are moved away by water, how fast contaminated water moves to the water table and beyond Yucca Mountain, and how well natural and engineered barriers slow this down

…and thousands more scenarios or combinations of scenarios.

One by one these issues were addressed. A quick summary of just a few:

  • Volcanic activity stopped millions of years ago
  • Earthquakes mainly affect the land surface, not deep underground storage
  • Waste could be stored 1,000 feet below the land surface yet still be 1,000 feet above the water table in an area with little water and only a few inches of rain a year.  Rain was not likely to travel 1,000 feet down.
  • The entire area is a closed basin. No surface water leaves the area.  The Colorado River is more than 100 miles away.
  • There’s no gold, silver, or oil to tempt future generations to dig or drill into the nuclear waste.
  • The mountain is made of a rock that makes tunneling easy yet at the same time tough enough to form stable walls that are unlikely to collapse.

Of course there are risks, but they are trivial compared to the risks of leaving nuclear wastes where they are now, vulnerable to theft, terrorism, future generations unaware of the hazards, hurricanes, tornadoes, flooding, sea level rise, power outages, and so on.  I know I repeat myself, but this is the main issue.

The fact that artwork has survived over 25,000 years in dozens of caves in southern France, where three times as much rain falls as in Nevada, is another indication that a cave-like storage area is probably good for many millennia.

After waste was put into the tunnels 1,000 feet below the surface of Yucca Mountain, the repository would not be sealed for 50 to 300 years, so scientists could monitor the waste, fix any problems that arose, and potentially retrieve waste.  During this time, the heat of radioactive decay would be removed by natural and forced ventilation.  Once the repository was sealed, temperatures would rapidly increase and remain above the boiling point of water for around 1,000 years.

After 10,000 years plutonium will be 90% of the remaining waste.

Yucca Mountain ended up being the most studied place on the planet.

Yucca Mountain is the best possible place to put nuclear waste.

Hundreds of studies were done by university, state, federal agency, and industry scientists.  This information was used to create the Department of Energy’s license application in 2008; decades of work were condensed down to 8,600 pages that weighed 110 pounds.   You can find the 109-page list of these studies at: http://pbadupws.nrc.gov/docs/ML0828/ML082890329.pdf

How Yucca Mountain got a bad rap

Most of the public still believes there are tremendous issues with volcanism, earthquakes, and so on, and that the only reason Yucca Mountain was under consideration was that Nevada was the weakest state politically.

Some of this is due to New York Times science writer William J. Broad, who not just once but twice wrote articles that were both incorrect and inflammatory.

The first time was in 1990, when Broad reported the findings of geologist Jerry Szymanski, who claimed that if the repository were ever flooded, hot corrosive liquids would cause vast calamities spreading throughout Nevada and California.  When his paper was peer-reviewed by over 40 scientists, it failed.  Yet Broad not only wrote about Szymanski as if he were a modern folk hero, he falsely proclaimed that Yucca Mountain would become the “most dangerous nuclear facility in the world”.  The tiny amount of space given to other scientists in rebuttal could not override the overwhelming impression left with readers that a terrible disaster was going to happen and would be forced upon the public by a government conspiracy.  Twenty scientists wrote a letter to the New York Times to express their dismay at the inflammatory article, at how biased the “scientific evidence” was, and at the implication that scientists at the USGS and DOE were incompetent or had compromised their integrity for fear of losing their jobs.  Only 2 paragraphs of their letter were printed.  In the end, after $20 million more of studies, scientists concluded that there was no evidence of hydrothermal activity at Yucca Mountain in the past 5 million years; Szymanski’s hypothesis of upwelling was wrong.

In 1995, Broad again wrote an incorrect and inflammatory article, which stated that a nuclear explosion might occur in the waste, despite many scientists soundly rejecting this hypothesis.  Broad was asked not to print the story before the issues were examined in a peer-reviewed publication.  He and the New York Times shouldn’t have needed to be told to wait; science writers do not print stories before peer-reviewed publication, and to do so is considered a serious ethical lapse.  Even though the claim had no scientific validity whatsoever, the New York Times went ahead, and never reported on any of the papers published afterward showing that a nuclear explosion in the waste was not possible, not even a very important paper, written by the entire nuclear engineering faculty at the University of California Berkeley and other experts, that resoundingly showed the claim to be untrue.

Nevadans had good reason to fight Yucca Mountain.  They didn’t have any nuclear power plants; why should they be stuck with the other 49 states’ wastes?  Las Vegas had suffered years of pink clouds blowing its way from above-ground atomic bomb tests, and is now the thyroid cancer capital of the world.  I don’t blame Nevadans for having no trust in the federal government and being tired of being nuclear guinea pigs.

Transport of nuclear waste from other states to Nevada

Nevada recruited other states to their side by pointing out that the waste would have to travel by truck or train through their states to get to Nevada.

So the National Academy of Sciences was asked to look at the safety of transporting spent nuclear fuel and high-level waste. They concluded that these shipments were low risk, a conclusion backed up by an excellent record: none of the 3,000 shipments so far, traveling over 1.7 million miles, has had an accident that released radioactive waste.  Nuclear waste shipments are also far less risky than the million rail cars a year carrying hazardous materials.

Clearly putting the waste in just one facility in a remote part of America would be easier to protect than the over 100 nuclear power plants where the waste was accumulating, but that argument got lost in the political battles and environmental protests.

Yucca Mountain shut down after 25 years and $10 Billion spent

In March 2009, Secretary of Energy Steven Chu announced that Yucca Mountain was no longer an option.

Alley says that Yucca Mountain was closed as political payoff to Senate Majority Leader Harry Reid (D-NV). Nevada was a swing state and Obama needed Harry Reid to help him with health care, climate change, and regulation of the financial industry.  With Reid up for election in 2010, a dead Yucca Mountain could be the factor that would get Reid re-elected.

A new Blue Ribbon Commission was formed, and it didn’t come up with any new ideas.  Even if it had identified an ideal waste site, this is the Congress least likely in American history to agree on anything.  Nor is nuclear waste even on its plate; there are far too many other issues demanding attention.

Conclusion

Dr. Roger Revelle, at the Scripps Institution of Oceanography, thought it wouldn’t be long before humans depleted all of the world’s mineral deposits and other natural resources.  They’d need nuclear energy to create minerals from scratch and to provide energy.  Without atomic energy, he worried that “our children’s children would look forward only to a slow decline into misery and fear”.

A few quotes from the book:

“The technical characteristics of nuclear waste make the disposal problem difficult, yet it is the human factors that have made it intractable.  These include a lack of interest in solving the problem, unrealistic demands for earth-science predictions far into the future, eroding confidence in government and institutions, confusion about which “experts” to trust, and the ever present NIMS and NIMBY. A better understanding of these human elements is imperative to avoid past failings (p. 322)”.

“Humans continue to deplete the world’s resources, cause mass extinction of species, destroy the ocean’s fisheries, destabilize the world’s climate, and poison the environment with persistent toxic chemicals – many of which will outlive the radioactive ones….Humans have given little thought for the next few generations, let alone many thousands of years into the future (p. 323)”.

“It is extraordinarily difficult (if not downright impossible) to address the complex problem of high-level nuclear waste in a society where a large percentage of the public places little or no value on facts.  Today’s culture of infotainment, sound bites, fundamentalist religion, ideological extremism and rigidity, and the politics of fear and hate impairs reasoning and thoughtful debate. As an astounding case in point, contemporary Americans are as likely to believe in flying saucers as in evolution. Depending on how the questions are worded, roughly 30-40% of Americans believe in each.  When asked about evolution, President George W. Bush hedged his bets, saying the “jury is still out.” (p. 327).

“Science literacy is much more than number crunching and memorizing facts. It requires a basic understanding of the scientific process and an appreciation for the fact that the more scientists learn, the more questions there are to ask. Without an understanding and respect for this process, the public is vulnerable to self-proclaimed experts who side-track efforts through unsubstantiated claims; resort to personal attacks on the integrity of scientists whose findings disagree with their agenda; and point to minor errors or inconsistencies as proof that the whole system is a conspiracy to deceive.  All too often, the media have exacerbated the problem (p. 328)”.

Appendix –other items of interest to me in this book

After the first energy crisis, Plan B was:

  • Oil shale, oil, and gas supplies developed domestically
  • Shifting from oil and gas to coal so we could make a transition to heavy reliance on nuclear power
  • Continued research on breeder reactors
  • Conservation: efficient buildings, etc.
  • Solar energy research: increased, but only a very small part of the overall energy program
  • Wind and biofuels: not under consideration
  • Fusion: in the end, nuclear fusion was expected to solve all of our energy problems

Source: Ray, D. L. 1973. The nation’s energy future, a report to Richard M. Nixon. US Atomic Energy Commission

Nixon’s Project Independence: the problem, as Robert Gillette described it in Science magazine, was that it required too many tough decisions. New technology is only half the battle; actually implementing it would require political decisions about oil shale leasing, power plant siting, and other matters outside of research and development.  The upshot was that President Nixon couldn’t buy his way into energy independence.  Nixon ended up backing off from self-sufficiency to merely reducing our dependence on insecure foreign energy sources.

For about $7 billion, the nuclear waste now in spent fuel pools could be put into dry casks, but casks have a design life of only 50 years, so that’s not a good long-term solution; it would be better to get spent fuel into permanent geological storage as soon as possible.   There are many problems with dry casks. Although the NRC speculates they could last longer than 50 years, the numerous problems found after just 15 years of testing, with waste much cooler than what’s out there now, mean that kicking the can down the road by leaving waste in casks rather than a repository is no solution at all.  Cask makers and utilities are also unhappy that some people, like Senate Majority Leader Harry Reid (D-NV), propose casks as the best option.

Uranium production today: 60% comes from Canada, Australia, and Kazakhstan; 4% from the United States.

After India successfully exploded their first bomb, the secret message to Prime Minister Indira Gandhi was “The Buddha is smiling.”

One really weird thing about nuclear decay is that as an element decays, it turns into other elements both lower and higher in the periodic table (e.g., radium-226 decays down to radon-222, while cesium-137 moves up to barium-137).

You need at least an inch of heavy metal to protect you from gamma radiation, which can penetrate deeply into your body. Beta radiation can penetrate your skin a small fraction of an inch, and alpha can’t penetrate the outer layers of your skin at all.  The main danger is that you might breathe, drink, or eat radioactive particles, which can do a lot of damage once inside the body.

Production of military fissile materials continues in India, which produces plutonium and HEU for naval propulsion; Pakistan, which produces plutonium and HEU for weapons; and Israel, which is believed to produce plutonium. North Korea has the capability to produce weapon-grade plutonium and highly enriched uranium.

What are other countries doing?

World-wide, another 270,000 tons of waste are vulnerable to terrorists, tsunamis, floods, rising sea levels, hurricanes, electric grid outages, and other disasters.

In 2001 Russian president Putin announced that any nation could send them spent fuel for indefinite storage on Russian territory.  But it isn’t likely we’d take them up on that given our fears of nuclear proliferation and terrorist access to this material.


France, Russia, the United Kingdom, Japan, and India operate civilian reprocessing facilities that separate plutonium from spent fuel of power reactors. China is operating a pilot civilian reprocessing facility.

Twelve countries – Russia, the United States, France, the United Kingdom, Germany, the Netherlands (the last three through the URENCO consortium), Japan, Argentina, Brazil, India, Pakistan, and Iran – operate uranium enrichment facilities. North Korea is also believed to have an operational uranium enrichment plant.

References

Becker, J. 10 Apr 2011. New Doubts About Turning Plutonium Into a Fuel. New York Times.

Blue Ribbon Commission on America’s Nuclear Future. 2012. Report to the Secretary of Energy.

Bruno, J. 2006. Spent Nuclear Fuel. Elements, Vol. 2.

Dittmar, M. September 2013. The End of Cheap Uranium. Science of The Total Environment, Vols. 461-462: 792-798.

Fissilematerials.org. 2013. Global Fissile Material Report 2013: Increasing Transparency of Nuclear Warhead and Fissile Material Stocks as a Step toward Disarmament.

Gabriel, S. December 2013. Building future nuclear power fleets: The available uranium resources constraint. Resources Policy, Vol. 38.

Hirsch, R. L., et al. 2010. The Impending World Energy Mess: What it is and what it means to YOU!

Mayumi, K. 15 January 2012. Uranium reserve, nuclear fuel cycle delusion, CO2 emissions from the sea, and electricity supply: Reflections after the fuel meltdown of the Fukushima Nuclear Power Units. Ecological Economics, Vol. 73.

Pacala, S., et al. 2004. Stabilization wedges: solving the climate problem for the next 50 years with current technologies. Science 305: 968-972.

Tverberg, G. (Gail the Actuary). 2009. How Long Before Uranium Shortages? theoildrum.com.


The effect of high energy prices on small business. U.S. House of Representatives, 2011

[ This hearing is about how the unaffordable prices of energy are affecting ordinary people.  Chairman Tipton at one point says that “I do not think that Americans truly realize the significant amount of energy that is necessary to be able to produce food stuffs in our country that we eat daily–upwards of 50% of total production expenses are reliant upon energy costs”.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report ]

House 112-011. April 14, 2011. Drilling for a solution: finding ways to curtail the crushing effect of high gas prices on small business. U.S. House of Representatives. 58 pages.

Excerpts:

Chairman SCOTT TIPTON, COLORADO. Today we will hear directly from small businesses on how increased fuel prices have affected their bottom lines and their ability to expand and create jobs. Small businesses have been hit especially hard by high fuel prices. In addition to driving up the costs of transportation for their goods and services, the spike in gas prices is drying up consumers for many of our small businesses. Just yesterday, Walmart’s chief executive officer told the Washington Post that the retail giant’s number of customers is increasing with rising gas prices. In an effort to tighten up their budgets by driving less, consumers tend to consolidate their shopping trips to one larger box store rather than going to a handful of community shops where they would normally visit. This trend is even more alarming when taking into consideration that many communities across our country have already seen their consumer bases dwindle in conjunction with staggering unemployment. We are essentially watching the extinction of the mom-and-pop shops play out before our very eyes.

Retailers, of course, are not the only ones feeling the pinch of high gas prices. As we will hear today, it is hitting our farmers, our ranchers, especially hard, and any business that relies on fuel to send or receive their goods and services. This increased cost of doing business is either absorbed by the company, diverting resources away from investment and expansion, or passed along to cash-strapped consumers who have already tightened their belts in cutting back. In either case, it is a roadblock to economic security in this country, economic recovery, and job creation.

MARK CRITZ, PENNSYLVANIA. Small businesses play a key role in the economy creating nearly two-thirds of net new jobs. However, with gas prices rising, their contributions to this growth may be jeopardized. In the last 3 months, oil prices have reached a 30-month high exceeding $112 per barrel. With the U.S. importing more than 200 million barrels of oil per month, the cost of doing so is substantial. Many analysts are suggesting that these increases could lead to gas prices of $5 or more per gallon. Small businesses are drivers of economic progress, but a recent report shows that surges in energy prices are a top concern among them. According to the PNC Economic Outlook Survey of Small Firms, nearly three-quarters responded a sustained rise in energy prices would have a negative impact on their business potentially restraining growth. In order to deal with these price increases, small businesses are often faced with two choices. They can either absorb the costs or pass them on to their customers. Absorbing the higher prices creates financial challenges resulting in less capital to expand their business or hire new employees. Passing the cost increases on to consumers can reduce demand for a firm’s goods and services. Neither are preferable alternatives and this is why we must find a solution. Whether these solutions focus on increasing supply or reducing demand, it is clear that the status quo is not an option. Steps must be taken to increase U.S. energy independence. While much of the price increases are tied to the uprisings occurring in Northern Africa and the Middle East, growing demand as the global economy recovers is also a significant part of this equation. Increasing the supply of oil can lead to lower gas prices. While there are several options to do so, one of the most promising is increasing access to potential oil resources under the U.S. Outer Continental Shelf, particularly in deepwater areas.

Another important energy alternative is to increase the use of oil shale. I know the Green River Oil Shale Formation in Colorado, Utah, and Wyoming is estimated to hold the equivalent of 1.38 trillion barrels of oil equivalent in place.

In Pennsylvania, 75 percent of the natural gas used every day is imported. The Marcellus Shale Formation holds enough recoverable natural gas reserves to not only serve Pennsylvania’s needs but to turn our country into a significant exporter of energy, generating equally significant economic benefits. This is incredible when you think back 10 years, when we were only discussing importing this gas.

The United States has enough coal to meet projected energy needs for almost 200 years.

JIM EHRLICH. I speak on behalf of the 170 different potato growers in the San Luis Valley of South Central Colorado. Colorado ranks as the second largest shipper of fresh market potatoes in the country, a fact that many people do not know.  These growers typically produce about 2.2 billion pounds of potatoes a year with a market value of $175 to $240 million, depending on the price of potatoes that year. The San Luis Valley is a high alpine desert, base elevation 7,600 feet, with less than 7 inches of moisture annually.

Irrigation supplies are dependent on abundant snowpack and sustained utilization of a vast underground aquifer.

This 6-county region of Colorado is dependent upon agriculture as the economic engine for the valley’s 50,000 residents. Unfortunately, we possess some of the poorest counties in Colorado with many rural families having incomes below poverty level and without opportunity for better jobs.

Today I am going to focus on three things: the impact of high energy prices and gas prices on potato producers in the valley, the inability of the United States to increase domestic production of our vast energy reserves, and the cost of regulation to potato producers, the impact of high energy and gas prices on potato producers.

I recently read a report claiming that for every 10 cent increase in gas prices there is a net loss of $5 billion to the United States’ economy. When you consider the fragile state of the worldwide economy and our economy in the United States, this has great significance. And when you consider that petroleum-based products are the only source for most of the world’s transportation needs, there is no real mystery why, with a single limited supply of that one item and worldwide demand growing like it is, there is a problem.

Agriculture requires energy as a critical input to production.

Potato production uses energy directly as fuel and electricity to operate tractors and equipment, cool potato cellars, and process and package product; indirectly, fertilizers and chemicals produced off the farm are needed as critical inputs for crop production.

Total energy costs of an irrigated potato crop in the San Luis Valley can be as great as 50 percent of the total production expenses.

Unlike areas of the country where irrigation is unnecessary or no-till practices are common, this is not the case with potato production in the San Luis Valley. It requires large amounts of electricity to irrigate and large amounts of tillage.

Crops must be stored at the correct temperature and humidity year round to ensure marketable condition for consumers.

The crop must be shipped in refrigerated trucks to distant markets across the country throughout the year.

So what happens when gas prices rise like they have this year? Because farmers are price takers and lack the capacity to pass on higher costs through the food marketing chain, the net result is a loss in farm income. The reality is that prices of most fuel sources tend to move together, so as gas prices rise, other energy prices rise in concert. Fertilizer prices are dependent upon natural gas prices, and potatoes require large amounts of nitrogen, phosphate, and potash fertilizers.

Harvest, sorting, grading, and shipping are all heavily mechanized energy-dependent steps. The San Luis Valley is located in an isolated mountainous region. High diesel prices affect freight rates and truck availability cutting into the growers’ bottom line.

Because the United States relies on imported sources of oil for over 60 percent of our oil needs, we export wealth daily, primarily to countries that are hostile to us. This not only causes economic stress but is a threat to our national security. Without a stable source of relatively economical energy for agriculture, our nation’s food security is at risk as well, and as a result, our national security. As the proud father of a U.S. Marine currently serving in Afghanistan, I speak from my heart.

Rick Richter. I own Richter Aviation, an aerial application business in Maxwell, California, and I am testifying today on behalf of the National Agricultural Aviation Association (NAAA), of which I am the 2011 president. NAAA is a national association which represents the interests of small business owners and pilots licensed as commercial applicators that use aircraft to enhance the production of food, fiber, and biofuel, protect forestry, and control health-threatening pests. Aerial application accounts for an estimated 18 percent of commercially applied crop protection products in the United States and is often the only method for timely pesticide application, especially when wet soil conditions, rolling terrain, or dense plant foliage prevents the use of other methods of treating an area for pests.

The average aerial application business consists of two operating aircraft, four people, including two pilots, a mixer-loader, and an administrative staffer. Increases in fuel prices result in a number of cash flow and service marketability issues for the aerial application industry. And, of course, the price of fuel for agriculture will trickle down to the end consumer of food.

At the beginning of the season, an aerial applicator sets a base price per acre treated by air based on the expected cost of operation. This is the amount he charges his farmer clients. Depending on the type of fuel used, of which there are two (avgas for piston-engine aircraft and Jet A for turbine-engine ag aircraft), an operator includes a base price for fuel going into the season. Some applicators stick with this price regardless of fluctuations in fuel price, and as a result may lose money when prices go up steeply. Other applicators will incorporate a fuel surcharge into their pricing structure. Incorporated within that fee-per-acre charge is the fuel charge, which is based on an average price of fuel per gallon; this ranges, but on average it is estimated to be about $2 per gallon. If fuel rises above that figure, a fuel surcharge is added. A typical fuel surcharge is the difference between the average price per gallon that an applicator builds into his acre charge and the price of a gallon of aviation fuel at the time of application (assuming the latter is greater), multiplied by the average number of gallons burned by that particular aircraft in an hour, multiplied by the amount of time it took to make the application for the farmer. Fuel surcharges in our industry have been met with minimal complaint by farmer clients as of late because they will be getting a good price for the crop. If this were 2002 and we faced the same high fuel prices we face today, but ag commodity prices were two to three times lower, our industry would be facing some real challenges. As of April 6, 2011, the wholesale price of Jet A without taxes was $3.33 per gallon as quoted by a large Southeast U.S. fuel supplier. If in 2002, when commodity prices were much lower, Jet A fuel for turbine-powered ag aircraft had been priced where it is today, or where it was at its height in 2008 when it averaged $4.72 per gallon, it would have been much tougher for a farmer to embrace a fuel surcharge for aerial application services rendered.
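[ Richter’s surcharge arithmetic, sketched below in Python. The $2.00 base price and the April 2011 Jet A quote of $3.33 come from his testimony, as does the 50 gallons-per-hour average burn rate he cites shortly after; the 1.5-hour job is an illustrative assumption. ]

```python
# Fuel surcharge as Richter describes it: (price at application time
# minus the base price built into the per-acre charge), times gallons
# burned per hour, times the hours the application took.

def fuel_surcharge(base_price, current_price, gallons_per_hour, hours):
    """Surcharge in dollars; zero if fuel is at or below the base price."""
    return max(0.0, current_price - base_price) * gallons_per_hour * hours

# Turbine ag aircraft burning 50 gal/hour on a hypothetical 1.5-hour job,
# with Jet A at $3.33/gal against a $2.00 base:
print(fuel_surcharge(2.00, 3.33, 50, 1.5))  # about 99.75
```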

Realistically, when input prices such as fuel are high and commodity prices are low, a significant drop in the use of aerial application services and other farm services would occur as a result of containing costs. Well, this helps the farmer contain expenses but frequently results in less yield and poor crop quality, hence negatively affecting his revenue potential. The lack of application work is a challenge for an aerial application operator that requires steady business each season to remain viable.

Another challenge that aerial applicators face, particularly when fuel prices are high, is the financial terms that fuel suppliers set for payment and how those terms differ from the applicators’ own accounts receivable terms. The typical payment term that an aerial applicator has with his fuel supplier is 10 days with established credit. This usually differs from the payment terms that aerial applicators’ customers are accustomed to, which typically run between 45 and 60 days. This can pose challenges because fuel costs make up approximately 20 percent of an aerial applicator’s total expenses. If the average ag aircraft burns 50 gallons per hour and is flown 300 hours per season, and there are 2.2 aircraft on average per aerial application operation, then 36,816 gallons of fuel will be required.

When an applicator is facing a deficit in accounts payable compared to his accounts receivable, and outlaying large chunks of capital for fuel, particularly when the price of fuel is high, the result may be sizeable interest payments for small aerial application businesses. It is widely expected that higher interest rates will return, and, coupled with greater global demand for fuel, will likely lead to a steady increase in the price of fuel and place much greater cost pressures on small aerial application businesses. High fuel costs do in some instances lead aerial applicators to take more risk by trying to hedge the price of fuel, filling their tanks early and storing fuel. But storing fuel for too long can result in moisture in the fuel, algae problems in Jet A, and possibly evaporation of avgas.

One other issue of concern to the agricultural aviation industry that is related to fuel supply is an effort underway to phase out the use of avgas. EPA has mentioned the possibility of a new environmental standard associated with avgas due to its emissions of lead in the air, and there are calls by environmental activists to ban the fuel completely. Avgas is used in 51.87 percent of ag aircraft in the U.S. today. NAAA’s primary concerns are with the safety and feasibility issues associated with mandating a shift from avgas. NAAA has encouraged the EPA and the FAA to allow time for and devote resources toward the development of a suitable alternative to avgas before imposing avgas regulations or banning the use of the fuel altogether. NAAA urged the agency to consider the detrimental economic impacts that could occur to our industry and the farmers that rely on us should avgas be phased out prior to the development of a safe and practical alternate fuel. Piston engines are notably less expensive than turbine engines.

Dick Pingel. I live in Plover, Wisconsin, and have been a small business trucker for the past 28 years. I am a member of the Owner-Operator Independent Drivers Association and currently run a one-truck operation hauling food around the country. As you are most likely aware, OOIDA, as it is known in the trucking industry, is a national trade association representing the interests of small business trucking professionals and professional truck drivers. The more than 152,000 members of OOIDA are small business men and women in all 50 states who collectively own and operate more than 200,000 individual heavy-duty trucks. The majority of the trucking community in this country is made up of small businesses: 93% of all carriers have fewer than 20 trucks in their fleet, and 78% have just 6 or fewer. In fact, one-truck operations like mine represent nearly half of the total number of federally registered motor carriers.

Assuming that the trucking industry exclusively moves about 70% of our nation’s goods and that just about all freight is moved by truck at some point in the supply chain, it is not hard to see that the costs and burdens that encumber small business truckers have an impact on our nation’s businesses and consumers. The cost of fuel is very often the largest operating expense with which small business truckers must contend. For folks like me, fuel costs can easily be 50 percent or more of our annual operating expenses. To give you some perspective, the average OOIDA member runs their truck about 120,000 miles or more each year while getting somewhere in the ballpark of only 7 miles per gallon. Most of us will be operating trucks equipped with either twin 135-gallon tanks or twin 150-gallon tanks, so we can easily see a bill of over 1,000 dollars when we fill up.

In addition to the fuel going into the tanks of my tractor, I use a trailer with a diesel-powered refrigerating unit to haul dairy products for producers in Wisconsin. Until recently, I could count on it costing about $50 to fill up my tank for the reefer unit. However, in recent months the cost to fill this tank has increased to more than $100. The additional money I am now spending on fuel for my truck and trailer once went into investing in other areas of my business, but now it must cover basic operating expenses. Every time I pull into a truck stop, I hear similar stories.

The national average for diesel is now around $4.12 a gallon, with prices in some states approaching $4.50 per gallon. To put this into perspective, each time the price of a gallon of diesel fuel increases by a nickel, a trucker’s annual cost increases by $1,000. Diesel prices today are more than a dollar higher than they were this time last year, resulting in an enormous extra burden on small business truckers whose average annual income is less than $40,000.
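[ A quick check of these rules of thumb, using the 120,000 miles per year and 7 mpg cited above; the arithmetic and rounding are mine: ]

```python
# Check the trucking fuel arithmetic from Pingel's testimony.
miles_per_year = 120_000          # typical OOIDA member
mpg = 7                           # heavy-duty truck fuel economy
gallons = miles_per_year / mpg    # about 17,143 gallons a year

print(0.05 * gallons)   # a nickel more per gallon: ~$857/yr, the same
                        # ballpark as the $1,000 rule of thumb quoted
print(1.00 * gallons)   # a dollar more than last year: ~$17,143/yr,
                        # against an average annual income under $40,000
print(2 * 135 * 4.12)   # filling twin 135-gallon tanks at $4.12: $1,112.40
```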

Small business truckers operate in a hyper competitive market, so managing their number one expense is imperative for their survival. In our marketplace, we often see costs increase without any corresponding rate increases. As such, the only way to survive is to become more efficient in how one operates their truck. Small business truckers always drive with an eye towards saving fuel no matter what the price because our business survival depends on it. As small business truckers like myself know, reducing fuel costs is not a science, it is an art and one that we pride ourselves on being masters of.

Dr. Robert Weiner is a professor at George Washington University. Professor Weiner has authored or co-authored four books on energy markets and oil. He has also authored more than 50 articles on environmental and natural resource economics focusing on energy security, risk management, and the oil and gas markets and companies.

The idea of peak oil, which is the third idea, is simply not supported by expected prices. Peak oil suggests we are running out of oil. I think we have seen the entrepreneurship, ingenuity, and technology of business in the United States, and the ability, at least for now, to stay well ahead in the battle against depletion and to increase our domestic energy production, if regulation allows.

Chairman TIPTON. Jim, I do not think that Americans truly realize the significant amount of energy that is necessary to be able to produce food stuffs in our country that we eat daily. Given that upwards of 50 percent of total production expenses are reliant upon energy costs as you noted in your testimony, do you believe that if oil prices reach or exceed, and they already have now, the 2008 gas price level of $4 a gallon that it will force potato farmers out of business or force them to make substantial cutbacks?

Mr. EHRLICH. Well, I think that they will definitely have to cut back but I think the key to that is the price of potatoes. This year the price of potatoes is quite high, as all commodity prices are. As a matter of fact, a lot of commodity prices are at all-time highs. Whether that is sustainable, history would tell us no. So I would say that they will definitely be hurt. If potato prices go back to last year’s levels, it will force producers out of production.

Chairman TIPTON. Mr. Richter, in your testimony you pointed out potential EPA regulations on avgas, which is still being used by the majority of agricultural aviators, and you noted that there is no viable alternative to avgas right now. If restrictions are put into place, would this effectively shut down a lot of our sprayers?

Mr. RICHTER. Yes, it would. It would definitely close some of the smaller businesses that are using piston-engine aircraft. What you have to understand is that the larger turbine aircraft are several times more expensive than the smaller ones, and if avgas were restricted or banned completely, you would probably see some of those operators going out of business because small businesses could not afford the larger turbine aircraft. And it would eventually have an effect on food prices in the end.

Mr. CRITZ.  Mr. Pingel, the trucking industry is starting to increase its use of alternative fuels such as natural gas, ethanol, and biodiesel. How does that work for the independent trucker? You mentioned and I know I have lots of small family trucking firms all around my district and when you are talking 3, 6, maybe 10 trucks, is it economically feasible for the small transportation company to move from strictly diesel to some sort of either mix or completely natural gas engine?

Mr. PINGEL. Some states, such as Minnesota, have mandated B5, with 5% biofuel.  The problem we ran into was during the winter, because biofuel has a tendency to gel up faster. So it is great during the summer. As for natural gas, the problem is range: my truck right now can go over 1,000 miles between fill-ups. You cannot carry enough natural gas to go that far, and the range on most of the natural gas trucks that I have seen is right around 300 miles. So you are stopping consistently more times.
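[ The range problem Pingel describes comes down to fuel volume: natural gas stores far less energy per gallon of tank than diesel. A rough sketch of the arithmetic; the 4x CNG and 1.7x LNG volume factors are my approximate assumptions, not figures from the hearing: ]

```python
# Tank volume a 1,000-mile range requires, in diesel-equivalent gallons
# (DGE). Assumptions: 7 miles per DGE; CNG needs ~4x diesel's volume
# per DGE, LNG ~1.7x.
range_miles = 1_000
miles_per_dge = 7
dge_needed = range_miles / miles_per_dge   # ~143 DGE

print(dge_needed * 1.0)   # diesel: ~143 gallons; twin 135-gal tanks
                          # (270 gal) give ~1,890 miles
print(dge_needed * 1.7)   # LNG: ~243 gallons of tank volume
print(dge_needed * 4.0)   # CNG: ~571 gallons of tank volume, which is
                          # impractical, hence the ~300-mile NG truck range
```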

Chairman TIPTON. The keystone of this strategy is American oil from American soil. By allowing increased domestic drilling within our borders and within our waters in the near term, we can reduce our significant dependence on foreign oil while enabling other cleaner, more sustainable fuels, such as natural gas and biofuels, to be further explored and better integrated into our society.

 


Nuclear reactor problems in the news

[ Although safety and disposal of nuclear waste ought to be the main reasons why no more plants should be built, what will really stop them is that it takes years to get permits, and $8.5–$20 billion in capital must be raised for a new 3400 MW nuclear power plant (O’Grady 2008). That is almost impossible when a much cheaper and much safer 3400 MW natural gas plant can be built for $2.5 billion in half the time or less.

U.S. nuclear power plants are old and in decline. By 2030, nuclear power might supply just 10% of U.S. electricity, half of the 20% it supplies now, because 38 reactors producing a third of nuclear power are past their 40-year life spans, and another 33 reactors producing a third of nuclear power are over 30 years old. Although some will have their licenses extended, 37 reactors that produce half of nuclear power are at risk of closing because of economics, breakdowns, unreliability, long outages, safety, and expensive post-Fukushima retrofits (Cooper 2013).

If you’ve read the nuclear reactor hazards paper or my summary of it, then you understand why there will continually be accidents like Fukushima and Chernobyl.  That makes investors and governments fearful of spending billions of dollars to build nuclear plants.

Nor, as oil declines, will people be willing to use precious oil to build a nuclear power plant that could take up to 10 years to complete, when that oil will be more needed for tractors to plant and harvest food and for trucks to deliver the food to cities (electric power can’t do that; tractors and trucks have to run on oil).

And if we are dumb enough to try, we’ll smack into the brick wall of Peak Uranium.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report ]

Nuclear Safety in the news

The International Atomic Energy Agency is supposed to keep track of all the nuclear incidents in the world, but if you go to their incident report page, you’ll notice that the Turkey Point reactor issues in the March 22, 2016 article below aren’t mentioned, and the British newspaper The Guardian also says the IAEA list is incomplete. Wikipedia is very much out of date, but has some fairly long lists of nuclear problems.  The NRDC has a good deal of information, for instance their article “What if the Fukushima nuclear fallout crisis had happened here?”, where you can see how hard hit your home would be if the nearest nuclear reactor had a similar level of disaster.

Ambellas, S. January 6, 2017. Overwhelmed Massachusetts nuclear power plant spikes with radiation. The Pilgrim Nuclear Power Plant has spiked with radiation to near alert levels alarming officials. infowars.com

Alvarez, L. March 22, 2016. Nuclear Plant Leak Threatens Drinking Water Wells in Florida. New York Times.

Turkey Point, in Florida, is the culprit; it is also mentioned as one of the 37 plants at risk of closing in Cooper’s report.

April 2014 ASPO newsletter

“Nuclear power is probably the biggest asset we have in the fight against climate change…But I’m a business guy and I’m a pragmatist, and there’s no future for nuclear in the United States. There’s certainly no future for new nuclear… [Very few know] how close the system came to collapsing in January because everyone wants to go to natural gas and there wasn’t enough natural gas in the system.  The purpose of having old coal plants, to be frank, is keeping the lights on for the next three, five, 10 years…I’m not anti-utilities, I’m not anti-nuclear, I’m not anti-coal, I’m just anti-bullshit.” — David Crane, CEO of NRG Inc., the U.S.’ largest independent power generator

Matthew Wald. 8 Jun 2012. Court Forces a Rethinking of Nuclear Fuel. New York Times.

The Nuclear Regulatory Commission acted hastily in concluding that spent fuel can be stored safely at nuclear plants for the next century or so in the absence of a permanent repository, and it must consider what will happen if none is ever established, a federal appeals court ruled on Friday.  The commission made its faulty decision so that the operating licenses of dozens of power reactors (and 4 new ones) could be extended.

The three-judge panel unanimously decided that the commission was wrong to assume nuclear fuel would be safe for many decades without analyzing the actual reactor storage pools individually across the nation. Nor did it adequately analyze the risk that cooling water might leak from the pools or that the fuel could ignite.

22 May 2012. Severe Nuclear Reactor Accidents Likely Every 10 to 20 Years, European Study Suggests. ScienceDaily

Catastrophic nuclear accidents such as the core meltdowns at Chernobyl and Fukushima are more likely to happen than previously assumed. Based on the operating hours of all civil nuclear reactors and the number of nuclear meltdowns that have occurred, scientists at the Max Planck Institute for Chemistry have calculated that such events may occur once every 10 to 20 years, some 200 times more often than estimated in the past. The researchers also determined that 50% of the radioactive caesium-137 would be spread more than 1,000 kilometres from the nuclear reactor, and 25% more than 2,000 kilometres. Their results show that Western Europe is likely to be contaminated about once in 50 years by more than 40 kilobecquerels of caesium-137 per square meter. According to the International Atomic Energy Agency, an area is defined as contaminated with radiation from this amount onwards. In view of their findings, the researchers call for an in-depth analysis and reassessment of the risks associated with nuclear power plants.  Currently, there are 440 nuclear reactors in operation, and 60 more are planned.

Citizens in the densely populated southwestern part of Germany run the worldwide highest risk of radioactive contamination. If a single nuclear meltdown were to occur in Western Europe, around 28 million people on average would be affected by contamination of more than 40 kilobecquerels per square meter. The figure is even higher in southern Asia, due to the dense population: a major nuclear accident there would affect around 34 million people, while in the eastern USA and in East Asia it would be 14 to 21 million people.

Reference: J. Lelieveld, et al. Global risk of radioactive fallout after major nuclear reactor accidents. Atmospheric Chemistry and Physics, 2012; 12 (9): 4245
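[ At its core, the Max Planck estimate is a simple frequency calculation: observed meltdowns divided by accumulated reactor operating years, scaled to the operating fleet. A sketch of that logic; the meltdown count and cumulative reactor-years below are my illustrative assumptions, not the paper’s exact inputs: ]

```python
# Crude meltdown-frequency estimate in the spirit of Lelieveld et al.
core_meltdowns = 4        # assumed: Chernobyl plus three Fukushima cores
reactor_years = 14_500    # assumed cumulative civil reactor operating years

risk_per_reactor_year = core_meltdowns / reactor_years   # ~2.8e-4
fleet = 440               # reactors in operation, per the article

print(1 / (risk_per_reactor_year * fleet))   # ~8 years between events,
                                             # the same order of magnitude
                                             # as the study's 10 to 20
```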

Smith, Rebecca. 4 Feb 2012. Worn Pipes Shut California Reactors.  Wall Street Journal. The two reactors at the San Onofre nuclear-power station near San Clemente, Calif., will remain shut down this weekend while federal safety officials investigate why critical, and relatively new, equipment is showing signs of premature wear.  Components in nuclear plants are subjected to extreme heat, pressure, radiation, and chemical exposure, all of which can take a toll on materials.  Commission inspectors say they have also found problems with hundreds of steam tubes at the plant’s other reactor.   Experts say the closures may signal a broader problem for the nuclear industry, which has been trying to reassure Americans that its aging reactors are safe in the wake of last year’s disaster at the Fukushima Daiichi plant in Japan. According to NRC spokesman Victor Dricks, two pipes had lost 35% of their wall thickness in just two years of service, and most, about 800, had lost 10% to 20% of wall thickness. The pipes are about three-quarters of an inch in diameter.

Munson, R. 2008. From Edison to Enron: The Business of Power and What It Means for the Future of Electricity. Praeger.

Cost overruns on reactors nearly drove some power companies into bankruptcy.   In 1984 the Department of Energy calculated more than 75% of reactors cost at least double the estimated price.

Utility WPPSS in Washington state defaulted, scaring investors, who once thought there’d be over a thousand reactors running by 2000 with electricity too cheap to meter.  In fact, only 82 plants existed in 2000 and power prices soared 60% between 1969 and 1984 due to the cost overruns.

Nuclear executives tried to blame their problems on too much regulation and on environmentalists, but regulations only came after reactors began to break down.   Intense radiation and high temperatures caused pipes, valves, tubes, fuel rods, and cooling systems to crack, corrode, bend, and malfunction.  Only then was the Nuclear Regulatory Commission (successor to the Atomic Energy Commission) created to regulate nuclear power facilities.

Munson lists quite a few problems, but you should search on “Nuclear Reactor Hazards: Ongoing Dangers of Operating Nuclear Technology in the 21st Century” to get a really good understanding of the magnitude of failures despite regulation.  Indeed, even the Wall Street Journal was forced to admit at one point that reactor troubles “tell the story of projects crippled by too little regulation, rather than too much.”

Some of this stemmed from nuclear engineers seeing uranium as just a complicated way to boil water.  But a reactor is not simple: there are over 40,000 valves, the fuel rods reach temperatures over 4,800°F, and the nuclear reactions are not easy to contain.

Management was poor as well, with Forbes magazine calling the U.S. nuclear program “the largest managerial disaster in business history, a disaster on a monumental scale.”

 

References

Cooper, M. 2013. Renaissance in reverse: Competition pushes aging U.S. Nuclear reactors to the brink of economic abandonment. South Royalton: Vermont Law School.

O’Grady, E. 2008. Luminant seeks new reactor. London: Reuters.

 

 


Electric Cars and Biofuels switch dependence from foreign oil to domestic water and weather risks

Water intensity of transportation

 

Figure 1. Energy/Water Nexus. Source: Amy Hardberger, Matthew E. Mantell, Michael Webber, Carey W. King, Karl Fennessey

[ This Senate hearing covers a lot of ground. I found the most interesting testimony to be the intersection of water and energy, which I’ve summarized and paraphrased based on what Michael E. Webber at the University of Texas had to say (as well as other research):

Generating electricity for electric vehicles will use a lot of water.  Nuclear, coal, natural gas, and biomass fuels are the largest users of water in the United States – 49% of all water withdrawals (including saline), and 39% of all freshwater withdrawals – the same amount used by agriculture.  Because most power plants in the U.S. electric grid use a lot of cooling water, electricity is about twice as water-intensive as gasoline per mile traveled.  But unconventional fossil fuels such as oil shale, coal-to-liquids, gas-to-liquids, and tar sands require significantly more water to produce than gasoline, which only requires about 0.2 gallons of water per mile traveled.

Irrigated biofuels from corn or soy can consume 100 to 500 times more water than gasoline: 20 to 100 or more gallons of water for every mile traveled.  By switching from imported petroleum to domestic biofuels, we are essentially substituting domestic water for petroleum.  This may reduce oil price volatility, but we exchange that for risks to the production of biofuels – drought, floods, severe storms, and other calamities from climate change and weather.

Alice Friedemann   www.energyskeptic.com  author of "When Trucks Stop Running: Energy and the Future of Transportation", 2015, Springer and "Crunch! Whole Grain Artisan Chips and Crackers". Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report ]

Senate 112-25. March 31, 2011. Hydropower. U.S. Senate hearing.  92 pages.

Excerpts:

SENATOR JEFF BINGAMAN, NEW MEXICO, CHAIRMAN.  Today we hear testimony regarding 3 pieces of legislation—S. 629, which is the Hydropower Improvement Act of 2011, S. 630, which is the Marine and Hydrokinetic Renewable Energy Promotion Act of 2011, and also the energy and water integration provisions from Title I, Subtitle D, of ACELA, the American Clean Energy Leadership Act of 2009, which was S. 1462 in the previous Congress. Today we will hear from administration and other witnesses about the potential we have to produce more hydropower in this country through improved efficiency at existing hydropower facilities and adding hydropower capabilities to existing structures. Developing additional energy from hydropower can help to decrease our dependence on fossil fuels.  Developing new policies that integrate energy and water solutions will become increasingly vital as populations grow and environmental needs increase, and a changing climate continues to affect our energy and water resources.

MICHAEL E. WEBBER, PH.D., Assistant Professor, Department of Mechanical Engineering, Assoc. Director, Center for International Energy & Environmental Policy,  UNIVERSITY OF TEXAS AT AUSTIN

My testimony today will make these main points: 1. Energy and water are interrelated, 2. The energy-water relationship is already under strain, 3. Trends imply these strains will be exacerbated

In California, where water is moved hundreds of miles across two mountain ranges, water is responsible for more than 19% of the state’s total electricity consumption.

Similarly large investments of energy for water occur wherever water is scarce and energy is available. In addition to using energy for water, we also use water for energy. We use water directly through hydroelectric power generation at major dams, indirectly as a coolant for thermoelectric power plants, and as a critical input for the production of biofuels. The thermoelectric power sector (power plants that use heat to generate power, including those that operate on nuclear, coal, natural gas or biomass fuels) is the single largest user of water in the United States. Cooling of power plants is responsible for the withdrawal of nearly 200 billion gallons of water per day. This use accounts for 49% of all water withdrawals in the nation when including saline withdrawals, and 39% of all freshwater withdrawals, which is about the same as for agriculture.

Nuclear is the most water-intensive, while solar PV, wind, and some uses of natural gas are very water lean.

The Energy-Water Relationship Is Already Under Strain

Unfortunately, the energy-water relationship introduces vulnerabilities whereby constraints of one resource introduce constraints in the other. For example, during the heat wave in France in 2003 that was responsible for approximately 10,000 deaths, nuclear power plants in France had to reduce their power output because of the high inlet temperatures of the cooling water. Environmental regulations in France (and the United States) limit the rejection temperature of power plant cooling water to avoid ecosystem damage from thermal pollution (e.g. to avoid cooking the plants and animals in the waterway). When the heat wave raised river temperatures, the nuclear power plants could not achieve sufficient cooling within the environmental limits, and so they reduced their power output at a time when electricity demand was spiking by residents turning on their air conditioners. In this case, a water resource constraint became an energy constraint.

In addition to heat waves, droughts can also strain the energy-water relationship. During the drought in the southeastern United States in early 2008, nuclear power plants were within days or weeks of shutting down because of limited water supplies. Today in the West, a severe multi-year drought has lowered water levels behind dams, reducing output from their hydroelectric turbines. In addition, power outages hamper the ability of the water/wastewater sector to treat and distribute water.

Trends Imply These Strains Will Be Exacerbated

While the energy-water relationship is already under strain today, trends imply that the strain will be exacerbated unless we take appropriate action. There are four key pieces to this overall trend:

  1. Population growth, which drives up total demand for energy and water,
  2. Economic growth, which can drive up per capita demand for both energy and water,
  3. Climate change, which intensifies the hydrological cycle, and
  4. Policy choices, whereby we are choosing to move towards more energy-intensive water and more water-intensive energy.

Population Growth Will Put Upward Pressure on Demand for Energy & Water

Population growth might add another 100 million people in the United States over the next four decades, each of whom will need energy and water to survive and prosper. This fundamental demographic trend puts upward pressure on demand for both resources, thereby potentially straining the energy-water relationship further.

Economic Growth Will Put Upward Pressure on Per Capita Demand for Energy & Water

On top of underlying trends for population growth is an expectation of economic growth. Because personal energy and water consumption tend to increase with affluence, there is a risk that per capita demand for energy and water will rise with economic growth. For example, as people become wealthier they tend to eat more meat (which is very water intensive), and to use more energy and water to air condition large homes or irrigate their lawns. Also, as societies become richer, they often demand better environmental conditions, which implies spending more energy on wastewater treatment. However, it's important to note that efficiency and conservation measures can occur alongside economic growth, thereby counteracting the nominal trend toward increased per capita consumption of energy and water. At this point, looking forward, it is not clear whether technology, efficiency and conservation will continue to mitigate the upward pressure on per capita consumption that is a consequence of economic growth. Thus, it's possible that the United States will face a compounding effect: increased consumption per person on top of a growing number of people.

Climate Change Is Likely To Intensify Hydrological Cycles

One of the important ways climate change will manifest itself is through an intensification of the global hydrological cycle. This intensification is likely to mean more frequent and severe droughts and floods along with distorted snow melt patterns. Because of these changes to the natural water system, it is likely we will need to spend more energy storing, moving, treating and producing water. For example, as droughts strain existing water supplies, cities might consider production from deeper aquifers, poorer-quality sources that require desalination, or long-haul pipelines to get the water to its final destination. Desalination in particular is energy-intensive, as it requires approximately ten times more energy than production from nearby surface freshwater sources such as rivers and lakes.
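[A rough sense of where that factor of ten comes from. The specific energy intensities below are my illustrative assumptions, not Webber's; the testimony states only the tenfold ratio:]

# Sketch: illustrative energy intensities of water supply, in kWh per cubic meter.
# These values are assumptions for illustration only.
SURFACE_TREATMENT_KWH_PER_M3 = 0.35   # conventional treatment of river or lake water
SEAWATER_RO_KWH_PER_M3 = 3.5          # seawater reverse-osmosis desalination

print(SEAWATER_RO_KWH_PER_M3 / SURFACE_TREATMENT_KWH_PER_M3)   # 10.0, matching "approximately ten times"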

Policy Choices Exacerbate Strain in the Energy-Water Nexus

On top of the prior three trends is a policy-driven movement towards more energy-intensive water and water-intensive energy. We are moving towards more energy-intensive water because of a push by many municipalities for new supplies of water from sources that are farther away and lower quality, and thereby require more energy to get them to the right quality and location. At the same time, for a variety of economic, security and environmental reasons, including the desire to produce a higher proportion of our energy from domestic sources and to decarbonize our energy system, many of our preferred energy choices are more water-intensive.

Nuclear energy is produced domestically, but is also more water-intensive than other forms of power generation.

The move towards more water-intensive energy is especially relevant for transportation fuels such as unconventional fossil fuels (oil shale, coal-to-liquids, gas-to-liquids, tar sands), electricity, hydrogen, and biofuels, all of which can require significantly more water to produce than gasoline (depending on how you produce them).

Almost all unconventional fossil fuels are more water-intensive than domestic, conventional gasoline production. While gasoline might require a few gallons of water for every gallon of fuel that is produced, the unconventional fossil sources are typically a few times more water-intensive.

Most power plants use a lot of cooling water, and consequently electricity can also be about twice as water-intensive as gasoline per mile traveled if the electricity is generated from the standard U.S. grid.

Though unconventional fossil fuels and electricity are all potentially more water-intensive than conventional gasoline by a factor of 2-5, biofuels are particularly water-intensive. Growing biofuels consumes approximately 1000 gallons of water for every gallon of fuel that is produced. Sometimes this water is provided naturally from rainfall. However, for a non-trivial and growing proportion of our biofuels production, that water is provided by irrigation.

Note that for the sake of analysis and regulation, it is convenient to consider the water requirements per mile traveled. Doing so incorporates the energy density of the final fuels plus the efficiency of the engines, motors or fuel cells with which they are compatible.

Conventional gasoline requires approximately 0.2 gallons of water per mile traveled, while irrigated biofuels from corn or soy can consume 20 to 100 or more gallons of water for every mile traveled. If we compare the water requirements per mile traveled with projections for future transportation miles and combine those figures with mandates for the use of new fuels, such as biofuels, the water impacts are significant.
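[The per-mile figures follow from dividing water per gallon of fuel by fuel economy. A minimal sketch; the water intensities come from the testimony, but the fuel-economy numbers are my assumptions:]

# gal_water_per_mile = gal_water_per_gal_fuel / miles_per_gallon
def water_per_mile(gal_water_per_gal_fuel, miles_per_gallon):
    return gal_water_per_gal_fuel / miles_per_gallon

print(water_per_mile(5, 25))     # gasoline: ~5 gal water/gal fuel at an assumed 25 mpg -> 0.2 gal/mile
print(water_per_mile(1000, 20))  # irrigated corn ethanol: ~1000 gal water/gal fuel -> 50 gal/mile, inside the 20-100 range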

Water consumption might go up from approximately one trillion gallons of water per year to make gasoline (with ethanol as an oxygenate), to a few trillion gallons of water per year.

To put this water consumption into context, each year the United States consumes about 36 trillion gallons of water. Consequently, it is possible that water consumption for transportation will more than double from less than 3% of national use to more than 7% of national use. In a time when we are already facing water constraints, it is not clear we have the water to pursue this path. Essentially we are deciding to switch from foreign oil to domestic water for our transportation fuels, and while that might be a good decision for strategic purposes, I advise that we first make sure we have the water.
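[Checking the share-of-national-use arithmetic; 2.5 trillion gallons is my illustrative reading of "a few trillion":]

US_WATER_USE_TRILLION_GAL = 36   # annual U.S. water consumption, from the testimony
print(f"{1.0 / US_WATER_USE_TRILLION_GAL:.1%}")   # ~2.8%: today's ~1 trillion gallons per year for gasoline
print(f"{2.5 / US_WATER_USE_TRILLION_GAL:.1%}")   # ~6.9%: "a few trillion" gallons for the alternative fuels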

Unfortunately, there are some policy pitfalls at the energy-water nexus. For example, energy and water policy making are disaggregated. The funding and oversight mechanisms are separate, and there are a multitude of agencies, committees, and so forth, none of which have clear authority. It is not unusual for water planners to assume they have all the energy they need and for energy planners to assume they have the water they need. If their assumptions break down, it could cause significant problems. In addition, the hierarchy of policymaking is dissimilar. Energy policy is formulated in a top-down approach, with powerful federal energy agencies, while water policy is formulated in a bottom-up approach, with powerful local and state water agencies. Furthermore, the data on water quantity are sparse, error-prone, and inconsistent. The United States Geological Survey (USGS) budgets for collecting data on water use have been cut, meaning that their latest published surveys are anywhere from 5 to 15 years out of date. National databases of water use for power plants contain errors, possibly due to differences in the units, format and definitions between state and federal reporting requirements. For example, the definitions for water use, withdrawal and consumption are not always clear. And, water planners in the east use "gallons" while water planners in the west use "acre-feet," introducing additional risk of confusion or mistakes.

Energy for Water—US public water supply requires 4% of national energy and 6% of national electricity consumption

The energy-water relationship is already under strain: constraints are cross-sectoral
• Heat waves and droughts can constrain energy
• Energy outages can constrain water

SENATOR BINGAMAN. Your testimony highlights the need to investigate the water supply needs associated with electricity generation AND transportation fuels, which our legislation seeks to do. You have also indicated that a "switch from gasoline to electric vehicles or biofuels is a strategic decision to switch our dependence from foreign oil to domestic water".

MICHAEL E. WEBBER. Today, petroleum-based fuels supply more than 95% of our energy for transportation. Because of converging desires to switch to lower-carbon, less volatile, and domestic forms of transportation fuels, a variety of policy mechanisms support the displacement of imported petroleum with electricity, biofuels, unconventional fossil fuels, hydrogen, and natural gas. In general, gasoline and diesel are relatively water-lean to produce. By contrast, most of the alternative transportation fuels (in particular biofuels, unconventional fossil fuels, some forms of electricity, and some forms of hydrogen) are more water-intensive. Thus, by switching from imported petroleum to these domestic options, we are essentially substituting the use of domestic water for petroleum. While this tradeoff has important strategic benefits, it can be problematic from a water resources perspective.

SENATOR BINGAMAN. Many of us are familiar with the concept of "peak oil". Can you please elaborate on the concept of "peak water"?

MICHAEL E. WEBBER. "Peak Water" is a reference to the concept of declining production rates for fresh water. In contrast with "Peak Oil," which refers to a finite resource (petroleum), water is very abundant globally. However, most of that water is available in a form, location, or time of year that is inconvenient or unusable for many people. Consequently, significant amounts of energy are invested to move that water in place, time and form (through pipelines, storage reservoirs and treatment plants) such that it is clean, potable, and available when and where we want it. If energy sources become constrained or prohibitively expensive, then clean, piped water might also become constrained or prohibitively expensive in certain locations or at particular times of year. Consequently, "Peak Energy" could trigger a decline in production of freshwater.

Traditional steam-electric (or thermoelectric) power plants, including many of those powered by nuclear, coal, biomass, natural gas, or concentrated solar power, use extensive amounts of water for cooling. Locating these power plants in arid or semi-arid regions, where water resources are scarce, exposes the plants to the risk that they will compete with other municipal, agricultural, industrial or ecological needs for that water. Ensuring that the water needs will be met by the power plants will be challenging if conventional cooling technologies and freshwater sources are used. However, novel dry-cooling and wet-dry-hybrid cooling systems require much less water for power plants, and therefore might be a promising option. For example, some new concentrated solar power systems that use dry cooling have been proposed in Nevada. While these types of systems significantly reduce the amount of water that is needed by power plants, they have a tradeoff of 1) requiring more capital up front to build the cooling systems and 2) reducing the operating efficiency of the power plant. Other options include the use of reclaimed water or saline water for cooling, or building power plants with water-lean combinations of fuels and technologies, such as solar PV, wind turbines, and natural gas simple cycle combustion turbines.

Generally speaking, the northern latitudes of the U.S. have more abundant sources of water available. However, even "water-rich" regions of the country can be exposed to periods of drought. In addition, water abundance can lead to flooding, which also puts the energy sector at risk. Thus, the risk of water problems is widespread.

The energy sector’s growing water use, primarily for irrigating biofuels crops, provides a benefit of displacing some petroleum use, but introduces a risk of competition for water resources. By displacing petroleum, we reduce our exposure to oil price volatility tied to geopolitical events. However, we exchange those risks for water-related risks driven by climate and weather systems. These risks can show up in the form of higher energy prices, which can impact economic growth. Developing more energy-efficient water systems and more water-efficient energy systems can be economically beneficial because they mitigate the downside risks.

Building more energy-intensive water systems and more water-intensive energy systems exacerbates the exposure to risk.

Using reclaimed water or saline water at power plants reduces the need for freshwater in the power sector and can save on water costs for plant operators. Such systems have been built. For example, the Palo Verde nuclear power plant in Arizona and the Sand Hill natural gas power plant in Austin, Texas both use reclaimed water, and coastal nuclear power plants use saline water. However, these water sources can be more corrosive or cause mineral build-up, and thus might require more expensive piping and heat exchanger materials and additional maintenance. Furthermore, in some cases the use of reclaimed water requires permitting approval from relevant agencies and significant up-front capital-intensive infrastructure investments to connect reclaimed water sources from wastewater treatment plants to the electricity stations.

JOHN SEEBACH, DIRECTOR, HYDROPOWER REFORM INITIATIVE, AMERICAN RIVERS.   When it’s done wrong, hydropower can be far from clean. Hydropower is unique among renewable resources because of the scale at which it can damage the environment when it’s done poorly. Unless a hydropower dam is sited, operated, and mitigated appropriately, it can have enormous impacts on river health and the livelihoods of future generations that will depend on those rivers. Poorly done hydropower has caused some species to go extinct, and put others, including some with extremely high commercial value, at grave risk. That’s not something we should take lightly.

America is still blessed with many healthy, free-flowing watersheds, wetlands and floodplains that provide numerous services and values. We must preserve these intact systems and promote them as a vital part of our water supply and flood protection infrastructure. At the same time, we must rehabilitate rivers and streams that have been damaged by existing hydropower projects, and protect habitat from further degradation. A failure to improve the health of rivers now will doom more species to extinction as the world warms.

Hydrokinetic and Marine energy (S. 630) There has been a great deal of discussion about dam-less hydrokinetic technologies that use free-flowing rivers, waves, ocean currents, or other means to generate electricity. We have followed the development of instream hydrokinetic technologies closely. Moreover, since ocean and instream hydrokinetic technologies are often lumped together, we have participated in a number of policy discussions that have addressed both technologies. We are hopeful that these new technologies will eventually allow us to harness the power of moving water in a responsible manner that avoids the devastating impacts associated with dam-building. Unfortunately, there is still precious little information available about how these technologies interact in a natural setting. As of today, we are aware of only one instream hydrokinetic project that is currently licensed to generate in U.S. waters, and our understanding is that it is currently out of service. With so little information available, it is difficult to assess the environmental impacts of these technologies, let alone their commercial feasibility. We can only speculate as to what the costs and benefits of these technologies might be. It is clear, then, that there is a need for more testing, as well as for research into the potential environmental impacts and new and innovative ways that those impacts might be avoided. There is also a need for strong siting criteria that take into account environmentally sensitive areas or areas that are vital to economic activity (like transportation or commercial fishing), and consider the risk that the cumulative impacts of additional development may simply be too high in some watersheds that are already highly impacted by existing hydropower development.

Some of the potential environmental impacts of hydrokinetic energy technologies include (but are not limited to):
• Aquatic species' interaction with devices and anchoring systems (including marine mammals, sharks, fish, etc.). Potential risks include avoidance, behavior change, collision, entrainment, or mortality.
• Effects due to the removal of energy from waves and currents. Potential risks include altered sediment transport and changes in flow velocity, tidal exchange, and water quality.
• Effects of noise, vibration, lighting, EMF from transmission cables, and releases of chemicals (lubricants, oils, etc.) on aquatic and avian species.
• Effects of exclusion/restriction zones on recreation, navigation, commercial fishing, etc.

For a much more detailed discussion of some of these impacts, we recommend the U.S. Department of Energy’s Wind and Hydropower Technologies Program’s December 2009 ‘‘Report to Congress on the Potential Environmental Effects of Marine and Hydrokinetic Energy Technologies.’’

Mr. Steven Chalk, Chief Operating Officer and Acting Deputy Assistant Secretary for Renewable Energy at the Department of Energy.  The provisions being considered from ACELA address the interdependence of our energy and water consumption. Water is an integral component of many traditional and alternative energy technologies used for transportation, fuels production and electricity generation. Energy-related water demands are beginning to compete with other demands from population growth, agriculture and sanitation. This competition could become fiercer if climate change increases the risk of drought, making our water supply more vulnerable. The Department of Energy (DOE) has initiated many activities over the last few years to address this energy-water nexus.

About 45% of all hydropower in the United States is generated at Federally-owned facilities, providing clean, renewable power to the grid. DOE’s estimates indicate that there could be an additional 300 gigawatts of hydropower through efficiency and capacity upgrades at existing facilities, powering non-powered dams, new small hydro development and pumped storage hydropower.

Conventional hydropower represented 65% of U.S. renewable electricity generation in 2010, and 7% of total U.S. electricity generation. Conventional hydropower principally serves as a baseload electricity supply, but can also function as a dispatchable resource to balance variable renewable energy technologies such as wind and solar.

The Electric Power Research Institute indicated that its conservative estimate was that MHK power (from wave and tidal sources alone) could provide an additional 13,000 megawatts (MW) of capacity by 2025.

Power generation from thermal energy sources (which include coal, natural gas and nuclear energy) accounted for approximately 41% of U.S. freshwater withdrawals in 2005. Although most of the water withdrawn for cooling thermal power plants is subsequently returned to the source, this still can have disruptive effects on water flows and temperatures, which in turn negatively affect aquatic organisms, notably fish populations such as salmon.

We identify possibly 300 gigawatts of potential hydro. Roughly 12 gigawatts of capacity could come from upgrading the efficiency and capacity of existing hydropower facilities. A lot of these facilities are very old, so the turbines aren't very efficient; if we can put modern turbines in there, we could get probably about 12 gigawatts of power. If we look at existing dams, and there are 80,000 dams in the U.S., most of those are not powered. But we could probably get an additional 12 gigawatts from 595 of those dams if we put powerhouses on them, as long as it can be done in an environmentally sensitive way. The big potential, which we estimate at about 255 gigawatts, is in small hydro, and this potential is all over the country. In fact, there are 90 gigawatts of small hydro potential in Alaska alone. Incredible potential. Most of these locations have less than 5 megawatts of potential. So, that's where most of the growth could occur if we were looking to grow hydropower.

Then the last area is pump storage, which really is more of a capacity thing than energy. It actually uses more energy, because you have to pump the water back up the hill, and then it takes more energy to do that than you get when you need the power. But this is really important for backstopping and firming up intermittent renewables like wind and solar. So, this is a really important area. We estimate there’s roughly about another 34 gigawatts of this type of power that’s available.
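[Tallying Chalk's numbers; the components sum slightly above his round "300 gigawatts":]

# Tally of DOE's stated hydropower potential (gigawatts), from the testimony above.
potential_gw = {
    "upgrades at existing hydropower plants": 12,
    "powering ~595 of the 80,000 existing dams": 12,
    "small hydro (mostly sites under 5 MW)": 255,
    "pumped storage (capacity, not net energy)": 34,
}
print(sum(potential_gw.values()))   # 313 GW, consistent with "possibly 300 gigawatts"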

The marine and hydrokinetic portion has gone down a little bit, but the marine and hydrokinetic devices are really where the wind program was 20 years ago. These device designs are just emerging. There's been very little open water testing; there is almost nothing in the water like the wind farms you have today, which we call "arrays." So, we feel like the amount of money that we're putting into the marine and hydrokinetic is the right amount for the current state of development, which is rather immature.

There are a lot of synergies between offshore wind and some of these offshore water devices. Materials, for instance. We have to use composite materials to prevent erosion, corrosion, and other similar phenomena. A major barrier is ensuring that we have the transmission for offshore wind, and for these smaller ocean or wave or tidal devices. Perhaps they could be tied together. How to finance that transmission, and how to go about installing it would actually be a significant hurdle that we would have to address.

If you look at the challenges in siting a solar thermal plant, which has a steam cycle to produce the energy, or a geothermal plant in the desert where there's no access to water, you have to come up with what we call dry cooling. You have to minimize water use. That's a tough R&D challenge, because as you do that, a lot of times you reduce your efficiency in producing electricity. In biomass, for instance, if we're going to grow sustainable energy crops, it's a requirement that we use very little water, not like the irrigated corn we have today. We have to grow those crops with virtually just natural rainfall.

DOE’s pumped storage hydropower (PSH) initiative is focused on integrating variable renewable resources and identifying and addressing the barriers to deployment in the United States. In September 2010, DOE sponsored a PSH workshop where experts from the industry, manufacturers, laboratories, environmental groups, and government agencies were convened to identify the major PSH deployment barriers. The barriers identified in this workshop include permitting time and cost, lack of models that identify the full value of PSH, lack of uniform markets for ancillary services provided by PSH, high capital cost, and long payback period.

The CHAIRMAN. So, from your perspective, it's not so much that hydropower is more expensive than natural gas; it's not.

Mr. MUNRO. Right.

The CHAIRMAN. But, it just takes so much longer to get the permits and to get it constructed, and online.

Mr. MUNRO. That’s true. Also, gas is a firm—it’s a real firm resource, meaning it’s there when you need it.

JEFF C. WRIGHT, DIRECTOR, OFFICE of ENERGY PROJECTS, FEDERAL ENERGY Regulatory Commission. The Commission regulates over 1,600 non-Federal hydropower projects at over 2,500 dams pursuant to Part I of the Federal Power Act, or FPA. Together, these projects represent 54 gigawatts of hydropower capacity—more than half of all the hydropower in the U.S. The FPA authorizes the Commission to issue licenses and exemptions for projects within its jurisdiction. About 71 percent of the hydropower projects regulated by the Commission have an installed capacity of 6 megawatts or less.

MICHAEL L. CONNOR, COMMISSIONER, BUREAU OF RECLAMATION, DEPARTMENT of the INTERIOR

Hydropower is very flexible and reliable when compared to other forms of generation. Reclamation has nearly 500 dams and dikes and 10,000 miles of canals and owns 58 hydropower plants, 53 of which are operated and maintained by Reclamation. On an annual basis, these plants produce an average of 40 million megawatt-hours (MWh) of electricity, enough to meet the entire electricity needs of over 9 million people on average. Reclamation is the second largest producer of hydroelectric power in the United States, and today we are actively engaged in looking for opportunities to encourage development of additional hydropower capacity at our facilities.
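[A quick per-capita check of Reclamation's figures:]

# 40 million MWh per year spread over 9 million people:
print(40e6 / 9e6)   # ~4.4 MWh per person per year, roughly in line with U.S. residential use per capita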

 

 

Posted in Drought, Energy Production, Hydropower, Transportation, Water

Current energy security challenges 2009 U.S. Senate hearing

[ Here are a few quotes from this 2009 Senate hearing on “Current energy security challenges”:

Eric Schwartz, member, Energy Security Leadership Council: "Air transport, long-haul freight shipping, and heavy-duty trucks are not likely to be candidates for electrification…. Despite some initial signs that consumer behavior had changed over the summer, the Council is convinced that with prices back at a more palatable level, this country will return to its profligate use of oil. Indeed, early evidence supports my assertion: new vehicle sales once again shifted in favor of SUVs in December of 2008, for the first time since February of 2008. On New Year's Day, the Financial Times reported that U.S. sales of hybrid vehicles were down 53% in November compared to one year ago, and the decline is expected to steepen over the coming months…. Deteriorating U.S. energy security is largely due to the nearly complete absence of transportation fuel diversity…. What we must not do is continue to put off the hard choices while clinging to the tired rhetoric of 'energy independence' and the inert sloganeering of 'drill baby drill.'… To the extent that the public loses interest in energy security as a result of low fuel prices, it is difficult to sustain support for sound energy policies. Then, by the time we face a 'crisis,' it is too late to act."

Karen A. Harbert, VP Institute for 21st century Energy: “It is a simple fact that for the next several decades much of the energy needed to power economic growth will likely be supplied by fossil fuels.  Comprehensive energy reform cannot be done with an eye toward 2-year political cycles; it must be done with an eye toward the next 20 or 30 years.”

Kit Batten Ph.D., Senior Fellow, Center for American Progress Action Fund.  “In 2008 two studies published in Science criticized the use of biofuels, particularly corn-based ethanol, as causing more greenhouse gas emissions than conventional fuels. The studies also note that clearing natural habitats to grow crops for biofuels generally leads to more carbon emissions, and that clearing large areas of land in general can lead to food and water shortages and reduced biodiversity….The fastest, cheapest way to reduce our oil dependence is to reduce demand…. The Apollo and Manhattan Projects are sometimes held up as models of innovation to be emulated, but the energy innovation challenge is fundamentally different because it requires the private sector to adopt new technologies that can succeed in the competitive marketplace. These were not considerations in our country’s efforts to put a man on the moon or to build a nuclear weapon.”

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation, 2015, Springer]

Senate 111-2. January 8, 2009. Current energy security challenges. U.S. Senate Hearing.  103 pages.

Excerpts:

Senator Jeff Bingaman, New Mexico. Obviously energy policy is intimately interconnected with the state of our economy. I think we all know that. We see it at every turn. The historic oil price increase that we experienced last year was one of many factors that caused some of the economic difficulty we currently find ourselves in.

Senator Lisa Murkowski, Alaska.  We can’t begin to fix the economy without addressing the need to run our factories. How we’re going to power our cars. How we’re going to heat our homes. While we have seen lower gas prices that have provided some relief, we recognize that it’s only temporary until we can find a long term solution to our Nation’s dependence on foreign energy sources. That’s one of the reasons we’re here this morning to consider the proposals to address the nation’s tremendous energy security challenges. We’ve got to find ways to power our lives that are cleaner, that are more efficient and of course, more environmentally protective. We know that this is not an easy task. If it was easy we would have figured it out by now.

We hope that what we will hear from you this morning will help us as we work to craft yet another comprehensive energy bill. We need to show real leadership in developing legislation that builds this bridge to our energy future while helping to right the economy here. The 2005 Energy Policy Act and the 2007 Energy Independence and Security Act did a great deal to advance our nation's energy policy. We championed clean energy resources, like wind and nuclear. We increased the CAFE standards. We promoted biofuels. We directed the Federal Government to lead on conservation issues. Then last year the Congress addressed production by lifting the moratorium on offshore leasing. We addressed such a magnitude of these issues in these bills that the Federal agencies are still implementing many aspects of them. We're still waiting for the nation's first offshore wind project to receive Federal approval. While many of the programs authorized by EPAct and EISA have not received appropriations yet, the stimulus package, which is under development, will likely fund a number of these existing authorizations, everything from making our electrical grids smarter to increasing R&D work on alternative technologies, to providing energy efficiency block grants to schools and local communities.

Eric Schwartz, member, Energy Security Leadership Council & former co-CEO of Goldman Sachs Asset Management. Our military members have commanded U.S. armed forces as they patrol the waterways and shipping lanes crucial to the global oil trade. They have been on the front lines of the battle against violent extremists, who are often funded by dangerous regimes awash in oil and natural gas revenue. And they have spent countless hours strategizing with American allies on the best approaches to safeguarding the thousands of miles of global energy infrastructure that is dangerously vulnerable to sabotage and political manipulation.

The Council’s companies ship goods and services around the world, linking together consumers and small businesses on every continent. They manage networks of data, financial and investing platforms, and they make it possible for Americans to travel easily across the country on a moment’s notice. It is because of their experience and their knowledge of the dangers posed by our energy security vulnerabilities that the members of the Energy Security Leadership Council have dedicated themselves to this issue.

In December 2006, the Council released a report entitled Recommendations to the Nation on Reducing U.S. Oil Dependence. The report laid out a comprehensive blueprint for energy security, including: demand reduction through reformed and increased fuel-economy standards; expanded production of alternatives; and increased domestic production of oil and natural gas. The Council collaborated with Senators Byron Dorgan (D-ND) and Larry Craig (R-ID) to design legislation incorporating the principal elements of the Recommendations. This resulted in the "Security and Fuel Efficiency Energy Act of 2007 (SAFE Energy Act)." In December 2007, Congress passed and President Bush signed into law an energy bill that honored the Recommendations by (1) dramatically reforming and strengthening fuel-economy standards and (2) mandating a Renewable Fuel Standard that will displace significant quantities of gasoline using advanced biofuels such as cellulosic ethanol.

The reality is this: our nation’s dependence on oil—much of it imported and the majority used in our transportation sector—still represents a grave threat to our economic and national security.

All of the Council’s members are acutely aware of the magnitude of the American energy challenge. We have seen first-hand how American oil dependence undermines U.S. foreign policy when our diplomats deal with oil exporters like Russia, Iran and Venezuela. We understand that America can never succeed in the war on terror as long as we fund both sides of the conflict. Speaking to you today as one of the Council’s business leaders, however, I must tell you that the threats posed to the U.S. economy by our dangerous dependence on oil are equally as dire as those posed to our national security. If we continue down the current path, economic weakness and decay at home will continue to threaten American power and influence abroad.

A typical subprime borrower with a poor credit history who bought a $200,000 house in 2006 with a 2 year/28 year ARM with a 4% teaser interest rate for the first 2 years would have seen monthly mortgage payments increase from about $950 a month before the reset to about $1,330 after the reset—an increase of about $4,500 a year. Meanwhile, the median household in America saw its household energy costs increase by roughly $1,600 a year during the same 2-year period. But this type of increase in energy costs affected all U.S. households—not just the one household in 20 that held a subprime mortgage. All of these developments stemming from higher oil prices caused a noticeable slowing of economic growth. The U.S. economy lost more than 700,000 jobs between December 2007 and the beginning of September 2008, and the unemployment rate increased from 4.5 to 6.1%—all before the financial crisis truly hit later in September. In fact, as early as last August, many economists believed the U.S. economy was already on the verge of recession, largely driven by sharply rising and volatile oil prices. This put banks and Wall Street firms in a weakened financial state, with sharply eroded profit positions, even before the credit situation reached its crisis point.
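[The payment jump is ordinary amortization arithmetic. A minimal sketch using the standard payment formula M = P*r / (1 - (1+r)**-n); the ~7.2% reset rate is my assumption, tuned to reproduce the ~$1,330 payment given in the testimony:]

# Monthly payment on a fixed-rate amortizing loan.
def monthly_payment(principal, annual_rate, months):
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

teaser = monthly_payment(200_000, 0.04, 360)   # ~$955/month during the 2-year teaser period
r = 0.04 / 12
# balance still owed after the 24 teaser payments:
balance = 200_000 * (1 + r) ** 24 - teaser * (((1 + r) ** 24 - 1) / r)
reset = monthly_payment(balance, 0.072, 336)   # ~$1,336/month over the remaining 28 years
print(round(teaser), round(reset), round((reset - teaser) * 12))   # 955 1336 4572: roughly $4,500/year more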

What is so striking about this series of events is its near inevitability—it was an entirely predictable disaster. Just as they warned of the impending collapse of mortgage institutions like Fannie Mae and Freddie Mac, experts also warned that global oil demand was rising unchecked while easy access to cost-effective oil supply was plateauing or falling. This basic dynamic eroded the practical buffer between world oil production capacity and daily oil consumption, leaving the oil market prone to damaging volatility. Despite these well-known dangers, the American economy continued to operate at risk, with almost no substitutes for petroleum products and very few alternatives to driving. Today, 97% of our transportation energy needs are met by petroleum, and the transportation sector accounts for 70% of U.S. oil consumption. Our mistakes have been costly. Sharply higher oil prices had a devastating effect on household, business, and public sector budgets, and effectively functioned as a tax on the economy. One recent estimate by researchers at the Oak Ridge National Laboratory placed the combined cost of foregone economic growth and economic dislocation at nearly $300 billion in 2008. Rising fuel prices also significantly weakened U.S. automakers, whose relatively inefficient but high-margin large vehicles were virtually unsellable for a period of several months.

Finally, the U.S. exported hundreds of billions of dollars to pay for imported oil. Based on initial estimates, the U.S. trade deficit in petroleum products probably reached an all-time high of $350 billion in 2008—exceeding the combined cost of the wars in Iraq and Afghanistan for that year.

This massive financial burden accelerated the deterioration of the American balance of payments and contributed to a weaker U.S. dollar. Today, oil prices are near the bottom of a record slide; $150-per-barrel oil and U.S. gasoline prices over $4 per gallon led to demand destruction, reinforced by the financial and economic crises and the resulting recession in which we today find ourselves. As the economy recovers, and drivers return to the roads, our dependence will once again put us at the mercy of rising oil and gas prices, particularly if the existing vehicle fleet is fundamentally the same as it is today.

Despite some initial signs that consumer behavior had changed over the summer, the Council is convinced that with prices back at a more palatable level, this country will return to its profligate use of oil. Indeed, early evidence supports my assertion: new vehicle sales once again shifted in favor of SUVs in December of 2008, for the first time since February of 2008. On New Year's Day, the Financial Times reported that U.S. sales of hybrid vehicles were down 53% in November compared to one year ago, and the decline is expected to steepen over the coming months.

To be blunt, we can no longer be slaves to the boom and bust cycle of oil prices.

Deteriorating U.S. energy security is largely due to the nearly complete absence of transportation fuel diversity. Not only are ever-greater amounts of oil required to fuel the U.S. transportation system, which is almost entirely dependent on oil, but the world oil market increasingly relies on supplies from hostile and/or unstable foreign producers.

Electrification of transportation would allow cars and light trucks to run on energy produced by a diverse set of sources—nuclear, natural gas, coal, wind, solar, geothermal and hydroelectric. The supply of each of these fuels is secure, and the price of each is less volatile than oil. In the process, electrification would shatter the status of oil as the sole fuel of the U.S. ground transportation fleet. In short, electrification is the best path to the fuel diversity that is indispensable to addressing the economic and national security risks created by oil dependence.

Of course, the transportation sector encompasses a broad range of components that extends beyond short-haul travel.

Air transport, long-haul freight shipping, and heavy-duty trucks are not likely to be candidates for electrification.

The Council, therefore, supports an aggressive program to develop and deploy third generation biofuels—identical on a molecular level to oil-based fuels—that can be used in air transport and heavy-duty trucks. These advanced biofuels can be transported using existing infrastructure and will substantially increase the flexibility of the broader transportation sector.

Central to the success of such an approach will be the manner in which we, as a nation, manage the consequences of oil dependence while we transition to electrification. The required upgrades in infrastructure and technology are on the order of a trillion dollars.

The weakest link in our nation’s electric power system is the transmission grid. The grid is currently insufficiently robust to support the unconstrained movement of power from generators to consumers, particularly location-constrained power (including renewables), and insufficiently reliable for an economy with a growing need for highly reliable power. Overburdened transmission lines increase the probability of service failures and prevent efficient redistribution of power from surplus to deficit regions. Recent studies of the transmission system have concluded that congestion on the transmission grid is costing consumers billions of dollars each year by preventing them from accessing low cost power.

Moreover, rather than constituting a national network, the transmission grid is in effect a patchwork that is not subject to the jurisdiction of any common regulator—indeed, some areas are wholly unregulated at the federal or state level. This balkanized structure makes it difficult to site and finance transmission lines.

The Council’s National Strategy suggests that national leaders must treat grid expansion as a national security imperative. Grid expansion is necessary to ensure the reliability of the grid in an environment of ever-growing demand for power, including that needed for short-haul transportation. Grid expansion also will be necessary to fully exploit the opportunities presented by wind and solar energy, production of which is most promising in sparsely populated areas distant from significant electrical loads, and nuclear power and coal with carbon sequestration, which are also location constrained, though to a lesser extent.

Shortly after the energy crisis of 1973, U.S. energy R&D soared from $2 billion annually to more than $14 billion, with public-sector investment peaking at just under $8 billion and private-sector investment topping out at nearly $6 billion. By 2004, private-sector energy R&D funding was below $2 billion and government funding had dropped to roughly $3 billion.

We not only must spend more, we must establish new institutions to help guide the spending to increase the effectiveness of our investment. Rather than channel the increased spending through the existing offices at the Department of Energy, with their attendant shortcomings, the Council supports the establishment of a new institution either inside or outside of DOE. This institution should be funded, at least in part, by an independent budget stream that avoids the annual earmarks and appropriations battles in Congress and interference by the Office of Management and Budget. Moreover, all funding should be distributed entirely on the basis of merit, while still maintaining the appropriate level of Congressional oversight. One division of the institution should be established to offer significant R&D grants-based support for early-stage research following a peer-review process that examines all grant requests on an ongoing basis. Another division of the institution should also provide financial assistance in a manner similar to a bank to support the deployment of new technologies, whether in the form of loan guarantees or other means that it deems appropriate. Without such institutional reforms, the Council remains skeptical that the United States can achieve the R&D progress necessary to transform our energy system.

If there are more severe and frequent oil price spikes, then the U.S. automobile sector cannot survive against foreign competitors positioned to offer consumers highly fuel efficient vehicles. Without change in the composition of products offered by the Detroit Three, each period of higher prices will be accompanied by an industry crisis and new demands for government intervention. At the same time, the United States has every interest in a competitive domestic automobile manufacturing sector, which cannot be easily or quickly replaced by foreign transplants in the event of the collapse of any significant portion of the domestic industry.

For the American companies to survive and make the transition to producing more fuel efficient vehicles, the public will have no choice but to provide meaningful assistance. Therefore, the National Strategy proposes an $8,000 tax credit for the first two million highly efficient vehicles sold in the United States. A similar measure was included in legislation passed by Congress in late 2008. The National Strategy also calls for direct assistance to the automakers to assist in their retooling to produce the transformative cars of the future. The Council recognizes that Congress provided some assistance last fall, but believes that additional assistance may be necessary in the future. This would not be limited to the Detroit Three, but to any automaker that produces cars in the United States.

The electrification of short-haul transport and the deployment of advanced biofuels will require a decades-long initiative characterized by a concentrated, sustained effort to improve national infrastructure and deploy advanced technologies in a market-friendly way. If properly executed, this process can produce a new U.S. transportation system that is fundamentally disconnected from oil dependence.

It will be critical for the Secretary of Transportation and the National Highway Traffic Safety Administration (NHTSA) to implement fuel-economy rules that give consideration to the seriousness of the national security threat facing the United States. By increasing standards for light-duty vehicles at a rate of 4% per year beyond 2020, U.S. oil consumption would be reduced by nearly 3.5 million barrels per day in 2030.
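[The 4% per year target compounds; a one-line check of what it implies for standards by 2030 (the barrels-per-day savings also depend on fleet turnover and miles traveled, which this toy calculation ignores):]

# Compounding a 4%/year fuel-economy increase over 2020-2030:
print(1.04 ** 10)   # ~1.48: standards roughly 48% higher by 2030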

EISA also mandated the issuance of fuel-economy standards for medium- and heavy-duty trucks for the first time in U.S. history. This structural reform is of great importance for reducing fuel demand in the transportation sector. However, the legislation did not set specific standards for these vehicles, as it did for cars and light trucks. Instead, the bill left NHTSA with statutory authority for setting the medium- and heavy-duty fuel-economy standard as part of its rule-making process. The Council continues to recommend that NHTSA pursue an aggressive and expeditious rule-making process with regard to medium- and heavy-duty trucks as part of implementing EISA and, where possible, consolidate and streamline statutorily required processes to result in maximum oil savings at the earliest possible date.

The proposal we have put forward represents a commitment to transforming our transportation systems. We can do this. We can end our transportation system’s reliance on petroleum.

Mr. SCHWARTZ. The issue with natural gas is the infrastructure required to use natural gas as the key source of fuel for transportation. We don't have it now, and it would cost trillions of dollars. But we already have broad distribution of electric power.

Over the long term, it is the Council’s position that the most effective means for achieving true energy security is the electrification of short-haul transportation. America’s cars and light-duty trucks consumed approximately 8 million barrels of oil per day in 2008, about 40% of the U.S. total. Aggressively transitioning this component of the vehicle fleet to high rates of electrification will dramatically reduce oil consumption and thereby reduce the oil intensity of the U.S. economy. The Council has outlined a number of policy steps the federal government must implement, including vehicle tax credits, increased R&D spending for batteries, and a substantial investment in electricity generation, transmission, and grid management. The Council recognizes that widespread electrified ground transport will require a dramatic shift in consumer choice, technology and infrastructure. This transformation will only be achieved if we commit to a decades-long, sustained national effort that leverages smart, aggressive public spending with private ingenuity and flexibility. If we as a nation take the necessary steps, reductions in oil consumption from electrification of short-haul travel will reach meaningful levels within the next two decades.

The global oil market is extremely susceptible to boom and bust cycles. Investment and operational decisions in key nations are uneven and inefficient, often based on short-term considerations. Therefore, the Council has long recognized the need for market-friendly standards and mandates in the United States, regardless of oil price. As long as oil prices fluctuate unpredictably, the nation faces a near-impossible investment climate for alternatives to oil and for technologies that use oil more efficiently.

Our national leadership must be mindful of the dangers of increasing electric power demand (from electrification) without providing for diverse sources of power generation. If current trends are allowed to persist, a great deal of incremental U.S. power generation could be derived from natural gas. Despite recent developments in onshore unconventional gas production, there remains a very real possibility that America will be forced to import greater quantities of liquefied natural gas (LNG) in the coming decades. We must not trade one national security risk for another.

As a general rule, greater stability and regulatory certainty are vital for businesses to thrive. According to the Baker Hughes rig count, roughly 40% of the active rigs in the world are exploring and producing in the United States, despite the fact that U.S. resources are among the most costly to develop in the world. In part, this is because the U.S. is the world's single largest market for petroleum products. However, it is also reflective of the fact that the United States currently maintains one of the most stable, favorable regulatory and tax environments in the world for oil and gas producers. At the same time, there is probably no more important factor than oil prices in determining the output of existing domestic oil wells. Roughly 20% of U.S. oil production currently derives from stripper wells, defined as those which produce less than 15 barrels of oil per day. A recent analysis from Sanford Bernstein suggested that the majority of this production is likely to shut down in 2009 as a result of today's low-price environment. Beyond the onshore stripper wells, deepwater production in the Gulf of Mexico is among the most expensive oil to produce in the world, with marginal cost estimated at $75 per barrel. In other words, oil prices at $40 per barrel put intense pressure on producers who are highly leveraged to such costly production. At a minimum, low oil prices are likely to force many operators to postpone investing in new, more costly production. It is also worth noting that the most promising growth in domestic natural gas production is derived from relatively costly shale, tight, and deep gas. As natural gas prices have collapsed in tandem with oil prices, domestic producers of unconventional gas have been forced to slash capital spending and re-evaluate future production plans. Over the long term, the secular price trend for oil and natural gas is clearly headed upward, but there will be many bumps along the road.

I would suggest that the most important thing our leaders can do is to move quickly to put policies in place that will promote energy security and safeguard the economy. We know from polling that Americans are not ideological on the energy issue. If presented with an honest assessment of the challenges we face, they support a realistic plan that balances efficiency and increased energy supply with a long-term transition away from oil and other fossil fuels to the extent feasible. What we must not do is continue to put off the hard choices while clinging to the tired rhetoric of "energy independence" and the inert sloganeering of "drill baby drill."

A truly reformed national energy system will require a sustained and concerted effort on the part of America’s political leaders. In turn, this will require the ongoing support of American voters as the nation implements an energy policy that reduces dependence on oil and makes greater use of cleaner and/or renewable fuels. No doubt, this represents a daunting challenge. It is one we have largely failed to meet to date, because after each price spike or “energy crisis” subsides, national attention shifts to other issues, and spending money to address a problem that appears to have passed becomes a lower priority. Lower prices at the pump are a substantial part of the problem. Because of the size and scope of the existing oil-related infrastructure, solutions to our energy problems will take years to implement. To the extent that the public loses interest in energy security as a result of low fuel prices, it is difficult to sustain support for sound energy policies. Then, by the time we face a “crisis,” it is too late to act.

Launch a weatherization program. Increasing energy efficiency in homes through weatherization is among the most cost-effective means to reduce energy consumption. Moreover, it utilizes existing technology, can begin immediately, and is labor intensive. Congress should increase funding for weatherization by $5 billion and expand eligibility for lower income households to participate in the program.

Build new transmission lines. There is broad consensus that we need to upgrade the capacity of the nation’s electrical grid and modernize its operation. Many of the obstacles to doing so, however, are not related to a lack of federal funds. One critical issue is that the existing regulatory process was not designed to plan and build a national electrical grid. The best use of federal funds to assist in upgrading the grid would be to provide funds to the federal power marketing agencies (BPA, SWPA, and WAPA) to construct new transmission lines. While most high voltage transmission lines are built and owned by private or municipal utilities or cooperatives, these power marketing agencies do, in fact, build and own transmission lines, primarily in the West. At Congress’ first opportunity, it should establish an interconnect-wide grid planning process that would develop a transmission plan, grant federal siting authority for the plan, and allocate the cost of the transmission lines built pursuant to the plan across all customers in the relevant interconnect.

Smart grid. In addition to upgrading the grid’s capacity, we need to modernize its operation. Advanced digital technology can operate the grid more efficiently and reliably, enable new demand response technologies and programs, and expand access to the grid for distributed generation and renewables. Most of the technology required to develop the smart grid can be paid for by utilities’ customers under existing cost allocation practices. However, the government should fund pilot programs that deploy new technology so that the market can more quickly determine which technologies and practices work best in the marketplace and deploy that technology in the shortest time frame possible. The government should provide at least $5 billion for such programs, which will create jobs and accelerate the deployment of critical technologies.

Early infrastructure for electrification of transportation. In order to take full advantage of the oil savings possible through the use of plug-in hybrid electric or fully electric vehicles, drivers will need access to recharging stations not just at their homes, but also at other places where they park their cars, particularly at work. Yet, until there is a critical mass of plug-in hybrid or fully electric vehicles, installation of public recharging stations may not be a high priority for local governments or commercial real estate developers. Public recharging stations are estimated to cost $700 to $1,000 per outlet. Congress should establish grants to municipalities for installing outlets, provided that a minimum number of units are installed. The minimum number of units required to become eligible for the credit should be a function of city size. Congress should also provide tax credits to commercial real estate developers that install recharging facilities accessible to at least 5% of their parking spaces and make those spaces available to PHEVs and EVs. Promoting the establishment of at least one million recharging stations will facilitate the deployment of PHEVs and EVs and enhance our energy security. To be sure, an aggressive program to deploy EV charging stations may outpace widespread availability of the electric vehicles themselves. However, the Council supports this approach on the grounds that it serves stimulus job-creation goals while laying the groundwork for consumer acceptance of EVs down the road. The design of stations should be coordinated with relevant automakers.
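For scale, a quick back-of-envelope calculation (my arithmetic, not the Council’s) of what the one-million-outlet target implies at the per-outlet cost range quoted above:

    # Rough total cost of the proposed public recharging build-out (Python).
    cost_per_outlet_low, cost_per_outlet_high = 700, 1000   # dollars, as quoted
    outlets = 1_000_000                                     # "at least one million"
    low = cost_per_outlet_low * outlets / 1e9               # billions of dollars
    high = cost_per_outlet_high * outlets / 1e9
    print(f"${low:.1f} billion to ${high:.1f} billion")     # $0.7 to $1.0 billion

In other words, the outlet hardware itself is modest next to the $5 billion line items proposed elsewhere in this testimony.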

Invest in battery R&D. The absence of batteries with sufficient capacity that can be recharged quickly and manufactured at a reasonable price is the primary stumbling block for the electrification of our short-haul transportation. The Council believes this is the most critical step the nation can take toward reducing our dependence on oil. Congress should allocate $2 to $3 billion over 3 years to fund advanced battery research.

Federal purchases of highly efficient vehicles. As the largest consumer in the nation, with a presence that extends throughout the economy, the federal government is well situated to help establish the market for electric vehicles. Either Congress, by statute, or the President, by Executive Order, should direct government agencies with a minimum size fleet to purchase either PHEVs or EVs if they are available and meet agency requirements. By doing so, the government can provide an early guaranteed market for PHEV and EV producers. This will accelerate scaling of EV production and may facilitate access to capital for automakers seeking to collateralize debt. If suitable PHEVs and EVs are not available, agencies should be required to choose among the three most efficient vehicles for each class of car as defined by the Environmental Protection Agency for the purpose of calculating fuel-economy standards. Doing so will promote the development of markets for vehicles that will enhance our energy security.

Restructure tax credits for renewable energy. Because they are relatively new and are involved in a very capital-intensive industry, most renewable energy companies do not have enough taxable income to utilize existing tax credits intended to incent investments in renewable energy facilities. Moreover, the institutional investors with whom the renewable companies entered into partnerships to allow them to monetize the credits have disappeared in the recent financial crisis. Congress should establish a grant program as an alternative to the existing tax credits to allow the renewable companies to monetize the value of the tax credits. Otherwise, there is likely to be a severe collapse of the renewable industry until the economy recovers and tax equity partners are once again able and willing to partner with companies to build renewable generating capacity.

 

Karen A. Harbert, Executive VP & Managing Director, Institute for 21st Century Energy, Chamber of Commerce.  The United States now imports roughly 60% of our oil from foreign nations, which is almost double the amount we imported in the 1970s. This has put our economy and our national security at risk. It is also a huge drain on our economic resources. In 2008, the United States sent between $400 and $700 billion overseas for imported oil. Think what could be accomplished if even a fraction of that money remained here at home.

Our nation’s energy infrastructure is a ticking time bomb. Unless we make it an immediate priority to modernize it, blackouts, brownouts, service interruptions, and rationing will become more and more commonplace, with all that implies for lost productivity.

Various U.S. laboratories and others have evaluated the weak points in our energy infrastructure and have described numerous scenarios where a seemingly modest, routine occurrence could escalate into a debilitating energy supply disruption in very short order.

The term ‘energy infrastructure’ may conjure up images of pipes, wires, transformers, and power plants, but our nation’s most important energy infrastructure is its energy industry professionals—the engineers, scientists, computer programmers, skilled tradesmen, etc.—who ensure that we have the energy we need today and in the future. Our energy industry employs millions of people today, but nearly half of this workforce is eligible to retire within the next ten years.

At the same time, our universities and trade schools are graduating fewer students in science, engineering, and trade crafts, leaving many to wonder from where tomorrow’s energy professionals will come. In the coming years, we need government at all levels to build incentives that will motivate U.S. students and adults to train for and enter science, technology, engineering, and trade careers. In the interim, we need to reform our nation’s visa and immigration policies so that the United States can retain U.S.-trained, foreign-born scientists who are now being lured to other countries with less restrictive immigration and work policies.

It is a simple fact that for the next several decades much of the energy needed to power economic growth will likely be supplied by fossil fuels. Many developing countries have large resources of coal, natural gas, and oil, and it would be naive to believe that they will not use them.

Comprehensive energy reform cannot be done with an eye toward 2-year political cycles; it must be done with an eye toward the next 20 or 30 years. This means working together in a bipartisan fashion and across the 13 federal agencies and regulatory commissions that have some responsibility for energy policy and the dozens of Congressional committees and subcommittees. It means putting the needs of the nation ahead of the desires of one particular interest group, business sector, or region of the country.   It will take the government and the private sector working together. This teamwork cannot be achieved if the government issues dictates and implements burdensome regulations.

Dianne R. Nielson, Ph.D., Energy Advisor, Office of the Governor, Salt Lake City, Utah. Western Governors are concerned that the United States lacks an effective, long-term energy policy. Energy security is a critical component of that. Both energy efficiency to reduce demand and a diversity of energy resources and technologies must be part of the solution. Western Governors are working individually in their states and regionally together to meet those challenges.

In the last 2 years WGA has been involved with a wide range of stakeholders in developing a number of reports, including reports on achieving greater energy efficiency in buildings, deploying near-zero technologies for power plants fueled by coal, and developing the transportation fuels of the future; all of these reports now form the basis of our ongoing work to develop energy policy. For the past 8 months the Western Governors Association has been managing the Western Renewable Energy Zone Project in conjunction with the Department of Energy, which is funding the effort. By identifying the most developable renewable resource zones within the West and the Western Interconnect, load-serving entities, transmission providers, and state regulators will be able to make more informed decisions about the cost of renewable power and the optimal transmission needed to bring that power to consumers.

Senator BARRASSO. Wyoming is a big coal state. Right now coal is the most affordable, available, reliable and secure source of energy. It’s the source of 50% of electricity in the nation. It’s what helps keep down the cost of electricity. You talked about $100 billion in clean energy projects and possibly 2 million jobs from that, about $50,000 per job. What do we tell the coal miners in Wyoming, the people who work on the trains that transport the coal? It’s a major part of our economy, and those people want to continue to develop coal and work with investments and innovative approaches to make sure that coal is as clean as possible, because all of us want to properly balance energy, the economy and the environment.

Senator Mark Udall, Colorado.  Energy and natural resource issues have been a passion of mine for years. I grew up in the West and spent more time under the stars than under the roof of my house. I’ve climbed all of Colorado’s 54 14,000-foot mountains and I’m intimately familiar with our Western lands. In the House, I used this knowledge to work to build bridges between various stakeholders and find solutions that respect the many values of our lands. I’ve tried to do the same with energy issues.

My passion for energy and natural resource issues is one of the main reasons that I sought election to the Senate and sought to be on this Committee. The topic that brings us here today is certainly one of the most pressing challenges facing our nation.

Energy is literally what powers our economy and our lives—yet our dependence on foreign oil threatens our national security and our environment. The current crisis between Russia and Ukraine is a perfect example of how access to oil can become a national security issue. And American dependence on oil from the Middle East has certainly contributed to the terrorism threat that America faces from Al Qaeda and other extremist groups.

I think there are a lot of questions still about oil shale, the amount of energy that’s needed to produce oil shale. Do you produce more energy at the end point than you actually put in? There are also grave concerns about the amount of water that’s necessary to produce oil shale. There are at least 5 different experimental technologies being used when it comes to oil shale production. So let’s proceed, but let’s proceed cautiously.
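Senator Udall’s question about oil shale is, at bottom, a net-energy (EROI) question. A minimal sketch of the arithmetic, with purely hypothetical numbers:

    # EROI = gross energy delivered / energy invested; net energy is what is
    # left for society after the energy sector's own consumption.
    def eroi(energy_out, energy_in):
        return energy_out / energy_in

    def net_energy(energy_out, energy_in):
        return energy_out - energy_in

    # Hypothetical oil-shale figures, for illustration only:
    gross, invested = 100.0, 80.0             # arbitrary energy units
    print(eroi(gross, invested))              # 1.25:1 -- barely above break-even
    print(net_energy(gross, invested))        # 20 units actually reach society

If the answer to “do you produce more energy at the end point than you actually put in” is close to 1:1, the resource delivers almost nothing to society no matter how large the gross production number looks.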

KAREN HARBERT, Executive Vice President and Managing Director, Institute for 21st Century Energy, Chamber of Commerce. We need to find business models that reward efficiency both on the supply side and on the consumer side. On the utility side, for utilities that get their revenue from producing more electricity, we need to decouple profits from selling more electricity and reward them for making efficiency investments. There are ways in fiscal policy to reward those investments, so that there is a tax benefit to making them, allowing utilities to remain profitable while selling less electricity. We also need to look at the built environment, which here in the United States consumes a tremendous amount of electricity. There are no incentives for builders, whether at the residential or commercial level, to build more efficient buildings; after all, it’s the tenant that pays the utility bills, not the builder. We currently have a very low threshold of efficiency requirements in commercial buildings. We should raise it, and we should reward efficiency improvements in those buildings. Likewise, consumers who have monitors in their homes can make smart choices about when to spend their money, and the same goes for the utilities at the different levels along the line.

Senator Jeanne Shaheen, New Hampshire. In New Hampshire and New England we have some particular challenges relative to energy policy. We are very dependent on foreign oil and foreign sources of fossil fuels: about 90% of the energy used in New Hampshire and New England comes from foreign sources of fossil fuels. We also have a higher than normal percentage of individual buildings, so our building efficiency costs are higher than in most states, and more than 50% of people heat their homes with number 2 heating oil. So we have some significant challenges.

Kit Batten Ph.D., Senior Fellow, Center for American Progress Action Fund. America’s dependence on oil leaves us vulnerable to energy supply disruptions and to price volatility. What’s more, climatic shifts in developing countries are expected to trigger or exacerbate food shortages, water scarcity, the spread of disease, and natural resource competition. Thus, global warming is a threat multiplier for instability and will fuel political turmoil, drive already weak states toward collapse, threaten regional stability, and increase security costs. Committing to investments in fuels that have lower greenhouse gas emissions on a lifecycle basis in comparison to traditional gasoline is imperative to reduce our global warming emissions and ultimately avoid or lessen these risks and associated costs.

In the past few years, the body of scientific research and evidence surrounding the lifecycle greenhouse gas emissions of a range of alternative biofuels has also grown. In 2008 two studies published in Science criticized the use of biofuels, particularly corn-based ethanol, as causing more greenhouse gas emissions than conventional fuels. The studies also note that clearing natural habitats to grow crops for biofuels generally leads to more carbon emissions, and that clearing large areas of land in general can lead to food and water shortages and reduced biodiversity. This type of scientific analysis of lifecycle greenhouse gas emissions can help us design the most effective standards to promote only those fuels with the lowest emissions and the greatest sustainability.

The fastest, cheapest way to reduce our oil dependence is to reduce demand. Increased oil production from conventional fuels, even including the areas previously under moratorium, has the potential to increase oil supplies by about 1.8 million barrels per day in 2030. By contrast, reducing demand for oil has the potential to reduce consumption by 9 to 10 million barrels per day.

The United States possesses only 2-3% of the estimated world oil reserves, but it consumes 25% of the world’s oil, and U.S. oil production has dropped relentlessly for the past 20 years. In September 2008, Congress let a long-standing moratorium on leasing and drilling for oil in certain offshore areas expire, yet this will have little effect on oil production between now and 2030. According to the Energy Information Administration, opening the areas of the lower 48 states’ outer continental shelf that were formerly closed to leasing would increase oil production by only about 200,000 barrels per day between now and 2030.

Increasing fuel efficiency for passenger and non-passenger automobiles from 25 mpg to 35 mpg by 2020 will decrease oil use by 2.5 million barrels per day by 2030.
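That figure is easy to sanity-check (my arithmetic, not Dr. Batten’s; the light-duty fuel-use number is an assumption for illustration):

    # Fuel use per mile scales as 1/mpg, so for the same miles driven,
    # raising fleet economy from 25 to 35 mpg cuts fuel use by 1 - 25/35.
    old_mpg, new_mpg = 25.0, 35.0
    savings_fraction = 1.0 - old_mpg / new_mpg       # about 0.29

    # Assumed for illustration: ~9 million barrels/day of light-duty fuel use.
    light_duty_bpd = 9.0e6
    print(savings_fraction * light_duty_bpd / 1e6)   # ~2.6 Mbpd, near the 2.5 quoted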

We must reduce our dependence on oil for many different reasons, including energy security, national security, economic growth, and reducing greenhouse gas emissions. Taking steps to develop renewable and low-carbon energy resources as well as investing in low-carbon energy are key to enhancing energy security and transitioning to a low-carbon economy.

The transition to a green economy—at home in the United States, and globally— can be a source of increased business opportunity, innovation, and competitiveness; job creation; stronger, more prosperous communities; and improved energy and national security. This transition must be at the center of both America’s energy policy and each step of our economic policy—stabilization, stimulus, recovery, and growth.

Unfortunately, the pace of innovation generated by this public investment has not been sufficient given the urgency and scale of today’s energy challenge. The various measures the federal government has employed (including direct federal support for RD&D, indirect financial incentives, and mandatory regulations) have been developed and implemented individually, with too little regard for technological and economic reality and too much regard for regional and industry special interests. There has not been an integrated approach to energy technology innovation that encompasses priority areas of focus, the responsibilities of various funding agencies, and the mix of financial assistance measures that are available. If the United States simply continues to pursue energy innovation as it has in the past, then the path to a low-carbon economy will be much longer and costlier than necessary.

The United States needs a fresh approach to energy RD&D that successfully integrates the efforts of the numerous departments and agencies that are engaged in energy-related work, including the Department of Energy, the Department of Agriculture, the Department of Commerce, the Department of Defense, the National Science Foundation, and the Environmental Protection Agency. This new approach will need to address the shortcomings that have frequently plagued energy RD&D efforts, such as the practice of spending significant resources on demonstration projects that provide little useful information to the private sector. The Apollo and Manhattan Projects are sometimes held up as models of innovation to be emulated, but the energy innovation challenge is fundamentally different because it requires the private sector to adopt new technologies that can succeed in the competitive marketplace. These were not considerations in our country’s efforts to put a man on the moon or to build a nuclear weapon. Consequently, we recommend at least doubling the size of the federal energy RD&D budget and creating a new interagency group, the Energy Innovation Council, or EIC, that will be responsible for developing a multi-year National Energy RD&D Strategy for the United States.

Even with carbon capture and storage, if we were to capture the carbon generated by oil shale liquid fuel development, we would still have to deal with the carbon emissions that come from burning that oil in our tailpipes. The environmental pollution that results from developing oil shale, whether air pollution, water pollution, or greater salinity, together with the extreme electricity and water costs that go into oil shale production, makes it, in our view, a non-viable alternative.

[ Scorecard: dependence on oil mentioned 17 times in this excerpt ]


A nuclear spent fuel fire at Peach Bottom in Pennsylvania could force 18 million people to evacuate

[If electric power were out for 12 to 31 days (depending on how hot the stored fuel was), reactor core fuel cooling down in a nearby spent fuel pool could catch fire and force millions to flee thousands of square miles of contaminated land, because these pools aren’t in a containment vessel.

This could happen during the long power outage following an electromagnetic pulse, which could take the electric grid down for a year (see the U.S. House hearing testimony of Dr. Pry: the EMP Commission estimates that a nationwide blackout lasting one year could kill up to 9 out of 10 Americans through starvation, disease, and societal collapse. At this hearing, Dr. Pry said “Seven days after the commencement of blackout, emergency generators at nuclear reactors would run out of fuel. The reactors and nuclear fuel rods in cooling ponds would meltdown and catch fire, as happened in the nuclear disaster at Fukushima, Japan. The 104 U.S. nuclear reactors, located mostly among the populous eastern half of the United States, could cover vast swaths of the nation with dangerous plumes of radioactivity”)

Once the nuclear fuel that generates power in a reactor is spent, it is retired to a spent fuel pool of water about 40 feet deep. Unlike the reactor itself, which sits inside a pressure vessel inside a containment vessel, spent fuel pools are almost always outside the main containment vessel. If the water ever leaked or boiled away, the spent fuel would likely catch fire and release a tremendous amount of radiation.

Nuclear engineers aren’t stupid.  Originally these pools were designed to be temporary until the fuel had cooled down enough to be transported off-site for reprocessing or disposal.  But now the average pool has 10 to 30 years of fuel stored at a much higher density than the pools were designed for, in buildings that vent to the atmosphere and can’t contain radiation if there’s an accident.

There are two articles from Science below (and, in APPENDIX A, my excerpts from the National Academy of Sciences report these articles refer to).

If grid power fails, backup diesel generators can provide power for 7 days without resupply of diesel fuel under typical nuclear plant emergency plans. If the emergency diesel generators stop working, nuclear power plants are only required to have “alternate ac sources” available for a period of 2 to 16 hours. Once electric power no longer drives the circulation pumps, the spent fuel pool begins to heat up and boil off. Depending on how much the fuel had decayed, it would take only 4 to 22 days for the water to boil down to the point where it no longer covered the fuel, after which the zirconium cladding could ignite within 2 to 24 hours. Without more water being added to the spent fuel pool, the total time from grid outage to spontaneous zirconium ignition would likely be 12-31 days (NIRS).
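Stacking those stages end to end gives the quoted window (a minimal sketch; the durations are the ones cited above, and I simply assume they add):

    # Grid outage -> diesel exhaustion -> pool boil-off -> cladding ignition.
    diesel_days = 7                     # typical onsite generator fuel supply
    boiloff_days = (4, 22)              # hot, recently offloaded vs. cooler fuel
    ignition_days = (2 / 24, 24 / 24)   # 2 to 24 hours once fuel is uncovered

    low = diesel_days + boiloff_days[0] + ignition_days[0]
    high = diesel_days + boiloff_days[1] + ignition_days[1]
    # Prints roughly "11 to 30 days"; NIRS quotes 12-31, which also counts
    # the 2-16 hours of alternate AC sources and rounding.
    print(f"{low:.0f} to {high:.0f} days")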

The National Research Council estimated that if a spent nuclear fuel fire happened at the Peach Bottom nuclear power plant in Pennsylvania, nearly 3.5 million people would need to be evacuated and 12,000 square miles of land would be contaminated. A Princeton University study that looked at the same scenario concluded it was more likely that 18 million people would need to be evacuated and 39,000 square miles of land contaminated.

Besides a geomagnetic storm or nuclear EMP, a loss of offsite power initiated by severe weather (e.g., hurricanes or tornadoes) could also cause a spent fuel pool to catch fire. Other initiating events include an internal fire, loss of pool cooling, loss of coolant inventory, an earthquake, the drop of a cask, an aircraft impact, or a missile.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation, 2015, Springer]

Stone, R. May 24, 2016. Spent fuel fire on U.S. soil could dwarf impact of Fukushima. Science Magazine.

A fire from spent fuel stored at a U.S. nuclear power plant could have catastrophic consequences, according to new simulations of such an event.

A major fire “could dwarf the horrific consequences of the Fukushima accident,” says Edwin Lyman, a physicist at the Union of Concerned Scientists, a nonprofit in Washington, D.C. “We’re talking about trillion-dollar consequences,” says Frank von Hippel, a nuclear security expert at Princeton University, who teamed with Princeton’s Michael Schoeppner on the modeling exercise.

The revelations come on the heels of a report last week from the U.S. National Academies of Sciences, Engineering, and Medicine on the aftermath of the 11 March 2011 earthquake and tsunami in northern Japan. The report details how a spent fuel fire at the Fukushima Daiichi Nuclear Power Plant that was crippled by the twin disasters could have released far more radioactivity into the environment.

The nuclear fuel in three of the plant’s six reactors melted down and released radioactive plumes that contaminated land downwind. Japan declared 1100 square kilometers uninhabitable and relocated 88,000 people. (Almost as many left voluntarily.) After the meltdowns, officials feared that spent fuel stored in pools in the reactor halls would catch fire and send radioactive smoke across a much wider swath of eastern Japan, including Tokyo. By a stroke of luck, that did not happen.

But the national academies’ report warns that spent fuel accumulating at U.S. nuclear plants is also vulnerable. After fuel is removed from a reactor core, the radioactive fission products continue to decay, generating heat. All nuclear power plants store the fuel onsite at the bottom of deep pools for at least 4 years while it slowly cools. To keep it safe, the academies report recommends that the U.S. Nuclear Regulatory Commission (NRC) and nuclear plant operators beef up systems for monitoring the pools and topping up water levels in case a facility is damaged. The panel also says plants should be ready to tighten security after a disaster.

At most U.S. nuclear plants, spent fuel is densely packed in pools, heightening the fire risk. NRC has estimated that a major fire at the spent fuel pool at the Peach Bottom nuclear power plant in Pennsylvania would displace an estimated 3.46 million people from 31,000 square kilometers of contaminated land, an area larger than New Jersey. But Von Hippel and Schoeppner think that NRC has grossly underestimated the scale and societal costs of such a fire.

NRC used a program called MACCS2 for modeling the dispersal and deposition of the radioactivity from a Peach Bottom fire. Schoeppner and Von Hippel instead used HYSPLIT, a program able to craft more sophisticated scenarios based on historical weather data for the whole region.
Nightmare scenarios

In their simulations, the Princeton duo focused on Cs-137, a radioisotope with a 30-year half-life that has made large tracts around Chernobyl and Fukushima uninhabitable. They assumed a release of 1600 petabecquerels, which is the average amount of Cs-137 that NRC estimates would be released from a fire at a densely packed pool. It’s also approximately 100 times the amount of Cs-137 spewed at Fukushima. They simulated such a release on the first day of each month in 2015.
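As a rough cross-check on the physical quantity involved (my calculation, not the researchers’), 1600 petabecquerels of Cs-137 can be converted to mass from its ~30-year half-life:

    import math

    # Specific activity per gram: A = (ln 2 / half-life) * (Avogadro / molar mass).
    HALF_LIFE_S = 30.05 * 365.25 * 24 * 3600   # Cs-137 half-life in seconds
    N_A = 6.022e23                             # atoms per mole
    MOLAR_MASS_G = 137.0                       # grams per mole

    bq_per_gram = (math.log(2) / HALF_LIFE_S) * (N_A / MOLAR_MASS_G)
    release_bq = 1600e15                       # 1600 petabecquerels

    print(bq_per_gram)                         # ~3.2e12 Bq/g (3.2 TBq per gram)
    print(release_bq / bq_per_gram / 1000)     # ~500 kg of Cs-137

Roughly half a tonne of cesium-137, dispersed by the weather, is what drives the relocation areas below.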

The contamination from such a fire on U.S. soil “would be an unprecedented peacetime catastrophe,” the Princeton researchers conclude in a paper to be submitted to the journal Science & Global Security. In a fire on 1 January 2015, with the winds blowing due east, the radioactive plume would sweep over Philadelphia, Pennsylvania, and nearby cities. Shifting winds on 1 July 2015 would disperse Cs-137 in all directions, blanketing much of the heavily populated mid-Atlantic region. Averaged over 12 monthly calculations, the area exposed to more than 1 megabecquerel per square meter — a level that would trigger a relocation order — is 101,000 square kilometers. That’s more than three times NRC’s estimate, and the relocation of 18.1 million people is about five times NRC’s estimates.
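The bookkeeping behind those area figures can be sketched schematically (this is not HYSPLIT; random fields stand in for the 12 monthly deposition maps, purely to illustrate the threshold-and-average step):

    import numpy as np

    RELOCATION_THRESHOLD = 1.0e6   # Bq/m^2 of Cs-137, the level quoted above
    CELL_AREA_KM2 = 4.0            # hypothetical 2 km x 2 km grid cells

    def contaminated_area_km2(deposition):
        """Area of the deposition map exceeding the relocation threshold."""
        return float((deposition > RELOCATION_THRESHOLD).sum()) * CELL_AREA_KM2

    # Stand-ins for 12 monthly dispersion runs -- illustrative noise only.
    rng = np.random.default_rng(0)
    monthly_maps = [rng.lognormal(10, 3, size=(500, 500)) for _ in range(12)]
    print(np.mean([contaminated_area_km2(m) for m in monthly_maps]))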

NRC has long mulled whether to compel the nuclear industry to move most of the cooled spent fuel now held in densely packed pools to concrete containers called dry casks. Such a move would reduce the consequences and likelihood of a spent fuel pool fire. As recently as 2013, NRC concluded that the projected benefits do not justify the roughly $4 billion cost of a wholesale transfer. But the national academies’ study concludes that the benefits of expedited transfer to dry casks are fivefold greater than NRC has calculated.

“NRC’s policies have underplayed the risk of a spent fuel fire,” Lyman says. The academies panel recommends that NRC “assess the risks and potential benefits of expedited transfer.” NRC spokesperson Scott Burnell in Washington, D.C., says that the commission’s technical staff “will take an in-depth look” at the issue and report to NRC commissioners later this year.

SIDEBAR 1: According to “Beyond Nuclear“, “Frank von Hippel, a nuclear security expert at Princeton University, teamed up with Princeton’s Michael Schoeppner on the modeling exercise. The study examines the Peach Bottom nuclear power plant in Pennsylvania, a Fukushima Daiichi twin-design, two-reactor plant. As the article reports:

In their simulations, the Princeton duo focused on Cs-137, a radioisotope with a 30-year half-life that has made large tracts around Chernobyl and Fukushima uninhabitable. They assumed a release of 1600 petabecquerels, which is the average amount of Cs-137 that NRC estimates would be released from a fire at a densely packed pool. It’s also approximately 100 times the amount of Cs-137 spewed at Fukushima. They simulated such a release on the first day of each month in 2015.  The contamination from such a fire on U.S. soil “would be an unprecedented peacetime catastrophe,” the Princeton researchers conclude in a paper to be submitted to the journal Science & Global Security. In a fire on 1 January 2015, with the winds blowing due east, the radioactive plume would sweep over Philadelphia, Pennsylvania, and nearby cities. Shifting winds on 1 July 2015 would disperse Cs-137 in all directions, blanketing much of the heavily populated mid-Atlantic region. Averaged over 12 monthly calculations, the area exposed to more than 1 megabecquerel per square meter — a level that would trigger a relocation order — is 101,000 square kilometers [nearly 39,000 square miles]. That’s more than three times NRC’s estimate, and the relocation of 18.1 million people is about five times NRC’s estimates. (emphasis added)

Von Hippel also serves on a National Academies of Science (NAS) panel examining lessons to be learned from the Fukushima nuclear catastrophe. As also reported by Richard Stone in Science Magazine, that NAS panel has just released a major report. It reveals that a high-level radioactive waste storage pool fire was narrowly averted at Fukushima Daiichi by sheer luck. It also reveals that major security upgrades are needed at U.S. nuclear power plant high-level radioactive waste “wet” pool and dry cask storage facilities. (See the full NAS report here.)

NAS called on NRC to address safety and security risks to high-level radioactive waste storage as early as 2004. NRC never has, 15 years after the 9/11 attacks, and five years after the Fukushima nuclear catastrophe began.”

Stone, R. May 20, 2016. Near miss at Fukushima is a warning for U.S., panel says. Science Magazine.

Japan’s chief cabinet secretary called it “the devil’s scenario.” Two weeks after the 11 March 2011 earthquake and tsunami devastated the Fukushima Daiichi Nuclear Power Plant, causing three nuclear reactors to melt down and release radioactive plumes, officials were bracing for even worse. They feared that spent fuel stored in the reactor halls would catch fire and send radioactive smoke across a much wider swath of eastern Japan, including Tokyo.

Thanks to a lucky break detailed in a report released today by the U.S. National Academies, Japan dodged that bullet. The near calamity “should serve as a wake-up call for the industry,” says Joseph Shepherd, a mechanical engineer at the California Institute of Technology in Pasadena who chaired the academy committee that produced the report. Spent fuel accumulating at U.S. nuclear reactor plants is also vulnerable, the report warns. A major spent fuel fire at a U.S. nuclear plant “could dwarf the horrific consequences of the Fukushima accident,” says Edwin Lyman, a physicist at the Union of Concerned Scientists, a nonprofit in Washington, D.C., who was not on the panel.

After spent fuel is removed from a reactor core, the fission products continue to decay radioactively, generating heat. Many nuclear plants, like Fukushima, store the fuel onsite at the bottom of deep pools for at least 5 years while it slowly cools. It is seriously vulnerable there, as the Fukushima accident demonstrated, and so the academy panel recommends that the U.S. Nuclear Regulatory Commission (NRC) and nuclear plant operators beef up systems for monitoring the pools and topping up water levels in case a facility is damaged. It also calls for more robust security measures after a disaster. “Disruptions create opportunities for malevolent acts,” Shepherd says.

At Fukushima, the earthquake and tsunami cut power to pumps that circulated coolant through the reactor cores and cooled water in the spent fuel pools. The pump failure led to the core meltdowns. In the pools, found in all six of Fukushima’s reactor halls, radioactive decay gradually heated the water. Of preeminent concern were the pools in reactor Units 1 through 4: Those buildings had sustained heavy damage on 11 March and in subsequent days, when explosions occurred in Units 1, 3, and 4.

The “devil’s scenario” nearly played out in Unit 4, where the reactor was shut down for maintenance. The entire reactor core—all 548 assemblies—was in the spent fuel pool, and was hotter than fuel in the other pools. When an explosion blew off Unit 4’s roof on 15 March, plant operators assumed the cause was hydrogen—and they feared it had come from fuel in the pool that had been exposed to air. They could not confirm that, because the blast had destroyed instrumentation for monitoring the pool. (Tokyo Electric Power Company, the plant operator, later suggested that the hydrogen that had exploded had come not from exposed spent fuel but from the melted reactor core in the adjacent Unit 3.) But the possibility that the fuel had been exposed was plausible and alarming enough for then-NRC Chairman Gregory Jaczko on 16 March to urge more extensive evacuations than the Japanese government had advised—beyond a 20-kilometer radius from the plant.

Later that day, however, concerns abated after a helicopter overflight captured video of sunlight glinting off water in the spent fuel pool. In fact, the crisis was worsening: The pool’s water was boiling away because of the hot fuel. As the level fell perilously close to the top of the fuel assemblies, something “fortuitous” happened, Shepherd says. As part of routine maintenance, workers had flooded Unit 4’s reactor well, where the core normally sits. Separating the well and the spent fuel pool is a gate through which fuel assemblies are transferred. The gate allowed water from the reactor well to leak into the spent fuel pool, partially refilling it. Without that leakage, the academy panel’s own modeling predicted that the tops of the fuel assemblies would have been exposed by early April; as the water continued to evaporate, the odds of the assemblies’ zirconium cladding catching fire would have skyrocketed. Only good fortune and makeshift measures to pump or spray water into all the spent fuel pools averted that disaster, the academy panel notes.

At U.S. nuclear plants, spent fuel is equally vulnerable. It is for the most part densely packed in pools, heightening the fire risk if cooling systems were to fail. NRC has estimated that a major fire in a U.S. spent fuel pool would displace, on average, 3.4 million people from an area larger than New Jersey. “We’re talking about trillion-dollar consequences,” says panelist Frank von Hippel, a nuclear security expert at Princeton University.

Besides developing better systems for monitoring the pools, the panel recommends that NRC take another look at the benefits of moving spent fuel to other storage as quickly as possible. Spent fuel can be shifted to concrete containers called dry casks as soon as it cools sufficiently, and the academy panel recommends that NRC “assess the risks and potential benefits of expedited transfer.” A wholesale transfer to dry casks at U.S. plants would cost roughly $4 billion.

REFERENCES

NIRS. 2011. Petition for Rulemaking submitted to NRC by Foundation for Resilient Societies.  Nuclear Information and Resource Service.  www.nirs.org/reactorwatch/natureandnukes/petrulemaking262011.pdf

 

APPENDIX A

[ After finding the Science articles above, I stopped working on my extract and comments. FYI below is how far I got. I was mainly interested in the effects of a power outage, which the last NAS paper said would be covered in their next Fukushima review — this one.  Though much of what I wanted to know was classified and not included in their report. Alice Friedemann ]

NRC. 2016. Lessons Learned from the Fukushima Nuclear Accident for Improving Safety and Security of U.S. Nuclear Plants: Phase 2. National Academies Press.  238 pages.  http://www.nap.edu/21874

The Devil’s Scenario

By late March 2011—some 2 weeks after the earthquake and tsunami struck the Fukushima Daiichi plant—it was far from obvious that the accident was under control and the worst was over. Chief Cabinet Secretary Yukio Edano feared that radioactive material releases from the Fukushima Daiichi plant and its sister plant (Fukushima Daini) located some 12 km south could threaten the entire population of eastern Japan: “That was the devil’s scenario that was on my mind. Common sense dictated that, if that came to pass, then it was the end of Tokyo.” (RJIF, 2014)

Here is the worst case “devil’s scenario” (Kubota 2012):

  • Multiple vapor and hydrogen explosions and a loss of cooling functions at the six reactors at Tokyo Electric Power Co’s Fukushima Daiichi nuclear plant lead to radiation leaks and reactor failures.
  • Thousands of spent fuel rods, crammed into cooling pools at the plant, melt and mix with concrete, then fall to the lower level of the buildings.
  • In a possible domino effect, a hydrogen explosion at one reactor forces workers to evacuate due to high levels of radiation, halting cooling operations at all reactors and spent fuel pools. Reactors and cooling pools suffer serious damage and radiation leaks.
  • TOKYO EVACUATION. Massive radioactive contamination forces residents within a 170-km radius or beyond to evacuate, while those within a 250-km radius or beyond may voluntarily evacuate.  Tokyo, Japan’s capital, is located about 240 km (150 miles) southwest of the plant and the greater metropolitan area is home to some 35 million people.
  • Radiation levels take several decades to fall.

WHAT ACTUALLY HAPPENED

  • The 9.0 magnitude earthquake and a tsunami exceeding 15 meters knocked out cooling systems at the six-reactor plant, and meltdowns are believed to have occurred at Nos. 1, 2 and 3.
  • Hydrogen explosions occurred at the No. 1 and No. 3 reactor buildings a few days after the quake. Radiation leaks forced some 80,000 residents to evacuate from near the plant and more fled voluntarily, while radioactive materials have been found in food, including fish and vegetables, and in water.
  • Reactor No. 4 was under maintenance and 550 fuel rods had been transferred to its spent fuel pool, which already had about 1,000 fuel rods. The pool caught fire and caused an explosion.
  • Reactors No. 5 and 6 reached cold shutdown — meaning water used to cool fuel rods is below boiling point — nearly 10 days after the tsunami but it took more than nine months to achieve that state at Nos. 1-3.
  • Decommissioning the reactors will take 30 to 40 years and some nearby areas will be uninhabitable for decades.

And here’s what the NRC has to say about the Unit 4 pool

The events in the Unit 4 pool should serve as a wake-up call to nuclear plant operators and regulators about the critical importance of having robust and redundant means to measure, maintain, and, when necessary, restore pool cooling.  These events  also have important implications for accident response actions. As water levels decrease below about 1 meter above the top of the fuel racks, radiation levels on the refueling deck and surrounding areas will increase substantially, limiting personnel access. Moreover, once water levels reach approximately 50% of the fuel assembly height, the tops of the rods will begin to degrade, changing the fuel geometry and increasing the potential for large radioactive material releases into the environment (Gauntt et al., 2012).

These observations bear directly on the safety of pool storage following large offloads of fuel from reactors.  For example, consider what might have occurred in the Unit 4 spent fuel pool had the reactor been shut down and the core been offloaded to the pool 48 days before March 11 rather than the actual 102 days earlier, and had there been no water leakage [into the pool].  [In this case], pool water levels would have reached 50% of fuel assembly height before 10.6 days had elapsed—which was the time elapsed between the onset of the accident on March 11 and the first addition of water to the pool in Unit 4. In this hypothetical situation, if the core had been offloaded closer to the time of the accident or if the water addition had been delayed longer than 10.6 days, then there could have been damage to the fuel with the potential for a large release of radioactive material from the pool, particularly because the most recently offloaded (and highest-power) fuel was not dispersed in the pool but was concentrated in adjacent locations within the racks.

INTRODUCTION

This is the second and final part; it focuses on three issues: (1) lessons learned from the accident for nuclear plant security, (2) lessons learned for spent fuel storage, and (3) reevaluation of conclusions from previous Academies studies on spent fuel storage. The present report provides a reevaluation of the findings and recommendations from NRC (2004, 2006).

New recommendations:

  1. The U.S. nuclear industry and the U.S. Nuclear Regulatory Commission should strengthen their capabilities for identifying, evaluating, and managing the risks from terrorist attacks, especially spent fuel storage risks.
  2. Nuclear plant operators and their regulators should upgrade and/or protect nuclear plant security infrastructure and systems and train security personnel to cope with extreme external events and severe accidents. Such upgrades should include: independent, redundant, and protected power sources dedicated to plant security systems that will continue to function independently if safety systems are damaged; diverse and flexible approaches for coping with and reconstituting plant security infrastructure, systems, and staffing during and following extreme external events and severe accidents;
  3. The U.S. nuclear industry and its regulator should improve the ability of plant operators to measure real-time conditions in spent fuel pools and maintain adequate cooling of stored spent fuel during severe accidents and terrorist attacks with hardened and redundant physical surveillance systems (e.g., cameras), radiation monitors, pool temperature monitors, pool water-level monitors, and means to deliver pool makeup water or sprays even when physical access to the pools is limited by facility damage or high radiation levels.
  4. The U.S. Nuclear Regulatory Commission should perform a spent fuel storage risk assessment to elucidate the risks and potential benefits of expedited transfer of spent fuel from pools to dry casks. This risk assessment should address accident and sabotage risks for both pool and dry storage.
  5. Some of the committee-recommended improvements have not been made by the USNRC or nuclear industry. In particular, the USNRC has not required plant licensees to install pool temperature monitors, yet these are essential in an accident to evaluate independently whether drops in pool water levels are due to evaporation or leakage, and must have independent power, be seismically rugged, and operate under severe accident conditions.

The committee found that the spent fuel storage facilities (pools and dry casks) at the Fukushima Daiichi plant maintained their containment functions during and after the March 11, 2011, earthquake and tsunami.

However, the loss of power, spent fuel pool cooling systems, and water level- and temperature-monitoring instrumentation in Units 1-4 and hydrogen explosions in Units 1, 3, and 4 hindered efforts by plant operators to monitor conditions in the pools and restore critical pool-cooling functions.

Plant operators had not been trained to respond to these conditions, yet they successfully improvised ways to monitor and cool the pools using helicopters, fire trucks, water cannons, concrete pump trucks, and ad hoc connections to installed cooling systems. These improvised actions were essential for preventing damage to the stored spent fuel and the consequent release of radioactive materials to the environment.

The spent fuel pool in Unit 4 was of particular concern because it had a high decay-heat load.

The committee used a steady-state energy-balance model to provide insights on water levels in the Unit 4 pool during the first 2 months of the accident (i.e., between March 11 and May 12, 2011). This model suggests that water levels in the Unit 4 pool declined to less than 2 m (about 6 ft) above the tops of the spent fuel racks by mid-April 2011.

The model suggests that pool water levels would have dropped below the top of active fuel had there not been leakage of water into the pool from the reactor well and dryer/separator pit through the separating gates. This water leakage was accidental; it was also fortuitous because it likely prevented pool water levels from reaching the tops of the fuel racks. The events in the Unit 4 pool show that gate leakage can be an important pathway for water addition or loss from some spent fuel pools and that reactor outage configuration can affect pool storage risks.
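The committee’s model itself is not reproduced in this excerpt, but the core of any such energy balance is simple: once the pool reaches boiling, decay heat goes into vaporizing water, and the level falls at P / (rho x A x h_vap). A toy version, with my own illustrative inputs rather than the committee’s:

    # Boil-off rate of a spent fuel pool at saturation (steady state).
    RHO = 1000.0     # density of water, kg/m^3
    H_VAP = 2.26e6   # latent heat of vaporization of water, J/kg

    def boiloff_rate_m_per_day(decay_heat_w, pool_area_m2):
        return decay_heat_w / (RHO * pool_area_m2 * H_VAP) * 86400

    # Illustrative inputs: ~2 MW of decay heat over a ~100 m^2 pool surface.
    print(boiloff_rate_m_per_day(2.0e6, 100.0))   # ~0.76 m of level drop per day

A real model, like the committee’s, must also track the heat-up period before boiling, the decline of decay heat over time, and inflows such as the fortuitous gate leakage described above, which is why the observed decline took weeks rather than days.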

Once water levels reach half of the fuel assembly height, the tops of the rods will begin to degrade, changing the fuel geometry and increasing the potential for large radioactive material releases into the environment.

The safe storage of spent fuel in pools depends critically on the ability of nuclear plant operators to keep the stored fuel covered with water.

This has been known for more than 40 years and was powerfully reinforced by the Fukushima Daiichi accident. If pool water is lost through an accident or terrorist attack, then the stored fuel can become uncovered, possibly leading to fuel damage including runaway oxidation of the fuel cladding (a zirconium cladding fire) and the release of radioactive materials to the environment.

The spent fuel pools at Fukushima Daiichi Units 1-4 contained many fewer assemblies than are typically stored in spent fuel pools at U.S. nuclear plants. [The report doesn’t say how many fewer].

The storage capacity of U.S. spent fuel pools ranges from fewer than 2,000 assemblies to nearly 5,000 assemblies, with an average storage capacity of approximately 3,000 spent fuel assemblies. U.S. spent fuel pools are typically filled with spent fuel assemblies up to approximately three-quarters of their capacity (USNRC NTTF, 2011, p. 43).

ELECTRIC POWER

All offsite electrical power to the plant was lost following the earthquake, and DC power was eventually lost in Units 1-4 following the tsunami. Offsite AC power was not restored until 9 to 11 days later. Security equipment requiring electrical power was probably not operating continuously during this blackout period.

Regulations do not specify the performance requirements for these backup power supplies.  These backup supplies need to be adequately protected and sized to cope with a long-duration event such as occurred at the Fukushima Daiichi plant.

Recommendations:

  1. To have portable backup equipment capable of providing water and power to the reactor. Such equipment includes, for example, electrical generators, batteries, and battery chargers; compressors; pumps, hoses, and couplings; equipment for clearing debris; and equipment for temporary protection against flooding.
  2. To stage this equipment in locations both on- and offsite where it will be safe and deployable.

The Unit 1-4 spent fuel pools are equipped with active cooling systems; in particular the Spent Fuel Pool Cooling and Cleanup (FPC) systems, which are located within the reactor buildings below the refueling decks and in a nearby radwaste building. This system is designed to maintain pool temperatures in the range 25°C to 35°C (77°F to 95°F) by pumping the pool water through heat exchangers. The system also filters the pool water and adds makeup water as necessary to maintain pool water levels.

All of these features require electrical power.

The pools and refueling levels contain instruments to monitor water levels, temperatures, and air radiation levels. These measurements are displayed in the main control rooms. The temperature and water-level indicators are limited to a few locations near the tops of the pools for the purpose of maintaining appropriate water levels during normal operations: Pool water level is monitored by two level switches installed 1 foot above and half a foot below the normal water level in the pool.  Pool water temperature is monitored by a sensor 1 foot below the normal water level of the pool.

This instrumentation also requires electrical power to operate and has no backup power supply.

NRC (2014) provides a discussion of key events at the Fukushima Daiichi plant following the March 11, 2011, earthquake and tsunami. To summarize, Units 1-4 lost external power as a result of earthquake-related shaking. Units 1-4 also lost all internal AC power and almost all DC power for reactor cooling functions as a result of tsunami-related flooding. Efforts by plant operators to restore cooling and vent containments in time to avert core damage were unsuccessful. As a result, the Unit 1, 2, and 3 reactors sustained severe core damage and the Unit 1, 3, and 4 reactor buildings were damaged by explosions of combustible gas, primarily hydrogen generated by steam oxidation of zirconium and steel in the reactor core and, secondarily, by hydrogen and carbon monoxide generated by the interaction of the molten core with concrete.

The loss of AC and DC power and cooling functions also affected the Unit 1-4 spent fuel pools: The pools’ Spent Fuel Pool Cooling and Cleanup systems, secondary cooling systems, and pool water-level and temperature instrumentation became inoperable. High radiation levels and explosion hazards prevented plant personnel from accessing the Unit 1-4 refueling decks. Consequently, no data on pool water levels or temperatures were available for almost 2 weeks after the earthquake and tsunami. Moreover, even after pool instrumentation was restored, it was of limited value because of the large swings in pool water levels that occurred during the accident.  Improvised instrumentation and aerial observations were used to monitor pool conditions. Aerial and satellite photography were particularly important sources of information in the early stages of the accident although the images were not always interpreted correctly.

EARTHQUAKE EFFECTS

The earthquake caused the reactor buildings to sway, which likely caused water to slosh from the pools. No observational data on sloshing-related water losses are available, however. Analyses performed by the plant owner, TEPCO, suggest that sloshing reduced pool water levels by about 0.5 m (TEPCO, 2012a, Attachment 9-1). The sloshed water spilled onto the refueling decks and likely flowed into the reactor buildings through deck openings such as floor drains.

The explosions in the Unit 1, 3, and 4 reactor buildings likely caused additional water to be sloshed from the pools in those units. Again, no observational data on explosion-related water losses are available. Sloshing due to building motion resulting from the explosions is unlikely to be significant. But sloshing will occur if there is a spatially non-uniform pressure distribution created on the pool surface by an explosion in the region above the pool. This is particularly likely for high-speed explosions that create shock or detonation waves.  TEPCO estimates that an additional 1 meter of water was sloshed from each of the pools as a result of the explosions (TEPCO, 2012a, Attachment 9-1, p. 3/9).

Emergency response center actions

Personnel in the plant’s Emergency Response Center (see NRC, 2014, Appendix D) were focused on cooling the Unit 1-3 reactors and managing their containment pressures during the first 48 hours of the accident. They knew that restoring cooling in the spent fuel pools was less urgent and prioritized accordingly. Beginning on March 13, 2011, operators became increasingly concerned about water levels in the pools; their concerns increased following the explosions in the Unit 3 and 4 reactor buildings on March 14 and 15, respectively.

By the morning of March 15, 2011, it was apparent that the Unit 1-3 reactors had been damaged and were releasing radioactive material. TEPCO evacuated all but about 70 personnel from the plant because of safety concerns (personnel began returning a few hours later). That same day, TEPCO initiated a comprehensive review of efforts to cool the spent fuel pools and made it a priority to determine the status of the Unit 4 pool. TEPCO added the Unit 3 pool to its priority list on the morning of March 16 after steam was observed billowing from the top of the Unit 3 reactor building.

Unit 1 Pool. The explosion in the Unit 1 reactor building on March 12, 2011, blew out the wall panels on the fifth floor, but the steel girders that supported the panels remained intact. The roof collapsed onto the refueling deck and became draped around the crane and refueling machinery. This wreckage prevented visual observations of and direct access to the pool. TEPCO estimated that the pool lost about 129 tonnes of water from the earthquake- and explosion-related sloshing. This lowered the water level in the pool to about 5.5 meters above the top of the racks. Because of the very low decay heat in Unit 1, this pool was of least concern.

Spent Fuel Heat-up Following Loss-of-Pool-Coolant Events

Spent fuel continues to generate heat from the decay of its radioactive constituents long after it is removed from a reactor. The fuel is stored in water-filled pools (i.e., spent fuel pools) to provide cooling and radiation shielding. An accident or terrorist attack that damaged a spent fuel pool could result in a partial or complete loss of water coolant. Such loss-of-pool-coolant events can cause the fuel to overheat, resulting in damage to the metal (zirconium) cladding of the fuel rods and the uranium fuel pellets within and the release of radioactive constituents to the environment.
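To give a rough sense of the heat loads involved, the sketch below estimates decay heat with the Way-Wigner correlation, a standard textbook approximation (see, e.g., Lewis, 2008). The reactor power and irradiation time are illustrative assumptions, not values from any particular plant or pool.

```python
# Decay heat of discharged fuel via the Way-Wigner correlation, a
# standard textbook approximation (see, e.g., Lewis, 2008). Accuracy
# is rough (tens of percent); all inputs are illustrative assumptions.

def decay_heat_fraction(t_after_shutdown_s, t_irradiation_s):
    """Decay power as a fraction of operating power, at a given time
    after shutdown, for fuel irradiated for t_irradiation_s seconds."""
    return 0.066 * (t_after_shutdown_s ** -0.2
                    - (t_after_shutdown_s + t_irradiation_s) ** -0.2)

P0 = 3000e6              # assumed reactor thermal power while operating, W
t_irr = 4 * 365 * 86400  # assumed 4 years of irradiation, s

for days in (10, 100, 365):
    t = days * 86400
    print(f"{days:>4} days after shutdown: "
          f"~{decay_heat_fraction(t, t_irr) * P0 / 1e6:.1f} MW")
```

On these assumptions, a full core’s worth of fuel still emits several megawatts of heat months after discharge, which is why the rate and extent of water loss matter so much.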

The loss of water coolant from the pool would cause temperatures in the stored spent fuel to increase because air is a less effective coolant than water. The magnitude and rate of temperature increase depend on several factors, including how long the fuel has been out of the reactor and the rate and extent of water loss from the pool. As fuel temperatures rise, internal pressures in the fuel rods will increase and the rod material will soften. At about 800°C (1472°F), internal pressure in a fuel rod will exceed the yield stress of its cladding, resulting in failure, a process known as fuel ballooning. Thermal creep of the fuel rod above about 700°C (1292°F) can also result in ballooning. Once the fuel cladding fails, the gaseous and volatile fission products stored in the gap between the fuel pellets and the cladding will be released. The fission product inventory varies depending on the type of fuel and its irradiation history; typically, on the order of a few percent of the total inventory of noble gases (xenon, krypton), halogens (iodine, bromine), and alkali metals (cesium, rubidium) present in the fuel will be released. Between about 900°C (1652°F) and 1200°C (2192°F), highly exothermic chemical reactions between the fuel rods and steam or air will begin to accelerate, producing zirconium oxide.

The reaction in steam also generates large quantities of hydrogen. Deflagration (i.e., rapid combustion) of this hydrogen inside the spent fuel pool building can damage the structure and provide a pathway for radioactive material releases into the environment. Further temperature increases can drive more of the volatile fission products out of the fuel pellets and cause the fuel rods to buckle, resulting in the physical relocation of rod segments and the dispersal of fuel pellets within the pool.
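The scale of the hydrogen hazard follows from simple stoichiometry: each mole of zirconium oxidized in steam (Zr + 2H2O → ZrO2 + 2H2) yields two moles of hydrogen plus heat. A minimal sketch, with the reacted cladding mass and the heat of reaction as rough assumed values:

```python
# Stoichiometry of the zirconium-steam reaction, Zr + 2 H2O -> ZrO2 + 2 H2.
# The reacted cladding mass and heat of reaction are rough assumed values,
# not figures from the report.

M_ZR = 91.22e-3   # kg/mol, molar mass of zirconium
H_RXN = 5.8e5     # J/mol Zr, approximate (exothermic) heat of reaction

m_clad = 1000.0                 # assumed mass of zirconium reacted, kg
mol_zr = m_clad / M_ZR
mol_h2 = 2.0 * mol_zr           # 2 mol H2 per mol Zr
kg_h2 = mol_h2 * 2.016e-3       # molar mass of H2 is ~2.016 g/mol
vol_h2 = mol_h2 * 0.0224        # m^3 at ~0 degrees C and 1 atm

print(f"{m_clad:.0f} kg Zr -> ~{kg_h2:.0f} kg H2 (~{vol_h2:.0f} m^3 at STP) "
      f"and ~{mol_zr * H_RXN / 1e9:.1f} GJ of heat")
```

Even a tonne of reacted cladding thus yields tens of kilograms (hundreds of cubic meters) of hydrogen, easily enough for a damaging deflagration in a confined refueling deck.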

At about 1200°C the oxidation reaction will become self-sustaining, fully consuming the fuel rod cladding in a short time period if sufficient oxygen is available (e.g., from openings in the spent fuel pool building) and producing uncontrolled (runaway) temperature increases. This rapid and self-sustaining oxidation reaction, sometimes referred to as a zirconium cladding fire, may propagate to other fuel assemblies in the pool. In the extreme, such fires can produce enough heat to melt the fuel pellets and release most of their fission product inventories.

Unit 4 Pool

The Unit 4 reactor was shut down for maintenance, and large-scale repairs were underway on March 11, 2011.

The explosion that occurred in the Unit 4 reactor building at 6:14 on March 15, 2011, destroyed the roof and most of the walls on the fourth and fifth (refueling deck) floors, and it damaged some of the walls on the third floor. TEPCO (2012a) has suggested that the explosion was due to the combustion of hydrogen that was generated in Unit 3 and flowed into Unit 4 through the ventilation system. The fifth-floor slab was pushed upward and the fourth-floor slab was depressed. The explosion also deposited debris around the reactor building, onto the refueling deck, and into the pool. Fires were reported in the damaged building later that morning and on the morning of March 16; these fires self-extinguished and were later attributed to the ignition of lubricating oil.

The damage to the Unit 3 and 4 building structures and steam emissions from both buildings raised grave concerns about the spent fuel pools in those units. Unit 4 was of particular concern because the reactor contained no fuel and therefore could not have been the source of hydrogen or other combustible gas. The only apparent source of combustible gas within Unit 4 was hydrogen from the steam oxidation of spent fuel in the fully or partially drained Unit 4 spent fuel pool.

Plant operators well understood the hazard posed by the spent fuel in the Unit 4 pool: The pool was loaded with high-decay-heat fuel; its water level was dropping because of large evaporative water losses; and openings in the Unit 4 building created by the explosion created pathways for radioactive materials releases into the environment.

The extensive visible damage to the Unit 4 reactor building and high level of decay heat in the Unit 4 pool continued to drive concerns about pool water levels. Operators began to add water to the Unit 4 pool.

Prime Minister Kan asked Dr. Kondo, then-chairman of the Japanese Atomic Energy Commission, to prepare a report on worst-case scenarios from the accident. Dr. Kondo led a 3-day study involving other Japanese experts and submitted his report (Kondo, 2011) to the prime minister on March 25, 2011. The existence of the report was initially kept secret because of the frightening nature of the scenarios it described. An article in the Japan Times quoted a senior government official as saying, “The content [of the report] was so shocking that we decided to treat it as if it didn’t exist.” When the existence of the document was finally acknowledged in January 2012, Special Advisor (to the Prime Minister) Hosono stated: “Because we were told there would be enough time to evacuate residents (even in a worst-case scenario), we refrained from disclosing the document due to fear it would cause unnecessary anxiety (among the public). . . .”

One of the scenarios involved a self-sustaining zirconium cladding fire in the Unit 4 spent fuel pool. . . . Voluntary evacuations were envisioned out to 200 km because of elevated dose levels. If release from other spent fuel pools occurred, then contamination could extend as far as Tokyo, requiring compulsory evacuations out to more than 170 km and voluntary evacuations out to more than 250 km; the latter includes a portion of the Tokyo area. There was particular concern that the zirconium cladding fire could produce enough heat to melt the stored fuel, allowing it to flow to the bottom of the pool, melt through the pool liner and concrete bottom, and flow into the reactor building. After leaving office, Prime Minister Kan stated that his greatest fears during the crisis were about the Unit 4 spent fuel pool (RJIF, 2014).

Two important observations can be made from the committee’s analysis of water levels in the Unit 4 pool. First, because of the substantial uncertainties cited above, the committee cannot rule out the possibility that spent fuel in the Unit 4 pool became partially uncovered sometime prior to April 21, 2011. If the fuel was uncovered, however, the uncovering was not substantial enough to cause fuel damage or to substantially increase external dose rates in areas around the Unit 4 building. Fuel damage will not begin immediately when the water level drops below the top of the rack: simulations of loss-of-cooling accidents (Gauntt et al., 2012) predict that recovery without fuel damage is possible as long as the collapsed water level does not drop below the mid-height of the fuel for an extended period of time.

Second, leakage through the gate seals was essential for keeping the fuel in the Unit 4 pool covered with water. Had there been no water in the reactor well, there could well have been severe damage to the stored fuel and substantial releases of radioactive material to the environment. This is the “worst-case scenario” envisioned by then–Atomic Energy Commission of Japan Chairman Dr. Shunsuke Kondo.  To illustrate this second observation, the committee modeled a hypothetical scenario in which there is no water leakage into the Unit 4 pool from the reactor well and dryer-separator pit.  Without water leakage, pool water levels could have dropped well below the top of active fuel (located 4 m above the bottom of the pool) in early April 2011.

Finally, the damage observed in the Unit 3 gates demonstrates a pathway by which a severe accident could compromise spent fuel pool storage safety: drainage of water from a spent fuel pool through a breached gate into an empty volume such as a dry reactor well or fuel transfer canal. A gate breach could drain a spent fuel pool to just above the level of the racks in a matter of hours, and the resulting high radiation fields on the refueling deck could hinder operator response actions. The committee judges that an effort is needed to assess the containment performance of spent fuel pool gates under severe accident conditions during all phases of the operating cycle.

Assessment of spent fuel pool performance, including gate leakage, is not a new topic for the USNRC. A review of historical data in 1997 (USNRC, 1997c) documented numerous instances of significant accidental drainage of pools in pressurized water reactor and BWR plants due to various failures, including gate seals. The report concluded that “[t]he overall conclusions are that the typical plant may need improvements in SFP [spent fuel pool] instrumentation, operator procedures and training, and configuration control” (p. xi). Furthermore, the report identified leaking fuel pool gates as the most prevalent cause of loss of pool inventory. Given the potential for gate leakage under normal operations, it is not surprising that it is also an issue under severe accident conditions.

Lessons Learned for Nuclear Plant Security

To the committee’s knowledge, TEPCO has not publicly disclosed the impacts of the earthquake and tsunami on plant security systems. Nevertheless, the committee infers from TEPCO’s written reports, as well as its own observations during a November 2012 tour of the Fukushima Daiichi plant, that security systems at the plant were substantially degraded by the earthquake and tsunami and the subsequent accident.  Tsunami damage and power losses likely affected the integrity and operation of numerous security systems, including lighting, physical barriers and other access controls, intrusion detection and assessment equipment, and communications equipment.

Such disruptions can create opportunities for malevolent acts and increase the susceptibility of critical plant systems to such acts. Nuclear plant operators and their regulators should upgrade and/or protect nuclear plant security infrastructure and systems and train security personnel to cope with extreme external events and severe accidents. Such upgrades should include (1) independent, redundant, and protected power sources dedicated to plant security systems that will continue to function independently if safety systems are damaged; and (2) diverse and flexible approaches for coping with and reconstituting plant security infrastructure, systems, and staffing during and following extreme external events and severe accidents.

The events at the plant suggest an important lesson from the accident: Extreme external events and severe accidents can have severe and long-lasting impacts on the security systems at nuclear plants. Such long-lasting disruptions can create opportunities for malevolent acts and increase the susceptibility of critical plant systems to such acts. Similar situations could occur as a result of other natural disasters. For example, a hurricane or destructive thunderstorm that spawned tornados could damage onsite and offsite power substations and high-voltage pylons, causing a loss of a nuclear plant’s offsite power. The storm could also damage security fences, cameras, and other intrusion detection equipment. Relief security officers and other site personnel may not be able to report to duty on schedule if storm-related damage was widespread in surrounding communities. An adversary could use this disruption to advantage in carrying out a malevolent act.

The Fukushima Daiichi accident illustrates that full restoration of security measures could potentially take days to weeks after an extreme external event or severe accident: Damaged security equipment must be restored and destroyed equipment must be replaced.

TERRORISM, SABOTAGE, SECURITY

A determined violent external assault, attack by stealth, or deceptive actions, including diversionary actions, by an adversary force capable of operating in each of the following modes: A single group attacking through one entry point, multiple groups attacking through multiple entry points, a combination of one or more groups and one or more individuals attacking through multiple entry points, or individuals attacking through separate entry points, with the following attributes, assistance and equipment:

(A) Well-trained (including military training and skills) and dedicated individuals, willing to kill or be killed, with sufficient knowledge to identify specific equipment or locations necessary for a successful attack;

(B) Active (e.g., facilitate entrance and exit, disable alarms and communications, participate in violent attack) or passive (e.g., provide information), or both, knowledgeable inside assistance;

(C) Suitable weapons, including handheld automatic weapons, equipped with silencers and having effective long range accuracy;

(D) Hand-carried equipment, including incapacitating agents and explosives for use as tools of entry or for otherwise destroying reactor, facility, transporter, or container integrity or features of the safeguards system; and

(E) Land and water vehicles, which could be used for transporting personnel and their hand-carried equipment to the proximity of vital areas;

and (ii) an internal threat; (iii) a land vehicle bomb assault, which may be coordinated with an external assault; (iv) a waterborne vehicle bomb assault, which may be coordinated with an external assault; and (v) a cyber attack.

An adversary who lacks the strength, weaponry, and training of the nuclear plant’s security forces might utilize attack strategies that do not require direct confrontations with those forces. For example, an adversary might choose to attack perceived weak points in the plant’s support infrastructure (e.g., offsite power and water supplies, key personnel) rather than mounting a direct assault on the plant. The goals of such asymmetric attacks might be to cause operational disruptions, economic damage, and/or public panic rather than radiological releases from a plant’s reactors or spent fuel pools. In fact, such attacks would not necessarily need to result in any radiological releases to be considered successful.

Offsite power substations, piping, fiber optic connection points, and other essential systems provide an adversary the opportunity to inflict damage with very little personal risk and without confronting a nuclear plant’s security forces. The psychological effects of such attacks might have consequences comparable to or greater than the actual physical damage, even if no radioactive material is released. In the extreme, such attacks could lead to temporary shutdowns of, or operating restrictions on, other nuclear plants until security enhancements could be implemented. (Japan shut down all its nuclear power reactors and briefly entertained the dismantlement of its nuclear power industry due to public pressure following the Fukushima Daiichi accident.)

Detailed information about the evolution of the accident at the Fukushima Daiichi plant and its compromised safety systems is widely available on the Internet and in reports such as this one. This information could be used by terrorists to plan and carry out asymmetric attacks on nuclear plants in hopes of creating similar cascading failures.

In the event of a catastrophic event or attack, security systems must be designed and installed to be quickly reconstituted. Hardened power and fiber optic cables must permit “plug-and-play” installation of replacements for inoperable equipment. Reestablishment of security is critical because an adversary who might otherwise be deterred from attacking a site might be encouraged to carry out an attack at a compromised facility.

The USNRC requires licensees to implement an Insider Mitigation Program to oversee and monitor the initial and continuing trustworthiness and reliability of individuals having unescorted access to protected or vital areas of nuclear plants. There is a long-standing assumption by the USNRC that this program reduces the likelihood of an active insider (GAO, 2006). However, USNRC staff was unable to explain to the committee’s satisfaction how it assesses the effectiveness of these measures for mitigating the insider threat. Moreover, to the committee’s knowledge, the USNRC has no programs in place to specifically evaluate that effectiveness.

Reevaluation of Finding 3B from NRC (2004)

NRC (2006) considered four general types of terrorist attack scenarios:

  1. Air attacks using large civilian aircraft or smaller aircraft laden with explosives,
  2. Ground attacks by groups of well-armed and well-trained individuals,
  3. Attacks involving combined air and land assaults, and
  4. Thefts of spent fuel for use by terrorists (including knowledgeable insiders) in radiological dispersal devices.

The report noted that “. . . only attacks that involve the application of large energy impulses or that allow terrorists to gain interior access have any chance of releasing substantial quantities of radioactive material. This further restricts the scenarios that need to be considered. For example, attacks using rocket-propelled grenades (RPGs) of the type that have been carried out in Iraq against U.S. and coalition forces would not likely be successful if the intent of the attack is to cause substantial damage to the facility. Of course, such an attack would get the public’s attention and might even have economic consequences for the attacked plant and possibly the entire commercial nuclear power industry.” (NRC, 2006, p. 30) The concluding sentence speaks to terrorist intent and metrics for success. That is, if the intent of a terrorist attack is to instill fear into the population and cause economic disruption, then an attack need not result in any release of radioactive material from the plant to be judged a success.

The classified report (NRC, 2004) identified particular terrorist attack scenarios that were judged by its authoring committee to have the potential to damage spent fuel pools and result in the loss of water coolant (Section 2.2 in NRC, 2004). The present committee asked USNRC staff whether any of these attack scenarios had been examined further since NRC (2004) was issued. Staff was unable to present the committee with any additional technical analyses of these scenarios. Consequently, the present committee finds that the USNRC has not undertaken additional analyses of terrorist attack scenarios to provide a sufficient technical basis for a reevaluation of Finding 3B in NRC (2004).

The present committee did not have enough information to evaluate the particular terrorist attack scenarios identified in NRC (2004) and therefore cannot judge their potential for causing damage to spent fuel pools. The committee notes, however, that new remote-guided aircraft technologies have come into widespread use in the civilian and military sectors since NRC (2004) was issued. These technologies could potentially be employed in the attack scenarios described in NRC (2004). Other types of threats, particularly insider and cyber threats, have grown in prominence since NRC (2004) was issued. There is a need to more fully explore these threats to understand their potential impacts on nuclear plants.

Loss-of-Coolant Events in Spent Fuel Pools

Reconfiguring spent fuel in pools can be an effective strategy for reducing the likelihood of fuel damage and zirconium cladding fires following loss-of-pool-coolant events. However, reconfiguring spent fuel in pools does not eliminate the risks of zirconium cladding fires, particularly during certain periods following reactor shutdowns or for certain types of pool drainage conditions. These technical studies also illustrate the importance of maintaining water coolant levels in spent fuel pools so that fuel assemblies do not become uncovered.

The particular conditions under which fuel damage and zirconium cladding fires can occur, as well as the timing of such occurrences, are not provided in this report because they are security sensitive.

 

Spent Fuel Pool Loss-of-Coolant Accidents (LOCAs)

In a complete-loss-of-pool-coolant scenario, most of the oxidation of zirconium cladding occurs in an air environment. For a partial-loss-of-pool-coolant scenario (or slow drainage in a complete-loss-of-pool-coolant scenario), the initial oxidation of zirconium cladding will occur in a steam environment:

The zirconium-steam reaction leads to the formation of hydrogen, which can undergo rapid deflagration in the pool enclosure, resulting in overpressures and structural damage. This damage can provide a pathway for air ingress to the pool, which can promote further zirconium oxidation and allow radioactive materials to be released into the environment. Debris from the damaged enclosure can fall into the pool and block coolant passages.

After the water level drops below the rack base plate, convective air flow is established. If the steam is exhausted, then the zirconium-steam reaction is replaced by the zirconium-oxygen reaction. However, prior to the onset of convective air flow, fuel cladding temperatures can exceed the threshold for oxidation, and fuel damage and radioactive material release can occur. The time to damage and release depends on pool water depth relative to the stored fuel assemblies.  There is a higher hazard for zirconium cladding fires in partially drained pools.

[To prevent this] nuclear power plants need to be able to provide at least 500 gallons per minute (gpm) of makeup water to the plant’s spent fuel pools for 12 hours. The operator would first use installed equipment, if available, to meet these goals. If such equipment is not available, then operators would provide makeup water (e.g., from the condensate storage tank) with a portable injection source (pump, flexible hoses to standard connections, and associated diesel engine-generator) that can provide at least 500 gpm of spent fuel pool makeup. The portable equipment would be staged on site and could also be brought in from regional staging facilities.
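For scale, the 500-gpm target can be compared with the boil-off rate of a hot pool, which is roughly the pool decay heat divided by the latent heat of vaporization of water. A minimal sketch; the pool decay heat is an assumed value for illustration:

```python
# Comparing the 500-gpm FLEX makeup target with the boil-off rate of a
# hot pool. The pool decay heat is an assumed value for illustration.

GAL = 3.785e-3    # m^3 per U.S. gallon
H_VAP = 2.26e6    # J/kg, latent heat of vaporization of water
RHO = 1000.0      # kg/m^3, density of water

q_pool = 5e6      # assumed pool decay heat, W
boiloff_gpm = q_pool / H_VAP / RHO / GAL * 60.0

makeup_gal = 500 * 60 * 12   # 500 gpm sustained for 12 hours

print(f"Boil-off at {q_pool / 1e6:.0f} MW: ~{boiloff_gpm:.0f} gpm")
print(f"500 gpm for 12 h delivers {makeup_gal:,} gallons "
      f"(~{makeup_gal * GAL:.0f} m^3)")
```

On this assumption the boil-off rate is a few tens of gallons per minute, so a 500-gpm source leaves substantial margin for leaks and spray losses.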

If pool water levels cannot be maintained above the tops of the fuel assemblies, then portable pumps and nozzles would be used to spray water on the uncovered fuel assemblies. FLEX requires a minimum of 200 gpm to be sprayed onto the tops of the fuel assemblies to cool them (NEI, 2012).

Water and spray strategies need to work even if physical access to the pools is hindered and even if permanently installed equipment is damaged. Physical access might not be possible if the building is damaged or the pool is drained (in the latter case, high radiation levels would likely limit physical access to the pool). The spent fuel pools in Units 1-4 of the Fukushima Daiichi plant were not accessible after the hydrogen explosions because of debris and high radiation levels.

Expedited Transfer of Spent Fuel from Pools to Dry Casks

Spent fuel pools at U.S. nuclear plants were originally outfitted with “low-density” storage racks that could hold the equivalent of one or two reactor cores of spent fuel. This capacity was deemed adequate because plant operators planned to store spent fuel only until it was cool enough to be shipped offsite for reprocessing. However, reprocessing of commercial spent fuel was never implemented on a large scale in the United States; consequently, spent fuel has continued to accumulate at operating nuclear plants.

U.S. nuclear plant operators have taken two steps to manage their growing inventories of spent fuel. First, “high-density” spent fuel storage racks have been installed in pools to increase storage capacities. This action alone increased storage capacities in some pools by up to about a factor of 5 (USNRC, 2003). Second, dry cask storage has been established to store spent fuel that can be air cooled. Typically, transfers of the oldest (and therefore coolest) spent fuel from pools to dry casks are made only when needed to free up space in the pool for offloads of spent fuel resulting from reactor refueling operations. The objective of accelerated or expedited transfer would be to reduce the density of spent fuel stored in pools: “Expedited transfer of spent fuel into dry storage involves loading casks at a faster rate for a period of time to achieve a low density configuration in the spent fuel pool (SFP). The expedited process maintains a low density pool by moving all fuel cooled longer than 5 years out of the pool.”

The low-density configuration achieved by expedited transfer would reduce inventories of spent fuel stored in pools. This might improve the coolability of the remaining fuel in the pools if water coolant was lost or if cooling systems malfunctioned.

Events capable of causing the loss of cooling in spent fuel pools:

  1. seismic events
  2. drops of casks and other heavy loads on pool walls
  3. loss of offsite power
  4. internal fire
  5. loss of pool cooling or water inventory
  6. inadvertent aircraft impacts
  7. wind-driven missiles (the impacts of heavy objects such as storm debris on the external walls of spent fuel pools)
  8. failures of pneumatic seals on the gates in the spent fuel pool

The USNRC’s analyses are of limited use for assessing spent fuel storage risks because:

  1. Spent fuel storage sabotage risks are not considered.
  2. Dry cask storage risks are not considered.
  3. The attributes considered in the cost-benefit analysis (Section 7.3.2) are limited by OMB and USNRC guidance and do not include some expected consequences of severe nuclear accidents.
  4. The analysis employs simplifying bounding assumptions that make it technically difficult to assign confidence intervals to the consequence estimates or make valid risk comparisons.

The present committee’s recommended risk analysis would provide policy makers with a more complete technical basis for deciding whether earlier movements of spent fuel from pools into dry cask storage would be prudent to reduce the potential consequences of accidents and terrorist attacks on stored spent fuel. This recommended risk analysis should:

  • Consider accident and sabotage risks for both pool and dry cask storage.
  • Consider societal, economic, and health consequences of concern to the public, plant operators, and the USNRC.
  • More fully account for uncertainties in scenario probabilities and consequences.

A complete analysis would also include similar considerations for sabotage threats, including the consequences should a design-basis-threat (DBT) event fail to be mitigated, as well as the consequences should beyond-DBT events occur and fail to be mitigated. A complete analysis would consider a broad range of potential threats including insider and cyber threats. Sabotage initiators can differ from accident initiators in important ways: For example, most accident initiators occur randomly in time compared to the operating cycle of a nuclear plant. Sabotage initiating events can be timed with certain phases of a plant’s operating cycle, changing the conditional probabilities of certain attack scenarios as well as their potential consequences. There may be additional differences between accident and sabotage events with respect to timing, severity of physical damage, and magnitudes of particular consequences, for example radioactive material releases.

The following three conditional probabilities could have correlated and high numerical values if knowledgeable and determined saboteurs attack the plant in certain ways during certain parts of its operating cycle:

P(loss of offsite power | sabotage),

P(operating cycle vulnerability | loss of offsite power & sabotage), and

P(liner damage leading to loss of coolant | operating cycle vulnerability & sabotage).

If one assumes, for example, that these conditional probabilities are 1.0, then release frequencies will be about two orders of magnitude higher than those for a seismic initiator. This increased frequency is a consequence of the correlated behavior of the saboteurs with the reactor operating cycle and a high probability of success using a strategy that exploits plant vulnerabilities. On the other hand, decreasing these three conditional probabilities by a factor of 2 (corresponding to either less successful attackers or more successful defenders) will decrease the likelihood of a release by a factor of about 10.

Although the conditional probabilities used in the foregoing scenarios are entirely fictitious (and the scenarios themselves are in no way representative of the broad range of scenarios that could be considered), their use illustrates two important points: (1) a large range of F(release) outcomes is possible depending on the conditional probabilities used in the analysis, and, therefore, (2) it is essential to characterize the uncertainties in F(release) as part of the analysis. A sabotage risk assessment could be used to estimate these outcomes and uncertainties. The committee judges that it is not technically justifiable to exclude sabotage risks without the type of technical analysis that is routinely performed for assessing reactor accident risks. Such an analysis would consider both design-basis and beyond-design-basis threats. The likelihoods of these threats could be assessed through elicitation.
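The arithmetic behind this illustration is a simple product of an initiator frequency and the chain of conditional probabilities. The sketch below reproduces it; the attack frequency is a placeholder assumption, and the conditional probabilities are as fictitious as those in the committee’s example:

```python
# Release frequency as a product of an initiator frequency and the three
# conditional probabilities named above. All numbers are fictitious, as
# in the committee's example; the attack frequency is a placeholder.

def f_release(f_attack, p_lop, p_ocp, p_liner):
    """F(release) = F(attack) * P(loss of offsite power | sabotage)
    * P(operating-cycle vulnerability | ...) * P(liner damage | ...)."""
    return f_attack * p_lop * p_ocp * p_liner

f_attack = 1.7e-5   # placeholder attack frequency per year

# Knowledgeable, determined saboteurs: correlated conditional
# probabilities near 1.0, so F(release) approaches F(attack) itself.
worst = f_release(f_attack, 1.0, 1.0, 1.0)

# Halving each conditional probability cuts F(release) by 2**3 = 8,
# roughly the order-of-magnitude reduction cited in the text.
halved = f_release(f_attack, 0.5, 0.5, 0.5)

print(f"F(release), worst case: {worst:.1e}/yr")
print(f"F(release), halved:     {halved:.1e}/yr (factor {worst / halved:.0f})")
```

The point of the sketch is only that F(release) is extremely sensitive to these correlated probabilities, which is why characterizing their uncertainty is essential.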

SPENT FUEL POOL STUDY

The Spent Fuel Pool Study analyzed the consequences of a beyond-design-basis earthquake on a spent fuel pool at a reference plant containing a General Electric Type 4 boiling water reactor (BWR) with a Mark I containment. The USNRC selected an earthquake having an average occurrence frequency of 1 in 60,000 years and a peak ground acceleration of 0.5-1.0 g (average 0.7 g) as the initiating event for this analysis. The study examined the effects of the earthquake on the integrity of the spent fuel pool and the effects of loss of pool coolant on its stored spent fuel. A modeling analysis was carried out to identify initial damage states to the pool structure from this postulated seismic event. The analysis concluded that structural damage to the pool leading to water leaks (i.e., tears in the steel pool liner and cracks in the reinforced concrete behind the liner) was most likely to occur at the junction of the pool wall and floor. This leak location would result in complete drainage of the pool if no action was taken to plug the leak or add make-up water. Given the assumed earthquake, the leakage probability was estimated to be about 10 percent.

Leak scenarios

  1. No leak in the spent fuel pool
  2. A “small leak” in the pool that averages about 200 gallons per minute for water heights at least 16 feet above the pool floor (i.e., at the top of the spent fuel rack).
  3. A “moderate leak” in the pool that averages about 1,500 gallons per minute for water heights at least 16 feet above the pool floor.

Reactor operating cycle phases:

  • OCP1: 2-8 days; reactor is being defueled.
  • OCP2: 8-25 days; reactor is being refueled.
  • OCP3: 25-60 days; reactor in operation.
  • OCP4: 60-240 days; reactor in operation.
  • OCP5: 240-700 days; reactor in operation.

Fuel configurations in the pool:

  • A “high-density” storage configuration in which hot (i.e., recently discharged from the reactor) spent fuel assemblies are surrounded by four cooler (i.e., less recently discharged from the reactor) fuel assemblies in a 1 × 4 configuration throughout the pool (Figure 7.2).

  • A “low-density” storage configuration in which all spent fuel older than 5 years has been removed from the pool.

Mitigation scenarios:

  • A “mitigation” case in which plant operators are successful in deploying equipment to provide makeup water and spray cooling required by 10 CFR 50.54(hh)(2) (see Chapter 2).
  • A “no-mitigation” case in which plant operators are not successful in taking these actions. [MY NOTE: BECAUSE THE ELECTRIC GRID IS DOWN FROM AN EMP OR OTHER DISASTER]

Some key results of the consequence modeling are shown in Table 7.1.

Some of the loss-of-coolant scenarios examined in the study resulted in damage to, and the release of radioactive material from, the stored spent fuel. Releases began anywhere from several hours to more than 2 days after the postulated earthquake. The largest releases were estimated to result from high-density fuel storage configurations with no mitigation (Figure 7.1). The releases were estimated to be less than 2 percent of the cesium-137 inventory of the stored fuel for moderate-leak scenarios, whereas releases were estimated to be one to two orders of magnitude larger for small-leak scenarios with a hydrogen combustion event. Hydrogen combustion was found to be “possible” for high-density pools but “not predicted” for low-density pools.

Operating-cycle phase (OCP) played a critical role in determining the potential for fuel damage and radioactive material release. The potential for damage is highest immediately after spent fuel is offloaded into the pool (OCP1) because its decay heat is large. The potential for damage decreases through successive operating-cycle phases (OCP2-OCP5). In fact, only in the first three phases (OCP1-OCP3) is the decay heat large enough that complete drainage of the pool leads to fuel damage within the first 72 hours after the earthquake. These three “early in operating cycle” phases (Figure 7.1) constitute only about 8 percent of the operating cycle of the reactor.

In fact, a spent fuel pool accident can result in large radioactive material releases, extensive land contamination, and large-scale population dislocations.

NRC 2016 TABLE 7.1 A & B

TABLE 7.1 Key Results from the Consequence Analysis in the Spent Fuel Pool Study

NOTE: The individual early fatality risk estimates and individual latent cancer fatality risk estimates shown in the table were not derived from a risk assessment. They were computed using the postulated earthquake and scenario frequencies shown in the table. PGA = peak ground acceleration. a) Seismic hazard model from Petersen et al. (2008). b) Given that the specified seismic event occurs. c) Given atmospheric release occurs. d) Results from a release are averaged over potential variations in leak size, time since reactor shutdown, population distribution, and weather conditions (as applicable); additionally, “release frequency-weighted” results are multiplied by the release frequency. e) Linear no-threshold and population weighted (i.e., total amount of latent cancer fatalities predicted in a specified area, divided by the population that resides within that area). f) First year post-accident; calculation uses a dose limit of 500 mrem per year, according to Pennsylvania Code, Title 25 § 219.51. g) Mitigation can moderately increase release size; the effect is small compared to the reduction in release frequency. h) Largest releases here are associated with small leaks (although sensitivity results show large releases are possible from moderate leaks). Assuming no complications from other spent fuel pools/reactors or shortage of available equipment/staff, there is a good chance to mitigate the small leak event. i) Kevin Witt, USNRC, written communication, December 22, 2015.

For example, Figures 7.3A, 7.3B, and 7.3C show the estimated radioactive material releases, land interdiction, and displaced persons for the reference plant in the Spent Fuel Pool Study. Also shown for comparison purposes are the same consequences for the Fukushima Daiichi accident, taken from the committee’s phase 1 report.

NRC 2016 FIGURE 7.3 B & C

FIGURE 7.3 Selected consequences from the Spent Fuel Pool Study as a function of fuel loading (1 × 4 loading; low-density loading) and mitigation required by 10 CFR 50.54(hh)(2). Notes: Consequences for the Fukushima Daiichi accident are shown for comparison. (A) Radioactive material releases. (B) Land interdiction (see footnote 26 for an explanation of the values for the Fukushima bar). (C) Displaced populations. SOURCE: Table 7.1 in this report; IAEA (2015), NRA (2013), NRC (2014, Chapter 6), UNSCEAR (2013).

These figures illustrate three important points:

  1. A spent fuel pool accident can result in large releases of radioactive material, extensive land interdiction, and large population displacements.
  2. Effective mitigation of such accidents can substantially reduce these consequences for some fuel configurations (cf. the bars in the figures for the 1 × 4 mitigated and unmitigated scenarios) but can increase consequences for others (cf. the bars in the figures for the low-density mitigated and unmitigated scenarios).
  3. Low-density loading of spent fuel in pools can substantially reduce these consequences and also reduce the need for effective mitigation measures.

Note that the Fukushima estimate includes land that is both interdicted and likely condemned.

The Spent Fuel Pool Study (USNRC, 2014a) reports only interdicted land. One of the difficulties with USNRC (2014a) is that, unlike previous studies, condemned land is not reported. Of the 430 mi2 (1,113 km2) that were evacuated as of May 2013, 124 mi2 (320 km2) were reported as “difficult to return,” which gives an indication of the amount of land that may ultimately be condemned.

A similar point can be made by examining the unweighted results from the Expedited Transfer Regulatory Analysis (USNRC, 2013) for a “sensitivity case” that removes the 50-mile limit for land interdiction and population displacements and raises the value of the averted dose conversion factor from $2,000 per person-rem to $4,000 per person-rem. This scenario postulates the evacuation of 3.46 million people from an area of 11,920 mi2, larger than the area of New Jersey (Table 7.2).  In comparison, approximately 88,000 people were involuntarily displaced from an area of about 400 mi2 as a consequence of the Fukushima accident (MOE, 2015).
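The conversion factor enters the cost-benefit arithmetic as a simple multiplier on the collective averted dose, so doubling it doubles the monetized benefit of any mitigation measure. A minimal sketch, with the collective dose as an illustrative assumption:

```python
# How the averted-dose conversion factor enters the cost-benefit
# arithmetic. The collective averted dose is an illustrative assumption,
# not a figure from the Expedited Transfer Regulatory Analysis.

averted_dose = 1.0e6   # assumed collective averted dose, person-rem

for dollars_per_person_rem in (2000, 4000):   # base case vs. sensitivity case
    benefit = averted_dose * dollars_per_person_rem
    print(f"${dollars_per_person_rem}/person-rem -> "
          f"monetized benefit ${benefit / 1e9:.1f} billion")
```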

The cost-benefit analysis did not consider some other important health consequences of spent fuel pool accidents, in particular social distress. The Fukushima Daiichi accident produced considerable psychological stresses within populations in the Fukushima Prefecture over the past 4 years, even in areas where radiation levels are deemed by regulators to be acceptable for habitation. Radiation anxiety, insomnia, and alcohol misuse were significantly elevated 3 years after the accident (Karz et al., 2014). The incidence of mental health problems and suicidal thoughts was also high among residents forced to live in long-term shelters after the accident (Amagai et al., 2014).

Complex psychosocial effects were also observed, including discordance within families over perceptions of radiation risk, between families over unequal compensatory treatments, and between evacuees and their host communities (Hasegawa et al., 2015).

Sailor et al. (1987) used a modified version of SFUEL to estimate the risks (likelihoods) of zirconium cladding fires as a function of racking density. They estimated that risks could be reduced by a factor of 5 by switching from high- to low-density racks. This estimate was based on the reduction of minimum decay times before the fuel could be air cooled, and also on the reduction in the likelihood of propagation of a zirconium cladding fire from recently discharged fuel assemblies to older fuel assemblies in the low-density racks compared to high-density racks. However, Sailor et al. (1987) cautioned that “the uncertainties in the risk estimate are large.”

The regulatory analysis for the resolution of Generic Issue 82 (Throm, 1989) was intended to determine whether the use of high-density racks poses an unacceptable risk to the health and safety of the public. The analysis concluded that no regulatory action was needed; that is, the use of high-density storage racks posed an acceptable risk. The technical analysis was based on the studies of Benjamin et al. (1979) and Sailor et al. (1987) and used the factor-of-5 reduction in the likelihood (i.e., the conditional probability of a fire given a drained pool) of a zirconium cladding fire for switching to low-density racks from high-density racks. A cost-benefit analysis analogous to that employed in USNRC (2014a) found that the costs associated with reracking existing pools (and moving older fuel in the pool to dry storage to accommodate reracking) substantially exceeded the benefits in terms of population dose reductions.

The assumptions and methodology used in the regulatory analysis for Generic Issue 82 are similar to those used in USNRC (2014a): A seismic event is considered the most likely initiator of the accident, and spent fuel pool damage frequency is taken to be about 2 × 10⁻⁶ events per reactor-year. Moreover, USNRC (2014a) reached essentially the same conclusions as the regulatory analysis for the resolution of Generic Issue 82 (Throm, 1989).

A more pessimistic view on the uncertainties of modeling spent fuel pool loss-of-coolant accidents was expressed by Collins and Hubbard (2001): “In its thermal-hydraulic analysis . . . the staff concluded that it was not feasible, without numerous constraints, to establish a generic decay heat level (and therefore a decay time) beyond which a zirconium fire is physically impossible. Heat removal is very sensitive to these additional constraints, which involve factors such as fuel assembly geometry and SFP rack configuration. However, fuel assembly geometry and rack configuration are plant specific, and both are subject to unpredictable changes after an earthquake or cask drop that drains the pool. Therefore, since a non-negligible decay heat source lasts many years and since configurations ensuring sufficient air flow for cooling cannot be assured, the possibility of reaching the zirconium ignition temperature cannot be precluded on a generic basis.” (p. 5-2)

There is still a great deal to be learned about the impacts of the accident on the Fukushima Daiichi plant, including impacts on spent fuel storage. Additional information will likely be uncovered as the plant is dismantled and studied, perhaps resulting in new lessons learned and revisions to existing lessons, including those in this report.

References

ACRS (Advisory Committee on Reactor Safeguards). 2012a. Draft Interim Staff Guidance Documents in Support of Tier 1 Orders. July 17. http://pbadupws.nrc.gov/docs/ML1219/ML12198A196.pdf.

ACRS. 2012b. Response to the August 15, 2012 EDO Letter Regarding ACRS Recommendations in Letter Dated July 17, 2012 on the Draft Interim Staff Guidance Documents in Support of Tier 1 Orders. November 7. http://pbadupws.nrc.gov/docs/ML1231/ML12312A197.pdf.

ACRS. 2013. Spent Fuel Pool Study. July 18. http://pbadupws.nrc.gov/docs/ML1319/ML13198A433.pdf.

AFM (Department of the Army Field Manual). 1991. Special Operations Forces Intelligence and Electronic Warfare Operations. Field Manual No. 34-36, Appendix D: Target Analysis Process. September 30. Washington, DC: Department of the Army.

Amagai, M., N. Kobayashi, M. Nitta, M. Takahashi, I. Takada, Y. Takeuch, Y. Sawada, and M. Hiroshima. 2014. Factors related to the mental health and suicidal thoughts of adults living in shelters for a protracted period following a large-scale disaster. Academic Journal of Interdisciplinary Studies 3(3): 11-16.

ASME/ANS (American Society of Mechanical Engineers/American Nuclear Society). 2009. Standard for Level 1/Large Early Release Frequency Probabilistic Risk Assessment for Nuclear Power Plant Applications. ASME/ANS RA-Sa-2009. New York: ASME Technical Publishing Office.

Bader, J. A. 2012. Inside the White House during Fukushima: Managing Multiple Crises. Foreign Affairs (Snapshots), March 8. http://www.foreignaffairs.com/articles/137320/jeffrey-a-bader/inside-the-white-house-during-fukushima#.

Benjamin, A. S., D. J. McCloskey, D. A. Powers, and S. A. Dupree. 1979. Fuel Heat-up Following Loss of Water during Storage. NUREG/CR-0649, SAND77-1371. Albuquerque, NM: Sandia National Laboratories. http://pbadupws.nrc.gov/docs/ML1209/ML120960637.pdf.

Bennett, B. T. 2007. Understanding, Assessing, and Responding to Terrorism: Protecting Critical Infrastructure and Personnel. Hoboken, NJ: John Wiley & Sons.

Bier, V., M. Corradini, R. Youngblood, C. Roh, and S. Liua. 2014. Development of an updated societal-risk goal for nuclear power safety: Probabilistic safety assessment and management, PSAM-12. INL/CON-13-30495. Proceedings of the Conference on Probabilistic Safety Assessment and Management, Honolulu, Hawaii, June 22-27. http://psam12.org/proceedings/paper/paper_199_1.pdf.

Blustein, P. 2013. Fukushima’s Worst-Case Scenarios: Much of what you’ve heard about the nuclear accident is wrong. Slate, September 26. http://www.slate.com/articles/health_and_science/science/2013/09/fukushima_disaster_new_information_about_worst_case_scenarios.2.html.

Boyd, C. F. 2000. Predictions of Spent Fuel Heatup after a Complete Loss of Spent Fuel Pool Coolant. NUREG-1726. Washington, DC: U.S. Nuclear Regulatory Commission. http://pbadupws.nrc.gov/docs/ML0037/ML003727905.pdf.

Bromet, E. J., and L. Litcher-Kelly. 2002. Psychological response of mothers of young children to the Three Mile Island and Chernobyl Nuclear Plant accidents one decade later. In J. Havenaar, J. Cwikel, and E. Bromet (Eds.), Toxic Turmoil: Psychological and Societal Consequences of Ecological Disasters (pp. 69-84). New York: Springer Science+Business Media LLC/Springer US.

Brown, G. G., and L. A. Cox, Jr. 2011a. Making terrorism risk analysis less harmful and more useful: Another try. Risk Analysis 31(2): 193-195.

Brown, G. G., and L. A. Cox, Jr. 2011b. How probabilistic risk assessment can mislead terrorism risk analysts. Risk Analysis 31(2): 196-204.

Budnitz, R. J., G. Apostolakis, D. M. Boore, L. S. Cluff, K. J. Coppersmith, C. Allin Cornell, and P. A. Morris. 1998. Use of technical expert panels: Applications to probabilistic seismic hazard analysis. Risk Analysis 18(4): 463-469.

Chen, S. R., W. C. Lin, Y. M. Ferng, C. C. Chieng, and B. S. Pei. 2014. CFD simulating the transient thermal–hydraulic characteristics in a 17 x 17 bundle for a spent fuel pool under the loss of external cooling system accident. Annals of Nuclear Energy 73(2014): 241-249.

Clauset, A., M. Young, and K. S. Gleditsch. 2007. On the frequency of severe terrorist events. Journal of Conflict Resolution 51(1): 58-87.

Cleveland, K. 2014. Mobilizing Nuclear Bias: The Fukushima Nuclear Crisis and the Politics of Uncertainty. The Asia-Pacific Journal, May 18. http://apjjf.org/2014/12/20/Kyle-Cleveland/4116/article.html.

Collins, T. E., and G. Hubbard. 2001. Technical Study of Spent Fuel Pool Accident Risk at Decommissioning Nuclear Power Plants. NUREG-1738. Washington, DC: U.S. Nuclear Regulatory Commission. http://pbadupws.nrc.gov/docs/ML0104/ML010430066.pdf.

Cooke, R. M., A. M. Wilson, J. T. Tuomisto, O. Morales, M. Tainio, and J. S. Evans. 2007. A probabilistic characterization of the relationship between fine particulate matter and mortality: Elicitation of European experts. Environmental Science & Technology 41(18): 6598-6605.

Danzer, A. M., and N. Danzer. 2014. The Long-Run Consequences of Chernobyl: Evidence on Subjective Well-Being, Mental Health and Welfare. Center for Economic Studies and Ifo Institute, Working Paper No. 4855.

DCLG (Department for Communities and Local Government). 2009. Multi-criteria analysis: A manual. 08ACST05703. London: DCLG. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/7612/1132618.pdf.

Denning, R., and S. McGhee. 2013. The societal risk of severe accidents in nuclear power plants. Transactions of the American Nuclear Society 108: 521-525.

DHS (U.S. Department of Homeland Security). 2010. Nuclear Reactors, Materials, and Waste Sector-Specific Plan: An Annex to the National Infrastructure Protection Plan. Washington, DC: DHS. https://www.dhs.gov/xlibrary/assets/nipp-ssp-nuclear-2010.pdf

DHS. 2013. NIPP 2013: Partnering for Critical Infrastructure Security and Resilience. Washington, DC: DHS. http://www.dhs.gov/sites/default/files/publications/National-Infrastructure-Protection-Plan-2013-508.pdf

EPRI (Electric Power Research Institute). 2004. Probabilistic Consequence Analysis of Security Threats—A Prototype Vulnerability Assessment Process for Nuclear Power Plants. Technical Report No. 1007975. Palo Alto, CA: EPRI.

EPRI. 2012a. Summary of the EPRI Early Event Analysis of the Fukushima Daiichi Spent Fuel Pools Following the March 11, 2011 Earthquake and Tsunami in Japan, Technical Update. Palo Alto, CA: EPRI. http://www.epri.com/abstracts/Pages/ProductAbstract.aspx?ProductId=000000000001025058.

EPRI. 2012b. Practical Guidance on the Use of PRA in Risk-Informed Applications with a Focus on the Treatment of Uncertainty. Palo Alto, CA: EPRI. http://www.epri.com/abstracts/Pages/ProductAbstract.aspx?ProductId=000000000001026511.

Ezell, B., and A. Collins. 2011. Letter to the Editor. Risk Analysis 31(2): 192.

Ezell, B. C., S. P. Bennett, D. von Winterfeldt, J. Sokolowski, and A. J. Collins. 2010. Probabilistic risk analysis and terrorism risk. Risk Analysis 30(4): 575-589. https://www.dhs.gov/xlibrary/assets/rma-risk-assessment-technical-publication.pdf.

Forester, J., A. Kolaczkowski, S. Cooper, D. Bley, and E. Lois. 2007. ATHEANA User’s Guide: Final Report. NUREG 1880. Washington, DC: U.S. Nuclear Regulatory Commission. http://pbadupws.nrc.gov/docs/ML0721/ML072130359.pdf.

Frye, R. M., Jr. 2013. The use of expert elicitation at the U.S. Nuclear Regulatory Commission. Albany Law Journal of Science and Technology 23(2): 309-382. http://www.albanylawjournal.org/Documents/Articles/23.2.309-Frye.pdf.

GAO (U.S. Government Accountability Office). 2006. Nuclear Power Plants: Efforts Made to Upgrade Security, but the Nuclear Regulatory Commission’s Design Basis Threat Process Should Be Improved. GAO-06-388. Washington, DC: GAO. http://www.gao.gov/new.items/d06388.pdf.

Garrick, B. J., J. E. Hall, M. Kilger, J. C. McDonald, T. O’Toole, P. S. Probst, E. R. Parker, R. Rosenthal, A. W. Trivelpiece, L. A. VanArsdal, and E. L. Zebroski. 2004. Confronting the risks of terrorism: Making the right decisions. Reliability Engineering and System Safety 86(2): 129-176.

Gauntt, R., D. Kalinich, J. Cardoni, J. Phillips, A. Goldmann, S. Pickering, M. Francis, K. Robb, L. Ott, D. Wang, C. Smith, S. St. Germain, D. Schwieder, and C. Phelan. 2012. Fukushima Daiichi Accident Study (Status as of April 2012). SAND2012-6173. Albuquerque, NM, and Livermore, CA: Sandia National Laboratories. https://fukushima.inl.gov/PDF/FukushimaDaiichiAccidentStudy.pdf.

GIF (The Proliferation Resistance and Physical Protection Evaluation Methodology Working Group of the Generation IV International Forum). 2011. Evaluation Methodology for Proliferation Resistance and Physical Protection of Generation IV Nuclear Energy Systems. Revision 6. September 15. GEN IV International Forum. https://www.gen-4.org/gif/upload/docs/application/pdf/2013-09/gif_prppem_rev6_final.pdf. (Last accessed March 8, 2016.)

Gneiting, T., F. Balabdaoui, and A. E. Raftery. 2007. Probabilistic forecasts, calibration and sharpness. Journal of the Royal Statistical Society, Series B 69(2): 243-268. http://onlinelibrary.wiley.com/doi/10.1111/j.1467-9868.2007.00587.x/epdf

Government of Japan. 2011. Report of Japanese Government to the IAEA Ministerial Conference on Nuclear Safety: The Accident at TEPCO’s Fukushima Nuclear Power Stations. June. Tokyo: Government of Japan NERHQ (Nuclear Emergency Response Headquarters). www.kantei.go.jp/foreign/kan/topics/201106/iaea_houkokusho_e.html.

Government of Japan. 2015. Events and Highlights on the Progress Related to Recovery Operations at Fukushima Daiichi Nuclear Power Station. https://www.iaea.org/sites/default/files/highlights-japan1115.pdf.

Hasegawa, A., K. Tanigawa, A. Ohtsuru, H. Yabe, M. Maeda, J. Shigemura, T. Ohira, T. Tominaga, M. Akashi, N. Hirohashi, T. Ishikawa, K. Kamiya, K. Shibuya, S. Yamashita, and R. K. Chhem. 2015. Health effects of radiation and other health problems in the aftermath of nuclear accidents, with an emphasis on Fukushima. The Lancet 386(9992): 479-488.

Hirschberg, S., C. Bauer, P. Burgherr, E. Cazzoli, T. Heck, M. Spada, and K. Treyer. 2016. Health effects of technologies for power generation: Contributions from normal operation, severe accidents and terrorist threat. Reliability Engineering and System Safety 145(2016): 373-387.

Hugo, B. R., and R. P. Omberg. 2015. Evaluation of Fukushima Daiichi Unit 4 spent fuel pool. International Nuclear Safety Journal 4(2): 1-5.

Hung, T.-C., V. K. Dhir, B.-S. Pei, Y.-S. Chen, and F. P. Tsai. 2013. The development of a three-dimensional transient CFD model for predicting cooling ability of spent fuel pools. Applied Thermal Engineering 50(2013): 496-504.

IAEA (International Atomic Energy Agency). 2015. The Fukushima Daiichi Accident. http://www-pub.iaea.org/books/IAEABooks/10962/The-Fukushima-Daiichi-Accident.

Insua, D. R., and S. French. 1991. A framework for sensitivity analysis in discrete multi-objective decision-making. European Journal of Operational Research 54(2): 176-190.

Investigation Committee (Investigation Committee on the Accident at Fukushima Nuclear Power Stations of Tokyo Electric Power Company). 2011. Interim Report. December 26. Tokyo: Government of Japan. http://www.cas.go.jp/jp/seisaku/icanps/eng/interim-report.html.

Investigation Committee. 2012. Final Report on the Accident at Fukushima Nuclear Power Stations of Tokyo Electric Power Company. July 23. Tokyo: Government of Japan. http://www.cas.go.jp/jp/seisaku/icanps/eng/final-report.html

Jäckel, B. S. 2015. Status of spent fuel in the reactor buildings of Fukushima Daiichi 1-4. Nuclear Engineering and Design 283(March): 2-7.

Jo, J. H., P. F. Rose, S. D. Unwin, V. L. Sailor, K. R. Perkins, and A. G. Tingle. 1989. Value/Impact Analyses of Accident Preventive and Mitigative Options for Spent Fuel Pools. NUREG/CR-5281, BNL-NUREG-52180. Upton, NY, and Washington, DC: Brookhaven National Laboratory and U.S. Nuclear Regulatory Commission. http://pbadupws.nrc.gov/docs/ML0716/ML071690022.pdf.

Kaplan, S., and B. J. Garrick. 1981. On the quantitative definition of risk. Risk Analysis 1(1): 11-27.

Karz, A., J. Reichstein, R. Yanagisawa, and C. L. Katz. 2014. Ongoing mental health concerns in post-3/11 Japan. Annals of Global Health 80(2): 108-114.

Keeney, R. L., and D. von Winterfeldt. 1991. Eliciting probabilities from experts in complex technical problems. IEEE Transactions on Engineering Management 38: 191-201.

Kondo, S. 2011. Sketches of Scenarios of Contingencies at Fukushima Daiichi Nuclear Power Plant [in Japanese]. March 25. http://www.asahi-net.or.jp/~pn8r-fjsk/saiakusinario.pdf.

Kotra, J. P., M. P. Lee, N. A. Eisenberg, and A. R. DeWispelare. 1996. Branch Technical Position on the Use of Expert Elicitation in the High-Level Radioactive Waste Program. NUREG-1563. November. Washington, DC: U.S. Nuclear Regulatory Commission. http://www.nrc.gov/reading-rm/doc-collections/nuregs/staff/sr1563/sr1563.pdf

Kubota, Y. 2012. Factbox: Japan’s hidden nightmare scenario for Fukushima. Reuters, February 17.

Lewis, E. E. 2008. Fundamentals of Nuclear Reactor Physics. Burlington, MA, and San Diego, CA: Academic Press (Elsevier).

Lienhard IV, J. H., and J. H. Lienhard V. 2015. A Heat Transfer Textbook, 4th Edition. Cambridge, MA: Phlogiston Press. http://ahtt.mit.edu/.

Lindgren, E. R., and S. G. Durbin. 2007. Characterization of Thermal-Hydraulic and Ignition Phenomena in Prototypic, Full-Length Boiling Water Reactor Spent Fuel Pool Assemblies after a Postulated Complete Loss-of-Coolant Accident. SAND2007-2270. Washington, DC, and Albuquerque, NM: U.S. Nuclear Regulatory Commission and Sandia National Laboratories. http://pbadupws.nrc.gov/docs/ML1307/ML13072A056.pdf.

Masunaga, T., A. Kozlovsky, A. Lyzikov, N. Takamura, and S. Yamashit. 2014. Mental health status among younger generation around Chernobyl. Archives of Medical Science 9(6): 1114-1116. Mellers, B., E. Stone, T. Murray, A. Minster, N. Rohrbaugh, M. Bishop, E. Chen, J. Baker, Y. Hou, M. Horowitz, L. Ungar, and P. Tetlock. 2015. Identifying and cultivating superforecasters as a method for improving probabilistic predictions. Perspectives on Psychological Science 10(3): 267-281. MOE (Japan Ministry of the Environment). 2015. Progress on Off-site Cleanup Efforts in Japan (April). Tokyo: MOE. http://www.export.gov/japan/build/groups/public/@eg_jp/documents/webcontent/eg_jp_085466.pdf.

Morgan, M. G. 2014. Use (and abuse) of expert elicitation in support of decision making for public policy. Proceedings of the National Academy of Sciences of the United States of America 111(20): 7176-7184. http://www.pnas.org/content/111/20/7176. full.pdf.

NAIIC (Nuclear Accident Independent Investigation Commission). 2012. The Official Report of the Fukushima Nuclear Accident Independent Investigation Commission. Tokyo: National Diet of Japan. https://www.nirs.org/fukushima/naiic_report.pdf.

NEA (Nuclear Energy Agency). 2015. Status Report on Spent Fuel Pools under Loss-ofCooling and Loss-of-Coolant Accident Conditions: Final Report. NEA/CNSI/R(2015)-2. Paris: NEA–Organisation for Economic Co-Operation and Development. https://www.oecd-nea.org/nsd/docs/2015/csni-r2015-2.pdf.

NEI (Nuclear Energy Institute). 2009. B.5.b Phase 2 & 3 Submittal Guidance. NEI 06-12, Revision 3. Washington, DC: NEI.

NEI. 2012. Diverse and Flexible Coping Strategies (FLEX) Implementation Guide. NEI 12-06, Revision B1. Washington, DC: NEI. http://pbadupws.nrc.gov/docs/ML1214/ ML12143A232.pdf.

Nishihara, K., H. Iwamoto, and K. Suyama. 2012. Estimation of Fuel Compositions in Fukushima-Daiichi Nuclear Power Plant. JAEA-Data/Code 2012-018. Tokai, Japan: Japan Atomic Energy Agency.

NRA (Nuclear Regulation Authority of Japan). 2013. Monitoring Air Dose Rates from a Series of Aircraft Surveys across the Two Years after the Fukushima Daiichi NPS Accident. June 5. Tokyo: Radiation Monitoring Division, Secretariat of the Nuclear Regulation Authority. https://www.nsr.go.jp/data/000067128.pdf.

NRA. 2014. Analysis of the TEPCO Fukushima Daiichi NPS Accident: Interim Report (Provisional Translation). October. Tokyo: NRA. https://www.iaea.org/sites/ default/files/anaylysis_nra1014.pdf.

NRC (National Research Council). 2004. Safety and Security of Commercial Spent Nuclear Fuel Storage (U). Washington, DC: National Research Council.

NRC. 2006. Safety and Security of Commercial Spent Nuclear Fuel Storage: Public Report. Washington, DC: The National Academies Press. http://www.nap. edu/catalog/11263/safety-and-security-of-commercial-spent-nuclear-fuel-storage-public.

NRC. 2008. Department of Homeland Security Bioterrorism Risk Assessment: A Call for Change. Washington, DC: The National Academies Press. http://www.nap. edu/catalog/12206/department-of-homeland-security-bioterrorism-risk-assessment-a-callfor.

NRC. 2010. Review of the Department of Homeland Security’s Approach to Risk Analysis. Washington, DC: The National Academies Press. http://www.nap. edu/catalog/12972/review-of-the-department-of-homeland-securitys-approach-to-riskanalysis.

NRC. 2011. Understanding and Managing Risk in Security Systems for the DOE Nuclear Weapons Complex. Washington, DC: The National Academies Press. http:// www.nap.edu/catalog/13108/understanding-and-managing-risk-in-security-systems-forthe-doe-nuclear-weapons-complex.

NRC. 2014. Lessons Learned from the Fukushima Nuclear Accident for Improving Safety of U.S. Nuclear Plants. Washington, DC: The National Academies Press. http://www.nap.edu/catalog/18294/lessons-learned-from-the-fukushima-nuclear-accidentfor-improving-safety-of-us-nuclear-plants.

OMB (U.S. Office of Management and Budget). 1992. Guidelines and Discount Rates for Benefit-Cost Analysis of Federal Programs. Circular No. A-94. Washington, DC: Office of Management and Budget. https://www.whitehouse.gov/sites/default/files/ omb/assets/a94/a094.pdf.

Parfomak, P. W. 2014. Physical Security of the U.S. Power Grid: High Voltage Transformer Substations. July 17. Washington, DC: Congressional Research Service.

Petersen, M. D., A. D. Frankel, S. C. Harmsen, C. S. Mueller, K. M. Haller, R. L. Wheeler, R. L. Wesson, Y. Zeng, O. S. Boyd, D. M. Perkins, N. Luco, E. H. Field, C. J. Wills, and K. S. Rukstales. 2008. Documentation for the 2008 Update of the United States National Seismic Hazard Maps. U.S. Geological Survey Open-File Report 2008–1128. Reston, VA: U.S. Geological Survey. http://pubs.usgs.gov/of/2008/1128/. (Last accessed February 26, 2016.)

Povinec, P., K. Hirose, and M. Aoyama. 2013. Fukushima Accident: Radioactivity Impact on the Environment. Waltham, MA: Elsevier.

RJIF (Rebuild Japan Initiative Foundation’s Independent Investigation Commission on the Fukushima Nuclear Accident). 2014. The Fukushima Daiichi Nuclear Power Station Disaster: Investigating the Myth and Reality. London: Routledge.

Ross, K., J. Phillips, R. O. Gauntt, and K. C. Wagner. 2014. MELCOR Best Practices as Applied in the State-of-the-Art Reactor Consequence Analyses (SOARCA) Project. NUREG/ CR-7008. Washington, DC: Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission. http://pbadupws.nrc.gov/docs/ML1423/ML14234A136. pdf.

Sailor, V. L., K. R. Perkins, J. R. Weeks, and H. R. Connell. 1987. Severe Accidents in Spent Fuel Pools in Support of Generic Safety, Issue 82. NUREG/CR-4982, BNLNUREG-52093. July. Upton, NY: Brookhaven National Laboratory. http:// www.osti.gov/scitech/servlets/purl/6135335.

Satopää, V., J. Baron, D. P. Foster, B. A. Mellers, P. E. Tetlock, and L. H. Ungar. 2014. Combining multiple probability predictions using a simple logit model. International Journal of Forecasting 30(2): 344-356. Sehgal, B. R. (Ed.). 2011. Nuclear Safety in Light Water Reactors: Severe Accident Phenomenology. Oxford, UK: Academic Press (Elsevier).

Sugiyama, G., J. S. Nasstrom, B. Pobanz, K. T. Foster, M. Simpson, P. Vogt, F. Aluzzi, M. Dillon, and S. Homann. 2013. NARAC Modeling During the Response to the Fukushima Dai-ichi Nuclear Power Plant Emergency. LLNL-CONF-529471. Livermore, CA: Lawrence Livermore National Laboratory. https://e-reports-ext.llnl.gov/ pdf/564098.pdf.

Tateiwa, K. 2015. Spent Fuel and Spent Fuel Storage Facilities at Fukushima Daiichi. Presentation to the Committee on Lessons Learned from the Fukushima Nuclear Accident for Improving Safety and Security of U.S. Nuclear Plants. January 29. Washington, DC: National Academies of Sciences, Engineering, and Medicine.

Teodorczyk, A., and J. E. Shepherd. 2012. Interaction of a Shock Wave with a Water Layer. Technical Report FM2012.002. May. Pasadena, CA: California Institute of Technology. http://shepherd.caltech.edu/EDL/publications/reprints/galcit_fm2012-002. pdf.

TEPCO (Tokyo Electric Power Company). 2011. Fukushima Nuclear Accident Analysis Report (Interim Report). December 2. Tokyo: TEPCO. http://www.tepco.co.jp/ en/press/corp-com/release/11120205-e.html.

TEPCO. 2012a. Fukushima Nuclear Accident Analysis Report. June 20. Tokyo: TEPCO. http://www.tepco.co.jp/en/press/corp-com/release/2012/1205638_1870.html.

TEPCO. 2012b. The Integrity Evaluation of the Reactor Building at Unit 4 in the Fukushima Daiichi Nuclear Power Station. Presentation to the Government and TEPCO’s Mid to Long Term Countermeasure Meeting Management Council. May. https:// www.oecd-nea.org/nsd/fukushima/documents/Fukushima4_SFP_integrity_May_2012. pdf.

TEPCO. 2012c. The skimmer surge tank drawdown at the Unit 4 spent fuel pool. January 23. TEPCO. 2012c. The skimmer surge tank drawdown at the Unit 4 spent fuel pool. January 23. e.pdf. (Last Accessed February 26, 2016.)

Throm, E. 1989. Regulatory Analysis for the Resolution of Generic Issue 82 “Beyond Design Basis Accidents in Spent Fuel Pools.” NUREG-1353. April. Washington, DC: Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission. http:// pbadupws.nrc.gov/docs/ML0823/ML082330232.pdf.

UNSCEAR (United Nations Scientific Committee on the Effects of Atomic Radiation). 2013. Sources, Effects and Risks of Ionizing Radiation. New York: United Nations. http://www.unscear.org/docs/reports/2013/14-06336_Report_2013_Annex_A_Ebook_ website.pdf.

USNRC (U.S. Nuclear Regulatory Commission). 1983. Safety Goals for Nuclear Power Plant Operation. NUREG-0880 (Revision 1). Washington, DC: U.S. Nuclear Regulatory Commission. http://pbadupws.nrc.gov/docs/ML0717/ML071770230.pdf.

USNRC. 1986. Safety Goals for the Operation of Nuclear Power Plants. 51 Federal Register 28044 (August 4, 1986) as corrected and republished at 51 Federal Register 30028 (August 21, 1986).

USNRC. 1990. Severe Accident Risks: An Assessment for Five U.S. Nuclear Power Plants. NUREG-1150. October. Washington, DC: USNRC. http://www.nrc.gov/ reading-rm/doc-collections/nuregs/staff/sr1150/.

USNRC. 1997a. Perimeter Intrusion and Alarm Systems. Regulatory Guide 5.44 (Rev. 3). Washington, DC: Office of Nuclear Regulatory Research, USNRC.

USNRC. 1997b. Regulatory Analysis Technical Evaluation Handbook, Final Report. NUREG/ BR-0184. January. Washington, DC: Office of Nuclear Regulatory Research, USNRC. http://www.nrc.gov/about-nrc/regulatory/crgr/content-rqmts/nuregbr-0184. pdf.

USNRC. 1997c. Operating Experience Feedback Report: Assessment of Spent Fuel Cooling. NUREG-1275, Vol. 12. Washington, DC: Office for Analysis and Evaluation of Operational Data, USNRC. http://pbadupws.nrc.gov/docs/ML0106/ ML010670175.pdf. (Last accessed April 5, 2016.)

USNRC. 2003. Resolution of Generic Safety Issues. NUREG-0933. October. Washington, DC: USNRC. http://nureg.nrc.gov/sr0933/.

USNRC. 2004. Regulatory Analysis Guidelines of the U.S. Nuclear Regulatory Commission. NUREG/BR-0058, Revision 4. September. Washington, DC: Office of Nuclear Regulatory Research, USNRC. http://www.nrc.gov/reading-rm/doc-collections/nuregs/brochures/br0058/br0058r4.pdf.

USNRC. 2007a. Independent Spent Fuel Storage Installation Security Requirements for Radiological Sabotage. SECY-07-0148. Washington, DC: USNRC.

USNRC. 2007b. A Pilot Probabilistic Risk Assessment of a Dry Cask Storage System at a Nuclear Power Plant. NUREG-1864. Washington, DC: Office of Nuclear Regulatory Research, USNRC. http://pbadupws.nrc.gov/docs/ML0713/ML071340012. pdf.

USNRC. 2009a. Draft Technical Basis for a Rulemaking to Revise the Security Requirements for Facilities Storing Spent Nuclear Fuel and High-Level Radioactive Waste, Revision 1. NRC-2009-0558. http://pbadupws.nrc.gov/docs/ML0932/ML093280743.pdf.

USNRC. 2009b. An Approach for Determining the Technical Adequacy of Probabilistic Risk Assessment Results for Risk-Informed Activities. RG 1.200, Revision 2. March. Washington, DC: USNRC. http://pbadupws.nrc.gov/docs/ML0904/ML090410014. pdf.

USNRC. 2011a. Intrusion Detection Systems and Subsystems: Technical Information for NRC Licensees. NUREG-1959. March. Washington, DC: Office of Nuclear Security and Incident Response, USNRC. Avalable at http://pbadupws.nrc.gov/docs/ML1111/ ML11112A009.pdf.

USNRC. 2011b. Prioritization of Recommended Actions to Be Taken in Response to Fukushima Lessons Learned. SECY-11-0037. October 3. http://pbadupws. nrc.gov/docs/ML1127/ML11272A111.html.

USNRC. 2012a. Letter from E. J. Leeds and M. R. Johnson with Order Modifying Licenses with Regard to Requirements for Mitigation Strategies for Beyond-Design-Basis External Events. Order EA-12-049. March 12. Washington, DC: USNRC. http:// pbadupws.nrc.gov/docs/ML1205/ML12054A735.pdf.

USNRC. 2012b. State-of-the-Art Reactor Consequence Analyses (SOARCA) Report. NUREG-1935, January. Washington, DC: Office of Nuclear Regulatory Research, USNRC. http://www.nrc.gov/reading-rm/doc-collections/nuregs/staff/ sr1935/.

USNRC. 2012c. Proposed Orders and Requests for Information in Response to Lessons Learned from Japan’s March 11, 2011, Great Tohoku Earthquake and Tsunami. SECY12-0025. February 17. Washington, DC: USNRC. http://pbadupws.nrc.gov/ docs/ML1203/ML12039A111.pdf.

USNRC. 2013. Regulatory Analysis for Japan Lessons-Learned Tier 3 Issue on Expedited Transfer of Spent Fuel. COMSECY 13-0030. November 12. Washington, DC: USNRC. http://www.nrc.gov/reading-rm/doc-collections/commission/commsecy/2013/2013-0030comscy.pdf.

USNRC. 2014a. Consequence Study of a Beyond-Design-Basis Earthquake Affecting the Spent Fuel Pool for a U.S. Mark I Boiling Water Reactor. NUREG-2161. September. Washington, DC: Office of Nuclear Regulatory Research, USNRC. http://pbadupws. nrc.gov/docs/ML1425/ML14255A365.pdf.

USNRC. 2014b. Qualitative Consideration of Factors in the Development of Regulatory Analyses and Backfit Analyses. SECY-14-0087. August 14. Washington, DC: USNRC. http://pbadupws.nrc.gov/docs/ML1412/ML14127A451.pdf.

USNRC. 2016. Revision to JLD-ISG-2012-01: Compliance with Order EA-12-049, Order Modifying Licenses with Regard to Requirements for Mitigation Strategies for BeyondDesign-Basis External Events. Interim Staff Guidance, Revision 1. January 22. Washington, DC: Japan Lessons-Learned Division, USNRC. http://pbadupws.nrc. gov/docs/ML1535/ML15357A163.pdf. (Last accessed April 5, 2016.)

USNRC NTTF (U.S. Nuclear Regulatory Commission Near-Term Task Force). 2011. Recommendations for Enhancing Reactor Safety in the 21st Century: The Near-Term Task Force Review of Insights from the Fukushima Dai-Ichi Accident. Rockville, MD: USNRC. http://pbadupws.nrc.gov/docs/ML1118/ML111861807.pdf.

Wagner, K. C., and R. O. Gauntt. 2008. Analysis of BWR Spent Fuel Pool Flow Patterns Using Computational Fluid Dynamics: Supplemental Air Cases. SANDIA Letter Report, Revision 3. January.

Wang, C., and V. M. Bier. 2012. Optimal defensive allocations in the face of uncertain terrorist preferences, with an emphasis on transportation. Homeland Security Affairs, DHS Centers of Excellence Science and Technology Student Papers. March. https://www.hsaj.org/articles/210.

Wang, D., I. C. Gauld, G. L. Yoder, L. J. Ott, G. F. Flanagan, M. W. Francis, E. L. Popov, J. J. Carbajo, P. K. Jain, J. C. Wagner, and J. C. Gehin. 2012. Study of Fukushima Daiichi Nuclear Power Station Unit 4 spent-fuel pool. Nuclear Technology 180(2): 205-215.

Wataru, M. 2014. Spent Fuel Management in Japan. International Nuclear Materials Management Spent Fuel Management Seminar XXIX, January 15. http://www. inmm.org/AM/Template.cfm?Section=29th_Spent_Fuel_Seminar&Template=/CM/ ContentDisplay.cfm&ContentID=4373.

Willis, H. H., and T. LaTourrette. 2008. Using probabilistic terrorism risk modeling for regulatory benefit-cost analysis: Application to the Western Hemisphere travel initiative in the land environment. Risk Analysis 28(2): 325-339.

Willis, H. H., T. LaTourrette, T. K. Kelly, S. Hickey, and S. Neill. 2007. Terrorism Risk Modeling for Intelligence Analysis and Infrastructure Protection. Technical Report. Santa Monica, CA: Center for Terrorism Risk Management Policy, RAND.

Wreathall, J., D. Bley, E. Roth, J. Multer, and T. Raslear. 2004. Using an integrated process of data and modeling in HRA. Reliability Engineering and System Safety 83(2): 221-228.

 

 

Posted in Blackouts, EMP Electromagnetic Pulse, Nuclear Power, Nuclear spent fuel fire, Nuclear Waste | Tagged , , , , | 1 Comment

U.S. House meeting on terrorist threats to energy security

[ Even though this hearing was over a decade ago, the issues are still the same.  Nothing has changed.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation, 2015, Springer]

House 109-70. July 27, 2005. Terrorist threats to energy security. U.S. House of Representatives, 41 pages

EDWARD R. ROYCE, CALIFORNIA.   The possibility of energy terrorism—attacks on the world’s energy infrastructure—doesn’t generate the same attention as potential chemical or biological or nuclear terrorism. But the economic implications of such attacks are potentially enormous. Many believe that the reason we are looking at oil at $60 a barrel is the fact that we have a ‘‘terror premium’’ factored into the price of a barrel of oil.

Some suggest that oil terrorism is emerging as a major threat to the global economy. Combating this threat should be a part of our complex goal of improving our Nation’s energy security. Because of U.S. energy demands and the global nature of energy markets, terrorists can strike at us almost anywhere in the world… There is strong evidence that a relatively small disruption to oil production throughout the world could spike world energy prices, severely harming the American economy. We have taken steps to improve the security of the energy infrastructure of this country since 9/11. But, unfortunately, terrorist attacks abroad could hurt us as if they were committed here at home.

Al-Qaeda and others seem to be thinking this way. Al-Qaeda documents call for, in their words, ‘‘hitting wells and pipelines that will scare foreign companies from working there and stealing Muslim treasures.’’ Last February a message posted on an al-Qaeda-affiliated Web site entitled ‘‘Map of Future al-Qaeda Operations’’ stated that terrorists would make it a priority to attack Middle East oil facilities.

The vulnerability of Saudi Arabia to energy terrorism is a particular concern. By far, Saudi Arabia is the world’s most important oil-producing country, being the largest exporter and the only country with significant excess production capacity.

In 2002, Saudi intelligence reportedly disrupted a planned attack against the Ras Tanura refinery, the largest in the world. Over the last few years there have been several deadly attacks on Western oil workers, including Americans. These attacks have disrupted oil markets and driven up insurance premiums. It is worth noting that some Saudis support these terrorist attacks through their financial support for Wahhabism abroad.

[ Note: an attack on Saudi Arabia’s largest oil complex at Abqaiq was foiled in February 2006. ]

Pipelines, which carry one-half the world’s oil and most of its natural gas, are generally built above ground, making them common targets for terrorists and insurgents. Pipelines have been attacked in Chechnya, Turkey, Nigeria, Colombia, and elsewhere, costing local governments billions of dollars. In Iraq, pipeline attacks have been pervasive. It is estimated that pipeline sabotage has cost Iraq more than $10 billion in oil revenues, despite the high priority coalition forces have put on pipeline protection. There is concern that the insurgents who have been attacking Iraqi pipelines have gained a measure of expertise, which will be transferred elsewhere.

Global shipping choke points are vulnerabilities in the world’s energy system. The Strait of Malacca is one of the world’s busiest sea lanes, through which half the world’s oil supplies and two-thirds of its liquefied natural gas transit to energy-dependent northeast Asia. The narrow and shallow straits have a long history of piracy, and today well-established terrorist groups operate in the region, including Jemaah Islamiya. Some believe several troubling scenarios are possible, including terrorists hijacking an oil or LNG tanker and turning it into a floating bomb to be detonated in a busy seaport.

These issues are just one part of the complex issue of energy security. An important task in setting policy is gauging the likelihood of a potential terrorist threat and assessing the likely impact. Only with that information on the table can priorities be established. It is my hope that today we can answer some of these questions in this regard and begin to look at the adequacy of policies designed to address terrorist threats abroad to our energy security.

ROBBIE DIAMOND, PRESIDENT, SECURING AMERICA’S FUTURE ENERGY (SAFE). Thank you for holding this hearing to advance our understanding of America’s dependence on oil and the serious national security vulnerabilities of this dependence which, if exploited, could result in widespread economic dislocation and increased global instability. I speak to you today on behalf of Securing America’s Future Energy (SAFE), a nonpartisan group that is committed to reducing America’s dependence on oil in order to improve our national security and strengthen the economy. SAFE is working to transform oil dependence from a rhetorical turn of phrase and an insider’s game to a tangible economic and national security issue that compels political leaders, business executives and the public to act now.

2005 Oil ShockWave   [another one was held in 2007]

On June 23, 2005, SAFE, in partnership with the National Commission on Energy Policy, conducted a high profile Cabinet Level Oil Crisis Simulation called Oil ShockWave, which explored the extent and acuteness of the economic and national security threat and the possible consequences of American oil dependence. In this half-day exercise, top former government officials took part in a series of Principals meetings of the Cabinet or of a Special Working Group over a seven month period in order to advise the President on how to respond to a series of events that affect world oil supplies. The scenarios were designed to simulate a decline in world oil production due to regional instability and to terrorism. The simulation events began in December 2005 to provide some distance from current events.

Situations were presented primarily through pre-produced newscasts shown on video screens as well as ‘‘injects’’ or notes given to Cabinet members throughout the simulation. The participants were informed of their roles ahead of time, but they were not informed about the events and situations they would encounter.

Participants. The Oil ShockWave Cabinet comprised the following bi-partisan group of former Cabinet members and senior government and national security officials:

  1. Robert M. Gates, former Director of Central Intelligence and current President of Texas A&M;
  2. James Woolsey, former Director of Central Intelligence;
  3. Carol Browner, former Administrator of the Environmental Protection Agency;
  4. Richard N. Haass, former Director of Policy Planning at the Department of State and current President of the Council on Foreign Relations;
  5. General P.X. Kelley, USMC (Ret.), former Commandant of the Marine Corps and member of the Joint Chiefs of Staff;
  6. Frank Kramer, former Assistant Secretary of Defense for International Security Affairs;
  7. Don Nickles, former US Senator (R–OK);
  8. Gene B. Sperling, former National Economic Advisor and head of the National Economic Council;
  9. Linda Stuntz, former Deputy Secretary of Energy.

I) Why We Developed ‘‘Oil ShockWave’’

We believed that developing and conducting a simulation would be an engaging format to generate attention for this issue, but more importantly to foster an understanding of our energy insecurity. The simulation was designed to make this issue real and tangible for the public as well as lawmakers and policymakers. The oil markets are so vast and complex and the threats are so varied that sometimes it is difficult to comprehend the issue of oil use, oil dependence, and oil security threats and risks.  The facts themselves are incredibly compelling and persuasive. For instance:

  • 97% of transportation in the United States is fueled by oil
  • The transportation sector alone consumes 68% of all US oil
  • Total US oil consumption is forecast to increase by 40% from 2003 to 2025
  • Demand for oil in India and China is forecast to increase 125% from 2003 to 2025
  • The US oil bill increases by $7.4 billion per year for each one-dollar increase in the price of oil (a quick sanity check of this figure follows below)
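
[ Note: the $7.4 billion figure is straightforward arithmetic. A minimal sanity check in Python follows, assuming US oil consumption of roughly 20.3 million barrels per day, the approximate 2005 level; the testimony does not state the consumption figure it used. ]

```python
# Sanity check on the "$7.4 billion per one-dollar increase" bullet.
# ASSUMPTION: US oil consumption of ~20.3 million barrels/day (approximate
# 2005 level; not a figure from the testimony).
us_consumption_bpd = 20.3e6

annual_barrels = us_consumption_bpd * 365        # barrels consumed per year
extra_cost_per_dollar = annual_barrels * 1.0     # $/year per $1/bbl increase

print(f"Annual consumption: {annual_barrels / 1e9:.2f} billion barrels")
print(f"Extra cost per $1/bbl: ${extra_cost_per_dollar / 1e9:.1f} billion/year")
# -> ~7.41 billion barrels/year, so each $1/bbl adds ~$7.4 billion/year
```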

It was important for us to get beyond some of the general statements of oil dependence and look into the specific issues, threats, consequences, and responses. There is nothing like watching, listening, and learning as a group of former Cabinet members and senior government officials sit in a ‘‘mock’’ situation room responding in real time to a series of plausible and credible events.

II) How We Developed ‘‘Oil ShockWave’’

From the first day we started planning the simulation, we believed that being profoundly realistic and having unimpeachable credibility was imperative. Therefore, we recruited and worked with a group of experts in the fields of national security, world oil production and distribution, trading, and macroeconomics to develop and verify the authenticity and plausibility of all aspects of the scenario, from the oil market disturbances to the impact on oil prices and the economy. These included former members of the oil industry, oil analysts and traders, former and current military officials, intelligence and national security experts, and other specialists. We worked diligently to stay away from the sensational. As Robert Gates told the Washington Post after Oil ShockWave, ‘‘the scenarios portrayed were absolutely not alarmist; they’re realistic.’’ Jim Woolsey, another former Director of Central Intelligence, who played the Secretary of Homeland Security, called the attacks in a post-simulation interview ‘‘relatively mild compared to what is possible.’’

Beyond the terrorist threat to a vast and vulnerable oil infrastructure and system, it was the danger of political instability in countries/regimes that are major oil producers that presented the greatest risk to the US and our oil dependence. Freedom House considers only 9% of world oil reserves to be in countries rated ‘‘free,’’ and Transparency International has shown that oil riches are highly correlated with corruption ratings. In many respects, it is the political instability and possible violence, which force international oil expertise to leave a country and scare away foreign investment, that pose the more serious threat to the long-term stability of oil markets and the ability to meet world demand. For instance, some of the slowdown in Russian production that is an important element of world oil supply and demand forecasts is simply attributable to a tougher regulatory environment and a less secure investment climate following recent actions by the Russian government against Yukos and other oil interests.

The Scenario

The scenario opens with political violence and unrest in Nigeria, the fifth largest supplier of oil to the US, forcing foreign companies to ‘‘shut in’’ or close 600,000 barrels of oil per day in the Niger Delta for the foreseeable future. The situation is exacerbated by a very cold winter in the Northern Hemisphere that increases demand by 700,000 barrels of oil per day. Based on the projections of demand and supply at the time, these events result in a gap of more than 2 million barrels per day between supply and demand. We predicted this shortfall would drive a barrel of oil from $58 at the start of the simulation to $82 per barrel by the end of Segment 1, with the price of gasoline rising from $2.21 to $3.31 per gallon.

This turned out to be more realistic and plausible than we could have expected. Several days before we conducted Oil ShockWave, crude oil prices broke $60 on news of possible unrest and al Qaeda activity in Nigeria. We had initially been debating if a starting price for oil at $58 was too high. In fact, we were a bit low!

The second part involved coordinated terrorist attacks in the US and Saudi Arabia. The first attack is on the Haradh natural gas processing plant in Saudi Arabia, about 280 km southeast of Dhahran, taking 250,000 barrels of oil per day off the market because it must now be diverted for domestic use. There is also a failed attempt to ram a hijacked supertanker into another tanker at a loading jetty at Ras Tanura, the world’s largest oil port. Finally, the Secretary of Homeland Security informs the Cabinet that a supertanker has rammed into another tanker at the port of Valdez in Alaska and there has been a ground attack on the holding tanks, which are now on fire. The attack on the port of Valdez takes another 1 million barrels of oil per day off the market. This brings the world oil shortfall to about 3.4 million barrels per day. We predicted this shortfall would drive a barrel of oil to $123 and the cost of gasoline to $4.74 per gallon. This type of coordinated attack bears the classic signature of al Qaeda.

The last part takes place 6 months after the initial event. A new campaign of terror against foreign nationals in Saudi Arabia has forced them to be evacuated. In the prior 48 hours, 120 Americans have been killed and another 100 wounded; altogether more than 200 foreign nationals have been killed and 250 have been wounded. It is the highly aggressive crackdown on dissidents and al Qaeda sympathizers after the attacks in January on the Haradh natural gas processing plant and Ras Tanura that appears to be resulting in this popular backlash and terror campaign. The loss of international oil expertise means that Saudi Arabia will not be able to meet future demand growth and to build, hold, and use spare capacity. This scenario drove the price of oil to $161 per barrel and the price of gas to $5.74 per gallon. It is critical to note that no additional oil was taken off the market. The mere inability to have Saudi Arabia as the producer of last resort is enough to create unimaginable consequences.
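
[ Note: the shortfall across the scenario can be tallied from the figures quoted above; a minimal sketch follows. ]

```python
# Tally of the Oil ShockWave supply shortfall, in million barrels/day (mbd),
# using only figures quoted in the testimony.
segment1_gap = 2.0   # stated gap after the Nigerian shut-in and cold winter
haradh_loss = 0.25   # Haradh attack: oil diverted to Saudi domestic use
valdez_loss = 1.0    # Valdez tanker and holding-tank attack

segment2_gap = segment1_gap + haradh_loss + valdez_loss
print(f"Cumulative shortfall after part 2: ~{segment2_gap:.2f} mbd")
# -> ~3.25 mbd, close to the stated "about 3.4 million barrels per day";
# the difference presumably reflects scenario details not enumerated here.
# The last part removes no additional oil: the loss of Saudi spare capacity
# alone drives the simulated price from $123 to $161 per barrel.
```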

The final economic analysis we conducted examined the economic effects of oil at $120 per barrel, roughly the price of a barrel at the end of part 2. Some of the key findings were as follows:

  • a recession following two quarters of declining GDP, and lower GDP in 2006 than in 2005; approximately 800,000 jobs were expected to be lost during 2006, and over 2 million in 2007, relative to baseline forecasts;
  • a $2,680 increase in annual gasoline costs for the average US household, driving average annual household gasoline costs to a total of $5,214;
  • a historically significant decline in the S&P 500;
  • a dramatic increase in the current account deficit—to $1.087 trillion in 2006 and $1.052 trillion in 2007—as a result of the increased cost of purchasing ‘‘foreign’’ oil;
  • consumers spending more on gasoline and thus cutting other spending;
  • certain energy-intensive capital being idled or its utilization rate falling;
  • automobile purchases declining sharply due to the uncertainty of oil prices;
  • air travel falling as airfares rise due to higher fuel prices;
  • lower consumer spending due to lower consumer confidence.

The potential economic effects of oil in the last part were not estimated because crude oil at $161 is so far outside the range of experience that there were no models on which to base estimates.

III) What We Learned From ‘‘Oil ShockWave’’

There is really no such thing as ‘‘foreign oil.’’ Oil is a fungible global commodity. A change in supply or demand anywhere will affect prices everywhere. Second, we discovered that taking such a small amount of oil off the market could have a significant impact on crude oil and gasoline prices. Oil markets are currently precariously balanced. Small supply/demand imbalances can have dramatic effects. We essentially took only 3.5 million barrels per day off a roughly 84 million barrel per day global market. This means that a supply shortfall of approximately 4% could cause prices to rise to $161 per barrel of oil or $5.74 per gallon of gasoline. This would create tremendous national security and economic problems for the country.
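
[ Note: the percentages quoted here are easy to reproduce, and they imply an extremely inelastic short-run market. A minimal sketch follows; the implied-elasticity line is a back-of-envelope addition, not a figure from the testimony. ]

```python
# Reproduce the ~4% shortfall figure and gauge the implied price response.
shortfall_mbd = 3.5             # mbd removed in the simulation
market_mbd = 84.0               # approximate global daily market
p_start, p_end = 58.0, 161.0    # $/bbl at the start and end of the simulation

shortfall_pct = 100 * shortfall_mbd / market_mbd
price_rise_pct = 100 * (p_end - p_start) / p_start

print(f"Supply shortfall: {shortfall_pct:.1f}% of the market")   # ~4.2%
print(f"Price increase:   {price_rise_pct:.0f}%")                # ~178%

# Rough implied short-run elasticity (% quantity / % price).
print(f"Implied elasticity: ~{-shortfall_pct / price_rise_pct:.3f}")  # ~-0.023
```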

Prices of crude oil rose quickly. It would not necessarily take much to go from $60 to $123 or even $161.

Once oil supply disruptions occur, little can be done in the short term to protect the US economy from its impacts. There are few good short-term solutions.

There are a number of supply-side and demand-side policy options available that would significantly improve US oil security. Benefits from these measures will take a decade or more to mature, and thus should be enacted as soon as possible. This is the reason we must act now to end this national and economic security vulnerability.

US foreign and military policy is influenced by—and often constrained by—US oil dependence. For example, during Oil ShockWave, the Saudi Arabian and Chinese governments attempt to extract concessions from the US before acceding to US requests to help alleviate the crisis. The Saudi Arabian government demands, among other things, that the US stop pressuring it to democratize and stop discussing and investigating money-laundering allegations and donations to al Qaeda, in exchange for increasing production capacity. The Chinese government demands that the US stop discussing Chinese human rights violations and stop selling weapons to Taiwan before acceding to a request to voluntarily reduce demand. It should be noted that in both cases the Oil ShockWave Cabinet refused these demands.

The Strategic Petroleum Reserve (SPR), the emergency supply of federally owned crude oil (approximately 640 million barrels) stored in underground salt caverns, offers at best limited protection against a major supply disruption. More importantly, determining when to use the SPR proved more of an art than a science. There never seemed to be an appropriate opportunity, and the Cabinet spent much time arguing over when and how to release oil from the SPR. For instance, military and security officials were always concerned that releasing oil from the SPR could leave the US without any options if matters deteriorated further. There were also concerns that any announcement of a release of oil from the SPR could be overtaken or overshadowed by world events and thus prove meaningless as a psychological weapon.

Furthermore, it was noted that releasing oil from the SPR could have the opposite effect and actually contribute to an increase in prices, as any release would be seen as confirmation of the acuteness of the crisis. Finally, the SPR is virtually meaningless in Segment 3 if Saudi Arabia is truly unable to increase production for a sustained period of time.

The oil system is vulnerable to attacks on key energy infrastructure both overseas and at home. Because that infrastructure is simply too vast to protect, we must seek other ways to reduce this vulnerability, such as reducing demand and diversifying fuel sources. It should be noted that during Oil ShockWave Saudi Arabian security forces were able to foil terrorist attacks on Ras Tanura, a major oil facility. We thought it would be useful and telling to have a crisis even though Saudi Arabia was generally successful in protecting its major oil facilities. Most ominously, al Qaeda and bin Laden have explicitly called for attacks, and even attempted attacks, on the oil infrastructure and by extension the Western economic system.

The stability of the entire oil-based global economy is currently dependent on Saudi Arabia’s ability to increase production dramatically and over a short timeframe. Given existing terrorist threats and political tensions in Saudi Arabia, this situation is fraught with enormous liabilities. This does not account for the argument made by many that oil revenues have likely funded terrorism and fueled hatred against America.

In the event of a crisis, the US has a few short-term options—such as tapping the Strategic Petroleum Reserve and implementing emergency demand measures like carpooling, reduced speed limits, and alternate driving days. These short-term options, however, are generally good for less than a year.

Conclusion

With 97% of transportation in the US fueled by oil, oil is the lifeblood of the US economy.

Oil ShockWave demonstrated that the nation must move rapidly to protect itself from an oil supply crisis that could have dramatic economic and national security implications. Any meaningful interruption of global oil supplies would seriously strain the ability of the US to fund an aggressive and comprehensive war on terrorism. Key oil facilities have been attacked before, and it is virtually certain there will be more attacks. Most interestingly, it is instability in oil producing countries, sometimes as the result of terrorism, that poses such a serious threat to US oil security. (Of note, the stability of Saudi Arabia and its ability to meet short-term and long-term demand requirements are critical to the entire oil-based economy.)

There are also serious questions about the use of oil revenues to fund terrorism and hatred against America. It took a series of unsurprising events to drive the price of crude oil to $161 per barrel and the price of gasoline to $5.74 per gallon. More importantly, it only took a supply shortfall of approximately 4% or 3.5 million barrels out of a daily global market of roughly 84 million barrels to reach these prices in Oil ShockWave.

Unfortunately, once an oil supply disruption happens, there are no good short term answers. It is thus essential that the President and Congress immediately implement a long-term strategy for reducing America’s oil dependence. We need a concerted effort in the halls of Washington and boardrooms across the country. This is a grave national and economic security issue demanding the attention of our political and business leaders.

When we were attacked on 9/11, many people were surprised at the terrorist threat and the US vulnerability. Our response to 9/11 must be to make sure that we are not surprised again. We must anticipate and prepare for the next attack by acknowledging the vulnerabilities and addressing them. Few weaknesses demand greater attention than oil security.

JOHN P. DOWD, SENIOR RESEARCH ANALYST, SANFORD C. BERNSTEIN & COMPANY, INC.

The risk of a supply disruption in the oil markets appears to be at one of the highest levels in history, primarily because of the thin cushion of spare capacity. With limited spare oil producing capacity, even a relatively small disruption in supply would cause shortages. This has caused oil to trade at a premium to expectations based on inventory levels, a premium described as either a ‘‘terror premium’’ or a ‘‘risk premium’’ by participants in the markets.

This premium appears to be directly proportional to the amount of spare productive capacity held in reserve. If there were 6 million barrels per day of idle capacity, no single terrorist act would be sufficient to cause a shortage. However, with only 2.2 million barrels per day of spare capacity, enough to cover little more than one year of demand growth, the oil markets are at the mercy of political stability in Venezuela, Nigeria, and Iraq, as well as terrorist acts.

In theory, the solution is simple. If we increase the amount of spare capacity, we will reduce the risks that terrorist actions pose to the crude markets, and crude oil prices will ebb as a result. In practice, there are several complicating factors that will likely inhibit an effective supply-side or demand-side solution. On the supply side, the primary concern stems from the inability of non-OPEC producers to materially increase production. The supply response to higher oil prices has been anemic. Over the past two decades, the working assumption has been that oil prices could not permanently move above $25 because doing so would invite a non-OPEC production response. However, despite record investment, we have yet to see any significant production response. To the contrary, production growth from countries outside of OPEC and the Former Soviet Union has declined in each of the past five decades. In the 1970s, these countries grew production 3.1% annually. Over the past decade, they grew production only 1.1% annually, even though investment was considerably higher.

Spare oil capacity will likely dwindle further as a consequence of Chinese demand. While all of the growth in Chinese oil demand over the past decade has been offset by increased exports from the Former Soviet Union, this does not appear likely going forward. Russian production growth stopped last September. This is potentially a game changing event that will only accentuate the sensitivity of the oil markets to terrorist attacks.

Finally, the risk of disruptions will likely grow as the global oil supply is increasingly sourced from unstable regions. Throughout history, oil companies have taken a very rational approach to investment, in which they have weighed political risk against geologic risk when deciding where to develop oil. One consequence is that the industry increasingly has demonstrated a propensity to invest in politically risky areas, because the world’s oil basins have matured and the geologic risks have increased. As highlighted by the Oil ShockWave simulation, the price of oil in the US is highly dependent on developments far outside of our borders.

If oil demand continues to grow faster than supply, the amount of spare capacity will shrink further and the oil markets will likely become even more sensitive to potential disturbances. For instance, if global oil consumption grows at a pace of 3.1% next year rather than the current expectation of 2.1%, the amount of surplus capacity will be 830,000 barrels per day less than the current forecast. This is larger than the impact of the Nigerian disruptions cited in the first Oil ShockWave scenario.
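
[ Note: the 830,000 barrel figure is simply one percentage point of forecast demand. A minimal sketch follows, assuming next-year demand of roughly 83 mbd, consistent with the ~84 mbd global market cited elsewhere in the hearing. ]

```python
# One extra percentage point of demand growth versus the forecast.
# ASSUMPTION: next-year demand of ~83 mbd (not stated in the testimony).
base_demand_mbd = 83.0
growth_forecast = 0.021   # 2.1% expected growth
growth_faster = 0.031     # 3.1% scenario

extra_demand_mbd = base_demand_mbd * (growth_faster - growth_forecast)
print(f"Lost spare capacity: {extra_demand_mbd * 1000:.0f} thousand bbl/day")
# -> ~830 thousand barrels/day, matching the testimony
```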

It is relatively easy to narrow down where our oil dependency lies in the US: transportation.

Meaningfully reducing demand for transportation fuels is the only realistic way of gaining greater energy independence in the US. The challenge is that the obvious solution, encouraging the use of diesel fuels and more fuel-efficient vehicles, is also politically the most difficult. However, the potential is huge. Improving the average fuel efficiency of the US vehicle fleet by just 2 mpg would reduce US gasoline demand by roughly 1 million barrels per day. This is equivalent to all of the growth in US gasoline consumption over the past 8 years.
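
[ Note: the 1 million barrel-per-day figure is consistent with simple fleet arithmetic. A minimal sketch follows, assuming US gasoline demand of roughly 9.1 mbd and an average fleet economy of about 20 mpg; both are approximate 2005 values, not figures from the testimony. ]

```python
# Fuel saved by raising average fleet fuel economy by 2 mpg.
# ASSUMPTIONS: gasoline demand of ~9.1 mbd and fleet economy of ~20 mpg
# (approximate 2005 values; neither appears in the testimony).
gasoline_demand_mbd = 9.1
fleet_mpg = 20.0

# Driving the same miles at (mpg + 2) uses mpg / (mpg + 2) as much fuel.
new_demand_mbd = gasoline_demand_mbd * fleet_mpg / (fleet_mpg + 2.0)
savings_mbd = gasoline_demand_mbd - new_demand_mbd

print(f"Gasoline saved: ~{savings_mbd:.2f} mbd")
# -> ~0.83 mbd, on the order of the quoted "roughly 1 million barrels per day"
```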

GAL LUFT, PH.D., CO-DIRECTOR, INSTITUTE FOR THE ANALYSIS OF GLOBAL SECURITY (IAGS)

IAGS is an energy security think tank that follows and analyzes the relationship between energy and our national and international security.

Since 9/11 it has become increasingly apparent that terrorist groups have identified the world energy system as the Achilles heel of the West. Throughout the world, jihadist terrorists attack oil and gas installations almost daily, with significant impact on the oil market.

What makes oil interesting for terrorists are the unique conditions that have been created in the oil market. Until recently, the oil market had sufficient wiggle room to deal with occasional supply disruptions. Such disruptions could be offset by the spare production capacity owned by some OPEC producers, chiefly Saudi Arabia. This spare capacity has been the oil market’s main source of liquidity. But due to the sudden growth in demand in developing Asia, this liquidity mechanism has eroded from 7 mbd in 2002, which constituted 9% of the market, to about 1.5 mbd today, less than 2%. As a result, the oil market today resembles a car without shock absorbers: the tiniest bump on the road can send a passenger to the ceiling. Without liquidity, the only mechanism left to bring the market to equilibrium is rapid and uncontrolled price increases.

This reality plays into the hands of terrorists who want to hurt the Western economy. The war on radical Islam is often described as an ideological or even religious war. But for the jihadists it is also an economic war. Osama bin Laden’s strategy is based on the conviction that the way to bring down a superpower is to weaken its economy through protracted guerilla warfare. We ‘‘bled Russia for 10 years until it went bankrupt and was forced to withdraw [from Afghanistan] in defeat. [. . .] We are continuing in the same policy to make America bleed profusely to the point of bankruptcy,’’ bin Laden boasted in his October 2004 videotape.

His logic is simple: To bring the U.S. to suffer a fate similar to that of the Soviet Union, the terrorists need to drain America’s resources and bring it to the point it can no longer afford to preserve its military and economic dominance. As the U.S. loses standing in the Middle East, the jihadists can gain ground and remove from power regimes they view as corrupt and illegitimate while defeating other infidels who inhabit the land of Islam. One of the Islamists’ methods to achieve this goal is to attack oil, which jihadists call ‘‘the provision line and the feeding to the artery of the life of the crusader’s nation.’’

Striking pipelines, tankers, refineries and oil fields is easy and effective. Terrorists no longer need to come to the U.S. and wreak havoc in our cities. They can cause enormous economic damage by hitting our energy supply at the generating points, where they enjoy strong support on the ground. These attacks have already imposed a ‘‘fear premium’’ of $10–$15 per barrel in the oil market. For the U.S., an importer of more than 11 million barrels a day, this fear premium alone costs $40–$60 billion a year. The cause and effect are not lost on terrorists. ‘‘We call our brothers in the battlefields to direct some of their great efforts towards the oil wells and pipelines,’’ reads a jihadist website. ‘‘The killing of 10 American soldiers is nothing compared to the impact of the rise in oil prices on America and the disruption that it causes in the international economy.’’
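
[ Note: the $40-$60 billion range follows directly from the import volume; a minimal sketch using only figures from the testimony. ]

```python
# Annual cost of a $10-$15/bbl "fear premium" on US oil imports.
imports_bpd = 11e6   # barrels/day imported, from the testimony

for premium in (10.0, 15.0):   # $/bbl fear premium
    annual_cost = imports_bpd * 365 * premium
    print(f"${premium:.0f}/bbl premium -> ${annual_cost / 1e9:.0f} billion/year")
# -> ~$40 billion and ~$60 billion per year, matching the testimony
```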

Higher oil prices also mean a transfer of wealth of historical proportions from oil-consuming countries—primarily the U.S.—to the Muslim world, where three quarters of global oil reserves are concentrated. The windfall benefits jihadists as petrodollars trickle their way through charities and government handouts to madrassas and mosques.

The most popular targets are pipelines, through which about 40% of the world’s oil flows. They run over thousands of miles and across some of the most volatile areas in the world. Pipelines are very easily sabotaged. A simple explosive device can put a critical section of pipeline out of operation for weeks. This is why pipeline sabotage has become the weapon of choice of the insurgents in Iraq. Attacks on pipelines in Iraq have a strategic impact on U.S. efforts there. They have undermined the prospects of Iraqi reconstruction by denying the Iraqi economy much-needed oil revenues. They also have a corrosive influence on the morale of the Iraqis and their attitude toward the presence of U.S. forces in their country. Iraqis are growing increasingly vexed by the slow progress in the reconstruction effort and the inability of the government to guarantee a reliable supply of electricity, which is primarily derived from oil. Worse, the sabotage campaign has created an inhospitable investment climate in Iraq and scared away oil companies that were supposed to develop its oil and gas industry.

Emulating the success of the saboteurs in Iraq, terrorists in many oil-producing countries have set their sights on and attacked pipelines and other oil installations in Sudan, Chechnya, India, Saudi Arabia, Pakistan, Turkey, Colombia, Nigeria, Azerbaijan, Indonesia and the Philippines.

Terror at sea (see also Luft, G., et al. 2004. ‘‘Terrorism Goes to Sea,’’ Foreign Affairs)

There is growing evidence that terrorists find the unpoliced sea to be their preferred domain of operation. Terrorist groups such as al Qaeda, Hezbollah, Jemaah Islamiyah, the Popular Front for the Liberation of Palestine-General Command, and Sri Lanka’s Tamil Tigers have long sought to develop a maritime capability. Today, over 60% of the world’s oil and almost all of its liquefied natural gas is shipped on 3,500 tankers through a small number of ‘chokepoints’—straits and channels narrow enough to be blocked, and vulnerable to piracy and terrorism. The most important chokepoints are the Strait of Hormuz, through which 13 million barrels of oil are moved daily, Bab el-Mandab, which connects the Red Sea to the Gulf of Aden and the Arabian Sea, and the Strait of Malacca, between Indonesia and Malaysia. Thirty percent of the world’s trade and 80% of Japan’s crude oil passes through the latter, including half of all sea shipments of oil bound for East Asia and two-thirds of global liquefied natural gas shipments. The Bosporus, linking the Black Sea to the Mediterranean, is less than a mile wide in some areas and is one of the most threatened chokepoints. Ten percent of the 50,000 ships that pass through it each year are tankers carrying Russian and Caspian oil.

Most of the critical chokepoints are located in areas where Islamic fundamentalism is prevalent. The Strait of Hormuz is controlled by Iran; Bab el-Mandab is controlled by Yemen, the ancestral home of bin Laden. Part of the 500-mile long Strait of Malacca courses through Indonesia’s oil rich province Aceh, inhabited by one of the world’s most radical Muslim populations.

Many terror experts have expressed concern that al Qaeda might seize a ship or a boat or even a one-man submarine and crash it into a supertanker in one of the chokepoints. Were terrorists to attack such a vessel, the resulting explosion and spreading stain of burning oil could shut down the channel with a profound impact on the oil market. Tankers are too slow and cumbersome to maneuver away from attackers; they have no protection and they have nowhere to hide. Al Qaeda terrorists have demonstrated repeatedly their intent and ability to strike them. In January 2000, al Qaeda attempted to ram a boat loaded with explosives into the USS The Sullivans in Yemen. The attack was aborted when the boat sank under the weight of the explosives. Later that October, an al Qaeda suicide bomber in a high-powered speedboat packed with explosives blew a hole in the USS Cole, killing 17 sailors. In June 2002, a group of al Qaeda operatives suspected of plotting raids on British and American tankers passing through the Strait of Gibraltar was arrested by the Moroccan government; and in October that year, the organization badly holed a French supertanker off the coast of Yemen. According to FBI Director Robert Mueller, ‘‘any number of [terror] attacks on ships . . . have been thwarted.’’

To make things worse, there are increasing signs of collaboration between terrorists and pirates. According to the International Maritime Bureau (IMB), pirate attacks on ships have tripled in the last decade. Each year 350–400 piracy attacks take place worldwide, in which hundreds of seafarers are killed, assaulted, or kidnapped. The majority of the attacks take place in the Philippines, Indonesia, Bangladesh and Nigeria. Most of the ships attacked are oil and chemical tankers. Maritime security experts have repeatedly warned about the collusion between piracy and terror, voicing concerns that Islamist groups operating in these regions could capitalize on the disorder and target strategic chokepoints by placing a bomb on a supertanker or ramming a ship into one.

One scenario our economy cannot withstand is a major attack on one of Saudi Arabia’s oil facilities. In addition to holding a quarter of the world’s oil reserves and most of the world’s spare production capacity, Saudi Arabia is the only country in the world with facilities that process more than 3 mbd. Over half of Saudi Arabia’s oil reserves are contained in just eight fields, and about two-thirds of Saudi Arabia’s crude oil is processed in a single enormous facility called Abqaiq, 25 miles inland from the Gulf of Bahrain. On the Persian Gulf, Saudi Arabia has just two primary oil export terminals: Ras Tanura—the world’s largest offshore oil loading facility, through which a tenth of global oil supply flows daily—and Ras al-Ju’aymah. On the Red Sea, a terminal called Yanbu is connected to Abqaiq via the 750-mile East-West pipeline. The Saudi oil system is target rich and extremely vulnerable to terrorist acts, not only because of al Qaeda’s strong presence in the kingdom and its ability to carry out coordinated attacks but also because of the number of strategic targets. A terrorist attack on one of the hubs of the Saudi oil complex, or a simultaneous attack on several of them, is not a fictional scenario. In summer 2002, a group of Saudis was arrested for involvement in a plot to sabotage Ras Tanura and the pipelines connected to it. A single terrorist cell hijacking an airplane in Kuwait or Dubai and crashing it into Abqaiq or Ras Tanura could turn the complex into an inferno. This could take up to 50% of Saudi oil off the market for at least six months, and with it most of the world’s spare capacity. Such an attack could be more economically damaging than a dirty nuclear bomb set off in New York City.

Since September 11 it has become apparent that there is no shortage of suicide terrorists who are willing to sacrifice their lives for the sake of killing the infidel, but recent events in Iraq and Saudi Arabia show that there are those who are also willing to give away their lives for the sake of denying us oil. If we stay on the present course, America will bleed more dollars each year as its enemies gather strength, and the world economy will be at the mercy of oil kamikazes determined to go for its jugular. A smart combination of military and energy policies is our best hope for breaking the economic backbone of the jihadists before they do so to us.

BETTY MCCOLLUM, MINNESOTA. We know we are vulnerable and so I have two questions. One is: Why do you think we, as a country—and I don’t want to get into party identifying, or whose President when, or whatever—why haven’t we, as a country, in your opinion, done what we need, or started to do what we need to do, in terms of conservation, fuel efficiency and investing in renewables? Norway, which has a huge oil field of its own, went through and did a lot of those things on their own to make their oil profits last longer. They were thinking out into the future, and they have oil.  Secondly, what do you think the international community should do, because we are talking about other sovereign nations where we are receiving our oil from. Should the U.N. be looking at this? Should there be alliances put forward? Should the private sector, which is also very international now in these markets, should they be moving forward? Is there any creative thinking about what to do out there? Because America, as you pointed out, cannot police all these oil pipelines nor do I believe we should.

Mr. LUFT. As for the first question, why haven’t we done the right things, that would be like asking, why haven’t we done the right things prior to 9/11?

Unfortunately, the American public and its representatives tend to respond to crisis. We may need a crisis to wake us all up and do the right things.

Even though people tend to complain about high gas prices, our gas prices are still the lowest in the industrialized world. If you go to Japan or Europe, you buy gas for way over $5 a gallon. So I think that we are not there yet in terms of public awareness and public understanding of how fragile the system is. But we will get there with the aid of the likes of bin Laden and others who will show us the light, and then we will respond in kind. I think that this is very unfortunate, but this is where Congress should step up to the plate and make us more secure.

Mr. DOWD. I wanted to respond to Ms. McCollum’s question. There are clearly political reasons why we are in this problem today. Look at the energy bill today: conservation was not in it before.

In the 1970s we had similar problems, and we responded by doubling or tripling investment in the oil industry and by essentially doubling the fuel efficiency of the U.S. auto fleet. It took both steps in order to solve the problem and it took a very, very long time. Now, that is a very political issue. I don’t want to really delve into that. That is not my area of expertise.

But another reason why we are in this situation today is that the expected supply response has not materialized, and this has caught virtually everybody in the energy industry off guard. If we could grow non-OPEC oil production 3, 4, 5 percent a year, we would have a spare source of supply, something in reserve to meet unforeseen developments. If we step back to 10 years ago, the expectation had been that investment in the deep water of the Gulf of Mexico, West Africa, offshore Brazil, and the North Sea would lead to an acceleration of non-OPEC production. The surprise is it hasn’t happened. Outside of OPEC and the former Soviet Union, reserve replacement has been less than one for 4 years in a row; that is, outside of those countries, the amount of oil found each year has been less than the amount produced. We have run into this surprise before. If we look at U.S. natural gas production since 1996, everybody was expecting a production response, and we haven’t seen it. We have literally doubled the number of rigs looking for natural gas in the U.S. since 1996, and U.S. natural gas production is down slightly. These are new challenges that really have surprised everybody; I don’t think I am overstating that. I am not trying to say that there are no regions in the world capable of growing production. Something like 60% of the countries that produce oil are seeing their production decline, but there are success stories. The production growth that we are seeing in the deep water of the west African region, in the Canadian oil sands, and in certain other parts of the world is being offset by production declines in other basins.

STRATEGIC PETROLEUM RESERVE

Mr. DIAMOND. What surprised me most in Oil ShockWave were the responses to the use of the Strategic Petroleum Reserve. The decision proved an elusive challenge: here we had this tremendous group of national security and energy experts, and they could not come to any unanimous conclusion to actually release the reserve. There was a breakdown, with the national security folks saying, ‘‘Let’s not use it; things could get worse. We could need it to go to war.’’ And you had market people saying we shouldn’t use it because no one could agree when the price was high enough to justify a release, and because releasing it might just confirm speculation that things are worse than they are, so the price would go up and the release would have the contrary effect.

[NOTE: the same thing happens in the 2007 Oil Shockwave – some participants think that the SPR belongs to the Navy, and even if it doesn’t, we should save the SPR for the military in case things get far worse – presumably for war to keep oil supplies flowing]

And then ultimately, in the last segment, the crisis moved to Saudi Arabia itself. It wasn’t terrorist attacks on facilities but, rather, terrorism against foreign nationals and international oil expertise, so no additional Saudi oil was taken off the market; the Saudis simply could not increase their production from today’s level, or even fully offset some of their natural depletion. At that point the Strategic Petroleum Reserve was, in the participants’ minds, a fairly useless entity, because this was a much longer-term problem: prices were so high that demand reduction would happen naturally. In the end, they simply could not come to a unanimous conclusion on when to use it. So it is more of an art than a science, and it is not a long-term solution to any of our issues.

Mr. LUFT. No producing country will invest billions of dollars in maintaining spare capacity, so we need to assume that spare capacity in the hands of producers is history. Instead we need to invest in creating spare capacity in the hands of the consumers, through a more robust, internationally managed strategic petroleum reserve; we recommend a 3-billion-barrel global reserve. We also need to realize that we have a responsibility toward countries that don’t have such reserves, particularly our neighbors in the Western Hemisphere; we have a responsibility for their future. We have 700 million barrels today, which we can use for our own market. But the reason we need more is that we need to be able to export oil in time of emergency to those countries that don’t have reserves of their own at hand.
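[ A back-of-the-envelope sketch (mine, not from the hearing) of what these volumes buy. Reserve sizes are from the testimony above; the roughly 85 million barrels/day of world demand is an assumed mid-2000s value, and drawdown-rate limits are ignored here: ]

```python
WORLD_DEMAND = 85e6      # barrels/day (assumption, approximate mid-2000s value)
GLOBAL_RESERVE = 3e9     # barrels, the recommended global reserve
US_SPR = 700e6           # barrels, the U.S. reserve cited in the hearing

def coverage_days(reserve_bbl: float, shortfall_bbl_per_day: float) -> float:
    """Days a reserve could fully offset a supply shortfall (flow limits ignored)."""
    return reserve_bbl / shortfall_bbl_per_day

for share in (0.10, 0.20):
    gap = share * WORLD_DEMAND
    print(f"{share:.0%} interruption ({gap/1e6:.1f} Mbbl/d): "
          f"3 Gbbl reserve covers {coverage_days(GLOBAL_RESERVE, gap):.0f} days, "
          f"U.S. SPR alone covers {coverage_days(US_SPR, gap):.0f} days")
```

[ On these assumptions, a 3-billion-barrel reserve fully offsets a 10 percent world supply interruption for roughly a year; the U.S. reserve alone covers under three months. ]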

Mr. ROYCE. Have you assured yourself that we get back 100 percent of what we pour into the ground as part of this reserve? I have always wondered about the porousness of those formations, and about that strategy, and whether there isn’t quite a bit of crude lost as a result.

Mr. LUFT. The domes have no known leakage or loss.

Mr. DIAMOND. I have a bit of a different opinion. We kept asking questions about the SPR, and most of the people we asked shrugged and said, I am not sure it will actually work. Remember, we can only get about 4 million barrels a day out of it; that is the maximum rate of flow. We have never drawn more than 1 million barrels a day, and never for very long. So there is a lot of debate: the oil is there, but they are not sure they can get it out at that rate. There were also issues on the West Coast. Because Alaskan oil is so important to California, a disruption might create an extra shortage there, and the SPR would not necessarily be helpful to that area.

Beyond the SPR, there are only two other publicly held reserves, in Germany and Japan. The rest is held by private companies, including in the United States; there are apparently billions and billions of barrels held privately. But the other opinion we received from many people is that, because of just-in-time inventories in the oil business today, private stocks are not much to rely on either. So, in the simulation, we simply assumed the SPR would work, because we didn’t want to get into that argument. But even if you assumed it would work, it was very difficult to figure out when to use it.
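[ The binding constraint Mr. Diamond describes is the rate of flow, not the volume in storage. A minimal sketch using the figures from his remarks (4 million barrels/day maximum drawdown, 700 million barrels stored); the largest shortfall below is an assumed 10 percent interruption of roughly 85 Mbbl/d world supply: ]

```python
SPR_VOLUME = 700e6   # barrels in storage
MAX_DRAW = 4e6       # barrels/day, the stated maximum rate of flow

def spr_response(shortfall_bbl_per_day: float) -> tuple[float, float]:
    """Return (barrels/day the SPR can replace, days it can sustain that rate)."""
    rate = min(shortfall_bbl_per_day, MAX_DRAW)  # the pipes cap the draw, not the volume
    return rate, SPR_VOLUME / rate

for gap in (1e6, 4e6, 8.5e6):
    rate, days = spr_response(gap)
    print(f"shortfall {gap/1e6:.1f} Mbbl/d: SPR replaces {rate/1e6:.1f} Mbbl/d "
          f"for {days:.0f} days ({rate/gap:.0%} of the gap)")
```

[ For any disruption larger than the maximum drawdown rate, part of the gap is uncoverable no matter how much oil is in the ground. ]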

BETTY MCCOLLUM, MINNESOTA. I have a question. I didn’t know whether or not to ask it, but then you brought up the developing world. Look at the world: the oil consumers are in the north, in the industrialized and developed countries, while most of the exploration people are looking forward to in the future is in the Southern Hemisphere, in countries that are still developing. As we talk about the Millennium Development Goals for Africa, and as Africa moves forward (becoming more sustainable and more secure is a goal I think we all share), Africa is going to want to start consuming some of its own product, just as Latin America will. Has anybody looked at how that moves forward? Or do we, without realizing it, suppress their development, and what they will be able to do in the future, by consuming their natural resources?

Mr. LUFT. Africa. I agree that there is a lot of exploration in the Southern Hemisphere, but there is also a lot of exploration in Central Asia, a very important energy domain for oil and gas. And there are real similarities between Africa and Central Asia: we are talking about emerging countries that do not have good democratic mechanisms and institutions. These are often tribal societies, very corrupt and very dictatorial, without a good record of handling oil revenues. So in our search for non-OPEC, non-Middle East oil, driven by a dependency that bothers us from a national security point of view, we need to make sure we do not replicate the problems of the Middle East in Western Africa and Central Asia, because that would be more of the same.

These countries have a problem absorbing the revenues. Look at Nigeria: you see gas lines today, people waiting in line to get gasoline. They have so much oil, yet they do not have a good handle on the supply chain or refining capacity. Bear in mind that Nigeria is the second most corrupt country in the world, according to Transparency International, and that a third of Nigeria is governed under Sharia law, because those who have the oil are not necessarily those who run the country, and so on.

There are many, many issues. Add to that the fact that it is clear, from Exxon as well as the PFC Energy report and others, that reserves in the non-OPEC world are running out much faster than reserves in OPEC. So if we increase production in those countries, we need to make sure we have alternatives down the line, because we are heading toward a situation in which, once those reserves are depleted, our dependency on the Middle East and on OPEC will be stronger than it is today.

Mr. DIAMOND. Another interesting point brought up in Oil ShockWave was that the participants had trouble dealing with a short-term spike, because there are few short-term solutions. You can ask the American people to do some of these things, and they can last for a year or so, and some of the measures are more draconian than others. But the participants really had a hard time with this: how do you ask the American people to wait 5 or 10 years for other solutions if a prolonged crisis happens in Saudi Arabia and we need to dramatically reduce our demand? That was really the crunch. The oil experts didn’t know how to deal with that.

Mr. LUFT. Mr. Chairman, I want to comment on the model of Chad. One of the things we are seeing today in the developing world is that a new type of relationship is forming between developing countries and China. The Chinese do not impose any limitations on distribution of wealth or human rights or any of the things we are talking about. What they provide in exchange is development money: they come with cash and build ports, railways, telecommunications systems, et cetera.

Mr. ROYCE. This Subcommittee has looked at many of the different terrorist threats facing this country, including the threat of terrorists getting their hands on WMD, and as panelists you have presented a case here that this is one of the foremost threats facing the country. So the question for us, I think, is: What should the priorities be? Where should our focus be? Because we can’t do everything. So let us just have a quick response from each of you.

Mr. DOWD. I think the focus should be on what you control. We can hope for an acceleration in oil production, but here in the U.S., from a political point of view, we can’t control it. It will be difficult to protect facilities globally. Should we try? Yes, but that really is not under our control. What we control is what we consume here. I think the focus has to be on the CAFE standards.

Mr. DIAMOND. There are three solutions to this: increasing supply, dramatically decreasing demand, and finding alternatives. It is important to say that increasing supply is a critical component, because it is such a tight market that any extra supply can help. But if increasing supply is the only solution, if this country thinks we can drill our way out of this problem, we are in for a shock.

Mr. LUFT. When we monitor the attacks and look at the trends, we only count politically motivated attacks. We have to remember that, particularly in the developing world, there is also a lot of looting going on. People simply puncture a pipeline to get the oil and sell it on the black market. This is not politically motivated, but it too adds a lot of pressure and a lot of loss.

Mr. ROYCE. I have seen it in Nigeria, yes, firsthand.

BRAD SHERMAN, CALIFORNIA. We should remember that there is one world price for oil, and that American consumers will be forced to pay that price. Even if United States oil companies have secure sources of oil from Africa or Latin America, they will charge us the world price. The best insurance against terrorist activities causing a spike, or an extreme spike, in the price of oil is the Strategic Petroleum Reserve, and this, again, should not be just a U.S. concern. Because there is one world price, if there were an interruption of 10 or 20 percent of the world’s oil production and the U.S. were to open its Strategic Petroleum Reserve, that would in effect be feeding the world supply. What is fair is for all energy-consuming nations to hold a strategic petroleum reserve, whether within their borders or elsewhere, so that we can act in concert to keep the price of oil at what we have now adjusted to, this extreme $60 a barrel, or hopefully less.

India and China and other developing Asian countries are thirsty for oil. This will drive up world prices slowly, or, God forbid, quickly if we have any interruption or even the threat of an interruption. China is, of course, reaching out to some unsavory regimes for oil, such as Iran and Sudan. And Hugo Chavez, who may style himself as the new Castro, dreams of the day when he can sell his 1.2 million barrels a day to China instead of the United States. I look forward to learning what we can do to assure a supply of oil at a price that does not reflect further shocks, and what we can do to make our economy immune to the possible oil shocks to come. Obviously, the thing we could do is move toward a time when we are not so oil dependent.

The days when 94% of our transportation needs are met by oil need to end.

Mr. DOWD. What do we think is the primary concern of executives in the oil industry? The executives I talk to are primarily focused on their own companies and achieving their business plans. As a result, they are concerned with access to oil service equipment, and they are concerned with costs. It should be known that the costs of finding and producing oil are moving up very, very rapidly. For instance, when we look at the return on capital of the public E&P (exploration and production) companies in the U.S., it is flat between 2001 and 2005, which is a stunning statement. Oil prices have almost doubled, but the returns that people are making in exploration and development have actually stayed flat.

Mr. ROYCE. Yes. In deep-water drilling we get excited about the potential. We forget about the potential costs.

Mr. DOWD. That is right. But the point is that the cost escalation we are seeing in the industry doesn’t look cyclical. Between 1992 and 2002, according to the American Petroleum Institute, the average cost of a well in the U.S. increased at a rate of 9 percent per year, while reserves added per well in the U.S. didn’t go up. We are seeing structural inflation that is largely geologically driven in these high-cost areas.

[ In other words, “Drill Baby Drill” has stopped working. Economists have always promised, and still do, that all you need to do is throw money at shortages and whatever you need will appear as quickly as the genie from Aladdin’s lamp. But it isn’t true: well costs rose 9% a year for a decade, and reserves added per well didn’t go up. ]
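[ A quick compounding check (mine, not from the hearing) on the API figure cited above: ]

```python
# 9% annual growth in the average cost of a U.S. well, compounded over the
# ten years from 1992 to 2002.
YEARS = 10
RATE = 0.09

multiplier = (1 + RATE) ** YEARS
print(f"1.09^{YEARS} = {multiplier:.2f}")  # ~2.37x
# With reserves added per well flat, the cost per barrel of newly added
# reserves rose by roughly the same ~2.4x factor over the decade.
```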

 
