U.S. Senate hearing on energy dependence and vulnerabilities

[This is one of the most important hearings of the past decade on U.S. energy policy and our dependence on oil, though the title of the session should have been Energy Dependence, not Independence. It is ironic that for many years congressional hearings touted a century or more of U.S. energy independence thanks to fracked oil and gas.

Yet in 2016 the oil and gas bubble is predictably bursting because oil and gas companies borrowed about $300 billion more than they earned, thanks mainly to clueless middle-class investors who bought high-yield bond and mutual funds and got suckered by yet another Wall Street scam. If only they’d followed shaleoilbubble.org, resilience.org, energyskeptic.com, and the news media links I’ve put at the bottom of this post. What follows are excerpts from the 103-page transcript.]

Senate Hearing 109-412. March 7, 2006. Energy Independence. U.S. Senate Committee on Energy & Natural Resources.


Please accept my thanks for the opportunity to submit this statement as part of the record of today’s hearing on the issue of oil dependence—or, as President Bush put it, our ‘‘addiction’’ to oil. Let me be clear that I am under no illusions that our economy can be completely energy independent in the literal sense of that term. We can, however, ensure that our economy grows while becoming less and less oil-intensive. We have the technology to do it, we have the homegrown fuels to do it and, more and more, I believe we have the will to do it. And, if we succeed we will be making our economy more and more resilient against the dangers and shocks of the global oil system, while freeing our national security and our foreign policy from the very real threats and distortions that our oil-dependence imposes.

While geologists and economists can debate when the oil supply will ‘‘peak,’’ what is indisputable is that demand is now exploding as developing nations such as India and China increase consumption.

According to the IEA, global demand for oil—now about 85 million barrels a day— will increase by more than 50% to 130 million barrels a day between now and 2030 if nothing is done. The industrialized world’s dependence on oil heightens global instability. The authors of the IEA report note that the way things are going ‘‘we are ending up with 95% of the world relying for its economic well-being on decisions made by five or six countries in the Middle East.’’ The recent attack on the Abqaiq oil processing facility in Saudi Arabia reminds us not only of our dangerous dependence on foreign oil, but that that vulnerability is recognized by our enemies. Besides the Mideast, I would add that Nigeria is roiled by instability, Venezuela’s current leadership is hostile to us and Russia’s resurgent state power has ominous overtones.

We are just one well-orchestrated terrorist attack or political upheaval away from a $100-a-barrel overnight price spike that would send the global economy tumbling and the industrialized world, including China and India, scrambling to secure supplies from the remaining and limited number of oil supply sites. History tells us that wars have started over such competition.

Left unchecked, I fear that we are literally watching the slow but steady erosion of America’s power and independence as a nation—our economic and military power and our political independence. We are burning it up in our automobile engines and spewing it from our tailpipes because of our absolute dependence on oil to fuel our cars and trucks. We need to transform our total transportation infrastructure from the refinery to the tailpipe and each step in between because transportation is the key to energy independence.

That dependence on oil—and that means foreign oil because our own reserves are less than 1 percent of the world’s oil reserves—puts us in jeopardy in three key ways—a convergence forming a perfect storm that is extremely dangerous to America’s national security and economy.

We must depend for our oil on a global gallery of nations that are politically unstable, unreliable, or just plain hostile to us. All that and much more should make us worry because if we don’t change—it is within their borders and under their earth and waters that our economic and national security lies. Doing nothing about our oil dependency will make us a pitiful giant—like Gulliver in Lilliput—tied down by smaller nations and subject to their whims. And we will have given them the ropes and helped them tie the knots.

The structure of the global oil market deeply affects—and distorts—our foreign policy. Our broader interests and aspirations must compete with our own need for oil and the growing thirst for it in the rest of the world—especially by China and India. As a study in the journal Foreign Affairs makes clear, China is moving aggressively to compete for the world’s limited supplies of oil not just with its growing economic power, but with its growing military and diplomatic power as well.

We can take on this problem now and stand tall as the free and independent giant we are by reducing America’s dependence on oil.

I can almost hear colleagues murmur, ‘‘So, Senator Lieberman, what else is new? We’ve been hearing this for years and nothing has happened.’’ I can’t blame you if you are skeptical. The struggle for oil independence has been going on at least since Jimmy Carter was President.

But things have changed since the days of Jimmy Carter and even since last summer. There is a new understanding of the depth of the crisis that our oil dependence is creating. Last summer’s doubling of gasoline and crude oil prices hit tens of millions of Americans with the global reality of oil demand and pricing. And Hurricane Katrina reminded us how vulnerable our supplies can become. This reality is bipartisan.

We will push harder for more and quicker production and commercialization of biomass-based fuels.

As always, there is a do-nothing crowd that says the ever-rising prices of gasoline and crude oil are the cure—that with higher prices people will reduce consumption and the market will respond with greater investments in the supply of oil to bring prices down. But all that would do is perpetuate the problem. Market-driven oil-dependency is still dependency on foreign oil, driving us further down the current path toward national insecurity and economic and environmental troubles.

Some say that we can ease the crisis through greater domestic drilling—in places like the Arctic Refuge and other public lands or off our shores. But that won’t make a dent in the problem. In the world of oil, geology is destiny and the U.S. today has only 1 percent of the world’s oil reserves.

And that small new supply wouldn’t matter much in the global market, since the price of oil produced within the United States rises and falls with the global market, regardless of where it is produced. We just don’t have enough oil in the U.S. anymore. And no matter how much more we drill, we will still be paying the world price of oil—not an American price.


FRANK VERRASTRO, Director & Senior Fellow, Energy Program, Center for Strategic & International Studies

We cannot ignore preparations for transitioning to the inevitable post-oil world, a transition which former Energy and Defense Secretary James Schlesinger has characterized as the greatest challenge this country and the world will face outside of war.

Analysis performed by EIA and the National Renewable Energy Lab estimates that even under optimistic assumptions, alternative transport fuels, excluding electric hybrid plug-ins, can be expected to displace or replace a maximum of 10% of conventional liquid transport fuels by 2030, leaving petroleum-based fuels, new technologies, conservation, and improved efficiency gains to deal with the remaining 90%.

For purposes of comparison, a billion gallons of alternative fuels per year roughly translates to 65,000 barrels a day of conventional gasoline, and maybe less depending on energy content. And we currently consume over nine million barrels of gasoline every day. So while contributions from alternate fuels will help meet increased demand, petroleum-based fuels are likely to remain the overwhelming fuel of choice for at least the next 20 years.
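Verrastro’s conversion is easy to check. A minimal sketch: the 42-gallons-per-barrel and 365-day figures are standard, while the roughly two-thirds ethanol-to-gasoline energy ratio is my assumption, not a figure from the testimony.

```python
# Convert an annual alternative-fuel volume to a daily barrel-equivalent figure.
GALLONS_PER_BARREL = 42   # standard U.S. petroleum barrel
DAYS_PER_YEAR = 365

def gallons_per_year_to_bpd(gallons_per_year):
    """Barrels per day equivalent of an annual volume in gallons."""
    return gallons_per_year / GALLONS_PER_BARREL / DAYS_PER_YEAR

bpd = gallons_per_year_to_bpd(1e9)  # one billion gallons a year
print(round(bpd))                   # roughly 65,000 b/d, matching the testimony

# Ethanol carries roughly two-thirds the energy of gasoline (assumed ratio),
# so the gasoline-equivalent displacement is smaller still:
print(round(bpd * 0.67))            # roughly 44,000 gasoline-equivalent b/d
```

Against the nine-million-barrel-a-day gasoline market, a billion gallons a year is well under one percent, which is the scale point the testimony is making.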

To the extent practicable, every effort should be made to pursue policies and changes that fully take into account investment and market practices and utilize as much as possible existing infrastructure and currently available technologies.

And fuels alone are not the answer. We need radical changes to our motor vehicles, both in terms of energy and design and construction material, as well as to the way we transport goods and people.

We frequently speak about politically unstable sources of supplies from around the globe, but the largest protracted losses of global oil and gas output in both 2004 and 2005 were the results of hurricanes in the U.S. Gulf of Mexico.

My professional background also includes a variety of energy policy positions in the White House, and the Departments of Interior and Energy, as well as senior executive positions dealing with both upstream and downstream issues in the energy sector, first as Director of Refinery Policy and Crude Oil Planning for TOSCO Corporation, and more recently as a Senior Vice President at Pennzoil Company.

Concern over the continued ability of this nation to secure energy supplies from an increasing list of inaccessible, high risk or less than reliable parts of the world has prompted policymakers to once again raise the issues of both the desirability and achievability of energy independence.

Consumers have come to both enjoy and expect a healthy domestic economy, which is underpinned by an energy supply that is at once available, affordable, secure, and environmentally benign. In this new world, can those criteria be satisfied, or are they just beyond the reach of current energy paradigms and policies? Global energy demand is projected to increase by 50% over the next 25 years, yet the relative shares of the 5 major fuel groups—oil, natural gas, coal, nuclear and renewables—are expected to remain remarkably constant, with fossil fuel consumption still accounting for over 85% of total energy demand in 2025. In the developing world, that figure exceeds 90%, carrying obvious consequences for consumer competition and the environment. As we consider our energy options, I would strongly urge that we not forget the substantial contributions that conservation and improved efficiency can make to achieving our future energy goals.

In the power generation sector, it currently takes 3 to 4 units of primary energy to produce one unit of delivered electricity. Conservation, efficiency and infrastructure delivery improvements coupled with additional contributions from renewable energy sources can obviate the need for additional, incremental production of fossil fuels for power generation purposes.

Analyzing this forecasted future leads to 2 inescapable conclusions. The first is that absent major technological breakthroughs, significant changes in consumption patterns and policies, or massive dislocations that alter the course of events, current consumption trends are simply unsustainable in the long term. The second is that even with a significant contribution from a wide range of alternative fuels, conventional energy sources will continue to dominate the landscape for at least the next several decades.

For the past 30 years, U.S. oil policy initiatives have centered around 4 major themes: increasing and diversifying sources of conventional and unconventional energy supplies both at home and abroad; encouraging, wherever practicable and politically achievable, the adoption of improvements in conservation and fuel efficiency; the expansion of the strategic petroleum reserve; and reliance on Saudi Arabia to balance oil markets and moderate prices.

For the most part, in an era of surplus supply, this strategy has largely worked. Times and market conditions, however, may well be changing. Global demand for all energy forms is accelerating, and resources are increasingly controlled by national players, whose primary national objectives may not conform to traditional market practices or concerns.

It took the world:

  • 18 years (from 1977-1995) to grow global oil demand from 60 to 70 million barrels per day (mmb/d)
  • 8 years to grow from 70 to 80 mmb/d 
  • 4 years at current growth rates to reach over 90 mmb/d by 2010.

Forecasts for oil consumption in 2030 approximate 115-120 mmb/d—roughly half again as much as we currently consume. Setting aside the debate about resource availability or so called ‘‘peak oil,’’ market growth of that magnitude will require huge investments, place enormous strains on transportation and infrastructure needs, and carry significant implications for security, global geopolitics and the environment.
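The shrinking intervals in the list above imply an accelerating growth rate; a quick check using only the milestones cited in the testimony (the 85 mmb/d base comes from earlier in the hearing; treating the recent rate as constant going forward is my simplifying assumption):

```python
import math

def cagr(start, end, years):
    """Compound annual growth rate implied by two demand levels."""
    return (end / start) ** (1 / years) - 1

# 60 -> 70 mmb/d over 18 years (1977-1995), then 70 -> 80 mmb/d over 8 years
print(f"{cagr(60, 70, 18):.2%}")   # roughly 0.9% per year
recent = cagr(70, 80, 8)
print(f"{recent:.2%}")             # roughly 1.7% per year -- about double

def years_to_reach(target, base=85.0, rate=recent):
    """Years for demand to grow from base to target at a constant rate."""
    return math.log(target / base) / math.log(1 + rate)

# At the recent pace, the 115-120 mmb/d forecast range is reached in
# roughly 18 to 21 years -- i.e., around 2025-2030 from the 2006 hearing.
print(round(years_to_reach(115), 1), round(years_to_reach(120), 1))
```

The arithmetic supports the testimony’s point: no exotic growth assumptions are needed for demand to reach the forecast range within a generation.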

In addition, the entry of new market players, like China and India, with growing energy appetites and expanding economies may pose competitive threats to America’s market dominance. Added to that are heightened security concerns about threats to infrastructure and facilities posed by terrorist groups and insurgents. Taken together, these changing circumstances have the potential to re-order the marketplace and fundamentally alter the geopolitical balance that has governed the past half century. Such changes may also warrant a thoughtful recalibration of our economic, security, environmental, energy and foreign policy calculations and policy choices.

The United States is currently the world’s largest producer, consumer, and net importer of energy. We are home to roughly 5% of the world’s population and produce 17% of the total energy supplied. Yet in the process of generating some 30% of global GDP, America consumes nearly a quarter of the world’s energy.

Projected supplies of LNG IMPORTS [Now many in Congress want to EXPORT LNG] assume that additional re-gasification capacity will be permitted and constructed either within the United States or in areas proximate to U.S. borders—an uncertain assumption. In addition to environmental, safety, competition, and siting issues, opponents of additional LNG re-gas projects increasingly cite security and foreign policy concerns about exposing the U.S. electric grid system to reliance on imports from countries, many of which are oil exporters found in troubled regions of the world.

Biomass. Since only a portion of the plant material can be used to produce ethanol, issues have been raised about how to handle the residual waste material—e.g., stalks, leaves and husks. A partial answer to this dilemma has come from research into what is called cellulosic ethanol, but transportation and energy content issues still remain to be resolved. For example, since a gallon of ethanol contains less energy than a comparable gallon of gasoline, poorer mileage ratings and more frequent fuel stops are impediments that need to be overcome. Additionally, cold weather start problems and transport in carriers other than pipelines may complicate gasoline substitution on a national scale.

Based on current government data, the capital investment costs for most, if not all, of these synthetic fuel technologies are considerably more than those required for a traditional crude oil refinery (see page 57 of EIA’s 2006 Annual Energy Outlook). Further, for purposes of comparison, EIA estimates that there is currently some 300,000 b/d of installed corn ethanol capacity in the United States and an additional 12,000 b/d of biodiesel capacity. Additionally, excluding ‘‘pilot’’ facilities, the latest EIA statistics indicate that there are currently no commercial BTL, GTL or CTL plants in the United States. In contrast, U.S. refining capacity currently exceeds 17,000,000 barrels per day and domestic gasoline demand averages over 9,000,000 barrels per day.

Absent significant policy and regulatory changes to promote increased fuel efficiency, major technological breakthroughs, and substantial changes in consumer/ driver behavior (based on environmental, security or foreign policy considerations), petroleum based fuels will remain the overwhelming fuel of choice for at least the next 20-30 years.

Given projections for increasing fuel demand, the inescapable conclusion is that oil imports will also be with us for decades to come. In that context, we would do well to ratchet down the political rhetoric surrounding the notion of achieving energy independence and instead refocus our efforts to deal with an inter-dependent energy future and simultaneously prepare for the (longer term) transition to a post-oil world, a transition which former Energy and Defense Secretary James Schlesinger has characterized as ‘‘. . . the greatest challenge this country and the world will face—outside of war.’’

U.S. OIL IMPORTS—SOURCES AND CONCERNS. In his State of the Union address, President Bush advanced the challenge of reducing this nation’s ‘‘addiction to oil’’ and reducing by 75% our reliance on oil imports from the Middle East. At best, this line was a thinly veiled attempt to drum up domestic political support for a valiant yet difficult effort to reduce petroleum consumption. At worst, it showed a decided lack of understanding of U.S. import sources, global oil markets and reserve holders.

PITFALLS AND WARNINGS. As with any transformational change, issues surrounding the approach, time horizon and levers designed to accomplish the objective remain keys to success. Dealing with an energy transition is no less daunting. To the extent practicable, every effort should be made to pursue policies and changes that fully take into account investment and market practices and utilize as much as possible existing infrastructure and currently available technologies. Minimizing uncertainty, avoiding conflicting or contradictory policy signals, and evaluating/selecting options based on economic efficiency and merit rather than political efficacy are also highly recommended.

Changing market and political conditions may complicate America’s policy agenda going forward, and these include:

  1. Energy security, broadly defined in terms of attacks on infrastructure, and greater vulnerability to imported energy supply threats, either physical or financial, due to growing production concentration;
  2. Market developments, particularly in alternative fuels and with respect to climate change. In the future, markets may drive policy more than policy drives markets;
  3. Less multilateral cooperation in the international oil trading and investment market places as governments pursue specific narrow interests;
  4. Increased vulnerability to supply disruptions due to growing natural gas import dependence in the power sector; and
  5. Political hostility to U.S. policy in specific regions as allies and friends abandon the United States to ensure their own political survival.

This almost inevitable growth in reliance on foreign supplies would seem to be a call to action, to define and implement policies that would both expand domestic supplies while setting demand management efforts in motion. To do so, however, requires a certain political will on the part of both the U.S. consumer and the government. And, to date, despite higher energy prices, real and threatened interruptions in supply, environmental damage, hurricanes and blackouts, that critical ingredient remains lacking.

All energy producer/exporters and consumer/importers are bound together by a mutual interdependency. All are vulnerable to any event, anywhere, at any time, which impacts on supply or demand. This means that the U.S. energy future likely will be shaped, at least in part, by events outside of our control and beyond our influence. Calls for energy independence, absent major technological breakthroughs and a national commitment, ring hollow, and in the near term are both unrealistic and unachievable.

In the absence of decisive political will to undertake those steps necessary to improve efficiency, promote conservation, encourage the development of domestic energy resources and renewable energy forms, learning to manage the risks accompanying import dependency may be the only reasonable course of action.



United States dependence on oil is the preeminent challenge of our generation. U.S. oil consumption affects more than just prices at the pump; it impacts our national security, our economy, our fiscal health and our environment. The United States uses 25% of the world’s oil but controls only 3% of the world’s proven oil reserves. As of right now, our demand for oil is only expected to grow, from nearly 21 million barrels a day now to 28 million barrels per day in 2030, of which nearly 70% will be imported. While demand in the U.S. will grow by approximately 25%, demand in China, India and other developing countries is projected to grow by 66%. To meet the projected world demand, global output would have to expand by 57% by 2025.

The Energy Information Administration’s (EIA) most recent forecast states that the price of crude is expected to remain high at $57 per barrel in 2030. The International Energy Agency (IEA) price forecast is even more dire. According to the IEA, if oil producing countries in the Middle East and Africa do not make immediate investments to increase production, the price will rise to $86 a barrel in 2030. Even if the region does make the necessary investments, prices could average $65 a barrel.

These forecasts assume the current projections for supply and demand but do not address the consequences of a supply disruption caused by terrorism, political unrest or weather. Last summer, the National Commission on Energy Policy and Securing America’s Future Energy conducted a simulation called Oil Shock Wave to explore the potential security and economic consequences of an oil supply crisis. The event started by assuming that political unrest in Nigeria combined with unseasonably cold weather in North America contributed to an immediate global oil supply shortfall. This sent prices to over $80 a barrel. The simulation then assumed that 3 terrorist attacks occurred at important ports and processing plants in Saudi Arabia and Alaska, which sent oil prices immediately soaring to $123 a barrel, and to $161 a barrel 6 months later. At these prices, the country goes into a recession and millions of jobs are lost as a result of sustained high oil prices.

This simulation almost became reality with the failed attack on Abqaiq in Saudi Arabia last month. Had the attack been successful, it would have removed 4 to 6 million barrels per day from the global market, sending prices soaring around the world, and would likely have had a devastating impact on our economy.

One of the lessons from September 11th is that we can no longer be so dependent on places like Saudi Arabia, Russia and Venezuela for our energy supply. Yet we are more dependent on foreign oil from hostile countries today than we were on September 11th—making us more vulnerable and putting the United States in a uniquely disturbing position of bankrolling both sides in the War on Terror. This goes to the heart of our security and our sovereignty. As the world confronts the prospect of a nuclear Iran, our leverage is dramatically limited by the fact that Iran is the second largest exporter of oil. We and our allies are vulnerable to energy blackmail. A few months ago, the Russians decided they weren’t pleased with the Ukrainian elections, so they simply decided to stop exporting natural gas to them— nearly causing an economic crisis in the region.

How can we be sure that the radicals and America-haters who control the oil will never do that to us? Our economy is vulnerable to the price volatility of the oil market and we must do what we can to build resilience into our economy. Decreasing the oil intensity of our economy will help us weather price shocks and make us more secure. We can reduce oil intensity by reducing our demand for oil.

The risks we face above ground by depending on unstable suppliers and good weather are too great and to a certain extent out of our control. Had the attack on Abqaiq been successful, there is little we could have done to moderate its impact on our economy and lower prices, which is why it is urgent that Congress and the President act now to start reducing our dependence on oil. There is no magic bullet to address a major shock to the oil market, and we must take the steps necessary to reduce our dependence on oil, which will make our nation stronger. We must bring the same urgency to energy security that we have brought to the War on Terror.

The Vehicle and Fuel Choices for American Security Act (VFCASA) makes significant reductions in our oil use. We chose this title because nothing less than our national security is at stake. This bill would reduce projected oil use by 2.5 million barrels per day in 2016 and 7 million barrels per day in 2026. It also provides tools to meet these aggressive targets by improving the efficiency of vehicles and increasing the production and use of biofuels. VFCASA includes new approaches for manufacturers, the federal government, scientists and consumers, all designed to encourage greater energy security. The bill’s other sponsors are Joseph Lieberman of Connecticut, Sam Brownback of Kansas, Norm Coleman of Minnesota, Lindsey Graham of South Carolina, Ken Salazar of Colorado, Jeff Sessions of Alabama, Bill Nelson of Florida, Richard Lugar of Indiana, Barack Obama of Illinois, Johnny Isakson of Georgia and Lincoln Chafee of Rhode Island. I hope that in the future we all look back on the day this bill was introduced as the beginning of a major shift in our national security strategy. I hope that history will say we saw a challenge to our national security and prosperity and then met it and mastered it.

The legislation requires that in 2012, 10% of vehicles manufactured be flexible fuel vehicles, alternative fueled vehicles, hybrids, plug-in hybrids, advanced diesels and other oil saving vehicle technologies. This percentage rises each year until 50% of the new vehicle fleet will be one of these oil saving technologies. It also provides tax incentives for U.S. manufacturing facilities to retool existing facilities to produce advanced technology vehicles which will help shift the vehicle fleet to more efficient vehicles while minimizing the job impact of an increased market share of advanced technology vehicles. The bill builds on the Energy Policy Act (EPAct) of 2005 by expanding the number of consumers that can take advantage of the tax credit available for the purchase of more efficient vehicles. It offers a tax credit to private fleet owners who invest in more efficient vehicles.

VFCASA contains robust research provisions in the areas of electric drive transportation, including battery research, lightweight materials and cellulosic biofuels. Each of these technologies holds great potential to play a key role in reducing our dependence on oil. For instance, lightweight materials, such as carbon composites and steel alloys, hold the promise of being able to double automotive fuel economy while improving safety without increasing the cost of the vehicle. Cellulosic biofuels, which the President mentioned in the State of the Union, have the promise to be cheaper than gasoline and to produce 7 to 14 times more energy than is used in their production. My bill doubles the funding for bioenergy research contained in EPAct and provides additional funding for production incentives for the production of cellulosic biofuels. The average American automobile might remain in operation for 15 years or more. This means that it is essential that we begin immediately to deploy oil saving technologies.

Addressing our dependence on oil is a challenge that we can no longer ignore. Events in the world from September 11th to Hurricane Katrina to the recent attempted terrorist attack in Saudi Arabia continue to show us how urgent it is that we act immediately. I hope that this hearing today is only the Committee’s first step in tackling the challenge of American oil dependence.


DIANNE FEINSTEIN, U.S. SENATOR from California (raise fuel economy, close SUV/light-truck loophole)

The amount of oil imported into the United States has climbed from 6 million barrels of oil per day in 1973 to 12 million barrels per day in 2004 (Energy Information Administration). And the percentage of foreign oil consumed in the U.S. has climbed from 35% in 1973 to 59% in 2004.

So while there has been a lot of talk about decreasing our nation’s dependence on foreign oil, most of it has been empty rhetoric. This week’s cover story of BusinessWeek is ‘‘The New Middle East Oil Bonanza.’’ With oil prices so high, partially due to fear of oil production disruptions in Nigeria, Saudi Arabia, Venezuela, and elsewhere, billions of dollars are going into the coffers of oil-producing nations.

I am seriously concerned about the impacts of America’s overdependence on foreign oil. This cannot continue. For foreign policy and for environmental reasons, the overdependence on oil is a real problem. With 5% of the world’s population, we cannot continue to use 25% of the world’s oil supply. Especially not with India and China developing at their current pace. There are things we could do today to reduce our dependency on oil, and yet we need the political will to get them accomplished. Specifically, we must raise the nation’s fuel economy standards. The Consumer Federation of America estimates that increasing the fuel economy of our domestic fleet by 5 miles per gallon would save about 23 billion gallons of gasoline each year, reducing oil imports by an estimated 14%. A fleet-wide increase of 10 miles per gallon would save 38 billion gallons, cutting imports by almost 20%. That is why I have introduced a very modest bill for the past three Congresses that would close a loophole in current law that allows SUVs and other light trucks to meet less stringent fuel economy standards than other passenger vehicles.

If the SUV loophole were closed, the savings would be rather dramatic. More than 480,000 SUVs were sold in the first quarter of 2005. If those SUVs achieved an average fuel economy of 27.5 miles per gallon, we would reduce gasoline use by more than 81 million gallons a year. And that’s just for SUVs sold in the first quarter of 2005. If this bill were to pass, the United States would save 1 million barrels of oil a day and decrease foreign oil imports by 10%. Yet the automobile manufacturers continue to fight this proposal tooth and nail and for reasons I cannot understand. The technology to make these vehicles more efficient is available today and American auto companies are making vehicles to meet fuel economy standards in other countries. China, for instance, has issued fuel efficiency standards that are more stringent than ours. If American auto companies hope to make cars that will compete in China, then they will need to make them more fuel efficient. I hope the representative from Ford will be able to address this issue in her statement. If the Federal Government is not going to act, Congress should not stop the States from acting.
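Senator Feinstein’s 81-million-gallon figure can be roughly reconstructed. In this sketch, the 480,000 first-quarter sales and the 27.5 mpg target come from the testimony, but the roughly 20 mpg baseline and 12,000 miles per year are illustrative assumptions of mine, not numbers from the record.

```python
# Rough reconstruction of the quoted SUV fuel-savings estimate.
SUVS_SOLD = 480_000       # first-quarter 2005 sales, per the testimony
TARGET_MPG = 27.5         # passenger-car standard, per the testimony
BASELINE_MPG = 20.0       # assumed current SUV fuel economy (illustrative)
MILES_PER_YEAR = 12_000   # assumed annual mileage per vehicle (illustrative)

def annual_gallons(mpg):
    """Gallons burned per vehicle per year at a given fuel economy."""
    return MILES_PER_YEAR / mpg

savings_per_suv = annual_gallons(BASELINE_MPG) - annual_gallons(TARGET_MPG)
fleet_savings = savings_per_suv * SUVS_SOLD
print(round(fleet_savings / 1e6, 1))  # tens of millions of gallons a year,
                                      # in line with the "more than 81 million" cited
```

With these assumptions the sketch lands just under the quoted figure; a slightly lower baseline mpg or higher annual mileage closes the gap, so the testimony’s number is plausible.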


James Woolsey, VP Booz Allen Hamilton

I believe that energy independence is principally an issue of oil and conventional oil. The dangers of petroleum dependence, and the urgency of addressing it, are, I think, driven by many factors.

  1. The current transportation infrastructure is committed to oil and oil-compatible products. So major investments in electricity generation of different types, whether renewables, nuclear, or whatever, while they may be wise, have very little impact today on oil use. They are important for other reasons, but not particularly with respect to oil use. 

  2. The greater Middle East is going to continue to be the low-cost and dominant petroleum producer for the foreseeable future and hold two-thirds of the world’s proven reserves.

  3. The growth we expect in China and India and elsewhere is going to keep demand up for a substantial time and put the greater Middle East and particularly Saudi Arabia more and more in the driver’s seat.

  4. Petroleum infrastructure is very vulnerable to terrorist attacks and other types of potential cut-offs. Ten days ago, we had the attack at Abqaiq. We have hurricane damage possible in the gulf coast. We have the possibility of regime change in the Middle East. There was almost a coup in Saudi Arabia in 1979. This reliance on this part of the world is going to be a problem for us for a long time.

  5. The possibility exists not only of a regime change and terrorist attacks, but also of financial disruption as a result of how much we are borrowing to finance our oil habits.

We borrow approximately a billion dollars every working day, $250 billion a year, about a third of our overall trade deficit, in order to import oil.

And over the last 30 years, some $70 to $100 billion of that has been provided by Saudi Arabia as a government, and certainly more by individuals, to causes such as the Wahhabi madrassas in Pakistan and elsewhere in the Middle East. When I was chairman of the Board of Freedom House, we found very strong hate literature even in mosques here in the United States. We are paying for that, and that is essentially the same set of beliefs that are propagated by al Qaeda. The only difference between the Wahhabis and al Qaeda is who should be in charge. But the underlying hatred of other religions, democracy and the rest, we pay for in no small measure through our borrowing for oil.

For many developing countries, oil debt is a huge share of their national debt and, therefore, of their problem of poverty. We suggest, as former Secretary of State George Shultz and I stated in a piece last summer (we co-chaired the Committee on the Present Danger), that one should focus on changes that can be made within the existing infrastructure, that can be made relatively soon, and that use cheap or even waste products as feedstocks. Those are the reasons why, in the last several pages of my testimony, Mr. Chairman, I suggest that we concentrate, even though there are other worthy things to do, on such things as biofuels, particularly ethanol from cellulose, which in the long run is going to be much cheaper than making it from corn or other starches, and on diesel from waste products of all kinds, which is becoming technologically quite feasible.

I served as Director of Central Intelligence, 1993-95, one of the four Presidential appointments I have held in two Republican and two Democratic administrations; these have been interspersed in a career that has been generally in the private practice of law and now in consulting.

Energy security has many facets—including particularly the need for improvements to the electrical grid to correct vulnerabilities in transformers and in the Supervisory Control and Data Acquisition (SCADA) systems. But energy independence for the U.S. is in my view preponderantly a problem related to oil and its dominant role in fueling vehicles for transportation.

These dangers in turn give rise to two proposed directions for government policy in order to reduce our vulnerability rapidly. In both cases it is important that existing technology should be used, i.e. technology that is already in the market or can be so in the very near future and that is compatible with the existing transportation infrastructure. To this end government policies in the United States and other oil-importing countries should: (1) encourage a shift to substantially more fuel-efficient vehicles within the existing transportation infrastructure, including promoting both battery development and a market for existing battery types for plug-in hybrid vehicles; and (2) encourage biofuels and other alternative and renewable fuels that can be produced from inexpensive and widely-available feedstocks—wherever possible from waste products.


1. The current transportation infrastructure is committed to oil and oil-compatible products. Petroleum and its products dominate the fuel market for vehicular transportation. This dominance substantially increases the difficulty of responding to oil price increases or disruptions in supply by substituting other fuels.

Substituting other fuels for petroleum in the vehicle fleet as a whole has generally required major, time-consuming, and expensive infrastructure changes. One exception has been some use of liquefied natural gas (LNG) and other fuels for fleets of buses or delivery vehicles, and the use of corn-derived ethanol mixed with gasoline in proportions up to 10 per cent ethanol (‘‘gasohol’’) in some states. Neither has appreciably affected petroleum’s dominance of the transportation fuel market.

There are imaginative proposals for transitioning to other fuels for transportation, such as hydrogen to power automotive fuel cells, but this would require major infrastructure investment and restructuring. If privately-owned fuel cell vehicles were to be capable of being readily refueled, this would require reformers (equipment capable of reforming, say, natural gas into hydrogen) to be located at filling stations, and would also require natural gas to be available there as a hydrogen feed-stock. So not only would fuel cell development and technology for storing hydrogen on vehicles need to be further developed, but the automobile industry’s development and production of fuel cells also would need to be coordinated with the energy industry’s deployment of reformers and the fuel for them. Moving toward automotive fuel cells thus requires us to face a huge question of pace and coordination of large-scale changes by both the automotive and energy industries. This poses a sort of industrial Alphonse and Gaston dilemma: who goes through the door first? (If, instead, it were decided that existing fuels such as gasoline were to be reformed into hydrogen on board vehicles instead of at filling stations, this would require on-board reformers to be developed and added to the fuel cell vehicles themselves—a very substantial undertaking.)

It is because of such complications that the National Commission on Energy Policy concluded in its December, 2004, report ‘‘Ending The Energy Stalemate’’ (‘‘ETES’’) that ‘‘hydrogen offers little to no potential to improve oil security and reduce climate change risks in the next twenty years.’’ (p. 72) To have an impact on our vulnerabilities within the next decade or two, any competitor of oil-derived fuels will need to be compatible with the existing energy infrastructure and require only modest additions or amendments to it.

2. The Greater Middle East will continue to be the low-cost and dominant petroleum producer for the foreseeable future. Home of around two-thirds of the world’s proven reserves of conventional oil—45% of it in just Saudi Arabia, Iraq, and Iran—the Greater Middle East will inevitably have to meet a growing percentage of world oil demand.

One need not argue that world oil production has peaked to see that this puts substantial strain on the global oil system. It will mean higher prices and potential supply disruptions and will put considerable leverage in the hands of governments in the Greater Middle East as well as in those of other oil-exporting states which have not been marked recently by stability and certainty: Russia, Venezuela, and Nigeria.

Deep-water drilling and other opportunities for increases in supply of conventional oil may provide important increases in supply but are unlikely to change this basic picture. If world production of conventional oil has peaked or is about to, this of course further deepens our dilemma and increases costs sooner. Even if other production comes on line, e.g. from unconventional sources such as tar sands in Alberta or shale in the American West, their relatively high cost of production could permit low-cost producers of conventional oil, particularly Saudi Arabia, to increase production, drop prices for a time, and undermine the economic viability of the higher-cost competitors, as occurred in the mid-1980’s.

3. The petroleum infrastructure is highly vulnerable to terrorist and other attacks. The radical Islamist movement, including but not exclusively al Qaeda, has on a number of occasions explicitly called for worldwide attacks on the petroleum infrastructure and has carried some out in the Greater Middle East. A better-planned attack than the one that occurred ten days ago at Abqaiq—such as that set out in the opening pages of Robert Baer’s recent book, Sleeping With the Devil (terrorists flying an aircraft into the unique sulfur-cleaning towers at the same facility)—could take some six million barrels per day off the market for a year or more, sending petroleum prices sharply upward to well over $100/barrel and severely damaging much of the world’s economy. Domestic infrastructure in the West is not immune from such disruption. U.S. refineries, for example, are concentrated in a few places, principally the Gulf Coast.

Last summer’s accident in the Texas City refinery—producing multiple fatalities—points out potential infrastructure vulnerabilities, as of course does this past fall’s hurricane damage in the Gulf. The Trans-Alaska Pipeline has been subject to several amateurish attacks that have taken it briefly out of commission; a seriously planned attack on it could be far more devastating. In view of these overall infrastructure vulnerabilities policy should not focus exclusively on petroleum imports, although such infrastructure vulnerabilities are likely to be the most severe in the Greater Middle East. It is there that terrorists have the easiest access, and the largest proportion of proven oil reserves and low-cost production are also located there. But nothing particularly useful is accomplished by changing trade patterns. To a first approximation there is one worldwide oil market and it is not generally helpful for the U.S., for example, to import less from the Greater Middle East and for others then to import more from there. In effect, all of us oil-importing countries are in this together.

4. The possibility exists, both under some current regimes and among those that could come to power in the Greater Middle East, of embargoes or other disruptions of supply. It is often said that whoever governs the oil-rich nations of the Greater Middle East will need to sell their oil. This is not true, however, if the rulers choose to try to live, for most purposes, in the 7th century. Bin Laden has advocated, for example, major reductions in oil production and oil prices of $200/barrel or more. As a jihadist Web site stated just in the last few days: ‘‘[t]he killing of 10 American soldiers is nothing compared to the impact of the rise in oil prices on America and the disruption that it causes in the international economy.’’ Moreover, in the course of elaborating on Iranian President Ahmadinejad’s threat to destroy Israel and the U.S., his chief of strategy, Hassan Abbasi, has recently bragged that Iran has already ‘‘spied out’’ the 29 sites ‘‘in America and the West’’ which they (presumably with help from Hezbollah, the world’s most professional terrorist organization) are prepared to attack in order to ‘‘destroy Anglo-Saxon civilization.’’ One can bet with reasonable confidence that some of these sites involve oil production and distribution. In 1979 there was a serious attempted coup in Saudi Arabia. Much of what the outside world saw was the seizure by Islamist fanatics of the Great Mosque in Mecca, but the effort was more widespread. Even if one is optimistic that democracy and the rule of law will spread in the Greater Middle East and that this will lead after a time to more peaceful and stable societies there, it is undeniable that there is substantial risk that for some time the region will be characterized by chaotic change and unpredictable governmental behavior. Reform, particularly if it is hesitant, has in a number of cases in history been trumped by radical takeovers (Jacobins, Bolsheviks).
There is no reason to believe that the Greater Middle East is immune from these sorts of historic risks.

5. Wealth transfers from oil have been used, and continue to be used, to fund terrorism and its ideological support. Estimates of the amount spent by the Saudis in the last 30 years spreading Wahhabi beliefs throughout the world vary from $70 billion to $100 billion. Furthermore, some oil-rich families of the Greater Middle East fund terrorist groups directly. The spread of Wahhabi doctrine—fanatically hostile to Shi’ite and Sufi Muslims, Jews, Christians, women, modernity, and much else—plays a major role with respect to Islamist terrorist groups: a role similar to that played by angry German nationalism with respect to Nazism in the decades after World War I. Not all angry German nationalists became Nazis and not all those schooled in Wahhabi beliefs become terrorists, but in each case the broader doctrine of hatred has provided the soil in which the particular totalitarian movement has grown. Whether in lectures in the madrassas of Pakistan, in textbooks printed by Wahhabis for Indonesian schoolchildren, or on bookshelves of mosques in the U.S., the hatred spread by Wahhabis and funded by oil is evident and influential. On all points except allegiance to the Saudi state, Wahhabi and al Qaeda beliefs are essentially the same. In this there is another rough parallel to the 1930’s—between Wahhabis’ attitudes toward al Qaeda and like-minded Salafist Jihadi groups today and Stalinists’ attitude toward Trotskyites some sixty years ago (although there are of course important differences between Stalin’s Soviet Union and today’s Saudi Arabia). The only disagreement between Stalinists and Trotskyites was on the question whether allegiance to a single state was the proper course or whether free-lance killing of enemies was permitted. Stalinist hatred of Trotskyites and their free-lancing didn’t signify disagreement about underlying objectives, only tactics, and Wahhabi/Saudi cooperation with us in the fight against al Qaeda doesn’t indicate fundamental disagreement between Wahhabis and al Qaeda on, e.g., their common genocidal fanaticism about Shia, Jews, and homosexuals. So Wahhabi teaching basically spreads al Qaeda ideology.

6. The current account deficits for the U.S. and a number of other countries create risks ranging from major world economic disruption to deepening poverty, and could be substantially reduced by reducing oil imports. The U.S. borrows about $2 billion every calendar day from the world’s financial markets to finance the gap between what we produce and what we consume. The single largest category of imports is the approximately $1 billion per working day, or $250 billion a year, borrowed to import oil. The accumulating debt increases the risk of a flight from the dollar or major increases in interest rates. Any such development could have major negative economic consequences for both the U.S. and its trading partners.

If such deficits are to be reduced, however, say by domestic production of substitutes for petroleum, this should be based on recognition of real economic value such as waste cleanup, soil replenishment, or other tangible benefits.

Government policies with respect to the vehicular transportation market:

Encourage improved vehicle mileage, using technology now in production. The following three technologies are available to improve vehicle mileage substantially. [We should] take advantage of diesels’ substantial mileage advantage over gasoline-fueled internal combustion engines. Heavy penetration of diesels into the private-vehicle market in Europe is one major reason why the average fleet mileage of new vehicles is 42 miles per gallon in Europe and only 24 mpg in the U.S. Although the U.S. has, since 1981, increased vehicle weight by 24% and horsepower by 93%, it has actually somewhat lost ground on mileage over that near-quarter century. In the 12 years from 1975 to 1987, however, the U.S. improved the mileage of new vehicles from 15 to 26 mpg.

Hybrid gasoline-electric vehicles now on the market generally show substantial fuel savings over their conventional counterparts.

Light-weight carbon composite construction. Constructing vehicles with inexpensive versions of the carbon fiber composites that have been used for years for aircraft construction can substantially reduce vehicle weight and increase fuel efficiency while at the same time making the vehicle considerably safer than with current construction materials.

Encourage the commercialization of alternative transportation fuels that can be available soon, are compatible with existing infrastructure, and can be derived from waste or otherwise produced cheaply.

Biomass (cellulosic) ethanol. The use of ethanol produced from corn in the U.S. and sugar cane in Brazil has given birth to the commercialization of an alternative fuel that is coming to show substantial promise, particularly as new feedstocks are developed.

Senator DORGAN. I think a dispassionate observer living off of our planet, seeing that we use 84 million barrels a day with one quarter of that used in the United States, which imports 60% of that oil from other parts of the globe, most of them covered with sand, would ask: How could they not have been concerned about that? Why didn’t they take dramatic action, because tonight or tomorrow or next Saturday or God forbid next month a terrorist action or some other cataclysmic action could just simply throw this country’s economy flat on its back?


Mr. WOOLSEY. I think it’s extremely urgent, Senator Dorgan. I think that this could collapse on us at any time. There was almost a coup in Saudi Arabia in 1979. And Iran could cut us off for a while for its own reasons of pursuing its nuclear program, or there could be terrorist attacks in a number of places. This is something that we need to fix and we need to fix now. In my view reducing our dependence on conventional oil is an integral part of the war on terror. I believe we will be in this war for decades, much like the Cold War, and that one key to winning it is to cease funding the ideology of hatred that our enemies feed upon. We borrow $250 billion/year to import oil—and an increasing share of it will come from the Middle East as the years go on. The Saudis then, to take one example, provide around $4 billion/year to the Wahhabis, who then use much of it to run, e.g., madrassas in Pakistan and elsewhere that teach this hatred. Indeed one could say that, other than the Civil War, this is the only war the U.S. has fought in which we pay for both sides.

Nuclear energy may be one good way to produce electricity, especially because it does not emit global warming gases. But it is largely irrelevant to the question of oil addiction, because only 2-3% of our electricity comes from oil.



I have studied the White House Fact Sheet on the Advanced Energy Initiative with some puzzlement. The stated purpose is ‘‘to help break America’s dependence on foreign sources of energy.’’ This can only mean oil: the U.S. does not import coal, uranium is in surplus, and natural gas imports are small (although Administration policy is to increase them severalfold, creating a new dependence). However, the section on ‘‘diversifying energy sources’’ is all about electricity, which has almost nothing to do with oil. This confusion between oil and electricity, conflating them both into ‘‘energy,’’ bemuses energy experts the world over who assume that responsible U.S. officials must understand these fundamentals; yet such jumbled formulations persist.

Energy independence is not only about oil. Many sources of LNG raise similar concerns of security, dependence, site vulnerability, and cost. I do not expect that Iran and Russia would be more reliable, long-run sources of gas than Persian Gulf states are today of oil.

Coal and nuclear generation of electricity have virtually nothing to do with displacing oil, which is the nub of the Nation’s energy security problem.

I don’t think we need to spend more (although more well-targeted energy R&D would certainly be valuable), but we definitely need to spend smarter. The lion’s share of both current and new energy R&D funding is going, as usual, to the least promising but most politically powerful technologies—coal and nuclear—that can by their nature contribute virtually nothing to getting America off oil. This and the ill-conceived subsidies in last year’s Energy Policy Act don’t simply divert Federal funds from best buys; they also leverage untold sums of private capital into non-solutions. These mistaken Federal energy priorities in the 1980s, in practical effect, created today’s oil crisis because of what they didn’t do and what they dissuaded private investors from doing. Today’s repetition of this policy error is setting the stage for another, longer, worse oil crisis.

The Strategic Petroleum Reserve (SPR) is useful, though I’ve heard disturbing recent reports about its ability to sustain maximum output, and I remain concerned about the vulnerability of its centralized facilities to disruption by hurricanes or terrorism.

I’d prefer greater emphasis on distributed stockpiles of refined products rather than crude oil, rotated as needed to guard against deterioration. The oil system used to have much larger product stockpiles close to its customers than it does today, because bean-counters have wrung out inventory as mere carrying-cost overhead, sapping its societal value for private gain.

Europe is generally ahead in this regard; many governments require market actors, both suppliers and major customers, to carry refined-product stocks that are already in the form and at the place where they’d be needed by final customers. With so many simultaneous disruptions in the world oil system, and strong incentive to cause more, I think the case for such distributed product stocks (duly protected against attack) is now unassailable. So is the even more powerful case for efficient use of oil. This gives the most bounce per buck by stretching existing stocks and buying more time to mend what’s broken or improvise substitutes.

The grave security problems I identified 27 years ago in our Nation’s energy infrastructure should have been fixed, but instead, most of them have been worsened. These self-inflicted vulnerabilities are an attractive nuisance for Al Qa’eda, and we should at least stop multiplying them. Current Federal energy policy perpetuates America’s expanding oil dependence, because it ranges from modest support (advanced biofuels) to inaction (natural-gas and electric efficiency) to opposition (seriously improving light-vehicle efficiency). The resulting oil dependence funds both sides of the war, impugns U.S. moral standing, has bailed out the nearly empty Iranian and Saudi treasuries, has created (in effect) such leaders as Ahmadinejad, Chavez, El-Bashir, and Putin, systematically distorts foreign policy and postures, poisons foreign attitudes, weakens competitiveness, and enhances vulnerability and fragility.

Meanwhile, Federal policy strongly favors overcentralized system architecture, as seen in Katrina’s damage and in bigger, more frequent regional blackouts. It creates terrorist targets, from LNG and nuclear facilities to Iraqi infrastructure. Its centerpiece, ANWR drilling, would create an all-American Strait of Hormuz in a world that already has one such chokepoint too many. It lavishly supports expansion of nuclear power and reverses the Ford-Carter reprocessing moratorium, thus worsening proliferation. On top of that, it sacrifices what’s left of the nonproliferation regime, painfully built over a half century, to support the nuclear bureaucracy that makes 3% of India’s electricity, while ignoring the vastly greater and cheaper potential to improve the peaceful 97%.

The Japanese have been on a steady course to conserve energy and reduce their dependence on imported energy while their GDP continues to grow. They’re turning down their thermostats and shutting off their idling car and truck engines to save energy. Opinion polls show that more than 75% of Japan’s citizens view energy conservation as a personal responsibility. Many are willing to shell out extra cash for efficient appliances and office equipment. Do you think that Americans can gain energy independence without feeling a little pain? Are American consumers willing to accept some financial pain for energy independence gain? I think most Americans hunger for leaders who engage their patriotic personal involvement in a great national project to shed our oil burden. Winning the Oil Endgame showed how to do this through entrepreneurship and innovation rather than through cost, pain, or sacrifice. But those interested—and there are many— in changing careless habits should be welcomed too, because markets work better when they’re mindful. Just please don’t confuse efficiency (which is widely called ‘‘conservation’’ in the Pacific Northwest but nowhere else in the country) with curtailment (which is what many Americans from other regions think ‘‘conservation’’ means): they should be discussed separately and in unambiguous language, not interchangeably.

We should worry not only about already attacked Saudi oil choke points like Abqaiq and Ras Tanura but also about the all-American Strait of Hormuz proposed in Alaska.

DOE policy that did not undercut DOD’s mission would shift from brittle energy architecture, in which the next major failure is inevitable, to more efficient, resilient, diverse, dispersed, renewable systems that make such failures impossible. It would avoid electricity investments that are meant to prevent blackouts but instead make them bigger and more frequent. It would stop creating attractive nuisances for terrorists, from vulnerable LNG and nuclear facilities to over-centralized U.S. and Iraqi electric infrastructure. And it would acknowledge that the nuclear proliferation correctly identified by the President as the gravest threat to national security is driven largely by nuclear power.

The key to wringing twice the work from our oil is tripled-efficiency cars, trucks, and planes; integrating the best 2004 technologies (ultra-light steels or composites, better aerodynamics and tires, and advanced propulsion) can do this with 2-year paybacks.

I believe the shortest path to an energy policy that enhances security and prosperity is free-market economics, letting all ways to save or produce energy compete fairly at honest prices, no matter which kind they are, what technology they use, where they are, how big they are, or who owns them.

Bigger power plants sending bigger bulk power flows through longer transmission lines tend to make the grid less stable (id.). Leading engineering analysts of electric-grid theory are reaching similar conclusions, e.g., http://www.ece.wisc.edu/~dobson/PAPERS/carrerasHICSS03.pdf

Gasoline taxes are a pretty good signal to drive less if you have alternatives, but they are a very weak signal to buy an efficient car, because that price signal in the fuel is diluted manyfold by the other costs of buying and running a car and then heavily discounted at consumer discount rates. So consumers really only look at the first 2 or 3 years of fuel savings. CAFE standards are pretty well gridlocked. We found that a more effective method would be to take each size class of light vehicles and institute a feebate system going forward. That is a combination of a fee and a rebate, so that within each size class separately, the less efficient vehicles pay a fee according to how inefficient they are, and the more efficient vehicles get a rebate, paid for by the fees, according to how efficient they are. So you would have an incentive within each size class to buy a more efficient vehicle, but no incentive to buy a different size than you wanted.
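[The feebate mechanism Lovins describes can be sketched in a few lines. The fee rate and the vehicle data below are illustrative assumptions (the testimony specifies the mechanism, not any numbers), and the class average is unweighted rather than sales-weighted for simplicity.]

```python
# Illustrative sketch of a revenue-neutral feebate within one size class.
# All rates and vehicle figures are hypothetical.
def feebates(vehicles, rate_per_gpm=20_000.0):
    """vehicles: dict of model -> mpg, all within ONE size class.
    The charge is proportional to gallons-per-mile relative to the
    class average; positive = fee, negative = rebate."""
    gpm = {model: 1.0 / mpg for model, mpg in vehicles.items()}
    pivot = sum(gpm.values()) / len(gpm)   # class-average gallons per mile
    return {model: rate_per_gpm * (g - pivot) for model, g in gpm.items()}

midsize = {"thirsty": 20.0, "average": 27.5, "frugal": 40.0}
charges = feebates(midsize)
# Fees collected equal rebates paid, so the scheme is revenue-neutral
assert abs(sum(charges.values())) < 1e-9
```

Because the pivot is set separately for each size class, the scheme rewards choosing a more efficient vehicle within a class without penalizing the choice of class itself, which is the point Lovins makes above.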

I would say tripled-efficiency cars, trucks, and planes, and a diverse, dispersed, decentralized, resilient, invulnerable electric system [are best]. If you are asking on a policy level, I would say size- and revenue-neutral feebates, and encouraging the States to reward gas and electric utilities for cutting your bill, not for selling you more energy. That would free up half the gas in the country, and a lot of that could be substituted back for oil.


Senator MURKOWSKI. Mr. Lovins, in looking at your testimony as well as some of the backup documentation that you have provided with it, you are arguing against producing more oil from Alaska basically from the security perspective. And I keep reading with interest the same phrase you have used, the all-American Strait of Hormuz, as well as the reference to this world’s biggest chapstick. We realize that it is a long silver thread running through the State providing a valuable resource to the country. Do you have the same issues in terms of security for a natural gas pipeline to meet that energy need for this country that you have indicated in your comments about oil?

Mr. LOVINS. I think many of the details would differ. The gas pipeline would not be hot and would not have to be above ground and very exposed. You would not have the cold-restart problem that a hot oil pipeline does. That is the source of the chapstick comment. I would call your attention to the more recent article originally entitled ‘‘The Alaskan Threat to National Energy Security’’ that’s cited toward the end of footnote 5 in my prepared testimony; it was published just weeks before 9/11 with a title change by the editor. And the annotated version of that, which is cited, details that the security issues I described have not gone away. You’ll find the scariest episode in the 30-year record you refer to, Senator, is not the drunk taking a potshot at the line. Rather it is the disgruntled engineer who was very fortunately caught months before blowing up three critical and very hard to fix parts of the line with 14 bombs he had already built and cold-weather tested. And he was caught only because he involved someone else in the plot who turned him in. He was not aiming to hurt the United States. He intended to make money in the oil futures market. But as Mr. Woolsey and I wrote in the Christian Science Monitor in 2002, that guy was an amiable bungler compared to our al Qaeda adversaries.

Basing Federal policy on sound market principles and ‘‘best buys first’’ would be a propitious change from recent tendencies. So would a clear focus on oil, rather than confusing oil with electricity.

Senator Domenici: How do you respond to those, like me, who say that an economy run entirely without oil by the 2040’s is quite difficult to believe?

Lovins: First, I would respectfully invite you to examine the analysis we presented on 20 September 2004 in Winning the Oil Endgame and its Technical Annexes, all posted free at www.oilendgame.com. Our scenario achieves half its oil displacement by substituting saved natural gas and advanced biofuels for oil.

Most R&D has been and still is mis-allocated to favored technologies that are already mature or show no hope of becoming competitive. The money seems to be allocated more by pork-barrel politics than by risk-adjusted public return. Second, total federal energy R&D is far too small for its actual and rhetorical priority.

I’d add that the Federal government is doing far too much to distort private markets, deliberately causing huge mis-allocations of private capital. I’d love to see a thorough, transparent, and defensible compilation of Federal energy subsidies—

My Institute did the first thorough analysis of Federal energy subsidies, summarized in ‘‘Hiding the True Costs of Energy Sources,’’

Nuclear power in FY84 got 34% of the subsidies (excluding Price-Anderson) but delivered 1.9% of the energy; each of its subsidy dollars delivered 1/80th as much as a dollar of subsidies to renewables and efficiency. The latest analyses by the top contemporary independent scholar in this field, Doug Koplow (www.earthtrack.net), confirm that Federal energy subsidies are still large and probably even more distortive. There is little point developing new technologies if such massive market interventions favoring rivals continue to suppress their adoption.

Alaska’s onshore methane hydrates may bubble out of the thawing tundra on their own, causing a global climate disaster. I haven’t seen a convincing argument that onshore or offshore methane hydrates can be extracted without a substantial risk of major uncontrolled releases of methane. Lacking such grounds for confidence that the operation could avoid making our planet more like Venus, I hope the hydrates stay right where they are. And we don’t need them if, more cheaply, we use energy in a way that saves money.

Regrettably, current Federal policy has only limited relevance to eliminating oil dependence, and much of its content that is relevant is unhelpful. Most of the public policy initiatives that are both relevant and helpful are coming from the States.

Coal gasification is a feasible but costly way to produce gas or liquids. It is quite carbon-intensive as normally conceived. All carbon-sequestered ‘‘clean coal’’ innovations are in my view a 4th-best approach, after energy efficiency, renewables, and combined-heat-and-power (co-, tri-, and polygeneration), so I’d give it a lower overall priority in energy R&D than it currently has. Having a lot of coal is in my view a less important reason to use it than whether it can provide energy services at least cost. R&D should be driven by cost-effectiveness, not resource bases.

If it’s possible to stop mandating and subsidizing sprawl, or otherwise to advance the smart-growth agenda, that too would bear huge longer-term dividends by reducing vehicle-miles travelled,

While electricity reforms can save almost no oil, they are extremely important to creating a resilient national energy system—including the ability to get power to filling stations so customers can pump gas! [Pumpheads are electric now; they used to have a manual hand-crank socket. So when Florida had a prolonged power outage, surface transportation stopped too.]

Senator FEINSTEIN. The Bush administration found that 99% of flexible-fuel vehicles on the road today never use a drop of E-85 ethanol. As a result, the administration found that this loophole actually increases America’s oil dependence by 14 to 17 billion gallons of gasoline per year. As I understand it, Ford uses its fuel economy credits for these flex-fuel vehicles to lower fuel economy standards for the rest of its automobiles, so we are not really doing much to increase vehicle economy. What would you suggest we do to really increase fuel economy? I had a bill just to bring SUVs, over 10 years, up to the fuel economy of sedans—the fleet number, as you said, is 27 miles per gallon, as opposed to 20 miles per gallon for SUVs. And it went down because there is really no support for that. Detroit opposes it very strongly. What do we do that Detroit could support to really rapidly increase fuel economy standards?

Ms. CISCHKE. We have to be very sensitive to what the consumers want to buy. Right now in the auto industry, over 30 vehicles get better than 30 miles per gallon in fuel economy, yet they account for less than 5% of our sales. So we have a challenge in terms of putting vehicles out there that nobody wants to buy. And that is a real problem for all the auto companies.

When you mentioned the E-85 usage, this is kind of a chicken-and-egg situation. We need the fuel in order for the vehicles to run on E-85, but the fuel is not going to be there unless there is enough volume of vehicles. We have to respond to what our consumers are demanding, and we have got to find a way to make them want to buy more fuel-efficient vehicles.


Mr. VERRASTRO. Flexible-fuel vehicles run on about 10 to 15% ethanol, not 85%. E-85 is a totally different bird. There are evaporative emissions issues in terms of the environment. There are also massive transportation and distribution issues. You cannot put it in a pipeline. In our country, the greatest demand for fuels is on the coasts. If you grow corn or use cellulosic ethanol and then transport it to the coast and you cannot put it in pipelines, you have to find a different distribution system. Clearly in Europe, the oil companies have taken to incorporating biodiesel, biomass, and other fuels at their retail stations. It is the cost of a tank and a pump. But this transition to move to E-85—I am not sure that that is the answer. Brazil, as Jim Woolsey just said, is kind of the poster child for ethanol. And over the weekend, they reduced the ethanol content of their fuel from 25 percent to 20 percent because they cannot produce enough of it. So to think that we are going to grow our way, crop-wise, into an energy solution I think is far-fetched.


STATEMENT OF THE AMERICAN PETROLEUM INSTITUTE. API is a national trade association representing more than 400 companies involved in all aspects of the oil and natural gas industry, including exploration and production, refining, marketing and transportation, as well as the service companies that support our industry.

We live in an energy interdependent world, and complete energy independence is probably unachievable and certainly undesirable.

We can no longer afford to place off limits vast areas of the Eastern Gulf of Mexico, off the Atlantic and Pacific coasts, and offshore Alaska. Similarly, we cannot afford to deny American consumers the benefits that will come from opening the Arctic National Wildlife Refuge and from improving and expediting approval processes for developing the substantial resources on federal, multi-use lands in the West. In fact, we do have an abundance of competitive domestic oil and gas resources in the U.S. According to the latest published estimates, there are more than 131 billion barrels of oil and more than 1000 TCF of natural gas remaining to be discovered in the United States.

Much of these oil and gas resources—78% of the remaining to-be-discovered oil and 62% of the gas—are expected to be found beneath federal lands and coastal waters. Natural gas fuels our economy—not only heating and cooling homes and businesses but also generating electricity. It is used by a wide array of industries—fertilizer and agriculture; food packaging; pulp and paper; rubber; cement; glass; aluminum, iron and steel; and chemicals and plastics. And natural gas is an essential feedstock for many of the products used in our daily lives—clothing, carpets, sports equipment, pharmaceuticals and medical equipment, computers, and auto parts.

Unlike oil, natural gas imports in the form of liquefied natural gas (LNG) are limited by the lack of import terminals. There are only 5 operating in the United States. A number of additional terminals have been proposed but many have run into not-in-my-backyard opponents and complex permitting requirements.

There is a misperception by some about the time and costs involved in any transition to the next generation of fuels. Consider what would be involved in replacing the dominant role of oil with a substitute like hydrogen or solar power. Most experts agree that such a transition would require dramatic advances in technology and massive capital investments—and take several decades to accomplish, if at all.

Based on various studies, the energy savings from corn-based ethanol are moderate—3 to 20%—because production from corn requires significant energy input. And, judging from this past year, ethanol is higher-priced than gasoline and, measured on a BTU basis, considerably more expensive. In addition, some have estimated that the total amount of ethanol that could be produced by converting the entire 2005 U.S. corn crop into ethanol would be about 31.1 billion gallons—an amount equal to just 22.2 percent of U.S. gasoline consumption last year.
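The corn-crop arithmetic above can be checked quickly. The 31.1-billion-gallon and 22.2% figures are from the statement; the per-gallon heating values used for the BTU-basis comparison are typical published approximations, not numbers from the hearing.

```python
# Cross-check the ethanol arithmetic in the API statement.
# 31.1 billion gallons = 22.2% of 2005 U.S. gasoline use implies:
ethanol_gal = 31.1e9
volume_share = 0.222
implied_gasoline_gal = ethanol_gal / volume_share  # ~140 billion gallons

# Energy (BTU) basis -- heating values are typical published
# approximations, not figures from the hearing:
BTU_ETHANOL = 76_000    # per gallon, approx.
BTU_GASOLINE = 115_000  # per gallon, approx.
energy_share = (ethanol_gal * BTU_ETHANOL) / (implied_gasoline_gal * BTU_GASOLINE)

print(f"implied 2005 gasoline consumption: {implied_gasoline_gal / 1e9:.0f} billion gallons")
print(f"energy-basis share: {energy_share:.1%}")  # ~14.7%, vs 22.2% by volume
```

On an energy basis, the 22.2% volume share shrinks to roughly 15%, which is why the statement stresses that ethanol is considerably more expensive when measured per BTU.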

We hope that people will better understand that, in today’s global energy marketplace, U.S. ‘‘energy independence’’ is impossible.

We hope they come to see that, instead, ‘‘energy interdependence’’ is essential. We hope consumers will come to recognize that their interests are best served when we can source fuels from multiple providers located both in the U.S. and throughout the world. Sourcing flexibility is one of our most powerful energy security tools. We also want others to understand that we can operate only where governments permit us to do so.


AMORY LOVINS. We are particularly concerned that FERC is making America’s power system more prone to regional blackouts by continuing to push larger, longer bulk power flows through more and bigger transmission lines, rather than allowing or, preferably, requiring fair competition (whether market or administrative) by demand-side and distributed options so as to achieve a least-cost system solution.

FERC is the last bastion of central planning in the Federal Government, and last year gained new authority to site supply-side resources, or override state and local objections to them, without having to consider cheaper alternatives, ranging from end-use efficiency and demand response to micropower. This will probably result in further construction of vulnerable, terrorist-magnet, and uneconomic LNG terminals, with potentially catastrophic consequences for nearby communities and increased financial risks for investors.

Another desirable focus for FERC’s attention would be ensuring that as utilities automate distribution systems, their topology should be made bidirectional, so that distribution shifts from a tree structure (distributing centrally generated electrons to dispersed customers) to a web structure (gracefully handling power flows any which way). This is largely a State regulatory matter, but Federal standards would probably help, and State attention to this issue could be encouraged in many ways.

Still another area for FERC reform would remove the transmission roadblock facing wind developers, especially in and near the Dakotas. In essence, the incumbent lignite operators in that region aren’t allowing fair transmission access, and FERC has not yet intervened to promote it, so a cheap, climate-safe, domestic resource exceeding 300 GWe just on tribal lands in the Dakotas remains virtually unexploited. Broadly, I think State Commissions should follow Texas’s example (under then-PUCT Chairman Pat Wood’s and Governor Bush’s leadership) of allowing distributed generators to ‘‘plug and play’’ freely: if the inverter meets IEEE 1547, UL, and local building code requirements, no other approval or procedure should be required. Federal policy should encourage this outcome uniformly, and should encourage State Commissions to remove artificial constraints as to feed-in generators’ unit size, the symmetry of time-of-use (TOU) vs. flat-rate payments vs. charges, and other accounting arrangements to ensure a level playing-field for distributed resources. Federal policy should give no preference to big over small or to supply-side over demand-side resources; all should compete fairly as a central principle of Federal energy policy.

Hybrid and fuel-cell cars are worthy, and plug-in hybrids may be, but they’d all work better and cost less if combined with an apparently missing element: advanced materials that eliminate half the car’s weight and fuel use, improve its safety, and don’t raise its production cost.

I hope the Congress will note that much of the recent troubles at NREL—not a place one should be trying to divert or demoralize during an energy crisis—arose from ~15% of its budget’s being, in effect, hijacked by Congressional earmarks. If NREL is to do its job and retain its excellent people, such raids must cease.

I’m gratified by the Pentagon’s increasing focus on radically reducing fuel-logistics footprint in theater: if seriously implemented, this could create the industrial base that can lead the civilian vehicle industries off oil, just as DoD research transformed the civilian economy by inventing for military purposes the Internet, GPS, and the jet-engine and chipmaking industries—all foundations of America’s and especially California’s economy.

It’s vital that in all countries, biofuels be done in an environmentally and socially sustainable way—unlike some recent destruction of tropical forests to make way for palm-oil plantations to produce biodiesel. Even more important is to share and greatly accelerate developing countries’ adoption of advanced end-use efficiency in all sectors.

The most comprehensive threat to national energy security today is national energy policy. This Committee should reexamine its approach, and stop energy policy from undercutting DoD’s mission.

Roughly 4-8% of U.S. gasoline or 2-4% of crude oil could be quickly saved by:

  • reducing speed limits for all non-Class 8 vehicles to 60 mph in zones now above this limit under Federal (and if possible State) jurisdiction
  • changing EPA rules so that HOV lanes and preferential parking now available only to Alternative Fuel Vehicles are also available to hybrid and all-electric vehicles (EPA’s inaction on this is frustrating many States that wish to make this change)
  • giving so-called double-tax-credit to State and local nonprofit vehicle buyers such as public safety agencies for adopting high-efficiency hybrids
  • authorizing all citizens to deduct mass transit costs on IRS Schedule A
  • providing for universal approval of ‘‘parking cash-out’’ (as long practiced in Southern California) and perhaps requiring it for large employers
  • for a few years, extending the Federal tax credit for AFVs, hybrids, and all-electric vehicles to far more than the current 60,000 per manufacturer
  • eliminating continuing loopholes in CAFE rules
  • clarifying that NHTSA does have authority to extend to cars its 23 August 2005 proposed decision to base future light-truck CAFE rules on size, not weight

Roughly 12-18% of diesel fuel could be rapidly saved by heavy-truck reforms proposed in Winning the Oil Endgame and in our memo for Senator

Roughly 4-6% of gasoline and diesel fuel could be promptly saved by:
  • immediately switching all Federal civilian (and non-tactical military) road vehicle procurement to the top 5%, or at worst 10%, of efficiency in their subclass
  • saving ~3% through proper tire inflation, including rental and commercial fleets as well as individual owners
  • exerting Federal pressure to improve traffic-light timing on major urban streets and to speed adoption of electronic tolling (with careful controls to protect personal privacy) and of ‘‘urban box’’ congestion charges
  • encouraging proper engine tuning and air-filter replacement, as well as EPA’s other gas mileage tips
  • having NHTSA clarify that manufacturers and sellers of hybrid cars are allowed to advise buyers how to drive them for optimal efficiency (thus reversing the false impression, spread chiefly by Consumer Reports, that hybrids are much less efficient than they actually are when properly driven)
  • DoD initiatives to make military-platform (and -facility) energy efficiency a high priority—in doctrine, requirements-writing, acquisition, design pedagogy and practice, operations, and reward systems—should be strongly encouraged.

Targeted military science and technology investments in ultralight materials and their low-cost manufacturing could create the advanced-materials industrial cluster that is the most important single manufacturing innovation for getting off oil.

We would also like to see greater investment in improved road traffic management infrastructure in order to reduce congestion and save fuel.

The integrated approach aims at producing clear and quantifiable reductions in CO2 through a range of options (e.g. vehicle technology, alternative fuels, taxation, eco-driving, gear shift indicators, consumer information and labeling, consumer behavior and congestion avoidance).

Hydrogen fuel cell vehicles are seen by Ford and the industry as a long-term alternative transportation solution. They are clean and efficient, with zero tailpipe emissions, and use a renewable fuel source. Although FCVs are in development today, much work remains to meet the functionality, durability, and affordability demands of automotive consumers.

Automobile fuel economy has been mandated via the CAFE program for about 30 years. Most industry and government experts agree that the program has not been an effective way to reduce petroleum consumption, and that it has had dramatic competitive and economic impacts. For one thing, it takes a long time for the vehicle fleet to turn over. New CAFE standards take time to implement, and their effects take even more time to make their way through the vehicle fleet. Another problem is that higher fuel economy simply makes it cheaper for people to drive more. Vehicle miles traveled have increased substantially over the life of the CAFE program and tend to overwhelm improvements in fuel economy. Addressing our dependence on foreign oil must include taking steps to reduce vehicle miles traveled. We support

Automakers are already producing more than 100 models that achieve 30 mpg or more on the highway; however, the consumer demand for these vehicle models is low.

Coal gasification, followed by synthesis to liquids that are suitable for transportation fuels, is a known technology. These are large plants with substantial investment, and their long-term commercial operation must be certain. A related technology, recovery of remote natural gas with synthesis to liquid fuels (Gas-to-Liquids, GTL), is now considered economical in select cases, and several large GTL plants are now planned for Qatar, with diesel fuel to be supplied to Europe, where diesel demand now exceeds supply. Gasification of coal (Coal-to-Liquids, CTL) adds a substantial processing step compared with natural gas as the resource. So the overall efficiency of CTL will be less than GTL, with a corresponding increase in CO2 as a byproduct. The CTL path will be an issue for total CO2 emissions unless carbon capture and sequestration is implemented with the CTL plant. Carbon capture and sequestration trial projects are proceeding with good success.


At the end of this year, Ford will have already put nearly 2,000,000 Flexible Fuel Vehicles on the nation’s roads. However, applying technologies too broadly, too fast, and too soon (even those already on other vehicle lines in the fleet) can result in poor performance and ultimately customer rejection of promising technologies. Ford’s typical engineering practices require that new technologies be phased into production over several years such that there is a cycle of manufacturing and customer service experience in the field. In the case of E85 FFVs, this experience has been limited due to the lack of fuel availability. Moreover, because ethanol is a unique fuel with unique properties, these vehicles require unique hardware and engineering. For example, fuel tanks with low permeation characteristics are required. It also requires a special fuel pump and fuel lines to deliver the fuel to the engine. Unique injectors introduce the fuel into the engine where special calibrations programmed into the on-board computer determine how much ethanol is in the fuel and how best to set spark timing and fuel flow to ensure the engine operates properly and meets emission standards on all ethanol and gasoline mixtures. Because there is more than one fuel calibration within an FFV, costly development and certification testing is doubled. Many of the FFV parts and processes are patented by Ford and are the result of innovative ideas by our best engineers, and we’re proud of them. The bottom line . . . making an FFV is a significant investment for auto manufacturers.



We’ve been saying for decades that we need to decrease our dependence on foreign supplies of energy. The first major calls for action followed the oil embargo of 1973. In that year, we imported approximately 28% of the oil we consumed. A restriction of supply by a group of hostile nations caused prices to increase by an average of 40% during that embargo and introduced a new weapon in global conflict. In 2005, we imported roughly 59% of the oil we consumed. This trend of increased dependence is a troublesome one.

Wyoming produces roughly 10% of the nation’s primary energy, with far less than 1% of the nation’s people. We have oil, natural gas, uranium, and wind resources to name a few.

We also have coal—a resource with enormous potential for increasing our energy independence. Coal is economical and abundant. It constitutes roughly half of the electricity generated in the United States. Advancement of coal gasification technologies, carbon sequestration, and improved mining techniques reduce many of the environmental concerns that people have had in the past. And greater use of cheaper Western coal makes this fuel a much more attractive choice going forward. We have coal here in the United States and we need to use it. We continue to develop wind, we have hydroelectric dams, and we will hopefully see the construction of new nuclear plants in the near future.

We consume roughly two thirds of the oil we use in the transportation sector. Because of its large share of consumption, policy changes affecting the transportation sector can have a significant impact on reducing foreign dependence. Increased mileage standards, elimination of boutique fuels, lowered speed limits, and greater use of alternative fuels are just a few of the many ideas that have been advanced to decrease the transportation sector’s consumption of oil. I contend that coal can make a difference in the transportation sector as well. Wyoming recently announced plans to construct a coal-to-liquids plant. The National Mining Association believes that continued use of this technology could replace as much as 2 million barrels per day of oil and 5 trillion cubic feet of natural gas per year by 2025.

I believe that the bill introduced by the Chairman and Ranking Member for lease sales in the Gulf of Mexico’s Area 181 is exactly the sort of thing we need in the short term.



It is time we stopped treating foreign oil dependence as another abstract statistic whose consequence is far removed from Americans’ daily lives. The United States is going to have to face the reality that we must break our foreign energy dependence or risk losing our autonomy. Our nation’s energy dependence is undeniably one of the greatest threats to our national security and our freedom.

By 2025 it is estimated that nearly 75% of America’s oil supply will be imported. Also consider that two-thirds of the world’s proven oil reserves are in the Middle East and that terrorists have identified oil as a strategic vulnerability—increasing attacks against oil infrastructure worldwide. One can just imagine what would happen if OPEC, which currently accounts for well over 50% of our oil supplies, shut off the oil spigot. Beyond the national security implications, oil dependence also carries serious economic consequences. The total economic penalty of our oil dependence, including loss of jobs, output, and tax revenue, is estimated to exceed $300 billion annually.

One facet of this plan to reach 2.5 million barrels per day of oil savings is to promote the development and use of advanced and alternative fuel efficient vehicles. Key pieces include tax credit incentives for advanced technology motor vehicles, expansion of the consumer tax credits for advanced vehicles, loan guarantees and grants for hybrid vehicle projects, and a new federal commitment to hybrid vehicle technologies and materials. The national fuel savings generated by this bill will be immense, but if we want to free ourselves from foreign oil dependence, we must produce more fuel here at home.

I believe we need a national energy policy that increases availability of flex fuel vehicles, invests heavily in E-85 infrastructure, includes a sugar-to-ethanol program, and sets a national mandate for ethanol that matches our energy independence ambitions.



I have been a longtime supporter of ethanol and biodiesel. I know that I would rather get fuel from farmers in Missouri and across the country than import it from foreign countries. I believe that the greatest provision of the energy bill was the Renewable Fuels Standard, which mandated the use of ethanol in our nation’s fuel supply. The mandated amount of biofuels to be mixed with gasoline sold in the United States increases annually, up to 7.5 billion gallons by 2012. Since the passage of the bill, 34 new ethanol plants are under construction, with 8 existing U.S. plants being expanded. And there are more than 150 new plants in the planning stages. This construction and investment in farming will create thousands of new jobs while making us less reliant on foreign sources of oil.

While hydrogen vehicles are exciting—they are a long way off.


PETE V. DOMENICI, NEW MEXICO. It is clear that the United States needs to reduce our dependence on foreign sources of energy. We particularly need to reduce our reliance on oil from unstable regions of the world whose values and priorities are often in conflict with America’s initiatives and place in the world. Last year, U.S. net imports equaled 59% of our demand, with 41% of our total imports coming from OPEC countries—27% of total U.S. consumption.

Dependence to this extent can determine our national security, our economic strength, and our foreign policy. In order to make necessary changes, we have to be realistic about what is possible in the near term, but certainly we have to look with real energy and enthusiasm toward the long term. Making energy self-sufficiency the immediate goal would deny the reality of this situation and only invite discouragement and failure. This would be akin to putting all of our resources in the hope of finding an elusive cure for a disease at the expense of taking important steps to treat and alleviate the symptoms in the interim. To that end, I have said on a number of occasions that while I support the advancement of science and technology to reduce our dependence on foreign energy sources, I think we must also build a bridge to that age by accessing the oil and gas resources available in our country, and we must reasonably and responsibly conserve our energy.

For example, I believe we should have acted on ANWR a long time ago. The majority of the Senate believes that ANWR brings us closer to achieving energy security, and I would venture to say that not a single member of this body believes that continuing to block ANWR strengthens our energy security. Blocking progress is not a substitute for substantive policy.

In my first year in the Senate, President Nixon set a goal of energy self-sufficiency by 1980. I do not know if any of you remember that. Since that time, successive administrations, scores of members of Congress from both parties, including me, have set similar goals. I believe that energy self-sufficiency is attainable, but I do not believe it is in the short term. Nonetheless, we must pursue it as a goal in my opinion vigorously.


ROBERT MENENDEZ, NEW JERSEY. I was not at all pleased to see the budget that came out less than a week later. A budget that did not take the serious steps towards the new technologies that we need to end that addiction. A budget that shortchanges vital energy efficiency efforts such as the weatherization program that helps reduce energy costs for our low-income families and seniors. A budget that cuts funding for some promising forms of renewable energy, cuts funding for research into vehicle technologies, and even cuts funding for a program designed to make the federal government more energy efficient. Quite simply, the president has failed to match his rhetoric with real action. OCS. Even more disheartening are the continuing efforts of the administration to dig and drill their way out of dependence on foreign oil. Shortly after the budget was released, the Interior Department’s Minerals Management Service unveiled its new proposed 5-year plan for the outer continental shelf, which included a plan to begin drilling off the Virginia coast. This is flatly unacceptable for my own state of New Jersey, because the ocean knows no borders, and an environmental catastrophe off the coast of Virginia would not stay confined to the waters of Virginia. The area to be leased is less than 75 miles off the southern tip of New Jersey, more than close enough to put our beaches and vital tourism industry at serious risk. The plan also shows that instead of seriously confronting our addiction, the administration would rather simply tap another vein.

CAFE standards. As many of our witnesses have said in the past, and will be expressing again today, the most effective way to confront our energy problems is through efficiency. We have made excellent strides in the past few decades to make our country more energy efficient, and one of the keys to that success has been Corporate Average Fuel Economy, or CAFE, standards. According to statistics compiled by the Rocky Mountain Institute, between 1977 and 1985 our oil use went down 17% and our oil imports went down 50%, and the biggest factor in that drop was the 7.6 mile-per-gallon improvement in new domestic cars over that time. But in the 20 years since then, our overall vehicle fleet has actually become less efficient. The CAFE standard for passenger cars has been stagnant for the past two decades, and the standard for light trucks is barely 1 mile-per-gallon higher than it was in 1987. Increasing fuel economy standards should be part of the energy independence solution and part of our national energy policy.

Another federal efficiency program that is part of the solution is Weatherization, which provides grants to states to allow them to make the homes of low-income families and seniors more energy efficient. This has a two-fold benefit. First, it lowers energy costs, which makes it easier for people to pay their heating or cooling bills, and reduces the amount of money that we need to spend on essential assistance programs like LIHEAP. Second, it reduces our overall energy needs. According to the Oak Ridge National Laboratory, every $1 invested in the weatherization program returns $3.81 in energy and non-energy benefits, and because of the program the country saves the equivalent of 15 million barrels of oil each year. And yet, despite this track record of success, the administration has proposed cutting the program by 33%, denying over 30,000 families—families that are on the lowest rung of the economic ladder and most desperately need help—the ability to get their homes weatherized.

We also need to shift from fossil fuels to renewable sources of energy. My own state of New Jersey has become a national leader in this field, recently enacting new incentives for the use of solar, wind, and other renewable energies, and moving towards enacting a robust renewable portfolio standard—20% by 2020. The state has put its money where its mouth is, giving over $43 million of incentives for new solar power installations over the past five years.



Senator THOMAS. I think we have a real opportunity to convert coal, which is our largest fossil resource, to diesel fuel, for example. We can do that very shortly. What do we do in the next 4 or 5 years?

Mr. WOOLSEY. Well, Senator, cellulosic ethanol is now coming on the market—Iogen in Canada, backed by Shell Oil; diesel from waste products such as turkey carcasses from a ConAgra slaughterhouse——

Senator THOMAS. Tell me about the volume of that, however. Oil from turkey carcasses obviously is not going to amount to much of anything.


LISA MURKOWSKI, ALASKA. For years we’ve heard that energy independence is a pure pipe dream given that America—not counting ANWR—has just over 20 billion barrels of proven conventional oil reserves (1.6% of known world reserves), while the Middle East has 57% of the world’s known supply of conventional oil and nearly as much gas. But with rises in both oil and natural gas prices because of the exhaustion of much of the cheap ‘‘conventional’’ oil and gas, because of sharp increases in demand for energy from developing nations, and because of environmental fears, we may well be moving into a period when unconventional fuels and new technology, including alternative fuels, can increase our domestic energy production and, dare we say, permit energy ‘‘independence.’’ The Pentagon last year began seriously funding research efforts to promote bio and synthetic fuel development to meet military needs. The Energy Policy Act of last summer provided research funding, tax incentives, and policy changes to spur biofuels like ethanol and hybrid-vehicle sales to cut consumption, and to increase oil and gas recovery from heavy-oil deposits and through the use of carbon dioxide to produce more fuel from aging fields.

JIM BUNNING, KENTUCKY. I think that with energy prices at these highs, we can see clearly that our national security is threatened by our continued reliance on imported oil. I think one of our top priorities should be on our most abundant domestic fossil fuel: coal. New technologies will make burning coal both cleaner and more efficient. We are even developing coal-to-liquid technology that can create a synthetic transportation fuel from coal. American coal reserves will be our best tool to overcome our reliance on Middle East oil. We also have other domestic energy reserves, like ANWR and the Outer Continental Shelf. I believe we can tap these oil and natural gas reserves in an environmentally sound way. I also think we need to develop our renewable fuels, especially stimulating biodiesel and ethanol production. Many of you have focused on biodiesel and transportation fuels, but coal is our most abundant domestic fossil fuel and accounts for half of our electric generation. I believe we can lessen our dependence on imports by using clean coal power and nuclear energy to replace the imported natural gas and oil that currently goes to producing electricity.

From “In the Media” at shalebubble.org





NAS 2015 report on how to fix our falling-apart inland waterway system

[After reading two congressional hearings, one in 2008, and another in 2013, about how the inland waterway system was falling apart, and had been for 30 years, I was curious to know why such an important asset would be let go to waste. In the testimony, it was said that more money was collected in fees by the government than doled back out in capital and maintenance expenses (true from 1991 to 2006 it appears from Figure 3-2, but not true since then).

At the 2013 hearing witnesses said that the Army Corps of Engineers has a set budget, so if the money put into the Inland Waterway Trust Fund was actually given to ports and rivers, other inland projects would not be funded.  This is probably why the waterway system was given less money back for over 15 years than fees paid into it.

As the study below states: “The selection of waterways projects for authorization has a long history of being driven largely by political and local concerns”.

This report shows what an irrational, byzantine mess the approval and funding process is; only overviews are extracted below, so read the 157-page report yourself for the details and to see how the waterways projects might be improved (ranking by need, how to assess what needs to be done, getting more money to O&M, etc.).

National energy policy is not based on energy efficiency. There were no CAFE standards for decades. While the rest of the world strove to make and use fuel-efficient cars, rail, trucks, and ships, in America we allowed massive, polluting, gas-guzzling vehicles that pummeled the hell out of our bridge and road infrastructure, wasting decades of oil that future generations will wish still existed when the next, permanent oil crisis arrives.

It’s tempting to blame local, state, and federal politicians for treating us like babies and not explaining the energy crisis, or to blame car makers, but the truth is that energy-efficient cars have existed for a long time; American consumers chose to buy wasteful SUVs and light trucks.

This document chides government policy as well, but in bureaucratese: “A national freight system perspective on the efficiency of the nation’s freight network is generally lacking, and no mechanism exists for prioritizing spending across modes”.

Now that we’re at peak oil, a lot more attention and funding ought to go to the waterway system.

Alice Friedemann, www.energyskeptic.com]

NAS. 2015. TRB special report 315: funding and managing the U.S. inland waterways system: what policy makers need to know. Transportation research board, National Academy of Sciences. 157 pages.

Inland waterway system stats:

  • The inland waterways system moves 6 to 7 percent of all domestic cargo in terms of total ton-miles, mostly coal, petroleum and petroleum products, food and farm products, chemicals and related products, and crude materials.
  • Inland waterways include more than 36,000 miles of commercially navigable channels and roughly 240 working lock sites.
  • Barges mostly carry energy: coal, crude petroleum, petroleum products, and natural gas-based fertilizers

2013 Commodities carried by USACE at http://www.navigationdatacenter.us/wcsc/pdf/pdrgcm13.pdf

  • Tons (millions)     Commodity
  • 312.3     Coal                      
  • 418.9     Crude petroleum
  • 508.6     Petroleum products
  • 39.9       Chemical fertilizer
  • 140.6      Chemicals excluding fertilizers
  • 53           Lumber, logs, wood chips, pulp
  • 163.5      Sand, gravel, shells, clay, salt, and slag
  • 85.4        Iron ore, iron, and steel waste and scrap
  • 29.5        Non-ferrous ores and scrap
  • 45           Primary non-metal products
  • 72           Primary metal products
  • 270         Food and food products
  • 121         Manufactured goods
  • 62.3        Unknown and not elsewhere classified products
  • 2,275      TOTAL

The inland waterways system provides for the domestic barge shipping component of the nation’s freight transportation system. The system infrastructure is managed by the U.S. Army Corps of Engineers (USACE) and funded through the USACE inland navigation budget. The United States established and funded the federal inland waterways system early in the nation’s history to promote commercial shipping and the U.S. economy. Commercial shipping continues to drive federal economic interest in the system. The Executive Committee of the Transportation Research Board (TRB) initiated this consensus study of the inland waterways system because of reports of deteriorating and aged infrastructure combined with inadequate capital investment, a growing backlog of capital needs, and declining federal funding for inland navigation.

The primary concern of this report is funding for lock and dam infrastructure on rivers or river systems. Locks and dams are the main mechanism for enabling cargo movements and the most expensive component in maintaining the inland waterways for barge transportation, although other activities such as dredging are necessary and can be costly. The Great Lakes and the Saint Lawrence River are part of the larger inland marine transportation system but not a focus of this report because of the small number of locks and dams they contain.

Beyond the Scope. Issues related to ports and harbors are beyond the scope. USACE is responsible for deep draft harbor dredging to ensure that harbor channels can accommodate flows of freight carried on large vessels for international commerce. However, ports and harbors are managed and funded differently from the inland waterways and are not a focus of this report. Panama Canal expansion also is not addressed in this report except to the extent that it relates to arguments for the building of larger locks on parts of the inland waterways system. Broader water resource management and funding challenges and opportunities for the nation are beyond the scope of this report. USACE has three primary mission areas: navigation for freight transportation, flood control and damage reduction, and ecosystem restoration. Other activities performed by USACE include safety and disaster relief, hurricane and storm damage reduction, water supply, hydroelectric power generation, and waterborne recreation. This report focuses on funding for the inland waterways system with regard to the freight transportation mission;

The main cost in providing for barge service is maintaining locks and other infrastructure that enables cargo movements. While many locks are more than 50 years old, age is not a useful indicator of their condition. Many locks have been rehabilitated, and lock performance correlates poorly with age. The large backlog of capital projects also is not a reliable indicator of funding required for maintaining reliable freight service. The navigation share of these projects is modest, maintenance costs are not included in the backlog, and Congress has authorized more projects than can be funded.

The most critical need for the inland waterways system is a sustainable and well-executed plan for maintaining system reliability and performance that ensures efficient use of limited navigation resources. Time lost due to delays at locks and locks out of commission for repairs is a cost to shippers and an important consideration in deciding on future investments to maintain reliable freight service. System-wide, about 20% of time lost in transportation is caused by scheduled and unscheduled outages. A more targeted operations and maintenance (O&M) budget would prioritize facilities that are most in need of maintenance and for which the economic cost of disruption would be highest.

In contrast to the need to focus on system reliability, much of the policy discussion about the inland waterways system centers on the user charges to support the Inland Waterways Trust Fund, which is dedicated to capital improvement projects.

The passage of an increase in the barge fuel tax by the 113th Congress only heightens the urgency of settling on a plan for maintenance, since under federal law any new revenues from the barge fuel tax can be used only for construction and not for O&M, for which the federal government pays the full cost. Because funds for capital projects raised by the barge fuel tax must be matched by the federal government, O&M competes directly with construction for federal general revenues. O&M now accounts for about 75% of the requested inland navigation budget (roughly $650 million annually). Without a new funding strategy that prioritizes O&M and repairs, repairs may continue to be deferred until reaching $20 million (the point at which they become classified as a capital expenditure), which would result in further deterioration and in an inefficient and less reliable system.
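The budget split described above can be sanity-checked with back-of-envelope arithmetic (a sketch; the figures are the report's approximations, not exact appropriations):

```python
# Rough check on the inland navigation budget split described above.
# Figures are the report's approximations, not actual appropriations.

om_share = 0.75       # O&M share of the requested inland navigation budget
om_budget = 650e6     # roughly $650 million annually for O&M

total_budget = om_budget / om_share        # implied total budget request
construction = total_budget - om_budget    # remainder, for capital projects

print(f"Implied total request: ${total_budget / 1e6:.0f}M")  # ~$867M
print(f"Construction share:    ${construction / 1e6:.0f}M")  # ~$217M

# Under current law, barge fuel tax revenues fund only construction and are
# matched by federal general revenues, which also pay the full cost of O&M.
# Assuming a 50/50 construction match, general revenues cover:
federal_general = om_budget + construction / 2
print(f"General revenues:      ${federal_general / 1e6:.0f}M")  # ~$758M
```

This makes the tension concrete: roughly three-quarters of the requested budget is O&M, yet the dedicated trust-fund revenue stream can legally flow only to the remaining quarter.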

More reliance on a “user-pays” funding strategy for the commercial navigation system is feasible, would generate new revenues for maintenance, and would promote economic efficiency. In a climate of constrained federal funds and with O&M becoming a greater part of the inland navigation budget, it is reasonable to examine whether beneficiaries could help pay for the system to increase revenues for the system and improve economic efficiency. Indeed, Congress, in the 2014 Water Resources Reform and Development Act (Section 2004, Inland Waterways Revenue Studies), called for a study of whether and how the various beneficiaries of the waterways might be charged. A reconceived system of user charges would focus policy attention on a sustainable plan for system performance and efficiency. Since users are not responsible for the cost of O&M, strong incentives exist to overcapitalize the system. Dedicating revenues from users to O&M instead of only capital expenditures would focus maintenance spending on the assets that users most value and result in a system that is more cost-effective and efficient.

Commercial navigation is the primary beneficiary of the inland waterways, and commercial carriers impose significant marginal costs on the system. Charging commercial navigation beneficiaries for the costs associated with their use of the system is feasible. User charges may be restructured in a variety of ways. There is no single best option; the preferred choice for achieving a policy goal may be to combine one or more of the options, such as an increase in the barge fuel tax with user fees. Charging user fees on the basis of facility and segment usage would identify the parts of the waterways most valued by shippers and warranting maintenance. Multiple criteria would apply in choosing among the user charge options: ease of administration, revenue potential, distribution of burden across user groups, and design components that would reinforce the efficient use of resources and cost-effective expenditures. A trust fund for maintenance would ensure that all new funds collected are dedicated to inland navigation while providing greater latitude for USACE to disburse funds for maintaining the system according to criteria approved by Congress and with the involvement of the Inland Waterways Users Board, whose current advisory role is limited to capital spending.

Asset management can help prioritize maintenance and ascertain the level of funding required for the system. A standard process for assessing the ability of the inland waterways system to meet demand for commercial navigation service and for prioritizing spending for maintenance and repairs is lacking. For reasons explained in this report, the capital projects backlog and age of inland waterways infrastructure are not reliable indicators of the needs of the system or the amount of investment required. Regardless of who pays for the system, a program of economically efficient asset management (EEAM), fully implemented and linked to the budgeting process, would prioritize maintenance spending and ascertain the funding levels required for reliable freight service.


The inland waterways navigation system is part of the U.S. marine transportation system (MTS), which provides for both passenger transport and domestic freight transportation infrastructure and coastal gateways for global trade (TRB 2004). The MTS includes navigable waterways and public and private ports on three coasts (Atlantic, Pacific, and Gulf) and the Great Lakes as well as a network of inland waterways (CMTS 2008). It includes, by extension, inland highway and rail connections between ports and inland markets that ensure access to the water for shippers and customers in all 50 states (AASHTO 2013; CMTS 2008). The inland and intra-coastal waterways directly serve 41 states (Clark et al. 2012). The inland waterways system comprises navigable rivers linked by a series of major canals. Lock and dam infrastructure is the chief mechanism in enabling the upstream and downstream movement of cargo, and its installation is the most expensive component in providing for navigation service (McCartney et al. 1998).

Waterways are categorized as deep draft, shallow draft, both (allowing both shallow and deep draft vessels), or non-navigable; the inland and intra-coastal waterways also serve as access routes for deep draft vessels. Counting those routes, the committee tallies 41 states served (43 including the District of Columbia and Puerto Rico). Some are coastal states (e.g., California, Delaware, New Jersey, Maryland) with minor inland or intra-coastal waterways outside of the committee’s charter. Just 12 states, ranked by ton-miles, account for 80% of ton-miles and 74% of tons moved by inland waterway.

Because of shallow drafts and seasonal changes in navigable depths, fixed infrastructure is required in many parts of the river system to maintain open navigation for commerce.

Most of the navigable channels are rivers located in the central and eastern half of the country. The largest river system is the Mississippi, which is navigable for about 1,800 miles from New Orleans, Louisiana, to Minneapolis, Minnesota, and has a large tributary system. In the western part of the country the largest inland waterway is the Columbia–Snake River system.

Water transportation contributes nearly $115 billion in value added to U.S. GDP, compared with nearly $120 billion from truck transportation, more than $60 billion from air transportation, more than $30 billion from rail transportation, and $15 billion from pipeline transportation.

Upper Mississippi River

The Upper Mississippi River flows south from Minneapolis, Minnesota, 858 miles to the mouth of the Ohio River at Cairo, Illinois. The navigation channel above Saint Louis, Missouri, is maintained at a minimum depth of 9 feet by a system of 27 locks and dams. Agriculture-related products dominate the commodity flows on this river. Farm products, primarily grain bound for export through the Gulf Coast deepwater ports, account for 32 percent of the tonnage. The Upper Mississippi also is the top regional source for corn and soybean exports. The second-ranked commodity is coal, which accounts for 22% of the tonnage. Much of the chemical tonnage (10 percent of the total) consists of fertilizers shipped upbound back to the farm belt. The dominant flows on the Upper Mississippi illustrate the modal competition and cooperation aspects of much waterborne commerce. For example, much of the grain is shipped by truck or rail to waterside grain elevators for transloading to barges, which then transload again to deepwater vessels in southern Louisiana for export to world grain markets. Trains also bring grain to the Gulf Coast, so for some farms there is at times a genuine modal choice between rail and water transport. However, grain transactions turn on margins as low as cents per bushel, so most shippers are essentially heavily dependent on one mode or the other. During the height of the harvest season, the capacities of both the rail and the inland waterways systems are stretched to keep up with shipping demand. The coal traffic on the system consists largely of low-sulfur coal that is shipped by unit train from the western coal fields to large transloading facilities at places like Cora and Metropolis, Illinois, where it is loaded onto barges for movement to waterside electric power plants on the Ohio and Mississippi Rivers. Usually, competition among transport modes to serve a major shipper facility occurs when the facility site is being selected. 
Once the decision is made to locate a facility on a particular mode (e.g., a grain elevator or power plant is located on a river), goods movement tends to depend on that mode.

Lower Mississippi River

The Lower Mississippi River flows 956 miles from the mouth of the Ohio River at Cairo, Illinois, to the Mouth of Passes in the Gulf of Mexico. There are no navigation locks on this portion of the inland waterways system. Navigation depth is maintained by river training works such as groins and revetments and by periodic maintenance dredging of shoals. Operations on this segment typically feature large tows, since the size of tows is not constrained by lock sizes. Table 2-3 shows the commodity tonnages on the 720-mile stretch from Cairo to Baton Rouge, Louisiana. The commodity mix there is similar to that on the Upper Mississippi, but the quantities are 50 to 100 percent greater.

Ohio River System

The Ohio River begins at the junction of the Allegheny and Monongahela Rivers at Pittsburgh, Pennsylvania, and flows in a southwesterly direction 981 miles to its mouth at Cairo, Illinois, where it empties into the Mississippi River. Navigation is maintained at a minimum 9-foot channel depth by 20 locks and dams on the Ohio River (Olmsted Lock will replace two older locks near the lower end of the river). Table 2-3 shows the commodity flow on the entire Ohio River system, which includes the Ohio mainstem and its tributaries. The Monongahela, Kanawha, and Tennessee Rivers contribute significant flow to the Ohio. Coal is the dominant commodity on the system, making up 59 percent of the tonnage in 2012. Most is steam coal, which moves both inbound and outbound on the system. Coal mines in Appalachia send coal to the river via conveyor belt, truck, and rail for shipment to river-located electric power generation plants. Those power plants also receive upbound coal from other sources, and there is still considerable movement of metallurgical coal on the Ohio and its tributaries. The second-ranked commodity group, crude materials (nearly 22 percent of the total), consists primarily of sand, gravel, and limestone. While rail lines run parallel along most of the Ohio, they are primarily part of the nation’s extensive east–west manufactured products and foodstuffs distribution system. As a practical matter, the large quantities of coal and crude materials moving on the Ohio could not easily be diverted to rail. Coal alone would require the railroads to handle more than 1 million additional carloads annually and to provide in excess of 26 more train movements per day (Kruse et al. 2012). Furthermore, most of the shipping and receiving facilities for this traffic are designed and operated specifically to handle barge shipments. Thus, as was the case for the Upper Mississippi, rail, truck, pipeline, and conveyor belts are complementary to water transport.
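The carload figure above can be sanity-checked with simple arithmetic (a sketch; the ~100-car unit-train length is my assumption, not a figure from the report):

```python
# Sanity check: if Ohio River coal shifted from barge to rail,
# how many additional train movements per day would that imply?
extra_carloads_per_year = 1_000_000  # report's figure: >1 million carloads
cars_per_unit_train = 100            # assumed typical unit-train length

carloads_per_day = extra_carloads_per_year / 365
trains_per_day = carloads_per_day / cars_per_unit_train

print(f"{carloads_per_day:.0f} carloads/day")  # ~2740
print(f"{trains_per_day:.1f} trains/day")      # ~27.4
```

At an assumed 100 cars per train, a million extra carloads per year works out to roughly 27 trains a day, consistent with the report's "in excess of 26 more train movements per day."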

Gulf Intracoastal Waterway

The GIWW provides a protected route along the Gulf Coast from Saint Marks, Florida, to the Mexican border at Brownsville, Texas. The total distance is 1,109 miles, and the maintained minimum channel depth is 12 feet. The system includes 10 locks, which serve a variety of purposes. The Inner Harbor Navigation Canal lock at New Orleans connects the Mississippi River to the GIWW and overcomes elevation differences between the river and the canal. The lock is currently one of the most congested on the entire inland waterways system. As would be expected in view of the GIWW’s location in the largest petrochemical region of the United States, petroleum and chemicals dominate the system’s commodity flow. Together they made up 76.5 percent of the tonnage in 2012. Crude materials ranked third, at nearly 15 percent. Within these broad groups a wide variety of specific commodities are moved, in keeping with the region’s complex industrial base. Pipelines are the main competing and complementary mode, but the circumstances of individual plant locations and outputs defy any easy generalizations.

Illinois River

The Illinois extends 292 miles from Lockport, Illinois, to its mouth at the Mississippi River at Grafton, Illinois, just above Saint Louis. Above Lockport, various channels connect the Illinois River and the Mississippi River system to Lake Michigan at Chicago, Illinois. The Illinois has a minimum maintained channel depth of 9 feet and seven lock sites with single chambers 600 feet long by 110 feet wide. These dimensions require the typical tow of 15 jumbo barges to double lock, and the lack of auxiliary chambers means that any lock outage will shut down navigation. The Illinois is a typical moderate-use waterway. It moved 31 million tons in 2012. The commodity mix was similar to that on the Mississippi, but with a smaller proportion of coal and a greater proportion of petroleum and chemicals.

Columbia River System

The Columbia River has the longest inland navigation channel on the U.S. West Coast. The Columbia provides a shallow draft waterway (14-foot depth) from Kennewick, Washington, to Vancouver, Washington, and Portland, Oregon, a distance of approximately 225 miles. Below Portland, a deep draft channel (40 feet) extends approximately 100 miles to the river’s mouth at the Pacific Ocean. There are four navigation dams on the shallow draft section. Above Kennewick, the Snake River allows navigation for 140 miles upstream to Lewiston, Idaho. The Willamette River drains northwestern Oregon and flows into the Columbia near Portland, where it forms part of that city’s deep draft harbor. Agriculture dominates flows on the Columbia. Food and farm products constituted 53 percent of the tonnage in 2012. About 76 percent of these agricultural products were grain and soybeans shipped for export. The Columbia River is the top gateway for U.S. wheat exports. It accounts for about 16 percent of all food and farm products moved on the inland waterways and about 3 percent of all food and farm imports and exports. Crude materials, largely forest products and sand and gravel, made up another 20 percent of the tonnage. The river also plays an important role in distribution of petroleum products throughout the region. There are rail lines along both the north and the south shores of the Columbia River. They are running at or near capacity, with much of that capacity devoted to serving the intermodal container trade.

Commodity Trends by Corridor

  • Coal: Ohio River system, including the Allegheny and Monongahela Rivers
  • Food and farm: Upper Mississippi and Illinois Rivers to New Orleans, Louisiana
  • Petrochemical: Mississippi River from Saint Louis, Missouri, to New Orleans
  • Manufactured goods: Mississippi River from Saint Louis to New Orleans
  • Crude materials: Ohio and Upper Mississippi Rivers (from Saint Louis) to New Orleans
  • Food and farm: Columbia River system, including the Columbia, Snake, and Willamette Rivers
  • Chemical goods: Gulf Intracoastal Waterway (GIWW)
  • Petroleum goods: GIWW

As shown in Table 2-3, the principal commodities carried on inland waterways system corridors are coal, petroleum and petroleum products, food and farm products, chemicals and related products, crude materials, manufactured goods, and manufactured equipment. Examination of annual commodity trends for several of the chief commodities on most of the primary corridors during the period 2000 to 2013 indicates adequate capacity in the system. Aside from petroleum products moving on the Lower Mississippi, commodity movement appears to have been stable or declining for more than a decade on most corridor segments.

Modal Shift to Road or Rail Resulting from Loss of Waterway Corridor

The Transportation Research Board’s Executive Committee wanted this study to cover possible impacts on highway systems of a major diversion of freight from water should a waterway fail because of deferred maintenance. Because one barge carries the payload of many trucks, state officials have expressed concern about the consequences for highway congestion and for pavement and bridge infrastructure if massive numbers of heavy trucks replaced shipments that had moved by water.
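The barge-to-truck equivalence behind that concern can be sketched with rough payload figures (both payloads are my assumptions for illustration, not numbers from the report):

```python
# Rough barge-to-truck payload equivalence. Payload figures are assumed
# for illustration: ~1,500 tons per loaded jumbo hopper barge and
# ~25 tons per heavy truck.
barge_payload_tons = 1500
truck_payload_tons = 25
barges_per_tow = 15  # a typical large tow on the Mississippi/Illinois

trucks_per_barge = barge_payload_tons / truck_payload_tons
trucks_per_tow = trucks_per_barge * barges_per_tow

print(f"{trucks_per_barge:.0f} trucks per barge")        # 60
print(f"{trucks_per_tow:.0f} trucks per 15-barge tow")   # 900
```

Under these assumptions a single 15-barge tow displaces on the order of 900 truckloads, which is why state officials worry about pavement and bridge wear from a modal shift.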

Age of Locks

Figure 2-8 shows a map of inland waterways lock infrastructure by original construction date. Figure 2-9 shows the average age of lock and dam infrastructure in comparison with other federal and state infrastructure and transportation assets. The average age of the locks in 1940 was less than 10 years; in 1980 the average age of the locks was about 30 years (whether or not major rehabilitation work was considered); in 2014 the average age was 59 years.


After rehabilitation is accounted for, in 2014 more than 50 percent of the locks were more than 50 years old.

76% of barge cargo (in ton-miles) moves on just 22% of the 36,000 inland waterway miles. About 50% of the inland waterway ton-miles moves on 6 major corridors that represent 16% of the inland waterway miles—the Upper Mississippi River, the Illinois River, the Ohio River, the Lower Mississippi River, the Columbia River system, and the GIWW.

Some inland waterways segments have minimal or no freight traffic.

With shrinking resources for the system and growing demands on the USACE O&M budget, targeting commercial navigation investments mainly to portions of the system important for moving freight would be prudent.

Lost transportation time due to delays and lock unavailability (outages) is a cost to shippers and an important consideration in deciding on future investments. Systemwide, about 80 percent of lost transportation time is attributable to delays. On average, 49 percent of tows in 2013 were delayed across the 10 highest-tonnage locks, with an average length of tow delay of 3.8 hours. Some delay is expected for routine maintenance, weather, accidents, and other reasons, but delays can be affected by maintenance outages caused by decreases in the reliability of aging machinery or infrastructure. About 12% of lost time on the inland waterways system is due to scheduled closures and about 8% is due to unscheduled closures, which indicates that up to 20% of lost time could be addressed with more targeted O&M resources. Targeting O&M resources toward major facilities with frequent lockages and high volumes and where the lost time due to delay is significantly higher than the river average could improve navigation performance. Most lost service due to delay occurs at high-demand locks used for agricultural exports and so may be caused by congestion related to peaks in seasonal shipping. Data are not available to explain the causes of delay at locks, which makes up 80 percent of lost transportation hours. Delays might be attributable to seasonal peak volumes due to weather, harvest, under-capacity, or other causes. Collection of data and development of performance metrics would enhance understanding of whether delay problems could be most efficiently addressed by more targeted O&M, traffic management, capacity enhancement, or some combination of these measures. Some high-use locks are located on waterways designated as low or moderate use, which has implications for how to allocate funds across parts of the system. 
This situation can occur because of seasonal peaks in the movement of certain commodities, such as harvested food and farm products, or from navigation closures caused by annually recurring weather conditions, such as ice or flooding. The tonnage moved through each lock during peak demand periods, as well as the type and value of the cargo, could be considered in funding allocations instead of considering only average annual waterway ton-miles. Likewise, some rivers and waterborne corridors may move as much or more tonnage on a seasonal basis as rivers classified as high use but receive low-use classification on the basis of annual ton-miles of transport rather than seasonal peak ton-miles.
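The lost-time decomposition above can be restated compactly (a sketch using the report's system-wide figures):

```python
# Decomposition of lost transportation time on the inland waterways
# system, using the report's system-wide figures.
lost_time = {
    "delays at locks": 0.80,      # causes not well explained by available data
    "scheduled closures": 0.12,   # planned maintenance outages
    "unscheduled closures": 0.08, # unplanned breakdowns
}

# The shares should account for all lost time.
assert abs(sum(lost_time.values()) - 1.0) < 1e-9

# Outage-related lost time is the part most directly addressable
# by better-targeted O&M spending.
addressable = lost_time["scheduled closures"] + lost_time["unscheduled closures"]
print(f"Up to {addressable:.0%} of lost time is outage-related")  # 20%
```

The remaining 80 percent, delay at locks, may stem from seasonal peaks, under-capacity, or weather, which is why the report calls for better data before choosing between more O&M, traffic management, or capacity expansion.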

The advanced age of lock and dam infrastructure is often used to communicate funding needs for the system. Age is not a good indicator of lock condition. A substantial number of locks have been rehabilitated, which would be expected to restore performance to its original condition if not better. Dating the age of assets from the time of the last major rehabilitation, as is done for highway infrastructure such as bridges, would be more accurate. Furthermore, with some exceptions, little correlation exists between the age of locks and their performance as measured by delay experienced by system users. A more useful approach for targeting funds to improve system performance than focusing on age as a proxy for lock functioning would be to identify waterway segments and facilities where the lost time due to delay (based on millions of tons delayed) is substantially higher than the system average.

Federal Role in the Inland Waterways System

The inland waterways infrastructure is managed by the U.S. Army Corps of Engineers (USACE) and funded from the USACE budget.

USACE, under its Civil Works Program headed by the Assistant Secretary for Civil Works, plans, constructs, operates, and maintains a large water resources infrastructure that includes locks and dams for inland navigation; maintenance of harbor channel depths; dams, levees, and coastal barriers for flood risk management; hydropower generation facilities; and recreation. The primary USACE Civil Works mission areas are support of navigation for freight transportation and public safety; reduction of flood and storm damage; and protection and restoration of aquatic ecosystems, such as the rebuilding of wetlands and the performance of environmental mitigation for USACE facilities. Hydropower generation is an important activity of USACE, although it has not been considered a primary mission. Other USACE responsibilities include recreation, maintenance of water supply infrastructure (municipal water and wastewater facilities), and disaster relief and remediation beyond flood disaster relief (e.g., remediation of formerly used nuclear sites).

Whereas some federal agencies have broad authorities, Congress authorizes each capital investment for capacity expansion, facility replacement, or major rehabilitation of USACE water infrastructure projects. A construction project generally originates with a request to a congressional office from communities, businesses or other organizations, and state and local governments for federal assistance. Since 1974, the process for authorizing federal water resources projects, including infrastructure for freight transportation, has been the omnibus bill typically called the Water Resources Development Act (WRDA). On the basis of this legislation, Congress authorizes individual capital projects and numerous other USACE activities and provides policy direction in areas such as project delivery and revenue generation.

Benefit–cost analysis is the primary criterion used in selecting capital expenditure projects for funding. Projects that pass a minimum threshold demonstrating that benefits exceed costs are eligible for congressional authorization and funding.
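The screening criterion described above can be sketched in a few lines. This is a minimal illustration, not USACE's actual procedure; the function names, threshold, and figures are hypothetical.

```python
# Hypothetical benefit-cost screening: a project is eligible for
# authorization only if its benefit-cost ratio (BCR) meets a
# minimum threshold (here, benefits must at least equal costs).

def benefit_cost_ratio(annual_benefits: float, annual_costs: float) -> float:
    """Ratio of annualized benefits to annualized costs."""
    return annual_benefits / annual_costs

def is_eligible(annual_benefits: float, annual_costs: float,
                threshold: float = 1.0) -> bool:
    """Pass/fail screen: BCR must meet the minimum threshold."""
    return benefit_cost_ratio(annual_benefits, annual_costs) >= threshold

# Example: $12M in annualized benefits against $10M in annualized costs
print(is_eligible(12e6, 10e6))  # BCR = 1.2 -> True
print(is_eligible(9e6, 10e6))   # BCR = 0.9 -> False
```

Note that this is a pass/fail screen only; as discussed later in the chapter, projects that clear the threshold are not further ranked against one another.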

Two types of congressional authorizations are required for a construction project—one for investigation and one for project implementation.3 First, authority is provided for a feasibility study in which the local USACE district investigates engineering feasibility, formulates alternative plans, conducts benefit–cost analysis, and assesses environmental impacts under the National Environmental Policy Act.4 The study results are conveyed to Congress through a Chief of Engineers Report (Chief’s Report) that contains either a favorable or an unfavorable recommendation for each project. Study results also are submitted to the Office of Management and Budget (OMB) within the Executive Office of the President, which applies its own fiscal, benefit–cost, and other criteria to assess whether projects warrant funding according to executive branch objectives. Congress considers USACE study results, recommendations of OMB, and other factors in choosing projects to authorize. Thus, both the projects selected for initial study and the project authorizations are at the discretion of Congress.

After Congress authorizes a project, it becomes eligible to receive implementation funding in annual Energy and Water Development appropriations acts. The appropriations process begins with the submission of the annual President’s budget. To be included in the President’s budget, authorized projects must compete within the overall USACE program ceiling not only for initial funding but also for continued annual funding throughout the project’s life cycle.

Once Congress receives the President’s budget request, it is “marked up” by the House and Senate Appropriations Committees, where project funding levels are adjusted in response to congressional priorities. Even if an authorized project has received initial construction funding, there is no assurance that it will receive sufficient appropriations each year to provide for an efficient construction schedule. The actual funding for the project over its life cycle may fall well short of what an efficient schedule requires.

  1. At this early stage, USACE typically engages in an advisory role to answer technical questions or to assess the level of interest in possible projects and the support of nonfederal entities (state, tribal, county, or local agencies and governments) that may become sponsors.
  2. The 2014 authorizing legislation is titled the Water Resources Reform and Development Act (WRRDA).
  3. If the geographic area was investigated in previous studies, the study may be authorized by a resolution of either the House Transportation and Infrastructure Committee or the Senate Environment and Public Works Committee.
  4. According to WRRDA 2014, at any point during a feasibility study, the Secretary of the Army may terminate the study when it is clear that a project in the public interest is not possible for technical, legal, or financial reasons.
  5. After a project is authorized, modifications beyond a certain cost and scope require additional congressional authorization. A previous National Research Council (NRC) report (2012) encouraged less reliance on WRDA as the main vehicle for authorizing projects for USACE infrastructure. The traditional focus on WRDA for authorizing large new construction projects in particular is less relevant to a system that is mostly “built out” and for which the main concern is a sustainable source of funding for ongoing operations and maintenance (O&M) and major repairs. Although WRDA drives capital funding for freight transportation on the inland waterways, it is largely disconnected from federal legislative processes and efforts related to other freight modes. Similarly, the goal of the USACE planning process is to determine whether a navigation project is eligible for funding, not to assess whether the project will be the most efficient option for meeting national freight transportation needs and economic interests given the availability of other modes. (The benefit–cost analyses required for the authorization of navigation projects must consider other modes to a degree, as described later in this chapter.)

A national freight system perspective on the efficiency of the nation’s freight network is generally lacking, and no mechanism exists for prioritizing spending across modes.

Operations and Maintenance

O&M projects can be authorized under WRDA, but it has not often been used for this purpose (see NRC 2012, Table 2-2, for exceptions in WRDA 2007). USACE headquarters sets priorities for O&M investments as part of the budgeting process on the basis of information gathered from USACE districts and divisions. Eight USACE divisions coordinate projects and budgets in 38 district offices across the United States. Districts develop plans, priorities, and rankings for investigations, construction, and O&M and submit them to USACE divisions. Divisions prioritize projects across their districts and provide division-wide rankings of projects to USACE headquarters. USACE headquarters considers division priorities and rankings, administration budget priorities, and other factors in ranking requests.6 The number of projects funded each year depends on the annual budget appropriation by Congress.

The local assessment of assets and maintenance needs follows general guidelines, but it has many local variations. For example, districts may develop their own asset management systems for assessing and communicating the condition of infrastructure and the level of service being provided for navigation, as well as O&M and repair needs. According to a past NRC report, with respect to water resources funding, “neither the Congress nor the administration provides clear guiding principles and concepts that the USACE might use in prioritizing OMR [operations, maintenance, and repair] needs and investments” (NRC 2012, 11). Full benefit–cost analysis is applied only to construction and not to O&M,7 which is appropriate given the costs of conducting benefit–cost analysis relative to the cost of O&M projects.

Distinctions Among O&M, Major Rehabilitation, and Construction

USACE separates projects labeled as “major rehabilitation” from its O&M budget. Major rehabilitation projects meet the following criteria established in a series of Water Resources Development Acts from 1986 to 2014.8

- Requires approval by the Secretary of the Army, and construction is funded out of the Construction General Civil Works appropriation for USACE.
- Includes economically justified structural work for restoration of a major project feature that extends the life of the feature significantly or enhances operational efficiency.
- Requires a minimum of 2 fiscal years to complete.
- Costs more than $20 million in capital outlays for reliability improvement projects or more than $2 million in capital outlays for efficiency improvement projects. These thresholds are adjusted annually by regulation and are subject to negotiation.
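The classification criteria above can be expressed as a simple decision rule. The sketch below is illustrative only; the dollar thresholds are the statutory figures quoted in the text, but in practice they are adjusted annually by regulation, and the function and parameter names are invented.

```python
# Illustrative classification of a repair as "major rehabilitation"
# (a capital expense) versus routine O&M, per the criteria listed
# above. Thresholds shown are the statutory baseline figures.

THRESHOLDS_USD = {
    "reliability": 20e6,  # reliability improvement projects
    "efficiency": 2e6,    # efficiency improvement projects
}

def is_major_rehabilitation(cost_usd: float, fiscal_years: int,
                            project_type: str,
                            restores_major_feature: bool = True) -> bool:
    """True if the repair meets all major-rehabilitation criteria."""
    return (restores_major_feature
            and fiscal_years >= 2
            and cost_usd > THRESHOLDS_USD[project_type])

print(is_major_rehabilitation(25e6, 3, "reliability"))  # True -> capital
print(is_major_rehabilitation(15e6, 3, "reliability"))  # False -> O&M
```

As the next paragraph notes, where a given repair falls relative to these thresholds determines whether it is budgeted as a capital project or as an O&M expense.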

Major rehabilitation projects are treated as capital projects for new construction in the budgeting process instead of being considered an expense of maintaining the system. The decision to classify major rehabilitation as a capital expenditure instead of as an O&M expense is arbitrary.9


Cost-Sharing Rules

Before 1978, the inland navigation system was funded almost entirely through general revenues collected from taxpayers. Congress transformed funding for the inland waterways by passing two pieces of legislation: the Inland Waterways Revenue Act of 1978 and the Water Resources Development Act of 1986, which created the funding framework followed today. This legislation established a tax on diesel fuel for commercial vessels paid by the barge industry and an Inland Waterways Trust Fund (IWTF) to pay for construction with fuel tax revenues. It also increased the nonfederal cost-sharing requirements for inland navigation construction projects.

The required cost share depends on whether the navigation project is classified as a capital cost or as O&M. For single-purpose navigation projects and multiple-purpose projects assigned to the navigation budget, the federal government pays 100 percent of O&M costs, 50 percent of capital costs (including capacity expansion, replacement, and major rehabilitation), and 100 percent of rehabilitation costs up to $20 million (costs for a single repair or set of repairs that exceed this amount are considered major rehabilitation and a capital cost). The waiving or adjustment of cost-sharing requirements for individual projects is infrequent and typically requires authorization by Congress. The federal share for commercial navigation is paid via general revenues. The commercial users’ share is paid for with a diesel fuel tax per gallon via the IWTF; the tax is collected by the Internal Revenue Service. The fuel tax was initially set at $0.04 per gallon and is not indexed to inflation. Under the 1986 legislation, the tax rose to $0.20 per gallon, where it remained until 2014, when the 113th Congress approved an increase in the barge fuel tax to $0.29 per gallon.

In contrast to the cost share for navigation, the O&M costs for nonnavigation projects are paid for partly by sponsors. The federal share depends on the type of water resource project (see Table 3-1). For many project types (e.g., levees), the nonfederal sponsor is responsible for O&M once construction is complete. Furthermore, inland waterways feasibility studies to determine the eligibility of a navigation project for funding are entirely a federal expense; in contrast, for deepwater navigation and nonnavigation projects, the federal share for feasibility studies is 50 percent.
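The cost-sharing split for inland navigation projects described above can be summarized numerically. This is a minimal sketch of the rules as stated in the text (100 percent of O&M from federal general revenues; capital costs split 50/50 between general revenues and the IWTF); the dollar figures are invented for illustration.

```python
# Illustrative split of inland navigation project costs under the
# cost-sharing rules described above: federal general revenues pay
# all O&M plus half of capital costs; the IWTF (funded by the barge
# fuel tax) pays the other half of capital costs.

def cost_shares(capital_cost: float, om_cost: float) -> dict:
    """Return the dollar share borne by each funding source."""
    return {
        "federal_general_revenues": 0.5 * capital_cost + om_cost,
        "iwtf": 0.5 * capital_cost,
    }

# Example: a $100M capital project plus $30M of O&M
shares = cost_shares(capital_cost=100e6, om_cost=30e6)
print(shares)  # {'federal_general_revenues': 80000000.0, 'iwtf': 50000000.0}
```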

Patterns and Trends in Funding for the Inland Waterways System

In terms of constant dollars, funding for construction and O&M for lock and dam facilities is at its lowest point in more than 20 years and is on a downward trajectory (see Figures 3-1a and 3-1b). The balance of the IWTF, which is used to pay 50 percent of construction costs, has declined. The fund was at its highest level, $413 million, in 2002 (see Figure 3-2). The balance fell sharply between 2005 and 2010 as expenditures for inland waterways exceeded fuel tax collections and interest on the trust fund balance. Reasons for the decline include increased appropriations, lower fuel tax revenues than in previous years, large construction costs, and construction cost overruns. Capital projects are funded incrementally by Congress through the annual budgeting and appropriations process. Incremental federal funding, an increasingly common procedure in which only a portion of the total budget for a project is appropriated, contributes to project delivery delays and higher costs (NRC 2011; NRC 2012, 29, gives another example on the Lower Monongahela River). Between 2005 and 2010, Congress made a conscious effort to “spend down” the IWTF to accelerate project completions and reduce the size of the backlog of authorized projects.

Capital Projects Backlog

A substantial number of water resources projects that have been authorized by Congress via WRDA remain unfunded through the appropriations process. These projects are known as the capital projects backlog.

Congress considers the recommendations of USACE and OMB, but the selection of waterways projects for authorization has a long history of being driven largely by political and local concerns (Ferejohn 1974).

While concerns about the backlog have been expressed, its size is not a reliable indicator of the funding needed for the inland navigation system for at least three reasons. First, O&M spending is not reflected in the backlog. With the aging of the system, maintenance has become a higher priority. Second, navigation projects make up only a portion of the backlog ($4.1 billion) (CRS 2011); most of the backlog relates to waterways infrastructure serving other purposes such as flood control.

Third, not all of the projects in the navigation backlog are priorities. In contrast to its practice for other modes, Congress authorizes and appropriates funds on a project-by-project basis. Benefit–cost analysis is used to determine whether a construction project meets a minimum threshold of eligibility for pursuing authorization and appropriations and is generally suitable for this purpose,16 but the lack of a prioritization process based on a formal assessment of system needs has resulted in the authorization of more projects than can be funded within the constraints of the budget. The current practice is for OMB to set a minimum benefit–cost ratio that projects must meet to be included in the President’s annual budget request.17 While benefit–cost analysis is used in determining whether a project meets a minimum threshold for authorization, there is no indication that projects are further ranked against each other during the authorization process (GAO 2010). Because more projects are authorized than can be funded, priorities are sorted out in the budgeting and appropriations process, in which both the executive branch and Congress participate. IWUB, as part of a capital projects business

For these reasons, a method for prioritizing projects on the basis of the service needs of the system may be more useful than an attempt to estimate and seek funding for the entire backlog. As with O&M, a standard process is needed for prioritizing spending on capital projects for construction and major rehabilitation and for ascertaining the level of funding required across the system to maintain reliable freight service. (Prioritization is discussed in Chapter 4.) A number of temporary measures have been
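One simple form the prioritization called for above could take is ranking authorized projects by benefit–cost ratio and funding down the list until the budget is exhausted. The sketch below is purely illustrative (a greedy heuristic with invented project names and figures), not a procedure the report prescribes.

```python
# Illustrative greedy prioritization: rank authorized projects by
# benefit-cost ratio (BCR), then fund them in order until the
# budget is exhausted. Projects below a BCR of 1.0 are skipped.

def prioritize(projects: list, budget: float) -> list:
    """projects: list of (name, cost, bcr) tuples. Returns funded names."""
    ranked = sorted(projects, key=lambda p: p[2], reverse=True)
    funded, remaining = [], budget
    for name, cost, bcr in ranked:
        if bcr >= 1.0 and cost <= remaining:
            funded.append(name)
            remaining -= cost
    return funded

projects = [
    ("Lock A", 40e6, 2.5),
    ("Lock B", 60e6, 1.1),
    ("Dam C", 30e6, 1.8),
]
print(prioritize(projects, budget=80e6))  # ['Lock A', 'Dam C']
```

Even this crude ranking makes the trade-off explicit: with an $80M budget, Lock B clears the eligibility threshold yet goes unfunded, which is precisely the situation the current authorize-everything practice obscures.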


States and private enterprise led the initial building of inland waterways infrastructure and charged for use of the waterways. Federal involvement in the inland waterways system began in the 19th century, when the scope and scale of inland waterways projects grew beyond what any private entity or state could or would take on, especially without the ability to realize a monetary return on investment. Congress made these federal investments to promote inland waterways commerce, which was central to the economic development of the United States. This history has led to a unique federal role in the inland waterways system among all the freight transportation modes. Today, waterborne transportation is the only freight mode for which Congress authorizes and appropriates funds (for construction and O&M) on a project-by-project basis. Federal management and decision-making responsibilities for freight transportation generally are fragmented across jurisdictional lines in Congress, multiple federal agencies, and different silos of funding. Whereas USACE and the U.S. Coast Guard (part of the Department of Homeland Security) manage the marine and inland waterways systems, the U.S. Department of Transportation has responsibilities for highway, aviation, rail, and pipeline. Various congressional committees are responsible for authorizations and appropriations for the different modes. Decisions about inland waterways investments, including ports, channels, and infrastructure, are made largely at the federal level.18


However, most decisions about highway investments are made at the state and metropolitan levels. For ports, investment decisions are made mainly by independent private entities and sometimes by state or bi-state port authorities. As private transport industries, railroads and pipelines make their own decisions about investments.

Public and private shares of funding also differ across modes. Highways, aviation, ports (harbor and channel dredging and maintenance), and the inland waterways all receive federal aid for capital costs. In addition, the inland waterways, harbors, and channels receive federal general revenues support for O&M.

Rail and pipeline, with which the inland waterways system competes to some degree, are almost entirely private enterprises, with minimal federal assistance for infrastructure.

For highways, the federal government pays a significant share for new construction, but O&M is a state and local financial responsibility.

The federal government, through general revenues, pays more for water transportation as a percentage of total O&M and construction costs compared with federal contributions to highways and rail. For the inland waterways system, federal support is used to cover a large shortfall between the fees paid by users and total system costs.

In contrast, fees paid by the users of highway and rail modes cover a much greater share of the capital and O&M costs of those transportation systems. General federal tax revenues pay about 90 percent of total inland waterways system costs.

This compares with virtually no federal general revenue support for rail system users and pipeline, and historically only about 25 percent federal support for highways, which is derived primarily from user fees.

Federal Subsidies for the Various Freight Transportation Modes

Federal subsidies for the various freight modes are complicated and contested among advocates for the modes, in part because of disagreements about (a) direct subsidies that are funded by various public sources and (b) indirect subsidies that result from costs imposed on the public (externalities) that are not part of market transactions between shippers and carriers. No authoritative study has estimated either direct or indirect subsidies across the various freight modes, although a previous Transportation Research Board study (TRB 1996) developed and pilot-tested a methodology for estimating freight external costs.


Assessing direct subsidies is more straightforward among the modes with which water competes (rail, pipeline, and, to a much lesser degree, trucking). Freight railroads are private entities that fund the vast bulk of their operations and capital and maintenance spending from their own funds. Limited federal funds are available for grade separation projects (to separate traffic for safety and mobility), a modest federal loan guarantee program is available (principally for short lines), and state governments occasionally provide public funding for such purposes as raising bridges or tunnels for double-stack trains or to improve rail access to state ports. Although public funding is minimal in proportion to the $20 billion to $25 billion railroads have invested in capital stock annually since 2007,22 railroad modal competitors point out that many railroad rights-of-way were initially given in the 19th century by the federal government and states to encourage railroad development. Because pipelines are entirely private, the evaluation of subsidies is easier than for rail. Although long-distance truck–barge competition is unlikely because of the much higher cost of truck movements per ton-mile, there may be short segments in which truck and barge would compete. The trucking assessment of competitive subsidies is most complex because trucks use highways that are shared with passengers. Although both freight and passenger operators pay fuel taxes and other user fees, there is continued debate about whether the largest and heaviest trucks pay their share of the costs of building and maintaining highways (GAO 2012). Moreover, after decades of relying almost exclusively on federal and state user fees to fund interstate and intercity highways, in the past decade Congress has used general funds to supplement user fee revenues to the Highway Trust Fund (HTF) for the federal share of highway capital spending (CBO 2014). 
(Improved fuel economy and political opposition to raising fuel taxes have resulted in insufficient user fees into the HTF to pay for the federal share of highway capital improvements.)

Trucking is involved in at least one segment of all freight moves and often two.

Whereas trucks can serve almost all origin–destination (O-D) pairs because of the ubiquity of roads and highways, and railroads reach many O-D pairs as well, waterways are far more limited.


In a climate of constrained federal funds and with O&M becoming a greater part of the inland navigation budget, a pressing policy issue is how to pay to preserve the inland waterways system for commercial navigation. The structures (locks and dams) built and maintained for freight transportation have resulted in beneficiaries beyond commercial navigation. It is reasonable and, from an economic perspective, potentially efficiency enhancing to consider whether these beneficiaries could help pay for the system. Congress, in the 2014 WRRDA (Section 2004, Inland Waterways Revenue Studies), called for a study of whether and how the various beneficiaries of the waterways might be charged. The sections below assess the available evidence on benefits of the inland waterways used for freight transportation and the economic and practical considerations in charging for the benefits received.

Commercial navigation is the primary beneficiary of the inland waterways system. Benefits beyond commercial navigation may include hydropower generation, recreation, flood damage avoidance, municipal water supply, irrigation, higher property values for property owners, sewage assimilation, mosquito control, lower consumer costs because the availability of barge shipping may result in more competitive railroad pricing (referred to as water-compelled rates), and environmental benefits associated with lower fuel emissions of barge compared with other modes.

A possible national benefit of investing in the inland waterways is the environmental advantage that barge may have over other modes: barge’s lower fuel usage per ton-mile than other transportation modes may result in lower air emissions. Whether barge or rail is the more energy-efficient mode (measured as fuel use per ton-mile) depends in large part on the water

The total federal share of the cost of the inland waterways system is estimated to be about 90 percent (TRB 2009). The federal share is roughly 25 percent for the highways used by motor carriers and 0 percent for pipelines and nearly so for railroads (both private industries for which the federal role is primarily one of safety and environmental regulation). Whereas federal general revenues cover all O&M expenses for the inland waterways, states pay 100 percent of the O&M expenses, mostly from user fees, for intercity highways used by motor carriers. O&M expenses for railroads and pipelines are paid for by the private industries responsible for these modes.

Examining whether beneficiaries could help pay for the system is rational and could improve economic efficiency. Charging commercial navigation beneficiaries is a viable option, since commercial carriers impose significant marginal costs on the system.

A benefit–cost analysis prepared by USACE is the primary source of technical information that Congress uses during the authorization process in deciding when spending is justified for capital projects. While benefit–cost analyses have been used for determining whether a project meets a minimum threshold for funding, they have not been used to rank projects, and the result has been far more projects being authorized than can be afforded within the constraints of the budget. A method for prioritizing projects on the basis of the service needs of the system would be more useful than an attempt to estimate and seek funding for the existing backlog.

As mentioned, USACE’s primary mission with respect to navigation is to provide conditions that enable the passage of commercial traffic. The main cost of providing these conditions is the maintenance of lock and dam infrastructure, but the maintenance of channels and pools is part of the cost. USACE has developed a conceptual framework (described in more detail below) that considers the age of infrastructure and other elements consistent with EEAM to prioritize repairs that would cost-effectively extend the life of an asset or critical component of the asset and achieve a reliable navigation system. The elements include the probability of failure of the infrastructure; infrastructure usage (demand), defined as whether the waterway has low, moderate, or high levels of freight traffic; and the economic consequences of failure to shippers and carriers. This approach recognizes the importance of economic consequences for strategic investment instead of assuming that all navigation infrastructure needs to be maintained at its original condition. For USACE, the goal of prioritizing investments is to produce the greatest national economic development benefit, which for commercial navigation has meant maximizing reductions in the cost of cargo transported by using USACE waterway infrastructure. In practical terms, this means reducing the risk of physical failure and maintaining a target level of delays.
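The three elements of the framework described above (probability of failure, usage, and economic consequence of failure) imply a simple risk-ranking logic. The sketch below illustrates one way such a score could be computed; the usage weights, asset names, and figures are all hypothetical, not USACE's actual methodology, which the text notes is not yet fully developed.

```python
# Hypothetical risk ranking in the spirit of the EEAM-style framework:
# score = probability of failure x economic consequence of failure,
# weighted by the waterway's traffic level (low/moderate/high).

USAGE_WEIGHT = {"low": 0.5, "moderate": 1.0, "high": 1.5}  # invented weights

def risk_score(p_failure: float, consequence_usd: float, usage: str) -> float:
    """Expected economic loss from failure, scaled by traffic level."""
    return p_failure * consequence_usd * USAGE_WEIGHT[usage]

assets = [
    # (name, annual probability of failure, consequence to shippers, usage)
    ("Lock 12", 0.10, 50e6, "high"),
    ("Lock 7",  0.30, 10e6, "moderate"),
    ("Dam 3",   0.05, 80e6, "low"),
]
ranked = sorted(assets, key=lambda a: risk_score(*a[1:]), reverse=True)
print([a[0] for a in ranked])  # ['Lock 12', 'Lock 7', 'Dam 3']
```

The point of such a score is the one the text makes: an old asset with a high probability of failure but little traffic and low consequence (Dam 3 here) need not outrank a busier, higher-consequence asset.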

Although the specific procedures of the approach are just beginning to be implemented and refined and often are not clear, the framework is being applied at program, district, and headquarters levels to guide the identification of maintenance needs and funding requests. USACE intends to use the framework to implement a standardized assessment of assets across the system (outcomes-based assessment). The assessment is planned to cover all important aspects of asset management. However, USACE has not fully developed a set of measures or a standard methodology for assessing risk across all assets in the inland waterways system. Additional considerations that would need attention are described in the section of this chapter on implementation.


A standard process is lacking for assessing the ability of the inland waterways system to meet demand for commercial navigation service and for prioritizing spending for maintenance and repairs. An asset management program focused on economic efficiency, fully implemented and linked to the budgeting process, would prioritize maintenance spending and ascertain the funding levels required for reliable freight service. A well-executed program of asset management would promote rational and data-driven investment decisions based on system needs and minimize the broader influences that affect the budgeting process. USACE has adopted a generally appropriate framework for asset management that is mostly consistent with EEAM, but it is not yet fully developed or deployed across districts. The framework recognizes the importance of economic consequences for strategic investments and does not assume, as in the past, that all navigation infrastructure needs to be maintained at its original condition. The approach appropriately includes assessment of three main elements that follow from EEAM: the probability of failure of the infrastructure; infrastructure usage (demand), defined as whether the waterway has low, moderate, or high levels of freight traffic; and the economic consequences of failure to shippers and carriers.

This chapter discusses funding options for the inland waterways commercial navigation system other than reliance for the most part on federal general revenues. The immediate users of the inland waterways are the companies operating the barge tows that move commercial freight. They are the focus of this chapter. However, the burden of payments by the barge industry is not borne fully by the operators, and they do not enjoy all the benefits. The industries that use barge shipping benefit from the low cost of shipping their products, mostly commodities that are low in value relative to their weight such as coal, petroleum and petroleum products, food and farm products, chemicals and related products, crude materials, and to a lesser degree manufactured goods and equipment. These commodities are sold for a price that is set by the market. If barge companies become the direct payers of a new user charge, their cost may be passed on in whole or in part in the form of increased costs to the shippers of these commodities and, in turn, to the producers and consumers of the commodities. The first section below describes the taxes or fees that might be paid by companies operating the barge tows that move commercial freight. The options could be used alone or in various combinations.

In recent years, proposals have been made to add to or replace the inland waterways barge fuel tax with user-specific fees. In contrast to a tax, user-specific fees are direct charges paid by an identifiable user in exchange for the opportunity to pass through a lock or use a portion of the waterways. Failure to pay the fee results in being excluded from the use of a service (i.e., denial of passage through a lock, use of a particular segment, or passage during times of peak traffic).
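The difference between the existing fuel tax and a user-specific fee can be made concrete with a toy comparison. The $0.29-per-gallon rate is the current tax cited earlier in this chapter; the lockage fee, gallons burned, and lock count below are invented for illustration, and no specific fee level is proposed in the text.

```python
# Hypothetical comparison of what a single tow operator would pay
# under the existing barge fuel tax versus a per-lockage user fee.
# The fuel tax rate is the current statutory rate; all other
# figures are invented for illustration.

FUEL_TAX_PER_GALLON = 0.29  # current barge fuel tax, dollars per gallon

def fuel_tax_paid(gallons_burned: float) -> float:
    """Tax owed under the existing fuel tax."""
    return gallons_burned * FUEL_TAX_PER_GALLON

def lockage_fees_paid(lock_passages: int, fee_per_lockage: float) -> float:
    """Charge owed under a user-specific per-lockage fee."""
    return lock_passages * fee_per_lockage

# A tow burning 8,000 gallons and passing through 10 locks,
# under a hypothetical $500 lockage fee:
print(fuel_tax_paid(8_000))          # 2320.0
print(lockage_fees_paid(10, 500.0))  # 5000.0
```

The structural difference matters more than the amounts: the fuel tax is paid regardless of which segments a tow uses, whereas a lockage fee ties the charge to an identifiable, excludable service, which is what makes nonpayment enforceable by denial of passage.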

Direct Promotion of Efficient Use of Waterway Resources

The design of a user payment strategy can promote a waterways system that uses resources more efficiently (CBO 1992). The requirement that users of the system pay for its costs generates signals concerning the value of the system to the users and whether the benefits of the system justify the costs. In the private sector, payments by purchasers of a good or service send a clear signal concerning whether the purchasers are willing to pay the costs associated with providing it. Similarly, if users of the inland waterways system pay for the costs of navigation service on the various parts of the system (on a river segment or at a lock and dam facility), the payments show which parts of the system are cost-effective components of the national freight transportation system and should be maintained (GAO 2008). Parts of the system for which shippers are not able or willing to pay may be discontinued or justified under revenue streams other than federal navigation funding, as discussed later.


Debates about funding for the inland waterways system have long centered on the level of funding required, the roles of the federal government and users in paying for the system, and how users and other beneficiaries could be charged. These issues deserve renewed attention in light of shrinking federal budgets, declining appropriations for the inland waterways system, and increasing maintenance needs for its infrastructure.


The policy context in which these issues were considered and the committee’s conclusions are summarized below. Three main messages emerge, as follows:

  1. Reliability and performance of the inland waterways freight system are the priorities for funding.
  2. Reliability and performance will depend more on investments in operations and maintenance (O&M) than on capital expenditures for larger locks.
  3. More reliance on a user-pays approach to funding the inland waterways for commercial navigation is feasible, would provide additional revenues for maintenance, and would promote economic efficiency for the system.

Policy Context

The infrastructure of the federal inland waterways system is managed by the U.S. Army Corps of Engineers (USACE) and funded through USACE’s navigation budget. The nation’s inland waterways include more than 36,000 miles of commercially navigable channels and roughly 240 working lock sites. The chief and most expensive component of providing for navigation service is the installation and maintenance of lock and dam infrastructure to enable the upstream and downstream movement of cargo. Historically, the federal government invested in the building of the inland waterways system to aid in the physical expansion of the United States and the growth of the U.S. economy by facilitating cargo shipments. Before 1978, the federal government paid all costs associated with construction and maintenance of the inland waterways. Legislation passed in 1978 and 1986 established the current funding and cost-sharing framework. Today, 11,000 miles of the inland waterways are subject to a federal fuel tax paid by the barge industry via the Inland Waterways Trust Fund to cover up to 50 percent of the cost of construction and major rehabilitation of lock and dam infrastructure.


The federal government pays 50 percent of construction costs from general revenues and 100 percent of the cost of O&M (by budgetary definition, O&M includes repairs up to $20 million; repairs that exceed $20 million and meet other criteria are considered major rehabilitation and classified as a capital expenditure). Although policy debates about funding for the inland waterways have focused on capital projects, O&M, which is paid for entirely with federal general revenues, now accounts for three-fourths of the annual budget request for inland navigation.

Because of historical precedent, the federal role in the management and funding of the inland waterways for commercial navigation is greater than for other freight modes. The total federal share of the cost of the inland waterways system is estimated to be about 90%. The federal share is roughly 25% for the highways used by motor carriers, 0 percent for pipelines, and nearly so for railroads (both private industries for which the federal role is primarily one of safety and environmental regulation). Whereas federal general revenues cover all O&M expenses for the inland waterways, states pay 100% of the O&M expenses for intercity highways used by motor carriers, mostly from user fees. O&M expenses for railroads and pipelines are paid by the private industries responsible for these modes.

With the exception of a one-time infusion of funds from federal economic stimulus legislation in 2009, the funds appropriated for inland navigation have declined over the past decade in constant dollars for both O&M and construction. The level of funding required to sustain a reliable inland waterways system is not clear. The level of service required from the system, and therefore the parts of the existing system that need to be maintained, has not yet been defined. USACE does not have established systemwide guidance and procedures for the assessment of inland waterways infrastructure and the prioritization of maintenance and repair spending for reliable commercial navigation. In view of stagnant federal appropriations, system users have recognized that they need to pay more and supported the increase in the barge fuel tax enacted by the 113th Congress. However, the increase will not be sufficient to maintain the system and only heightens the urgency of settling on a plan for maintenance, since under federal law any new revenues from the barge fuel tax can be used only for construction and not for O&M. Moreover, because funds raised by the barge fuel tax for capital projects must be matched by the federal government, O&M competes directly with construction for federal general revenue funds. Without a new funding strategy that prioritizes O&M, maintenance may be deferred until it reaches $20 million (the point at which it becomes classified as a capital expenditure), which would result in further deterioration and in a less cost-effective and less reliable system.

USACE has missions and management responsibilities that extend beyond providing for commercial navigation. With the authorization of Congress, USACE, under its Civil Works Program headed by the Assistant Secretary for Civil Works, plans, constructs, operates, and maintains the following: lock and dam infrastructure for commercial shipping; channel depths required for ports and harbors; dams, levees, and coastal barriers for flood risk management; and hydropower generation facilities. Other USACE responsibilities include maintenance of water supply infrastructure (municipal water and wastewater facilities) and provision of waterborne recreation (i.e., boating). For the most part, these missions are independent of one another, since most projects are authorized for a single purpose. However, for many navigation projects, the availability of pools behind dams has allowed others to benefit from water supply for municipal, industrial, and farming purposes and for recreation. Any decisions about funding for navigation will need to consider the implications for this broader range of beneficiaries.


The following considerations warrant particular attention in decisions about funding for the inland waterways system.

  1. The Inland Waterways System Is a Small but Important Component of the National Freight System

The role of the inland waterways system in national freight transportation has changed significantly since the system was built to promote the early economic development of the nation. Today barges carry a relatively small but steady portion of freight, mainly bulk commodities that include, in rough order of importance, coal, petroleum and petroleum products, food and farm products, chemicals and related products, crude materials, manufactured goods, and manufactured equipment. Annual trends in inland waterways shipments show that freight traffic is static or declining. Overall demand for the inland waterways system is static, whereas demand for the rail and truck modes is growing. In recent years, the inland waterways system has transported 6 to 7 percent of all domestic cargo (measured in ton-miles). The truck mode has carried the greatest share of freight, followed by rail, pipeline, and water.

  2. The Most Critical Need for the Inland Waterways System Is a Sustainable and Well-Executed Plan for Maintaining System Reliability and Performance That Ensures Efficient Use of Limited Navigation Resources

Lost transportation time due to delays and lock unavailability (outages) is a cost to shippers and an important consideration in deciding on future investments. System-wide, about 80 percent of lost transportation time is attributable to delays. On average, 49 percent of tows in 2013 were delayed across the 10 highest-tonnage locks, with an average length of tow delay of 3.8 hours. While some delay is expected for routine maintenance, weather, accidents, and other reasons, lost transportation hours (delays and unavailabilities) can be affected by maintenance outages related to decreased reliability of aging machinery or infrastructure. Lost transportation hours also can be affected by capacity limitations, which may be intermittent or seasonal. About 12 percent of lost time on the inland waterways system is due to scheduled closures and about 8 percent is due to unscheduled closures. Thus, 20 percent of lost transportation time could be addressed with more targeted O&M resources. Directing O&M resources toward major facilities with frequent lockages and high volumes and where the lost time due to delay is significantly higher than the river average could improve navigation performance. Data are not available on the reasons for delay. Delays might be attributable to intermittent or seasonal peaks in volume due to weather, harvest, undercapacity, or other causes. Most lost time due to delay is at locks with periods of high demand often related to peaks in seasonal shipping, mainly for agricultural exports.
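The lost-time shares quoted above are internally consistent; a quick check in Python (shares as given in the report summary):

```python
# Shares of lost transportation time on the inland waterways system,
# as quoted in the report summary above.
delay_share = 0.80        # tow delays (the bulk of lost time)
scheduled_share = 0.12    # scheduled maintenance closures
unscheduled_share = 0.08  # unscheduled (breakdown) closures

# The 20 percent attributable to closures is the portion that
# better-targeted O&M spending could address directly.
outage_share = scheduled_share + unscheduled_share
```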

Furthermore, the inland waterways cover a vast geographic area, but the freight flows are highly concentrated. Seventy-six percent of barge cargo (in ton-miles) moves on just 22% of the 36,000 inland waterway miles. About 50 percent of the inland waterway ton-miles moves on six major corridors—the Upper Mississippi River, the Illinois River, the Ohio River, the Lower Mississippi River, the Columbia River system, and the Gulf Intracoastal Waterway—which represent 16 percent of the total waterway miles. Some inland waterway segments have minimal or no freight traffic. The nation needs a funding strategy that targets funds to waterway segments and facilities essential to freight transportation and away from places that are not as important. This “triage” is already occurring in USACE’s budgeting process.
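The concentration figures above imply that traffic density on the busy segments is several times the system average; a back-of-the-envelope sketch (the density ratios are an inference from the quoted percentages, not figures stated in the report):

```python
# Concentration of barge traffic on the inland waterways system:
# 76% of ton-miles move on 22% of the roughly 36,000 waterway miles,
# and 50% move on six corridors comprising 16% of the miles.
TOTAL_MILES = 36_000

high_use_miles = 0.22 * TOTAL_MILES   # about 7,900 miles carry 76% of traffic
corridor_miles = 0.16 * TOTAL_MILES   # about 5,800 miles carry 50% of traffic

# Traffic density on those segments relative to the system-wide average
# (share of ton-miles divided by share of route-miles):
high_use_density = 0.76 / 0.22        # roughly 3.5 times the average
corridor_density = 0.50 / 0.16        # roughly 3.1 times the average
```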

  3. More Reliance on a User-Pays Approach to Funding the Commercial Navigation System Is Feasible and Could Generate New Revenues for Maintenance While Promoting Economic Efficiency

In a climate of constrained federal funds, and with O&M becoming a greater part of the inland navigation budget, it is reasonable to examine whether beneficiaries could help pay for the system to increase revenues for the system and improve economic efficiency. Indeed, Congress, in the 2014 Water Resources Reform and Development Act (Section 2004, Inland Waterways Revenue Studies), called for a study of whether and how the various beneficiaries of the waterways might be charged. Federal general revenues presently cover most of the cost of the inland waterways system. Commercial navigation users, the primary identifiable beneficiaries of the system, pay a share of the construction costs through a barge fuel tax, but none of the costs of O&M.

A system more reliant on user payments would provide needed revenue for maintenance and promote economic efficiency. It also would be more consistent with the federal posture toward other freight transportation modes. Setting user charges to move the inland waterways system closer to economic efficiency would provide for more adequate maintenance for the important parts of the system and contribute to a more efficient national freight transportation system. Economic efficiency is promoted when user charges are first used to recover the O&M costs of the inland waterways and when user fees relate directly to the service provided. In the long run, user payments structured properly to include O&M and depreciation could also provide enough revenue to replace components of the system as they wear out. User charges for the inland waterways system can take the form of a dedicated tax such as the current fuel tax, a user fee, or some combination. The fuel tax can be an important source of revenue, but revenue potential alone is not sufficient for judging a funding strategy. User fees (segment- or facility-specific) instead of or in addition to the fuel tax are an option to consider as part of a comprehensive funding approach. Criteria for choosing among the user payment options include the following: promotion of efficient use of waterway resources, distribution of burden, ease of administration, promotion of user support for cost-effective expenditures, and requirements for congressional authorization. No single payment alternative offers a perfect choice; for example, the preferred option for achieving a policy goal may combine an increase in the barge fuel tax with other user fees.

To gain support from commercial navigation users, any additional revenues from users would need to be dedicated to the inland waterways system to ensure a source of funds for meeting system priorities and to respond to concerns of users that new payments intended for navigation could be reappropriated for other purposes. A revolving trust fund for maintenance would help ensure that all new funds collected are dedicated to inland navigation. Rules and conditions for managing the fund would be set by Congress if such a fund were authorized. The fund would be administered by USACE, and the Inland Waterways Users Board’s advisory role, which is currently limited to capital spending for construction, could be broadened to include spending for O&M and repairs. Amounts from the Inland Waterways Trust Fund are disbursed through congressional appropriations under current practice, which can result in delays in funding and deferred maintenance with increased costs. Direct administration of the trust fund would allow the spending of O&M funds as needed to provide reliable freight service and avoid the increased costs associated with deferred maintenance.

Because of constraints on its budget, USACE has already begun identifying waterways and facilities where commercial navigation is essential to national freight transportation or where significant commercial traffic continues. A policy and a process for identifying the components of the system essential for freight transportation are needed. Removing from the federal inland navigation budget the cost of parts of the system that are not essential for freight service may improve the prospects of shifting to a user-based funding approach for commercial navigation service. Alternative plans and potential funding mechanisms are available for segments and facilities that are deemed not essential to freight transportation but that may provide other benefits.

Deciding the amount beneficiaries would need to pay for the commercial navigation system and how to allocate the costs among beneficiaries would be complex tasks. The economic value of parts of the system to commercial navigation beneficiaries would need to be identified, and a systemwide assessment of the assets required to achieve a reliable level of freight service would need to be made (see next conclusion).

  4. Asset Management Can Help Prioritize Maintenance and Ascertain the Level of Funding Required for the System

Regardless of who pays for the system, a standard process for prioritizing spending of available funds is needed. The capital projects backlog is not a reliable indicator of the amount of funding required for the system. A modest amount of the backlog is for navigation projects. A portion of the navigation backlog includes major rehabilitation to maintain the system, but it does not include O&M. Furthermore, the navigation backlog may include projects that are a lower priority for spending. Congress has long authorized and appropriated USACE capital projects on a project-by-project basis. A benefit–cost analysis prepared by USACE is the primary source of technical information that Congress uses during the authorizations process in deciding when spending is justified for capital projects. While benefit–cost analyses have been used for determining whether a project meets a minimum threshold for funding, they have not been used to rank projects, and the result has been far more projects being authorized than can be afforded within the constraints of the budget. A method for prioritizing projects on the basis of the service needs of the system would be more useful than an attempt to estimate and seek funding for the existing backlog.

The advanced age of locks is often used to communicate funding needs for the inland waterways system. Age, however, is not a good indicator of lock condition. A substantial number of locks have been rehabilitated, which would be expected to restore performance to its original condition if not better.

Dating the age of assets from the time of the last major rehabilitation, as is done for highway infrastructure such as bridges, would be more accurate. USACE does not, however, publish consistent records of rehabilitation dates for its various lock and dam assets. Making such information available to policy makers, alongside information about the reliability and performance of the system, could improve the efficient allocation of available resources.

An asset management program focused on economic efficiency, fully implemented and linked to the budgeting process, would prioritize maintenance spending and ascertain the funding levels required for reliable freight service. A well-executed program of asset management would promote rational and data-driven investment decisions based on system needs and minimize the broader influences that affect the budgeting process. USACE has adopted a generally appropriate framework for asset management that is mostly consistent with the economically efficient asset management (EEAM) concept described in Chapter 4, but it is not yet fully developed or deployed across USACE districts. The framework recognizes the importance of economic consequences for strategic investment instead of assuming that all navigation infrastructure needs to be maintained at its original condition. The approach appropriately includes assessment of three main elements that follow from EEAM: the probability of failure of the infrastructure; infrastructure usage (demand), defined as whether the waterway has low, moderate, or high levels of freight traffic; and the economic consequences of failure to shippers and carriers.

Whereas maintenance is a priority for the system, decisions about whether to invest in construction for capacity expansion at key bottlenecks and how to prioritize these investments against other investments for the system will continue to arise. Decisions about whether investments in construction to expand capacity at the corridor level are economically justified would require more information about delays and the ability of nonstructural alternatives or smaller-scale structural improvements (to reduce lock processing times) to achieve the desired level of service. Collection of data and development of performance metrics would enhance understanding of whether delay problems could be most efficiently addressed by more targeted O&M, traffic management, capacity enhancement, or some combination of these. Once an asset management approach was fully developed and applied, it could be used to prioritize allocation of resources for O&M and indicate areas where major rehabilitation or other capital spending should be considered.


| Commodity | Total barge | % |
| --- | --- | --- |
| Coal | 182.7 | 24.77 |
| Petroleum and petroleum products | 252.4 | 34.22 |
| Chemicals | 70.4 | 9.54 |
| Crude materials | 111.5 | 15.12 |
| Primary manufactured goods | 31.0 | 4.20 |
| Food and farm products | 76.1 | 10.32 |
| All manufactured equipment | 12.2 | 1.65 |
| Other | 1.3 | 0.18 |
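The percentage column can be recomputed from the traffic figures; a quick consistency check (the source does not restate the units here, but only relative shares matter for the percentages):

```python
# Recompute the percentage column of the barge commodity table above
# from the traffic figures; only relative shares matter for the check.
traffic = {
    "coal": 182.7,
    "petroleum and petroleum products": 252.4,
    "chemicals": 70.4,
    "crude materials": 111.5,
    "primary manufactured goods": 31.0,
    "food and farm products": 76.1,
    "all manufactured equipment": 12.2,
    "other": 1.3,
}
total = sum(traffic.values())  # 737.6
shares = {name: round(100 * amount / total, 2) for name, amount in traffic.items()}
# shares reproduces the table's percentage column, e.g. coal -> 24.77
```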


Apalachicola, Chattahoochee, and Flint River System: A Multiple-Purpose River System Not Reflecting Today’s Economic and Environmental Values

The Apalachicola–Chattahoochee–Flint Rivers basin originates in northeast Georgia, crosses the state boundary into central Alabama, and then follows the Alabama state line south until it terminates in Apalachicola Bay, Florida. The basin covers 50 counties in Georgia, 10 in Alabama, and eight in Florida. Extending a distance of approximately 385 miles, the basin drains 19,600 square miles. The Apalachicola, Chattahoochee, and Flint River Waterway consists of a channel 9 feet deep and 100 feet wide from the mouth of the Apalachicola River to the head of navigation at Columbus, Georgia, for the Chattahoochee River and at Bainbridge, Georgia, for the Flint River. The total waterway distance is 290 miles, with a lift of 190 feet accomplished by three locks and dams. Provision of navigation services is just one of several purposes for which the system’s operations are authorized; others are water supply, flood control, hydropower generation, recreation, and management of water releases for several nonfederal power generation dams. Commercial use of the waterway has declined steadily over time and now is minimal, mainly haulage of sand and gravel. According to the Waterborne Commerce Statistics Center, no commercial traffic occurred over the 5 years from 2008 to 2012. Nevertheless, channel maintenance of the lower reaches of the waterway requires dredging and clearing, which has severe adverse impacts on the ecological health of Apalachicola Bay, one of the most economically productive water bodies in the United States. While these efforts have been strongly opposed by the state of Florida through regulatory and other measures such as not providing dredged material disposal areas, USACE has found ways to provide navigation services. 
In addition to the financial outlays by the federal government for navigation, operation of the upstream reservoirs to provide navigation “windows” uses releases of water that are highly valued by other users, including municipalities and lake recreationists. Because the cost of O&M assigned to navigation is borne by federal taxpayers, opposition to continued provision of navigation services comes largely from environmental organizations and the state of Florida. Furthermore, the lack of navigation benefits is only a small issue in the conflicts over the operation of this major multiple-purpose reservoir system. Growing demands for municipal water supply in Georgia have led to “water wars” among the states for decades, which have not been successfully addressed administratively by USACE or by Congress.




Understanding peak oil theory, 2005 U.S. House hearing

House 109-41. December 7, 2005. Understanding the Peak Oil theory. U.S. House of Representatives.

[What follows are excerpts from the 95-page transcript of this hearing, the only one about peak oil and the possibility that peak production may happen soon. It is also the only hearing where most of the speakers explaining peak oil, including Representative Roscoe Bartlett, were scientists. From now on, think-tank experts and CEOs of large companies, not scientists, promise peak oil production is decades away and that the U.S. has 100 years of energy independence. Has Congress only invited bureaucrats rather than scientists and engineers since 2005 so that after the next energy crisis they can say they knew nothing? Though of course they know we’re in deep trouble — see the March 7, 2006 “Energy Independence” Senate hearing.  Alice Friedemann   www.energyskeptic.com ]

RALPH M. HALL, TEXAS. We are having this hearing today to learn more about peak oil theory, to hear different opinions, and to learn what we can do about it, if anything. While some theorists believe that we have reached our peak, the point at which the rate of world oil production can no longer increase, there are others who tell us that we are not going to peak any time soon, and others who still believe oil is continuously being created and will therefore never peak. We have not been ignoring a possible peak in oil production, and the energy bill that was signed into law in August had provisions that address oil usage by promoting conservation and conventional and unconventional production. Whether or not we are reaching our peak, it seems responsible to continue in the vein we are going in by continuing to work on ways to conserve energy while increasing our domestic supply of oil and using research to develop substitutes for conventional oil.


JOE BARTON, CHAIRMAN, Committee on Energy & Commerce. I asked Chairman Hall to hold this hearing on the “peak oil” theory at Congressman Bartlett’s request. Congressman Bartlett is an active and persuasive advocate for peak oil theorists and I look forward to hearing his views and perspectives on peak oil.

TOM UDALL, NEW MEXICO. Mr. Bartlett and I started the House Peak Oil Caucus to bring immediate and serious attention to this issue. The continued prosperity of the United States depends on the Nation taking immediate and intelligent action concerning Peak Oil.

I have had a chance to review the testimony of my colleague, Mr. Bartlett, and also that of Mr. Aleklett and I agree with their analysis that the peak in oil production will occur in the next two decades and potentially as early as 2010. The central theme here is there is not much time to act.

Our economy and way of life are dependent on cheap oil. In many ways, cheap oil is responsible for our prosperity. Since oil provides about 40% of the world’s energy, a peak in global oil production will be a turning point in human history. Oil and natural gas literally transport, heat, and feed our country.

Therefore, we must act immediately to diversify our energy supplies to mitigate the economic recession and social and political unrest that will undoubtedly accompany the peak in oil and natural gas production if we do not act.

The United States’ demand for oil continues to increase by about 2% a year, and global demand has increased faster than production. The once substantial cushion between world oil production and demand has decreased. This phenomenon has increased the price of oil, and consequently huge amounts of American money, up to $25 million per hour, go abroad to pay for foreign oil. And as many people are now aware, some of this money goes to governments and groups who are considered a threat to our national security. Middle Eastern countries flush with oil dollars help fuel the terrorism we are fighting.

Some say that market forces will take care of the Peak Oil problem. They argue that as we approach or pass the peak of production the price of oil will increase and the alternatives will become more competitive.

But no available alternative is anywhere near ready to replace oil in the volumes we use it today.

The main problem with the market force argument is that current U.S. oil prices do not accurately reflect the full social costs of oil consumption. Currently in the United States, Federal and State taxes add up to about 40 cents per gallon of gasoline. A World Resources Institute analysis found that fuel-related costs not covered by drivers are at least twice that much. The current price of oil does not include the full cost of road maintenance, health and environmental costs attributed to air pollution, the financial risk of global warming, or the threats to national security.

Over the past 100 years, fueled by cheap oil, the United States has led the revolution in the way the world operates.

Replacing this resource in a relatively short time is not only an incredible challenge, but also imperative to the survival of our way of life.

We must produce effective policies that create a new generation of scientists devoted to changing the way we produce energy. We must also commit to decreasing our demand for oil. We can start by increasing efficiency. The United States consumes 25% of the world’s oil. Of that 25%, two-thirds is used for transportation. Hence, transportation in the U.S. accounts for 16.5% of the world’s oil consumption.

It is obvious that more efficient transportation is one of the keys in reducing our demand for oil. Transporting goods and people by rail is at least five times as efficient as automobiles. Therefore, we must revive and reinvest in our passenger and freight rail system. A modest increase in fuel efficiency of our automobile fleet from 25 miles per gallon to 33 miles per gallon using existing technology would decrease our demand for oil by 2.6 million barrels a day or about 1 billion barrels a year. However, the turnover rate for the automobile fleet is 10 to 15 years, therefore, we must start immediately.
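The fuel-savings arithmetic here checks out, since fuel use scales inversely with miles per gallon; a short sketch (the implied baseline demand is an inference from the quoted figures, not a number from the testimony):

```python
# Fuel saved by raising average fleet efficiency from 25 to 33 mpg:
# fuel use scales as 1/mpg, so the savings fraction is 1 - 25/33.
savings_fraction = 1 - 25 / 33                 # about 0.24 (24% less fuel)

daily_savings_bbl = 2.6e6                      # barrels/day, from the testimony
annual_savings_bbl = daily_savings_bbl * 365   # about 0.95 billion barrels/year

# Baseline light-vehicle fuel demand implied by those two numbers
# (an inference, not a figure quoted in the hearing):
implied_baseline_bbl_per_day = daily_savings_bbl / savings_fraction  # ~10.7 million
```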

Simple, everyday things like automobile maintenance also increase efficiency. According to the Department of Energy, proper inflation of car tires can increase fuel efficiency by 3%, translating to the equivalent of 100 million barrels of oil per year. The buildings in which we work and live are terribly inefficient. We could easily reduce their energy consumption by one half. We must immediately weatherize and make more energy efficient tens of millions of buildings. Our bold new initiative must instill these ideas in the American consciousness.

The sooner we start the smaller our sacrifices will be. These tasks will not be easy but I am confident that we will achieve our goal for we have little in the way of alternatives.

The theory of Peak Oil states that, like any finite resource, oil will reach a peak in production after which supply will steadily and sharply decrease.

In 1956, Shell Oil geologist M. King Hubbert predicted that oil production in the contiguous United States would peak in about 1970 and be followed by a sharp decline. At the time, many dismissed his predictions as false, but history shows they were remarkably accurate.

A growing number of geologists, economists, and politicians now agree that the peak in the world’s oil production is imminent, predicted to occur within one or two decades. Some disagree with this prediction, calling it a doomsday scenario, and say that technological advances will buy us more time before we reach peak production. Theirs, however, is not the consensus view, and even they agree that a peak in the world’s oil production is inevitable.

The strongest evidence that the peak in world oil production is imminent is that for the last thirty years, production of oil has exceeded discovery of new oil resources.

The reason for this is relatively simple. Oil is a limited commodity and the large oil fields with easily extractable resources were naturally the first ones to be exploited.

These fields were found thirty or forty years ago in the Middle East (Saudi Arabia, Iraq, Iran and the United Arab Emirates) and are still the main suppliers of the world’s oil. As the finite supply of oil in these deposits diminishes, exploration for new supplies continues. However, new discoveries tend to be small and rapidly exhausted, making them less economically viable.

Meanwhile, global demand for oil, which is at an all-time high, continues to rise. The United States demand continues to increase by about 2% per annum. Also, with the globalization of the market economy and increases in oil-driven industrial production in Asia, new consumers are contributing to rising demand. To meet rising demand oil companies must increase production, accelerating us towards the peak.

The United States possesses only 2% of the world’s oil reserves and produces only 8% of the world’s oil. Therefore, we are not in a position to control the world’s oil production.

Oil is a very powerful resource with an incredibly high energy density. For example, the energy in just one barrel of oil (42 gallons) is equivalent to eight people working full time for a year.

Over the past 100 years, fueled by affordable oil, the United States has led a revolution in the way the world operates. For example, petroleum-based fertilizers are used to inexpensively grow remarkable amounts of food and airline transportation allows us to reach virtually anywhere in the world within 24 hours helping to create a global economy. However, the sustainability of the oil-based economy is rapidly decreasing.

Reaching a peak in oil production has the potential to destroy our economy and cause great social and political unrest.

And the carbon released using fossil fuels is contributing to dramatic changes in the earth’s climate.

Therefore, replacing this resource in a relatively short time is not only an incredible challenge but also imperative to the survival of our way of life.

ROSCOE BARTLETT, Maryland. Thirty of our leading citizens, Boyden Gray, Robert McFarlane, Jim Woolsey, and 27 others, including a lot of retired four-star admirals and generals, wrote a letter to the President saying: Mr. President, the fact that we have only 2% of the known reserves of oil, use 25% of the world’s oil, and import nearly two-thirds of what we use is a totally unacceptable national security risk. We need to do something about that. I would submit that if you do not believe that there is such a thing as Peak Oil, you need to understand that this really is a big national security risk. And the things that we need to do to transition to alternatives so that we are not so dependent on foreign oil are exactly the same things that we need to do to attenuate the effects of Peak Oil.

We have only 2% of the world’s oil reserves but we are producing 8% of the world’s oil which means that we are pumping our oil roughly four times faster than the rest of the world. We are really good at pumping oil. These data have made me opposed to drilling in ANWR and offshore, because if we have only 2% of the known reserves of oil, how is it in our national security interest to use up that little bit of oil we have as quickly as possible? If we could pump the offshore oil and ANWR oil tomorrow, what would we do the day after tomorrow? And there will be a day after tomorrow. I would like to husband these resources. This is very much like having money in the bank that is yielding really high interest rates. If you have money in the bank yielding really high interest rates, you probably would leave it there and that is what I think we need to do for the moment with this oil.

To put this discussion in context, we really need to go back about six decades to the mid-1940s and 1950s. A scientist at the Shell Oil Company, M. King Hubbert, was looking at oil fields, their exploitation and exhaustion. He noted that they all tended to follow a rough bell curve, and he theorized that if he could add up all those little bell curves he would have one big bell curve from which he could predict when we would reach our maximum production in this country. He made a prediction in 1956 that we would peak at about 1970, which was correct. We are now about halfway down what many people call Hubbert’s peak. Texas has been a big contributor to oil in our country. And notice that we did reach maximum oil production in 1970. And in spite of Prudhoe Bay, which produced a quarter of the oil that we were pumping in our country, it has been pretty much downhill since Prudhoe Bay peaked.

I remember the fabled Gulf of Mexico oil discovery that was supposed to solve our oil problem for the foreseeable future. But it didn’t. The observation was made that we are not running out of oil, and that is true. There is still a lot of oil there. As a matter of fact, worldwide there is probably about as much oil yet to be recovered as we have recovered so far.

The same M. King Hubbert who predicted that we would peak in 1970, and he was correct there, predicted that the world peak would come at about now. If M. King Hubbert was right about our country, why shouldn’t he be right about the world? And we have known for at least 25 years that M. King Hubbert was right about our country. By 1980, when President Reagan came to office, we were already 10 years down the other side of Hubbert’s peak and we knew very well that we were sliding down Hubbert’s peak. The response was to drill more wells, but we really did not find any more oil. You cannot find what is not there. You cannot pump what you have not found.

Most of the discoveries of oil occurred 30 or 40 years ago. For the last two and a half decades there has been an ever decreasing discovery of oil. Since the early 1980’s, we have been using more oil than we have found. It is obvious you cannot pump more oil than you have found.

If we have enhanced oil recovery we can recover it more quickly. But all that does is cause us to reach a higher peak a little later, and change the shape of the down slope to fall off more steeply.

Mr. Green mentioned crying wolf, and yes, we have cried wolf several times in the past. But in the parable the wolf did come. I think he ate all the sheep and the people. So one day the wolf will come, and what we are trying to do is avoid the kind of catastrophe that they had in the parable.

Energy Return on Investment (EROI)

When we are looking at replacing the fossil fuels we have been using, you have to look at the energy profit ratio. We are now producing oil from the oil sands in Canada at about $30 a barrel, maybe less than that, when it is selling at $60. That is really a good dollar profit ratio. But I understand that they are now using more energy from natural gas than they get out of the oil they produce, so the energy profit ratio is less than one. That makes sense for them because they have a lot of gas, it is cheap, it is hard to transport to other places, and oil is in high demand and they can sell it for twice the production cost. But at the end of the day, with the limited energy resources in the world, we really should not be producing energy at an energy profit ratio below one.
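The distinction between dollar profit and energy profit can be made concrete with a small sketch. The energy figures below are illustrative assumptions, not measured field data; only the $30 cost and $60 price come from the testimony.

```python
# Energy profit ratio (EROI): energy delivered divided by energy invested.
# An EROI below 1.0 means the process is a net consumer of energy,
# even when the dollar accounting shows a healthy profit.
def eroi(energy_out, energy_in):
    return energy_out / energy_in

barrel_energy = 6.1    # GJ of energy in a barrel of synthetic crude (approx.)
gas_energy_used = 7.0  # GJ of natural gas burned per barrel (hypothetical)

ratio = eroi(barrel_energy, gas_energy_used)
print(f"EROI = {ratio:.2f}")  # EROI = 0.87, i.e. a net energy loss
print("net energy source" if ratio > 1 else "net energy sink")

# Yet the dollar profit can still be large when gas is cheap and oil dear:
dollar_profit = 60.0 - 30.0   # selling price minus production cost, $/barrel
print(f"${dollar_profit:.0f}/barrel profit despite EROI < 1")
```

The point of the sketch is that market prices and energy balances can point in opposite directions, which is exactly the situation the testimony describes for the oil sands.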

Exponential Growth

Two hundred and fifty years of coal: I wish we would stop saying that unless we qualify it by saying at present use rates, because as soon as you increase use just 2% a year, it shrinks to about 85 years. And if you use energy to convert the coal to a gas or a liquid, you have now shrunk it to 50 years.

Yeah coal is there, it is a finite resource. We really need to husband it, because it is not 250 years worth.
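That shrinkage is straightforward to check. If a resource would last N years at today's rate and use instead grows exponentially at rate g, it is exhausted at the time T when the accumulated use reaches N years' worth, which gives T = ln(1 + gN)/g. At exactly 2% growth this continuous model gives about 90 years; growth a little above 2% reproduces the 85-year figure quoted above.

```python
import math

def exhaustion_years(years_at_current_rate, growth):
    """Years until a resource lasting N years at today's rate is gone
    when use grows exponentially: solve (e**(g*T) - 1)/g = N for T."""
    return math.log(1.0 + growth * years_at_current_rate) / growth

print(round(exhaustion_years(250, 0.02), 1))   # 89.6 -- not 250
print(round(exhaustion_years(250, 0.022), 1))  # 85.1 at slightly higher growth
```

Either way, modest compound growth cuts the "250 years of coal" figure to well under a century, which is the speaker's point.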

Albert Einstein said that exponential growth, the power of compound interest, was the most powerful force in the universe. If you have not heard Dr. Albert Bartlett’s hour-long lecture on energy, pull it up and read it. It is the most interesting one-hour lecture I have ever heard. One of his examples is an ancient kingdom where the king was so pleased with a subject he promised to give him anything reasonable he asked. His subject asked for a grain of rice on the first square of a chessboard, two on the second, and to double the grains on each subsequent square. And the king thought, stupid fellow, I would have given him something really meaningful and all he asked for is a little bit of rice on a chessboard. The total number of grains on all 64 squares would be 18,446,744,073,709,551,615, weighing about 461,168,602,000 metric tons, a mountain of rice higher than Mount Everest and over one thousand times the global production of rice in 2010. That is the power of exponential growth.
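The chessboard arithmetic checks out. The sketch below totals the grains and, assuming roughly 25 milligrams per grain of rice (my assumption, not a figure from the hearing), converts the pile to metric tons.

```python
# One grain on the first square, doubled on each of the remaining 63.
total_grains = sum(2 ** k for k in range(64))   # = 2**64 - 1
print(total_grains)                             # 18446744073709551615
assert total_grains == 2 ** 64 - 1

# At ~25 mg per grain (an assumed figure): 25 mg = 25e-6 kg,
# and 1000 kg make one metric ton.
tonnes = total_grains * 25e-6 / 1000
print(f"{tonnes:.3g} metric tons")              # 4.61e+11, ~461 billion tons
```

Sixty-three doublings turn a single grain into hundreds of billions of tons, which is why a "small" steady growth rate is so deceptive.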

I would also like to note that the population curve of the world roughly follows the production curve for oil. We started out with about a billion people and now we have about 7 billion people almost literally eating oil and gas because of the enormous amounts of energy that go into producing food. Almost half the energy that goes into producing a bushel of corn comes from the natural gas that we use to produce the nitrogen fertilizer.

Just a comment or two about energy density and how difficult it is going to be to replace oil. And by the way, we are about 100 years into the age of oil. In another 100 years or so, we will be through the age of oil. In 5,000 years of recorded history, 200 or 300 years is just a blip, just a tick in the history of man. We found this incredible wealth under the ground. And rationally what we should have done as a civilization is to ask ourselves what we would do with this incredible wealth to do the most good for the most people over time. Each barrel represents about 25,000 man-hours of effort, the equivalent of having 12 people work all year for you. And today at the pump, with gas prices about $2 a gallon, those 42 gallons cost you less than $100. That is incredible.
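The man-hour equivalence can be checked on the back of an envelope. The figures below are my assumptions, not numbers from the hearing: a barrel holds roughly 5.8 million BTU (about 1,700 kWh) of heat, and a laborer delivers perhaps 75 W of sustained useful work; estimates of the equivalence vary with the assumed human output.

```python
# Back-of-envelope check with assumed figures (not from the hearing):
BARREL_KWH = 1700.0       # ~5.8 million BTU of heat in one barrel of oil
HUMAN_KW = 0.075          # ~75 W of sustained useful human work output
WORK_YEAR_HOURS = 2000.0  # one person's full-time working year

man_hours = BARREL_KWH / HUMAN_KW
person_years = man_hours / WORK_YEAR_HOURS
print(round(man_hours))     # 22667 hours of hard labor per barrel
print(round(person_years))  # 11 -- close to "12 people working all year"
```

Under these assumptions a barrel is worth on the order of 20,000 to 25,000 hours of labor, consistent with the comparison in the testimony.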

If you have trouble getting your arms around that, imagine how far a gallon of gas or diesel takes your car or truck, and how long it would take you to push it the same distance, to get some idea of the energy density. If you work really hard in your yard all day, I can get more work out of an electric motor with less than 25 cents’ worth of electricity. That gives you some idea of the incredible energy density in these fossil fuels. What wealth it was we found under the ground. And almost like children who found the cookie jar, we had no restraint; we tried to use it up as quickly as we could. And there will be an end to the age of oil. One day there will be no more economically feasible recovery of oil, gas, and coal.

The cheapest oil we buy is the oil that we do not use. If we are going to have energy to invest in alternatives, it will take three things: money, time, and energy. Money we will not worry about; we will just borrow it from our kids and our grandkids. But you cannot borrow time and you cannot borrow energy from our kids and our grandkids, and we are going to have to make big investments of both to get these alternatives. In order to have energy to invest, we are going to have to make enormous conservation efforts now to free up some oil, because if in fact we are reaching peak oil, once we have reached it, all the oil that is produced will be needed by the world’s economies and none will be available to invest in the alternatives.

So I would suggest that maybe the goal would be to find a way to have high quality of life without increasing energy use.

We are very much like the young couple whose grandparents have died and left them a big inheritance, and they have now established a lifestyle where 85% of the money they spend comes from their grandparents’ inheritance and only 15% from their income. But they look at their grandparents’ inheritance and the amount they are spending, and it is going to be gone before they retire. So they are clearly going to have to do one of two things: either spend less money or make more money. Similarly, 85% of the energy we use today comes from fossil fuels, and only 15% of the energy comes from the alternatives.

By and by, all of the energy will need to come from the alternatives. Of the 15% that is not fossil fuels, a bit more than half comes from nuclear. This probably could and should grow. But that will not be with the water reactors we have now, because fissionable uranium is in finite supply in the world. We will have to move to breeder reactors, with the problems that come with that.

I think planning to solve our energy future with fusion is a bit like planning to solve our personal economic problems by winning the lottery.

Of the other 7% renewable energy almost half of that is conventional hydro. We have maxed out hydropower in our country. We have dammed up all the rivers that could be dammed and maybe a few that we should not have dammed.

The next biggest source of alternative energy is wood. It is not the West Virginia hillbilly; it is the timber industry and the paper industry burning what would otherwise be a waste product. And then the next biggest one is waste.

And now we are down to the things that we will transition to in the future, such as solar. Solar has been growing at 30% a year, which doubles in about 2.5 years. It was 0.07% of our energy in 2000 and now it is 0.28%. Big deal. That is a long way from any meaningful contribution. The same thing is true of wind.
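The doubling arithmetic behind that remark is worth spelling out: at 30% annual growth the doubling time is ln 2 / ln 1.3, about 2.6 years, so two doublings carry solar's share from 0.07% to 0.28%, just as the quoted figures show.

```python
import math

def doubling_time(growth_rate):
    """Years to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + growth_rate)

print(round(doubling_time(0.30), 1))  # 2.6 years at 30%/yr growth
# Two doublings carry solar's share from 0.07% to 0.28%:
print(round(0.07 * 2 ** 2, 2))        # 0.28
```

The flip side of the speaker's "big deal" is that the same relentless doubling, if sustained, eventually turns a tiny share into a large one; the question is whether it can be sustained long enough.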

Just a word of caution about energy from agriculture, the world has to eat. If we will eat the corn and the soybeans that the pig and the chicken and steer would have eaten, maybe we can get more energy from agriculture. And be careful, Mr. Chairman, about taking biomass to produce energy because we are barely able today to maintain the quality of our topsoils without returning much of that biomass to create humus in the soil.

Geothermal we need to exploit as much as we can.

We need the kind of commitment we had in World War II. No new cars were made for three years. They rationed gasoline. They rationed tires. They rationed sugar. You brought the grease from your kitchen to a central depository. I think we need a program that’s a combination of putting a man on the moon, with the urgency of the Manhattan Project and the involvement of every one of our citizens to avoid a bumpy ride.

Perhaps Matt Savinar is more pessimistic than he needs to be, but he is not an idiot. He says, dear reader, civilization as we know it is coming to an end soon. I hope not, Mr. Chairman.

At the start of the age of oil, world population was one billion; now it’s seven billion. The population of the United States is almost 300 million and increasing by nearly 30 million people every decade. Nitrogen fertilizer is made from natural gas. In a very real sense, oil feeds the world.

I thank the Committee for scheduling this hearing and inviting distinguished witnesses to discuss House Resolution 507 which expresses “the sense of the House of Representatives that the United States, in collaboration with other international allies, should establish an energy project with the magnitude, creativity, and sense of urgency that was incorporated in the “Man on the Moon” project to address the inevitable challenges of “Peak Oil.”

Shell Oil Company geologist M. King Hubbert first identified “Peak Oil” in the 1940’s and 1950’s. He discovered that oil field production follows a bell curve. Oil flows slowly at first, then rapidly increases, reaches a maximum or peak when half of the oil has been extracted, and then production declines rapidly. Adding the curves from individual fields in the United States, Hubbert projected in 1956 that “Peak Oil” for the United States would occur in 1970. He was right. Despite sharp increases in prices and better technology, U.S. oil production peaked in 1970 and has declined every year since.

Just as Hubbert was right about the United States, peak oil has occurred in other countries and global peak oil will happen. Oil production is declining in 33 of the world’s 48 largest oil-producing countries.

U.S. natural gas production has also peaked. The United States is now the world’s largest importer of both oil and natural gas. From importing one third of the oil we used before the Arab Oil Embargo, the U.S. now imports about two thirds of the oil we use. After U.S. oil production peaked in 1970, our country started and we are continuing to accelerate down a path of growing energy insecurity.

The United States used to be the world’s largest oil producer. After the U.S. peaked in 1970, Saudi Arabia became the world’s largest single oil producer and the leader of OPEC nations which became the world’s dominant oil suppliers.

Global “Peak Oil” has not yet occurred, but will.

I met with the President at the White House on June 29, 2005 and was impressed by his understanding of the need for our government to act now to prepare for global “Peak Oil”.

On October 5, 2005, Department of Energy Secretary Samuel Bodman requested the National Petroleum Council to study “Peak Oil” and the oil and natural gas industry’s ability to produce enough oil and natural gas at prices that would not cripple the American economy. Our country’s leadership is slowly becoming aware of “Peak Oil”.

However, it is my hope that because of hearings like this and the testimony given by some of our most prominent figures, our country’s leadership will start to see the urgency of addressing this issue and make it a centerpiece of their agenda.

For example, in testimony before the U.S. Senate Committee on Foreign Relations on November 16, 2005, former CIA Director James Woolsey discussed “seven reasons why dependence on petroleum and its products for the lion’s share of the world’s transportation fuel creates special dangers in our time”:

  1. Transportation infrastructure is dependent upon oil.
  2. The Middle East will continue to be the low-cost and dominant petroleum producer.
  3. Petroleum infrastructure is highly vulnerable to terrorist and other attacks.
  4. The possibility is increasing of embargoes or supply disruptions under regimes that could come to power in the Greater Middle East.
  5. Oil revenue transfers fund terrorism.
  6. Current account deficits for a number of countries create risks ranging from major world economic disruption to deepening poverty, risks that could be reduced by reducing oil imports.
  7. Oil used for transportation produces greenhouse gases that increase the risk of climate change.

The planes, ships and trucks of our military run on oil.

Tight supplies and high oil prices threaten our national security and the Department of Defense is responding. For instance, in an October 11, 2005 memo on “Assured Fuels,” Assistant Secretary of the Navy for Research, Development and Acquisition John J. Young, Jr., endorsed a recommendation by the Naval Research Advisory Committee in its “2005 Summer Study of Future Fuels” to set the goal of the Navy to become independent from reliance on foreign oil by 2020. Secretary Young explained, “In light of the current painful reality of DoD fuel price adjustments, and the risks to our fuel sources posed by natural disasters and terrorist threats, I believe we need to act on this recommendation with a sense of urgency.”

For many years, Saudi Arabia maintained enough production flexibility to leverage oil prices at around $20 per barrel. In recent years, the cushion between world supply and demand has been whittled away. Three years ago, in November 2002, the prompt price for immediate delivery of oil was $27 per barrel NYMEX WTI (New York Mercantile Exchange – West Texas Intermediate). The price for contracts on 10-year long term derivatives combining NYMEX and forward swaps market transactions was between $22 and $24 per barrel. Beginning in December 2003, the price for 10-year contracts began a sharp upward trend that has not abated. The change was prompted by an increase in long term contract purchases by the Chinese and the judgment by market participants that Saudi Arabia could no longer maintain sufficient extra capacity to drive the price of oil down. In November 2005, the prompt price for immediate delivery of oil was $60 per barrel after a spike to $71 per barrel after Hurricane Katrina. The price for 10-year contracts was $59 per barrel. In the past three years, the prompt price more than doubled, from $27 per barrel to $60 per barrel. The 10-year price almost tripled, from $22 per barrel to $59 per barrel. The world’s largest banks are the primary transactors in the private forward swaps markets on behalf of clients who are among the world’s largest and best financed institutions and companies. Those price increases in oil, the emergence of a well-defined forward swaps market in oil and the larger increase in the 10-year price than in the prompt price represent a dramatic change in world oil markets.

A December 1, 2005 CRS report (prepared at my request) documents and ranks countries that experienced declines in oil production between 2003 and 2004. Despite the increase in oil prices, United Kingdom oil production declined 228 thousand barrels per day. United States oil production declined 159 thousand barrels per day. Australia declined 83 thousand barrels per day. Norway declined 76 thousand barrels per day. Indonesia declined 57 thousand barrels per day. Argentina declined 50 thousand barrels per day. Other countries with production declines included Egypt, Oman, Syria, Yemen, Brazil, Colombia and Italy. At the same time, demand for oil is increasing. China and India are increasing their oil consumption. China increased consumption 51.3% and is the world’s second largest importer of oil, behind the United States. Developing countries around the world are increasing their demand for oil at rapid rates. For example, the average consumption increase, by percentage, from 2003 to 2004 for the countries of Belarus, Kuwait, China, and Singapore was 15.9%.

In order to keep energy costs affordable, improve the environment, safeguard economic prosperity, and reduce the trade deficit, the United States must move rapidly to increase the productivity with which it uses fossil fuels, and to accelerate the transition to renewable fuels and a sustainable, clean energy economy.

There is no one silver bullet to solve this problem. Only through a combination of conservation, improved efficiency, and a combination of alternate sources of energy for transportation and ultimately renewable sources of energy (i.e. wind, solar, geothermal, harnessing ocean tides) will we be able to meet the energy demands of the future.

How and when we as individuals and government leaders will respond to global “Peak Oil” is what we need to address immediately. I believe global “Peak Oil” presents our country with a challenge as daunting as the one that faced the astronauts and staff of the Apollo 13 program. Contingency planning, training, incredible ingenuity, and collaboration to solve the problem brought the Apollo 13 astronauts back home safe. The U.S. government must lead and inspire Americans’ unmatched ingenuity and creativity to end our unacceptable and unsustainable energy vulnerability and to prevent a worldwide economic tsunami from global “Peak Oil”.

We in the Congress must work with and on behalf of our constituents to debate, develop and start implementing appropriate policy changes and legislation to make Americans more secure, as we did in the 1940’s with the Manhattan Project. The federal government took an active role in funding a crash program, in partnership with the United Kingdom and Canada, to develop the first nuclear weapon in order to defeat Nazi Germany. Now, we again must adopt a crash program, this time in cooperation with our international allies. We must overcome the obstacles we can foresee and those that will emerge. “Peak Oil” will inflict unprecedented pressure upon our citizens and strain the capability of our social, economic, and political institutions. We must meet the challenges of “Peak Oil” with the tools we have available. We have no choice.

The Hydrogen Economy

Hydrogen, of course, is not a source of energy. We will always use more energy producing hydrogen than we get out of it because we are not going to suspend the first and second laws of thermodynamics.

To understand what hydrogen will do for us, please think of it as a battery. It is just a way of carrying energy from one place to another. Hydrogen is not a solution to our energy problem; it is simply a way of moving energy. For instance, you cannot put a trunk full of coal in your car and go down the road. But you can use coal to produce electricity, the electricity can split water into hydrogen and oxygen, and you can then use the hydrogen in a fuel cell to take your car down the road. So, indirectly, you can use coal to take your car down the road.
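Treating hydrogen as a battery makes the losses easy to see: each conversion step keeps only a fraction of the energy, and the fractions multiply. The efficiencies below are rough illustrative assumptions, not figures from the testimony.

```python
# Hydrogen as an energy carrier: every conversion step keeps only a
# fraction of the energy, and the fractions multiply. Illustrative
# efficiencies only (assumed, not from the hearing).
steps = {
    "coal plant -> electricity": 0.35,
    "electrolysis -> hydrogen":  0.70,
    "fuel cell -> wheel power":  0.50,
}

overall = 1.0
for name, efficiency in steps.items():
    overall *= efficiency

print(f"{overall:.0%} of the coal's energy reaches the wheels")  # 12%
# Far below 100%: hydrogen carries energy; it never creates any.
```

Whatever the exact step efficiencies, the product is always well below one, which is the thermodynamic point the testimony makes: hydrogen moves energy, it does not supply it.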

I would remind you that there are some things even God cannot do. God cannot make a square circle. There are not infinite resources here, so you have to qualify what the marketplace can do. But as you will hear in later testimony from SAIC, none of the alternatives can be ramped up quickly enough to make up the slack [of declining oil production]. That is the reality. We should have started 20 years ago if we wanted to make sure we were not going to have any dislocations.


KJELL ALEKLETT, PH.D., PROFESSOR, Department of Radiation Sciences, Uppsala University.

By choosing the wording Peak Oil Theory, some persons might think that this is just a theory and it is not reality. I must say sorry, ladies and gentlemen, Peak Oil is reality.

As a summary of my written testimony, I would like to highlight the following points.

  1. Peak Oil will come because oil is a limited resource.
  2. Fifty years ago the world was consuming 4 billion barrels of oil per year and the average discovery rate (the rate of finding undiscovered oil fields) was around 30 billion barrels per year. Today we consume 30 billion barrels per year and the discovery rate is dropping toward 4 billion barrels per year. This is significant; Chevron is even running an ad saying, “The world consumes two barrels of oil for every barrel discovered.” By discovery, I mean only new oil fields. Some analysts include reserve growth—newly accessible oil in old fields—as new discoveries, but we are using the same approach as in World Energy Outlook 2004, IEA, International Energy Agency.
  3. We can only empty the reserves that we have at a limited speed. Depending on demand, Peak Oil will happen within the near future.
  4. Another problem is that most countries are planning to increase their import of oil. Very few countries are planning to decrease their import of oil.
  5. Studies of the correlation between oil consumption and the growth of GDP in individual countries such as Sweden or China, as well as for the world, show that since the Second World War, there has never been an increase in GDP without an increase in the use of oil.
  6. The enormous resources of oil sands in Canada are often mentioned as a lifesaver for the world. Our group in Uppsala has made studies that show that even a crash program for production of oil from Canadian oil sands will yield only a limited amount of oil. By 2018, it might be possible to produce 3.5 million barrels per day. If that should rise to 6 million in 2040, they need to open up a couple of nuclear power plants to get heat to get the oil out of the ground.
  7. Excluding deepwater oilfields, output from 54 of the 65 largest oil-producing countries in the world is in decline.
  8. If we extend the decline in existing fields through 2030, and accept the 2004 Energy Information Administration estimate that global demand will be 122 mbpd, then we need 10 new Saudi Arabias. Some might call this a doomsday scenario, but if so I’m not the doomsayer, this was said by Sadad Al Husseini, until recently vice-director of Saudi Aramco, the largest oil company in the world.
  9. There is at present an extreme dependence on supply from the Middle East holding more than 60% of the global oil reserves. A key country is Saudi Arabia, which is supposed to hold about 20% of the global reserves of conventional oil and much of the world’s spare capacity.

Currently, 2010 is the most likely year for Peak Oil. The question then is whether more oil can be produced for export. If you look at the 20 largest exporting countries, number two on the list is Russia. Russia will not increase its exports because it needs more oil within Russia. Number three on the list is Norway, and production in Norway is declining at 10% per year. And I could go down the list. In principle, there are only one, two, three, four countries that can increase their production for export.

The Royal Swedish Academy of Sciences is an independent non-governmental organization with expertise in most of the sciences. The Academy has made a statement about oil saying that, to avoid acute economic, social, and environmental problems worldwide, we need a global approach with the widest possible international cooperation. Activities in this direction have started and they should be strongly encouraged and intensified.

Technically advanced countries like the United States have a particular responsibility. If you or one of the members of the committee have grandchildren, they will also face Peak Oil. What you decide to do will affect the future for our grandchildren. I hope that you are not the kind of politicians we used to see that can only promise that they can do better in the future and maybe promise to take care of crisis when it happens. As Peak Oil is here in the near future, we need action now.

Now consider China, a developing country with 21% of the global population. It consumes 8% of the global oil supply, and thinks it is fair to claim 21% of daily global consumption, or 17.6 million barrels per day (mbpd). During the last five years the average annual GDP growth in China has been 8.2% and the average increase in oil consumption 8.4% per year. We can now see the same correlation between increase in GDP and use of oil in China as in Sweden 50 years ago. If China’s economy grows 8% per year over the coming five years, we can expect that it will need an increase in the consumption of oil of 3 million barrels per day by 2010.

According to Professor Pang Xiongqi at the China University of Petroleum in Beijing, China’s oil production will plateau in 2009 and then start to decline. This means that the total increase in consumption must be imported. As China is already importing 3 million barrels per day, it will have to increase imports 100% during the next five years. Where will it come from?
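Professor Aleklett's China projection is simple compound-growth arithmetic. The sketch below redoes it with the figures stated above: an 8% share of roughly 84 mbpd of world supply, growing 8.4% a year for five years (the 84 mbpd world figure is taken from his later testimony).

```python
# Redoing the testimony's China arithmetic with the figures it states:
WORLD_SUPPLY = 84.0              # mbpd, approximate global production
china_now = 0.08 * WORLD_SUPPLY  # China's ~8% share -> ~6.7 mbpd
GROWTH = 0.084                   # ~8.4%/yr growth in Chinese oil use

china_2010 = china_now * (1 + GROWTH) ** 5
extra = china_2010 - china_now
print(round(china_now, 1))  # 6.7 mbpd today
print(round(extra, 1))      # 3.3 mbpd more by 2010, the ~3 mbpd cited
# With domestic output flat or declining, nearly all of that increase
# must be imported -- roughly doubling today's 3 mbpd of imports.
```

Five years of 8.4% growth compounds to roughly a 50% increase, which is where the "imports must double" conclusion comes from.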

Since 2001, when ASPO was founded, we have tried to tell the world that there will soon be a problem supplying the world with crude oil while demand continues to rise.

Unfortunately, few have heeded our alerts, even though the signs have been so obvious that a blind hen could see them.

If we extrapolate the downward discovery slope of the last 30 years, we can estimate that about 135 billion “new” barrels of oil will be found over the next 30 years. The latest large oil field system to be found was the North Sea (in 1969), which contains about 60 billion barrels. In 1999 North Sea production peaked at 6 mbpd. Our extrapolation suggests that over the next 30 years we will discover new oil fields equal to twice the size of the North Sea—a very pessimistic prediction, according to our opponents. But I think the oil industry would be ecstatic to find two new North Sea-size oil provinces.

The problem we are facing is that we are using too much oil, 30 billion barrels per year. A 5-billion-barrel East Texas oil field is only a couple of months of global oil consumption. You do not find big fields often anymore. The largest field discovered during the last 20 years is in Kazakhstan, and at 10 billion barrels it is equal to 4 months of global oil demand. Tar sands could produce 3 million barrels a day in a crash program. But there are two problems: only a small part, the mined tar sands, is the best part. The largest part must be obtained with in-situ methods, which means you must heat it up to get it out, and for that you need a lot of energy.

Technology in Texas and the rest of the lower 48 States has not stopped the production decline. Even using all the technology available in the 5-billion-barrel East Texas field, the decline rate is just increasing; you are talking about 10 to 20% per year in decline now. What technology did was bring the oil out faster, and now we see, for instance, that the North Sea, which has had advanced technology from the beginning, is declining at 10% per year. Ten percent per year! So do not hope that technology will solve the problem. It might make the problem even worse in the future.

The World Energy Outlook 2005 base-case scenario projects that by 2030 global oil demand will be 115 million barrels per day, which will require increasing production by 31 million barrels per day over the next 25 years, of which 25 mbpd is predicted to come from fields that have yet to be discovered. That is, we’ll have to find four petroleum systems of the size of the North Sea. Is this reality?

Every oilfield reaches a point of maximum production. When production falls, advanced technologies can reduce but not eliminate the decline. The oil industry and the IEA accept the fact that total production from existing oil fields is declining. ExxonMobil informed shareholders that the average production decline rate for the world’s oil fields is between 4 and 6% per year (The Lamp, 2003, Vol 85, #1). Current global production is 84 mbpd, so next year at this time current fields may produce a total of roughly 80 mbpd. Given the expected increase in global GDP, one year from now total oil demand will be 85.5 mbpd—so new capacity might have to make up for 1.5 mbpd plus 4 mbpd, or 5.5 mbpd. Two years from now the needed new production will be 11 mbpd and in 2010 at least 25 mbpd. Can the industry deliver this amount?
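The gap compounds from both directions: existing fields shrink while demand grows. This sketch reruns the numbers with a 5% decline rate (the middle of ExxonMobil's 4-6% range) and the demand growth implied by 84 going to 85.5 mbpd in one year; the results land close to the 5.5, 11, and 25 mbpd figures above.

```python
# Existing fields shrink while demand grows; the gap is the new
# production capacity the industry must bring online.
BASE = 84.0             # mbpd from existing fields today
DECLINE = 0.05          # ~5%/yr field decline (ExxonMobil cites 4-6%)
DEMAND_GROWTH = 0.018   # growth implied by 84 -> 85.5 mbpd in a year

for years in (1, 2, 5):  # five years out is roughly 2010
    existing = BASE * (1 - DECLINE) ** years
    demand = BASE * (1 + DEMAND_GROWTH) ** years
    print(f"year +{years}: need ~{demand - existing:.1f} mbpd new capacity")
```

Even small annual percentages open a gap of tens of millions of barrels per day within five years, which is why the testimony asks whether the industry can deliver.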

Indonesia, a member of the Organization of Petroleum Exporting Countries (OPEC), not only can’t produce enough oil to meet its production quota, it can’t even produce enough for domestic consumption. Indonesia is now an oil importing country. Within six years, five more countries will peak. Only a few countries—Saudi Arabia, Iraq, Kuwait, United Arab Emirates, Kazakhstan, and Bolivia—have the potential to produce more oil than before. By 2010, production from these 6 countries and from deepwater fields will have to offset the decline in 59 countries and the increased demand from the rest of the world.

Can they do it? Let’s look at Saudi Arabia, which in the early 1980s produced 9.6 million barrels per day. According to the IEA and the EIA, Saudi Arabia must produce 22 mbpd by 2030. But Sadad Al Husseini claims that “the American government’s forecasts for future oil supplies are a dangerous over-estimate.” The Saudi Ghawar oil field, the largest in the world, may be in decline (see for example the book “Twilight in the Desert” by Matthew Simmons). Saudi Aramco says that production can be increased to 12.5 mbpd in 2015. They plan a new pipeline with a capacity of 2.5 mbpd, so it looks like they are willing to increase production to 12.5 mbpd, but so far there are no signs of reaching 22 mbpd.

Now consider Iraq, which in 1979 produced 3.4 mbpd. Iraq officially claims reserves of 112 billion barrels of crude oil, but ASPO (and other analysts) think that one-third of the reported reserves are fictitious “political barrels.” At a recent meeting in London, I was told (privately, by a person who is in a position to know) that Iraqi reserves available today for production total 46 billion barrels. If this is the case, it will be hard for Iraq to reach its former peak production level in a short time. And so on. It’s time to ask, can the Middle East ever again produce at the peak rates of the 1970s?

The examples of Sweden and China suggest that, if past economic development patterns are followed, doubling GDP will require doubling global oil production. Can this even be done?

The United States, the wealthiest country in the world, has 5% of the global population and uses 25% of the oil. It is time to discuss what the United States should do to cut consumption—and rapidly. In February 2005 a report for the U.S. Department of Energy (DOE), (Peaking of World Oil Production: Impacts, Mitigation, & Risk Management) argued that “world oil peaking represents a problem like none other. The political, economic, and social stakes are enormous. Prudent risk management demands urgent attention and early action.” Any serious program launched today will take 20 years to complete.

What about oil sands?

The enormous reserves of oil sands in Canada are often mentioned as a lifesaver for the world. The February 2005 report to the DOE inspired us to undertake a “Crash Program Scenario Study for the Canadian Oil Sand Industry” (B. Söderbergh, F. Robelius, and K. Aleklett, to be published). In the study we found that Canada must very soon decide whether its natural gas should be exported to the USA or instead used for the oil sands industry. In a short-term crash program, the maximum production from oil sands will be 3.6 million barrels per day in 2018. This production cannot offset even the combined decline of just the Canadian and North Sea provinces. A long-term crash program would yield 6 million barrels per day by 2040, but then new nuclear power plants would be needed to generate steam for the in-situ production.

The problem is that we should have started preparing for peak oil at least 10 years ago. We must act now, as otherwise the bumps and holes in the road might be devastating. I like to summarize the global situation for Peak Oil the following way: When I was born in 1945, none of the four small farms in my little Swedish village used oil for anything. Ten years later, the oil age had arrived: we had replaced coal with oil for heating, my father had bought a motorcycle, and tractors were seen in the fields. From 1945 to 1970, Sweden increased its use of energy by a factor of five, or nearly 7% per year for 25 years.
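The growth figure in that last sentence can be checked with a one-line compound-growth calculation. This is just a sketch; the factor of five and the 25-year span are the numbers quoted above.

```python
# Compound annual growth rate implied by a five-fold increase
# in Sweden's energy use over the 25 years from 1945 to 1970.
def cagr(ratio: float, years: int) -> float:
    """Annual rate that multiplies a quantity by `ratio` over `years`."""
    return ratio ** (1 / years) - 1

rate = cagr(5.0, 25)  # ≈ 0.066, i.e. the "nearly 7% per year" cited
```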

It is very likely that the world is now entering a challenging period for energy supply, due to the limited resources and production problems now facing conventional (easily accessible) oil. Nearly 40% of the world’s energy is provided by oil, and over 50% of the latter is used in the transport sector. An increasing demand for oil from emerging economies, such as China and India, is likely to further accentuate the need for new solutions.

Some analysts maintain that there are inherent technical problems in the Saudi oilfields, but this is not an uncontested viewpoint. It is uncertain how much the oil production in the Middle East can be increased in the next few years and to what extent it would be in the interest of these countries to greatly increase production. It is clear that, even in these countries, conventional oil is a limited resource that they are almost totally dependent on. It is, however, also clear that the countries of the Middle East are undergoing massive internal and regional changes which may have negative consequences for the global oil supply system. Mitigation measures must be initiated in the next few years in order to secure a continued adequate supply of liquid fuels, especially for the transport sector. Over the longer term, completely new solutions are required. Therefore, increased R&D (Research and Development) in the energy sector is urgently needed.

Key points

  1. Shortage of oil. The global demand for oil is presently growing by nearly 2% per year and the current consumption is 84 million barrels per day (1 barrel = 159 liters), or 30 billion barrels per year. Finding additional supplies to increase the production rate is becoming problematical, since most major oilfields are well matured. Already 54 of the 65 most important oil-producing countries have declining production, and the rate of discovery of new reserves is less than a third of the present rate of consumption.
  2. Reserves of conventional oil. In the last 10-15 years, two-thirds of the increases in reserves of conventional oil have been based on increased estimates of recovery from existing fields and only one-third on discovery of new fields. In this way, a balance has been achieved between growth in reserves and production. This cannot continue. Half of current oil production comes from giant fields, and very few such fields have been found in recent years. Oil geologists hold a wide range of opinions on how much conventional oil is yet to be discovered, but new reservoirs are expected to be found mainly in deeper water on the outer margins of the continental shelves and in the physically hostile and sensitive environments of the Arctic, where production costs will be much higher and lead times much longer than they are today. A conservative estimate of discovered oil reserves and undiscovered recoverable oil resources is about 1,200 billion barrels, according to the US Geological Survey; this includes 300 billion barrels in the world’s as-yet-unexplored sedimentary basins.
  3. Middle East’s key role. Only in the Middle East and possibly the countries of the former Soviet Union is there potential to significantly increase production rates to compensate for decreasing rates in other countries. Saudi Arabia is a key country in this context, providing 9.5 million barrels per day (11% of the current global production rate). Its proven reserves are 130 billion barrels, and its reserve base is said to include an additional 130 billion barrels. Iraq also has considerable untapped oil reserves.
  4. Unconventional oil resources. In addition to conventional oil, there are very large hydrocarbon resources, so-called unconventional oil, including gas, heavy oil, tar sands, oil shales, and coal, from which liquid fuels can be produced. At present, 1 million barrels of oil per day comes from Canadian tar sands and 0.6 million barrels from Venezuelan heavy oil. The Canadian government estimates that by 2025 the daily production rate will have increased to 3 million barrels per day. Thus, the problem with these unconventional oils is not so much price, but lead times and non-price-related aspects, such as effects on the environment and the availability of water and natural gas for the production process.
  5. Immediate action on supplies. Forceful measures to improve the search for and recovery of conventional oil, as well as to improve the production rate of unconventional oil, are required to avoid price spikes leading to instability of the world economy in the next few decades. Improved recovery of oil in existing fields can be expected. The estimated resources of conventional oil are, however, located primarily in unexplored sedimentary basins, in environments difficult to access. A substantial part has yet to be found! Sizable contributions from unconventional oil need time (some decades) to become really effective. It is necessary to have public funding for long-term petroleum-related research, since this must not be an exclusive task for the oil companies.
  6. Liquid fuels and a new transport system. The oil supply problem is, above all, a liquid fuels problem. Major programs therefore need to be implemented to develop alternatives to oil in the transport sector. Until these measures have been introduced, which may take one to two decades, demand for oil for the needs of a globally expanding transport sector will continue to rise; other users of oil will suffer, including those concerned with power generation.
  7. Economic considerations. At present, the high oil prices are due to the limitations of worldwide production, refining, and transportation capacities. Furthermore, the price is influenced by the threat of terrorist attacks on the world’s oil supply, transport system, and infrastructure.
  8. Environmental concerns. Constraints on unconventional oil similar to those imposed on other fossil fuels (for example, emission controls and CO2 sequestration) will be necessary and will provide major challenges for industry.
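The consumption figures in point 1 above are easy to verify, and the stated ~2% annual demand growth implies a doubling time worth spelling out. The sketch below uses only the numbers from the list; it is rule-of-thumb arithmetic, not a forecast.

```python
import math

# Point 1 above: 84 million barrels per day, growing ~2% per year.
barrels_per_day = 84e6
barrels_per_year = barrels_per_day * 365        # ≈ 3.07e10, the "30 billion" cited

# Doubling time for demand at 2% compound annual growth.
doubling_years = math.log(2) / math.log(1.02)   # ≈ 35 years
```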

In view of the importance of the world’s future energy supply, The Royal Swedish Academy of Sciences (the Academy that awards the Nobel Prizes in physics, chemistry, and The Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel) has recently established an Energy Committee. The Academy is an independent nongovernmental organization, with expertise in most of the sciences as well as economic, social, and humanistic fields. The Energy Committee has selected a number of subjects to be studied in some depth and one of these deals with oil and related carbon-based fuels. The Academy organized hearings and a seminar before subsequently (on October 14, 2005) issuing a statement about oil (the full statement can be found at the end of this text). I’ll note just one excerpt from the general remarks: “It is very likely that the world is now entering a challenging period for energy supply, due to the limited resources and production problems now facing conventional (easily accessible) oil.”

For the United States, saving oil is the most important thing that you can do. I mean, why should you consume twice as much oil per person as we do in Europe? We are doing quite well with half the amount of oil.

Thanks to the Committee for this opportunity to discuss Peak Oil and the work of the Uppsala Hydrocarbon Depletion Study Group, Uppsala University, Sweden. We are also members of ASPO, the Association for the Study of Peak Oil and Gas, and since 2003 I’ve been president of ASPO. Members have an interest in determining the date and impact of the peak and decline of the world’s production of oil and gas, due to resource constraints (www.peakoil.net). The mission is to: 1. Define and evaluate the world’s endowment of oil and gas. 2. Model depletion, taking due account of demand, economics, technology and politics. 3. Raise awareness of the serious consequences for Mankind.


Robert Hirsch, Senior Energy Program Advisor, SAIC, lead author of 2005 Peaking of World Oil Production: Impacts, mitigation, & risk management, Department of Energy.

The era of plentiful, low-cost petroleum is approaching an end.

Oil is the lifeblood of modern civilization. It fuels most transportation worldwide and is a feedstock for pharmaceuticals, agriculture, plastics and a myriad of other products used in everyday life. The earth has been generous in yielding copious quantities of oil to fuel world economic growth for over a century, but that period of plenty is changing.

The world has never confronted a problem like Peak Oil.

Oil peaking represents a liquid fuels problem, not an energy crisis in the sense that that term has been used. Motor vehicles, aircraft, trucks, and ships have no ready alternative to liquid fuels, certainly not for the large existing capital stock. And that capital stock has lifetimes measured in decades. Solar, wind, and nuclear power produce electricity, not liquid fuels; their widespread use in transportation is at least 30 to 50 years away.

Risk minimization mandates the massive implementation of mitigation well before the onset of the problem. Since we do not know when peaking is going to occur, that makes a tough problem for you folks as decision makers because if you are going to start 20 years ahead of something that is indeterminate, you have a tough time making the arguments. Mustering support is going to be difficult. We would all like to believe that the optimists are right about peak oil, but the risks, again the risks of them being wrong, are beyond anything that we have experienced, the risks of error are beyond imagination.

The peaking of world oil production represents an enormous risk to the United States and the world. Peak Oil is not a theory. Maximum conventional oil production is coming, but we cannot predict when because no one has the verified data needed for a credible forecast. Peaking could be soon. Our studies through the Department of Energy indicate that soon is within 20 years.

Saudi Arabia

The economic future of the United States is inextricably linked to Saudi Arabia because they are the lynchpin of future world oil production.

No one outside of Saudi Arabia knows how much oil they have in the ground because that is a closely held state secret. Also, no one outside of Saudi Arabia knows how much and how fast the Saudis will be willing to develop what they have. Like it or not, Saudi Arabia is not required to satisfy world needs and conserving their oil is in their national interest. Think risk.

Until recently, OPEC assured the world that oil supply would continue to be plentiful, but that position is changing. In fact, some in OPEC are now warning that oil supply will not be adequate to satisfy world demand in 10-15 years (Moors). Dr. Sadad al-Husseini, retired senior Saudi Aramco oil exploration executive, is on record as saying that the world is heading for an oil shortage; in his words “a whole new Saudi Arabia (will have to be found and developed) every couple of years” to satisfy current demand forecasts (Haas). So the messages from the world’s “breadbasket of oil” are moving from confident assurances to warnings of approaching shortage. Think risk.

Today, EIA is forecasting adequate world oil supplies for decades into the future. The question is, are they going to get it right this time? The National Petroleum Council has been asked by Secretary Bodman to assess Peak Oil. Are they going to get it right this time? Think risk.

It is important to recognize that oil production peaking is not “running out.” Peaking is the maximum oil production rate, which typically occurs after roughly half of the recoverable oil in an oil field has been produced.

What is likely to happen on a world scale will be similar to what happens with individual oil fields, because world production is by definition the sum total of production from all of the world’s oil fields.

A recent analysis for the Department of Energy focused on what might be done to mitigate the peaking of world oil production. It became abundantly clear early in our study that the effect of mitigation would depend on the large-scale implementation of mega-projects and mega-changes. We performed a transparent scenario analysis based on worldwide crash-program mitigation, which is the fastest that is humanly possible. The timing was left open because we do not know when peaking is going to occur. The results were startling. If we wait until peaking occurs, the world will have a problem with adequate liquid fuels for more than two decades. If we start ten years before peaking occurs, that will moderate the problem somewhat, but a shortfall will still arise roughly ten years after the peak. And finally, if we initiate a crash program 20 years before peaking occurs, we have the possibility, a possibility, of avoiding the problem.

If we get oil peaking wrong, how bad might the economic damage be? Unfortunately, there is a paucity of analysis in this area, which is tough analysis to do. One study, called Oil Shock Wave, which I believe was mentioned earlier, was performed not too long ago by a group of distinguished former high-level Government officials. They concluded that a sustained 4% global shortfall would result in oil at $160 a barrel, which would push the United States into recession with the loss of millions of jobs.

Note that Oil Shock Wave focused on a multi-year drop in oil supply of 4% total, but experts in this business will tell you that declines of 4 to 8% per year are entirely possible and are happening in many parts of the world. Think risk.

Chinese officials have forecast the peaking of world oil production around 2012. As this committee knows, China has been making huge investments around the world to secure oil for its own country, paying premium prices. They tried to buy Unocal and that did not work. They offered a premium in that particular case.


Oil was formed by geological processes millions of years ago and is typically found in underground reservoirs of dramatically different sizes, at varying depths, and with widely varying characteristics. The largest oil fields are called “super giants,” many of which were discovered in the Middle East. Because of their size and other characteristics, super giant oil fields are generally the easiest to find, the most economic to develop, and the longest-lived.

The world’s last super giant oil fields were discovered in the 1960s. Since then, smaller fields of varying sizes have been found in what are called “oil prone” locations worldwide — oil is not found everywhere.

The concept of the peaking of world oil production follows from the fact that the output of an individual oil field rises after discovery, reaches a peak, and then declines. Oil fields have lifetimes typically measured in decades, and peak production often occurs roughly a decade or so after discovery under normal circumstances.
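The rise-peak-decline profile described above is often idealized as a logistic ("Hubbert") curve, in which the production rate peaks when roughly half the recoverable oil has been produced. The sketch below is illustrative only; the parameter values (ultimate recovery, steepness, peak year) are invented for the example, not taken from any field data.

```python
import math

def hubbert_rate(t: float, urr: float, k: float, t_peak: float) -> float:
    """Production rate at time t under a logistic depletion model.

    urr    -- ultimate recoverable resource (total area under the curve)
    k      -- steepness of the rise and the decline
    t_peak -- year of maximum production (half the URR is produced by then)
    """
    e = math.exp(-k * (t - t_peak))
    return urr * k * e / (1 + e) ** 2

# The maximum rate occurs at t_peak and equals urr * k / 4;
# production is symmetric on either side of the peak.
peak_rate = hubbert_rate(1970, urr=200.0, k=0.08, t_peak=1970)
```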

Oil is usually found thousands of feet below the surface. Oil fields do not typically have an obvious surface signature, so oil is very difficult to find. Advanced technology has greatly improved the discovery process and reduced exploration failures. Nevertheless, world oil discoveries have been steadily declining for decades.


“Reserves” is an estimate of the amount of oil in an oil field that can be extracted at an assumed cost. Thus, a higher oil price outlook often means that more oil can be produced. However, geological realities place an upper limit on price-dependent reserves growth. Specialists who estimate reserves use an array of technical methodologies and a great deal of judgment. Thus, different estimators might calculate different reserves from the same data.

Sometimes self-interest influences reserve estimates, e.g., an oil field owner may provide a high estimate in order to attract outside investment, influence customers, or further a political agenda.

Reserves and production should not be confused. Reserves estimates are but one factor used in estimating future oil production from a given oil field. Other factors include production history, local geology, available technology, oil prices, etc. An oil field can have large estimated reserves, but if a well-managed field is past maximum production, the remaining reserves can only be produced at a diminishing rate. Sometimes declines can be slowed, but a return to peak production is impossible. This fundamental point is not often appreciated by those unfamiliar with oil production and is a major source of misunderstanding about the basic nature of oil production.


In the past, higher prices led to increased estimates of conventional oil reserves worldwide. However, this price-reserves relationship has its limits, because oil is found in discrete packages (reservoirs) as opposed to the varying concentrations characteristic of many minerals. Thus, at some price, world reserves of recoverable conventional oil will reach a maximum because of geological fundamentals. Beyond that point, insignificant additional conventional oil will be recoverable at any realistic price. This is a geological fact that is often not understood by economists, many of whom are accustomed to dealing with hard minerals, whose geology is fundamentally different.

Oil companies and governments have conducted extensive exploration worldwide, but their results have been disappointing for decades. On this basis, there is little reason to expect that future oil discoveries will dramatically increase. A related fact is that oil production is in decline in 33 of the world’s 48 largest oil-producing countries.


Exploration for and production of petroleum have become an increasingly technological enterprise, benefiting from more sophisticated engineering capabilities, advanced geological understanding, improved instrumentation, greatly expanded computing power, more durable materials, etc. Today’s technology allows oil fields to be discovered more readily and understood better and sooner than heretofore.

Some economists expect improved technologies and higher oil prices will provide ever-increasing oil production for the foreseeable future. To gain some insight into the effects of higher oil prices and improved technology on oil production, consider the history of the U.S. Lower 48 states. This region was one of the world’s richest, most geologically varied, and most productive up until 1970, when production peaked and started into decline.

In constant dollars, oil prices increased by roughly a factor of three in 1973-74 and another factor of two in 1979-80. In addition to these huge oil price increases, the 1980s and 1990s were a golden age of oil field technology development, including practical 3-D seismic, economic horizontal drilling, dramatically improved geological understanding, etc. Nevertheless, Lower 48 oil production still trended downward, showing no pronounced response to either price or technology. In light of this experience, there is no reason to expect that the worldwide situation will be different: Higher prices and improved technology are unlikely to yield dramatically higher conventional oil production.


Various individuals and groups have used available information and geological tools to develop forecasts for when world oil production might peak. A sampling is shown in Table 1, where it is clear that many believe that peaking is likely within a decade.


A recent analysis for the U.S. Department of Energy addressed the question of what might be done to mitigate the peaking of world oil production. Various technologies that are commercial or near commercial were considered: 1. Fuel efficient transportation, 2. Heavy oil/Oil sands, 3. Coal liquefaction, 4. Enhanced oil recovery, 5. Gas-to-liquids.

It became abundantly clear early in this study that effective mitigation will be dependent on the implementation of mega-projects and mega-changes at the maximum possible rate. This finding dictated the focus on currently commercial technologies that are ready for implementation.

New technology options requiring further research and development will undoubtedly prove very important in the longer-term future, but they are not ready now, so their inclusion would be strictly speculative.

Initiating a mitigation crash program 20 years before peaking offers the possibility of avoiding a world liquid fuels shortfall for the forecast period. The reason why such long lead times are required is that the worldwide scale of oil consumption is enormous – a fact often lost in a world where oil abundance has been taken for granted for so long. If mitigation is too little, too late, world supply/demand balance will have to be achieved through massive demand destruction and shortages, which would translate to extreme economic hardship.


In an effort to gain some insight into the possible character of world oil production peaking, a number of regions and countries that have already passed their oil production peak were recently analyzed. Areas that had significant peak oil production and that were not encumbered by major political upheaval or cartel action were Texas, North America, the United Kingdom, and Norway. Three other countries that are also past peak production, but whose maximum production was smaller, were Argentina, Colombia, and Egypt. Examination of these actual histories showed that in no case was it obvious a year ahead of the event that production was about to peak, i.e., production trends prior to peaking did not provide long-range warning. In most cases the peaks were sharp, not gently varying or flat-topped, as some forecasters hope. Finally, in some cases post-peak production declines were quite rapid. It is by no means obvious how world oil peaking will occur, but if it follows the patterns displayed by these regions and countries, the world will have less than a year’s warning.


Oil peaking represents a liquid fuels problem, not an “energy crisis” in the sense that term has often been used. Motor vehicles, aircraft, trains, and ships simply have no ready alternative to liquid fuels, certainly not for the existing capital stock, which have very long lifetimes. Non-hydrocarbon-based energy sources, such as renewables and nuclear power, produce electricity, not liquid fuels, so their widespread use in transportation is at best many decades in the future. Accordingly, mitigation of declining world conventional oil production must be narrowly focused, at least in the near-term.


It is possible that peaking may not occur for a decade or more, but it is also possible that peaking may be occurring right now. We will not know for certain until after the fact. The world is thus faced with a daunting risk management problem. The world has never confronted a problem like this. Risk minimization requires the implementation of mitigation measures well prior to peaking.


Over the past century world economic development has been fundamentally shaped by the availability of abundant, low-cost oil. Previous energy transitions (wood to coal, coal to oil, etc.) were gradual and evolutionary; oil peaking will be abrupt and revolutionary. The world has never faced a problem like this. Without massive mitigation at least a decade before the fact, the problem will be pervasive and long lasting. Oil peaking represents a liquid fuels problem.

Robert L. Hirsch is a Senior Energy Program Advisor for SAIC and a consultant in energy. Previous employment included executive positions at the U.S. Atomic Energy Commission, the U.S. Energy Research and Development Administration, Exxon, ARCO, EPRI, and Advance Power Technologies, Inc. Dr. Hirsch is a past Chairman of the Board on Energy and Environmental Systems at the National Academies. He has a Ph.D. in engineering.

Haas, P. August 21, 2005. The Breaking Point. New York Times Magazine.

Moors, K.F. How Reliable are Saudi Production and Reserve Estimates? Dow Jones


Robert Esser, Senior consultant & director, Global oil & Gas resources, Cambridge Energy Research Associates (CERA):

CERA does not recognize a peak in oil capacity until at least 2030.

We at CERA have been conducting continuing research on the future of oil supplies. The following are our basic conclusions. One, the world is not running out of oil imminently, or in the medium term. Our field-by-field, activity-based analysis points to a substantial build-up of liquids capacity over the next several years. Two, an increasing share of supplies will come from non-traditional or unconventional oils: from the ultra-deep waters, from oil sands, from gas-related liquids, in which we include condensates and natural gas liquids, and also from the conversion of gas to liquids. Three, rather than an isolated peak, we should expect an undulating plateau, perhaps three or four decades from now. Peaking does not imply a precipitous decline towards running out. Four, one reason for the general pessimism about future supplies is that, based on Cambridge Energy’s reserve study, the reserve disclosure rules mandated by the Securities and Exchange Commission are based on decades-old technology and need to be updated to reflect the new technology now available to verify reserves. Five, the major risks to this outlook are not below-ground geological factors but above-ground geopolitical factors.

Our sources of new supply: new capacity comes from the development of recent discoveries, older discoveries only recently made available – such as all of those huge fields now being developed in the Caspian Sea area – existing field reserve upgrades, and the drilling response to high prices, which will tend to reduce decline rates in mature areas. Accordingly, the CERA outlook is a more optimistic picture than many of the other publicly available outlooks and strongly contradicts those who believe Peak Oil is imminent.

Key trends: in our core scenario, which is at the high end of our expectations, CERA expects capacity could increase by as much as 15 million barrels a day, to 102 million barrels a day by 2010. This is up from the 87 million barrels a day currently, with a further increase of 6 million barrels a day, to 108 million barrels by 2015. This is a 25% increase. All regions except the United States and the North Sea will show strong growth to 2020. Non-OPEC countries with strong growth in exports include Russia, Azerbaijan, Kazakhstan, Angola, Brazil, and Canada. In fact, right now there is no producing play being explored more intensely than the Canadian oil sands. Strong growth takes place in both OPEC and non-OPEC countries until 2010; however, we also recognize that this will moderate by 2015.
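The capacity arithmetic in that projection can be tallied directly; the sketch below uses only the figures from the testimony (87 mbd current, +15 mbd by 2010, a further +6 mbd by 2015).

```python
# CERA's core scenario: capacity additions on top of ~87 mbd today.
current_mbd = 87.0
capacity_2010 = current_mbd + 15.0   # 102 mbd
capacity_2015 = capacity_2010 + 6.0  # 108 mbd

# Total increase relative to today: (108 - 87) / 87, i.e. the "25%" cited
# (strictly ~24%).
pct_increase = (capacity_2015 - current_mbd) / current_mbd * 100
```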

This growth is led by gas-related liquids associated with gas fields under development to meet the soaring demand for liquefied natural gas, especially in the United States, and gas demand growth in other countries and regions. The inclusion of these gas-related liquids is certainly warranted, as they too help satisfy demand for liquid fuels.

The increases in capacity are also underpinned by the development of the characteristically very large discoveries made in very deep water since the late 1990s. The top ten discoveries each year alone add something on the order of 2 to 2.5 million barrels a day. Accordingly, CERA does not recognize a peak in oil capacity until at least 2030.

Many risks loom on the horizon that could impact productive capacity. Most of these are above-ground risks, such as a severe lack of qualified manpower and a shortage of rigs. Political risks occur in most OPEC countries, especially Iraq, Iran, and Venezuela, and in non-OPEC Russia. Other risks include access to areas of major undiscovered reserve potential; a slowdown in company sanctioning of new field developments; and, most important, an unexpectedly higher-than-assumed decline rate in some of the large Middle East fields; and lastly, delayed government sanction of certain long-awaited projects in Iran, Kuwait, and the UAE. Should many of these concerns materialize in the near future, capacity in 2010 could be 5 million barrels a day lower than projected.

In addition to crude oil from conventional settings, our analysis concludes that unconventional oil—condensates, natural gas liquids (NGLs), deepwater production, extra heavy oils, and gas-to-liquids (GTLs)—will represent about 35% of total capacity in 2015, compared to 10% in 1990.

Political risks also have an impact on capacity expansion in the Middle East, where the situation in Iraq continues to be highly problematic, and there is growing uncertainty over events in Iran. In Russia, changes in ownership, the constraints of geology, and the fiscal and regulatory systems, as well as logistical bottlenecks and geological challenges – all these have led to the end of Russia’s high supply growth era. In Venezuela fiscal and political changes have hindered the recovery of oil production and investment in the aftermath of the late 2002/early 2003 disruption and are likely to have continuing impact.


Our views about the peak oil debate have been reinforced by a detailed new audit of our own analysis and also further evidence that has come to light concerning the enormous scale of field reserve upgrades of existing fields. We also draw upon the proprietary databases of IHS, of which CERA is now part. These are the most extensive and complete databases on field production around the world. We see no evidence to suggest a peak before 2020, nor do we see a transparent and technically sound analysis from another source that justifies belief in an imminent peak.

It will be a number of decades into this century before we get to an inflexion point that will herald the arrival of the “undulating plateau.” Assuming no serious political crises in key producing countries or an unexpected shortfall in investment, global oil production capacity will continue to grow strongly toward 102.4 mbd by 2010 from the current level of 87.2 mbd. [NOTE: BUT IT DIDN’T GO UP 15.2 mbd, it went up only 0.7 mbd. EIA world oil production 2005 = 73.9 mbd, 2010 = 74.6 mbd; not sure where CERA came up with 87.2 mbd]

Production capacity of extra heavy oil from Canada and Venezuela will expand from 1.8 mbd in 2005 to 4.9 mbd in 2015. Despite accidents earlier in 2005, the Canadian projects are moving forward at an accelerating pace. Expansion from 1.2 mbd currently to 3.4 mbd by 2015 is anticipated, with approximately half being mined and the remainder produced in situ.
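The extra-heavy-oil expansion quoted above, from 1.8 mbd in 2005 to 4.9 mbd in 2015, implies a compound growth rate that is easy to check. This is a quick sketch using only the figures in the testimony.

```python
# Implied compound annual growth of Canada/Venezuela extra heavy oil
# capacity: 1.8 mbd (2005) -> 4.9 mbd (2015), a 10-year span.
start_mbd, end_mbd, years = 1.8, 4.9, 10
annual_growth = (end_mbd / start_mbd) ** (1 / years) - 1  # ≈ 0.105, about 10% per year
```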

[Canadian oil sand production was only 2.2 mbd in 2014 and possibly less in 2016 due to oil bubble popping]


Today Alberta produces just over 2 million barrels a day, will grow to 2.5 million in three to four years, and will reach about 3 million barrels per day before 2015. Alberta crude oil production from oil sands is currently in excess of 1 million barrels per day (bbl/d). Oil sands production is anticipated to reach 3 million bbl/d by 2015, and 5 million bbl/d by 2030.

Alberta is recognized as the home of the second largest oil reserves in the world. From initial reserves in place of 1.7 trillion barrels of oil, there are currently 174.5 billion barrels of oil in established reserves and 315 billion barrels believed to be ultimately recoverable.

Replies of Robert Hirsch to questions asked by representatives

I was on the National Academy panel that reviewed a hydrogen program and provided the report that came out a year ago. We spent a year looking into the issues in a great deal of detail. It is technically feasible to do hydrogen, but it is not economically feasible. And for the economics to make any sense at all, you have to have breakthroughs in two areas in particular: one is in fuel cells, which are totally inadequate for the application right now, and the other is onboard storage. You cannot predict when those breakthroughs are going to occur. We took an optimistic view as to when these vehicles might enter the market in order to see how long it would take for them to have an impact. But do not bet on it. You just cannot bet on it, because the things that are needed, that are essential to go, do not exist now.

My work was for the Department of Energy, National Energy Technology Laboratory and I am familiar with the work at other laboratories. Computer simulations are not worth a damn if you do not have data to go in that has any kind of certainty to it. And that data does not exist. It simply does not exist. When CERA makes their estimates, they are using estimates. When other people predict other dates for peaking, they are using estimates. They are taking bits and pieces of information. In some cases they are basing their projections on what somebody tells them without any independent verification. So a computer program with bad data is going to give you a bad result.

I ran exploration and production research at Atlantic Richfield and we looked at not only the technologies that were being developed, but we looked off into the future and there have been improvements, 3D, 4D seismic has come along, there is horizontal drilling that was developed in large part by somebody who was in the laboratory that I managed. There is deep water. What has happened there is rather dramatic and rather marvelous. But if you look at all of those things and the character of the problem, there will definitely be improvements made, but they are not going to change the basic picture. They will change the time by maybe a matter of years.

You have to keep in mind that some of those technologies in fact will drain reservoirs faster than would otherwise be the case. And under those conditions, you are going to have a big ramp up, but then you are going to have a much sharper drop afterwards.

I think that all of us would agree that you do not pick winners in a situation like this; you go with anything that is reasonable. I totally agree with my colleague here that biodiesel, as wonderful as it sounds, is going to be a sliver in terms of the problem. And finally, in terms of having a program, there needs to be a will first, and there needs to be a worldwide will, and then there needs to be Government stepping in and facilitating the private sector to do things on a basis that has not been done before. That is the only way you are going to minimize the risks. That is not what we are talking about today in detail, but that is effectively what has to happen.

ROSCOE BARTLETT. When it comes to ethanol production you should not look at the total BTUs in ethanol and assume that those will contribute to our energy usage. The production of ethanol, I hope, will have an energy profit ratio that is positive. But it will never be very positive. We will always be putting in a major percentage of the energy that we get out of ethanol. Just a word of caution in looking at ethanol, and that goes for anything produced in agriculture, by the way.

I understand that the Canadian oil sands may be using more energy from natural gas to produce the oil than they are getting out of the oil. That is fine if it is stranded gas, but ultimately we will have a real limitation on what we can do there. They are now thinking of building a nuclear power plant to get the quantities of energy that they use to do this. So I would just like to caution that the enormous reserves in the oil sands and tar shales are not net energy realizable. You may end up using six barrels of oil and get a net energy of one barrel of oil. I do not know what that energy profit ratio will be but it ain’t high.
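[Bartlett's "energy profit ratio" warning is easy to make concrete. The sketch below is my own illustration, not from the hearing: it assumes EROI is defined as energy delivered divided by energy invested, and the sample ratios are arbitrary examples, not figures for any actual resource:]

```python
# A minimal net-energy sketch of Bartlett's "energy profit ratio" point.
# Assumption (not from the hearing): EROI = energy delivered / energy invested,
# so the fraction of gross output left as net energy is 1 - 1/EROI.
def net_fraction(eroi: float) -> float:
    """Fraction of gross energy output left over after paying the energy cost."""
    return 1.0 - 1.0 / eroi

# Illustrative ratios only
for eroi in (1.2, 2, 5, 20):
    print(f"EROI {eroi:>4}:1 -> {net_fraction(eroi):.0%} of gross output is net energy")
```

[The point: at high EROI, gross barrels and net barrels are nearly the same thing; as EROI falls toward 1:1, most of the "enormous reserves" are eaten by the energy needed to produce them.]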

Fuel cells: two problems with fuel cells, one being storage, which was mentioned. We had experts testifying recently, and they said that of the three methods of storing it, one is as a gas in a pressure vessel, which is just too heavy. Another is as a liquid, but the insulation required is too much and the difficulty of pressurizing it is too great. The only feasible way for it to become economically widely used is solid-state storage, which really means you are dealing with a hydrogen battery. And a fundamental question is: is the hydrogen battery fundamentally more energy efficient than an electron battery, of which we have a whole lot? I understand that if you could wave a magic wand and every vehicle in the world today had a fuel cell in it, we would use all the platinum in the world. So clearly, you have got to have some big breakthroughs in fuel cells before this is going to be feasible.

One of the ways of producing more oil is to drill as many wells in Saudi Arabia as we drilled in our country. We have about what three fourths of all the oil wells in the world in our country. Yeah, you will get more oil more quickly from Saudi fields but all you are doing is climbing a hill and the peak is going to be higher. You are going to fall off the peak and the descent, you know, you cannot pump what is not there and if you are able to pump it more quickly now there is going to be less to pump in the future.

I would just like to note that there are risks that responding too early, you are using resources you might have used for something else but I think that the risk of responding too late are overwhelming, that any rational people would buy, you know, maybe responding too early. Thank God it is too early because if it is too late we are really in for a big problem.

I would like to caution about energy from agriculture. Two cautions, one, we are barely able to feed the world. Tonight a fourth of the world will go to bed hungry.

And I would like to caution you to be careful about how much biomass you want to rape from our topsoils.

We are barely able now to maintain the quantity and quality of our topsoils, and that is because we are not returning humus to them. I asked the Department of Agriculture, do you think we have more and better topsoil? The answer is no. For every bushel of corn we raise in Iowa, we lose three bushels of topsoil down the Mississippi River. So I would be very cautious about how much energy you expect from agriculture. And by the way, the energy profit ratio from agriculture is not high. We would have to have a much more energy-efficient agriculture if we are going to get any energy from agriculture in the future.

Twenty-five of the 48 oil producing countries in the world are now in decline. How are we going to get more oil in the future if that is true? And, you know, what to do? I think what we need to do is obvious, a massive effort of conservation, a big investment in efficiency, and big investments in alternatives. I do not think what we need to do is questionable. I think the will to do it may be very questionable.

KJELL ALEKLETT. We now have 65 countries in the world that are major producers of oil; 54 of those 65 have already passed the peak of production and are going down. In the next five years another five countries will pass the peak, for instance China and Mexico, which we know about. So by 2010 there will be only six countries that might still be able to increase their production. One of those is Bolivia, and they are making something like 800,000. But take for instance Brazil, which is considered to be one of the successful nations. What they have found down there is something like 12 billion barrels of oil in ultra-deep water. And 12 billion barrels, when we are consuming 30 billion barrels per year, well, can that save the world?
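[Aleklett's Brazil arithmetic, spelled out with the figures he gives in the testimony:]

```python
# Aleklett's arithmetic on Brazil's ultra-deepwater find, using the
# figures as given in the testimony (billion barrels).
brazil_find = 12           # billion barrels discovered
world_use_per_year = 30    # billion barrels consumed worldwide per year

years_of_supply = brazil_find / world_use_per_year
print(f"{years_of_supply:.1f} years, roughly {years_of_supply * 12:.0f} months "
      "of world consumption")
```

[Even a "giant" 12-billion-barrel discovery covers world demand for well under half a year.]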

Yes, Saudi Arabia will go up to 12.5 and they are committed to do that, but Kuwait for instance, the big field there is declining now. They are officially saying that and many other things. So I do not think it is possible to get this increase. And just look at numbers and start to think for yourself because that is what we need now, even level thinking.

ROBERT HIRSCH. Well I would just amplify on his points. If you have got the overwhelming number of countries in the world that have been oil producers that are already past their peak, that means less production from them and world demand continues to increase, therefore, the gap is not just the increased gap, it is the increase plus the loss that is associated with these others on the down slope and that gets to be bigger, and bigger, and bigger, and the rates catch up to you very, very quickly.

UDALL. I think the crucial part of the debate here, and the panelists have hit it several times, is that it does not matter when we peak. The important thing is that we have reliable people today who are saying that we peaked already; one of our panelists says it is 2030, others say it could be 2010. We do not know, but I do not think we should be getting into that debate. The focus we should have is on what we should be doing to move us forward.

These panelists have hit on the idea of political will around the world, and I think it is very, very important for us to have the will and the stamina to really take this on. My understanding is that we are doing a very small amount of research compared to what people and Governments do in other areas. And what about global warming?

ALEKLETT. Let me start with the global warming, please, because if you look into these scenarios about how much carbon dioxide will be produced in the future, it is obvious that they are overestimating the amount that can be produced from oil and natural gas. And it is now more or less agreed that we can burn all the natural gas and the conventional oil and it will not affect climate change so much. The problem is coal. We should work on coal. We should not have the carbon dioxide from coal coming out into the air. That is a big problem for the future.

HIRSCH. I would like to comment on your point about political will, because the political will in our system, with the way things are working right now, is very hard to muster. In fact, in the current circumstances, the high probability would be to wait until the problem hits, because then the political will will be there, because consumers will be screaming. I would say that China has the political will; China is out acquiring and investing in ways to secure their own supply. They seem to have the political will which we do not have as yet.

ALEKLETT. Another thing with China is that they can say that you are not allowed to buy a car that takes so much gas; you must buy one that takes a smaller amount. Another thing is we have the world problem with diesel coming out. Everyone thinks that diesel should be used because you get better efficiency on the cars. But the problem is that the capacity of producing diesel in the refineries is not enough in the world.

Another thing I think we have to consider is the countries in the Middle East and North Africa. These countries have 75% of the remaining resources of oil in the world and these people also understand that this is the only resource they have to make money for the future.

I have visited the Middle East a couple of times now and every time when I am down there they said we had to think about future generations, our children and grandchildren; they must get money for something also. So why should we pump all now that we do not need the money when our children need it in the future. In Kuwait the parliament says no to increasing production to save it for future generations, so do not count on these countries increasing their production, because they know that they need it in the future.

When I lived in California, I liked to go up in the gold country and the ghost towns there were quite chilling. They are ghost towns because there was a limited resource of gold and silver.

UDALL . Any predictions at all on prices in terms of gasoline? I mean are we going to go back down in terms of the price per barrel?

HIRSCH. I do not think anybody could tell you. And anybody that gives you a prediction may not understand the problem. It is too complex. There are too many forces at work. There are things that happen that are unpredictable. You cannot predict the price.




We’ll all be Flint Michigan someday: U.S. water infrastructure is falling apart

NRC. 2006. Drinking Water Distribution Systems: Assessing and Reducing Risks. Committee on Public Water Supply Distribution Systems: Assessing and Reducing Risks, National Research Council, National Academies Press.

[ According to this free National Research Council report, most water systems and distribution pipes will be reaching the end of their expected life spans in the next 30 years.

With nearly a million miles of utility water infrastructure, 5 million miles of private home and building infrastructure, 154,000 storage facilities, and more, it will be hard to replace within 30 years, and the EPA estimated the cost would be over $205 billion.

This is important because one of the main reasons life expectancy rose above 50 years last century was clean drinking water. Residents in Flint who drank lead-poisoned water may not only have their lifespans shortened but their quality of life reduced as well. Being able to harvest and store your own rainwater is one way to protect yourself. Excerpts from this 404-page document follow. They are not in order. ]

U.S. Water infrastructure is falling apart (my title)

TABLE 4-7 Material Life Expectancies

Distribution System Component: Typical Life Expectancy (years)

Concrete & metal storage tanks: 30
Transmission pipes: 35
Valves: 35
Mechanical valves: 15
Hydrants: 40
Service lines: 30

SOURCE: EPA (2004). EPA's Note: These expected useful lives are drawn from a variety of sources. The estimates assume that assets have been properly maintained.

The extent of water distribution pipes in the United States is estimated to be a total length of 980,000 miles (1.6 million km), which is being replaced at an estimated rate of once every 200 years. Rates of repair and rehabilitation have not been estimated.

There is a large range in the type and age of the pipes that make up water distribution systems. The oldest cast iron pipes from the late 19th century are typically described as having an expected average useful lifespan of about 120 years because of the pipe wall thickness.

In the 1920s the manufacture of iron pipes changed to improve pipe strength, but the changes also produced a thinner wall. These pipes have an expected average life of about 100 years.

Pipe manufacturing continued to evolve in the 1950s and 1960s with the introduction of ductile iron pipe that is stronger than cast iron and more resistant to corrosion. Polyvinyl chloride (PVC) pipes were introduced in the 1970s and high-density polyethylene in the 1990s. Both of these are very resistant to corrosion but they do not have the strength of ductile iron. Post-World War II pipes tend to have an expected average life of 75 years.

In the 20th century, most of the water systems and distribution pipes were relatively new and well within their expected lifespan. However, as is obvious from the above paragraph and recent reports, these different types of pipes, installed during different time periods, will all be reaching the end of their expected life spans in the next 30 years. Indeed, an estimated 26 percent of the distribution pipe in the country is unlined and in poor condition. For example, an analysis of main breaks at one large Midwestern water utility that kept careful records of distribution system management documented a sharp increase in the annual number of main breaks from 1970 (approximately 250 breaks per year) to 1989 (approximately 2,200 breaks per year). Thus, the water industry is entering an era where it must make substantial investments in pipe repair and replacement.

An EPA report on water infrastructure needs predicted that transmission and distribution replacement rates will rise to 2%/year by 2040 in order to adequately maintain the water infrastructure, which is about four times the current replacement rate.
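[The replacement-rate arithmetic here is worth spelling out. A quick sketch using the figures quoted above, with the current 0.5%/year rate implied by "once every 200 years":]

```python
# Back-of-the-envelope check of the pipe replacement rates in the report:
# ~980,000 miles of distribution pipe, currently replaced about once every
# 200 years, versus the EPA's projected need of 2%/year by 2040.
total_miles = 980_000

current_rate = 1 / 200   # 0.5% of the network replaced per year
needed_rate = 0.02       # EPA's projected rate needed by 2040

miles_now = total_miles * current_rate      # miles replaced per year today
miles_needed = total_miles * needed_rate    # miles per year needed by 2040

print(f"Current pace: {miles_now:,.0f} miles/yr (full turnover in {1 / current_rate:.0f} years)")
print(f"Needed pace:  {miles_needed:,.0f} miles/yr (full turnover in {1 / needed_rate:.0f} years)")
```

[Roughly 4,900 miles a year are replaced now; sustaining the infrastructure would take about 19,600 miles a year, four times the current pace, which is why the report talks about an era of substantial investment.]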

These data on the aging of the nation’s infrastructure suggest that utilities will have to engage in regular and proactive infrastructure assessment and replacement in order to avoid a future characterized by more frequent failures, which might overwhelm the water industry’s capability to react effectively. Although the public health significance of increasingly frequent pipe failures is unknown given the variability in utility response to such events, it is reasonable to assume that the likelihood of external distribution system contamination events will increase in parallel with infrastructure failure rates.

Corrosion and leaching of pipe materials, growth of biofilms and nitrifying microorganisms, and the formation of disinfectant by-products (DBPs) are events internal to the distribution system that are potentially detrimental. Furthermore, most are exacerbated by increased water age within the distribution system. External contamination can enter the distribution system through infrastructure breaks, leaks, and cross connections as a result of faulty construction, backflow, and pressure transients.

Repair and replacement activities as well as permeable pipe materials also present routes for exposing the distribution system to external contamination.

All of these events act to compromise the integrity of the distribution system.

The physical integrity of the distribution system is always in a state of change, and the aging of the nation’s distribution systems and eventual need for replacement are growing concerns. Maintaining such a vast physical infrastructure is a challenge because of the complexity of individual distribution systems, each of which is comprised of a network of mains, fire hydrants, valves, auxiliary pumping or booster disinfection substations, storage reservoirs, standpipes, and service lines along with the plumbing systems in residences, large housing projects, high-rise buildings, hospitals, and public buildings. This is further complicated by factors that vary from system to system such as the size of the distribution network for the population served, the predominant pipe material and age of pipelines, water pressure, the number of line breaks each year, water storage capacity, and water supply retention time in the system.

Risks from Drinking Water

Drinking water can serve as a transmission vehicle for a variety of hazardous agents:

  • enteric microbial pathogens from human or animal fecal contamination (e.g., noroviruses, E. coli O157:H7, Cryptosporidium)
  • aquatic microorganisms that can cause harmful infections in humans (e.g., nontuberculous mycobacteria, Legionella)
  • toxins from aquatic microorganisms (such as cyanobacteria)
  • several classes of chemical contaminants (organic chemicals such as benzene, polychlorinated biphenyls, and various pesticides; inorganic chemicals such as arsenic and nitrates; metals such as lead and copper)
  • disinfection byproducts (DBPs) such as trihalomethanes
  • radioactive compounds

Contaminants in drinking water can produce adverse effects in humans due to multiple routes of exposure. In addition to risk from ingestion, exposure can also occur from inhalation and dermal routes. For example, inhalation of droplets containing respiratory pathogens (such as Legionella or Mycobacterium) can result in illness. It is known that DBPs present in drinking water may volatilize resulting in inhalation risk, and these compounds (and likely other organics) may also be transported through the skin (after bathing or showering) into the bloodstream. Reaction of disinfectants in potable water with other materials in the household may also result in indoor air exposure of contaminants; for example Shepard et al. (1996) reported on release of volatile organics in indoor washing machines. Thus, multiple routes of exposure need to be considered when assessing the risk presented by contaminated distribution systems.

It has been recognized for some years that consumers face risk from multiple hazards, and that action to reduce the risk from one hazard may increase the risk from other hazards given the same exposure.


The distribution system is a critical component of every drinking water utility. Its primary function is to provide the required water quantity and quality at a suitable pressure, and failure to do so is a serious system deficiency. Water quality may degrade during distribution because of the way water is treated or not treated before it is distributed, chemical and biological reactions that take place in the water during distribution, reactions between the water and distribution system materials, and contamination from external sources that occurs because of main breaks, leaks coupled with hydraulic transients, and improperly maintained storage facilities, among other things. Furthermore, special problems are posed by the utility's need to maintain suitable water quality at the consumer's tap, and the quality changes that occur in consumers' plumbing, which is not owned or controlled by the utility. The primary driving force for managing and regulating distribution systems is protecting the health of the consumer, which becomes more difficult as our nation's distribution systems age and become more vulnerable to main breaks and leaks.


Water distribution systems carry drinking water from a centralized treatment plant or well supplies to consumers' taps. These systems consist of pipes, pumps, valves, storage tanks, reservoirs, meters, fittings, and other hydraulic appurtenances. Spanning almost 1 million miles in the United States, distribution systems represent the vast majority of physical infrastructure for water supplies.

The issues and concerns surrounding the nation’s public water supply distribution systems are many.

Of the 34 billion gallons of water produced daily by public water systems in the United States, approximately 63 percent is used by residential customers. More than 80 percent of the water supplied to residences is used for activities other than human consumption such as sanitary service and landscape irrigation. Nonetheless, distribution systems are designed and operated to provide water of a quality acceptable for human consumption. Another important factor is that in addition to providing drinking water, a major function of most distribution systems is to provide adequate standby fire-flow. In order to satisfy this need, most distribution systems use standpipes, elevated tanks, storage reservoirs, and larger sized pipes. The effect of designing and operating a distribution system to maintain adequate fire flow and redundant capacity is that there are longer transit times between the treatment plant and the consumer than would otherwise be needed.

The type and age of the pipes that make up water distribution systems range from cast iron pipes installed during the late 19th century to ductile iron pipe and finally to plastic pipes introduced in the 1970s and beyond. Most water systems and distribution pipes will be reaching the end of their expected life spans in the next 30 years.

External and internal corrosion should be better researched and controlled in standardized ways. There is a need for new materials and corrosion science to better understand how to more effectively control both external and internal corrosion, and to match distribution system materials with the soil environment and the quality of water with which they are in contact.

Corrosion is poorly understood and thus unpredictable in occurrence. Insufficient attention has been given to its control, especially considering its estimated annual direct cost of $5 billion in the United States for the main distribution system, not counting premise plumbing.

Outbreak surveillance data currently provide the most information on the public health impact of contaminated distribution systems. In fact, investigations conducted in the last five years suggest that a substantial proportion of waterborne disease outbreaks, both microbial and chemical, is attributable to problems within distribution systems.

Contamination from cross-connections and back-siphonage were found to cause the majority of the outbreaks associated with distribution systems, followed by contamination of water mains following breaks and contamination of storage facilities. The situation may be of even greater concern because incidents involving domestic plumbing are less recognized and unlikely to be reported. In general the identified number of waterborne disease outbreaks is considered an underestimate because not all outbreaks are recognized, investigated, or reported to health authorities.

Hydraulic Integrity

Maintaining the hydraulic integrity of distribution systems is vital to ensuring that water of acceptable quality is delivered in acceptable amounts. The most critical element of hydraulic integrity is adequate water pressure inside the pipes. The loss of water pressure resulting from pipe breaks, significant leakage, excessive head loss at the pipe walls, pump or valve failures, or pressure surges can impair water delivery and will increase the risk of contamination of the water supply via intrusion. Another critical hydraulic factor is the length of time water is in the distribution system. Low flows in pipes create long travel times, with a resulting loss of disinfectant residual as well as sections where sediments can collect and accumulate and microbes can grow and be protected from disinfectants. Furthermore, sediment deposition will result in rougher pipes with reduced hydraulic capacity and increased pumping costs. Long detention times can also greatly reduce corrosion control effectiveness by impacting phosphate inhibitors and pH management. A final component of hydraulic integrity is maintaining sufficient mixing and turnover rates in storage facilities, which if insufficient can lead to short circuiting and generate pockets of stagnant water with depleted disinfectant residual.

Positive water pressure should be maintained. Low pressures in the distribution system can result not only in insufficient firefighting capacity but can also constitute a major health concern resulting from potential intrusion of contaminants from the surrounding external environment. A minimum residual pressure of 20 psi under all operating conditions and at all locations (including at the system extremities) should be maintained.

Breaches in physical and hydraulic integrity can lead to the influx of contaminants across pipe walls, through breaks, and via cross connections. These external contamination events can act as a source of inoculum, introduce nutrients and sediments, or decrease disinfectant concentrations within the distribution system, resulting in a degradation of water quality. Even in the absence of external contamination, however, there are situations where water quality is degraded due to transformations that take place within piping, tanks, and premise plumbing. These include biofilm growth, nitrification, leaching, internal corrosion, scale formation, and other chemical reactions associated with increasing water age. Maintaining water quality integrity in the distribution system is challenging because of the complexity of most systems. That is, there are interactions between the type and concentration of disinfectants used, corrosion control schemes, operational practices (e.g., flow characteristics, water age, flushing practices), the materials used for pipes and plumbing, the biological stability of the water, and the efficacy of treatment.

Microbial growth and biofilm development in distribution systems should be minimized. Even though the general heterotrophs found in biofilms are not likely to be of public health concern, their activity can promote the production of tastes and odors, increase disinfectant demand, and may contribute to corrosion. Biofilms may also harbor opportunistic pathogens (those causing disease in the immunocompromised). This issue is of critical importance in premise plumbing where long residence times promote disinfectant decay and subsequent bacterial growth and release. Residual disinfectant choices should be balanced to meet the overall goal of protecting public health. For free chlorine, the potential residual loss and DBP formation should be weighed against the problems that may be introduced by chloramination, which include nitrification, lower disinfectant efficacy against suspended organisms, and the potential for deleterious corrosion problems.

Premise plumbing includes that portion of the distribution system associated with schools, hospitals, public and private housing, and other buildings. It is connected to the main distribution system via the service line. The quality of potable water in premise plumbing is not ensured by EPA regulations.

Virtually every problem previously identified in the main water transmission system can also occur in premise plumbing. However, unique characteristics of premise plumbing can magnify the potential public health risk relative to the main distribution system and complicate formulation of coherent strategies to deal with problems. These characteristics include:

  1. a high surface area to volume ratio, which along with other factors can lead to more severe leaching and permeation;
  2. variable, often advanced water age, especially in buildings that are irregularly occupied;
  3. more extreme temperatures than those experienced in the main distribution system;
  4. low or no disinfectant residual, because buildings are unavoidable “dead ends” in a distribution system;
  5. potentially higher bacterial levels and regrowth due to the lack of persistent disinfectant residuals, high surface area, long stagnation times, and warmer temperatures. Legionella in particular is known to colonize premise plumbing, especially hot water heaters;
  6. exposure routes through vapor and bioaerosols in relatively confined spaces such as home showers;
  7. proximity to service lines, which have been shown to provide the greatest number of potential entry points for pathogen intrusion;
  8. higher prevalence of cross connections, since it is relatively common for untrained and unlicensed individuals to do repair work in premise plumbing;
  9. variable responsible party, resulting in considerable confusion over who should maintain water quality in premise plumbing.


The first municipal water utility in the United States was established in Boston in 1652 to provide domestic water and fire protection. The Boston system emulated ancient Roman water supply systems in that it was multipurpose in nature. Many water supplies in the United States were subsequently constructed in cities primarily for the suppression of fires, but most have been adapted to serve commercial and residential properties with water. By 1860, there were 136 water systems in the United States, and most of these systems supplied water from springs low in turbidity and relatively free from pollution. However, by the end of the nineteenth century waterborne disease had become recognized as a serious problem in industrialized river valleys. This led to the more routine treatment of water prior to its distribution to consumers. Water treatment enabled a decline in the typhoid death rate in Pittsburgh, PA from 158 deaths per 100,000 in the 1880s to 5 per 100,000 in 1935.

Similarly, both typhoid case and death rates for the City of Cincinnati declined more than tenfold during the period 1898 to 1928 due to the use of sand filtration, disinfection via chlorination, and the application of drinking water standards. Water treatment in the United States has without a doubt been a major contributor to protecting the nation’s public health.


Distribution systems span almost 1 million miles in the United States and include an estimated 154,000 finished water storage facilities. As the U.S. population grows and communities expand, 13,200 miles (21,239 km) of new pipes are installed each year.

Because distribution systems represent the vast majority of physical infrastructure for water supplies, they constitute the primary management challenge from both an operational and public health standpoint.

Their repair and replacement represent an enormous financial liability; EPA estimates the 20-year water transmission and distribution needs of the country to be $183.6 billion, with storage facility infrastructure needs estimated at $24.8 billion.

Infrastructure

Distribution system infrastructure is generally considered to consist of the pipes, pumps, valves, storage tanks, reservoirs, meters, fittings, and other hydraulic appurtenances that connect treatment plants or well supplies to consumers’ taps. The characteristics, general maintenance requirements, and desirable features of the basic infrastructure components in a drinking water distribution system are briefly discussed below.


The systems of pipes that transport water from the source (such as a treatment plant) to the customer are often categorized from largest to smallest as transmission or trunk mains, distribution mains, service lines, and premise plumbing. Transmission or trunk mains usually convey large amounts of water over long distances such as from a treatment facility to a storage tank within the distribution system. Distribution mains are typically smaller in diameter than the transmission mains and generally follow the city streets. Service lines carry water from the distribution main to the building or property being served. Service lines can be of any size depending on how much water is required to serve a particular customer and are sized so that the utility’s design pressure is maintained at the customer’s property for the desired flows. Premise plumbing refers to the piping within a building or home that distributes water to the point of use. In premise plumbing the pipe diameters are usually comparatively small, leading to a greater surface-to-volume ratio than in other distribution system pipes.
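The surface-to-volume point is easy to quantify: for a circular pipe, wetted wall area per unit of water volume works out to 4/d, so the ratio grows as the diameter shrinks. A minimal sketch (the particular diameters are illustrative assumptions, not values from the report):

```python
# Surface-to-volume ratio of a circular pipe:
# wall area / water volume = (pi * d * L) / (pi * (d/2)**2 * L) = 4 / d.
def surface_to_volume(diameter_ft):
    """Wetted wall area per unit of water volume (1/ft)."""
    return 4.0 / diameter_ft

# Illustrative diameters (not from the report): a 36-inch trunk main
# versus a 1/2-inch premise plumbing line.
trunk_main = surface_to_volume(36 / 12)
premise_pipe = surface_to_volume(0.5 / 12)

# The small pipe exposes 72x more wall area per unit of water carried.
print(round(premise_pipe / trunk_main, 1))  # -> 72.0
```

This is why premise plumbing is so much more vulnerable to wall reactions, disinfectant decay, and biofilm effects than the mains upstream of it.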

The three requirements for a pipe are the ability to deliver the quantity of water required, to resist all external and internal forces acting upon it, and to be durable and have a long life. The materials commonly used to accomplish these goals today are ductile iron, pre-stressed concrete, polyvinyl chloride (PVC), reinforced plastic, and steel. In the past, unlined cast iron and asbestos cement pipes were frequently installed in distribution systems, and thus they remain important components of existing systems.

If premise plumbing is included, the figure for total distribution system length would increase from almost 1 million miles to greater than 6 million miles.

Inclusion of premise plumbing and service lines in the definition of a public water supply distribution system is not common because of their variable ownership, which ultimately affects who takes responsibility for their maintenance. Most drinking water utilities and regulatory bodies only take responsibility for the water delivered to the curb stop, which generally captures only a portion of the service line. The portion of the service line not under control of the utility and all of the premise plumbing are entirely the building owner’s responsibility.

A grid/looped system, which consists of connected pipe loops throughout the area to be served, is the most widely used configuration in large municipal areas. In this type of system there are several pathways that the water can follow from the source to the consumer. Looped systems provide a high degree of reliability should a line break occur because the break can be isolated with little impact on consumers outside the immediate area. Also, by keeping water moving, looping reduces some of the problems associated with water stagnation, such as adverse reactions with the pipe walls, and it increases fire-fighting capability. However, dead ends still occur even in looped systems, especially in suburban areas such as cul-de-sacs, and they carry the water quality problems associated with stagnation. Most systems are a combination of both looped and branched portions.

Transmission mains are spaced from 1.5 to 2 miles (2,400 to 3,200 m) apart with dual-service mains spaced 3,000 to 4,000 feet (900 to 1,200 m) apart. Service mains should be located in every street.

Storage Tanks and Reservoirs

Storage tanks and reservoirs are used to provide storage capacity to meet fluctuations in demand (or shave off peaks), to provide reserve supply for firefighting use and emergency needs, to stabilize pressures in the distribution system, to increase operating convenience and provide flexibility in pumping, to provide water during source or pump failures, and to blend different water sources. The recommended location of a storage tank is just beyond the center of demand in the service area. Elevated tanks are used most frequently, but other types of tanks and reservoirs include in-ground tanks and open or closed reservoirs. Common tank materials include concrete and steel. An issue that has drawn a great deal of interest is the problem of low water turnover in these facilities resulting in long detention times. Much of the water volume in storage tanks is dedicated to fire protection, and unless utilities properly manage their tanks to control water quality, there can be problems attributable to both water aging and inadequate water mixing. Excessive water age can be conducive to depletion of the disinfectant residual, leading to biofilm growth, other biological changes in the water including nitrification, and the emergence of taste and odor problems. Improper mixing can lead to stratification and large stagnant (dead) zones within the bulk water volume that have depleted disinfectant residual. As discussed later in this report, neither historical designs nor operational procedures have adequately maintained high water quality in storage.

Security is an important issue with both storage tanks and pumps because of their potential use as a point of entry for deliberate contamination of distribution systems.


Pumps are used to impart energy to the water in order to boost it to higher elevations or to increase pressure. Pumps are typically made from steel or cast iron. Most pumps used in distribution systems are centrifugal in nature, in that water from an intake pipe enters the pump through the action of a “spinning impeller” where it is discharged outward between vanes and into the discharge piping. The cost of power for pumping constitutes one of the major operating costs for a water supply.


The two types of valves generally utilized in a water distribution system are isolation valves (or stop or shutoff valves) and control valves. Isolation valves (typically either gate valves or butterfly valves) are used to isolate sections for maintenance and repair and are located so that the areas isolated will cause a minimum of inconvenience to other service areas. Maintenance of the valves is one of the major activities carried out by a utility. Many utilities have a regular valve-turning program in which a percentage of the valves are opened and closed on a regular basis. It is desirable to turn each valve in the system at least once per year. The implementation of such a program ensures that water can be shut off or diverted when needed, especially during an emergency, and that valves have not been inadvertently closed. Control valves are used to control the flow or pressure in a distribution system. They are normally sized based on the desired maximum and minimum flow rates, the upstream and downstream pressure differentials, and the flow velocities. Typical types of control valves include pressure-reducing, pressure-sustaining, and pressure-relief valves; flow-control valves; throttling valves; float valves; and check valves. Most valves are either steel or cast iron, although those found in premise plumbing to allow for easy shut-off in the event of repairs are usually brass. They exist throughout the distribution system and are more widely spaced in the transmission mains compared to the smaller-diameter pipes. Other appurtenances in a water system include blow-off and air-release/vacuum valves, which are used to flush water mains and release entrained air. On transmission mains, blow-off valves are typically located at every low point, and an air release/vacuum valve at every high point on the main. Blow-off valves are sometimes located near dead ends where water can stagnate or where rust and other debris can accumulate. 
Care must be taken at these locations to prevent unprotected connections to sanitary or storm sewers.

Hydrants are primarily part of the firefighting aspect of a water system. Proper design, spacing, and maintenance are needed to ensure an adequate flow to satisfy fire-fighting requirements. Fire hydrants are typically exercised and tested annually by water utility or fire department personnel. Fire flow tests are conducted periodically to satisfy the requirements of the Insurance Services Office or as part of a water distribution system calibration program. Fire hydrants are installed in areas that are easily accessible by fire fighters and are not obstacles to pedestrians and vehicles. In addition to being used for firefighting, hydrants are also used for routine flushing programs, emergency flushing, preventive flushing, testing and corrective action, and for street cleaning and construction projects.

Infrastructure Design and Operation

The function of a water distribution system is to deliver water to all customers of the system in sufficient quantity for potable drinking water and fire protection purposes, at the appropriate pressure, with minimal loss, of safe and acceptable quality, and as economically as possible. To convey water, pumps must provide working pressures, pipes must carry sufficient water, storage facilities must hold the water, and valves must open and close properly. Indeed, the carrying capacity of a water distribution system is defined as its ability to supply adequate water quantity and maintain adequate pressure (Male and Walski, 1991). Adequate pressure is defined in terms of the minimum and maximum design pressure supplied to customers under specific demand conditions. The maximum pressure is normally in the range of 80 to 100 psi; for example, the Uniform Plumbing Code requires that water pressure not exceed 80 psi (552 kPa) at service connections, unless the service is provided with a pressure-reducing valve.
The minimum pressure during peak hours is typically in the range of 40 to 50 psi (276–345 kPa), while the recommended minimum pressure during fire flow is 20 psi (138 kPa).
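The psi figures above carry metric equivalents in parentheses; a quick sketch reproducing those conversions (the constant is the standard psi-to-kilopascal factor):

```python
PSI_TO_KPA = 6.894757  # standard conversion: 1 psi = 6.894757 kPa

def psi_to_kpa(psi):
    return psi * PSI_TO_KPA

# Reproduce the parenthetical conversions quoted in the text:
print(round(psi_to_kpa(80)))  # -> 552  (Uniform Plumbing Code maximum at the service)
print(round(psi_to_kpa(40)))  # -> 276  (low end of typical peak-hour minimum)
print(round(psi_to_kpa(50)))  # -> 345  (high end of typical peak-hour minimum)
print(round(psi_to_kpa(20)))  # -> 138  (recommended minimum during fire flow)
```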

Residential Drinking Water Provision

Of the 34 billion gallons of water produced daily by public water systems in the United States, approximately 63 percent is used by residential customers for indoor and outdoor purposes. Mayer et al. (1999) evaluated 1,188 homes from 14 cities across six regions of North America and found that 42 percent of annual residential water use was for indoor purposes and 58 percent for outdoor purposes. Outdoor water use varies quite significantly from region to region and includes irrigation. Of the indoor water use, less than 20 percent is for consumption or related activities, as shown below:

  • Human Consumption or Related Use – 17.1%
    – Faucet use – 15.7%
    – Dishwasher – 1.4%
  • Human Contact Only – 18.5%
    – Shower – 16.8%
    – Bath – 1.7%
  • Non-Human Ingestion or Contact Uses – 64.3%
    – Toilet – 26.7%
    – Clothes washer – 21.7%
    – Leaks – 13.7%
    – Other – 2.2%
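The category totals in the list above are the sums of their subcategories, and together they account for essentially all indoor use. A quick consistency check (values transcribed from the list):

```python
# Indoor residential water-use shares from Mayer et al. (1999), in percent,
# transcribed from the list above.
indoor_use = {
    "Human Consumption or Related Use": {"Faucet use": 15.7, "Dishwasher": 1.4},
    "Human Contact Only": {"Shower": 16.8, "Bath": 1.7},
    "Non-Human Ingestion or Contact Uses": {
        "Toilet": 26.7, "Clothes washer": 21.7, "Leaks": 13.7, "Other": 2.2,
    },
}

# Each category heading equals the sum of its subcategories:
for category, parts in indoor_use.items():
    print(category, round(sum(parts.values()), 1))

# The three categories together cover essentially all indoor use
# (99.9 rather than 100.0, due to rounding in the published shares).
grand_total = round(sum(sum(p.values()) for p in indoor_use.values()), 1)
print(grand_total)  # -> 99.9
```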

Most of the water supplied to residences is used for laundering, showering, lawn watering, flushing toilets, or washing cars, and not for consumption. Nonetheless, except in a few rare circumstances, distribution systems are assumed to be designed and operated to provide water of a quality acceptable for human consumption. Normal household use is generally in the range of 200 gallons per day (757 L per day) with a typical flow rate of 2 to 20 gallons per minute (gpm) [7.57–75.7 L per minute (Lpm)]; fire flow can be orders of magnitude greater than these levels, as discussed below.

Fire Flow Provision

Besides providing drinking water, a major function of most distribution systems is to provide adequate standby fire flow.

Fire-flow requirements for a single family house vary from 750 to 1,500 gpm.

The duration for which these fire flows must be sustained normally ranges from 3 to 8 hours. In order to satisfy this need for adequate standby capacity and pressure, most distribution systems use standpipes, elevated tanks, and large storage reservoirs. Furthermore, the sizing of water mains is partly based on fire protection requirements set by the Insurance Services Office. (The minimum flow that the water system can sustain for a specific period of time governs its fire protection rating, which then is used to set the fire insurance rates for the communities that are served by the system.) As a consequence, fire-flow governs much of the design of a distribution system, especially for smaller systems. A study conducted by the American Water Works Association Research Foundation confirmed the impact of fire-flow capacity on the operation of, and the water quality in, drinking water networks. It found that although the amount of water used for firefighting is generally a small percentage of the annual water consumed, the required rates of water delivery for firefighting have a significant and quantifiable impact on the size of water mains, tank storage volumes, water age, and operating and maintenance costs. Nearly 75 percent of the capacity of a typical drinking water distribution system is devoted to fire fighting.
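The quoted fire flows and durations translate directly into the standby storage a system must hold: volume equals flow rate times duration. A back-of-envelope sketch using the figures from the text:

```python
# Standby storage implied by a fire-flow requirement:
# volume (gallons) = flow (gpm) x duration (minutes).
def standby_volume_gal(flow_gpm, duration_hr):
    return flow_gpm * duration_hr * 60

print(standby_volume_gal(750, 3))   # -> 135000 (single-family low end, 3 hours)
print(standby_volume_gal(1500, 8))  # -> 720000 (high end sustained for 8 hours)

# Even the low-end reserve equals ~675 days of the ~200 gallons per day
# normal household use cited earlier in this report.
print(standby_volume_gal(750, 3) / 200)  # -> 675.0
```

The mismatch between these reserves and daily demand is exactly why so much distribution capacity sits in storage, with the water-age consequences described above.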

The effect of designing and operating a system to maintain adequate fire flow and redundant capacity is that there are long transit times between the treatment plant and the consumer, which may be detrimental to meeting drinking water maximum contaminant levels (MCLs). Snyder et al. (2002) recommended that water systems evaluate existing storage tanks to determine if modification or elimination of the tanks was feasible. Water efficient fire suppression technologies exist that use less water than conventional standards. In particular, the universal application of automatic sprinkler systems provides the most proven method for reducing loss of life and property due to fire, while at the same time providing faster response to the fire and requiring significantly less water than conventional fire-fighting techniques. Snyder et al. (2002) also recommended that the universal application of automatic fire sprinklers be adopted by local jurisdictions for homes as well as in other buildings. There is a growing recognition that embedded designs in most urban areas have resulted in distribution systems that have long water residence times due to the large amounts of storage required for firefighting capacity. More than ten years ago, Clark and Grayman (1992) expressed concern that long residence times resulting from excess capacity for firefighting and other municipal uses would also provide optimum conditions for the formation of disinfection byproducts (DBPs) and the regrowth of microorganisms. They hypothesized that eventually the drinking water industry would be in conflict over protecting public health and protecting public safety.

Because existing water distribution systems are designed primarily for fire protection, the majority of the distribution system uses pipes that are much larger than would be needed if the water were intended only for personal use. This leads to residence times of weeks in traditional systems versus potentially hours in a system composed of much smaller pipes. In the absence of smaller sized distribution systems, utilities have had to implement flushing programs and use higher dosages of disinfectants to maintain water quality in distribution systems. This has the unfortunate side effect of increasing DBP formation as well as taste and odor problems, which contribute to the public’s perception that the water quality is poor. Furthermore, large pipes are generally cement-lined or unlined ductile iron pipe typically with more than 300 joints per mile. These joints are frequently not water tight, leading to water losses as well as providing an opportunity for external contamination of finished water.
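The weeks-versus-hours contrast follows from simple geometry: at a fixed demand flow, plug-flow transit time through a pipe scales with its cross-sectional area, i.e., with diameter squared. A rough sketch (the 10-mile length, 100 gpm demand, and the 12-inch versus 3-inch diameters are illustrative assumptions, not values from the report):

```python
import math

def transit_time_hr(diameter_in, length_mi, flow_gpm):
    """Plug-flow transit time in hours: pipe volume divided by demand flow."""
    radius_ft = (diameter_in / 12) / 2
    volume_ft3 = math.pi * radius_ft**2 * length_mi * 5280
    volume_gal = volume_ft3 * 7.481  # gallons per cubic foot
    return volume_gal / flow_gpm / 60

# Same 10-mile run at 100 gpm demand; only the diameter changes.
fire_sized = transit_time_hr(12, 10, 100)   # main sized with fire flow in mind
demand_sized = transit_time_hr(3, 10, 100)  # main sized for domestic demand only

# Transit time scales with diameter squared: (12/3)**2 = 16.
print(round(fire_sized / demand_sized, 1))  # -> 16.0
print(round(fire_sized), round(demand_sized, 1))  # roughly 52 h versus 3.2 h
```

Over a real network with storage tanks in the path, these single-pipe times compound into the multi-week water ages the text describes.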

From an engineering perspective it seems intuitively obvious that it is most efficient to satisfy all needs by installing one pipe and to minimize the number of pipe excavations. This philosophy worked well in the early days of water system development. However, it has resulted in water systems with long residence times (and their negative consequences) under normal water use patterns and a major investment in above-ground (pumps and storage tanks) and belowground (transmission mains, distribution pipes, service connections, etc.) infrastructure. Therefore as suggested in Okun (2005) it may be time to look at alternatives for supplying the various water needs in urban areas such as dual distribution systems.

However, the creation of dual distribution systems necessitates the retrofitting of an existing water supply system and reliance on existing pipes to provide non-potable supply obtained from wastewater or other sources. Large costs would be incurred when installing the new, small diameter pipe for potable water, disconnecting the existing system from homes and other users so that it could be used reliably for only non-potable needs, and other retrofitting measures.

The potential for cross connections or misuse of water supplies of lesser quality is greatly increased in dual distribution systems and decentralized treatment.

Water System Diversity

Water utilities in the United States vary greatly in size, ownership, and type of operation. The SDWA defines public water systems as consisting of community water supply systems; transient, non-community water supply systems; and non-transient, non-community water supply systems. A community water supply system serves year-round residents and ranges in size from those that serve as few as 25 people to those that serve several million. A transient, non-community water supply system serves areas such as campgrounds or gas stations where people do not remain for long periods of time. A non-transient, non-community water supply system serves primarily non-residential customers but must serve at least 25 of the same people for at least six months of the year (such as schools, hospitals, and factories that have their own water supply).

There are 159,796 water systems in the United States that meet the federal definition of a public water system (EPA, 2005b). Thirty-three percent (52,838) of these systems are categorized as community water supply systems, 55 percent are categorized as transient, non-community water supplies, and 12 percent (19,375) are non-transient, non-community water systems. Overall, public water systems serve 297 million residential and commercial customers. Although the vast majority (98 percent) of systems serve fewer than 10,000 people, almost three quarters of all Americans get their water from community water supplies serving more than 10,000 people. Not all water supplies deliver water directly to consumers; some instead deliver water to other supplies. Community water supply systems are defined as “consecutive systems” if they receive their water from another community water supply through one or more interconnections.
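The counts and percentages above are mutually consistent, as a quick check shows (the transient count is derived by subtraction, since the text gives only its percentage):

```python
# Public water system counts from EPA (2005b), as quoted in the text.
total_systems = 159796
community = 52838
non_transient_non_community = 19375

# The transient, non-community count is not given directly; derive it:
transient = total_systems - community - non_transient_non_community

print(round(100 * community / total_systems))                    # -> 33
print(round(100 * transient / total_systems))                    # -> 55
print(round(100 * non_transient_non_community / total_systems))  # -> 12
```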

Some utilities rely primarily on surface water supplies while others rely primarily on groundwater. Surface water is the primary source of 22 percent of the community water supply systems, while groundwater is used by 78 percent of community water supply systems. Of the non-community water supply systems (both transient and non-transient), 97 percent are served by groundwater. Many systems serve communities using multiple sources of supply such as a combination of groundwater and/or surface water sources. This is important because in a grid/looped system, the mixing of water from different sources can have a detrimental influence on water quality, including taste and odor, in the distribution system.

Water supply systems serving cities and towns are generally administered by departments of municipalities or counties (public systems) or by investor owned companies (private systems). Public systems are predominately owned by local municipal governments, and they serve approximately 78 percent of the total population that uses community water supplies. Approximately 82 percent of urban water systems (those serving more than 50,000 persons) are publicly owned. There are about 33,000 privately owned water systems that serve the remaining 22 percent of people served by community water systems. Private systems are usually investor-owned in the larger population size categories but can include many small systems as part of one large organization. In the small- and medium-sized categories, the privately owned systems tend to be owned by homeowners associations or developers.

Infrastructure Viability over the Long Term

For the purposes of this report, distribution system integrity is defined as having three basic components: (1) physical integrity, which refers to the maintenance of a physical barrier between the distribution system interior and the external environment, (2) hydraulic integrity, which refers to the maintenance of a desirable water flow, water pressure, and water age, taking both potable drinking water and fire flow provision into account, and (3) water quality integrity, which refers to the maintenance of finished water quality via prevention of internally derived contamination. This division is important because the three types of integrity have different causes of their loss, different consequences once they are lost, different methods for detecting and preventing a loss, and different remedies for regaining integrity. Factors important in maintaining the physical integrity of a distribution system include the maintenance of the distribution system components, such as the protection of pipes and joints against internal and external corrosion and the presence of devices to prevent cross-connections and backflow. Hydraulic integrity depends on, for example, proper system operation to minimize residence time and on preventing the encrustation and tuberculation of corrosion products and biofilms on the pipe walls that increase hydraulic roughness and decrease effective diameter. Maintaining water quality integrity in the face of internal contamination can involve control of nitrifying organisms and biofilms via changes in disinfection practices.

Older industrial cities in the Northeast and Midwest United States no longer have industries that use high volumes of water, and they have also experienced major population shifts from the inner city to the suburbs. As a consequence, the utilities have an overcapacity to produce water, mainly in the form of oversized mains, at central locations, while needing to provide water to suburbs at greater distances from the treatment plant. Both factors can contribute to problems associated with high water residence times in the distribution system.

Currently, 51 organic chemicals, 16 inorganic chemicals, seven disinfectants and disinfection byproducts (DBPs), four radionuclides, and coliform bacteria are monitored for compliance with the SDWA.   The SDWA does not directly address distribution system contamination for most compounds.

Water Security-related Directives and Laws

Although not a new issue, security has become paramount to the water utility industry since the events of September 11, 2001. The potential for natural, accidental, and purposeful contamination of water supply has been present for decades whether in the form of earthquakes, floods, spills of toxic chemicals, or acts of vandalism.

One of the most common means of contaminating distribution systems is through a cross connection. Cross connections occur when a nonpotable water source is connected to a potable water source. Under this condition contaminated water has the potential to flow back into the potable source. Backflow can occur when the pressure in the distribution system is less than the pressure in the nonpotable source, a condition described as backsiphonage. Conditions under which backsiphonage can occur include water main breaks, firefighting demands, and pump failures. Backflow can also occur when there is increased pressure from the nonpotable source that exceeds the pressure in the distribution system, described as backpressure. Backpressure can occur when industrial operations connected to the potable source are exerting higher internal pressure than the pressure in the distribution system, or when irrigation systems connected to the potable system are pumping from a separate water source and the pump pressure exceeds the distribution system pressure.
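Both backflow mechanisms reduce to a pressure comparison at the cross connection. The sketch below is a deliberately simplified illustration; the function name, the threshold used to flag a depressurized system, and the example pressures are all assumptions for illustration, not from the text:

```python
def backflow_risk(system_psi, nonpotable_psi, normal_system_psi=40):
    """Classify backflow potential at a cross connection (simplified sketch).

    Backsiphonage: the system side has lost pressure (e.g., main break,
    heavy fire-flow draw, pump failure) and fallen below the nonpotable
    source. Backpressure: the nonpotable source pushes above a normally
    pressurized system. The normal_system_psi threshold is an assumption.
    """
    if nonpotable_psi <= system_psi:
        return "no backflow expected"
    if system_psi < normal_system_psi:
        return "backsiphonage risk"
    return "backpressure risk"

# Main break collapses system pressure below an open nonpotable connection:
print(backflow_risk(5, 15))   # -> backsiphonage risk
# Irrigation pump on a cross connection exceeds normal system pressure:
print(backflow_risk(45, 60))  # -> backpressure risk
```

In practice this comparison is handled physically by backflow prevention devices rather than by monitoring logic; the sketch only illustrates the two conditions the text distinguishes.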

Some states rely solely on plumbing codes to address cross connections and backflow, which is problematic because plumbing codes, in most cases, do not require testing and follow-up inspections of backflow prevention devices.

Houses are built to code but many fall out of compliance due to age and as the code changes. In addition there are no organizations that advise homeowners on how to maintain their plumbing systems such as when flushing is necessary, water temperature recommendations, home treatment devices, etc. (Chaney, 2005).

The barrier must be non-permeable since contaminants can enter through breaks or failures in materials as well as through the materials themselves. Table 4-1 gives examples of the infrastructure components that constitute this physical barrier, what they protect against, and the materials of which they are commonly constructed. A variety of components and materials make up this physical barrier. Four major component types are delineated and referred to repeatedly in this chapter: (1) pipes including mains, service lines, and premise plumbing; (2) fittings and appurtenances such as crosses, tees, ells, hydrants, valves, and meters; (3) storage facilities; and (4) backflow prevention devices.

TABLE 4-1 Infrastructure Components, What They Protect Against, and Common Materials

  • Pipe. Protects against: soil, groundwater, sewer exfiltration, surface runoff, human activity, animals, insects, and other life forms. Materials: asbestos cement, reinforced concrete, steel, lined and unlined cast iron, lined and unlined ductile iron, PVC, polyethylene and HDPE, galvanized iron, copper, polybutylene
  • Pipe wrap and coatings. Supporting role: preserves pipe integrity. Materials: polyethylene, bitumastic, cement-mortar
  • Pipe linings. Supporting role: preserves pipe integrity. Materials: epoxy, urethanes, asphalt, coal tar, cement-mortar, plastic inserts
  • Service lines. Protects against: soil, groundwater, sewer exfiltration, surface runoff, human activity, animals, insects, and other life forms. Materials: galvanized steel or iron, lead, copper, chlorinated PVC, cross-linked polyethylene, polyethylene, polybutylene, PVC, brass, cast iron
  • Premise (home and building) plumbing. Protects against: air contamination, human activity, sewage and industrial non-potable water. Materials: copper, lead, galvanized steel or iron, iron, steel, chlorinated PVC, PVC, cross-linked polyethylene, polyethylene, polybutylene
  • Fittings and appurtenances (meters, valves, hydrants, ferrules). Protects against: soil, groundwater, sewer exfiltration, surface runoff, human activity, animals, insects, and other life forms. Materials: brass, rubber, plastic
  • Storage facility walls, roof, cover, vent hatch. Protects against: air contamination, rain, algae, surface runoff, human activity, animals, birds, and insects. Materials: concrete, steel, asphaltic, epoxy, plastics
  • Backflow prevention devices. Protects against: nonpotable water. Materials: brass, plastic
  • Gaskets and joints. Protects against: soil, groundwater, sewer exfiltration, surface runoff, human activity, animals, insects, and other life forms. Materials: rubber, leadite, asphaltic

Cast iron pipe (lined or unlined) has been largely phased out due to its susceptibility to both internal and external corrosion and associated structural failures. Ductile-iron pipe (with or without a cement lining) has taken its place because it is durable and strong, has high flexural strength, and has good resistance to external corrosion from soils. It is, however, quite heavy, it might need corrosion protection in certain soils, and it requires multiple types of joints. Concrete, asbestos cement, and polyvinyl chloride (PVC) plastic pipe have been used to replace metal pipe because of their relatively good resistance to corrosion. Polyethylene pipe is growing in use, especially for trenchless applications like slip lining, pipe bursting, and directional drilling. High-density polyethylene pipe is the second most commonly used pipe. It is tough, corrosion resistant both internally and externally, and flexible. Manufacturers estimate its service life to be 50 to 100 years.


Losses in physical integrity are caused by an abrupt or gradual alteration in the structure of the material barrier between the external environment and the drinking water, by the absence of a barrier, or by the improper installation or use of a barrier. These mechanisms are summarized in Table 4-2, which shows that failure is caused by factors such as corrosion, permeation, excessive internal water pressure or surges, shifting earth, exposure to UV light, stress from overburden, temperature fluctuations, freezing, natural disasters, and aging and weathering.

Infrastructure components break down or fail over time due to chemical interactions between the materials and the surrounding environment, eventually leading to holes, leaks, and other breaches in the barrier. These processes can occur over time scales of days to decades, depending on the materials and conditions present. For example, plastic pipes can be very rapidly compromised by nearby hydrophobic compounds (e.g., solvents in the vadose zone that result from surface or subsurface contamination), with the resulting permeation of those compounds into the distribution system through the pipe materials. Both internal and external corrosion can lead to structural failure of pipes and joints, thereby allowing contaminants to infiltrate into the distribution system via leaks or subsequent main breaks. Materials failure can be hastened if the distribution system water pressure is too high, from overburden stresses on pipes, and during natural disasters. Indeed, hurricanes and earthquakes have caused extensive sudden damage to distribution systems, including broken service lines and fire hydrants, pipes disconnected or broken by the uprooting of trees, cracks in cement water storage basins, and seam separations in steel water storage tanks.

A second major contributor to the loss of physical integrity is when certain critical components are absent, either by oversight or due to vandalism. For example, the absence of backflow prevention devices and covers for storage facilities can allow external contaminants to enter distribution systems.

Finally, human activity involving distribution system materials can allow contamination to occur such as through unsanitary repair and replacement practices, unprotected access to materials, or the improper handling of materials leading to unintentional damage. One must even consider the installation of flawed materials, which might, for example, be brought about because of a lack of protection of materials during storage and handling.

Structural Failure of Distribution System Components

Metallic pipe failures are divided generally into two categories: corrosion failures and mechanical failures. Common types of failures for iron mains include:

  • Bell splits or cracks that require cutting out the joint and replacing it with a mechanical fitting; these are typical for leadite joints
  • Splits at tees and offsets and other fittings that require replacement
  • Circumferential cracks or round cracks and holes, more typical in smaller diameter pipe (< 10 in.). These can result from a lack of soil support, causing the pipe to be called upon to act as a beam
  • Splits or longitudinal cracks or spiral cracks that will blow out. Longitudinal cracks are more common for larger pipe (> 12 in.) and can result from crushing under external loads or from excessive internal pressure
  • Spiral failures in medium diameter pipe
  • Shearing failures in large diameter pipe
  • Pinholes (corrosion holes) caused by internal corrosion
  • Tap or joint blowout
  • Crushed pipe

A simpler categorization can be found in Romer et al. (2004), who summarized three types of pipe failures as weeping failures, pipe breaks, and sudden failures. A weeping failure is where a leak allows an unnoticeable exchange of water to and from the surrounding soil. A pipe break includes a hole in the pipe or a disengagement of a bell-and-spigot joint. A sudden failure is the bursting of a pipe wall or shear of the pipe cross section, as would occur for a concrete pipeline, or a blowout, which refers to a complete break in a pipe. Pipe breaks can occur for a myriad of reasons such as normal materials deterioration, joint problems, movement of earth around the pipe, freezing and thawing, internal and external corrosion, stray DC currents, seasonal changes in internal water temperature, heavy traffic overhead including accidents that damage fire hydrants, changes in system pressure, air entrapment, excessive overhead loading, insufficient surge control (such as with water hammer and pressure transients), and errors in construction practices.

One overriding factor in determining the potential for pipe failure is the force exerted on the water main. Contributors to this force include changes in temperature, which cause contraction and expansion of the metal and the surrounding soil, the weight of the soil over the buried main, and vibrations on the main caused by nearby activities such as traffic. An important consideration in this regard is the erosion potential of the supporting soil beneath the buried main. In the construction of a main, special sand and soil can be laid beneath it to help it bear external forces. But the movement of water in the ground beneath the main can wash away the finer material and create small or large caverns under the pipe. The force now bearing down on top of the pipe must be taken by the pipe itself, without the help of supporting material underneath. If these forces exceed the strength of the pipe, the main breaks. Most often these breaks occur at the weakest part of the main, i.e., the joint.

The factors that cause pipe failures can compound one another, hastening the process. For example, if a main develops small leaks because of corrosion, water within the distribution system can exfiltrate into the area surrounding the pipe, eroding away the supporting soil. Leakage that undermines the foundation of a water main can also occur from nearby sewer lines, go on essentially unnoticed, and eventually lead to water main collapse.

Table 4-3 summarizes common problems that lead to pipe failures for pipes of differing materials. These are some of the principal factors, but they are not the only factors that act individually or in combination to lead to a main break. Other factors could include a street excavation that accidentally disturbs a water main and the misuse of fire hydrants.

Other components of the distribution system also experience structural failure, although they have not historically received the attention afforded to pipes.

TABLE 4-3 Most Common Problems that Lead to Pipe Failure for Various Pipe Materials

• PVC and Polyethylene (4-36 in.): excessive deflection; joint misalignment and/or leakage; leaking connections; longitudinal breaks from stress; exposure to sunlight; too-high internal water pressure or frequent surges in pressure; exposure to solvents; hard to locate when buried; damage can occur during tapping
• Cast/Ductile Iron (4-64 in., lined and unlined): internal corrosion; joint misalignment and/or leakage; external corrosion; leaking connections; casting/manufacturing flaws
• Steel (4-120 in.): internal corrosion; external corrosion; excessive deflection; joint leakage; imperfections in welded joints
• Asbestos-Cement (4-35 in.): internal corrosion; cracks; joint misalignment and/or leakage; small pipe can be damaged during handling or tapping; pipe must be in proper soil; pipe is hard to locate when buried
• Concrete (12-16 to 144-168 in., prestressed or reinforced): corrosion in contact with groundwater high in sulfates and chlorides; pipe is very heavy; alignment can be difficult; settling of the surrounding soil can cause joint leaks; manufacturing flaws

Corrosion as a Major Factor

Corrosion is the degradation of a material by reaction with the local environment. In water distribution systems, the term corrosion refers to dissolution of concrete linings and concrete pipe, as well as to the deterioration of metallic pipe and valves via redox reactions (e.g., iron pipe rusting). Degradation originating from the inside of the pipe via reactions with the potable water is termed internal corrosion. Degradation originating outside the pipe on surfaces contacting moist soil is referred to as external corrosion. Both internal and external corrosion can cause holes in the distribution system and a loss of pipeline integrity. In some cases holes are formed directly in pipes by corrosion, as is the case with pinholes, but in many other instances corrosion weakens the pipe to the point that it will fail in the presence of forces originating from the soil environment. The type of corrosion and mode of failure causing loss of physical integrity are highly system specific. External corrosion can be exacerbated by a low soil redox potential, low soil pH, stray currents, and dissimilar metals (galvanic corrosion).

Internal corrosion is influenced by pH, alkalinity, disinfectant type and dose, type of bacteria present in biofilms, velocity, water use patterns, use of inhibitors, and many other factors.

Some utilities have tried to avoid the issue by using plastic pipe. Even so, unprotected metal materials are regularly used at the present time, illustrating the water industry’s lack of attention to the problem. According to Romer et al. (2004), “approximately 72 percent of the materials reported in use for water mains are iron pipe, approximately two-thirds of the reported corrosion is in corrosive soils, and approximately two-thirds of the corrosion is on the pipe barrel.” In addition, metallic and cementitious pipes are often designed first and foremost on the basis of their hydraulic capabilities, with corrosion resistance a secondary consideration. The annual direct costs of corrosion in the main distribution system (not counting premise plumbing) are estimated to be $5 billion.

Issues with Service Lines

Recent evidence indicates that service lines (the piping between the water main and the customer’s premises) and their fittings and connections (ferrules, curb stops, corporation stops, valves, and meters) can account for a significant proportion of the leaks in a distribution system.

Many galvanized and lead pipe service lines are being replaced with copper or plastic pipe (chlorinated polyvinyl chloride or CPVC). CPVC and copper each have their benefits and weaknesses. Installation of CPVC requires less skill compared to installation of copper, although if workers are not careful installation can result in cracking and damage to CPVC pipe. CPVC is better for corrosive soils and waters, while copper is more resistant to internal biofilm growth. Buried CPVC pipe is difficult to locate compared to metal pipe because it does not conduct electrical current for tracing. CPVC can impart a “plastic” flavor to water while copper pipe can impart a “metallic” flavor. With CPVC, low levels of vinyl chloride can leach into the water.

Permeation refers to a mechanism of pipe failure in which contaminants external to the pipe materials and non-metallic joints compromise the structural integrity of the materials and actually pass through them into the drinking water. Permeation is generally associated with plastic pipes and with chemical solvents such as benzene, toluene, ethylbenzene, and xylenes (BTEX) and other hydrocarbons associated with oil and gasoline, all of which are easily detected using volatile organic chemical gas chromatography analyses. These chemicals can readily diffuse through the plastic pipe matrix, alter the plastic material, and migrate into the water within the pipe. Such compounds are common in soils surrounding gasoline spills (leaking storage tanks), at abandoned industrial sites, and near bulk chemical storage, electroplaters, and dry cleaners.

Human Activities that Lead to Contamination

A second major cause of physical integrity loss is human activity surrounding construction, repair, and replacement that can introduce contamination into the distribution system. Any point where the water distribution system is opened to the atmosphere is a potential source of contamination. This is particularly relevant when laying new pipes, engaging in pipe repairs, and rehabilitating sites.

Given that the average number of main repairs per year for a single utility ranges from 66 to 901 (corresponding to 7.9–35.6 repairs per 100 miles of pipe per year), it is clear that exposure of the distribution system to contamination during repair is an inescapable reality.
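As a sanity check on those figures, the per-100-miles rate is simply the annual repair count divided by the miles of main, scaled by 100. A minimal sketch (the utility network sizes below are hypothetical, back-calculated to bracket the reported range):

```python
def repairs_per_100_miles(repairs_per_year: float, pipe_miles: float) -> float:
    """Normalize an annual main-repair count by network size."""
    return repairs_per_year / pipe_miles * 100

# Hypothetical utilities bracketing the reported range:
# 66 repairs/yr on ~835 miles of main, 901 repairs/yr on ~2,531 miles
print(f"{repairs_per_100_miles(66, 835):.1f}")    # → 7.9
print(f"{repairs_per_100_miles(901, 2531):.1f}")  # → 35.6
```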

TABLE 4-4 Potential for Contaminant Entry during Water Main Activities
• Broken service line fills trench during installation
• Pipe gets dirty during storage before installation
• Trench dirt gets into pipe during installation
• Rainwater fills trench during installation
• Street runoff gets into pipe before installation
• Pipe is delivered dirty
• Trash gets into pipe before installation
• Vandalism occurs at the site
• Animals get into pipe before installation

The installation process for buried pipe is not the only place where contamination can occur. The storage of pipe, pipe fittings, and valves along roadways or in pipe yards prior to installation can expose them to contamination from soil, storm water runoff, and pets and wildlife. Damage to pipes prior to their installation is also possible, such as during pipe storage and handling or actual manufacturing defects such as surface impurities or nicks.

Similar issues surface for storage facilities that do not have adequate protection to prevent their contamination. There are 154,000 treated water storage facilities in the United States encompassing a variety of types including elevated tanks, standpipes, open and covered reservoirs, underground basins, and hydropneumatic storage tanks. Storage facilities are susceptible to external contamination from birds, insects, other animals, wind, rain, and algae. Indeed, coliform occurrences have been associated with birds roosting in the vent ports of covered water reservoirs. This is most problematic for uncovered storage facilities, although storage facilities with floating covers are also susceptible to bacterial contamination due to rips in the cover from ice, vandalism, or normal operation. Even with covered storage facilities, contaminants can gain access through improperly sealed access openings and hatches or faulty screening of vents and overflows.

The general rule is that there should be a horizontal separation of at least 10 ft (3 m) between water and sewer lines, and that the water line should be at least 1 ft (0.3 m) above the sewer (although variations to this general rule may occur from state to state). This rule, however, is fairly recent in comparison to the average age of the nation’s buried infrastructure.
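The rule can be expressed as a simple conformance check. The thresholds come straight from the text; the state-to-state variations it mentions are not modeled:

```python
def meets_separation_rule(horizontal_ft: float, water_above_sewer_ft: float) -> bool:
    """Check the general water/sewer separation rule: at least 10 ft of
    horizontal separation, with the water line at least 1 ft above the
    sewer. (State requirements may vary, per the text.)"""
    return horizontal_ft >= 10.0 and water_above_sewer_ft >= 1.0

assert meets_separation_rule(12.0, 1.5)        # compliant layout
assert not meets_separation_rule(8.0, 1.5)     # too close horizontally
assert not meets_separation_rule(12.0, 0.5)    # water line not high enough
```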

Birds, and consequently bird excrement, are probably the biggest concern for storage tanks and reservoirs with floating covers. Sea gulls, for example, can be found roosting at storage facilities. Open reservoirs also offer the opportunity for detrimental changes in water quality because of exposure to the atmosphere or sunlight, such as changes in pH, dissolved oxygen, and algal growth. Even when covered, storage facilities can suffer from algal growth on the tops of floating covers that can gain entry into the tank through rips and tears or missing hatches. Algae can also be airborne or carried by birds and gain entry into storage tanks through open hatches and vents. Algae increase the chlorine demand of the stored water, reduce its oxygen content upon their degradation, affect taste and odor, and in some cases release byproducts. Chemical contaminants gain access to storage facilities via air pollution and surface-water runoff into open storage reservoirs. For example, accidental spills of chemicals during truck transport on highways adjacent to reservoirs are a potential threat, and can be very serious if the chemicals are present in a concentrated form and highly toxic. Surface-water runoff into open reservoirs can also introduce pesticides, herbicides, fertilizers, silt, and humic materials from nearby land. The potential for chemical contamination of storage facilities continues to be overlooked in regulations in comparison to microbial contamination.

Even a water utility with a good program of corrosion control and pipe replacement can experience around 750 to 850 pipe breaks per year.

Hydraulic Integrity

The hydraulic integrity of a water distribution system is defined as its ability to provide a reliable water supply at an acceptable level of service—that is, meeting all demands placed upon the system with provisions for adequate pressure, fire protection, and reliability of uninterrupted supply (Cesario, 1995; AWWA, 2005). Water demand is the driving force for the operation of municipal water systems.

From an infrastructure perspective, a water distribution system is an elaborate conveyance structure in which pumps move water through the system, control valves allow water pressure and flow direction to be regulated, and reservoirs smooth out the effects of fluctuating demands (flow equalization) and provide reserve capacity for fire suppression and other emergencies. All these distribution system components and their operations and complex interactions can produce significant variations in critical hydraulic parameters, such that many opportunities exist for the loss of hydraulic integrity and degradation of service. This, in turn, may lead to serious water quality problems, some of which may threaten public health.

One of the most critical components of hydraulic integrity is the maintenance of adequate pressure, defined in terms of the minimum and maximum design pressure supplied to customers under specific demand conditions. Low pressures, caused for example by failure of a pump or valve, may lead to inadequate supply and reduced fire suppression capability or, in the extreme, intrusion of potentially contaminated water. High pressures will intensify wear on valves and fittings, increase leakage, and may cause additional leaks or breaks, with subsequent repercussions on water quality. High pressures will also increase the external load on water heaters and other fixtures. Pipes and pumps must be sized to overcome the head loss caused by friction at the pipe walls and thus to provide acceptable pressure under specific demands, while sizing of control valves is based on the desired flow conditions, velocity, and pressure differential. A related need is to ensure that pressure fluctuations associated with surge conditions are kept below an acceptable limit. Excessive pressure surges generate high fluid velocity fluctuations and may cause resuspension of settled particles as well as biofilm detachment.
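The friction head loss mentioned above is commonly estimated with the Hazen-Williams formula. A sketch in SI units; the pipe dimensions and roughness coefficient are illustrative assumptions, not values from the report:

```python
def hazen_williams_headloss(flow_m3s: float, length_m: float,
                            diameter_m: float, c_factor: float) -> float:
    """Friction head loss (m) via the Hazen-Williams formula, SI form:
    h_f = 10.67 * L * Q^1.852 / (C^1.852 * D^4.87)"""
    return (10.67 * length_m * flow_m3s ** 1.852
            / (c_factor ** 1.852 * diameter_m ** 4.87))

# Illustrative case: 1 km of 300 mm main carrying 50 L/s, C = 130
h = hazen_williams_headloss(0.05, 1000.0, 0.3, 130.0)
print(f"{h:.1f} m")   # ≈ 1.8 m of head lost to friction over the kilometer
```

Note the strong nonlinearity: doubling the flow multiplies the head loss by 2^1.852 ≈ 3.6, which is why pipe and pump sizing is so sensitive to demand.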
A second element of hydraulic integrity is the reliability of supply, which refers to the ability of the system to maintain the desirable flow rate even when components are out of service (e.g., facility outage, pipe break) and is normally accomplished by providing redundancy in the system. Examples include looping of the pipe network and the development of backup sources to ensure multiple delivery points to all areas.

Pipe Deterioration

Pipe deterioration resulting in leaks or breaks can lead to a loss of hydraulic integrity because adequate pressures can no longer be maintained.


Aging pipe infrastructure and chronic water main breaks are a common problem for many water utilities. Analysis of water industry data showed that, on average, main breaks occur 700 times per day in the United States.

Pressure Transients and Changes in Flow Regime

Rapid changes in pressure and flow caused by events such as rapid valve closures or pump stoppages and hydrant flushing can create pressure surges of excessive magnitude. These transient pressures, which are superimposed on the normal static pressures present in the water line at the time the transient occurs, can strain the system leading to increased leakage and decreased system reliability, equipment failure, and even pipe rupture in extreme cases.
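The magnitude of such a transient can be approximated with the classic Joukowsky relation for an instantaneous velocity change. The wave speed used below is a typical assumed value for water in metallic pipe, not a figure from the report:

```python
def joukowsky_head_rise(delta_v_ms: float, wave_speed_ms: float,
                        g_ms2: float = 9.81) -> float:
    """Surge head rise (m of water) for an instantaneous velocity change:
    delta_h = a * delta_v / g (the Joukowsky equation)."""
    return wave_speed_ms * delta_v_ms / g_ms2

# Assumed scenario: a 1 m/s flow stopped instantly, with a pressure-wave
# speed of ~1200 m/s (typical for water in metal pipe)
surge = joukowsky_head_rise(1.0, 1200.0)
print(f"{surge:.0f} m of head")   # → 122 m (~174 psi) on top of static pressure
```

Even a modest velocity change thus produces a surge far above normal operating pressures, which is why slow valve closure and surge protection devices matter.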

High-flow velocities can remove protective scale and tubercles, which will increase the rate of corrosion. Uncontrolled pump shutdown can lead to the undesirable occurrence of water-column separation, which can result in catastrophic pipeline failures due to severe pressure rises following the collapse of the vapor cavities.

Vacuum conditions can create high stresses and strains that are much greater than those occurring during normal operating regimes. They can cause the collapse of thin-walled pipes or reinforced concrete sections, particularly if these sections were not designed to withstand such strains. In less drastic cases, strong pressure surges may cause cracks in internal lining, damage connections between pipe sections, and destroy or cause deformation to equipment such as pipeline valves, air valves, or other surge protection devices. Sometimes the damage is not realized at the time, but may cause the pipeline to collapse in the future, especially if combined with repeated transients. Transient pressure and flow regimes are inevitable. All systems will, at some time, be started up, switched off, or undergo rapid flow changes such as those caused by hydrant flushing, and they will likely experience the effects of human errors, equipment breakdowns, earthquakes, or other risky disturbances

Gullick et al. (2004) studied intrusion occurrences in distribution systems and observed 15 surge events that resulted in a negative pressure. Most were caused by the sudden shutdown of pumps at a pump station because of either unintentional (e.g., power outages) or intentional (e.g., pump stoppage or startup tests) circumstances. Friedman et al. (2004) confirmed that negative pressure transients can occur in the distribution system and that the intruded water can travel downstream from the site of entry. Locations with the highest potential for intrusion were sites experiencing leaks and breaks, areas of high water table, and flooded air-vacuum valve vaults.

Examples of emergency situations include earthquakes, hurricanes, power failures, equipment failures, or transmission main failures. All these activities can result in a reduction in system capacity and supply pressure and changes to the flow paths of water within the distribution system.

Another function of SCADA is the ability to monitor and remotely control local conditions of water system components based on any desired range of operating conditions or set points. For example, a pump can be set to turn on or off automatically when the pressure at a critical location or the water level in a reservoir drops to a specified lower limit or goes above a specified upper limit. Alarms can be set to alert operators when a fault within the system equipment (e.g., equipment operating out of its normal range or an overheating pump) or any breach in the system’s hydraulic integrity is detected. For example, extreme fluctuations in pressure and flow readings could result from pressure surges generated by a power failure at a pump station. SCADA could then divert water to the affected region from a different pump station, thus ensuring adequate supply and fire flow protection.
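The set-point behavior described here amounts to on/off control with a dead band (hysteresis). A minimal sketch, with tank-level set points chosen arbitrarily for illustration:

```python
def pump_command(level_m: float, pump_on: bool,
                 low_set: float = 2.0, high_set: float = 4.5) -> bool:
    """Return the new pump state for a reservoir-level reading. The dead
    band between the two set points prevents rapid on/off cycling."""
    if level_m <= low_set:
        return True           # level at/below lower set point: start pump
    if level_m >= high_set:
        return False          # level at/above upper set point: stop pump
    return pump_on            # inside the dead band: hold current state

# A falling level starts the pump; it stays on until the upper set point
state = False
for level in (3.0, 2.0, 3.5, 4.5):
    state = pump_command(level, state)
assert state is False         # pump stopped once the tank refilled to 4.5 m
```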

SCADA systems also contain pertinent system operational information required for water distribution network modeling (Cesario, 1995), such as the boundary conditions (e.g., tank water levels, valve and pump statuses and settings) for the network model as well as local flow and pressure conditions.

Water Quality Integrity

As discussed in Chapters 4 and 5, breaches in physical and hydraulic integrity can lead to the influx of contaminants across pipe walls, through breaks, and via cross connections. These external contamination events can act as a source of inoculum, introduce nutrients and sediments, or decrease disinfectant concentrations within the distribution system, resulting in a degradation of water quality. Even in the absence of external contamination, however, there are situations where water quality is degraded due to transformations that take place within piping, tanks, and premise plumbing. Most measurements of water quality taken within the distribution system cannot differentiate between the deterioration caused by externally vs. internally derived sources.

An obvious risk to public health from distribution system biofilms is the release of pathogenic bacteria. As discussed in Chapter 3, there are instances where opportunistic pathogens have been detected in biofilms, including Legionella, Aeromonas spp., and Mycobacterium spp. Assessing risk from these organisms in biofilms is complicated by the potential for two modes of transmission. Aeromonas spp. cause disease by ingestion, while the other two organisms cause the most severe forms of disease after inhalation.

Coliform Bacteria. Total coliform bacteria (a subset of Gram-negative bacteria) are used primarily as a measure of water treatment effectiveness and can occasionally be found in distribution systems. The origins of total coliform bacteria include untreated surface water and groundwater, vegetation, soils, insects, and animal and human fecal material. Typical coliform bacteria found in drinking water systems include Klebsiella pneumoniae, Enterobacter aerogenes, Enterobacter cloacae, and Citrobacter freundii. Other typical species and genera are shown in Table 3-2. Although most coliforms are not pathogenic, they can indicate the potential presence of fecal pathogens and thus, in the absence of more specific data, may be used as a surrogate measure of public health risk. Indeed, the presence of coliforms in the distribution system is usually interpreted to indicate an external contamination event, such as injured organism passage through treatment barriers or introduction via water line breaks, cross connections, or uncovered or poorly maintained finished water storage facilities. However, biofilms within distribution systems can support the growth and release of coliforms even when physical integrity (i.e., no breaches in the treatment plant or distribution system) and disinfectant residual have been maintained, such that their presence may not necessarily indicate a recent external contamination event. Coliform regrowth in the distribution system is more likely during the summer months, when temperatures are closer to the optimum growth temperatures of these bacteria. Thermotolerant coliforms (capable of growth at 44.5°C), also termed “fecal coliforms,” have a higher association with fecal pollution than total coliforms. And Escherichia coli is considered to be even more directly related to fecal pollution, as it is commonly found in the intestinal tract of warm-blooded animals.


[Also of interest is TABLE 8-1 Characteristics of U.S. Public and Private Transmission Systems but I don’t have the time to add it to this post ]


Military Threats: Peak oil, population, climate change, pandemics, economic crises, cyberattacks, failed states, nuclear war

[ The military is realistic about the challenges the world faces and often presents them with far more clarity than you’ll find from any other government institution. In 2010, the military took a look at some of the world’s toughest issues in a report, “The Joint Operating Environment”. To save you the time of reading this 76 page document I’ve condensed it to the main points of this joint effort of all the military branches. Alice Friedemann  www.energyskeptic.com ]

Every military force in history that has successfully adapted to the changing character of war and the evolving threats it faced did so by sharply defining the operational problems it had to solve.

The enemy’s capabilities will range from explosive vests worn by suicide bombers to long-range precision-guided cyber, space, and missile attacks. The threat of mass destruction – from nuclear, biological, and chemical weapons – will likely expand from stable nation states to less stable states and even non-state networks.

Today’s interlocking trading and communications networks may tempt leaders to consider once again that war is, if not impossible, then at least obsolete. Accordingly, any future war would cost so much in lives and treasure that no “rational” political leader would pursue it.

The problem is that rationality is often a matter of perspective – in the cultural, political, and ideological eye of the beholder. For what must have seemed perfectly rational reasons, Saddam Hussein invaded two of Iraq’s six neighbors in the space of less than ten years and sparked three wars during the period he ruled.

The real danger in a globalized world, where even the poorest have access to pictures and media portrayals of the developed world, lies in a reversal or halt to global prosperity. Such a possibility would lead individuals and nations to scramble for a greater share of shrinking wealth and resources, as they did in the 1930s with the rise of Nazi Germany in Europe and Japan’s “co-prosperity sphere” in Asia.

The long-term strategic consequences of the current financial crises are likely to be significant. Over the next several years a new international financial order will likely arise that will redefine the rules and institutions that underpin the functioning, order, and stability of the global economy. There is one new watchword that will continue to define the global environment for the immediate future: “interconnectedness.” Until a new structure emerges, strategists will have to prepare to work in an environment where the global economic picture can change suddenly, and where even minor events can cause a cascading series of unforeseen consequences.

Large exporting nations accept U.S. dollars for their goods and use them both to build foreign exchange reserves and to purchase U.S. treasuries (which then finance ongoing U.S. federal operations). The dollar’s “extraordinary privilege” as the primary unit of international trade allows the U.S. to borrow at relatively low rates of interest. However, the emerging scale of U.S. Government borrowing creates uncertainty about both our ability to repay the ever growing debt and the future value of the dollar. Moreover, “any sudden stop in lending…would drive the dollar down, push inflation and interest rates up, and perhaps bring on a hard landing for the United States…”

The precise nature of a “hard landing” of this sort is difficult to predict should creditor nations such as China demand higher interest rates, increasing the perception that the U.S. no longer controls its own financial fate. This dynamic could encourage the establishment of new reserve currencies as global economic actors search for alternatives to the dollar. Changing conditions in the global economy could likewise have important implications for global security also, including a decreased ability of the United States to allocate resources for defense purposes, less purchasing power for available dollars, and shifting power relationships around the world in ways unfavorable to global stability. Domestically, the future of the U.S. financial picture in both the short and long term is one of chronic budget deficits and compounding debt.


Although these fiscal imbalances have been severely aggravated by the recent financial crisis and attendant global economic downturn, the financial picture has long-term components which indicate that even a return to relatively high levels of economic growth will not be enough to set it right. The near collapse of financial markets and slow or negative economic activity has seen U.S. Government outlays grow in order to support troubled banks and financial institutions, and to cushion the wider population from the worst effects of the slowdown. These unfunded liabilities are a reflection of an aging U.S. Baby-Boom population increasing the number of those receiving social program benefits, primarily Social Security, Medicare, and Medicaid, versus the underlying working population that pays to support these programs.

The foregoing issues of trade imbalance and government debt have historic precedents that bode ill for future force planners. Habsburg Spain defaulted on its debt some 14 times in 150 years and was staggered by high inflation until its overseas empire collapsed. Bourbon France became so beset by debt due to its many wars and extravagances that by 1788 the contributing social stresses resulted in its overthrow by revolution. Interest ate up 44% of the British Government budget during the interwar years 1919-1939, inhibiting its ability to rearm against a resurgent Germany.

Unless current trends are reversed, the U.S. will face similar challenges, anticipating an ever-growing percentage of the U.S. government budget going to pay interest on the money borrowed to finance our deficit spending. Rising debt and deficit financing of government operations will require ever-larger portions of government outlays for interest payments to service the debt. Indeed, if current trends continue, the U.S. will be transferring approximately seven percent of its total economic output abroad simply to service its foreign debt.

Interest payments are projected to grow dramatically, further exacerbated by recent efforts to stabilize and stimulate the economy, far outstripping the current tax base shown by the black line. Interest payments, when combined with the growth of Social Security and health care, will crowd out spending for everything else the government does, including National Defense.

U.S. Defense Spending: The “Hidden Export”

The global trade and finance illustration on page 19 overlooks one large “export” that the United States provides to the world – the armed force that underpins the open and accessible global system of trade and travel that we know as “globalization.” At a cost of 600 billion dollars a year, U.S. Joint Forces around the world provide safety and security for the major exporters to access and use the global commons for trade and commerce.

A more immediate implication of these twin deficits will likely mean far fewer dollars available to spend on defense. In 1962 defense spending accounted for some 49% of total government expenditures, but by 2008 had dropped to 20% of total government spending. Following current trend lines, by 2028 the defense budget will likely consume between 2.6 percent and 3.1 percent of GDP – significantly lower than the 1990s average of 3.8%. Indeed, the Department of Defense may shrink to less than ten percent of the total Federal budget.

For over six decades the U.S. has underwritten the “hidden export” of global security for the great trading nations of the world, yet global and domestic pressures will dramatically impact the defense budget in the face of rising debt and trade imbalances. This may diminish a service that is of great benefit to the international community. In this world, new security exporters may rise, each having opinions and objectives that differ from the global norms and conventions that we have encouraged since our own emergence as a great power a century ago. Moreover, they will increasingly have the power to underwrite their own not-so-hidden export of military power. Unless we address these new fiscal realities we will be unable to engage in this contest on terms favorable to our nation.


To meet even the conservative growth rates posited in the economics section, global energy production would need to rise by 1.3% per year. By the 2030s, demand is estimated to be nearly 50% greater than today. To meet that demand, even assuming more effective conservation measures, the world would need to add roughly the equivalent of Saudi Arabia’s current energy production every seven years. Absent a major increase in the relative reliance on alternative energy sources (which would require vast insertions of capital, dramatic changes in technology, and altered political attitudes toward nuclear energy), oil and coal will continue to drive the energy train. By the 2030s, oil requirements could go from 86 to 118 million barrels a day (MBD). Although the use of coal may decline in the Organization for Economic Cooperation and Development (OECD) countries, it will more than double in developing nations. Fossil fuels will still make up 80% of the energy mix in the 2030s, with oil and gas comprising upwards of 60%. The central problem for the coming decade will not be a lack of petroleum reserves, but rather a shortage of drilling platforms, engineers and refining capacity.
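The growth figures in the paragraph above are straightforward compound arithmetic, sketched here to make the reasoning explicit. Reading “the 2030s” as roughly 30 years out from the report’s baseline is an assumption for illustration.

```python
# Hedged check of the compound-growth claim: 1.3% annual growth in
# global energy production, read over roughly 30 years.
def compound_demand(rate: float, years: int) -> float:
    """Total demand multiple after `years` of compound annual growth at `rate`."""
    return (1 + rate) ** years

multiple = compound_demand(0.013, 30)
print(f"Demand multiple after 30 years at 1.3%/yr: {multiple:.2f}")  # ~1.47
# ~1.47x is consistent with the text's "nearly 50% greater than today";
# the oil figures alone (86 -> 118 MBD) imply a similar ~1.37x rise.
```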

Peak Oil

Petroleum must continue to satisfy most of the demand for energy out to 2030.

OPEC: To meet climbing global requirements, OPEC will have to increase its output from 30 MBD to at least 50 MBD. Significantly, no OPEC nation, except perhaps Saudi Arabia, is investing sufficient sums in new technologies and recovery methods to achieve such growth. Some, like Venezuela and Russia, are actually exhausting their fields to cash in on the bonanza created by rapidly rising oil prices.

The Chinese are laying down approximately 1,000 kilometers of four-lane highway every year, a figure suggestive of how many more vehicles they expect to possess, with the concomitant rise in their demand for oil. The presence of Chinese “civilians” in the Sudan to guard oil pipelines underlines China’s concern for protecting its oil supplies and could portend a future in which other states intervene in Africa to protect scarce resources. The implications for future conflict are ominous, if energy supplies cannot keep up with demand and should states see the need to militarily secure dwindling energy resources.

Another potential effect of an energy crunch could be a prolonged U.S. recession which could lead to deep cuts in defense spending (as happened during the Great Depression). Joint Force commanders could then find their capabilities diminished at the moment they may have to undertake increasingly dangerous missions. Should that happen, adaptability would require more than preparations to fight the enemies of the United States, but also the willingness to recognize and acknowledge the limitations of America’s military forces. The pooling of U.S. resources and capabilities with allies would then become even more critical. Coalition operations would become essential to protecting national interests.

OPEC and Energy Resources

OPEC nations will remain a focal point of great-power interest. These nations may have a vested interest in inhibiting production increases, both to conserve finite supplies and to keep prices high.

Should one of the consumer nations choose to intervene forcefully, the “arc of instability” running from North Africa through to Southeast Asia easily could become an “arc of chaos,” involving the military forces of several nations.

OPEC nations will find it difficult to invest much of the cash inflows that oil exports bring. While they will invest substantial portions of such assets globally through sovereign wealth funds – investments that come with their own political and strategic difficulties – past track records, coupled with their appraisal of their own military weaknesses, suggest the possibility of a military buildup. With the cost of precision weapons expected to decrease and their availability increasing, Joint Force commanders could find themselves operating in environments where even small, energy-rich opponents have military forces with advanced technological capabilities. These could include advanced cyber, robotic, and even anti-space systems. Finally, presuming the forces propelling radical extremism at present do not dissipate, a portion of OPEC’s windfall might well find its way into terrorist coffers, or into the hands of movements with deeply anti-modern, anti-Western goals – movements which have at their disposal increasing numbers of unemployed young men eager to attack their perceived enemies.

World Oil Chokepoints (U.S. Department of Energy, Energy Information Administration)

  • Strait of Hormuz 17 MBD
  • Strait of Malacca 15 MBD
  • Suez Canal / SUMED Pipeline 4.5 MBD
  • Bab-el-Mandeb 3.3 MBD
  • Turkish Straits 2.4 MBD
  • Baku-Tbilisi-Ceyhan Pipeline 1 MBD
  • Panama Canal 0.5 MBD
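Summing the listed volumes (spellings normalized) puts the chokepoints in context against the ~86 MBD of current oil demand cited earlier in this document. One caveat: the volumes overlap, since a single cargo can transit more than one chokepoint, so the total overstates unique flows.

```python
# Tally of the chokepoint volumes listed above, in million barrels per day (MBD).
chokepoints_mbd = {
    "Strait of Hormuz": 17.0,
    "Strait of Malacca": 15.0,
    "Suez Canal / SUMED Pipeline": 4.5,
    "Bab-el-Mandeb": 3.3,
    "Turkish Straits": 2.4,
    "Baku-Tbilisi-Ceyhan Pipeline": 1.0,
    "Panama Canal": 0.5,
}
total = sum(chokepoints_mbd.values())  # 43.7 MBD
share = total / 86.0                   # against the ~86 MBD demand cited earlier
print(f"{total:.1f} MBD transits these chokepoints, ~{share:.0%} of global demand")
```

Even allowing for double-counting, a volume on the order of half of daily global demand passes through a handful of narrow waterways, which is what makes them strategic.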

A severe energy crunch is inevitable without a massive expansion of production and refining capacity. While it is difficult to predict precisely what economic, political, and strategic effects such a shortfall might produce, it surely would reduce the prospects for growth in both the developing and developed worlds. Such an economic slowdown would exacerbate other unresolved tensions, push fragile and failing states further down the path toward collapse, and perhaps have serious economic impact on both China and India. At best, it would lead to periods of harsh economic adjustment.

One should not forget that the Great Depression spawned a number of totalitarian regimes that sought economic prosperity for their nations by ruthless conquest.

During the next 25 years, coal, oil, and natural gas will remain indispensable to meet energy requirements. The discovery rate for new petroleum and gas fields over the past two decades (with the possible exception of Brazil) provides little reason for optimism that future efforts will find major new fields.

By 2012, surplus oil production capacity could entirely disappear, and as early as 2015, the shortfall in output could reach nearly 10 MBD.

Naturally occurring disease also has an impact on the world’s food supply. The Irish Potato Famine was not an exceptional historical event. As recently as 1954, 40% of America’s wheat crop failed as a result of black stem rust. There are reports of a new, aggressive strain of this disease (Ug99) spreading across Africa and possibly reaching Pakistan. Blights threatening basic food crops such as potatoes and corn would have destabilizing effects on nations close to the subsistence level. Food crises have led in the past to famine, internal and external conflicts, the collapse of governing authority, migrations, and social disorder. In such cases, many people in the crisis zone may be well-armed and dangerous, making the task of the Joint Force in providing relief that much more difficult. In a society confronted with starvation, food becomes a weapon every bit as important as ammunition.

Fish stocks have long been an important natural resource for nations with significant fishing fleets, and competition for access to them has often resulted in naval conflict. Conflicts have erupted as recently as the Cod War (1975) between Britain and Iceland and the Turbot War (1995) between Canada and Spain. In 1996, Japan and Korea engaged in a naval standoff over rocky outcroppings that would establish extended fishing rights in the Sea of Japan. These conflicts saw open hostilities between the naval forces of these states, including the use of warships and coastal protection vessels to ram and board fishing boats. Over-fishing, depletion of fisheries, and competition over those that remain have the potential to cause serious confrontations in the future.


As we approach the 2030s, the world’s clean water supply will be increasingly at risk. Growing populations and increasing pollution, especially in developing nations, are likely to make water shortages more acute. Most estimates indicate that nearly 3 billion people (40% of the world’s population) will experience water stress or scarcity.

Absent new technology, water scarcity and contamination have human and economic costs that are likely to prevent developing nations from making significant progress in economic growth and poverty reduction.

The unreliability of an assured supply of rainwater has forced farmers to turn more to groundwater in many areas. As a result, aquifer levels are declining at rates between one and three meters per year. The impact of such declines on agricultural production could be profound, especially since aquifers, once drained, may not refill for centuries. Glacial runoff is also an important source of water for many countries. The great rivers of Southeast Asia, for example, flow through India, Pakistan, China, Nepal, Thailand, and Burma, and are fed largely from glacial meltwaters in the Himalaya Range. Construction of dams at the headwaters of these rivers may constrict the flow of water downstream, increasing the risk of water-related population stresses, cross-border tension, migration and agricultural failures for perhaps a billion people who rely on them.

One should not minimize the prospect of wars over water. In 1967, Jordanian and Syrian efforts to dam the Jordan River were a contributing cause of the Six-Day War between Israel and its neighbors. Today, Turkish dams on the upper Euphrates and Tigris Rivers, the sources of water for the Mesopotamian basin, pose similar problems for Syria and Iraq. Turkish diversion of water to irrigate mountain valleys in eastern Turkey already reduces flows downstream. Even though localized, conflicts sparked by water scarcity could easily destabilize whole regions. The continuing crisis in Sudan’s Darfur region is an example of what could happen on a wider scale between now and the 2030s. Indeed, it is precisely along other potential conflict fault lines that crises involving water scarcity are most likely.

Were joint forces called on to intervene in a catastrophic water crisis, they might well confront chaos, with collapsing or impotent social networks and governmental services. Anarchy could prevail, with armed groups controlling or warring over remaining water, while the specter of disease resulting from unsanitary conditions would hover in the background.

The latter is only one potential manifestation of a larger problem. Beyond the problems of water scarcity will be those associated with water pollution, whether from uncontrolled industrialization, as in China, or from the human sewage expelled by the mega-cities and slums of the world. The dumping of vast amounts of waste into the world’s rivers and oceans threatens the health and welfare of large portions of the human race, to say nothing of the affected ecosystems. While joint forces rarely will have to address pollution problems directly, any operations in polluted urban areas will carry considerable risk of disease. Indeed, it is precisely in such areas that new and deadly pathogens are most likely to arise. Hence, commanders may be unable to avoid dealing with the consequences of chronic water pollution.


The impact of climate change, specifically global warming and its potential to cause natural disasters and other harmful phenomena such as rising sea levels, has become a concern.

Shrinking sea ice opens new areas for natural resource exploitation, but may raise tensions between Arctic nations over the demarcation of exclusive economic zones and between Arctic nations and maritime states over the designation of important new waterways as international straits or internal waters.

Global sea levels have been on the rise for the past 100 years. Some one-fifth of the world’s population as well as one-sixth of the land area of the world’s largest urban areas are located in coastal zones less than ten meters above sea level.

Furthermore, populations in these coastal areas are growing faster than national averages. In places such as China and Bangladesh, this growth is twice that of the national average. Should global sea levels continue to rise at current rates, these areas will see more extensive flooding and increased saltwater intrusion into coastal aquifers upon which coastal populations rely, compounding the impact of increasing shortages of fresh water. Additionally, local population pressures will increase as people move away from inundated areas and settle farther up-country. In this regard, tsunamis, typhoons, hurricanes, tornadoes, earthquakes and other natural catastrophes have been and will continue to be a concern of Joint Force commanders. In particular, where natural disasters collide with growing urban sprawl, widespread human misery could be the final straw that breaks the back of a weak state.

If such a catastrophe occurs within the United States itself – particularly when the nation’s economy is in a fragile state or where U.S. military bases or key civilian infrastructure are broadly affected – the damage to U.S. security could be considerable. Areas of the U.S. where the potential is great to suffer large-scale effects from these natural disasters are the hurricane-prone areas of the Gulf and Atlantic coasts, and the earthquake zones on the west coast and along the New Madrid fault.


One of the fears haunting policy makers is the appearance of a pathogen, either manmade or natural, able to devastate mankind, as the “Black Death” did in the Middle East and Europe in the middle of the Fourteenth Century. In the space of a few years, approximately a third of Europe’s population died.

The crucial element in any response to a pandemic may be the political will to impose quarantine.

A repetition of the 1918 influenza pandemic, which led to the deaths of millions world-wide, would have the most serious consequences for the United States and the world politically as well as socially. The dangers posed by the natural emergence of a disease capable of launching a global pandemic are serious enough, but the possibility exists also that a terrorist organization might acquire a dangerous pathogen.

The deliberate release of a deadly pathogen, especially one genetically engineered to increase its lethality or virulence, would present greater challenges than a naturally occurring disease like SARS. While the latter is likely to have a single point of origin, terrorists could seek to release a pathogen at several different locations in order to increase the rate of transmission across a population, seriously complicating the medical challenge of bringing the disease under control.

The implications for the Joint Force of a pandemic as widespread and dangerous as that of 1918 would be profound. American and global medical capabilities would soon find themselves overwhelmed. If the outbreak spreads to the United States, the Joint Force might have to conduct relief operations in support of civil authorities that, consistent with meeting legal prerequisites, could go beyond assisting in law enforcement and maintaining order. Even as Joint Force commanders confronted an array of missions, they would also have to take severe measures to preserve the health of their forces and protect medical personnel and facilities from public panic and dislocations. Thucydides captured the moral, political, and psychological dangers that a global pandemic would cause in his description of the plague’s impact on Athens: “For the catastrophe was so overwhelming that men, not knowing what would happen next to them, became indifferent to every rule of religion or of law.”


Timely high-resolution imagery of much of the globe is already available. This has empowered not only states, but also citizens, who have, for example, used such imagery to identify hundreds of facilities throughout North Korea. As a result, the future Joint Force commander will not be able to assume that his deployments and operations will remain hidden; rather, they will be exposed to the scrutiny of both adversaries and bystanders.

China’s 2007 successful test of a direct-ascent anti-satellite weapon sent shock waves throughout the international community and created tens of thousands of pieces of space debris. Then in 2009, a commercial telecommunications satellite was destroyed in a collision with a defunct Russian satellite, raising further concerns about the vulnerability of low-earth orbit systems. These events and others highlight the need to protect and operate our space systems in an increasingly contested and congested orbital environment. The relative vulnerability of space assets plus our heavy reliance on them could provide an attractive target for a potential adversary.

Cooperation & competition among conventional powers

The great question confronting Europe is whether some impending threat – an aggressive and expansionist hegemon, competition for resources, the internal stress of immigration, or violent extremism – will inspire them to raise a larger armed force to preserve their security. It is also conceivable that combinations of regional powers with sophisticated capabilities could band together to form a powerful anti-American alliance. It is not hard to imagine an alliance of small, cash-rich countries arming themselves with high-performance long-range precision weapons. Such a group could not only deny U.S. forces access into their countries, but could also prevent American access to the global commons at significant ranges from their borders.

Deng Xiaoping’s advice for China to “disguise its ambition and hide its claws” may represent as forthright a statement as the Chinese can provide. What does appear relatively clear is that the Chinese are thinking in the long term regarding their strategic course. Rather than emphasize the future strictly in military terms, they seem willing to see how their economic and political relations with the United States develop, while calculating that eventually their growing strength will allow them to dominate Asia and the Western Pacific.

The Chinese are interested in the strategic and military thinking of the United States. In the year 2000, the PLA had more students in America’s graduate schools than the U.S. military, giving the Chinese a growing understanding of America and its military. As a potential future military competitor, China would represent a most serious threat to the United States, because the Chinese could understand America and its strengths and weaknesses far better than Americans understand the Chinese. This emphasis is not surprising, given Sun Tzu’s famous aphorism: “Know the enemy and know yourself; in a hundred battles you will never be in peril. When you are ignorant of the enemy, but know yourself, your chances of winning or losing are equal. If ignorant both of your enemy and of yourself, you are certain in every battle to be in peril.”

The Chinese are working hard to ensure that if there is a military confrontation with the United States sometime in the future, they will be ready. Chinese discussions exhibit a deep respect for U.S. military power. There is a sense that in certain areas, such as submarine warfare, space, and cyber warfare, China can compete on a near equal footing with America. Indeed, competing in these areas in particular seems to be a primary goal in their force development. One does not devote the significant national treasure required to build nuclear submarines for coastal defense. The emphasis on nuclear submarines, and on an increasingly global navy in particular, underlines worries that the U.S. Navy possesses the ability to shut down China’s energy imports of oil, 80% of which pass through the Strait of Malacca. As one Chinese naval strategist expressed it: “the straits of Malacca are akin to breathing – to life itself.”

With the collapse of the Soviet Union, Russia lost the lands and territories it had controlled for the better part of three centuries. Not only did the collapse destroy the economic structure that the Soviets created, but the weak democratic successor regime proved incapable of controlling the criminal gangs or creating a functioning economy.

Since 2000, Russia has displayed a considerable recovery based on Vladimir Putin’s reconstitution of rule by the security services – a move most Russians have welcomed – and on the influx of foreign exchange from Russia’s production of petroleum and natural gas. How the Russian government spends this revenue over the long term will play a significant role in the kind of state that emerges. The nature of the current Russian regime itself is also of concern. To a considerable extent, its leaders have emerged from the old KGB, suggesting a strategic perspective that bears watching. At present, Russian leaders appear to have chosen to maximize petroleum revenues without making the investments in oil fields that would increase oil and gas production over the long term. With its riches in oil and gas, Russia is in a position to modernize and repair its ancient and dilapidated infrastructure and to improve the welfare of its long suffering people. Nevertheless, the current leadership has displayed little interest in such a course. Instead, it has placed its emphasis on Russia’s great power status. For all its current riches, the brilliance of Moscow’s resurgence, and the trappings of military power, Russia cannot hide the conditions of the remainder of the country. The life expectancy of Russia’s male population, 59 years, is 148th in the world and places the country somewhere between East Timor and Haiti.

In the Caucasus region, especially Georgia and its Abkhazian and South Ossetian provinces, Russia has provided direct support to separatists. In other cases, such as the conflict between Armenia and Azerbaijan or in the Trans-Dnestrian region of Moldova, Russia provides indirect support to keep these conflicts simmering. These conflicts further impoverish the areas involved. They lie astride new and vulnerable routes to the oil of the Caspian Basin and beyond. They encourage corruption and organized crime, and a disregard for legal order and national sovereignty in a critical part of the world. In the future, they could complicate the establishment of frameworks for regional order and create a new “frontier of instability” around Russia. Indeed, while many of its European neighbors have almost completely disarmed, Russia has begun a military buildup. Since 2001, the Russians have quadrupled their military budget, with increases of over 20% per annum over the past several years. In 2007, the Russian parliament, with Putin’s enthusiastic support, approved even greater military appropriations through 2015. Russia cannot recreate the military machine of the old Soviet Union, but it may be attempting to make up for demographic and conventional military inferiority by modernizing. Russia’s failure to diversify its economy beyond oil and natural gas, together with its accelerating demographic collapse, will create a Russia of greatly decreased political, economic, and military power by the 2020s. One of the potential Russias that could emerge in coming decades is one focused on regaining its former provinces in the name of “freeing” the Russian minorities in those border states from the ill-treatment they are supposedly receiving.

The Pacific and Indian Oceans

The rim of the great Asian continent is home to a number of states with significant nuclear potential. China and Russia are members of the Nuclear Nonproliferation Treaty (NPT) and have significant nuclear arsenals at their command. India and Pakistan have demonstrated the capacity to detonate nuclear devices, possess the means to deliver them, and are not party to the NPT, while others such as North Korea and Iran are pursuing nuclear weapons technology (and the means to deliver them) as well. Several friends or Allies of the United States, such as Japan and South Korea are highly advanced technological states and could quickly build nuclear devices if they chose to do so.

While the region appears stable on the surface, political clefts exist. There are few signs that these divisions, which have deep historical, cultural, and religious roots, will be mitigated. Not all of Asia’s borders are settled. China, Japan and Russia have simmering territorial disputes over maritime boundaries, while demographic and natural resource pressures across the Siberia/Manchuria border have significant implications for Moscow’s control of its far east. If one includes the breakup of the British Raj in 1947-1948, India and Pakistan have fought three brutal wars, while a simmering conflict over the status of Kashmir continues to poison relations between the two powers.

The Vietnamese and the Chinese have a long record of antipathy, which broke out into heavy fighting in the late 1970s, and China’s claims that Taiwan is a province of the mainland and that scattered islands as far afield as Malaysia are Chinese territory obviously represent a set of troublesome flashpoints. The continuing dispute between India and Pakistan over Kashmir may be the most dangerous disagreement that exists between two nuclear-armed powers. The Chinese have backed up their claims to the Spratlys, which Vietnam and the Philippines also claim, with force.

How contested claims in this important sea lane are resolved will have important implications for trade routes, for energy exploration, and for the growing naval competition in the area. The Kurile Islands, occupied by the Soviets at the end of World War II, remain a contentious issue between Russia and Japan. The uninhabited islands south of Okinawa are in dispute between Japan and China, both drawn to the area by the possibility of oil. Much of the Yellow Sea remains in dispute between the Koreas, Japan, and China, again because of its potential for oil. The Strait of Malacca represents the most important transit point for world commerce, the closure of which for even a relatively short period of time would have a devastating impact on the global economy.

There is at present a subtle but sustained military buildup throughout the region. India could more than quadruple its wealth over the course of the next two decades, but large swaths of its population will likely remain in poverty through the 2030s. Like China, India will face tensions between the rich and the poor. Such tension, added to the divides among its religions and nationalities, could continue to have implications for economic growth and national security. Nevertheless, its military will receive substantial upgrades in the coming years. That fact, combined with its proud martial traditions and strategic location in the Indian Ocean, will make India the dominant player in South Asia and the Middle East. Like India, China and Japan are also investing heavily in military force modernization, particularly with an emphasis on naval forces that can challenge their neighbors for dominance in the seas surrounding the East and South Asian periphery. The buildup of navies by the powers in the region has significant implications for how the United States develops its strategy as well as for the deployment of its naval forces.

India sits on the rim of an ocean pivotal to U.S. interests, and possesses a navy larger than any other in the region. It borders a troubled Pakistan, a growing China, is in a neighborhood at high risk of nuclear proliferation, is a common target for radical ideological groups using terrorist tactics, and sits astride key sea lanes linking East Asia to the oil fields of the Middle East. An estimated 700 million Indians still earn under $2 a day.

From a security standpoint, the NATO alliance will have the potential to field substantial, world-class military forces and project them far beyond the boundaries of the continent, but this currently seems a relatively unlikely possibility, given demographic shifts between native-born Europeans and immigrants from the Middle East and Southwest Asia. Europe is undergoing a major cultural transformation, making it less willing to project military power into likely areas of conflict. Perhaps this will change with the recognition of a perceived threat. The next 25 years will provide two good candidates: Russia and continued terrorism fueled by global violent extremism.

The Baltic and Eastern European regions will likely remain flashpoints as a number of historical issues, such as ethnicity or the location of national boundaries, which have led to conflict in the past, continue to simmer under the surface. Russian efforts to construct a gas pipeline to Western Europe under the Baltic Sea, rather than a less costly land route through Eastern Europe, suggest a deliberate aim to separate the Central and Western European NATO countries from the Baltic and Eastern European members of NATO. Continued terrorist attacks in Europe might also spark a popular passion for investing in military forces.

Should violent extremists persist in using this tactic to attack the European continent with increasing frequency and intensity, there might be a response that includes addressing this threat on a global scale rather than as an internal security problem.

Central and South America

Military challenges in South America and Central America will likely arise from within states, rather than between them. Many internal stresses will continue to challenge the continent, particularly drug cartels and criminal gangs, while terrorist organizations will continue to find a home in some of the continent’s lawless border regions.

The power of criminal gangs fueled by drug money may be the primary impediment to economic growth, social progress, and perhaps even political stability and legitimacy in portions of Latin America. The cartels work to undermine and corrupt the state, bending security and legal structures to their will, while distorting and damaging the overall economic potential of the region. That criminal organizations and cartels are capable of leveraging expensive technologies to smuggle illicit drugs across national borders serves to illustrate the formidable resources that these groups can bring to bear. Taking advantage of open trade and finance regimes and global communications technologies, these groups attempt to carve out spaces free from government control and present a real threat to the national security interests of our friends and allies in the Western Hemisphere.

The assault by the drug cartels on the Mexican government and its authority over the past several years has also recently come into focus, and reminds one how critical stability in Mexico is for the security of the United States and indeed the entire region. Mexico has the 14th largest economy on Earth, significant natural resources, a growing industrial base, and nearly free access to the biggest export market in the world immediately to its north.

In addition to conventional bank transfers, syndicates import between $8 billion and $10 billion in bulk cash each year. As traditional land routes for smuggling drugs into the U.S. have been shut down, most of the country has seen an increase in drug prices and a decrease in drug purity. But, as in any conflict, the enemy has adapted, and maritime routes have now become critical to smugglers.

As Mexico becomes more successful against the cartels, the drug trade will expand into a greater regional problem, so a holistic approach is needed. The economics are shifting as well: the United Kingdom and Spain are now the most lucrative markets, and the problem is spilling into Japan, Russia, and China.

Unless Venezuela’s current regime changes direction, it could use its oil wealth to subvert its neighbors for an extended period while pursuing anti-American activities on a global scale with the likes of Iran, Russia, and China, in effect creating opportunities to form anti-American coalitions in the region.

Sub-Saharan Africa presents a unique set of challenges, including economic, social, and demographic factors, often exacerbated by bad governance, interference by external powers, and health crises such as Acquired Immune Deficiency Syndrome (AIDS). Even pockets of economic growth are under pressure and may regress as multiple problems challenge government to build the capacity to respond. Some progress in the region may occur, but it is almost certain that many of these nations will remain on any list of the poorest nations on the globe. Exacerbating their difficulties will be the fact that the national borders, drawn by the colonial powers in the Nineteenth Century, bear little relation to tribal and linguistic realities. The region is endowed with a great wealth of natural resources, a fact which has already attracted the attention of several powerful states. This could represent a welcome development because in its wake might follow foreign expertise and investment for a region in dire need of both. The importance of the region’s resources will ensure that the great powers maintain a vested interest in the region’s stability and development.

Relatively weak African states will be very hard-pressed to resist pressure by powerful state and non-state actors who embark on a course of interference. This possibility is reminiscent of the late Nineteenth Century, when the pursuit of resources and areas of interest by the developed world disturbed the affairs of weak and poverty-stricken regions.

Based on current evidence, a principal nexus of conflict will continue to be the region from Morocco to Pakistan through to Central Asia. Across this part of the globe a number of historical, dormant conflicts between states and nations over borders, territories, and water rights exist, especially in Central Asia and the Caucasus. Radical extremists will present the first and most obvious challenge. The issue here is not terrorism per se, because terrorism is merely a tactic by which those who lack the technology, weapons systems, and scruples of the modern world can attack their enemies. Radical extremists who advocate violence constitute a transnational, theologically-based insurgency that seeks to overthrow regimes in the Islamic world. They bitterly attack the trappings of modernity as well as the philosophical underpinnings of the West.

At a minimum, radical Islam seeks to eliminate U.S. and other foreign presence in the Middle East, a region vital to U.S. and global security, but only as a first step toward the creation of a Caliphate stretching from Central Asia in the East to Spain in the West and extending deeper into Africa. The problems in the Arab-Islamic world stem from the past five centuries, during which the rise of the West and the dissemination of Western political and social values coincided with a decline in the power and appeal of Islamic societies.

Today’s Islamic world confronts the choice of either adapting to or escaping from a globe of interdependence created by the West. Often led by despotic rulers, addicted to the export of commodities which offered little incentive for more extensive industrialization or modernization, and burdened by cultural and ideological obstacles to education and therefore modernization, many Islamic states have fallen far behind the West, South Asia, and East Asia. The rage of radical Islamists feeds off the lies of their often corrupt leaders, the rhetoric of radical imams, the falsifications of their own media, and resentment of the far more prosperous developed world. If tensions between the Islamic world’s past and present were not enough, the Middle East, the Arab heartland of Islam, remains divided by tribal, religious, and political divisions, making continued instability inevitable.

Iran has an increasingly important role in this center of instability. A society with a long and rich history, Iran has yet to live up to its potential to be a stabilizing force in the region. Although the U.S. has removed Iran’s most powerful adversary (Saddam) and reduced the Taliban, the regime continues to foment instability in areas far from its own borders. Despite a population that remains relatively favorable to the United States, the cleric-dominated regime appears ready to continue dedicating its diplomatic and military capabilities to confrontation with the United States and Israel, and to cultivate an array of very capable proxy forces around the world. Hezbollah in Lebanon, Hamas in Gaza, various groups in Afghanistan, Yemen, Iraq, and the Caucasus, and other client states will serve to extend and solidify Iranian influence abroad.

Extreme volatility in oil prices is eroding national revenues due to the failure of the regime to diversify the national economy, which stifles the future prosperity of the Iranian people. Iran must create conditions for its economic viability beyond the near term or face insolvency, internal dissension and ferment, and possible upheaval. The economic importance of the Middle East with its energy supplies hardly needs emphasis. Whatever the outcome of the conflicts in Iraq and Afghanistan, U.S. forces will find themselves again employed in the region on numerous missions ranging from regular warfare, counterinsurgency, stability operations, relief and reconstruction, to engagement operations. The region and its energy supplies are too important for the U.S., China, and other energy importers to allow radical groups to gain dominance or control over any significant portion of the region.


Weak and failing states will remain a condition of the global environment over the next quarter of a century. Such countries will continue to present strategic and operational planners with serious challenges, along with human suffering on a scale so large that it almost invariably spreads throughout the region and in some cases has the potential to project trouble throughout the globalized world.

Many, if not the majority, of weak and failing states will be in Sub-Saharan Africa, Central Asia, the Middle East, and North Africa. A current list of such states much resembles the lists of such states drawn up a generation ago, suggesting a chronic condition, which, despite considerable aid, provides little hope for solution.

There is one dynamic in the literature of weak and failing states that has received relatively little attention, namely the phenomenon of “rapid collapse.” For the most part, weak and failing states represent chronic, long-term problems that allow for management over sustained periods. The collapse of a state, by contrast, usually comes as a surprise, has a rapid onset, and poses acute problems. The collapse of Yugoslavia into a chaotic tangle of warring nationalities in 1990 suggests how suddenly and catastrophically state collapse can happen – in this case, a state that had hosted the 1984 Winter Olympics at Sarajevo, which then quickly became the epicenter of the ensuing civil war. The erosion of state authority by extremist Islamist groups merits consideration because of the disastrous consequences such weakness could create for U.S. security. Pakistan is especially under assault, and its collapse would carry with it the likelihood of a sustained, violent, and bloody civil and sectarian war, an even bigger haven for violent extremists, and the question of what would happen to its nuclear weapons. That “perfect storm” of uncertainty alone might require the employment of U.S. and coalition forces.

One of the most troubling and frighteningly common human disasters that occur as states collapse is that of ethnic cleansing and even genocide. This extreme violence, leading to the death and displacement of potentially millions, is usually traced to three interlocking factors. These include the collapse of state authority, severe economic turmoil, and the rise of charismatic leaders proposing the “ultimate solution” to the “problem” of ethnic or religious diversity or the division of economic or political spoils.

The drive to create ethnically or ideologically pure political entities has been a consistent feature of the era of self-determination and decolonization. The retreat of the European empires followed by the contraction of the dangerous, yet relatively stable U.S.-Soviet confrontation has laid bare a world of complex ethnic diversity and violent groups attempting to secure power while keeping ethnic minorities under heel. As sources of legitimate order have crumbled, local elites compete for the benefits of power. The stakes are particularly high in ethnically diverse regions.

Where might we expect to see a similar toxic mix of mismatched political governance, difficult economic circumstances, and aggressive politicians willing to use human differences to further their pursuit of political power? The ethnic confrontations in Europe have been largely resolved through wars, including the Second World War and the Balkan conflict; however, they may reemerge in new areas where migration, demographic decline, and economic stress take hold. Most problematic is the vast arc of instability between Morocco and Pakistan, where Shias, Sunnis, Kurds, Arabs, Persians, Jews, Pashtuns, Baluchs, and other groups compete with one another. (Niall Ferguson, “The Next War of the World,” Foreign Affairs, September 2006, p. 66.)

Many areas of central Africa are also ripe for severe ethnic strife as the notion of ethnically pure nation states animates old grievances, with Rwanda and the Congo being examples of where this path might lead. As Lebanon, Bosnia, Rwanda, and current operations in Iraq and Afghanistan have shown, the Joint Force may be called upon to provide order and security in areas where simmering political, racial, ethnic, religious, and tribal differences create the potential for large scale atrocities.


While states and other conventional powers will remain the principal brokers of power, there is an undeniable diffusion of power to unconventional, non-state, or trans-state actors. While these groups have rules of their own, they exist and behave outside the recognized norms and conventions of society. Some transnational organizations seek to operate beyond state control and acquire the tools and means to challenge states and utilize terrorism against populations to achieve their aims. These unconventional transnational organizations possess no regard for international borders and agreements. The discussion below highlights two examples: militias and super-empowered individuals. Militias represent armed groups, irregular yet recognizable as an armed force, operating within ungoverned areas or in weak or failing states. They range from ad hoc organizations with shared identities to more permanent groups possessing the ability to provide goods, services, and security along with their military capabilities.

Militias challenge the sovereignty of the state by breaking the monopoly on violence traditionally the preserve of states. An example of a modern day militia is Hezbollah, which combines state-like technological and warfighting capabilities with a “substate” political and social structure inside the formal state of Lebanon. One does not need a militia to wreak havoc. Pervasive information, combined with lower costs for many advanced technologies, has already resulted in individuals and small groups possessing increased ability to cause significant damage and slaughter. Time and distance constraints are no longer in play. Such groups employ niche technologies capable of attacking key systems and providing inexpensive countermeasures to costly systems. Because of their small size, such groups of the “super-empowered” can plan, execute, receive feedback, and modify their actions, all with considerable agility and synchronization. Their capacity to cause serious damage is out of all proportion to their size and resources.


In the 1940s the democratic West faced down and ultimately defeated an extreme ideology that espoused destruction of democratic freedoms: Nazism. Afterward, these same powers resisted and overcame another opposing ideology that demanded the diminution of individual liberties to the power of the state: Communism.

We now face a similar, but even more radical ideology that directly threatens the foundation of western secular society. Al Qaeda terrorists, violent militants in the Levant, radical Salafist groups in the Horn of Africa, and the Taliban in the mountains of Afghanistan are all examples of local groups pursuing local interests, but tied together by a common, transnational, and violent ideology. These groups are driven by an uncompromising, nihilistic rage at the modern world, and accept no middle ground or compromise in pursuing their version of the truth. Their goal is to force this truth on the rest of the world’s population. These radical ideological groups have discovered how to form cellular, yet global networks that operate beyond state control and have the capacity – and, most importantly, the will – to challenge the authority of states. Because these organizations do not operate within the international diplomatic systems, they will locate bases of operations in the noise and complexity of cities and use international law and the safe havens along borders of weak states to shield their operations and dissuade the U.S. from engaging them militarily.

Combining extreme ideologies with modern technology, they use the Internet and other means of communications to share experiences, tactics, funding, and best practices to maintain a constant flow of relatively sophisticated volunteers for their effort. Moreover, they have made common cause with other unconventional powers and will use these organizations to shelter their efforts and as fronts for their operations. These radical groups are constructing globe-spanning “narratives” that effectively dehumanize their opponents, legitimizing in their eyes any tactic no matter how abhorrent to civilized norms of conduct. They believe that their target audience is the 1.1 billion Muslims who are 16 percent of the world’s total population. The use of terror tactics to shock and silence moderate voices in their operational areas includes suicide bombing and improvised explosive devices to kill and maim as many as possible.

Most troubling is the possibility, indeed likelihood, that some of these groups will achieve a weapons of mass destruction (WMD) capability through shared knowledge, through smuggling, or through the deliberate design of an unscrupulous state. The threat of attacks both abroad and in the homeland using nuclear devices, custom bio-weapons, and advanced chemical agents intended to demonstrate dramatically our security weaknesses are real possibilities we must take account of in our planning and deterrent strategies.

No one should harbor the illusion that the developed world can win this conflict in the near future. As is true with most insurgencies, victory will not appear decisive or complete. It will certainly not rest on military successes. The treatment of political, social, and economic ills can help, but in the end will not be decisive. What will matter most will be the winning of a “war of ideas,” much of which must come from within the Islamic world itself.


A continuing challenge to American security will be the proliferation of nuclear weapons. Throughout the Cold War, U.S. planners had to consider the potential use of nuclear weapons both by and against the Soviet Union. For the past 20 years, Americans have largely ignored issues of deterrence and nuclear warfare. We no longer have that luxury.

Since 1998, India and Pakistan have created nuclear arsenals and delivery capabilities. North Korea has on two occasions attempted to test nuclear devices and likely has produced the fissile material required to create weapons. North Korea is likely to attempt to weaponize its nascent nuclear capability to increase its leverage with its neighbors and the United States.

Furthermore, the Iranian regime is pressing forward aggressively with its own nuclear weapons program. The confused reaction in the international community to Iran’s defiance of external demands to discontinue its nuclear development programs may provide an incentive for others to follow this path. Unless a global agreement to counter proliferation is successful, the Joint Force must consider a future in which issues of nuclear deterrence and use are a primary feature. Some state or non-state actors may not view nuclear weapons as tools of last resort. It is far from certain that a state whose culture is deeply distinct from that of the United States, and whose regime is either unstable or unremittingly hostile (or both), would view the role of nuclear weapons in a fashion similar to American strategists. The acquisition of nuclear weapons by other regimes, whether they were hostile or not, would disrupt the strategic balance further, while increasing the potential for the use of nuclear weapons.

Add to this regional complexity the fact that multiple nuclear powers will very likely have the global reach to strike other states around the world. These rising nuclear powers may view use of WMD very differently from the U.S. and may be willing to employ them tactically to achieve short term objectives. The stability of relations among numerous states capable of global nuclear strikes will be of central importance for the Joint Force. Assured second-strike capabilities and relations based on mutually-assured destruction may mean greater stability, but may effectively reduce access to parts of the world. On the other hand, fragile nuclear balances and vulnerable nuclear forces may provide tempting targets for nuclear armed competitors.

Any discussion of weapons of mass destruction must also address the potential use of biological weapons by sovereign states as well as non-state actors. By all accounts, such weapons are becoming easier to fabricate – certainly easier than nuclear weapons – and under the right conditions they could produce mass casualties, economic disruption, and terror on the scale of a nuclear strike. The knowledge associated with developing biological weapons is widely available, and the costs for their production remain modest, easily within reach of small groups or even individuals. The U.S. ability to deter nuclear armed states and non-state actors needs to be reconsidered and perhaps updated to reflect this changing landscape.

More advanced weaponry will be available to more groups, conventional and unconventional, for a cheaper price. This will allow relatively moderately funded states and militias to acquire long-range precision munitions, projecting power farther out and with greater accuracy than ever before. At the high end, it has already been seen that this reach extends into space with the public demonstration of anti-satellite weapons. Whether a small oil-rich nation or a drug cartel, cash will be able to purchase lethal capabilities. If manpower is a limiting factor, the advances in robotics provide a solution for those who can afford the price. This has the sobering potential to amplify further the power of the “super-empowered” guerrilla.

A current example of the kind of technological surprise that could prove deadly would be an adversary’s deployment and use of disruptive technology, such as electro-magnetic pulse (EMP) weapons against a force without properly hardened equipment. The potential effects of an electromagnetic pulse resulting from a nuclear detonation have been known for decades. The appearance of non-nuclear EMP weapons could change operational and technological equations. They are being developed, but are joint forces being adequately prepared to handle such a threat? The impact of such weapons would carry with it the most serious potential consequences for the communications, reconnaissance, and computer systems on which the Joint Force depends at every level.

High powered microwave (HPM) weapons will offer both the Joint Force and our adversaries new ways to disrupt, degrade, or even destroy unshielded electrical systems, as well as electronics and integrated circuits upon which command and control, ISR and weapon systems themselves are based. The non-explosive, non-lethal aspects of HPM will prove adaptive against a variety of threats embedded and operating among civilian populations in urban environments.


By the 2030s, five billion of the world’s eight billion people will live in cities. Fully two billion of them will inhabit the great urban slums of the Middle East, Africa, and Asia. Many large urban environments will lie along the coast or in littoral environments. With so much of the world’s population crammed into dense urban areas and their immediate surroundings, future Joint Force commanders will be unable to evade operations in urban terrain. The world’s cities, with their teeming populations and slums, will be places of immense confusion and complexity, physically as well as culturally. They will also provide prime locations for diseases and the population density for pandemics to spread.

There is no modern precedent for major cities collapsing, even in the Eighteenth and Nineteenth Centuries, when the first such cities appeared. Cities under enormous stress, such as Beirut in the 1980s and Sarajevo in the 1990s, nevertheless managed to survive with only brief interruptions of food imports and basic services. As in World War II, unless contested by an organized enemy, urban areas are always easier to control than the countryside. In part, that is because cities offer a pre-existing administrative infrastructure through which forces can manage secured areas while conducting stability operations in contested locations. The effectiveness of that pre-existing infrastructure may be tested as never before under the stress of massive immigration, energy demand, and food and water shortages in the urban sprawl that is likely to emerge.

What may be militarily effective may also create the potential for large civilian casualties, which in turn would most probably result in a political disaster, especially given the ubiquitous presence of the media. As well, the nature of operations in urban environments places a premium on decentralized command and control, ISR, fire support, and aviation. Combat leaders will need to continue to decentralize decision-making down to the level where tactical leaders can act independently in response to fleeting opportunities.


In an uncertain world, which will inevitably contain enemies who aim either to attack the United States directly or to undermine the political and economic stability on which America, its allies, and the world’s economy depend, the nation’s military forces will play a crucial role. Yet, war is an inherently uncertain and costly endeavor. As the United States has discovered in Iraq and Afghanistan, there is no such thing as a rapid, decisive operation that does not generate unforeseen second and third order effects.

Preventing war will prove as important as winning a war.

Deterrence also depends on the belief on the part of the adversary that the United States will use its military power in defense of its national interests.

After protracted action in Afghanistan and Iraq, the force now faces a period of reconstitution and rebalancing which will require significant physical, intellectual, and moral effort that may take a decade to complete. During this time, our forces may be located significant distances from a future fight. Thus, the Joint Force will be challenged to maintain both a deterrent posture and the capacity and capability to be forward engaged around the world, showing the flag and displaying the ability to act in ways to both prevent and win wars.

Shadow Globalization: “Bazaars of Violence”

The globalization of trade, finance, and human travel across international boundaries in the commercial world has an analogous dark side as well. Criminal and terrorist networks are intermingling to construct their own “shadow globalization,” building micro markets, and trade and financial networks that will enable them to coordinate nefarious activities on a global scale.

The ubiquity and ease of access to these markets outside of legal structures attract shadow financing from a much larger pool, irrespective of geography. In these markets, rates of innovation in tactics, capabilities, and information sharing will accelerate and will enable virtual organizational structures that quickly coalesce, plan, attack, and dissolve.

As they grow, these markets will allow adversaries to generate attacks with a rapidity and sophistication beyond law enforcement’s capability to interdict. For example, Somali pirates have hired indigenous spotters to identify ships leaving foreign harbors as prime targets for hijackings.

We should expect shadow globalization to encourage this outsourcing of criminality to interface increasingly with insurgencies, such that actors in local conflicts will have an impact on a global scale, with perhaps hundreds of groups and thousands of participants. The line between insurgency and organized crime will likely continue to blur. This convergence can already be seen in the connections between the FARC and cocaine trafficking, MEND and stolen oil, and the Taliban and opium production. This convergence means that funding for violent conflicts will intermingle with and abet the growth of global gray and black markets.

The current size of these markets is already $2-3 trillion and is growing faster than legal commercial trade; it has the potential to equal a third of global GDP by 2020. If so, violent insurgencies will have the ability to trade within this economic regime, amassing financial resources in exchange for market protection, and to mobilize those resources to rival state military capabilities in many areas. This gives them the increased ability to co-opt and corrupt state legal structures.

Shadow globalization may not be merely an Internet phenomenon, as groups are able to buy or lease their own commercial aircraft, fast boats, submarines, and truck fleets, and to move people and cargo across regions outside state-controlled legal trade regimes. Moreover, collaboration among younger generations through ever more powerful social media will likely be globally mainstream by 2025. The sophistication, ubiquity, and familiarity of these technologies will enable faster and more efficient market formation. This means that micro-market interaction will be both natural and habitual to its participants, creating opportunities for “flash micro-markets” and symbiosis between legal and illicit market elements.

It is in this political-strategic environment that the greatest surprises for Americans may come. The United States has dominated the world economically since 1915 and militarily since 1943. Its dominance in both respects now faces challenges brought about by the rise of powerful states. Moreover, the rise of these great powers creates a strategic landscape and international system, which, despite continuing economic integration, will possess considerable instabilities. Lacking either a dominant power or an informal organizing framework, such a system will tend toward conflict.

Between now and the 2030s, the military forces of the United States will almost certainly find themselves involved in combat.


There are two particularly difficult scenarios that will confront joint forces between now and the 2030s. The first and most devastating would be a major war with a powerful state or hostile alliance of states.

Given the proliferation of nuclear weapons, there is considerable potential for such a conflict to involve the use of such weapons.

While major regular war is currently in a state of hibernation, one should not forget that in 1929 the British government adopted as its basic principle of defense planning the assumption that no major war would occur for the next ten years. Until the mid-1930s the “Ten Year Rule” crippled British defense expenditures. The possibility of war remained inconceivable to British statesmen until March 1939, despite the movement of formerly democratic governments to Fascism. The one approach that would deter a major conflict involving U.S. military forces, including a conflict involving nuclear weapons, is the maintenance of capabilities that would allow the United States to wage and win any possible conflict. As the Romans so aptly commented, “if you wish for peace, prepare for war.” Preventing war will in most instances prove more important than waging it. In the long term, the primary purpose of the military forces of the United States must be deterrence, for war in any form and in any context is an immensely expensive undertaking both in lives and national treasure.

Americans must not allow themselves to be deluded into believing their future opponents will prove as inept and incompetent as Saddam Hussein’s regime was in 1991 and again in 2003.

Having seen the capabilities of U.S. forces in both regular and irregular war, future opponents will understand “the American way of war” in a particularly detailed and thorough way.

More sophisticated opponents of U.S. military forces will certainly attack American vulnerabilities. For instance, it is entirely possible that attacks on computers, space, and communications systems will severely degrade command and control of U.S. forces.

Conflicts or events beyond the scope of traditional war, such as 9/11, or non-attributable use of WMD, will create demands that will stress the Joint Force.

In planning for future conflicts, Joint Force commanders and their planners must factor two important constraints into their calculations: logistics and access. The majority of America’s military forces will find themselves largely based in North America. Thus, the first set of problems involved in the commitment of U.S. forces will be logistical. In the 1980s many defense pundits criticized the American military for its supposed over-emphasis on logistics, and praised the German Wehrmacht for its minimal “tooth to tail” ratio in the Second World War. What they missed was that the United States had to project its military forces across two great oceans, then fight massive battles of attrition in Europe and in East Asia. Ultimately, the logistical prowess of U.S. and Allied forces translated into effective combat forces, defeated the Wehrmacht on the Western Front, crushed the Luftwaffe in the skies over Germany, and broke Imperial Japan’s will.

The tyranny of distance will always influence the conduct of America’s wars, and joint forces will confront the problems associated with moving forces over great distances and then supplying them with fuel, munitions, repair parts, and sustenance. In this regard, a measure of excess is always necessary, compared to “just in time” delivery. Failure to keep joint forces that are engaged in combat supplied could lead to disaster, not just unstocked shelves. Understanding that requirement represents only the first step in planning, but it may well prove the most important.

The crucial enabler for America’s ability to project its military power for the past six decades has been its almost complete control over the global commons.

Any projection of military power in the future will require a similar enabling effort, and must recognize that the global commons have now expanded to include the domains of cyber and space. The Joint Force must have redundancy built into each of these areas to ensure that access and logistics support are more than “single-point safe” and cannot be disrupted through a single point of attack by the enemy. In America’s two recent wars against Iraq, the enemy made no effort to deny U.S. forces entry into the theater. Future opponents, however, may not prove so accommodating.

The second constraint confronting planners is that the United States may not have uncontested access to bases in the immediate area from which it can project military power. Even in the best case, allies will be essential to providing the base structure required for arriving U.S. forces.

But there may be other cases in which uncontested access to bases is not available for the projection of military forces. This may be because the neighborhood is hostile, smaller friendly states have been intimidated, negative perceptions of America exist, or states fear giving up a measure of sovereignty. Furthermore, the use of bases by the Joint Force might involve the host nation in conflict. Hence, the ability to seize bases in enemy territory by force from the sea and air could prove the critical opening move of a campaign. Given the proliferation of sophisticated weapons in the world’s arms markets, potential enemies – even relatively small powers – will be able to possess and deploy an array of longer-range and more precise weapons. Such capabilities in the hands of America’s enemies will obviously threaten the projection of forces into a theater, as well as attack the logistical flow on which U.S. forces will depend. Thus, the projection of military power could become hostage to the ability to counter long-range systems even as U.S. forces begin to move into a theater of operations and against an opponent. The battle for access may prove not only the most important, but the most difficult.

One of the major factors in America’s success in deterring potential aggressors and projecting its military power over the past half century has been the presence of its naval forces off the coasts of far-off lands. Moreover, those forces have proven of enormous value in relief missions when natural disasters have struck. They will continue to be a significant factor in the future. Yet there is a rising danger that, with the increase in precision and longer-range missiles, presence forces in their exposed positions could be the first target of an enemy’s action.

The Joint Force can expect future opponents to launch both terrorist and unconventional attacks on the territory of the continental United States, while U.S. forces, moving through the global commons, could find themselves under persistent and effective attack. In this respect, the immediate past is not necessarily a guide to the future.

Unfortunately, we must also think the unthinkable – attacks on U.S. vital interests by implacable adversaries who refuse to be deterred could involve the use of nuclear weapons or other WMD.

Our joint forces must also have the recognized capability to survive and fight in a WMD environment, including a nuclear one. This capability is essential to both deterrence and effective combat operations in the future joint operating environment. If there is reason for the Joint Force commander to consider the potential use of nuclear weapons by adversaries against U.S. forces, there is also the possibility that sometime in the future two other warring states might use nuclear weapons against each other. In the recent past, India and Pakistan have come close to armed conflict beyond the perennial skirmishing that occurs along their Kashmir frontier. Given India’s immense conventional superiority, there is considerable reason to believe such a conflict could lead to nuclear exchanges. As would be true of any use of nuclear weapons, the result could be massive carnage, uncontrolled refugee flows, and social collapse – all in all, a horrific human catastrophe.

Future adversaries will work through surrogates, including terrorist and criminal networks, manipulate access to energy resources and markets, and exploit perceived economic and diplomatic leverage in order to complicate our plans. Such approaches will be obscure and difficult to detect.

We face an era of failed states, destabilized elements, and high-end asymmetric threats. We must be prepared to adapt rapidly to each specific threat, and not narrowly focus only on preferred modes of warfare.

As Mao suggested, the initial approach of irregular forces must be a general unwillingness to engage the regular forces they confront. Rather, according to him, they should attack the enemy where he is weakest, and in most cases this involves striking his political and security structures. It is likely that the enemy will attack those individuals who represent the governing authority or who are important in the local economic structure: administrators; security officials; tribal leaders; school teachers; and business leaders, among others, particularly those who are popular among the locals.

The current demographic trends and population shifts around the globe underline the increasing importance of cities. The urban landscape is steadily growing in complexity, while its streets and slums are filled with a youthful population that has few connections to their elders. The urban environment is subject to water scarcity, increasing pollution, soaring food and living costs, and labor markets in which workers have little leverage or bargaining power. Such a volatile mixture is a recipe for trouble.

Joint forces will very likely find themselves involved in combat and relief operations in cities. Such areas will provide adversaries with environments that allow them to hide, mass, and disperse, while using the cover of innocent civilians to mask their operations.

They will also be able to exploit the interconnections of urban terrain to launch attacks on infrastructure nodes with cascading political effects. Urban geography will provide enemies with a landscape of dense buildings, an intense information environment, and complexity, all of which ease the conduct of operations.

Any urban military operation will require a large number of troops, which could consume manpower at a startling rate. Moreover, operations in urban terrain will confront Joint Force commanders with a number of conundrums. The very density of buildings and population will inhibit the use of lethal means, given the potential for collateral damage and large numbers of civilian casualties. Such inhibitions could increase U.S. casualties. Additionally, any collateral damage carries with it difficulties in winning the “battle of narratives.” How crucial the connection between collateral damage and disastrous political implications can be is suggested by the reaction to a remark an American officer made during the Tet offensive that American forces “had to destroy a village to save it.” That thought process and suggestion of indiscriminate violence reverberated throughout the United States and was one contributing factor in the erosion of political support for the war.

Terrorists will be able to internalize lessons rapidly from their predecessors and colleagues without the bureaucratic hindrances found in nation states. One must also note the growing convergence of armed groups and terrorist organizations with criminal enterprises such as drug cartels to finance their activities. Such cooperation will only make terrorists and criminal cartels more dangerous and effective.

Where an increase in terrorist activity intersects with energy supplies or weapons of mass destruction, Joint Force commanders will confront the need for immediate action, which may require employment of significant conventional capabilities.

As Sir Michael Howard once commented, the military profession is not only the most demanding physically, but the most demanding intellectually. Moreover, it confronts a problem that no other profession possesses: There are two great difficulties with which the professional soldier, sailor, or airman has to contend in equipping himself as commander. First, his profession is almost unique in that he may only have to exercise it once in his lifetime, if indeed that often. It is as if a surgeon had to practice throughout his life on dummies for one real operation;

Secondly, the complex problem of running a [military service] at all is liable to occupy his mind so completely that it is easy to forget what it is being run for.

The Joint Operating Environment has spoken thoroughly about the asymmetric application of power by potential enemies against U.S. military forces. There is also an asymmetry with respect to the defense spending of the United States and its potential opponents, particularly in irregular contexts. One needs only to consider the enormous expenditures the United States has made to counter the threat posed by improvised explosive devices (IEDs). The United States has spent literally billions to counter these crude, inexpensive, and extraordinarily effective devices. Multiplied against a global enemy, such a cost-exchange ratio becomes untenable.

If we expect to develop and sustain a military that operates at a higher level of strategic and operational understanding, the time has come to address the recruiting, education, training, incentive, and promotion systems so that they are consistent with the intellectual requirements for the future Joint Force.

Do make it clear that generalship, at least in my case, came of understanding, of hard study and brainwork and concentration. Had it come easy to me, I should not have done [command] so well. If your book could persuade some of our new soldiers to read and mark and learn things outside drill manuals and tactical diagrams, it would do a good work. I feel a fundamental, crippling incuriousness about our officers. Too much body and too little head. The perfect general would know everything in heaven and earth. So please, if you see me that way and agree with me, do use me as a text to preach for more study of books and history, a greater seriousness in military art. With two thousand years of example behind us, we have no excuse, when fighting, for not fighting well…

The defining element in military effectiveness in war lies in the ability to recognize when prewar visions and understanding of war are wrong and must change. Unfortunately, in terms of what history suggests, most military and political leaders have attempted to impose their vision of future war on the realities of the conflict in which they find themselves engaged, rather than adapting to the actual conditions they confront. The fog and friction that characterize the battle space invariably make the task of seeing, much less understanding what has actually happened, extraordinarily difficult. Moreover, the lessons of today, no matter how accurately recorded and then learned, may no longer prove relevant tomorrow. The enemy is human and will consequently learn and adapt as well. The challenges of the future demand leaders who possess rigorous intellectual understanding. Providing such grounding for the generals and admirals, sergeants and chiefs of the 2030s will ensure that the United States is as prepared as possible to meet the threats and seize the opportunities of the future.


USJFCOM. 2010. The Joint Operating Environment. United States Joint Forces Command.


General Charles Wald: Dial 1-800-The-U.S.-Military to solve your oil dependency issues

General Charles Wald, U.S. Air Force (retired), Former Deputy Commander, U.S. European command, and member of the Energy Security Leadership Council

Source: Senate 110-6. January 10, 2007. Geopolitics of Oil. United States Senate Hearing. 90 pages.

I recently retired from the Air Force after 35 years of service and during my career had the opportunity to fly combat over Vietnam, Cambodia, Iraq and Bosnia and learned much regarding how to use military assets to effectively solve national security problems.

But I also learned that many believed the U.S. military is solely responsible for security. I like to call this the ‘‘Dial 1-800-The-U.S.-Military’’ syndrome, because it reflects how people assume the U.S. military is a “toll-free” resource that can be called on to perform tasks that no one else has either the capability or will to execute.

I recall a recent meeting with several major global oil company executives in Kazakhstan. Before we began our discussion, one of the executives thanked me and the U.S. military for protecting the free flow of oil around the world. The executive’s world view included the expectation that the U.S. military will be there to provide worldwide security and to ensure the free flow of oil without any assistance from others. This struck me, and frankly, does not seem like a good model, particularly for the United States. The U.S. cannot and should not be everywhere to protect all the vulnerable components of the global oil infrastructure. The global economy relies on a massive oil infrastructure that stretches far beyond the Persian Gulf to pipelines in the Caucasus and offshore drilling rigs in the Gulf of Guinea. Surveying this situation, I realized that the U.S. military could not protect this vast infrastructure without partners. And, trust me, there should be partners out there, because the free flow of oil is in the best interest of many people all over the world.

With regard to the oil dependence issue, military response and capabilities are by no means the only effective tools available and in many cases are not appropriate. In fact, the single most effective step the United States can take to improve its energy security is to increase transportation efficiency. The transportation sector is responsible for nearly 70 percent of the oil the United States consumes. Within the transportation sector, oil—nearly 13 million barrels per day of it—accounts for 97% of delivered energy. More than 8 mb/d are used to fuel the over 220 million light-duty vehicles that Americans rely on for mobility.

CAFE standards, legislated in 1975 in the wake of the 1973 Arab oil embargo, were instrumental in helping America lower oil usage by the 1980s, but there has been little progress since the original mileage targets were met. As a consequence, America’s light-duty vehicle fleet now has the worst average fuel efficiency in the developed world.

Some may be surprised to hear a former general talk about fuel efficiency standards, but they shouldn’t be. In the military, we learned that force protection isn’t only about protecting weak spots; it’s also about reducing vulnerabilities before you go into harm’s way. That’s why lowering the Nation’s demand for oil is so critical.

Nearly all of our U.S. military commands have some oil security tasks, and in essence they provide a blanket of security that benefits all nations. Central Command guards access to the oil supplies in the Middle East; Southern Command defends Colombia’s Cano Limon pipeline; Pacific Command patrols the tanker routes in the Indian Ocean, the South China Sea and the Western Pacific; and in my last assignment, as deputy commander of European Command (which included, by the way, most of Africa), we patrolled the Mediterranean and provided security in the Caspian Sea and off the West Coast of Africa.

During that assignment, I became more appreciative of the size and scope of the oil security challenge. While surveying that challenge, it became apparent that the U.S. military could not protect that vast infrastructure without partners—and trust me, there should be partners in this mission. The free flow of oil is clearly in the best interests of people all over the world. These interested parties certainly cannot replicate all the capabilities of the U.S. military, but their contributions can free up the U.S. military for tasks that only it can successfully accomplish.

The armed forces of the United States have thus far been successful in fulfilling our energy security mission and they continue to carry out their duties professionally and with great courage. As a result of this success, many have come to believe—and I believe, falsely—that energy security can be achieved solely by military means. We need to change this paradigm because the U.S. military is not the best instrument for confronting all the strategic dangers emanating from oil dependence. The 1973 oil embargo is the most famous example of the use of energy as a political strategic weapon.


Since 1980, the U.S. Government, through military application, has put about $50 billion to $60 billion a year into the Persian Gulf. That doesn’t count the current Iraq war or the 1990 Iraq war. And that’s good for our country, for security interests, but the problem is, we’re subsidizing world energy. There is nobody else in the world doing this, and really, if you look at how much we’re paying per gallon, me, as a U.S. citizen today, for gasoline, you could almost say it’s $7 a gallon, based on the fact that we’re subsidizing world security on this issue.

The United States protects the global oil trade for the benefit of all nations. In part, this is because the U.S. has unmatched military capabilities. But another reason is that other nations know the U.S. military is out there doing the job.

The implicit strategic and tactical demands of protecting the global trade have been recognized by national security officials for decades, but it took the Carter Doctrine of 1980, proclaimed in response to the Soviet Union’s invasion of Afghanistan, to formalize this critical military commitment.

The Carter Doctrine committed the U.S. to defending the Persian Gulf against aggression by any ‘‘outside force.’’ President Reagan built on this foundation by creating a military command in the Gulf and ordering the U.S. Navy to protect Kuwaiti oil tankers during the Iran-Iraq War. The Gulf War of 1991, which saw the United States lead a coalition of nations in ousting Iraqi leader Saddam Hussein from Kuwait, was an expression of an implicit corollary of the Carter Doctrine: the U.S. would not allow Persian Gulf oil to be dominated by a radical regime—even an ‘inside force’ that posed a dangerous threat to the international order. More recently, the security agenda in the Gulf has expanded beyond state actor aggression to include concerns about terrorist attacks on facilities and supply lines.


Since issuing his 1996 ‘‘Declaration of War’’ against the U.S. and its partners, Osama bin Ladin has warned of attacks on oil installations in the Persian Gulf. Last year, the world came close to experiencing an oil supply shock when an Al-Qaeda attack on the Abqaiq facility, through which approximately 60% of Saudi Arabian oil exports pass, was barely foiled. In addition to attacking physical infrastructure, Al Qaeda operatives have also targeted expatriates in their residential areas, in particular in Riyadh, Saudi Arabia (October 2002) and in al-Khobar (May 2004).

Iraq is also the scene of persistent insurgent and terrorist attacks on pipelines and pumping stations, especially in the North of the country. These attacks have severely limited Iraqi oil exports to the Mediterranean through Turkey, and they are a major reason why Iraqi oil production has stubbornly remained below its prewar peak. The lost output has cost Iraq billions of dollars at a time when it needs every dollar and while U.S. taxpayers have spent billions on the reconstruction of the country. But if violence continues, and especially if it spreads to the south, where most of the oil and export facilities are located, then all of Iraq’s oil production could be at risk. The implications of this supply cut would be severe.

The danger of attacks on shipping is proven—in October 2002, the French supertanker Limburg was rammed by a small boat packed with explosives off the coast of Yemen. Most oil shipments have to pass through a handful of maritime chokepoints. Roughly 80% of Middle East oil exports pass through the Strait of Hormuz (17 mb/d), Bab el Mandeb (3 mb/d), or the Suez Canal/Sumed Pipeline (3.8 mb/d). Another 11.7 mb/d pass through the Strait of Malacca and 3.1 mb/d through the Turkish Straits. All of these passageways are vulnerable to accidents, piracy, and terrorism. Since alternative routes are lacking, the effect of a major blockage at one of these points could be devastating. Even unsuccessful attacks on tankers are likely to raise insurance rates and thus oil prices.


The armed forces of the United States have been extraordinarily successful in fulfilling their energy security missions, and they continue to carry out their duties with great professionalism and courage. But, ironically, this very success may have weakened the nation’s strategic posture by allowing America’s political leaders and the American public to believe that energy security can be achieved by military means alone. We need to change the paradigm, because the U.S. military is not the best instrument for confronting all of the strategic dangers emanating from oil dependence. This is particularly true when oil is used as a political weapon.

The 1973 Arab embargo is still the most famous example of the use of energy as a political strategic weapon. But in recent years, it has been Russia that has shown the most willingness to play this dangerous game, as at the beginning of 2006, when it stopped natural gas exports to Ukraine, which in turn withheld the natural gas destined for Western Europe. The danger of conflict with a nuclear power like Russia should make it abundantly clear that there are limits on how we can use military power to guarantee energy flows. But we can take political steps to counter Russia’s brandishing of oil and natural gas as political weapons. Russia wants to join the World Trade Organization (WTO) as a full member. Russia’s entry into this organization must be made contingent on its behavior. Russia must make a commitment to fostering energy security; there should be no reward for sowing insecurity.

Of course, energy exporting governments don’t need to resort to full-fledged embargoes to hurt the U.S. and other importers. Exporters can manipulate price through less drastic production cuts. Tellingly, after oil prices dropped from their 2006 peak of $78 to about $60 in the U.S. market, OPEC members began to cut back on production. Governments in oil-producing countries can also constrain future supply through investment decisions that lead to long-term stagnant or slowing growth in production and exports, or even decline. Often enough, future supply destruction is the unintended or accepted consequence of an insistence on government control of natural resources. Currently, an estimated 80-90% of global oil reserves are controlled by national oil companies (NOCs), which are highly susceptible to being constrained by political objectives, even if these undermine long-term supply growth.

State-controlled production is frequently inefficient, relying on outdated technology and reserve management techniques. Consider Russia, whose government has made it abundantly clear that it wants to maintain near-absolute control over its energy resources. This power grab has curtailed foreign investment, and ultimately limited production as well. Russia’s oil industry stands as a testament to the dangers of political meddling in oil production. After the collapse of the Soviet Union, Russian production plummeted to only 6 mb/d in the mid-1990s, but then the efforts of private companies helped push production back to over 9 mb/d, achieving 10% annual growth rates in 2003 and 2004. However, with the subsequent expropriations of private enterprises such as Yukos, the production growth curve has flattened. Government control over production in Russia will also adversely impact the massive Shtokman natural gas field and Sakhalin-2 oil projects. President Putin has determined that tight government control of resources is more important than the greater revenue that would accrue from increased production achieved through cooperation with Western oil companies.

In an oil-dependent world facing increasingly tight supplies, the growing power of oil exporting countries and the shift in strategic calculations of other important countries have all added up to lessen U.S. diplomatic leverage.

Iran, which exports to the United States’ European and Asian allies, has threatened to use the oil weapon to retaliate against efforts to constrain its nuclear program. The European Union’s reliance on Middle Eastern oil and Russian gas continues to complicate U.S. foreign policy efforts, especially our efforts to stop Iran from developing nuclear weapons. China, with its rapidly growing dependence on foreign oil, also blocks U.S. diplomatic initiatives in an effort to strengthen its own ties with oil exporters.

Given all these factors, it is imperative that the United States make energy security a top strategic priority. Toward that end, we should mobilize and leverage all of our national security resources, including our economic power, our investment markets, our technological products and our unsurpassed military strength. Curtailing demand is the most important security step we can take.

We need a comprehensive national security strategy for energy security. We must be prepared for sudden supply shocks triggered by terrorism or politics. We must promote greater diversity of fuel options while improving the efficiency of our Nation’s fleet.

CNA. May 2009. Powering America’s Defense: Energy and the Risks to National Security. 74 pages. PoweringAmericasDefense.org

Retired Air Force General Chuck Wald wants to see major changes in how America produces and uses energy. He wants carbon emissions reduced to help stave off the destabilizing effects of climate change.

“We’ve always had to deal with unpredictable and diverse threats,” Gen. Wald said. “They’ve always been hard to judge, hard to gauge. Things that may seem innocuous become important. Things that seem small become big. Things that are far away can be felt close to home. Take the pirates off the African coast. To me, it’s surprising that pirates, today, would cause so much havoc. It’s a threat that comes out of nowhere, and it becomes a dangerous situation.

“I think climate change will give us more of these threats that come out of nowhere. It will be harder to predict them. A stable global climate is what shaped our civilizations. An unstable climate, which is what we’re creating now with global warming, will make for unstable civilizations. It will involve more surprises. It will involve more people needing to move or make huge changes in their lives. It pushes us into a period of nonlinear change. That is hugely destabilizing.

“Our hands are tied in many cases because we need something that others have. We need their oil.”

He gives another reason for major changes in our energy policy: He wants to reduce the pressure on our military.

“My perception is that the world, in a general sense, has assumed the U.S. would ensure the flow of oil around the world,” Gen. Wald said. “It goes back to the Carter Doctrine. I remember seeing the picture of the five presidents in the Oval Office. [He referred to a January photo, taken just before President Obama assumed office.] Most people would not guess it was Jimmy Carter who said the U.S. would protect the flow of Persian Gulf oil by any means necessary. But he did. He recognized it as a vital strategic resource.

“And since that time, as global demand has grown, we see oil used more and more often as a tool by foreign leaders. And that shapes where we send our military. You look at the amount of time we spend engaged, in one way or another, with oil producing countries, and it’s staggering. Hugo Chavez in Venezuela gets a lot of our attention because he has a lot of oil. We spend a lot of money and a lot of time focused on him, and on others like him.”

Gen. Wald cautions against simplistic responses to the challenge of energy dependency.

“The problem is dependence, and by that I mean our hands are tied in many cases because we need something that others have. We need their oil. But the solution isn’t really independence. We’re not going to become truly independent of anything. None of this is that simple. Reaching for independence can lead us to unilateralism or isolationism, and neither of those would be good for the U.S. The answer involves a sort of interdependence. We need a diversity of supply, for us and for everybody. We need clean fuels that are affordable and readily available, to us and to everybody. That’s not independence. It might even be considered a form of dependency-but we’d be dependent on each other, not on fossil fuels.”


The U.S. Military on Peak Oil and Climate Change

CNA. May 2009. Powering America’s Defense: Energy and the Risks to National Security. Center for Naval Analyses. 74 pages.

[ Excerpts from this document follow ]

The destabilizing nature of increasingly scarce energy resources, the impacts of rising energy demand, and the impacts of climate change all are likely to increasingly drive military missions in this century.

GENERAL CHARLES F. “CHUCK” WALD, USAF (RET.) Former Deputy Commander, Headquarters U.S. European Command (USEUCOM); Chairman, CNA MAB

Many of our overseas deployments were defined… by the strategic decision to ensure the free flow of oil to the U.S. and our allies.

VICE ADMIRAL RICHARD H. TRULY, USN (RET.) Former NASA Administrator, Shuttle Astronaut and the first Commander of the Naval Space Command

On DoD’s Efficiency Needs

Having served as commander of the space shuttle, retired Vice Admiral Richard Truly has traveled great distances on a single tank of fuel. His views on energy, however, are shaped by his time as Director of the National Renewable Energy Laboratory, and by a clear sense of how America’s energy choices affect troops on the ground. He believes the fastest gains for the U.S. military will come from a focus on energy efficiency.

This issue “is well recognized by a lot of the troops. They’ve seen friends getting hurt because of poor energy choices we’ve made in the past.”

“Efficiency is the cheapest way to make traction,” Adm. Truly said. “There’s a thousand different ways for the military to take positive action. And these are things that can help them from a war-fighter’s point of view and also make things cheaper in the long run.

“You can see the need by what we’ve done in Iraq and Afghanistan on logistics,” he said. “We’ve put inefficient systems very deep into these regions. And as a result, we end up with long lines of fuel trucks driving in. And we have to protect those fuel trucks with soldiers and with other vehicles.”

Truly sees key obstacles in the way of change. “The Defense Department is the single largest fuel user in the country, but if you compare it to the fuel used by the American public, it’s a piker,” Adm. Truly said. “When you think of the companies that make heavy vehicles, DoD is an interesting customer to them, but it’s not how they make their money. These companies are in the business of selling large numbers of commercial vehicles. So even if our military wants a new semi with a heavy-duty fuel-efficient diesel engine, it’s not likely to happen unless there is enough interest from other sectors to justify mass production. The real demand, if it exists, comes from the other 99 percent of users. That’s the rest of us. The real big market is the American people, and it’s their attitude that needs to change.”

GENERAL PAUL J. KERN, USA (RET.) Former Commanding General, U.S. Army Materiel Command

On the Vulnerability of Energy Inefficiency

In 1991, General Paul Kern commanded the Second Brigade of the 24th Infantry Division in its advance toward Baghdad—a sweeping left hook around Kuwait and up the Euphrates River Valley. It involved moving 5,000 people, plus materiel support, across 150 kilometers of desert. The route covered more ground than the Red Ball Express, which moved materiel across the Western European front in World War II.

“As we considered the route and began planning, our biggest concern was not our ability to fight the Iraqis; it was keeping ourselves from running out of fuel,” Gen. Kern said. “We also made a decision to never let our tanks get below half full, because we didn’t want to refuel in the middle of a fight.”

Meeting this commitment, given the fuel inefficiency of the Abrams tank, required stopping every two and a half hours. Fueling was done with 2,250-gallon HEMTT fuel tankers, which in turn were refueled by 5,000-gallon line-haul tankers (similar to those seen on U.S. highways).

“We set up and moved out in a tactical configuration, and were ready to fight whenever necessary,” Gen. Kern said. “To refuel, we would stop by battalions and companies. As we advanced, we laid out a system with roughly 15 stations for refueling. This was occurring almost continuously. We did it at night in a blinding sandstorm— having rehearsed it was key.”

The vulnerability of these slow-moving, fuel-intense supply lines has made Gen. Kern a strong advocate for increasing fuel efficiency in military operations. “The point of all this is that the logistics demands for fuel are so significant. They drive tactical planning. They determine how you fight. More efficiency can give you more options. That’s what you want as a commander.”

Gen. Kern used a different example—the 2003 northeast power outage, when 50 million people lost electric power—to highlight another energy impact on military operations. “I was running the Army Materiel Command,” Gen. Kern said. “We had a forward operation in Afghanistan, which would forward all the requisitions back here. They had a generator and a satellite radio to talk, but when the outage hit here in the U.S., they had no one to talk to. We quickly came up with back-up plans, but it showed me the vulnerability of the infrastructure here to support deployments.

“In some cases, the need to communicate with supply depots is day-to-day. The Afghan operation then was very fragile. Access was very important. Everything was getting flown in, and because you couldn’t get a lot in with each trip, we wanted a continuous flow. That’s a factor in agility—if you have less materiel on the ground, you can be more agile. But with the limited supplies, you do want to be in constant contact. You want that continuous flow. When the power goes out here, or if we have a lengthy collapse of the grid, that flow of materiel affects our troops in important ways.”

Gen. Kern said agility (and continuous communications) will be increasingly important.

“If you think of humanitarian relief, you don’t know what the community needs. You can’t know that in advance, so you have to be agile. The same is true with asymmetrical threats—you don’t know what you’ll face. You build strong communications networks to help you respond quickly—that’s the planning you can do in advance. But these networks depend, for the most part, on our power grids. That’s a vulnerability we need to address.”

GENERAL GORDON R. SULLIVAN, USA (RET.) Former Chief of Staff, U.S. Army; Former Chairman of the CNA MAB

On the Connections Between Energy, Climate, and Security

Former U.S. Army Chief of Staff General Gordon R. Sullivan served as chairman of the Military Advisory Board that released National Security and the Threat of Climate Change. He started that process with little connection to the issue of climate change, but the briefings have stayed with him. He keeps reaching out for new information on the topic.

“What we have learned from the most recent reports is that climate change is occurring at a much faster pace than the scientists previously thought it could,” Gen. Sullivan said. “The Arctic is a case-in-point. Two years ago, scientists were reporting that the Arctic could be ice-free by 2040. Now, the scientists are telling us that it could happen within just a few years. The acceleration of the changes in the Arctic is stunning.

“The climate trends continue to suggest the globe is changing in profound ways,” Gen. Sullivan said. He noted that these lead indicators should be enough to prompt national and global responses to climate change, and referenced military training to explain why. “Military professionals are accustomed to making decisions during times of uncertainty. We were trained to make decisions in situations defined by ambiguous information and little concrete knowledge of the enemy intent. We based our decisions on trends, experience, and judgment. Even if you don’t have complete information, you still need to take action. Waiting for 100 percent certainty during a crisis can be disastrous.”

Gen. Sullivan said the current economic crisis is not a reason to postpone climate solutions.

“There is a relationship between the major challenges we’re facing,” Gen. Sullivan said. “Energy, security, economics, climate change—these things are connected. And the extent to which these things really do affect one another is becoming more apparent. It’s a system of systems. It’s very complex, and we need to think of it that way.

“And the solutions will need to be connected. It will take the industrialized nations of the world to band together to demonstrate leadership and a willingness to change—not only to solve the economic problems we’re having, but to address the issues related to global climate change. We need to look for solutions to one problem that can be helpful in solving other problems. And here, I’d say the U.S. has a responsibility to lead. If we don’t make changes, then others won’t.”

Gen. Sullivan tends to keep his discussions of climate change focused on the national security aspects. But he occasionally talks about it from a different perspective, and describes some of the projected changes expected to hit his native New England if aggressive measures are not embraced.

“I have images of New England that stick with me,” Gen. Sullivan said. “Tapping sugar maples in winter. Fishing off the Cape. These were images I held close when I was stationed overseas. They were important to me then. And they are important to me now when I think of how we’ll respond to climate change. Those treasures are at risk. There’s a lot at stake.”

GENERAL CHARLES G. BOYD, USAF (RET.) Former Deputy Commander-in-Chief, Headquarters U.S. European Command (USEUCOM)

On Climate Change and Human Migrations

Retired Air Force General Chuck Boyd, former Deputy Commander-in-Chief of U.S. Forces in Europe, sees the effects of climate change in a particular context, one he came to understand while serving as executive director of the U.S. Commission on National Security/ 21st Century (commonly known as the Hart-Rudman Commission). The Commission’s reports, issued in advance of the 9/11 terrorist attacks, predicted a direct attack on the homeland, noted that the risks of such an attack included responses that could undermine U.S. global leadership, and outlined preventative and responsive measures. He explains this context by telling the story of a dinner at the home of the Japanese ambassador to the United Nations.

“When I was at EUCOM, I formed a friendship with the UN High Commissioner for Refugees, Madame Sadako Ogata,” Gen. Boyd said. “I was seated next to her at this dinner. When I told her about the project, she said you cannot talk about security without talking about the movement of people. She said we had to come to Geneva to talk with her about it.

“She’s this little bitty person with a moral presence that’s overwhelming,” said Gen. Boyd, after a pause. “She’s a bit like Mother Teresa in that way. So we went—we went to Geneva.

“We spent the day with her and a few members of her staff poring over a map of the world,” he said. “We looked at the causes of dislocations—ethnic, national and religious fragmentation mostly. And we looked at the consequences. It was very clear that vast numbers of conflicts were being caused by these dislocations. She was very strategic in her thinking. And she made the point that this phenomenon—the movements of people—would be the single biggest cause of conflicts in the 21st century.”

For Gen. Boyd, climate change is an overlay to the map of dislocations and conflicts provided by Madame Ogata. “When you add in some of the effects of climate change—the disruption of agricultural production patterns, the disruption of water availability—it’s a formula for aggravating, in a dramatic way, the problem and consequences of large scale dislocation. The more I think about it, the more I believe it’s one of the major threats of climate change. And it’s not well understood.

“As water availability changes, people who need water will fight with people who have water and don’t want to share it. It’s the same with agriculture. When people move away from areas that can’t sustain life anymore, or that can’t sustain their standard of living, they move to areas where they are not welcome. People will fight these incursions. Their interaction with different cultures causes tension. It’s very much like the tension we see with religious fragmentation. It’s the same pattern of consequences Madame Ogata was describing, only on a larger scale. This is about instability. It is a destabilizing activity, with murderous consequences.”

VICE ADMIRAL DENNIS V. MCGINN, USN (RET.) Former Deputy Chief of Naval Operations for Warfare Requirements and Programs

On Supporting Our Troops

Resource scarcity is a key source of conflict, especially in developing regions of the world. Without substantial change in global energy choices, Vice Admiral Dennis McGinn sees a future of potential widespread conflict.

“Increasing demand for, and dwindling supplies of, fossil fuels will lead to conflict. In addition, the effects of global climate change will pose serious threats to water supplies and agricultural production, leading to intense competition for essentials,” said the former commander of the U.S. Third Fleet, and deputy chief of naval operations, warfare requirements and programs. “The U.S. cannot assume that we will be untouched by these conflicts. We have to understand how these conflicts could play out, and prepare for them.” With an issue as big as climate change, Adm. McGinn said, “You’re either part of the solution or part of the problem. And in this case, the U.S. has to be more than just part of the solution; we need to be a big part of it. We need to be a leader. If we are not, our credibility and our moral authority are diminished. Our political and military relationships are undermined by not walking the walk.”

He believes these issues of credibility have a direct impact on our military. It’s one of many reasons why he sees climate change and energy security as inextricably linked national security threats. “We have less than ten years to change our fossil fuel dependency course in significant ways. Our nation’s security depends on the swift, serious and thoughtful response to the inter-linked challenges of energy security and climate change. Our elected leaders and, most importantly, the American people should realize this set of challenges isn’t going away. We cannot continue business as usual. Embedded in these challenges are great opportunities to change the way we use energy and the places from which we get our energy. And the good news is that we can meet these challenges in ways that grow our economy and increase our quality of life.”

Adm. McGinn is clear about the important role to be played by the American public. “Our national security as a democracy is directly affected by our energy choices as individual citizens,” Adm. McGinn said. “The choices we make, however small they seem, can help reduce our dependence on oil and have a beneficial effect on our global climate.” Individually, it may be hard to see, but collectively we can all make a tangible contribution to our national security. One way of thinking about this is that our wise energy choices can provide genuine support for our troops. “A yellow ribbon on a car or truck is a wonderful message of symbolic support for our troops,” said Adm. McGinn. “I’d like to see the American people take it several steps further. If you say a yellow ribbon is the ‘talk,’ then being energy efficient is the ‘walk’. A yellow ribbon on a big, gas-guzzling SUV is a mixed message. We need to make better energy choices in our homes, businesses and transportation, as well as to support our leaders in making policies that change the way we develop and use energy. If we Americans truly embrace this idea, it is a triple win: it reduces our dependence on foreign oil, it reduces our impact on the climate and it makes our nation much more secure.”

Executive Summary

Our dependence on foreign oil

  • reduces our international leverage
  • places our troops in dangerous global regions
  • funds nations and individuals who wish us harm, and who have attacked our troops and cost lives
  • weakens our economy, which is critical to national security

The market for fossil fuels will be shaped by finite supplies and increasing demand. Continuing our heavy reliance on these fuels is a security risk.

The Electric Grid

Our domestic electrical system is also a significant risk to our national security: many of our large military installations rely on power from a fragile electrical grid that is vulnerable to deliberate attack and to interruptions caused by natural disasters, leaving their critical infrastructure unnecessarily exposed.

Climate change

Destabilization driven by ongoing climate change has the potential to add significantly to the mission burden of the U.S. military in fragile regions of the world.

The effects of global warming will require adaptive planning by our military. The effects of climate policies will require new fuels and energy systems.

A business-as-usual approach to energy security poses an unacceptably high threat level from a series of converging risks. Due to the destabilizing nature of increasingly scarce resources, the impacts of energy demand and climate change could increasingly drive military missions in this century.


Diversifying energy sources and moving away from fossil fuels where possible is critical to future energy security. While the current financial crisis provides enormous pressure to delay addressing these critical energy challenges, the MAB warns against delay. The economic risks of this energy posture are also security risks.

The U.S. consumes 25% of world oil production, yet controls less than 3%

And the supply is getting increasingly tight. Oil is traded on a world market, and the lack of excess global production makes that market volatile and vulnerable to manipulation by those who control the largest shares. Reliance on fossil fuels, and the impact it has on other economic instruments, affects our national security, largely because nations with strong economies tend to have the upper hand in foreign policy and global leadership. As economic cycles ebb and flow, the volatile cycle of fuel prices will become sharper and shorter.

What the military wants

  • First crack at trying out new technologies and vehicles, because the Department of Defense (DoD) is the nation’s single largest consumer of energy. DoD should also try to use less energy via distributed and renewable energy and use low-carbon liquid fuels
  • The military would like to see personal transport electrified to make more liquid fuels available for aircraft and the armed services.
  • Americans should be called upon again to use less fuel (to free up fuel for us, the military) like they did in WW II, when they also grew food locally in Victory Gardens, and contributed in other ways to the war effort

These steps could be described as sacrifices, frugality, lifestyle changes—the wording depends on the era and one’s perspective. Whatever the terminology, these actions made the totality of America’s war effort more successful. They shortened the war and saved lives.

Energy for America’s transport sector depends almost wholly on the refined products of a single material: crude oil. Energy for homes, businesses, and civic institutions relies heavily on an antiquated and fragile transmission grid to deliver electricity. Both systems—transport and electricity—are inefficient. This assessment applies to our military’s use of energy as well.

Our defense systems, including our domestic military installations, are dangerously oil dependent, wasteful, and weakened by a fragile electrical grid.

In our view, America’s energy posture constitutes a serious and urgent threat to national security—militarily, diplomatically, and economically. This vulnerability is exploitable by those who wish to do us harm. America’s current energy posture has resulted in the following national security risks:

  • U.S. dependence on oil weakens international leverage, undermines foreign policy objectives, and entangles America with unstable or hostile regimes.
  • Inefficient use and over-reliance on oil burdens the military, undermines combat effectiveness, and exacts a huge price tag—in dollars and lives.
  • U.S. dependence on fossil fuels undermines economic stability, which is critical to national security.
  • A fragile domestic electricity grid makes our domestic military installations, and their critical infrastructure, unnecessarily vulnerable to incident, whether deliberate or accidental.

Dependence on oil constitutes a threat to U.S. national security. The United States consumes 25% of the world’s oil production, yet controls less than 3% of an increasingly tight supply. 16 of the top 25 oil-producing companies are either majority or wholly state-controlled. These oil reserves can give extraordinary leverage to countries that may otherwise have little; some are using that power to harm Western governments and their values and policies.

Another troubling aspect of our oil addiction is the resulting transfer of wealth. American and overall world demand for oil puts large sums in the hands of a small group of nations; those sums, in the hands of certain governments or individuals, can be used to great harm. Iran’s oil exports, which reached an estimated $77 billion in 2008, provide 40 percent of the funding for a government that the U.S. State Department says is the world’s “most active state sponsor of terrorism”. Iran provides materiel to Hezbollah, supports insurgents in Iraq, and is pursuing a nuclear weapons program.

Saudi Arabian private individuals and organizations, enriched by the country’s estimated $301 billion in 2008 oil revenues, reportedly fund organizations that promote violent extremism [18]. The sad irony is that this indirectly funds our adversaries. As former CIA Director James Woolsey said, “This is the first time since the Civil War that we’ve financed both sides of a conflict”.

America’s strategic leadership, and the actions of our allies, can be greatly compromised by a need (or perceived need) to avoid antagonizing some critical oil suppliers. This has become increasingly obvious since the early 1970s, when the first OPEC embargo quadrupled oil prices, contributed to an inflationary spiral, and generated tensions across the Atlantic as European nations sought to distance themselves from U.S. policies not favored by oil-exporting nations.

Oil has been the central factor in the mutually supportive relationship between the U.S. and Saudi Arabia. While the Saudis have been key allies in the region since World War II and serve as one of the nation’s most critical oil suppliers, Saudi Arabia is also one of the most repressive governments in the world.

Sudan provides another example: in an effort to pressure the Sudanese government to stop the genocide occurring in Darfur, the U.S. and most of Europe have limited or halted investment in Sudan. However, China and Malaysia have continued to make investments worth billions of dollars (mainly in the oil industry) while actively campaigning against international sanctions against the country. Sudan, which depends upon oil for 96% of its export revenues, exports the vast majority of its oil to China and provides China with nearly 8% of its oil imports.

While oil can enable some nations to flex their muscles, it can also have a destabilizing effect on their economic, social, and political infrastructure.

When the natural resource that caused the Dutch disease, the economic distortion in which a surge of resource revenues inflates a nation’s currency and crowds out its other industries, goes from boom to bust (as has been the case with oil), the economy and social fabric of the afflicted nation can be left in tatters.

Nigeria, which accounts for nearly 9 percent of U.S. oil imports, has experienced a particularly high level of economic and civil unrest related to its oil.

In addition to Dutch disease, Nigeria also shows another corrosive impact of oil. The large oil trade (and the unequal distribution of its profits) has fueled the Movement for the Emancipation of the Niger Delta (MEND), an armed group that stages attacks against the foreign multinational oil companies and the Nigerian government. In one of its most serious actions, in September 2008, the MEND retaliated against a strike by the Nigerian military by attacking pipelines, flow stations, and oil facilities; the group also took 27 oil workers hostage and killed 29 Nigerian soldiers. The result was a decrease in oil production of 115,000 barrels per day over the week of attacks. In the years preceding this attack, instability caused by the MEND had decreased oil production in the Niger Delta by 20%.

The MEND is but one example of a group operating in an unstable region that targets oil and its infrastructure for its strategic, political, military, and economic consequences. In Iraq, by 2007, the effects of the war and constant harassment of the oil infrastructure by insurgent groups and criminal smuggling elements had reduced oil production capacity in the northern fields by an estimated 700,000 barrels per day compared to pre-2003 levels.

In 2006, al Qaeda in the Arabian Peninsula carried out a suicide bombing against the Abqaiq oil production facility in Saudi Arabia, which handles about two-thirds of the country’s oil production. Fortunately, due largely to the intense focus of the Saudis on hardening their processing facilities (to which they devote billions of dollars each year), the attack was suppressed before the bombers could penetrate the second level of security gates. However, both the Saudi level of protection and al Qaeda’s selection of the oil infrastructure as a target signify the strategic and economic value of such facilities.

These attacks have demonstrated the vulnerability of oil infrastructure to attack; a series of well-coordinated attacks on oil production and distribution facilities could have serious negative consequences on the global economy. Even these small-scale and mostly unsuccessful attacks have sent price surges through the world oil market. In the U.S., dependence on foreign oil has had a marked impact on national security policies.

Much of America’s foreign and defense policies have been defined, for nearly three decades, by what came to be known as the Carter Doctrine. In his State of the Union address in January 1980, not long after the Soviet Union invaded Afghanistan, President Jimmy Carter made it clear that the Soviets had strayed into a region that held “great strategic importance”. He said the Soviet Union’s attempt to consolidate a position so close to the Straits of Hormuz posed “a grave threat to the free movement of Middle East oil.” He then made a declaration that went beyond a condemnation of the Soviet invasion by proclaiming the following: “An attempt by any outside force to gain control of the Persian Gulf region will be regarded as an assault on the vital interests of the United States of America, and such an assault will be repelled by any means necessary, including military force.” When President Carter made his declaration, the U.S. imported roughly 40 percent of its oil.


That percentage has since doubled. In fact, due to the increase in U.S. demand, the total annual volume of oil imported into the U.S. has tripled since the early 1980s. As a result, the stakes are higher, and the U.S. has accordingly dedicated an enormous military presence to ensure the unimpeded flow of oil, in the Persian Gulf and all across the globe. Our Commanders-in-Chief chose this mission not because they want America to be the world’s oil police; they did so because America’s thirst for oil leaves little choice.

Supply lines delivering fuel and other supplies to forward operating bases can stretch over great distances, often requiring permission for overland transport through one or more neighboring countries. As these lines grow longer, and as convoys traverse hotly contested territory, they become attractive targets to enemy forces. A Defense Science Board (DSB) task force identified this movement of fuel from the point of commercial procurement to the point of use by operational systems and forces as a grave energy risk for DoD. Ensuring convoy safety and fuel delivery requires a tremendous show of force. Today, armored vehicles, helicopters, and fixed-wing fighter aircraft protect the movement of fuel and other supplies. This is an extraordinary commitment of combat resources, and it offers an instructive glimpse of the true costs of energy inefficiency and reliance on oil.

Let us be clear here: logistics operations and their associated vulnerabilities are nothing new to militaries; they have always been a military challenge. Even if the military did not need fuel for its operations, some amount of logistics supply lines would still be required to ensure our forces have the supplies they need to complete their missions. However, the fuel intensity of today’s combat missions adds to the costs and risks. As in-theater demand increases, more combat troops and assets must divert to protect fuel convoys rather than directly engage enemy combatants. This reduces our combat effectiveness, but there is no viable alternative: our troops need fuel to fight.

The broad battle space in their wake required heavy security: the supply convoys bringing new supplies of fuel were constantly under threat of attack. The security measures necessary to defend this vast space slowed American movements and reduced the options available to Army and Marine field commanders. It prompted a clear challenge from Marine Lieutenant General James Mattis: “Unleash us from the tether of fuel” [36]. Fuel convoys remain exposed as they crawl along dangerous mountainous routes.

Combat: Forward operating bases, the staging grounds for direct military engagement, contain the communications infrastructure, living quarters, administrative areas, eating facilities, and industrial activities necessary to maintain combat systems. All of these require electricity. The electricity used to power these facilities is provided by towed-in generators fueled by JP-8, the same fuel used by combat systems. The fuel used by these generators comes from the same vulnerable supply chain that provides liquid fuel for motorized vehicles.

A study of the 2003 I Marine Expeditionary Force (I MEF) in Iraq found that only 10 percent of its ground fuel use was for the heavy vehicles that deliver lethal force, including M1A1 tanks, armored vehicles, and assault amphibious vehicles; the other 90 percent was consumed by vehicles (including Humvees, 7-ton trucks, and logistics vehicles) that deliver and protect the fuel and forces. It is the antithesis of efficiency: only a fraction of the fuel is used to deliver lethal force. A different study showed that, of the U.S. Army’s top ten battlefield fuel users, only two (numbers five and ten on the list) are combat platforms; four of the top ten are trucks, many of them used to transport liquid fuel and electric generating equipment [39]. The use of electric power extends beyond the battlefield bases: an infantry soldier on a 72-hour mission in Afghanistan today carries more than 26 pounds of batteries, charged by these generators. The weight of the packs carried by these troops (of which 20 to 25% can be batteries) hinders their operational capability by limiting maneuverability and causing musculoskeletal injuries. Soldiers and marines may not be tethered directly to fuel lines, but they are weighed down by electrical and battery systems that are dangerously inefficient.
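
[Editor’s note: the battery figures above imply a total pack weight that can be checked with back-of-the-envelope arithmetic. The 26-pound and 20–25 percent figures come from the text; the derived pack weights are my inference, not numbers from the testimony.]

```python
# Back-of-the-envelope check of the soldier load figures cited above.
battery_weight_lb = 26.0   # batteries carried on a 72-hour mission (from the text)
battery_share_low = 0.20   # batteries as a share of total pack weight (from the text)
battery_share_high = 0.25

# Implied total pack weight: battery weight divided by its share of the pack.
pack_heaviest_lb = battery_weight_lb / battery_share_low    # 130 lb
pack_lightest_lb = battery_weight_lb / battery_share_high   # 104 lb

print(f"Implied pack weight: {pack_lightest_lb:.0f}-{pack_heaviest_lb:.0f} lb")
```

In other words, if batteries alone are 26 pounds and make up a fifth to a quarter of the load, the whole pack is on the order of 104 to 130 pounds per soldier.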

The military uses fuel for more than mobility. In fact, one of the most significant consumers of fuel at forward operating bases in operations in Afghanistan and Iraq is not trucks or combat systems; it is electric generators.

In 2006, while commanding troops in Iraq’s Al Anbar province, Marine Corps Major General Richard Zilmer submitted an urgent request because American supply lines were vulnerable to insurgent attack by ambush or roadside bombs. “Reducing the military’s dependence on fuel for power generation could reduce the number of road-bound convoys,” he said, adding that without alternative energy systems, “personnel loss rates are likely to continue at their current rate.”

In addition to burdening our military forces, over-reliance on oil exacts a huge monetary cost, both for our economy and our military. The fluctuating and volatile cost of oil greatly complicates the budgeting process within the Department: just a $10 change in the per-barrel cost of oil translates to a $1.3 billion change to the Pentagon’s energy costs. Over-allocating funds to cover energy costs comes with a high opportunity cost as other important functions are under-funded; an unexpected increase results in funds being transferred from other areas within the Department, causing significant disruptions to training, procurement, and other essential functions. In addition to buying the fuel, the U.S. devotes enormous resources to ensure the military receives the fuel it needs to operate. A large component of logistics planning and resources is devoted to buying, operating, training, and maintaining logistics assets for delivering fuel to the battlefield, and these delivery costs exceed the cost of buying the commodity itself. For example, each gallon of fuel delivered to an aircraft in-flight costs the Air Force roughly $42; for ground forces, the true cost of delivering fuel to the battlefield, while very scenario-dependent, ranges from $15 per gallon to hundreds of dollars per gallon. A more realistic assessment of what is called the “fully burdened price of fuel” would also consider the costs attributable to oil in protecting sea lanes, operating certain military bases, and maintaining high levels of forward presence. Buying oil is expensive, but the cost of using it in the battlespace is far higher.
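
[Editor’s note: the $10-per-barrel/$1.3 billion budget sensitivity quoted above implies an annual DoD oil consumption figure. The sensitivity comes from the text; the derived barrel counts are my inference.]

```python
# Implied DoD oil consumption from the budget sensitivity cited above:
# a $10 change per barrel moves the Pentagon's energy costs by $1.3 billion.
cost_change_usd = 1.3e9        # budget swing (from the text)
price_change_per_bbl = 10.0    # per-barrel price change (from the text)

barrels_per_year = cost_change_usd / price_change_per_bbl  # 130 million barrels
barrels_per_day = barrels_per_year / 365

print(f"Implied consumption: {barrels_per_year / 1e6:.0f} million bbl/yr "
      f"(~{barrels_per_day:,.0f} bbl/day)")
```

That works out to roughly 130 million barrels a year, or about 356,000 barrels a day, consistent with DoD’s standing as the nation’s single largest fuel consumer while still being a small fraction of total U.S. demand.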

The volatile fossil fuel markets have a major impact on our national economy, which in turn affects national security. Upward spikes in energy prices, tied to the wild swings now common in the world’s fossil fuel markets, constrict the economy in the short term and undermine strategic planning in the long term. Volatility is not limited to the oil market: the nation’s economy is also wrenched by increasingly sharp swings in the prices of natural gas and coal. This volatility wreaks havoc with government revenue projections, making the task of addressing strategic and systemic national security problems much more challenging. It also makes it more difficult for companies to commit to the long-term investments needed to develop and deploy new energy technologies and upgrade major infrastructure.

A significant and long-lasting trade deficit can put us at a disadvantage in global economic competition. In 2008, our economy paid an average of $28.5 billion each month to buy foreign oil. This amount is expected to grow: while oil prices wax and wane periodically, in the long term they are trending upward. This transfer of wealth means America borrows heavily from the rest of the world, making the U.S. economically dependent.

We are also dependent economically on a global energy supply market increasingly susceptible to manipulation. In recent years, even a small incident overseas, such as a mere warning of a pipeline attack by MEND (the Movement for the Emancipation of the Niger Delta) in Nigeria, has roiled stock markets and caused oil prices to jump. Perhaps most worrisome in regard to the manipulation of the global oil trade are the critical chokepoints in the delivery system: 40 percent of the global seaborne oil trade moves through the Strait of Hormuz, 36 percent through the Strait of Malacca, and 10 percent through the Suez Canal. The economic leverage provided by the Strait of Hormuz has not been lost on Iran, which has employed the threat of closing down the shipping lane to deter an attack on its nuclear program.

For the U.S., our economic might and easy access to natural resources have been important components of national strength, particularly over the last century. They have also allowed us to use economic aid and soft power mechanisms to retain order in fragile regions, thereby avoiding the need to use military power. When economies are troubled, domestic strife increases, prospects of instability increase, and international leverage diminishes. This is why the discussions of energy and economy have been joined, and why both are matters of national security.

At military installations across the country, a myriad of critical systems must be operational 24 hours a day, 365 days a year. They receive and analyze data to keep us safe from threats, they provide direction and support to combat troops, and they stay ready to provide relief and recovery services when natural disasters strike or when someone attempts to attack our homeland. These installations are almost completely dependent on commercial electrical power delivered through the national electrical grid. When the Defense Science Board (DSB) studied the 2003 blackout and the condition of the grid, it concluded that the grid is “fragile and vulnerable… placing critical military and homeland defense missions at unacceptable risk of extended outage.”

As the resiliency of the grid continues to decline, it increases the potential for an expanded and/or longer-duration outage from natural events as well as deliberate attack. The DSB noted that the military’s backup power is inadequately sized for its missions and that military bases cannot easily store sufficient fuel supplies to cope with a lengthy or widespread outage. An extended outage could jeopardize ongoing missions in far-flung battle spaces for a variety of reasons:

  • The American military’s logistics chains operate a just-in-time delivery system familiar to many global businesses. If an aircraft breaks down in Iraq, parts may be immediately shipped from a supply depot in the U.S. If the depot loses power, personnel there may not fill the order for days, increasing the risk to the troops in harm’s way.
  • Data collected in combat zones are often analyzed at data centers in the U.S. In many cases, the information helps battlefield commanders plan their next moves. If the data centers lose power, the next military move can be delayed, or taken without essential information.
  • The loss of electrical power affects refineries, ports, repair depots, and other commercial or military centers that help assure the readiness of American armed forces.

When power is lost for lengthy periods, vulnerability to attack increases.

Destabilization driven by ongoing climate change has the potential to add significantly to the mission burden of the U.S. military in fragile regions of the world. In our view, confronting these converging risks is critical to ensuring America’s energy-secure future.

The demand for oil is expected to increase even as the supply becomes constrained. A 2007 Government Accountability Office (GAO) report on peak oil, which considered a wide range of studies on the topic, concluded that the peak in production is likely to occur sometime before 2040. While that 30-year time-frame may seem long to some, it is familiar to military planners, who routinely consider the 30- to 40-year life span of major weapon systems. According to the International Energy Agency (IEA), most countries outside of the Middle East have already reached, or will soon reach, the peak of their oil production. This includes the U.S., where oil production peaked in 1970.

Our 2007 report identified the national security risks associated with climate change. Chief among the report’s findings:

  • The National Intelligence Assessment (NIA) finds that climate change impacts—including food and water shortages, the spread of infectious disease, mass migrations, property damage and loss, and an increase in the intensity of extreme weather events—will increase the potential for conflict.
  • The impacts may threaten the domestic stability of nations in multiple regions, particularly as factions seek access to increasingly scarce water resources.
  • Projected impacts of climate change pose a serious threat to America’s national security.
  • Climate change acts as a threat multiplier for instability in some of the most volatile regions of the world.
  • Projected impacts of climate change will add to tensions even in stable regions of the world. Climate change, national security, and energy dependence are a related set of global challenges.

The NIA describes potential impacts on global regions. In describing the projected impacts in Africa, for example, it suggests that some rainfall-dependent crops may see yields reduced by up to 50 percent by 2020. In testimony before the U.S. Congress, Dr. Fingar said the newly established Africa Command “is likely to face extensive and novel operational requirements. Sub-Saharan African countries, if they are hard hit by climate impacts, will be more susceptible to worsening disease exposure. Food insecurity, for reasons both of shortages and affordability, will be a growing concern in Africa as well as other parts of the world. Without food aid, the region will likely face higher levels of instability, particularly violent ethnic clashes over land ownership.” This proliferation of conflicts could affect what Dr. Fingar described as the “smooth-functioning international system ensuring the flow of trade and market access to critical raw materials” that is a key component of security strategies for the U.S. and our allies. A growing number of humanitarian emergencies will strain the international community’s response capacity, and increase the pressure for greater involvement by the U.S. Dr. Fingar stated that “the demands of these potential humanitarian responses may significantly tax U.S. military transportation and support force structures, resulting in a strained readiness posture and decreased strategic depth for combat operations.” In addition, the NIA cites threats to homeland security, including severe storms originating in the Gulf of Mexico and disruptions to domestic infrastructure.

Admiral Blair, in his February 2009 testimony, referenced the NIA and described some of the potential impacts of energy dependency and climate change: “Rising energy prices increase the cost for consumers and the environment of industrial-scale agriculture and application of petrochemical fertilizers. A switch from use of arable land for food to fuel crops provides a limited solution and could exacerbate both the energy and food situations. Climatically, rainfall anomalies and constricted seasonal flows of snow and glacial melts are aggravating water scarcities, harming agriculture in many parts of the globe. Energy and climate dynamics also combine to amplify a number of other ills such as health problems, agricultural losses to pests, and storm damage. The greatest danger may arise from the convergence and interaction of many stresses simultaneously. Such a complex and unprecedented syndrome of problems could cause outright state failure, or weaken important pivotal states counted on to act as anchors of regional stability.”

Some of the many ways climate change will adversely affect our military’s ability to carry out its already challenging missions: A changing Arctic forces a change in strategy. As the Arctic Ocean has become progressively more accessible, several nations are responding by posturing for resource claims, increasing military activity, expanding commercial ventures, and elevating the volume of international dialogue. Due to the melting ice, the U.S. is already reconsidering its Arctic strategy. The change in strategy will lead to a change in military intelligence, planning, and operations. The Arctic stakes are high: 22% of the world’s undiscovered energy reserves are projected to be in the region (including 13% of the world’s petroleum and 30% of natural gas).

Damage to and loss of strategic bases and critical infrastructure

As sea level rises, storm waves and storm surges become much more problematic. Riding in at a higher base level, they are much more likely to overflow coastal barriers and cause severe damage. Recent studies project that, by the end of the century, sea levels could rise by nearly 1 meter. A 1-meter rise in sea level would have dramatic consequences for U.S. installations across the globe.

Storm intensity affects readiness and capabilities. The projected increase in storm intensity can affect our ability to quickly deploy troops and materiel to distant theaters.

Increased conflict stretches American military. In other sections, we have noted the likelihood of increased global conflicts, which in turn increases the likelihood that American military forces will be engaging in multiple theaters simultaneously. At the very same time, there may be increased demands for American-led humanitarian engagements in response to natural disasters exacerbated or caused by climate change.

The destabilizing nature of increasingly scarce energy resources, the impacts of rising energy demand, and the impacts of climate change all are likely to increasingly drive military missions in this century.

Many Americans recall World War II references to the Pacific Theater and European Theater. Climate change introduces the notion of a global theater; its impacts cannot be contained or managed regionally. It changes planning in fundamental ways. It forces us to make changes in this new, broader context.

Given the risks outlined earlier, diversifying our energy sources and moving away from fossil fuels where possible is critical to our future energy security.

Some energy choices could contradict future national climate goals and policies, which should lead us to avoid such energy options. Developing coal-to-liquid (CTL) fuels for the U.S. Air Force is a useful example.

Because of America’s extensive coal resources, turning coal into liquid aviation fuel is, on the surface, an attractive option to make the nation more energy independent.

However, unless cost-effective and technologically sound means of sequestering the resulting carbon emissions are developed, producing liquid fuel from coal would emit nearly twice as much carbon as the equivalent amount of conventional liquid fuel.

What does a new energy future look like? It will have a number of features, including:

Diversity. Electricity produced from sources like wind, solar, and geothermal power would supply substantially more of our nation’s electricity than today. Solar thermal facilities, which not only generate electricity during sunlight hours but also heat liquids that can power steam generators at night, offer a current example of how the intermittency of some renewable sources can be overcome.

Additional low carbon solutions, such as nuclear energy, will also be part of a diversified energy portfolio.

Stability. Because the sources of these renewable energy technologies are free and abundant—in the U.S. and in many regions around the world—they would bring stability to our economy. This is quite the opposite of the current crude oil, coal, and natural gas markets, which are highly unstable.

Smarter use of energy resources. The wide-scale adoption of “smart grid” technologies (such as advanced electricity meters that can indicate which household appliances are on and communicate that information back to the grid) would allow power to be used with maximum efficiency, be able to heal the grid in the event of natural disasters and cyber attacks, and allow for all sources of electricity to provide power to the grid.

Electrification of ground transport. Relying on transport vehicles powered largely with electricity derived from this low carbon sector, such as plug-in hybrids, would reduce America’s need for imported oil for use in transportation.

Bio-based mobility fuels. For mobility applications that are likely to require liquid fuels into the foreseeable future—including aviation and military operations—non-food-based biofuels would be employed that are made with materials and processes that do not tax productive farmlands. To ensure that domestically produced fuel does not need to be transported to theaters of military operations, these bio-based fuels would be designed to match the specifications of military fuels (such as JP-8). In the interim, significant gains in mobility efficiency could make liquid petroleum fuels more available and affordable to the military when or if it is needed.

A U.S. Department of Energy study indicated that 20% of America’s electrical supplies could come from wind power by 2030. Similar, but less aggressive, growth curves can be projected for utility-scale solar power generation. Google, which has experience in scaling new technologies, reports that the U.S. can generate nearly all of its electrical power from non-carbon sources by 2030. While renewable energy generating plants currently cost more than their fossil counterparts, renewable energy production is expected to become competitive with traditional electricity generation.

“Islanding some major bases is a great idea,” Magnus said. “You want to make sure that, in a natural or manmade disaster, the basic functions of an electrical grid can be conducted from a military installation. That’s a great idea. And a great challenge. And you can not only island, but be in a position where you can take energy from the grid when needed, and deliver energy back to the grid when you have a surplus. There will be tremendous resistance from the public utilities, so we need to find a way for everyone to benefit.”

“It’s going to change the shorelines. It’s going to change the amount of snowmelt from mountains and glaciers. Some areas will experience increased rainfall, and some will experience increased drought. These are destabilizing events, even if they happen slowly. People in marginal economic areas will be hardest hit—and guess where we send our military? “The more instability increases, the more pressure there will be to use our military,” he said. “That’s the issue with climate change.

“The U.S. is all about preventing big wars by managing instability. But as populations get more desperate, the likelihood of military conflicts will go up. We’ll have to cope with the ill effects of climate change.”

Resource scarcity is a key source of conflict, especially in developing regions of the world. Without substantial change in global energy choices, Vice Admiral Dennis McGinn sees a future of potential widespread conflict. “Increasing demand for, and dwindling supplies of, fossil fuels will lead to conflict. In addition, the effects of global climate change will pose serious threats to water supplies and agricultural production, leading to intense competition for essentials,” said the former commander of the U.S. Third Fleet, and deputy chief of naval operations, warfare requirements and programs. “The U.S. cannot assume that we will be untouched by these conflicts. We have to understand how these conflicts could play out, and prepare for them.”

“We have less than ten years to change our fossil fuel dependency course in significant ways. Our nation’s security depends on the swift, serious and thoughtful response to the inter-linked challenges of energy security and climate change. Our elected leaders and, most importantly, the American people should realize this set of challenges isn’t going away. We cannot continue business as usual.”

We can invest more heavily in technologies that may require more patience and risk than most traditional investors can tolerate. The Department can provide essential aid in moving important new energy systems through what venture capitalists call “the valley of death”: the period after prototyping and before fully developing the product to scale. DoD also excels at the combination of speed and scale, building a huge or complex system in a short period of time. This challenge of hitting speed and scale is the same one facing developers of new energy technologies.

The Task Force has been pursuing a number of projects, including testing exterior spray foam to insulate temporary structures such as tents and containerized living units. Based on estimated energy savings of 40 to 75 percent, Multi-National Force-Iraq awarded a $95 million contract to insulate nine million square feet of temporary structures. The use of spray foam is estimated to have taken about 12 fuel transport trucks off the road every day in Iraq.
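The contract figures above permit a quick back-of-envelope check; the contract value, insulated area, and savings band come from the text, and the per-square-foot cost is simply derived from them:

```python
# Quick arithmetic on the Iraq spray-foam insulation contract described above.
# All input figures are taken from the text.

CONTRACT_COST = 95e6                  # $95 million contract
AREA_SQFT = 9e6                       # nine million square feet insulated
SAVINGS_LOW, SAVINGS_HIGH = 0.40, 0.75  # estimated energy-savings band

cost_per_sqft = CONTRACT_COST / AREA_SQFT  # implied insulation cost per sq ft

print(f"Implied insulation cost: ${cost_per_sqft:.2f}/sq ft")
print(f"Estimated energy savings: {SAVINGS_LOW:.0%} to {SAVINGS_HIGH:.0%}")
```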

Tinker and Robins Air Force Bases have worked with their neighboring utilities to install 50 to 80 MW combustion gas turbines with dual-fuel capability, allowing the bases to disconnect from the grid (that is, to “island” from the grid) in the event of an emergency.

The Army is playing a role in providing an early market for the nascent electric vehicle industry. In January 2009, the Army announced the single largest acquisition of neighborhood electric vehicles (NEVs) [102]. By 2011, the Army will have acquired 4,000 NEVs, which cost nearly 60 percent less to operate than the gasoline-powered vehicles they will replace.
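As a rough illustration of what the 60 percent figure could mean at fleet scale, the sketch below applies it to the 4,000-vehicle purchase; note that the baseline per-mile cost and annual mileage are assumptions chosen for illustration, not figures from the text:

```python
# Illustrative fleet operating-cost comparison for the NEV acquisition above.
# Fleet size and the ~60% savings figure are from the text; the baseline
# per-mile cost and annual mileage are ASSUMED for illustration only.

FLEET_SIZE = 4_000
SAVINGS_FRACTION = 0.60           # NEVs cost ~60% less to operate (from text)
BASELINE_COST_PER_MILE = 0.50     # assumed gasoline-vehicle cost, $/mile
MILES_PER_VEHICLE_YEAR = 5_000    # assumed low-speed, on-base duty cycle

baseline_cost = FLEET_SIZE * MILES_PER_VEHICLE_YEAR * BASELINE_COST_PER_MILE
nev_cost = baseline_cost * (1 - SAVINGS_FRACTION)

print(f"Assumed baseline fleet cost: ${baseline_cost:,.0f}/yr")
print(f"NEV fleet cost:              ${nev_cost:,.0f}/yr")
print(f"Estimated annual savings:    ${baseline_cost - nev_cost:,.0f}/yr")
```

Under these assumed inputs, the savings scale linearly with mileage and baseline cost, so the exercise shows the structure of the calculation rather than a definitive figure.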

The U.S. Air Force has demonstrated national leadership in adopting renewable energy at their installations.

“Aircraft carriers or nuclear subs at a port like Norfolk are a real challenge to the electrical system,” Adm. Nathman said. “When those ships shut down and start pulling from the grid, it’s an enormous demand signal. And you can’t have interruptions in that power, because that power supports nuclear reactor operations.”

The U.S. military will be able to procure the petroleum fuels it requires to operate in the near- and mid-term time horizons. However, as carbon regulations are implemented and the global supplies of fossil fuels begin to plateau and diminish in the long term, identifying an alternative to liquid fossil fuels is an important strategic choice for the Department.

Recognizing this circumstance, DARPA has signaled that it will invest $100 million in research and development funding to derive JP-8 from a source other than petroleum. In early 2009, DARPA awarded more than half of that funding to three firms in an effort to develop price-competitive JP-8 from non-food crops such as algae and other plant-based sources.

The ongoing research efforts and progress to date by DoD in finding alternative liquid fuels, however, should not be interpreted to mean that this will be an easy task to accomplish. The equipment and weapons platforms of the Services are complex in both their variety and their operational requirements. For example, the U.S. Navy’s fleet uses 187 types of diesel engines, 30 variations of gas/steam turbine engines, and 7,125 different motors (not to mention the various types of nuclear reactors for aircraft carriers and submarines). The Navy also procures liquid fuels for its carrier- and land-based aircraft, which feature a mix of turbojet, turboprop, turboshaft, and turbofan engines. Finding a fuel that contains the appropriate combination of energy content (per unit mass and volume) is a challenging area of research.

How America responds to the challenges of energy dependence and climate change will shape the security context for the remainder of this century; it will also shape the context for U.S. diplomatic and military priorities.

Over-dependence on imported oil, by the U.S. and other nations, tethers America to unstable and hostile regimes, subverts foreign policy goals, and requires the U.S. to stretch its military presence across the globe; such force projection comes at great cost and with great risks. Within the military sector, energy-inefficient systems burden the nation’s troops, tax their support systems, and impair operational effectiveness. The security threats, strategic and tactical, associated with energy use were decades in the making; meeting these challenges will require persistence.

Both the defense and civilian systems have been based on dangerous assumptions about the availability, price, and security of oil and other fossil fuel supplies. It is time to abandon those assumptions.


Finding 1: The nation’s current energy posture is a serious and urgent threat to national security. The U.S.’s energy choices shape the global balance of power, influence where and how troops are deployed, define many of our alliances, and affect infrastructure critical to national security. Some of these risks are obvious to outside observers; some are not. Because of the breadth of this finding, we spell out two major groupings of risk.

Finding 1A: Dependence on oil undermines America’s national security on multiple fronts. America’s heavy dependency on oil—in virtually all sectors of society—stresses the economy, international relationships, and military operations—the most potent instruments of national power. Over dependence on imported oil—by the U.S. and other nations— tethers America to unstable and hostile regimes, subverts foreign policy goals, and requires the U.S. to stretch its military presence across the globe; such force projection comes at great cost and with great risks. Within the military sector, energy inefficient systems burden the nation’s troops, tax their support systems, and impair operational effectiveness. The security threats, strategic and tactical, associated with energy use were decades in the making; meeting these challenges will require persistence. Both the defense and civilian systems have been based on dangerous assumptions about the availability, price, and security of oil and other fossil fuel supplies. It is time to abandon those assumptions.

Finding 1B: The U.S.’s outdated, fragile, and overtaxed national electrical grid is a dangerously weak link in the national security infrastructure. The risks associated with critical homeland and national defense missions are heightened due to DoD’s reliance on an electric grid that is outdated and vulnerable to intentional or natural disruptions. On the home front, border security, emergency response systems, telecommunications systems, and energy and water supplies are at risk because of the grid’s condition. For military personnel deployed overseas, missions can be impaired when logistics support and data analysis systems are affected by grid interruptions. An upgrade and expansion of the grid and an overhaul of the regulations governing its construction and operations are necessary enablers to growth of renewable energy production—which is also a key element of a sound energy and climate strategy. Others have made compelling arguments for this investment, citing the jobs growth and environmental benefits. We add our voices, but do so from a different perspective: improving the grid is an investment in national security.

Finding 2: A business as usual approach to energy security poses an unacceptably high threat level from a series of converging risks. The future market for fossil fuels will be marked by increasing demand, dwindling supplies, volatile prices, and hostility by a number of key exporting nations. Impending regulatory frameworks will penalize carbon-intensive energy sources. Climate change poses severe security threats to the U.S. and will add to the mission burden of the military. If not dealt with through a systems-based approach, these factors will challenge the U.S. economically, diplomatically, and militarily. The convergence of these factors provides a clear and compelling impetus to change the national and military approach to energy.

Finding 3: Achieving energy security in a carbon-constrained world is possible, but will require concerted leadership and continuous focus. The value of achieving an energy security posture in a future shaped by the risks and regulatory framework of climate change is immense. The security and economic stability of the U.S. could be improved greatly through large-scale adoption of a diverse set of reliable, stable, low-carbon, electric energy sources coupled with the aggressive pursuit of energy efficiency. The electrification of the transportation sector would alleviate the negative foreign policy, economic, and military consequences of the nation’s current oil dependency. While this future is achievable, this transformation process will take decades; it will require patience, stamina, and the kind of vision that bridges generations. Ensuring consistency of the nation’s energy security strategy with emerging climate policies can also serve to broaden the base of support for sensible new energy development and help to unify a wide range of domestic policies.

Finding 4: The national security planning processes have not been sufficiently responsive to the security impacts of America’s current energy posture. For much of the post-World War II period, America’s foreign and defense policies were aimed at protecting stability where it existed, and promoting it where it did not. Our national security planning process has continuously evolved to mitigate and adapt to threats as they arose. From the perspective of energy security, this process has left the nation in a position where our energy needs undermine: our national ideals, our ability to project influence, our security at home, our economic stability, and the effectiveness of our military. America’s current energy and climate policies make the goal of stability much more difficult to achieve. While some progress has been made to recognize the risks of our energy posture (including within the U.S. military), the strategic direction of the nation has yet to change course sufficiently to avoid the serious threats that will arise as these risks continue to converge.

Finding 5: In the course of addressing its most serious energy challenges, the Department of Defense can contribute to national solutions as a technological innovator, early adopter, and test-bed. The scale of the energy security problems of the nation demands the focus of the Defense Department’s strong capabilities to research, develop, test, and evaluate new technologies. Historically, DoD has been a driving force behind delivering disruptive technologies that have maintained our military superiority since World War II. Many of these technical breakthroughs have had important applications in the civilian sector that have strengthened the nation economically by making it more competitive in the global marketplace. The same can be true with energy. By pursuing new energy innovations to solve its own energy security challenges, DoD can catalyze some solutions to our national energy challenges as well. By addressing its own energy security needs, DoD can stimulate the market for new energy technologies and vehicle efficiency tools offered by innovators. As a strategic buyer of nascent technologies, DoD can provide an impetus for small companies to obtain capital for expansion, enable them to forward-price their proven products, and provide evidence that their products enjoy the confidence of a sophisticated buyer with stringent standards. A key need in bringing new energy systems to market is to achieve speed and scale: these are hallmarks of American military performance.

Priority 1: Energy security and climate change goals should be clearly integrated into national security and military planning processes. The nation’s approach to energy and climate change will, to a large extent, shape the security context for the remainder of this century. It will shape the context for diplomatic and military engagements, and will affect how others view our diplomatic initiatives, long before the worst effects of climate change are visible to others. The National Security Strategy, National Military Strategy, and Quadrennial Defense Review should more realistically describe the nature and severity of the threat.

Priority 2: DoD should design and deploy systems to reduce the burden that inefficient energy use places on our troops as they engage overseas. Because the burdens of energy use at forward operating bases present the most significant energy related vulnerabilities to deployed forces, reducing the energy consumed in these locations should be pursued as the highest level of priority. In the operational theater, inefficient use of energy can create serious vulnerabilities to our forces at multiple levels. The combat systems, combat support systems, and electrical generators at forward operating bases are energy intensive and require regular deliveries of fuel; the convoys that provide this fuel and other necessary supplies are long and vulnerable, sometimes requiring protection of combat systems such as fixed wing aircraft and attack helicopters. Individual troops operating in remote regions are subject to injury and reduced mobility due to the extreme weight of their equipment (which can include up to 26 pounds of batteries).

We encourage readers to view our earlier report, “National Security and the Threat of Climate Change.”

Military Advisory Board (MAB) Members

CHAIRMAN: General Charles F. “Chuck” Wald, USAF (Ret.) Former Deputy Commander, Headquarters U.S. European Command (USEUCOM)

General Charles G. Boyd, USAF (Ret.) Former Deputy Commander-in-Chief, Headquarters U.S. European Command (USEUCOM)

Lieutenant General Lawrence P. Farrell, Jr., USAF (Ret.) Former Deputy Chief of Staff for Plans and Programs, Headquarters U.S. Air Force

General Paul J. Kern, USA (Ret.) Former Commanding General, U.S. Army Materiel Command

General Ronald E. Keys, USAF (Ret.) Former Commander, Air Combat Command

Admiral T. Joseph Lopez, USN (Ret.) Former Commander-in-Chief, U.S. Naval Forces Europe and of Allied Forces, Southern Europe

General Robert Magnus, USMC (Ret.) Former Assistant Commandant of the U.S. Marine Corps

Vice Admiral Dennis V. McGinn, USN (Ret.) Former Deputy Chief of Naval Operations for Warfare Requirements and Programs

Admiral John B. Nathman, USN (Ret.) Former Vice Chief of Naval Operations and Commander of U.S. Fleet Forces

Rear Admiral David R. Oliver, Jr., USN (Ret.) Former Principal Deputy to the Navy Acquisition Executive

General Gordon R. Sullivan, USA (Ret.) Former Chief of Staff, U.S. Army, and Former Chairman of the CNA MAB

Vice Admiral Richard H. Truly, USN (Ret.) Former NASA Administrator, Shuttle Astronaut and the first Commander of the Naval Space Command

MAB Executive Director: Ms. Sherri Goodman, General Counsel, CNA. Former Deputy Under Secretary of Defense for Environmental Security

We would also like to thank the following persons for briefing the Military Advisory Board (in order of appearance):

Dr. Martha Krebs, Deputy Director for Research and Development, California Energy Commission, and former Director, Office of Science, U.S. Department of Energy

Mr. Dan Reicher, Director, Climate Change and Energy Initiatives, Google.org, and former Assistant Secretary of Energy for Energy Efficiency and Renewable Energy

Dr. Kathleen Hogan, Director, Climate Protection Partnerships Division, U.S. Environmental Protection Agency

The Honorable Kenneth Krieg, Distinguished Fellow, CNA, and former Under Secretary of Defense for Acquisition, Technology, and Logistics

Dr. Joseph Romm, Senior Fellow, Center for American Progress, and former Acting Assistant Secretary of Energy, Office of Energy Efficiency and Renewable Energy

Mr. Ray Anderson, Founder and Chairman, Interface, Inc.

Mr. Jeffrey Harris, Vice President for Programs, Alliance to Save Energy

Dr. Vaclav Smil, Distinguished Professor, Faculty of Environment, University of Manitoba

Mr. Kenneth J. Tierney, Corporate Senior Director of Environmental Health, Safety and Energy Conservation, Raytheon

Dr. Ben Schwegler, Vice President and Chief Scientist, Walt Disney Imagineering Research and Development

Mr. Fred Kneip, Associate Principal, McKinsey

The Honorable John Deutch, Institute Professor, MIT, former Director of Central Intelligence, Central Intelligence Agency, and former Deputy Secretary of Defense

Mr. David Hawkins, Director, Climate Programs, Natural Resources Defense Council

Dr. Jeffrey Marqusee, Executive Director of the Strategic Environmental Research and Development Program (SERDP) and Director of the Environmental Security Technology Certification Program (ESTCP)

Mr. Michael A. Aimone, Assistant Deputy Chief of Staff for Logistics, Installations and Mission Support, Headquarters U.S. Air Force

Mr. Alan R. Shaffer, Principal Deputy Director, Defense Research and Engineering, Office of the Director of Defense Research and Engineering, U.S. Department of Defense

Mr. Christopher DiPetto, Deputy Director, Developmental Test & Evaluation, Systems and Software Engineering, U.S. Department of Defense

The researchers from the National Renewable Energy Laboratory: Ms. Bobi Garret, Mr. Dale Garder, Dr. Rob Farrington, Dr. Mike Cleary, Mr. Tony Markel, Dr. Mike Robinson, Dr. Dave Mooney, Dr. Kevin Harrison, Mr. Brent Nelson, Mr. Bob Westby


Carbon Capture and Storage not likely to ever be commercial: too expensive, uses up to 30% of the power

House 112-179. September 20, 2012. The American Energy Initiative, Part 29: A Focus on H.R. 6172. House of Representatives.

[ Excerpts from the 205 page transcript of the hearing follow ]

Key points about Carbon Capture and Storage (CCS) clean coal technology:

  • Large-scale commercialization remains years, if not decades, away
  • It is too expensive: EPA and DOE’s National Energy Technology Lab estimates that applying CCS to new coal-based units would increase the cost of electric power by 80%
  • CCS technology is in an early stage of development, so not a single CCS developer in the world can guarantee its technology will work at commercial scale, and without such a guarantee, power plant operators will not invest in CCS technology.
  • CCS reduces the EROEI substantially: Many of the current pilot projects estimate the parasitic load and cycle efficiency penalties to be at least 25 or 30% of a generating station output. So if CCS technology were retrofitted to an existing 2,000 MW coal-fired station the output from the plant would be reduced by 500 to 600 MW at a minimum.
  • Finding enough storage will be difficult
  • Very serious questions remain regarding the implications injection processes will have on mineral and property rights, the monitoring of CO2 plumes across property lines or state boundaries, and the verification systems necessary to ensure long-term monitoring and confirm that no CO2 is escaping

ED WHITFIELD, KENTUCKY. Today we will be focusing on H.R. 6172, which would prohibit EPA’s proposed New Source Performance Standard for greenhouse gases from being finalized until it is technologically and economically feasible.

I don’t think that anyone is not aware of the fact that this administration has a strong bias against coal. We all are familiar with the President’s comments in San Francisco when he was running for President that people would be able to build coal plants if he is elected President but they would be bankrupt. Yesterday, many of you read about Alpha Natural Resources closing down eight coal mines, cutting 1,200 jobs. Patriot Coal recently announced they were going into bankruptcy. Murray Energy, up in Ohio, West Virginia, Kentucky and Illinois, has announced they are going to be closing down three mines. And I understand the argument on the other side because they say it has nothing to do with us, nothing to do with our regulations, this is because natural-gas prices are low, which is true. But even if that were not the case, once this regulation becomes final, no one will be able to build a new coal power plant in America.

BOBBY L. RUSH, A REPRESENTATIVE IN CONGRESS FROM THE STATE OF ILLINOIS. Today’s hearing will focus on H.R. 6172, a bill that prohibits the EPA from finalizing standards of performance under section 111 of the Clean Air Act for carbon dioxide emissions from existing or new fossil fuel-fired power plants unless or until carbon capture and storage is found to be technologically and economically feasible.

Ironically, this bill comes on the heels of the last markup the subcommittee held, where the majority defeated an amendment I offered that would have exempted future clean-coal projects from the arbitrary December 2011 deadline, and of my Republican colleagues’ misguided attempts to disrupt the Department of Energy loan program by prohibiting any funding for future proposals regardless of the merits or technological advances of those projects. So after that first attempt to abandon any new Department of Energy funding for future clean-coal projects, the majority party is now bringing forth a bill that would block and delay EPA from finalizing the proposed carbon pollution standards for new power plants, or any future carbon pollution standards for existing power plants, until carbon capture and sequestration is technologically and economically feasible. To most people, this bill would seem simply another attempt to shield the dirtiest polluters from commonsense air quality standards that would make their facilities cleaner and more efficient while protecting Americans’ health.

FRED UPTON, MICHIGAN. We are extremely concerned about the impacts that this proposed rule would have on the future of affordable coal-fired power generation in America if indeed it is finalized. As currently written, the rule requires any new coal-fired plants to install costly carbon capture and sequestration technology. However, even President Obama’s Department of Energy has acknowledged that CCS technology is not yet commercially available and that large-scale commercialization remains years, if not decades, away.

Leaders in CCS technology and industry stakeholders agree that significant technical, legal and regulatory hurdles still need to be overcome in order to successfully bring CCS to commercial scale. And because CCS technology remains in its early stages of development, not a single CCS developer in the world can currently guarantee that its technology will work at commercial scale, and without such a guarantee, power plant operators will not, and cannot, make investment in CCS technology.

HENRY A. WAXMAN, CALIFORNIA. This committee has heard a lot of arguments from people who have been convinced that they are victims of the government when that is not the case. Let me cite an example. This committee had a hearing on EPA’s proposed regulation of farm dust. Can anybody think of anything more ridiculous than regulating farm dust that is ubiquitous to farms? So this committee rushed legislation to protect the farmers from EPA regulation of farm dust even though EPA said they had no plans to regulate farm dust, and we passed a bill. Do you know what the bill did? It provided for repeal of regulations from open-pit mining that put out particulate matter and toxic substances in the air. So the farmers were told they were victims and they were being used for a different purpose.

We don’t have the technology to remove the carbon from coal and store it. It is a technology we all should want to have. But the industry has no incentive to develop that technology because they are doing fine selling coal and using coal without that technology. That would just be an extra expense.

The Republicans in this House passed H.R. 910, the Upton- Inhofe bill. That would have barred EPA from reducing dangerous carbon pollution and codified science denial by overturning EPA’s scientific finding that carbon pollution endangers health and welfare. It is a premise that climate change is a hoax, and since that time early last year, this Republican House has proved to be the most anti-environmental in the history of the Congress. Republicans have voted more than 300 times on the House Floor to weaken longstanding public-health and environmental laws, block environmental standards, defund protections of our air, water and public lands, and oppose clean energy. They voted 47 times to block action on climate change. When they passed that Upton-Inhofe bill a year and a half ago, House Republicans argued the science was uncertain, EPA was exceeding its authority. By now, everybody should understand that they were wrong on both counts. The science has been clear and clearer, and just look at all the signs of climate change occurring around us: recent wildfires, droughts, heat waves, exactly the type of extreme weather events that scientists have been predicting for years and that this committee has been ignoring.

The EPA is not overreaching. The courts have affirmed their power to regulate in this area. It is about time we try to help the people in the coal area be viable in a new economy that is coming. Otherwise you can scare them with talk of war against them but it is a dishonest approach. It doesn’t help them. It stirs up the feelings of victimology by the people in these areas, and I suppose it is supposed to help Republicans in the election. But sometimes let us stop playing politics and deal with national urgent matters, and this committee has refused to do it for a year and a half.

Eugene Trisko. I am an attorney in private practice, here today to testify on behalf of the United Mine Workers of America to support the enactment of H.R. 6172. I have had the honor of representing the UMWA in Clean Air Act and domestic and international climate change issues for the past 25 years. H.R. 6172 is sound policy and a commonsense solution to the threat to new advanced coal generation posed by EPA’s proposed carbon pollution standard rule. That rule sets a uniform CO2 emissions rate of 1,000 pounds of CO2 per megawatt-hour applicable to both coal and natural-gas combined cycle units. New coal units would need to employ CCS technology to comply while new natural-gas combined cycle units could comply without CCS.

EPA and DOE’s National Energy Technology Lab estimates that applying CCS to new coal-based units would increase the cost of electric power by 80 percent.

CCS has not been commercially demonstrated in this country as indicated by the findings of the 2010 Interagency Task Force Report on Carbon Capture and Storage. EPA’s proposed rule is simply a means of forcing winners and losers in the future market for electric generation.

Coal is an indispensable part of America’s energy supply and must be a core element of any all-of-the-above energy policy. More than one-third of our Nation’s electricity is generated by coal, mainly in baseload plants. The principal alternatives to coal for future baseload generation are nuclear and natural gas. While natural-gas prices have declined recently, substantial uncertainty surrounds future natural-gas prices, particularly in view of the 40- to 60-year lifetimes of electric generation assets.

John N. Voyles, Jr., on behalf of LG&E and KU Energy LLC. We are aware of no full-scale application of carbon capture and storage (CCS) in continuous operation on a fossil-fueled electric generating unit.

The energy penalty to add CCS technology to a coal-fired electric generating unit is prohibitively high. Many of the current pilot projects estimate the parasitic load and cycle efficiency penalties to be at least 25 or 30% of a generating station output. For a company like mine, those penalties would mean if CCS technology were retrofitted to an existing 2,000 MW coal-fired station producing power for our customers today, the output from the plant would be reduced by 500 to 600 MW at a minimum.
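[My comment: the arithmetic behind that 500 to 600 MW figure is simple enough to check. A minimal sketch; the function name is mine, the numbers are straight from the testimony (25-30% parasitic load on a 2,000 MW station):]

```python
# Energy-penalty arithmetic for a CCS retrofit, using the testimony's
# figures: a 25-30% parasitic load applied to a 2,000 MW coal station.

def ccs_parasitic_loss_mw(gross_mw: float, parasitic_fraction: float) -> float:
    """MW of station output consumed by the CCS retrofit itself."""
    return gross_mw * parasitic_fraction

low = ccs_parasitic_loss_mw(2000, 0.25)   # lower bound of the penalty
high = ccs_parasitic_loss_mw(2000, 0.30)  # upper bound of the penalty
print(f"Output lost to CCS: {low:.0f}-{high:.0f} MW of 2,000 MW")
```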

An even bigger challenge is the application of CO2 storage technology. While some carbon dioxide is successfully being utilized in enhanced oil or methane recovery operations, and other pilots have successfully injected small quantities of CO2 into deep saline aquifers, the volume of storage necessary to sustain such operations continuously for the life of an electric generating station has yet to be established. Very serious questions remain regarding the implications such injection processes have on mineral and property rights, the monitoring of the CO2 plume across property lines or state boundaries, and the verification systems necessary to ensure long-term monitoring is taken into account. We believe these questions loom much larger than the simple view that CO2 can be captured and injected underground, and might be done more cost effectively, with smaller energy penalties, at some undetermined point in the future. Until such time as CCS technology is commercially available to be deployed at full scale in a technical and economical manner, we are concerned that any standard of performance proposed …

Robert Hilton, Vice President of Power Technologies for Government Affairs for Alstom. Alstom has completed work on four pilot and validation-scale plants and has 10 pilots, validation, and commercial-scale plants in operation, design, or construction worldwide. These CCS projects include both coal and gas generation.

We are here today to specifically address the status of CCS as a commercial technology. CCS is, within the realm of innovation, no different than any other technology under development. It is required to move through various stages of development at consistently larger scale. Alstom has taken each of its CCS-related technologies from the bench level to validation scale with the aim of finally reaching commercial scale. However, to date, no CCS technologies have been deployed at commercial scale. Validation scale is the proof of technology in real field conditions. This is important. It is at this point we can say confidently that the basic technology works. CCS technology is technologically feasible now.

The final stage to reach commercial status is to perform a demonstration at full scale. It is critical to define the risk of technology to make offers. This cannot be defined until the technology can be shown to work at full scale. This is the first opportunity we have to work with the exact equipment in the exact operating conditions that will become the subject of contractual conditions including performance and other contractual guarantees. This also becomes the first opportunity to optimize the process and equipment to effect best performance and seek cost reduction. Based on these criteria, Alstom does not currently deem its technologies for CCS commercial and, to my knowledge, there are no other technology suppliers globally that can do so.

In its recent rulemaking, EPA has required CCS for all new coal plants and, conceivably, gas plants. While Alstom, in conjunction with AEP, has run the largest plant, we are not ready to do this on 500- or 1,000-megawatt plants.

The current DOE program for first generation technologies on CCS has encountered serious difficulties in bringing projects of commercial scale to operation. It appears that most of the projects, if they continue, are not likely to become operational until 2017 with the exception of Radcliffe/Kemper. Globally the picture is similar. The EU, and notably the UK, are targeting 2016 for commercial scale demos to start up. The Chinese have a road map aimed at two commercial scale demos to begin operation in 2016. But note: these are startups. A period of operation must follow before the technology is deemed ready for commercial offer.

CCS has been in development for approximately the last 12-14 years, a relatively short time for such a complex and critical technology. In the power industry, development periods of 20-25 years are common.

While Alstom, in conjunction with American Electric Power, has built and operated the largest continuous CCS operation on a coal plant through to sequestration, this plant was approximately 50 MW. While it proved the technology works very well, it was not of such scale as to use the real equipment required for a 500 or 1,000 MW coal plant. Many of the components, including the chillers and heat exchangers, will change for use on a larger plant.

While this plant was capable of capturing and storing over 100,000 tons per year, it was not ready to be offered commercially on a 3-6 million ton per year power plant. [My comment: that is a HUGE amount of CO2 to store].
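[My comment: the tonnages stated here also show just how far validation scale is from commercial scale. A quick sketch using only the numbers in the testimony:]

```python
# Scale gap between the ~100,000 ton/yr validation plant and the
# 3-6 million ton/yr capture needed for a full-size power plant.

pilot_tons_per_year = 100_000
commercial_tons_per_year = (3_000_000, 6_000_000)

# Factor by which capture equipment would have to be scaled up
scale_factors = [c / pilot_tons_per_year for c in commercial_tons_per_year]
print(scale_factors)  # a 30x to 60x scale-up
```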

Baseload Operation

All power plants have some load variation that will have impacts on a plant’s heat rate and CO2 emissions. A typical PC baseload plant may operate 60% of the time at 100% load and another 35% between 50-75% load. The average capacity factor would be about 85% and it would have an average heat rate typically about 1% higher than at 100% load. This alone would be sufficient to increase the specific CO2 emission from a PC plant firing Wyoming subbituminous coal from 1781 to 1799 lb CO2/MWh – essentially at the 1800 limit.

Cycling Operation

A typical PC cycling plant may operate 30% of the time at 100% load, another 55% between 50-75% load, with the balance of operation at even lower loads. The average capacity factor would be about 70% and it would have an average heat rate typically about 4-5% higher than at 100% load. A 5% heat rate increase from cycling operation would increase the specific CO2 emission of the Illinois bituminous coal from 1698 to 1783 lb CO2/MWh – already getting very close to the 1800 limit. Note that this is particularly significant as more plants are expected to cycle in the future as renewables increase their share of power generation.
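[My comment: both the baseload and cycling figures follow from one relationship: specific CO2 emissions (lb/MWh) rise roughly in proportion to heat rate, so a fractional heat-rate penalty raises emissions by the same fraction. A minimal sketch; the linear-scaling assumption is mine, the input numbers are from the testimony:]

```python
# Specific CO2 emissions scale linearly with heat rate, so a fractional
# heat-rate penalty raises lb CO2/MWh by the same fraction. The inputs
# reproduce the baseload and cycling cases described in the testimony.

def emissions_after_penalty(design_lb_per_mwh: float, heat_rate_increase: float) -> float:
    """Specific CO2 emissions after a fractional heat-rate increase."""
    return design_lb_per_mwh * (1 + heat_rate_increase)

# Baseload: Wyoming subbituminous coal, ~1% heat-rate penalty
print(round(emissions_after_penalty(1781, 0.01)))  # 1799 lb CO2/MWh
# Cycling: Illinois bituminous coal, ~5% heat-rate penalty
print(round(emissions_after_penalty(1698, 0.05)))  # 1783 lb CO2/MWh
```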

Degradation Due To Plant Age

Power plants are designed to operate for 30 years and many existing plants have operated much longer than that. Normal wear and tear is to be expected which has an impact on the plant heat rate. Looking at just the steam turbine, the plant heat rate could deteriorate by about 1% after 10 years of operation.

Site Factors

Other factors can impact a modern plant design that can also have a negative impact on plant heat rate and thus the CO2 emissions. For example, areas with limited water resources could require an air-cooled condenser vs. water cooling. Local water temperature can also have an impact on condenser operating pressure and heat rate. Table 2 summarizes the impact of an increase in plant heat rate due to the above factors on the specific CO2 emissions for a state-of-the-art USC PC power plant. A plant that is required to cycle would likely have a heat rate 5% higher than its design 100% load heat rate. In this scenario, a bituminous coal would just barely meet the standard and the lower rank fuels would exceed the 1800 lb CO2/MWh target. It is likely that the bituminous plant would also exceed this target when site specific factors, impacts of startup, shutdown, and age deterioration are also factored in. The cycling impact could be even more significant in the future as renewables assume a larger portion of the total power generation.

Table 2: Impact of Heat Rate Degradation on Specific CO2 Emissions

The power industry normally has heavy production in the winter and summer and less production in the shoulder months of fall and spring.

Among the many challenges faced in implementing technology to reduce CO2 emissions from the power generation sector, minimizing both the energy penalty and the cost of electricity for fossil fueled power plants equipped with CCS are two of the most significant. Many parameters have to be taken into account to calculate these costs, including those related to technical performance. Evaluations and comparisons often result in endless debates due to the infinite number of possible combinations of these input parameters.

The “IPCC Summary for Policymakers” published in May 2007, gives a target for the maximum concentration of Greenhouse Gas (GHG) in the atmosphere of 450 ppm CO2 equivalent. This is required in order to give a reasonable chance of limiting the earth’s long-term surface temperature increase to a maximum of 2°C above pre-industrial levels by 2100. This figure was agreed by all countries at Copenhagen & Cancun. To achieve this goal, CO2 emissions will need to be reduced massively. The main contributors to CO2 emissions today are Power Generation (40%), Transport (20%) and Industry (20%). Power generation currently emits 12 GtCO2/yr. Power is projected to grow significantly, and the 2°C goal will require full de-carbonization of Power generation. Low carbon technologies are needed both for new power generation plants, and for the existing installed base. The possibilities to reduce CO2 emissions in the Power sector include: i) demand reduction, ii) efficiency increase, iii) nuclear, iv) renewables (wind, hydro, solar, biomass), and v) Carbon Capture and Storage (CCS). This last alternative will by necessity play a major role:

The IEA calculates that 54 to 67% of worldwide electricity generation will still be provided by fossil power plants in 2035. CCS is the only option to deal with the resulting emissions during a transition period until around 2050 after which time it may be possible to move toward a power generation system not reliant on fossil fuels.

John Christy, Alabama State Climatologist, Professor of Atmospheric Science, and Director of the Earth Systems Science Center at the University of Alabama at Huntsville. A climate change denier.



Electromagnetic pulse threat to infrastructure (U.S. House hearings)

In 2012 and again in 2014, the U.S. House of Representatives held hearings on the threat of electromagnetic pulses — from either the sun or nuclear blasts — to critical U.S. infrastructure.  The testimony at these hearings could be mistaken for a grade B science fiction movie.  But it’s not a Hollywood thriller.  Below are excerpts from the transcripts of the 2012 and 2014 hearings.

Chair, Michael McCaul (Texas). Some would say it is a low probability, but the damage that could be caused in the event of an EMP attack both by the sun, a solar event, or a man-made attack would be catastrophic. We talk a lot about a nuclear bomb in Manhattan, and we talk about a cybersecurity threat, the grid, power grid, in the Northeast, and all these things would actually probably pale in comparison to the devastation that an EMP attack could perpetrate on Americans. We have extraordinary capability in this country to do great things. We are a responsible Nation with our power and with our might. But a nation, a rogue nation, with that type of capability in the wrong hands could be devastating.

Side note: House Rep McCaul has just come out with a 2016 book “Failures of Imagination: The Deadliest Threats to Our Homeland–and How to Thwart Them”.   Yet this book doesn’t mention the threat of an electromagnetic pulse (EMP). He doesn’t explain why EMP is no longer a threat, so he’s lost credibility with me, and I won’t be buying his book and reviewing it.

VICE CHAIRMAN SCOTT PERRY (Pennsylvania): In 1962, the United States conducted a test named STARFISH Prime where the military detonated a 1.4-megaton thermonuclear bomb about 25 miles above Johnston Atoll in the Pacific. In space, six American, British, and Soviet satellites suffered damage, and 800 miles away in Hawaii, burglar alarms sounded, street lights blinked out, and phones, radios, and televisions went dead. While only 1 percent of the existing street lights were affected, it became clear that electromagnetic pulse, or EMP, could cause significant damage.

EMP is simply a burst of electromagnetic radiation that results from certain types of high-energy explosions or from a suddenly fluctuating magnetic field. A frightening point is that EMP can be generated by nuclear weapons, from naturally-occurring sources such as solar storms, or specialized non-nuclear EMP weapons.

Nuclear weapon EMPs are most catastrophic when a nuclear weapon is detonated at a high altitude at approximately 30 kilometers, or 20 miles, above the intended target. The consequences of such an attack could be catastrophic. All electronics, power systems, and information systems could be shut down. This could then cascade into interdependent infrastructure such as water, gas, and telecommunications. While we understand that this is an extreme case, we must always be prepared in case a rogue state decides to utilize this technology.

Currently the nations of Russia and China have the technology to launch an EMP attack, and we have speculated that Iran and North Korea may be developing EMP weapon technology.

Since most critical infrastructure, particularly electrical infrastructure is in the hands of private owners, the Federal Government has limited authority to mandate preparedness. DHS has no statutory authority whatsoever to regulate the electric grid.

Trent Franks (Arizona): With each passing year, our society becomes increasingly dependent on technology and an abundant supply of electricity. Our entire American way of life relies upon electrical power and technology. Our household appliances, food-distribution systems, telephone and computer networks, communication devices, water and sewage plants would grind to a halt without it. Nearly every single facet of modern human life in America is susceptible to being crippled by a major Electromagnetic Pulse or Geomagnetic Disturbance event. We are so reliant on our electric power grid that we specifically consider it ‘‘critical infrastructure’’.

Chairman and Members of the committee, it strikes at my very core when I think of the men, women, and children in cities and rural towns across America with a possibility of no access to food, water, or transportation. In a matter of weeks or months at most, a worst-case scenario could bring devastation beyond imagination.

The effects of geomagnetic storms and electromagnetic pulses on electric infrastructure are well-documented, with nearly every space weather and EMP expert recognizing the dramatic disruptions and cataclysmic collapses these pulses can bring to electric grids. In 2008, the EMP Commission testified before The Armed Services Committee, of which I am a member, that the U.S. society and economy are so critically dependent upon the availability of electricity that a significant collapse of the grid, precipitated by a major natural or man-made EMP event, could result in catastrophic civilian casualties. This conclusion is echoed by separate reports recently compiled by the DOD, DHS, DOE, NAS, along with various other Government agencies and independent researchers. All came to very similar conclusions. We now have 11 Government studies on the severe threat and vulnerabilities we face from EMP and GMD.

We have long known the potentially devastating effects of a sufficiently intense electromagnetic pulse on electronic systems and its risk to our National security. More troubling, our enemies know.

More than a year ago, an unknown number of shooters with AK–47s knocked out 17 large transformers during a highly-choreographed assault on the PG&E Metcalf Transmission Substation in California. While the power company was able to avoid blackouts, the damage to the facility took nearly 4 weeks to repair.

This is not an isolated incident, and adversaries worldwide are taking notice of the vulnerability of our grid.

We as a Nation have spent billions of dollars over the years hardening our nuclear triad, our missile-defense capabilities, and numerous other critical elements of our National security apparatus against the effects of electromagnetic pulse, particularly the type of electromagnetic pulse that might be generated against us by an enemy.

However, our civilian grid, which the Defense Department relies upon for nearly 99% of its electricity needs, is completely vulnerable to the same kind of danger. This constitutes an invitation on the part of certain enemies of the United States to use the asymmetric capability of an EMP weapon against us.

We also face the threat of a natural EMP event. Since the last occurrence of a major geomagnetic storm in 1921, the Nation’s high-voltage and extra-high- voltage systems have increased in size more than ten-fold.

HON. PETE SESSIONS: The possibility that a single nuclear weapon detonated in space high over this country could unleash intense electromagnetic pulses (EMP), disrupting for many months—if not indefinitely—the supply of power to large areas. Until recently, information about EMP was classified, and many of us have little knowledge of the serious danger such threats represent to everything we hold dear.

Dr. William Graham, the chairman of the EMP Threat Commission, believes that, if the power goes out and stays out for even 1 year’s time, as many as 9 out of 10 of us would perish.

We need not face such a horrific prospect. We know how to protect electrical and electronic devices from the effects of EMP. In fact, the Department of Defense has been doing it with respect to the military’s nuclear deterrent and command-and-control systems for over 50 years. There are, in short, proven and easily implementable techniques that can now be applied to ensure the resilience of the U.S. electric grid and the things that depend upon it in 21st Century America—which is just about everything.

Dr. Peter Vincent Pry is the executive director of the Task Force on National and Homeland Security, a Congressional advisory board dedicated to achieving protection of the United States from electromagnetic pulse and other threats. Dr. Pry is also the director of the United States Nuclear Strategy Forum, an advisory body to Congress on policies to counter weapons of mass destruction. Dr. Pry has served on the staffs of the Congressional Commission on the Strategic Posture of the United States, the Commission to Assess the Threat to the U.S. from an EMP Attack, the House Armed Services Committee, as an intelligence officer with the CIA, and as a verification analyst at the U.S. Arms Control and Disarmament Agency.

Mr. PRY.  Natural EMP from a geomagnetic super-storm like the 1859 Carrington Event or the 1921 Railroad Storm, a nuclear EMP attack from terrorists or rogue states as practiced by North Korea during the nuclear crisis of 2013 are both existential threats that could kill 9 of 10 Americans through starvation, disease, and societal collapse.

A natural EMP catastrophe or nuclear EMP attack could black out the National electric grid for months or years and collapse all the other critical infrastructures, communications, transportation, banking and finance, food and water, necessary to sustain modern society and the lives of 310 million Americans.

EMP is a clear and present danger:

  • A Carrington-class coronal mass ejection narrowly missed the earth in July 2012.
  • Last April, during the nuclear crisis with North Korea over Kim Jong-Un’s threatened nuclear strikes against the United States, Pyongyang apparently practiced an EMP attack with its KSM–3 satellite that passed over the U.S. heartland and over the Washington, D.C.- New York City corridor.
  • Iran, estimated by the administration to be within 2 months of a nuclear weapon, has a demonstrated capability to launch an EMP attack from a vessel at sea. The Iranian Revolutionary Guard Navy commenced patrols off the East Coast of the United States in February.

An EMP attack is a high-tech means of killing millions of people the old-fashioned way—through starvation, disease, and societal collapse.

A single nuclear weapon detonated at high altitude will generate an electromagnetic pulse that can cause catastrophic damage across the entire contiguous United States to the critical infrastructures—electric power, telecommunications, transportation, banking and finance, food and water—that sustain modern civilization and the lives of 310 million Americans. Nature can also generate an EMP causing similarly catastrophic consequences across the entire contiguous United States—or even across the entire planet—by means of a solar flare from the Sun that causes a great geomagnetic storm on Earth. Non-nuclear weapons, often referred to as radio frequency weapons, can also generate an EMP, much more limited in range than a nuclear weapon, that can damage electronics, and could cause the collapse of critical infrastructures locally, perhaps with cascading effects over an area as large as a major city.

Any nuclear warhead detonated at high altitude, 30 kilometers (18.6 miles) or more above the Earth’s surface, will generate an electromagnetic pulse. The immediate effects of EMP are disruption of, and damage to, electronic systems and electrical infrastructure. EMP is not reported in the scientific literature to have direct harmful effects on people. Because an EMP attack would detonate a nuclear warhead at high altitude, no other nuclear effects—such as blast, thermal radiation, or radioactive fallout—would be experienced by people on the ground or flying through the atmosphere. However, because modern civilization and life itself now depend upon electricity and electronics, the indirect effects of an EMP would be catastrophic. Gamma rays, and the fireball from a high-altitude nuclear detonation, interact with the atmosphere to produce a super-energetic radio wave—the EMP—that covers everything within line-of-sight from the explosion to the Earth’s horizon.

Even a relatively low-altitude EMP attack, where the nuclear warhead is detonated at an altitude of 30 kilometers, will generate a damaging EMP field over a vast area, covering a region equivalent to New England, all of New York, and half of Pennsylvania. A nuclear weapon detonated at an altitude of 400 kilometers (~250 miles) over the center of the United States would place an EMP field over the entire contiguous United States and parts of Canada and Mexico.
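[The footprint figures above follow from simple line-of-sight geometry. The sketch below is an illustrative calculation added by the editor, not part of the testimony; the Earth-radius constant and function name are the editor's own.]

```python
import math

R_EARTH_KM = 6371.0  # mean Earth radius

def emp_footprint_radius_km(burst_altitude_km: float) -> float:
    """Great-circle distance from ground zero to the horizon as seen
    from a detonation at the given altitude: everything within this
    radius is within line-of-sight of the burst."""
    return R_EARTH_KM * math.acos(R_EARTH_KM / (R_EARTH_KM + burst_altitude_km))

for h_km in (30, 400):
    print(f"burst at {h_km:>3} km -> line-of-sight radius ~{emp_footprint_radius_km(h_km):,.0f} km")
```

[A 30 km burst gives a radius of roughly 600 km, consistent with the region described above, and a 400 km burst gives roughly 2,200 km, enough to blanket the contiguous United States.]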

It is a myth that rogue states or terrorists need a sophisticated intercontinental ballistic missile to make an EMP attack. In fact, any missile that can deliver a nuclear warhead to an altitude of 30 kilometers or more—including short-range missiles launched from a ship or freighter—can make a catastrophic EMP attack on the United States. Indeed, Iran has practiced ship-launched EMP attacks using Scud missiles—which are in the possession of scores of nations and even terrorist groups. Because Scuds are commonplace, and a warhead detonated in outer space would leave no bomb debris for forensic analysis, an EMP attack launched off a ship could enable rogue states or terrorists to destroy U.S. critical infrastructures and kill millions of Americans anonymously.

The EMP generated by a nuclear weapon has three components, designated by the U.S. scientific-technical community E1, E2, and E3.

E1 is caused by gamma rays, emitted by the nuclear warhead, that knock electrons off molecules in the upper atmosphere, causing the electrons to rotate rapidly around the lines of the Earth’s magnetic field, a phenomenon termed the Compton Effect. The E1 component of nuclear EMP is a shockwave, delivering thousands of volts in mere nanoseconds, with a high-frequency (short) wavelength that can couple directly into small objects, like personal computers, automobiles, and transformers. E1 is unique to nuclear weapons and is too fast and too energetic to be arrested by protective devices used for lightning.

The E2 component of a nuclear EMP is comparable to lightning in its energetic content and medium (milliseconds) frequency and wavelength. Protective devices used for lightning are effective against E2.

E3 is caused by the fireball of a nuclear explosion: the expanding and then collapsing fireball causes the Earth’s magnetic field to oscillate, generating electric currents that couple into the low-frequency, long (seconds) wavelength part of the EMP that is E3. The E3 waveform can couple directly only into objects having at least one dimension of great length. Electric power and telecommunications lines that run for kilometers in many directions are ideally suited for receiving E3. Although E3, compared to E1, appears to deliver little energy (just volts per meter), this is multiplied manifold by power and telecommunications lines that are typically many kilometers long, building up E3 currents that can melt Extra-High-Voltage (EHV) transformers, typically designed to handle 750,000 volts. Small electronics can also be destroyed by E3 if they are connected in any way to an E3 receiver: a personal computer plugged into an electric outlet, which is of course connected to power lines that are ideal E3 receivers, or the electronic servo-mechanisms that operate the controls of large passenger airliners, which can receive E3 through the metal skin of the aircraft wings and body. Protective devices used for lightning are not effective against E3.
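[The ''volts per meter multiplied by line length'' arithmetic can be made concrete. The back-of-the-envelope sketch below is the editor's, not from the testimony; it uses a crude uniform-field approximation, and the example field strengths and line lengths are hypothetical. Real geomagnetically induced currents also depend on ground conductivity and line orientation.]

```python
def induced_voltage_kv(e3_field_v_per_m: float, line_length_km: float) -> float:
    """Voltage built up along a conductor by a uniform quasi-DC E3 field.
    V/m times meters gives volts; the km-to-m and V-to-kV factors of
    1,000 cancel, so the product is already in kilovolts."""
    return e3_field_v_per_m * line_length_km

# Even a modest field over a long transmission line builds a huge drive voltage.
for field, length in ((1, 100), (5, 200), (10, 500)):
    print(f"{field:>2} V/m over {length:>3} km line -> {induced_voltage_kv(field, length):,.0f} kV")
```

[A few volts per meter along a few hundred kilometers of line is already comparable to, or well beyond, the 750,000-volt design rating quoted above.]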

The Soviets executed a series of nuclear detonations in which they exploded 300 kiloton weapons at approximately 300, 150, and 60 kilometers above their test site in South Central Asia. They report that on each shot they observed damage to overhead and underground buried cables at distances of 600 kilometers. They also observed surge arrestor burnout, spark-gap breakdown, blown fuses, and power supply breakdowns.

A high-yield nuclear weapon is not necessary to make an EMP attack. Although a high-yield weapon will generally make a more powerful EMP field than a low- yield nuclear weapon, ALL nuclear weapons produce gamma rays and EMP. The EMP Commission found, by testing modern electronics in simulators, that ANY nuclear weapon can potentially make a catastrophic EMP attack on the United States. Even a very low-yield nuclear weapon—like a 1-kiloton nuclear artillery shell—will produce enough EMP to pose a catastrophic threat. This is so in part because the U.S. electric grid is so aged and overburdened, and because the high-tech electronics that support the electric grid and other critical infrastructures are over 1 million times more vulnerable to EMP than the electronics of the 1960s.

The EMP Commission also found, contrary to the claim that high-yield nuclear weapons are necessary for an EMP attack, that very low-yield nuclear weapons of special design can produce significantly more EMP than high-yield nuclear weapons. The EMP Commission found further that Russia, probably China, and possibly North Korea are already in possession of such weapons. Russian military writings call these ‘‘Super-EMP’’ nuclear weapons, and credibly claim that they can generate 200 kilovolts per meter—many times the 30 kilovolts per meter attributed to a high-yield (20 megaton) nuclear weapon of normal design. Yet a Super-EMP warhead can have a tiny explosive yield, perhaps only 1 kiloton, because it is specially designed to produce primarily gamma rays that generate the E1 electromagnetic shockwave component of the EMP effect. Super-EMP weapons are specialized to generate an overwhelming E1; they produce no E2 or E3, but do not need to, as their E1 is so potent.

In 2004, credible Russian sources warned the EMP Commission that design information and ‘‘brain drain’’ from Russia had transferred to North Korea the capability to build a Super-EMP nuclear weapon ‘‘within a few years.’’ In 2006 and again in 2008, North Korea tested a nuclear device of very low yield, 1–3 kilotons, and declared these tests successful. South Korean military intelligence, in open-source reporting, independently corroborates the Russian warning that North Korea is developing a Super-EMP nuclear warhead. North Korea’s proclivity to sell anything to anyone, including missiles and nuclear technology to fellow rogue nations Iran and Syria, makes Pyongyang’s possession of Super-EMP nuclear weapons especially worrisome.

Geomagnetic storms rarely affect the United States, but regularly damage nations located at high northern latitudes, such as Canada, Norway, Sweden, Finland, and Russia. Damage from a normal geomagnetic storm can be severe. For example, in 1989 a geomagnetic storm over Canada destroyed the electric power grid in Quebec. The EMP Commission was the first to discover and report in 2004 that every hundred years or so the Sun produces a great geomagnetic storm. Great geomagnetic storms produce effects similar to the E3 EMP from a multi-megaton nuclear weapon, and so large that it would cover the entire United States—possibly even the entire planet.

Geomagnetic storms, even great geomagnetic storms, generate no E1 or E2, only E3, technically called the magnetohydrodynamic EMP. Nonetheless, E3 alone from a great geomagnetic storm is sufficient to end modern civilization. The EMP produced, given the current state of unpreparedness by the United States and every nation on Earth, could collapse power grids everywhere on the planet and destroy EHV transformers and other electronic systems that would require years to repair or replace.

Modern civilization cannot exist for a protracted period without electricity. Within days of a blackout across the United States, a blackout that could encompass the entire planet, emergency generators would run out of fuel, telecommunications would cease, and transportation would halt due to gridlock and, eventually, lack of fuel. Cities would have no running water and would exhaust their food supplies within a few days. Police, fire, emergency services, and hospitals cannot long operate in a blackout. Government and industry also need electricity in order to operate.

The EMP Commission warns that a natural or nuclear EMP event, given current unpreparedness, would likely result in societal collapse.

Terrorists, criminals, and even lone individuals can build a non-nuclear EMP weapon without great trouble or expense, working from unclassified designs publicly available on the internet, and using parts available at any electronics store. In 2000, the Terrorism Panel of the House Armed Services Committee sponsored an experiment, recruiting a small team of amateur electronics enthusiasts to attempt constructing a radiofrequency weapon, relying only on unclassified design information and parts purchased from Radio Shack. The team, in 1 year, built two radiofrequency weapons of radically different designs. One was designed to fit inside the shipping crate for a Xerox machine, so it could be delivered to the Pentagon mail room where (in those more unguarded days before 9/11) it could slowly fry the Pentagon’s computers. The other radiofrequency weapon was designed to fit inside a small Volkswagen bus, so it could be driven down Wall Street and disrupt computers—and perhaps the National economy. Both designs were demonstrated and tested successfully during a special Congressional hearing for this purpose at the U.S. Army’s Aberdeen Proving Ground.

Radiofrequency weapons are not merely a hypothetical threat. Terrorists, criminals, and disgruntled individuals have used home-made radiofrequency weapons. The U.S. military and foreign militaries have a wide variety of such weaponry. Moreover, non-nuclear EMP devices that could be used as radiofrequency weapons are publicly marketed for sale to anyone, usually advertised as ‘‘EMP simulators.’’ For example, one such simulator is advertised for public sale as an ‘‘EMP Suitcase.’’ This EMP simulator is designed to look like a suitcase, can be carried and operated by one person, and is purpose-built with a high energy radiofrequency output to destroy electronics. However, it has only a short radius of effect. Nonetheless, a terrorist or deranged individual who knows what he is doing, who has studied the electric grid for a major metropolitan area, could—armed with the ‘‘EMP Suitcase’’— black out a major city.

A CLEAR AND PRESENT DANGER. An EMP weapon can be used by state actors who wish to level the battlefield by neutralizing the great technological advantage enjoyed by U.S. military forces. EMP is also the ideal means, the only means, whereby rogue states or terrorists could use a single nuclear weapon to destroy the United States and prevail in the War on Terrorism or some other conflict with a single blow. The EMP Commission also warned that states or terrorists could exploit U.S. vulnerability to EMP attack for coercion or blackmail: ‘‘Therefore, terrorists or state actors that possess relatively unsophisticated missiles armed with nuclear weapons may well calculate that, instead of destroying a city or military base, they may obtain the greatest political-military utility from one or a few such weapons by using them—or threatening their use—in an EMP attack.’’

The EMP Commission found that states such as Russia, China, North Korea, and Iran have incorporated EMP attack into their military doctrines, and openly describe making EMP attacks against the United States. Indeed, the EMP Commission was established by Congress partly in response to a Russian nuclear EMP threat made to an official Congressional Delegation on May 2, 1999, in the midst of the Balkans crisis. Vladimir Lukin, head of the Russian delegation and a former Ambassador to the United States, warned: ‘‘Hypothetically, if Russia really wanted to hurt the United States in retaliation for NATO’s bombing of Yugoslavia, Russia could fire an SLBM and detonate a single nuclear warhead at high altitude over the United States. The resulting EMP would massively disrupt U.S. communications and computer systems, shutting down everything.’’

China’s military doctrine also openly describes EMP attack as the ultimate asymmetric weapon, as it strikes at the very technology that is the basis of U.S. power. Where EMP is concerned, ‘‘The United States is more vulnerable to attacks than any other country in the world’’: ‘‘Some people might think that things similar to the ‘Pearl Harbor Incident’ are unlikely to take place during the information age. Yet it could be regarded as the ‘Pearl Harbor Incident’ of the 21st Century if a surprise attack is conducted against the enemy’s crucial information systems of command, control, and communications by such means as… electromagnetic pulse weapons… Even a superpower like the United States, which possesses nuclear missiles and powerful armed forces, cannot guarantee its immunity…In their own words, a highly computerized open society like the United States is extremely vulnerable to electronic attacks from all sides. This is because the U.S. economy, from banks to telephone systems and from power plants to iron and steel works, relies entirely on computer networks… When a country grows increasingly powerful economically and technologically…it will become increasingly dependent on modern information systems… The United States is more vulnerable to attacks than any other country in the world.’’

Iran—the world’s leading sponsor of international terrorism—in military writings openly describes EMP as a terrorist weapon, and as the ultimate weapon for prevailing over the West: ‘‘If the world’s industrial countries fail to devise effective ways to defend themselves against dangerous electronic assaults, then they will disintegrate within a few years… American soldiers would not be able to find food to eat nor would they be able to fire a single shot.’’

The threats are not merely words. The EMP Commission assesses that Russia has, as it openly declares in military writings, probably developed what Russia describes as a ‘‘Super-EMP’’ nuclear weapon—specifically designed to generate extraordinarily high EMP fields in order to paralyze even the best protected U.S. strategic and military forces. China probably also has Super-EMP weapons. North Korea too may possess or be developing a Super-EMP nuclear weapon, as alleged by credible Russian sources to the EMP Commission, and by open-source reporting from South Korean military intelligence. But any nuclear weapon, even a low-yield first-generation device, could suffice to make a catastrophic EMP attack on the United States. Iran, although it is assessed as not yet having the bomb, is actively testing missile delivery systems and has practiced launches of its best missile, the Shahab–III, fuzed for high-altitude detonations, in exercises that look suspiciously like training for making EMP attacks. As noted earlier, Iran has also practiced launching a Scud from a ship—the Scud being the world’s most common missile, possessed by over 60 nations, terrorist groups, and private collectors.

A Scud might be the ideal choice for a ship-launched EMP attack against the United States intended to be executed anonymously, to escape any last-gasp U.S. retaliation. Unlike a nuclear weapon detonated in a city, a high-altitude EMP attack leaves no bomb debris for forensic analysis, no perpetrator ‘‘fingerprints.’’ Under present levels of preparedness, communications would be severely limited, restricted mainly to those few military communications networks that are hardened against EMP.

Today’s microelectronics are the foundation of our modern civilization, but are over 1 million times more vulnerable to EMP than the far more primitive and robust electronics of the 1960s, which proved vulnerable during nuclear EMP tests of that era. Tests conducted by the EMP Commission confirmed empirically the theory that, as modern microelectronics become ever smaller and more efficient, and operate ever faster on lower voltages, they also become ever more vulnerable, and can be destroyed or disrupted by much lower EMP field strengths.

Microelectronics and electronic systems are everywhere, and run virtually everything in the modern world. All of the civilian critical infrastructures that sustain the economy of the United States, and the lives of 310 million Americans, depend, directly or indirectly, upon electricity and electronic systems.

Of special concern is the vulnerability to EMP of the Extra-High-Voltage (EHV) transformers that are indispensable to the operation of the electric grid. EHV transformers drive electric current over long distances, from the point of generation to consumers (from the Niagara Falls hydroelectric facility to New York City, for example). The electric grid cannot operate without EHV transformers—which could be destroyed by an EMP event. The United States no longer manufactures EHV transformers; they must be imported from Germany or South Korea, the only two nations in the world that manufacture such transformers for export. Each EHV transformer must be custom-made for its unique role in the grid, and a single EHV transformer typically requires 18 months to manufacture. The loss of large numbers of EHV transformers to an EMP event would plunge the United States into a protracted blackout lasting years, with perhaps no hope of eventual recovery, as the society and population probably could not survive for even 1 year without electricity.

Another key vulnerability to EMP are Supervisory Control And Data Acquisition systems (SCADAs). SCADAs essentially are small computers, numbering in the millions and ubiquitous everywhere in the critical infrastructures, that perform jobs previously performed by hundreds of thousands of human technicians during the 1960s and before, in the era prior to the microelectronics revolution. SCADAs do things like regulating the flow of electricity into a transformer, controlling the flow of gas through a pipeline, or running traffic control lights. SCADAs enable a few dozen people to run the critical infrastructures for an entire city, whereas previously hundreds or even thousands of technicians were necessary. Unfortunately, SCADAs are especially vulnerable to EMP.

EHV transformers and SCADAs are the most important vulnerabilities to EMP, but are by no means the only vulnerabilities. Each of the critical infrastructures has its own unique vulnerabilities to EMP:

The National electric grid, with its transformers and generators and electronic controls and thousands of miles of power lines, is a vast electronic machine—more vulnerable to EMP than any other critical infrastructure. Yet the electric grid is the most important of all critical infrastructures, and is in fact the keystone supporting modern civilization, as it powers all the other critical infrastructures. As of now it is our technological Achilles Heel. The EMP Commission found that, if the electric grid collapses, so too will collapse all the other critical infrastructures. But, if the electric grid can be protected and recovered, so too all the other critical infrastructures can also be restored.

Transportation is a critical infrastructure because modern civilization cannot exist without the goods and services moved by road, rail, ship, and air. Cars, trucks, locomotives, ships, and aircraft all have electronic components, motors, and controls that are potentially vulnerable to EMP. Gas stations, fuel pipelines, and refineries that make petroleum products depend upon electronic components and cannot operate without electricity. Given our current state of unpreparedness, in the aftermath of a natural or nuclear EMP event, transportation systems would be paralyzed.

Traffic control systems that avert traffic jams and collisions for road, rail, and air depend upon electronic systems that the EMP Commission discovered are especially vulnerable to EMP.

Communications is a critical infrastructure because modern economies and the cohesion and operation of modern societies depend to a degree unprecedented in history on the rapid movement of information—accomplished today mostly by electronic means. Telephones, cell phones, personal computers, television, and radio are all directly vulnerable to EMP, and cannot operate without electricity. Satellites that operate at Low-Earth-Orbit (LEO) for communications, weather, scientific, and military purposes are vulnerable to EMP and to collateral effects from an EMP attack. Within weeks of an EMP event, the LEO satellites, which comprise most satellites, would probably be inoperable.

Banking and finance are the critical infrastructure that sustain modern economies. Whether it is the stock market, the financial records of a multinational corporation, or the ATM card of an individual—financial transactions and record keeping all depend now at the macro- and micro-level upon computers and electronic automated systems. Many of these are directly vulnerable to EMP, and none can operate without electricity. The EMP Commission found that an EMP event could transform the modern electronic economy into a feudal economy based on barter.

Food has always been vital to every person and every civilization. The critical infrastructure for producing, delivering, and storing food depends upon a complex web of technology, including machines for planting and harvesting and packaging, refrigerated vehicles for long-haul transportation, and temperature-controlled warehouses. Modern technology enables over 98 percent of the U.S. population to be fed by less than 2 percent of the population. Huge regional warehouses that resupply supermarkets constitute the National food reserves: enough food to feed the Nation for 30–60 days at normal consumption rates. That warehoused food is preserved by refrigeration and temperature-control systems that typically have only enough emergency electrical power (diesel or gas generators) to last about 3 days on average. Experience with storm-induced blackouts proves that when these big regional food warehouses lose electrical power, most of the food supply rapidly spoils. Farmers, less than 2 percent of the population as noted above, cannot feed 310 million Americans if deprived of the means that currently make this technological miracle possible.

Water too has always been a basic necessity to every person and civilization, even more crucial than food. The critical infrastructure for purifying and delivering potable water, and for disposing of and treating waste water, is a vast networked machine powered by electricity that uses electrical pumps, screens, filters, paddles, and sprayers to purify and deliver drinkable water, and to remove and treat waste water. Much of the machinery in the water infrastructure is directly vulnerable to EMP. The system cannot operate without vast amounts of electricity supplied by the power grid. A natural or nuclear EMP event would immediately deprive most of the U.S. National population of running water. Many natural sources of water—lakes, streams, and rivers—would be dangerously polluted by toxic wastes from sewage, industry, and hospitals that would backflow from or bypass wastewater treatment plants, that could no longer intake and treat pollutants without electric power. Many natural water sources that would normally be safe to drink, after an EMP event, would be polluted with human wastes including feces, industrial wastes including arsenic and heavy metals, and hospital wastes including pathogens.

Emergency services such as police, fire, and hospitals are the critical infrastructure that upholds the most basic functions of government and society—preserving law and order, protecting property and life. Experience from protracted storm-induced blackouts has shown, for example in the aftermath of Hurricanes Andrew and Katrina, that when the lights go out, communications systems fail, and there is no gas for squad cars, fire trucks, and ambulances, the worst elements of society and the worst human instincts rapidly take over. The EMP Commission found that, given our current state of unpreparedness, a natural or nuclear EMP event could create anarchic conditions that would profoundly challenge the existence of social order.

MICHAEL J. FRANKEL, Senior Scientist, Penn State University, Applied Research Laboratory  

Another important analytic insight provided by the Commission was its recognition of, and warning about, the prospect of simultaneous failures across the system. All engineers design their systems against single-point failure.

Nobody designs against multiple failures. Here and there you may find some engineers who design against two simultaneous failures. But these failures can be caused not just by EMP; they could also be caused by cyber attack. The important point is that if there are simultaneous failures over large areas, the Commission’s analysis was that systems are very likely to fail, and restoration will take a very long time.

While not often considered in tandem, it is more correct to consider EMP vulnerabilities as one end of a continuous spectrum of cyber threats to our electronic-based infrastructures. They share both an overlap in the effects produced—the failure of electronic systems to perform their function and possibly incurring actual physical damage—as well as their mode of inflicting damage. They both reach out through the connecting electronic distribution systems, and impress unwanted voltages and currents on the connecting wires. In the usual cyber case, those unwanted currents contain information—usually in the form of malicious code—that instructs the system to perform actions unwanted and unanticipated by its owner. In the EMP case, the impressed signal does not contain coded information. It is merely a dump of random noise which may flip bit states, or damage components, and also ensures the system will not behave in the way the owner expects.

This electronic noise dump may thus be thought of as a ‘‘stupid cyber’’ attack. When addressing the vulnerability of our infrastructures to the cyber threat, it is important that we not neglect the EMP end of the cyber threat spectrum. And there is another important overlap with the cyber threat. With the grid on the cusp of technological change in the evolution to the ‘‘smart grid’’, the proliferation of sensors and controls that will manage the new grid architecture must be protected against cyber attack at the same time they must be protected against EMP. Cyber and EMP threats have the unique capability to precipitate multiple simultaneous failures of these many new control systems over a widely distributed geographical area, and such simultaneous failures, as previously discussed, are likely to signal a wider and more long-lasting catastrophe.

Another important legacy of the EMP Commission was to first highlight the danger to our electric grid from solar storms, which may impress large—and effectively DC—currents on the long runs of conducting cable that make up the distribution system. While this phenomenon has long been known, and protected against, by engineering practices in the power industry, the extreme 100-year storm first analyzed by the Commission is now widely recognized to represent a major danger to our National electrical system, for which adequate protective measures have not been taken and whose consequences—the likely collapse of much of the National grid, possibly for a greatly extended period—may rightly be termed catastrophic. At this point, the only scientific controversy attending the likelihood of our system being subjected to a so-called super solar storm concerns its timing. These events have occurred within the last century or so; they will occur again. We should be ready.

The final report of the EMP Commission contained 75 recommendations to improve the survivability, operability, resilience, and recovery of all the critical infrastructures, and in particular of the most key of all, the electrical grid. Most of these recommendations were pointed toward the Department of Homeland Security. While there have been some conversations, it has been hard to detect much active resonance issuing from the Department. They have not, as far as I know, even designated EMP as one of their 15 disaster scenarios for advanced planning purposes. And this at a time when they do include a low-altitude nuclear disaster—certainly disastrous, but not one that would produce wide-ranging EMP.


For severe space weather, the most recent events occurred roughly 90 and 150 years ago, but the timing of the next such occurrence, as with all extreme natural disasters, is unknown.
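[The point that such a storm is effectively certain while its timing is unknown can be put in simple probabilistic terms. The sketch below is added by the editor and is not part of the hearing record; it treats a ''100-year storm'' as a Poisson arrival process with an assumed mean return period of 100 years, a common but simplified model.]

```python
import math

def prob_at_least_one(years: float, mean_return_period: float = 100.0) -> float:
    """P(at least one event within `years`) for Poisson arrivals with
    the given mean return period: 1 - exp(-years / period)."""
    return 1.0 - math.exp(-years / mean_return_period)

for horizon in (1, 10, 100, 1000):
    print(f"chance of a 100-year storm within {horizon:>4} years: {prob_at_least_one(horizon):.1%}")
```

[Under this model the chance within any single year is about 1 percent, about 63 percent within a century, and essentially certain within a millennium, matching the ''it could happen next year, it could be 100 years, but probably not 1,000 years'' framing given in testimony below.]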

Mr. PERRY. If we do harden and protect the grid, given that EMP potentially affects all electric and electronic devices, then even though we harden the grid and power stations and can produce power and so on and so forth, will the systems in individual homes and businesses, like refrigerators and heating and cooling systems, be affected to the point where they will all need to be replaced? Or, even while we have power to our homes, will none of the lights come on, and so on and so forth?

Mr. PRY. It depends on the scenario. If you are talking about a geomagnetic storm, the wavelength of that component, which we call E3, or magnetohydrodynamic EMP, is so long that it needs to couple into long lines, like power lines and railroad tracks. It won’t couple into automobiles, refrigerators, personal computers, and things of that sort. So under that scenario, yes, if you basically keep the electric grid on, you will be able to recover the rest of the society pretty promptly. In the case of a nuclear EMP, there is an electromagnetic shock wave that we call E1. This can couple into personal computers, automobiles, and the like, and so you will have deeper societal damage; but then again, it depends on the kind of weapon used. If it is a primitive, first-generation nuclear weapon, it is not likely to do that across the whole country. It would be more limited to a several-State-size region. If it is the worst-case kind of nuclear weapon, like a super EMP weapon, which is what we think Russia, China, and probably North Korea have, then you are talking about a scenario with massive, deep damage to personal computers, refrigerators, lights, and the rest. But if you don’t have the bulk power system surviving, there is no hope of recovery under those circumstances. Under that worst-case scenario, what you are doing is mitigating a catastrophe and turning it into a manageable disaster, a situation where you won’t have massive loss of life, hopefully.
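[Mr. Pry’s distinction between E3 and E1 coupling can be illustrated with a back-of-envelope sketch: the quasi-DC voltage a geomagnetic (E3) field drives along a conductor is roughly the geoelectric field times the conductor length, which is why a 200 km transmission line is at risk while a 2-meter appliance cord is not. The field value used below is an illustrative assumption, not a measured figure.]

```python
def induced_voltage_v(e_field_v_per_km: float, line_length_km: float) -> float:
    """First-order estimate of the quasi-DC voltage an E3/geomagnetic
    field induces along a conductor: V = E * L for a uniform field."""
    return e_field_v_per_km * line_length_km

# Assume a severe-storm geoelectric field of 5 V/km (illustrative):
print(induced_voltage_v(5.0, 200.0))    # 200 km power line  -> 1000.0 V
print(induced_voltage_v(5.0, 0.002))    # 2 m appliance cord -> 0.01 V
```

[Five orders of magnitude separate the two: long lines integrate the field, short cords do not, which is the Commission’s reason for concentrating protection on the bulk power system first.]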

Mr. PERRY. How would you rate the likelihood that the United States will face an EMP event from either a high-altitude electromagnetic pulse, a HEMP, or a massive solar storm?

Mr. FRANKEL. I will take that one. You guys can as well. I think that the likelihood that the United States will face at some point a so-called massive solar storm, and thus our entire system will be under the footprint, if you will, of a massive solar storm, is about 100 percent. It will happen. The uncertainty here, I believe, is the time constant. It could happen next year, it could be 100 years, but probably not 1,000 years. The probability that we will be faced with a nuclear HEMP I would say is unknown. I don’t call it high. I don’t call it low. I would say it is an unknown probability.

Ms. CLARKE. I just wanted to clarify for the record from Dr. Pry and Dr. Frankel. I see that both of you served as staff on the EMP Commission in 2004 or thereabouts, but I am trying to get a sense of what organizations you are representing today, and how can we learn more about those organizations?

Mr. FRANKEL. I am representing only my status as a senior scientist at Penn State University.

Mr. PRY. We both served on the Congressional EMP Commission through its life, from 2001 to 2008. I am currently the executive director of the Task Force on National and Homeland Security, which was an effort to continue the EMP Commission, because the Commissioners, including the chairman, believed it was terminated prematurely before its work was completed. So this task force is an attempt to continue the EMP Commission in some way. Dr. Graham, for example, who is the chairman of that Commission, is the chairman of my task force, and I am here today representing the task force.

Mr. PERRY. Dr. Pry. You mentioned in your testimony a satellite passing over the Washington-New York corridor. I would like you to describe the importance or the potential importance of that, and in that context also please describe the National electric grid interconnection, what regions of the country are most vulnerable to grid collapse as a result of EMP attack.

Mr. PRY. Well, the KSM–3 satellite was orbited by North Korea in December 2012, about 3 months before we had our gravest nuclear crisis with North Korea, when in February 2013 they conducted their third nuclear test, violating international law. When the United States and the international community moved to impose additional sanctions to punish North Korea for this, they started threatening to make nuclear strikes against the United States. There was a nuclear crisis so grave during the period from February 12 through the end of April that the President was sending B–2 bombers over the demilitarized zone to do practice bombing runs and demonstration exercises, and strengthened the National missile defense, including moving a THAAD interceptor to Guam, just in case Kim Jong-Un tried to deliver on these nuclear threats. In the midst of this crisis, the KSM–3 was still orbiting, and its orbit followed the exact trajectory that the Soviets had come up with in the Cold War for a secret nuclear weapon to conduct a surprise nuclear attack, called a fractional orbital bombardment system (FOBS). It is basically a space launch vehicle that carries a nuclear weapon disguised as a satellite, and instead of launching over the North Pole and following a normal ballistic trajectory toward the United States, it launches south, crosses over the south polar region, and approaches from the south, because we don’t have any ballistic missile early warning radars or interceptors in that direction; we are blind to the south and defenseless there, and so an attacker would be able to detonate a warhead and do an EMP attack and catch us by surprise. That was the plan during the Cold War, and the trajectory and the altitude of this satellite were precisely the same as the kind of FOBS that the Soviets had developed.
Between April 8 and April 16, it passed from the center of the United States, and on the 16th it was passing over the Washington, DC/New York corridor, which is the ideal location for putting down a peak field. If you look at a map of where our EHV transformers are located, that is where the largest numbers of them are; the map is almost a solid block of red, the EHV transformers are so densely concentrated in that area. If you wanted to take down the eastern grid, that would be the best place to place a peak EMP field. Taking out the eastern grid is really all you have to do, because 75 percent of our power is generated in the eastern grid. The western grid is the next most important, and the Texas grid is the third most important. But that was the KSM–3 threat and its relationship to the grid system.

Mr. PERRY. Speaking of those, the transformers, it has been noted that the Extremely High-Voltage, the EHV transformers which are indispensable to the electric grid, are expensive and hard to replace. If you know, what is the lead time for manufacturing new or replacement transformers, and given that there are limited manufacturers in the United States, where are the suppliers located?

Mr. PRY. There are two places that manufacture these for export, South Korea and Germany, and we are still dependent on them.

There is a DHS briefing going around that says we have limited capabilities to manufacture EHV transformers in the United States. In fact, we currently don’t have a demonstrated capability to manufacture these transformers in the United States at all. They have to be made by hand, the way they were made back in the day of Nikola Tesla, the inventor of the EHV transformer. Every one of them is custom made, and every one of them has a unique role to play in the grid. They aren’t mass produced. It is not easy. There is a lot of artisanship that goes into the making of these transformers.

Brazil tried to become self-sufficient by making its own EHV transformers a decade ago, and it took them 5 years before they were able to even attempt to make their first transformers, and those didn’t perform well. So Brazil gave up on that, and it has to import them.

So it remains to be seen whether the United States can actually manufacture any of its own EHV transformers. We haven’t manufactured one, put it out in the field, and seen whether it lasts and stands up. It takes 18 months under normal conditions to build one of these transformers.

Ms. CLARKE. Thank you, Mr. Chairman. I just wanted to add to the DHS question that I had raised earlier. One of the observations from the Sandy event was that an unintended consequence of the electricity going out was that people forgot that fuel stations are run by electricity, and so we ended up having a fuel crisis at the same time. So there is a collateral damage piece to this that I hope is acknowledged as we go through this discussion about what happens when, in just a short period of time, electrical shortages occur or the grid goes out. Even if you were trying to move physical assets, if you don’t prepare for things like fuel stations that are run by electricity, you will have a massive issue.

Mr. FRANKS. We realize that if indeed we did lose our grid, in a worst-case scenario, and we are not projecting a worst-case scenario, but if it did happen, the aftermath, where society would begin to tear itself apart, seems to be the most frightening aspect of it to me. So the cost of doing nothing is significantly high, and I think you have demonstrated that well. But could you give us a sense of how expensive it would be to harden our bulk power system enough to recover from a major event; in other words, where we keep our main components intact and we can bring our grid back on-line? I have been told that $2 to $3 billion over 5 years might do it, and that might be less than $1 per year per ratepayer. Am I accurately expressing that?

Mr. PRY. Yes. In fact, your estimate is high compared to the Congressional EMP Commission’s estimate, which was that it would cost about $2 billion over 3 to 5 years to harden the bulk power system, and $10–20 billion over that same period, you know, would protect all of the critical infrastructures.
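[As a sanity check on these cost figures, the per-ratepayer arithmetic can be sketched as follows. The 150 million customer count is an illustrative assumption, roughly the number of U.S. electric meters, not a number from the hearing; with it, the $2 billion bulk-power figure comes out to a few dollars a year rather than under $1, so the exact burden depends on the amortization period and on how ratepayers are counted.]

```python
def annual_cost_per_ratepayer(total_cost_usd: float, years: float,
                              ratepayers: float) -> float:
    """Spread a one-time grid-hardening cost evenly across years and ratepayers."""
    return total_cost_usd / (years * ratepayers)

# $2 billion over 5 years, across an assumed 150 million electric customers:
cost = annual_cost_per_ratepayer(2e9, 5, 150e6)
print(f"${cost:.2f} per ratepayer per year")  # $2.67

# The $10-20 billion all-infrastructures figure, same assumptions:
cost_all = annual_cost_per_ratepayer(20e9, 5, 150e6)
print(f"${cost_all:.2f} per ratepayer per year")
```

[Either way, the order of magnitude is a few dollars per customer per year, which is the point both witnesses are making.]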

Mr. BECK. The U.S. electric grid is the most complicated in the world, both by physical design and by overlapping regulatory authority: 50 States, a Federal Government, 3,500 electric companies, et cetera. When we did the international study, drawing lessons learned was easy, because you could look at Finland, which has one company and one regulator, a much easier thing to deal with. Here, all of that does make it very difficult, and so, in all honesty, and not to try to duck the question, the answer is somewhat complicated, because there are all these agencies, and there isn’t just one agency that is in charge.

Mr. FRANKEL. Yes, certainly the Department of Homeland Security, I think, has the primary responsibility, but we should also not forget the Department of Energy. They have an Office of Energy Assurance, and they should also be playing some role. Right now I don’t discern exactly what it is, but somewhere between those two, with DHS in the primary role, I think that is where you look for leadership. I want to at least mention the Department of Defense, not in a leadership role in this instance, but they are doing a lot of relevant work developing hardening techniques. They are worried about their own networks and things of that sort, but they have very important technology support to contribute. In the end, though, it is not their responsibility, it is not their mission, and they are not going to do it. You need to look at those two Departments for leadership.

Mr. PRY. I agree with what has been said. The Department of Homeland Security, especially when you are looking at the role from the Critical Infrastructure Protection Act for planning, training, and resource allocation for emergency planners and responders. Within the Department of Homeland Security, the logical regulatory authority to work most closely with the electric grid should be the U.S. Federal Energy Regulatory Commission, the U.S. FERC, and this would be addressed by the SHIELD Act that Mr. Franks is sponsoring in front of the House Energy and Commerce Committee. I think this is almost equally important with the Critical Infrastructure Protection Act in terms of its passage, because the reality, and the reason we have this problem, is that the electric power industry exists in a 19th Century regulatory environment. There is no Federal agency that has the kind of regulatory authority over the electric power industry that, for example, the Federal Aviation Administration has over the airline industry. I think all Americans, even Tea Party Republicans, would agree that we need an FAA, so that you have independent inspectors who will go out and see whether there is metal fatigue in the wings of an aircraft and ground that airplane when it can’t fly, and so that if an airplane crashes, you have an FAA to inspect the crash and find out what happened so that it never happens again. We do this because hundreds of lives are at stake, and we need to maintain the public safety. That is why we have an FAA. But the U.S. FERC doesn’t have that power. It can only ask NERC, which represents the industry, which previously was a trade association, by the way, and which unofficially is a lobby for the electric power industry; NERC is the one that is in charge, and the industry regulates itself through NERC. The FERC can only ask them to come up with a plan.

The great 2003 Northeast blackout was caused by a falling tree branch that triggered a cascading failure, and it took NERC 10 years to come up with a vegetation management plan. The same with cyber: they were asked for a plan some 5 years before they started moving on that. So U.S. FERC, I say, would be the tip of the spear for dealing with the electric power industry.

Mr. Pry. An electromagnetic pulse (EMP) is a super-energetic radio wave that can destroy, damage, or cause the malfunction of electronic systems by overloading their circuits. EMP is harmless to people biologically, passing through their bodies without injury, like a radio wave. But by damaging electronic systems that make modern society possible, that enable computers to function and airliners to fly for example, EMP can cause mass destruction of property and life.

It would take about 3.5 years to harden the grid.

Thousands of emergency planners and first responders at the Federal, State, and local level want to protect our Nation and their States and communities from the EMP threat, but they are seriously hindered and even prohibited from doing so because the EMP threat is not among the 15 canonical National planning scenarios utilized by the Department of Homeland Security.

House 112-115. September 12, 2012. The EMP threat: Examining the consequences. House of Representatives. 64 pages.

Mr. LUNGREN. An EMP is a burst of electromagnetic radiation typically generated by a high-altitude nuclear explosion or a non-nuclear device. Nuclear weapon EMPs are most effective when detonated at high altitude above the intended target. Depending on the yield of the weapon and the height of the explosion, nuclear EMPs can destroy large portions of the U.S. power and communications infrastructure.

Geomagnetic radiation generated by a naturally occurring solar storm can also damage the same infrastructure. An EMP attack would destroy the electronics and digital circuitry in the area of impact, thereby denying electric power to our homes, businesses, and military.

Our country is dependent on electricity to power our health, financial, transportation, and business systems. If our power system was ever lost for an extended period, according to Dr. William Graham, the chairman of the EMP Commission, it would have catastrophic and lethal consequences for our citizens and the economy. It would also potentially degrade our military defenses.

America’s digital dependence grows every year, and we rejoice in that. But the fact of the matter is that along with that dependence comes our EMP vulnerability. What I mean by that is America has gotten used to the digital world. It powers and is implicated in so much of our everyday life that if it were in fact attacked in a serious way, it would result, in some cases, in unforeseen consequences, because most people don’t think about them.

Computer simulations carried out in March 2010 by Oak Ridge National Laboratory demonstrated that an electromagnetic pulse from a nuclear device detonated at high altitude or a powerful solar storm could destroy or permanently damage major sections of our National power grid. According to this Oak Ridge study, the collapse of our power system could impact 130 million Americans, could require 4 to 10 years to fully recover, and could impose economic costs between $1 trillion and $2 trillion.

The National electric grid has almost no backup capability in the event of a power collapse from electromagnetic pulses. According to FERC testimony presented this morning, existing bulk power reliability standards don’t even address EMP vulnerabilities. In addition, with most of the Nation’s power system under private owners, who view an EMP event as unlikely (or so we are told), there has been little preparation for a long-term power collapse. Although the impact of an EMP event has been examined, studied, and debated, I am fearful that little progress seems to have been made in mitigating the EMP threat. Although the United States has conducted numerous exercises to test our readiness against natural events such as hurricanes, we have never conducted an exercise to help us prepare for the severe consequences of a National power outage from an EMP event. I am informed that the Defense Department takes this seriously and, therefore, has taken steps to protect much of its critical infrastructure from an EMP event. Either they are wasting a lot of money because it is not a serious threat, in which case we should stop them and save billions of dollars, or it is a serious threat to our National defense capabilities, and we ought to look in the same way at our domestic capabilities; that is, at what sustains not just our standard of living but, in some ways, the way of life of the American public.

I don’t want to be an alarmist on this. I want to be a realist on this. That is why we have asked a number of people to testify here today. My thought is that the more information, the greater awareness the American people have and that we as leaders have, the better we will be prepared to deal with this, as long as we understand what the true consequences are.


Electromagnetic pulse (EMP) is a serious threat to the continued existence of the United States as a major military, economic, and social power. Indeed, EMP is a major threat to the continued existence of the United States in any form.

High-altitude Electromagnetic Pulse (HEMP) is the generation of a very intense pulse of radio waves using a nuclear weapon or device exploded in space near the Earth. The radiation from the nuclear bomb excites and agitates the Earth’s ionosphere which generates a large zone of intense radio waves that can disable electronic equipment and communications equipment throughout the Nation.

A HEMP attack consisting of a single high-yield nuclear weapon exploded a couple of hundred miles above the United States would disable electronics and communications through most of the Nation. Most of our Nation’s electronic infrastructure uses solid-state electronics and microprocessors that are quite vulnerable to electromagnetic pulse. The failure of much of our electronics infrastructure would cause serious problems in supplying food, water, electric power, and communications to our population. In addition, the functions of business, government, and law enforcement would be greatly impaired. Panic, rioting, and the failure of law and order would probably occur.

I have devoted many years of my life to bringing the EMP threat to the attention of the Federal Communications Commission (FCC). Donald J. Schellhardt and I have submitted two formal petitions to the FCC calling for a Notice of Inquiry (NOI) and a Notice of Proposed Rule Making (NPRM) on EMP. Refer to Note 4. In addition, we have filed other formal comments with the Commission on this subject. The FCC has declined to take any positive action on EMP. I am rather puzzled that the FCC refuses to act to protect our communications infrastructure from EMP. The subject is certainly interesting and it would be desirable to avoid the great damage that would result from any EMP attack. There is ample evidence that EMP is a real and serious threat to the Nation. Certainly, if an EMP attack did occur, the Nation would not be friendly towards the decision makers who refused to protect against EMP attacks and their consequences.

HOSTILE NATIONS. We can all easily imagine several nations that would be quite happy if the United States were to collapse in response to an EMP attack. In their view, EMP would be a rather convenient method for deleting a major competitor. While launching a missile with a warhead from a ship is not an easy task, it is certainly easier than other methods of eliminating the United States. Also, the structure of the United States may become so shattered by an attack that other nations could actually colonize parts of the former United States.

AMATEUR RADIO can perform local and long-distance communications during and after these chaotic events. Congress should establish legislation that would allow amateur radio operators to establish minimum-sized amateur radio antennas despite opposition of homeowner associations, condominium managements, and rental landlords.

Mr. LUNGREN. We have several panels of distinguished witnesses before us today. The sole witness of our first panel is Congressman Trent Franks. He represents Arizona’s second Congressional district, serves on the Armed Services Committee and the Judiciary Committee, where he currently chairs the Constitutional Law Subcommittee. In addition, Congressman Franks serves as the co-chair of the Congressional EMP Caucus, and has studied this issue for several years.

HON. TRENT FRANKS (ARIZONA). As a Nation, we have spent billions of dollars over the years hardening our nuclear triad, our missile defense capabilities, and numerous other critical elements of our National security apparatus against the effects of electromagnetic pulse, particularly the type that might be generated by a high-altitude nuclear warhead detonation over our country by one of America’s enemies. However, our civilian grid, which the Defense Department relies upon for nearly 99 percent of its electricity needs, is completely vulnerable to the same kind of danger. This constitutes an invitation to certain enemies of the United States to use the asymmetric capability of EMP against us. There is now evidence that such strategies are being considered by certain of those enemies. We recently witnessed, as you said, Mr. Chairman, the chaos that attends a prolonged power outage when the derecho storm impacted the District of Columbia and the surrounding area. Our sick and elderly suffered without air conditioning. Grocery stores were unable to keep food fresh. Gas lines grew. Thankfully, the derecho had only a regional and limited impact.

In 2004 and 2008, the EMP Commission testified before the Armed Services Committee, of which I am a member, that the U.S. society and economy are so critically dependent upon the availability of electricity that a significant collapse of our grid precipitated by a major natural or manmade EMP event could result in catastrophic civilian casualties. This conclusion is echoed by separate reports recently compiled by the DOD, DHS, DOE, NAS, along with various other agencies and independent researchers.

While there are certainly those who believe that the likelihood of terrorists or rogue nations obtaining nuclear weapons and using them in an EMP attack is remote, the recent events of the Arab Spring, which our intelligence apparatus did not foresee, show us that regimes can change very quickly. Iran’s increasingly obvious efforts to gain nuclear weapons should serve as a grave and urgent warning to all of us.

Catalyzed by a major solar storm, a high-altitude nuclear blast, or a non-nuclear, device-induced Intentional Electromagnetic Interference, this invisible force of ionized particles has the capability to overwhelm and destroy our present electrical power grids and electrical equipment, including electronic communication networks, radio equipment, integrated circuits, and computers. The reality of the potentially devastating effects of sufficiently intense electromagnetic pulse on the electronic systems/sources of many of our critical defense and National security components is well-established, and beyond dispute.

Automated hardware is particularly important when one considers the shortcomings of procedural safety measures alone in response to an EMP event. According to solar weather experts, there is only 20–30 minutes’ warning from the time we predict a solar storm may affect us to the time it actually does. This is simply not enough time to implement procedures that will adequately protect the grid. Furthermore, these predictions are only accurate one out of three times. This places a crushing dilemma on industry, which must decide whether or not to heed the warning, knowing that a wrong decision either way could result in the loss of thousands or even millions of lives and massive legal ramifications beyond expression.

Because of new understandings of how EMP interacts with the Earth’s electromagnetic field, and of how it is intensified over a large land mass, we now believe that if a warhead with a nuclear yield of just 100 kilotons detonated at an altitude of 400 kilometers over America’s heartland, the resulting damage to our electric grid and infrastructure would be catastrophic across most of the continental United States. Such a result would be devastating to our electricity, transportation, water and food supply, medical care, financial networks, telecommunication and broadcasting systems, and our infrastructure in general. Under such a scenario, both military and productive capability would be devastated. The immediate and eventual impact, directly and indirectly, on the human population, especially in major cities, is unthinkable. It should be remembered that EMP was first considered as a military weapon during the Cold War as a means of paralyzing U.S. retaliatory forces. America’s EMP Commission began its 70-page executive summary by describing a one- or two-missile EMP attack as one of the few threats that look as if they could defeat the U.S. military.
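[The claim that a single burst at 400 kilometers altitude would blanket most of the continental United States follows from line-of-sight geometry alone: the E1 pulse illuminates everything within the horizon of the detonation point. A minimal sketch of that geometry, with no weapon physics involved:]

```python
import math

EARTH_RADIUS_KM = 6371.0

def emp_footprint_radius_km(burst_altitude_km: float) -> float:
    """Ground radius of the line-of-sight footprint for a burst at the
    given altitude: R * arccos(R / (R + h)), the tangent-circle arc length."""
    r = EARTH_RADIUS_KM
    return r * math.acos(r / (r + burst_altitude_km))

# A 400 km burst sees a circle roughly 2,200 km in radius, about
# 4,400 km across, comparable to the width of the continental U.S.:
print(round(emp_footprint_radius_km(400.0)))
```

[Field strength is not uniform across that footprint, but the geometry explains why altitude, more than yield, sets the size of the affected region.]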

Dr. William Graham, the chairman of the EMP Commission, testified before the U.S. House Armed Services Committee, and stated: ‘‘EMP is one of a small number of threats that can hold our society at risk of catastrophic consequences. …A determined adversary can achieve an EMP attack capability without having a high level of sophistication. For example, an adversary would not have to have long-range ballistic missiles to conduct an EMP attack against the United States. Such an attack could be launched from a freighter off the U.S. coast using a short- or medium-range missile to loft a nuclear warhead to high altitude. Terrorists sponsored by a rogue state could potentially execute such an attack without revealing their identity.’’ Dr. Graham has said that a major catastrophic EMP attack on the United States could cause an estimated 70–90 percent of our Nation’s population to become unsustainable.

It is impossible for me to even wrap my mind around that figure.

But for terrorists, this is their ultimate goal, and I believe EMP is their ultimate asymmetric weapon. In 1998, Osama bin Laden called it a religious duty for al-Qaeda to acquire nuclear weapons. U.S. Admiral Mike Mullen, the chairman of the Joint Chiefs of Staff, has stated: ‘‘My worst nightmare is terrorists with nuclear weapons. Not only do I know they are trying to get them, but I know they will use them.’’ This is indeed the greatest danger of all. If a rogue state like Iran steps over the nuclear threshold, rogue regimes and terrorists the world over will have access to these monstrous weapons.

Mahmoud Ahmadinejad again made it clear where he stands on Israel when he declared, ‘‘[Israel] is about to die and will soon be erased from the geographical scene.’’ Jewish author, Primo Levi, was once asked what he had learned from the Holocaust. He replied, ‘‘When a man with a gun says he’s going to kill you—believe him.’’

At this moment, Iranian President Mahmoud Ahmadinejad, a man who, in the same breath, both denies the Holocaust ever occurred, and then threatens to make it happen again, is arrogantly seeking a gun with which he vows to wipe the state of Israel off the map.

He has also said: ‘‘The time for the fall of the satanic power of the United States has come and the countdown to the annihilation of the emperor of power and wealth has started.’’ He has said point-blank, ‘‘The wave of the Islamist revolution will soon reach the entire world.’’ Unfortunately, he talks like a man who knows something the rest of us don’t. It is not enough to casually dismiss his fanatical rhetoric. When analyzing the nature of any threat, we must always seriously assess two things: a potential enemy’s intent and his corresponding capacity to carry out any such intent.

Mahmoud Ahmadinejad and his regime have stated very clearly their intent to see Israel wiped off the face of the earth and America and the West brought to their knees. Nuclear warheads could give them the capacity to effectively proceed in that endeavor.

Mr. Chairman and Members, these things should not surprise us. We are now 65 years into the nuclear age, and the ominous intersection of jihadist terrorism and nuclear proliferation has been inexorably and relentlessly rolling toward America and the free world for decades. But, when we add the dimension of asymmetric electromagnetic pulse attacks to that equation, we face a menace that may represent the gravest short-term threat to the peace and security of the human family in the world today.

Is a regime change in Pakistan possible? Will there be blowback from our involvement in Libya? What about the current crisis in Syria? Will North Korea ever supply or sell its nuclear technology or warheads to terrorists? Will Iran develop or obtain nuclear weapons? Iran’s increasingly obvious efforts to gain nuclear weapons should serve as a grave and urgent warning to all of us. If terrorists or rogue states do acquire nuclear weapons, hardening our electric grid would become a desperate priority for our Nation. However, that process will take several years, while a regime change takes only weeks and a missile launch only minutes. The fact that we are now 100% vulnerable means we should start securing our electric infrastructure now. Indeed, by reducing our vulnerability we may reduce the likelihood that terrorists or rogue states would attempt such an attack.

We should always remember that 7 decades ago, another murderous ideology arose in the world. The dark shadow of the Nazi swastika fell first upon the Jewish people of Germany. And because the world did not heed the warnings of men like Winston Churchill and respond to that evil in time, it began to spread across Europe until it lit the fires of World War II’s hell on earth which saw atomic bombs fall upon cities and over 50 million people dead worldwide.

History has repeatedly shown humanity to be susceptible to malignant dangers that approach inaudibly and nestle among us with innocuous countenance until a day of sudden calamity finds us empty-handed, broken-hearted, and without excuse.

Mr. LUNGREN. Where is the failure? Is the failure with the Congress? Is the failure with the Executive branch? Is the failure with critical infrastructure owners? If this is as serious as you suggest, as some of these reports suggest, the lack of attention to it is something that bewilders me. You have been involved in a lot of issues on the Armed Services Committee and so forth, and I am trying to figure out what is it that is lacking on this issue that does not garner the attention of the American people? In other words, is there a lack of consensus about the threat? Is there a serious question about whether this is a serious issue?

Mr. FRANKS. I would only suggest to you that when the EMP Commission came to the Armed Services Committee in 2004, I had been aware of EMP. My background is engineering. I had been aware of it, but I thought it was like something that could be catastrophic, but the chances of it happening were so remote. The testimony was that five other nations were developing this as an offensive capability. Certainly, the Soviet Union had a major EMP component in their nuclear strategy. So there is a … clear consensus of the danger this represents. However, when you go over into the civilian areas, it seemed like there is a general, sort of a lackadaisical, kind of a——

Mr. LUNGREN. Let me ask you about that, because I have found most people who are involved in critical infrastructure in the private sector are serious-minded folks. They do recognize the value of their assets. In most cases, when I am dealing with them on issues, I find them to be forward-thinking and to actually try and protect those assets. They articulate that in a way so that they can justify certain capital investments to their shareholders or their ratepayers. Well, let me ask you this: Do you find the attention to the protection of their assets that you believe to be necessary, and if not, why as the owners and protectors of those assets, is this not taken more seriously?

Mr. FRANKS. I think that is a good question. It has been something that has bewildered me to a degree. It seemed just a few years ago, as this became more well-known that there was a more serious—or at least a more recognizable response. It seemed like in the last year, there has been sort of a pushback in parts of industry. My concern is if they have credible, scientific bases for being unconcerned or not addressing it as vigorously as some of us think that it should be, then I would adjure them to bring that testimony and that evidence to the rest of us. Because I can suggest to you that I haven’t seen it. It may be that there is some concern on the part of major manufacturers of these large components, transformers and others, that are somewhat out of professional pride. That they either don’t want to recognize the danger or somehow they feel like that there would be some requirement of reengineering of some of these major components if they did. But I would suggest that the potential liability here is off the charts. The fix here—and this would probably be one of the more important points to point out—the fix here is fairly simple, at least in terms of protecting our electric-producing grid—not all the elements that are connected to it. That is a huge issue. But at least to be able to keep the lights on—electricity coming—that is a fairly easy fix.

The primary thing that the SHIELD Act addresses is to make sure that our major transformers in the 750 KV corridor are not destroyed; if they were, we would be in a catastrophic civilizational challenge where we wouldn’t have electricity and perhaps wouldn’t be able to restore it for months or even years. That is the worst-case scenario. The SHIELD Act is designed to prevent that. Some of these ancillary damages on cell phones, radios, things like that, it is difficult to mitigate against in a short-term fix. We have to harden as we go. But my contention is that if we harden those components against EMP as we rebuild and replace them, which we can do, it adds about 10 percent to the cost. Then we can eventually get past this vulnerability. But the main big vulnerability that we have right now is the potential damage to our major transformers that could be caused by either a high-altitude electromagnetic pulse or GMD.

Finally, I would just say that the worst-case scenario is so bad that rather than preparing for it, we must prevent it from ever occurring.

Joseph McClelland, Director, Office of Electric Reliability, Federal Energy Regulatory Commission (FERC).

Faced with a National security threat to reliability, there may be a need to act decisively in hours or days, rather than weeks, months, or years. That would not be feasible even under the expedited process. In the meantime, the bulk power system would be left vulnerable to a known National security threat. Moreover, existing procedures, including the expedited action procedure, could widely publicize both the vulnerability and the proposed solution, thus increasing the risk of hostile actions before the appropriate solutions are implemented.

In addition, a reliability standard submitted to the Commission by NERC may not be sufficient to address the identified vulnerability or threat. Since FERC may not directly modify a proposed reliability standard under section 215 and must either approve or remand it, FERC would have the choice of approving an inadequate standard and directing changes, which reinitiates a process that can take years, or rejecting the standard altogether. Under either approach, the bulk power system would remain vulnerable for a prolonged period.

Finally, the open and inclusive process required for standards development is not consistent with the need to protect security-sensitive information. For instance, a formal request for a new standard would normally detail the need for the standard as well as the proposed mitigation to address the issue, and the NERC-approved version of the standard would be filed with the Commission for review. This public information could help potential adversaries in planning attacks.

Regarding man-made events, EMP can also be generated by weapons. Equipment and plans are readily available that can generate high-energy bursts, termed ‘‘E1’’, that can damage or destroy electronics such as those found in control and communication systems on the power grid. These devices can be portable and effective, facilitating simultaneous coordinated attacks, and can be reused, allowing use against multiple targets. The most comprehensive man-made EMP threat is from a high-altitude nuclear explosion. It would affect an area defined by the ‘‘line-of-sight’’ from the point of detonation: the higher the detonation, the larger the area affected, and the more powerful the explosion, the stronger the EMP emitted. The first component of the resulting pulse, E1, occurs within a fraction of a second and can destroy control and communication electronics. The second component, termed ‘‘E2’’, is similar to lightning, which is well-known and mitigated by industry. Toward the end of an EMP event, the third element, E3, occurs. This causes the same effect as solar magnetic disturbances. It can damage or destroy power transformers connected to long transmission lines and cause voltage problems and instability on the electric grid, which can lead to wide-area blackouts. It is important to note that effective mitigation against solar magnetic disturbances and non-nuclear EMP weaponry provides effective mitigation against a high-altitude nuclear explosion.

In 2001, Congress established a commission to assess the threat from EMP, with particular attention to be paid to the nature and magnitude of high-altitude EMP threats to the United States; vulnerabilities of U.S. military and civilian infrastructure to such attack; capabilities to recover from an attack; and the feasibility and cost of protecting military and civilian infrastructure, including energy infrastructure.

In 2004, the EMP commission issued a report describing the nature of EMP attacks, vulnerabilities to EMP attacks, and strategies to respond to an attack. A second report was produced in 2008 that further investigated vulnerabilities of the Nation’s infrastructure to EMP. The reports concluded that both electrical equipment and control systems can be damaged by EMP. The reports also pointed out how the interdependencies among the various infrastructures could become vulnerabilities after an EMP. In particular, they point to the electrical infrastructure’s need of the communication and natural gas infrastructures.

In 1859, a major solar storm occurred, causing auroral displays and significant shifts of the Earth’s magnetic fields. As a result, telegraphs were rendered useless and several telegraph stations burned down. The impacts of that storm were muted because semiconductor technology did not exist at the time. Were the storm to happen today, according to an article in Scientific American, it could ‘‘severely damage satellites, disable radio communications, and cause continent-wide electrical black-outs that would require weeks or longer to recover from.’’3 Although storms of this magnitude occur rarely, storms and flares of lesser intensity occur more frequently. Storms of about half the intensity of the 1859 storm occur every 50 years or so according to the authors of the Scientific American article, and the last such storm occurred in November 1960, leading to world-wide geomagnetic disturbances and radio outages.

The power grid is particularly vulnerable to solar storms, as transformers are electrically grounded to the Earth and susceptible to damage from geomagnetically induced currents. The damage or destruction of numerous transformers across the country would result in reduced grid functionality and even prolonged power outages.

In March 2010, Oak Ridge National Laboratory (Oak Ridge) and its subcontractor Metatech released a study that explored the vulnerability of the electric grid to EMP-related events. This study was a joint effort contracted by FERC staff, the Department of Energy, and the Department of Homeland Security and expanded on the information developed in other initiatives, including the EMP commission reports. The series of reports provided detailed technical background and outlined which sections of the power grid are most vulnerable, what equipment would be affected, and what damage could result. Protection concepts for each threat and additional methods for remediation were also included along with suggestions for mitigation.
The results of the study support the general conclusion that EMP events pose substantial risk to equipment and operation of the Nation’s power grid and under extreme conditions could result in major long-term electrical outages. In fact, solar magnetic disturbances are inevitable with only the timing and magnitude subject to variability. The study assessed the 1921 solar storm, which has been termed a 1-in-100-year event, and applied it to today’s power grid. The study concluded that such a storm could damage or destroy up to 300 bulk power system transformers, interrupting service to 130 million people for a period of years.

In February 2012, NERC released its Interim Report: Effects of Geomagnetic Disturbances on the Bulk Power System. In it, NERC concluded that the most likely worst-case system impact from a severe geomagnetic disturbance is voltage instability and voltage collapse with limited equipment damage.

The existing reliability standards do not address EMP vulnerabilities. Protecting the electric generation, transmission, and distribution systems from severe damage due to an EMP-related event would involve vulnerability assessments at every level of electric infrastructure.


The Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack recommended in its final report that DHS ‘‘play a leading role in spreading knowledge of the nature of prudent mitigation preparations for EMP attack to mitigate its consequences.’’

EMPs can be high-frequency, similar to a flash of lightning or a spark of static electricity, or low-frequency, similar to an aurora-induced phenomenon. An EMP can spike in less than a nanosecond or can continue longer than 24 hours, depending on its source. The consequences of an EMP range from permanent physical damage to temporary system disruptions and can result in fires, electric shocks to people and equipment, and critical service outages. There are four general classes of EMP.

High-altitude EMP (HEMP) results from a nuclear detonation typically occurring 15 or more miles above the Earth’s surface. The extent of HEMP effects depends on several factors, including the altitude of the detonation, the weapon yield and design, and the electromagnetic shielding, or ‘‘hardening,’’ of assets. One high-altitude burst could blanket the entire continental United States and could cause widespread power outages and communications disruptions and possible damage to the electricity grid for weeks or longer.4 HEMP threat vectors can originate from a missile, such as a sea-launched ballistic missile; a satellite asset; or a relatively low-cost balloon-borne vehicle. A concern is the growing number of nation-states that in the past have sponsored terrorism and are now developing capabilities that could be used in a HEMP attack.

Source Region EMP (SREMP) is a burst of energy similar to HEMP but differs in that it is created when a nuclear weapon detonates at lower altitudes within the atmosphere. SREMP can occur when a detonation occurs on or near the ground, as would likely be the case in a terrorist nuclear device attack. A SREMP’s electromagnetic field is much more limited in range than that from HEMP; it would affect only a limited geographic area. SREMP can induce very high currents on power cables or metallic communications lines near the fireball, and it can send extreme spikes of energy great distances from the blast zone along these metal lines, potentially causing fires where these lines meet other infrastructures. In addition, the SREMP travels through the air and can damage or disrupt equipment connected to Ethernet cables, telephone lines, and power cords out to 70 miles or more. Electronic systems not connected to power cords or communications lines, such as a cell phone, are generally resistant to SREMP but become useless if the infrastructure that supports them is non-functional. While SREMP is not the primary reason a terrorist would detonate a nuclear weapon, it is important to note that all ground-based detonations create SREMP of sufficient magnitude to cause infrastructure disruptions, including detonation of an improvised nuclear device (a crude nuclear device that could be built from the components of a stolen weapon or from nuclear materials). Given the possible impacts of SREMP, such as secondary fires and the disruptions of power, communications, and other critical infrastructures, it is an important consideration in our Department’s planning to mitigate and respond to this type of attack.

Since the 1980s, our power grid control systems and information infrastructures have been growing in their reliance on the Ethernet and computers, which are much more vulnerable to E1 EMP than previous control and communications systems designs. Likewise, the power grid today is much more vulnerable to (E3 EMP) and solar storms than the grid of the 1970s and 80s due to the increasing network size and evolution to higher operating voltages.

Unlike HEMP and SREMP, which primarily disrupt Earth-based infrastructures, System Generated EMP (SGEMP) is a threat to space-based assets, such as satellites or a space station. SGEMPs originate from a nuclear weapon detonation above the atmosphere that sends out damaging X-rays that strike space systems. SGEMP and HEMP are similar in that they both originate from a high-altitude burst. The Department’s chief concern with SGEMP and other related high-altitude nuclear effects is that satellite or other space systems that support critical communications and navigation services, as well as essential intelligence functions, can be immediately disrupted. SGEMP and other related effects could also harm systems supporting any astronaut in space.

The fourth type of EMP is Non-Nuclear EMP, or NNEP. This type of EMP can be created by Radio Frequency Weapons (RFWs), devices designed to produce sufficient electromagnetic energy to burn out or disrupt electronic components, systems, and networks. RFWs can either be electrically driven, where they create narrowband or wideband microwaves, or explosively driven, where an explosive is used to compress a magnetic field to generate the pulse. Multiple nations have used RFWs since the 1960s to disable or jam security, communications, and navigation systems; induce fires; and disrupt financial infrastructures. Devices that can be used as RFWs have unintentionally caused aircraft crashes and near crashes, pipeline explosions, gas spills, computer damage, vehicle malfunctions, weapons explosions, and public water system malfunctions.5 The Department believes that much of the mitigation and planning we are doing for other types of EMP will help reduce our vulnerability to NNEP.


SOLAR WEATHER is created as a result of massive explosions on the sun that may shoot radiation towards the Earth. These effects can reach the Earth in as little as 8 minutes with Solar Flare X-rays or over 14 hours later with a Coronal Mass Ejection (CME) plasma hurricane. An extreme CME is the Department’s biggest Solar Weather concern. It could create low-frequency EMP similar to a megaton-class nuclear HEMP detonation over the United States, which could disrupt or damage the power grid, undersea cables, and other critical infrastructures. The United States experiences many solar weather events each year, but major storms that could significantly impact today’s infrastructures are uncommon; such storms occurred in 1921 and 1859 and possibly in several other years prior to the establishment of the modern power grid.

In the last 200 years, only the 1859 and 1921 solar superstorms are believed by experts to have exceeded the 4,000 nanoTesla/minute level over the United States. If one of these storms were to occur today, many experts believe they would likely damage key elements of the power grid and could cause very long-term power outages over much of the United States.

POTENTIAL IMPACTS TO CRITICAL INFRASTRUCTURE. Overall, EMP in its various forms can cause widespread disruption and serious damage to electronic devices and networks, including those upon which many critical infrastructures rely, such as communication systems, information technology equipment, and supervisory control and data acquisition (SCADA) modules. SCADA modules are used in infrastructure such as electric grids, water supplies, and pipelines. The disruptions to SCADA systems that could result from EMP range from SCADA control errors to actual SCADA equipment destruction. Secondary effects of EMP may harm people through induced fires, electric shocks, and disruptions of transportation and critical support systems, such as those at hospitals or sites like nuclear power plants and chemical facilities. EMP places all critical infrastructure sectors at risk. Those sectors that rely heavily on communications technology, information technology, the electric grid, or that use a SCADA system are particularly vulnerable. The complex interconnectivity among critical infrastructure sectors means that EMP incidents that affect a single sector will likely affect other sectors—potentially resulting in cascading failures. The interdependent nature of all 18 critical infrastructure sectors complicates the impact of the event and recovery from it.


I would also say that some of the information associated with the likelihood of an EMP being used would have to be done in a closed hearing.

REFERENCES (112-115, 2nd post)

The text of the Congressional Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack is available at the web site: www.empcommission.org.

This document confirms the serious impact of an EMP attack on the infrastructure of the United States.

Severe Space Weather Events—Understanding Societal and Economic Impacts—A Workshop Report, National Academy of Sciences, National Academies Press, 2008. Paperback, ISBN-10: 0-309-12769-6, ISBN-13: 978-0-309-12769-1. This document can be accessed on-line at the URL: http://www.nap.edu/catalog.php?record_id=12507.

Robert Schroeder, ‘‘Electromagnetic Pulse and Its Implications for EmComm’’, QST magazine, November 2009, pages 38 through 41. [The term EmComm refers to emergency communication.]

Petitions to the Federal Communications Commission by Donald J. Schellhardt and Nickolaus E. Leggett: Docket RM–5528, Request to Consider Requirements for Shielding and Bypassing Civilian Communications Systems from Electromagnetic Pulse (EMP) Effects. Docket RM–10330, Amendment of the Commission’s Rules to Shield Electronics Equipment Against Acts of War or Terrorism Involving Hostile Use of Electromagnetic Pulse (EMP).

Daniel N. Baker and James L. Green, ‘‘The Perfect Solar Superstorm’’, Sky & Telescope, February 2011, Vol. 121 No. 2, Pages 28–34.

Publications Dealing with the Protection of Civil Equipment and Systems from the Effects of HEMP and HPEM—Issued by the International Electrotechnical Commission (IEC) SC 77C.

Mark Clayton, ‘‘Is US Ready for a ‘Solar Tsunami’?’’, The Christian Science Monitor, June 27, 2011, Page 20.

H.R. 668, Secure High-voltage Infrastructure for Electricity from Lethal Damage Act (SHIELD Act). This bill was introduced on February 11, 2011. This bill addresses the subjects of solar geomagnetic storms and electromagnetic pulse (EMP) impacting the electric power industry.

‘‘Report of the Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack: Critical National Infrastructures,’’ April 2008, page 181. This report presents the results of the Commission’s assessment of the effects of a high-altitude EMP attack on our critical National infrastructures and provides recommendations for their mitigation.

Graham, William R., et al., Report of the Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack (2004).

Foster, John S., Jr., et al., Report of the Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack (2008).

Odenwald, Sten F. and Green, James L., Bracing the Satellite Infrastructure for a Solar Superstorm, Scientific American Magazine (Jul. 28, 2008).

Robert L. Schweitzer, LTG (ret) USA, ‘‘Radio Frequency Weapons: The Emerging Threat and Policy Implications,’’ Eagan, McAllister Associates, October 1998; see also: Overview of Evolving and Enduring Threats to Information Systems, National Communications System, August 2012.






Another reason to think oil production probably peaked in 2005

[ In this Kurt Cobb post, Texas oilman Jeffrey Brown explains why the story of oil production growth from 2005 to 2014 is probably wrong, because the increase came from lease condensate, not crude oil. If this is true, then, as Brown says, worldwide production of condensate “accounts for virtually all of the post-2005 increase in C+C [crude plus condensate] production.” This means almost all of the 4 million-barrel-per-day increase in world “oil” production from 2005 through 2014 may actually be lease condensate, and that crude oil production proper has been nearly flat during this period.

What follows are excerpts/paraphrasing from Kurt Cobb’s January 17, 2016 article The great condensate con: Is the oil glut just about oil? Alice Friedemann at energyskeptic.com ]

Texas oilman Jeffrey Brown has been pointing out to everyone that the supposed oversupply of crude oil isn’t quite what it seems. Yes, there is a large overhang of excess oil in the market. But how much of that oversupply is honest-to-god oil and how much is so-called lease condensate which gets carelessly lumped in with crude oil? And, why is this important to understanding the true state of world oil supplies?

Lease condensate consists of very light hydrocarbons which condense from gaseous into liquid form when they leave the high pressure of oil reservoirs and exit through the top of an oil well. This condensate is less dense than oil and can interfere with optimal refining if too much is mixed with actual crude oil. The oil industry’s own engineers classify oil as hydrocarbons having an API gravity of less than 45–the higher the number, the lower the density and the “lighter” the substance. Lease condensate is defined as hydrocarbons having an API gravity between 45 and 70.
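The API gravity cutoffs above can be expressed as a short calculation. A minimal sketch in Python, using the standard API gravity formula (141.5 divided by specific gravity at 60°F, minus 131.5) and the 45 and 70 cutoffs quoted in the article; the "natural gas liquid" label for anything above 70 is my own illustrative assumption, not a classification from the article:

```python
def api_gravity(specific_gravity: float) -> float:
    """Standard API gravity from specific gravity (relative to water at 60 deg F)."""
    return 141.5 / specific_gravity - 131.5

def classify(api: float) -> str:
    """Classify a liquid hydrocarbon using the cutoffs cited in the article."""
    if api < 45:
        return "crude oil"
    elif api <= 70:
        return "lease condensate"
    return "natural gas liquid"  # assumed label for anything lighter than condensate

# A typical medium crude (SG ~0.87) vs. a light condensate (SG ~0.75):
print(classify(api_gravity(0.87)))  # crude oil (API ~31)
print(classify(api_gravity(0.75)))  # lease condensate (API ~57)
```

Note the inversion that the article mentions: the higher the API number, the lower the density, so condensate sits above crude on the scale even though it is the lighter substance.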

Refiners are already complaining that so-called “blended crudes” contain too much lease condensate, and they are seeking out better crudes straight from the wellhead. Brown has dubbed all of this the great condensate con.

Brown points out that U.S. net crude oil imports for December 2015 grew from the previous December, according to the U.S. Energy Information Administration (EIA), the statistical arm of the U.S. Department of Energy. U.S. statistics for crude oil imports include condensate, but don’t break out condensate separately. Brown believes that with America already awash in condensate, almost all of those imports must have been crude oil proper.

Brown asks, “Why would refiners continue to import large–and increasing–volumes of actual crude oil, if they didn’t have to–even as we saw a huge build in [U.S.] C+C [crude oil plus condensate] inventories?”

Part of the answer is that U.S. production of crude oil has been declining since mid-2015. But another part of the answer is that what the EIA calls crude oil is actually crude plus lease condensate. With huge new amounts of lease condensate coming from America’s condensate-rich tight oil fields–the ones tapped by hydraulic fracturing or fracking–the United States isn’t producing quite as much actual crude oil as the raw numbers would lead us to believe. This EIA chart breaking down the API gravity of U.S. crude production supports this view. Exactly how much of America’s and the world’s presumed crude oil production is actually condensate remains a mystery. The data just aren’t sufficient to separate condensate production from crude oil in most instances.

Brown explains: “My premise is that U.S. (and probably global) refiners hit in late 2014 the upper limit of the volume of condensate that they could process” and still maintain the product mix they want to produce. That would imply that condensate inventories have been building faster than crude inventories and that the condensate is looking for an outlet.

That outlet has been blended crudes, that is, heavier crude oil blended with condensates to make it lighter and therefore something that fits the definition of light crude. Light crude is generally easier to refine and thus more valuable.

Trouble is, the blends lack the characteristics of nonblended crudes of comparable density (that is, the same API gravity), and refiners are discovering to their chagrin that the mix of products they can get out of blended crudes isn’t what they expect.

So, now we can try to answer our questions. Brown believes that worldwide production of condensate “accounts for virtually all of the post-2005 increase in C+C [crude plus condensate] production.” What this implies is that almost all of the 4 million-barrel-per-day increase in world “oil” production from 2005 through 2014 may actually be lease condensate. And that would mean crude oil production proper has been nearly flat during this period–a conjecture supported by record and near record average daily prices for crude oil from 2011 through 2014. Only when demand softened in late 2014 did prices begin to drop.

Here it is worth mentioning that when oil companies talk about the price of oil, they are referring to the price quoted on popular futures exchanges–prices which reflect only the price of crude oil itself. The exchanges do not allow other products such as condensates to be mixed with the oil that is delivered to holders of exchange contracts. But when oil companies (and governments) talk about oil supply, they include all sorts of things that cannot be sold as oil on the world market including biofuels, refinery gains and natural gas plant liquids as well as lease condensate. Which leads to a simple rule coined by Brown: If what you’re selling cannot be sold on the world market as crude oil, then it’s not crude oil.

The glut that developed in 2015 may ultimately be tied to some increases in actual, honest-to-god crude oil production. The accepted story from 2005 through 2014 has been that crude oil production has been growing, albeit at a significantly slower rate than the previous nine-year period–15.7 percent from 1996 through 2005 versus 5.4 percent from 2005 through 2014 according to the EIA. If Brown is right, we have all been victims of the great condensate con which has lulled the world into a sense of complacency with regard to actual oil supplies–supplies he believes have been barely growing or stagnant since 2005.
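The EIA growth figures quoted above can be converted to average annual rates to show how sharply growth slowed between the two nine-year periods. A quick check (the 15.7 and 5.4 percent totals are from the paragraph above; everything else is plain compound-growth arithmetic):

```python
def annualized(total_growth: float, years: int) -> float:
    """Average compound annual growth rate implied by total growth over a span."""
    return (1 + total_growth) ** (1 / years) - 1

# 1996-2005: 15.7% total; 2005-2014: 5.4% total; both nine-year spans.
print(f"1996-2005: {annualized(0.157, 9):.2%} per year")  # ~1.63% per year
print(f"2005-2014: {annualized(0.054, 9):.2%} per year")  # ~0.59% per year
```

So even the accepted story implies annual growth fell to roughly a third of its earlier pace; if Brown is right that the later increase was mostly condensate, the crude-only rate would be close to zero.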

“Oil traders are acting on fundamentally flawed data,” Brown told me by phone.

Brown points out that it took trillions of dollars of investment from 2005 through today just to maintain what he believes is almost flat production of oil. With oil companies slashing exploration budgets in the face of low oil prices, and production from existing wells worldwide declining at an estimated 4.5 to 6.7 percent per year, a recovery in oil demand might push oil prices much higher very quickly.
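Those decline rates compound quickly, which is why flat headline production requires so much ongoing investment. A rough illustration (the 4.5 to 6.7 percent range is from the paragraph above; the five-year horizon is an arbitrary choice for illustration):

```python
def remaining(decline_rate: float, years: int) -> float:
    """Fraction of existing-well production left after compounding annual decline."""
    return (1 - decline_rate) ** years

for rate in (0.045, 0.067):
    lost = 1 - remaining(rate, 5)
    print(f"{rate:.1%}/yr decline -> {lost:.0%} of existing production gone in 5 years")
```

In other words, somewhere between a fifth and almost a third of today's existing-well output would disappear within five years unless new wells replace it, which is the gap that slashed exploration budgets leave unfilled.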

That possibility is being obscured by the supposed rise in crude oil production in recent years that may just turn out to be an artifact of the great condensate con.
