Emerging threats of resource wars. U.S. House hearing

Goh Chun Teck, Lim Xian You, Sum Qing Wei, Tong Huu Khiem. 2011. Resource Wars. A hot-seat multiplayer game where players compete with each other for territories that generate resources such as coal, water, gold and gas. A player can sell resources for money, which he can use to purchase even more territories to grow his empire, or fight with other players to attempt to conquer their territories. National University of Singapore, CS2103 Projects AY10/11 Semester 1.

[ About half of the pages are images from Brigadier General John Adams “Remaking American security. Supply chain vulnerabilities & national security risks across the U.S. Defense Industrial Base”.  

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation,” 2015, Springer]

House 113-63. July 25, 2013. The Emerging threat of Resource Wars.  U.S. House of Representatives,  88 pages.

DANA ROHRABACHER, California. We import 750,000 tons of vital minerals and materials every year. An increasing global demand for supplies of energy and strategic minerals is sparking intense economic competition that could lead to counterproductive conflict.

A ‘‘zero sum world’’ where no one can obtain the means to progress without taking them from someone else is inherently a world of conflict.

Additional problems arise when supplies are located in areas where production could be disrupted by political upheaval, terrorism or war.

When new sources of supply are opened up, as in the case of Central Asia, there is still fear that there is not enough to go around and thus conflict emerges.   The wealth that results from resource development and the expansion of industrial production increases power just as it uplifts economies and uplifts the standards of peoples.

This can feed international rivalry on issues that go well beyond economics. We too often think of economics as being merely about ‘‘business’’ but the distribution of industry, resources and technology across the globe is the foundation for the international balance of power and we need to pay more attention to the economic issues in our foreign policy and what will be the logical result of how we deal with those economic and those natural resource issues.

The control of access to resources can be used as political leverage, as we have seen with Russia and China. They both have demonstrated that. Indeed, China is engaged in an aggressive campaign to control global energy supply chains and to protect its monopoly in rare earth elements. This obviously indicates that Beijing is abandoning its ‘‘peaceful rise’’ policy. This is not an unexpected turn of events given the brutal nature of the Communist Chinese regime.

Who owns the resources, who has the right to develop them, where they will be sent and put to use, and who controls the transport routes from the fields to the final consumers are issues that must be addressed. Whether the outcomes result from competition or coercion, from market forces or state command, we will be determining how, and whether, we achieve a world of peace and an acceptable level of prosperity.

My father joined the Marines to fight in World War II, and it is very clear that natural resources had a great deal to do with the Japanese strategies that led to the Second World War, so some of our witnesses will be talking to us about issues of that significance.

WILLIAM KEATING, MASSACHUSETTS. Today’s hearing topic provides us with an opportunity to look beyond Europe and Eurasia and examine the global impact of depleting resources, climate change, an expanding world population, and the accompanying social unrest.

In March, for the first time, the Director of National Intelligence, James R. Clapper, listed ‘‘competition and scarcity involving natural resources’’ as a national security threat on a par with global terrorism, cyber war, and nuclear proliferation.

He also noted that ‘‘terrorists, militants, and international crime groups are certain to use declining local food security to gain legitimacy and undermine government authority’’ in the future. I would add that the prospect of scarcities of vital resources including energy, water, land, food, and rare earth elements in itself would guarantee geopolitical friction.

Now add into the mix lone wolves and extremists who exploit these scenarios, and the domestic relevance of today’s conversation is clear.

Further, it is no secret that threats are more interconnected today than they were 15 years ago. Events which at first seem local and irrelevant have the potential to set off transnational disruptions and affect U.S. national interests. We saw this dynamic play out off the coast of Somalia where fishermen were growing frustrated from lack of government enforcement against vessels harming their stock and where they took up arms and transitioned into dangerous gangs of pirates. Now violent criminals threaten Americans in multinational vessels traveling through the Horn of Africa. Unfortunately, I don’t see a near term end to the coordinated international response that this situation requires. I agree with Mr. Clapper that the depletion of resources stemming from many factors which above all include climate change has potential to raise a host of issues for U.S. businesses worldwide.

PAUL COOK, CALIFORNIA. In my former life, besides being in the military for 26 years, I was a college professor. I have to admit I taught history, and I always have to give the old saw that people who do not understand history are bound to repeat it.

If you look at the history of conflicts and wars, and whether you go back to that famous book, The Haves and Have-Nots, it is always about resources: who has them, who doesn’t, and who wants them.

But I think we as a country, at least have not picked up on those lessons of history and we are very, very naive about the motivations of certain countries and why they do certain things. And obviously, there are things going on throughout the world right now in Eurasia which underscores some of the things that we are going to talk about today. So I applaud having a hearing on this. I think the title says it all, resource wars, and if we don’t have the war yet, we have had it in the past and we are going to have it in the future.

BRIGADIER GENERAL JOHN ADAMS, USA, RETIRED, PRESIDENT, GUARDIAN SIX CONSULTING, LLC

“Remaking American Security” examines 14 defense industrial base nodes vital to U.S. national security. We investigated lower tier commodities, raw materials, and subcomponents needed to build and operate the final systems. Based on our research, the current level of risk to our defense supply chains and to our advanced technological capacity is very concerning.

Figure 1. Brigadier General John Adams. May 2013. Military Equipment Chart: Selected defense uses of specialty metals. Remaking American security. Supply chain vulnerabilities & national security risks across the U.S. Defense Industrial Base. Alliance for American Manufacturing.

The bottom line is that foreign control over defense supply chains restricts U.S. access to critical resources and places American defense capabilities at risk in times of crisis. In the report, we devote a chapter to the importance of access to specialty metals and rare earth elements. Increasingly, these resources are central to modern life and central to modern defense preparedness. The United States has become dependent on imports of key materials from countries with unstable political systems, corrupt leadership, or opaque business environments.

Specialty metals are used in high-strength alloys, semiconductors, consumer electronics, batteries, armor plate, cell phones, and many more defense-specific and commercial applications. The United States lacks access to key minerals and materials that we need for our defense supply chains. There are concerns that corrupt business practices and manipulation of markets are among the reasons we lack access to key raw materials, specifically rare earth elements.

China has a monopoly in the mining of key rare earth elements and minerals. China involves itself not only in the extraction industry, the extraction of oxides, but in the entire supply chain for rare earth elements, including the production of such things as advanced magnets, which are essential in all modern defense electronics. Smart bombs, for example, have to have advanced magnets. China pulled that supply chain into China. Now is that corrupt? Certainly, there is manipulation. Is that something that we allowed to happen because we had our eye off the ball? I would argue that that is the case.

Compounding the tensions over access to specialty metals, many countries rich in natural resources take a stance of resource nationalism. Within the past decade, countries have attempted to leverage and manipulate extractive mining by threatening to impose extra taxes, reduce imports, reduce exports, nationalize mining operations and restrict licensing. Moreover, the countries themselves, notably China, have taken a more aggressive posture toward mineral resources and now compete aggressively with Western mining operators for extraction control.

We possess significant reserves of many specialty metals with an estimated value of $6.2 trillion. However, we currently import over $5 billion of minerals annually and are almost completely dependent on foreign sources for 19 key specialty metals.

Platinum is used in a wide variety of applications, but the commercial application we are all familiar with is the catalytic converter. But almost every modern engine has to have the platinum group of metals in it. Most of it is mined in South Africa. And I don’t want to go into a long, political discussion of the instability in South Africa, it is what it is. And we have to remember the role of the Chinese in that as well. The Chinese have established over the last 20, 30 years, excellent ties with countries in sub-Saharan Africa. Is that something that again we should note at this point, especially in this august committee?

We have to have a coherent strategy at the U.S. Government level to determine what those critical raw materials are. And then we need to act on that to make sure that we have secure access to them for our war fighters.

EDWARD C. CHOW, SENIOR FELLOW, ENERGY AND NATIONAL SECURITY PROGRAM, CENTER FOR STRATEGIC AND INTERNATIONAL STUDIES

With the help of Western investments, Central Asia and the Caucasus today produce around 3.5% of global oil supply and hold around 2.5% of the world’s known proven reserves in oil. For comparison, this is equivalent to four times that of Norway and the United Kingdom combined. Another way of looking at this is to say the region produces around 8.5% of non-OPEC oil and holds around 9.5% of non-OPEC oil reserves. In other words, oil production in Central Asia has added significantly to global supply and will continue to do so in the future. In many ways, the energy future of the region lies as much or more in natural gas than in oil. Central Asia is estimated to hold more than 11% of the world’s proven gas reserves, mostly concentrated in Turkmenistan which has lagged behind Kazakhstan and Azerbaijan in attracting outside investments. The region currently produces less than 5% of global gas supply, so there is tremendous potential for growth.
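As a quick cross-check of these figures (my arithmetic, not Chow’s), the global and non-OPEC shares are mutually consistent only if they imply a plausible non-OPEC fraction of the world totals; a minimal sketch in Python:

```python
# Sanity check (mine, not the witness's) that Chow's global and non-OPEC
# shares for Central Asia/Caucasus oil imply a plausible non-OPEC slice
# of world production and reserves.

region_share_global_output = 0.035    # ~3.5% of global oil supply (testimony)
region_share_nonopec_output = 0.085   # ~8.5% of non-OPEC oil supply (testimony)

region_share_global_reserves = 0.025   # ~2.5% of world proven oil reserves
region_share_nonopec_reserves = 0.095  # ~9.5% of non-OPEC oil reserves

# If the region is 3.5% of the world but 8.5% of non-OPEC, then non-OPEC
# must produce 3.5/8.5 of world output.
print(f"implied non-OPEC share of world output:   "
      f"{region_share_global_output / region_share_nonopec_output:.0%}")      # ~41%
print(f"implied non-OPEC share of world reserves: "
      f"{region_share_global_reserves / region_share_nonopec_reserves:.0%}")  # ~26%
```

Both implied fractions, roughly 41% of world output and 26% of world reserves held outside OPEC, are in the right range for the period, so the two pairs of numbers hang together.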

Given its landlocked geography, Central Asia has to rely on long haul pipelines to take its oil and gas to market. The Soviet-era pipelines in the region almost all head to European Russia, either to feed the domestic Soviet market or for trans-shipment to European markets.

When the Soviet Union collapsed in 1991, China was just about to convert from a net oil exporter to a net oil importer. It was slow off the mark in the race for Central Asian oil and gas. By the time it focused on this region, most of the large production opportunities had already been acquired by Western companies. From a Chinese point of view, they have been playing catch up ever since.

Today China is the second largest oil importer in the world and an increasingly important importer of gas. With stagnant Chinese domestic production and rapidly growing energy demand, China is destined to replace us as the world’s largest oil importer in a decade or so. Its companies have been investing in oil and gas around the world, including in neighboring Central Asia. Chinese companies now produce around 30% of Kazakhstan’s oil.

The next growing source of competition for Central Asian oil and gas is likely to come from India, which follows China closely in growth in oil and gas demand and consequently in oil and gas imports. Indeed, as Chinese demographic growth slows and its population ages, India’s energy demand is commonly forecast to grow faster than China’s in a decade or so.

NEIL BROWN, NON-RESIDENT FELLOW, GERMAN MARSHALL FUND OF THE UNITED STATES

When I joined the Senate committee staff in 2005, we held a lot of hearings on these sorts of issues, and at that time it was a lot of doom and gloom. Americans are doing what we do best, which is changing the rules of the game through innovation in oil and gas, unconventional sources, efficiency, and alternative energy. We are giving ourselves not only economic opportunities, but much more significant foreign policy flexibility and opportunities around the world, including in Central Asia, which is important both for the issues that Ed mentioned, in terms of the volume of oil and gas and other minerals the region has, and for its strategic importance, given that it sits above Iran, Pakistan, and Afghanistan.

The rising demand of emerging economies, particularly China, India, and the Middle East, has, ironically, over time really narrowed the margins in the global oil market, which meant, particularly in the mid-2000s, that even small disruptions, such as attacks in the Niger Delta on Shell’s facilities, could have an impact right here at home. I guess one good side of the recession is that demand slowed down, so that we got a bit more of a window, and more recently the U.S. has boosted supply, again giving more flexibility. But that structural shift in markets has not changed. So we can expect more of the same, unfortunately, when the economy picks up.

JEFFREY MANKOFF, PH.D., DEPUTY DIRECTOR AND FELLOW, RUSSIA & EURASIAN PROGRAM, CENTER FOR STRATEGIC AND INTERNATIONAL STUDIES

The discovery of new offshore oil and gas deposits in the Eastern Mediterranean Sea is one of the most promising global energy developments of the last several years. Handled wisely, these deposits off Israel and Cyprus, as well as potentially Lebanon, Gaza, and Syria, can contribute to the development and security for countries in the Eastern Mediterranean, and across a wider swathe of Europe. Handled poorly, these resources could become the source of new conflicts in what is an already volatile region.

According to the United States Geological Survey, the Levant Basin in the Eastern Mediterranean holds around 122 trillion cubic feet of natural gas, along with 1.7 billion barrels of crude oil.
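Since the article frames the Eastern Mediterranean as mostly a gas story, it is worth putting the two USGS figures on a common footing. The sketch below is my arithmetic, using the conventional (approximate) conversion of about 5,800 cubic feet of gas per barrel of oil equivalent:

```python
# Rough comparison (mine) of the Levant Basin's gas and oil endowments on a
# common energy basis. The ~5,800 cf/boe conversion is the customary
# approximation, not a figure from the article.

gas_tcf = 122     # USGS estimate, trillion cubic feet of natural gas
oil_bbbl = 1.7    # USGS estimate, billion barrels of crude oil

CF_PER_BOE = 5_800
gas_bboe = gas_tcf * 1e12 / CF_PER_BOE / 1e9   # billion barrels of oil equivalent

print(f"gas:  ~{gas_bboe:.0f} billion boe")    # ~21 billion boe
print(f"oil:   {oil_bbbl} billion barrels")
print(f"gas is ~{gas_bboe / oil_bbbl:.0f}x the oil on an energy basis")
```

On that footing the gas endowment is roughly an order of magnitude larger than the oil, which is why the development questions that follow are mostly about how to move gas to market.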

The oil and gas resources of the Eastern Mediterranean sit, however, at the heart of one of the most geopolitically complex regions of the world. The Israeli-Palestinian conflict, tensions between Israel and Lebanon, the frozen conflict on Cyprus, and difficult relations among Turkey, the Republic of Cyprus, and Greece all complicate efforts to develop and sell energy from the Eastern Mediterranean. The Syrian civil war has injected a new source of economic and geopolitical uncertainty, and standing in the background is Russia, which is seeking to enter the Eastern Mediterranean energy bonanza, and to maintain its position as the major supplier of oil and gas for European markets.

Israel’s transformation into a significant energy producer is not without its challenges. Most immediate perhaps is the question of how Israel will sell its surplus gas on international markets. The most economical option, at least in the short term, would be the construction of an undersea pipeline allowing Israeli gas to reach European markets through Turkey. Such an Israel-to-Turkey pipeline would be less expensive to build than new Liquefied Natural Gas facilities, would reinforce the recently strained political ties between Turkey and Israel, and would contribute to the diversification of Europe’s energy supplies by bringing a new source of non-Russian gas to Europe.

Such a pipeline, however, would likely either run off the coasts of Lebanon and Syria, or have to go to Turkey through Cyprus. Both options are fraught with peril. Though Lebanon and Israel have not demarcated their maritime border, Beirut argues that Israel’s gas fields cross into Lebanese waters, and Hezbollah has threatened to attack Israeli drilling operations. Syria, of course, is in a state of near anarchy. In this perilous environment, finding investors willing to build a pipeline will be challenging, and even if built, such a pipeline would be difficult to secure.

Going through Cyprus is also difficult, largely because of the difficult relationship between the Republic of Cyprus and Turkey. However, Cyprus’s own gas fields represent another potential source of conflict. Turkey has not recognized the Republic of Cyprus’s exclusive economic zone, has in fact pressured companies seeking to do business there, and recently began its own exploratory drilling off of the de facto Turkish Republic of Northern Cyprus without permission from the government in Nicosia. The revenues from Cypriot energy could benefit communities on both sides of the island, but only if a political agreement can be worked out in advance.

The major alternative to a pipeline from Israel to Turkey would be to build a Liquefied Natural Gas facility to liquefy gas for sale to markets in Asia and the Middle East. Russia, in particular, backs this idea. The push to build new LNG facilities, though, is only one way in which Moscow and its energy companies are seeking a larger role in the Eastern Mediterranean.

Russian companies are also interested in Israel’s much larger Leviathan field, as well as in the offshore oil and gas off of Lebanon.

Russia will remain the principal supplier of Europe’s gas for many years. The potential volumes from the Eastern Mediterranean could bolster European energy security around the margins, but they are not sufficient to change this fundamental reality. For that reason, Washington’s main objective in the Eastern Mediterranean should be less about Europe and more about ensuring that energy does not become the source of new resource conflicts, whether between Israel and its neighbors or over Cyprus.

 


House of Representatives hearing 113-2 Feb 13, 2013: American Energy Outlook: Technology market and policy drivers.


[ Excerpts from the 119 page transcript of this hearing ]

Chairwoman Cynthia Lummis, Wyoming. It is difficult to overstate the importance of energy to America’s success. Abundant, affordable energy is arguably the single most important factor enabling our prosperity, from our health and wellness to our national and economic security. Technology development impacts all components of a healthy, developed energy system, including exploration and production, transportation, and consumption. By providing the private market with the tools to innovate, our energy system can add new technologies to reliably provide affordable and abundant energy. The jurisdiction of this Subcommittee, which includes about $8 billion in research and development at the Department of Energy, provides us a unique opportunity to help shape the direction and future of energy in America.

As a Congressman from Wyoming, I see the many benefits associated with energy production. Wyoming is the United States’ second leading producer of total energy. It is the top producer of coal and uranium, third in natural gas, eighth in oil. Wyoming is also a national leader in renewable energy, generating significant energy from wind and geothermal resources as well. In fact, we are number one in wind energy resources, many of which are yet undeveloped. I am a strong supporter of an all-of-the-above energy strategy.

And now, more than ever, Congress and the President must take real steps to advance such a policy. The timing has never been better. U.S. energy is in the early stages of a historic period of technology-driven transformation. Advancement in horizontal drilling and hydraulic fracking has unlocked vast amounts of oil and gas, so much that the International Energy Agency projects that by 2020—that is just seven years from now—the United States will overtake Russia and Saudi Arabia to lead the world in oil production.

The EIA also projects that coal will be the dominant energy source globally by 2030. While domestic use of coal declined last year, the global use of coal is increasing by leaps and bounds. Coal is abundant in America, and it is the only source of energy that can meet the scale of energy demand for those billions of people worldwide who have no electricity at all. And quite frankly, it is not our call to hold those people back by denying them the affordable resources to bring them into the 21st century.

Throughout our languishing economic recovery, expanded domestic natural gas is a bright spot in the current economy and has the potential to revitalize America’s economic engine. Increased production has created sorely needed jobs, stimulated local economies, and contributed to low unemployment in States like North Dakota and Wyoming. Additionally, affordable and abundant natural gas is poised to drive a revival in the American manufacturing sector, a sector we heard about a lot last night in the State of the Union speech.

Perhaps less obvious, but equally significant, is the potential for increased energy production to help address the Nation’s spiraling debt. As Wyoming’s former State Treasurer, I can testify firsthand to the importance of mineral revenues to Wyoming’s sovereign wealth and its ability to provide quality K–12 education, as well as roads, sewers, and the infrastructure for a vital, vibrant society. Last week, the Institute for Energy Research reported that increasing access to energy development would, in addition to growing GDP by $127 billion annually, increase federal revenues by $24 billion annually for the next seven years, and $86 billion per year thereafter. Most of the options we have to address the budget crisis, cutting spending and increasing taxes, are difficult to achieve. Increasing energy production should be easy to achieve.

Our great energy story here in the United States has not gone unnoticed around the world. The German Economic Minister recently expressed concern that German firms are relocating to the United States primarily due to lower energy prices. While President Obama often cites European energy policies as a model he would like to follow in the United States, statements such as these should provide a powerful reminder of the importance of affordable energy to our global economic competitiveness.

Mr. SWALWELL. Our responsibility is to ensure that this country is prepared for whatever changes the markets may experience. Overreliance on a limited range of technologies and finite resources is unsustainable and unreasonable. We know that the U.S. uses 20 percent of the world’s oil but holds only two percent of the world’s oil reserves. Our strength will lie in our ability to transition to new, cleaner, more sustainable resources. Simply put, we cannot drill our way out of this problem. However, we can innovate our way out of it, and we can work to make our country more energy secure and help build a thriving economy. We must be competitive and not let ourselves fall behind. As Washington bickers, our competitors are pulling out all of the stops to capitalize on the booming clean energy economy.

We should also leverage equitable and innovative financing mechanisms where the market is not well structured to take on the often high technical and financial risks. With scientific research, nothing is guaranteed, and so we need to be willing to take risks. I come from the Bay Area, which includes Silicon Valley, where risk-taking is critical to the region’s economy. Taking risks means sometimes you will not succeed, but scientific progress in our country and internationally has never been a straight line. The big energy challenges we face require big lead times to solve. We thus can’t let bureaucratic inertia and partisan politics delay or get in the way of making investments that encourage research, innovation, and competition.

Adam Sieminski, Administrator for the Energy Information Administration at the U.S. Department of Energy.

EIA projects no growth in transportation energy demand between 2011 and 2040, with light-duty vehicle energy consumption declining by over 1.5 million barrels a day out to 2040. Growth in heavy-duty vehicle demand also spurs some fuel-switching to natural gas, as I mentioned earlier. Natural gas is projected to have a significant impact on heavy-duty vehicle energy consumption in relatively high travel applications such as tractor-trailers, which account for two thirds of all heavy-duty travel.

 

We try to take that into account by looking at the reserve base and ultimate resource base for the different fuels. We are fairly confident that the resource base for natural gas will allow for continuing increases in production in the United States all the way out to 2040, with shale gas, currently about one third of U.S. production, reaching half of U.S. production by 2040. We think that the coal resource base is also pretty strong, and although the deepest research on that was done quite some time ago, one of the reasons it hasn’t been updated is that the resource base is actually so vast that it didn’t make as much sense to concentrate on it.

Mr. ROHRABACHER. I would like to ask: a few years ago we were all gloom and doom about peak oil and how, energy-wise, things were going to get worse and worse. What about peak oil and gas? Is that just a false alarm?

Adam Sieminski: The problem that I, as an energy economist, always had with the peak oil hypothesis was that it was entirely geology-based. That view assumes that the resource base is completely known, and that once you produce half of it you are inevitably on a downturn. I think that this Committee particularly understands that there is a role for both prices and technology to dramatically change our understanding of the resource base. And that is what we have seen.

ROBERT MCNALLY, PRESIDENT, THE RAPIDAN GROUP

It is hard to overstate but often overlooked how much modern civilization depends on the continuous access to substantial flows of energy from producers to consumers. ‘‘Energy,’’ as Nobel chemist Richard Smalley noted in 2003, ‘‘is the single most important factor that impacts the prosperity of any society.’’ Fossil-based energy, or hydrocarbons—oil, gas, and coal—account for about 3/4 of our energy supply, and experts project that share will grow in coming decades.

As a primary energy source, hydrocarbons are far superior to others, such as biomass or renewables, because they are dense, highly concentrated, abundant, and comparatively easy to transport and store. Our transportation, food, and electricity systems, among others, depend critically on hydrocarbon energy.

Second, many major energy transitions take a very long time, measured in decades if not generations. Recognizing the overwhelming superiority of hydrocarbons, rapidly industrializing and urbanizing countries in Asia, the Middle East, and Latin America are making enormous investments in hydrocarbon energy production, transportation, refining, distribution, and consumption systems and devices. These could not be quickly replaced in any reasonable scenario. Energy transformations are more akin to a multi-decade exodus than a multiyear moon-shot. Pretending otherwise misleads citizens and distracts from serious debate about real circumstances and practical solutions.

Third, just as history has humbled energy experts who make bold predictions about future energy trends, policymakers should be cautious and restrained when setting arbitrary, unrealistic, and aggressive energy targets, much less spending tax dollars on subsidies or grants in an attempt to reach them. The historical record is littered with failed policy targets.

Fourth, energy can deliver unwelcome surprises with no short-term solutions. For instance, our oil production is soaring but so are our gasoline prices. They are at record levels. The combination of rising oil production and prices can be befuddling. Moreover, large gasoline price swings have become more frequent in recent years and consumers are wondering why this is the case. Pump prices at home are determined mainly by crude prices set in a global oil market. Crude oil prices are rising mainly because global supply-and-demand fundamentals are tight and geopolitical disruption risk is high. OPEC’s spare production capacity—almost entirely held by Saudi Arabia and which in the past has been used as a buffer against disruptions or tight markets—is low.

As we saw with Libya in 2011 and Iran in 2012, when the market is tight and fearful, even relatively minor disruptions or risks of disruption anywhere in the world can send our gasoline prices up fast. Unfortunately, there are no effective short-term policy options to counter the short-term crude and gasoline price volatility caused by a fundamentally tight and fearful global oil market. A crucial step is to increase oil supply everywhere. In a tight market, every extra barrel counts.

And this leads me to my fifth and final point. Not all surprises in energy are bad. The most pleasant surprise in energy, if not in our entire economy, in the last few years has been the ability of oil and gas producers to unlock vast, previously unreachable resources through multistage horizontal hydraulic fracturing of domestic oil and gas reserves trapped in deep shale formations. Last week, Dan Yergin testified before your colleagues in the House Energy Committee and called the boom in unconventional oil and gas production ‘‘the most important energy innovation so far in the 21st century.’’

Mr. SWALWELL. In the United States there are approximately 5 million commercial buildings, comprising approximately 72 billion square feet. Commercial buildings consume about 19 percent of all energy in the United States.

Mr. SIEMINSKI. Not just EIA but virtually every other research group that has ever looked at the opportunities finds that now that we have moved as rapidly as we have on light-duty vehicles, the next best place to find energy efficiency savings in the United States is likely in the buildings area.

Mr. KENNEDY. I represent a city called Fall River in southeastern Massachusetts, and there is a company there called TPI Composites that manufactures wind turbines along with other military and transportation equipment in their product lines. I spoke just last week with the CEO of TPI Composites, and he stressed the importance of the production tax credit for their business model and for facilities that continue to invest in wind energy despite heavily loaded upfront costs, investment that should bring an additional element of diversification to our American energy portfolio. So if we know that clean energy technology manufacturing can create high-quality jobs in Fall River, and we know that minimizing uncertainty about our federal investment can create a dependable landscape that encourages further private sector investment in these technologies, but we also recognize that renewable energy alternatives like wind are not yet price-competitive with existing technologies and traditional fossil fuels, what, then, would be the path forward you suggest? You testified a bit about market-based incentives and the need to make energy security policy a priority. While fossil fuels are deeply entwined in our current way of life and our standard of living, federal investments like the production tax credit are industry-wide, so that you are not picking individual winners and losers, and I think they have value for adding renewables and other clean energy sources to the mix.

Mr. MCNALLY. During these times of stretched fiscal resources and difficult budget questions and constraints, the proper role for Federal Government is in the basic research area. I would rather shut down the production tax credit, which is really helping mature but uneconomic renewable energies, and take some of that money and hire scientists to figure out how to produce batteries that can store and discharge electricity better than they can now.

Ralph Hall, Texas: I know you know the importance of energy. It is a national defense issue for us. In the last ten years, the U.S. energy outlook has been transformed by what some refer to as an energy renaissance or revolution. Can you explain how various technological developments and advancements, such as the widespread adoption of hydraulic fracturing, have revolutionized the U.S. energy outlook?

Mr. MCNALLY. It is really innovation and technology, and the industry figuring out in the late 1980s in Texas and Oklahoma how to get at resources that are vast and that we have known are there. We have known for decades that there are vast amounts of oil and gas trapped in rock 10,000 feet below the ground. We have been using hydraulic fracturing, some say, since the Civil War, throwing dynamite down a hole. The Federal Government reportedly looked at nuclear explosions underneath the ocean floor to stimulate wells by fracturing. But the real innovation came with going after the shale deposits using hydraulic fracturing. That turned what we call resources, the oil that we think is in the ground but don’t know how to get out, into reserves, producible by our companies. And we are seeing continuous improvement in how to frack those wells, how to do so more efficiently, to go horizontally and in multi-stages, not just one straw into the ground. So really, it is a remarkable story of industry progress, with some government involvement mainly at the core, basic research level, we should note. But it is brought to us by the industry, and it has smoothed out our supply curve not only for natural gas but also for oil, to the point where, according to some forecasts, we will surpass Saudi Arabia in production in the near future.

Mr. VEASEY. I have a concern about the flaring of natural gas. As you know, in the Bakken they are producing a lot of oil, but they do not have the pipeline capacity, so they are flaring quite a bit of natural gas. The Texas Railroad Commission does a really good job in Texas of keeping up with the number of permits given to operators, but I know in the Eagle Ford in particular, and even some in my area, the Barnett Shale, there is some flaring going on. You talked a little earlier about the rising cost of natural gas worldwide. If the Department of Energy decides to export liquefied natural gas, or LNG, is there any technology on the horizon that would make it so we wouldn’t have to flare so much natural gas, so we would have more in quantity? I think that should be one real environmental concern that we have, particularly when you start talking about drilling in remote places like Alaska, where there would be a lot of associated gas produced with oil production that would otherwise have to be flared off. There is a lot of gas that comes up with the oil in Alaska, but it is re-injected back into the formation, so there is very little flaring taking place there.

Mr. CRAMER. I spent the last ten years as a public utilities regulator in North Dakota prior to coming to Congress, and one of the things that oftentimes gets overlooked is that while North Dakota is in fact the second-leading producer of oil and largest producer of gas, we mine 30 million tons of coal, generate about 5,000 megawatts of electricity with that coal, and export it to many States and provinces, and we also enjoy the lowest natural gas residential retail rates in the country. I am looking right now at the average retail price of electricity to ultimate customers by end-use sector—that is one of the tables I look up most often—and see that North Dakota continues to be among the three lowest-priced electricity States in the country. And so when I hear, frankly, Ms. Jacobson, somebody talk about leveling the playing field for all forms of energy, what I really hear is manipulating the playing field to create an advantage where one doesn’t exist when the playing field is level. And so I would be interested in public policy thoughts as to how we would properly incent the marketplace. My definition of properly, of course, might not be the same as yours. But it truly creates a level field as opposed to manipulation.

The other thing, and then I will let Mr. Sieminski perhaps answer this question first and then we can get into the other stuff: with regard to electricity prices and the policy-driven shift from coal to natural gas, realizing that even in my short term on the Public Service Commission in North Dakota I saw gas at $12 and I saw gas at $2 and everywhere in between, do we run the risk of tightening this demand-and-supply curve of natural gas, even in this abundance, to a point where we make ourselves dependent on a fuel source that is so volatile? How much of that do you consider when you consider the price and the outlook going forward?

Mr. HULTGREN. I heard you mention a little earlier how important basic scientific research is, and the fear that undercutting it puts us at a disadvantage. The President seems to think that asking us to spend more money on these short-term items is really the only way to achieve a clean energy future. He seems to have this sense that we can just buy an immediate change in our economy. My sense is that it is going to take maybe 20 years or even longer of long-term, basic research in the very subjects he is cutting—high energy physics, nuclear physics—in order to produce a change and really change our fundamental ability to produce energy in a cleaner and cheaper way.

Mr. MCNALLY. The reason I thought that we would want to invest in some research into batteries is that one of the main reasons wind is not economical is that you cannot store electricity. The wind blows in places where we don’t need it, and electricity, unlike oil and coal, cannot be stored. So if we can figure out ways to store and discharge electricity, we will make all renewable forms of electricity, solar and wind, more economic. That is an example of the potential benefit of core research. Another one—and again, my wife calls me Mr. Worst-case-scenario, so I am not known for flowery predictions about wonderful transformations—but I will say, as I said in my testimony, if you ask me what plausible transformative change is out there that could happen in our lifetimes and completely upend our energy outlook in a positive way, it is that we figure out how to get methane hydrate out of the Earth’s crust. Like shale gas and shale oil in their day, we know it is there. We know the resources are enormous. Some estimates say there is 6,000 TCF in the Gulf of Mexico. That is equal to the world’s total proved conventional reserves. But what we have not figured out yet—and we and the Japanese and others are working on it, and DOE is doing some good work here—is how to get that methane hydrate out of the crust in a safe way that doesn’t create methane burps, if you will, and emissions.

Those are the kind of problems that humans can solve. We don’t have to figure out how to make algae go into gasoline. We know how to use methane. We just have to figure out how to get it out of the crust. We did it with shale gas and shale oil. I think we can do it with the government’s help in the core basic research area with methane hydrates.

 

 

 


A nuclear spent fuel fire at Peach Bottom in Pennsylvania could force 18 million people to evacuate

[If electric power were out 12 to 31 days (depending on how hot the stored fuel was), the reactor-core fuel cooling down in a nearby nuclear spent fuel pool could catch on fire and force millions to flee from thousands of square miles of contaminated land, because these pools aren’t in a containment vessel.

This could happen during the long power outage resulting from an electromagnetic pulse, which could take the electric grid down for a year (see the U.S. House hearing testimony of Dr. Pry, in which the EMP Commission estimates that a nationwide blackout lasting one year could kill up to 9 of 10 Americans through starvation, disease, and societal collapse. At this hearing, Dr. Pry said: “Seven days after the commencement of blackout, emergency generators at nuclear reactors would run out of fuel. The reactors and nuclear fuel rods in cooling ponds would meltdown and catch fire, as happened in the nuclear disaster at Fukushima, Japan. The 104 U.S. nuclear reactors, located mostly among the populous eastern half of the United States, could cover vast swaths of the nation with dangerous plumes of radioactivity.”)

After the nuclear fuel that generates power at a nuclear reactor is spent, it’s retired to a spent fuel pool full of water about 40 feet deep. Unlike the nuclear reactor, which is inside a pressure vessel inside a containment vessel, spent fuel pools are almost always outside the main containment vessel. If the water inside ever leaked or boiled away, the spent fuel would likely catch on fire and release a tremendous amount of radiation.
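To see why a pool that loses cooling eventually uncovers its fuel, a rough energy balance is enough. The sketch below is mine; the pool footprint, decay heat, and depth of water that must boil off are all assumed, order-of-magnitude values, not figures from the hearing record or the articles below:

```python
# Order-of-magnitude sketch of a spent fuel pool losing cooling. All
# parameter values marked "assumed" are hypothetical illustrations,
# not numbers from the source documents.

POOL_AREA_M2  = 12 * 12   # assumed pool footprint, m^2
WATER_DEPTH_M = 12        # ~40 feet of water, per the text
UNCOVER_M     = 7         # assumed depth that must boil off to expose fuel
DECAY_HEAT_W  = 2.0e6     # assumed decay heat of a densely packed pool, W

C_WATER = 4186            # J/(kg*K), specific heat of water
L_VAP   = 2.26e6          # J/kg, latent heat of vaporization
RHO     = 1000.0          # kg/m^3, density of water
DAY     = 86400.0         # seconds per day

# Stage 1: decay heat warms the whole pool from ~30 C to boiling.
mass_total = POOL_AREA_M2 * WATER_DEPTH_M * RHO
t_heatup = mass_total * C_WATER * (100 - 30) / DECAY_HEAT_W / DAY

# Stage 2: boil off enough water to uncover the fuel assemblies.
mass_boiled = POOL_AREA_M2 * UNCOVER_M * RHO
t_boiloff = mass_boiled * L_VAP / DECAY_HEAT_W / DAY

print(f"heat-up to boiling:       ~{t_heatup:.0f} days")   # ~3 days
print(f"boil-down to top of fuel: ~{t_boiloff:.0f} days")  # ~13 days
```

With these assumptions the fuel is uncovered after roughly two weeks, the same order of magnitude as the 12-31 day window quoted below.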

Nuclear engineers aren’t stupid.  Originally these pools were designed to be temporary until the fuel had cooled down enough to be transported off-site for reprocessing or disposal.  But now the average pool has 10 to 30 years of fuel stored at a much higher density than the pools were designed for, in buildings that vent to the atmosphere and can’t contain radiation if there’s an accident.

There are two articles from Science below (and, in APPENDIX A, my excerpts from the National Academy of Sciences report these articles refer to).

If electric grid power fails, backup diesel generators can provide power for 7 days without resupply of diesel fuel under typical nuclear plant emergency plans. If the emergency diesel generators stop working, nuclear power plants are only required to have “alternate ac sources” available for a period of 2 to 16 hours. Once electric power is no longer supplied to the circulation pumps, the spent fuel pool would begin to heat up and boil off. It would take 4 to 22 days from when water was no longer cooling the fuel for the fuel to become uncovered, after which the zirconium cladding would ignite within 2 to 24 hours (depending on how much the fuel had decayed). Without more water being added to the spent fuel pool, the total time from grid outage to spontaneous zirconium ignition would likely be 12-31 days (NIRS).
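Summing those stage-by-stage ranges reproduces the NIRS total. The breakdown below is my reading of the ranges quoted above, not NIRS’s own calculation:

```python
# My reading of the stage-by-stage ranges quoted above, summed to see
# whether they reproduce the NIRS 12-31 day total (a sketch, not NIRS's
# own arithmetic).

HOURS = 1 / 24  # days

stages = {
    "diesel generators run out of fuel": (7.0, 7.0),
    "alternate ac sources exhausted":    (2 * HOURS, 16 * HOURS),
    "pool boils down, fuel uncovered":   (4.0, 22.0),
    "exposed cladding ignites":          (2 * HOURS, 24 * HOURS),
}

low = sum(lo for lo, hi in stages.values())
high = sum(hi for lo, hi in stages.values())
print(f"grid outage to zirconium ignition: {low:.0f} to {high:.0f} days")
# -> roughly 11 to 31 days, matching the "likely 12-31 days" figure (NIRS).
```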

The National Research Council estimated that if a spent nuclear fuel fire happened at the Peach Bottom nuclear power plant in Pennsylvania, nearly 3.5 million people would need to be evacuated and 12 thousand square miles of land would be contaminated. A Princeton University study that looked at the same scenario concluded it was more likely that 18 million people would need to be evacuated and 39,000 square miles of land contaminated.
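The square-mile figures here are unit conversions of the square-kilometer estimates reported in the Science articles below; a quick check of my own:

```python
# Unit check (my arithmetic): the square-mile figures in this paragraph
# match the square-kilometer estimates quoted from Science further down.

KM2_PER_MI2 = 2.58999

nrc_km2       = 31_000    # NRC estimate of contaminated land
princeton_km2 = 101_000   # Princeton (von Hippel/Schoeppner) estimate

print(f"NRC:       {nrc_km2 / KM2_PER_MI2:>7,.0f} sq mi")        # ~12,000
print(f"Princeton: {princeton_km2 / KM2_PER_MI2:>7,.0f} sq mi")  # ~39,000
print(f"Princeton vs NRC: {princeton_km2 / nrc_km2:.1f}x the land, "
      f"{18.1 / 3.46:.1f}x the people displaced")
```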

Besides a geomagnetic or nuclear EMP threat, a loss of offsite power from events initiated by severe weather (e.g., hurricanes or tornadoes) could also cause a spent fuel pool to catch on fire. Other initiating events include an internal fire, loss of pool cooling, loss of coolant inventory, an earthquake, the drop of a cask, an aircraft impact, or a missile.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation,” 2015, Springer]

Stone, R. May 24, 2016. Spent fuel fire on U.S. soil could dwarf impact of Fukushima. Science Magazine.

A fire from spent fuel stored at a U.S. nuclear power plant could have catastrophic consequences, according to new simulations of such an event.

A major fire “could dwarf the horrific consequences of the Fukushima accident,” says Edwin Lyman, a physicist at the Union of Concerned Scientists, a nonprofit in Washington, D.C. “We’re talking about trillion-dollar consequences,” says Frank von Hippel, a nuclear security expert at Princeton University, who teamed with Princeton’s Michael Schoeppner on the modeling exercise.

The revelations come on the heels of a report last week from the U.S. National Academies of Sciences, Engineering, and Medicine on the aftermath of the 11 March 2011 earthquake and tsunami in northern Japan. The report details how a spent fuel fire at the Fukushima Daiichi Nuclear Power Plant that was crippled by the twin disasters could have released far more radioactivity into the environment.

The nuclear fuel in three of the plant’s six reactors melted down and released radioactive plumes that contaminated land downwind. Japan declared 1100 square kilometers uninhabitable and relocated 88,000 people. (Almost as many left voluntarily.) After the meltdowns, officials feared that spent fuel stored in pools in the reactor halls would catch fire and send radioactive smoke across a much wider swath of eastern Japan, including Tokyo. By a stroke of luck, that did not happen.

But the national academies’ report warns that spent fuel accumulating at U.S. nuclear plants is also vulnerable. After fuel is removed from a reactor core, the radioactive fission products continue to decay, generating heat. All nuclear power plants store the fuel onsite at the bottom of deep pools for at least 4 years while it slowly cools. To keep it safe, the academies’ report recommends that the U.S. Nuclear Regulatory Commission (NRC) and nuclear plant operators beef up systems for monitoring the pools and topping up water levels in case a facility is damaged. The panel also says plants should be ready to tighten security after a disaster.

At most U.S. nuclear plants, spent fuel is densely packed in pools, heightening the fire risk. NRC has estimated that a major fire at the spent fuel pool at the Peach Bottom nuclear power plant in Pennsylvania would displace an estimated 3.46 million people from 31,000 square kilometers of contaminated land, an area larger than New Jersey. But Von Hippel and Schoeppner think that NRC has grossly underestimated the scale and societal costs of such a fire.

NRC used a program called MACCS2 for modeling the dispersal and deposition of the radioactivity from a Peach Bottom fire. Schoeppner and Von Hippel instead used HYSPLIT, a program able to craft more sophisticated scenarios based on historical weather data for the whole region.
Nightmare scenarios

In their simulations, the Princeton duo focused on Cs-137, a radioisotope with a 30-year half-life that has made large tracts around Chernobyl and Fukushima uninhabitable. They assumed a release of 1600 petabecquerels, which is the average amount of Cs-137 that NRC estimates would be released from a fire at a densely packed pool. It’s also approximately 100 times the amount of Cs-137 spewed at Fukushima. They simulated such a release on the first day of each month in 2015.

The contamination from such a fire on U.S. soil “would be an unprecedented peacetime catastrophe,” the Princeton researchers conclude in a paper to be submitted to the journal Science & Global Security. In a fire on 1 January 2015, with the winds blowing due east, the radioactive plume would sweep over Philadelphia, Pennsylvania, and nearby cities. Shifting winds on 1 July 2015 would disperse Cs-137 in all directions, blanketing much of the heavily populated mid-Atlantic region. Averaged over 12 monthly calculations, the area exposed to more than 1 megabecquerel per square meter — a level that would trigger a relocation order — is 101,000 square kilometers. That’s more than three times NRC’s estimate, and the relocation of 18.1 million people is about five times NRC’s estimates.
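Two of the figures above can be checked directly (my arithmetic, not the researchers’): the assumed source term is 100 times a Fukushima-scale Cs-137 release of roughly 16 PBq, and the 30-year half-life explains why land above the relocation threshold stays closed for generations:

```python
# Two checks on the figures above (my arithmetic, not the researchers').

release_pbq = 1600.0                # assumed Cs-137 release from a pool fire (article)
fukushima_pbq = release_pbq / 100   # ~16 PBq, implied by the article's "~100x" comparison
print(f"implied Fukushima Cs-137 release: ~{fukushima_pbq:.0f} PBq")

# Cs-137 decays with a 30-year half-life, so contaminated land recovers
# very slowly: the fraction remaining after t years is 0.5**(t/30).
HALF_LIFE = 30.0
for years in (30, 60, 100):
    print(f"Cs-137 remaining after {years:>3} years: {0.5 ** (years / HALF_LIFE):.0%}")
# 30 yr: 50%, 60 yr: 25%, 100 yr: ~10% -- which is why a 1 MBq/m^2
# relocation threshold can keep land closed for generations.
```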

NRC has long mulled whether to compel the nuclear industry to move most of the cooled spent fuel now held in densely packed pools to concrete containers called dry casks. Such a move would reduce the consequences and likelihood of a spent fuel pool fire. As recently as 2013, NRC concluded that the projected benefits do not justify the roughly $4 billion cost of a wholesale transfer. But the national academies’ study concludes that the benefits of expedited transfer to dry casks are fivefold greater than NRC has calculated.

“NRC’s policies have underplayed the risk of a spent fuel fire,” Lyman says. The academies panel recommends that NRC “assess the risks and potential benefits of expedited transfer.” NRC spokesperson Scott Burnell in Washington, D.C., says that the commission’s technical staff “will take an in-depth look” at the issue and report to NRC commissioners later this year.

SIDEBAR 1: According to “Beyond Nuclear”: “Frank von Hippel, a nuclear security expert at Princeton University, teamed up with Princeton’s Michael Schoeppner on the modeling exercise. The study examines the Peach Bottom nuclear power plant in Pennsylvania, a Fukushima Daiichi twin design, two-reactor plant.

Von Hippel also serves on a National Academies of Science (NAS) panel examining lessons to be learned from the Fukushima nuclear catastrophe. As also reported by Richard Stone in Science Magazine, that NAS panel has just released a major report. It reveals that a high-level radioactive waste storage pool fire was narrowly averted at Fukushima Daiichi by sheer luck. It also reveals that major security upgrades are needed at U.S. nuclear power plant high-level radioactive waste “wet” pool and dry cask storage facilities. (See the full NAS report here.)

NAS called on NRC to address safety and security risks to high-level radioactive waste storage as early as 2004. NRC never has, 15 years after the 9/11 attacks, and five years after the Fukushima nuclear catastrophe began.”

Stone, R. May 20, 2016. Near miss at Fukushima is a warning for U.S., panel says. Science Magazine.

Japan’s chief cabinet secretary called it “the devil’s scenario.” Two weeks after the 11 March 2011 earthquake and tsunami devastated the Fukushima Daiichi Nuclear Power Plant, causing three nuclear reactors to melt down and release radioactive plumes, officials were bracing for even worse. They feared that spent fuel stored in the reactor halls would catch fire and send radioactive smoke across a much wider swath of eastern Japan, including Tokyo.

Thanks to a lucky break detailed in a report released today by the U.S. National Academies, Japan dodged that bullet. The near calamity “should serve as a wake-up call for the industry,” says Joseph Shepherd, a mechanical engineer at the California Institute of Technology in Pasadena who chaired the academy committee that produced the report. Spent fuel accumulating at U.S. nuclear reactor plants is also vulnerable, the report warns. A major spent fuel fire at a U.S. nuclear plant “could dwarf the horrific consequences of the Fukushima accident,” says Edwin Lyman, a physicist at the Union of Concerned Scientists, a nonprofit in Washington, D.C., who was not on the panel.

After spent fuel is removed from a reactor core, the fission products continue to decay radioactively, generating heat. Many nuclear plants, like Fukushima, store the fuel onsite at the bottom of deep pools for at least 5 years while it slowly cools. It is seriously vulnerable there, as the Fukushima accident demonstrated, and so the academy panel recommends that the U.S. Nuclear Regulatory Commission (NRC) and nuclear plant operators beef up systems for monitoring the pools and topping up water levels in case a facility is damaged. It also calls for more robust security measures after a disaster. “Disruptions create opportunities for malevolent acts,” Shepherd says.

At Fukushima, the earthquake and tsunami cut power to pumps that circulated coolant through the reactor cores and cooled water in the spent fuel pools. The pump failure led to the core meltdowns. In the pools, found in all six of Fukushima’s reactor halls, radioactive decay gradually heated the water. Of preeminent concern were the pools in reactor Units 1 through 4: Those buildings had sustained heavy damage on 11 March and in subsequent days, when explosions occurred in Units 1, 3, and 4.

The “devil’s scenario” nearly played out in Unit 4, where the reactor was shut down for maintenance. The entire reactor core—all 548 assemblies—was in the spent fuel pool, and was hotter than fuel in the other pools. When an explosion blew off Unit 4’s roof on 15 March, plant operators assumed the cause was hydrogen—and they feared it had come from fuel in the pool that had been exposed to air. They could not confirm that, because the blast had destroyed instrumentation for monitoring the pool. (Tokyo Electric Power Company, the plant operator, later suggested that the hydrogen that had exploded had come not from exposed spent fuel but from the melted reactor core in the adjacent Unit 3.) But the possibility that the fuel had been exposed was plausible and alarming enough for then-NRC Chairman Gregory Jaczko on 16 March to urge more extensive evacuations than the Japanese government had advised—beyond a 20-kilometer radius from the plant.

Later that day, however, concerns abated after a helicopter overflight captured video of sunlight glinting off water in the spent fuel pool. In fact, the crisis was worsening: The pool’s water was boiling away because of the hot fuel. As the level fell perilously close to the top of the fuel assemblies, something “fortuitous” happened, Shepherd says. As part of routine maintenance, workers had flooded Unit 4’s reactor well, where the core normally sits. Separating the well and the spent fuel pool is a gate through which fuel assemblies are transferred. The gate allowed water from the reactor well to leak into the spent fuel pool, partially refilling it. Without that leakage, the academy panel’s own modeling predicted that the tops of the fuel assemblies would have been exposed by early April; as the water continued to evaporate, the odds of the assemblies’ zirconium cladding catching fire would have skyrocketed. Only good fortune and makeshift measures to pump or spray water into all the spent fuel pools averted that disaster, the academy panel notes.

At U.S. nuclear plants, spent fuel is equally vulnerable. It is for the most part densely packed in pools, heightening the fire risk if cooling systems were to fail. NRC has estimated that a major fire in a U.S. spent fuel pool would displace, on average, 3.4 million people from an area larger than New Jersey. “We’re talking about trillion-dollar consequences,” says panelist Frank von Hippel, a nuclear security expert at Princeton University.

Besides developing better systems for monitoring the pools, the panel recommends that NRC take another look at the benefits of moving spent fuel to other storage as quickly as possible. Spent fuel can be shifted to concrete containers called dry casks as soon as it cools sufficiently, and the academy panel recommends that NRC “assess the risks and potential benefits of expedited transfer.” A wholesale transfer to dry casks at U.S. plants would cost roughly $4 billion.


APPENDIX A

[ After finding the Science articles above, I stopped working on my extract and comments. FYI below is how far I got. I was mainly interested in the effects of a power outage, which the last NAS paper said would be covered in their next Fukushima review — this one.  Though much of what I wanted to know was classified and not included in their report. Alice Friedemann ]

NRC. 2016. Lessons Learned from the Fukushima Nuclear Accident for Improving Safety and Security of U.S. Nuclear Plants: Phase 2. National Academies Press.  238 pages.  http://www.nap.edu/21874

The Devil’s Scenario

By late March 2011—some 2 weeks after the earthquake and tsunami struck the Fukushima Daiichi plant—it was far from obvious that the accident was under control and the worst was over. Chief Cabinet Secretary Yukio Edano feared that radioactive material releases from the Fukushima Daiichi plant and its sister plant (Fukushima Daini) located some 12 km south could threaten the entire population of eastern Japan: “That was the devil’s scenario that was on my mind. Common sense dictated that, if that came to pass, then it was the end of Tokyo.” (RJIF, 2014)

Here is the worst-case “devil’s scenario” (Kubota 2012):

  • Multiple vapor and hydrogen explosions and a loss of cooling functions at the six reactors at Tokyo Electric Power Co’s Fukushima Daiichi nuclear plant lead to radiation leaks and reactor failures.
  • Thousands of spent fuel rods, crammed into cooling pools at the plant, melt and mix with concrete, then fall to the lower level of the buildings.
  • In a possible domino effect, a hydrogen explosion at one reactor forces workers to evacuate due to high levels of radiation, halting cooling operations at all reactors and spent fuel pools. Reactors and cooling pools suffer serious damage and radiation leaks.
  • TOKYO EVACUATION. Massive radioactive contamination forces residents within a 170-km radius, and possibly farther out, to evacuate, while those within a 250-km radius may evacuate voluntarily. Tokyo, Japan’s capital, is located about 240 km (150 miles) southwest of the plant, and the greater metropolitan area is home to some 35 million people.
  • Radiation levels take several decades to fall.

WHAT ACTUALLY HAPPENED

  • The 9.0 magnitude earthquake and a tsunami exceeding 15 meters knocked out cooling systems at the six-reactor plant and meltdowns are believed to have occurred at Nos. 1, 2 and 3.
  • Hydrogen explosions occurred at the No. 1 and No. 3 reactor buildings a few days after the quake. Radiation leaks forced some 80,000 residents to evacuate from near the plant and more fled voluntarily, while radioactive materials have been found in food, including fish and vegetables, and in water.
  • Reactor No. 4 was under maintenance and 550 fuel rods had been transferred to its spent fuel pool, which already had about 1,000 fuel rods. The pool caught fire and caused an explosion.
  • Reactors No. 5 and 6 reached cold shutdown — meaning water used to cool fuel rods is below boiling point — nearly 10 days after the tsunami but it took more than nine months to achieve that state at Nos. 1-3.
  • Decommissioning the reactors will take 30 to 40 years and some nearby areas will be uninhabitable for decades.

And here’s what the NRC has to say about the Unit 4 pool

The events in the Unit 4 pool should serve as a wake-up call to nuclear plant operators and regulators about the critical importance of having robust and redundant means to measure, maintain, and, when necessary, restore pool cooling.  These events  also have important implications for accident response actions. As water levels decrease below about 1 meter above the top of the fuel racks, radiation levels on the refueling deck and surrounding areas will increase substantially, limiting personnel access. Moreover, once water levels reach approximately 50% of the fuel assembly height, the tops of the rods will begin to degrade, changing the fuel geometry and increasing the potential for large radioactive material releases into the environment (Gauntt et al., 2012).

These observations bear directly on the safety of pool storage following large offloads of fuel from reactors.  For example, consider what might have occurred in the Unit 4 spent fuel pool had the reactor been shut down and the core been offloaded to the pool 48 days before March 11 rather than the actual 102 days earlier, and had there been no water leakage [into the pool].  [In this case], pool water levels would have reached 50% of fuel assembly height before 10.6 days had elapsed—which was the time elapsed between the onset of the accident on March 11 and the first addition of water to the pool in Unit 4. In this hypothetical situation, if the core had been offloaded closer to the time of the accident or if the water addition had been delayed longer than 10.6 days, then there could have been damage to the fuel with the potential for a large release of radioactive material from the pool, particularly because the most recently offloaded (and highest-power) fuel was not dispersed in the pool but was concentrated in adjacent locations within the racks.
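
As a rough illustration of the arithmetic behind these timelines, here is a minimal boil-off sketch (Python). Every parameter value is an assumption chosen for illustration; the report does not publish the pool dimensions or decay-heat loads used in its modeling.

    # Rough boil-off estimate for a spent fuel pool after loss of cooling.
    # All parameter values are illustrative assumptions, NOT from the report.

    LATENT_HEAT = 2.26e6   # J/kg, heat of vaporization of water
    RHO = 960.0            # kg/m3, density of water near boiling

    def days_to_boil_down(decay_heat_mw, pool_area_m2, depth_to_lose_m):
        """Days for decay heat to boil off a given depth of already-boiling water."""
        mass_kg = RHO * pool_area_m2 * depth_to_lose_m
        seconds = mass_kg * LATENT_HEAT / (decay_heat_mw * 1e6)
        return seconds / 86400.0

    # Hypothetical pool: 100 m2 surface, ~2 MW decay heat from a recently
    # offloaded core, 5 m of water to lose before reaching the 50% fuel level.
    print(days_to_boil_down(2.0, 100.0, 5.0))   # ~6 days

This ignores the days needed to heat the pool to boiling and the slow decay of the heat load, both of which stretch the timeline. The qualitative point stands: a freshly offloaded core shortens the margin dramatically, which is why the hypothetical 48-day-old core reaches the danger point in about 10 days while the actual 102-day-old fuel did not.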

INTRODUCTION

This second and final report focuses on three issues: (1) lessons learned from the accident for nuclear plant security, (2) lessons learned for spent fuel storage, and (3) reevaluation of conclusions from previous Academies studies on spent fuel storage. The present report provides a reevaluation of the findings and recommendations from NRC (2004, 2006).

New recommendations:

  1. The U.S. nuclear industry and the U.S. Nuclear Regulatory Commission should strengthen their capabilities for identifying, evaluating, and managing the risks from terrorist attacks, especially spent fuel storage risks.
  2. Nuclear plant operators and their regulators should upgrade and/or protect nuclear plant security infrastructure and systems and train security personnel to cope with extreme external events and severe accidents. Such upgrades should include independent, redundant, and protected power sources dedicated to plant security systems that will continue to function if safety systems are damaged, and diverse and flexible approaches for coping with and reconstituting plant security infrastructure, systems, and staffing during and following extreme external events and severe accidents.
  3. The U.S. nuclear industry and its regulator should improve the ability of plant operators to measure real-time conditions in spent fuel pools and maintain adequate cooling of stored spent fuel during severe accidents and terrorist attacks, using hardened and redundant physical surveillance systems (e.g., cameras), radiation monitors, pool temperature monitors, pool water-level monitors, and means to deliver pool makeup water or sprays even when physical access to the pools is limited by facility damage or high radiation levels.
  4. The U.S. Nuclear Regulatory Commission should perform a spent fuel storage risk assessment to elucidate the risks and potential benefits of expedited transfer of spent fuel from pools to dry casks. This risk assessment should address accident and sabotage risks for both pool and dry storage.
  5. Some of the committee-recommended improvements have not been made by the USNRC or the nuclear industry. In particular, the USNRC has not required plant licensees to install pool temperature monitors, yet these are essential in an accident for evaluating independently whether drops in pool water levels are due to evaporation or leakage; such monitors must have independent power, be seismically rugged, and operate under severe accident conditions.

The committee found that the spent fuel storage facilities (pools and dry casks) at the Fukushima Daiichi plant maintained their containment functions during and after the March 11, 2011, earthquake and tsunami.

However, the loss of power, spent fuel pool cooling systems, and water level- and temperature-monitoring instrumentation in Units 1-4 and hydrogen explosions in Units 1, 3, and 4 hindered efforts by plant operators to monitor conditions in the pools and restore critical pool-cooling functions.

Plant operators had not been trained to respond to these conditions, yet they successfully improvised ways to monitor and cool the pools using helicopters, fire trucks, water cannons, concrete pump trucks, and ad hoc connections to installed cooling systems. These improvised actions were essential for preventing damage to the stored spent fuel and the consequent release of radioactive materials to the environment.

The spent fuel pool in Unit 4 was of particular concern because it had a high decay-heat load.

The committee used a steady-state energy-balance model to provide insights on water levels in the Unit 4 pool during the first 2 months of the accident (i.e., between March 11 and May 12, 2011). This model suggests that water levels in the Unit 4 pool declined to less than 2 m (about 6 ft) above the tops of the spent fuel racks by mid-April 2011.

The model suggests that pool water levels would have dropped below the top of active fuel had there not been leakage of water into the pool from the reactor well and dryer/separator pit through the separating gates. This water leakage was accidental; it was also fortuitous because it likely prevented pool water levels from reaching the tops of the fuel racks. The events in the Unit 4 pool show that gate leakage can be an important pathway for water addition or loss from some spent fuel pools and that reactor outage configuration can affect pool storage risks.
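
A steady-state balance of this kind fits in a few lines. The sketch below (Python) is a hypothetical illustration of the approach, not the committee's model; the pool area, starting level, decay heat, and leak rate are all assumed values.

    # Steady-state water balance: boil-off out, gate leakage in.
    # Illustrative assumptions only; not the committee's actual model.

    LATENT_HEAT = 2.26e6   # J/kg
    RHO = 960.0            # kg/m3

    def days_until_level(level0_m, target_m, area_m2, decay_heat_w, leak_in_m3_day):
        """Days for the pool level to fall from level0 to target, if it falls at all."""
        boiloff_m3_day = decay_heat_w * 86400.0 / (LATENT_HEAT * RHO)
        net_m_day = (leak_in_m3_day - boiloff_m3_day) / area_m2
        if net_m_day >= 0:
            return None   # leakage keeps up with evaporation
        return (target_m - level0_m) / net_m_day

    # Assumed: 7 m of water above the racks, 120 m2 pool, 2.3 MW decay heat.
    print(days_until_level(7.0, 0.0, 120.0, 2.3e6, 0.0))    # no leakage: ~9 days
    print(days_until_level(7.0, 0.0, 120.0, 2.3e6, 40.0))   # 40 m3/day in: ~16 days

Because it neglects the initial heat-up to boiling and the gradual decline of decay heat, a constant-rate sketch like this understates the real timescales, which the committee's model put at weeks rather than days. It does show the qualitative point: leakage into the pool buys substantial time.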

Once water levels reach half of the fuel assembly height, the tops of the rods will begin to degrade, changing the fuel geometry and increasing the potential for large radioactive material releases into the environment.

The safe storage of spent fuel in pools depends critically on the ability of nuclear plant operators to keep the stored fuel covered with water.

This has been known for more than 40 years and was powerfully reinforced by the Fukushima Daiichi accident. If pool water is lost through an accident or terrorist attack, then the stored fuel can become uncovered, possibly leading to fuel damage including runaway oxidation of the fuel cladding (a zirconium cladding fire) and the release of radioactive materials to the environment.

The spent fuel pools at Fukushima Daiichi Units 1-4 contained many fewer assemblies than are typically stored in spent fuel pools at U.S. nuclear plants. [The report doesn’t say how many fewer].

The storage capacity of U.S. spent fuel pools ranges from fewer than 2,000 assemblies to nearly 5,000 assemblies, with an average storage capacity of approximately 3,000 spent fuel assemblies. U.S. spent fuel pools are typically filled with spent fuel assemblies up to approximately three-quarters of their capacity (USNRC NTTF, 2011, p. 43).

ELECTRIC POWER

All offsite electrical power to the plant was lost following the earthquake, and DC power was eventually lost in Units 1-4 following the tsunami. Offsite AC power was not restored until 9 to 11 days later. Security equipment requiring electrical power was probably not operating continuously during this blackout period.

Regulations do not specify the performance requirements for these backup power supplies.  These backup supplies need to be adequately protected and sized to cope with a long-duration event such as occurred at the Fukushima Daiichi plant.

Recommendations:

  1. Have portable backup equipment capable of providing water and power to the reactor. Such equipment includes, for example, electrical generators, batteries, and battery chargers; compressors; pumps, hoses, and couplings; equipment for clearing debris; and equipment for temporary protection against flooding.
  2. Stage this equipment in locations both on- and offsite where it will be safe and deployable.

The Unit 1-4 spent fuel pools are equipped with active cooling systems, in particular the Spent Fuel Pool Cooling and Cleanup (FPC) systems, which are located within the reactor buildings below the refueling decks and in a nearby radwaste building. These systems are designed to maintain pool temperatures in the range of 25°C to 35°C (77°F to 95°F) by pumping the pool water through heat exchangers. They also filter the pool water and add makeup water as necessary to maintain pool water levels.

All of these features require electrical power.

The pools and refueling levels contain instruments to monitor water levels, temperatures, and air radiation levels. These measurements are displayed in the main control rooms. The temperature and water-level indicators are limited to a few locations near the tops of the pools for the purpose of maintaining appropriate water levels during normal operations: Pool water level is monitored by two level switches installed 1 foot above and half a foot below the normal water level in the pool.  Pool water temperature is monitored by a sensor 1 foot below the normal water level of the pool.

This instrumentation also requires electrical power to operate and has no backup power supply.
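
To see why such narrow-range instrumentation is of little use in an accident, consider this hypothetical sketch (Python) of what the control room would display. The switch placements come from the description above; everything else (the datum, the absolute levels) is assumed.

    # Hypothetical control-room indications from narrow-range level switches.
    # Switch placements per the description above; absolute levels are assumed.

    NORMAL_LEVEL_M = 11.5                    # assumed normal level above pool floor
    HIGH_SWITCH_M = NORMAL_LEVEL_M + 0.30    # ~1 ft above normal
    LOW_SWITCH_M = NORMAL_LEVEL_M - 0.15     # ~0.5 ft below normal

    def indications(actual_level_m):
        """Return the only two signals the switches can produce."""
        return {
            "high_level_alarm": actual_level_m >= HIGH_SWITCH_M,
            "low_level_alarm": actual_level_m <= LOW_SWITCH_M,
        }

    # A pool a few centimeters low and a pool drained 5 m below normal produce
    # the identical indication -- and with no backup power, not even that.
    print(indications(11.3))   # {'high_level_alarm': False, 'low_level_alarm': True}
    print(indications(6.5))    # {'high_level_alarm': False, 'low_level_alarm': True}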

NRC (2014) provides a discussion of key events at the Fukushima Daiichi plant following the March 11, 2011, earthquake and tsunami. To summarize, Units 1-4 lost external power as a result of earthquake-related shaking. Units 1-4 also lost all internal AC power and almost all DC power for reactor cooling functions as a result of tsunami-related flooding. Efforts by plant operators to restore cooling and vent containments in time to avert core damage were unsuccessful. As a result, the Unit 1, 2, and 3 reactors sustained severe core damage and the Unit 1, 3, and 4 reactor buildings were damaged by explosions of combustible gas, primarily hydrogen generated by steam oxidation of zirconium and steel in the reactor core and, secondarily, by hydrogen and carbon monoxide generated by the interaction of the molten core with concrete.

The loss of AC and DC power and cooling functions also affected the Unit 1-4 spent fuel pools: The pools’ Spent Fuel Pool Cooling and Cleanup systems, secondary cooling systems, and pool water-level and temperature instrumentation became inoperable. High radiation levels and explosion hazards prevented plant personnel from accessing the Unit 1-4 refueling decks. Consequently, no data on pool water levels or temperatures were available for almost 2 weeks after the earthquake and tsunami. Moreover, even after pool instrumentation was restored, it was of limited value because of the large swings in pool water levels that occurred during the accident.  Improvised instrumentation and aerial observations were used to monitor pool conditions. Aerial and satellite photography were particularly important sources of information in the early stages of the accident although the images were not always interpreted correctly.

EARTHQUAKE EFFECTS

The earthquake caused the reactor buildings to sway, which likely caused water to slosh from the pools. No observational data on sloshing-related water losses are available, however. Analyses performed by the plant owner, TEPCO, suggest that sloshing reduced pool water levels by about 0.5 m (TEPCO, 2012a, Attachment 9-1). The sloshed water spilled onto the refueling decks and likely flowed into the reactor buildings through deck openings such as floor drains.

The explosions in the Unit 1, 3, and 4 reactor buildings likely caused additional water to be sloshed from the pools in those units. Again, no observational data on explosion-related water losses are available. Sloshing due to building motion resulting from the explosions is unlikely to be significant. But sloshing will occur if there is a spatially non-uniform pressure distribution created on the pool surface by an explosion in the region above the pool. This is particularly likely for high-speed explosions that create shock or detonation waves.  TEPCO estimates that an additional 1 meter of water was sloshed from each of the pools as a result of the explosions (TEPCO, 2012a, Attachment 9-1, p. 3/9).

Emergency response center actions

Personnel in the plant’s Emergency Response Center (see NRC, 2014, Appendix D) were focused on cooling the Unit 1-3 reactors and managing their containment pressures during the first 48 hours of the accident. They knew that restoring cooling in the spent fuel pools was less urgent and prioritized accordingly. Beginning on March 13, 2011, operators became increasingly concerned about water levels in the pools; their concerns increased following the explosions in the Unit 3 and 4 reactor buildings on March 14 and 15, respectively.

By the morning of March 15, 2011, it was apparent that the Unit 1-3 reactors had been damaged and were releasing radioactive material. TEPCO evacuated all but about 70 personnel from the plant because of safety concerns (personnel began returning a few hours later). That same day, TEPCO initiated a comprehensive review of efforts to cool the spent fuel pools and made it a priority to determine the status of the Unit 4 pool. TEPCO added the Unit 3 pool to its priority list on the morning of March 16 after steam was observed billowing from the top of the Unit 3 reactor building.

Unit 1 Pool. The explosion in the Unit 1 reactor building on March 12, 2011, blew out the wall panels on the fifth floor, but the steel girders that supported the panels remained intact. The roof collapsed onto the refueling deck and became draped around the crane and refueling machinery. This wreckage prevented visual observations of and direct access to the pool. TEPCO estimated that the pool lost about 129 tonnes of water from the earthquake- and explosion-related sloshing. This lowered the water level in the pool to about 5.5 meters above the top of the racks. Because of the very low decay heat in Unit 1, this pool was of least concern.

Spent Fuel Heat-up Following Loss-of-Pool-Coolant Events

Spent fuel continues to generate heat from the decay of its radioactive constituents long after it is removed from a reactor. The fuel is stored in water-filled pools (i.e., spent fuel pools) to provide cooling and radiation shielding. An accident or terrorist attack that damaged a spent fuel pool could result in a partial or complete loss of water coolant. Such loss-of-pool-coolant events can cause the fuel to overheat, resulting in damage to the metal (zirconium) cladding of the fuel rods and the uranium fuel pellets within and the release of radioactive constituents to the environment.

The loss of water coolant from the pool would cause temperatures in the stored spent fuel to increase because air is a less effective coolant than water. The magnitude and rate of temperature increase depends on several factors, including how long the fuel has been out of the reactor and the rate and extent of water loss from the pool. As fuel temperatures rise, internal pressures in the fuel rods will increase and the rod material will soften. At about 800°C (1472°F), internal pressures in the fuel rod will exceed its yield stress, resulting in failure, a process known as fuel ballooning. Thermal creep of the fuel rod above about 700°C (1292°F) can also result in ballooning. Once the fuel cladding fails, the gaseous and volatile fission products stored in the gap between the fuel rod and pellets will be released. The fission product inventory varies depending on the type of fuel and its irradiation history; typically, on the order of a few percent of the total noble gas inventory (xenon, krypton), halogens (iodine, bromine), and alkali metals (cesium, rubidium) present in the fuel will be released. Between about 900°C (1652°F) and 1200°C (2192°F), highly exothermic chemical reactions between the fuel rods and steam or air will begin to accelerate, producing zirconium oxide.

The reaction in steam also generates large quantities of hydrogen. Deflagration (i.e., rapid combustion) of this hydrogen inside the spent fuel pool building can damage the structure and provide a pathway for radioactive material releases into the environment. Further temperature increases can drive more volatile fission products out of the fuel pellets and cause the fuel rods to buckle, resulting in the physical relocation of rod segments and the dispersal of fuel pellets within the pool.

At about 1200°C the oxidation reaction will become self-sustaining, fully consuming the fuel rod cladding in a short time period if sufficient oxygen is available (e.g., from openings in the spent fuel pool building) and producing uncontrolled (runaway) temperature increases. This rapid and self-sustaining oxidation reaction, sometimes referred to as a zirconium cladding fire, may propagate to other fuel assemblies in the pool. In the extreme, such fires can produce enough heat to melt the fuel pellets and release most of their fission product inventories.
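
The temperature thresholds described above can be summarized as a simple lookup. The sketch below (Python) restates the report's approximate figures; the boundaries are indicative, not sharp physical limits.

    # Approximate zirconium cladding degradation regimes, per the thresholds above.

    def cladding_regime(temp_c):
        """Map a cladding temperature (deg C) to the approximate failure regime."""
        if temp_c < 700:
            return "below creep and ballooning thresholds"
        if temp_c < 900:
            return "creep/ballooning and rupture; gap fission products released"
        if temp_c < 1200:
            return "accelerating exothermic oxidation; hydrogen generation in steam"
        return "self-sustaining oxidation (zirconium cladding fire)"

    for t in (600, 800, 1000, 1300):
        print(t, cladding_regime(t))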

Unit 4 Pool

The Unit 4 reactor was shut down for maintenance, and large-scale repairs were in progress on March 11, 2011.

The explosion that occurred in the Unit 4 reactor building at 6:14 on March 15, 2011, destroyed the roof and most of the walls on the fourth and fifth (refueling deck) floors, and it damaged some of the walls on the third floor. TEPCO (2012a) has suggested that the explosion was due to the combustion of hydrogen that was generated in Unit 3 and flowed into Unit 4 through the ventilation system. The fifth-floor slab was pushed upward and the fourth-floor slab was depressed. The explosion also deposited debris around the reactor building, onto the refueling deck, and into the pool. Fires were reported in the damaged building later that morning and on the morning of March 16; these fires self-extinguished and were later attributed to the ignition of lubricating oil.

The damage to the Unit 3 and 4 building structures and steam emissions from both buildings raised grave concerns about the spent fuel pools in those units. Unit 4 was of particular concern because the reactor contained no fuel and therefore could not have been the source of hydrogen or other combustible gas. The only apparent source of combustible gas within Unit 4 was hydrogen from the steam oxidation of spent fuel in the fully or partially drained Unit 4 spent fuel pool.

Plant operators well understood the hazard posed by the spent fuel in the Unit 4 pool: The pool was loaded with high-decay-heat fuel; its water level was dropping because of large evaporative water losses; and openings in the Unit 4 building created by the explosion created pathways for radioactive materials releases into the environment.

The extensive visible damage to the Unit 4 reactor building and high level of decay heat in the Unit 4 pool continued to drive concerns about pool water levels. Operators began to add water to the Unit 4 pool.

Prime Minister Kan asked Dr. Kondo, then-chairman of the Japanese Atomic Energy Commission, to prepare a report on worst-case scenarios from the accident. Dr. Kondo led a 3-day study involving other Japanese experts and submitted his report (Kondo, 2011) to the prime minister on March 25, 2011. The existence of the report was initially kept secret because of the frightening nature of the scenarios it described. An article in the Japan Times quoted a senior government official as saying, “The content [of the report] was so shocking that we decided to treat it as if it didn’t exist.” When the existence of the document was finally acknowledged in January 2012, Special Advisor (to the Prime Minister) Hosono stated: “Because we were told there would be enough time to evacuate residents (even in a worst-case scenario), we refrained from disclosing the document due to fear it would cause unnecessary anxiety (among the public). . . .”

One of the scenarios involved a self-sustaining zirconium cladding fire in the Unit 4 spent fuel pool….Voluntary evacuations were envisioned out to 200 km because of elevated dose levels. If release from other spent fuel pools occurred, then contamination could extend as far as Tokyo, requiring compulsory evacuations out to more than 170 km and voluntary evacuations out to more than 250 km; the latter includes a portion of the Tokyo area. There was particular concern that the zirconium cladding fire could produce enough heat to melt the stored fuel, allowing it to flow to the bottom of the pool, melt through the pool liner and concrete bottom, and flow into the reactor building. After leaving office, Prime Minister Kan stated that his greatest fears during the crisis were about the Unit 4 spent fuel pool (RJIF, 2014).

Two important observations can be made from the committee’s analysis of water levels in the Unit 4 pool. First, because of the substantial uncertainties cited above, the committee cannot rule out the possibility that spent fuel in the Unit 4 pool became partially uncovered sometime prior to April 21, 2011. If the fuel was uncovered, however, the exposure was not substantial enough to cause fuel damage or to substantially increase external dose rates in areas around the Unit 4 building. Fuel damage will not begin immediately when the water level drops below the top of the rack. Simulations of loss-of-cooling accidents (Gauntt et al., 2012) predict that it is possible to recover without fuel damage as long as the collapsed water level does not drop below the mid-height of the fuel for an extended period of time.

Second, leakage through the gate seals was essential for keeping the fuel in the Unit 4 pool covered with water. Had there been no water in the reactor well, there could well have been severe damage to the stored fuel and substantial releases of radioactive material to the environment. This is the “worst-case scenario” envisioned by then–Atomic Energy Commission of Japan Chairman Dr. Shunsuke Kondo.  To illustrate this second observation, the committee modeled a hypothetical scenario in which there is no water leakage into the Unit 4 pool from the reactor well and dryer-separator pit.  Without water leakage, pool water levels could have dropped well below the top of active fuel (located 4 m above the bottom of the pool) in early April 2011.

Finally, the damage observed in the Unit 3 gates demonstrates a pathway by which a severe accident could compromise spent fuel pool storage safety: drainage of water from a spent fuel pool through a damaged gate breach into an empty volume such as a dry reactor well or fuel transfer canal. A gate breach could drain a spent fuel pool to just above the level of the racks in a matter of hours, and the resulting high radiation fields on the refueling deck could hinder operator response actions. The committee judges that an effort is needed to assess the containment performance of spent fuel pool gates under severe accident conditions during all phases of the operating cycle.

Assessment of spent fuel pool performance, including gate leakage, is not a new topic for the USNRC. A review of historical data in 1997 (USNRC, 1997c) documented numerous instances of significant accidental drainage of pools at pressurized water reactor and BWR plants due to various failures, including gate seals. The report concluded that “[t]he overall conclusions are that the typical plant may need improvements in SFP [spent fuel pool] instrumentation, operator procedures and training, and configuration control” (p. xi). Furthermore, the report identified leaking fuel pool gates as the most prevalent cause of loss of pool inventory. Given the potential for gate leakage under normal operations, it is not surprising that it is also an issue under severe accident conditions.

Lessons Learned for Nuclear Plant Security

To the committee’s knowledge, TEPCO has not publicly disclosed the impacts of the earthquake and tsunami on plant security systems. Nevertheless, the committee infers from TEPCO’s written reports, as well as its own observations during a November 2012 tour of the Fukushima Daiichi plant, that security systems at the plant were substantially degraded by the earthquake and tsunami and the subsequent accident.  Tsunami damage and power losses likely affected the integrity and operation of numerous security systems, including lighting, physical barriers and other access controls, intrusion detection and assessment equipment, and communications equipment.

Such disruptions can create opportunities for malevolent acts and increase the susceptibility of critical plant systems to such acts. Nuclear plant operators and their regulators should upgrade and/or protect nuclear plant security infrastructure and systems and train security personnel to cope with extreme external events and severe accidents. Such upgrades should include (1) independent, redundant, and protected power sources dedicated to plant security systems that will continue to function if safety systems are damaged, and (2) diverse and flexible approaches for coping with and reconstituting plant security infrastructure, systems, and staffing during and following extreme external events and severe accidents.

The events at the plant suggest an important lesson from the accident: extreme external events and severe accidents can have severe and long-lasting impacts on the security systems at nuclear plants, and such long-lasting disruptions can create opportunities for malevolent acts and increase the susceptibility of critical plant systems to such acts. Similar situations could occur as a result of other natural disasters. For example, a hurricane or a destructive thunderstorm that spawned tornados could damage onsite and offsite power substations and high-voltage pylons, causing a loss of a nuclear plant’s offsite power. The storm could also damage security fences, cameras, and other intrusion detection equipment. Relief security officers and other site personnel might not be able to report for duty on schedule if storm-related damage was widespread in surrounding communities. An adversary could use this disruption to advantage in carrying out a malevolent act.

The Fukushima Daiichi accident illustrates that full restoration of security measures could potentially take days to weeks after an extreme external event or severe accident: Damaged security equipment must be restored and destroyed equipment must be replaced.

TERRORISM, SABOTAGE, SECURITY

A determined violent external assault, attack by stealth, or deceptive actions, including diversionary actions, by an adversary force capable of operating in each of the following modes: A single group attacking through one entry point, multiple groups attacking through multiple entry points, a combination of one or more groups and one or more individuals attacking through multiple entry points, or individuals attacking through separate entry points, with the following attributes, assistance and equipment:

(A) Well-trained (including military training and skills) and dedicated individuals, willing to kill or be killed, with sufficient knowledge to identify specific equipment or locations necessary for a successful attack;

(B) Active (e.g., facilitate entrance and exit, disable alarms and communications, participate in violent attack) or passive (e.g., provide information), or both, knowledgeable inside assistance;

(C) Suitable weapons, including handheld automatic weapons, equipped with silencers and having effective long range accuracy;

(D) Hand-carried equipment, including incapacitating agents and explosives for use as tools of entry or for otherwise destroying reactor, facility, transporter, or container integrity or features of the safeguards system; and

(E) Land and water vehicles, which could be used for transporting personnel and their hand-carried equipment to the proximity of vital areas; and

(ii) An internal threat; and

(iii) A land vehicle bomb assault, which may be coordinated with an external assault; and

(iv) A waterborne vehicle bomb assault, which may be coordinated with an external assault; and

(v) A cyber attack.

An adversary who lacks the strength, weaponry, and training of the nuclear plant’s security forces might utilize attack strategies that do not require direct confrontations with those forces. For example, an adversary might choose to attack perceived weak points in the plant’s support infrastructure (e.g., offsite power and water supplies, key personnel) rather than mounting a direct assault on the plant. The goals of such asymmetric attacks might be to cause operational disruptions, economic damage, and/or public panic rather than radiological releases from a plant’s reactors or spent fuel pools. In fact, such attacks would not necessarily need to result in any radiological releases to be considered successful.

Offsite power substations, piping, fiber optic connection points, and other essential systems provide an adversary the opportunity to inflict damage with very little personal risk and without confronting a nuclear plant’s security forces. The psychological effects of such attacks, even if these do not result in the release of radioactive material, might have consequences comparable to or greater than the actual physical damage. In the extreme, such attacks could lead to temporary shutdowns of, or operating restrictions on, other nuclear plants until security enhancements could be implemented. (Japan shut down all its nuclear power reactors and briefly entertained the dismantlement of its nuclear power industry due to public pressure following the Fukushima Daiichi accident.)

Detailed information about the evolution of the accident at the Fukushima Daiichi plant and its compromised safety systems is widely available on the Internet and in reports such as this one. This information could be used by terrorists to plan and carry out asymmetric attacks on nuclear plants in hopes of creating similar cascading failures.

In the event of a catastrophic event or attack, security systems must be designed and installed to be quickly reconstituted. Hardened power and fiber optic cables must permit “plug-and-play” installation of replacements for inoperable equipment. Reestablishment of security is critical because an adversary who might otherwise be deterred from attacking a site might be encouraged to carry out an attack at a compromised facility.

The USNRC requires licensees to implement an Insider Mitigation Program to oversee and monitor the initial and continuing trustworthiness and reliability of individuals having unescorted access to protected or vital areas of nuclear plants. There is a long-standing assumption by the USNRC that this program reduces the likelihood of an active insider (GAO, 2006). However, USNRC staff was unable to provide an explanation, adequate to the committee, of how it assesses the effectiveness of these measures for mitigating the insider threat. Moreover, to the committee’s knowledge, there are no programs in place at the USNRC to specifically evaluate the effectiveness of these measures.

Reevaluation of Finding 3B from NRC (2006)

NRC (2006) considered four general types of terrorist attack scenarios:

  1. Air attacks using large civilian aircraft or smaller aircraft laden with explosives,
  2. Ground attacks by groups of well-armed and well-trained individuals,
  3. Attacks involving combined air and land assaults, and
  4. Thefts of spent fuel for use by terrorists (including knowledgeable insiders) in radiological dispersal devices.

The report noted that “. . . only attacks that involve the application of large energy impulses or that allow terrorists to gain interior access have any chance of releasing substantial quantities of radioactive material. This further restricts the scenarios that need to be considered. For example, attacks using rocket-propelled grenades (RPGs) of the type that have been carried out in Iraq against U.S. and coalition forces would not likely be successful if the intent of the attack is to cause substantial damage to the facility. Of course, such an attack would get the public’s attention and might even have economic consequences for the attacked plant and possibly the entire commercial nuclear power industry.” (NRC, 2006, p. 30) The concluding sentence speaks to terrorist intent and metrics for success. That is, if the intent of a terrorist attack is to instill fear into the population and cause economic disruption, then an attack need not result in any release of radioactive material from the plant to be judged a success.

The classified report (NRC, 2004) identified particular terrorist attack scenarios that were judged by its authoring committee to have the potential to damage spent fuel pools and result in the loss of water coolant (Section 2.2 in NRC, 2004). The present committee asked USNRC staff whether any of these attack scenarios had been examined further since NRC (2004) was issued. Staff was unable to present the committee with any additional technical analyses of these scenarios. Consequently, the present committee finds that the USNRC has not undertaken additional analyses of terrorist attack scenarios to provide a sufficient technical basis for a reevaluation of Finding 3B in NRC (2004). The present committee did not have enough information to evaluate the particular terrorist attack scenarios identified in NRC (2004) and therefore cannot judge their potential for causing damage to spent fuel pools.

The committee notes, however, that new remote-guided aircraft technologies have come into widespread use in the civilian and military sectors since NRC (2004) was issued. These technologies could potentially be employed in the attack scenarios described in NRC (2004). Other types of threats, particularly insider and cyber threats, have grown in prominence since NRC (2004) was issued. There is a need to more fully explore these threats to understand their potential impacts on nuclear plants.

Loss-of-Coolant Events in Spent Fuel Pools

Reconfiguring spent fuel in pools can be an effective strategy for reducing the likelihood of fuel damage and zirconium cladding fires following loss-of-pool-coolant events. However, reconfiguring spent fuel in pools does not eliminate the risks of zirconium cladding fires, particularly during certain periods following reactor shutdowns or for certain types of pool drainage conditions. These technical studies also illustrate the importance of maintaining water coolant levels in spent fuel pools so that fuel assemblies do not become uncovered.

The particular conditions under which fuel damage and zirconium cladding fires can occur, as well as the timing of such occurrences, are not provided in this report because they are security sensitive.


Spent Fuel Pool Loss-of-Coolant Accidents (LOCAs)

In a complete-loss-of-pool-coolant scenario, most of the oxidation of zirconium cladding occurs in an air environment.   For a partial-loss-of-pool-coolant scenario (or slow drainage in a complete-loss-of-pool-coolant scenario), the initial oxidation of zirconium cladding will occur in a steam environment:

The zirconium-steam reaction leads to the formation of hydrogen, which can undergo rapid deflagration in the pool enclosure, resulting in overpressures and structural damage. This damage can provide a pathway for air ingress to the pool, which can promote further zirconium oxidation and allow radioactive materials to be released into the environment. Debris from the damaged enclosure can fall into the pool and block coolant passages.
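
The underlying chemistry is the standard zirconium-steam reaction (general knowledge, not specific to this report): Zr + 2 H2O -> ZrO2 + 2 H2, which releases roughly 6 MJ per kilogram of zirconium oxidized. A quick stoichiometry check of the hydrogen yield (Python):

    # Hydrogen yield of Zr + 2 H2O -> ZrO2 + 2 H2 (standard stoichiometry).
    M_ZR = 0.09122                        # kg/mol, molar mass of zirconium
    mol_zr_per_kg = 1.0 / M_ZR            # ~11 mol Zr per kg
    mol_h2_per_kg = 2.0 * mol_zr_per_kg   # 2 mol H2 per mol Zr
    print(mol_h2_per_kg * 0.002016)       # ~0.044 kg H2 per kg Zr
    print(mol_h2_per_kg * 0.0224)         # ~0.49 m3 H2 (at STP) per kg Zr

A core or a densely packed pool holds on the order of tens of tonnes of zirconium cladding, so even partial oxidation yields hydrogen in quantities capable of the building-wrecking deflagrations seen at Fukushima.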

After the water level drops below the rack base plate, convective air flow is established. If the steam is exhausted, then the zirconium-steam reaction is replaced by the zirconium-oxygen reaction. However, prior to the onset of convective air flow, fuel cladding temperatures can exceed the threshold for oxidation, and fuel damage and radioactive material release can occur. The time to damage and release depends on pool water depth relative to the stored fuel assemblies.  There is a higher hazard for zirconium cladding fires in partially drained pools.

[To prevent this] nuclear power plants need to be able to provide at least 500 gallons per minute (gpm) of makeup water to the plant’s spent fuel pools for 12 hours.   The operator would first use installed equipment, if available, to meet these goals. If such equipment is not available, then operators would provide makeup water (e.g., from the condensate storage tank) with a portable injection source (pump, flexible hoses to standard connections, and associated diesel engine-generator) that can provide at least 500 gpm of spent fuel pool makeup. The portable equipment would be staged on site and could also be brought in from regional staging facilities.

If pool water levels cannot be maintained above the tops of the fuel assemblies, then portable pumps and nozzles would be used to spray water on the uncovered fuel assemblies. FLEX (the industry’s Diverse and Flexible Coping Strategies program) requires a minimum of 200 gpm to be sprayed onto the tops of the fuel assemblies to cool them (NEI, 2012).
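
For scale, these FLEX figures convert as follows (simple unit arithmetic, not from the report):

    # Converting the FLEX makeup and spray requirements to totals and SI units.
    GAL_TO_M3 = 0.003785

    makeup_gpm, duration_h = 500, 12
    total_gal = makeup_gpm * 60 * duration_h
    print(total_gal)                        # 360,000 gallons over 12 hours
    print(total_gal * GAL_TO_M3)            # ~1,363 m3
    print(makeup_gpm * 60 * GAL_TO_M3)      # ~114 m3/h makeup rate
    print(200 * 60 * GAL_TO_M3)             # ~45 m3/h minimum spray rate

Against the rough ~90 m3/day boil-off rate in the earlier sketch, a 500 gpm makeup source is ample; the hard problem is delivering it when buildings are damaged and radiation limits access.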

Water and spray strategies need to work even if physical access to the pools is hindered and even if permanently installed equipment is damaged. Physical access might not be possible if the building is damaged or the pool is drained; in the latter case, high radiation levels would likely limit physical access to the pool. The spent fuel pools in Units 1-4 of the Fukushima Daiichi plant were not accessible after the hydrogen explosions because of debris and high radiation levels.

Expedited Transfer of Spent Fuel from Pools to Dry Casks

Spent fuel pools at U.S. nuclear plants were originally outfitted with “low-density” storage racks that could hold the equivalent of one or two reactor cores of spent fuel. This capacity was deemed adequate because plant operators planned to store spent fuel only until it was cool enough to be shipped offsite for reprocessing. However, reprocessing of commercial spent fuel was never implemented on a large scale in the United States; consequently, spent fuel has continued to accumulate at operating nuclear plants.

U.S. nuclear plant operators have taken two steps to manage their growing inventories of spent fuel. First, “high-density” spent fuel storage racks have been installed in pools to increase storage capacities. This action alone increased storage capacities in some pools by up to about a factor of 5 (USNRC, 2003). Second, dry cask storage has been established to store spent fuel that can be air cooled. Typically, transfers of the oldest (and therefore coolest) spent fuel from pools to dry casks are made only when needed to free up space in the pool for offloads of spent fuel resulting from reactor refueling operations. The objective of accelerated or expedited transfer would be to reduce the density of spent fuel stored in pools: “Expedited transfer of spent fuel into dry storage involves loading casks at a faster rate for a period of time to achieve a low density configuration in the spent fuel pool (SFP). The expedited process maintains a low density pool by moving all fuel cooled longer than 5 years out of the pool.”
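
The quoted selection rule is easy to state in code. Below is a hypothetical sketch (Python); the data structure and names are invented for illustration.

    # Expedited-transfer selection rule: move all fuel cooled > 5 years to dry casks.
    # Hypothetical data and names, for illustration only.

    from datetime import date

    def eligible_for_dry_cask(discharge_dates, today, min_cooling_years=5):
        """Indices of assemblies cooled long enough for dry cask storage."""
        cutoff_days = min_cooling_years * 365.25
        return [i for i, d in enumerate(discharge_dates)
                if (today - d).days >= cutoff_days]

    pool = [date(2005, 3, 1), date(2012, 9, 1), date(2014, 4, 1)]
    print(eligible_for_dry_cask(pool, date(2016, 1, 1)))   # -> [0]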

The low-density configuration achieved by expedited transfer would reduce inventories of spent fuel stored in pools. This might improve the coolability of the remaining fuel in the pools if water coolant was lost or if cooling systems malfunctioned.

Events capable of causing the loss of cooling in spent fuel pools:

  1. seismic events
  2. drops of casks and other heavy loads on pool walls
  3. loss of offsite power
  4. internal fire
  5. loss of pool cooling or water inventory
  6. inadvertent aircraft impacts
  7. wind-driven missiles (the impacts of heavy objects such as storm debris on the external walls of spent fuel pools)
  8. failures of pneumatic seals on the gates in the spent fuel pool

The USNRC’s analyses are of limited use for assessing spent fuel storage risks  because

  1. Spent fuel storage sabotage risks are not considered.
  2. Dry cask storage risks are not considered.
  3. The attributes considered in the cost-benefit analysis (Section 7.3.2) are limited by OMB and USNRC guidance and do not include some expected consequences of severe nuclear accidents.
  4. The analysis employs simplifying bounding assumptions that make it technically difficult to assign confidence intervals to the consequence estimates or make valid risk comparisons.

The present committee’s recommended risk analysis would provide policy makers with a more complete technical basis for deciding whether earlier movements of spent fuel from pools into dry cask storage would be prudent to reduce the potential consequences of accidents and terrorist attacks on stored spent fuel. This recommended risk analysis should:

  • Consider accident and sabotage risks for both pool and dry cask storage.
  • Consider societal, economic, and health consequences of concern to the public, plant operators, and the USNRC.
  • More fully account for uncertainties in scenario probabilities and consequences.

A complete analysis would also include similar considerations for sabotage threats, including the consequences should a design-basis-threat (DBT) event fail to be mitigated, as well as the consequences should beyond-DBT events occur and fail to be mitigated. A complete analysis would consider a broad range of potential threats including insider and cyber threats. Sabotage initiators can differ from accident initiators in important ways: For example, most accident initiators occur randomly in time compared to the operating cycle of a nuclear plant. Sabotage initiating events can be timed with certain phases of a plant’s operating cycle, changing the conditional probabilities of certain attack scenarios as well as their potential consequences. There may be additional differences between accident and sabotage events with respect to timing, severity of physical damage, and magnitudes of particular consequences, for example radioactive material releases.

The following three conditional probabilities could have correlated and high numerical values if knowledgeable and determined saboteurs attack the plant in certain ways during certain parts of its operating cycle:

P(loss of offsite power | sabotage),

P(operating cycle vulnerability | loss of offsite power & sabotage), and

P(liner damage leading to loss of coolant | operating cycle vulnerability & sabotage).

If one assumes, for example, that these conditional probabilities are 1.0, then release frequencies will be about two orders of magnitude higher than those for a seismic initiator. This increased frequency is a consequence of the correlated behavior of the saboteurs with the reactor operating cycle and a high probability of success using a strategy that exploits plant vulnerabilities. On the other hand, decreasing these three conditional probabilities by a factor of 2 (corresponding to either less successful attackers or more successful defenders) will decrease the likelihood of a release by a factor of 10.

Although the conditional probabilities used in the foregoing scenarios are entirely fictitious (and the scenarios themselves are in no way representative of the broad range of scenarios that could be considered), their use illustrates two important points: (1) a large range of F(release) outcomes is possible depending on the conditional probabilities used in the analysis, and, therefore, (2) it is essential to characterize the uncertainties in F(release) as part of the analysis. A sabotage risk assessment could be used to estimate these outcomes and uncertainties. The committee judges that it is not technically justifiable to exclude sabotage risks without the type of technical analysis that is routinely performed for assessing reactor accident risks. Such an analysis would consider both design-basis and beyond-design-basis threats. The likelihoods of these threats could be assessed through elicitation.
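
The committee's illustrative arithmetic is easy to reproduce. The sketch below (Python) chains the conditional probabilities named above; the attack frequency and all probability values are, like the committee's, entirely fictitious.

    # Release frequency as a chain of conditional probabilities (fictitious values).

    def f_release(f_initiator, p_power_loss, p_cycle_vuln, p_liner_damage):
        """F(release) = F(initiator) x product of the three conditionals."""
        return f_initiator * p_power_loss * p_cycle_vuln * p_liner_damage

    F_ATTACK = 1e-4   # assumed sabotage-initiator frequency per year (fictitious)

    worst = f_release(F_ATTACK, 1.0, 1.0, 1.0)   # correlated, fully successful attack
    halved = f_release(F_ATTACK, 0.5, 0.5, 0.5)  # each conditional cut in half
    print(worst, halved, worst / halved)         # ratio = 2**3 = 8

Halving three correlated conditionals cuts the release frequency by 2^3 = 8, roughly the factor-of-10 the text cites; this cube dependence is what makes the correlated sabotage case so much worse than a random seismic initiator.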

SPENT FUEL POOL STUDY

The Spent Fuel Pool Study analyzed the consequences of a beyond-design-basis earthquake on a spent fuel pool at a reference plant containing a General Electric Type 4 boiling water reactor (BWR) with a Mark I containment. The USNRC selected an earthquake having an average occurrence frequency of 1 in 60,000 years and a peak ground acceleration of 0.5-1.0 g (average 0.7 g) as the initiating event for this analysis. The study examined the effects of the earthquake on the integrity of the spent fuel pool and the effects of loss of pool coolant on its stored spent fuel. A modeling analysis was carried out to identify initial damage states to the pool structure from this postulated seismic event. The analysis concluded that structural damage to the pool leading to water leaks (i.e., tears in the steel pool liner and cracks in the reinforced concrete behind the liner) was most likely to occur at the junction of the pool wall and floor. This leak location would result in complete drainage of the pool if no action was taken to plug the leak or add makeup water. Given the assumed earthquake, the leakage probability was estimated to be about 10 percent.
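
The frequency arithmetic here is worth making explicit (values as quoted above):

    # Initiator frequency x conditional leak probability for the reference plant.
    quake_per_year = 1.0 / 60000.0   # postulated seismic initiating event
    p_leak = 0.10                    # probability of pool leakage given the quake
    print(quake_per_year * p_leak)   # ~1.7e-6 leak events per reactor-year

This is consistent with the roughly 2 × 10⁻⁶ events per reactor-year pool damage frequency cited later for the Generic Issue 82 analysis.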

Leak scenarios

  1. No leak in the spent fuel pool
  2. A “small leak” in the pool that averages about 200 gallons per minute for water heights at least 16 feet above the pool floor (i.e., at the top of the spent fuel rack).
  3. A “moderate leak” in the pool that averages about 1,500 gallons per minute for water heights at least 16 feet above the pool floor.

Reactor operating cycle phases:

  • OCP1: 2-8 days; reactor is being defueled.
  • OCP2: 8-25 days; reactor is being refueled.
  • OCP3: 25-60 days; reactor in operation.
  • OCP4: 60-240 days; reactor in operation.
  • OCP5: 240-700 days; reactor in operation.

Fuel configurations in the pool:

  • A “high-density” storage configuration in which hot (i.e., recently discharged from the reactor) spent fuel assemblies are surrounded by four cooler (i.e., less recently discharged from the reactor) fuel assemblies in a 1 × 4 configuration throughout the pool (Figure 7.2).
  • A “low-density” storage configuration in which all spent fuel older than 5 years has been removed from the pool.

Mitigation scenarios:

  • A “mitigation” case in which plant operators are successful in deploying equipment to provide makeup water and spray cooling required by 10 CFR 50.54(hh)(2) (see Chapter 2).
  • A “no-mitigation” case in which plant operators are not successful in taking these actions [MY NOTE: BECAUSE THE ELECTRIC GRID IS DOWN FROM AN EMP OR OTHER DISASTER ]

Some key results of the consequence modeling are shown in Table 7.1.

Some of the loss-of-coolant scenarios examined in the study resulted in damage to, and the release of, radioactive material from the stored spent fuel. Releases began anywhere from several hours to more than 2 days after the postulated earthquake. The largest releases were estimated to result from high-density fuel storage configurations with no mitigation (Figure 7.1). The releases were estimated to be less than 2 percent of the cesium-137 inventory of the stored fuel for medium-leak scenarios, whereas releases were estimated to be one to two orders of magnitude larger for small-leak scenarios with a hydrogen combustion event. Hydrogen combustion was found to be “possible” for high-density pools but “not predicted” for low-density pools.

Operating-cycle phase (OCP) played a critical role in determining the potential for fuel damage and radioactive materials release. The potential for damage is highest immediately after spent fuel is offloaded into the pool (OCP1) because its decay heat is large. The potential for damage decreases through successive operating-cycle phases (OCP2-OCP5). In fact, only in the first three phases (OCP1-OCP3) is the decay heat sufficiently large to lead to fuel damage in the first 72 hours after the earthquake for complete drainage of the pool. These three “early in operating cycle” phases (Figure 7.1) constitute only about 8 percent of the operating cycle of the reactor.
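
The 8 percent figure follows directly from the phase boundaries listed earlier (a simple check, using the OCP day ranges quoted above):

    # Fraction of the ~700-day operating cycle spent in the vulnerable phases.
    ocp1_to_ocp3_days = 60 - 2    # OCP1 through OCP3 span days 2 to 60
    cycle_days = 700              # OCP5 ends at day 700
    print(ocp1_to_ocp3_days / cycle_days)   # ~0.083, i.e., about 8%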

In fact, a spent fuel pool accident can result in large radioactive material releases, extensive land contamination, and large-scale population dislocations.

NRC 2016 TABLE 7.1 A & B

TABLE 7.1 Key Results from the Consequence Analysis in the Spent Fuel Pool Study

NOTE: The individual early fatality risk estimates and individual latent cancer fatality risk estimates shown in the table were not derived from a risk assessment; they were computed using the postulated earthquake and scenario frequencies shown in the table. PGA = peak ground acceleration. a) Seismic hazard model from Petersen et al. (2008). b) Given that the specified seismic event occurs. c) Given that an atmospheric release occurs. d) Results from a release are averaged over potential variations in leak size, time since reactor shutdown, population distribution, and weather conditions (as applicable); additionally, “release frequency-weighted” results are multiplied by the release frequency. e) Linear no-threshold and population weighted (i.e., the total number of latent cancer fatalities predicted in a specified area, divided by the population that resides within that area). f) First year post-accident; the calculation uses a dose limit of 500 mrem per year, per Pennsylvania Code, Title 25 § 219.51. g) Mitigation can moderately increase release size; the effect is small compared to the reduction in release frequency. h) The largest releases here are associated with small leaks (although sensitivity results show large releases are possible from moderate leaks). Assuming no complications from other spent fuel pools/reactors or a shortage of available equipment/staff, there is a good chance of mitigating a small-leak event. i) Kevin Witt, USNRC, written communication, December 22, 2015.

For example, Figures 7.3A, 7.3B, and 7.3C show the estimated radioactive material releases, land interdiction, and displaced persons for the reference plant in the Spent Fuel Pool Study. Also shown for comparison purposes are the same consequences for the Fukushima Daiichi accident, taken from the committee’s phase 1 report.

NRC 2016 FIGURE 7.3 B & C

FIGURE 7.3 Selected consequences from the Spent Fuel Pool Study as a function of fuel loading (1 × 4 loading; low-density loading) and mitigation required by 10 CFR 50.54(hh)(2). Notes: Consequences for the Fukushima Daiichi accident are shown for comparison. (A) Radioactive material releases. (B) Land interdiction (see footnote 26 for an explanation of the values for the Fukushima bar). (C) Displaced populations. SOURCE: Table 7.1 in this report; IAEA (2015), NRA (2013), NRC (2014, Chapter 6), UNSCEAR (2013).

These figures illustrate three important points:

  1. A spent fuel pool accident can result in large releases of radioactive material, extensive land interdiction, and large population displacements.
  2. Effective mitigation of such accidents can substantially reduce these consequences for some fuel configurations (cf. the bars in the figures for the 1 × 4 mitigated and unmitigated scenarios) but can increase consequences for others (cf. the bars in the figures for the low-density mitigated and unmitigated scenarios).
  3. Low-density loading of spent fuel in pools can also substantially reduce these consequences and also reduce the need for effective mitigation measures.

Note that the Fukushima estimate includes land that is both interdicted and likely condemned.

The Spent Fuel Pool Study (USNRC, 2014a) reports only interdicted land. One of the difficulties with USNRC (2014a) is that, unlike previous studies, condemned land is not reported. Of the 430 mi² (1,113 km²) that were evacuated as of May 2013, 124 mi² (320 km²) were reported as “difficult to return,” which gives an indication of the amount of land that may ultimately be condemned.

A similar point can be made by examining the unweighted results from the Expedited Transfer Regulatory Analysis (USNRC, 2013) for a “sensitivity case” that removes the 50-mile limit for land interdiction and population displacements and raises the value of the averted dose conversion factor from $2,000 per person-rem to $4,000 per person-rem. This scenario postulates the evacuation of 3.46 million people from an area of 11,920 mi2, larger than the area of New Jersey (Table 7.2).  In comparison, approximately 88,000 people were involuntarily displaced from an area of about 400 mi2 as a consequence of the Fukushima accident (MOE, 2015).
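The arithmetic behind the averted-dose conversion factor is simple multiplication; the sketch below shows it. The collective averted dose is a made-up placeholder; only the two dollar values come from the text.

    # Monetizing averted dose; the collective dose is a placeholder.
    collective_averted_dose = 1.0e6   # person-rem averted (placeholder)
    for dollars_per_person_rem in (2_000, 4_000):
        benefit = collective_averted_dose * dollars_per_person_rem
        print(f"${dollars_per_person_rem:,}/person-rem -> monetized benefit ${benefit:,.0f}")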

The cost-benefit analysis did not consider some other important health consequences of spent fuel pool accidents, in particular social distress. The Fukushima Daiichi accident produced considerable psychological stress within populations in the Fukushima Prefecture over the past 4 years, even in areas where radiation levels are deemed by regulators to be acceptable for habitation. Radiation anxiety, insomnia, and alcohol misuse were significantly elevated 3 years after the accident (Karz et al., 2014). The incidence of mental health problems and suicidal thoughts was also high among residents forced to live in long-term shelters after the accident.

Complex psychosocial effects were also observed, including discordance within families over perceptions of radiation risk, between families over unequal compensatory treatments, and between evacuees and their host communities (Hasegawa et al., 2015).

Sailor et al. (1987) used a modified version of SFUEL to estimate the risks (likelihoods) of zirconium cladding fires as a function of racking density. They estimated that risks could be reduced by a factor of 5 by switching from high- to low-density racks. This estimate was based on the reduction of minimum decay times before the fuel could be air cooled, and also on the reduced likelihood that a zirconium cladding fire would propagate from recently discharged fuel assemblies to older fuel assemblies in low-density racks compared to high-density racks. However, Sailor et al. (1987) cautioned that "the uncertainties in the risk estimate are large."

The regulatory analysis for the resolution of Generic Issue 82 (Throm, 1989) was intended to determine whether the use of high-density racks poses an unacceptable risk to the health and safety of the public. The analysis concluded that no regulatory action was needed; that is, the use of high-density storage racks posed an acceptable risk. The technical analysis was based on the studies of Benjamin et al. (1979) and Sailor et al. (1987) and used the factor-of-5 reduction in the likelihood (i.e., the conditional probability of a fire given a drained pool) of a zirconium cladding fire for switching from high-density to low-density racks. A cost-benefit analysis analogous to that employed in USNRC (2014a) found that the costs associated with reracking existing pools (and moving older fuel in the pool to dry storage to accommodate reracking) substantially exceeded the benefits in terms of population dose reductions.

The assumptions and methodology used in the regulatory analysis for Generic Issue 82 are similar to those used in USNRC (2014a): A seismic event is considered the most likely initiator of the accident, and the spent fuel pool damage frequency is taken to be about 2 × 10–6 events per reactor-year. Moreover, USNRC (2014a) reached essentially the same conclusions as the regulatory analysis for the resolution of Generic Issue 82 (Throm, 1989).
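For a rough sense of scale for that frequency, the sketch below multiplies it out over an assumed fleet and time horizon. The fleet size and horizon are illustrative assumptions, not values from either analysis.

    # Order-of-magnitude check on the assumed damage frequency.
    damage_frequency = 2e-6    # pool damage events per reactor-year (from text)
    reactors = 100             # assumed U.S. fleet size
    years = 40                 # assumed operating horizon
    print(f"expected pool-damage events: {damage_frequency * reactors * years:.3f}")  # 0.008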

A more pessimistic view on the uncertainties of modeling spent fuel pool loss-of-coolant accidents was expressed by Collins and Hubbard (2001): “In its thermal-hydraulic analysis . . . the staff concluded that it was not feasible, without numerous constraints, to establish a generic decay heat level (and therefore a decay time) beyond which a zirconium fire is physically impossible. Heat removal is very sensitive to these additional constraints, which involve factors such as fuel assembly geometry and SFP rack configuration. However, fuel assembly geometry and rack configuration are plant specific, and both are subject to unpredictable changes after an earthquake or cask drop that drains the pool. Therefore, since a non-negligible decay heat source lasts many years and since configurations ensuring sufficient air flow for cooling cannot be assured, the possibility of reaching the zirconium ignition temperature cannot be precluded on a generic basis.” (p. 5-2)

There is still a great deal to be learned about the impacts of the accident on the Fukushima Daiichi plant, including impacts on spent fuel storage. Additional information will likely be uncovered as the plant is dismantled and studied, perhaps resulting in new lessons learned and revisions to existing lessons, including those in this report.

References

ACRS (Advisory Committee on Reactor Safeguards). 2012a. Draft Interim Staff Guidance Documents in Support of Tier 1 Orders. July 17. http://pbadupws.nrc.gov/ docs/ML1219/ML12198A196.pdf.

ACRS. 2012b. Response to the August 15, 2012 EDO Letter Regarding ACRS Recommendations in Letter Dated July 17, 2012 on the Draft Interim Staff Guidance Documents in Support of Tier 1 Orders. November 7. http://pbadupws.nrc.gov/docs/ML1231/ML12312A197.pdf

ACRS. 2013. Spent Fuel Pool Study. July 18. http://pbadupws.nrc.gov/docs/ ML1319/ML13198A433.pdf.

AFM (Department of the Army Field Manual). 1991. Special Operations Forces Intelligence and Electronic Warfare Operations. Field Manual No. 34-36, Appendix D: Target Analysis Process. September 30. Washington, DC: Department of the Army.

Amagai, M., N. Kobayashi, M. Nitta, M. Takahashi, I. Takada, Y. Takeuch, Y. Sawada, and M. Hiroshima. 2014. Factors related to the mental health and suicidal thoughts of adults living in shelters for a protracted period following a large-scale disaster. Academic Journal of Interdisciplinary Studies 3(3): 11-16.

ASME/ANS (American Society of Mechanical Engineers/American Nuclear Society). 2009. Standard for Level 1/Large Early Release Frequency Probabilistic Risk Assessment for Nuclear Power Plant Applications. ASME/ANS RA-Sa-2009. New York: ASME Technical Publishing Office.

Bader, J. A. 2012. Inside the White House during Fukushima: Managing Multiple Crises. Foreign Affairs (Snapshots), March 8. http://www.foreignaffairs.com/articles/137320/jeffrey-a-bader/inside-the-white-house-during-fukushima#.

Benjamin, A. S., D. J. McCloskey, D. A. Powers, and S. A. Dupree. 1979. Fuel Heat-up Following Loss of Water during Storage. NUREG/CR-0649, SAND77-1371. Albuquerque, NM: Sandia National Laboratories. http://pbadupws.nrc.gov/docs/ML1209/ ML120960637.pdf.

Bennett, B. T. 2007. Understanding, Assessing, and Responding to Terrorism: Protecting Critical Infrastructure and Personnel. Hoboken, NJ: John Wiley & Sons.

Bier, V., M. Corradini, R. Youngblood, C. Roh, and S. Liua. 2014. Development of an updated societal-risk goal for nuclear power safety: Probabilistic safety assessment and management, PSAM-12. INL/CON-13-30495. Proceedings of the Conference on Probabilistic Safety Assessment and Management, Honolulu, Hawaii, June 22-27. http://psam12.org/proceedings/paper/paper_199_1.pdf

Blustein, P. 2013. Fukushima's Worst-Case Scenarios: Much of what you've heard about the nuclear accident is wrong. Slate, September 26. http://www.slate.com/articles/health_and_science/science/2013/09/fukushima_disaster_new_information_about_worst_case_scenarios.2.html

Boyd, C. F. 2000. Predictions of Spent Fuel Heatup after a Complete Loss of Spent Fuel Pool Coolant. NUREG-1726. Washington, DC: U.S. Nuclear Regulatory Commission. http://pbadupws.nrc.gov/docs/ML0037/ML003727905.pdf.

Bromet, E. J., and L. Litcher-Kelly. 2002. Psychological response of mothers of young children to the Three Mile Island and Chernobyl Nuclear Plant accidents one decade later. In J. Havenaar, J. Cwikel, and E. Bromet (Eds.), Toxic Turmoil: Psychological and Societal Consequences of Ecological Disasters (pp. 69-84). New York: Springer Science+Business Media LLC/Springer US.

Brown, G. G., and L. A. Cox, Jr. 2011a. Making terrorism risk analysis less harmful and more useful: Another try. Risk Analysis 31(2): 193-195.

Brown, G. G., and L. A. Cox, Jr. 2011b. How probabilistic risk assessment can mislead terrorism risk analysts. Risk Analysis 31(2): 196-204.

Budnitz, R. J., G. Apostolakis, D. M. Boore, L. S. Cluff, K. J. Coppersmith, C. Allin Cornell, and P. A. Morris. 1998. Use of technical expert panels: Applications to probabilistic seismic hazard analysis. Risk Analysis 18(4): 463-469.

Chen, S. R., W. C. Lin, Y. M. Ferng, C. C. Chieng, and B. S. Pei. 2014. CFD simulating the transient thermal–hydraulic characteristics in a 17 x 17 bundle for a spent fuel pool under the loss of external cooling system accident. Annals of Nuclear Energy 73(2014): 241-249.

Clauset, A., M. Young, and K. S. Gleditsch. 2007. On the frequency of severe terrorist events. Journal of Conflict Resolution 51(1): 58-87.

Cleveland, K. 2014. Mobilizing Nuclear Bias: The Fukushima Nuclear Crisis and the Politics of Uncertainty. The Asia-Pacific Journal, May 18. http://apjjf.org/2014/12/20/Kyle-Cleveland/4116/article.html

Collins, T. E., and G. Hubbard. 2001. Technical Study of Spent Fuel Pool Accident Risk at Decommissioning Nuclear Power Plants. NUREG-1738. Washington, DC: U.S. Nuclear Regulatory Commission. http://pbadupws.nrc.gov/docs/ML0104/ ML010430066.pdf.

Cooke, R. M., A. M. Wilson, J. T. Tuomisto, O. Morales, M. Tainio, and J. S. Evans. 2007. A probabilistic characterization of the relationship between fine particulate matter and mortality: Elicitation of European experts. Environmental Science & Technology 41(18): 6598-6605.

Danzer, A. M., and N. Danzer. 2014. The Long-Run Consequences of Chernobyl: Evidence on Subjective Well-Being, Mental Health and Welfare. Center for Economic Studies and Ifo Institute, Working Paper No. 4855.

DCLG (Department for Communities and Local Government). 2009. Multi-criteria analysis: A manual. 08ACST05703. London: DCLG. https://www.gov.uk/government/ uploads/system/uploads/attachment_data/file/7612/1132618.pdf

Denning, R., and S. McGhee. 2013. The societal risk of severe accidents in nuclear power plants. Transactions of the American Nuclear Society 108: 521-525.

DHS (U.S. Department of Homeland Security). 2010. Nuclear Reactors, Materials, and Waste Sector-Specific Plan: An Annex to the National Infrastructure Protection Plan. Washington, DC: DHS. https://www.dhs.gov/xlibrary/assets/nipp-ssp-nuclear-2010.pdf

DHS. 2013. NIPP 2013: Partnering for Critical Infrastructure Security and Resilience. Washington, DC: DHS. http://www.dhs.gov/sites/default/files/publications/National-Infrastructure-Protection-Plan-2013-508.pdf

EPRI (Electric Power Research Institute). 2004. Probabilistic Consequence Analysis of Security Threats—A Prototype Vulnerability Assessment Process for Nuclear Power Plants. Technical Report No. 1007975. Palo Alto, CA: EPRI.

EPRI. 2012a. Summary of the EPRI Early Event Analysis of the Fukushima Daiichi Spent Fuel Pools Following the March 11, 2011 Earthquake and Tsunami in Japan, Technical Update. Palo Alto, CA: EPRI. http://www.epri.com/abstracts/Pages/ ProductAbstract.aspx?ProductId=000000000001025058

EPRI. 2012b. Practical Guidance on the Use of PRA in Risk-Informed Applications with a Focus on the Treatment of Uncertainty. Palo Alto, CA: EPRI. http://www.epri.com/abstracts/Pages/ProductAbstract.aspx?ProductId=000000000001026511

Ezell, B., and A. Collins. 2011. Letter to the Editor. Risk Analysis 31(2): 192.

Ezell, B. C., S. P. Bennett, D. von Winterfeldt, J. Sokolowski, and A. J. Collins. 2010. Probabilistic risk analysis and terrorism risk. Risk Analysis 30(4): 575-589. https://www.dhs.gov/xlibrary/assets/rma-risk-assessment-technical-publication.pdf.

Forester, J., A. Kolaczkowski, S. Cooper, D. Bley, and E. Lois. 2007. ATHEANA User’s Guide: Final Report. NUREG 1880. Washington, DC: U.S. Nuclear Regulatory Commission. http://pbadupws.nrc.gov/docs/ML0721/ML072130359.pdf.

Frye, R. M., Jr. 2013. The use of expert elicitation at the U.S. Nuclear Regulatory Commission. Albany Law Journal of Science and Technology 23(2): 309-382. http://www.albanylawjournal.org/Documents/Articles/23.2.309-Frye.pdf.

GAO (U.S. Government Accountability Office). 2006. Nuclear Power Plants: Efforts Made to Upgrade Security, but the Nuclear Regulatory Commission's Design Basis Threat Process Should Be Improved. GAO-06-388. Washington, DC: GAO. http://www.gao.gov/new.items/d06388.pdf.

Garrick, B. J., J. E. Hall, M. Kilger, J. C. McDonald, T. O’Toole, P. S. Probst, E. R. Parker, R. Rosenthal, A. W. Trivelpiece, L. A. VanArsdal, and E. L. Zebroski. 2004. Confronting the risks of terrorism: Making the right decisions. Reliability Engineering and System Safety 86(2): 129-176.

Gauntt, R., D. Kalinich, J. Cardoni, J. Phillips, A. Goldmann, S. Pickering, M. Francis, K. Robb, L. Ott, D. Wang, C. Smith, S. St. Germain, D. Schwieder, and C. Phelan. 2012. Fukushima Daiichi Accident Study (Status as of April 2012). SAND2012-6173. Albuquerque, NM, and Livermore, CA: Sandia National Laboratories. https://fukushima.inl.gov/PDF/FukushimaDaiichiAccidentStudy.pdf.

GIF (The Proliferation Resistance and Physical Protection Evaluation Methodology Working Group of the Generation IV International Forum). 2011. Evaluation Methodology for Proliferation Resistance and Physical Protection of Generation IV Nuclear Energy Systems. Revision 6. September 15. GEN IV International Forum. https://www.gen-4.org/gif/upload/docs/application/pdf/2013-09/gif_prppem_rev6_final.pdf. (Last accessed March 8, 2016.)

Gneiting, T., F. Balabdaoui, and A. E. Raftery. 2007. Probabilistic forecasts, calibration and sharpness. Journal of the Royal Statistical Society, Series B 69(2): 243-268. http://onlinelibrary.wiley.com/doi/10.1111/j.1467-9868.2007.00587.x/epdf

Government of Japan. 2011. Report of Japanese Government to the IAEA Ministerial Conference on Nuclear Safety: The Accident at TEPCO’s Fukushima Nuclear Power Stations. June. Tokyo: Government of Japan NERHQ (Nuclear Emergency Response Headquarters). www.kantei.go.jp/foreign/kan/topics/201106/iaea_houkokusho_e.html

Government of Japan. 2015. Events and Highlights on the Progress Related to Recovery Operations at Fukushima Daiichi Nuclear Power Station. https://www.iaea.org/sites/default/files/highlights-japan1115.pdf

Hasegawa, A., K. Tanigawa, A. Ohtsuru, H. Yabe, M. Maeda, J. Shigemura, T. Ohira, T. Tominaga, M. Akashi, N. Hirohashi, T. Ishikawa, K. Kamiya, K. Shibuya, S. Yamashita, and R. K. Chhem. 2015. Health effects of radiation and other health problems in the aftermath of nuclear accidents, with an emphasis on Fukushima. The Lancet 386(9992): 479-488.

Hirschberg, S., C. Bauer, P. Burgherr, E. Cazzoli, T. Heck, M. Spada, and K. Treyer. 2016. Health effects of technologies for power generation: Contributions from normal operation, severe accidents and terrorist threat. Reliability Engineering and System Safety 145(2016): 373-387.

Hugo, B. R., and R. P. Omberg. 2015. Evaluation of Fukushima Daiichi Unit 4 spent fuel pool. International Nuclear Safety Journal 4(2): 1-5.

Hung, T.-C., V. K. Dhir, B.-S. Pei, Y.-S. Chen, and F. P. Tsai. 2013. The development of a three-dimensional transient CFD model for predicting cooling ability of spent fuel pools. Applied Thermal Engineering 50(2013): 496-504.

IAEA (International Atomic Energy Agency). 2015. The Fukushima Daiichi Accident. http://www-pub.iaea.org/books/IAEABooks/10962/The-Fukushima-DaiichiAccident

Insua, D. R., and S. French. 1991. A framework for sensitivity analysis in discrete multi-objective decision-making. European Journal of Operational Research 54(2): 176-190.

Investigation Committee (Investigation Committee on the Accident at Fukushima Nuclear Power Stations of Tokyo Electric Power Company). 2011. Interim Report. December 26. Tokyo: Government of Japan. http://www.cas.go.jp/jp/seisaku/icanps/eng/interim-report.html

Investigation Committee. 2012. Final Report on the Accident at Fukushima Nuclear Power Stations of Tokyo Electric Power Company. July 23. Tokyo: Government of Japan. http://www.cas.go.jp/jp/seisaku/icanps/eng/final-report.html

Jäckel, B. S. 2015. Status of spent fuel in the reactor buildings of Fukushima Daiichi 1-4. Nuclear Engineering and Design 283(March): 2-7.

Jo, J. H., P. F. Rose, S. D. Unwin, V. L. Sailor, K. R. Perkins, and A. G. Tingle. 1989. Value/Impact Analyses of Accident Preventive and Mitigative Options for Spent Fuel Pools. NUREG/CR-5281, BNL-NUREG-52180. Upton, NY, and Washington, DC: Brookhaven National Laboratory and U.S. Nuclear Regulatory Commission. http://pbadupws.nrc.gov/docs/ML0716/ML071690022.pdf

Kaplan, S., and B. J. Garrick. 1981. On the quantitative definition of risk. Risk Analysis 1(1): 11-27.

Karz, A., J. Reichstein, R. Yanagisawa, and C. L. Katz. 2014. Ongoing mental health concerns in post-3/11 Japan. Annals of Global Health 80(2): 108-114.

Keeney, R. L., and D. von Winterfeldt. 1991. Eliciting probabilities from experts in complex technical problems. IEEE Transactions on Engineering Management 38: 191-201.

Kondo, S. 2011. Sketches of Scenarios of Contingencies at Fukushima Daiichi Nuclear Power Plant [in Japanese]. March 25. http://www.asahi-net.or.jp/~pn8r-fjsk/saiakusinario.pdf

Kotra, J. P., M. P. Lee, N. A. Eisenberg, and A. R. DeWispelare. 1996. Branch Technical Position on the Use of Expert Elicitation in the High-Level Radioactive Waste Program. NUREG-1563. November. Washington, DC: U.S. Nuclear Regulatory Commission. http://www.nrc.gov/reading-rm/doc-collections/nuregs/staff/sr1563/sr1563.pdf

Kubota, Y. 2012. Factbox: Japan's hidden nightmare scenario for Fukushima. Reuters, February 17.

Lewis, E. E. 2008. Fundamentals of Nuclear Reactor Physics. Burlington, MA, and San Diego, CA: Academic Press (Elsevier).

Lienhard, J. H., IV, and J. H. Lienhard V. 2015. A Heat Transfer Textbook, 4th Edition. Cambridge, MA: Phlogiston Press. http://ahtt.mit.edu/.

Lindgren, E. R., and S. G. Durbin. 2007. Characterization of Thermal-Hydraulic and Ignition Phenomena in Prototypic, Full-Length Boiling Water Reactor Spent Fuel Pool Assemblies. 2270. Washington, DC, and Albuquerque, NM: U.S. Nuclear Regulatory Commission and Sandia National Laboratory. http://pbadupws.nrc.gov/docs/ML1307/ML13072A056.pdf.

Masunaga, T., A. Kozlovsky, A. Lyzikov, N. Takamura, and S. Yamashita. 2014. Mental health status among younger generation around Chernobyl. Archives of Medical Science 9(6): 1114-1116.

Mellers, B., E. Stone, T. Murray, A. Minster, N. Rohrbaugh, M. Bishop, E. Chen, J. Baker, Y. Hou, M. Horowitz, L. Ungar, and P. Tetlock. 2015. Identifying and cultivating superforecasters as a method for improving probabilistic predictions. Perspectives on Psychological Science 10(3): 267-281.

MOE (Japan Ministry of the Environment). 2015. Progress on Off-site Cleanup Efforts in Japan (April). Tokyo: MOE. http://www.export.gov/japan/build/groups/public/@eg_jp/documents/webcontent/eg_jp_085466.pdf.

Morgan, M. G. 2014. Use (and abuse) of expert elicitation in support of decision making for public policy. Proceedings of the National Academy of Sciences of the United States of America 111(20): 7176-7184. http://www.pnas.org/content/111/20/7176. full.pdf.

NAIIC (Nuclear Accident Independent Investigation Commission). 2012. The Official Report of the Fukushima Nuclear Accident Independent Investigation Commission. Tokyo: National Diet of Japan. https://www.nirs.org/fukushima/naiic_report.pdf.

NEA (Nuclear Energy Agency). 2015. Status Report on Spent Fuel Pools under Loss-of-Cooling and Loss-of-Coolant Accident Conditions: Final Report. NEA/CSNI/R(2015)-2. Paris: NEA–Organisation for Economic Co-Operation and Development. https://www.oecd-nea.org/nsd/docs/2015/csni-r2015-2.pdf.

NEI (Nuclear Energy Institute). 2009. B.5.b Phase 2 & 3 Submittal Guidance. NEI 06-12, Revision 3. Washington, DC: NEI.

NEI. 2012. Diverse and Flexible Coping Strategies (FLEX) Implementation Guide. NEI 12-06, Revision B1. Washington, DC: NEI. http://pbadupws.nrc.gov/docs/ML1214/ ML12143A232.pdf.

Nishihara, K., H. Iwamoto, and K. Suyama. 2012. Estimation of Fuel Compositions in Fukushima-Daiichi Nuclear Power Plant. JAEA-Data/Code 2012-018. Tokai, Japan: Japan Atomic Energy Agency.

NRA (Nuclear Regulation Authority of Japan). 2013. Monitoring Air Dose Rates from a Series of Aircraft Surveys across the Two Years after the Fukushima Daiichi NPS Accident. June 5. Tokyo: Radiation Monitoring Division, Secretariat of the Nuclear Regulation Authority. https://www.nsr.go.jp/data/000067128.pdf.

NRA. 2014. Analysis of the TEPCO Fukushima Daiichi NPS Accident: Interim Report (Provisional Translation). October. Tokyo: NRA. https://www.iaea.org/sites/ default/files/anaylysis_nra1014.pdf.

NRC (National Research Council). 2004. Safety and Security of Commercial Spent Nuclear Fuel Storage (U). Washington, DC: National Research Council.

NRC. 2006. Safety and Security of Commercial Spent Nuclear Fuel Storage: Public Report. Washington, DC: The National Academies Press. http://www.nap.edu/catalog/11263/safety-and-security-of-commercial-spent-nuclear-fuel-storage-public.

NRC. 2008. Department of Homeland Security Bioterrorism Risk Assessment: A Call for Change. Washington, DC: The National Academies Press. http://www.nap.edu/catalog/12206/department-of-homeland-security-bioterrorism-risk-assessment-a-call-for.

NRC. 2010. Review of the Department of Homeland Security's Approach to Risk Analysis. Washington, DC: The National Academies Press. http://www.nap.edu/catalog/12972/review-of-the-department-of-homeland-securitys-approach-to-risk-analysis.

NRC. 2011. Understanding and Managing Risk in Security Systems for the DOE Nuclear Weapons Complex. Washington, DC: The National Academies Press. http://www.nap.edu/catalog/13108/understanding-and-managing-risk-in-security-systems-for-the-doe-nuclear-weapons-complex.

NRC. 2014. Lessons Learned from the Fukushima Nuclear Accident for Improving Safety of U.S. Nuclear Plants. Washington, DC: The National Academies Press. http://www.nap.edu/catalog/18294/lessons-learned-from-the-fukushima-nuclear-accident-for-improving-safety-of-us-nuclear-plants.

OMB (U.S. Office of Management and Budget). 1992. Guidelines and Discount Rates for Benefit-Cost Analysis of Federal Programs. Circular No. A-94. Washington, DC: Office of Management and Budget. https://www.whitehouse.gov/sites/default/files/ omb/assets/a94/a094.pdf.

Parfomak, P. W. 2014. Physical Security of the U.S. Power Grid: High Voltage Transformer Substations. July 17. Washington, DC: Congressional Research Service.

Petersen, M. D., A. D. Frankel, S. C. Harmsen, C. S. Mueller, K. M. Haller, R. L. Wheeler, R. L. Wesson, Y. Zeng, O. S. Boyd, D. M. Perkins, N. Luco, E. H. Field, C. J. Wills, and K. S. Rukstales. 2008. Documentation for the 2008 Update of the United States National Seismic Hazard Maps. U.S. Geological Survey Open-File Report 2008–1128. Reston, VA: U.S. Geological Survey. http://pubs.usgs.gov/of/2008/1128/. (Last accessed February 26, 2016.)

Povinec, P., K. Hirose, and M. Aoyama. 2013. Fukushima Accident: Radioactivity Impact on the Environment. Waltham, MA: Elsevier.

RJIF (Rebuild Japan Initiative Foundation’s Independent Investigation Commission on the Fukushima Nuclear Accident). 2014. The Fukushima Daiichi Nuclear Power Station Disaster: Investigating the Myth and Reality. London: Routledge.

Ross, K., J. Phillips, R. O. Gauntt, and K. C. Wagner. 2014. MELCOR Best Practices as Applied in the State-of-the-Art Reactor Consequence Analyses (SOARCA) Project. NUREG/ CR-7008. Washington, DC: Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission. http://pbadupws.nrc.gov/docs/ML1423/ML14234A136. pdf.

Sailor, V. L., K. R. Perkins, J. R. Weeks, and H. R. Connell. 1987. Severe Accidents in Spent Fuel Pools in Support of Generic Safety Issue 82. NUREG/CR-4982, BNL-NUREG-52093. July. Upton, NY: Brookhaven National Laboratory. http://www.osti.gov/scitech/servlets/purl/6135335.

Satopää, V., J. Baron, D. P. Foster, B. A. Mellers, P. E. Tetlock, and L. H. Ungar. 2014. Combining multiple probability predictions using a simple logit model. International Journal of Forecasting 30(2): 344-356.

Sehgal, B. R. (Ed.). 2011. Nuclear Safety in Light Water Reactors: Severe Accident Phenomenology. Oxford, UK: Academic Press (Elsevier).

Sugiyama, G., J. S. Nasstrom, B. Pobanz, K. T. Foster, M. Simpson, P. Vogt, F. Aluzzi, M. Dillon, and S. Homann. 2013. NARAC Modeling During the Response to the Fukushima Dai-ichi Nuclear Power Plant Emergency. LLNL-CONF-529471. Livermore, CA: Lawrence Livermore National Laboratory. https://e-reports-ext.llnl.gov/ pdf/564098.pdf.

Tateiwa, K. 2015. Spent Fuel and Spent Fuel Storage Facilities at Fukushima Daiichi. Presentation to the Committee on Lessons Learned from the Fukushima Nuclear Accident for Improving Safety and Security of U.S. Nuclear Plants. January 29. Washington, DC: National Academies of Sciences, Engineering, and Medicine.

Teodorczyk, A., and J. E. Shepherd. 2012. Interaction of a Shock Wave with a Water Layer. Technical Report FM2012.002. May. Pasadena, CA: California Institute of Technology. http://shepherd.caltech.edu/EDL/publications/reprints/galcit_fm2012-002. pdf.

TEPCO (Tokyo Electric Power Company). 2011. Fukushima Nuclear Accident Analysis Report (Interim Report). December 2. Tokyo: TEPCO. http://www.tepco.co.jp/ en/press/corp-com/release/11120205-e.html.

TEPCO. 2012a. Fukushima Nuclear Accident Analysis Report. June 20. Tokyo: TEPCO. http://www.tepco.co.jp/en/press/corp-com/release/2012/1205638_1870.html.

TEPCO. 2012b. The Integrity Evaluation of the Reactor Building at Unit 4 in the Fukushima Daiichi Nuclear Power Station. Presentation to the Government and TEPCO's Mid to Long Term Countermeasure Meeting Management Council. May. https://www.oecd-nea.org/nsd/fukushima/documents/Fukushima4_SFP_integrity_May_2012.pdf.

TEPCO. 2012c. The skimmer surge tank drawdown at the Unit 4 spent fuel pool. January 23. (Last accessed February 26, 2016.)

Throm, E. 1989. Regulatory Analysis for the Resolution of Generic Issue 82 "Beyond Design Basis Accidents in Spent Fuel Pools." NUREG-1353. April. Washington, DC: Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission. http://pbadupws.nrc.gov/docs/ML0823/ML082330232.pdf.

UNSCEAR (United Nations Scientific Committee on the Effects of Atomic Radiation). 2013. Sources, Effects and Risks of Ionizing Radiation. New York: United Nations. http://www.unscear.org/docs/reports/2013/14-06336_Report_2013_Annex_A_Ebook_ website.pdf.

USNRC (U.S. Nuclear Regulatory Commission). 1983. Safety Goals for Nuclear Power Plant Operation. NUREG-0880 (Revision 1). Washington, DC: U.S. Nuclear Regulatory Commission. http://pbadupws.nrc.gov/docs/ML0717/ML071770230.pdf.

USNRC. 1986. Safety Goals for the Operation of Nuclear Power Plants. 51 Federal Register 28044 (August 4, 1986) as corrected and republished at 51 Federal Register 30028 (August 21, 1986).

USNRC. 1990. Severe Accident Risks: An Assessment for Five U.S. Nuclear Power Plants. NUREG-1150. October. Washington, DC: USNRC. http://www.nrc.gov/ reading-rm/doc-collections/nuregs/staff/sr1150/.

USNRC. 1997a. Perimeter Intrusion and Alarm Systems. Regulatory Guide 5.44 (Rev. 3). Washington, DC: Office of Nuclear Regulatory Research, USNRC.

USNRC. 1997b. Regulatory Analysis Technical Evaluation Handbook, Final Report. NUREG/ BR-0184. January. Washington, DC: Office of Nuclear Regulatory Research, USNRC. http://www.nrc.gov/about-nrc/regulatory/crgr/content-rqmts/nuregbr-0184. pdf.

USNRC. 1997c. Operating Experience Feedback Report: Assessment of Spent Fuel Cooling. NUREG-1275, Vol. 12. Washington, DC: Office for Analysis and Evaluation of Operational Data, USNRC. http://pbadupws.nrc.gov/docs/ML0106/ ML010670175.pdf. (Last accessed April 5, 2016.)

USNRC. 2003. Resolution of Generic Safety Issues. NUREG-0933. October. Washington, DC: USNRC. http://nureg.nrc.gov/sr0933/.

USNRC. 2004. Regulatory Analysis Guidelines of the U.S. Nuclear Regulatory Commission. NUREG/BR-0058, Revision 4. September. Washington, DC: Office of Nuclear Regulatory Research, USNRC. http://www.nrc.gov/reading-rm/doc-collections/nuregs/brochures/br0058/br0058r4.pdf.

USNRC. 2007a. Independent Spent Fuel Storage Installation Security Requirements for Radiological Sabotage. SECY-07-0148. Washington, DC: USNRC.

USNRC. 2007b. A Pilot Probabilistic Risk Assessment of a Dry Cask Storage System at a Nuclear Power Plant. NUREG-1864. Washington, DC: Office of Nuclear Regulatory Research, USNRC. http://pbadupws.nrc.gov/docs/ML0713/ML071340012. pdf.

USNRC. 2009a. Draft Technical Basis for a Rulemaking to Revise the Security Requirements for Facilities Storing Spent Nuclear Fuel and High-Level Radioactive Waste, Revision 1. NRC-2009-0558. http://pbadupws.nrc.gov/docs/ML0932/ML093280743.pdf.

USNRC. 2009b. An Approach for Determining the Technical Adequacy of Probabilistic Risk Assessment Results for Risk-Informed Activities. RG 1.200, Revision 2. March. Washington, DC: USNRC. http://pbadupws.nrc.gov/docs/ML0904/ML090410014. pdf.

USNRC. 2011a. Intrusion Detection Systems and Subsystems: Technical Information for NRC Licensees. NUREG-1959. March. Washington, DC: Office of Nuclear Security and Incident Response, USNRC. http://pbadupws.nrc.gov/docs/ML1111/ML11112A009.pdf.

USNRC. 2011b. Prioritization of Recommended Actions to Be Taken in Response to Fukushima Lessons Learned. SECY-11-0037. October 3. http://pbadupws.nrc.gov/docs/ML1127/ML11272A111.html.

USNRC. 2012a. Letter from E. J. Leeds and M. R. Johnson with Order Modifying Licenses with Regard to Requirements for Mitigation Strategies for Beyond-Design-Basis External Events. Order EA-12-049. March 12. Washington, DC: USNRC. http://pbadupws.nrc.gov/docs/ML1205/ML12054A735.pdf.

USNRC. 2012b. State-of-the-Art Reactor Consequence Analyses (SOARCA) Report. NUREG-1935, January. Washington, DC: Office of Nuclear Regulatory Research, USNRC. http://www.nrc.gov/reading-rm/doc-collections/nuregs/staff/ sr1935/.

USNRC. 2012c. Proposed Orders and Requests for Information in Response to Lessons Learned from Japan’s March 11, 2011, Great Tohoku Earthquake and Tsunami. SECY12-0025. February 17. Washington, DC: USNRC. http://pbadupws.nrc.gov/ docs/ML1203/ML12039A111.pdf.

USNRC. 2013. Regulatory Analysis for Japan Lessons-Learned Tier 3 Issue on Expedited Transfer of Spent Fuel. COMSECY 13-0030. November 12. Washington, DC: USNRC. http://www.nrc.gov/reading-rm/doc-collections/commission/commsecy/2013/2013-0030comscy.pdf.

USNRC. 2014a. Consequence Study of a Beyond-Design-Basis Earthquake Affecting the Spent Fuel Pool for a U.S. Mark I Boiling Water Reactor. NUREG-2161. September. Washington, DC: Office of Nuclear Regulatory Research, USNRC. http://pbadupws.nrc.gov/docs/ML1425/ML14255A365.pdf.

USNRC. 2014b. Qualitative Consideration of Factors in the Development of Regulatory Analyses and Backfit Analyses. SECY-14-0087. August 14. Washington, DC: USNRC. http://pbadupws.nrc.gov/docs/ML1412/ML14127A451.pdf.

USNRC. 2016. Revision to JLD-ISG-2012-01: Compliance with Order EA-12-049, Order Modifying Licenses with Regard to Requirements for Mitigation Strategies for Beyond-Design-Basis External Events. Interim Staff Guidance, Revision 1. January 22. Washington, DC: Japan Lessons-Learned Division, USNRC. http://pbadupws.nrc.gov/docs/ML1535/ML15357A163.pdf. (Last accessed April 5, 2016.)

USNRC NTTF (U.S. Nuclear Regulatory Commission Near-Term Task Force). 2011. Recommendations for Enhancing Reactor Safety in the 21st Century: The Near-Term Task Force Review of Insights from the Fukushima Dai-Ichi Accident. Rockville, MD: USNRC. http://pbadupws.nrc.gov/docs/ML1118/ML111861807.pdf.

Wagner, K. C., and R. O. Gauntt. 2008. Analysis of BWR Spent Fuel Pool Flow Patterns Using Computational Fluid Dynamics: Supplemental Air Cases. SANDIA Letter Report, Revision 3. January.

Wang, C., and V. M. Bier. 2012. Optimal defensive allocations in the face of uncertain terrorist preferences, with an emphasis on transportation. Homeland Security Affairs, DHS Centers of Excellence Science and Technology Student Papers. March. https://www.hsaj.org/articles/210.

Wang, D., I. C. Gauld, G. L. Yoder, L. J. Ott, G. F. Flanagan, M. W. Francis, E. L. Popov, J. J. Carbajo, P. K. Jain, J. C. Wagner, and J. C. Gehin. 2012. Study of Fukushima Daiichi Nuclear Power Station Unit 4 spent-fuel pool. Nuclear Technology 180(2): 205-215.

Wataru, M. 2014. Spent Fuel Management in Japan. International Nuclear Materials Management Spent Fuel Management Seminar XXIX, January 15. http://www.inmm.org/AM/Template.cfm?Section=29th_Spent_Fuel_Seminar&Template=/CM/ContentDisplay.cfm&ContentID=4373.

Willis, H. H., and T. LaTourrette. 2008. Using probabilistic terrorism risk modeling for regulatory benefit-cost analysis: Application to the Western Hemisphere travel initiative in the land environment. Risk Analysis 28(2): 325-339.

Willis, H. H., T. LaTourrette, T. K. Kelly, S. Hickey, and S. Neill. 2007. Terrorism Risk Modeling for Intelligence Analysis and Infrastructure Protection. Technical Report. Santa Monica, CA: Center for Terrorism Risk Management Policy, RAND.

Wreathall, J., D. Bley, E. Roth, J. Multer, and T. Raslear. 2004. Using an integrated process of data and modeling in HRA. Reliability Engineering and System Safety 83(2): 221-228.

 

 


The EMP Commission estimates a nationwide blackout lasting one year could kill up to 9 of 10 Americans through starvation, disease, and societal collapse

[ Several times Dr. Pry recommends  “States should harden their electric grids against nuclear EMP attack because there is a clear and present danger, because protecting against nuclear EMP will mitigate all lesser threats, and because both the federal government in Washington and the electric power industry have failed to protect the people from the existential peril that is an EMP catastrophe.  The bottom line is that the people and the States cannot trust NERC and U.S. FERC to protect the national electric grid from natural EMP. They probably cannot trust NERC and the U.S. FERC to protect the grid from anything. States should protect their own electric grids, and their people who depend upon the grid for survival, from the worst threat–nuclear EMP attack–so they will be ready for everything.”

I have cut, shortened, and rearranged the order of the 30 page original document.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation, 2015, Springer]

Testimony of Dr. Peter Vincent Pry  at the U.S. House of Representatives Serial No. 114-42 on May 13, 2015. The EMP Threat:  the state of preparedness against the threat of an electromagnetic pulse (EMP) event.  House of Representatives. 94 pages.

“The EMP Commission estimates that a nationwide blackout lasting one year could kill up to 9 of 10 Americans through starvation, disease, and societal collapse” 

A natural electromagnetic pulse (EMP) from a geomagnetic super-storm, like the 1859 Carrington Event or the 1921 Railroad Storm, or a nuclear EMP attack could cause a year-long blackout, and collapse all the other critical infrastructures–communications, transportation, banking and finance, food and water–necessary to sustain modern society and the lives of 310 million Americans.

Seven days after the commencement of blackout, emergency generators at nuclear reactors would run out of fuel. The reactors and nuclear fuel rods in cooling ponds would melt down and catch fire, as happened in the nuclear disaster at Fukushima, Japan. The 104 U.S. nuclear reactors, located mostly in the populous eastern half of the United States, could cover vast swaths of the nation with dangerous plumes of radioactivity (see Richard Stone's May 24, 2016 Science article: "Spent fuel fire on U.S. soil could dwarf impact of Fukushima").

Nuclear EMP is like super-lightning. The electromagnetic shockwave unique to nuclear weapons, called E1 EMP, travels at the speed of light, potentially injecting into electrical systems thousands of volts in a nanosecond–literally a million times faster than lightning, and much more powerful. Russian open source military writings describe their Super-EMP Warhead as generating 200,000 volts/meter, which means that the target receives 200,000 volts for every meter of its length. So, for example, if the cord on a PC is two meters long, it receives 400,000 volts. An automobile 4 meters long could receive 800,000 volts (unless it is parked underground).
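The coupling rule used in this passage is linear: induced voltage equals field strength times conductor length. Real E1 coupling depends on geometry, orientation, and frequency content, so the sketch below reproduces only the simplification used in the testimony.

    # Linear rule of thumb from the testimony: volts = field (V/m) x length (m).
    field_v_per_m = 200_000   # claimed Super-EMP field strength
    for item, length_m in [("PC power cord", 2), ("automobile", 4)]:
        print(f"{item} ({length_m} m): {field_v_per_m * length_m:,} volts")
    # PC power cord (2 m): 400,000 volts; automobile (4 m): 800,000 volts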

No other threat can cause such broad and deep damage to all the critical infrastructures as a nuclear EMP attack. A nuclear EMP attack would collapse the electric grid and black out and directly damage transportation systems, industry and manufacturing, satellite navigation, telecommunications systems and computers, banking and finance, and the infrastructures for food and water. Jetliners carry about 500,000 passengers on over 1,000 aircraft in the skies over the U.S. at any given moment. Many, most, or virtually all of these would crash, depending upon the strength of the EMP field.

Cars, trucks, trains and traffic control systems would be damaged. In the best case, even if only a few percent of ground transportation vehicles are rendered inoperable, massive traffic jams would result. In the worst case, virtually all vehicles of all kinds would be rendered inoperable. In any case, all vehicles would stop operating when they run out of gasoline. The blackout would render gas stations inoperable and paralyze the infrastructure for synthesizing and delivering petroleum products and fuels of all kinds.

Industry and manufacturing would be paralyzed by collapse of the electric grid. Damage to SCADAS and safety control systems would likely result in widespread industrial accidents, including gas line explosions, chemical spills, fires at refineries and chemical plants producing toxic clouds.

Cell phones, personal computers, the internet, and the modern electronic economy that supports personal and big business cash, credit, debit, stock market and other transactions and record keeping would cease operations. The Congressional EMP Commission warns that society could revert to a barter economy.

Worst of all, about 72 hours after the commencement of blackout, when emergency generators at the big regional food warehouses cease to operate, the nation’s food supply will begin to spoil. Supermarkets are resupplied by these large regional food warehouses that are, in effect, the national larder, collectively having enough food to sustain the lives of 310 million Americans for about one month, at normal rates of consumption. The Congressional EMP Commission warns that as a consequence of the collapse of the electric grid and other critical infrastructures, “It is possible for the functional outages to become mutually reinforcing until at some point the degradation of infrastructure could have irreversible effects on the country’s ability to support its population.”

The New “Lightning War” / Blitzkrieg.

A new Lightning War launched by our adversaries would attack the electric grid and other critical infrastructures all at once with a coordinated assault of cyber-war, sabotage, and EMP attacks, perhaps at the same time as a space weather geomagnetic storm, or severe weather such as a hurricane or blizzard.

U.S. emergency planners tend to think of EMP, cyber, sabotage, severe weather, and geo-storms  as unrelated threats.  However, foreign adversaries (i.e. Iran, North Korea, China, Russia) in their military doctrines and military operations appear to be planning an offensive “all hazards” strategy that would throw at the U.S. electric grid and civilian critical infrastructures every possible threat simultaneously. Such an assault is potentially more decisive than Nazi Germany’s Blitzkrieg (“Lightning War”) strategy that nearly conquered the western democracies during World War II.

Catastrophe from a geomagnetic super-storm may well happen sooner rather than later–and perhaps in combination with a nuclear EMP attack.

Paul Stockton, President Obama's former Assistant Secretary of Defense for Homeland Defense, on June 30, 2014, at the Electric Infrastructure Security Summit in London, warned an international audience that an adversary might coordinate a nuclear EMP attack with an impending or ongoing geomagnetic storm to confuse the victim and maximize damage. Stockton noted that, historically, generals have often coordinated their military operations with the weather. For example, during World War II, General Dwight Eisenhower deliberately launched the D-Day invasion following a storm in the English Channel, correctly calculating that this daring act would surprise Nazi Germany.

Future military planners of the New Lightning War may well coordinate a nuclear EMP attack and other operations aimed at the electric grid and critical infrastructures with the ultimate space weather threat–a geomagnetic storm.

“China and Russia have considered limited nuclear attack options that, unlike their Cold War plans, employ EMP as the primary or sole means of attack,” according to the Congressional EMP Commission, “Indeed, as recently as May 1999, during the NATO bombing of the former Yugoslavia, high-ranking members of the Russian Duma, meeting with a U.S. congressional delegation to discuss the Balkans conflict, raised the specter of a Russian EMP attack that would paralyze the United States.”

Russia has made many nuclear threats against the U.S. since 1999, which are reported in the western press only rarely. On December 15, 2011, Pravda, the official mouthpiece of the Kremlin, gave this advice to the United States in "A Nightmare Scenario for America": "No missile defense could prevent…EMP…No one seriously believes that U.S. troops overseas are defending 'freedom' or defending their country…. Perhaps they ought to close the bases, dismantle NATO and bring the troops home where they belong before they have nothing to come home to and no way to get there." On June 1, 2014, Russia Today, a Russian television news show broadcast to the West in English, predicted that the United States and Russia would be in a nuclear war by 2016.

Iran, the world’s leading sponsor of international terrorism, openly writes about making a nuclear EMP attack to eliminate the United States. Iran has practiced missile launches that appear to be training and testing warhead fusing for a high-altitude EMP attack–including missile launching for an EMP attack from a freighter. An EMP attack launched from a freighter could be performed anonymously, leaving no fingerprints, to foil deterrence and escape retaliation.

“What is different now is that some potential sources of EMP threats are difficult to deter–they can be terrorist groups that have no state identity, have only one or a few weapons, and are motivated to attack the U.S. without regard for their own safety,” cautions the EMP Commission in its 2004 report, “Rogue states, such as North Korea and Iran, may also be developing the capability to pose an EMP threat to the United States, and may also be unpredictable and difficult to deter.”

On April 16, 2013, North Korea simulated a nuclear EMP attack against the United States, orbiting its KSM-3 satellite over the U.S. at the optimum trajectory and altitude to place a peak EMP field over Washington and New York and black out the Eastern Grid, which generates 75 percent of U.S. electricity. On the very same day, as described earlier, parties unknown executed a highly professional commando-style sniper attack on the Metcalf transformer substation, a key component of the Western Grid. A few months later, in July 2013, the North Korean freighter Chon Chong Gang transited the Gulf of Mexico carrying nuclear-capable SA-2 missiles in its hold on their launchers. The missiles had no warheads, but the event demonstrated North Korea's capability to execute a ship-launched nuclear EMP attack from U.S. coastal waters anonymously, to escape U.S. retaliation. The missiles were only discovered, hidden under bags of sugar, because the freighter tried to return to North Korea through the Panama Canal, past inspectors.

What does all this signify? Connect these dots: North Korea's apparent practice EMP attack with its KSM-3 satellite; the simultaneous "dry run" sabotage attack at Metcalf; North Korea's possible practice for a ship-launched EMP attack a few months later; and the cyber-attacks from various sources that were happening all the time, and are happening every day. Together these suggest the possibility that in 2013 North Korea, at the least, may have exercised against the United States an all-out combined arms operation aimed at U.S. critical infrastructures–the New Lightning War.

How does an EMP damage the electric grid?

EHV (extra-high voltage) transformers are the technological foundation of our modern electronic civilization, as they make it possible to transmit electric power over great distances.

An event that damages hundreds–or as few as 9–of the 2,000 EHV transformers in the United States could plunge the nation into a protracted blackout lasting months or even years. 

Transformers are typically as large as a house, weigh hundreds of tons, cost millions of dollars, and cannot be mass produced but must be custom-made by hand. Making a single EHV transformer takes about 18 months. Annual worldwide production of EHV transformers is about 200 per year. Unfortunately, although Nikola Tesla invented the EHV transformer and the electric grid in the U.S., EHV transformers are no longer manufactured in the United States. Because of their great size and cost, U.S. electric utilities have very few spare EHV transformers. The U.S. must import EHV transformers made in Germany or South Korea, the only two nations in the world that make them for export.
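A back-of-envelope calculation shows why those production figures matter. The number of damaged transformers and the share of world output the U.S. could secure are illustrative assumptions, not figures from the testimony.

    # Replacement timeline implied by the production figures above.
    damaged = 300                  # assumed EHV transformers lost in a severe event
    world_output_per_year = 200    # worldwide annual production (from text)
    us_share = 0.25                # assumed fraction of output available to the U.S.
    years = damaged / (world_output_per_year * us_share)
    print(f"years to rebuild: {years:.0f}")  # ~6 years, plus the 18-month lead time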

SCADAS (supervisory control and data acquisition systems) are basically small computers that run the electric grid and all the critical infrastructures. SCADAS regulate the flow of electric current through EHV transformers, the flow of natural gas or of water through pipelines, and the flow of data through communications and financial systems, and they operate everything from traffic control lights to the refrigerators in regional food warehouses. SCADAS are ubiquitous in the civilian critical infrastructures, number in the millions, and are as indispensable as EHV transformers to running our modern electronic civilization. An event that damages large numbers of SCADAS would put that civilization at risk.

 

Nuclear Weapon EMP–The Worst Threat

A high-altitude nuclear EMP attack is the greatest single threat to EHV transformers, SCADAS, and other components of the national electric grid and other critical infrastructures. Nuclear EMP includes a high-frequency electromagnetic shockwave called E1 EMP that can potentially damage or destroy virtually any electronic system having a dimension of 18 inches or greater. Consequently, a high-altitude nuclear EMP event could cause broad damage to electronics and critical infrastructures across continental North America, while also causing deep damage to industrial and personal property, including automobiles and personal computers.

E1 EMP is unique to nuclear weapons.

Nuclear EMP can also produce E2 EMP, comparable to lightning.

Nuclear EMP can also produce E3 EMP comparable to or greater than a geomagnetic superstorm. Even a relatively low-yield nuclear weapon, like the 10-kiloton Hiroshima bomb, can generate an E3 EMP field powerful enough to damage EHV transformers.

Nuclear EMP Attacks by Missile, Aircraft and Balloon

A nuclear weapon detonated at an altitude of 200 kilometers (124 miles) over the geographic center of the United States would create an EMP field potentially damaging to electronics over all the 48 contiguous States. The Congressional EMP Commission concluded that virtually any nuclear weapon, even a crude first generation atomic bomb having a low yield, could potentially inflict an EMP catastrophe.

The EMP Commission also found that Russia, China, and probably North Korea have nuclear weapons specially designed to generate extraordinarily powerful EMP fields— called by the Russians Super-EMP weapons–and this design information may be widely proliferated: “Certain types of relatively low-yield nuclear weapons can be employed to generate potentially catastrophic EMP effects over wide geographic areas, and designs for variants of such weapons may have been illicitly trafficked for a quarter-century.”

A sophisticated long-range missile is not required

Any short-range missile or other delivery vehicle that can deliver a nuclear weapon to an altitude of 30 kilometers (18.5 miles) or higher can make a potentially catastrophic EMP attack on the United States. Although a nuclear weapon detonated at 30 km could not cover the entire continental U.S. with an EMP field, the field would still cover a very large multi-state region, and be more intense. Lowering the height-of-burst (HOB) for an EMP attack decreases field radius, but increases field strength.
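The geometry behind that trade-off is line of sight: the EMP footprint is roughly the ground area from which the burst point sits above the horizon. A minimal sketch, assuming a spherical Earth and ignoring atmospheric and field-strength details:

    import math

    # Tangent (line-of-sight) radius of the EMP footprint vs. height of burst.
    R_E = 6371.0  # mean Earth radius, km

    def footprint_radius_km(hob_km: float) -> float:
        return R_E * math.acos(R_E / (R_E + hob_km))

    for hob in (30, 200):
        print(f"HOB {hob:>3} km -> footprint radius ~{footprint_radius_km(hob):,.0f} km")
    # HOB  30 km -> ~617 km; HOB 200 km -> ~1,572 km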

An EMP attack at 30 kilometers HOB anywhere over the eastern half of the U.S. would cause cascading failures far beyond the EMP field and collapse the Eastern Grid, which generates 75% of U.S. electricity. The nation could not survive without the Eastern Grid.

A Scud missile launched from a freighter could perform such an EMP attack. Over 30 nations have Scuds, as do some terrorist groups and private collectors. Scuds are available for sale on the world and black markets.

Any aircraft capable of flying Mach 1 could probably do a zoom climb to 30 kilometers altitude to make an EMP attack, if the pilot is willing to commit suicide.

Even a meteorological balloon could be used to loft a nuclear weapon 30 km high to make an EMP attack. During the period of atmospheric nuclear testing in the 1950s and early 1960s, more nuclear weapons were tested at altitude by balloon than by bombers or missiles.

 

Geomagnetic Storms

In contrast, natural EMP from a geomagnetic super-storm generates only E3 EMP, whose wavelengths are so long that it takes power lines, telephone lines, pipelines, or railroad tracks over 1 kilometer in length to couple enough energy to do harm; it cannot hurt small targets like autos or personal computers. However, a protracted nationwide blackout resulting from such a storm would stop everything within a few days. Personal computers cannot run for long on batteries, nor can automobiles run without gasoline.
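Conductor length matters because the quasi-DC E3 field couples roughly as voltage = field times length. The geoelectric field value below is an illustrative assumption; strong storms have produced fields on the order of volts per kilometer.

    # Why E3 spares short conductors: V = E (V/km) x L (km).
    geoelectric_field = 5.0   # assumed storm-driven field, volts per kilometer
    for conductor, length_km in [("PC power cord", 0.002),
                                 ("city feeder", 10),
                                 ("EHV transmission line", 500)]:
        print(f"{conductor} ({length_km} km): ~{geoelectric_field * length_km:,.2f} V quasi-DC drive")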

Natural EMP from geomagnetic storms, caused when a coronal mass ejection from the Sun collides with the Earth's magnetosphere, poses a significant threat to the electric grid and to the 18 critical infrastructures, all of which depend directly or indirectly upon electricity. Normal geomagnetic storms occur every year, causing problems with communications and electric grids for nations located at high northern latitudes, such as Norway, Sweden, Finland, and Canada. The 1989 Hydro-Quebec Storm blacked out the eastern half of Canada in 92 seconds, melted an EHV transformer at the Salem, New Jersey nuclear power plant, and caused billions of dollars in economic losses.

In 1921 a geomagnetic storm 10 times more powerful than the 1989 Hydro-Quebec Storm, the Railroad Storm, afflicted the whole of North America. It did not have catastrophic consequences because electrification of the U.S. and Canada was still in its infancy. The National Academy of Sciences estimates that if the 1921 Railroad Storm recurred today, it would cause a catastrophic nationwide blackout lasting 4-10 years and costing trillions of dollars.

The Carrington Event. The most powerful geomagnetic storm ever recorded is the 1859 Carrington Event, estimated to be ten times more powerful than the 1921 Railroad Storm and classed as a geomagnetic superstorm. Natural EMP from the Carrington Event penetrated miles deep into the Atlantic Ocean and destroyed the just-laid intercontinental telegraph cable. The Carrington Event was a worldwide phenomenon, causing fires in telegraph stations and forest fires from telegraph lines bursting into flames on several continents. Fortunately, in the horse-and-buggy days of 1859, civilization did not depend upon electrical systems.

Recurrence of a Carrington Event today would collapse electric grids and critical infrastructures all over the planet, putting at risk billions of lives. Scientists estimate that geomagnetic superstorms occur about every 100-150 years. The Earth is probably overdue to encounter another Carrington Event.

On July 22, 2012, a powerful solar flare narrowly missed the Earth; NASA warned that it would have generated a geomagnetic super-storm, like the 1859 Carrington Event, and collapsed electric grids and life-sustaining critical infrastructures worldwide.

The National Intelligence Council (NIC), which speaks for the entire U.S. Intelligence Community, published a major unclassified report in December 2012, Global Trends 2030, that warns that a geomagnetic super-storm, like a recurrence of the 1859 Carrington Event, is one of only eight "Black Swans" that could, by or before 2030, change the course of global civilization. The NIC concurs with the consensus view that another Carrington Event could recur at any time, possibly before 2030, and that, if it did, electric grids and critical infrastructures that support modern civilization could collapse worldwide.

NASA estimates that the likelihood of a geomagnetic super-storm is 12 percent per decade. This virtually guarantees that Earth will experience a natural EMP catastrophe in our lifetimes or those of our children.
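Compounding that 12 percent per decade, and assuming each decade is an independent trial, gives a sense of how the odds accumulate over a lifetime; this is a simple illustrative model, not NASA's.

    # Probability of at least one super-storm over N decades at 12% per decade.
    p_decade = 0.12
    for decades in (1, 5, 8, 10):
        p_any = 1 - (1 - p_decade) ** decades
        print(f"{decades * 10:>3} years: {p_any:.0%}")
    # 80 years -> ~64%; 100 years -> ~72% under this simple model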

Non-Nuclear EMP Radio-Frequency Weapons (RFWs)

RFWs are non-nuclear weapons that use a variety of means, including explosively driven generators, to emit an electromagnetic pulse similar to the E1 EMP from a nuclear weapon, except less energetic and of much shorter radius. The range of RF Weapons is rarely more than one kilometer.

RF Weapons can be built relatively inexpensively using commercially available parts and design information available on the internet. In 2000 the Terrorism Panel of the House Armed Services Committee conducted an experiment, hiring an electrical engineer and some students to try to build an RFW on a modest budget, using design information from the internet and parts purchased at Radio Shack. They built two RF Weapons in one year, and both were successfully tested at the U.S. Army proving grounds at Aberdeen. One was built into a Volkswagen bus, designed to be driven down Wall Street to disrupt stock market computers and information systems and bring on a financial crisis. The other was designed to fit in the crate for a Xerox machine so it could be shipped to the Pentagon, sit in the mailroom, and burn out Defense Department computers.

EMP simulators that can be carried and operated by one man, and used as an RF Weapon, are available commercially. For example, one U.S. company advertises for sale an "EMP Suitcase" that looks exactly like a metal suitcase, can be carried and operated by one man, and generates 100,000 volts/meter over a short distance. The EMP Suitcase is not intended to be used as a weapon, but as an aid for designing factories that use heavy-duty electronic equipment that emits electromagnetic transients, so that the factory does not self-destruct.

But a terrorist, criminal, or madman armed with the EMP Suitcase could potentially destroy electric grid SCADAS or an EHV transformer and black out a city. Thanks to RF Weapons, we have arrived at a place where the technological pillars of civilization for a major metropolitan area could be toppled by a single individual. The EMP Suitcase can be purchased without a license by anyone.

Terrorists armed with RF Weapons might use unclassified computer models to duplicate the U.S. FERC study and figure out which nine crucial transformer substations need to be attacked in order to black out the entire national grid for weeks or months. RFWs would offer significant operational advantages over assault rifles and bombs. Something like the EMP Suitcase could be put in the trunk of a car, parked and left outside the fence of an EHV transformer or SCADA colony, or hidden in nearby brush or a garbage can, while the bad guys make a leisurely getaway. If the EMP fields are strong enough, it would be just as effective as, and far less conspicuous than, dropping a big bomb to destroy the whole transformer substation. Maximum effect could be achieved by penetrating the security fence and hiding the RF Weapon somewhere even closer to the target.

Some documented examples of successful attacks using Radio Frequency Weapons, and accidents involving electromagnetic transients, are described in the Department of Defense Pocket Guide for Security Procedures and Protocols for Mitigating Radio Frequency Threats (Technical Support Working Group, Directed Energy Technical Office, Dahlgren Naval Surface Warfare Center):

  • “In the Netherlands, an individual disrupted a local bank’s computer network because he was turned down for a loan. He constructed a Radio Frequency Weapon the size of a briefcase, which he learned how to build from the Internet. Bank officials did not even realize that they had been attacked or what had happened until long after the event.”
  • “In St. Petersburg, Russia, a criminal robbed a jewelry store by defeating the alarm system with a repetitive RF generator. Its manufacture was no more complicated than assembling a home microwave oven.”
  • “In Kzlyar, Dagestan, Russia, Chechen rebel commander Salman Raduyev disabled police radio communications using RF transmitters during a raid.”
  • “In Russia, Chechen rebels used a Radio Frequency Weapon to defeat a Russian security system and gain access to a controlled area.”
  • “Radio Frequency Weapons were used in separate incidents against the U.S. Embassy in Moscow to falsely set off alarms and to induce a fire in a sensitive area.”
  • “March 21-26, 2001, there was a mass failure of keyless remote entry devices on thousands of vehicles in the Bremerton, Washington, area…The failures ended abruptly as federal investigators had nearly isolated the source. The Federal Communications Commission (FCC) concluded that a U.S. Navy presence in the area probably caused the incident, although the Navy disagreed.”
  • “In 1999, a Robinson R-44 news helicopter nearly crashed when it flew by a high frequency broadcast antenna.”
  • “In the late 1980s, a large explosion occurred at a 36-inch diameter natural gas pipeline in the Netherlands. A SCADA system, located about one mile from the naval port of Den Helder, was affected by a naval radar. The RF energy from the radar caused the SCADA system to open and close a large gas flow-control valve at the radar scan frequency, resulting in pressure waves that traveled down the pipe and eventually caused the pipeline to explode.”
  • “In June 1999 in Bellingham, Washington, RF energy from a radar induced a SCADA malfunction that caused a gas pipeline to rupture and explode.”
  • “In 1967, the USS Forrestal was located at Yankee Station off Vietnam. An A4 Skyhawk launched a Zuni rocket across the deck. The subsequent fire took 13 hours to extinguish. 134 people died in the worst U.S. Navy accident since World War II. EMI [Electro-Magnetic Interference, Pry] was identified as the probable cause of the Zuni launch.”
  • North Korea used a Radio Frequency Weapon, purchased from Russia, to attack airliners and impose an “electromagnetic blockade” on air traffic to Seoul, South Korea’s capital. The repeated RFW attacks also disrupted communications and the operation of automobiles in several South Korean cities in December 2010; March 9, 2011; and April-May 2012, as reported in “Massive GPS Jamming Attack By North Korea” (GPSWORLD.COM, May 8, 2012).

Protecting the electric grid and other critical infrastructures from nuclear EMP attack will also protect them from the lesser threat posed by Radio Frequency Weapons.

Sabotage–Kinetic Attacks

Kinetic attacks are a serious threat to the electric grid and are clearly part of the game plan for terrorists and rogue states. Sabotage of the electric grid is perhaps the easiest operation for a terrorist group to execute, and among the most cost-effective: requiring only high-powered rifles, it lets a very small number of bad actors wage asymmetric warfare against all 310 million Americans. Terrorists have figured out that the electric grid is a major societal vulnerability.

Terror Blackout in Mexico. On the morning of October 27, 2013, the Knights Templar, a terrorist drug cartel in Mexico, attacked a large part of the Mexican grid, using small arms and bombs to blast electric substations. They blacked out the entire Mexican state of Michoacán, plunging 420,000 people into the dark and isolating them from help from the Federales. The Knights went into towns and villages and publicly executed local leaders opposed to the drug trade. Ironically, that evening in the United States, National Geographic aired a television docudrama, “American Blackout”, that accurately portrayed the catastrophic consequences of a cyber-attack that blacks out the U.S. grid for 10 days. The North American Electric Reliability Corporation and some utilities criticized “American Blackout” for being alarmist and unrealistic, apparently unaware that life had already anticipated art that very day across the porous border in Mexico, and, as described below, months earlier in the United States itself.

Terror Blackout of Yemen. On June 9, 2014, while world media attention was focused on the terror group Islamic State in Iraq and Syria (ISIS) overrunning northern Iraq, Al Qaeda in the Arabian Peninsula (AQAP) used mortars and rockets to destroy electric transmission towers and black out all of Yemen, a nation of 16 cities and 24 million people. AQAP’s operation against the Yemeni grid is the first time in history that terrorists have plunged an entire nation into blackout, yet it went virtually unreported by the world press.

The Metcalf Attack (San Jose, California). On April 16, 2013, terrorists or professional saboteurs apparently rehearsed an attack on the Metcalf transformer substation outside San Jose, California, which services a 450-megawatt power plant providing electricity to Silicon Valley and the San Francisco area. The utility Pacific Gas and Electric (PG&E), which owns Metcalf, and NERC claimed that the incident was merely an act of vandalism and discouraged press interest. Consequently, the national press paid almost no attention to the Metcalf affair for nine months. Jon Wellinghoff, Chairman of the U.S. Federal Energy Regulatory Commission, conducted an independent investigation of Metcalf. He brought in the best of the best of U.S. special forces, the instructors who train the U.S. Navy SEALs. They concluded that the attack on Metcalf was a highly professional military operation, comparable to what the SEALs themselves would do when attacking a power grid.

Footprints suggested that a team of perhaps as many as six men executed the Metcalf operation. They knew about an underground communications tunnel at Metcalf and knew how to access it by removing a manhole cover (which required at least two men). They cut communications cables and the 911 cable to isolate the site. They had pre-surveyed firing positions. They used AK-47s, the favorite assault rifle of terrorists and rogue states. They knew precisely where to shoot to maximize damage to the 17 transformers at Metcalf. They escaped into the night just as the police arrived and have not been apprehended or even identified. They left no fingerprints anywhere, not even on the expended shell casings.

The Metcalf assailants damaged but did not destroy the transformers–apparently deliberately. The Navy SEALs and U.S. FERC Chairman Wellinghoff concluded that the Metcalf operation was a “dry run”, like a military exercise: practice for a larger and more ambitious attack on the grid to be executed in the future. Military exercises never try to destroy the enemy, and they keep a low profile so that the potential victim is not moved to reinforce his defenses. For example, Russian strategic bomber exercises send only a few aircraft to probe U.S. air defenses in Alaska, and never actually launch nuclear-armed cruise missiles. They want to probe and test our air defenses–not scare us into strengthening them.

Chairman Wellinghoff was aware of an internal study by U.S. FERC that concluded saboteurs could blackout the national electric grid for weeks or months by destroying just nine crucial transformer substations.

Much to his credit, Jon Wellinghoff became so alarmed by his knowledge of U.S. grid vulnerability, and by the apparent NERC cover-up of the Metcalf affair, that he resigned his chairmanship to warn the American people in a story published by the Wall Street Journal in February 2014. The Metcalf story sparked a firestorm of interest in the press and investigations by Congress. Consequently, NERC passed, on an emergency basis, a new standard for immediately upgrading physical security for the national electric grid, and PG&E pledged more than $100 million over the next three years to upgrade physical security.

Two months later, amid growing fears that ISIS might act on its threats to attack America, on August 27, 2014, parties unknown again broke into the Metcalf transformer substation and eluded PG&E security guards and the police. PG&E claims that this second Metcalf affair was, again, mere vandalism. Yet after NERC’s emergency physical security standards and PG&E’s professed massive investment in improved security, Metcalf should have been the Rock of Gibraltar of the North American electric grid. If terrorists or others are planning an attack on the U.S. electric grid, Metcalf would be the perfect place to test the supposedly strengthened security of the national grid.

Does stolen equipment prove that Metcalf-2 was a mere burglary? In the world of spies and saboteurs, a mock burglary is a commonplace device for covering up an intelligence operation, quelling fears, and keeping the victim unprepared.

If PG&E is telling the truth, and the second successful operation against Metcalf really was the work of vandals, this amounts to an engraved invitation to ISIS, Al Qaeda, and rogue states to attack the U.S. electric grid. It means that all of PG&E’s and NERC’s vaunted security improvements cannot protect Metcalf from the stupidest of criminals, let alone from terrorists.

About one month later, on September 23, 2014, another investigation of PG&E security at transformer substations, including Metcalf, reported that the substations were still not secure. Indeed, at one site a gate was left wide open. Former CIA Director R. James Woolsey, after reviewing the investigation results, concluded, “Overall, it looks like there is essentially no security.”

Why isn’t anything being done?

In the U.S. Congress, bipartisan bills with strong support, such as the GRID Act and the SHIELD Act, that would protect the electric grid from nuclear and natural EMP, have been stalled for a half-decade, blocked by corruption and lobbying by powerful utilities.

The U.S. Federal Energy Regulatory Commission has published interagency reports acknowledging that nuclear EMP attack is an existential threat against which the electric grid must be protected. But U.S. FERC claims to lack legal authority to require the North American Electric Reliability Corporation and the electric utilities to protect the grid. “Given the national security dimensions to this threat, there may be a need to act quickly in a manner where action is mandatory rather than voluntary and to protect certain information from public disclosure,” said Joseph McClelland, Director of FERC’s Office of Energy Projects, testifying in May 2011 before the Senate Energy and Natural Resources Committee. “The commission’s legal authority is inadequate for such action.” Others think U.S. FERC has sufficient legal authority to protect the grid, but lacks the will to do so because of an incestuous relationship with NERC.

NERC and the electric power industry deny that it is their responsibility to protect the grid from nuclear EMP attack. NERC holds that this is not its job but the Department of Defense’s, as NERC President and CEO Gerry Cauley argued in his May 2011 testimony before the Senate Energy and Natural Resources Committee. Mark Lauby, NERC’s reliability manager, is quoted by Peter Behr in his EENEWS article (August 26, 2011) as saying that “…the terrorist scenario–foreseen as the launch of a crude nuclear weapon on a version of a SCUD missile from a ship off the U.S. coast–is the government’s responsibility, not industry’s.”

But DOD can protect the grid only by waging preventive wars against countries like Iran, North Korea, China and Russia, or by a vast expansion and improvement of missile defenses costing tens of billions of dollars–and none of these measures may stop the EMP threat.

The Department of Defense has no legal authority to EMP harden the privately owned electric grid. Such protection is supposed to be the job of NERC and the utilities.

Preventive wars would make an EMP attack more likely, perhaps inevitable. It is not worth spending thousands of lives and trillions of dollars on wars just so NERC and the utilities can avoid a small increase in electric bills for EMP hardening the grid: U.S. FERC estimates that hardening would add about 20 cents annually to the average ratepayer’s electric bill.
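As a rough consistency check, using only figures cited in this testimony (the $2 billion one-time hardening estimate given later, roughly 310 million Americans, and the 20-cent annual surcharge), and assuming for simplicity that the cost is spread evenly per person rather than per ratepayer:

```python
# Back-of-envelope check using only figures cited in this testimony.
# Spreading the one-time cost evenly across all Americans is a
# simplification; actual rate design would differ.
hardening_cost_usd = 2e9     # one-time EMP-hardening estimate
population = 310e6           # Americans, per the testimony
surcharge_per_year = 0.20    # FERC's cited 20 cents per year

per_person = hardening_cost_usd / population
print(f"One-time cost per person: ${per_person:.2f}")                      # ~$6.45
print(f"Years to recover at $0.20/yr: {per_person / surcharge_per_year:.0f}")  # ~32
```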

NERC “Operational Procedures” Non-Solution. The North American Electric Reliability Corporation (NERC), the lobby for the electric power industry that is also supposed to set industry standards for grid security, claims it can protect the grid from geomagnetic super-storms with “operational procedures.” These would rely on satellite early warning of an impending Carrington Event to let grid operators shift electric loads around, perhaps deliberately browning out or blacking out part or all of the grid in order to save it. NERC estimates operational procedures would cost the electric utilities almost nothing, about $200,000 annually.

But there is no command and control system for coordinating operational procedures among the 3,000 independent electric utilities in the United States.  Operational procedures routinely fail to prevent blackouts from normal terrestrial weather, like snowstorms and hurricanes. There is no credible basis for thinking that operational procedures alone would be able to cope with a geomagnetic super-storm–a threat unprecedented in the experience of NERC and the electric power industry.

The ACE satellite NERC proposes to rely on is aged and sometimes gives false warnings, which are not a reliable basis for implementing operational procedures. While coronal mass ejections can typically be seen approaching Earth about three days before impact, the Carrington Event reached Earth in only 11 hours, and the ACE satellite cannot determine whether a geo-storm will hit the Earth until 20 to 30 minutes before impact. Quite recently, on September 19-20, 2014, the National Oceanic and Atmospheric Administration and NERC demonstrated again that they are unable to ascertain until shortly before impact whether a coronal mass ejection (CME) will cause a threatening geomagnetic storm on Earth.
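The physics behind those short warning times is straightforward: ACE sits at the L1 point, roughly 1.5 million kilometers sunward of Earth, so the definitive warning time is just that distance divided by the CME’s speed. A minimal sketch (the speeds below are illustrative assumptions; fast CMEs can exceed 2,000 km/s):

```python
# Warning time from an L1 solar-wind monitor such as ACE is roughly the
# L1-to-Earth distance divided by the CME's speed. The ~1.5 million km
# distance is standard; the speeds are assumptions for illustration.
L1_DISTANCE_KM = 1.5e6

for speed_km_s in (500, 1000, 2000, 3000):
    minutes = L1_DISTANCE_KM / speed_km_s / 60
    print(f"CME at {speed_km_s:>4} km/s -> ~{minutes:.0f} minutes of warning")
# 500 km/s -> ~50 min; 2000 km/s -> ~12 min
```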

Ironically, on September 8-10, 2014, a week before this CME, a security conference on threats to the national electric grid met in San Francisco, where executives from the electric power industry credited themselves with building robust resilience into the electric power grid. They even congratulated themselves and their industry on exemplary performance in coping with and recovering from blackouts caused by hurricanes and other natural disasters. The thousands of Americans left homeless by Hurricanes Katrina and Sandy, and the hundreds of businesses lost or impoverished in New Orleans and New York City, would no doubt disagree.

The U.S. Government Accountability Office (GAO), if it had jurisdiction to grade electric grid reliability during hurricanes, would almost certainly give the utilities a failing grade. Ever since Hurricane Andrew in 1992, the U.S. GAO has found serious fault with efforts by the Federal Emergency Management Agency, the Department of Homeland Security, and the Department of Defense to rescue and recover the American people from every major hurricane. Blackout of the electric grid, of course, seriously impedes the capability of FEMA, DHS, and DOD to do anything.

Since the utilities regulate themselves through the North American Electric Reliability Corporation, their uncritical view of their own performance reinforces a “do nothing” attitude in the electric power industry.

For example, after the Great Northeast Blackout of 2003, it took NERC a decade to propose a new “vegetation management plan” to protect the national grid from tree branches. NERC has been even more resistant and slow to respond to other much more serious threats, including cyber-attack, sabotage, and natural EMP from geomagnetic storms.

Most alarming, NERC and the utilities do not appear to know their jobs, and are already in panic and despair over the challenges posed by severe weather, cyber threats, and geomagnetic storms. Peter Behr, in an article published in Energy Wire (September 12, 2014), reports that at an electric grid security summit, Gary Leidich, Board Chairman of the Western Electricity Coordinating Council–which oversees reliability and security for the Western Grid–appeared overwhelmed, as if he wanted to escape his job, lamenting: “Who is really responsible for reliability? And who has the authority to do something about it?”

“The biggest cyber threat is from an electromagnetic pulse, which in the military doctrines of our potential adversaries would be part of an all-out cyber war,” writes former Speaker of the House Newt Gingrich in his article “The Gathering Cyber Storm” (CNN, August 12, 2013). Gingrich warns that NERC “should lead, follow or get out of the way of those who are trying to protect our nation from a cyber catastrophe. Otherwise, the Congress that certified it as the electric reliability organization can also decertify it.”

Much to their credit, a few in the electric power industry understand the necessity of protecting the grid from nuclear EMP attack, have broken ranks with NERC, and are trying to meet the crisis: John Houston of CenterPoint Energy in Texas; Terry Boston of PJM, the largest grid in North America (located in the Midwest); and Con Ed in New York are all trying to protect their grids from nuclear EMP. State Governors and State Legislatures need to come to the rescue. States have a duty to their citizens to fill the gap in homeland security and public safety when the federal government and the utilities fail. State governments and their Public Utility Commissions have the legal authority and the moral obligation to compel utilities, where necessary, to secure the grid against all hazards, and to oversee utilities that act voluntarily to ensure grid security is done right. Failing to protect the grid from nuclear EMP attack is failing to protect the nation from all hazards.

Regulatory Malfeasance

As noted repeatedly elsewhere, Washington’s process for regulating the electric power industry has never worked well; indeed, it has always been broken. The electric power industry is the only civilian critical infrastructure that is allowed to regulate itself.

The North American Electric Reliability Corporation is the industry’s former trade association, which continues to act as an industry lobby. NERC is not a U.S. government agency. It does not represent the interests of the people. NERC in its charter answers to its “stakeholders”–the electric utilities that pay for NERC, including NERC’s highly salaried executives and staff.

The U.S. Federal Energy Regulatory Commission, the U.S. government agency that is supposed to partner with NERC in protecting the national electric grid, has publicly testified before Congress that it lacks regulatory power to compel NERC and the electric power industry to protect the grid from natural and nuclear EMP and other threats. Consider the contrast in regulatory authority between U.S. FERC and, as examples, the U.S. Federal Aviation Administration (FAA), the U.S. Department of Transportation (DOT), and the U.S. Food and Drug Administration (FDA):

  • FAA has regulatory power to compel the airline industry to ground aircraft considered unsafe, to change unsafe aircraft operating procedures, and to make repairs or improvements to aircraft in order to protect the lives of airline passengers.
  • DOT has regulatory power to compel the automobile industry to install safety glass, seatbelts, and airbags on cars in order to protect the lives of the driving public.
  • FDA has power to regulate the quality of food and drugs, and can ban under criminal penalty the sale of products deemed by the FDA to be unsafe to the public.

Unlike the FAA, DOT, FDA or any other U.S. government regulatory agency, the Federal Energy Regulatory Commission does not have legal authority to compel the industry it is supposed to regulate to act in the public interest. For example, U.S. FERC lacks legal power to direct NERC and the electric utilities to install blocking devices, surge arrestors, faraday cages or other protective devices to save the grid, and the lives of millions of Americans, from a natural or nuclear EMP catastrophe. Or so FERC has testified to Congress.

Congress has responded to this dilemma by introducing bipartisan bills, the SHIELD Act and the GRID Act, to empower U.S. FERC to protect the grid from an EMP catastrophe. Lobbying by NERC has stalled both bills for years. Currently, U.S. FERC only has the power to ask NERC to propose a standard to protect the grid. NERC standards are approved, or rejected, by the electric power industry, and NERC typically takes years to develop standards that can pass industry approval. For example, NERC took nearly a decade to propose a “vegetation management” standard, finally doing so in 2012, after ruminating over the tree-branch-induced Great Northeast Blackout of 2003 that plunged 50 million Americans into the dark. Once NERC proposes a standard to U.S. FERC, FERC cannot modify it, but must accept or reject it as proposed. If U.S. FERC rejects the proposed standard, NERC goes back to the drawing board, and the process starts all over again. The NERC-FERC arrangement is a formula for thwarting effective U.S. government regulation of the electric power industry. Fortunately, Governors, State Legislatures and their Public Utility Commissions have legal power to compel utilities to protect the grid from natural and nuclear EMP and other threats.

Critics argue that the U.S. Federal Energy Regulatory Commission is corrupt–because of a too-cozy relationship with NERC and a revolving door between FERC and the electric power industry–and cannot be trusted to secure the grid, even if given legal powers to do so. U.S. FERC’s approval of NERC’s hollow standard for geomagnetic storms appears proof positive that Washington is too corrupt to be trusted.

NERC’s Hollow GMD Protection Standard

Observers serving on NERC’s Geo-Magnetic Disturbance Task Force, which developed the NERC standard for grid protection against geomagnetic storms, have denounced the NERC GMD Standard and published papers exposing it as not merely inadequate but hollow–a pretended or fake Standard. The experts opposed to the NERC GMD Standard include the foremost authorities on geomagnetic storms and electric grid vulnerability in the Free World. See:
  • John G. Kappenman and Dr. William A. Radasky, Examination of NERC GMD Standards and Validation of Ground Models and Geo-Electric Fields Proposed in this NERC GMD Standard, Storm Analysis Consultants and Metatech Corporation, July 30, 2014 (Executive Summary appended to this chapter).
  • EIS Council Comments on Benchmark GMD Event for NERC GMD Task Force Consideration, Electric Infrastructure Security Council, May 21, 2014.
  • Thomas Popik and William Harris for The Foundation for Resilient Societies, Reliability Standard for Geomagnetic Disturbance Operations, Docket No. RM14-1-000, critiques submitted to U.S. FERC on March 24, July 21, and August 18, 2014.

Kappenman and Radasky, who served on the Congressional EMP Commission and are among the world’s foremost scientific and technical experts on geomagnetic storms and grid vulnerability, warn that NERC’s GMD Standard consistently underestimates the threat from geostorms: “When comparing…actual geo-electric fields with NERC model derived geo-electric fields, the comparisons show a systematic under-prediction in all cases of the geo-electric field by the NERC model.”

The Foundation for Resilient Societies, whose Board of Advisors includes a brain trust of world-class scientific experts–among them Dr. William Graham, who served as President Reagan’s Science Advisor, Deputy Administrator of NASA, and Chairman of the Congressional EMP Commission–concluded from its participation on the NERC GMD Task Force that NERC “cooked the books” to produce a hollow GMD Standard: “The electric utility industry clearly recognized in this instance how to design a so-called ‘reliability standard’ that, though foreseeably ineffective in a severe solar storm, would avert financial liability to the electric utility industry even while civil society and its courts might collapse from longer-term outages. In this instance and others, a key feature of the NERC standard-setting process was to progressively water down requirements until the proposed standard obviously benefitted the ballot participants and therefore could pass. In the process, any remaining public benefit was diluted beyond perceptibility…”

The Foundation’s several critiques identify numerous profound and obvious holes in what it describes as NERC’s “hollow” GMD Standard, and rightly castigate U.S. FERC for approving what is, in reality, a papier-mâché GMD Standard that would not protect the grid from a geomagnetic super-storm:

  • “FERC erred by approving a standard that exempts transmission networks with no transformers with a high side (wye-grounded) voltage at or above 200 kV when actual data and lessons learned from past operating incidents show significant adverse impacts of solar storms on equipment operating below 200 kV.”
  • “The exclusion of networks operating at 200kV and below is inconsistent with the prior bright-line definition of the Bulk Electric System” as defined by U.S. FERC.
  • “FERC erred by approving a standard that does not require instrumentation of electric utility networks during solar storm conditions when installation of GIC [Ground Induced Current–Pry] monitors would be cost-effective and in the public interest.”
  • “FERC erred by approving a standard that does not require utilities to perform the most rudimentary planning for solar storms, i.e., mathematical comparison of megawatt capacity of assets at risk during solar storms to power reserves.”
  • “FERC erred by concluding that sixteen Reliability Coordinators could directly communicate with up to 1,500 Transmission and Generator Operators during severe GMD events with a warning time of as little as 15 minutes and that Balancing Authorities and Generator Operators should not take action on their own because of possible lack of GIC data.”
  • “FERC erred by assuming that there would be reliable and prompt two-way communications between Reliability Coordinators and Generator Operators immediately before and during severe solar storms.”

The Foundation is also critical of U.S. FERC for approving a NERC GMD Standard that lacks transparency and accountability. The utilities are allowed to assess their own vulnerability to geomagnetic storms, to devise their own preparations, and to invest as much or as little as they like in those preparations, all without public scrutiny or review of utility plans by independent experts.

Dr. William Radasky, who holds the Lord Kelvin Medal for setting standards for protecting European electronics from natural and nuclear EMP, and John Kappenman, who helped design the ACE satellite upon which industry relies for early warning of geomagnetic storms, conclude that the NERC GMD Standard so badly underestimates the threat that “its resulting directives are not valid and need to be corrected.”

Kappenman and Radasky write: “These enormous model errors also call into question many of the foundation findings of the NERC GMD draft standard. The flawed geoelectric field model was used to develop the peak geo-electric field levels of the Benchmark model proposed in the standard. Since this model understates the actual geo-electric field intensity for small storms by a factor of 2 to 5, it would also understate the maximum geo-electric field by similar or perhaps even larger levels. Therefore, the flaw is entirely integrated into the NERC Draft Standard and its resulting directives are not valid and need to be corrected.” The excellent Kappenman-Radasky critique represents the consensus view of all the independent observers who participated in the NERC GMD Task Force, including the author, and it warns NERC and U.S. FERC that “Nature cannot be fooled!”

Perhaps most revelatory of U.S. FERC’s untrustworthiness: by approving a NERC GMD Standard that grossly underestimates the threat from geo-storms, U.S. FERC abandoned its own much more realistic estimate of that threat. It is incomprehensible why U.S. FERC would ignore the findings of its own excellent interagency study–one of the most in-depth and meticulous studies of the EMP threat ever performed–coordinated with Oak Ridge National Laboratory, the Department of Defense, and the White House.

U.S. FERC’s preference for NERC’s “junk science” over U.S. FERC’s own excellent scientific assessment of the geo-storm threat can only be explained as incompetence or corruption or both.

 

What do we know about a nuclear EMP?

A high-altitude nuclear electromagnetic pulse attack is the most severe threat to the electric grid and other critical infrastructures, far more damaging than a geomagnetic super-storm, the worst case of severe weather, sabotage by kinetic attacks, or cyber-attack.  Not one major U.S. Government study dissents from the consensus that nuclear EMP attack would be catastrophic, and that protection is achievable and necessary.

There is more empirical data on nuclear EMP and its effects on electronic systems and infrastructures than almost any other threat, except severe weather. In addition to the 1962 STARFISH PRIME high-altitude nuclear test that generated EMP that damaged electronic systems in Hawaii and elsewhere, the Department of Defense has decades of atmospheric and underground nuclear test data relevant to EMP. And defense scientists have for over 50 years studied EMP effects on electronics in simulators. Most recently, the Congressional EMP Commission made its threat assessment by testing a wide range of modern electronics crucial to critical infrastructures in EMP simulators.

There is a scientific and strategic consensus behind the Congressional EMP Commission’s assessment that a nuclear EMP attack would have catastrophic consequences for the United States, but that “correction is feasible and well within the Nation’s means and resources to accomplish.” Every major U.S. Government study to examine the EMP threat and solutions concurs with the EMP Commission, including the Congressional Strategic Posture Commission (2009), the U.S. Department of Energy and North American Electric Reliability Corporation (2010), and the U.S. Federal Energy Regulatory Commission interagency report, coordinated with the White House, Department of Defense, and Oak Ridge National Laboratory (2010).

Russian Nuclear EMP Tests. STARFISH PRIME was not the only high-altitude nuclear EMP test. The Soviet Union (1961-1962) conducted a series of high-altitude nuclear EMP tests over what was then its own territory–not once but seven times–using a variety of warheads of different designs. The EMP fields from six tests covered Kazakhstan, an industrialized area larger than Western Europe. In 1994, during a thaw in the Cold War, Russia shared the results from one of its nuclear EMP tests, which used its least efficient warhead design for EMP: it collapsed the Kazakhstan electric grid, damaging transformers, generators and all other critical components. During the Kazakhstan high-altitude EMP experiments the USSR tested some low-yield warheads, at least one probably an Enhanced Radiation Warhead emitting large quantities of the gamma rays that generate the E1 EMP electromagnetic shockwave. It is possible that the USSR developed its Super-EMP Warhead early in the Cold War as a secret super-weapon. The Soviets apparently quickly repaired the damage to Kazakhstan’s electric grid and other critical infrastructures, demonstrating that with smart planning and good preparedness it is possible to survive and recover from an EMP catastrophe.

 

Other threats: Cyber-attack

Cyber-attack, the use of computer viruses and hacking to invade and manipulate information systems and SCADAS, is almost universally described by U.S. political and military leaders as the greatest threat facing the United States. Every day, literally thousands of cyber-attacks are made on U.S. civilian and military systems, most of them designed to steal information. Joint Chiefs Chairman General Martin Dempsey warned on June 27, 2013, that the United States must be prepared for the revolutionary threat represented by cyber warfare (Claudette Roulo, DoD News, American Forces Press Service): “One thing is clear. Cyber has escalated from an issue of moderate concern to one of the most serious threats to our national security,” cautioned Chairman Dempsey. “We now live in a world of weaponized bits and bytes, where an entire country can be disrupted by the click of a mouse.”

Cyber Hype? Skeptics claim that the catastrophic scenarios envisioned for cyber warfare are grossly exaggerated, in part to justify costly cyber programs wanted by both the Pentagon and industry at a time of scarce defense dollars. Many of the skeptical arguments about the limitations of hacking and computer viruses are technically correct. However, it is not widely understood that foreign military doctrines define “information warfare” and “cyber warfare” as encompassing kinetic attacks and EMP attack–which is an existential threat to the United States.

Thomas Rid’s book Cyber War Will Not Take Place (Oxford University Press, 2013) exemplifies the viewpoint of a growing minority of highly talented cyber security experts and scholars who think there is a conspiracy of governments and industry to hype the cyber threat. Rid’s bottom line is that hackers and computer bugs are capable of causing inconvenience–not apocalypse. Cyber-attacks can deny services, damage computers selectively but probably not wholesale, and steal information, according to Rid. He does not rule out that future hackers and viruses could collapse the electric grid, concluding such a feat would be, not impossible, but nearly so.

In a 2012 BBC interview, Rid chastised then-Secretary of Defense Leon Panetta for claiming that Iran’s Shamoon virus, used against the U.S. banking system and Saudi Arabia’s ARAMCO, could foreshadow a “Cyber Pearl Harbor”, and for threatening military retaliation against Iran. Rid told the BBC that the world has “never seen a cyber-attack kill a single human being or destroy a building.”

Cyber security expert Bruce Schneier claims, “The threat of cyberwar has been hugely hyped” to sustain growing cyber security programs at the Pentagon’s Cyber Command and the Department of Homeland Security, and new funding streams to Lockheed Martin, Raytheon, CenturyLink, and AT&T, all part of the new cyber defense industry. The Brookings Institution’s Peter Singer wrote in November 2012, “Zero. That is the number of people who have been hurt or killed by cyber terrorism.” Ronald J. Deibert, author of Black Code: Inside the Battle for Cyberspace, a lab director and professor at the University of Toronto, accuses RAND and the U.S. Air Force of exaggerating the threat from cyber warfare.

Peter Sommer of the London School of Economics and Ian Brown of Oxford University, in Reducing Systemic Cybersecurity Risk, a study for Europe’s Organization for Economic Cooperation and Development, are far more worried about natural EMP from the Sun than computer viruses: “a catastrophic cyber incident, such as a solar flare that could knock out satellites, base stations and net hardware” makes computer viruses and hacking “trivial in comparison.”

The now-declassified Aurora experiment is the empirical basis for the claim that a computer virus might be able to collapse the national electric grid. In Aurora, a virus was inserted into the SCADAS running a generator, causing the generator to malfunction and eventually destroy itself. However, using a computer virus to destroy a single generator does not prove it is possible or likely that an adversary could destroy all or most of the generators in the United States. Aurora took a protracted time to burn out a generator–and no intervention by technicians attempting to save the generator was allowed, as would happen in a nationwide attack, if one could be engineered. Nor is there a single documented case of even a local blackout in the United States being caused by a computer virus or hacking–which surely would have happened by now if vandals, terrorists, or rogue states could easily attack U.S. critical infrastructures by hacking.

Even the Stuxnet worm, the most successful computer virus so far, reportedly engineered jointly by the U.S. and Israel to attack Iran’s nuclear weapons program according to White House sources, proved a disappointment. Stuxnet succeeded in damaging only 10 percent of Iran’s centrifuges for enriching uranium, and did not stop or even significantly delay Tehran’s march towards the bomb. During the recently concluded Gaza War between Israel and Hamas, a major cyber campaign using computer bugs and hacking was launched against Israel by Hamas, the Syrian Electronic Army, Iran, and sympathetic hackers worldwide–in effect, a cyber world war against Israel.

The Institute for National Security Studies at Tel Aviv University, in “The Iranian Cyber Offensive during Operation Protective Edge” (August 26, 2014), reports that the cyber-attacks caused inconvenience and, at worst, some alarm over a false report that the Dimona nuclear reactor was leaking radiation: “…the focus of the cyber offensive…was the civilian internet. Iranian elements participated in what the C4I officer described as an attack unprecedented in its proportions and the quality of its targets….The attackers had some success when they managed to spread a false message via the IDF’s official Twitter account saying that the Dimona reactor had been hit by rocket fire and that there was a risk of a radioactive leak.” However, the combined hacking efforts of Hamas, the SEA, Iran and hackers worldwide did not black out Israel or significantly impede Israel’s war effort.

But tomorrow is always another day. Cyber warriors are right to worry that perhaps someday someone will develop the cyber bug version of an atomic bomb. Perhaps such a computer virus already exists in a foreign laboratory, awaiting use in a future surprise attack. On July 6, 2014, reports surfaced that Russian intelligence services allegedly infected 1,000 power plants in Western Europe and the United States with a new computer virus called Dragonfly. No one knows what Dragonfly is supposed to do. Some analysts think it was just probing the defenses of western electric grids. Others think Dragonfly may have inserted logic bombs into SCADAS that can disrupt the operation of electric power plants in a future crisis.

Cyber warfare is an existential threat to the United States not because of computer viruses and hacking alone, but as envisioned in the military doctrines of potential adversaries, whose plans for an all-out cyber warfare operation include the full spectrum of military capabilities–including EMP attack. In 2011, a U.S. Army War College study, In The Dark: Planning for a Catastrophic Critical Infrastructure Event, warned U.S. Cyber Command that U.S. doctrine should not focus on computer viruses to the exclusion of EMP attack and the full spectrum of other threats, as planned by potential adversaries.

Reinforcing the above, a Russian technical article on cyber warfare by Maxim Shepovalenko (Military-Industrial Courier, July 3, 2013) notes that a cyber-attack can collapse “the system of state and military control…its military and economic infrastructure” because of “electromagnetic weapons…an electromagnetic pulse acts on an object through wire leads on infrastructure, including telephone lines, cables, external power supply and output of information.” Cyber warriors who think narrowly in terms of computer hacking and viruses invariably propose anti-hacking measures and anti-virus software as solutions. That path leads to an endless virus-versus-anti-virus arms race that may ultimately prove unaffordable and futile.

The worst-case cyber scenario envisions a computer virus infecting the SCADAS that regulate the flow of electricity into EHV transformers, damaging the transformers with overvoltage and causing a protracted national blackout. But if the transformers are protected with surge arrestors against the worst threat–nuclear EMP attack–they would be unharmed by the worst possible overvoltage that any computer virus might generate through the system. This EMP hardware solution would provide a permanent and relatively inexpensive fix to the extremely expensive and apparently endless software arms race now under way in the new cyber defense industry.

 

Other threats: Severe Weather

Hurricanes, snow storms, heat waves and other severe weather pose a growing threat to the overtaxed, aging and fragile national electric grid. So far, the largest and most protracted blackouts in the United States have been caused by severe weather. For example:

  • Hurricane Katrina (August 29, 2005), the worst natural disaster in U.S. history, blacked out New Orleans and much of Louisiana, and the blackout seriously impeded rescue and recovery efforts. Lawlessness swept the city. Electric power was not restored to parts of New Orleans for months, making some neighborhoods a criminal no man’s land too dangerous to live in. New Orleans has still not fully recovered its pre-Katrina population. Economic losses to the Gulf States region totaled $108 billion.
  • Hurricane Sandy (October 29, 2012) caused blackouts in parts of New York and New Jersey that in some places lasted weeks. Again, as in Katrina, the blackout gave rise to lawlessness and seriously impeded rescue and recovery. Thousands were rendered homeless, in part because of the protracted blackout in some neighborhoods. Partial and temporary blackouts were experienced in 24 states. Total economic losses were $68 billion.
  • A heat wave on August 14, 2003, caused a power line to sag into a tree branch, a seemingly minor incident that began a series of cascading failures resulting in the Great Northeast Blackout of 2003. Some 50 million Americans were without electric power, including New York City. Although the grid largely recovered after a day, disruption of the nation’s financial capital was costly, with estimated economic losses of about $6 billion.
  • On September 18, 2014, a heat wave caused rolling brownouts and blackouts in northern California so severe that some radio commentators speculated that a terrorist attack on the grid might be underway.

 

What to do: All Hazards Strategy–EMP Protection is Key

Most of the general public and most state governments are unaware of the EMP threat, and unaware that political gridlock in Washington has prevented the Federal government from implementing any of several cost-effective plans for protecting the national electric grid.

All Hazards Protection: Most state governments are unaware that they can protect the grid within their own State to shield their citizens from the catastrophic consequences of a national blackout, and that protecting the grid from the worst threat, nuclear EMP, will also help protect it from lesser hazards such as geomagnetic storms, cyber-attack, sabotage, and severe weather.

States Should EMP Harden Their Grids. All states should prepare for all hazards in this age of the Electronic Blitzkrieg. State governments and their Public Utility Commissions should exercise aggressive oversight to ensure that the transformer substations and electric grids in their states are safe and secure; the record of NERC and the electric utilities indicates they cannot be trusted to provide for the security of the grid on their own. State governments can protect their grids from sabotage through the “all hazards” strategy that protects against the worst threat–nuclear EMP attack. For example, faraday cages that protect EHV transformers and SCADAS colonies from EMP would also screen these vital assets from view, so they could not be accurately targeted by high-powered rifles, as is necessary to destroy them by small-arms fire. The faraday cages could be made of heavy metal or otherwise fortified for more robust protection against more powerful weapons, like rocket-propelled grenades.

Surge arrestors to protect EHV transformers and SCADAS from nuclear EMP would also protect the national grid from collapse due to sabotage. The U.S. FERC scenario in which terrorists collapse the whole national grid by destroying merely nine transformer substations works only because of cascading overvoltage. When the nine key substations are destroyed, megawatts of electric power get suddenly dumped onto other transformers, which in turn become overloaded and fail, dumping yet more megawatts onto the grid. Cascading failures of more and more transformers ultimately cause a protracted national blackout. This worst-case sabotage scenario could not happen if the transformers and SCADAS were protected against nuclear EMP–a more severe threat than any possible system-generated overvoltage.
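A toy calculation can illustrate why losing a few heavily loaded substations cascades into total collapse, and why added protective margin stops the cascade. This is a deliberately crude sketch with invented numbers, not any utility’s actual planning model; the extra “headroom” in the second run is only a rough stand-in for the margin that hardware protection provides:

```python
# Toy model of cascading failure (illustrative only, not a real grid
# simulation): the total load of all failed units is spread evenly
# across survivors, which may push them past capacity in turn.
def cascade(loads, capacities, initial_failures):
    failed = set(initial_failures)
    while True:
        alive = [i for i in range(len(loads)) if i not in failed]
        if not alive:
            return failed          # total collapse
        shed = sum(loads[i] for i in failed)
        extra = shed / len(alive)  # redistributed load per survivor
        newly = {i for i in alive if loads[i] + extra > capacities[i]}
        if not newly:
            return failed          # cascade stops
        failed |= newly

# 20 identical units running at 90% of capacity; saboteurs destroy 3.
loads = [90.0] * 20
destroyed = {0, 1, 2}
print(len(cascade(loads, [100.0] * 20, destroyed)), "of 20 lost")  # 20: collapse
# With more headroom per unit, the same attack stays contained:
print(len(cascade(loads, [130.0] * 20, destroyed)), "of 20 lost")  # 3: contained
```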

Critics rightly argue that NERC’s proposed operational procedures (satellite warnings) are a non-solution, designed as an excuse to avoid the expense of the only real solution: physically hardening the electric grid to withstand EMP.

NERC rejects the recommendation of the Congressional EMP Commission to physically protect the national electric grid from nuclear EMP attack by installing blocking devices, surge arrestors, faraday cages and other proven technologies. These measures would also protect the grid from the worst natural EMP, a geomagnetic super-storm like another Carrington Event. The estimated one-time cost–$2 billion–is what the United States gives away every year in foreign aid to Pakistan.

Yet Washington remains gridlocked between lobbying by NERC and the wealthy electric power industry on the one hand, and the recommendations of the Congressional EMP Commission and other independent scientific and strategic experts on the other hand. The States should not wait for Washington to act, but should act now to protect themselves.

While gridlock in Washington has prevented the Federal Government from protecting the national electric power infrastructure, threats to the grid–and to the survival of the American people–from EMP and other hazards are looming ever larger. Grid vulnerability to EMP and other threats is now a clear and present danger.

The Congressional EMP Commission warned that an “all hazards” strategy should be pursued to protect the electric grid and other critical infrastructures, which means trying to find common solutions that protect against more than one threat–ideally all threats. The “all hazards” strategy is the most practical and most cost-effective solution to protecting the electric grid and other critical infrastructures. Electric grid operation and vulnerability is critically dependent upon two key technologies: Extra-High Voltage (EHV) transformers and Supervisory Control and Data Acquisition Systems (SCADAS).

The Congressional EMP Commission recommended protecting the electric grid and other critical infrastructures against nuclear EMP as the best basis for an “all hazards” strategy. Nuclear EMP may not be as likely as other threats, but it is by far the worst, the most severe, threat.

The EMP Commission found that if the electric grid can be protected and quickly recovered from nuclear EMP, the other critical infrastructures can also be recovered, with good planning, quickly enough to prevent mass starvation and restore society to normalcy. If EHV transformers, SCADAS and other critical components are protected from the worst threat–nuclear EMP–then they will survive, or damage will be greatly mitigated, from all lesser threats, including natural EMP from geomagnetic storms, severe weather, sabotage, and cyber-attack.

The “all hazards” strategy recommended by the EMP Commission is not only the most cost-effective strategy–it is a necessary strategy.

New York and Massachusetts Protect Their Grids.

New York Governor Andrew Cuomo and Massachusetts Governor Deval Patrick would not agree that NERC’s performance during Hurricane Sandy was exemplary. Under the leadership of Governor Patrick, Massachusetts is spending $500 million to upgrade the security of its electric grid from severe weather. New York is spending a billion dollars to protect its grid from severe weather.

The biggest impediment to recovering an electric grid from hurricanes is not fallen electric poles and downed power lines. When part of the grid physically collapses, the resulting overvoltage can damage all kinds of transformers, including EHV transformers, as well as SCADAS and other vital grid components. Video footage shown on national television during Hurricane Sandy showed spectacular explosions and fires erupting from transformers and other vital grid components because of overvoltage.

If the grid were hardened to survive a nuclear EMP attack by installing surge arrestors, it would easily survive the overvoltages induced by hurricanes and other severe weather. This would cost far less than burying power lines underground and the other measures New York and Massachusetts are undertaking to fortify their grids against hurricanes–all of which will be futile if transformers and SCADAS are not protected against overvoltage.

Unfortunately, both States are probably spending a lot more than they have to by focusing on severe weather, instead of an “all hazards” strategy to protect their electric grids.

According to a senior executive of New York’s Consolidated Edison, briefing at the Electric Infrastructure Security Summit in London on July 1, 2014, Con Ed is taking some modest steps to protect part of the New York electric grid from nuclear EMP attack. This good news has not been reported anywhere in the press. I asked the Con Ed executive why New York is silent about beginning to protect its grid from nuclear EMP, since loudly advertising this prudent step could have a deterrent effect on potential adversaries planning an EMP attack. The Con Ed executive could offer no explanation.

New York City, because of its symbolism as the financial and cultural capital of the Free World, and perhaps because of its large Jewish population, has been the repeated target of terrorist attacks with weapons of mass destruction. A nuclear EMP attack centered over New York City, the warhead detonated at an altitude of 30 kilometers, would cover all of the northeastern United States with an EMP field, including Massachusetts. A practitioner of the New Lightning War may be more likely to exploit a hurricane, blizzard, or heat wave than a geomagnetic storm when launching a coordinated cyber, sabotage, and EMP attack: terrestrial bad weather is more commonplace than bad space weather.
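The 30-kilometer figure can be sanity-checked with ordinary horizon geometry: a high-altitude EMP’s ground footprint is bounded by line of sight from the burst point, giving a radius of roughly sqrt(2 × R_earth × h) for burst height h. A minimal sketch (standard geometry, not a weapons model; the altitudes are illustrative):

```python
# Ground footprint of a high-altitude EMP, bounded by line of sight from
# the burst point: radius ~= sqrt(2 * R_earth * h). Ordinary horizon
# geometry, used here only to check the 30 km claim above.
from math import sqrt

R_EARTH_KM = 6371.0

def footprint_radius_km(burst_altitude_km):
    return sqrt(2 * R_EARTH_KM * burst_altitude_km)

for h in (30, 100, 400):
    print(f"burst at {h:>3} km -> footprint radius ~{footprint_radius_km(h):.0f} km")
# 30 km -> ~620 km, enough to blanket the Northeast from a burst over New York;
# 400 km -> ~2,250 km, most of the continental U.S.
```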

 

PETER VINCENT PRY is Executive Director of the EMP Task Force on National and Homeland Security, a Congressional Advisory Board dedicated to achieving protection of the United States from electromagnetic pulse (EMP), cyber-attack, mass destruction terrorism and other threats to civilian critical infrastructures on an accelerated basis. Dr. Pry also is Director of the United States Nuclear Strategy Forum, an advisory board to Congress on policies to counter Weapons of Mass Destruction. Dr. Pry served on the staffs of the Congressional Commission on the Strategic Posture of the United States (2008-2009); the Commission on the New Strategic Posture of the United States (2006-2008); and the Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack (2001-2008). Dr. Pry served as Professional Staff on the House Armed Services Committee (HASC) of the U.S. Congress, with portfolios in nuclear strategy, WMD, Russia, China, NATO, the Middle East, Intelligence, and Terrorism (1995-2001). While serving on the HASC, Dr. Pry was chief advisor to the Vice Chairman of the House Armed Services Committee and the Vice Chairman of the House Homeland Security Committee, and to the Chairman of the Terrorism Panel. Dr. Pry played a key role: running hearings in Congress that warned terrorists and rogue states could pose an EMP threat, establishing the Congressional EMP Commission, helping the Commission develop plans to protect the United States from EMP, and working closely with senior scientists who first discovered the nuclear EMP phenomenon. Dr. Pry was an Intelligence Officer with the Central Intelligence Agency responsible for analyzing Soviet and Russian nuclear strategy, operational plans, military doctrine, threat perceptions, and developing U.S. paradigms for strategic warning (1985-1995). He also served as a Verification Analyst at the U.S. Arms Control and Disarmament Agency responsible for assessing Soviet compliance with strategic and military arms control treaties (1984-1985). Dr. Pry has written numerous books on national security issues, including Apocalypse Unknown: The Struggle To Protect America From An Electromagnetic Pulse Catastrophe; Electric Armageddon: Civil-Military Preparedness For An Electromagnetic Pulse Catastrophe; War Scare: Russia and America on the Nuclear Brink; Nuclear Wars: Exchanges and Outcomes; The Strategic Nuclear Balance: And Why It Matters; and Israel’s Nuclear Arsenal. Dr. Pry often appears on TV and radio as an expert on national security issues. The BBC made his book War Scare into a two-hour TV documentary Soviet War Scare 1983 and his book Electric Armageddon was the basis for another TV documentary Electronic Armageddon made by the National Geographic.

 

 


Hydropower dams and the ways they destroy things

[ This contains excerpts and paraphrasing of a 19 November 2014 New Scientist article by Peter Hadfield, “River of the dammed”. Dams typically last 60 to 100 years, but whether Three Gorges can last that long is questionable given the unexpectedly high amounts of silt building up. Since fossil fuels and uranium are finite, many see building more dams for hydropower as essential to keeping the electric grid up. Hydropower is also one of the few energy resources that can balance variable wind and solar. In addition, climate change is likely to lead to a state of permanent drought in some regions, and dams could help cope with water shortages. But dams have a dark side, and we should proceed with caution, as you’ll see from some of the damage done by the Three Gorges dam. ]

Three Gorges dam stats:

  • 13 cities, 140 towns and 1,350 villages were drowned under the rising water of the Three Gorges dam, requiring 1.3 million people to move
  • Required 27 million cubic meters of concrete to build the 2-kilometer-long dam.
  • Provides 2% of China’s electricity
  • 32 turbines, each weighing as much as the Eiffel tower
  • Trash litters the water — discarded plastic bottles, bags, algae and industrial crud — because garbage that used to be flushed downriver and out to sea is now trapped and backing up in the Yangtze’s numerous tributaries. It covers a massive area despite 3000 tonnes being collected a day.
  • The fish population has crashed: lower water levels, slower flow, and pollution have devastated the Yangtze’s fish population and reduced the productivity of fisheries in the South China Sea.
  • Drinking water is being affected because the dam is allowing more seawater than before to intrude into the Yangtze estuary.

Silt will drastically shorten the lifespan of Three Gorges

All dams are eventually rendered useless, typically within 30 to 200 years. But Three Gorges is silting up faster than expected. Far more silt is entering the river and being carried much further than the models predicted, resulting in silt buildup to depths of up to 60 meters, almost two-thirds the maximum depth of the reservoir itself. The dam continues to accumulate silt at a rate of around 200 million cubic meters a year.
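
As a rough sanity check on those figures, divide storage volume by the silting rate. This is a minimal back-of-envelope sketch; the ~39 billion m3 total reservoir capacity is my assumption (a commonly cited figure for Three Gorges, not given in the article):

```python
# Rough estimate of how long silt could take to fill the Three Gorges
# reservoir, given the silting rate quoted above.
reservoir_capacity_m3 = 39e9    # ASSUMED total storage (~39 km^3), not from the article
silt_inflow_m3_per_yr = 200e6   # ~200 million m^3/year (from the article)

years_to_fill = reservoir_capacity_m3 / silt_inflow_m3_per_yr
print(f"Years to fill the entire reservoir: {years_to_fill:.0f}")  # ~195

# The dam is crippled long before the whole reservoir fills, once silt
# reaches intakes, locks, and navigation channels.  If only a third of
# the volume can silt up before that happens, the horizon shrinks:
print(f"Years to fill one-third: {years_to_fill / 3:.0f}")  # ~65
```

Under these assumptions the useful life lands squarely within the 30-to-200-year range cited above.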

As a result, one of the two navigation channels that pass on either side of an island in the reservoir has been completely blocked, forcing ship traffic in both directions to follow a single channel.

Worse yet, silt is building up at the dam wall. A lot of it has to be cleared by dredgers to make sure it doesn’t interfere with the turbines that generate China’s electricity and the massive locks that allow ships to pass through.

The only way to slow the process is to build more dams upstream to trap the silt. Many were already being planned. If they are all built, the Yangtze will become a series of dams instead of a river.

Erosion

The filling of the reservoir has also destabilized some of the steep slopes lining the reservoir. Landslides are common, blocking roads and threatening villages.

Holding back water to fill the reservoir also reduces the flow downstream, bringing forward the start of the Yangtze’s natural low-water period. The result is that the Yangtze’s once bountiful floodplain is now drying up. “China’s two largest freshwater lakes – Poyang and Dongting – now find themselves higher than the river,” says Patricia Adams of Probe International, a Canadian environmental foundation that has written a number of critical reports about the Three Gorges dam. “The effect of that is that their water is flowing into the river and essentially draining these very important flood plains.”

Like all deltas, the mouth of the Yangtze is a tug of war between deposition and erosion. Between 1050 and 1990, according to a 2003 study, deposition won: over those nine centuries the Nanhui foreland, which marks the south bank of the estuary, grew nearly 13 kilometers. But more recently, erosion began to dominate.

The dam has made things even worse by nearly halving the amount of silt entering the delta, leading to a threefold jump in the erosion rate. This could become a major problem for China’s largest city, Shanghai, which sits only a meter above a sea level that is expected to rise by up to 2 meters over the next century.

List of Serious Problems from The Guardian

  • The dam reservoir has been polluted by algae and chemical runoff that would normally have been flushed away had the dam not been built, and the contamination keeps building up.
  • The weight of the extra water is being blamed for earthquake tremors, landslides and erosion of hills and slopes.
  • Because of the project’s instability and unpredictability, scientists are calling on the government to establish water treatment plants and warning systems, shore up and reinforce riverbanks, boost funding for environmental protection, and increase benefits to the displaced.
  • Some scientists are advocating the reestablishment of ecosystems that were destroyed by the project and are suggesting the additional movement of hundreds of thousands of residents to safer ground.
  • 1,392 reservoirs of fresh water that existed before the project have become “dead water”, destroying the drinking water of over 300,000 people.
  • Boat traffic on the Yangtze River has been negatively affected: the depths and shallows of the river have been completely transformed, and thousands of boats regularly run aground.
  • The design of the project has damaged the Yangtze River in that water no longer pushes mud and silt downstream but leaves it stagnating above the dam.
  • While the current problem is a drought, floods and droughts have come and gone over the past decade, and the dam’s flow-control mechanism does not seem operational; it does not appear to affect water levels in any way.

 

Posted in Dams, Hydropower

Are biofuels a sustainable and viable energy strategy?

[In 2000, Melanie Kenderdine at the U.S. Department of Energy stated that: “This nation has abundant biomass resources (grasses, trees, agricultural wastes) that have the potential to provide power, fuels, chemicals and other bio-based products.” House 106-147. May 24, 2000. National energy policy: ensuring adequate supply of natural gas and crude oil. U.S. House of Representatives.

She is making a good point – after fossil fuels, biomass will have to do everything fossils once did: generate electricity, fuel our cars, trucks, trains, and ships, replace the vast petrochemical industry of plastics, medicine, and half a million other products.  But is there really enough biomass to do all of these things?

Gomiero doesn’t think so, and makes many of the same points I do in Peak Soil, adding new evidence. There is also an interesting explanation of why biofuel subsidies are 136 times higher than oil subsidies.

And keep in mind that when diesel-burning trucks stop running, civilization ends. Diesel engines can’t burn ethanol, or even gasoline, and most engine warranties allow zero to at most 20% biodiesel to be mixed in with petroleum-derived diesel. So why are we making ethanol?  Why aren’t we getting cars off the road ASAP to free up fuel for trucks, locomotives, and ships?

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation, 2015, Springer]

Gomiero, Tiziano. June 30, 2015. Are Biofuels an Effective and Viable Energy Strategy for Industrialized Societies? A Reasoned Overview of Potentials and Limits. Sustainability 2015, 7, 8491–8521.

Excerpts from this 31 page article follow.

For our industrial society to rely on “sustainable biofuels” for an important fraction of its energy, most of the agricultural and non-agricultural land would need to be used for crops, and at the same time a radical cut to our pattern of energy consumption would need to be implemented, whilst also achieving a significant population reduction.

Some scholars questioned the energy efficiency of biofuels, claiming that it was an unproductive enterprise (e.g., [2–13]), a point already made in the 1970s by energy experts such as Prof. David Pimentel [2], and Prof. Vaclav Smil [4].

Biofuels, in fact, call for the adoption of those very same agricultural practices that for decades have been blamed for being highly energy inefficient and water consuming, and for contaminating the environment and threatening biodiversity and soil health [2–5,14–17].

Other works highlighted that, contrary to current belief, biofuel production may cause net CO2 emission, in particular when tropical forests and pristine land are converted to plantations and crops for biofuel production [18–20].

The interest in biofuel as a potential sustainable and renewable energy source is still high, as is attested by numerous scientific journals recently created in its name, and the number of funded research projects that focus on this topic. Private investments and public subsidies are still poured into this sector. Since the crisis, however, the focus shifted from first-generation biofuels (or the use of fuel crops) to second-generation biofuels, i.e., the use of cellulosic ethanol (crop residues, woody biomass), and then to third-generation biofuels, i.e., oil from algae.

Palm oil is also becoming of high interest for the biofuel market, and  there is a risk that palm oil plantations may further increase the displacement of native forests in tropical countries (as happened for sugarcane plantations in Brazil), or replace other food crops, without providing any benefits to farmers. After the plantation is discontinued (20–25 years), the soil is then ruined and cannot easily serve for further agricultural activities.

Findings from different experts, however, diverge considerably. Some authors claim that biofuels may represent an efficient alternative to oil, some of them referring to fuel crops, while others only refer to cellulosic ethanol. Other authors claim that biofuels and biomass in general are instead an inefficient alternative to fossil fuels. So, how is it possible that highly respected scholars can reach such opposing conclusions?

We have to face the fact that data-gathering systems rely on different approaches and methodologies, involving different focuses, models, assumptions and scale of analysis. To begin with, a major problem arises with the choice of system boundaries, the “boundary dilemma” as Smil [32] (p. 275) put it. The choices over where to make our system end can lead to large differences in the results [12,28,32]. Borrion et al. [34], in their extensive review of environmental LCA of lignocellulosic ethanol conversion, conclude that results strongly depend on system boundary, functional unit, data quality and allocation methods chosen. The authors also make an important remark stating that “The lack of available data from commercial second generation ethanol plant and the uncertainties in technology performance have made the LCA study of the lignocellulosic ethanol conversion process particularly difficult and challenging.” [34] (p. 4648).

Assessments are scale dependent (and of course value laden, a matter which scientists often prefer not to confront). This means that before the assessment exercise takes place we have to frame properly the context in which we are operating. To put it simply, do cars pollute? It depends on how many cars we are talking about, the performance of their engines, their average speed, the quality of the fuel, etc. New “clean” engines on many new cars may cause more pollution than old dirty engines on few old cars; scale matters. But the scale has to be decided before carrying out the assessment. There is a very telling example concerning the calculation of biofuel efficiency presented by Shapouri et al. [36] vs. Giampietro et al. [26], on how to account for co-products. I quote [3] (p. 33)

(Prof. David Pimentel was also a co-author of the paper [12]), as it is explained very clearly: “Shapouri et al. reported a net energy return of 67% after including the co-products, primarily dried distillers grain (DDG) used to feed cattle. These co-products are not fuel!

Giampietro et al. (1997) observed that although the by-product DDG may be considered as a positive output in the calculation of the output/input energy ratio in ethanol production, in a large-scale production of ethanol fuel, the DDG would be many times the commercial livestock feed needs each year in the U.S. (Giampietro et al. 1997).

It follows then that in a large-scale biofuel production, the DDG could become a serious waste disposal problem and increase the energy costs.” The issue of scale was also pointed out by Smil [4] in his assessment of the PROALCOL program launched by the Brazilian government. Apart from a number of problems identified by Smil [4] (e.g., soil erosion, land conversion, productivity-related issues, economic viability), the author stressed that in order to achieve the production of ethanol from sugarcane forecast by the government, the process would also have to produce each year more than 150 million m3 of vinhoto, the residue of the process. Such a byproduct can be dried and used as feed, but that is a highly energy-intensive process. The liquid may be used as fertilizer, but that requires logistics for concentrating it, transporting it around the country, etc. So the usual solution is dumping the fluid into the nearest water bodies, and in that context, vinhoto is a very serious pollutant.

For more examples on how the scale issue matters, I refer the reader to [12,37].

On the energy analysis of biofuels, a fierce debate surrounds the issue of providing an accurate EROI estimate for biofuels, but this really has to do with a few decimals below or above one, as the EROI for biofuels is between 0.8 and 1.6.

This issue should not be a matter of concern, as fossil fuels, which fuel industrial societies, generate an EROI of 20–30 or more [12,27,29]. The fact that there are cases where biofuels can be produced at higher EROI does not really change the judgment over the low performance of biomass.
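
To see why a debate over a few decimals around 1 is beside the point, look at the net energy actually delivered to society, (EROI − 1)/EROI. A minimal sketch using the EROI ranges just quoted:

```python
# Net energy delivered to society as a fraction of gross energy output:
# net_fraction = (EROI - 1) / EROI.  At EROI = 1 a source merely repays
# its own production costs; below 1 it is a net energy sink.
def net_fraction(eroi):
    return (eroi - 1.0) / eroi

for label, eroi in [("biofuel, low end", 0.8),
                    ("biofuel, high end", 1.6),
                    ("fossil fuel", 25.0)]:
    print(f"{label:18s} EROI {eroi:5.1f}  net fraction {net_fraction(eroi):+.2f}")

# biofuel, low end   EROI   0.8  net fraction -0.25  (a net energy loss)
# biofuel, high end  EROI   1.6  net fraction +0.38
# fossil fuel        EROI  25.0  net fraction +0.96
```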

The power density of the energy source, that is to say the rate of energy flux per unit of area (W/m2), is a key indicator [4–7]. Concerning power density, fossil fuels perform from 300 to 3000 times better than the best biofuel.

See also Smil [7] (p. 265), for data about the power density of various kinds of biomass energy production.
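
The land-area implications of that gap are easy to make concrete. In this minimal sketch the specific W/m2 values are illustrative assumptions within the ranges Smil reports, not figures from this paper:

```python
# Land area required to supply 1 GW of average power at different
# power densities.  The W/m^2 values below are illustrative assumptions.
demand_W = 1e9  # 1 GW of average demand

for source, w_per_m2 in [("liquid biofuel (assumed)", 0.5),
                         ("oil/gas extraction site (assumed)", 1000.0)]:
    area_km2 = demand_W / w_per_m2 / 1e6  # convert m^2 to km^2
    print(f"{source:35s} {area_km2:10,.0f} km^2")

# liquid biofuel (assumed)                 2,000 km^2
# oil/gas extraction site (assumed)            1 km^2
```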

Giampietro and colleagues [12,26] argue that developed societies, in order to sustain their level of metabolism, require an energy throughput in the energy sector ranging from 10,000 to 20,000 MJ per hour of labor. The fact that the range of values achievable with biofuels is just 250–1600 MJ per hour of labor says it all. Of course, we may argue that this is a positive outcome, as it allows the creation of more jobs and reduces unemployment. Nevertheless, if wages in those jobs are to be comparable to those in other sectors of society, the cost of energy will skyrocket.
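
Those throughput figures translate directly into labor requirements. A minimal sketch; the total demand figure is an illustrative assumption (1 EJ/year, a small fraction of, say, U.S. primary energy use of roughly 100 EJ/year):

```python
# Labor hours the energy sector would need to deliver a fixed energy
# demand, using the MJ-per-labor-hour throughputs quoted above.
society_demand_MJ_per_yr = 1e12  # ASSUMED demand: 1e12 MJ = 1 EJ/year

for source, mj_per_hr in [("fossil system, low end", 10_000),
                          ("biofuel, high end", 1_600),
                          ("biofuel, low end", 250)]:
    labor_hours = society_demand_MJ_per_yr / mj_per_hr
    print(f"{source:22s} {labor_hours:>15,.0f} labor-hours/year")

# fossil system, low end     100,000,000 labor-hours/year
# biofuel, high end          625,000,000 labor-hours/year
# biofuel, low end         4,000,000,000 labor-hours/year
```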

On the biophysical side, one of these indicators is energy density. The final cost of energy in economic terms is, of course, another key issue. Biofuels can be produced only thanks to subsidies. A number of qualitative indicators are also highly relevant such as: the level of contamination produced, the reliability of the supply, and the level of risk involved [5–7,12,13,29].

It should be clear, therefore, that to perform a sound and effective assessment of an energy source is far from being a simple task, and requires the adoption of a number of different indicators related to different criteria and scales. The narrative about biofuels, instead, has been and still is, dangerously simplistic.

At present, the energetic discourse on biofuels is focused on the EROI, but, as we have seen, the EROI is just part of the story. The main problem with biofuels is that they have a power density that is simply too low and this requires handling an enormous quantity of biomass, costing society a lot of working time and capital. Those characteristics make biofuels unable to supply energy to match the metabolic rate of energy consumption of developed countries [5,6,12,26,32].

For our industrial society to rely on “sustainable biofuels” for an important fraction of its energy, it would require a complete reshaping of its metabolism:

  • cropping most of the agricultural and non-agricultural land, affecting food supply and food affordability, increasing the impact on natural resources (water, soil health, pollution, loss of biodiversity);
  • implementing an amazing occupational shift by sending millions of people back to the fields, which will increase the cost of energy (or at least drastically reduce the wages of those working in the sector);
  • cutting our pattern of energy consumption, given the reduced flow of net energy;
  • accepting a substantial reduction of population size and consumption;
  • dealing with a continuous risk of running out of energy due to climate extremes, pests, etc.;
  • such a massive amount of biomass may not be sustainable in the long term, and in the short run, it would require increasing amounts of input.

In summary, for a society (as for any living organism) the energetic supply is a matter of vital importance. The key factors being: (1) the quality of the energy source (fossil fuels are much better than biomass as most of the work has already been done by the Earth’s ecosystems and geological forces over hundreds of millions of years); and (2) the overall efficiency of the supply process (extraction, transformation, etc.), that is to say, the net energy supplied to society at the proper rate of delivery, able to match the rate of energy demand. If the supply of energy cannot match the rate of metabolic energy consumption, society will reduce its metabolism accordingly.

Subsidies: Are They the Key for Biofuel Sustainability?

Pimentel, Smil and Youngquist were critical of the real efficiency of biomass as an energy source, and posed important questions concerning its economic efficiency and environmental impact (e.g., soil, water, use of agrochemicals). Youngquist claims that ethanol policy in the USA is a mere political issue, with politicians granting subsidies for inefficient ethanol production in order to secure the votes from Corn Belt electors: “The answer is that it is an example of politics overriding reason. The political block of the corn belt states holds votes crucial to elections, and companies which produce ethanol in the United States have been some of the largest contributors to political campaign funds in recent years” [43] (pp. 243–244).

Subsidies are still the main driving force shaping biofuel policy and trade, and ultimately they keep all this going. Even with oil at US$100/barrel, biofuels were still not competitive and needed subsidies (which is to be expected, as a lot of fossil fuel is required to carry out intensive agriculture) [12,44,45].

Koplow and Steenblik [45] estimate that in 2008, in the USA, total support for ethanol production ranged between 9.0 and 11.0 billion US$, with subsidies between 2009 and 2012 accounting for about 50% (up to 80% in 2007) of the ethanol market price. These figures are likely an underestimate, given the many forms economic support can take (from tax exemptions to price premiums), rendering precise subsidy assessment a difficult task [44,45].

According to the IEA, biofuel subsidies amounted to about US$22 billion in 2010, and are projected to increase to up to US$67 billion per year in 2035 [44]. Note that fossil fuels benefit from subsidies, too. Fossil-fuel subsidies are estimated at between US$45–75 billion a year in OECD countries and at US$409 billion in 2010 in non-OECD countries [44]. Some authors (e.g., [46]) back the subsidy policy of biofuels on the basis that “In any case, the size of the support of biofuels is small (the authors are referring to the figure of US$20 billion they present earlier), in relation to the cost of fossil fuel consumption subsidies, which amounted to $312 billion worldwide in 2009”. This reasoning is evidently flawed. The comparison refers to the total value, but it has to be done on a per-unit basis instead. According to the BP Statistical Review of World Energy [47], in 2009 fossil fuel consumption amounted to about 10,000 Mt oil equivalent (3809 Mt oil, 2690 Mtoe gas, 3547 Mtoe coal), while biofuel amounted to about 52 Mt oil equivalent.

Subsidies turn out to be 3.1 million US$ per Mt oil eq. in the case of fossil fuels (US$3/t), and 423 million US$ per Mt oil eq. in the case of biofuels (US$423/t), 136 times more. We may well wonder what we are doing with biofuels!

Who benefits most from these subsidies? In the USA, federal and state subsidies for ethanol production, which total more than US$7 per bushel of corn, have always been paid mainly to large corporations [9,45,49]. It thus seems that those who gain from subsidies are the large corporations that sell the fossil-fuel-derived inputs, while the losers are the farmers, the consumers and the taxpayers! And the environment, of course.

The USA population, 310 million in 2009, will reach 440 million by 2050 (US Census Bureau, 2009). According to Nowak and Walton [73], the rate of rural land lost to development in the 1990s was about 0.4 million ha per year, and the authors warn that if this rate continues until 2050, the USA will have lost an additional 44 million ha of rural countryside. Such areas will be lost mostly at the expense of agriculture or conservation land programs. Brown [74] points out that the USA, with its 214 million motor vehicles, has paved an estimated 16 million ha of land (in comparison to the 20 million ha that US farmers plant in wheat). About 13% of U.S. land area is currently dedicated to highways and urbanization, so adding another 150 million people will dramatically affect both the demand for food and the demand for space (e.g., urbanization and highways).

Promoting the extensive cultivation of species suitable for biofuel production would increase two of the major causes of biodiversity loss on the planet, namely the clearing and conversion of yet more natural areas for monocultures, and the invasion by non-native species.

“Carbon Debt”: Biofuels and Increasing Carbon Emissions

The belief that burning biomass is carbon neutral has been questioned. Such an idea is founded upon the rather simplistic reasoning that CO2 released in the burning is picked up again by plants, giving a net release of zero. There are a number of reasons why this is not so. Displacing tropical ecosystems in favor of plantations causes the loss of aboveground biomass, and also the release of a huge amount of carbon stored in the soil (about 50% of the total carbon in tropical forests is stored in the soil). Plantations will never store as much biomass as native ecosystems, and that leads to net carbon emissions. Converting grasslands into fuel crops will cause the net emission of the carbon stored in the native ecosystem.

Estimates concerning the “carbon debt” (the carbon that is lost in land use change) have already been published (e.g., [18,19]):

  • the conversion of rainforests, peatlands, and savannas in Brazil and Southeast Asia may create a “biofuel carbon debt” by releasing 17 to 420 times more CO2 than the annual GHG reductions these biofuels would provide by displacing fossil fuels;
  • in the USA, corn-based ethanol will nearly double GHG emissions over 30 years, while cropping grasslands to produce biofuels (e.g., with switchgrass) will increase GHG emissions by 50%. Some USA public institutions concluded that fuel crops may cause much worse problems than fossil fuels, because corn ethanol and biodiesel made from soybean oil cause a large amount of land conversion and so create a high “carbon debt” [88,89];
  • in a meta-analysis of 142 soil studies, Piñeiro et al. [90] conclude that the soil C sequestered by setting aside former agricultural land was greater than the C credits generated by planting corn for ethanol on the same land for 40 years, and that C releases from the soil after planting corn for ethanol may, in some cases, completely offset C gains attributed to biofuel generation for at least 50 years.

It has been suggested that agricultural intensification may help reduce the expansion of plantations into pristine ecosystems. However, recent analysis found that using high-yielding oil palm crops to intensify productivity and then preserving the remaining biodiversity may not work either. Carrasco et al. [95], for example, argue that using high-yielding oil palm crops could actually lead to further tropical deforestation. That is because palm oil will become cheaper on the global food markets and will outcompete biofuels grown in temperate regions. That in turn will increase the planting of oil palm in tropical regions. In fact, paradoxically, while developed countries claim to import biofuels from tropical regions in order to reduce their CO2 emissions, they are actually amplifying the problem and helping to fuel the process of tropical deforestation [18,19,44,96,97]. Houghton [98] warns that, between 1990 and 2010, forest degradation and deforestation accounted for 15% of anthropogenic carbon emissions and argues that we have to work to stop this trend. The author is rather critical of the international biofuel trade, which, he claims, is driven by distortions generated by the high subsidies in place in the USA and the EU, and is not going to work towards halting deforestation.

The greater availability of crop residues and weed seeds translates to increased food supplies both for invertebrates and vertebrates, which play important ecological functions in agro-ecosystems, influencing, among other things: soil structure, nutrients cycling and water content, and the resistance and resilience against environmental stress and disturbance [57,115–120].

When compared to corn grain, it takes 2 to 5 times more cellulosic biomass to obtain the same amount of starch and sugars. This means that 2 to 5 times more biomass has to be produced and handled in order to obtain the same starches as from corn grain [9].

Tilman et al. [21] suggest that all 235 million hectares of grassland available in the USA, plus crop residues, can be converted into cellulosic ethanol, recommending that crop residues, like corn stover, be harvested and utilized as a fuel source. I have already mentioned residues; as for the use of grassland, it cannot be considered empty space. There are tens of millions of livestock (cattle, sheep, and horses) grazing on that land, as well as all the wild fauna and flora living in those ecosystems [122].

Some energy analysts consider the biofuel “solution” so completely unrealistic that it should not even be worth any attention (e.g., [4,6,10,12]). Pimentel, in his edited book on renewable energies [10], closes the work with chapter 20, on algae, consisting of just two pages, summary and references included [126] (pp. 499–500). Pimentel claims that properly accounting for all the costs and assuming a realistic energy production level would lead to an estimated cost of US$800 per barrel of algal oil.

References

  1. HLPE (High Level Panel of Experts). Biofuels and Food Security; A Report by the High Level Panel of Experts on Food Security and Nutrition of the Committee on World Food Security; FAO: Rome, Italy, 2013. http://www.fao.org/fileadmin/user_upload/hlpe/hlpe_documents/HLPE_Reports/HLPE-Report-5_Biofuels_and_food_security.pdf (accessed on 5 February 2015).
  2. Pimentel, D.; Moran, M.A.; Fast, S.; Weber, G.; Bukantis, R.; Balliett, L.; Boveng, P.; Cleveland, C.; Hindman, S.; Young, M. Biomass energy from crop and forest residues. Science 1981, 212, 1110–1115.
  3. Pimentel, D.; Patzek, T.; Cecil, G. Ethanol production: Energy, economic, and environmental losses. Rev. Environ. Contam. Toxicol. 2007, 189, 25–41.
  4. Smil, V. Biomass Energies; Plenum Press: New York, NY, USA, 1983.
  5. Smil, V. Energy at the Crossroads; The MIT Press: Cambridge, MA, USA, 2003.
  6. Smil, V. Energy: Myths and Realities; The AEI Press: Washington, DC, USA, 2010.
  7. Smil, V. Power Density Primer. 2010. http://www.vaclavsmil.com/wp-content/uploads/docs/smil-article-power-density-primer.pdf
  8. Ulgiati, S. A comprehensive energy and economic assessment of biofuels: When green is not enough. Crit. Rev. Plant Sci. 2001, 20, 71–106.
  9. Pimentel, D.; Patzek, T. Ethanol production using corn, switchgrass, and wood; biodiesel production using soybean and sunflower. Nat. Resour. Res. 2005, 14, 65–76.
  10. Pimentel, D. (Ed.) Biofuels, Solar and Wind as Renewable Energy Systems: Benefits and Risks; Springer: New York, NY, USA, 2008.
  11. Patzek, T. Thermodynamics of agricultural sustainability: The case of US maize agriculture. Crit. Rev. Plant Sci. 2008, 27, 272–293.
  12. Giampietro, M.; Mayumi, K. The Biofuel Delusion: The Fallacy of Large Scale Agro-Biofuels Production; Earthscan: London, UK, 2009.
  13. MacKay, D.J.C. Sustainable Energy—Without the Hot Air; UIT Cambridge Ltd.: Cambridge, UK, 2009. http://www.withouthotair.com/download.html (accessed on 20 December 2014).
  14. MEA (Millennium Ecosystem Assessment). Ecosystems and Human Well-Being: Biodiversity Synthesis; World Resources Institute: Washington, DC, USA, 2005. http://www.millenniumassessment.org/documents/document.354.aspx.pdf
  15. IAASTD (International Assessment of Agricultural Knowledge, Science and Technology for Development). Agriculture at a Crossroads; Synthesis Report; Island Press: Washington, DC, USA, 2009. http://apps.unep.org/publications/pmtdocuments/Agriculture%20at%20a%20crossroads%20-%20Synthesis%20report-2009Agriculture_at_Crossroads_Synthesis_Report.pdf (accessed on 24 November 2014).
  16. WBGU (German Advisory Council on Global Change). Future Bioenergy and Sustainable Land Use; Earthscan: London, UK, 2009. http://www.wbgu.de/fileadmin/templates/dateien/veroeffentlichungen/hauptgutachten/jg2008/wbgu_jg2008_en.pdf
  17. Gomiero, T.; Pimentel, D.; Paoletti, M.G. Is there a need for a more sustainable agriculture? Crit. Rev. Plant Sci. 2011, 30, 6–23.
  18. Fargione, J.; et al. Land clearing and the biofuel carbon debt. Science 2008, 319, 1235–1238.
  19. Searchinger, T.D.; et al. Fixing a critical climate accounting error. Science 2009, 326, 527–528.
  20. Robertson, G.P.; Dale, V.H.; Doering, O.C.; Hamburg, S.P.; Melillo, J.M.; Wander, M.M.; Parton, W.J.; Adler, P.R.; Barney, J.N.; Cruse, R.M.; et al. Sustainable biofuels redux. Science 2008, 322, 49–50.
  21. Tilman, D.G.; Hill, J.M.; Lehman, C. Carbon-negative biofuels from low-input high-diversity grassland biomass. Science 2006, 314, 1598–1600.
  22. Tilman, D.G.; Socolow, R.; Foley, J.A.; Hill, J.; Larson, E.; Lynd, L.; Pacala, S.; Reilly, J.; Searchinger, T.; Somerville, C.; et al. Beneficial biofuels—The food, energy, and environment trilemma. Science 2009, 325, 270–271.
  23. EU (European Union). Environment Committee Backs Switchover to Advanced Biofuels. 2015. http://www.europarl.europa.eu/news/en/news-room/content/20150223IPR24714/html/Environment-Committee-backs-switchover-to-advanced-biofuels (accessed on 25 March 2015).
  24. EIA (Energy Information Administration). U.S. Ethanol Exports in 2014 Reach Highest Level since 2011. http://www.eia.gov/todayinenergy/detail.cfm?id=20532
  25. EIA (Energy Information Administration). U.S. Ethanol Imports from Brazil down in 2013. http://www.eia.gov/todayinenergy/detail.cfm?id=16131 (accessed on 15 March 2015).
  26. Giampietro, M.; Ulgiati, S.; Pimentel, D. Feasibility of large-scale biofuel production: Does an enlargement of scale change the picture? BioScience 1997, 47, 587–600.
  27. Hall, C.A.S.; Lambert, J.G.; Balogh, S.B. EROI of different fuels and the implications for society. Energy Policy 2014, 64, 141–152.
  28. Hall, C.A.S.; Dale, B.E.; Pimentel, D. Seeking to understand the reasons for different Energy Return on Investment (EROI) estimates for biofuels. Sustainability 2011, 3, 2413–2432.
  29. Hall, C.A.S.; Cleveland, C.J.; Kaufmann, R. Energy and Resource Quality; Wiley-Interscience: New York, NY, USA, 1986.
  30. Brody, S. Bioenergetics and Growth; Reinhold: New York, NY, USA, 1945.
  31. World Bank. Energy Use (kg of Oil Equivalent per Capita). 2015. http://data.worldbank.org/indicator/EG.USE.PCAP.KG.OE
  32. Smil, V. Energy in Nature and Society: General Energetics of Complex Systems; The MIT Press: Cambridge, MA, USA, 2008.
  33. Ridley, C.E.; Clark, C.M.; LeDuc, S.D.; Bierwagen, B.G.; Lin, B.B.; Mehl, A.; Tobias, D.A. Biofuels: Network analysis of the literature reveals key environmental and economic unknowns. Environ. Sci. Technol. 2012, 46, 1309–1315.
  34. Borrion, A.L.; McManus, M.C.; Hammond, G.P. Environmental life cycle assessment of lignocellulosic conversion to ethanol: A review. Renew. Sustain. Energy Rev. 2012, 16, 4638–4650.
  35. Searchinger, T.; Edwards, R.; Mulligan, D.; Heimlich, R.; Plevin, R. Do biofuel policies seek to cut emissions by cutting food? Science 2015, 347, 1420–1422.
  36. Shapouri, H.; Duffield, J.; McAloon, A.; Wang, M. The 2001 Net Energy Balance of Corn-Ethanol (Preliminary); U.S. Department of Agriculture: Washington, DC, USA, 2004. http://www.biomassboard.gov/pdfs/net_energy_balanced.pdf
  37. Giampietro, M. Multi-Scale Integrated Analysis of Agro-Ecosystems; CRC Press: Boca Raton, FL, USA, 2004.
  38. Georgescu-Roegen, N. Energy and economic myths. Southern Econ. J. 1975, 41, 347–381.
  39. Georgescu-Roegen, N. The Entropy Law and the Economic Process; Harvard University Press: Cambridge, MA, USA, 1971.
  40. Brown, L.R. Food or Fuel: New Competition for the World's Cropland; Worldwatch Paper 35; Worldwatch Institute: Washington, DC, USA, 1980. http://www.fastonline.org/CD3WD_40/JF/424/19–414.pdf
  41. Lockeretz, W. Crop residues for energy: Comparative costs and benefits for the farmer, the energy facility, and the public. Energy Agric. 1981, 1, 71–89.
  42. Pimentel, D. Ethanol fuels: Energy security, economics and the environment. J. Agric. Environ. Ethics 1991, 4, 1–13.
  43. Youngquist, W. GeoDestinies: The Inevitable Control of Earth Resources over Nations and Individuals; National Book Company: Portland, OR, USA, 1997.
  44. Gerasimchuk, I.; Bridle, R.; Beaton, C.; Charles, C. State of Play on Biofuel Subsidies: Are Policies Ready to Shift?; International Institute for Sustainable Development: Winnipeg, MB, Canada, 2012. http://www.iisd.org/gsi/sites/default/files/bf_stateplay_2012.pdf (accessed on 10 March 2015).
  45. Koplow, D.; Steenblik, R. Subsidies to ethanol in the United States. In Biofuels, Solar and Wind as Renewable Energy Systems: Benefits and Risks; Pimentel, D., Ed.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 79–108.
  46. Valentine, J.; Clifton-Brown, J.; Hastings, A.; Robson, P.; Allison, G.; Smith, P. Food vs. fuel: The use of land for lignocellulosic "next generation" energy crops that minimize competition with primary food production. GCB Bioenergy 2012, 4, 1–19.
  47. BP. Statistical Review of World Energy. 2015. http://www.bp.com/en/global/corporate/about-bp/energy-economics/statistical-review-of-world-energy/2013-in-review.html (accessed on 10 March 2015).
  48. Myers, N.; Kent, J. Perverse Subsidies: How Tax Dollars Can Undercut the Environment and the Economy; Island Press: Washington, DC, USA, 2001.
  49. Peterson, E.W.F. A Billion Dollars a Day: The Economics and Politics of Agricultural Subsidies; Wiley-Blackwell: Hoboken, NJ, USA, 2009.
  50. Von Braun, J. The food crisis isn't over. Nature 2008, 456, 701.
  51. Mitchell, D. A Note on Rising Food Prices; The World Bank Development Prospects Group, July 2008. http://www-wds.worldbank.org/servlet/WDSContentServer/WDSP/IB/2008/07/28/000020439_20080728103002/Rendered/PDF/WP4682.pdf (accessed on 17 July 2014).
  52. FAO (Food and Agriculture Organization). Soaring Food Prices: Facts, Perspectives, Impacts and Actions Required. http://www.fao.org/fileadmin/user_upload/foodclimate/HLCdocs/HLC08-inf-1-E.pdf
  53. International Monetary Fund. Reaping the Benefits of Financial Globalization. 2007. http://www.imf.org/external/np/res/docs/2007/0607.htm
  54. Trostle, R. Global Agricultural Supply and Demand: Factors Contributing to the Recent Increase in Food Commodity Prices. http://www.ers.usda.gov/PUBLICATIONS/WRS0801/WRS0801.PDF
  55. Gallagher, E. The Gallagher Review of the Indirect Effects of Biofuels Production. http://www.renewablefuelsagency.org/_db/_documents/Report_of_the_Gallagher_review.pdf (accessed on 5 March 2015).
  56. UNEP (United Nations Environment Programme). The Environmental Food Crisis: The Environment's Role in Averting Future Food Crises; A UNEP Rapid Response Assessment. http://www.grida.no
  57. Gomiero, T.; Paoletti, M.G.; Pimentel, D. Biofuels: Ethics and concern for the limits of human appropriation of ecosystem services. J. Agric. Environ. Ethics 2010, 23, 403–434.
  58. Alexandratos, N.; Bruinsma, J. World Agriculture towards 2030/2050: The 2012 Revision. http://www.fao.org/docrep/016/ap106e/ap106e.pdf
  59. Bardgett, R.D.; van der Putten, W.H. Belowground biodiversity and ecosystem functioning. Nature 2014, 515, 505–511.
  60. UN (United Nations). World Population Prospects: The 2012 Revision; Population Division, Department of Economic and Social Affairs, United Nations: New York, NY, USA, 2013. http://esa.un.org/wpp/Documentation/pdf/WPP2012_HIGHLIGHTS.pdf
  61. FAO (Food and Agriculture Organization). Global Agriculture towards 2050; High Level Expert Forum—How to Feed the World in 2050; Office of the Director, Agricultural Development Economics Division, FAO: Rome, Italy, 2009. http://www.fao.org/fileadmin/templates/wsfs/docs/Issues_papers/HLEF2050_Global_Agriculture.pdf
  62. FAO (Food and Agriculture Organization). The State of Food and Agriculture 2008. http://www.fao.org/docrep/011/i0100e/i0100e00.htm (accessed on 10 February 2009).
  63. Montgomery, D.R. Soil erosion and agricultural sustainability. PNAS 2007, 104, 13268–13272.
  64. Brown, L.R. Outgrowing the Earth; Earthscan: London, UK, 2005.
  65. Smil, V. Feeding the World: A Challenge for the Twenty-First Century; MIT Press: Cambridge, MA, USA, 2008.
  66. Eide, A. The Right to Food and the Impact of Liquid Biofuels (Agrofuels). http://www.fao.org/docrep/016/ap550e/ap550e.pdf (accessed on 10 February 2009).
  67. ActionAid. Biofuels and Land Grabs. 2015. http://www.actionaid.org/eu/what-wedo/biofuels-and-land-grabs (accessed on 5 March 2015).
  68. Oxfam. Biofuels. 2015. http://www.oxfam.org.uk/media-centre/pressreleases/tag/biofuels
  69. Cotula, L.; Dyer, N.; Vermeulen, S. Fuelling Exclusion? The Biofuels Boom and Poor People's Access to Land; International Institute for Environment and Development: London, UK, 2008. http://pubs.iied.org/pdfs/12551IIED.pdf
  70. Creutzig, F.; Corbera, E.; Bolwig, S.; Hunsberger, C. Integrating place-specific livelihood and equity outcomes into global assessments of bioenergy deployment. Environ. Res. Lett. 2013, 8, doi:10.1088/1748-9326/8/3/035047.
  71. Obidzinski, K.; Andriani, R.; Komarudin, H.; Andrianto, A. Environmental and social impacts of oil palm plantations and their implications for biofuel production in Indonesia. Ecol. Soc. 2012, 17, Article 25.
  72. Oxfam. Biofuelling Poverty—EU Plans Could Be Disastrous for Poor People; Oxfam, 29 October 2007. http://www.oxfam.org/en/node/217 (accessed on 5 March 2015).
  73. FAO; IFAD; WFP. The State of Food Insecurity in the World 2014: Strengthening the Enabling Environment for Food Security and Nutrition. http://www.fao.org/3/a-i4030e.pdf (accessed on 5 March 2015).
  74. Nowak, D.J.; Walton, J.T. Projected Urban Growth (2000–2050) and Its Estimated Impact on the US Forest Resource. http://www.fs.fed.us/ne/newtown_square/publications/other_publishers/OCR/ne_2005_nowak001.pdf (accessed on 10 February 2015).
  75. Brown, L.R. Plan B: Rescuing a Planet under Stress and a Civilization in Trouble. http://www.earth-policy.org/Books/PB3/index.htm (accessed on 5 February 2015).
  76. Bindraban, P.S.; Bulte, E.H.; Conijn, S.G. Can large-scale biofuels production be sustainable by 2020? Agric. Syst. 2009, 101, 197–199.
  77. Crutzen, P.J.; Mosier, A.R.; Smith, K.A.; Winiwarter, W. N2O release from agro-biofuel production negates global warming reduction by replacing fossil fuels. Atmos. Chem. Phys. Discuss. 2007, 7, 11191–11205.
  78. Haberl, H.; Sprinz, D.; Bonazountas, M.; Cocco, P.; Desaubies, Y.; Henze, M.; Hertel, O.; Johnson, R.K.; Kastrup, U.; Laconte, P.; et al. Correcting a fundamental error in greenhouse gas accounting related to bioenergy. Energy Policy 2012, 45, 18–23.
  79. Primack, R.B. A Primer of Conservation Biology, 4th ed.; Sinauer Associates: Sunderland, MA, USA, 2008.
  80. Robertson, G.P.; Gross, K.L.; Hamilton, S.K.; Landis, D.A.; Schmidt, T.M.; Snapp, S.S.; Swinton, S.M. Farming for ecosystem services: An ecological approach to production agriculture. BioScience 2012, 64, 404–415.
  81. Gomiero, T.; Pimentel, D.; Paoletti, M.G. Environmental impact of different agricultural management practices: Conventional vs. organic agriculture. Crit. Rev. Plant Sci. 2011, 30, 95–124.
  82. Cal-IPC. Arundo donax: Distribution and Impacts; California Invasive Plant Council, 2011. http://www.cal-ipc.org/ip/research/arundo/
  83. Chapin, F.S., III; Zavaleta, E.S.; Eviner, V.T.; Naylor, R.L.; Vitousek, P.M.; Reynolds, H.L.; Hooper, D.U.; Lavorel, S.; Sala, O.E.; Hobbie, S.E.; et al. Consequences of changing biodiversity. Nature 2000, 405, 234–242.
  84. GISP. Biofuels Run the Risk of Becoming Invasive Species; The Global Invasive Species Programme, May 2008. http://www.issg.org/pdf/publications/GISP/Resources/BiofuelsReport.pdf (accessed on 5 March 2015).
  85. Smith, A.L.; Klenk, N.; Wood, S.; Hewitt, N.; Henriques, I.; Yana, N.; Bazely, D.R. Second generation biofuels and bioinvasions: An evaluation of invasive risks and policy responses in the United States and Canada. Renew. Sustain. Energy Rev. 2013, 27, 30–42.
  86. Heikkinen, N.; ClimateWire. 49 Plants That Could Make Biofuel Less Troublesome. http://www.scientificamerican.com/article/49-plants-that-could-make-biofuel-less-troublesome/ (accessed on 2 February 2015).
  87. IUCN (International Union for Conservation of Nature). Guidelines on Biofuels and Invasive Species. http://cmsdata.iucn.org/downloads/iucn_guidelines_on_biofuels_and_invasive_species_.pdf (accessed on 2 February 2015).
  88. Searchinger, T.D. Biofuels and the need for additional carbon. Environ. Res. Lett. 2010, 5, 024007.
  89. Babcock, B.A. Measuring Unmeasurable Land-Use Changes from Biofuels. http://www.card.iastate.edu/iowa_ag_review/summer_09/article2.aspx
  90. EPA (Environmental Protection Agency). Emissions from Land Use Change Due to Increased Biofuel Production—Satellite Imagery and Emissions Factor Analysis. http://www.epa.gov/OMS/renewablefuels/rfs2-peer-review-land-use.pdf
  91. Piñeiro, G.; Jobbágy, E.G.; Baker, J.; Murray, B.C.; Jackson, R.B. Set-asides can be better climate investment than corn ethanol. Ecol. Appl. 2009, 19, 277–282.
  92. Sims, R.; Schaeffer, R.; Creutzig, F.; Cruz-Núñez, X.; D'Agosto, M.; Dimitriu, D.; Figueroa Meza, M.J.; Fulton, L.; Kobayashi, S.; Lah, O.; et al. Transport. In Climate Change 2014: Mitigation of Climate Change. Contribution of Working Group III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change; Edenhofer, O., Pichs-Madruga, R., Sokona, Y., Farahani, E., Kadner, S., Seyboth, K., Adler, A., Baum, I., Brunner, S., Eickemeier, P., et al., Eds.; Cambridge University Press: Cambridge, UK; New York, NY, USA, 2014; pp. 599–670.
  93. Wicke, B.; Sikkema, R.; Dornburg, V.; Faaij, A. Exploring land use changes and the role of palm oil production in Indonesia and Malaysia. Land Use Policy 2011, 28, 193–206.
  94. Miyamoto, M.; Parid, M.M.; Aini, Z.N.; Michinaka, T. Proximate and underlying causes of forest cover change in Peninsular Malaysia. For. Policy Econ. 2014, 44, 18–25.
  95. Carlson, K.M.; Curran, L.M.; Asner, G.P.; McDonald Pittman, A.; Trigg, S.N.; Adeney, J.M. Carbon emissions from forest conversion by Kalimantan oil palm plantations. Nat. Clim. Chang. 2013, 3, 283–287.
  96. Carrasco, L.R.; Larrosa, C.; Milner-Gulland, E.J.; Edwards, D.P. A double-edged sword for tropical forests. Science 2014, 346, 38–40.
  97. Laurance, W.F.; Sayer, J.; Cassman, K.G. Agricultural expansion and its impacts on tropical nature. TREE 2014, 29, 107–116.
  98. Wilcove, D.S.; Koh, L.P. Addressing the threats to biodiversity from oil-palm agriculture. Biodivers. Conserv. 2010, 19, 999–1007.
  99. Houghton, R.A. The emissions of carbon from deforestation and degradation in the tropics: Past trends and future potential. Carbon Manag. 2013, 4, 539–546.
  100. Dwivedi, P.; Wang, W.; Hudiburg, T.; Jaiswal, D.; Parton, W.; Long, S.; DeLucia, E.; Khanna, M. Cost of abating greenhouse gas emissions with cellulosic ethanol. Environ. Sci. Technol. 2015, 49, 2512–2522.
  101. Lindstrom, L.J. Effects of residue harvesting on water runoff, soil erosion and nutrient loss. Agric. Ecosyst. Environ. 1986, 16, 103–112.
  102. Smil, V. Crop residues: Agriculture's largest harvest. BioScience 1999, 49, 299–308.
  103. Lal, R. World crop residues production and implications of its use as a biofuel. Environ. Int. 2005, 31, 575–584.
  104. Wilhelm, W.W.; Doran, J.W.; Power, J.F. Corn and soybean yield response to crop residue management under no-tillage production systems. Agron. J. 1986, 78, 184–189.
  105. Wilhelm, W.W.; Johnson, J.M.F.; Karlen, D.L.; Lightle, D.T. Corn stover to sustain soil organic carbon further constrains biomass supply. Agron. J. 2007, 99, 1665–1667.
  106. Rasmussen, P.E.; Goulding, K.W.T.; Brown, J.R.; Grace, P.R.; Janzen, H.H.; Körschens, M. Long-term agroecosystem experiments: Assessing agricultural sustainability and global change. Science 1998, 282, 893–896.
  107. Pimentel, D.; Harvey, C.; Resosudarmo, P.; Sinclair, K.; Kurz, D.; McNair, M.; Crist, S.; Shpritz, L.; Fitton, L.; Saffouri, R.; et al. Environmental and economic costs of soil erosion and conservation benefits. Science 1995, 267, 1117–1123.
  108. Blanco-Canqui, H.; Lal, R.; Post, W.P.; Owens, L.B. Changes in long-term no-till corn growth and yield under different rates of stover mulch. Agron. J. 2006, 98, 1128–1136.
  109. Kenney, I.; Blanco-Canqui, H.; Presley, D.R.; Rice, C.W.; Janssen, K.; Olson, B. Soil and crop response to stover removal from rainfed and irrigated corn. Glob. Chang. Biol. Bioenergy 2014, 7, 219–230.
  110. Linden, D.R.; Clapp, C.E.; Dowdy, R.H. Long-term corn grain and stover yields as a function of tillage and residue removal in east central Minnesota. Soil Tillage Res. 2000, 56, 167–174.
  111. Liska, A.J.; Yang, H.; Milner, M.; Goddard, S.; Blanco-Canqui, H.; Pelton, M.P.; Fang, X.X.; Zhu, H.; Suyker, A.E. Biofuels from crop residue can reduce soil carbon and increase CO2 emissions. Nat. Clim. Chang. 2014, 4, 398–401.
  112. Barber, S.A. Corn residue management and soil organic matter. Agron. J. 1979, 71, 625–627.
  113. Karlen, D.L.; Hunt, P.G.; Campbell, R.B. Crop residue removal effects on corn yield and fertility of a Norfolk sandy loam. Soil Sci. Soc. Am. J. 1984, 48, 868–872.
  114. Lal, R. Soil carbon management and climate change. Carbon Manag. 2014, 4, 439–462.
  115. Lal, R. Carbon sequestration. Philos. Trans. R. Soc. Lond. B Biol. Sci. 2008, 363, 815–830.
  116. Gomiero, T. Alternative land management strategies and their impact on soil conservation. Agriculture 2013, 3, 464–483.
  117. Coleman, D.C.; Crossley, D.A., Jr.; Hendrix, P.F. Fundamentals of Soil Ecology, 2nd ed.; Academic Press: Amsterdam, The Netherlands, 2004.
  118. Lavelle, P.; Spain, A.V. Soil Ecology; Kluwer: Amsterdam, The Netherlands, 2002.
  119. Brussaard, L.; de Ruiter, P.C.; Brown, G.G. Soil biodiversity for agricultural sustainability. Agric. Ecosyst. Environ. 2007, 121, 233–244.
  120. Heemsbergen, D.A.; Berg, M.P.; Loreau, M.; van Hal, J.R.; Faber, J.H.; Verhoef, H.A. Biodiversity effects on soil processes explained by interspecific functional dissimilarity. Science 2004, 306, 1019–1020.
  121. Bardgett, R.; van der Putten, W.H. Belowground biodiversity and ecosystem functioning. Nature 2014, 515, 505–511.
  122. Stephen, J.D.; Mabee, W.E.; Saddler, J.N. Will second-generation ethanol be able to compete with first-generation ethanol? Opportunities for cost reduction. Biofuels Bioprod. Bioref. 2011, 6, 159–176.
  123. Pimentel, D.; Marklein, A.; Toth, M.A.; Karpoff, M.N.; Paul, G.S.; McCormack, R.; Kyriazis, J.; Krueger, T. Food versus biofuels: Environmental and economic costs. Hum. Ecol. 2009, 37, 1–12.
  124. Herrero, M.; Havlík, P.; Valin, H.; Notenbaert, A.; Rufino, M.C.; Thornton, P.K.; Blümmel, M.; Weiss, F.; Grace, D.; Obersteiner, M. Biomass use, production, feed efficiencies, and greenhouse gas emissions from global livestock systems. PNAS 2013, 110, 20888–20893.
  125. Sayre, R. Microalgae: The potential for carbon capture. BioScience 2010, 60, 723–727.
  126. Biello, D. Energy: The false promises of biofuels. Sci. Am. 2011, 305, 59–65.
  127. Pimentel, D. A brief discussion on algae for oil production: Energy issues. In Biofuels, Solar and Wind as Renewable Energy Systems: Benefits and Risks; Pimentel, D., Ed.; Springer: New York, NY, USA, 2008; pp. 499–500.
  128. La Monica, M. Why the Promise of Cheap Fuel from Super Bugs Fell Short. http://www.technologyreview.com/news/524011/why-the-promise-of-cheap-fuel-from-super-bugs-fell-short/ (accessed on 5 February 2015).
  129. Xiao, N.; Chen, Y.; Chen, A.; Feng, L. Enhanced bio-hydrogen production from protein wastewater by altering protein structure and amino acids acidification type. Sci. Rep. 2014, 4, 3992, doi:10.1038/srep03992.
  130. Ghirardi, M.L.; Posewitz, M.C.; Maness, P.C.; Dubini, A.; Yu, J.; Seibert, M. Hydrogenases and hydrogen photoproduction in oxygenic photosynthetic organisms. Annu. Rev. Plant Biol. 2007, 58, 71–91.
  131. Volgusheva, A.; Styring, S.; Mamedov, F. Increased photosystem II stability promotes H2 production in sulfur-deprived Chlamydomonas reinhardtii. PNAS 2013, 110, 7223–7228.
  132. Dubini, A.; Ghirardi, M.L. Engineering photosynthetic organisms for the production of biohydrogen. Photosynth. Res. 2015, 123, 241–253.
  133. Hwang, J.-H.; Kim, H.-C.; Choi, J.-A.; Abou-Shanab, R.A.I.; Dempsey, B.A.; Regan, J.M.; Kim, J.R.; Song, H.; Nam, I.-J.; Kim, S.-N.; et al. Photoautotrophic hydrogen production by eukaryotic microalgae under aerobic conditions. Nat. Commun. 2014, doi:10.1038/ncomms4234.
  134. Blankenship, R.; Tiede, D.; Barber, J.; Brudvig, G.; Fleming, G.; Ghirardi, M.; Gunner, M.; Junge, W.; Kramer, D.; Melis, A.; et al. Comparing photosynthetic and photovoltaic efficiencies and recognizing the potential for improvement. Science 2011, 332, 805–809.
  135. Odum, H.T. Environment, Power, and Society; Wiley: New York, NY, USA, 1971.

 

Posted in Biofuels, EROEI Energy Returned on Energy Invested

Hybrid electric trucks are very different from HEV cars

[ This document explains why it is hard to transfer automotive hybrid technology to trucks. They are entirely different animals: medium-duty trucks weigh up to 10 times more, have up to 10 times the horsepower, and have a far longer life expectancy, so medium-duty truck hybrid technologies need to be 10 times more durable. Hybrid batteries for medium-sized trucks are far behind the batteries developed for autos, which are mass-produced. Trucks are usually custom-built for their specific purpose, and therefore don't have the economies of scale of mass-produced cars.

Hybrid electric trucks are only suitable for medium-duty trucks that stop and start a lot, mainly delivery and garbage trucks. 

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation, 2015, Springer]

NRC. 2008. Review of the 21st Century Truck Partnership. National Research Council, National Academy of Sciences. 131 pages.

Heavy-Duty Hybrid Vehicles

Despite the emerging presence of hybrid electric technology in the passenger car industry (Toyota Prius and Honda Insight/Civic), heavy-hybrid technology for commercial trucks and buses needs significant research and development (R&D) before it will be ready for widespread commercialization.

There is a common perception that investments in passenger car (light-duty [LD] vehicle) technology benefit Heavy-Duty (HD) trucks.

This is not entirely true. LD vehicles (including trucks) fall into Classes 1 and 2a, which contain passenger cars, light trucks (such as the GMC/Chevy 1500 series pickup truck), minivans, and most SUVs. HD trucks are everything else—all vehicles that exceed 8,500 lb GVW, which are Classes 2b–8. This group of vehicles is very diverse and includes tractor-trailers, refuse and dump trucks, package delivery vehicles (e.g., UPS and FedEx), and buses (e.g., city transit, school, shuttle, para-transit/demand response).

Heavy duty trucks are very different from autos:

  • A heavy-duty truck weighs 2–10 times more
  • Heavy-duty trucks have 2 to 10 times the horsepower
  • Burn 3 to 4 times more fuel per mile driven
  • The life expectancy and duty cycles for HD vehicles are about 10 times more demanding than those for light-duty vehicles.

Therefore, heavy-duty hybrid technologies and solutions must be about 10 times as durable as those being developed for light-duty hybrid applications.

HD truck and LD vehicle technologies and corresponding investments in them leverage each other only at the most basic level.

Bringing complex commercial products, such as HD hybrid propulsion systems, to market can cost $500 million to $1 billion per company and take as long as 10 years.

Comparison of Heavy-Duty and Light-Duty Vehicles

                           Heavy duty                  Light duty (trucks & cars)
Weight                     8,500–200,000 lbs           < 8,500 lbs
Peak horsepower            150–600                     70–300
Continuous horsepower      100–600                     25–60
Annual mileage             20,000–250,000              8,000–20,000
Expected lifetime          400,000–1,000,000 miles     150,000 miles
Purchase price             $60,000–$250,000+           $12,000–$40,000
Number of configurations   Millions                    Thousands
Fuel of choice             Diesel                      Gasoline
Fuel consumption           5–15 MPG                    14–40 MPG
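
To see what those ranges mean for fuel demand per vehicle, here is a minimal sketch; the midpoint values are my own illustrative picks from the table, not numbers in the report:

```python
# Annual fuel use implied by the table's mileage and MPG ranges.
hd_miles, hd_mpg = 100_000, 6   # assumed mid-range heavy-duty truck
ld_miles, ld_mpg = 12_000, 25   # assumed mid-range light-duty vehicle

hd_gal = hd_miles / hd_mpg      # ~16,667 gallons/year
ld_gal = ld_miles / ld_mpg      # ~480 gallons/year
print(f"HD: {hd_gal:,.0f} gal/yr  LD: {ld_gal:,.0f} gal/yr  "
      f"ratio {hd_gal / ld_gal:.0f}:1")
```

Under these assumptions a single heavy truck burns on the order of 35 times the fuel of a car, which is why even small percentage gains in truck efficiency matter so much.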

 

Industry/market characteristics that are considered barriers include low truck market volumes, high R&D costs, challenging reliability requirements, minimal technology crossover from cars, and razor-thin margins in the trucking industry.

Component-Specific Barriers

Energy Conversion Technology Barriers

For hybrid electric propulsion systems, most components were not designed or optimized for use in on-road HEVs. Electric components can be costly because precision manufacturing tools are needed to produce the components, and production volumes are low. A new generation of components is needed for commercial and military HEVs. Electric motors, power electronics, electrical safety, regenerative braking, and power-plant control optimization have been identified as the most critical technologies requiring further research to enable the development of higher efficiency hybrid electric propulsion systems. The major barriers associated with these items relate to weight and cost reduction.

The major barriers to introducing hybrid electric drive units for HD trucks include system (life cycle) cost, system reliability, and system durability. Safety concerns and system complexity as they relate to maintenance are also issues. The rigorous duty cycles and demands placed on HD vehicles necessitate a high degree of component reliability. In the lower volume market of heavy hybrid vehicles, cost reduction will be a challenge.

Power electronics. The barriers for introducing improved power electronic systems for truck applications are the cost, complexity, reliability, and the operating environment. Current power electronic converters and motor controllers that meet size and weight requirements are not rugged or reliable enough for 500,000-mile vehicle lifetimes and harsh trucking environments.

Other barriers are thermal management systems for fast, energy-efficient heat removal from device junctions and components, control of the electromagnetic interference generated when the devices are switched, and achieving a low-inductance package for the power inverter. Silicon junctions generally cannot run hot enough for efficient heat removal, so silicon carbide, which tolerates much higher temperatures, is a preferred technology for more efficient heat removal. Packaging power electronics to withstand multiple extreme environments while ensuring reliable operation with proper function is a barrier. (The packages that are available are generally not suitable for vehicle applications.) Additionally, there are no domestic suppliers for high-power switch devices. This must be corrected.

Power Plant and Control System Optimization Barriers. Most components used in today's hybrid vehicles are commercially available. However, they are not optimized for on-road heavy hybrid performance. Electric components can be costly to produce and have low production volumes. Hybrid propulsion components are heavy and bulky. Integrated generator/motors need higher specific power, lower cost, and higher durability.

Safety risks may be higher for prototype HEVs that have not been subjected to rigorous hazard analysis.

Heavy-duty hybrid trucks will have improved fuel economy and potentially significant reductions in emissions. An HEV seeks to recover as much of the braking energy as possible to recharge the battery. If the battery system has insufficient ability to be rapidly charged, the friction brakes will be used and significant energy will be lost to heat.
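
The charge-rate limit is easy to quantify. Here is a minimal sketch of one braking event; all vehicle and battery numbers are illustrative assumptions, not values from the report:

```python
# Why the battery's charge-power limit caps regenerative braking recovery.
mass_kg = 15_000        # assumed loaded medium-duty truck
v0_mps = 15.0           # braking from 15 m/s (~34 mph) to a stop
brake_time_s = 5.0      # duration of the stop
regen_eff = 0.70        # assumed motor/inverter/battery efficiency
max_charge_kW = 60.0    # assumed battery charge-power limit

kinetic_kJ = 0.5 * mass_kg * v0_mps**2 / 1e3       # ~1,688 kJ available
battery_kJ = min(regen_eff * kinetic_kJ,           # what the battery can
                 max_charge_kW * brake_time_s)     # actually absorb
print(f"Kinetic energy: {kinetic_kJ:,.0f} kJ")
print(f"Recovered: {battery_kJ:,.0f} kJ ({battery_kJ / kinetic_kJ:.0%})")
# ~300 kJ recovered (~18%); the rest goes to friction brakes as heat
```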

To be successful, the equipment must have a payback period of less than 2 years and be sufficiently rugged and durable to perform reliably, in bad weather, for the full design life of the truck. So far hybrid trucks have been held back by high costs, the inability to meet the $50/kW cost goal, and the lack of significant progress toward the reliability target of a 15-year design life for the hybrid propulsion powertrain equipment. In addition, the limited energy storage capacity of hydraulic accumulators constrains the usefulness of hybrid-hydraulic technology in heavy-duty trucks primarily to those with significant start-stop duty cycle requirements, such as refuse trucks (Gray).
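
The 2-year payback requirement is demanding, as a minimal sketch shows; all inputs are illustrative assumptions, not values from the report:

```python
# Simple payback test for a hybrid premium on a stop-and-go delivery truck.
hybrid_premium_usd = 30_000       # assumed extra purchase cost
miles_per_year = 25_000
base_mpg, hybrid_mpg = 6.0, 7.5   # assumed 25% fuel-economy improvement
diesel_usd_per_gal = 4.00

gallons_saved = miles_per_year / base_mpg - miles_per_year / hybrid_mpg
payback_years = hybrid_premium_usd / (gallons_saved * diesel_usd_per_gal)
print(f"Fuel saved: {gallons_saved:,.0f} gal/yr  payback: {payback_years:.1f} years")
# ~833 gal/yr saved -> ~9-year payback, far beyond the 2-year threshold
```

Under these assumptions the hybrid premium would have to fall by roughly three-quarters, or fuel prices rise sharply, before the 2-year test is met.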

The ideal electrical energy storage system for heavy-duty hybrid trucks would have the following characteristics:

  1. High Volumetric Energy Density (energy per unit volume)
  2. High Gravimetric Energy Density (energy per unit of weight, Specific Energy)
  3. High Volumetric Power Density (power per unit of volume)
  4. High Gravimetric Power Density (power per unit of weight, Specific Power)
  5. Low purchase cost
  6. Low operating cost
  7. Low recycling cost
  8. Long useful life
  9. Long shelf life
  10. Minimal maintenance
  11. High level of safety in collisions and rollover accidents
  12. High level of safety during charging
  13. Ease of charging method
  14. Minimal charging time
  15. Storable and operable at normal and extreme ambient temperatures
  16. High number of charge-discharge cycles, regardless of the depth of discharge
  17. Minimal environmental concerns during manufacturing, useful life, and recycling or disposal

Unfortunately, every commercially viable battery technology involves trade-offs among these attributes.

[My note: as I explain in Who Killed the Electric Car?, every time a battery is tweaked to improve, say, #1 (volumetric energy density), it could harm one or more of the other 16 parameters. It can take months of testing to find out which, if any, of the other properties were changed.   This is why it takes about 10 years to bring a new battery to market.]

The optimal electrical energy storage system for a given application will highly depend on the weighted values of these attributes as they relate to the specific application.
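As a toy illustration of how those weightings drive the choice, the sketch below scores two hypothetical cell types against a handful of the attributes above. Every score and weight is invented for illustration; a real trade study would use measured data for all 17 attributes.

```python
# Toy weighted-scoring trade study. All scores (0-10, higher is better)
# and weights are invented for illustration.
candidates = {
    "high-energy cell": {"specific_energy": 9, "specific_power": 5,
                         "cycle_life": 6, "cost": 4, "safety": 6},
    "high-power cell":  {"specific_energy": 5, "specific_power": 9,
                         "cycle_life": 8, "cost": 5, "safety": 7},
}

# An EV developer weights range (specific energy) heavily; an HEV
# developer weights power and cycle life instead, as the next
# paragraphs discuss.
weights = {
    "EV":  {"specific_energy": 0.4, "specific_power": 0.1,
            "cycle_life": 0.2, "cost": 0.2, "safety": 0.1},
    "HEV": {"specific_energy": 0.1, "specific_power": 0.4,
            "cycle_life": 0.3, "cost": 0.1, "safety": 0.1},
}

for app, w in weights.items():
    best = max(candidates, key=lambda c: sum(w[a] * candidates[c][a] for a in w))
    print(f"{app}: best fit is the {best}")
# EV: best fit is the high-energy cell
# HEV: best fit is the high-power cell
```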

Battery-only electric vehicles (EVs), hybrid electric vehicles (HEVs), and plug-in HEVs (PHEVs) have distinct requirements. An EV developer might place the highest priority on an energy storage system that has the highest energy density or specific energy, to assure maximum range between charges for a given size of system. The instantaneous power available would likely be less important than mileage or range to the EV developer, but the priorities would be reversed for the HEV developer. Systems with higher energy capacity also tend to have higher available power for acceleration but with more mass than is desired for HEV applications.

The EV developer might also interpret system safety and environmental concerns differently from an HEV developer. Because a battery-only vehicle usually has a much larger battery than an HEV, and because it carries more electrical energy and caustic chemicals on-board, it may carry higher battery-related safety risks than an HEV with a smaller battery. However, the HEV includes an internal combustion engine (ICE) that carries additional safety risks associated with its energy storage system (i.e., gasoline fuel tank) that drivers of conventional ICE-based vehicles have lived with for many years.

Because regenerative braking is a primary method of charging the battery in an HEV, its efficiency is critically important to an HEV’s performance. Because the electric motor is also used extensively to assist the internal combustion engine during acceleration, specific power and power density are important considerations. PHEV battery energy storage requirements can have more in common with either typical EV or HEV requirements, depending on whether the PHEV powertrain design is dominated by the electric motor or by the internal combustion engine.

Battery Technology for heavy-duty applications

Unique challenges exist for the application of energy storage components in heavy-duty hybrid trucks, whether batteries, ultracapacitors, hydraulic accumulators, or flywheels. Light-duty EVs and HEVs focus on energy capacity for long battery range, on rapid charging and discharging capability for acceleration and braking energy recovery, or on a combination of both.

It is currently impractical for heavy-duty vehicles and trucks to carry sufficiently large battery packs or electric power sources (e.g., fuel cells) to provide the required power levels for an all-electric powertrain.

Therefore, vehicle manufacturers and researchers are focusing on hybrid powertrains based on diesel-electric architectures that require batteries with high power capability to assist in vehicle acceleration, rapid charging, and efficient recovery of braking energy.

The charge rate and level of charge acceptance needed to maximize the capture of braking energy in a heavy-duty vehicle are much greater than the comparable requirements for a light-duty vehicle, due to the difference in vehicle mass and inertia. A popular way to reach the higher power capacity required for heavy-duty truck applications is to over-size the battery. For light-duty hybrid vehicles, storage systems with sufficiently high charge rates are available, avoiding the need to over-size the battery. Over-sizing the energy storage system to obtain the necessary power capacity is undesirable in several respects, including the unnecessary expense of additional mass and volume and heightened environmental and safety concerns.

The additional mass of heavy-duty vehicles makes them less practical as battery-only EVs, given the battery size required for reasonable performance with the current state of the art. For this reason, DOE EV efforts address only cars and light-duty trucks.

Battery life is critically important: replacing the energy storage system before the end of the vehicle’s useful life would represent a very significant repair/replacement cost and would increase the recycling challenges. The need to replace the energy storage system once in a vehicle’s life would more than double its effective cost. Therefore, battery lifetimes that match or exceed that of the vehicle may be necessary for owner acceptance in large-volume production.

Capacity Goals. The FreedomCAR energy capacity goal of 300 Wh and power capability goal of 25 kW for 18 seconds may be appropriate for the anticipated battery-only range of a light-duty HEV, but they may fall short of the needs of heavy-duty HEVs unless two or more of the target battery packs are used.
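A quick sizing check shows why two or more packs come into play for heavy-duty use. The FreedomCAR targets are from the text above; the heavy-duty regeneration figures are the assumed values from the braking sketch earlier in this section.

```python
# How many FreedomCAR-target packs would one hard heavy-duty stop need?
# Pack targets are from the text; the regen demand figures are assumed
# (carried over from the earlier braking-energy sketch).
import math

pack_energy_wh = 300   # FreedomCAR energy capacity goal
pack_power_kw = 25     # FreedomCAR power goal (18-second pulse)

regen_power_kw = 240   # assumed average regen power, loaded truck
regen_energy_wh = 670  # assumed energy of one hard stop (~0.67 kWh)

packs_for_power = math.ceil(regen_power_kw / pack_power_kw)     # 10
packs_for_energy = math.ceil(regen_energy_wh / pack_energy_wh)  # 3

print(f"Packs needed for power:  {packs_for_power}")
print(f"Packs needed for energy: {packs_for_energy}")
```

Under these assumptions power, not energy, sets the pack count, which is the same pressure toward over-sizing described two paragraphs up.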

Safety remains a significant issue for Li-ion battery systems. Overcharging, fast charging, fast discharging, crushing, projectile penetration, external heating, or external short-circuiting can cause the battery pack to heat up. If heat generation exceeds heat dissipation capability, thermal runaway can occur. Elevated temperatures can cause leaks, gas venting, smoke, flames, or even “rapid disassembly.” Intelligent monitoring and control of the charging and discharging processes is being developed to manage many of the concerns associated with thermal runaway. However, vehicle collisions and projectiles that can breach the battery case have created the need for new construction materials that are less prone to mechanical and thermal failure.
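The runaway condition described above (heat generation outrunning heat dissipation) can be sketched with a lumped thermal model: roughly exponential, Arrhenius-like self-heating against linear convective cooling. All parameters below are invented for illustration.

```python
# Lumped-model sketch of the thermal-runaway condition: exponential
# self-heating vs. linear cooling. All parameters are illustrative.
import math

T_AMBIENT = 25.0  # coolant/ambient temperature, deg C (assumed)
H_COOLING = 2.0   # effective cooling coefficient, W per deg C (assumed)
Q0, T_REF, SCALE = 0.5, 25.0, 12.0  # assumed self-heating parameters

def heat_generation_w(t_cell: float) -> float:
    """Arrhenius-like exothermic self-heating, W."""
    return Q0 * math.exp((t_cell - T_REF) / SCALE)

def heat_dissipation_w(t_cell: float) -> float:
    """Linear convective cooling, W."""
    return H_COOLING * (t_cell - T_AMBIENT)

for t in range(30, 121, 5):
    gen, dis = heat_generation_w(t), heat_dissipation_w(t)
    if gen > dis:
        print(f"Runaway threshold near {t} deg C "
              f"({gen:.0f} W generated > {dis:.0f} W removed)")
        break
```

With these numbers the crossover lands near 95 deg C; better cooling (a larger H_COOLING) pushes the threshold up, which is the point of the thermal-management work mentioned earlier.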

It is clear that the capabilities needed for heavy-duty use may differ significantly from light-duty applications. It is important to recognize that heavy-duty trucks experience a much wider range of driving cycles than passenger vehicles or light-duty trucks. For example, a Class 6 urban delivery van experiences typical driving cycles that are much different from those of Class 8 long-haul commercial trucks. Because the large numbers of accelerations and braking decelerations in applications such as delivery vans or refuse trucks are well-suited to demonstrating the advantages of hybridization, most of the 21CTP-funded development of hybrid trucks has focused on these applications.

Adding batteries will make trucks heavier. A fully loaded tractor-trailer combination can weigh up to 80,000 pounds. Reduction in overall vehicle weight could enable an increase in freight delivered on a ton-mile basis. Practically, this enables more freight to be delivered per truck and improves freight transportation efficiency. In certain applications (e.g., bulk cargo carriers), heavy trucks are weight-limited, and reduced tractor and trailer weight allows direct increases in the quantity of material that can be carried. New vehicle systems, such as hybrid power trains, fuel cells and auxiliary power, will present complex packaging and weight issues that will further increase the need for reductions in the weight of the body, chassis, and power train components in order to maintain vehicle functionality. Material and manufacturing technologies can also play a significant role in vehicle safety, by reducing vehicle weight and by improving the performance of vehicle passive and active safety systems. Finally, materials and manufacturing technologies that increase the durability and life of commercial vehicles reduce life-cycle costs.

Making trucks lighter

The principal barriers to reducing the weight of heavy vehicles are the cost of lightweight materials, the difficulties in forming and manufacturing lightweight materials and structures, the cost of tooling for relatively low-volume vehicles (compared to automotive production volumes), and ultimately the extreme durability requirements of heavy vehicles. While light-duty vehicles may have a life-span requirement of several hundred thousand miles, typical heavy-duty commercial vehicles must last over 1 million miles with minimal maintenance, and often are used in secondary applications for many more years. This requires high-strength, lightweight materials that resist fatigue and corrosion and can be economically repaired. Because of the limited production volumes and the high levels of customization in the heavy-duty market, tooling and manufacturing technologies used by the automotive industry are often uneconomical for heavy vehicle manufacturers. Lightweight materials such as aluminum, titanium and carbon fiber composites offer significant weight reductions, but their material cost and difficult forming and manufacturing requirements make it hard for them to compete with low-cost steels.

Vehicle Corrosion. Many lightweight materials and lightweighting approaches cannot be used in commercial vehicles because of significant corrosion and maintenance issues. Corrosion is a significant contributor to the cost of maintaining heavy vehicles. Research is needed to develop materials that resist both general and galvanic corrosion. Low-cost, durable coatings are needed.

Accidents involving large trucks and buses create significant delays on our highways, particularly in congested areas. During these delays, fuel consumption rises from travel at low speeds and from idling in traffic, with a corresponding increase in tailpipe emissions. Some accidents involve vehicles carrying hazardous materials, creating an even more dangerous situation and, in certain cases, potential national security issues. Accidents also contribute to costs associated with lost work time by commuters. Indeed, highway congestion, even in the absence of an accident, is a serious problem in the United States and in many large cities around the world. The Texas Transportation Institute (TTI) tracks congestion data for the 85 largest cities in the United States (http://tti.tamu.edu/). According to TTI, in 2003 travel delay in those 85 cities combined totaled about 3.7 billion hours, with an associated excess fuel consumption of 2.258 billion gallons (Texas Transportation Institute, 2007, Table 2). Elements contributing to congestion include heavy traffic, highway construction and repair, and roadway incidents including accidents.


The dark side of cruise ships. Garbage. Sewage. And more.

[ I detest cruise ships which destroy the ambiance of small towns in Alaska and everywhere else they go.  While cruise ships disgorged their sewage and garbage in Alaska’s Skagway harbor, passengers disgorged into shops owned by the cruise ships rather than voyage into the spectacular scenery they’d supposedly come to see.  Then I read Elizabeth Becker “Overbooked: The Exploding Business of Travel and Tourism”, and found out cruise ships were even worse than I thought.  Below are excerpts from this book, which I very much recommend, and she covers many other interesting issues with the tourism industry as well. 

But wait!  It gets worse.  There’s rape, and crime, and death, see my summary of a U.S. Senate hearing for details.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation, 2015, Springer]

The small head tax paid by cruise ships doesn’t begin to cover the damage that cruise passengers cause during their short stay. “Goff’s Caye in Belize has really been trampled now. Locals avoid it. Most of the wildlife has fled, but that is fine because these cruise tourists are less sensitive to protecting the habitat of birds or monkeys, or protecting coral reef. If there is garbage strewn everywhere, you know the tourists came from cruise ships.” Many conservationists believe keeping cruise ship passengers restricted to a few areas (“sacrifice zones”) could prevent irreversible damage, because their numbers are getting out of hand. Excursions of several hundred people descending on a wildlife preserve for a few hours can be disastrous. “Our parks weren’t made for that kind of an invasion and the guides can’t control the tourists.”

Against this bleak picture, cruise ships could seem insignificant; there are some four hundred cruise ships, compared to a global fleet of tens of thousands of commercial vessels. And these few cruise ships sail across vast oceans. Yet their contribution to the fouling of the seas is considerable. While the oceans are large, these cruise ships stick to a standard path whether in the Caribbean or the Baltic, disposing of their considerable waste at roughly the same stretch at the same time, year in and year out. According to the Environmental Protection Agency, in the course of one day the average cruise ship produces: 21,000 gallons of human sewage, one ton of solid waste garbage, 170,000 gallons of wastewater from showers, sinks and laundry, 6,400 gallons of oily bilge water from the massive engines, 25 pounds of batteries, fluorescent lights, medical wastes and expired chemicals, and 8,500 plastic bottles.

Multiply this by those 400 ships cruising year-round and you have a sense of the magnitude of the problem. But there are no accurate studies of how well that waste is disposed of because the ships are not required to follow any state or national laws once in international waters.
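Taking the EPA per-ship figures above at face value and multiplying them across the fleet, as the text suggests, gives a rough upper bound on annual totals (it assumes all 400 ships operate every day of the year):

```python
# Crude scale estimate: the EPA per-ship daily figures quoted above,
# multiplied across 400 ships and 365 days. An upper bound, since not
# every ship operates every day.
DAILY_PER_SHIP = {
    "human sewage (gallons)":     21_000,
    "solid waste (tons)":         1,
    "gray water (gallons)":       170_000,
    "oily bilge water (gallons)": 6_400,
    "hazardous waste (pounds)":   25,
    "plastic bottles":            8_500,
}
SHIPS, DAYS = 400, 365

for item, daily in DAILY_PER_SHIP.items():
    print(f"{item}: {daily * SHIPS * DAYS:,} per year")
# human sewage: 3,066,000,000 gallons; gray water: 24,820,000,000 gallons
```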

Cruise lines are not required to monitor or report what they release. As a result, neither the government nor the public knows how much pollution is released at sea.

Cruise ships were largely ignored, considered harmless for carrying tourists rather than oil. The awakening came in Alaska ten years after the Exxon Valdez spill. The guilty party was Royal Caribbean. Their cruise ships, which sailed through some of Alaska’s most sensitive harbors and coastal waterways, including the Inside Passage, were caught illegally dumping bilge water containing waste oil and hazardous chemicals. The bilge water routinely dumped by the cruise ships was sufficiently toxic that the U.S. Clean Water Act forbids its discharge within 200 miles of the coast because it endangers fish and wildlife and the habitat they depend on.

The government made new efforts to police the industry. In 2003 the Carnival Corporation pled guilty to illegally discharging oily waste from its ships, paying a $9 million fine and agreeing to pay another $9 million to environmental projects. Environmentalists lobbied for new legislation in Congress and in state legislatures to strictly regulate the discharge of ships’ refuse and to monitor that discharge regularly. The cruise line industry pushed back, saying it was making voluntary improvements to its waste disposal systems.

After the Royal Caribbean convictions a group of Alaskans lobbied their state government to hold cruise lines responsible for the damage they caused. Juneau, the state capital, imposed a $5-per-passenger head tax to cover costs of cleaning up after those cruise tourists. Governor Tony Knowles convened a state panel in 2000 to monitor the waste produced by cruise ships during that summer season. One of the members appointed was Gershon Cohen, a scientist and environmentalist who lives in the small port town of Haines, which limits the number of ships allowed to dock there. As Cohen describes it, the panel tested cruise ship waste for evidence of hazardous material. What they found, instead, was untreated human sewage. “That shocked the hell out of us,” he said. “We found the cruise ships were floating poop producers.” The raw sewage came from inadequate “marine sanitation devices” that were designed to treat the refuse from a few dozen people but were installed on ships to treat the waste from thousands. Cohen said the samples testing fecal coliform bacteria from the ships’ human sewage were unbelievable: “One ship tested out at nine million fecal coliform bacteria counts per sample. Another tested at fourteen million, another at twenty-four million. These samples to be healthy are supposed to be at 200 or less.” Those pollutants from human sewage were threatening Alaska’s marine life, its fish, coral reefs, oyster beds, and sea mammals.

Since the Alaskan economy depends mightily on fishing, recreation and other land-based tourism, those findings alarmed the state leaders. The Alaskan legislature passed laws requiring that cruise ships routinely be tested to meet the state’s clean-air and -water standards and levying a $1 tax on each passenger to pay for the program. In Congress, Senator Frank Murkowski, Republican from Alaska, won passage of a law to allow Alaska to set standards and regulate “black water” waste that contains human sewage. No other state had these laws. The cruise industry pushed back again, and convinced Senator Murkowski to win approval from the Secretary of Interior to nearly double the number of cruise ships allowed in Glacier Bay National Park during the high summer season, over the strong objections of park officials. Then Gershon Cohen and an Alaska attorney won a petition drive that placed an initiative on the 2006 ballot requiring cruise ships to apply for official waste permits with strict limits on sewage disposal. The initiative also created an ocean rangers program of marine engineers who would ride cruise ships to monitor the discharge, underwritten by a new $50 passenger head tax. Despite predictions to the contrary, the voter initiative passed. Some of the requirements were later eased by Sean Parnell, the new governor, including cutting the passenger head tax in half in order to head off a lawsuit filed against the tax by Carnival and Royal Caribbean.

Maine joined Alaska in passing state laws curbing cruise ship pollution. California, with its long, varied coastline and strong environmental movement, has passed the strictest rules against any waste discharge by cruise ships. The laws were sparked in part by a Crystal Cruises ship that dumped 36,000 gallons of gray water and sewage in Monterey Bay. The cruise line was able to claim, rightly, that it hadn’t broken any rules, so the town banned the Crystal Cruises ship from the bay in 2005. The California state legislature then passed a law forbidding the discharge of any waste whatsoever (treated or untreated, black water or gray water, sewage or garbage) into California’s coastal waters by cruise ships or other large vessels. The federal government, through the EPA, endorsed the law in 2010, giving the Coast Guard authority to enforce it.

The industry has forcefully opposed the Clean Cruise Ship Act, sponsored by Senator Richard Durbin, Democrat of Illinois, which would bring sewage and gray water discharges under the Clean Water Act. The legislation would also require cruise ships to use advanced treatment systems and to sail beyond the current 12-mile limit before discharging treated sewage.

The U.S. Coast Guard is charged with enforcing existing laws and standards in American waters, but it has done a lackluster job, largely because inspecting sewage from cruise ships is close to the bottom of its to-do list. Since the 9/11 attacks, when the Coast Guard was absorbed into the new Homeland Security Department, its mission has been insistently focused on “antiterrorism.” In theory, complaints about ships’ discharge in international waters are investigated by flag states like Liberia, Panama and the Bahamas, but they rarely follow up. The Congressional Research Service study of cruise ship pollution rated overall enforcement as “poor.” That leaves the industry …

In some countries, cruise ships pose such an immediate danger that they are under tight restriction. Antarctica banned large cruise ships outright beginning in 2011. The cruise ships’ heavy fuel oils were causing serious air pollution and, when spilled in accidents, irreparable damage. In 2007 the cruise ship Explorer capsized in an ice field, dumping 50,000 gallons of marine diesel fuel, 6,300 gallons of lubricant and 260 gallons of gasoline into the ocean, where it rests at a depth of 5,000 feet.

—————————–

Shopping seminars were the only lectures offered on the cruise: nothing on Mexico or Belize, our two ports of call. At the seminar, Wesley and Victoria, the shopping gurus, told us there were world-class bargains in the ports and passed out a free map listing reliable stores that, they said, had qualified to insert paid advertisements in the brochure. Cozumel was especially strong on diamonds, Wesley said, even though diamonds are not mined, polished or set in Mexico.

“Stick to the stores on the map,” said Wesley. “If you’re silly enough to buy something from a store not on the map, then my hands are tied when something goes wrong.”

Kathy Kaufmann, a professional dancer in New York and a friend, told me what it was like to be a member of a dance troupe aboard a cruise ship. She described it as something you do when you’re young, “a little like backpacking through Europe.” The work is demanding; the pay adequate. During a cruise on a Holland America ship, she danced in the two productions each night and then rehearsed from midnight to five in the morning, when the stages were empty. The artists slept through the day in cell-sized rooms well below sea level, “which is a little depressing but great for sleeping since there are no windows.” After a year, she said, “I couldn’t do that again.”

The next morning we docked at the island of Cozumel. We were part of a mini-armada of eight cruise ships that arrived the same day, each adorned with trademark funnels (Mickey ears on the Disney liner, a splayed red tail for Carnival) and each carrying at least 2,000 people. That meant at least 16,000 people were all getting off at the same time for an afternoon of fun. A Native Mayan in a feathered headdress, his face and body painted in beguiling swirls, greeted us at the gangplank. Bill pulled out his camera to snap a photo of me with the Mayan when a ship photographer blocked him. “We paid for the Indian to be here, only we can take his picture,” he said. “Ship rates at a hundred dollars for the first four copies; ten dollars a copy afterwards.” “But we’re passengers on this ship,” said Bill, wondering why we had been put in the camp of “us versus them.”

Fifty years ago the French oceanographer Jacques Cousteau visited the then unknown and sparsely inhabited island of Cozumel and declared its clear waters among the best for scuba diving. Today, every year, 1 million cruise passengers visit the thirty-mile-long island, population 100,000, looking for a few hours of sightseeing and shopping in the now densely commercial strip of San Miguel, where, again, a Diamonds International store dominated.

We saw cruise passengers on excursions arranged by the ship, snorkeling near the shore, or swimming with dolphins. (Forty minutes for $122.) We walked past a thatched-roof al fresco bar where …

Walking back onto the ship, we went through a security check where the guards were largely concerned about hidden alcohol. No, Bill and I said, we did not buy any liquor. One of the most stringently applied policies of Royal Caribbean is the ban on bringing any beer, wine, or spirits on board. If passengers had purchased a bottle of tequila in Cozumel, they had to hand it over to security where it would be “sequestered” until the cruise was over and the ship docked in Miami. The only alcohol passengers were allowed to drink had to be purchased at the ship’s bars or restaurants. The penalty for disobeying this policy is severe. In our rules book Royal Caribbean states that guests concealing alcohol “may be disembarked or not allowed to board, at their own expense, in accordance with our Guest Conduct Policy.” Those drinks tabs added up. My husband and I were not reveling into the late hours, but our wine at dinner and occasional cocktails over five nights ran to several hundred dollars. When your key card is also your credit card, it is easy to lose track of what you’re spending.

We passengers were the ultimate captive audience, spending our time and money on that one ship for five days, watching our bargain vacation quickly spiral into a more expensive getaway. Temptation was everywhere. The Portofino Italian Restaurant and the Chops Grille required a surcharge of $25 a person. Massages cost as much as $238.

With five thousand tourists landing in Belize on that day, we had expected business to be booming all over the city. But the tourists were off on excursions or were shopping at the pier, following the warning that anything at local stores not approved by the ship would be sketchy.

After five hours we were back on the ship, attending the “Grand Finale Champagne Art Preview” at the Ixtapa Lounge, a warm-up for an auction offering pieces by Pablo Picasso, Salvador Dalí and Henri Matisse. Derek, the auctioneer, taught us how to bid with a paddle and quizzed us on our general art knowledge. He represented Park West Gallery, headquartered in Southfield, Michigan, which advertised itself as one of the biggest art galleries in the world. The next day, at the actual auction, the first artworks up for bid were serigraphs and hand-embellished graphic works by lesser-known artists. Those were followed by more pieces by artists we had never heard of. Puzzled by the selection, we left before it was over. Back in our cabin, Bill calculated tips for the waiters and housekeeper. Royal Caribbean made it clear that passengers were expected to pay tips or gratuities to “thank those who have made your cruise vacation better than you could have imagined” and had left envelopes in our room with forms listing the rates we were expected to pay: $5.75 a day per person to our housekeeper, $3.50 a day per person to our dining room waiter, $2.00 a day per person to the assistant waiter and $0.75 a day per person to the headwaiter, or maître d’hôtel.

Now I had a sense of the appeal of cruises. It is effortless travel aboard these ships, taking all of the risk out of foreign travel. Once you buy that single ticket, you don’t have to lift a finger again. No planning, no moving from one hotel to another, no navigating buses or taxis to find a café that proves to be a disappointment. The excursions on land are tightly programmed, requiring no understanding of foreign languages or cultures. You unpack your suitcase once, sleep in the same bed, and read an activities bulletin each morning to decide whether you want to enter the “Men’s Sexy Legs Competition,” attend a complimentary slot machine lesson or take a merengue dance lesson for “fun fitness,” which were all offerings on our second day at sea. It is the ultimate package tour. How the cruises made their profit was less obvious: onboard sales of everything from photographs to Internet service to yoga classes were the cash cow. But a lot didn’t add up: these are American cruise line companies, but we didn’t meet any American employees. And the wages paid were definitely below the American minimum.

Behind the carefree holiday of a cruise (the dancing waiters, the constant shows and events, the spreads of great food and the escape from daily drudgery) is a serious industry that has changed what people expect out of a vacation. It was built by several entrepreneurs who took advantage of changes in American lifestyles, married the design of a resort with the rhythm of a theme park, put it on a boat and won sweet deals through giant loopholes in American laws.

Carnival’s rendition of its founder Ted Arison’s story diverges dramatically from the real man. Arison was not born poor; his well-to-do family has a long history in maritime shipping. He invested millions in Carnival Cruise Lines over several years before turning a profit, leaning on wealthy friends to come up with the money. But he was bold and brash and imaginative as he redefined what it meant to take a cruise, filling ever-larger ships with thousands of fun-seeking passengers and giving them nonstop entertainment sailing the seas. He decided port visits should be almost incidental, offering a few hours on foreign soil before returning to the real pleasure of eating, drinking and playing on board.

His cruise line made the Port of Miami the cruise capital of the world. And yet he accomplished these seismic changes without having to follow American laws and regulations that govern everything from pollution to minimum wages. As a business model, the cruise industry has been phenomenal, a $40 billion industry in the United States alone, and the fastest-growing segment of the global tourist industry. Cruises are the future. But cutting corners and avoiding laws have had serious downsides. Cruise ships are not subject to the requirement for federal permits covering sewer and waste disposal systems that are de rigueur for the resorts and hotels on land. As a result, all of those millions of passengers and crew members dining and defecating and showering on the oceans have left filthy discharges in their wake. On land, the cruise crowds streaming into foreign ports by the thousands have disfigured beaches and plazas, building resentment among many locals. Cozumel isn’t the only port that has taken on the life of a strip mall. St. Mark’s Square in Venice is now a field of kiosks selling cheap imports and lines of tourists waiting to visit the basilica.

But having fun on a ship sailing in the middle of the ocean requires prosaic essentials: re-creating all of the systems hotels on land take for granted, as well as the underpinnings of the ship itself: the navigation system, engines, power plant, water filtration and purification plants, sewage plants, photography plants, laundry and dry-cleaning facilities, kitchen galleys, a morgue, and storage lockers for the 100,000 pounds of food required to feed 3,000 people every day on a cruise. Also hidden from view are the below-sea-level accommodations for the 1,200 crew members. These fun ships grew ever larger to incorporate all the services necessary to run a miniature town, becoming megaships with space for elaborate playthings like skating rinks and climbing walls.

As the American lines expanded to the United Kingdom and the rest of Europe and then into Asia, the annual passenger load tripled from 500,000 in 1970 to 1.5 million in 1980, then grew to 4 million in 1990 and over 13 million in 2010.

“Onboard spending is becoming more profitable than ticket sales. On average, each passenger provides forty-three dollars in profits each day to the big cruise companies,” he said. “If you include all the onboard spending, it is now less expensive to stay in an upscale Caribbean resort than to sail there on a cruise ship.” That onboard revenue translates, roughly, into at least 24 percent of all cruise revenue. Along the U.S.-Caribbean routes it can jump to more than 30 percent, according to UNWTO. Booze is one of the biggest earners. In the earliest days of modern …

Gambling brings in nearly as much profit. Spas, Internet fees, extra costs for fancier restaurants, fees for sports and exercise classes, photographs and a DVD of the cruise (that DVD can earn at least $100,000 in revenue on a short cruise) and souvenirs all bring in money. As Mr. Dickinson wrote, “Everyone has already prepaid for their ticket and the only variable left that will determine the overall revenue (and ultimately the overall profitability) of a voyage is how much is spent on board.” “The truth is that selling goes on all of the time all over the ship,” he said, “and it makes all the difference in the world when it comes to the bottom line.” Convincing passengers to spend is part of the theater of a cruise, conjuring up “the vacation of a lifetime” with unique flashy shows. This is especially true for the third big profit center for cruises: art sales. Even though art auctions are relative newcomers to cruises, begun in the mid-1990s, they are now big business and a serious source of money.

Enthusiasm is whipped up with flyers tucked into cabins, by announcements on the in-house television channel and by lectures on how to buy art. At the auction we attended, the salesman spoke convincingly about the quality of the paintings and artworks, the high reputation of the artists, and the long-term investment value of the pieces. And everything, he said, was guaranteed with appraisals of the fair market price and a generous return policy. But over the years hundreds of customers have complained that those guarantees are sketchy. And they have tried to bring legal cases against Park West, but the gallery argued that since the sales were made in international waters, the gallery was outside the jurisdiction of the American legal system. The customers felt cheated and started writing letters to their members of Congress and their hometown newspapers. The narrative of the complaints was always the same: back home the customers discover that the art they purchased was worth far less than they had paid and, at times, wasn’t even authentic. But when they complained, the return policy evaporated and Park West refused to refund the purchase.

On the cruise, this emphasis on buying diamonds felt out of sync with the idea of a carefree vacation. The pep-rally shopping lectures in the ship’s huge auditorium added to the sense of a trip aboard a shopping mall: destination nowhere. For all those reasons, cruise ships are the face of modern mass tourism. The industry has turned travel into a shopping spree. Airports have resembled shopping malls for several decades. The most glorious cathedrals and monuments are surrounded by high-end luxury stores with the same brands for sale whether in Europe, the United States or Asia.

There is one more profit center for onboard revenue that seems counterintuitive. Cruise lines make significant money from what the passengers do when they leave the ship and go ashore on those excursion trips. Cruise lines essentially apply the same system to excursion trips as they do to diamonds and artwork. The ship sells the excursions onboard, offering guarantees and then warning against taking competing excursions. Then the ship takes a nice cut from every excursion sold. On average, the cruise lines collect a commission or fee from the local tour agency as great as 50 percent of the price of the tour. In one year, Royal Caribbean earned a third of its profits from selling shore excursions.

This antipathy derives not only from the frustration of seeing your city overrun on a regular basis but also from knowing that there is little profit in welcoming the crowds. In Venice the city spends more to cover the services used by the ships (water, electricity, cleaning) and their passengers than it receives in the taxes paid per passenger to the port. (Given Italy’s murky political system, it is impossible to find out what that per-passenger fee is and whether it goes into the city treasury.) In a study with the Center on Ecotourism and Sustainable Development, the Belize tourism board found that cruise ship passengers spent an average of $44 on land, not the $100 average cited by the cruise industry. The tourism board commissioned the study out of disappointment that passengers on the cruise ships were not helping the economy as promised. The study warned of an “inherent tension between the objectives of the cruise industry and those of Belize.” By contrast, tourists who came by land to Belize spent at least $96 a day and $653 per visit. Costa Rica’s figure for cruise passengers was similar to that of Belize, with an average of $44.90. In Europe, an impartial study found passenger spending in Croatia averaged about $60 in 2007.

The industry also has made friends in the nonprofit media, think tanks and organizations by offering them fundraisers aboard cruise ships at special prices. These cruises offer fans and donors the chance to spend up to a week listening to their favorite personalities. At the same time, the price they pay for the cruise fills the organization’s coffers with tens of thousands of dollars. Nonprofits from across the political spectrum take advantage of these offers. Media stars like Diane Rehm of National Public Radio, Katrina vanden Heuvel of The Nation magazine and Gwen Ifill of public television’s News Hour, have hosted cruises to the Caribbean and Europe for their organizations. More than one critic has asked if it wasn’t hypocritical for organizations to blithely make hundreds of thousands of dollars on cruise ships that pay poor wages and routinely dump pollutants, the exact practices they deplore.

 


Electric truck range is less in cold weather

[ What follows are excerpts from Calstart’s study of the effects of cold weather on lithium-ion and Sodium Nickel Chloride e-truck batteries.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation, 2015, Springer]

CALSTART.  June 2013. E-truck performance in cold weather. Calstart.

Several different battery chemistries are used in transportation applications: Lead-Acid, Nickel Metal Hydride, Lithium-Ion, and Sodium Nickel Chloride. Table 1 shows the e-trucks studied in this document and their battery chemistries.

[Table 1: e-truck makers and battery types (image not reproduced)]

SUMMARY OF DRIVING RANGE EXPECTATIONS

Figure 8 shows the combined impact of cold temperatures and cabin heating on an E-Truck equipped with a cabin heater with a power draw of 5 kW and an advertised range of 100 miles.

Figure 8: Impact of cold temperatures and cabin heating on the driving range of an E-Truck (image not reproduced)

From the 80-mile maximum usable range, we can then see the combined impact of cold temperatures and cabin heating on driving range (a small model reproducing these figures follows the list):

  • With ambient temperature throughout the day at 32°F (0°C), the E-Truck maximum usable range would decrease to 70 miles with no cabin heating and 50 miles with four hours of cabin heating at 5 kW power draw.
  • With ambient temperature throughout the day at 14°F (-10°C), the E-Truck maximum usable range would decrease to 60 miles with no cabin heating and 40 miles with four hours of cabin heating at 5 kW power draw.
  • With ambient temperature throughout the day at -4°F (-20°C), the E-Truck maximum usable range would decrease to 40 miles with no cabin heating and 20 miles with four hours of cabin heating at 5 kW power draw.
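Those figures follow from a simple model: derate the 80-mile usable range by the Li-ion capacity factor for the day’s temperature (given in the Lithium-Ion section below), then subtract the miles “spent” on cabin heating. The sketch below reproduces the report’s numbers; the 1 kWh/mile effective consumption and the rounding down to the nearest 10 miles are my assumptions, chosen so the arithmetic matches.

```python
# Sketch reproducing the report's cold-weather range figures.
# Assumptions (mine, chosen to match the published numbers):
#   - effective consumption of 1 kWh per mile
#   - cold range rounded down to the nearest 10 miles

USABLE_RANGE_MI = 80   # 100-mile advertised range minus a 20% fleet buffer
KWH_PER_MILE = 1.0     # assumed effective consumption

# Relative Li-ion capacity vs. the 77 F baseline (from the report).
CAPACITY_FACTOR = {32: 0.90, 14: 0.80, -4: 0.60}  # keyed by deg F

def usable_range_mi(temp_f: int, heater_kw: float = 0.0,
                    heater_hours: float = 0.0) -> int:
    cold_range = USABLE_RANGE_MI * CAPACITY_FACTOR[temp_f]
    cold_range = 10 * int(cold_range // 10)       # round down to 10 mi
    heater_miles = heater_kw * heater_hours / KWH_PER_MILE
    return int(cold_range - heater_miles)

for t in (32, 14, -4):
    print(f"{t:>3} F: {usable_range_mi(t)} mi unheated, "
          f"{usable_range_mi(t, 5, 4)} mi with 4 h of 5 kW heat")
# 32 F: 70 / 50 mi   14 F: 60 / 40 mi   -4 F: 40 / 20 mi
```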

Lithium-Ion

Both high and low temperatures impact Lithium-Ion battery performance. At high temperatures, side reactions happen faster, leading to faster battery degradation.

At cold temperatures, battery performance (power and energy) is lower due to poor ion transport. This leads to poor vehicle acceleration, limited capability to recover braking energy and lower driving range than experienced in warmer temperatures.

Each battery chemistry has its own unique performance degradation curve. Lithium-Ion batteries generally see their performance decrease gradually when ambient temperature drops from 80°F (27°C) to 32°F (0°C). However, performance falls off sharply when ambient temperature drops below 32°F (0°C).

  • At 32°F (0°C), relative capacity is about 90% of the capacity at the testing temperature of 77°F (25°C).
  • At 14°F (-10°C), relative capacity is about 80% of the capacity at the testing temperature of 77°F (25°C).
  • At -4°F (-20°C), relative capacity is about 60% of the capacity at the testing temperature of 77°F (25°C).

In addition, Lithium-Ion battery charging is much more challenging at cold temperatures as battery degradation is accelerated and the probability of a catastrophic failure is increased. As a result, Lithium-Ion batteries are generally inhibited from charging below 32°F (0°C).
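In practice that inhibition is a battery-management-system rule: block or throttle charge current (including regenerative braking) when the cells are cold. A minimal sketch of such a gate follows; the 0°C cutoff matches the text, but the 10°C taper and the current values are illustrative assumptions.

```python
# Minimal sketch of a BMS charge-inhibit rule for cold Li-ion cells.
# The 0 C cutoff is from the text; the taper and currents are assumed.

def max_charge_current_a(cell_temp_c: float, rated_a: float) -> float:
    """Allowed charge current as a function of cell temperature."""
    if cell_temp_c <= 0.0:
        return 0.0                           # no charging below freezing
    if cell_temp_c < 10.0:
        return rated_a * cell_temp_c / 10.0  # linear taper up to 10 C
    return rated_a

for t in (-5, 0, 5, 25):
    print(f"{t:>3} C -> {max_charge_current_a(t, 200):.0f} A allowed")
# -5 C -> 0 A   0 C -> 0 A   5 C -> 100 A   25 C -> 200 A
```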

Figure 5: Impact of Lithium-Ion battery operating temperature on the driving range of an E-Truck with no cabin heating (image not reproduced)

When fleets deploy E-Trucks, they generally include a “buffer” below the advertised maximum range to limit “range anxiety”. This buffer is in addition to the OEM-programmed battery buffer needed to preserve battery life. While every fleet may choose a different buffer, we chose a reasonable 20%, which decreases the 100-mile advertised maximum range to an 80-mile maximum usable range at an ambient temperature of 77°F (25°C). From this 80-mile maximum usable range, we can then see the impact of cold temperatures on driving range:

  • With ambient temperature throughout the day at 32°F (0°C), the E-Truck maximum usable range would decrease to 70 miles.
  • With ambient temperature throughout the day at 14°F (-10°C), the E-Truck maximum usable range would decrease to 60 miles.
  • With ambient temperature throughout the day at -4°F (-20°C), the E-Truck maximum usable range would decrease to 40 miles.


Lithium-Ion battery performance is affected by cold temperatures. The extent of the performance degradation will depend on various factors:

  • Starting temperature (at which temperature are the batteries when the E- Truck starts its day),
  • Drive cycle (do the batteries have time to cool down when the vehicle is stopped on a delivery),
  • Outside temperature (what is the ambient temperature the batteries are exposed to).

We estimate that Lithium-Ion batteries used in current E-Trucks could lose 10 to 20% state of charge in typical Chicago winter weather (from 14°F to 32°F) and up to 40% in extreme cold weather (-4°F). For a 100-mile truck, this would represent a 10 to 20-mile reduction in driving range and up to a 40-mile reduction in extreme cold weather.

Sodium-Nickel batteries present the advantage of being able to operate at extreme temperatures from -40 to 149°F (-40 to +65°C) with no performance degradation. Since the electrolyte used in Sodium-Nickel batteries is solid and inactive at normal ambient temperatures, the batteries are continuously kept at their internal working temperature of 518°F (270°C) in order to keep the electrolyte molten and the battery ready to use. Thus, Sodium-Nickel batteries provide consistent performance regardless of the outside temperature and charge normally at cold temperatures. In 2012, Motiv Power Systems was awarded a contract with a total value of $13.4 million from the City of Chicago to electrify up to 20 garbage trucks. In order to meet the range requirements provided by the City of Chicago (drive 60 miles all year round), Motiv Power Systems uses Sodium-Nickel Chloride batteries. (Figure 6, not reproduced here, shows the first US all-electric Class 8 refuse truck from Motiv Power Systems.)

During initial testing in December 2013, no degradation of performance was observed. Between 50% and 60% of total battery capacity is used for driving regardless of the outside temperature, leaving enough battery capacity to run trash compaction and vehicle accessories.

Sodium-Nickel batteries present several drawbacks compared to Lithium-Ion batteries:

  • Sodium-Nickel batteries have lower power density than Lithium-Ion batteries. Thus, they are not suited for every truck application.
  • Sodium-Nickel batteries are not shipped at their operating temperature and thus need 24 hours to heat up to 280°C prior to being used.
  • Sodium-Nickel batteries are better for high usage applications (such as refuse), as the batteries will cool down if not in use or not connected to a power source. While this would not damage the batteries, a 24-hour period would be needed to reheat them to their 280°C operating temperature.
  • While connected to a power source, Sodium-Nickel batteries will draw power to keep batteries warm (less than 100 W).
  • There is currently only one commercial-stage supplier of Sodium-Nickel batteries for E-Truck applications (FIAMM), which is a limiting factor for the further adoption of Sodium-Nickel batteries.

We can see that cabin heating represents a significant energy draw on E-Truck batteries: from 16 kWh for a 4 kW cabin heater operated for four hours in a day, to 48 kWh for a 6 kW cabin heater operated for eight hours in a day. Chassis dynamometer testing of a Smith Electric Newton Step Van at the Argonne National Laboratory (see Chapter 7 for reference) showed a 40% increase in energy consumption (and thus a 40% decrease in driving range) at cold temperatures of 20°F (-7°C) compared to ambient temperatures of 70°F (21°C).

Figure 7: Impact of cabin heating on the driving range of an E-Truck (image not reproduced)

From the 80-mile maximum usable range, we can then see the impact of cabin heating on driving range:

  • With 4 hours of cabin heating at 5 kW power draw, the E-Truck maximum usable range would decrease to 60 miles.
  • With 6 hours of cabin heating at 5 kW power draw, the E-Truck maximum usable range would decrease to 50 miles.
  • With 8 hours of cabin heating at 5 kW power draw, the E-Truck maximum usable range would decrease to 40 miles.


Figure 7 shows the impact of cabin heating on an E-Truck equipped with a cabin heater with a power draw of 5 kW and an advertised range of 100 miles. To isolate the impact of cabin heating on driving range, we assumed in this case that cold temperatures do not affect battery performance, even though outdoor temperatures low enough to require cabin heating would in practice degrade the battery as well.

We estimate that cabin heating use could decrease state of charge (SOC) by 20% in typical delivery operation and up to 40% in operation where the driver requires longer periods of cabin heating.

Lastly, we researched potential solutions that would help maintain E-Truck driving range in cold climates.

Table 3: List of potential solutions to help maintain E-Truck driving range in cold climates

[Table 3 images not reproduced]

REFERENCES

  • The Truth About Electric Vehicles (EVs) in Cold Weather. On-Demand Webinar. FleetCarma. https://www.fleetcarma.com/Resources/the-truth-about-electric-vehicles-in-cold-weather-webinar
  • American Automobile Association (2014). Extreme Temperatures Affect Electric Vehicle Driving Range, AAA Says. http://newsroom.aaa.com/2014/03/extreme-temperatures-affect-electric-vehicle-driving-range-aaa-says/
  • Duoba, M., E. Rask, M. Meyer (APRF) (2012). Advanced Powertrain Research Facility AVTA Nissan Leaf Testing and Analysis. Argonne National Laboratory, October 12, 2012. http://www.transportation.anl.gov/D3/data/2012_nissan_leaf/AVTALeaftestinganalysis_Major%20summary101212.pdf
  • Pesaran, A., S. Santhanagopalan, G. H. Kim (2013). Addressing the Impact of Temperature Extremes on Large Format Li-Ion Batteries for Vehicle Applications. National Renewable Energy Laboratory, NREL/PR-5400-58145. 30th International Battery Seminar, Fort Lauderdale, FL, March 11-14, 2013. http://www.nrel.gov/docs/fy13osti/58145.pdf
  • Jehlik, F., et al. (2014). Electric Heater Effects on Two Medium Duty Electric Trucks, from Argonne & FedEx EV Evaluations. Handout for the National Governors Association Workshop on Advanced Vehicle Technologies and Infrastructure. Indianapolis, IN, May 19, 2014.

Once upon a time Congress knew an energy crisis was coming

Extracts from the 2005 Senate hearing “High Costs of Crude”

[ In Mason Inman’s outstanding biography of M. King Hubbert, “The Oracle of Oil”, he shows five different models Hubbert used to predict the peak of continental U.S. oil production, all of which came up with roughly the same, correct answer.

Here are some other forgotten predictions from a 2005 Senate hearing “High Costs of Crude”:

  1. James R. Schlesinger, former U.S. Secretary of Defense, predicted that “by about 2010, we should see a significant increase in oil production as a result of investment activity now under way. There is a danger that any easing of the price of crude oil will, once again, dispel the recognition that there is a finite limit to conventional oil.”
  2. James Woolsey, former Director of the CIA, said that “Even if other production comes online from unconventional sources such as shale in the American West, the relatively high cost of production could permit low-cost producers, particularly Saudi Arabia, to increase production, drop prices for a time, and undermine the economic viability of the higher cost competitors, as occurred in the mid-1980s”.

Quite often, electricity generating contraptions like wind, solar, and nuclear are called on as the answer to the energy crisis.  But back in 2005 it was understood why anything generating electricity wouldn’t solve the problem:

  • James Woolsey: “The current transportation infrastructure is committed to oil and oil-compatible products. And since our electricity system scarcely uses any oil, you can put windmills and nuclear reactors on every hilltop and you would have a negligible effect on our use of oil. For the foreseeable future, as long as vehicular transportation is dominated by oil as it is today, the Greater Middle East, and especially Saudi Arabia, will remain in the driver’s seat.”
  • James Schlesinger: Advocating the construction of nuclear plants may be desirable, but it does not confront the critical issue of the liquids crisis. The intractable problem lies in liquid fuel for land, sea and air transportation.

So here we are now, the energy crisis forgotten, lots of excitement about wind, solar and nuclear despite their irrelevancy, electric cars, cellulosic ethanol, and other non-solutions.

And tight “fracked” shale oil is getting its ass kicked by the Saudis, as Woolsey predicted. Yet both houses of Congress proclaim a century or more of energy independence, with the most recent energy bill expediting the export of U.S. natural gas (LNG) at a time when the fracking bubble appears to be bursting (shalebubble.org). Ten years ago, Congress was holding hearings on the alarming shortages of conventional natural gas looming in the not-too-distant future. Fracked natural gas delayed that crisis a few years, but cannot keep doing so, since fracked gas wells produce strongly only for their first few years before declining rapidly to low levels of production.

So here is a refreshing Senate hearing back when our dependence on oil was acknowledged, and our leaders knew and cared that an energy crisis was coming. First a few remarks, and then more excerpts from the hearing.

Indiana Senator Richard Lugar: In the long run our dependence on oil is pushing the U.S. toward an economic disaster of lower living standards, increased risks of war, and environmental degradation. When we reach the point where the world’s oil-hungry economies are competing for insufficient supplies of energy, oil will become an even stronger magnet for conflict than it already is.

James R. Schlesinger, Former U.S. Secretary of Defense.  In the decades ahead we shall reach a plateau or peak, beyond which we can’t increase production of conventional oil worldwide. The day of reckoning draws nigh.

A growing consensus accepts that the peak is not that far off. It was a geologist, M. King Hubbert, who outlined the theory of peaking and correctly predicted that production in the United States would peak around 1970.

Sometime in the decades ahead, the world will no longer be able to accommodate rising energy demand with increased production of conventional oil.  We need to … begin to prepare for that transition.

We have a growing gap between discoveries and production, and it will continue to widen; unless we discover oil, we will not be able to produce it. Most of our giant fields were found 40 or more years ago. Even today, the bulk of our production comes from these old and aging giant fields. More recent discoveries tend to be small, with high decline rates, and are soon exhausted.

The energy bill … doesn’t deal with the long-term problem: for two centuries the growth of our economies and the rise of our living standards have depended on the exploitation of a finite resource: oil.

The public does not really get interested in energy problems until the price of gasoline runs up; other than that, it is indifferent. We move as a country from complacency to panic. Gasoline prices are high at the moment, and that has gotten the public’s attention. Otherwise, [the public only pays attention] when there are supply interruptions and [long] gasoline lines, as we had in 1973 and 1979. That gets the public’s attention.

Pointing to the reality [of the end of vast discoveries of supergiant oil fields] in the Middle East doesn’t do it, until such time as there’s some impact. I hesitate to mention to you, gentlemen, that politicians don’t usually like to be associated with bad news. And that is bad news, and it is very hard to persuade people to emulate Jimmy Carter and go out there and say there’s a problem coming.

One additional point needs to be made. When gasoline prices are rising, public anger rises at least correspondingly. Public anger immediately draws the attention of politicians—and here in the United States it elicits a special type of political syndrome: Wishful thinking. It is notable that in the last election both candidates talked about ‘‘energy independence,’’ a phrase that traces back to the presidency of Richard Nixon and to the reaction to the Arab oil embargo. One should not be beguiled by this forlorn hope.

The transition [from oil] will be the greatest challenge this country and the world will face, outside of war. The longer we delay, the greater will be the subsequent trauma. For this country, with its 4 percent of the world’s population using 25 percent of the world’s oil, it will be especially severe.

Senator HAGEL, Nebraska. Maybe the answer is, as you said earlier in your remarks, that there has to be a crisis, a big crisis.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation, 2015, Springer]

Senate 109–385. November 16, 2005. High costs of crude: the new currency of foreign policy.  U.S. Senate Hearing.    

Excerpts from this 39 page hearing follow (out of order, some paraphrasing).

RICHARD G. LUGAR, U.S. SENATOR FROM INDIANA.

The committee meets today to examine the effects of U.S. oil consumption on American foreign policy and on our wider economic and security interests. High oil prices have hurt American consumers at the gas pump, and record revenues flowing into oil producing nations are changing the world’s geopolitical landscape. Increasingly, oil is the currency through which countries leverage their interests against oil dependent nations such as ours.

Oil is not just another commodity. It occupies a position of singular importance in the American economy and way of life. In 2003, each American consumed about 25 barrels of oil. That is more than double the per capita consumption in the United Kingdom, Germany, and France and more than 15 times that of China. With less than 5% of the world’s population, the United States consumes 25% of its oil.

In the short run, our dependence on oil has created a drag on economic performance at home and troubling national security burdens overseas. In the long run, this dependence is pushing the United States toward an economic disaster that could mean diminished living standards, increased risks of war, and accelerated environmental degradation.

Up to this point, the main issues surrounding oil have been how much we have to pay for it and whether we will experience supply disruptions. But in decades to come, the issue may be whether the world’s supply of oil is abundant and accessible enough to support continued economic growth, both in the industrialized West and in large rapidly growing economies like China and India. When we reach the point where the world’s oil-hungry economies are competing for insufficient supplies of energy, oil will become an even stronger magnet for conflict than it already is.

Since 1991, we have fought two major wars in the oil-rich Middle East, and oil infrastructure and shipping lanes are targets for terrorism. In addition to the enormous dollar cost we pay for the military strength to maintain our access to foreign oil, our petroleum dependence exacts a high price in terms of foreign policy and international security. Massive infusions of oil revenue distort regional politics and can embolden leaders hostile to U.S. interests. Iran, where oil income has soared 30% this year, threatened last month to use oil as a weapon to protect its nuclear ambitions. At a time when the international community is attempting to persuade Iran to live up to its nonproliferation obligations, our economic leverage on Iran has declined due to its burgeoning oil revenues. Similarly, the Chavez government in Venezuela resists hemispheric calls for moderation, in part because it has been emboldened by growing oil revenues. Russia uses its gushing oil and natural gas income and reserves as leverage over new democracies in East Europe. Globally, critical international security goals, including countering nuclear weapons proliferation, supporting new democracies, and promoting sustainable development are at risk because of dependence on oil. Diversification of our supplies of conventional and nonconventional oil, such as Canada’s tar sands, is necessary and under way. Yet because the oil market is globally integrated, the impact of this diversification is limited.

Our current rate of oil consumption, coupled with rapidly increasing oil demand in China, India, and elsewhere, will leave us vulnerable to events in the tumultuous Middle East and to unreliable suppliers such as Venezuela. Any solution will require much more than a diversification and expansion of our oil supply.

Despite the widening discussion of our energy vulnerability, the U.S. political system has been capable of only tentative remedial steps that have not disturbed the prevailing oil culture. The economic sacrifices imposed on Americans recently by rising oil prices have expanded our Nation’s concern about oil dependence. But in the past, as oil price shocks have receded, motivations for action have also waned. Currently, policies for mediating the negative effects of oil dependence continue to be hamstrung in debate between supply-side approaches and those preferring to decrease demand. We must consider whether the political will now exists to commit to a comprehensive strategy.

 

JAMES R. SCHLESINGER, Former U.S. Secretary of Defense

We face a fundamental, longer term problem. In the decades ahead, we do not know precisely when, we shall reach a plateau or peak, beyond which we shall be unable further to increase production of conventional oil worldwide. We need to understand that problem now and to begin to prepare for that transition.

The underlying problem is that for more than three decades, our production has outrun new discoveries. Most of our giant fields were found 40 years ago and more. Even today, the bulk of our production comes from these old and aging giant fields. Ghawar in Saudi Arabia, for example, produces 7 percent of the world’s petroleum all by itself. There are other examples. More recent discoveries tend to be small with high decline rates and are soon exhausted.

The problem is that demand and production continue to grow and the discoveries are not matching those increases. The fact of the matter is that unless we discover oil, we will not be able to produce it over time. And we have a growing gap between our discoveries and production, which will continue to increase. The consequence is that as we look to the future, and we begin to drain off those giant fields like Ghawar, like the Burgan Field in Kuwait, we are going to be faced with an oil stringency.

As the years roll by the entire world will face a prospectively growing problem of energy supply. Moreover, we shall inevitably see a growing dependency on the Middle East.

By about 2010, we should see a significant increase in oil production as a result of investment activity now under way. There is a danger that any easing of the price of crude oil will, once again, dispel the recognition that there is a finite limit to conventional oil. In no way do the prospective investment decisions solve the long-term, fundamental problem of oil supply.

Let me underscore that energy actions tend to be a two-edged sword. To some extent, the recent higher prices for oil reflect some of our own prior policies and actions. For example, the sanctions imposed upon various rogue regimes, by reducing world supply, have resulted in higher prices. Operation Iraqi Freedom, followed by the insurgency, has caused unrest in the Middle East. The consequence has been somewhat lower production and a significant risk premium that, again, has raised the price of oil. The effect of higher oil prices has been significantly higher income for producers. A much higher level of income has meant that a range of nations, including Russia, Iran, Venezuela, as well as gulf Arab nations have had their economic problems substantially eased. As a result, they have become less amenable to American policy initiatives. Perhaps more importantly, the flow of funds into the Middle East inevitably has added to the moneys that can be transferred to terrorists. As long as the motivation is there and controls remain inadequate, that means that the terrorists will continue to be adequately or amply funded. To the extent that we begin to run into supply limitations and to the extent that we all grow more dependent on the Middle East, this problem of spillover funding benefits for terrorists is not going to go away.

The United States is today the preponderant military power in the world. Still, our military establishment is heavily dependent upon oil. At a minimum, the rising oil price poses a budgetary problem for the Department of Defense at a time that our national budget is increasingly strained. Moreover, in the longer run, as we face the prospect of a plateau in which we are no longer able worldwide to increase the production of oil against presumably still rising demand, the question is whether the Department of Defense will still be able to obtain the supply of oil products necessary for maintaining our military preponderance. In that prospective world, the Department of Defense will face all sorts of pressures at home and abroad to curtail its use of petroleum products, thereby endangering its overall military effectiveness.

 

JAMES WOOLSEY, former CIA Director

The testimony I’m presenting today is in large measure the substance of a paper by former Secretary of State George P. Shultz and me. We wrote and published it this summer on the Web site of the Committee on the Present Danger, which he and I co-chair. [Today] I’m going to … point out why a pure market approach is something that will not work under the current circumstances.

First of all, the current transportation infrastructure is committed to oil and oil-compatible products, so there’s no effective short-term substitutability. One simply has to eat whatever increases in oil prices come upon us. We can’t shift as we can with many other commodities.

Second, that dependence is one which operates today in such a way that the transportation fuel market and the electricity market are effectively completely separate things. In the 1970s about 20% of our electricity came from oil, so if one introduced nuclear power, or wind power, one was substituting them to some extent for oil use. Today that’s essentially not true anymore. Only 2 to 3 percent of our electricity comes from oil. Whether you’re a fan of nuclear power or wind or whatever, you can put windmills and nuclear reactors on every hilltop and you would have only negligible effect on our use of oil.

So the transportation fuel market and the electricity market today are very different. Secretary Shultz and I focused on the importance of proposals that could get something done soon. And in that regard let me be very blunt. We should forget about 95 percent of our effort on hydrogen fuel cells for transportation. We found on the National Energy Policy Commission that ‘‘hydrogen offers little to no potential to improve oil security and reduce climate change risks in the next 20 years.’’ Hydrogen fuel cells have real utility in niche markets for stationary uses. But the combination of trying to get the cost of these one-to-two-million-dollar vehicles that run on hydrogen down, at the same time one coordinates a complete restructuring of the energy industry so one has hydrogen at filling stations, and does a complete restructuring of the automotive industry so one has hydrogen fuel cells, is a many decades-long undertaking.

Hydrogen fuel cells for transportation in the near term are, in my judgment, a snare and a delusion and we should stop spending the kind of money on them that we are spending now.

The second point is that the Greater Middle East will continue to be the low-cost and dominant petroleum producer for the foreseeable future. If one looks at the coming demand growth from China and India, and the relatively high cost of production elsewhere, it is still going to be the case that the gulf—Saudi Arabia in particular—is going to be the swing producer and have a dominant influence on oil prices.

If the Saudi fields are in the negative shape that Mr. Simmons and others have suggested in some of their writings it may be a bit harder for the Saudis to increase production quickly, drop the price of oil as they did in the mid-1980s, and bankrupt other approaches.

The petroleum infrastructure is very vulnerable to terrorist and other attacks. My friend, Bob Baer, the former CIA officer, who wrote the recent book, ‘‘Sleeping With the Devil,’’ opens with a scenario in which a hijacked airliner is flown into the sulfur-cleaning towers up near Ras Tanura in northeastern Saudi Arabia. That takes 6 million barrels a day or so offline for a year or more. It sends world oil prices well over $100/barrel and crashes the world’s economy.

And that’s not to speak of some of the vulnerabilities from attacks on shipping, from hurricane damage in the gulf and all the rest. So the infrastructure of oil worldwide is vulnerable both to accidents and certainly to terrorism. But neither Secretary Shultz nor I talk in terms of just oil imports. We don’t solve anything in this country by importing a lot less from the Middle East and importing, say, more from Canada and Mexico, and then Europe importing more from the Middle East.

The possibility exists, particularly under regimes that could come to power in the Greater Middle East, of embargoes or other disruptions of supply. People sometimes say, whoever is in power in Saudi Arabia, they’re going to need to sell the oil in order to live. Well, they don’t need to pump that much of it if they want to live in the seventh century.

Bin Laden has explicitly said that he thinks $200/barrel or more is a perfectly reasonable price for oil. And we should remember that in 1979 there was a serious coup attempt in Saudi Arabia. However successful or unsuccessful our current efforts to help bring democracy and the rule of law into that part of the world may be, we are looking at a decade or two or three of chaotic change and unpredictable governmental behavior in the Middle East. And that bodes concern, at the very least, for the stability of oil supplies.

Wealth transfers from oil have been used, and continue to be used, to fund terrorism and ideological support. The old Pogo cartoon line, ‘‘We have met the enemy and he is us,’’ is certainly true with respect to the funding of terrorism in the Middle East. For the ideological underpinnings of terrorism and the hate which is reflected in the al-Qaeda doctrine and related doctrines, we have only to look to the funding which takes place from Saudi Arabia and from wealthy individuals in that part of the world. Estimated generally at $3–$4 billion a year, these funds go into teaching hatred in the madrassas of Pakistan, in the textbooks of Indonesia, in the mosques of the United States. We hear Prince Turki bin Faisal, the new Ambassador in Washington from Saudi Arabia and my former counterpart when he headed Saudi intelligence, say that we don’t appreciate how much the Saudis are doing in fighting against terrorism. Well, in a sense they are. They are perfectly willing to cooperate with us in fighting al-Qaeda, but it is not because the underlying views of the Wahhabis in Saudi Arabia and those of the Salafist jihadis such as al-Qaeda are different: They are not. The underlying views are genocidal for both groups with regard to Shiite Muslims, Jews, and homosexuals and they are absolutely filled with hatred with respect to Sufi and other Muslims, Christians, those with other religious beliefs, and democracy. Both are on the side of terrible oppression of women.

The current account deficits for a number of countries create risks ranging from major world economic disruption to deepening poverty, and could be substantially reduced by reducing oil imports. The United States essentially borrows about $2 billion now every day, principally from major Asian states, to finance its consumption. The single largest category of imports is the approximately $1 billion per working day that we borrow in order to finance our imported oil.

Global-warming gas emissions from man-made sources do create at least the risk of climate change, and one important component of potential climate change is, of course, transportation and oil.

The Greater Middle East will continue to be the low-cost and dominant petroleum producer for the foreseeable future.  Home of around two-thirds of the world’s proven reserves of conventional oil—45 percent of it in just Saudi Arabia, Iraq, and Iran—the Greater Middle East will inevitably have to meet a growing percentage of world oil demand.

Even if other production comes on line, e.g., from unconventional sources such as tar sands in Alberta or shale in the American West, their relatively high cost of production could permit low-cost producers, particularly Saudi Arabia, to increase production, drop prices for a time, and undermine the economic viability of the higher cost competitors, as occurred in the mid-1980s.

For the foreseeable future, as long as vehicular transportation is dominated by oil as it is today, the Greater Middle East, and especially Saudi Arabia, will remain in the driver’s seat.

Biodiesel and renewable diesel. The National Commission on Energy Policy pointed out some of the problems with most current biodiesel ‘‘produced from rapeseed, soybean, and other vegetable oils—as well as . . . used cooking oils.’’ It said that these are ‘‘unlikely to become economic on a large scale’’ and that they could ‘‘cause problems when used in blends higher than 20% in older diesel engines.’’ It added that ‘‘waste oil is likely to contain impurities that give rise to undesirable emissions.’’

Senator Lugar.  Let me begin the questions by noting, Director Woolsey, that when we wrote the article 6 years ago, there was great enthusiasm. President Clinton came over to the U.S. Department of Agriculture. There was a celebration of a breakthrough of energy independence in our country. I think the enthusiasm only lasted throughout that rally at USDA. Even though we tried to make the points that you’ve made today, 6 years later we are now sobered by war in the Middle East. And we are sobered by the fact, as you suggested, that in the future, events could make oil politically unavailable.

But all the assumptions on which our economy and our security are based have consequences for our external affairs, over which we may not have a great deal of control. Ditto for the oil wells or pipelines in Iraq. Even as we try to protect them, we are not bringing more oil into the world. We are struggling to get back to the levels under Saddam.

Now, I ask the two of you: What sort of shock value is required, so that we will understand the world in which we live, and so that these modest suggestions will have some hearings, some legislation?

Secretary SCHLESINGER. The public does not really get interested in energy problems until such time as the price of gasoline runs up. Other than that it is indifferent. We move as a country from complacency to panic. Gasoline prices are high at the moment; they have risen and it has gotten the public’s attention. Other than that, [what gets attention are] supply interruptions and [long] gasoline lines, as we had in 1973 with the Arab oil embargo, and to some extent with the fall of the Shah in 1979. That gets the public’s attention.

Pointing to the reality that we have this trend ending the period of vast discoveries of elephants [also called super giant fields] in the Middle East doesn’t do it, until such time as there’s some impact. I hesitate to mention to you, gentlemen, that politicians don’t usually like to be associated with bad news. And that is bad news and it is very hard to persuade people to emulate Jimmy Carter, and go out there and say there’s a problem coming.

Mr. WOOLSEY. I would think that $3-a-gallon gasoline, preceded by 15 of the 19 people who flew the planes on 9/11 coming from the world’s largest oil producer, would have done it. But the only thing I can say is that one wants to make these steps as palatable as possible.  Both financially and in terms of people’s lifestyles.

 

Senator Lugar. We could be in a situation in which the Chinese, the Indians, and the European countries finally decide they are desperate. In the past, countries that were desperate often took over other people’s territory. And we could say—well, we’re in a small world. People are fighting world wars because they don’t have energy.

Mr. WOOLSEY. This is an issue on which all us oil importers are in the same fix together. I would have thought it would have been a wonderful major topic for cooperative discussion between the President and the Japanese and the Chinese, that we could work on programs like this together. We have no reason to want China to need lots of oil. We’d rather have them happy with using their grass to drive home.

Senator Lugar. Exactly. And each one of us who travel find hotels in African countries filled with people from India, China, as well as our own country, looking for the last acre on the preemptive possibility.

Senator CHUCK HAGEL, Nebraska. How do we then take everything that the two of you have talked about in a way where we can address it, find solutions for it, develop the policy needed to do the things that you’re talking about to avert the things that are coming down the track at us?

I would like to have you each address it: in your opinions, does [the energy bill] start to address, at all, what we must deal with here, and the decisions we’re going to have to make in order to avert what I think is an international catastrophe that’s headed straight at this country?

I wonder whether the President of the United States should lift this above where we are now, and essentially put this on the same plane as a Manhattan Project, which has been mentioned before. The seriousness of this I don’t think takes second place to any issue. And yet, we seem to kind of be sleepwalking through this. Yes, we passed the bill, kind of interesting, good. I voted for it, I suspect most of my colleagues voted for it. It just doesn’t, in my opinion, really address what you’re talking about.

And it is complicated. I understand that you talked, Secretary Schlesinger, about, I think, 17 different blends of gasoline that our refineries have to deal with. You talk about, Director Woolsey, the Pogo quote. Much of this, I think, is self-inflicted because we have not had the courage in this country, administrations, Congresses, to deal with this. But these hearings, as important as they are, are not going to lift this up and do what we need to do to address this impending disaster.

My question is: How do we then fix this? How do we address it? Maybe we start with the energy bill, whether that’s really relevant to what needs to be done. Should the President come up here and sit down with the leadership of the Congress of the United States, and say now we’re going to get it above this. We’re going to make this a Manhattan Project; it is the focus of this country, and we’re going to harness the energy, through public-private partnerships, and get this done.

We hear a lot of talk, especially from politicians, about energy independence. It’s in our press releases. We’re going to get this country to a point where there’s energy independence. I’d like to hear from each of you whether that’s possible. How do you do that? I didn’t hear anything too encouraging from either one of you today about that happening.

We need friends, we need alliances, we need relationships. I think we’re destroying our infrastructure in this country because of Iraq and because of over-commitments. We’re destroying our budgets, and yet Rome burns.

 

Secretary SCHLESINGER. The first point is: No, we’re not going to have energy independence until such time as we move away from oil as our principal source of transportation fuel. We have a long-term liquids problem.

The energy bill was quite useful. But it dealt essentially with shorter term problems: The failure to build our infrastructure; the difficulty in stringing out transmission lines or pipelines; it eased a number of those problems and that was desirable. But it doesn’t deal with this longer term problem that for two centuries we have been dependent for the growth of our economies and the rise of living standards on the exploitation of a finite resource, which is oil.

How do we deal with that? I would hope that we can focus the national attention on this longer term problem and begin to prepare now to get through that transition that we face, 20 years out, 25 years out, I don’t know what the date is. That depends, of course, on Presidential leadership and the need to focus on the realities of that future and possibly to develop a number of what I’ll call ‘‘mini Manhattan Projects’’ because there are a range of developments that can help. Hybrid cars, plug-ins, look most promising. But that is not going to happen unless we are prepared to contravene to some extent, at least, the decisions of the marketplace. Senator Sununu’s concerns about electric power supply are appropriate. But once again until we can link up electric power and the transportation sector, we are not going to deal with the larger oil problem.

Senator HAGEL. I don’t know if there is an answer here. But then, what do you do to get it out of neutral, and take it up somewhere where we can start to put all these pieces together, bring some leadership and resources to bear, and focus policy?

And maybe the answer is, as you said earlier in your remarks, that there has to be some crisis. A big crisis. And I think the margins of error in the world today, for recovering from such a crisis, are so much smaller than they were when you were Secretary of Energy, that it is a very frightening prospect if we don’t get serious about this. I think both political parties, the Congress, and the President have this as their greatest responsibility.

Secretary SCHLESINGER. That is absolutely right, Senator. We need to have a chorus of almost all political figures, in Washington and throughout the country, Governors as well, pointing to this problem, that it is something we must address. And if we don’t have that, we are not going to get on with these major adjustments that are necessary. We must remember that societies have difficulty facing distant threats.

We saw that in the case of Hurricane Katrina. For over a century we’ve known that sooner or later a CAT 4 or CAT 5 would hit a city that was below sea level. But it wasn’t today’s problem. Somebody has commented, it’s like the fella who plays Russian roulette: he spins five or six times, nothing happens, and he puts the revolver aside and says that’s not dangerous. Well, we’ve been through two or three of those occasions, possibly starting with the Suez crisis in 1956 and then, of course, with 1973 and 1979, and we’ve recovered from them, and the reaction is like that fella with the revolver and Russian roulette.

 

Mr. WOOLSEY. Energy independence is really the wrong phrase. The problem is oil, as Jim suggested.

Secretary SCHLESINGER. We must remember that we are working against the grain of the price mechanism, or the market economy. And that we are working against the predilections of the public and that’s what makes it hard.

 

Senator BILL NELSON, FLORIDA. Suppose some of these things work, and we are suddenly at a position where we’re using half of the gasoline that we are using now, by a combination of all the things that you have very articulately laid out. Realistically, in what period of time would that be?

Mr. WOOLSEY. Well, a lot would have to do with how fast the fleet of passenger vehicles turns over. I think the average American passenger vehicle stays in service 10 or 12 years.

 

Senator BILL NELSON. And as a result of that, at the end of that period of time, however long it is, we would be almost not dependent on foreign oil. And the question is: Are we going to be well on our way to that goal, or achieving that goal, before the crisis comes that you mentioned, Senator Hagel? Because the crisis is coming. We just don’t know how it’s going to come. It may be that a terrorist sinks a supertanker in the Strait of Hormuz, or they blow up a refinery, or some other—maybe another major hurricane. And why we can’t get the American public and the American leadership focused on this is beyond me. We have been seduced by cheap oil. And now it is so omnipresent in our system of distribution of energy that it’s hard to change it, and it’s going to take a crisis. It’s going to force us to change.

And that’s sad. Now this Senator’s going to continue to speak out, and I assume my colleagues, on the basis of your leadership, Mr. Chairman, are going to continue to speak out, and let’s see if we can influence whoever’s occupying the White House for the next 3 years, and for the years after that, whoever the new administration is, to see if we can break this stranglehold that we’re in. I don’t know what else to say.

 

Senator Lugar.  Thank you very much, Senator Nelson. This committee is declaring intellectual independence, even if we can’t declare energy independence.

But let me just say, the thing that all segments of Ukrainian politics pointed to were maps. They drew all sorts of oil lines, or gas lines, to various countries, because of a sense of their independence conceivably being lost. The people who have the spigots and could turn them off could create a cause of war; they could create financial chaos and, in the meanwhile, a physical torture of the country. In other words, fortunately we are not in that condition; we are talking about a situation down the trail. But many countries either are in that condition, as Ukraine is, or are coming to that point in this world. I stress again the international implications of our conversation today.

Even as we get our own act straightened out, and I think that we will, we must exude optimism. We must try to work with other countries, so that they do not face this crushing sense of dependence. This is critical, or we are going to be involved, I fear, in military conflict elsewhere in the world, trying to mediate either wars or disputes among others who did not work things out. And that is a very serious problem. For the moment, we’re talking about competition with the Chinese, the Indians, everybody grasping for the last barrel, with the understanding that if they don’t get it, and the dynamics of their publics demand it for their country, they may take other means to get it. We have a strong need for diplomacy.

 

SENATOR RUSSELL D. FEINGOLD, WISCONSIN. As I have said many times, we must move away from our dependence on oil, most of which comes from foreign soil, if we are to truly meet our responsibility to future generations.

I would like to thank today’s witnesses, James Woolsey and James Schlesinger, for appearing before the committee. Given their active role in bringing attention to the concerns surrounding dependency on foreign oil, I look forward to hearing their ideas for avoiding future policy crises through an intelligent, well-informed non-fossil-fuel-based energy policy.

 

[From The National Interest, Winter 2005/06] THINKING SERIOUSLY ABOUT ENERGY AND OIL’S FUTURE (By James R. Schlesinger)

The run-up in gasoline and other energy prices—with its impact on consumers’ purchasing power—has captured the public’s attention after two decades of relative quiescence. Though energy mavens argue energy issues endlessly, it is only a sharp rise in price that captures the public’s attention. A perfect storm—a combination of the near-exhaustion of OPEC’s spare capacity, serious infrastructure problems (most notably insufficient refining capacity), and the battering that Hurricanes Katrina and Rita inflicted on the Gulf Coast—has driven up the prices of oil and oil products beyond what OPEC can control, and beyond what responsible members of the cartel prefer. They, too, see the potential for worldwide recession and recognize that it runs counter to their interests. But the impact is not limited to economic effects. Those rising domestic energy prices and the costs of fixing the damage caused by Katrina have weakened public support for the task of stabilizing Iraq, thereby potentially having a major impact on our foreign policy.

What is the cause of the run-up in energy prices? Is the cause short term (cyclical) or long term? Though the debate continues, the answer is both. Clearly there have been substantial cyclical elements and ‘‘contradictions’’ at work. For several decades, there has been spare capacity in both oil production and refining. Volatile prices for oil and low margins in refining have discouraged investment. The International Energy Agency, which expresses confidence in the adequacy of oil reserves, urges substantially increased investment in new production capacity and has recently warned that, in the absence of such investment, oil prices will increase sharply. Such an increase in investment clearly would be desirable, but it is more easily said than done. In the preceding period of low activity, both the personnel and the physical capacity in the oil service industry have diminished—and it will take time to recruit and train personnel, to restore capacity and to produce equipment.

One additional point needs to be made. When gasoline prices are rising, public anger rises at least correspondingly. Public anger immediately draws the attention of politicians—and here in the United States it elicits a special type of political syndrome: Wishful thinking. It is notable that in the last election both candidates talked about ‘‘energy independence,’’ a phrase that traces back to the presidency of Richard Nixon and to the reaction to the Arab oil embargo. One should not be beguiled by this forlorn hope—and this brings us to the real problem for the foreseeable future. What is the prospect for oil production in the long term? How does it bear on the prospects for ‘‘energy independence’’?

THE DAY OF RECKONING DRAWS NIGH. At the end of World War II came the period of the opening-up and rapid development of Middle East oil production, notably in the Arabian Peninsula. Both Europe and the United States embraced the shift from coal to oil as their principal energy source. The beginning of flush production in the Middle East coincided with and fostered the tremendous expansion of world oil consumption. In the 1950s and 1960s, oil production and consumption more than doubled in each decade. Annual growth rates in consumption of 8, 9 or 10 percent were typical. By contrast, no one, not even the most optimistic observers, expects a doubling of production in the decades ahead. The present expectation is markedly different. In increasing numbers, now approaching a consensus, knowledgeable analysts believe that the world will, over the next several decades, reach a peak—or plateau—in conventional oil production (Hirsch). Timing varies among these observers, but generally there is agreement on the outcome.
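[Note: a quick sanity check on those growth rates. Below is a minimal illustrative sketch of the compound-growth arithmetic; only the 8–10 percent figures come from the article, the code itself is not from the hearing:]

    # Doubling time at constant compound growth: ln(2) / ln(1 + r)
    import math

    def doubling_time(rate):
        """Years for consumption to double at annual growth rate `rate`."""
        return math.log(2) / math.log(1 + rate)

    for r in (0.08, 0.09, 0.10):
        print(f"{r:.0%}/yr -> doubles in {doubling_time(r):.1f} years")
    # 8%/yr -> 9.0 years; 9%/yr -> 8.0 years; 10%/yr -> 7.3 years,
    # i.e., more than a doubling per decade, as the article states.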

The implication is clear. Even present trends are unsustainable. Sometime in the decades ahead, the world will no longer be able to accommodate rising energy demand with increased production of conventional oil.

It should be emphasized that that would pose not a general ‘‘crisis in energy,’’ but instead a ‘‘liquids crisis.’’ Problems in energy other than oil are infrastructure problems, solvable through appropriate investment. To talk of a general ‘‘energy crisis’’ aside from oil is to divert attention from the central long-term problem. Advocating the construction of nuclear plants, for example, may be desirable, but it does not confront the critical issue of the liquids crisis. Basically, there is no inherent problem in generating and transmitting electric power, for which the resources are available. The intractable problem lies in liquid fuel for land, sea and air transportation.

We get clear indications regarding oil’s future from those in the industry. Though the United States and other consuming nations seem to believe that Saudi Arabia can and should increase production as demand rises, when he was asked at a recent conference whether oil production would peak, Ali Naimi, the long-time head of Saudi Aramco, responded that it would reach a plateau. It is quite telling that when, in 2004, the Energy Information Administration (EIA) projected Saudi production in 2025 of some 25 million BPD to satisfy world demand, the Saudis demurred—and quite politely indicated that such figures were ‘‘unrealistic.’’ The Saudis have never discussed a figure higher than 15 million BPD.

This is why David O’Reilly, CEO of Chevron, has stated that the ‘‘era of easy oil is over.’’ Projections by Shell and by BP put that plateau several decades out. BP now says that its initials stand for ‘‘Beyond Petroleum.’’ Others, more pessimistic, suggest that the peak is much closer at hand—in the next decade. It is interesting to note, in light of the recent discussion of Chinese ambitions in acquiring oil assets, that the Chinese seem to believe that world production will reach a peak around 2012 (Pang Xiongqi).

So any indication of relative optimism is greeted with sighs of relief: The peak is not that near. For example, when Daniel Yergin of Cambridge Energy Research Associates recently stated that the peak will not come until after 2020, it was greeted with something approaching cries of elation: The threat is not that immediate!

What lies behind this now-changed view? In brief, most of the giant fields were found forty years or more ago. Only a few have been found since 1975. Even today the bulk of production comes from these old and now aging giant fields.

The Ghawar oilfield in Saudi Arabia, discovered in the 1940s, is by itself still producing 7 percent of the world’s oil. Would that there were more Ghawars, but, alas, that is probably not to be.

Moreover, the announcement by the Kuwait Oil Company in November that its Burgan field, the world’s second largest, is now past its peak output caused considerable consternation. The field’s optimal rate is now calculated at 1.7 million BPD, not the two million that had been forecast for decades ahead. In addition, that announcement has called into question the EIA’s estimate in its reference case that Kuwait would be able to produce five million BPD; it now appears likely that the emirate will not be able to produce over three million BPD.

Recent discoveries have typically been relatively small with high decline rates— and have been exhausted relatively quickly. With respect to the United States, it has been observed: ‘‘In the old days, we found elephants—now we find prairie dogs.’’

A growing consensus accepts that the peak is not that far off. It was a geologist, M. King Hubbert, who outlined the theory of peaking in the middle of the last century, basing it on the experience that as an oilfield passes the halfway point in extracting its reserves, its production goes into decline. Hubbert correctly predicted that production in the United States itself would peak out around 1970. Dissenting from that view are the economists, who have a deep (and touching) faith in the market mechanism—and a belief that over time market forces can adequately cope with any limits on oil supply.  In the extreme, some economists have regarded oil supplies as almost inexhaustible.

The optimistic view is held by the Energy Information Administration of the Department of Energy, as well as the International Energy Agency. What lies behind it? While it is conceded that we have not been finding many new giants, it is contended that ‘‘additions and extensions’’ of existing fields will sustain growth. There is some truth in that contention—in that new technologies have been the basis of much of the additions to existing fields—and the hope is always there that we can increase overall recovery from the already discovered fields.

Optimists are buttressed in their views and are fond of pointing to the many earlier statements about ‘‘running out of oil.’’ Perhaps the most notable example was one by the director of the U.S. Geological Survey, George Otis Smith, who suggested in 1920 that we had already used up 40 percent of the oil to be found here in this country. That was a decade before the discovery in 1930 of the vast East Texas field, a bonanza that made oil supply so available that it drove oil prices below a dollar a barrel during the 1930s. A recent Chevron advertisement makes this substantive point quite dramatically: ‘‘It took us 125 years to use the first trillion barrels of oil. We’ll use the next trillion in 30.’’

Such past failed predictions are far less comforting than the journalists who cite them believe. The future may actually be different from the past. The optimists, mostly non-experts, seem unable to think quantitatively. Things are different now. In 1919 the world consumed a modest 386 million barrels of oil. Today the world is consuming some thirty billion barrels of oil each year. Statements like that of Director Smith were made before we had something approaching a billion automobiles worldwide, before we had aircraft and air transportation, before agriculture depended upon oil-powered farm machinery.

[Note: this is not true, see Inman’s “The Oracle of Oil”. Hubbert did consider technology and unconventional oil]

Hubbert’s peaking theory, based on observation of individual oil fields, was static in that it abstracted from improvements in technology. It also dealt strictly with conventional oil supplies. One notes that today those who are challenging Hubbert’s Peak are changing the rules of the game. They rightly point to dramatic improvements in technology, most notably deep-sea drilling. Somewhat less legitimately, they include in their projections all sorts of unconventional oil, like the Canadian tar sands and the prospects for shale oil. For example, of late, estimates of Canadian oil reserves have jumped by 180 billion barrels, now including the tar sands of Alberta. This is not a refutation of Hubbert’s theory (though it is frequently treated as such); it is simply a change in the rules that does not gainsay the fear that we will reach a plateau in conventional oil production.

We must bear in mind that earlier estimates suggested that there were some two trillion barrels of conventional oil in the earth’s crust. Now the estimate has grown to around three trillion. We have now consumed over a trillion barrels of oil. As indicated, we are consuming oil at the rate of thirty billion barrels a year. If one accepts Department of Energy projections, worldwide we would be consuming forty billion barrels of oil by 2025.

At such rates of consumption, the world will soon have reached the halfway point—with all that that implies—of all the conventional oil in the earth’s crust. At that point, the plateau or the peak will be near. And such calculations presuppose what cannot be assumed, that all the nations with substantial oil reserves will be willing to develop those reserves and exploit them at the maximum efficient rate. Both the Russian Federation and Saudi Arabia seem to intend to reach a plateau that they can sustain for a long time—the Russians at around ten million BPD, the Saudis up to but no more than 15 million BPD.
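[Note: the arithmetic behind ‘‘soon’’ can be made explicit. A minimal illustrative sketch using only the article’s round numbers (about 3 trillion barrels of conventional oil ultimately recoverable, over 1 trillion already consumed, consumption of 30 rising toward 40 billion barrels a year); the code is not from the article:]

    # Years until cumulative consumption reaches half of ~3 trillion barrels
    ultimate = 3.0e12          # barrels of conventional oil (article's estimate)
    consumed = 1.0e12          # barrels already consumed ("over a trillion")
    halfway = ultimate / 2     # 1.5 trillion barrels

    for annual in (30e9, 40e9):    # current rate, and projected 2025 rate
        years = (halfway - consumed) / annual
        print(f"{annual/1e9:.0f} billion bbl/yr -> ~{years:.0f} years to halfway")
    # 30 billion bbl/yr -> ~17 years; 40 billion bbl/yr -> ~12 years.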

The inability readily to expand the supply of oil, given rising demand, will in the future impose a severe economic shock. Inevitably, such a shock will cause political unrest—and could impact political systems. To be sure, we cannot anticipate with any precision the year or even the decade that we will reach that plateau. Yet, as Justice Potter Stewart suggested in seeking to define pornography, we shall know it when we see it.

Many economists take great comfort from the conviction that there is always a price at which markets will clear, and that the outcome determined by supply and demand is not only inevitable, but is also politically workable and acceptable. An outcome in which the price of a crucial commodity like oil rises to a level causing widespread economic disruption, along with the political consequences that flow from such disruption, turns out to be a secondary consideration, if considered at all. One is reminded of the phrase used by Wesley Clair Mitchell and Arthur F. Burns in their classic, Measuring Business Cycles (1946), when they spoke scornfully of the ‘‘Dreamland of Equilibrium.’’

That brings us to the question of the transition away from conventional oil as the principal source of energy for raising living standards of the world’s population. That transition will be the greatest challenge this country and the world will face— outside of war. The longer we delay, the greater will be the subsequent trauma. For this country, with its 4 percent of the world’s population, using 25 percent of the world’s oil, it will be especially severe.

The Day of Reckoning is coming, and we need to take measures earlier to cushion the shock. To reduce the shock, measures to ameliorate it should start ten years earlier at a minimum, given the length of time required to adjust the capital stock—and preferably much longer. The longer we delay, the greater the subsequent pain.

Both people and nations find it hard to deal with the inevitable. Even though it was long recognized that a Category 4 or Category 5 hurricane would inevitably strike New Orleans, a city substantially below sea level, Hurricane Katrina reminds us that political systems do not allocate much effort to dealing with distant threats—even when those threats have a probability of 100 percent.

We should heed a lesson from ancient Rome. In the towns of Pompeii and Herculaneum, scant attention was paid to that neighboring volcano, Vesuvius, smoking so near to them. It had always been there. Till then, it had caused little harm. The possibility of more terrible consequences was ignored—until those communities were buried in ten feet of ash.

Nonetheless, the analogy does appropriately point to our greater vulnerability in a future period.

References

Robert L. Hirsch, ‘‘The Inevitable Peaking of World Oil Production,’’ (Atlantic Council of the United States, October 2005), which includes a range of different estimates for the peak year. For a more comprehensive analysis, see Robert L. Hirsch, Roger Bezdek and Robert Wendling, ‘‘Peaking of World Oil Production: Impacts, Mitigation and Risk Management’’   (National Energy Technology Laboratory, February 2005).

Pang Xiongqi, et al., ‘‘The Challenge Brought by the Shortage of Oil and Gas in China and their Countermeasures,’’ a presentation at an international seminar in Lisbon, 2004. One may assume that such presentations do not depart significantly from the views of the Chinese government.

—————–

JAMES R. SCHLESINGER, prepared statement

I thank the committee for this invitation to discuss the quest for energy security, the implications of our heavy dependence on imported oil, the rise in oil prices, and their manifold political and economic repercussions for our Nation. In so many ways, the use of oil as our primary energy source turns out to be a two-edged sword. Actions that we take may reduce supply or add to the resources of those who are hostile to us.

The problem of energy security is of relatively recent origin. When mankind depended upon windmills, oxen, horses, and the like, energy security was not a strategic problem. Instead, as a strategic problem it is a development of modern times and reflects most crucially the turn to fossil fuels as increasingly the source of energy. The Industrial Revolution in the 19th century, strongly reinforced by the rapid growth of oil-dependent transportation in the 20th century, unavoidably posed the question of security of supply. Imperial Germany took over Lorraine with its coal fields after the Franco-Prussian War to insure its energy security. When Britain, pushed by Churchill, converted its Navy to oil early in the 20th century, it sought a secure supply of oil under its own control in the Persian Gulf, which incidentally increased its concern for the security of the Suez Canal.

For the United States, where the production of oil had started in 1859, and for long was primarily located, the question of security of supply did not arise until the 1960s and 1970s. Since then, we have regularly talked about and sought, by various measures, to achieve greater energy security. Such measures, limited as they were, have generally proved unsatisfactory. The Nation’s dependence on imported hydrocarbons has continued to surge.

Until such time as new technologies, barely on the horizon, can wean us from our dependence on oil and gas, we shall continue to be plagued by energy insecurity. We shall not end dependence on imported oil nor, what is the hope of some, end dependence on the volatile Middle East with all the political and economic consequences that flow from that reality.

We shall have to learn to live with degrees of insecurity—rather than that elusive security we have long sought. To be sure, some insecurity will be mitigated by the Strategic Petroleum Reserve, and other emergency measures. That will provide some protection against short-term supply disruptions, but it will not provide protection against the fundamental long-term problem.

Senator Lugar, Indiana. Our weak response to our own energy vulnerability is all the more frustrating given that alternatives to oil do exist. Oil’s importance is the result of industrial and consumption choices of the past. We now must choose a different path. Without eliminating oil imports or abandoning our cars, we can offset a significant portion of demand for oil by giving American consumers a real choice of automotive fuel. We must end oil’s near monopoly on the transportation sector, which accounts for 60% of American oil consumption.

I believe that biofuels, combined with hybrid and other technologies, can move us away from our extreme dependence on oil. Corn-based ethanol is already providing many Midwesterners with a lower cost fuel option. Cellulosic ethanol, which is made of more abundant and less expensive biomass, is poised for a commercial takeoff. We made progress in the 2005 energy bill, which includes incentives to produce 7.5 billion gallons of renewable biofuel annually. I introduced legislation last week that would require manufacturers to install flexible-fuel technology in all new cars. This is an easy and cheap modification, which allows vehicles to run on a mixture of 85 percent ethanol and 15 percent gasoline. We will get even greater payoffs for our investment in oil alternatives if American technological advances can be marketed to the rest of the world. Nations containing about 85 percent of the world’s population depend on oil imports.

 

JAMES WOOLSEY, former CIA Director

Secretary Shultz and I suggested three proposed directions for policy in these circumstances. The first policy is to encourage improved vehicle mileage, using technology that is now in production. First, with modern diesel vehicles: One needs to be sure that they are clean enough with respect to emissions, but one of the main reasons that European fuel mileage is 42 miles a gallon for their fleet and ours is 24 miles a gallon, is because over half of the passenger vehicles in Europe are diesels; modern diesels.

Light weight carbon composite construction of vehicles. The Rocky Mountain Institute’s publication of a year ago, ‘‘Winning the Oil Endgame’’ (WTOE) talks about this. This is a technology that is now in place for at least racing cars. Formula 1 racers are constructed out of carbon composites that are about 80 percent of the strength of aviation composites but about 20 percent of the cost. What that does is separate weight from safety. If one is in a light weight carbon composite vehicle like a Formula 1 racer it is extremely resistant to being crushed or damaged, many times better than steel. So having light-weight vehicles that are fuel efficient, but also strong enough that you don’t have to worry that your family’s going to get crushed if they get hit by an SUV, has some real advantages.

The second policy we suggest is the commercialization of alternative transportation fuels—fuels that can be available soon, are compatible with existing infrastructure, and can be derived from waste or otherwise produced cheaply. The first is cellulosic ethanol. The chairman and I stressed it in the Foreign Affairs article that he mentioned. Ethanol of any kind can be used for up to 85 percent of the fuel in flexible-fuel vehicles.  The cost of cellulosic ethanol looks like it is headed down to well below $1 a gallon for production.

There are also new technologies for producing diesel encouraged in the Energy Act. It’s called renewable diesel rather than biodiesel, because it focuses on waste products of all kinds as we said in the Energy Commission Report.

The Toyota Priuses that are sold in Japan and Europe have a button on them, which if you push it you can drive all electric for a kilometer or so. For some reason those buttons are not put on the Priuses that are sold in the United States. But if one improves the capabilities of the batteries in a hybrid, and you can punch a button of that sort and drive for, let’s say, 30 miles before the hybrid feature cuts in—that is the movement back and forth between gasoline power and electric power—and you have topped off the battery by plugging in the hybrid overnight, using off-peak night-time power, you are driving on the equivalent of something between 25-cent and $1-a-gallon gasoline. Most cars in the United States are driven less than 30 miles a day. So, if that’s the second car in the family, the car that’s used for errands and taking kids to school and so forth, you could well go weeks or months before you visited the filling station. On the average that type of a feature makes my 50-mile-a-gallon Prius into about a 125-mile-a-gallon Prius. If you make that vehicle out of carbon composites, then instead of 125 miles a gallon you would be getting around 250 miles a gallon, because halving the weight would approximately double the mileage.
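[Note: Woolsey’s mileage figures follow from simple arithmetic. In the minimal sketch below, the 50 mpg base, the 30-mile electric range, and the 125 and 250 mpg results are his; the 50 miles of daily driving is an assumed illustrative input chosen to reproduce them:]

    def effective_mpg(daily_miles, electric_range, base_mpg):
        """Miles driven per gallon of gasoline actually burned, assuming the
        battery covers the first `electric_range` miles each day and the
        engine covers the rest at `base_mpg`."""
        gasoline_miles = max(daily_miles - electric_range, 0)
        if gasoline_miles == 0:
            return float("inf")    # an all-electric day burns no gasoline
        return daily_miles / (gasoline_miles / base_mpg)

    print(effective_mpg(50, 30, 50))     # 125.0 -- the "125-mile-a-gallon" Prius
    print(effective_mpg(50, 30, 100))    # 250.0 -- carbon composites halve the
                                         # weight, roughly doubling base mpg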

There are imaginative proposals for transitioning to other fuels for transportation, such as hydrogen to power automotive fuel cells, but this would require major infrastructure investment and restructuring. If privately owned fuel cell vehicles were to be capable of being readily refueled, this would require reformers (equipment capable of reforming, say, natural gas into hydrogen) to be located at filling stations, and would also require natural gas to be available there as a hydrogen feed-stock. So not only would fuel cell development and technology for storing hydrogen on vehicles need to be further developed, but the automobile industry’s development and production of fuel cells also would need to be coordinated with the energy industry’s deployment of reformers and the fuel for them. Moving toward automotive fuel cells thus requires us to face a huge question of pace and coordination of large-scale changes by both the automotive and energy industries. This poses a sort of industrial Alphonse and Gaston dilemma: Who goes through the door first? (If, instead, it were decided that existing fuels such as gasoline were to be reformed into hydrogen on board vehicles instead of at filling stations, this would require onboard reformers to be developed and added to the fuel cell vehicles themselves—a very substantial undertaking.) It is because of such complications that the National Commission on Energy Policy concluded in its December 2004, report ‘‘Ending The Energy Stalemate’’ that ‘‘hydrogen offers little to no potential to improve oil security and reduce climate change risks in the next 20 years.’’ To have an impact on our vulnerabilities within the next decade or two, any competitor of oil-derived fuels will need to be compatible with the existing energy infrastructure and require only modest additions or amendments to it.
