Peak oil sands, low EROI, high debt, limited pipeline and refinery capacity

Peak tar sands, a.k.a. oil sands

Techno-optimists claim that technology will enable nasty, sour, gunky, expensive, difficult unconventional oil to fill the gap left by declining conventional oil.

Conventional oil is declining too quickly for unconventional to match

But that’s not likely, because conventional production is declining 4–8% annually (Höök 2009), the equivalent of needing a new North Sea (~5 Mb/d) to come on stream every year just to keep present output constant (Fantazzini 2011). That adds up to 5 new Saudi Arabias by 2030 just to offset the decline in existing production (Aleklett 2010).

Höök (2009) provides additional data on giant oil field decline rates and finds the average decline rate has increased by around 0.15% per year since the mid-1960s, a trend that is expected to continue. Decline rates are even higher for smaller fields, so as future production becomes more reliant on non-giant fields, the average decline in existing production will increase. This increasing decline is seldom discussed, and the resulting loss of production may be as high as 7,000,000 barrels/day by 2030 (Aleklett 2010).
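A back-of-envelope sketch of what these decline rates imply. The 4–8% decline range and the ~5 Mb/d "North Sea" figure come from the text above; the ~70 Mb/d base of declining conventional production and the ~10 Mb/d "Saudi Arabia" figure are my assumptions for illustration, not figures from the sources cited.

```python
# Illustrative arithmetic only -- the base production and Saudi Arabia
# capacity figures are assumptions, not from Höök/Fantazzini/Aleklett.
CONVENTIONAL_MB_D = 70.0   # Mb/d of declining conventional production (assumed)
NORTH_SEA_MB_D = 5.0       # Mb/d, from the text
SAUDI_ARABIA_MB_D = 10.0   # Mb/d, rough figure for Saudi output (assumed)

for decline in (0.04, 0.06, 0.08):
    lost = CONVENTIONAL_MB_D * decline  # new capacity needed each year
    print(f"{decline:.0%} decline -> {lost:.1f} Mb/d/yr of new capacity "
          f"(~{lost / NORTH_SEA_MB_D:.1f} North Seas per year)")

# Compounded over ~15 years (2015-2030) at the midpoint 6% rate, the gap
# that new projects must fill just to hold output flat:
years, mid_rate = 15, 0.06
gap = CONVENTIONAL_MB_D * (1 - (1 - mid_rate) ** years)
print(f"~{gap:.0f} Mb/d gap by 2030, "
      f"i.e. ~{gap / SAUDI_ARABIA_MB_D:.1f} Saudi Arabias")
```

Under these assumptions the cumulative gap works out to roughly 40+ Mb/d by 2030, in the same ballpark as the "5 new Saudi Arabias" cited above.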

Although tar sands production may be as high as 4 million b/d in 2030, the easiest, highest-quality sands are being mined and melted in situ now. Production depends on adequate water and natural gas supplies; if either becomes less available, production declines. Meanwhile, the entire infrastructure is rusting and corroding, and maintenance costs in the harsh environment will take an increasingly large toll on Energy Returned On Invested (EROI) as time goes on. The very low EROI of 5 today doesn’t include the cost to transport tar sands oil to Gulf refineries and refine it there. The true EROI is more likely around 3, not even close to the 12 to 14 needed.
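Why the gap between an EROI of 3 and 12–14 matters can be shown with the standard net-energy relation, net fraction = 1 − 1/EROI. A minimal sketch using the EROI values mentioned above:

```python
# Net energy delivered to society for a given EROI.
# net_fraction = 1 - 1/EROI is the standard relation; the EROI values
# (3, 5, 12, 14) are the ones discussed in the text.

def net_energy_fraction(eroi: float) -> float:
    """Fraction of gross output left after paying the energy cost of production."""
    return 1.0 - 1.0 / eroi

for eroi in (14, 12, 5, 3):
    print(f"EROI {eroi:>2}: {net_energy_fraction(eroi):.0%} of gross output is net energy")

# At EROI 3, a third of every barrel's energy is consumed just producing it,
# versus about 8% at EROI 12.
```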

There are other limits to the growth of Canadian tar sands

Refinery limits. According to James Burkhard, managing director of IHS CERA, “Canadian oil sands would eventually hit the limits of existing cross-border capacity by around 2019. Even before that, however, oil sands supply would run up against the capacity limits of Canada’s existing US customers—refineries in the Mid-Continent—to process oil sands production. This could occur as soon as 2015 and is a key reason Canadian producers are seeking access to the much bigger refining market in the US Gulf Coast. To expand the reach of Canadian oil into the U.S. you need a pipeline to the U.S. Gulf Coast which is the largest, most sophisticated refining center in the world” (U.S. SENATE).

Economic. If tar sands oil is unaffordable to society, whether from a deflationary low price (unprofitable for producers) or a demand-destroying high price, “The Market” will be unable to pay for increased oil sands production. A financial crash would also stop money from flowing to new production projects.

On average, the oil price needs to be $85 a barrel for oil sands companies to break even. With low oil prices and huge debt loads, companies could find themselves unable to get financing:

  1. Canadian Oil Sands Ltd. said Thursday it would further slash its dividend and capital spending budget in response to the sharp drop in oil prices and a rising debt load. The company’s net debt stood at $1.51 billion as of Dec. 31, 2014.
  2. Southern Pacific Resource Corp. and Connacher Oil and Gas Ltd. announced last week that they’d hired banks to help raise cash so the companies can avoid missing interest-rate payments. Trading in the bonds shows investors expect less than half the principal to be paid back from the energy companies in Alberta’s oil sands.
  3. Connacher, with about C$977 million in debt, said two days earlier that it had hired Bank of Montreal to look at its liquidity and capital structure after saying in November that cash flow may not be sufficient to cover interest payments on debt and it will need to get additional funds next year to stay in business. The company’s August 2018 notes were trading at about 40 cents on the dollar, according to data compiled by Bloomberg.
  4. MEG Energy Corp., a Calgary-based oil-sands developer, said Dec. 4 it reduced its capital budget for this year by a third and plans to keep 2015 spending flat and funded with cash on hand. Canadian Oil Sands Ltd., said Dec. 3 it’s lowering its dividend by 42 percent.
  5. Canadian Natural Resources Ltd., the nation’s largest producer of heavy oil, has set aside C$2 billion it can remove from its budget of C$8.6 billion next year if prices remain low.
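The squeeze these companies face can be sketched with simple cash-margin arithmetic. The $85/bbl break-even is from the text above; the 100,000 b/d production rate and C$1.5 billion debt load are hypothetical round numbers (the debt figure is the order of magnitude of the Canadian Oil Sands example), used only for scale:

```python
# Illustrative cash-margin sketch. BREAKEVEN is from the text; the
# production rate and debt figures are assumed round numbers for scale.
BREAKEVEN = 85.0           # $/bbl break-even, from the text
PRODUCTION_BBL_D = 100_000  # b/d, hypothetical mid-size producer
NET_DEBT = 1.5e9            # dollars, hypothetical (Canadian Oil Sands scale)

for price in (100.0, 85.0, 50.0):
    margin = price - BREAKEVEN                     # $/bbl above/below break-even
    annual_cash = margin * PRODUCTION_BBL_D * 365  # $/yr before debt service
    print(f"${price:.0f}/bbl -> margin ${margin:+.0f}/bbl, "
          f"~${annual_cash / 1e6:+,.0f}M/yr against "
          f"${NET_DEBT / 1e9:.1f}B of debt")
```

At $50 oil the hypothetical producer burns well over a billion dollars a year before it pays a cent of interest, which is why the companies above were slashing dividends and hiring banks.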

Canadian Association of Petroleum Producers

The 2030 production forecast was cut from 6.4 million b/d to a range of 4.3 to 5.3 million b/d: 835,000 b/d less oil sands in situ, 33,000 b/d less oil sands mining, and 260,000 b/d less conventional oil.

Oil Sands production

  • 2014: 2,200,000 b/d (912,000 b/d mining, 1,200,000 b/d in situ)
  • 2030: 3,000,000 b/d to 4,000,000 b/d

Conventional oil

  • 2015: 1,400,000 b/d
  • 2020: 1,300,000 b/d

Delayed projects

  • Royal Dutch Shell Plc: Carmon Creek, which would produce 80,000 barrels a day, delayed 2 years.

Canceled projects

  • Royal Dutch Shell Plc: 200,000 barrel-a-day Pierre River mine
  • Total SA: C$11 billion, 160,000 barrel-a-day Joslyn mine
  • Cenovus Energy said that it would reduce investment spending by 27%, and set aside plans for two oil sands project expansions.

June 17, 2015. Oil-Sands Megaproject Era Wanes as Suncor Scales Back. by R. Penty. Bloomberg.

The era of the megaproject in Canada’s oil sands is fading. Canadian oil-sands spending is poised to drop 30% to $19 billion this year while total oil production will be 17% lower in 2030 compared with last year’s estimates.

High cost, low crude oil prices, tax increases, lack of pipelines, etc., are causing producers such as Suncor Energy Inc. and Imperial Oil Ltd. to shift to smaller projects like cheaper, bite-sized drilling programs that deliver quicker returns and require less labor.

With crude prices 46% below last year’s, companies globally have delayed or scrapped about $200 billion in big projects, according to a June 16 report from Ernst & Young LLP.

“New oil sands projects, especially the mining projects, are very difficult,” said Amir Arif, an analyst at Cormark Securities Inc. in Calgary. “It’s hard to make the economics work.”


Feb 2, 2015. Lower Oil Prices Strike at Heart of Canada’s Oil Sands Production By I. Austen. New York Times. 

For as long as 400-ton dump trucks have been rumbling around the open pit mines of Canada’s oil sands, crews from Kal Tire have been on hand to replace and repair their $70,000, 13-foot diameter tires. But the relationship, going back over a decade, didn’t spare the company when oil prices began plummeting.

Canada’s oil sands underwent an unprecedented expansion over the last decade. But the roughly $155 billion spending spree left the industry with unusually high production costs.

Now, oil sands operators are scrambling to limit the damage, as crude prices hover near 7-year lows.

Suncor, the largest oil sands operator, announced plans to eliminate about 1,000 contract jobs. Shell Canada said it would cut its oil sands work force by about 10%. Cenovus Energy said that it would reduce investment spending by 27%, and set aside plans for two oil sands project expansions.

The enormous projects are difficult to switch off. Companies must keep pumping crude to cover the sizable debt on their multibillion-dollar investments.

Imperial Oil, controlled by Exxon Mobil, said 4th-quarter earnings dropped by 36%.

While production may keep humming along, the big question is whether oil sands producers can break even at current prices.

An oil sands project takes five to 10 years to design and build, and they have a life span of 25 to 50 years. Fort Hills, a project now under construction in a partnership led by Suncor, has a budget of $10.7 billion.

Once such projects are up and running, the expenses are significant, given the process needed to get the oil-laden bitumen from the ground. It must either be dug up or blasted from under the ground using steam. Two energy-hungry steps are then needed to separate the bitumen from the sand and to turn it into usable oil known as synthetic crude.


Aleklett, K., et al. 2010. The Peak of the Oil Age – analyzing the world oil production Reference Scenario in World Energy Outlook 2008. Energy Policy 38(3): 1398–1414.

Fantazzini, D., et al. 2011. Global oil risks in the early 21st Century. Energy Policy 39(12): 7865–7873.

Höök, M., et al. 2009. Giant oil field decline rates and their influence on world oil production. Energy Policy 37(6): 2262–2272.

U.S. SENATE. Jan 31, 2012. U.S. Global energy outlook for 2012. S. HRG. 112-378.


Should America Export Oil? Congressional record

March 19, 2015. CR 2015-3-19 U.S. Crude oil export policy. Congressional Record. 150 pages.



We’ve been lurching from energy crisis to energy crisis for as long as most of us can probably remember.


Teddy Roosevelt’s Administration, in its 1909 Papers on Conservation of Minerals, found: ‘‘The greatest waste of petroleum has been in exporting crude petroleum and petroleum products to foreign countries. The necessity for it has been due to the sudden increase of production due to the discovery and immediate development of large fields and only by this means has it been possible for the producers to continue to obtain a constant market for petroleum where ever produced. This immediate purchase of product has meant a gain of millions of dollars to the producers.’’ I think the same observation is relevant today.

The U.S. Congress banned the export of crude oil in 1975 after oil exporting nations had used their export capacity as an economic weapon which caused serious damage to the U.S. and to the global economy. Since that time there has never been a reason to revisit the ban. For decades we, in Congress, have debated the best ways to deal with our country’s ever increasing dependence on imported foreign oil. Within the last decade we actually started to see that situation reverse as we started consuming less, producing more and importing less.

Now the oil industry is asking to repeal the export ban. As our oil industry producers produce more at home but our consumption stays relatively flat, our industry wants to sell American oil into the foreign markets where it can get a higher price. But let’s be clear about this.

The United States is and will remain a net oil importer. As we talk about whether we should export oil, we need to keep in mind that for every barrel of oil we export we will be importing even more.

The question before us today is whether this policy change will be in the interest of the American people. As policy makers our obligation is not to any particular industry nor to any particular economic theory. Our responsibility is to decide what policies provide the greatest good to the greatest number of people. As we consider whether this export ban is still the right policy for America, I think we should think about three variables.

First, price. The economic effects of oil and gas prices ripple through our economy. Lower oil prices act like a tax cut for the vast majority of Americans. No one wants to see the price at the pump go up, not in my State of Washington or, I’m sure, throughout the country. In a poll published this week by Allstate in the National Journal Heartland Monitor, 79 percent of Americans said the current price drop has made a difference in their financial situation. The same percentage of respondents said they are using what they save at the pump daily for other necessities or paying down debt. I would rather have Americans get their own fiscal house in order versus paying more at the pump for their transportation needs.

Second, safety. Oil is moving around our country in ways that we never anticipated, even just five years ago. Oil production has increased faster than the infrastructure needed to transport it in the safest ways. My state currently has tens of thousands of barrels of oil traveling through every major population center. And I want to be clear about this: we currently do not have the regulations on the books to safely transport this product. I am going to be working for further measures to make sure that we do get those standards in place.

Third, energy security. No one consumes oil. We consume gasoline, diesel and other products that are made from oil. If we are sending oil abroad while some regions of our country then have to import gasoline, diesel and home heating oil that were refined someplace else, are we exporting…

CARLOS PASCUAL, FELLOW, CENTER ON GLOBAL ENERGY POLICY, COLUMBIA UNIVERSITY; SENIOR VICE PRESIDENT, IHS: I want to address why eliminating the export ban on crude oil will create jobs, raise incomes, stimulate economic growth, lower gasoline prices and strengthen our national security and American influence in the world. From my experience I have seen that lifting the export ban would increase U.S. credibility and leverage in convincing international partners to adopt policies that mirror U.S. interests on Iran, Russia, free trade and even the environment. The ban on crude oil exports is an anachronism that grew out of a period of scarcity in the 1970s. The United States now has the fastest growing oil economy in the world. Since 2008, U.S. crude oil output has increased by 81 percent. This increase exceeds the combined production gains from the rest of the world. The conditions that justified the crude oil export ban in 1973 no longer apply.

RYAN LANCE, CHAIRMAN AND CEO, CONOCOPHILLIPS: After decades of declining production our national fortunes are truly changing. This energy renaissance has benefitted our country both domestically and geopolitically. We really have shifted the oil market’s center of gravity away from unstable sources. Even as President Obama said, ‘‘America is number one in oil and gas.’’ We have a bright energy future. That’s a new concept for us. We did it through American-made technology and ingenuity, but there is a problem. We’re producing more oil than our refineries can process economically. They could install new condensate splitters to process more light oil, but that could cost, on average, $400 million per refinery.


Debate should be grounded in fact. To that point I’d like to describe a survey we released yesterday that simply asked our members what they are doing and what their plans are in the near term to deal with this new light crude oil. In other words, this survey is not based on modeling or hypothetical scenarios, but on actual refiners’ plans. Bottom line: the refiners plan to increase their use of light, sweet crude by over 730,000 barrels a day from 2014 through ’16. This is more than EIA’s projected increase for that time frame. The survey also pointed out the importance of being able to access the new production. For the refiners, getting the crude has been a much bigger issue than refining it. If logistics were not an issue, respondents could process 1.5 million barrels a day more crude in 2016 than they did in 2014 without any further investments than they already have in the works today.

The survey asked about the logistics activities to get new production to refineries. Most crude delivery activity was in the Bakken region of North Dakota, not surprising since this was a new region never connected to the refining system. But older regions in the Permian and Eagle Ford areas of Texas also had significant crude delivery activity.

While these older regions had some delivery infrastructure in place, the reinvigorated production required more infrastructure to get it to refiners. These results underscore, once again, that policies facilitating…

[Most of this has testimony from industries who are for export because they’d financially benefit, and industries against it who would not]



Congressional record. January 30, 2014. Crude oil exports hearing, US Senate HRG. 113–355. 67 pages.


HON. RON WYDEN, U.S. SENATOR FROM OREGON. The fact is energy is not the same thing as blueberries and accordingly it is treated differently under Federal law. The Energy Policy and Conservation Act allows for the export of crude oil only when doing so is in the national interest. There simply isn’t that kind of requirement for blueberries or other commodities. National security, of course, is involved when Americans talk about exporting energy. Right now there are several armed conflicts around the world, in South Sudan, Libya, Mozambique and elsewhere that are certainly being inflamed by fights to control oil. Now I’ll put Oregon blueberries up against just about anything. But the last time I looked, nobody is fighting a war over blueberries. It’s hard to believe that only a few years after campaigns for America’s energy independence, having been dominated by slogans such as ‘‘drill, baby, drill,’’ our country now finds itself having a serious discussion on whether it should export crude oil. Energy independence has been a well-worn staple of virtually every politician’s energy speech for decades. Now our country is in the enviable position of having choices about our energy future.

In any energy debate it’s never very hard to find a voice for the various regions of America, for various industries in America and for various ideological points of view in America. Consumers, however, often don’t have one. I just want it understood that on my watch, the consumer is not going to get short shrift. Now it looks like a number of influential voices want to start exporting oil. I just want to hammer home the point this morning that, for me, the litmus test is how middle class families are going to be affected by changing our country’s policy on oil exports. It is not enough to say some algorithm determines exports are good for the Gross Domestic Product or some other abstract concept. American families and American businesses deserve to know what exports would mean for their specific needs when they fill up at the pump or get their delivery of heating oil. Simply charging forward and hoping for the best is not the way you get the best policy decisions. The responsibility of our committee, and we have always worked on these issues in a bipartisan way, is to make sure consumers are not going to get hammered by the cost of gas going up because of some theory that everything is just going to turn out hunky dory in the end.

We’ve all heard about how it’s a global price. I’m sure we’re going to hear that again today. But a global price does not automatically mean a stable price. If oil stops flowing from Saudi Arabia next week, American consumers and businesses would feel it in a hurry.


In October 2011 DEPA put a stake in the ground and predicted American energy independence by 2020. America’s independent oil and gas producers have unlocked the technology and resources that made this a reality, not the majors. As a result we can today mark the recent 40th anniversary of the OPEC oil embargo by ending oil scarcity in America, and along with it ending the last short-sighted regulation passed during that same period.

  • America now counts its natural gas supplies in centuries.
  • Experts agree we’ll be energy independent in terms of crude oil within this decade. This phenomenon was brought about by a group of independent American producers and missed by the general consensus of the industry.
  • It was in complete contrast to the popular belief that the United States would be running out of oil and gas at the turn of the 21st century.



Behind the U.S. military, Delta is the largest user of jet fuel in the world, and jet fuel is our largest expense. Because of this we are uniquely situated, both as an end user of crude oil and as a refiner, to comment on the crude oil export ban and the current debate over whether to lift it. We believe strongly that the ban on U.S. crude oil exports is good policy and that lifting export limits now would come at the expense of the American consumer, who would pay more for gasoline, more for heating oil and more for the price of an airline ticket. Today the going price for a barrel of U.S. crude is $11 less than a barrel sold in Europe. This price differential can be easily explained. The U.S. crude market is a competitive one with price determined by supply and demand. Once the U.S. domestic market incorporated the increased supply of crude from places like North Dakota, the price of a domestic barrel of oil came down.

It’s clear who gains from this scenario: the oil exploration and production companies, many of which are foreign-owned. With the increased supply of U.S. crude helping to push prices down, these companies want to sell U.S. crude on the global market at higher prices largely determined by OPEC.

Our country’s refinery workers also stand to lose from lifting export limits. Some recent history can help explain why. Before the shale oil boom there was too much refinery capacity in the Northeast and along the Gulf Coast, and many refineries were closing. In fact, Delta purchased its Pennsylvania refinery in 2012 from ConocoPhillips after the facility had been closed for nearly a year. The shale oil revolution breathed new life into U.S. refineries and created jobs for thousands of refinery workers. In thinking about the merits of the export ban we should also consider one of its goals, which was to help achieve energy independence. By independence I mean the ability to meet our energy needs from sources within North America. Notwithstanding the upswing in domestic production, this country still imports around 33 percent of its daily crude oil needs from outside of North America. That’s why exporting U.S. crude makes little sense. If we allow for the export of U.S. crude we’ll have to import more oil from overseas and subject ourselves once again to an increasing degree of price volatility and higher global prices. In sum, the export ban works.


[She makes long POLITICAL arguments about why we should export to Europe to weaken adversaries such as Iran and Russia] Another senator characterized her point of view as: What is your opinion of Ms. Myers Jaffe’s argument that U.S. crude exports, used as a tool of geopolitics, may have the effect of reducing volatility in the global oil market, much of which is driven by geopolitical conflicts?

What we’re really discussing is No. 1, what is the best way to organize free markets and to eliminate distortions and who gets the profit from the exports. Will the refining industry get the profits from the export or the upstream oil and gas industry get the profits from the export or will other industries get the profits from the exports because we’re not in here to discuss banning all energy exports from the United States.

Because we have physical bottlenecks that prevent us from exporting our surplus of natural gas we are currently exporting coal. We need to understand that when you block, like the little boy with the finger in the dike, when you block a hole in one point of the dike, water pressure comes to another point in the dike and something will be exported that’s a different thing. I think the natural gas example is the best example because nobody expected the United States, with its best, new abundance of natural gas and the industry and lower electricity prices that it is promoting, nobody expected the result of that to be the export of coal to Europe. I’m just returning from the World Economic Forum in Davos. I can tell you that the entire discussion focused around Europe’s need to reevaluate their entire energy policies because they are importing coal. Their emissions are going up. They are not drilling for natural gas. They realized that they have these huge distortions that have created a great economic advantage for the U.S. economy and a great disadvantage for the European economic system.

I want to remind the committee and our public that when we had a temporary disruption of gasoline supply during Hurricanes Katrina and Rita, as Senator Landrieu might remember, Europe loaned us gasoline supplies from the mandatory strategic stocks that they require industry to hold. That is how we weathered our crisis. We need to consider our relationship with allies like Europe when we think about our future export policies.

Energy exports will weaken some of our adversaries such as Iran and Russia. US shale gas has already played a key role in weakening Russia’s ability to wield an energy weapon over its European customers by displacement.

Energy exports also improve our balance of trade.


Since 2008 the United States has produced more and used less oil due to advances in drilling technology, innovatively employed by Mr. Hamm and his company, and due to more efficient vehicles. This reduced oil imports and lowered our vulnerability to a foreign oil supply disruption that could cause a gasoline price spike. Lifting the ban on crude oil exports could squander this recently improved energy security and price stability. To maintain these benefits we urge you to defend the existing domestic crude oil export ban.

Although domestic production has grown significantly over the past 5 years, the Energy Information Administration projects that crude oil production will peak in 2019 and begin a steady decline after that.

This energy abundance could be a temporary phenomenon.

The EIA also predicts that in 2014 the U.S. will consume 5 million barrels per day more of oil and liquids than we produce. This gap between demand and supply will continue at least through 2040 growing by 13 percent. This is hardly energy independence.

Our transportation system is almost entirely powered by oil which makes crude oil different from many other commodities. American families, the economy and our energy security are vulnerable to sudden foreign oil supply disruptions and price spikes.

The U.S. imports more oil from the Organization of Petroleum Exporting Countries (OPEC) than from any other single source. OPEC oil is very vulnerable to supply disruptions.8 EIA found that interruptions may occur frequently… for a variety of reasons, including conflicts [and] natural disasters… Total outages among the Organization of the Petroleum Exporting Countries (OPEC) producers recently rose to historically high levels.9

A commission of retired senior U.S. military officers recently noted that ‘‘No matter how close the country comes to oil self-sufficiency, volatility in the global oil market will remain a serious concern.’’10 Oil produced in the United States is significantly less vulnerable to supply disruptions and therefore provides more energy security. There is little benefit to Americans from lifting the ban, particularly since oil companies are already making huge profits even with it. The five largest oil companies—BP, Chevron, ConocoPhillips, ExxonMobil, and Shell—made a combined total profit of $1 trillion over the last decade, based on their quarterly financial reports.11

I think all discussion about energy independence or almost all of it is focused on supply. That is something we control some of and some we don’t.

My view is we need to focus on reducing our demand because that is something we do have control over. It will help save consumers money. It will help reduce the carbon pollution that will cause extreme weather, that will disrupt our energy production and transportation system. So I think we need to really focus on reducing demand. Particularly when it comes to transportation which is fueled over 90 percent by oil, we need to invest in alternatives to oil whether it’s electric vehicles, whether it is natural gas fueled trucks, whether it is public transportation, advanced biofuels. All of those things will give consumer choices so we are not solely dependent on this one fuel to run, essentially run, our economy because as long as we are we’ll still be here having discussions about energy security and energy independence.

The Energy Information Administration (EIA) recently found that Organization of the Petroleum Exporting Countries (OPEC) supply disruptions in 2013 reduced the anticipated growth in world global fuels supply. EIA reported this finding in the just published ‘‘Short-Term Energy Outlook Supplement: Uncertainties in the Short-Term Global Petroleum and Other Liquids Supply Forecast.’’1 EIA determined that In January 2013, EIA’s Short-Term Energy Outlook (STEO) projected that global liquid fuels supply growth would average 1.0 million bbl/d in 2013, but EIA’s latest estimate shows that global supply grew by about 0.6 million bbl/d in 2013. The difference mainly reflects higher-than-expected unplanned supply disruptions among OPEC producers.2 This same analysis found that OPEC disruptions increased in the second half of 2013, reaching 2.6 million bbl/d by the end of the year because of increased disruptions in Libya. The issues underpinning the outages in these countries are unresolved, resulting in uncertain oil production outlooks for these countries.3

As the production of U.S. oil has grown, the share of oil that is imported has declined from 57 percent in 2008 to 40 percent in 2013.4    [my comment: that’s a drop of just 17 percentage points]

This includes a 35 percent reduction in crude oil imports from OPEC since 2008, which was the second largest amount of imports since 1973.5 As U.S. domestic production continues to grow, EIA projects OPEC crude oil imports will decline by 47 percent between 2013 and 2020.6 Despite the important growth in domestic oil production, the U.S. will consume over 5 million barrels of oil and liquids per day in 2014 compared to the amount it produces.7

Unless there are large reductions in demand, the demand-supply gap will grow if the U.S. exports crude oil and liquids. This gap could be filled by oil from both OPEC and non-OPEC nations. If the U.S. begins to export significantly more oil than it did in 2013, it would have to import oil to offset the exports. Oil companies would like to export ‘‘lighter’’ crude oil because there has been a slight increase in light oil production in the U.S. over the past few years.89 In 2013, EIA reported that domestic crude oil was light, with an average API gravity of 35.3. Imported oil was intermediate, with an average API gravity of 28.10 EIA projects that the increase in domestic production will ‘‘replace imports of medium and heavy crude.’’11 If exports were allowed, refiners could import slightly heavier oil as they were before the domestic production increase began in 2009. The three largest importers of heavy oil are Canada, Mexico, and Venezuela, with average imports of 2.6 million barrels per day (mbd), 1.0 mbd, and .8 mbd, respectively, during the first 11 months of 2013.12 Presumably, some of the increase in heavier crude oil to offset any domestic exports will come from Venezuela, which is a member of OPEC. I am not aware of any projections of changes in future oil imports from these three nations if the crude oil export ban is lifted.

As you note, much of the price volatility in the global oil market ‘‘is driven by geopolitical conflicts.’’ I am not an expert in the regional conflicts in the Middle East, Africa, or other oil producing regions. However, even from my lay person’s perspective it seems that ancient sectarian disagreements, government repression, joblessness, and vast disparities of wealth in these nations are a major part of many of these conflicts. It is difficult to imagine, for instance, that the export of one million barrels of oil per day from the U.S. would have much impact on these factors.

In October, New York became the first state to establish a ‘‘strategic gasoline reserve’’ to prevent serious supply disruptions during extreme weather events or other emergencies.34

Amy Myers Jaffe recently promoted a mandate to ensure a certain amount of refined product inventories. She wrote: Regulators [should] mandate a minimum level of mandatory refined product inventories in the United States. Such a system exists in Europe and Japan and allowed Europe the flexibility to provide gasoline to the United States during the production shortfalls that occurred following Katrina and Rita, preventing worse dislocations. The system helped Japan in the aftermath of the Fukushima crisis.

New York plans to store up to 3 million gallons of gasoline for first responders and other motorists. Establishment of additional reserves could supply gasoline in other states in the event of future supply disruptions. Because of technical limitations on storing significant amounts of gasoline for long periods of time, there would probably have to be multiple smaller reserves rather than several large reserves, as with the Strategic Petroleum Reserve. The Senate Energy Committee should explore the need for such gasoline reserves, as well as the technical and economic feasibility of building and maintaining them.

A US government program, reserving the right to use for strategic national emergency releases a portion of this mandated minimum of supplementary industry refined-product stocks (5% or 10% of each refining company's average customer demand), would ensure that needed supplies of gasoline or heating oil are in inventory to ease the impact of sudden weather-related demand surges or accidental disruptions of consumer supplies.[35] I believe that this proposal would help address future extreme weather or other unforeseen events that cause gasoline supply disruptions.

Some of the citations:

1 Energy Information Administration, Short-Term Energy Outlook Supplement: Uncertainties in the Short-Term Global Petroleum and Other Liquids Supply Forecast (U.S. Department of Energy, 2014).

2 Ibid.

3 Ibid.

4 Energy Information Administration, AEO2014 Early Release Overview (U.S. Department of Energy, 2013).

5 Energy Information Administration, "U.S. Imports from OPEC Countries of Crude Oil" (last accessed February 2014).

6 Energy Information Administration, "Imported Liquids by Source, Reference case" (last accessed February 2014).

7 Energy Information Administration, AEO2014 Early Release Overview (U.S. Department of Energy, 2014), Figure 12.

8 Energy Information Administration, Annual Energy Outlook 2013 (U.S. Department of Energy, 2013), Figure 98.

9 Crude oil with an API gravity greater than 35.0 is "light," while oil with an API gravity less than 25.0 is "heavy." In 2013, EIA reported that domestic crude oil was light, with an API of 35.3. Imported oil was intermediate, with an API of 28.

10 Energy Information Administration, Annual Energy Outlook 2013, Figure 98.

11 Energy Information Administration, "WTI–Brent Spread Projected to Average $11 per Barrel in 2014," This Week in Petroleum, February 12, 2014.

12 Energy Information Administration, "U.S. Imports by Country of Origin" (last accessed February 2014).



Implications of declining EROI on oil production 2013 by David J. Murphy

Murphy, David J. December 2, 2013. The implications of the declining energy return on investment of oil production. Philosophical Transactions of the Royal Society A, 2014, vol. 372.

[This is a great paper on EROI, highly recommended. Without EROI studies, we risk building energy-capturing contraptions that end up being useless, consuming more oil than they generate, the Easter Island Heads of our former civilization. Alice Friedemann]

Declining production from conventional oil resources has initiated a global transition to unconventional oil, such as tar sands. Unconventional oil is generally harder to extract than conventional oil and is expected to have a (much) lower energy return on (energy) investment (EROI). Recently, there has been a surge in publications estimating the EROI of a number of different sources of oil, and others relating EROI to long-term economic growth, profitability and oil prices. The following points seem clear from a review of the literature: (i) the EROI of global oil production is roughly 17 and declining, while that for the USA is 11 and declining; (ii) the EROI of ultra-deep-water oil and oil sands is below 10; (iii) the relation between the EROI and the price of oil is inverse and exponential; (iv) as EROI declines below 10, a point is reached when the relation between EROI and price becomes highly nonlinear; and (v) the minimum oil price needed to increase the oil supply in the near term is at levels consistent with levels that have induced past economic recessions. From these points, I conclude that, as the EROI of the average barrel of oil declines, long-term economic growth will become harder to achieve and come at an increasingly higher financial, energetic and environmental cost.


Today’s oil industry is going through a fundamental change: conventional oil fields are being rapidly depleted and new production is being derived increasingly from unconventional sources, such as tar or oil sands and shale (or tight) oil. Indeed, much of the so-called ‘peak oil debate’ rests on whether or not these sources can be produced at rates comparable to the conventional mega-oil fields of yesterday.

What is less discussed is that the production of unconventional oil most likely has a (much) lower net energy yield than the production of conventional crude oil. Net energy is commonly defined as the difference between the energy acquired from some source and the energy used to obtain and deliver that energy, measured over a full life cycle (net energy = E(out) − E(in)). A related concept is the energy return on investment (EROI), defined as the ratio of the former to the latter (EROI = E(out)/E(in)). The 'energy used to obtain energy', E(in), may be measured in a number of different ways. For example, it may include both the energy used directly during the operation of the relevant energy system (e.g. the energy used for water injection in oil wells) as well as the energy used indirectly in various stages of its life cycle (e.g. the energy required to manufacture the oil rig). Owing to these differences, it is necessary to ensure that the EROI estimates have been derived using similar boundaries, i.e. using the same level of specificity for E(in). Murphy et al. [1] suggested a framework for categorizing various EROI estimates, and, where applicable, I will follow this framework in this paper.
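As a minimal sketch (the input and output figures below are invented for illustration, not taken from any of the cited studies), the two metrics can be computed as:

```python
def net_energy(e_out, e_in):
    """Net energy = E(out) - E(in), measured over a full life cycle."""
    return e_out - e_in

def eroi(e_out, e_in):
    """Energy return on investment = E(out) / E(in)."""
    return e_out / e_in

# E(in) should combine direct inputs (e.g. energy for water injection)
# and indirect inputs (e.g. energy embodied in the rig); the same
# boundary must be used when comparing estimates.
direct, indirect = 3.0, 2.0                   # hypothetical inputs, arbitrary units
e_out = 50.0                                  # hypothetical gross output
print(net_energy(e_out, direct + indirect))   # → 45.0
print(eroi(e_out, direct + indirect))         # → 10.0
```

Note that the two metrics carry different information: net energy is an absolute surplus, while EROI is a ratio, which is why the boundary chosen for E(in) matters so much when comparing studies.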

Estimates of EROI are important because they provide a measure of the relative ‘efficiency’ of different energy sources and of the energy system as a whole [2,3]. Since it is this net energy that is important for long-term economic growth [3–6], measuring and tracking the changes in EROI over time may allow us to assess the future growth potential of the global economy in ways that data on production and/or prices cannot.

Over the past few years, there has been a surge in research estimating the EROI of a number of different sources of oil, including global oil and gas [7], US oil and gas [8,9], Norwegian oil and gas [10], ultra-deep-water oil and gas [11] and oil shale [12]. In addition, there have been several publications relating EROI to long-term economic growth, firm profitability and oil prices [3,13–15].

The main objective of this paper is to use this literature to explain the implications that declining EROI may have for long-term economic growth. Specifically, this paper: (i) provides a brief history of the development of EROI and net energy concepts in the academic literature, (ii) summarizes the most recent estimates of the EROI of oil resources, (iii) assesses the importance of EROI and net energy for economic growth and (iv) discusses the implications of these estimates for the future growth of the global economy.

(a) A brief history of energy return on investment

In the late 1960s, Charles Hall studied the energy flows within New Hope Creek, in North Carolina, USA, to understand the migration patterns of the fish within the stream. His conclusions [16] revealed that, by migrating, the fish were able to exploit new sources of food, which, after accounting for the additional energy cost of migration, conferred a large net energy gain upon the fish. In other words, owing to the abundance of food in the new locations, the fish were able to gain enough energy not only to ‘pay’ for the energy expenditure of that migration but also to grow and reproduce. Comparing the energy gained from migration to the energy expended in the migration process was ostensibly the first calculation of EROI.

In the autumn of 1973 the price of oil skyrocketed following the Arab oil embargo (the so-called ‘first oil shock’), which sent most OECD economies tumbling into recession. The apparent vulnerability of OECD nations to spikes in the price of oil led many researchers to focus on the interaction between the economy and energy. Then, in 1974, the journal Energy Policy dedicated a series of articles to the energy costs of production processes. The editor of this series, Peter Chapman, began the series with a paper titled ‘Energy costs: a review of methods’, and observed that ‘this subject is so new and undeveloped that there is no universally agreed label as yet’ [17], and followed up two years later with a second paper [18]. Today this area of research is spread among a number of different disciplines, including, but not limited to, ecological economics, industrial ecology and net energy analysis, and the EROI statistic is just one of many indicators calculated.

Also during this period researchers started using Leontief input–output tables as a way to measure the use of energy within the economy [19–22]. For example, Bullard & Herendeen [23] used a Leontief-type input–output matrix to calculate the energy intensity (in units of joules per dollar) of every major industrial sector of the US economy. Even today this paper serves as a useful model for other net energy analyses [8,24]. In addition, a workshop in Sweden in 1974 and one at Stanford, CA, in 1975 formalized the methodologies and conventions of energy analysis [25,26].

In 1974, the US Congress enacted specific legislation mandating that net energy be accounted for in energy projects. The Nuclear Energy Research and Development Act of 1974 (NERDA) included a provision stating that 'the potential for production of net energy by the proposed technology at the stage of commercial application shall be analyzed and considered in evaluating proposals'. Further influential papers by the Colorado Energy Research Institute, Bullard et al. and Herendeen followed this requirement [27–29]. Unfortunately, the net energy provision within the NERDA was never adopted and was eventually dropped.

In 1979, the Iranian revolution led to a cessation of their oil exports (the second oil shock), which precipitated another spike in the price of oil and squeezed an already strained US economy. Responding to this, and in an attempt to control deficits and expenditure, US President Reagan enacted Executive Order 12291 in 1981. This order mandated that 'regulatory action shall not be undertaken unless the potential benefits to society from the regulation outweigh the potential costs to society'.


In other words, all US regulatory action had to show a net monetary benefit to US society, and the idea of measuring benefits in terms of net energy fell even further from the policy arena.

Net energy analysis remained insignificant in US energy policy debates until the dispute over corn ethanol emerged 25 years later [30,31].

Although the political emphasis had now shifted towards economic analysis, the 1980s still provided useful papers on net energy analysis (e.g. [32]). In 1981, Hall published ‘Energy return on investment for United States petroleum, coal, and uranium’, which marked the first time that the acronym EROI was published in the academic literature [33]. Later that year, Hall & Cleveland [34] published ‘Petroleum drilling and production in the United States: yield per effort and net energy analysis’. This paper analyzed the amount of energy being produced per foot drilled and found that the ratio had been declining steadily for 30 years. Further publications by Hall and colleagues then tested hypotheses relating economic growth to energy use, introduced explicitly the concept of energy return on investment and examined the EROI of most major sources of energy [35,36].

Following growing concern about environmental impacts, climate change and sustainability, documented in the Brundtland Report in 1987 [37], emphasis began to shift from energy analysis to greenhouse gas (GHG) emissions and life-cycle analysis. Life-cycle analysis (LCA) itself was born out of the process and input–output analyses codified in the aforementioned energy literature of the 1970s and 1980s, and can be used to calculate EROI and other net energy metrics. Beginning around the turn of the century, researchers began to recognize the complementarity between LCA and net energy and began publishing on the matter [38].

There was another surge in publications in net energy analysis in the 2000s, due mainly to a growing global interest in renewable energy, and therefore an interest in metrics that compare renewable energy technologies. The debate about whether or not corn ethanol has an EROI greater than one is a good example [30,31]. There have also been a number of studies using the input–output techniques developed in the 1970s to track emissions production and/or resource consumption across regions [39].


Today, research within the field of net energy analysis is expanding rapidly. The main renewable energy options, including, but not limited to, solar photovoltaics, concentrating solar, wind power and biofuels, have each been the focus of studies estimating their net energy yield [31,40,41].

Furthermore, with the expansion of oil production into ultra-deep water, tar sands and other unconventional sources, as well as developments with shale gas, there has been a renewed interest in whether or not these sources of energy have EROI ratios similar to conventional oil and gas, and publications are expected to be forthcoming.

Recent estimates of the energy return on (energy) investment for oil and gas production

There has been a recent resurgence in EROI studies for liquid fuels, beginning with Cleveland [8], who estimated the EROI for oil and gas extraction in the US, Gagnon et al. [7], who estimated the same EROI for the whole world, and a number of additional studies that were contained in a 2011 special issue of the journal Sustainability. This section reviews the findings of these papers. Unless otherwise noted, all of the oil EROIs reported here are equivalent to the standard EROI (EROIstnd), as reported in Murphy et al. [1], which means that both the indirect and direct costs of energy extraction are included in the EROI calculation, but costs further downstream, such as transportation and refinement, have been omitted.

Cleveland [8] estimated two values for the EROI of US oil and gas that differed in the method of aggregating different types of energy carrier. The first method used thermal-equivalent aggregation, i.e. volumes of natural gas and oil are combined in terms of their heat content in joules. The second method uses a Divisia index, developed by Berndt [42], and uses both energy prices and consumption levels to adjust for the 'quality' of each energy carrier. Quality corrections are often used in energy analysis to adjust for the varying economic productivity of different energy carriers—for example, since electricity is more valuable, in terms of potential economic productivity, than coal, it is given more weight in the aggregate measure [43]. Quality-corrected measures better reflect the ability of energy carriers to produce marketable goods and services, so are arguably more useful.
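The contrast between the two aggregation schemes can be sketched roughly as follows. The carrier quantities and prices are invented for illustration, and the simple price weighting is only a crude stand-in for the full Divisia construction, which aggregates growth rates using rolling price-share weights:

```python
# Hypothetical annual output: carrier -> (heat content in EJ, price in $/GJ)
carriers = {"crude oil": (30.0, 12.0), "natural gas": (20.0, 4.0)}

# Thermal-equivalent aggregation: simply sum heat contents.
thermal = sum(energy for energy, _ in carriers.values())

# Crude proxy for quality correction: weight each carrier by its price
# relative to the mean price, so higher-priced ("higher-quality")
# carriers count for more in the aggregate.
mean_price = sum(price for _, price in carriers.values()) / len(carriers)
quality = sum(energy * price / mean_price for energy, price in carriers.values())

print(thermal)  # → 50.0
print(quality)  # → 55.0
```

Because the priced inputs to extraction (electricity, diesel) are weighted more heavily than the unprocessed outputs, a quality-corrected EROI for oil and gas tends to come out lower than the thermal-equivalent one, as the next paragraph notes.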


The EROI values calculated using the energy quality-corrected data for US oil and gas production are consistently lower than those calculated from the non-quality-corrected data. This reflects the fact that many of the inputs to production are high-quality (i.e. high-priced) energy carriers such as electricity and diesel, while the outputs are unprocessed crude oil and natural gas.

Nevertheless, both estimates show the same trend over time: namely, an increase until the early 1970s, a decline until the mid-1980s, a slight recovery until the mid-1990s, followed again by decline (figure 1).

According to Cleveland, the overall downward trend from the 1970s through the mid-1990s is the result of higher extraction costs due to the depletion of oil in the USA. The up and down fluctuations within this aggregate trend are likely to be linked to changes in oil prices influencing the rate of drilling in the USA, with higher prices encouraging more drilling in less promising areas, which in turn leads to a lower yield and a lower aggregate EROI.

Gagnon et al. [7] estimated the EROI for global oil and gas from 1992 to 2006 using the same energy aggregation techniques as Cleveland [8], i.e. both thermal equivalence and Divisia indices. In both cases, the EROI at the wellhead was around 26 in 1992 and increased to 35 in 1999 before declining to 18 in 2006 (figure 1).

It is not surprising that the EROI for global oil and gas is higher than that for the USA considering that oil production peaked in the USA in 1970 due mainly to the depletion of its biggest oil fields, while global production continued to flow and even increase from the mega-oil fields of the Persian Gulf.

US producers are increasingly reliant upon smaller and poorer-quality fields in difficult locations (e.g. deep water) together with the enhanced recovery of oil from existing fields—all of which are relatively energy intensive. In contrast, most OPEC members are still producing oil from high-quality supergiant fields.

The first few years of the Gagnon dataset and the last few years of the Cleveland dataset overlap in the early 1990s and both show a general increasing trend. The results from Gagnon et al. [7] then show that the increase in the early 1990s reaches a maximum in 1999, followed by a monotonic decline through the 2000s. Much like the Cleveland paper, Gagnon et al. assume that the decline is due to the depletion of easy access resources, but, as mentioned earlier, this trend also could be dependent on the trend in oil prices.

In addition to the estimates of Cleveland [8] and Gagnon et al. [7], Guilford et al. [9] estimated the non-quality-corrected EROI of conventional oil and gas production for the USA. They found that the EROI of oil production has declined from a peak of 24 in the 1950s to roughly 11 in 2007 (figure 1). By deriving separate estimates for exploration and production, they show how depletion reduces the rate of production from existing fields and gives incentives for increased exploration for new fields, both of which lower the aggregate EROI. They also suggest that natural gas is subsidizing oil production and that the EROI for oil alone is likely to be much lower.

Figure 1. EROI estimates from three sources, Gagnon et al. [7], Cleveland [8] and Guilford et al. [9]. The Gagnon et al. [7] data represent estimates of the EROI for global oil and gas production using aggregation by Divisia indices. The Cleveland [8] data represent the trend in EROI values for US oil and gas production calculated using the Divisia indices to aggregate energy units. The Guilford et al. [9] data represent estimates of the EROI of US oil production from 1919 to 2007


Despite differences in coverage and approach, the results from these three studies are broadly consistent, namely a general increase in EROI until 1970, then a general decline until the early 1980s, an increase through the mid-1990s and then a decline.


Grandell et al. [10] estimated the EROI of oil production from Norwegian oil fields to be roughly 20. They also note that, as the fields deplete, they expect the EROI to decline further. Brandt [44] estimated that the EROI of Californian oil fields has declined from over 50 in the 1950s to under 10 by the mid-2000s. Similarly, Hu et al. [45] estimated that the EROI of the Daqing oil field, the biggest oil field in China, had declined from 10 in 2001 to 6.5 by 2009.


Two other recent EROI estimates of particular importance are those of Moerschbaecher & Day [11], who estimated the EROI of ultra-deep-water (depths greater than 1524 m or 5000 feet) production in the Gulf of Mexico, and Cleveland & O’Connor [12], who estimated the EROI of oil shale production.

Moerschbaecher & Day [11] estimated the EROI for deep-water oil production to be between 7 and 22. The range in EROI values is due to a sensitivity analysis performed by the authors that incorporated three different energy intensity values as proxies for the energy intensity of the ultra-deep-water oil industry. They also noted that, owing to the large infrastructure requirements of the deep-water oil industry, the real value is probably closer to the lower end of the range presented.

Cleveland & O’Connor [12] estimated that the EROI for oil shale production using either surface retorting or in situ methods was roughly 1.5, much lower than for other unconventional resources. 'Oil shale' refers to the production of oil from kerogen found in sedimentary rock and is distinct from 'shale oil' or, preferably, 'tight oil', which is oil trapped in shale or other impermeable rock. Oil shale is discussed here because the western USA has vast resources of it, but production costs are much higher than for other forms of unconventional oil [46].

The following summarizes the aforementioned studies:

  • EROI 11: average for US oil production today, down from roughly 20 in the early 1970s.
  • EROI 17: global average, down from roughly 30 in 2000.
  • EROI <10: ultra-deep-water oil production.
  • EROI 1.5: oil shale (kerogen), as distinct from tight oil (a.k.a. 'shale' oil).

Energy return on (energy) investment, oil prices, and economic growth


The economic crash of 2008 occurred during the same month that oil prices peaked at an all-time high of $147 per barrel, leading to numerous studies that suggested a causal link between the two [47,48]. In addition, other researchers involved in net energy analysis began examining how EROI relates to both the price of oil and economic growth [3,13,15,49–51].


Murphy & Hall [3] examined the relation between EROI, oil price and economic growth over the past 40 years and found that economic growth occurred during periods that combined low oil prices with an increasing oil supply. They also found that high oil prices led to an increase in energy expenditures as a share of GDP, which has led historically to recessions. Lastly, they found that oil prices and EROI are inversely related (figure 2), which implies that increasing the oil supply by exploiting unconventional and hence lower EROI sources of oil would require high oil prices. This created what Murphy & Hall called the ‘economic growth paradox: increasing the oil supply to support economic growth will require high oil prices that will undermine that economic growth’.


Other researchers have come to similar conclusions to those of Murphy & Hall, most notably the economist James Hamilton [47]. Recently, Kopits [50], and later Nelder & Macdonald [49], reiterated the importance of the relation between oil prices and economic growth in what they describe as a 'narrow ledge' of oil prices. This is the idea that the range, or ledge, of oil prices that are profitable for oil producers but not so high as to hinder economic growth is narrowing as newer oil resources require high oil prices for development, and as economies begin to contract due largely to the effects of prolonged periods of high oil prices. In other words, it is becoming increasingly difficult for the oil industry to increase supply at low prices, since most of the new oil being brought online has a low EROI. Therefore, if we can only increase oil supply through low EROI resources, then oil prices must apparently rise to meet the cost, thus restraining economic growth.

Skrebowski [51] provides another interpretation of the relation between oil prices and economic growth in what he calls the 'effective incremental oil supply cost'. He cautions that there are wide divergences in estimates of oil development costs, depending on what is included and on the treatment of financial costs, profits and overheads; the figures he uses are estimates of the prices needed to justify a new, large development.

According to data provided by Skrebowski, developing new unconventional oil production in Canada (i.e. tar sands) requires an oil price between $70 and $90 per barrel. Skrebowski also indicates that new production from ultra-deep-water areas requires prices between $70 and $80 per barrel. In other words, increasing oil production over the next few years from such resources will require oil prices of at least $70 per barrel. These oil prices may seem normal today considering that the market price for reference crude West Texas Intermediate ranged from $78 to $110 per barrel in 2012 alone, but we should remember that the average oil price during periods of economic growth over the past 40 years was under $40 per barrel, and the average price during economic recessions was under $60 per barrel (dollar values inflation adjusted to 2010) [3]. What these data indicate is that the floor price at which we could increase oil production in the short term would require, at a minimum, prices that are correlated historically with economic recessions.

The work of Heun & de Wit [15] indicates that the price of oil increases exponentially as EROI declines [equation and explanation snipped; see the PDF]. They suggest that the nature of the relation between EROI and the price is such that the effect on price becomes highly nonlinear as EROI declines below 10.
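The shape of such an inverse relation can be illustrated with a hypothetical power-law fit. The constants a and b below are invented for illustration only, not the parameters Heun & de Wit estimated from historical data:

```python
def price_of_oil(eroi, a=500.0, b=1.3):
    """Hypothetical inverse power-law fit: price = a * EROI**-b.
    a and b are made-up constants; any real fit would be calibrated
    to historical price and EROI data."""
    return a * eroi ** -b

# The same absolute drop in EROI raises the implied price far more
# at the low end of the scale than at the high end.
for e in (30, 20, 10, 5, 3):
    print(f"EROI {e:>2}: ${price_of_oil(e):6.2f} per barrel")
```

Whatever the fitted constants, any function of this form steepens sharply below an EROI of about 10, which is the nonlinearity the authors emphasize.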

Figure 2. Relationship between oil prices and EROI. (Adapted from Murphy & Hall [3].)

King & Hall [13] examined the relation between EROI, oil prices and the potential profitability of oil-producing firms, termed energy-producing entities (EPEs). They found that for an EPE to receive a 10% financial rate of return from an energy extraction process that has, for example, an EROI of 11 would require an oil price of roughly $20 per barrel. Alternatively, a 100% financial rate of return for the same extraction project would require $60 per barrel (figure 3). King & Hall also echoed Heun & de Wit, suggesting that the relationship between EROI and profitability becomes nonlinear when the EROI declines below 10.

The pertinent results from the literature summarized in this subsection are as follows:

  • there appears to be a negative exponential relationship between the aggregate EROI of oil production and oil prices;
  • there appears to be a comparable relationship between EROI and the potential profitability of oil-producing firms;
  • the relationship between EROI and profitability appears to become nonlinear as the EROI declines below 10;
  • the minimum oil price needed to increase global oil supply in the near-term is comparable to that which has triggered economic recessions in the past.

Understanding the relationship between energy return on (energy) investment and net energy

The mathematical relation between EROI, net energy and gross energy can be used to explain why, at around an EROI of 10, the relation between EROI and most other variables, such as price, economic growth and profitability, becomes nonlinear. The following equation describes the relation between EROI, gross and net energy [3]:

Equation 3.2: net energy = gross energy × (1 − 1/EROI)

Figure 3. Oil price as a function of EROI. The lines on the figure correspond to various rates of monetary return on investment (MROI). (Adapted from King & Hall [13].)

Using this equation, we can estimate the net energy provided to society from a particular energy source or (rearranging) the amount of gross energy required to provide a certain amount of net energy [52].

We can interpret equation (3.2) as follows:

  • an EROI of 10 delivers to society 90% of the gross energy extracted as net energy (1 − 1/10 = 90%)
  • an EROI of 5 will deliver to society 80% (1 − 1/5 = 80%)
  • an EROI of 2 will deliver only 50% (1 − 1/2 = 50%).
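A few lines of code tabulate equation (3.2) across a range of EROI values and make the cliff visible directly:

```python
def net_fraction(eroi):
    """Share of gross energy delivered as net energy: 1 - 1/EROI (eq. 3.2)."""
    return 1.0 - 1.0 / eroi

for e in (100, 50, 20, 11, 10, 5, 2, 1.5):
    print(f"EROI {e:>5}: {net_fraction(e):5.1%} delivered to society")
```

Between an EROI of 100 and 10 the delivered share falls by only nine percentage points, but between 10 and 2 it falls by forty: that asymmetry is the 'net energy cliff' discussed below.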

This exponential relation between gross and net energy means that there is little difference in the net energy provided to society by an energy source with an EROI above 10, whether it is 11 or 100, but a very large difference in the net energy provided to society by an energy source with an EROI of 10 and one with an EROI of 5. This exponential relation between gross and net energy flows has been called the 'net energy cliff' [53] and it is the main reason why there is a critical point in the relation between EROI and price at an EROI of about 10 (figure 4).

Figure 4. The 'net energy cliff' graph, showing the relation between net energy and EROI. As EROI declines, the net energy as a percentage of total energy extracted declines exponentially. Note that the x-axis is in reverse order. (Adapted from Mearns [53].)


Calculating the minimum energy return on (energy) investment at the point of energy acquisition for a sustainable society

‘The true value of energy to society is the net energy, which is that after the energy costs of getting and concentrating that energy are subtracted.’ H. T. Odum [6]

According to equation (3.2), as EROI declines, the net energy provided to society declines as well, and, at some point, the amount of net energy will be insufficient to meet existing demand.

The point at which the EROI provides just enough net energy to society to sustain current activity represents the minimum EROI for a sustainable society.

But estimating empirically the actual minimum EROI for society is challenging. Hall et al. [24] estimated that the minimum EROI required to sustain the vehicle transportation system of the USA was 3. Since their calculation included only the energy costs of maintaining the transportation system, it is reasonable to expect that the minimum EROI for society as a whole could be much higher.

Exploring the minimum EROI for a sustainable society is beyond the scope of this paper. Instead, I will examine how, in theory, the minimum EROI could be calculated by using some simple models. I will first do this by examining how the idea of net energy grew from analyzing the energy budgets of organisms.

The energy that an organism acquires from its food is its gross energy intake. Let us assume, for simplicity’s sake, that an organism consumed 10 units of gross energy, but to access this food it expended 5 units of energy. Given these parameters, the EROI is 2 (=10/5) and the net energy is 5. It is important to note that the expended energy created an energy deficit (5 units) that must be repaid from the gross energy intake (10 units) before any growth, for example, in the form of building fat reserves or reproduction, can take place.

An economy also must have an influx of net energy to grow. Let us assume that Economy A produces 10,000 units of energy at an EROI of 10, which means that the energy cost of acquisition is 1,000 units and the net energy is 9,000. Like organisms, economies also have energy requirements that must be met before any investments in growth can be made. Indeed, researchers are now measuring the ‘metabolism of society’ by mapping energy consumption and flow patterns over time [54]. For example, economies must invest energy simply to maintain transportation and building infrastructure, to provide food and security, as well as to provide energy for direct consumption in transportation vehicles, households and business, etc. The energy flow to society must first pay all of these metabolic energy costs before enabling growth, such as constructing new buildings, roads, etc.

Building on this idea of societal metabolism, we can gain additional insight into the relationship between EROI and economic growth by differentiating between three main uses of energy by society:

  1. Metabolism, which could be described as the energy and material costs associated with the maintenance and replacement of populations and capital depreciation (examples include food consumption, bridge repair or doctor visits)
  2. Consumption: the expenditure of energy that does not increase populations or capital accumulation and is not necessary for metabolism (examples include purchasing movie tickets or plane tickets for vacation; in general, items purchased with disposable income)
  3. Growth, the investment of energy and materials in new populations and capital over and above that necessary for metabolism (examples include building new houses, purchasing new cars, increasing populations).
Figure 5. (a-d) Flow diagrams relating net energy, EROI and gross energy production for a hypothetical Economy A. Each diagram describes the energy flows according to a different EROI, where the EROI is (a) 10, (b) 5, (c) 2 and (d) 1.5

Figure 5 (a-d) illustrates how the flows of energy to the three categories change as EROI declines. Let us assume that the metabolism of Economy A requires the consumption of 5,000 units of energy per year. So, of the 10,000 units of energy extracted, 1,000 must be reinvested to produce the next 10,000, and another 5,000 are invested to maintain the infrastructure of Economy A. This leaves 4,000 units of net energy that could be invested in either consumption or growth (figure 5a).

As society transitions to lower EROI energy sources, a portion of net energy that was historically used for consumption and/or growth will be transferred to the energy extraction sector. This transfer decreases the growth and consumption potential of the economy. For example, let us assume that, as energy extraction becomes more difficult in Economy A, it requires an additional 1,000 units of energy (2,000 total) to maintain its current production of gross energy, decreasing the EROI from 10 to 5 and the net energy from 9,000 to 8,000. If the metabolism of the economy remains at 5,000 units of energy, Economy A now has only 3,000 units of energy to invest in growth and/or consumption (figure 5b).

If the EROI for society were to decline to 2, the amount of energy that could previously be invested in growth and consumption would be transferred completely to the energy extraction sector (figure 5c). Thus, given the assumed metabolic needs of Economy A in this example, an EROI of 2 would be the minimum EROI needed to provide enough energy to pay for the current infrastructure requirements of Economy A, or, to put it another way, an EROI of 2 would be the minimum EROI for a sustainable Economy A. If the EROI were to decline below 2, for example in some biofuel systems [31], then the net energy provided to society would not be enough to maintain the infrastructure of Economy A, resulting in physical degradation and economic contraction (figure 5d).
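The Economy A arithmetic above can be written down directly. This toy model (Python; the 10,000-unit gross production and 5,000-unit metabolism are the paper's hypothetical numbers) reproduces the four panels of figure 5:

```python
# Toy model of Economy A from the text: gross production of 10,000 units
# and a fixed metabolic requirement of 5,000 units, with varying EROI.
# "Surplus" is the energy left for consumption and/or growth; zero surplus
# marks the minimum EROI for a sustainable Economy A.
GROSS = 10_000
METABOLISM = 5_000

def surplus(eroi, gross=GROSS, metabolism=METABOLISM):
    """Energy available for growth/consumption after extraction and metabolic costs."""
    extraction_cost = gross / eroi          # energy reinvested to keep producing
    net = gross - extraction_cost           # net energy delivered to society
    return net - metabolism                 # what remains after maintenance

for eroi in (10, 5, 2, 1.5):                # the four panels of figure 5(a-d)
    print(f"EROI {eroi:>4}: surplus = {surplus(eroi):8.1f} units")
```

At EROI 10 the surplus is 4,000 units and at EROI 5 it is 3,000, matching the text; at EROI 2 the surplus is exactly zero (the minimum sustainable EROI for these assumptions), and at 1.5 it is negative, i.e. the economy must contract.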

There are a few caveats to this discussion of the minimum EROI that need to be addressed. First, it is important to remember that this is a simple example with hypothetical numbers, and, as such, the minimum EROI for our current society is probably, and maybe substantially, higher. Second, over time, efficiency improvements within the economy can mitigate the impact that lower EROI resources have on economic growth by increasing the utility of energy. That said, the exact relation between energy efficiency improvements and declining EROI is yet to be determined. Third, the model assumes that metabolic needs will be met first, then consumption and growth. This may not necessarily be the case.

It is quite possible that there could be growth at the expense of meeting metabolic needs. Likewise, we can consume at the expense of growth or metabolism. Either way, the net energy deficit that results from declining EROI will become apparent in one of the three sectors of energy use.

The gross energy requirement ratio

‘Now, here, you see, it takes all the running you can do, to keep in the same place.’ The Red Queen, in Through the looking-glass [55, p. 15]

Another way to explore the impact that a decline in EROI can have on net energy flows to society is to consider the ‘gross energy requirement ratio’ (GERR). The GERR indicates the proportional increase or decrease in gross energy production that is required to maintain the net energy flow to society given a change in the EROI of the energy acquisition process. The GERR is calculated by dividing the gross energy requirement (GER) of the substitute energy source by the GER of the reference energy source.

The GER is the minimum amount of gross energy production required to produce one unit of net energy.

Both of these equations are outlined below [2]:


Equation 5.1 GER(X) = EROI(X) / (EROI(X) – 1)

Equation 5.2 GERR = GER(X) / GER(REF)

The GERR is most useful when examining how transitioning from high to low EROI energy sources will impact the net energy flow to society. For example, the average barrel of oil in the USA is produced at an EROI of roughly 11 [9]. Using equation (5.1), an EROI of 11 results in a GER of 1.1, i.e. 1.1 units of gross energy must be extracted to deliver 1 unit of net energy to society, with the 0.1 extra being the amount of energy required for the extraction process. For comparison, delivering one unit of net energy from an oil source with an EROI of 5 would require the extraction of 1.25 units of oil. If conventional oil at an EROI of 11 is our reference GER, and our substitute energy resource has an EROI of 5, then the GERR is 1.14. This GERR value indicates that, if society were to transition from an energy source with an EROI of 11 to one with an EROI of 5, then gross energy production would have to increase by 14% simply to maintain the same net energy flow to society. The net effect of declining EROI is to increase the GERR, requiring the extraction of larger quantities of gross energy simply to sustain the same net energy flow to society (figure 6).
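Equations (5.1) and (5.2) can be checked numerically. A short sketch using the paper's own figures (reference EROI of 11 for average US oil, substitute EROI of 5):

```python
# Gross energy requirement (GER): units of gross energy that must be
# extracted to deliver one unit of net energy. Equation (5.1):
#   GER = EROI / (EROI - 1)
def ger(eroi):
    return eroi / (eroi - 1.0)

# GER ratio (GERR): proportional change in gross extraction needed to hold
# the net energy flow to society constant when switching sources.
# Equation (5.2): GERR = GER(substitute) / GER(reference)
def gerr(eroi_substitute, eroi_reference):
    return ger(eroi_substitute) / ger(eroi_reference)

print(f"GER at EROI 11: {ger(11):.2f}")    # ~1.10 units gross per unit net
print(f"GER at EROI 5:  {ger(5):.2f}")     # 1.25 units gross per unit net
print(f"GERR (11 -> 5): {gerr(5, 11):.2f}")
```

The resulting GERR of about 1.14 is the 14% increase in gross extraction quoted in the text, and iterating `gerr` over a range of substitute EROI values reproduces the curve in figure 6.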

Implications for the future of economic growth

The implication of these arguments is that, if we try to pursue growth by using sources of energy of lower EROI, perhaps by transitioning to unconventional fossil fuels, long-term economic growth will become harder to achieve and come at an increasingly higher financial, energetic and environmental cost.

Figure 6. The GERR as a function of declining EROI. In this example, the reference EROI was 11. As such, the GERR value associated with an EROI of 4 represents the proportional increase in gross energy required to deliver one unit of net energy if society transitioned from an energy source with an EROI of 11 to one with an EROI of 4.


Revolutionary technological advancement is really the only way in which unconventional oil can be produced with a high EROI, and thus enhance the prospects for long-term economic growth and reduce the associated financial, energetic and environmental costs. This technological advancement would have to increase the energy efficiency of unconventional oil extraction or allow for increased oil recovery from fields discovered already [56]. Alternatively, there could be massive substitution from oil to high EROI renewables such as wind or hydropower [57].

It is difficult to assess directly how much technological progress is being or will be made by an industry, but we can get a glimpse as to how the oil industry is faring by comparing how production is responding to effort. If new technological advancements, such as hydraulic fracturing and horizontal drilling, represent the types of revolutionary technological breakthroughs that are needed, then we should at least see production increasing relative to effort. The data, however, do not indicate that this is the case. From 1987 to 2000, when the US oil industry increased the number of rigs used to produce oil, there was, as expected, a corresponding increase in the amount of oil produced (figure 7 not shown, see paper). But from 2001 to 2012 the trend shows very little correlation between drilling effort and oil production.

Biofuels are the only currently available non-fossil substitute for oil that is being produced at any sizable scale, but factors such as economic cost, land-use requirements and competition with food production restrict their potential contribution (see [58]). Most importantly, the EROI of most large-scale biofuels is between 1 and 3 [30,31], which means that we would be substituting towards a fuel that is even less useful, from a net energy perspective, for long-term economic growth. Others claim that substituting towards renewable electricity is the key; for example, Jacobson & Delucchi [59] argue that wind and solar energy could power global society by 2030. Even if their analysis stands up to scrutiny (and some claim that it does not [60,61]), the high price of oil in the transition period may provide a significant constraint on economic growth. Without high levels of economic growth, the investment capital needed to build, install and operate renewable energy will be hard to acquire.

The other option is to construct coal-to-liquids (CTL) or gas-to-liquids (GTL) operations, but even these solutions have their own difficulties (see [62]). For example, both CTL and GTL operations represent an energy conversion process, not an energy extraction process, which, in terms of EROI, simply adds to the cost of producing the final fuel and lowers the overall EROI. CTL and/or GTL will most probably lead to a significant increase in GHG emissions [63]. For GTL, there is a narrow window of low gas prices and high oil prices in which the GTL process can remain profitable [63]. Achieving profitability is easier in a CTL operation because of cheap coal, but the future availability, quality and cost of that resource are also becoming uncertain [64]. And, again, it will most probably be decades until any sizable portion of global demand for oil is met from a series of GTL or CTL plants, and in the meantime economies will still be struggling to grow in a high oil price, low oil EROI environment.

Lastly, increasing oil production from low EROI resources is expected to degrade the global environment at an accelerated rate, for two main reasons. First, on average, the environmental impact per unit of energy is larger for unconventional oil than for conventional oil. GHG emissions, for example, are somewhere between 15% and 60% higher for gasoline and diesel produced from tar sands when compared to that produced from conventional petroleum [65,66]. Similarly, the water used per unit of energy produced is also much higher for most low EROI sources of energy [67]. Second, declining EROI increases the GERR. As society switches to lower EROI resources, simply maintaining the flow of net energy to society will require a proportionally larger amount of gross energy extraction, thus increasing the environmental impact associated with that extraction. This evidence indicates that the environmental impacts of energy extraction are most probably related exponentially to EROI, mimicking the relation between EROI and price (figure 8). This relationship holds as long as the flow of net energy to society remains the same or even increases despite a decrease in EROI. The relationship weakens if, when met with lower EROI resources, we simply decrease our effort in energy acquisition, i.e. embrace conservation.

The ecology of societal succession

‘Energy fixed tends to be balanced by the energy cost of maintenance in the mature or “climax” ecosystem.’ E. P. Odum [68]

Societal succession from the beginning of the Industrial Revolution to today mimics ecosystem succession in important and illuminating ways. The early stages of ecosystem development are marked by rapid growth (figure 9a), where the energy fixed through photosynthesis (gross photosynthesis) is greater than the energy consumed through respiration, resulting in a gain of net energy in the ecosystem. This gain in net energy leads to the accumulation of biomass (the energy equivalent of biomass in the context of society is embodied energy). As Odum [68] observed, as succession occurs, the gross photosynthesis of the ecosystem tends to balance with respiration as the steady-state, or ‘climax’, successional stage is reached. In other words, in the climax stage, almost all of the energy fixed by the ecosystem is used in maintenance respiration by the biomass that has accumulated over the years.

The simple diagram of forest succession (figure 9a, not shown) is reflected in societal succession (figure 9b, not shown) since the beginning of the Industrial Revolution. Figure 9 shows how gross photosynthesis is equivalent to humanity’s gross energy production, i.e. the total biomass, coal, oil, natural gas, etc. produced each year. Forest respiration is the equivalent of societal metabolism, i.e. the energy and material costs associated with the maintenance and replacement of populations and capital depreciation. The accumulation of biomass is the equivalent of societal growth, i.e. investments in populations and infrastructure that will increase overall societal metabolism. Lastly, the net energy provided to society is that left after accounting for the metabolic needs of society (i.e. net energy = gross energy production – societal metabolism). Historically, we have simply found and produced more energy as the metabolism (i.e. energy demand) of society grew. Indeed, the exponential increase in global economic output over the past 200 years is highly correlated with the same exponential increase in energy consumption (figure 10).

The question is: can global society continue to produce enough energy to outpace the increased metabolic requirements of a growing, and now very large, built infrastructure? Answering this question for each energy source is clearly beyond the scope of this paper, but the answer for oil seems clear, as the production of conventional oil seems to have peaked in 2008 [71], and both unconventional oil and other feasible substitutes have a much lower EROI. Both of these factors are likely to place contractionary pressure on the global economy by decreasing the flow of net energy to society.

The main difference between society and nature, in terms of figure 9, is in the reason for the peak and initial decline in gross energy acquisition. In forests and other natural ecosystems, the amount of gross photosynthesis declines and reaches parity with respiration as the forces of competition and natural selection create a steady-state, or ‘climax’, ecosystem. These forces exist also for society, but they are in the form of declining EROI, geological depletion, environmental degradation, climate change, water pollution, air pollution, land-cover change and such, and all the other factors that are occurring today that make it harder and harder to produce energy easily. In the end, ecosystems are able to successfully transition from a growth-oriented structure to a steady state; it is unclear whether society will be able to do the same.

Figure 10. GDP as a function of energy consumption over the past 200 years. (Adapted from Kremmer [69] and Smil [70].)




The concept of energy return on investment (EROI) was born out of ecological research in the early 1970s, and has grown over the past 30 years into an area of study that bridges the disciplines of industrial ecology, economics, ecology, geography and geology, just to name a few. The most recent estimates indicate that the EROI of conventional oil is between 10 and 20 globally, with an average of 11 in the USA.

The future of oil production resides in unconventional oil, which has, on average, higher production costs (in terms of both money and energy) than conventional oil, and should prove in time to have a (much) lower EROI than conventional oil. Similar comments apply to other substitutes such as biofuels. The lack of peer-reviewed estimates of the EROI of such resources indicates a clear need for further investigation.

Transitioning to lower EROI energy sources has a number of implications for global society.

  1. It will reallocate energy that was previously destined for society towards the energy industry alone. This will, over the long run, lower the net energy available to society, creating significant headwinds for economic growth.
  2. Transitioning to lower EROI oil means that the price of oil will remain high compared to the past, which will also place contractionary pressure on the economy.
  3. As we try to increase oil supplies from unconventional sources, we will accelerate the resource acquisition rate, and therefore the degradation of our natural environment.

It is important to realize that the problems related to declining EROI are not easily solved. Renewable energy may indeed represent the future of energy development, but renewables are a long time off from displacing oil. Lastly, it seems apparent that the supply-side solutions (more oil, renewable energy, etc.) will not be sufficient to offset the impact that declining EROI has on economic growth. All of this evidence indicates that it is time to re-examine the pursuit of economic growth at all costs, and maybe examine how we can reduce demand for oil while trying to maintain and improve quality of life. A good summary of these problems is also given in Sorrell [72].

For society, we can either dictate our own energy future by enacting smart energy policies that recognize the clear and real limits to our own growth, or we can let those limits be dictated to us by the physical constraints of declining EROI. Either way, both the natural succession of ecosystems on Earth and declining EROI of oil production indicate that we should expect the economic growth rates of the next 100 years to look nothing like those of the last 100 years.


  1. Murphy DJ, Hall CAS, Dale M, Cleveland C. 2011 Order from chaos: a preliminary protocol for determining the EROI of fuels. Sustainability 3, 1888–1907. (doi:10.3390/su3101888)
  2. Mulder K, Hagens NJ. 2008 Energy return on investment: toward a consistent framework. Ambio 37, 74–79. (doi:10.1579/0044-7447(2008)37[74:EROITA]2.0.CO;2)
  3. Murphy DJ, Hall CAS. 2011 Energy return on investment, peak oil, and the end of economic growth. Ann. NY Acad. Sci. 1219, 52–72. (doi:10.1111/j.1749-6632.2010.05940.x)
  4. Hall CAS, Powers R, Schoenberg W. 2008 Peak oil, EROI, investments and the economy in an uncertain future. In Biofuels, solar and wind as renewable energy systems: benefits and risks (ed. D Pimentel). Houten, The Netherlands: Springer Netherlands.
  5. Odum HT. 1971 Environment, power, and society. New York, NY: Wiley.
  6. Odum HT. 1973 Energy, ecology, and economics. Ambio 2, 220–227.
  7. Gagnon N, Hall CAS, Brinker L. 2009 A preliminary investigation of the energy return on energy invested for global oil and gas extraction. Energies 2, 490–503. (doi:10.3390/en20300490)
  8. Cleveland C. 2005 Net energy from the extraction of oil and gas in the United States. Energy 30, 769–782. (doi:10.1016/
  9. Guilford MC, Hall CAS, O’Connor P, Cleveland CJ. 2011 A new long term assessment of energy return on investment (EROI) for U.S. oil and gas discovery and production. Sustainability 3, 1866–1887. (doi:10.3390/su3101866)
  10. Grandell L, Hall CAS, Höök M. 2011 Energy return on investment for Norwegian oil and gas from 1991 to 2008. Sustainability 3, 2050–2070. (doi:10.3390/su3112050)
  11. Moerschbaecher M, Day JW. 2011 Ultra-deepwater Gulf of Mexico oil and gas: energy return on financial investment and preliminary assessment of energy return on energy investment. Sustainability 3, 2009–2026. (doi:10.3390/su3102009)
  12. Cleveland CJ, O’Connor PA. 2011 Energy return on investment (EROI) of oil shale. Sustainability 3, 2307–2322. (doi:10.3390/su3112307)
  13. King CW, Hall CAS. 2011 Relating financial and energy return on investment. Sustainability 3, 1810–1832. (doi:10.3390/su3101810)
  14. Dale M, Krumdieck S, Bodger P. 2012 Global energy modelling—a biophysical approach (GEMBA). II. Methodology. Ecol. Econ. 73, 158–167. (doi:10.1016/j.ecolecon.2011.10.028)
  15. Heun MK, de Wit M. 2012 Energy return on (energy) invested (EROI), oil prices, and energy transitions. Energy Pol. 40, 147–158. (doi:10.1016/j.enpol.2011.09.008)
  16. Hall CAS. 1972 Migration and metabolism in a temperate stream ecosystem. Ecology 53, 585– (doi:10.2307/1934773)
  17. Chapman PF. 1974 Energy costs: a review of methods. Energy Pol. 2, 91–103. (doi:10.1016/0301-4215(74)90002-0)
  18. Chapman P. 1976 Energy analysis: a review of methods and applications. Omega 4, 19–33. (doi:10.1016/0305-0483(76)90036-0)
  19. Carter AP. 1974 Applications of input–output analysis to energy problems. Science 184, 325–329. (doi:10.1126/science.184.4134.325)
  20. Estrup C. 1974 Energy consumption analysis by application of national input–output tables. Ind. Market. Manage. 3, 193–210. (doi:10.1016/0019-8501(74)90007-8)
  21. Nilsson S. 1974 Energy analysis: a more sensitive instrument for determining costs of goods and services. Ambio 3, 222–224.
  22. Nilsson S, Kristoferson L. 1976 Energy analysis and economics. Ambio 5, 27–29.
  23. Bullard C, Herendeen R. 1975 The energy costs of goods and services. Energy Pol. 3, 268–278. (doi:10.1016/0301-4215(75)90035-X)
  24. Hall CAS, Balogh S, Murphy DJR. 2009 What is the minimum EROI that a sustainable society must have? Energies 2, 25–47. (doi:10.3390/en20100025)
  25. Slesser M (ed.). 1974 Energy Analysis Workshop on Methodology and Conventions, Guldsmedshyttan, Sweden, 25–30 August 1974. IFIAS Rep. no. 6. Stockholm, Sweden: International Federation of Institutes for Advanced Study.
  26. Connolly TJ, Spraul JR (eds). 1975 Report of the NSF–Stanford Workshop on Net Energy Analysis, Palo Alto, CA, 25–28 August. Stanford, CA: Institute of Energy Studies, Stanford University.
  27. CERI. 1976 Net energy analysis: an energy balance study of fossil fuel resources. Golden, CO: Colorado Energy Research Institute.
  28. Bullard CW, Penner PS, Pilati DA. 1978 Net energy analysis: handbook for combining process and input–output analysis. Resour. Energy 1, 267–313. (doi:10.1016/0165-0572(78)90008-7)
  29. Herendeen R. 1978 Input–output techniques and energy cost of commodities. Energy Pol. 6, 162–165. (doi:10.1016/0301-4215(78)90039-3)
  30. Farrell AE, Plevin RJ, Turner BT, Jones AD, O’Hare M, Kammen DM. 2006 Ethanol can contribute to energy and environmental goals. Science 311, 506–508. (doi:10.1126/science.1121416)
  31. Murphy DJ, Hall CAS, Powers B. 2011 New perspectives on the energy return on (energy) investment (EROI) of corn ethanol. Environ. Dev. Sustain. 13, 179–202. (doi:10.1007/s10668-010-9255-7)
  32. Costanza R. 1980 Embodied energy and economic valuation. Science 210, 1219–1224. (doi:10.1126/science.210.4475.1219)
  33. Hall CAS, Cleveland CJ, Berger M. 1981 Energy return on investment for United States petroleum, coal, and uranium. In Energy and ecological modeling (ed. W Mitsch), pp. 715–724. Amsterdam, The Netherlands: Elsevier.
  34. Hall CAS, Cleveland CJ. 1981 Petroleum drilling and production in the United States: yield per effort and net energy analysis. Science 211, 576–579. (doi:10.1126/science.211.4482.576)
  35. Hall CAS, Kaufmann R, Cleveland CJ. 1986 Energy and resource quality: the ecology of the economic process. New York, NY: Wiley.
  36. Cleveland CJ, Costanza R, Hall CAS, Kaufmann R. 1984 Energy and the U.S. economy: a biophysical perspective. Science 225, 890–897. (doi:10.1126/science.225.4665.890)
  37. Brundtland GH. 1987 Our common future. New York, NY: United Nations.
  38. de Haes HAU, Heijungs R. 2007 Life-cycle assessment for energy analysis and management. Appl. Energy 84, 817–827. (doi:10.1016/j.apenergy.2007.01.012)
  39. Wiedmann TA. 2009 A review of recent multi-region input–output models used for consumption-based emission and resource accounting. Ecol. Econ. 69, 211–222. (doi:10.1016/j.ecolecon.2009.08.026)
  40. Raugei M, Fullana-i-Palmer P, Fthenakis V. 2012 The energy return on energy investment (EROI) of photovoltaics: methodology and comparisons with fossil fuel life cycles. Energy Pol. 45, 576–582. (doi:10.1016/j.enpol.2012.03.008)
  41. Kubiszewski I, Cleveland CJ, Endres PK. 2010 Meta-analysis of net energy return for wind power systems. Renew. Energy 35, 218–225. (doi:10.1016/j.renene.2009.01.012)
  42. Berndt ER. 1978 Aggregate energy, efficiency and productivity measurement. Annu. Rev. Energy 3, 225–273. (doi:10.1146/
  43. Cleveland CJ, Kaufmann RK, Stern DI. 2000 Aggregation and the role of energy in the economy. Ecol. Econ. 32, 301–317. (doi:10.1016/S0921-8009(99)00113-5)
  44. Brandt AR. 2011 Oil depletion and the energy efficiency of oil production: the case of California. Sustainability 3, 1833–1854. (doi:10.3390/su3101833)
  45. Hu Y, Feng L, Hall CAS, Tiang D. 2011 Analysis of the energy return on investment (EROI) of the huge Daqing oil field in China. Sustainability 3, 2323–2338. (doi:10.3390/su3122323)
  46. Farrell AE, Brandt AR. 2006 Risks of the oil transition. Environ. Res. Lett. 1, 1–6. (doi:10.1088/1748-9326/1/1/014004)
  47. Hamilton J. 2009 Causes and consequences of the oil shock of 2007–08. In Brookings Papers on Economic Activity (eds D Romer, J Wolfers), pp. 215–283. Washington, DC: The Brookings Institution.
  48. Rubin. 2008 Just how big is Cleveland? Toronto, Canada: CIBC World Markets Inc.
  49. Nelder C, Macdonald G. 2012 There will be oil, but at what price? Harvard Business Review Blog Network [Internet]. 1 October 2011. See
  50. Kopits S. 2009 A peak oil recession. In ASPO-8: The ASPO 2009 Int. Peak Oil Conf., Denver, CO, 11–13 October. Uppsala, Sweden: Association for the Study of Peak Oil & Gas.
  51. Skrebowski C. 2011 A brief economic explanation of peak oil. In Oil Depletion Analysis Centre (ODAC) Newsletter, 16 September 2011 [online]. See
  52. Deng S, Tynan GR. 2011 Implications of energy return on energy invested on future total energy demand. Sustainability 3, 2433–2442. (doi:10.3390/su3122433)
  53. Mearns E. 2008 The global energy crisis and its role in the pending collapse of the global economy. Presentation to the Royal Society of Chemists, Aberdeen, Scotland, 29 October 2008. See
  54. Giampietro M, Mayumi K, Sorman AH. 2010 Assessing the quality of alternative energy sources: energy return on investment (EROI), the metabolic pattern of societies and energy statistics. Institut de Ciencia, Universitat Autonoma de Barcelona.
  55. Carroll L. 1871 Through the looking-glass. London, UK: Macmillan.
  56. Muggeridge A, Cockin A, Webb K, Frampton H, Collins I, Moulds T, Salino P. 2014 Recovery rates, enhanced oil recovery and technological limits. Phil. Trans. R. Soc. A 372, 20120320. (doi:10.1098/rsta.2012.0320)
  57. Murphy DJ, Hall CAS. 2010 Year in review—EROI or energy return on (energy) invested. Ann. NY Acad. Sci. 1185, 102–118. (doi:10.1111/j.1749-6632.2009.05282.x)
  58. Timilsina GR. 2014 Biofuels in the long-run global energy supply mix for transportation. Phil. Trans. R. Soc. A 372, 20120323. (doi:10.1098/rsta.2012.0323)
  59. Jacobson MZ, Delucchi MA. 2009 A path to sustainable energy by 2030. Scient. Am. 301, 58–65. (doi:10.1038/scientificamerican1109-58)
  60. Trainer TA. 2012 A critique of Jacobson and Delucchi’s proposals for a world renewable energy supply. Energy Pol. 44, 476–481. (doi:10.1016/j.enpol.2011.09.037)
  61. Trainer TA. 2010 Can renewables etc. solve the greenhouse gas problem? The negative case. Energy Pol. 38, 4107–4114. (doi:10.1016/j.enpol.2010.03.037)
  62. Höök M, Davidsson S, Johansson S, Tang X. 2014 Decline and depletion rates of oil production: a comprehensive investigation. Phil. Trans. R. Soc. A 372, 20120448. (doi:10.1098/rsta.2012.0448)
  63. Jaramillo P, Griffin WM, Matthews HS. 2008 Comparative analysis of the production costs and life-cycle GHG emissions of FT liquid fuels from coal and natural gas. Environ. Sci. Technol. 42, 7559–7565. (doi:10.1021/es8002074)
  64. National Research Council. 2007 Coal: research and development to support national energy policy. Washington, DC: National Academies Press.
  65. Brandt AR, Farrell AE. 2007 Scraping the bottom of the barrel: greenhouse gas emission consequences of a transition to low-quality and synthetic petroleum resources. Clim. Change 84, 241–263. (doi:10.1007/s10584-007-9275-y)
  66. CAPP. 2013 The facts on oil sands. Calgary, Canada: Canadian Association of Petroleum Producers. See
  67. Mulder K, Hagens N, Fisher B. 2010 Burning water: a comparative analysis of the energy return on water invested. Ambio 39, 30–39. (doi:10.1007/s13280-009-0003-x)
  68. Odum EP. 1969 The strategy of ecosystem development. Science 164, 262–270. (doi:10.1126/science.164.3877.262)
  69. Kremmer. 2010 Historic population and GDP data [online]. See (accessed 24 November 2010).
  70. Smil V. 2010 Energy transitions: history, requirements, prospects. Santa Barbara, CA: Praeger.
  71. IEA. 2012 World energy outlook 2012. Paris, France: International Energy Agency.
  72. Sorrell S. 2010 Energy, growth and sustainability: five propositions. Brighton, UK: University of Sussex.

The electric grid, critical interdependencies, vulnerabilities. House of Representatives 2003

Notes from: Congressional Record. September 4 & 23, 2003. Implications of power blackouts for the nation’s cyber-security and critical infrastructure protection. House of Representatives. 246 pages.


COFER BLACK, Office of the Coordinator for Counterterrorism, Department of State

The phrase ‘‘critical infrastructure’’ covers many elements of the modern world. To cite a few examples: the computers we use to transfer financial information from New York to Hong Kong and other cities, the air traffic control systems for international and domestic flights and, of course, the electric grid systems. The global critical infrastructure is both a contributor to, and a result of, the interdependence that exists among nations today. Critical infrastructure essentially means all the physical and virtual ties that bind us together, not only as a society but as a world. Terrorists know this, and they see attacking the very bonds that hold us together as one more way to drive us apart.

Christopher Cox, California, Chairman, Select Committee on Homeland Security

The blackout shut down over 100 power plants, including 22 nuclear reactors, cut off power for 50 million people in 8 states and Canada, including much of the Northeast corridor and the core of the American financial network, and showed just how vulnerable our tightly knit network of generators, transmission lines, and other critical infrastructure is.

Cyber attacks are a real and growing threat. The problem of cyber-security is unique in its complexity and in its rapidly evolving character. Cyber attacks are different from physical attacks since they can be launched from anywhere in the world and be routed through numerous intermediate computers. Cyber attacks require a different skill set to detect and counter, and are not limited to the risks posed by Al-Qaida. They include threats posed by those criminals and hackers who are already attacking our infrastructure for their own amusement or using it to steal information and money. As the most information technology-dependent country in history, we remain uniquely vulnerable to cyber attacks that can disrupt our economy or undermine our national security.

The dependence of major infrastructural systems on the continued supply of electrical energy, and of oil and gas, is well recognized. Telecommunications, information technology, and the Internet, as well as food and water supplies, homes and worksites, are dependent on electricity; numerous commercial and transportation facilities are also dependent on natural gas and refined oil products.

Physical or cyber attacks can amplify the impact of physical attacks on this critical infrastructure, and diminish the effectiveness of emergency responses.

Blackout effects:

  1. Harlem’s sewage treatment plant shut down without power for its pump.
  2. Seven oil refineries in the U.S. and Canada temporarily shut down, worsening an already tight gasoline supply situation.
  3. Many airports were closed because of inoperable systems on the ground. Refueling of aircraft stopped as hydrant systems and fuel farms lacked power.
  4. Nearly all manufacturers in southeast Michigan ground to a halt with the blackout.
  5. The 911 emergency systems in New York and Detroit failed during the blackout.
  6. New York City’s computer-aided dispatch system for its fire department and rescue squad crashed. Water systems in Cleveland and Detroit could not handle the drop in power.
  7. Ohio Governor Bob Taft declared a state of emergency in Cleveland after all four pumping stations that lift water out of Lake Erie went out and residents were ordered to boil their water for days.
  8. The beaches were off limits for swimming after a sewage discharge into Lake Erie and the Cuyahoga River sent bacteria levels soaring.
  9. More than 50 assembly and other plants operated by General Motors Corp., Ford Motor Co., DaimlerChrysler, and Honda Motor Co. were idled by the cascading blackout.
  10. NOVA Chemicals shut down plants in Pennsylvania, Ohio, and Ontario, Canada.
  11. Walmart closed 200 stores in Canada and the United States.
  12. Marriott International saw 175 of its hotels in the Northeast lose power at the height of the blackout.
  13. Hundreds of airline flights were cancelled. For many airports throughout the U.S. and Canada, the power failure exposed the risk of fuel supply interruptions from electricity outages, since most hubs in North America are fed by pipeline systems.
  14. Tightened security measures established after 9–11 could not be maintained as power was not available for baggage screening machines.
  • Without railroads to deliver coal, the nation loses 60% of the fuel used to generate electricity.
  • Without electricity, fueling stations cannot pump fuel.
  • Without diesel, the railroads will eventually stop running.
  • When railroads stopped running after 9/11 to guard hazardous materials, in only two days the city of Los Angeles was out of chlorine and faced the threat of no drinking water; the railroads began operating again on the third day.

Blackstart of grid – restoring power. Restoring a system from a blackout requires a very careful choreography: re-energizing transmission lines from generators still online inside the blacked-out area and from systems outside it, restoring station power to off-line generating units so they can be restarted, synchronizing the generators to the interconnection, and then constantly balancing generation and demand as additional generating units and customer demands are restored to service.

Many may not realize it takes days to bring nuclear and coal-fired power plants back on-line, so power was restored with gas-fired plants, normally used only for peak periods, covering baseload needs normally met by coal and nuclear plants. The diversity of our energy systems proved invaluable.
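The restoration choreography described above can be sketched in code. This is a minimal illustrative sketch only: the step wording paraphrases the testimony, and the function name and its boolean inputs are invented for clarity, not part of any actual grid-restoration procedure.

```python
# Illustrative sketch of the blackstart sequence described in the testimony.
# Step wording paraphrases the hearing; nothing here is a real procedure.

BLACKSTART_STEPS = [
    "re-energize transmission lines from generators still online inside the blacked-out area",
    "re-energize transmission lines from systems outside the blacked-out area",
    "restore station power to off-line generating units so they can be restarted",
    "synchronize the restarted generators to the interconnection",
    "constantly balance generation and demand as units and customer loads return",
]

def blackstart_plan(internal_generation_alive, external_ties_available):
    """Return the ordered steps that apply, given what survived the blackout."""
    steps = []
    if internal_generation_alive:
        steps.append(BLACKSTART_STEPS[0])
    if external_ties_available:
        steps.append(BLACKSTART_STEPS[1])
    steps.extend(BLACKSTART_STEPS[2:])      # always required
    return steps

for i, step in enumerate(blackstart_plan(True, True), start=1):
    print(f"{i}. {step}")
```

Even in this toy form, the key point of the testimony survives: restoration is a strict ordering, ending in a continuous balancing loop rather than a single switch-on.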

Robert Liscouski, Assistant Secretary for Infrastructure Protection, Department of Homeland Security

While the national focus was primarily on the blackout and its cause, our teams were hard at work assessing the cascading effects into other sectors. Interdependencies among the sectors were again demonstrated by this event. Seven major petroleum refineries suspended operations, many chemical manufacturing plants were shut down, grocery stores lost perishable inventories, air traffic ceased at several major airports, and emergency services capacity was tested. Web sites were shut down. ATMs did not work in the affected areas and the American Stock Exchange did not operate for a period of time. The effect of the blackout highlighted what we already knew at the department. If one infrastructure is affected, many other infrastructures are likely to be impacted as well. Indeed, all the critical infrastructure sectors were affected by this event. Understanding the vulnerabilities and interdependencies associated with cascading events is an area of great importance to the department.

Jim Turner, Texas. This incident demonstrated that there are literally hundreds of thousands of potential targets that terrorists could choose to strike. These include power systems, chemical and nuclear plants, commercial transportation and mass transit, skyscrapers, and sports and concert venues. In addition to physical assets, we also need to protect cyber assets. Recent computer disruptions have had unexpected consequences on nuclear plants and other utilities. Eighty-five percent of our critical infrastructure assets are privately owned. We must, therefore, work in partnership with the private sector to improve our national security. But we can’t rely too heavily on voluntary private action. Companies seeking to maximize profits simply are unlikely to have the economic incentives to voluntarily make the investments necessary to raise security levels to where they need to be.

In the absence of sufficient action by critical infrastructure owners, we have a duty to take the initiative to protect the American people.

Curt Weldon, Pennsylvania. The greatest threat would be a low-yield nuclear weapon, which we now know that North Korea has and Iran is trying to obtain, and the ability to put it up into the atmosphere with a low-complexity missile, which we know that both Iran and North Korea have. Detonating that low-yield nuclear weapon off the coast, in the atmosphere, would produce an electromagnetic pulse (EMP) that would fry all the electronic components within a given range within the U.S. In fact, our military has tested this type of capability in the past. According to testimony before the Armed Services Committee, we have not hardened our systems. Only our ICBM system is hardened, and almost the entirety of our energy complex in America would be vulnerable to any EMP laydown.

I am familiar with Russian nuclear doctrine. Their first attempt at attacking us would be to lay down an EMP burst off of our coast with a nuclear weapon that would not hurt one person, but would fry all of our electronic components, including our electrical grid system. It would shut down America, including our vehicles, which have chips in them that would stop on the roads. Now, we tested this capability in 1962 when we did four tests at the Kwajalein Atoll in the Pacific. We were startled that within 800 miles everything was shut down: streetlights went out, cars stopped dead in their tracks, and the major electronic components of our telephone system were fried. We did those tests in 1962. That is not classified. That has been reported in the media, and in fact it was just in a book put out by Dan Verton called ‘‘The Black Ice.’’ In 1999, we in the House held hearings on this phenomenon, not because of 9–11, but because we knew of the implications. Directed energy has become the weapon of choice for the future for nations that want to bring us down or harm us. We are doing research ourselves, and so are other countries on directed energy, let alone the EMP phenomenon.

There is no greater threat to our security and our quality of life than a terrorist using an electromagnetic pulse, now that 10 countries have nuclear capability. We are talking about low-yield weapons that would not harm one person. It would be detonated in the atmosphere, and we know 70 countries have missiles that could launch such a capability off our coast.

Paul H. Gilbert. Gilbert is a member of the National Academy of Engineering and was Chair of the National Research Council panel responsible for the chapter on energy systems in the NRC Branscomb-Klausner report, Making the Nation Safer: The Role of Science and Technology in Countering Terrorism

Over the past decade our electric supply system has been tasked to carry ever-increasing loads. It has also undergone a makeover from being a highly regulated, vertically integrated utility to one that is partially deregulated, far less unified, not as robust and resilient as it was. The generation side is essentially deregulated and operating under an open market set of conditions. At the same time the transmission sector remains fully regulated, but under voluntary compliance reliability rules, resulting in diminished investments in maintenance and spare parts and lower reliability. Another concern is that in seeking to reduce operating costs, the operating companies have installed automated cyber-controllers, or SCADA systems, to perform functions that people previously performed. These open architecture cyber units are an invitation for those who would seek to use computer technology to attack the grid.

The in-place electrical utility assets today are typically being operated at close to the limit of available capacity. In this mode another characteristic of such complex systems appears. When operated near their capacity, these systems are fragile, having little reserve within which to handle power or load fluctuations.

When load and capacity are out of balance, shutting down becomes the only way a system element has to protect itself from severe damage. However, the loss of a piece of the grid, let us say a transmission line, does not end the problem. A line down takes down with it the power that it was transmitting. The connected power plant that was producing that power, having no connected load, must also shut down. In these highly integrated grids, more lines have imbalance problems, and more plants sense the capacity limitations and they all shut down. The cascading effect spreads rapidly in many directions, and in seconds an entire sector of the North American grid can be down. And this is what we experienced a few weeks ago from an accident.
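The cascade mechanism Gilbert describes — a tripped line dumps its load onto its neighbors, which are then pushed past capacity and trip in turn — can be illustrated with a toy model. All capacities and loads below are invented, and real power flow follows network physics rather than the even redistribution assumed here; the point is only to show why a grid "operated near its capacity" is fragile.

```python
# Toy model of the cascade described above: a tripped line's load shifts
# to the surviving lines; any line pushed past capacity trips too.
# Numbers are invented; even redistribution is a deliberate simplification.

def cascade(loads, capacities, first_failure):
    """Return the set of failed lines after the cascade settles."""
    loads = dict(loads)                     # don't mutate the caller's dict
    failed = {first_failure}
    shed = loads.pop(first_failure)         # power orphaned by the first trip
    while True:
        alive = list(loads)
        if not alive:
            return failed                   # total blackout
        extra = shed / len(alive)           # redistribute evenly (simplified)
        for line in alive:
            loads[line] += extra
        newly = {l for l in alive if loads[l] > capacities[l]}
        if not newly:
            return failed                   # cascade arrested
        failed |= newly
        shed = sum(loads.pop(l) for l in newly)

caps = {"A": 100, "B": 100, "C": 100, "D": 100}

# Near capacity: one trip takes down the whole grid.
print(sorted(cascade({"A": 90, "B": 90, "C": 90, "D": 90}, caps, "A")))
# → ['A', 'B', 'C', 'D']

# With headroom, the very same trip is absorbed.
print(sorted(cascade({"A": 50, "B": 50, "C": 50, "D": 50}, caps, "A")))
# → ['A']
```

The only difference between the two runs is reserve margin, which is exactly Gilbert's point about systems "having little reserve within which to handle power or load fluctuations."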

The exact same consequences could, however, too easily be produced by a terrorist attack from a small, trained team. This was the scenario assumed in the Making the Nation Safer report, where several critical nodes in the grid were taken out in a well planned and executed terrorist attack. The cascading system failures resulted in region-wide catastrophic consequences.

Recovery was estimated to take weeks or months, not hours or days.

Now, while the report does not speculate in any detail on the extended consequences of such an event, I have been asked to do so here, and so I offer the following as a personal opinion. Because our critical infrastructure is so extensively integrated, with power out beyond a day or two in our cities, both food and water supplies would soon fail.

Transportation systems would come to a standstill. Waste water could not be pumped. And so we would soon have public health problems. Natural gas pressure would decline, and some would lose gas altogether, very bad news in the winter. Nights would become very dark with no lighting, and communications would be spotty or nonexistent. Storage batteries would have been long gone from the stores, if any stores were still open. Work, jobs, employment, business and economic activity would be stopped. Our economy would take a major hit. All in all our cities would not be very nice places to be. Some local power generators such as at hospitals would get back up, and so there would be islands of light in the darkness. Haves and have-nots would get involved. It would not be a very safe place to be either. Martial law would likely follow, along with emergency food and water supply relief. At our core we would rally and find ways to get by while the systems are being repaired. In time the power would start to come back, tentatively at first, with rolling blackouts, and then in all its glory.

Several weeks to months would have passed, and the enormous recovery and clean-up would begin. This is simply one person’s view, but based upon a fairly in-depth understanding of the critical interdependency of our infrastructure.

Our basic infrastructure systems include our electric power, food, and water supplies, waste disposal, natural gas, communications, transportation, petroleum products, shelter, employment, medical support and emergency services, and facilities to meet all our basic needs. These are a highly integrated, mutually dependent, heavily utilized mix of components that provide us with vitally needed services and life support. While all these elements are essential to our economy and our well-being, only one has the unique impact, if lost, of causing all the others to either be seriously degraded or completely lost. And that, of course, is electric power. Our technically advanced society is literally hard wired to a firm, reliable electric supply.

KENNETH C. WATSON, President & Chairman of the Partnership for Critical Infrastructure Security (PCIS), currently the manager of Cisco Systems’ involvement in critical infrastructure

Interdependence Examples. We all depend on telecommunications—in fact, when recently asked to list their dependence on other sectors, the sector coordinators rated telecommunications as first or second on their list. Nearly equal to telecommunications was electric power. Without electricity, there is no ‘‘e’’ in e-commerce. However, without railroads to deliver coal, the nation loses 60 percent of the fuel used to generate electricity. Without diesel, the railroads will stop running. Without water, there is no firefighting, drinking water, or cracking towers to refine petroleum. Without financial services, transactions enabling all these commodity services cannot be cleared. Yet, these are not just one-way dependencies. When the railroads stopped running after 9/11 to guard hazardous material, it only took the city of Los Angeles two days to demand chlorine or face the threat of no drinking water—the railroads began operating again on the third day. Throughout the Northeast, dependencies on electric power were obvious. Some areas had electric water pumps, and they had to boil their drinking water for days after the blackout.

All of our critical infrastructures are interlinked in complex, sometimes little-understood ways. Some dependencies are surprising, contributing to unusual key asset lists.


As you know, our energy infrastructure is vast, complex and highly interconnected. It includes power plants, electric transmission and distribution lines, oil and gas production sites, pipelines, storage and port facilities, information and control systems and other assets. Many of these entities own, operate, supply, build or oversee their infrastructure. The private sector owns about 85 percent of these assets and a host of federal and state agencies regulate energy generation, transport, transmission and use.

TOO MANY AGENCIES !!!  [My comment, not in the text of the congressional record, Alice Friedemann]

We maintain collaborative relationships with [many entities]:

  1. We work closely with the Department of Homeland Security (DHS), which leads, integrates, and coordinates critical infrastructure protection activities across the federal government.
  2. To aid this effort, Department of Energy & DHS are working on a plan for collaboration and responsibilities (i.e. critical infrastructure protection of physical and cyber assets, science and technology, and emergency response).
  3. We are also beginning to work with the Coast Guard
  4. With Federal Emergency Management Agency (FEMA),
  5. Representatives of the Defense Intelligence Agency,
  6. The National Institute of Standards and Technology to consider options for developing a collaborative National SCADA Program.
  7. We work closely with the Department of Transportation’s Office of Pipeline Safety
  8. We coordinate with the Environmental Protection Agency (EPA) to avoid redundant efforts with petrochemical facilities.
  9. We partnered with the Federal Energy Regulatory Commission (FERC),
  10. state regulators,
  11. and industry to assess the implications of a loss of natural gas supply in some regions of the country.
  12. DOE’s new Office of Electric Transmission and Distribution on issues related to the electric grid
  13. The Office of Security to improve the operations of DOE’s Emergency Operation Center.
  14. The Office of Energy Efficiency and Renewable Energy’s regional offices to support our meetings with state energy offices;
  15. The Office of Fossil Energy on new technologies to harden oil and gas pipelines;
  16. The Office of Science on visualization techniques through their Advanced Scientific Computing Research Program;
  17. The Office of Independent Oversight and Performance Assurance on cyber security protection.

Collaboration with the PRIVATE SECTOR is critical:

  1. American Petroleum Institute (API),
  2. American Gas Association (AGA),
  3. Interstate Natural Gas Association of America (INGAA),
  4. Gas Technology Institute (GTI),
  5. National Propane Gas Association (NPGA),
  6. Edison Electric Institute (EEI),
  7. Electric Power Research Institute (EPRI),
  8. National Rural Electric Cooperative Association (NRECA),
  9. American Public Power Association (APPA),
  10. North American Electric Reliability Council (NERC).

Collaboration with STATES

  1. National Association of State Energy Officials (NASEO),
  2. National Governors Association (NGA),
  3. National Association of Regulatory Utility Commissioners (NARUC),
  4. National Conference of State Legislatures (NCSL)


Colonel Michael C. McDaniel. Assistant Adjutant General for Homeland Security for the Michigan National Guard, Homeland Security Advisor to Michigan’s Governor, Jennifer M. Granholm.

On Thursday, August 14, 2003, at 4:15 p.m., a massive power outage struck the Niagara-Mohawk power grid in the Northeast US and Ontario causing blackouts from New York to Michigan.

Within minutes, much of southeast and mid-Michigan was without power, with 60% of Michigan’s population, over 2.2 million households, affected by the outage.

The State of Michigan and local governments spent $20.4 million on emergency measures to save lives, protect public health, and prevent damage to public and private property.

The Emergency Management Division of the Michigan State Police began to immediately monitor conditions around the state, including the state’s nuclear power plants.

Within minutes, the state’s Emergency Operations Center (EOC) was formally activated, and state agencies began to monitor state and national conditions.

Some of the major complications from the blackout:

  1. Gas stations were unable to supply people’s needs for their cars and portable generators, as without electricity the pumps were inoperable.
  2. The Detroit Board of Water and Sewers, oversight board of the nation’s second largest water system, reported that its system was not functioning correctly. It issued a boil-water advisory for its entire service area.
  3. There was no system to notify all of the customers of the boil-water advisory, as notification was dependent on the public media. It became clear on the morning of August 15 that the largest problem was the lack of potable water. Public and private entities delivered hundreds of thousands of gallons of water to those affected sites, but the boil-water advisory was not lifted until Monday, August 18.
  4. Traffic signals were widely out of service, and telephone communications were limited.
  5. Marathon Refinery, Michigan’s largest refining facility, lost power and had to shut down. One unit did not shut down properly and began venting partially processed hydrocarbons. Because of the tank’s location, the city of Melvindale (with the assistance of the Michigan State Police) decided to evacuate 30,000 residents and shut down Interstate 75 for several hours until the situation was controlled. The Marathon Refinery was inoperable as a result of the loss of electricity and water, and out of production for approximately 10 days.
  6. The auto industry shut down operations for three days.
  7. Many first responders were relying on cell phones because they did not have an adequate radio system, and a number of cell towers did not have backup systems that worked.
  8. Radio and television stations reported broadcasting difficulties, with several small stations not operating at all.
  9. Many facilities lacked sufficient alternative energy sources. Portable generators were needed at hospitals and other public facilities, including the state mental institution.
  10. The Fermi II nuclear plant in Monroe County was shut down as a precaution. It returned to full power production and was reconnected to the power grid a week later, on August 21.
  11. The Ambassador Bridge in Detroit, the busiest commercial land port in the United States with 16,000 tractor-trailers crossing daily, was also affected.
  12. Canadian customs lost their computer datalink, and their ability to verify trucking manifests electronically. As a result they were forced to visually and manually inspect the manifests and, if warranted, the freight itself. This resulted in an approximately four-mile backup of traffic for almost 24 hours on the U.S. side.
  13. Many computer systems were not functioning, including the Law Enforcement Information Network (LEIN).
  14. The Michigan State Police positioned 50 state troopers on stand-by for mobilization, if needed to maintain order in blackout areas. The Michigan National Guard also had troops ready on stand-by.
  15. Metropolitan Detroit Airport was closed and all flights canceled until midnight on August 14.
  16. A number of public water issues arose from the blackout. Generators need automatic activation switches and shouldn’t rely on telephone lines.
  17. Almost every type of critical infrastructure that should have a generator did have some sort of generator. But no one had tested those generators under load, so we had a lot of generators that just didn’t work. They might have fired them up before, but they never tested them under a load and actually had them producing electricity. When they did work, they ran out of fuel. We were starting to get calls from both hospitals and some of the utilities wanting to know if we could help them find kerosene or diesel for their generators.
  18. A lot of people did not have old-fashioned phones. Everybody’s phone these days is a portable, hand-held device that requires electricity, or a cell device, and not all of those towers worked. So there were a number of instances where the communication systems were more reliant on electricity than we believed they would be. Again, even those radio and TV stations that had generators, the generators didn’t work because they had never been tested. So they weren’t ready to work under load. They weren’t the right capacity generator. And then the other problem, as I said, was that 24 hours later they were starting to run out of fuel. Both TV and radio, as well as the telephone companies, were calling as well.
  19. This was a very hot summer day, when usage on the Detroit water system was almost a billion gallons a day. The system, even after it came back up on generators, could only handle about 400 million gallons per day. If we had had some sort of warning that this was going to happen, and could have gotten the word out ahead of time to decrease electricity and water use, it probably would have made it easier for the system to come back on.
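McDaniel’s Detroit water figures imply a large, easily quantified shortfall. The short calculation below simply restates his own numbers (roughly 1 billion gallons/day of demand against about 400 million on backup generators).

```python
# Restating McDaniel's Detroit water figures from the item above.
demand_gal_per_day = 1_000_000_000    # hot-day usage: almost a billion gallons
generator_capacity =   400_000_000    # what the system handled on backup power

shortfall = demand_gal_per_day - generator_capacity
print(f"shortfall: {shortfall:,} gal/day "
      f"({shortfall / demand_gal_per_day:.0%} of demand unmet)")
# → shortfall: 600,000,000 gal/day (60% of demand unmet)
```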

The NIAC Interdependency and Risk Assessment Working Group submitted its final report to NIAC members October 14, 2003. That report included results of a survey of Sector Coordinators and key infrastructure owners and operators regarding their top dependencies. Respondents were asked to list the top three sectors on which they depend, and the top three sectors that depend on them. In terms of short-term dependencies, the overall top three were 1) telecommunications and IT, 2) electricity, and 3) transportation. However, adding long-term impacts broadens the list of critical dependencies. Without financial services, business comes to a grinding halt in a matter of days. Without safe food, clean drinking water, and available health care, public health also reaches a crisis in days. Without emergency police, fire, and medical services, the ability to respond and contain emergencies is severely impacted. Long-term impacts of transportation failures are far more severe than the short-term ones.

Without consideration for what vulnerability analysis is underway and what protective measures are in place, the following sectors present the highest potential risk to national security:

  1. Energy
  2. Information and Communications
  3. Banking and Finance
  4. Transportation
  5. Postal and Shipping

This priority scheme is based on (a) the ease at which problems propagate within the sector, (b) the extent of other sectors’ dependencies on it, and (c) the potential impact of a sector’s loss of crucial functionality.
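The second criterion above — how many other sectors depend on a given sector — can be sketched as a simple dependency graph. The edges below are loosely drawn from dependencies named elsewhere in this hearing (coal trains for electricity, electric pumps for water and fuel), not from the actual NIAC survey data, so the ranking is illustrative only.

```python
# Illustrative dependency graph: "X depends on Y" edges, loosely drawn
# from the testimony in this hearing, NOT from the NIAC survey itself.
from collections import Counter

depends_on = {
    "electricity":    ["transportation", "telecom"],   # coal by rail, SCADA links
    "telecom":        ["electricity"],
    "water":          ["electricity"],                 # electric pumps
    "finance":        ["telecom", "electricity"],
    "transportation": ["electricity", "petroleum"],    # fueling stations, diesel
    "petroleum":      ["electricity", "water"],        # pumps, cracking towers
    "emergency":      ["telecom", "water", "transportation"],
}

# Rank each sector by how many other sectors list it as a dependency —
# criterion (b) in the NIAC priority scheme.
depended_upon = Counter(dep for deps in depends_on.values() for dep in deps)
for sector, count in depended_upon.most_common():
    print(f"{sector}: {count} dependent sectors")
```

Even with these made-up edges, electricity comes out on top, which matches the testimony’s repeated observation that electric power is the one infrastructure whose loss degrades all the others.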


As a group, the critical infrastructure sectors are backbone services for our nation’s economic engine and produced approximately 31% of the Gross Domestic Product (GDP) in the year 2000. The blackout rippled through the economy. The examples are endless, and experience shows us that the blackout is not alone in its capacity to disrupt the economy. The information super highway of the Internet has become a fast lane for computer viruses. A computer virus launched one morning can infect computers around the world in one day. The Slammer virus, launched in January of this year, reportedly infected 100,000 computers in its first ten minutes alone. Because of the SoBig computer virus, some rail routes of CSX were recently shut down on August 20, until a manual backup system started the trains running again.

We know that terrorists have assessed the possibility of attacking our nuclear power plants and our transportation system. Al-Qaida computers seized in Afghanistan in 2001 had logged on to sites that offer software and programming instructions for the distributed control systems (DCS) and Supervisory Control and Data Acquisition (SCADA) systems that run power, water, transport, and communications grids. All critical infrastructure industries are becoming increasingly dependent on information management and internal telecommunications systems to control and maintain their operations. The U.S. Dept. of Commerce’s National Telecommunications & Information Administration (NTIA) published a study in January 2002 that detailed the myriad uses of internal wireless communications systems to meet essential operational, management, and control functions, including: two-way emergency restoration and field communications; monitoring power transmission lines and oil and natural gas pipelines to respond instantaneously to downed transmission lines or changes in pipeline pressure; sending commands to various remote control switches; inspecting 230,000 miles of rail track; managing wastewater; processing drinking water; and protective relaying. SCADA systems could be attacked simply by overloading a system that, upon failure, causes other systems operations to malfunction as well.

While there is some debate about the ability of a terrorist to successfully launch a cyber attack against a SCADA system, there are several examples of people or groups who have tried. In March 2000 a disgruntled former municipal employee used the Internet, a wireless radio, and stolen control software to release up to 1 million liters of sewage into the river and coastal waters of Queensland, Australia. Similarly, NERC reports that over the past two years there have been a number of "cyber incidents that have or could have directly impacted the reliable operation of the bulk electric system," including:

  • In January 2003, the SQL/Slammer worm caused an electric utility company to lose control of its SCADA system for several hours, forcing the company operations staff to resort to manual operation of their transmission and generation assets until control could be restored.
  • In September 2001, the Nimda worm compromised the SCADA system of an electric utility, and then propagated itself to the internal project network of a major SCADA vendor via the vendor’s support communications circuit, devastating the vendor’s internal network and launching further attacks against the SCADA networks of the vendor’s other customers.

More telling, perhaps, is a report issued in May 2002 by the Defense Department’s Critical Infrastructure Assurance Program (CIAP) claiming that there was evidence of a coordinated cyber reconnaissance effort directed against the critical assets of at least two electric utilities participating in the Defense Department-sponsored program. The report revealed that the probing appeared to come from the People’s Republic of China, Hong Kong, and South Korea, with each probe building upon information previously garnered. The blackout is yet another wake-up call to our nation. It demonstrated the fragility of our electric transmission system, and reminds us of the interdependent nature of our infrastructure.
Clearly, we need to encourage private industry and government to raise the standards of cyber security, and to further enhance our infrastructure security against attack.


Some rudimentary research has been done on interdependencies, but it has only been sufficient to illuminate how important this type of modeling and analysis could be. Sandia and other national labs have initiated interdependency studies, looking at intersections with the energy sector. The National Security Telecommunications Advisory Committee (NSTAC) has done similar work, addressing intersections between telecommunications and other sectors. The National Infrastructure Advisory Council (NIAC) has a current effort to develop policy recommendations on interdependency risk assessments. The sector coordinators are involved in that study, which will become available after delivery to the President in the October timeframe. The PCIS is coordinating with this NIAC working group to ensure that the handbook we develop is in harmony with NIAC policy recommendations.

Network owners already know their key assets and critical nodes—what they don’t know is whether their key assets and critical nodes are in the same geographic vicinity as their competitors’ nodes, or whether underlying or supporting infrastructure is in fact, truly diverse. In highly competitive sectors, such as telecommunications or finance, it would not be unusual to find that each of the major providers has intended to buy diversity and redundancy from numerous entities, only to find that all these entities use the same underground conduit for transport that goes through the same underground tunnel, and they are powered by the same power generation plant. The NSTAC has studied the implications of these types of cross-sector dependencies and has developed a number of programs that the telecommunications sector uses to mitigate these risks. It is time, however, to take it to the next level, covering all cross-sector and multisector interdependencies.

One of the challenges will be that much of the data required may be proprietary. To date, the NISAC has centered its modeling efforts on the energy sector. To understand the complexity of this modeling problem, consider the NISAC model of the energy sector as a baseline, and scale it up to the telecommunications sector. While we do not know the precise amounts, it is our understanding that the current electrical sector modeling cost about $30–40 million to develop and was done over the course of 3 to 8 years. If you assume that the level of detail developed within the electrical sector model is appropriate (and we do not know that to be the case) and simply multiply this $30–40 million by the number of facilities-based networks that comprise the telecommunications sector (conservatively a factor of 9 networks: 5 wireless + 1 wireline + 2 IXC + 1 paging), the result is a baseline model for telecommunications in the $270–$360 million range. Even if all $200 million were dedicated to telecommunications modeling, it would take 1 to 2 years of currently allocated funding, and an even longer actual modeling effort, to model telecommunications alone. Multiply that by 12 sectors, and then you can start on the cross-sector interdependency modeling.
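The testimony's back-of-the-envelope scaling can be reproduced in a few lines (the dollar figures and network count are the testimony's rough estimates, not measured costs):

```python
# Scale the NISAC energy-sector model cost to telecommunications, per the testimony.
energy_model_cost = (30e6, 40e6)  # estimated cost of the energy-sector model ($)
networks = 5 + 1 + 2 + 1          # 5 wireless + 1 wireline + 2 IXC + 1 paging = 9

low, high = (cost * networks for cost in energy_model_cost)
print(f"${low/1e6:.0f}-{high/1e6:.0f} million")  # $270-360 million

annual_funding = 200e6  # if all $200M/year were dedicated to telecom modeling
print(f"{low/annual_funding:.1f}-{high/annual_funding:.1f} years of funding")  # 1.4-1.8 years
```

And that is before multiplying by the 12 sectors and adding the cross-sector interdependency modeling on top.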

I am not sure you can point to a single weak link. Over the last 20 years, all of the infrastructures have become more and more dependent on networks, and they have become more and more interconnected. I think the key that we need to study in research and modeling and exercises is interdependency. Each of the sectors is dependent on each of the others and sometimes we don’t even know what these dependencies are without modeling and exercises.


The blackout of 2003 has underscored concerns about the vulnerability of our nation’s critical infrastructure to both accidents and deliberate attack, providing an immediate connection to the nation’s homeland security efforts. But the blackout may offer a deeper lesson beyond the vulnerability of the nation’s electricity grid to terrorist attack. In particular, a common explanation for the problems facing the electricity system is that private firms have had inadequate incentives to invest in distribution lines.

The important point is that market incentives are extremely powerful. For that very reason, however, it is essential that they be structured properly. As Patrick Wood, chairman of the Federal Energy Regulatory Commission, has put it: ‘‘We cannot simply let markets work. We must make markets work.’’

Let me give you an example that I think is particularly timely, involving chemical facilities. Let’s say that you have a chemical facility. It is worth a billion dollars. There are 123 chemical facilities in the United States that contain chemicals that could injure or kill more than a million people. The value of a million lives can easily exceed, well exceed, a billion dollars. You may well have some incentive to make sure that there is some level of security to ensure that your plant is not intruded upon and those chemicals are not dispersed and harm people. But it is not adequate, because your financial loss is much smaller than the loss society would suffer if a successful attack did unfortunately take place. That kind of example occurs in a wide array of settings, and in my written testimony I provide many other examples, but I think this might be a particularly timely and compelling one: any time the private financial losses you would suffer are vastly smaller than the losses that we as a society would suffer, you don’t have enough incentive, bottom line.
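The underinvestment logic in the chemical-plant example can be made concrete with a toy expected-value calculation. All the numbers below except the $1 billion plant value (the attack probability, the cost of the security upgrade, and the ~$9 million value of a statistical life) are illustrative assumptions, not figures from the testimony:

```python
# Toy expected-value model of private vs. social security incentives.
plant_value   = 1.0e9              # firm's private exposure (testimony's figure)
social_loss   = 1_000_000 * 9.0e6  # 1M casualties x ~$9M value of a statistical life (assumption)
attack_prob   = 1e-4               # hypothetical annual attack probability
security_cost = 50e6               # hypothetical upgrade that halves the attack probability

risk_reduction  = attack_prob / 2
private_benefit = risk_reduction * plant_value                  # what the firm saves in expectation
social_benefit  = risk_reduction * (plant_value + social_loss)  # what society saves

# The firm rationally skips an investment that is overwhelmingly worthwhile for society:
print(private_benefit < security_cost < social_benefit)  # True
```

With these stylized numbers the firm's expected saving is only $50,000 a year, while society's is several hundred million, so the $50 million upgrade never happens without intervention.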

In homeland security, private markets do not automatically produce the best result.

We must therefore alter the structure of incentives so that market forces are directed toward reducing the costs of providing a given level of security for the nation, instead of providing a lower level of security than is warranted. Given the significance of the private sector in homeland security settings, structuring incentives properly is critical. To be sure, private firms currently have some incentive to avoid the direct financial losses associated with a terrorist attack on their facilities or operations. In general, however, that incentive is not compelling enough to encourage the appropriate level of security—and should therefore be supplemented with stronger market-based incentives in several sectors. My testimony argues that:

• Private markets, by themselves, do not provide adequate incentives to invest in homeland security, and

• A mixed system of minimum regulatory standards, insurance, and third-party inspections would better harness the power of private markets to invest in homeland security in a cost-effective manner.

Incentives for homeland security in private markets

Private markets by themselves do not generate sufficient incentives for homeland security for seven reasons:

• Most broadly, a significant terrorist attack undermines the nation’s sovereignty, just as an invasion of the nation’s territory by enemy armed forces would. The costs associated with a reduction in the nation’s sovereignty or standing in the world may be difficult to quantify, but are nonetheless real. In other words, the costs of the terrorist attack extend well beyond the immediate areas and people affected; the attack imposes costs on the entire nation. In the terminology of economists, such an attack imposes a "negative externality." The presence of this negative externality means that private markets will undertake less investment in security than would be socially desirable: Individuals or firms deciding how best to protect themselves against terrorism are unlikely to take the external costs of an attack fully into account, and therefore will generally provide an inefficiently low level of security against terrorism on their own.3 Without government involvement, private markets will thus typically under-invest in anti-terrorism measures.4

• Second, a more specific negative externality exists with regard to inputs into terrorist activity. For example, loose security at a chemical facility can provide terrorists with the materials they need for an attack. Similarly, poor security at a biological laboratory can provide terrorists with access to dangerous pathogens. The costs of allowing terrorists to obtain access to such materials are generally not borne by the facilities themselves:

the attacks that use the materials could occur elsewhere. Such a specific negative externality provides a compelling rationale for government intervention to protect highly explosive materials, chemicals, and biological pathogens even if they are stored in private facilities. In particular, preventing access to such materials is likely to reduce the overall risk of catastrophic terrorism, as opposed to merely displacing it from one venue to another.

• Third, a related type of externality involves "contamination effects." Contamination effects arise when a catastrophic risk faced by one firm is determined in part by the behavior of others, and the behavior of these others affects the incentives of the first firm to reduce its exposure to the risk. Such interdependent security problems can arise, for example, in network settings. The problem in these settings is that the risk to any member of a network depends not only on its own security precautions but also on those taken by others. Poor security at one establishment can affect security at others. The result can often be weakened incentives for security precautions.5 For example, once a hacker or virus reaches one computer on a network, the remaining computers can more easily be contaminated. This possibility reduces the incentive for any individual computer operator to protect against outside hackers. Even stringent cyber-security may not be particularly helpful if a hacker has already entered the network through a "weak link."

• A fourth potential motivation for government intervention involves information—in particular, the cost and difficulty of accurately evaluating security measures. For example, one reason that governments promulgate building codes is that it would be too difficult for each individual entering a building to evaluate its structural soundness.
Since it would also be difficult for the individual to evaluate how well the building’s air intake system could filter out potential bio-terrorist attacks, the same logic would suggest that the government should set minimum anti-terrorism standards for buildings.

It is also possible, at least in theory, for private firms to invest too much in anti-terrorism security. In particular, visible security measures (such as more uniformed guards) undertaken by one firm may merely displace terrorist attacks onto other firms, without significantly affecting the overall probability of an attack. In such a scenario, the total security precautions undertaken can escalate beyond the socially desirable levels—and government intervention could theoretically improve matters by placing limits on how much security firms would undertake.

Unobservable security precautions (which are difficult for potential terrorists to detect), on the other hand, do not displace vulnerabilities from one firm to another and can at least theoretically reduce the overall level of terrorism activity. For an interesting application of these ideas to the Lojack automobile security system, see Ian Ayres and Steven Levitt, ‘‘Measuring Positive Externalities from Unobservable Victim Precaution: An Empirical Analysis of Lojack,’’ Quarterly Journal of Economics, Vol. 108, no. 1 (February 1998). For further analysis of evaluating public policy in the presence of externalities, see Peter Orszag and Joseph Stiglitz, ‘‘Optimal Fire Departments: Evaluating Public Policy in the Face of Externalities,’’ Brookings Institution Working Paper, January 2002.

It would be possible, but inefficient, for each individual to conduct extensive biological anti-terrorism safety tests on the food that he or she was about to consume. The information costs associated with that type of system, however, make it much less attractive than a system of government regulation of food safety.

• The fifth justification for government intervention is that corporate and individual financial exposures to the losses from a major terrorist attack are inherently limited by the bankruptcy laws. For example, assume that there are two types of possible terrorist attacks on a specific firm: A very severe attack and a somewhat more modest one. Under either type of attack, the losses imposed would exceed the firm’s net assets, and the firm would declare bankruptcy—and therefore the extent of the losses beyond that which would bankrupt the firm would be irrelevant to the firm’s owners. Since the outcome for the firm’s owners would not depend on the severity of the attack, the firm would have little or no incentive to reduce the likelihood of the more severe version of the attack even if the required preventive steps were relatively inexpensive. From society’s perspective, however, such security measures may be beneficial—and government intervention can therefore be justified to address catastrophic possibilities in the presence of the bankruptcy laws.

• The sixth justification for government intervention is that the private sector may expect the government to bail it out should a terrorist attack occur. The financial assistance to the airline industry provided by the government following the September 11th attacks provides just one example of such bailouts. Such expectations create a "moral hazard" problem: private firms, expecting the government to bail them out should an attack occur, do not undertake as much security as they otherwise would.
If the government cannot credibly convince the private sector that no bailouts will occur after an attack, it may have to intervene before an attack to offset the adverse incentives created by the expectation of a bailout.

• The final justification for government intervention involves incomplete markets. The most relevant examples involve imperfections in capital and insurance markets. For example, if insurance firms are unable to obtain reinsurance coverage for terrorism risks (that is, if primary insurers are not able to transfer some of the risk from terrorism costs to other insurance firms in the reinsurance market), some government involvement may be warranted. In addition, certain types of activities may require large-scale coordination, which may be possible but difficult to achieve without governmental intervention.

Both the need for government intervention and the potential costs associated with it thus vary from sector to sector, as should the policy response. Government intervention will generally only be warranted in situations in which a terrorist attack could have catastrophic consequences. Nonetheless, the general conclusion is that we can’t just ‘‘leave it up to the market’’ in protecting ourselves against terrorist attacks.

SHEILA JACKSON-LEE, TEXAS: An illustration of the disjunct in our infrastructure and superstructure is the television broadcast of the tens of thousands of New Yorkers who had to walk across the Brooklyn Bridge to end their workday. This is vulnerability. Thousands of riders of underground mass transit systems trapped in cars, frugal in their consumption of oxygen and hopeful that their rescue team was near, equates to vulnerability. The fact that we cannot cast blame for this occurrence on a terrorist group means that we are vulnerable to ourselves first and foremost. The Administration must increase our awareness of the status of the areas that are most open to corruption.



Blackstarting the grid after a power outage


Large blackouts can be particularly devastating and happen much more frequently than a normal distribution predicts.

The impact of a blackout increases exponentially with its duration, and the duration of restoration decreases exponentially with the availability of initial sources of power. For several time-critical loads, quick restoration (minutes rather than hours or even days) is crucial. Blackstart generators, which can be started without any connection to the grid, are a key element in restoring service after a widespread outage. These initial sources of power range from pumped-storage hydropower, which can take 5-10 minutes to start, to certain types of combustion turbines, which take on the order of hours.

For a limited outage, restoration can be rapid, which will then allow sufficient time for repair to bring the system to full operability, although there may be a challenge for subsurface cables in metropolitan areas. On the other hand, in widespread outages, restoration itself may be a significant barrier, as was the case in the 1965 and 2003 Northeast blackouts. Natural disasters, however, can also lead to significant issues of repair—after Hurricanes Rita and Katrina, full repair of the electric power system took several years (NAS).

Restoring a system from a blackout requires a very careful choreography: re-energizing transmission lines from generators still online inside the blacked-out area or from systems outside it, restoring station power to off-line generating units so they can be restarted, synchronizing the generators to the interconnection, and then constantly balancing generation and demand as additional generating units and additional customer loads are restored to service.

Many may not realize that it takes days to bring nuclear and coal-fired power plants back on-line, so power was restored using gas-fired plants, normally reserved for peak periods, to cover baseload needs normally met by coal and nuclear plants. The diversity of our energy systems proved invaluable (CR).

Restarting the grid after the 2003 power outage was especially difficult.

The blackout shut down over 100 power plants, including 22 nuclear reactors, cut off power for 50 million people in 8 states and Canada, including much of the Northeast corridor and the core of the American financial network, and showed just how vulnerable our tightly knit network of generators, transmission lines, and other critical infrastructure is.

The dependence of major infrastructural systems on the continued supply of electrical energy, and of oil and gas, is well recognized. Telecommunications, information technology, and the Internet, as well as food and water supplies, homes and worksites, are dependent on electricity; numerous commercial and transportation facilities are also dependent on natural gas and refined oil products.



NAS. 2012. Terrorism and the Electric Power Delivery System. National Academy of Sciences.

NAS. 2013. The Resilience of the Electric Power Delivery System in Response to Terrorism and Natural Disasters. National Academy of Sciences.

CR. September 4 & 23, 2003. Implications of power blackouts for the nation’s cybersecurity and critical infrastructure protection. Congressional Record, House of Representatives, Serial No. 108–23. Christopher Cox (California), Chairman, Select Committee on Homeland Security.


Barges are more energy efficient than rail and truck

[Image: marine highways]


[After reading two congressional hearings, one in 2008 and another in 2013, about how the inland waterway system was falling apart, and had been for 30 years, I was curious to know why such an important asset would be allowed to fall apart. In the testimony, it was said that more money was collected in fees by the government than doled back out in capital and maintenance expenses (true from 1991 to 2006, per NAS 2015). It was said at the 2013 hearing that the U.S. Army Corps of Engineers (USACE) has a set budget, so if the money put into the Inland Waterway Trust Fund were actually given to port and river projects, other USACE projects would not be funded.

So the selection of waterways projects for authorization has a long history of being driven largely by political and local concerns (NAS 2015). Many states got a lot more money than they put in. The NAS report explains in gory detail what an irrational, byzantine mess the approval and funding process is.

National energy policy is not based on energy efficiency: CAFE fuel-economy standards stagnated for decades. Instead, massive, polluting, gas-guzzling vehicles have pummeled the hell out of our bridge and road infrastructure, wasting decades of oil that future generations will be angry about when the permanent oil crisis arrives.

Now that we’re at peak oil, a lot more attention and funding ought to go to the waterway system. 

Alice Friedemann]

Energy Intensity of Barges and other transport

Barges are the second most energy efficient form of transport, next to large container and bulk ships.

Barges towed down a river get 953 ton-miles per gallon, but towed against the current this drops to 243 (Tolliver), for an overall average of 576 ton-miles per gallon, versus 413 for rail and 155 for truck.

Barge versus rail

Davis reports that rail (294 Btu/ton-mile in 2012) is 40% more energy intensive than barge (210 Btu/ton-mile in 2012), nearly the same percentage difference as reported by Kruse (2013) who found 311 Btu/ton-mile for rail and 223 Btu/ton-mile for inland towing.

Dager (2013) reports even lower energy intensity for inland barge transport on the basis of independent data and fuel use modeling, corresponding to about 196 Btu/ton-mile, meaning average rail is roughly 60 percent more energy intensive per ton-mile.

Commodity-specific configurations can do even better. Dager reports that towboats on the Mississippi River between the mouth of the Missouri and Baton Rouge, Louisiana, averaged 867 ton-miles per gallon in 2011, versus the system average of 656. Baumel (2008) reported that unit grain trains moving from Iowa to New Orleans, Louisiana, had route-specific fuel efficiency of 640 ton-miles per gallon, 54% better than an average train.
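The Btu-per-ton-mile figures and the ton-miles-per-gallon figures above are two views of the same quantity, linked by the energy content of diesel fuel. A quick sanity check (the ~138,000 Btu/gallon diesel figure is my assumption, a commonly used approximation, not a number from these sources):

```python
DIESEL_BTU_PER_GAL = 138_000  # approximate energy content of a gallon of diesel (assumption)

def ton_miles_per_gallon(btu_per_ton_mile):
    """Convert an energy-intensity figure (Btu/ton-mile) to fuel efficiency (ton-mi/gal)."""
    return DIESEL_BTU_PER_GAL / btu_per_ton_mile

print(round(ton_miles_per_gallon(210)))  # barge (Davis): 657, close to Dager's 656 system average
print(round(ton_miles_per_gallon(294)))  # rail (Davis): 469
print(round(ton_miles_per_gallon(196)))  # barge (Dager): 704
```

The close agreement between the converted Davis figure and Dager's independently measured 656 ton-miles per gallon suggests the two sources are internally consistent.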

“24th Annual State of Logistics Report: Is This the New Normal”, by Roz Wilson

Drought effect on barges

There were numerous times when sections of the Mississippi in particular could support traffic in only one direction at a time, because the width of the channel would not permit two-way passage, despite the fact that the Army Corps of Engineers was providing emergency dredging. Barges were often backed up for days at a time awaiting passage. At one point there were close to 100 vessels run aground on the lower Mississippi.

Shallower channels meant lighter loads, lower speeds and fewer barges, all of which ran up costs. Several harbors were closed at the height of the drought. It is estimated that every inch of draft lost to low water represents thousands of tons of potential products that cannot be moved. An 11-mile stretch of the Mississippi was closed intermittently in August, causing queues of up to 100 tows. Every single day a towboat sits idle, it costs the owner $10,000. No surprise that shipping rates increased close to 25% during that period.

Just to show you how important the waterways really are, take a look at this modal comparison chart and look at what you can move in one barge compared to what you can move in railroad cars or trucks, or in one barge tow.

[Image: modal comparison of barge, rail, and truck]

We should be using the water part of our system a lot more, and more efficiently, I think, than we are. Just to bring it home, look at the ton-miles traveled per gallon of fuel for the various modes. Looking at this really makes you want to figure out ways that we can use our waterways more effectively.

Most of the lock infrastructure has already exceeded its expected life. We need to fix the aging infrastructure. And then we need to build more landside infrastructure to support containers on barges and for transloading to the other modes.

Nicholas Kehoe. Oct 17, 2012. An Update on America’s Marine Highway Program

The infrastructure for seaports in our country was developed long ago; much of it goes back to the 1930s, and some of it down in the South goes back even earlier than that. But the majority of the infrastructure in the country dates back to the 1950s, 60s and 70s. When it was built, it had an expected life cycle of about 50 years.

If you look at the freight network map that has been used now for almost 5 years, not all of those highways connect to all of those ports. A prime set of ports missing is the Great Lakes. Look at the map and look at Duluth, which generates a significant amount of tonnage. Duluth supports one of the last US steelmaking plants and does not even have any highways connecting to it on that map. So, we need to work together to make sure that our freight highway system connects to the ports that the freight is flowing to, so that we can have that intermodal connectivity.

The purpose of the Marine Highway program, as legislated, is to mitigate landside congestion. And, we are to encourage the use of short sea or Marine Highway transportation through development and expansion of designated corridors, similar to the highway corridors, but for waterways. We use documented vessels and services, which means US flagged vessels. And, we have to encourage shipper utilization of the program. If the market is not establishing a program on its own, there is reason for that: there are policy disincentives to using marine highways, we are discovering. The system as it stands does not make water transport easy. We are lacking purpose-built vessels to carry the freight on the water on the routes that we are identifying.


Baumel, C. P. 2008. The Mississippi River System Shallow Draft Barge Market—Perfectly Competitive or Oligopolistic? Journal of the Transportation Research Forum, Vol. 47, No. 4, pp. 5–18.

Dager, C. A. 2013. Fuel Tax Report, 2011. Center for Transportation Research, University of Tennessee, Knoxville.

Davis, S. C., S. W. Diegel, and R. G. Boundy. 2014. Transportation Energy Data Book, 33rd ed. Oak Ridge National Laboratory, Oak Ridge, Tenn.

Kruse, C. J., D. Ellis, A. Protopapas, and N. Norboge. 2013. New Approaches for U.S. Lock and Dam Maintenance and Funding. Texas A&M Transportation Institute, Texas A&M University, College Station.

NAS. 2015. TRB Special Report 315: Funding and Managing the U.S. Inland Waterways System: What Policy Makers Need to Know. Transportation Research Board, National Academy of Sciences. 157 pages.

Tolliver, D., et al. October 2013. Comparing rail fuel efficiency with truck and waterway. Transportation Research Part D: Transport and Environment, Vol. 24, pp. 69–75.




Trucking and Fracking

September 18, 2013. The Transportation Needs and Impacts of Fracking-Based Energy Extraction. U.S. Department of Transportation, Federal Highway Administration

[Image: fracking and trucks]








May 16, 2012. Jack Olson. Impacts of Heavy or Oversize Truck Shipments on the U.S. Highway Network

In the early 1990s, these rigs weighed about 90,000 pounds. Today they weigh about 110,000 pounds. This is true for most of the equipment used in the oil industry; it is getting larger. There are many different kinds of equipment necessary to bring an oil well into production, and the number of truckloads that are involved with each of these oil productions is dependent on whether the well is drilled vertically or horizontally. It is also dependent on the depth of the well, the moving efficiencies of the companies that are moving the pieces of equipment, and a variety of other factors that influence the overall figures. A vertical well takes about 400 truckloads one-way, and a horizontal well takes about 1,150 truckloads one-way, or 2,300 truckloads total, inbound and outbound.

[Image: fracking trucks on the road]

Several of the loads that are used to drill a well are oversized or overweight, many of them exceeding the legal load limit in North Dakota of 105,500 pounds on most of our highways. The largest of these is the mud pump, which weighs 164,000 pounds. There are two of those that move into each of the sites. Of the 100 or so loads used to move just the drilling rig portion of the operation when bringing a well into production, 40 to 50 are overweight, and 3 out of 4 loads are also oversized.

[Image: fracking truck overloads in pounds]










Oil is initially transported to rail facilities or pipeline locations by collection pipelines or trucks – almost exclusively by trucks. About 70% of all oil is currently being trucked from wells to pipelines and transfer locations. On average, a typical Bakken well produces about three truckloads of oil per day during its first year of production.

Bakken oil wells produce about one barrel of salt water for every three barrels of oil during the first year of production. Salt water is transported by pipelines in some cases, but most of it is trucked to saltwater disposal sites.

Individual wells are the destination of sand or proppants, which are used to maintain the cracks in the formation so the oil can seep to the well bore. Three years ago, Williston, North Dakota was the only location receiving sand for the fracking process. Today, fracking sand and proppants are shipped to several locations by rail and then by trucks for final delivery to the well sites. The same is true of pipe used in the oil drilling phase. Again, it is brought into the state by rail to several different locations and then transported by truck to the drilling site. In addition to the state’s pipeline infrastructure, which is capable of transporting about 535,000 barrels/day, there are 13 rail facilities capable of transporting about 720,000 barrels/day. Unfortunately, rail and pipeline transportation capacity is not always available near the locations of oil production. The typical truck, similar to the one used to transport saltwater, can transport about 220 barrels of oil per load.

The EOG Resources Rail Transload Facility near Stanley, North Dakota currently ships 65,000 barrels/day. Every day, 125 truckloads deliver between 20,000 and 25,000 barrels of oil to the facility. Depending on their size, each of the state’s rail transload facilities has similar truck-generating impacts on the system.
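Putting the testimony's truck figures together (the per-load averages below are deduced from the quoted numbers, not stated directly in the testimony):

```python
BBL_PER_TRUCKLOAD = 220  # maximum truckload capacity cited in the testimony

# A typical first-year Bakken well: ~3 truckloads of oil per day
print(3 * BBL_PER_TRUCKLOAD)  # 660 barrels/day if trucks run full

# EOG's Stanley facility: 125 truckloads/day delivering 20,000-25,000 bbl/day,
# implying the trucks average 160-200 bbl/load, i.e. often below the 220-bbl maximum
low_avg, high_avg = 20_000 / 125, 25_000 / 125
print(low_avg, high_avg)  # 160.0 200.0
```

At those rates the Stanley facility alone generates roughly 250 truck movements a day (loaded in plus empty out), which is why transload facilities concentrate so much road wear.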

Mark Murawski, "Transportation Patterns and Impacts from Marcellus Development"

Each well pad typically uses 3 to 5 acres of land, holds 6-8 wells, and is developed over a 4 to 6 week period. About 5,000 tons of aggregate are needed, which generates 400 truck trips. Then the actual drilling requires more equipment, water, and cement, generating another 150-200 truck trips over another 4 to 5 week period.

The third stage is the fracking itself, which takes another 800 to 1,000 truck trips transporting 3-6 million gallons of water and frack sand over another 1-2 week period. At the end of the day, per pad, you’re looking at 2-3 months of development and 1,250-1,600 cumulative truck trips over roads that may have had only 100 or 200 vehicles a day on them previously.
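Summing the quoted per-stage figures gives 1,350 to 1,600 trips, roughly matching the cumulative range cited (the stage ranges are the testimony's; the arithmetic is mine):

```python
# Truck trips per well pad, by development stage (ranges from the testimony)
stages = {
    "site prep": (400, 400),   # aggregate hauling
    "drilling":  (150, 200),   # equipment, water, cement
    "fracking":  (800, 1000),  # water and frack sand
}
low = sum(lo for lo, hi in stages.values())
high = sum(hi for lo, hi in stages.values())
print(low, high)  # 1350 1600
```

On a rural road carrying 100-200 vehicles a day, compressing 1,350-1,600 heavy truck trips into 2-3 months can multiply daily traffic several times over.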

Some roads went from 150 to an additional 700 trucks per day and that has been quite a challenge.

Two thirds of the road system in Lycoming County is locally owned by different municipalities, not the state of Pennsylvania; the other third is owned by the state. Another concern we have is accelerated deterioration of pavement lifecycles on roads that are not bonded. So who’s going to pay that bill? In Pennsylvania, transportation funding is basically derived from the gas tax at the state level. But we have no comprehensive database on the condition of local roads; we do for the state road system.

The railroad impacts have been significant as well. Right now about twenty percent of their rail traffic is gas related, and it helps take trucks off the road, but it’s not a substitute: since the wells are in locations not served by rail, there still needs to be an interface point, and truck traffic still has to happen there. You see a lot of Marcellus gas commodities transported by rail, such as frack sand, pipe, and other equipment related to the industry, and they come from a large swath of the United States.

Our main transfer terminal point between rail and truck freight for Marcellus or for anything is the Newberry Rail Yard, which is now operating at full capacity.

Obviously, the siting of the wells in remote areas that are difficult to access, and the trucks going through small communities that have inadequate capacity to handle the sudden traffic at intersections, have definitely raised public discontent.


Dr. Cesar Quiroga, "Fracking-based Energy Development and Transportation Impacts and Needs"

When we looked at the numbers, fifty to sixty percent of the selected segments were expected to have less than five years of remaining life. That's quite significant. One of the things we did then was to try to estimate pavement life, and we developed a couple of tools to do this. For those of you familiar with the energy development industry, you need to dispose of the saltwater using disposal facilities, typically injection wells.

Many of these injection wells are permitted by the maximum number of barrels they can receive per day. If you take a number, for example 20,000 barrels per day, you can translate that into the number of truckloads received per day or per year; and if you are familiar with pavement design, roads are designed for different numbers of ESALs (equivalent single axle loads). If you assume a rural road that was new when the process started, at a rate of 20,000 barrels a day the road serving the facility may not have more than four years of life.
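The conversion Dr. Quiroga describes can be sketched in a few lines. The barrels-per-truck, ESALs-per-truck, and design-ESAL figures below are illustrative assumptions, not values from the talk; plugging in plausible numbers reproduces a pavement life of roughly four years.

```python
# Rough pavement-life estimate for a road serving a saltwater disposal well.
# All parameter values are illustrative assumptions, not figures from the talk.

BARRELS_PER_DAY = 20_000       # permitted intake of the disposal facility
BARRELS_PER_TRUCK = 130        # typical water-hauling tanker (assumption)
ESALS_PER_LOADED_TRUCK = 1.5   # equivalent single-axle loads per trip (assumption)
DESIGN_ESALS = 350_000         # design life of a new low-volume rural road (assumption)

trucks_per_day = BARRELS_PER_DAY / BARRELS_PER_TRUCK
esals_per_year = trucks_per_day * ESALS_PER_LOADED_TRUCK * 365
life_years = DESIGN_ESALS / esals_per_year

print(f"{trucks_per_day:.0f} loaded trucks/day -> {esals_per_year:,.0f} ESALs/yr")
print(f"Estimated pavement life: {life_years:.1f} years")
```

With these assumed inputs the road is consumed in about four years, which is the order of magnitude the talk reports; the point is that a permitted barrels-per-day figure translates almost mechanically into a pavement lifetime.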

One other thing that we did as an example was to estimate the impact statewide. We produced a high-level estimate for the state of about $1 billion per year on state roads. Taking into consideration that local and county roads account for roughly the same mileage, we came up with an estimate of about $2 billion a year, which is quite significant.

There were some assumptions we had to make regarding the buffer around the facilities within which we counted impacts, and we didn't include U.S. highways, so if anything the impact would be higher than the number I just mentioned. Another important thing to keep in mind is that you may have overweight loads.

Just to give you an idea how important the overweight factor is: 80,000 pounds is the reference, and if you increase the load to 100,000 pounds, the increase in weight is only 25 percent, but the pavement impact rises to roughly 240 percent of the baseline, which is quite significant, and that is something we should not forget.

It has been documented here as well: I live in San Antonio, very close to the Eagle Ford, and when the increase in truck traffic became evident, there were also increases in documented crashes and fatalities. You can document this using other data too, for example commercial vehicle violations, which bear on the overweight impact I mentioned. It turns out that a significant number of violations pertained to exceeding the maximum tandem axle weight, in this case 34,000 pounds, and energy-related traffic ranked higher in violations than non-energy-related traffic.

Let me try to summarize some of these issues in terms of things that are happening right now. I think nationwide there is an increasing amount of awareness, but when you talk to stakeholders and county officials, it depends: most of the focus is on environmental and water issues. One of the needs I see is to continue to increase awareness about the impact on transportation and infrastructure, using the numbers I mentioned earlier.




Posted in Trucks

Admiral Hyman G. Rickover: Energy Resources and Our Future, 1957

May 14, 1957. Rear Admiral Hyman G. Rickover, U.S. Navy. Energy Resources and Our Future. Scientific Assembly of the Minnesota State Medical Association

Energy Resources and Our Future

[I’ve shortened and reworded some of this speech. Alice Friedemann]
We live in what historians may some day call the Fossil Fuel Age. Today coal, oil, and natural gas supply 93% of the world’s energy; water power accounts for only 1%; and the labor of men and domestic animals the remaining 6%. This is a startling reversal from 1850 – only a century ago. Then fossil fuels supplied 5% of the world’s energy, and men and animals 94%. Over 80 percent of all the coal, oil, and gas consumed since the beginning of the Fossil Fuel Age has been burned up in the last 55 years.

These fuels have been known to man for more than 3,000 years. In parts of China, coal was used for domestic heating and cooking, and natural gas for lighting as early as 1000 B.C. The Babylonians burned asphalt a thousand years earlier. But these early uses were sporadic and of no economic significance. Fossil fuels did not become a major source of energy until machines running on coal, gas, or oil were invented. Wood, for example, was the most important fuel until 1880 when it was replaced by coal; coal, in turn, has only recently been surpassed by oil in this country.

Once in full swing, fossil fuel consumption has accelerated at phenomenal rates. All the fossil fuels used before 1900 would not last five years at today’s rates of consumption.

Nowhere are these rates higher and growing faster than in the United States. Our country, with only 6% of the world’s population, uses one third of the world’s total energy input. Each American has at his disposal, each year, energy equivalent to that obtainable from eight tons of coal — 6 times the world’s per capita energy consumption.
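[A quick arithmetic check, not part of the speech: Rickover's "6 times the world's per capita energy consumption" follows directly from the population and energy shares he cites.]

```python
# Sanity check of Rickover's per-capita figure from the shares he gives.
us_population_share = 0.06   # "only 6% of the world's population"
us_energy_share = 1 / 3      # "uses one third of the world's total energy input"

per_capita_multiple = us_energy_share / us_population_share
print(f"US per-capita energy use ~ {per_capita_multiple:.1f}x the world average")
```

The ratio works out to about 5.6, which Rickover rounds to "6 times" the world average.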

With high energy consumption goes a high standard of living. Thus the enormous fossil energy which we in this country control feeds machines which make each of us master of an army of mechanical slaves. Man’s muscle power is rated at 35 watts continuously, or one-twentieth horsepower. Machines therefore furnish every American industrial worker with energy equivalent to that of 244 men, while at least 2,000 men push his automobile along the road, and his family is supplied with 33 faithful household helpers. Each locomotive engineer controls energy equivalent to that of 100,000 men; each jet pilot of 700,000 men. Truly, the humblest American enjoys the services of more slaves than were once owned by the richest nobles, and lives better than most ancient kings. In retrospect, and despite wars, revolutions, and disasters, the hundred years just gone by may well seem like a Golden Age.

Whether this Golden Age will continue depends entirely upon our ability to keep energy supplies in balance with the needs of our growing population. Before I go into this question, let me review briefly the role of energy resources in the rise and fall of civilizations.

Possession of surplus energy is, of course, a requisite for any kind of civilization, for if man possesses merely the energy of his own muscles, he must expend all his strength – mental and physical – to obtain the bare necessities of life.

Surplus energy provides the material foundation for civilized living – a comfortable and tasteful home instead of a bare shelter; attractive clothing instead of mere covering to keep warm; appetizing food instead of anything that suffices to appease hunger. It provides the freedom from toil without which there can be no art, music, literature, or learning. There is no need to belabor the point. What lifted man – one of the weaker mammals – above the animal world was that he could devise, with his brain, ways to increase the energy at his disposal, and use the leisure so gained to cultivate his mind and spirit. Where man must rely solely on the energy of his own body, he can sustain only the most meager existence.

Man’s first step on the ladder of civilization dates from his discovery of fire and his domestication of animals. With these energy resources he was able to build a pastoral culture.

To move upward to an agricultural civilization he needed more energy. In the past this was found in the labor of dependent members of large patriarchal families, augmented by slaves obtained through purchase or as war booty. There are some backward communities which to this day depend on this type of energy.

Slave labor was necessary for the city-states and the empires of antiquity; they frequently had slave populations larger than their free citizenry. As long as slaves were abundant and no moral censure attached to their ownership, incentives to search for alternative sources of energy were lacking; this may well have been the single most important reason why engineering advanced very little in ancient times.

A reduction of per capita energy consumption has always in the past led to a decline in civilization and a reversion to a more primitive way of life. For example, exhaustion of wood fuel is believed to have been the primary reason for the fall of  once flourishing civilizations in Asia. India and China once had large forests, as did much of the Middle East.

Deforestation not only lessened the energy base but had a further disastrous effect: lacking plant cover, soil washed away, and with soil erosion the nutritional base was reduced as well.

Another cause of declining civilization comes with pressure of population on available land. A point is reached where the land can no longer support both the people and their domestic animals. Horses and mules disappear first. Finally even the versatile water buffalo is displaced by man who is two and one half times as efficient an energy converter as are draft animals. It must always be remembered that while domestic animals and agricultural machines increase productivity per man, maximum productivity per acre is achieved only by intensive manual cultivation.

It is a sobering thought that the impoverished people of Asia, who today seldom go to sleep with their hunger completely satisfied, were once far more civilized and lived much better than the people of the West. And not so very long ago, either. It was the stories brought back by Marco Polo of the marvelous civilization in China which turned Europe’s eyes to the riches of the East, and induced adventurous sailors to brave the high seas in their small vessels searching for a direct route to the fabulous Orient. The “wealth of the Indies” is a phrase still used, but whatever wealth may be there it certainly is not evident in the life of the people today.

Asia failed to keep technological pace with the needs of her growing populations and sank into such poverty that in many places man has become again the primary source of energy, since other energy converters have become too expensive. This must be obvious to the most casual observer. What this means is quite simply a reversion to a more primitive stage of civilization with all that it implies for human dignity and happiness.

Anyone who has watched a sweating Chinese farm worker strain at his heavily laden wheelbarrow, creaking along a cobblestone road, or who has flinched as he drives past an endless procession of human beasts of burden moving to market in Java – the slender women bent under mountainous loads heaped on their heads – anyone who has seen statistics translated into flesh and bone, realizes the degradation of man’s stature when his muscle power becomes the only energy source he can afford. Civilization must wither when human beings are so degraded.

Where slavery represented a major source of energy, its abolition had the immediate effect of reducing energy consumption and civilization declined until other sources of energy could be found. As Christianity spread through the Roman Empire and masters freed their slaves – in obedience to the teaching of the Church – the energy base of Roman civilization crumbled. This, some historians believe, may have been a major factor in the decline of Rome and the temporary reversion to a more primitive way of life during the Dark Ages. Slavery gradually disappeared throughout the Western world, except in its milder form of serfdom. That it was revived a thousand years later merely shows man’s ability to stifle his conscience – at least for a while – when his economic needs are great. Eventually, even the needs of overseas plantation economies did not suffice to keep alive a practice so deeply repugnant to Western man’s deepest convictions.

When slavery disappeared in the West engineering advanced. Men began to harness the power of nature by utilizing water and wind as energy sources. The sailing ship, in particular, which replaced the slave-driven galley of antiquity, was vastly improved by medieval shipbuilders and became the first machine enabling man to control large amounts of inanimate energy.

The next important high-energy converter used by Europeans was gunpowder – an energy source far superior to the muscular strength of the strongest bowman or lancer. With ships that could navigate the high seas and arms that could out-fire any hand weapon, Europe was now powerful enough to preempt for herself the vast empty areas of the Western Hemisphere into which she poured her surplus populations to build new nations of European stock. With these ships and arms she also gained political control over populous areas in Africa and Asia from which she drew the raw materials needed to speed her industrialization, thus complementing her naval and military dominance with economic and commercial supremacy.

When a low-energy society comes in contact with a high-energy society, the advantage always lies with the latter. The Europeans not only achieved standards of living vastly higher than those of the rest of the world, but they did this while their population was growing at rates far surpassing those of other peoples. In fact, they doubled their share of total world population in the short span of three centuries. From one sixth in 1650, the people of European stock increased to almost one third of total world population by 1950.

Meanwhile much of the rest of the world did not even keep energy sources in balance with population growth. Per capita energy consumption actually diminished in large areas. It is this difference in energy consumption which has resulted in an ever-widening gap between the one-third minority who live in high-energy countries and the two-thirds majority who live in low-energy areas.

These so-called underdeveloped countries are now finding it far more difficult to catch up with the fortunate minority than it was for Europe to initiate transition from low-energy to high-energy consumption. For one thing, their ratio of land to people is much less favorable; for another, they have no outlet for surplus populations to ease the transition since all the empty spaces have already been taken over by people of European stock.

Almost all of today’s low-energy countries have a population density so great that it perpetuates dependence on intensive manual agriculture which alone can yield barely enough food for their people. They do not have enough acreage, per capita, to justify using domestic animals or farm machinery, although better seeds, better soil management, and better hand tools could bring some improvement. A very large part of their working population must nevertheless remain on the land, and this limits the amount of surplus energy that can be produced. Most of these countries must choose between using this small energy surplus to raise their very low standard of living or postpone present rewards for the sake of future gain by investing the surplus in new industries. The choice is difficult because there is no guarantee that today’s denial may not prove to have been in vain. This is so because of the rapidity with which public health measures have reduced mortality rates, resulting in population growth as high or even higher than that of the high-energy nations. Theirs is a bitter choice; it accounts for much of their anti-Western feeling and may well portend a prolonged period of world instability.

How closely energy consumption is related to standards of living may be illustrated by the example of India. Despite intelligent and sustained efforts made since independence, India’s per capita income is still only 20 cents daily; her infant mortality is four times ours; and the life expectancy of her people is less than one half that of the industrialized countries of the West. These are ultimate consequences of India’s very low energy consumption: one-fourteenth of world average; one-eightieth of ours.

Ominous, too, is the fact that while world food production increased 9% in the six years from 1945-51, world population increased by 12%. Not only is world population increasing faster than world food production, but unfortunately, increases in food production tend to occur in the already well-fed, high-energy countries rather than in the undernourished, low-energy countries where food is most lacking.

I think no further elaboration is needed to demonstrate the significance of energy resources for our own future. Our civilization rests upon a technological base which requires enormous quantities of fossil fuels. What assurance do we then have that our energy needs will continue to be supplied by fossil fuels? The answer is – in the long run – none.

The earth is finite. Fossil fuels are not renewable. In this respect our energy base differs from that of all earlier civilizations. They could have maintained their energy supply by careful cultivation. We cannot. Fuel that has been burned is gone forever. Fuel is even more evanescent than metals. Metals, too, are non-renewable resources threatened with ultimate extinction, but something can be salvaged from scrap. Fuel leaves no scrap and there is nothing man can do to rebuild exhausted fossil fuel reserves. They were created by solar energy 500 million years ago and took eons to grow to their present volume.

In the face of the basic fact that fossil fuel reserves are finite, the exact length of time these reserves will last is important in only one respect: the longer they last, the more time do we have, to invent ways of living off renewable or substitute energy sources and to adjust our economy to the vast changes which we can expect from such a shift.

Fossil fuels resemble capital in the bank. A prudent and responsible parent will use his capital sparingly in order to pass on to his children as much as possible of his inheritance. A selfish and irresponsible parent will squander it in riotous living and care not one whit how his offspring will fare.

Engineers whose work familiarizes them with energy statistics; far-seeing industrialists who know that energy is the principal factor which must enter into all planning for the future; responsible governments who realize that the well-being of their citizens and the political power of their countries depend on adequate energy supplies – all these have begun to be concerned about energy resources. In this country, especially, many studies have been made in the last few years, seeking to discover accurate information on fossil-fuel reserves and foreseeable fuel needs.

Statistics involving the human factor are, of course, never exact. The size of usable reserves depends on the ability of engineers to improve the efficiency of fuel extraction and use. It also depends on discovery of new methods to obtain energy from inferior resources at costs which can be borne without unduly depressing the standard of living. Estimates of future needs, in turn, rely heavily on population figures which must always allow for a large element of uncertainty, particularly as man reaches a point where he is more and more able to control his own way of life.

Current estimates of fossil fuel reserves vary to an astonishing degree. In part this is because the results differ greatly if cost of extraction is disregarded or if in calculating how long reserves will last, population growth is not taken into consideration; or, equally important, not enough weight is given to increased fuel consumption required to process inferior or substitute metals. We are rapidly approaching the time when exhaustion of better grade metals will force us to turn to poorer grades requiring in most cases greater expenditure of energy per unit of metal.

But the most significant distinction between optimistic and pessimistic fuel reserve statistics is that the optimists generally speak of the immediate future – the next twenty-five years or so – while the pessimists think in terms of a century from now. A century or even two is a short span in the history of a great people. It seems sensible to me to take a long view, even if this involves facing unpleasant facts.

For it is an unpleasant fact that according to our best estimates, total fossil fuel reserves recoverable at not over twice today’s unit cost, are likely to run out at some time between the years 2000 and 2050, if present standards of living and population growth rates are taken into account. Oil and natural gas will disappear first, coal last. There will be coal left in the earth, of course. But it will be so difficult to mine that energy costs would rise to economically intolerable heights, so that it would then become necessary either to discover new energy sources or to lower standards of living drastically.

For more than one hundred years we have stoked ever growing numbers of machines with coal; for fifty years we have pumped gas and oil into our factories, cars, trucks, tractors, ships, planes, and homes without giving a thought to the future. Occasionally the voice of a Cassandra has been raised only to be quickly silenced when a lucky discovery revised estimates of our oil reserves upward, or a new coalfield was found in some remote spot. Fewer such lucky discoveries can be expected in the future, especially in industrialized countries where extensive mapping of resources has been done.

Yet the popularizers of scientific news would have us believe that there is no cause for anxiety, that reserves will last thousands of years, and that before they run out science will have produced miracles. Our past history and security have given us the sentimental belief that the things we fear will never really happen – that everything turns out right in the end. But, prudent men will reject these tranquilizers and prefer to face the facts so that they can plan intelligently for the needs of their posterity.

Looking into the future, from the mid-20th Century, we cannot feel overly confident that present high standards of living will of a certainty continue through the next century and beyond. Fossil fuel costs will soon definitely begin to rise as the best and most accessible reserves are exhausted, and more effort will be required to obtain the same energy from remaining reserves. It is likely also that liquid fuel synthesized from coal will be more expensive. Can we feel certain that when economically recoverable fossil fuels are gone science will have learned how to maintain a high standard of living on renewable energy sources?

I believe it would be wise to assume that the principal renewable fuel sources which we can expect to tap before fossil reserves run out will supply only 7 to 15% of future energy needs. The five most important of these renewable sources are wood fuel, farm wastes, wind, water power, and solar heat.

Wood fuel and farm wastes are dubious as substitutes because of growing food requirements to be anticipated. Land is more likely to be used for food production than for tree crops; farm wastes may be more urgently needed to fertilize the soil than to fuel machines.

Wind and water power can furnish only a very small percentage of our energy needs. Moreover, as with solar energy, expensive structures would be required, making use of land and metals which will also be in short supply. Nor would anything we know today justify putting too much reliance on solar energy though it will probably prove feasible for home heating in favorable localities and for cooking in hot countries which lack wood, such as India.

More promising is the outlook for nuclear fuels. These are not, properly speaking, renewable energy sources, at least not in the present state of technology, but their capacity to “breed” and the very high energy output from small quantities of fissionable material, as well as the fact that such materials are relatively abundant, do seem to put nuclear fuels into a separate category from exhaustible fossil fuels. The disposal of radioactive wastes from nuclear power plants is a problem which must be solved before there can be any widespread use of nuclear power.

Another limit in the use of nuclear power is that we do not know today how to employ it otherwise than in large units to produce electricity or to supply heating. Because of its inherent characteristics, nuclear fuel cannot be used directly in small machines, such as cars, trucks, or tractors. It is doubtful that it could in the foreseeable future furnish economical fuel for civilian airplanes or very large ships. Rather than nuclear locomotives, it might prove advantageous to move trains by electricity produced in nuclear central stations. We are only at the beginning of nuclear technology, so it is difficult to predict what we may expect.

Transportation – the lifeblood of all technically advanced civilizations – seems to be assured, once we have borne the initial high cost of electrifying railroads and replacing buses with streetcars or interurban electric trains. But, unless science can perform the miracle of synthesizing automobile fuel from some energy source as yet unknown or unless trolley wires power electric automobiles on all streets and highways, it will be wise to face up to the possibility of the ultimate disappearance of automobiles, trucks, buses, and tractors. Before all the oil is gone and hydrogenation of coal for synthetic liquid fuels has come to an end, the cost of automotive fuel may have risen to a point where private cars will be too expensive to run and public transportation again becomes a profitable business.

Today the automobile is the most uneconomical user of energy. Its efficiency is 5% compared with 23% for the Diesel-electric railway. It is the most ravenous devourer of fossil fuels, accounting for over half of the total oil consumption in this country. And the oil we use in the United States in one year took nature about 14 million years to create. Curiously, the automobile, which is the greatest single cause of the rapid exhaustion of oil reserves, may eventually be the first fuel consumer to suffer. Reduction in automotive use would necessitate an extraordinarily costly reorganization of the pattern of living in industrialized nations, particularly in the United States. It would seem prudent to bear this in mind in future planning of cities and industrial locations.

Our present known reserves of fissionable materials are many times as large as our net economically recoverable reserves of coal. A point will be reached before this century is over when fossil fuel costs will have risen high enough to make nuclear fuels economically competitive. Before that time comes we shall have to make great efforts to raise our entire body of engineering and scientific knowledge to a higher plateau. We must also induce many more young Americans to become metallurgical and nuclear engineers. Else we shall not have the knowledge or the people to build and run the nuclear power plants which ultimately may have to furnish the major part of our energy needs. If we start to plan now, we may be able to achieve the requisite level of scientific and engineering knowledge before our fossil fuel reserves give out, but the margin of safety is not large. This is also based on the assumption that atomic war can be avoided and that population growth will not exceed that now calculated by demographic experts.

War, of course, cancels all man’s expectations. Even growing world tension just short of war could have far-reaching effects. In this country it might, on the one hand, lead to greater conservation of domestic fuels, to increased oil imports, and to an acceleration in scientific research which might turn up unexpected new energy sources. On the other hand, the resulting armaments race would deplete metal reserves more rapidly, hastening the day when inferior metals must be utilized with consequent greater expenditure of energy. Underdeveloped nations with fossil fuel deposits might be coerced into withholding them from the free world or may themselves decide to retain them for their own future use. The effect on Europe, which depends on coal and oil imports, would be disastrous and we would have to share our own supplies or lose our allies.

Barring atomic war or unexpected changes in the population curve, we can count on an increase in world population from 2.5 billion today to 4 billion in the year 2000; 6 to 8 billion by 2050. The United States is expected to quadruple its population during the 20th Century from 75 million in 1900 to 300 million in 2000 – and to reach at least 375 million in 2050. This would almost exactly equal India’s present population which she supports on just a little under half of our land area.

It is an awesome thing to contemplate a graph of world population growth from prehistoric times – tens of thousands of years ago – to the day after tomorrow – let us say the year 2000 A.D. If we visualize the population curve as a road which starts at sea level and rises in proportion as world population increases, we should see it stretching endlessly, almost level, for 99% of the time that man has inhabited the earth. In 6000 B.C., when recorded history begins, the road is running at a height of about 70 feet above sea level, which corresponds to a population of 10 million. Seven thousand years later – in 1000 A.D. – the road has reached an elevation of 1,600 feet; the gradation now becomes steeper, and 600 years later the road is 2,900 feet high. During the short span of the next 400 years from 1600 to 2000 – it suddenly turns sharply upward at an almost perpendicular inclination and goes straight up to an elevation of 29,000 feet – the height of Mt. Everest, the world’s tallest mountain.

In the 8,000 years from the beginning of history to the year 2000 A.D. world population will have grown from 10 million to 4 billion, with 90% of that growth taking place during the last 5% of that period, in 400 years. It took the first 3,000 years of recorded history to accomplish the first doubling of population, 100 years for the last doubling, but the next doubling will require only 50 years. Calculations give us the astonishing estimate that one out of every 20 human beings born into this world is alive today.
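[Another check of the speech's arithmetic, not part of the original text: each doubling time corresponds, under exponential growth, to an annual growth rate of ln(2)/T.]

```python
import math

# Annual growth rate implied by each doubling time Rickover cites,
# assuming smooth exponential growth over the period.
for label, years in [("first doubling (3,000 yr)", 3000),
                     ("last doubling (100 yr)", 100),
                     ("next doubling (50 yr)", 50)]:
    rate = math.log(2) / years
    print(f"{label}: ~{rate:.2%} per year")
```

The accelerating curve he describes thus corresponds to growth rising from a few hundredths of a percent per year in antiquity to roughly 1.4 percent per year by the mid-20th century.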

The rapidity of population growth has not given us enough time to readjust our thinking. Not much more than a century ago our country, in the very spot on which I now stand, was a wilderness in which a pioneer could find complete freedom from men and from government. If things became too crowded – if he saw his neighbor’s chimney smoke – he could, and often did, pack up and move west. We began life in 1776 as a nation of less than four million people – spread over a vast continent – with seemingly inexhaustible riches of nature all about. We conserved what was scarce – human labor – and squandered what seemed abundant – natural resources – and we are still doing the same today.

Much of the wilderness which nurtured what is most dynamic in the American character has now been buried under cities, factories and suburban developments where each picture window looks out on nothing more inspiring than the neighbor’s back yard with the smoke of his fire in the wire basket clearly visible.

Life in crowded communities cannot be the same as life on the frontier. We are no longer free, as was the pioneer – to work for our own immediate needs regardless of the future. We are no longer as independent of men and of government as were Americans two or three generations ago. An ever larger share of what we earn must go to solve problems caused by crowded living – bigger governments; bigger city, state, and federal budgets to pay for more public services. Merely to supply us with enough water and to carry away our waste products becomes more difficult and expensive daily. More laws and law enforcement agencies are needed to regulate human relations in urban industrial communities and on crowded highways than in the America of Thomas Jefferson.

Certainly no one likes taxes, but we must become reconciled to larger taxes in the larger America of tomorrow.

I suggest that this is a good time to think soberly about our responsibilities to our descendants – those who will ring out the Fossil Fuel Age. Our greatest responsibility, as parents and as citizens, is to give America’s youngsters the best possible education. We need the best teachers and enough of them to prepare our young people for a future immeasurably more complex than the present, and calling for ever larger numbers of competent and highly trained men and women. This means that we must not delay building more schools, colleges, and playgrounds. It means that we must reconcile ourselves to continuing higher taxes to build up and maintain at decent salaries a greatly enlarged corps of much better trained teachers, even at the cost of denying ourselves such momentary pleasures as buying a bigger new car, or a TV set, or household gadget. We should find – I believe – that these small self-denials would be far more than offset by the benefits they would buy for tomorrow’s America. We might even – if we wanted – give a break to these youngsters by cutting fuel and metal consumption a little here and there so as to provide a safer margin for the necessary adjustments which eventually must be made in a world without fossil fuels.

One final thought I should like to leave with you. High-energy consumption has always been a prerequisite of political power. The tendency is for political power to be concentrated in an ever-smaller number of countries. Ultimately, the nation which controls the largest energy resources will become dominant. If we give thought to the problem of energy resources, if we act wisely and in time to conserve what we have and prepare well for necessary future changes, we shall insure this dominant position for our own country.


Book list: What to do

I’m not sure that collapse can be coped with, but if it can, it’s at a local level, cooperating and sharing with people nearby, because few have all the skills necessary for survival on their own year after year. Given the popularity of “re-education”, concentration camps, involuntary conscription into armies, enslavement, and so on in history across cultures and civilizations, your number one survival skill is to have useful skills and keep your mouth shut lest you offend anyone…

What to do – not in any particular order

Richard Heinberg.

  1. The Oil Depletion Protocol : A Plan to Avert Oil Wars, Terrorism And Economic Collapse
  2. Powerdown : Options and Actions for a Post-Carbon World
  3. The Party’s Over: Oil, war, and the Fate of Industrial Societies

James H. Kunstler. The Long Emergency: Surviving the Converging Catastrophes of the Twenty-First Century
Gene Gerue. How to Find Your Ideal Country Home: A Comprehensive Guide

Howard T. Odum. The Prosperous Way Down: Principles and Policies
Ted Trainer. The Alternative, Sustainable Society; the Simpler Way

Fellmeth. 1973. Politics of Land in California. (If we’re on the way to becoming 90% farmers again, land reform will be essential: currently most farms are huge and owned by wealthy individuals and corporations, and if that remains the case, ecologically unsound mono-crops and poorly paid, near-slave farm labor will be the future direction.)

John Barry. The Great Influenza: The Epic Story of the Deadliest Plague in History. (As the energy crisis/collapse grows worse, malnutrition will make many more people vulnerable to disease, and less medical care will make its spread and deadliness more likely, so inform yourself on what happens. Plus this is one of the best books I’ve ever read: fascinating, and you’ll learn very surprising aspects of this time period you’ve probably never read anywhere else.)


Biofuel distribution wastes valuable diesel fuel

Biofuels can’t use the existing refined-petroleum pipeline system, by far the cheapest way to move fuel: on average 17.5 times cheaper than truck, 5 times cheaper than rail, and 2.25 times cheaper than barge (Curley). Delivering biofuels therefore consumes finite, far more energy-dense diesel fuel in rail, truck, and barge shipments to the gasoline/diesel storage tanks where blending occurs, most of which aren’t served by rail:

[Figure: gasoline distribution and consumption]

[Figure: ethanol distribution system]

Source: National Commission on Energy Policy’s Task Force on Biofuels Infrastructure. 2008. Bipartisan Policy Center, Washington, D.C.
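The cost multiples above can be combined into a rough index of delivery cost by transport mode. A minimal sketch (illustrative only: the `relative_cost` function is my own construction; the multiples come from Curley as cited above, and the example modal split is the U.S. ethanol split cited later in this post):

```python
# Relative per-unit fuel shipping costs, normalized to pipeline = 1.0,
# using the average multiples cited above (Curley 2008).
MODE_COST = {
    "pipeline": 1.0,
    "truck": 17.5,   # 17.5x pipeline
    "rail": 5.0,     # 5x pipeline
    "barge": 2.25,   # 2.25x pipeline
}

def relative_cost(volume, mode_shares):
    """Cost index for moving `volume` units, given mode -> share of volume."""
    return volume * sum(MODE_COST[m] * s for m, s in mode_shares.items())

# Ethanol moved 67% by truck, 31% by rail, 2% by barge, vs. all-pipeline gasoline.
ethanol_index = relative_cost(1.0, {"truck": 0.67, "rail": 0.31, "barge": 0.02})
gasoline_index = relative_cost(1.0, {"pipeline": 1.0})
print(round(ethanol_index / gasoline_index, 1))  # → 13.3 (roughly 13x all-pipeline cost)
```

The point of the sketch is simply that any realistic biofuel modal split is an order of magnitude more expensive to move than pipelined petroleum products.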

Notes from May 2011 APEC Biofuel Transportation and Distribution Options

Use of Existing Fuel Products Pipelines for Biofuel Shipments

Ethanol cannot easily be shipped via fuel products pipelines because it is a good solvent and would remove sulfur and other impurities from the pipeline system, resulting in contamination of the shipped ethanol.

Biodiesel is also a good solvent and could remove sulfur and other impurities from the pipeline system, resulting in contamination of the shipped biodiesel. In addition, there is concern regarding traces of biodiesel left in the pipeline system: trace methyl ester (biodiesel) could disarm the coalescers in aircraft fuel systems and potentially compromise aircraft safety. As a result, there is a proposal in the USA to limit the methyl ester content in the pipeline system to 5 ppm.

A national ethanol pipeline?

Obstacles to Ethanol Pipeline Shipments There are 2 types of challenges involved in moving ethanol through a pipeline:
1. Challenges due to the corrosive nature of ethanol.
2. Challenges due to incompatibility with other products and substances within the pipeline.

The obvious challenge is that ethanol behaves very differently from the refined petroleum products typically moved through pipelines. More work is needed to find ways to overcome ethanol’s effects on the pipe, the valves, and the pipeline systems themselves.

A key consideration is whether a new dedicated pipeline should be built and, if built, where it should be located. The other key question is whether long-term prices and demand could support the building of a vast trans-national pipeline. In the United States, even the Renewable Fuels Association (RFA) has stated that it is not certain that a dedicated ethanol pipeline would provide the same transport security as the more traditional barges, rail cars, and trucks.

Ethanol Solvency Issues: Ethanol’s solvent properties pose additional challenges. Over years of use, small quantities of residual sulfur and dirt from petroleum products can build up in existing pipeline systems. Although these are not soluble in petroleum products, they can be in ethanol, which can lead to discoloration and product contamination. Ethanol (and biodiesel) can strip lacquers and deposits from internal pipeline surfaces and carry them as impurities. A dedicated ethanol pipeline would not encounter these issues, because these contaminants/deposits only arise from prior transport of petroleum products.

Materials Compatibility: Compatibility and corrosion issues can arise because of the way ethanol reacts with some materials in the pipeline and associated equipment. Ethanol and biodiesel can also degrade materials used in gaskets, o-rings, and seals used in fuel transportation and storage systems. Elastomers can experience swelling, shrinking, and cracking when exposed to ethanol or biodiesel. Polymers used for coatings may be degraded by certain biofuels as well. Corrosion of certain non-ferrous metals used in gauges, meters, valves, and pumps may occur. Any part of the supply system that will be converted to biofuels service needs to be assessed for materials compatibility and refitted with more resistant materials where required.

Stress Corrosion Cracking: Another challenge in ethanol pipeline transportation is stress corrosion cracking (SCC) associated with ethanol movement and storage in pipelines and storage tanks. SCC is the slow growth of cracks along the inside of the pipeline, caused by mechanical stress combined with exposure to a corrosive environment. Research, largely funded by pipeline companies, has made great strides in addressing this problem. Industry/government research by Pipeline Research Council International, Inc. (PRCI) has found that ethanol-gasoline blends containing up to 15 percent ethanol by volume (E-15 and below) can be transported in existing pipelines without any design or operational modifications. PRCI also found that higher ethanol-containing blends (E-20 and above) and fuel-grade ethanol can be transported without SCC when certain commercial inhibitors are added. The efficacy of commercial inhibitors to mitigate SCC must be assessed prior to their use.
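The PRCI findings above amount to a simple decision rule for pipeline operators. A hypothetical sketch (the 15 percent threshold is from the text; the function name and return strings are my own):

```python
def scc_mitigation_needed(ethanol_pct: float) -> str:
    """Classify a gasoline-ethanol blend per the PRCI findings cited above:
    blends up to E-15 can ship in existing pipelines as-is; E-20 and above,
    and fuel-grade ethanol (E100), require commercial SCC inhibitors whose
    efficacy must be verified before use."""
    if not 0 <= ethanol_pct <= 100:
        raise ValueError("ethanol_pct must be between 0 and 100")
    if ethanol_pct <= 15:
        return "no design or operational modifications needed"
    return "requires validated commercial SCC inhibitors"

print(scc_mitigation_needed(10))   # no design or operational modifications needed
print(scc_mitigation_needed(85))   # requires validated commercial SCC inhibitors
```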

Water and Biofuels Fuel Quality: Small amounts of water enter pipeline systems from petroleum fuels, terminals and tank roofs. This is generally not a problem during pipeline transportation of refined petroleum products, because the water can separate in a tank and can be drained off. Unlike petroleum products, ethanol has an affinity for water as it flows through the pipeline network. The water-ethanol mixture has the potential to separate from petroleum products with which it may be mixed, resulting in degraded fuel quality. This can be managed by taking steps to cover tanks and remove excess water at certain points in the supply and distribution system.

Typical Biofuel Transport Modes

Biodiesel and biodiesel blends are transported primarily by dedicated (or washed) tanker trucks and rail cars. If the truck or railcar was used for diesel shipment in the previous load, no washing is needed, but if another type of petroleum fuel was shipped, the tank must be washed.

Ethanol and ethanol blends are also transported mainly by dedicated (or washed) tanker trucks and rail cars. If the truck or railcar was used for gasoline shipment in a prior load, no washing is needed, but if another type of petroleum fuel was shipped, the tank has to be washed.
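The washing rules for trucks and rail cars described in the two paragraphs above reduce to one predicate. A sketch with hypothetical names (the diesel-before-biodiesel and gasoline-before-ethanol pairings come from the text):

```python
# Per the rules above: a tanker needs washing before carrying a biofuel
# unless the previous load was the same biofuel (dedicated service) or
# the matching petroleum fuel (diesel before biodiesel, gasoline before ethanol).
COMPATIBLE_PRIOR_LOAD = {"biodiesel": "diesel", "ethanol": "gasoline"}

def needs_wash(next_load: str, prior_load: str) -> bool:
    if next_load == prior_load:  # dedicated truck or railcar, no wash
        return False
    return COMPATIBLE_PRIOR_LOAD.get(next_load) != prior_load

print(needs_wash("ethanol", "gasoline"))    # False
print(needs_wash("biodiesel", "jet fuel"))  # True
```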

Ethanol and ethanol blends are generally not transported via pipeline due to some concerns regarding corrosion and contamination.

Primary terminals (also called “product terminals”) are generally located near major markets and transportation modes. Some terminals are located at refineries, while others are separate tank farms that receive fuel products by pipeline, tanker truck, rail car, or marine tanker. Primary terminals are equipped with product delivery and loading racks that vary from one terminal to another. For example, some terminals linked via pipeline will not necessarily have racks adapted for other modes of transportation (train or truck). In regions with ample waterways, petroleum products may be transported to primary terminals by marine tankers. In regions that are essentially land-locked, products are often transported from refineries to terminals by pipeline. For marine shipments of biofuels, additional storage infrastructure may be required at the marine terminal. From the marine terminal, the biofuel would be delivered by truck or rail, unless a dedicated biofuels pipeline could be justified. Since primary terminals are designed to provide downstream distribution of finished products, they all have tanker trucks and high-performance fuel injection equipment at the loading rack to prepare fuel blends (i.e., in-line blending).

Pipelines are a key part of the petroleum fuel transportation infrastructure. The petroleum fuels are transported via pipeline to primary or secondary terminals, which then serve as distribution points to nearby retail sites that are supplied by tanker trucks.  It is typically at these terminals that biofuels are blended with petroleum fuel for distribution.  At present, biofuels are usually transported to the blending terminals by truck, since there are no dedicated biofuel pipelines and the terminals are not generally linked to the railway network.

Rail Cars: Biodiesel (B100) or ethanol (E100) could move by rail from the biofuel production plant to destination terminals (mainly primary terminals or, in some cases, secondary terminals equipped with rail spurs). Rail shipment is generally the most cost-effective delivery method for medium- and longer-range destinations (i.e., 500 to 5,000 km) that are incapable of receiving product by barge, tanker, or pipeline. Rail line coverage and access vary from region to region. Some terminals lack rail receipt capability, requiring biodiesel (B100) and ethanol (E100) to be transported by truck. Rail delivery might also prove infeasible in colder climates, unless the rail cars are heated and a heating system is in place at the destination terminal.

Because of the number of railcar units, the smaller volume of biofuel shipped per unit, and the laborious process of cargo unloading and inspection, rail shipments require more effort compared with ocean tankers, for example. The transportation of biodiesel and ethanol by train also requires more complex logistics (availability of heated or dedicated rail cars, delays due to cleaning non-dedicated rail cars or heating rail cars at the terminal, etc.). In some cases, installing heating systems or rail spurs adds to the terminal adaptation costs.

Tanker Trucks: In many cases, a tanker truck delivers B100 or E100 directly from the production plant to nearby terminals. In distant markets, tanker trucks may also pick up biofuel blends at primary terminals (that have received biodiesel or ethanol by tanker or rail) for delivery to secondary terminals that either cannot take product other than by truck or have insufficient tankage for larger deliveries. The redistribution of biofuel blends to retail outlets and end-users is also made by trucks.

Typical Blending and Distribution Practices

Ethanol is usually “splash blended” into tanker trucks or rail cars that already contain gasoline. The ethanol blended with the gasoline mixes readily and does not stratify.

Biodiesel is also generally splash blended or blended in tanks near the point of use. In the European Union (EU), blending is primarily done “in line” at refineries. The “splash blending” of biodiesel may result in some shock crystallization, depending on the temperature during blending or the means by which the splash blend is administered. In-line blending provides contact between the diesel and the biodiesel and mitigates this risk. At primary terminals for ethanol blending, E100 is injection or splash blended into trucks (or rail cars) before being taken to secondary terminals or to retail. Similarly, at primary terminals for biodiesel blending, the B100 is blended with the diesel by injection (or in some cases splash blending) before being distributed in its blended form (B5–B20) to secondary terminals or retail outlets (service stations, card locks, users with their own storage facilities). At this stage, the modes of shipment used no longer have to be insulated and heated.

Existing petroleum distribution terminals usually do not have rail access, creating a distribution infrastructure challenge for biofuels. Petroleum distribution facilities were generally designed for pipeline distribution of petroleum fuel products. In remote or smaller petroleum distribution terminals, product receipts were designed around truck receipt and delivery. In most cases, therefore, distribution of E100 or B100 or blended biofuel product by rail is usually impractical.

From secondary terminals (or depots), blended biofuel product is moved mainly by tanker truck to retail fueling stations (petrol and gasoline stations), with direct delivery to end-users. Delivery distance, costs, and the carbon footprint of distribution may be greater for biofuel blends than for purely petroleum-based fuels due to the concentration of biofuel feedstocks and refineries in agricultural regions remote from many key urban population centers.


The cost of shipping feedstocks more than 100 miles is generally prohibitive. In the case of advanced biofuel feedstocks such as biomass for cellulosic ethanol, even with densification technologies, the transportation costs become prohibitive beyond 100 miles. Thus, the location of future cellulosic ethanol plants is likely to be dictated by proximity to feedstock rather than proximity to market, similar to the current situation with first-generation biofuels. This also implies that most feedstocks will be delivered by truck, and that most biofuel production facilities will be located in rural areas close to feedstock, rather than close to urban fuel markets. Transportation factors to consider as biofuel production continues to expand include:

  • The capacity of the transportation system to move biofuel, feedstock, and co-products produced from biofuel, especially over long distances to fuel markets.
  • The availability of feedstock within 100 miles of biofuel plants
  • The proximity of feedstocks and biorefineries to co-product markets.
  • Uncertainty about the size and location of biofuel demand from terminals, which consolidate, transload, and distribute biofuels for blending.

Government policies towards biofuels may decrease this uncertainty. The lack of excess transportation capacity reduces flexibility in case of sudden changes in transportation demand and distribution patterns. Changes in these patterns brought on by rapidly increasing biofuel production could impact the logistics of rail networks, highway congestion, and marine logistics.

Co-Product Transportation Issues

Ethanol plants that use corn and other grains as feedstock produce a co-product called distillers grains (DDGS, dried distillers grains with solubles; WDG, wet distillers grains; and MDG, modified distillers grains). For every 56-pound bushel of corn, 17.5 pounds of DDGS and 2.76 gallons of ethanol are produced, on average. Slightly different yields of DDGS are produced from other grains. Dairy cattle operations and cattle feedlots are the primary domestic users of distillers grains as a protein supplement for ruminant animals. Research is ongoing to increase DDGS use by poultry and hog operations, which is currently limited due to the nutritional challenges DDGS present to non-ruminant animals. DDGS are initially marketed locally and delivered by truck. However, as production grows, access to wider markets may rely on rail or marine transport. Facilities using grain may also choose to adopt fractionation technologies to extract fibre, protein, starch, or sweeteners as co-products. These food-grade co-products would also require transportation infrastructure to deliver them to market.
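The per-bushel yields quoted above scale linearly, which makes the co-product transport burden easy to estimate. A quick arithmetic sketch (the per-bushel figures are from the text; the function name and the example plant size are hypothetical):

```python
# Average yields per 56-lb bushel of corn, as cited above.
DDGS_LB_PER_BUSHEL = 17.5
ETHANOL_GAL_PER_BUSHEL = 2.76
LB_PER_TON = 2000.0

def plant_outputs(bushels_per_day: float):
    """Daily DDGS (tons) and ethanol (gallons) for a given corn intake."""
    ddgs_tons = bushels_per_day * DDGS_LB_PER_BUSHEL / LB_PER_TON
    ethanol_gal = bushels_per_day * ETHANOL_GAL_PER_BUSHEL
    return ddgs_tons, ethanol_gal

# Example: a hypothetical plant grinding 50,000 bushels/day
# (roughly a 50-million-gallon-per-year ethanol plant).
ddgs, etoh = plant_outputs(50_000)
print(round(ddgs), round(etoh))  # 438 138000
```

At that scale the plant ships well over 400 tons of DDGS per day, which is why co-product logistics (truck locally, then rail or marine for wider markets) matter as much as the fuel itself.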

Biofuel Infrastructure. Managing in an Uncertain Future. Research and Innovation, Position Paper 03 – 2010

At present, biofuel is first sent by tanker truck, rail car, or barge to blending terminals, where it is blended with gasoline or diesel and then sent to consumer filling stations by truck. In the U.S., 67 percent of ethanol is transported to blending terminals by truck, 31 percent by rail car, and 2 percent by barge. Biofuel is also exported by ship to receiving terminals, which blend it with gasoline and transport it to filling stations by truck.

The U.S. passed the Energy Independence and Security Act of 2007 (EISA), which required the creation of a Renewable Fuel Standard (RFS) program. The U.S. Environmental Protection Agency issued a revised RFS effective July 1, 2010 (called RFS2) that for the first time contained specific fuel volume requirements (Figure 3).


Curley, M. 2008. Can ethanol be transported in a multi-product pipeline? Pipeline & Gas Journal 235:34.
