American Physical Society: Has the Battery Bubble Burst?

Aug/Sep 2012.

Fred Schlachter. American Physical Society. APS News, Vol. 21, No. 8.

Three years ago at a symposium on lithium-air batteries at IBM Almaden there was great optimism. The symposium “Scalable Energy Storage: Beyond Lithium Ion” had as a working message: “There are no fundamental scientific obstacles to creating batteries with ten times the energy content–for a given weight–of the best current batteries.”

Optimism had all but vanished this year at the fifth conference in the scalable-energy-storage series in Berkeley, California.

“Although new electric vehicles with advanced lithium ion batteries are being introduced, further breakthroughs in scalable energy storage, beyond current state-of-the-art lithium ion batteries, are necessary before the full benefits of vehicle electrification can be realized.”

The mood was cautious, as it is clear that lithium-ion batteries are maturing slowly, and that their limited energy density and high cost will preclude producing all-electric cars to replace the primary American family car in the foreseeable future.

“The future is cloudy” is how Venkat Srinivasan, who heads the battery research program at Berkeley Lab, summarized the conference.

Electric cars have a long history. They were popular at the dawn of the automobile age, with 28% of the automobiles produced in the United States in 1900 powered by electricity. The early popularity of electric cars faded, however, as Henry Ford introduced mass-produced cars powered with internal-combustion engines in 1908.

Gasoline was quickly recognized as nature’s ideal fuel for cars: it has a very high energy density by both weight and volume–around 500 times that of a lead-acid battery–and it was plentiful, inexpensive, and seemingly unlimited in supply. By the 1920s electric cars were no longer commercially viable and disappeared from the scene. They did not reappear until late in the 20th century as gasoline became expensive, supplies no longer seemed unlimited, and concerns over the possible effect of combustion of fossil fuels on global climate reached public awareness.

Electric cars are returning with the advent of battery chemistries that are more efficient than the lead-acid batteries of old. A new generation of electric cars has come in the form of hybrid electric vehicles (HEVs), plug-in hybrid vehicles (PHEVs), and fully electric or battery electric vehicles (BEVs). Most of the latest generation of electric vehicles are powered by lithium-ion batteries, using technology pioneered for laptop computers and mobile phones.

Powering cars with electricity rather than with gasoline offers the dual advantages of eventually eliminating our dependence on imported fossil fuels and operating cars with renewable energy resources. Eliminating dependence on petroleum imported from often-unfriendly countries will greatly improve our energy security, while powering cars from a green grid with solar and wind resources will significantly reduce the amount of CO2 released into the atmosphere.

The major barrier to replacing the primary American family car with electric vehicles is battery performance. The most significant issue is energy storage density by both weight and volume. Present technology requires an electric car to have a large and heavy battery, while providing less range than a car powered by gasoline.

Batteries are expensive, resulting in electric cars typically being much more expensive than similar-sized cars powered by gasoline. A sensible cost limit is reached when the cost of an electric car plus the electricity it consumes over its life considerably exceeds the cost of a car with an internal combustion engine plus its gasoline over the same period.

Safety is an issue much discussed in the press. Although there are more than 200,000 fires per year in gasoline-fueled cars in America, there is widespread fear of electricity. Batteries in cars powered by electricity will surely burn in some accident scenarios; the fire risk will probably be similar to that of gasoline-powered cars.

Stored energy in fuel is considerable: gasoline is the champion at 47.5 MJ/kg and 34.6 MJ/liter; the gasoline in a fully fueled car has the same energy content as a thousand sticks of dynamite. A lithium-ion battery pack has about 0.3 MJ/kg and about 0.4 MJ/liter (Chevy Volt).

Gasoline thus has about 100 times the energy density of a lithium-ion battery.

This difference in energy density is partially mitigated by the very high efficiency of an electric motor in converting energy stored in the battery into motion: it is typically 60-80% efficient. The efficiency of an internal combustion engine in converting the energy stored in gasoline into motion is typically 15% (EPA 2012). With this efficiency ratio of about 5, a battery with an energy storage density one-fifth that of gasoline would give the same range as a gasoline-powered car. We are not even close to this at present.
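The arithmetic above can be checked in a few lines. All figures come from the text except the 75% motor efficiency, which is an assumed midpoint of the article's 60-80% range:

```python
# Sanity check of the article's energy-density and efficiency numbers.
gasoline_mj_per_kg = 47.5   # energy density of gasoline (from the text)
battery_mj_per_kg = 0.3     # Li-ion pack, Chevy Volt figure (from the text)
motor_eff = 0.75            # assumed midpoint of the 60-80% range
engine_eff = 0.15           # internal-combustion engine (EPA 2012)

density_ratio = gasoline_mj_per_kg / battery_mj_per_kg  # ~158 by weight
eff_ratio = motor_eff / engine_eff                      # ~5

# For equal range per kg, a battery needs only 1/5 the density of gasoline;
# today's packs still fall short of even that relaxed target by ~30x.
parity_density = gasoline_mj_per_kg / eff_ratio         # ~9.5 MJ/kg
shortfall = parity_density / battery_mj_per_kg          # ~32x

print(round(density_ratio), round(eff_ratio), round(parity_density, 1))
```

This makes the article's "not even close" concrete: even after crediting the electric drivetrain's efficiency advantage, pack-level energy density would need to improve by more than an order of magnitude.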

Powering a car with electricity is considerably more efficient than powering a car with gasoline in terms of primary-energy consumption. While the efficiency of energy use of an electric car is very high, most power plants producing electricity are only about 30% efficient in converting primary energy to electricity delivered to the user. Conversion of petroleum to gasoline is highly efficient. This results in electricity having a factor of 1.6 improvement in use of primary energy relative to gasoline, and is an important point in its favor.
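The factor of 1.6 can be reproduced from the stated figures; the 80% motor efficiency below is an assumption (the article says only "very high"), chosen so the numbers close:

```python
# Rough primary-energy comparison implied by the text.
plant_eff = 0.30    # primary energy -> delivered electricity (from the text)
motor_eff = 0.80    # delivered electricity -> motion (assumed)
engine_eff = 0.15   # gasoline -> motion (EPA 2012; refining losses neglected)

advantage = (plant_eff * motor_eff) / engine_eff
print(round(advantage, 2))  # 1.6, matching the article's factor
```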

A 2008 APS report on energy efficiency examined statistics on how many miles Americans drive per day. The conclusion of that study was that a full fleet of PHEVs with a 40-mile electric range could reduce gasoline consumption by more than 60%. Thus America may not need a full fleet of BEVs to achieve a very considerable reduction in gasoline use.

The compelling question is whether electric cars can provide the convenience, cost, and range necessary to replace their gasoline-powered counterparts as the primary standard American family car. And this hinges almost entirely on the state of battery development, coupled with issues of making the grid green and providing widespread infrastructure for recharging electric vehicles.

The answer today is mixed:

  • HEVs are already popular, even though they represent only a small fraction of cars on the road today. The present generation of batteries is adequate for HEVs, and range is not an issue, as 100 percent of the energy to power the car comes from gasoline. Purchase cost is higher than for a conventional car; the advantage is a 40 percent or more improvement in fuel economy (EPA 2012).
  • PHEVs are now coming onto the market (Fig. 1). Electric range is limited, and batteries presently available are only marginally adequate. Total range is not an issue as gasoline is stored onboard as a “range extender.”
  • BEVs coming onto the market are expensive and the range is too small for many American drivers, at least as the primary family vehicle. Batteries with a much higher energy storage density and a lower cost are needed for BEVs to become popular outside a limited market of upscale urban dwellers as a second car to be used for local transportation, where home recharging is feasible, and where charging time is not an issue.

Battery requirements are different for HEVs, PHEVs, and BEVs. A battery for an HEV does not need to store much energy, but needs to be able to store energy quickly from regenerative braking. Because it operates over a limited charge/discharge range, its lifetime can be very long. A PHEV battery must have much greater energy-storage capacity to achieve a reasonable electric range and will operate with a considerably greater charge/discharge range, which limits the cycle life of the battery. The battery for a BEV must supply all the energy to power the car over its full range–say 150-300 km–and must use most of its charge/discharge range. These requirements mean the battery for a BEV will be large, heavy, expensive, and have a limited cycle life. Replacing a battery for a BEV could entail a cost exceeding ten thousand dollars, which, divided by miles driven, will likely exceed by a large amount the cost of electricity to power the car.
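The replacement-cost point can be sketched numerically. Only the $10,000 battery figure comes from the text; the battery life and per-mile electricity cost below are hypothetical illustration values:

```python
# Illustrative cost-per-mile comparison for a BEV battery replacement.
battery_cost = 10_000              # from the text: "could exceed ten thousand dollars"
battery_life_miles = 100_000       # hypothetical cycle-life assumption
electricity_per_mile = 0.04        # hypothetical electricity cost per mile

battery_per_mile = battery_cost / battery_life_miles
print(battery_per_mile, battery_per_mile > electricity_per_mile)
```

Under these assumptions the amortized battery cost ($0.10/mile) is several times the electricity cost, which is the comparison the text is making.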

The Berkeley 2012 symposium focused on two alternative chemistries: lithium/oxygen (lithium/air) and lithium/sulfur. Both theoretically offer much higher energy density than is possible even at the theoretical limit of lithium-ion-battery development. However, the technical difficulties in making a practical battery with good recharging capability using either of these chemistries are considerable.

There are major research issues concerning all aspects of a battery: the cathode, the anode, and the electrolyte, as well as materials interfaces and potential manufacturing issues. A Li/air (Li/O2) battery requires cooled compressed air without water vapor or CO2, which would greatly complicate a Li/air battery system. A Li/air battery would be both larger and heavier than a Li-ion battery, making prospects for automobile use unlikely in the near term. However, a leading battery-development group at IBM wrote in a 2010 article on lithium-air batteries: “Automotive propulsion batteries are just beginning the transition from nickel metal hydride to Li-ion batteries, after nearly 35 years of research and development on the latter. The transition to Li-air batteries (if successful) should be viewed in terms of a similar development cycle.” Perhaps we need to be patient.

Many approaches are being followed to develop and improve battery performance, including studies using nanotubes, nanowires, nanospheres, and other nanomaterials. However, none of the researchers reported progress to the point where a practical battery using Li/air or Li/S could be envisioned.

Thomas Greszler, manager of the cell design group at General Motors Electrochemical Energy Research Lab, was pessimistic about the prospects for new battery chemistries: “We are not investing in lithium-air and lithium-sulfur battery technology because we do not think from an automotive standpoint that it provides a substantial benefit for the foreseeable future.”

A significant infrastructure challenge is the network that will need to be constructed for recharging the battery of a BEV. There are more than 120,000 gasoline filling stations in the United States. With the range of a present-day BEV being less than a third of that of a gasoline-powered car, a very large number of recharging stations will be required, in addition to home charging, which may be feasible only for those who live in private homes or apartment buildings with dedicated parking.

Charging an electric car takes hours, and even a fast charge will take longer than most people will be willing to wait. And charging should be done at night, when electricity generation and grid capacity are most available.

Battery research is being funded at a modest level, as there is a false perception among the public and policymakers that present battery performance is adequate for widespread acceptance of battery-electric vehicles. The national focus has been on renewable sources of energy. The United States will not become independent of foreign oil and combustion of fossil fuels until new battery technologies are developed. This will require a concerted national effort in science and technology at a considerable cost.

Fred Schlachter recently retired as a physicist at the Advanced Light Source, Lawrence Berkeley National Laboratory. He is co-author of the 2008 APS report Energy Future: Think Efficiency, for which he wrote the chapter on transportation.

“Moore’s Law” for Batteries?

Isn’t there some kind of “Moore’s Law” for batteries? Why is progress on improving battery capacity so slow compared to increases in computer-processing capacity? The essential answer is that electrons do not take up space in a processor, so their size does not limit processing capacity; limits are given by lithographic constraints. Ions in a battery, however, do take up space, and potentials are dictated by the thermodynamics of the relevant chemical reactions, so significant improvements in battery capacity can come only from changing to a different chemistry.


Charles Hugh Smith: How To Find Shelter From The Coming Storms?

Some basic suggestions for those who are seeking shelter from the coming storms of global financial crisis and recession.

Reader Andy recently wrote: “I look forward to your blog each day but am still waiting for your ideas for surviving the coming crisis.” Andy reports that he and his wife have small government and private pensions, are debt-free and have simplified their lifestyle to survive the eventual depreciation of their pensions. They currently split their time between a low-cost site in North America and Mexico. They are considering moving with the goal of establishing roots in a small community of like-minded people.

Though I have covered my own ideas in detail in my various books (Survival+: Structuring Prosperity for Yourself and the Nation, An Unconventional Guide to Investing in Troubled Times, Why Things Are Falling Apart and What We Can Do About It and Get a Job, Build a Real Career and Defy a Bewildering Economy), I am happy to toss a few basic strategies into the ring for your consideration.

Let’s start by applauding Andy for getting so much right.

1. Don’t count on pensions maintaining their current purchasing power as the promises issued in previous eras are not sustainable going forward. I’ve addressed the reasons for this ad nauseam, but we can summarize the whole mess in four basic points:

A. Demographics. Two workers cannot support one retiree’s pensions and healthcare costs (skyrocketing everywhere as costly treatments expand along with the cohort of Baby Boomer retirees). The U.S. is already at a ratio of two full-time workers to one retiree, and this is during a “recovery.” The ratio in some European nations is heading toward 1.5-to-1, and the next global financial meltdown hasn’t even begun.

B. The exhaustion of the debt-based consumption model. The only way you can sustain a debt-based model of ever-expanding consumption is to drop interest rates to zero. But alas, lenders go broke at 0%, so either the system implodes as debtors default or lenders go bankrupt. Take your pick, the end-game of financial crisis and collapse is the same in either case.

C. Printing money out of thin air does not increase wealth, it only increases claims on existing wealth. An honest government will eventually default on its unsustainable promises; a dishonest government (the default setting everywhere) will print money to fund the promises until its currency loses purchasing power as a result of either inflation or some other flavor of currency crisis.

In other words, the dishonest government will still issue pension checks for $2,000 a month but a cup of coffee will cost $500–if anyone will take the currency at all.

D. Pensions funds are assuming absurdly unrealistic returns on their investments. Many large public pension plans are assuming long-term yields of 7.5% even as the yield on “safe” government bonds has declined to 3% or 4%. As a result, the pension fund managers have taken on staggering amounts of systemic risk as they reach for higher yields.

When the whole rotten house of cards (shadow banking, subprime everything, etc.) collapses in a stinking heap, the yields will be negative. As John Hussman has noted, asset bubbles simply bring forward all the returns from future years. Once the bubble pops, yields are substandard/negative for years or even decades.

Pension funds that earn negative yields for a few years will soon burn through their remaining capital paying out unrealistic pensions.
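The gap between assumed and achievable returns compounds viciously. A minimal sketch: the 7.5% assumed return is from the text, while the 3.5% "safe" yield (midpoint of the 3-4% quoted for government bonds), the $1M fund size, and the 30-year horizon are illustrative assumptions:

```python
# How far behind a pension fund falls if safe yields, not assumed yields,
# are what it actually earns. All dollar figures are illustrative.
assumed, safe, years = 0.075, 0.035, 30
fund = 1_000_000

promised = fund * (1 + assumed) ** years  # what the plan is counting on
actual = fund * (1 + safe) ** years       # what "safe" yields deliver

print(f"funded ratio after {years} years: {actual / promised:.0%}")
```

Even before any negative-yield years, compounding at the safe rate leaves the fund with roughly a third of what its actuarial assumptions promised.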

2. Lowering the cost of one’s lifestyle. It’s much easier to cut expenses than it is to earn more money or squeeze more yield out of capital.

3. Establishing roots in a community of like-minded people. Though it’s rarely mentioned in a culture obsessed with financial security, day-to-day security is based more on community than on central-state-issued cash–though this is often lost on those who have surrendered all sense of community in their dependency on the state.

The core of community is reciprocity: before you take, you first have to give or share. Free-riders are soon identified and shunned.

My suggestions are derived from this week’s entries on the inevitable popping of credit bubbles, the unenviable role of tax donkeys in funding corrupt state Castes and the Great Game of Elites acquiring essential resources with unlimited credit issued by central banks, leaving the 99% debt-serfs and/or tax donkeys with neither the income nor the credit to compete with Elites for real resources.

4. Lessen your dependence on anything that requires debt and asset bubbles for its survival. Whatever depends on expanding debt and asset bubbles for its survival will go away when credit/asset bubbles pop, which they always do, despite adamant claims that “this time it’s different.” It never is.

5. Control as many real resources as you can. These include water rights, energy-producing or conserving assets (solar arrays, geothermal heating/cooling systems, etc.), farmland, orchards and gardens, rental housing, and tools that you know how to use to make/repair essential assets such as transport, housing, equipment, etc.

6. It’s easier to conserve/not use something than it is to acquire it or pay for it. As resources rise in price, those who consume little will be far less impacted than those whose lifestyles require massive consumption of gasoline, heating oil, electricity, water, etc. It’s as simple as this: don’t waste food, or anything else.

7. The easiest way to conserve energy and time is to live close to your work and to essential services/transport hubs. Those who reside in liveable city neighborhoods and towns with public transport and multiple modes of transport who can walk/bike to work, farmers markets, cafes, etc. will need far less fossil fuel than those commuting to everything via vehicle.

8. If you can’t find work/establish a livelihood, move to a locale with a better infrastructure of opportunity. I explain this in Get a Job, Build a Real Career and Defy a Bewildering Economy, but John Kenneth Galbraith made much the same point in his 1979 book The Nature of Mass Poverty.

9. If you buy property, do so in a state with Prop 13-type limits on property-tax increases. We have no choice about being tax donkeys, but we can choose a state that taxes income and consumption (i.e. sales tax) rather than property. You can choose to earn less and buy less, but you can’t choose not to pay rising property taxes.

10. Be useful to others. That way, they’ll want you around and will welcome your presence. There are unlimited ways to be helpful/useful.

11. Trust the network, not the state or corporation. Centralized systems such as the government and global corporations are either bankrupt and don’t yet know it or are bankrupt and are well aware of it but loathe to let the rest of the world catch on.

12. Be trustworthy. Don’t be morally corrupt or work for corrupt/self-serving institutions. Many initially idealistic people think they can retain their integrity while working for morally bankrupt, self-serving bureaucracies, agencies and corporations; they are all eventually brought down to the level of the institution.

Lagniappe suggestion: lead by example. “Setting an example is not the main means of influencing others; it is the only means.” Albert Einstein

Charles Hugh Smith from Of Two Minds


David Fleming. 2007. The Lean Guide to Nuclear Energy. A Life-Cycle in Trouble

This is an easy-to-read 56-page primer on how nuclear reactors work, how ore is mined, how nuclear fuel is created, why there’s likely to be a supply crunch, and much more. I’ve extracted a small part of this article and often rephrased some of it. Fleming doesn’t have many high-quality citations, so I’ve left out most of what he wrote, since I’m not sure he’s right about various matters (see the discussion at the end of this 2008 TheOilDrum article by Fleming).


Nuclear Waste

Nuclear power is a source of high-level waste which has to be sequestered. Every stage in the process produces waste, including the mining and leaching processes, the milling, the enrichment and the decommissioning. It is very expensive.

Deep reductions in travel and transport can be expected to come about rapidly and brutally as the oil market breaks down [from declining oil production, making disposal of the wastes less likely].

Nuclear energy relies on the existence of a fully powered-up grid system into which it can feed its output of electricity – but the grid itself is powered mainly by coal- and gas-fueled power stations, so if coal or gas supplies were interrupted, the grid would (at least partially) close down, along with the nuclear reactors that feed into it.

Nuclear energy inevitably brings a sense of reassurance that, in the end, the technical fix will save us.  Which it can’t [since electricity doesn't solve the liquid fuels crisis at hand, since mining and long-haul trucks, tractors, harvesters, and billions of other diesel powered equipment can't be run on fuel cells or batteries].

The nuclear industry should focus on finding solutions to the whole of its waste problem before it becomes too late to do so. And hold it right there, because this is perhaps the moment to think about what “too late” might mean. Despite the emphasis placed on oil depletion in this booklet, it is climate change that may well set the final date for completion of the massive and non-negotiable task of dealing with nuclear waste. Many reactors are in low-lying areas in the path of rising seas; and many of the storage ponds, crowded with high-level waste, are close by. Estimated dates for steep rises in sea levels are constantly being brought forward (as of 2014 the latest projection is 1 meter by 2100 made much worse by storm surges best case, worst case is Antarctic or Greenland ice sheets slip off the land into the ocean).

With an angry climate, and whole populations on the move, it will be hard to find the energy, the funds, the skills and the orderly planning needed for a massive program of waste disposal – or even moving waste out of the way of rising tides. When outages in gas supplies lead to breakdowns in electricity supplies, the electrically powered cooling systems that cool high-level waste will stop working.

It will also be hard to stop ragged armies, scrambling for somewhere to live, looting spent fuel rods from unguarded dumps, attaching them to conventional explosives, and being prepared to use them. All this will have to be dealt-with, and at speed. There may be no time to wait for reactor cores and high-level wastes to cool down.

The task of making those wastes safe should be an unconditional priority, equal to that of confronting climate change itself. The default-strategy of seeding the world with radioactive time-bombs which will pollute the oceans and detonate at random intervals for thousands of years into the future, whether there are any human beings around to care about it or not, should be recognized as off any scale calibrated in terms other than dementia. Nuclear power is an energy source that causes trouble far beyond the scale of the energy it produces. It is a distraction from the need to face up to the coming energy gap.

How reactors work

Nuclear fission uses Uranium-235, an isotope of uranium that splits in half when struck by a neutron, producing more neutrons, resulting in a chain reaction that produces lots of energy. The process is controlled by a moderator consisting of water or graphite, which speeds the reaction up, and by neutron-absorbing boron control rods, which slow it down. Eventually the uranium gets clogged with radioactive impurities such as the barium and krypton produced when uranium-235 splits, “transuranic” elements such as americium and neptunium, and much of the uranium-235 itself gets used up. It takes a year or two for this to happen, and then the fuel elements have to be removed, and fresh ones inserted. The spent fuel elements are very hot and radioactive (stand nearby for a second and you’re dead). In Europe the spent fuel is sometimes recycled (reprocessed), to extract the remaining uranium and plutonium and use them again, although you don’t get as much fuel back as you started with, the bulk of impurities still has to be disposed of, and some scientists believe this has a negative EROEI. Very few nations have anywhere safe to put it to keep future generations from harming themselves over the next billion years (the half-life of U-238, one of the main items of waste, is about 4.5 billion years).

The steps to get electricity from uranium

1. Mine and mill ore. Although uranium is found all over the world, only a few places have enough concentrated uranium ores (0.01-0.2%) to mine: Australia, Kazakhstan, Canada, South Africa, Namibia, Brazil, Russia, the USA, and Uzbekistan, in mines up to 800 feet deep. Mines are injected and drenched in tons of sulfuric acid, nitric acid, ammonia, and other chemicals, which are pumped up again after 3-25 years, yielding about a quarter of the uranium from the treated rocks and depositing unknown amounts of radioactive and toxic metals into the local environment. You need to grind up 1,000 tons of 0.1% ore to get 1 ton of yellow oxide and 999 tons of waste, both of which are radioactive from uranium-238 and its 13 decay products. Once milled, the waste occupies much more space than the original rock, and wind and water can carry the radioactive tailings far away. Properly cleaning it up would take 4 times the energy used to mine the ore, so it seldom happens.
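The milling arithmetic above is worth making explicit, since it drives both the waste volume and the energy cost (figures from the text):

```python
# Grinding 1,000 tons of 0.1% ore yields about 1 ton of yellow oxide
# and leaves about 999 tons of radioactive tailings behind.
rock_tons = 1000
grade = 0.001                   # 0.1% uranium content

product = rock_tons * grade     # ~1 ton of uranium oxide
tailings = rock_tons - product  # ~999 tons of waste

print(product, tailings)
```

The ratio scales inversely with ore grade: halve the grade and you double the rock that must be mined, milled, and left as tailings per ton of fuel.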

2. Preparing the fuel. The uranium oxide must now be enriched to concentrate the U-235 to 3.5%, resulting in even more nasty, toxic, scary waste that isn’t properly disposed of. Further along the fuel cycle, the spent fuel contains plutonium, which can be used to make nuclear bombs.

3. Generation. The fuel can now be used to produce heat to raise steam to generate electricity. When the fuel rods are spent they must cool in ponds for 10 to 100 years to allow the isotopes to decay before they can be disposed of elsewhere. The ponds need a reliable electricity supply to keep them stirred and topped up with water to stop the radioactive fuel elements drying out and catching fire. Then robots need to pack the wastes into lead, steel, and pure electrolytic copper, and place them in giant geological repositories considered to be stable. There will never be an ideal way to store waste which will be radioactive for a thousand centuries or more and, whatever option is chosen, it will require a lot of energy.

Human Error. The consequences of a serious accident would make nuclear power an un-insurable risk. The nuclear industry has good safety systems but is not immune to accidents. The work is routine, requiring workers to cope with long periods of tedium punctuated by the unexpected, along with “normality-creep” as anomalies become familiar. The hazards were noted in the mid-1990s by a senior nuclear engineer working for the U.S. Nuclear Regulatory Commission: “I believe in nuclear power but after seeing the NRC in action, I’m convinced a serious accident is not just likely, but inevitable… They’re asleep at the wheel.” The Nuclear Regulatory Commission estimates the probability of meltdown in the U.S. over 20 years is 15 to 45%. The risk never goes away.

4. Reactors last 30-40 years [but are being renewed for another 20 anyhow] but produce electricity at full power for no more than 24 years. During their lifetimes, reactors have to be maintained and (at least once) thoroughly refurbished; eventually, corrosion and intense radioactivity make them impossible to repair. At that point they must be taken apart and disposed of, resulting in at least a thousand cubic meters of high-level waste. After a cooling-off period which may be as much as 50-100 years, the reactor has to be dismantled and cut into small pieces to be packed in containers for final disposal. The total energy required for decommissioning has been estimated at approximately 50 percent more than the energy used in the original construction.

Greenhouse gases

Every stage in the life-cycle of nuclear fission uses energy, and most of this energy is derived from fossil fuels. Since we’re waiting for high-level waste to cool off before dismantling plants, the emissions look better now than they will in the future. And as ores get less concentrated, mining will consume more fossil fuels and emit even more greenhouse gases.

Nuclear power may have a negative EROEI & Peak Uranium

Deposits are often at great depth, requiring the removal of massive overburden or the development of very deep underground mines, and so need more energy to mine than the shallower deposits now being exploited.

Water problems can reduce EROEI. You can have too little water (it is needed as part of the process of deriving uranium oxide from the ore) or too much (it can cause flooding). Some of the more promising mines have big water problems.

How much uranium with a positive EROEI is left? The Energy Watch group predicts Peak Uranium between 2020-2035. Michael Dittmar at the Institute of Particle Physics predicts Peak Uranium will happen in 2015. The 2005 OECD Nuclear Energy Agency (NEA) and the International Atomic Energy Agency (IAEA) suggested a 70 year supply at the current price.

Every year 65,000 tons of uranium are consumed in reactors worldwide. About 40,000 tons are supplied from uranium mines (which are declining in output), 10,000 tons come from Russian nuclear weapons (the contract for this expired in 2013), and 15,000 tons come from inventories, which won’t last much longer.
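The supply balance above can be laid out directly (all tonnages from the text):

```python
# Annual uranium supply and demand, tons per year.
demand = 65_000
mines = 40_000         # primary mine output, declining
weapons = 10_000       # Russian warhead downblending; contract ended 2013
inventories = 15_000   # stockpile drawdown, nearly exhausted

supply = mines + weapons + inventories
gap_if_secondary_ends = demand - mines   # hole left when the last two dry up

print(supply == demand, gap_if_secondary_ends)  # True 25000
```

The point: the books balance today only because of two secondary sources that are both ending, leaving a 25,000-ton annual gap that new mining would have to fill.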

So the only hope to keep enough uranium in production for existing reactors is more mining. Several medium-sized producers have maintained or increased output the past few years in Kazakhstan, Namibia, Niger, Russia, America and Canada.

The biggest hope for more uranium is the Cigar Lake mine, but after catastrophic flooding in 2006, and again in 2008, it wasn’t until spring of 2014 that the mine finally started processing uranium ore. The other big hope was the Olympic Dam in Australia, which has the largest known single deposit of uranium in the world (but it’s very low-grade, with an average of 0.03%, and only economic because uranium is a byproduct of gold, silver, and copper mining).

Fleming predicts that before 2019 some nuclear reactors will have to shut down due to a lack of fuel.

Fleming goes to great lengths to explain why nuclear power won’t end up having a positive net energy in the future, mainly due to the tremendous amount of energy that will be needed to safely store the wastes that have been building up since the industry started back in the 1950s. (I believe it is highly unlikely we will ever store any of this waste because as oil declines, which 99% of transportation is fueled by, people will want to use oil to grow and transport food, pump drinking water, treat sewage, and so on — safely storing nuclear waste will be at the bottom of the list. This is an outrageous crime: we will poison millions of generations of our descendants, and add to the growing pile of dangers that might drive us extinct).

Fleming demolishes Lovelock’s proposal to use nuclear power to get ourselves out of the energy and climate change mess. First he shows why Lovelock’s idea of getting uranium from granite won’t work: the concentration is so low (0.0004%) that for a 1 GW plant you’d need 100 million tons of granite ore, requiring 650 petajoules to extract, yet the energy delivered from the uranium would be only 26 petajoules. The same negative energy return is true of uranium from sea water.
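Fleming's granite arithmetic is stark when expressed as an energy return (figures from the text):

```python
# Energy return on energy invested for uranium extracted from granite,
# per Fleming's figures for a single 1 GW plant.
granite_grade = 0.000004     # 0.0004% uranium by weight
ore_tons = 100_000_000       # granite required
energy_in_pj = 650           # petajoules spent extracting
energy_out_pj = 26           # petajoules the fuel would deliver

eroei = energy_out_pj / energy_in_pj
print(f"EROEI ~ {eroei:.2f} (break-even is 1.0)")
```

At 0.04, you get back one unit of energy for every 25 invested, which is why the proposal fails regardless of uranium price.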

Lovelock also urges that we have a readily available stock of fuel in the plutonium that has accumulated from the reactors that are shortly to be decommissioned. But this won’t work for many reasons, including that it’s never been attempted in reactors like those we have now. If Lovelock means for us to use a breeder reactor, that has huge problems as well (including that we don’t know how to do this safely yet). There are 3 fast-breeder reactors in the world: Beloyarsk-3 in Russia, Monju in Japan, and Phénix in France; Monju and Phénix have long been out of operation, and Beloyarsk is still operating but has never bred. Getting the plutonium to breed involves 3 processes that, like breeder reactors, have never been done at a commercial scale. You end up with many nasty radioactive mixtures that clog up and corrode equipment. Even if we figured out how to build breeder reactors within 30 years and built 80 of them in 2045, it would take another 40 years for each breeder to produce enough plutonium to replace itself and start up another nuclear plant. By 2085 we will be deep into oil depletion, yet have only 160 breeder reactors. And that is all we will have, because the uranium-235 reactors we have now will be out of fuel by then.
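The doubling arithmetic behind the 160-reactor figure can be sketched as a toy model, assuming each breeder spawns exactly one successor every 40 years (the function `breeder_fleet` is my own construction, not from the book):

```python
# If 80 breeders exist in 2045 and each needs ~40 years to breed enough
# plutonium to start one more reactor, the fleet doubles only once per
# 40-year interval.

def breeder_fleet(initial, start_year, end_year, doubling_time=40):
    """Fleet size by end_year, assuming the fleet doubles every doubling_time years."""
    fleet = initial
    year = start_year
    while year + doubling_time <= end_year:
        fleet *= 2
        year += doubling_time
    return fleet

print(breeder_fleet(80, 2045, 2085))  # 160 reactors by 2085
```

The exponential growth that breeder advocates imagine only kicks in after many doubling periods; with a 40-year doubling time, the fleet is still tiny on any timescale relevant to oil depletion.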

It’s impossible to prevent accidents at a breeder reactor

A meltdown is nothing compared to the explosion of a breeder reactor, which is basically a large nuclear bomb in a major accident. If you designed a system that couldn’t fail, it would be so expensive you’d have to build an enormous breeder reactor to justify the cost, but such a large reactor would have such a huge dome that there is no material to give it enough structural strength to survive a major accident. You could try to make the defense system even more complex, but then the defense system would be more problem-prone than the breeder reactor itself. A study for the nuclear industry in Japan concludes: “A successful commercial breeder reactor must have 3 attributes: it must breed, it must be economical, and it must be safe. Although any one or two of these attributes can be achieved in isolation by proper design, the laws of physics apparently make it impossible to achieve all three simultaneously, no matter how clever the design.”


Uranium from phosphates (a truly ridiculous idea; see Peak Phosphorus). Phosphate reserves are likely to last at most 70 years, and they are essential for growing food. Phosphates are also a poor source because they contain very low concentrations of uranium. Extracting the uranium is difficult: the solvents used include toxic organophosphate compounds that produce organofluorophosphorus compounds and greenhouse gases in the form of fluorohydrocarbons.


David Fleming has an MA (History) from Oxford, an MBA from Cranfield, and an MSc and PhD (Economics) from Birkbeck College, University of London. He has worked in industry, financial services, and environmental consultancy, and is a former Chairman of the Soil Association. He designed the system of Tradable Energy Quotas (TEQs), also known as Domestic Tradable Quotas and Personal Carbon Allowances, in 1996, and his booklet about them, Energy and the Common Purpose, now in its third edition, was first published in 2005. His Lean Logic: The Book of Environmental Manners is forthcoming.



Michael Klare: Twenty-First-Century Energy Wars

Twenty-First-Century Energy Wars by Michael Klare, originally published by Tomdispatch

Iraq, Syria, Nigeria, South Sudan, Ukraine, the East and South China Seas: wherever you look, the world is aflame with new or intensifying conflicts.  At first glance, these upheavals appear to be independent events, driven by their own unique and idiosyncratic circumstances.  But look more closely and they share several key characteristics — notably, a witch’s brew of ethnic, religious, and national antagonisms that have been stirred to the boiling point by a fixation on energy.

In each of these conflicts, the fighting is driven in large part by the eruption of long-standing historic antagonisms among neighboring (often intermingled) tribes, sects, and peoples.  In Iraq and Syria, it is a clash among Sunnis, Shiites, Kurds, Turkmen, and others; in Nigeria, among Muslims, Christians, and assorted tribal groupings; in South Sudan, between the Dinka and Nuer; in Ukraine, between Ukrainian loyalists and Russian-speakers aligned with Moscow; in the East and South China Sea, among the Chinese, Japanese, Vietnamese, Filipinos, and others.  It would be easy to attribute all this to age-old hatreds, as suggested by many analysts; but while such hostilities do help drive these conflicts, they are fueled by a most modern impulse as well: the desire to control valuable oil and natural gas assets.  Make no mistake about it, these are twenty-first-century energy wars.

It should surprise no one that energy plays such a significant role in these conflicts.  Oil and gas are, after all, the world’s most important and valuable commodities and constitute a major source of income for the governments and corporations that control their production and distribution.  Indeed, the governments of Iraq, Nigeria, Russia, South Sudan, and Syria derive the great bulk of their revenues from oil sales, while the major energy firms (many state-owned) exercise immense power in these and the other countries involved.  Whoever controls these states, or the oil- and gas-producing areas within them, also controls the collection and allocation of crucial revenues.  Despite the patina of historical enmities, many of these conflicts, then, are really struggles for control over the principal source of national income.

Moreover, we live in an energy-centric world where control over oil and gas resources (and their means of delivery) translates into geopolitical clout for some and economic vulnerability for others.  Because so many countries are dependent on energy imports, nations with surpluses to export — including Iraq, Nigeria, Russia, and South Sudan — often exercise disproportionate influence on the world stage.  What happens in these countries sometimes matters as much to the rest of us as to the people living in them, and so the risk of external involvement in their conflicts — whether in the form of direct intervention, arms transfers, the sending in of military advisers, or economic assistance — is greater than almost anywhere else.

The struggle over energy resources has been a conspicuous factor in many recent conflicts, including the Iran-Iraq War of 1980-1988, the Gulf War of 1990-1991, and the Sudanese Civil War of 1983-2005.  On first glance, the fossil-fuel factor in the most recent outbreaks of tension and fighting may seem less evident.  But look more closely and you’ll see that each of these conflicts is, at heart, an energy war.

Iraq, Syria, and ISIS

The Islamic State of Iraq and Syria (ISIS), the Sunni extremist group that controls large chunks of western Syria and northern Iraq, is a well-armed militia intent on creating an Islamic caliphate in the areas it controls.  In some respects, it is a fanatical, sectarian religious organization, seeking to reproduce the pure, uncorrupted piety of the early Islamic era.  At the same time, it is engaged in a conventional nation-building project, seeking to create a fully functioning state with all its attributes.

As the United States learned to its dismay in Iraq and Afghanistan, nation-building is expensive: institutions must be created and financed, armies recruited and paid, weapons and fuel procured, and infrastructure maintained.  Without oil (or some other lucrative source of income), ISIS could never hope to accomplish its ambitious goals.  However, as it now occupies key oil-producing areas of Syria and oil-refining facilities in Iraq, it is in a unique position to do so.  Oil, then, is absolutely essential to the organization’s grand strategy.

Syria was never a major oil producer, but its prewar production of some 400,000 barrels per day did provide the regime of Bashar al-Assad with a major source of income.  Now, most of the country’s oil fields are under the control of rebel groups, including ISIS, the al-Qaeda-linked Nusra Front, and local Kurdish militias.  Although production from the fields has dropped significantly, enough is being extracted and sold through various clandestine channels to provide the rebels with income and operating funds.  “Syria is an oil country and has resources, but in the past they were all stolen by the regime,” said Abu Nizar, an anti-government activist.  “Now they are being stolen by those who are profiting from the revolution.”

At first, many rebel groups were involved in these extractive activities, but since January, when it assumed control of Raqqa, the capital of the province of that name, ISIS has been the dominant player in the oil fields.  In addition, it has seized fields in neighboring Deir al-Zour Province along the Iraq border.  Indeed, many of the U.S.-supplied weapons it acquired from the fleeing Iraqi army after its recent drive into Mosul and other northern Iraqi cities have been moved into Deir al-Zour to help in the organization’s campaign to take full control of the region.  In Iraq, ISIS is fighting to gain control over Iraq’s largest refinery at Baiji in the central part of the country.

It appears that ISIS sells oil from the fields it controls to shadowy middlemen who in turn arrange for its transport — mostly by tanker trucks — to buyers in Iraq, Syria, and Turkey.  These sales are said to provide the organization with the funds needed to pay its troops and acquire its vast stockpiles of arms and ammunition.  Many observers also claim that ISIS is selling oil to the Assad regime in return for immunity from government air strikes of the sort being launched against other rebel groups.  “Many locals in Raqqa accuse ISIS of collaborating with the Syrian regime,” a Kurdish journalist, Sirwan Kajjo, reported in early June.  “Locals say that while other rebel groups in Raqqa have been under attack by regime air strikes on a regular basis, ISIS headquarters have not once been attacked.”

However the present fighting in northern Iraq plays out, it is obvious that there, too, oil is a central factor.  ISIS seeks both to deny petroleum supplies and oil revenue to the Baghdad government and to bolster its own coffers, enhancing its capacity for nation-building and further military advances.  At the same time, the Kurds and various Sunni tribes — some allied with ISIS — want control over oil fields located in the areas under their control and a greater share of the nation’s oil wealth.

Ukraine, the Crimea, and Russia

The present crisis in Ukraine began in November 2013 when President Viktor Yanukovych repudiated an agreement for closer economic and political ties with the European Union (EU), opting instead for closer ties with Russia.  That act touched off fierce anti-government protests in Kiev and eventually led to Yanukovych’s flight from the capital.  With Moscow’s principal ally pushed from the scene and pro-EU forces in control of the capital, Russian President Vladimir Putin moved to seize control of the Crimea and foment a separatist drive in eastern Ukraine.  For both sides, the resulting struggle has been about political legitimacy and national identity — but as in other recent conflicts, it has also been about energy.

Ukraine is not itself a significant energy producer.  It is, however, a major transit route for the delivery of Russian natural gas to Europe.  According to the U.S. Energy Information Administration (EIA), Europe obtained 30% of its gas from Russia in 2013 — most of it from the state-controlled gas giant Gazprom — and approximately half of this was transported by pipelines crossing Ukraine.  As a result, that country plays a critical role in the complex energy relationship between Europe and Russia, one that has proved incredibly lucrative for the shadowy elites and oligarchs who control the flow of gas, while at the same time provoking intense controversy.  Disputes over the price Ukraine pays for its own imports of Russian gas have twice provoked a cutoff in deliveries by Gazprom, leading to diminished supplies in Europe as well.

Given this background, it is not surprising that a key objective of the “association agreement” between the EU and Ukraine that was repudiated by Yanukovych (and has now been signed by the new Ukrainian government) calls for the extension of EU energy rules to Ukraine’s energy system — essentially eliminating the cozy deals between Ukrainian elites and Gazprom.  By entering into the agreement, EU officials claim, Ukraine will begin “a process of approximating its energy legislation to the EU norms and standards, thus facilitating internal market reforms.”

Russian leaders have many reasons to despise the association agreement.  For one thing, it will move Ukraine, a country on its border, into a closer political and economic embrace with the West.  Of special concern, however, are the provisions about energy, given Russia’s economic reliance on gas sales to Europe — not to mention the threat they pose to the personal fortunes of well-connected Russian elites.  In late 2013 Yanukovych came under immense pressure from Vladimir Putin to turn his back on the EU and agree instead to an economic union with Russia and Belarus, an arrangement that would have protected the privileged status of elites in both countries.  However, by moving in this direction, Yanukovych put a bright spotlight on the crony politics that had long plagued Ukraine’s energy system, thereby triggering the protests in Kiev’s Independence Square (the Maidan) that led to his downfall.

Once the protests began, a cascade of events led to the current standoff, with the Crimea in Russian hands, large parts of the east under the control of pro-Russian separatists, and the rump western areas moving ever closer to the EU.  In this ongoing struggle, identity politics has come to play a prominent role, with leaders on all sides appealing to national and ethnic loyalties.  Energy, nevertheless, remains a major factor in the equation.  Gazprom has repeatedly raised the price it charges Ukraine for its imports of natural gas, and on June 16th cut off its supply entirely, claiming non-payment for past deliveries.  A day later, an explosion damaged one of the main pipelines carrying Russian gas to Ukraine — an event still being investigated.  Negotiations over the gas price remain a major issue in the ongoing negotiations between Ukraine’s newly elected president, Petro Poroshenko, and Vladimir Putin.

Energy also played a key role in Russia’s determination to take the Crimea by military means.  By annexing that region, Russia virtually doubled the offshore territory it controls in the Black Sea, which is thought to house billions of barrels of oil and vast reserves of natural gas.  Prior to the crisis, several Western oil firms, including ExxonMobil, were negotiating with Ukraine for access to those reserves.  Now, they will be negotiating with Moscow.  “It’s a big deal,” said Carol Saivetz, a Eurasian expert at MIT.  “It deprives Ukraine of the possibility of developing these resources and gives them to Russia.”

Nigeria and South Sudan

The conflicts in South Sudan and Nigeria are distinctive in many respects, yet both share a key common factor: widespread anger and distrust towards government officials who have become wealthy, corrupt, and autocratic thanks to access to abundant oil revenues.

In Nigeria, the insurgent group Boko Haram is fighting to overthrow the existing political system and establish a puritanical, Muslim-ruled state.  Although most Nigerians decry the group’s violent methods (including the kidnapping of hundreds of teenage girls from a state-run school), it has drawn strength from disgust in the poverty-stricken northern part of the country with the corruption-riddled central government in distant Abuja, the capital.

Nigeria is the largest oil producer in Africa, pumping out some 2.5 million barrels per day.  With oil selling at around $100 per barrel, this represents a potentially staggering source of wealth for the nation, even after the private companies involved in the day-to-day extractive operations take their share.  Were these revenues — estimated in the tens of billions of dollars per year — used to spur development and improve the lot of the population, Nigeria could be a great beacon of hope for Africa.  Instead, much of the money disappears into the pockets (and foreign bank accounts) of Nigeria’s well-connected elites.

In February, the governor of the Central Bank of Nigeria, Lamido Sanusi, told a parliamentary investigating committee that the state-owned Nigerian National Petroleum Corporation (NNPC) had failed to transfer some $20 billion in proceeds from oil sales to the national treasury, as required by law.  It had all evidently been diverted to private accounts.  “A substantial amount of money has gone,” he told the New York Times.  “I wasn’t just talking about numbers.  I showed it was a scam.”

For many Nigerians — a majority of whom subsist on less than $2 per day — the corruption in Abuja, when combined with the wanton brutality of the government’s security forces, is a source of abiding anger and resentment, generating recruits for insurgent groups like Boko Haram and winning them begrudging admiration.  “They know well the frustration that would drive someone to take up arms against the state,” said National Geographic reporter James Verini of people he interviewed in battle-scarred areas of northern Nigeria.  At this stage, the government has displayed zero capacity to overcome the insurgency, while its ineptitude and heavy-handed military tactics have only further alienated ordinary Nigerians.

The conflict in South Sudan has different roots, but shares a common link to energy.  Indeed, the very formation of South Sudan is a product of oil politics.  A civil war in Sudan that lasted from 1955 to 1972 only ended when the Muslim-dominated government in the north agreed to grant more autonomy to the peoples of the southern part of the country, largely practitioners of traditional African religions or Christianity.  However, when oil was discovered in the south, the rulers of northern Sudan repudiated many of their earlier promises and sought to gain control over the oil fields, sparking a second civil war, which lasted from 1983 to 2005.  An estimated two million people lost their lives in this round of fighting.  In the end, the south was granted full autonomy and the right to vote on secession.  Following a January 2011 referendum in which 98.8% of southerners voted to secede, the country became independent on July 9th of that year.

The new state had barely been established, however, when conflict with the north over its oil resumed.  While South Sudan has a plethora of oil, the only pipeline allowing the country to export its energy stretches across North Sudan to the Red Sea.  This ensured that the south would be dependent on the north for the major source of government revenues.  Furious at the loss of the fields, the northerners charged excessively high rates for transporting the oil, precipitating a cutoff in oil deliveries by the south and sporadic violence along the two countries’ still-disputed border.  Finally, in August 2012, the two sides agreed to a formula for sharing the wealth and the flow of oil resumed. Fighting has, however, continued in certain border areas controlled by the north but populated by groups linked to the south.

With the flow of oil income assured, the leader of South Sudan, President Salva Kiir, sought to consolidate his control over the country and all those oil revenues.  Claiming an imminent coup attempt by his rivals, led by Vice President Riek Machar, he disbanded his multiethnic government on July 24, 2013, and began arresting allies of Machar.  The resulting power struggle quickly turned into an ethnic civil war, with the kin of President Kiir, a Dinka, battling members of the Nuer group, of which Machar is a member.  Despite several attempts to negotiate a cease-fire, fighting has been under way since December, with thousands of people killed and hundreds of thousands forced to flee their homes.

As in Syria and Iraq, much of the fighting in South Sudan has centered around the vital oil fields, with both sides determined to control them and collect the revenues they generate.  As of March, while still under government control, the Paloch field in Upper Nile State was producing some 150,000 barrels a day, worth about $15 million to the government and participating oil companies.  The rebel forces, led by former Vice President Machar, are trying to seize those fields to deny this revenue to the government.  “The presence of forces loyal to Salva Kiir in Paloch, to buy more arms to kill our people… is not acceptable to us,” Machar said in April.  “We want to take control of the oil field.  It’s our oil.”  As of now, the field remains in government hands, with rebel forces reportedly making gains in the vicinity.

The South China Sea

In both the East China and South China seas, China and its neighbors claim assorted atolls and islands that sit astride vast undersea oil and gas reserves.  The waters of both have been the site of recurring naval clashes over the past few years, with the South China Sea recently grabbing the spotlight. 

An energy-rich offshoot of the western Pacific, that sea, long a focus of contention, is rimmed by China, Vietnam, the island of Borneo, and the Philippine Islands.  Tensions peaked in May when the Chinese deployed their largest deep-water drilling rig, the HD-981, in waters claimed by Vietnam.  Once in the drilling area, about 120 nautical miles off the coast of Vietnam, the Chinese surrounded the HD-981 with a large flotilla of navy and coast guard ships.  When Vietnamese coast guard vessels attempted to penetrate this defensive ring in an effort to drive off the rig, they were rammed by Chinese ships and pummeled by water cannon.  No lives have yet been lost in these encounters, but anti-Chinese rioting in Vietnam in response to the sea-borne encroachment left several dead and the clashes at sea are expected to continue for several months until the Chinese move the rig to another (possibly equally contested) location.

The riots and clashes sparked by the deployment of HD-981 have been driven in large part by nationalism and resentment over past humiliations.  The Chinese, insisting that various tiny islands in the South China Sea were once ruled by their country, still seek to overcome the territorial losses and humiliations they suffered at the hands the Western powers and Imperial Japan.  The Vietnamese, long accustomed to Chinese invasions, seek to protect what they view as their sovereign territory.  For common citizens in both countries, demonstrating resolve in the dispute is a matter of national pride.

But to view the Chinese drive in the South China Sea as a simple matter of nationalistic impulses would be a mistake.  The owner of HD-981, the China National Offshore Oil Company (CNOOC), has conducted extensive seismic testing in the disputed area and evidently believes there is a large reservoir of energy there.  “The South China Sea is estimated to have 23 billion to 30 billion tons of oil and 16 trillion cubic meters of natural gas, accounting for one-third of China’s total oil and gas resources,” the Chinese news agency Xinhua noted.  Moreover, China announced in June that it was deploying a second drilling rig to the contested waters of the South China Sea, this time at the mouth of the Gulf of Tonkin.

As the world’s biggest consumer of energy, China is desperate to acquire fresh fossil fuel supplies wherever it can.  Although its leaders are prepared to make increasingly large purchases of African, Russian, and Middle Eastern oil and gas to satisfy the nation’s growing energy requirements, they not surprisingly prefer to develop and exploit domestic supplies.  For them, the South China Sea is not a “foreign” source of energy but a Chinese one, and they appear determined to use whatever means necessary to secure it.  Because other countries, including Vietnam and the Philippines, also seek to exploit these oil and gas reserves, further clashes, at increasing levels of violence, seem almost inevitable.

No End to Fighting

As these conflicts and others like them suggest, fighting for control over key energy assets or the distribution of oil revenues is a critical factor in most contemporary warfare.  While ethnic and religious divisions may provide the political and ideological fuel for these battles, it is the potential for mammoth oil profits that keeps the struggles alive.  Without the promise of such resources, many of these conflicts would eventually die out for lack of funds to buy arms and pay troops.  So long as the oil keeps flowing, however, the belligerents have both the means and incentive to keep fighting.

In a fossil-fuel world, control over oil and gas reserves is an essential component of national power.  “Oil fuels more than automobiles and airplanes,” Robert Ebel of the Center for Strategic and International Studies told a State Department audience in 2002.  “Oil fuels military power, national treasuries, and international politics.”  Far more than an ordinary trade commodity, “it is a determinant of well being, of national security, and international power for those who possess this vital resource, and the converse for those who do not.”

If anything, that’s even truer today, and as energy wars expand, the truth of this will only become more evident.  In our present world, if you see a conflict developing, look for the energy.  It’ll be there somewhere on this fossil-fueled planet of ours.


Wave, Tide, Ocean Current, In-stream, OTEC power: National Academy of Sciences 2013

A review of “An Evaluation of the U.S. Department of Energy’s Marine and Hydrokinetic Resource Assessments. 2013. Marine & Hydrokinetic Energy Technology Committee; National Research Council” by Alice Friedemann, July 7, 2014.


The U.S. Department of Energy (DOE) hired contractors to evaluate five Marine and Hydrokinetic (MHK) resources globally: 1) ocean tides, 2) waves, 3) ocean currents, 4) temperature gradients in the ocean (OTEC), and 5) free-flowing rivers and streams.

Then DOE asked the National Academy of Sciences (NAS) to evaluate the results, so NAS assembled a panel of 71 experts to write this assessment.

The NAS replied that it was a waste of time for DOE to ask the contractors what the global theoretical maximum power generation from MHK resources might be.  For example, solar power plants provide less than 0.1% of electricity in the United States, even though the theoretical amount would be staggeringly enormous if you plastered the entire continent with them.  But you can’t do that.

Nor can you fill the world’s ocean and rivers with devices to harvest the power in waves, tides, ocean currents, rivers, and temperature gradients (OTEC).

NAS says what DOE should have asked is how much power could be generated locally at specific sites in the United States after taking into account technical and practical resource limits. For example:

The GIS database of MHK resources shows a 100 MW resource. But after evaluating the location further, it turns out to be a 2.7 MW resource because of 1) technical resource limits (the turbines are 30% efficient, only 20% of the area can be used, and the efficiency of connecting the extracted energy to the electric grid is 90%), and 2) practical resource issues (50% of the remaining area interferes with existing fisheries and navigation routes), leaving a practical resource of 2.7 MW (100 MW * .30 * .20 * .90 * .50 = 2.7 MW).
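The derating chain in the NAS example is a simple product of factors. A sketch (the factor labels are paraphrased from the example above):

```python
# Derating a theoretical MHK resource down to a practical one by
# multiplying through the technical and practical limiting factors.

theoretical_mw = 100.0

factors = {
    "turbine efficiency": 0.30,
    "usable fraction of the area": 0.20,
    "grid-connection efficiency": 0.90,
    "area left after fisheries/navigation exclusions": 0.50,
}

practical_mw = theoretical_mw
for name, f in factors.items():
    practical_mw *= f
    print(f"after {name} ({f:.0%}): {practical_mw:.1f} MW")

print(f"Practical resource: {practical_mw:.1f} MW")  # 2.7 MW
```

Because the factors multiply rather than add, even a handful of individually modest constraints compounds into a 97% reduction of the headline number.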

Here are some more practical barriers to developing MHK:


  • Impacts on marine species and ecosystems (e.g., rare or keystone species; nursery, juvenile, and spawning habitat; fish, invertebrates, reptiles, birds, mammals, plants, and habitats)
  • Bottom disturbance
  • Altered regional water movement
  • Acoustic, chemical, temperature, and electromagnetic changes or emissions
  • Physical impacts on the subsurface, the water column, and the water surface; scouring and/or sediment buildup; changes in wave or stream energy; turbulence

Regulatory obstacles:

  • Endangered Species Act; Coastal Zone Management Act; Marine Mammal Protection Act; Clean Water Act; Federal agency jurisdictions: National Oceanic and Atmospheric Administration (NOAA), U.S. Army Corps of Engineers (USACE), Federal Energy Regulatory Commission (FERC), State Department, U.S. Fish and Wildlife Service (FWS), Environmental Protection Agency (EPA), Bureau of Ocean Energy Management (BOEM), U.S. Coast Guard
  • Overlapping jurisdiction of state and federal agencies: FERC (within DOE) has jurisdiction over hydroelectric development; leases on the U.S. outer continental shelf require approval by BOEM (Dept of the Interior); NOAA (Dept of Commerce) is responsible for licensing commercial OTEC facilities; FWS (Dept of the Interior) and NOAA coordinate protection of marine mammals from potentially harmful development; NOAA also protects essential fish habitats. Projects in navigable waters fall under the jurisdiction of USACE and may also require involvement of the U.S. Coast Guard. USACE permits may be required for projects involving dredging rivers or coastal areas. The Coastal Zone Management Act involves coordination among local, state, and federal agencies to ensure that plans are in accordance with a state’s own coastal management program.

Social and economic:

  • Spatial conflicts (e.g., ports and harbors; marine sanctuaries and wildlife refuges; navigation, shipping lanes, dumping sites, cable areas, and pipeline areas; shoreline construction; wreck points; mooring and warping points; military operations; traditional hunting, fishing, and gathering; commerce and transportation; oil and gas exploration and development; sand and gravel mining; environmental and conservation activities; scientific research and exploration; security, emergency response, and military readiness; tourism and recreational activities; ocean cooling water for thermoelectric power plants that use coal, natural gas, or nuclear fuel; aquaculture; maritime heritage and archeology; offshore renewable energy; view sheds; commercial and recreational fisheries; access locations such as boat ramps, diving sites, and marinas; national parks and cultural heritage sites)
  • Interconnection to the power grid (e.g., transmission requirements, integrating variable electricity output, and shore landings)
  • Capital and life-cycle costs (e.g., engineering, installation, equipment, operation and maintenance, debris management, and device recovery and removal)
TABLE 1 Issues That Impact the Development of the Practical MHK Resource

No commercial-scale MHK plants exist because:

Once installed, MHK devices are subject to mechanical wear and corrosion more severe than that experienced by land-based equipment

Corrosion-related problems (i.e., galvanic, stress, fatigue, and biocorrosion) and marine fouling are key challenges for all MHK devices. Advanced structural materials with appropriate coatings and paints still need to be identified in order to construct the robust, corrosion-resistant components needed for MHK energy generation.

Survivability in hurricanes, tides, storms, large waves, and so on

This is another challenging problem, especially in shallow water. Devices can be destroyed, damaged, or moved from their moorings by the action of rough seas and breaking waves.

Making MHK devices rugged enough is expensive

Rugged MHK devices require huge amounts of steel and concrete, which makes them inherently expensive, and many use expensive exotic materials or engineering.  The power electronics on MHK devices will be a challenge to implement and operate reliably. In shallow tidal and riverine areas, there is great concern that debris will affect both the efficiency and durability of any installed devices.

Capital and Life-Cycle Costs

As with any energy device or power plant, there are costs such as design, installation, operation and maintenance, removal, and replacement. The largest of these costs, and potentially the greatest barrier to MHK deployments, is the capital cost. An earlier NRC committee concluded that it will take at least 10 to 25 years before the economic viability of MHK technologies for significant electricity production will be known. A 2008 report evaluating the potential for renewable electricity sources to meet California’s renewable electricity standard found that the cost of electricity from waves and currents was higher than that from most other renewable sources and had a substantially greater range of uncertainty.

The best places for MHK are often far from urban centers

  • In-stream power: Alaska is by far the largest resource, but it is questionable whether it would work: rivers freeze up, and the scour incurred during spring ice break-up would make year-round deployment a challenge, possibly requiring seasonal device removal.
  • Tidal resource: Alaska’s Cook Inlet
  • OTEC: only feasible near Hawaii, Puerto Rico, U.S. Virgin Islands, Guam, Northern Mariana Islands, and American Samoa.


These challenges affect not only installation, maintenance costs, and electricity output, but also MHK scalability from small to utility-scale applications.

Time and Regulation

Getting all the regulatory agencies at the federal, state, and local levels to agree to a project is a formidable, time-consuming process. MHK devices are also far from ready to scale up to commercial levels.

Most of the ocean and many rivers are too far from the electric grid to connect

The distance required to interconnect to the electricity system is critical, as it directly impacts the economic viability of a project. Often an MHK device must be placed far from the grid because ports, cities, and other users already occupy prime grid-connection locations.

Connection to the grid is challenging and requires extra equipment due to harsh environmental conditions, intermittent and unstable load flows, variable energy output, lack of electrical demand near the generation, the length of cable from a device or array to a shore terminus, potential environmental impacts from the cable, permitting issues, and the reliability of the equipment.

The situation is even more complicated if there are large numbers of offshore generators, because connecting a large number of devices together with no load demand along the path of the network cable could produce an unstable system.

Tidal Power

The potential of tidal power has long led to proposals of a barrage (a dam that lets water flow in and out) across the entrance of a bay with a large range between low and high tides. It would generate power by releasing water trapped behind the barrage at high tide through turbines, much like a hydropower facility. Alternatively, in-stream turbines could be used, working much as wind turbines do.

Scale:  A tidal amplitude of 3.3 feet would require over 110 square miles to produce 100 MW (enough to power about 70,000 homes). This is why tidal power is limited to regions with very large tides (which tend to be in the northern latitudes, far from any cities that could use the power). Even with a current speed of 3 meters per second, a 100 MW project would need a flow of nearly 40,000 cubic meters per second, which requires 120 turbines, each having a cross-sectional area of 120 square yards, or 24 turbines of 82-foot diameter. Many more turbines would be needed for more typical, smaller currents. This many large turbines are likely to interfere with existing water uses, and an array this large would have near-field back effects that reduce the current each individual turbine experiences.
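The scale arithmetic above can be checked with a back-of-the-envelope sketch. The kinetic power flux formula (P = ½ρAv³) and the unit conversions are standard; the ~55% overall conversion fraction implied at the end is an inference from the numbers, not a figure from the report.

```python
import math

RHO = 1025.0     # seawater density, kg/m^3
SPEED = 3.0      # current speed, m/s
FLOW = 40_000.0  # volumetric flow quoted for ~100 MW, m^3/s

# Cross-sectional area needed to pass that flow at 3 m/s
area_total = FLOW / SPEED  # ~13,300 m^2

# The two equivalent turbine layouts quoted in the text
area_per_small = 120 * 0.9144**2                # 120 sq yd ~= 100 m^2 each
layout_small = 120 * area_per_small             # 120 small turbines -> ~12,000 m^2
d_large = 82 * 0.3048                           # 82 ft ~= 25 m diameter
layout_large = 24 * math.pi * (d_large / 2)**2  # 24 large rotors -> ~11,800 m^2

# Raw kinetic power through that area: P = 1/2 * rho * A * v^3
p_raw_mw = 0.5 * RHO * area_total * SPEED**3 / 1e6  # ~185 MW in the undisturbed flow

print(f"area needed: {area_total:,.0f} m^2")
print(f"120-turbine layout: {layout_small:,.0f} m^2; 24-rotor layout: {layout_large:,.0f} m^2")
print(f"raw kinetic power: {p_raw_mw:.0f} MW (100 MW net implies roughly 55% conversion)")
```

Both quoted layouts come out near the ~13,300 m² of cross-section the flow requires, and the raw kinetic power in that flow is only about 185 MW, so 100 MW delivered leaves little margin for real-world losses.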

More than one channel: If a bay has more than one channel, extracting power in one channel tends to divert flow to the others, reducing the power available.

Engineering challenges: Corrosion, biofouling, and metal fatigue in the vigorous turbulence typically associated with strong tidal flows.

Conflicting uses: Some of the locations with the highest tidal energy density are also estuaries having ports with heavy commercial shipping traffic. It is likely that there will be limitations to the number and size of turbines and the depth at which they can be deployed so as not to interfere with established shipping lanes.

Tides generate power only two to four times a day.

Wave Power

Power in ocean waves originates as wind energy transferred to the sea surface when wind blows over large areas of the ocean. The resulting wave field consists of a collection of waves at different frequencies traveling in many directions.

If energy is removed by a wave energy device from a wave field at one location, less energy will be available in the shadow of the extraction device, so a second row of wave energy devices won’t perform as well as the first row.  The planning of any large-scale deployment of wave energy devices would require sophisticated, site-specific field and modeling analysis of the wave field and the devices’ interactions with the wave field. 


One theoretical study of wave-device interaction modeled the Wave Dragon Energy Converter deployed in the highly energetic North Sea. The authors concluded that capturing 1 GW of power would require the deployment of a 124-mile-long single row of devices or a 5-row staggered grid about 1.9 miles wide and 93 miles long. This doesn’t take into account that the recovered power must be transformed into electricity and then transmitted. Because of the high development and maintenance costs, low efficiency, and large footprint, such devices would be a sustainable option only for small-scale developments considerably less than 1 GW close to territories with limited demand, such as islands.

It would take about 81 miles of wave machines to produce as much power as a typical power plant (1000 MW). Even if you built wave machines as far north as Canada and as far south as Mexico along both coasts, you’d only get 9% of the electricity we use now (Hayden).
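Hayden’s figure implies a modest delivered power per unit length of coastline. A quick sketch; the mile-to-meter conversion is standard, and reading the figure as a linear power density is my interpretation:

```python
MILE_M = 1609.344        # meters per statute mile
length_m = 81 * MILE_M   # ~130 km of wave machines
p_total_w = 1000e6       # one typical power plant, in watts

# Delivered power per meter of device frontage
p_per_meter = p_total_w / length_m
print(f"{p_per_meter / 1e3:.1f} kW per meter of wave-machine frontage")
```

That works out to under 8 kW per meter of frontage, well below the tens of kW/m of raw deep-water wave energy flux at energetic sites, which is consistent with the low capture efficiencies discussed in this piece.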

Wave Power Efficiency

None of these systems is likely to operate at efficiencies over 90%; realistic efficiencies are probably 50-70%. This calls into question claims that wave energy facilities can capture 90% or more of the available energy.

Other Wave Power Issues

  • Waves are intermittent, which means energy production is spotty
  • Waves have a low potential energy that varies with the weather and only a small hydraulic head of 2 or 3 meters. Hence large volumes of water have to be processed, which means large structures relative to power output
  • Waves are a challenge for energy harvesting: they not only roll past a device but also bob up and down or converge from all sides in confused seas, and a device must also cope with the period of the wave (Levitan)
  • No design that’s been investigated is very good at capturing a very large fraction of the energy over a range of wave conditions. If they’re designed to efficiently capture wave energy in “average” sea conditions, they’ll be totally overwhelmed in high sea conditions. If they’re designed for efficient energy capture in high sea conditions, they’ll be almost totally insensitive to the energy present in average conditions (HED).
  • These devices typically produce what’s known as low-frequency power, which can be difficult and expensive to convert for use on the electrical grid
  • Wave technologies have lots of electrical components, hydraulic fluids and oils — all presenting a pollution risk
  • So far about 30 wave power ventures have failed, such as Denmark’s “Wave Dragon”, the UK “Salter Duck”, the Netherlands’ “Archimedes Wave Swing”, the Sea Clam, the Tapchan, the Pendulor, Finavera Renewables’ “AquaBuOY” in Oregon, Pelamis Wave Power in Portugal, Verdant Power’s East River project ($30 million spent so far), Pacific Gas & Electric’s wave energy testing program, and Oceanlinx in Sydney. In July 2014, Ocean Power Technologies canceled plans to build a wave energy project off the coast of Australia, saying the project is no longer commercially viable; the company will repay what it has received of an A$66.5M government grant intended to go toward the project’s proposed A$232M cost.

Ocean Current Power

Ocean currents (excluding tidal currents) are mainly generated by winds and shaped by Coriolis forces into strong, narrow currents that carry warm water from the tropics toward the poles. The Gulf Stream is one example; in the Florida Strait its current can exceed two meters per second.

The ocean current power team estimated the Florida current could generate 14.1 GW, or 62% of the 20 GW maximum power obtainable.

NAS thought that figure was far too high for many reasons and concluded that the maximum power that could be extracted is between 1 and 2 GW at best.

It may be even less than 1-2 GW:

  1. If the high turbine density in the water column diverted the Florida Current and forced the flow around the Bahamas
  2. If seasonal variability and meandering limit the placement of turbines to just a few narrow areas where the flow is consistent

Ocean Thermal Energy Conversion (OTEC) Power

Ocean thermal energy conversion (OTEC) is the process of deriving energy from the difference in temperature between surface and deep waters in the tropical oceans. The OTEC process absorbs thermal energy from warm surface seawater found throughout the tropical oceans and ejects a slightly smaller amount of thermal energy into cold seawater pumped from water depths of approximately 1,000 meters. In the process, energy is recovered as an auxiliary fluid expands through a turbine.

NAS thought the study should have been limited to just the areas where this could possibly work: the Hawaiian Islands, Puerto Rico, U.S. Virgin Islands, Guam, the Northern Mariana Islands, and American Samoa. Hawaii could generate 143 TWh/yr, the Mariana Islands (including Guam) 137 TWh/yr, and Puerto Rico and the U.S. Virgin Islands 39 TWh/yr. The majority of this resource is found far from the United States near Micronesia (1,134 TWh/yr) and Samoa (1,331 TWh/yr).

The continental U.S. resource is very seasonal and limited, and it is unlikely that plant owners would want to operate only part of the year.

OTEC plants are vulnerable to corrosion, strong currents, tides, large waves, hurricanes, and storms, and must remain anchored through all of these.

OTEC could cause environmental damage.

OTEC plants must be near tropical islands with steep topography to make it easier to reach deep cold water and transmit power to shore.

The committee estimated the global OTEC resource could be 5 TW (a 100-MW plant every 30 miles in the tropical ocean). In reality, this would never happen because the plants would need to connect to land-based electric grids.

OTEC needs very large equipment and very high seawater flow rates

OTEC systems are similar to most other heat engines, but significant practical aspects make OTEC difficult to implement, mainly the small available temperature difference of only ~20°C between the warm and cold seawater streams. Because of the low efficiencies, OTEC plants require very large equipment (e.g., heat exchangers, pipes) and seawater flow rates (~200-300 cubic meters per second for a typical 100-MW design) that exceed those of any existing industrial process in order to generate a significant amount of electricity.
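The ~20°C temperature difference and the quoted flow rates are consistent with simple heat-engine arithmetic. In this sketch the Carnot bound follows from the stated 20°C difference; the ~3% net efficiency and the 4°C of temperature drop actually extracted from the warm stream are illustrative assumptions, not figures from the assessment.

```python
CP = 4000.0     # specific heat of seawater, J/(kg*K), approximate
RHO = 1025.0    # seawater density, kg/m^3
T_WARM = 298.0  # ~25 C tropical surface water, in kelvin
DT = 20.0       # warm-cold temperature difference, K

carnot = DT / T_WARM  # thermodynamic upper bound, ~6.7%
eta_net = 0.03        # assumed net efficiency after pumping losses (illustrative)

p_net_w = 100e6                  # 100-MW plant
q_thermal_w = p_net_w / eta_net  # heat that must be moved: ~3.3 GW

dt_used = 4.0                             # assumed temperature drop taken from the warm stream, K
mass_flow = q_thermal_w / (CP * dt_used)  # kg/s of warm seawater
vol_flow = mass_flow / RHO                # ~200 m^3/s, matching the quoted range

print(f"Carnot limit: {carnot:.1%}, warm-water flow: {vol_flow:.0f} m^3/s")
```

Even with generous assumptions, a 100-MW plant must pump on the order of 200 m³/s of seawater, which is why the heat exchangers and pipes dominate the engineering.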

OTEC needs to be near existing electric power systems

The cold-water pipe is one of the largest expenses in an OTEC plant. As a result, the most economical OTEC power plants are likely to be open-ocean designs with short vertical cold-water pipes, close enough to shore to connect to existing electric power systems.

Concerns with tides, variation in power output, shear current effects on the cold-water pipe

The committee is concerned about the variations in isotherm depth due to internal tides, which can be significant near islands. For example, deep isotherm displacements of as much as 50 or even 100 m are common near the Hawaiian Islands, which could induce a 5-10 percent variation in power output over the tidal cycle. In addition, areas with strong internal tides will also impose strong shear currents on the cold-water pipe. Seasonal variations could lead to a 20% variation in power output in Hawaii over the course of the year. Even more dramatic changes result from fluctuations due to El Niño or La Niña in the central tropical Pacific, where the committee estimates variations in power production as high as 50 percent. The assessment group largely fails to address the temporal variability issue.

Spacing must be far apart given the huge seawater requirements

Clearly, a key question for determining the OTEC technical resource would be how closely plants could be spaced without interfering with each other or excessively disturbing the ocean thermal structure. At regional and global scales there could be a variety of impacts on the ocean arising from widespread deployment of OTEC.

There are many interesting physics, chemistry, and biology problems associated with the operation of an OTEC plant. Whitehead suggested that an optimal plant size would be around 100 MW in order to avoid adverse effects on the thermal structure the plant is designed to exploit.

In-Stream Hydrokinetic Power

In-stream hydrokinetic energy is recovered by deploying a single turbine unit or an array of units in a free-flowing stream.  Estimates of the maximum extractable energy that minimizes environmental impact range from 10 to 20% of the naturally available physical energy flux.

There are many limiting factors that will reduce the in-stream hydrokinetic energy production

These factors include but are not limited to ice flows and freeze-up conditions, transmission issues, debris flows, potential impacts to aquatic species (electromagnetic stimuli, habitat, movement and entrainment issues), potential impact to sites with endangered species, suspended and bedload sediment transport, lateral stream migration, hydrodynamic loading during high flow events, navigation, recreation, wild and scenic designations, state and national parklands, and protected archeological sites. These considerations will need to be addressed to further estimate the practical resource that may be available.

Navigable waters are a resource for a number of sectors, and coordinating their use is an immense logistical challenge that will definitely impact in-stream energy development.


NAS criticisms of the DOE report

This is just a very small part of the criticisms scattered throughout the report, much of which criticizes the data, methods, and conclusions of each of the 5 contractors, such as:

The committee was disappointed by the resource groups’ lack of awareness of some of the physics driving their resource assessments, which led to simplistic and often flawed approaches. The committee was further concerned about a lack of rigorous statistics, which are essential when a project involves intensive data analysis. A coordinated approach to validation would have provided a mechanism to address some of the methodological differences between the groups as well as a consistent point of reference. However, each validation group (chosen by individual assessment groups) determined its own method, which led to results that were not easily comparable. In some instances, the committee noted a lack of sufficient data and/or analysis to be considered a true validation. The weakness of the validations included an insufficiency of observational data, the inability to capture extreme events, inappropriate calculations for the type of data used, and a focus on validating technical specifications rather than underlying observational data.

The committee is also concerned about the scientific validity of some assessment conclusions.

All five MHK resource assessments lacked sufficient quantification of their uncertainties. There are many sources of uncertainty in each of the assessments, including the models, data, and methods used to generate the resource estimates and maps. Propagation of these uncertainties into confidence intervals for the final GIS products would provide users with an appropriate range of values instead of the implied precision of a specific value, thus better representing the approximate nature of the actual results.

The committee has strong reservations about the appropriateness of aggregating theoretical and technical resource assessments to produce a single-number estimate for the nation or a large geographic region (for example, the West Coast) for any one of the five MHK resources. A single-number estimate is inadequate for a realistic discussion of the MHK resource base that might be available for electricity generation in the United States. The methods and level of detail in the resource assessment studies do not constitute a defensible estimate of the practical resource that might be available from each of the resource types. This is especially true given the assessment groups’ varying degrees of success in calculating or estimating the technical resource base.

Challenging social barriers (such as fishery grounds, shipping lanes, environmentally sensitive areas) or economic barriers (such as proximity to utility infrastructure, survivability) will undoubtedly affect the power available from all MHK resources, but some resources may be more significantly reduced than others. The resource with the largest theoretical resource base may not necessarily have the largest practical resource base when all of the filters are considered. It is not clear to the committee that a comparison of theoretical or technical MHK resources—to each other or to other energy resources—is of any real value for helping to determine the potential extractable energy from MHK.

Site-specific analyses will be needed to identify the constraints and trade-offs necessary to reach the practical resource.

Quantifying the interaction between MHK installations and the environment was a challenge for the assessment groups. Deployment of MHK devices can lead to complex near-field and/or far-field feedback effects for many of the assessed technologies. Analysis of these feedbacks affects both the technical and practical resource assessments (and in some cases the theoretical resource) and requires careful evaluation. The committee noted in several instances a lack of awareness by the assessment groups of some of the physics driving their resource assessments, such as the lack of incorporation of complex near-field and/or far-field feedback effects, which led to simplistic and sometimes flawed approaches. The committee was further concerned about a lack of rigorous validation.

As part of the evaluation of the practical resource base, there seemed to be little analysis by the assessment groups of the MHK resources’ temporal variability. The committee recognized that the time-dependent nature of power generation is important to utilities and would need to be taken into account in order to integrate MHK-generated electricity into any electricity system.

DOE requests for proposals did not offer a unified framework for the efforts, nor was there a requirement that the contractors coordinate their methodologies. The differing approaches taken by the resource assessment groups left the committee unable to provide the defensible comparison of potential extractable energy from each of the resource types as called for in the study task statement. To do so would require not only an assessment of the practical resource base discussed by the committee earlier but also an understanding of the relative performance of the technologies that would be used to extract electricity from each resource type. Simply comparing the individual theoretical or technical MHK resources to each other does not aid in making such a comparison since the resource with the largest theoretical resource base may not necessarily have the largest practical resource base. However, some qualitative comparisons can be made, especially with regard to the geographic extent and predictability of the various MHK resources. Both the ocean current and OTEC resource bases are confined to narrow geographic regions in the United States, whereas the resource assessments for waves, tides, and in-stream show a much greater number of locations with a large resource base. As for predictability, while there is multi-day predictability for wave and in-stream systems, especially in settings where the wave spectrum is dominated by swells or in large hydrologic basins, the predictability is notably poorer than for tidal, where the timing and magnitude of events are known precisely years into the future.

Overall, the committee would like to emphasize that the practical resource for each of the individual potential power sources is likely to be much less than the theoretical or technical resource.

Tidal resource NAS criticisms

Based on the final assessment report, the assessment group produced estimates of the total theoretical power resource. However, this was done for complete turbine fences, which essentially act as barrages. The group did not assess the potential of more realistic deployments with fewer turbines, nor did they incorporate technology characteristics to estimate the technical resource base. It is clear, however, that the practical resource will be very much less than the theoretical resource.

Because power is related to the cube of current speed, errors of 100% or more occur in the prediction of tidal power density in many model regions. In the Pmax scenario, the fence of turbines is effectively acting as a barrage, so that Pmax is essentially the power available when all water entering a bay is forced to flow through the turbines. Pmax is thus likely to be a considerable overestimate of the practical extractable resource once other considerations, such as extraction and socioeconomic filters, are taken into account.

Allowing for the back effects of an in-stream turbine array deployed in a limited region of a larger scale flow requires extensive further numerical modeling that was not undertaken in the present tidal resource assessment study and is in its early stages elsewhere. However, a theoretical study by Garrett and Cummins (2013) has examined the maximum power that could be obtained from an array of turbines in an otherwise uniform region of shallow water that is not confined by any lateral boundaries. The effect of the turbines is represented as a drag in addition to any natural friction. As the additional drag is increased, the power also increases at first, but the currents inside the turbine region decrease as the flow is diverted and, as in other situations, there is a point at which the extracted power starts to decrease. The maximum power obtainable from the turbine array depends strongly on the local fluid dynamics of the area of interest. Generally, for an array larger than a few kilometers in water shallower than a few tens of meters, the maximum obtainable power will be approximately half to three-quarters of the natural frictional dissipation of the undisturbed flow in the region containing the turbines. In deeper water, the natural friction coefficient in this result is replaced by twice the tidal frequency. For small arrays, the maximum power is approximately 0.7 times the energy flux incident on the vertical cross-sectional area of the array (Garrett and Cummins, 2013). Estimates of the true available power must also take into account other uses of the coastal ocean and engineering challenges.

Conclusions & Recommendations. The assessment of the tidal resource assessment group is valuable for identifying geographic regions of interest for the further study of potential tidal power. However, although Pmax (suitably modified to allow for multiple tidal constituents) may be regarded as an upper bound to the theoretical resource, it is an overestimate of the technical resource, as it does not take turbine characteristics and efficiencies into account. More important, it is likely to be a very considerable overestimate of the practical resource as it assumes a complete fence of turbines across the entrance to a bay, an unlikely situation. Thus, Pmax overestimates what is realistically recoverable, and the group does not present a methodology for including the technological and other constraints necessary to estimate the technical and practical resource base. The power density maps presented by the group are primarily applicable to single turbines or to a limited number of turbines that would not result in major back effects on the currents. Additionally, errors of up to 30% for estimating tidal currents translate into potential errors of a factor of more than 2 for estimating potential power. Because the cost of energy for tidal arrays is very sensitive to resource power density, this magnitude of error would be quite significant from a project-planning standpoint. The limited number of validation locations and the short length of data periods used lead the committee to conclude that the model was not properly validated in all 52 model domains, at both spatial and temporal scales. Further, the committee is concerned about the potential for misuse of power density maps by end users, as calculating an aggregate number for the theoretical U.S. tidal energy resource is not possible from a grid summation of the horizontal kinetic power densities obtained using the model and GIS results. 
Summation across a single-channel cross section also does not give a correct estimate of the available power, and the values for the power across several channel cross sections cannot be added together. The tidal resource assessment is likely to highlight regions of strong currents, but large uncertainties are included in its characterization of the resource; given that errors of up to 30% in the estimated tidal currents translate into potential errors of more than a factor of 2 in the estimate of potential power, developers would have to perform further fieldwork and modeling, even for planning small projects with only a few turbines.
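The factor-of-two claim follows directly from the cube law. A minimal check; the 1025 kg/m³ density is the standard seawater value, and the reference speed is arbitrary:

```python
def power_density(v, rho=1025.0):
    """Kinetic power per unit area, W/m^2; scales as the cube of current speed."""
    return 0.5 * rho * v**3

v_true = 2.0                    # arbitrary reference current, m/s
v_overestimated = v_true * 1.3  # a +30% error in the estimated current

ratio = power_density(v_overestimated) / power_density(v_true)
print(f"power overestimated by a factor of {ratio:.2f}")  # 1.3**3 ~ 2.20
```

Since the density and the ½ cancel in the ratio, any 30% current error inflates the predicted power by 1.3³ ≈ 2.2, regardless of site.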


Waves. The theoretical wave resource assessment estimates are reasonable, especially for mapping wave power density; however, the approach taken by the assessment group is not suitable for shallow water and is prone to overestimating the resource. The group used a “unit circle” approach to estimate the total theoretical resource, which summed the wave energy flux across a cylinder of unit diameter along a line of interest, such as a depth contour. This approach has the potential to double-count a portion of the wave energy if the direction of the wave energy flux is not perpendicular to the line of interest or if there is significant wave reflection from the shore. Further, the technical resource assessment is based on optimistic assumptions about the efficiency of conversion devices and wave-device capacity, thus likely overestimating the available technical resource. Recommendation: Any future site-specific studies in shallow water should be accompanied by a modeling effort that resolves the inner shelf bathymetric variability and accounts for the physical processes that dominate in shallow water (e.g., refraction, diffraction, shoaling, and wave dissipation due to bottom friction and wave breaking).

The wave power team used a model that’s only accurate in water deeper than 164 feet (50 m). Yet shallow-water regions are where developers might prefer to put wave machines to minimize the distance to connect to the grid; devices close to shore would also be easier and cheaper to build and maintain. NAS recommended that a shallow-water model be used next time, one with much higher spatial resolution that includes shallow-water physics (e.g., refraction, diffraction, shoaling, and wave dissipation due to bottom friction and wave breaking).

Nor did they capture how often very large waves or extreme weather events likely to destroy or damage wave power equipment would occur. The model was also likely to double-count part of the wave energy; even when this was pointed out, the group continued to use the methodology, though it “clearly overestimates the total theoretical resource”.

The mechanical and electrical losses in the transformation processes and transmission significantly reduce the technical resource, typically to 15-25% of the recoverable power. So the Energetech prototype would have had a technical power resource of just 4.5% to 7.5% of the incident wave’s theoretical power.
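The 4.5-7.5% figures follow from the 15-25% conversion-and-transmission range if the prototype captured about 30% of the incident wave power; that 30% capture fraction is inferred from the arithmetic, not stated above.

```python
# Inferred fraction of incident wave power recovered by the device (assumption)
capture = 0.30
# Fraction of recovered power surviving conversion and transmission (from the text)
survive_lo, survive_hi = 0.15, 0.25

technical_lo = capture * survive_lo  # 0.045 -> 4.5% of incident power
technical_hi = capture * survive_hi  # 0.075 -> 7.5% of incident power
print(f"technical resource: {technical_lo:.1%} to {technical_hi:.1%} of incident wave power")
```

The compounding of capture, conversion, and transmission losses is why the technical resource ends up an order of magnitude below the theoretical one.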

Estimates of the current state of wave-energy technology are not based on proven devices.

Ocean Currents. The ocean current resource assessment is valuable because it provides a rough estimate of ocean current power in U.S. coastal waters. However, less time could have been spent looking at the West Coast in order to concentrate more fully on the Florida Strait region of the Gulf Stream, where the ocean current can exceed 2 m/s. This would have also allowed more focus on the effects of meandering and seasonal variability. Additionally, the current maps cannot be used directly to estimate the magnitude of the resource. The deployment of large turbine farms would have a back effect on the currents, reducing them and limiting the potential power. Recommendation: Any follow-on work for the Florida Current should include a thorough evaluation of back effects related to placing turbine arrays in the strait by using detailed numerical simulations that include the representation of extensive turbine arrays. Such models should also be used to investigate array optimization of device location and spacing. The effects of meandering and seasonal variability within the Florida Current should also be discussed.


OTEC. The group chose to use a specific OTEC plant model proprietary to Lockheed Martin as the basis for its resource assessment: a 100-MW plant, a size generally considered to be large enough to be economically viable and of utility-scale interest yet small enough to construct with manageable environmental impacts. Since no plants this large have yet been built, there are many technical and environmental challenges to overcome before even larger plants are attempted.

The committee views the use of the HYCOM model for assessment of the theoretical resource to be inadequate and also regards the application of a specific proprietary Lockheed Martin plant model with a fixed pipe length to be unnecessarily restrictive.

The DOE funding opportunity for OTEC was the only one to specify that the assessment should include both U.S. and global resources, and the assessment group chose to focus on the global resource. The committee believed, however, that more emphasis should have been placed on potential OTEC candidates in U.S. coastal waters. To demonstrate this point, the committee evaluated equation 1 and used the National Oceanographic Data Center of the National Oceanic and Atmospheric Administration’s World Ocean Atlas data to map this function for a 1,000-m pipe length, a TGE efficiency of 0.85, and PL of 30 percent. This simple exercise shows that in U.S. territory, the coastal regions of the Hawaiian Islands, Puerto Rico and the U.S. Virgin Islands, Guam and the Northern Mariana Islands, and American Samoa would be the most efficient sites for OTEC.

The committee is also concerned that the 2-yr HYCOM run will not provide proper statistics on the temporal variability of the thermal resource. Although the run includes both El Niño and La Niña events, 2 years is not sufficient to characterize the global ocean temperature field with any reliability. Ocean databases extending back more than 50 years are readily available, so it is not clear why the assessment group limited itself in this way; these longer records would allow the inter-annual variability in thermal structure due to the El Niño/Southern Oscillation (ENSO) to be evaluated. The advantage of HYCOM's higher resolution over earlier estimates from coarser climatologies may vanish if HYCOM is used without appropriate boundary conditions near the coasts, resulting in inaccurate seasonal and inter-annual statistics on thermal structure. Without these, the study is not much more valuable than prior maps of global ocean temperature differences, which already identified OTEC hot spots.

The OTEC assessment group’s GIS database provides a visualization tool to identify sites for optimal OTEC plant placement. However, assumptions about the plant model design and a limited temperature data set impair the utility of the assessment. Recommendation: Any future studies of the U.S. OTEC resource should focus on Hawaii and Puerto Rico, where there is both a potential thermal resource and a demand for electricity.

Rivers and Streams. The theoretical resource estimate from the in-stream assessment group is based upon a reasonable approach and provides an upper bound to the available resource; however, the estimate of the technical resource is flawed by the assessment group’s recovery factor approach (the ratio of technical to theoretical resource) and the omission of other important factors, most importantly the statistical variation of stream discharge. Recommendation: Future work on the in-stream resource should focus on a more defensible estimate of the recovery factor, including directly calculating the technically recoverable resource by (1) developing an estimate of channel shape for each stream segment and (2) using flow statistics for each segment and an assumed array deployment. The five hydrologic regions that comprise the bulk of the identified in-stream resource should be tested further to assure the validity of the assessment methodologies. In addition, a two- or three-dimensional computational model should be used to evaluate the flow resistance effects of the turbines on the flow.


David Korowicz. 2013. Catastrophic Shocks through Complex Socio-Economic Systems.

The globalized economy has become more complex (connectivity, interdependence, and speed), more delocalized, and more concentrated within critical systems. This has made us all more vulnerable to systemic shocks. This paper provides an overview of the effect of a major pandemic on the operation of complex socio-economic systems using some simple models. It discusses the links between initial pandemic absenteeism and supply-chain contagion, the evolution and rate of shock propagation, systemic collapse, and the difficulties of re-booting socio-economic systems.

1. A New Age of Risk

Consider the following scenarios:

  • A highly contagious pandemic outbreak in South-East Asia (of comparable or greater human impact than the 1918 influenza outbreak).
  • A disorderly break-up of the Eurozone and implosion of the global financial system.
  • A “perfect storm” during a time of major global financial instability: terrorist attacks on North African oil installations (partly driven by social unrest arising from record food prices), combined with a category 5 hurricane hitting a major population, industrial, and oil-producing region of the US East Coast.

These are all examples of potential global shocks, that is, hazards that could drive fast and severe cascading impacts mediated through global systems. Global systems include telecommunications networks; financial and banking networks; trade networks; and critical infrastructure networks. These systems are themselves highly interdependent and together form part of the globalized economy.

One of the primary questions for this paper is: given a significant hazard, how does its impact spread through the globalized economy, and in what ways are we vulnerable to the failure of interconnected systems? To answer this we need to understand how complex societies are connected and how they have changed over time. The globalized economy is an example of a complex adaptive system that dynamically links people, goods, factories, services, institutions and commodities across the globe.

The current state is characterized by exponential growth in Gross World Product of about 3.5% per annum, within a range of several percentage points, over nearly 200 years. This has correlated with emergent, self-organizing growth in socio-economic complexity, which is reflected in the growth of the:

  • Number of interacting parts (nodes): This includes exponential population growth; the 50,000+ different items available in Wal-Mart; the 6 billion+ digitally connected devices; and the number of cars, factories, power plants, mines and so on.
  • Number of linkages (edges): This includes the 3 billion passengers traveling between 4,000 airports on over 50 million flights each year; the 60,000 cargo ships moving between 5,000 ports with about a million ship movements a year; the average number of media channels (internet sites, TV channels, Twitter feeds) per person times the population; and the billions of daily financial transactions.
  • Levels of interdependence between nodes: The growing number of inputs necessary to make a good, service, livelihood, or infrastructural output, or to sustain the function of society as a whole.
  • The speed of processes (or time compression): This includes the increasing speed of financial transactions, transportation, digital signaling, and Just-In-Time logistics. If we consider the globalized economy as a single organism, we can understand this process as an increasing metabolic rate.
  • Efficiency: Increasing competition and global trade arbitrage driving down inventories, and globalized economies of scale.
  • Concentration: The emergence of ‘hubs’ within the globalized economy: a small number of very highly connected nodes whose function (or loss of function) has a disproportionate role in the operation of the globalized economy. For example, banks are not connected at random to other banks; rather, a very small number of large banks are highly connected with many other banks, which have few connections to each other. These arrangements are sometimes known as scale-free networks. We can also see concentration in critical infrastructure and trade networks.
  • De-localization: The conditions of personal welfare, of business or service output, or of a country’s economic output are smeared over the whole globalized economy. The corollary is that if there is a major failure of systems integration in the globalized economy, a localized community may have extreme difficulty meeting its basic needs.
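The “scale-free” hub structure described above can be illustrated with a minimal preferential-attachment simulation. This is my sketch with arbitrary parameters, not data about actual banking or trade networks: each new node links to an existing node with probability proportional to that node’s current degree, and a small number of hubs end up holding a disproportionate share of all links.

```python
# Minimal preferential-attachment sketch (Barabasi-Albert style, one link per
# new node): new nodes attach to existing nodes with probability proportional
# to current degree, producing a few highly connected "hubs".
import random

def grow_network(n_nodes, seed=42):
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}           # start with two linked nodes
    stubs = [0, 1]                  # each entry = one link endpoint, so a
                                    # uniform pick is degree-proportional
    for new in range(2, n_nodes):
        target = rng.choice(stubs)  # degree-proportional attachment
        degree[new] = 1
        degree[target] += 1
        stubs += [new, target]
    return degree

deg = grow_network(2000)
top10 = sorted(deg.values(), reverse=True)[:10]
share = sum(top10) / sum(deg.values())
print(f"Top 10 of 2000 nodes hold {share:.0%} of all links")
```

Most nodes keep only one or two links while the oldest, best-connected nodes accumulate dozens, which is exactly why the loss of a hub has such a disproportionate effect.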

Economic and complexity growth have in many ways reduced risk. Localized agricultural failure once risked famine in isolated subsistence communities, but now such risk is spread globally. It has made critical infrastructure such as sewage treatment and clean water available and affordable. Global financial markets enable an array of risks, from home insurance and pensions to default risk and export credit insurance, to be dispersed and potential volatility reduced. Indeed, what is remarkable is just how reliable our complex society is given the number of time sensitive inter-connections.

Another way of saying all this is that our society is very resilient, within certain bounds, to a huge range of interruptions in the flow of goods and services. Within those bounds our society is self-stabilizing. For example, supply-chain shocks from the Japanese tsunami in 2011, the eruption of the Icelandic Eyjafjallajokull volcano in 2010, and the UK fuel blockades in 2000 all had severe localized effects, in addition to shutting down some factories across the world as supply-chains were interrupted. However, the impacts did not spread and amplify, and normal functioning of the local economy quickly resumed.

But we know from many complex systems in nature and society that a system can rapidly shift from one state to another as a threshold is crossed (Scheffer 2009). One way a state shift can occur is when a shock drives the system out of its stability bounds. The form of those stability bounds can increase or decrease resilience to shocks depending upon whether the system is already stressed prior to the shock.

The commonalities of global integration mean that diverse hazards may lead to common shock consequences. The systems that transmit shocks are also the systems we depend upon for our welfare and the operation of businesses, institutions and society, so to borrow Marshall McLuhan’s phrase, the medium is the message. One of the primary consequences of a generic shock is an interruption in the flow of goods and services in the economy. This has diverse and profound implications, including food security crises, business shut-downs, critical infrastructure risks and social crises. It can in turn quickly destroy forward-looking confidence in an economy, with major consequences for financial and monetary stability, which depend ultimately on the collateral of real economic production. More generally it can entail multi-network, de-localized cascading failure leading to a collapse in societal complexity.

Previously, the dynamics of such a scenario were studied when the initial shock was a systemic banking collapse and monetary shock, which interrupted the exchange of goods and services, causing financial-system supply-chain cross-contagion and a reinforcing cascade of de-localizing multi-system risk (Korowicz 2012). In this paper a similar methodology is used to look at the socio-economic implications of a major pandemic.

2. Socio-economic Impact of a Major Pandemic

We are interested in the socio-economic implications of a major influenza pandemic whose initial impact would be direct absenteeism from illness and death, and absenteeism for family and prophylactic reasons. The pandemic wave (we will only consider one) lasts 10-15 weeks. We assume this causes an absenteeism rate of 20% or 40% over the peak period of 2-4 weeks, and a rate above 20% for 4-8 weeks when the peak is 40%. This represents our initial impact. Our question is then what happens next.
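As a toy illustration, the scenario’s numbers can be turned into a week-by-week absenteeism series. Only the peak rates (20% or 40%) and the rough durations come from the text; the triangular wave shape, the 12-week length, and the specific weekly values are my assumptions.

```python
# Illustrative absenteeism wave: a single 12-week pandemic wave with a
# triangular profile peaking at week 6. The 20%/40% peak rates follow the
# scenario in the text; the shape and weekly values are assumptions.

def absenteeism_wave(peak_rate, weeks=12, peak_week=6):
    rates = []
    for w in range(weeks):
        # linear ramp up to the peak week, then linear decline to zero
        if w <= peak_week:
            frac = w / peak_week
        else:
            frac = (weeks - 1 - w) / (weeks - 1 - peak_week)
        rates.append(round(peak_rate * frac, 3))
    return rates

severe = absenteeism_wave(0.40)
weeks_above_20pct = sum(1 for r in severe if r >= 0.20)
print(severe)
print(f"weeks at or above 20% absenteeism: {weeks_above_20pct}")
```

With a 40% peak, this toy profile keeps absenteeism at or above 20% for six weeks, consistent with the 4-8 week range assumed in the scenario.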

Some key personnel who might not show up for work are in health care and shipping/train/truck driving (and I’ve read elsewhere that the electric grid might fail if key workers stay home for fear of catching something at work, and that would bring ALL systems down).

Nor is it clear how a health service would manage a pandemic when its own operation is compromised.

3. Vulnerability Revealed

One way to understand complex socio-economic systems is to study occasions when there has been some systemic failure. In September 2000 truckers in the United Kingdom, angry at rising diesel duties, blockaded refineries and fuel distribution outlets. Consequences:

a) Petrol stations’ reliance on Just-In-Time re-supply meant the impact was rapid: within 2 days about half of the petrol stations had run out of fuel, and supplies to industry and utilities had begun to be severely affected.

b) People couldn’t get to work and businesses could not be re-supplied.

c) Supermarkets had begun to run out of food.

d) Large parts of the manufacturing sector were about to shut down.

e) Hospitals began to offer emergency-only care.

f) Automatic cash machines could not be re-supplied.

g) The postal service was severely affected.

h) There was panic buying at supermarkets and petrol stations.

i) It was estimated that after the first day an average of 10% of national output was lost. Surprisingly, at the height of the disruption, commercial truck traffic on the UK road network was only 10-12% below average values. There were clear indications that had the fuel blockades gone on just a few days longer, large parts of UK manufacturing, including the automotive, defense and steel industries, would have had to shut down.

Failure of production or supply in one area can shut down factories on the other side of the world within days of the initial interruption, as was seen after the 2010 Icelandic volcano eruption and the 2011 Japanese tsunami and Thai flooding.

A report from the think-tank Chatham House on the impacts of the Icelandic volcano, and subsequent interviews with businesses about its impact and their preparedness, came to the general conclusion that “one week seems to be the maximum tolerance of a Just-In-Time economy” before major shut-downs in business and industry would occur, and that things would not just return to normal afterwards: “many businesses said that had the disruption continued just a few days longer, it would have taken at least a month for companies to recover.” A desk study on the impact of a one-week absence of (just) trucks from the UK economy reached a similar conclusion (McKinnon 2006): “After a week, the country would be plunged into a deep social and economic crisis. It would take several weeks for most production and distribution systems to recover.”

The studies do not consider what would happen if the primary disruption were to continue for many weeks.

4. Interdependence, Liebig’s Law, and Cascading

One of the defining features of rising complexity is growing interdependence. Now, the output of a person, service provider, factory, piece of critical infrastructure, etc., depends upon ever more inputs, be they tools, intermediate products, consumables, specialist skills and knowledge or collective societal infrastructures. And those outputs in turn become further inputs through the dispersed networks of the globalized economy.

Some of the least substitutable critical inputs are labeled hubs. Hubs are things like electricity, fuel, water, and financial system functionality – things generally referred to as critical infrastructure. They are societal services and functions upon which all society depends.

A simple but important principle, Liebig’s Law of the Minimum, says that production is constrained by the scarcest critical input. Even if you have ample supplies of all but one critical input, production fails. That is, production fails on the weakest link.
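Liebig’s law is easy to state in code: achievable output is the minimum over the availability-to-requirement ratios of the critical inputs. The inputs and quantities below are invented for illustration.

```python
# Liebig's law of the minimum: output is limited by the scarcest critical
# input. Inputs and numbers are hypothetical, chosen only to illustrate.

def achievable_output(available, required_per_unit):
    """Units of output supported by the scarcest input."""
    return min(available[k] / required_per_unit[k] for k in required_per_unit)

required = {"steel_kg": 800, "chips": 50, "labor_hours": 30}    # per car (made up)
available = {"steel_kg": 80_000, "chips": 1_000, "labor_hours": 9_000}

# Steel supports 100 cars and labor 300, but chips support only 20:
print(achievable_output(available, required))  # chips are the binding constraint
```

Raising the supply of any non-binding input changes nothing; only relieving the scarcest input (here, chips) raises output.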

This explains why the businesses most exposed to supply-chain failure are the most complex ones. First, they have the most inputs (making a car can mean assembling up to 15,000 components). Second, more of their inputs are very complex and specialized, and so cannot be easily substituted: alternative production lines might not be available or might take months to re-engineer, or specialist skills may be in limited supply. Thus, auto and electronics manufacturers were among the most affected by the Icelandic volcano, the Japanese tsunami and the Thai flooding in 2011. What Liebig’s law shows is that you do not need to lose everything to stop a business, service, function or society: just the right bit. This helps to explain why a loss of only 10-12% of commercial vehicles had such a big impact during the fuel blockades in the UK. As our economies have become more complex we have been adding more inputs into our lives, goods and services, and the functioning of our societies. More of these are critical with low substitutability.

Let us now apply Liebig’s law to pandemic absenteeism. The people affected by a pandemic are part of the supply of inputs to any system’s function. There may be many people contributing to one output of a business, service or function. We assume that most employees are either unnecessary for the period of the pandemic, can telecommute, or are easily substituted. But there is a smaller number of sub-functional roles, occupied most likely by those with specialist skills, that are critical with low substitutability. If any one of them is unavailable, the sub-functional role fails, and with it the output of the whole organization or function.

With the loss of this output good or service (especially if it is critical with low substitutability), other businesses and services may be affected, potentially causing cascading effects through complex socio-economic networks as a whole.

5. Time and Cascading Failure

There is always some level of absenteeism, and some percentage of goods and services that can’t be delivered for whatever reason. The reason supply-chain contagion does not spread with every problem is that complex societies are efficient at finding alternative suppliers, and some inventories are carried to cover a hiatus. Also, most factories don’t produce very critical things, or there is plenty of substitutability: one won’t miss a brand of toothpaste in the supermarket when there are 20 brands available.

To initiate a cascading failure:

1) It has to be large scale, i.e. arising from a major hub failure or from large enough absenteeism.

2) The function needs to be central: the electric grid, the financial system, or a pandemic that keeps people from going to work, all of which are critically connected to other parts of a socio-economic network. The effects of a pandemic or hub failure in a weakly connected country, say Mali, would be unlikely to spread supply-chain failure widely. Thus we can conclude that there may be a point above which supply-chain contagion takes off, and below which society remains operational and recovery can occur. This point depends upon the initial pandemic absenteeism rate and the society’s complexity at the epicenter of the pandemic.

A simple model of supply-chain failure can be based upon the idea that the more supply-chains are disrupted or infected, the greater the chance that further supply-chains will be infected.
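A minimal version of such a model (my sketch, not the paper’s actual model) treats the disrupted fraction of supply chains SIS-style: disrupted chains disrupt others at a rate beta, while substitution and inventory buffers repair them at a rate gamma. This reproduces the take-off threshold described above: below it the disruption dies out, above it contagion spreads to a large fraction of chains.

```python
# Minimal supply-chain contagion sketch (SIS-style, my illustration):
# i = fraction of supply chains disrupted. Disrupted chains disrupt others at
# rate beta; chains find substitutes / recover at rate gamma. Contagion
# "takes off" only when beta > gamma, matching the threshold idea in the text.

def simulate(i0, beta, gamma, steps=200, dt=0.1):
    i = i0
    for _ in range(steps):
        i += dt * (beta * i * (1 - i) - gamma * i)
    return i

# Below threshold (good substitutability, spare inventory): disruption dies out.
print(round(simulate(0.05, beta=0.3, gamma=0.5), 3))
# Above threshold (large shock in a highly connected economy): it spreads,
# settling near the endemic fraction 1 - gamma/beta.
print(round(simulate(0.05, beta=0.8, gamma=0.3), 3))
```

The same 5% initial disruption either vanishes or grows to engulf most of the network; only the ratio of spread to repair changes between the two runs.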

6. External Cross-Network Contagion

Imagine a pandemic outbreak occurs in South-East Asia. The main vectors through which a shock could propagate outside the region are pandemic contagion, financial system contagion, and supply-chain contagion.

We would expect the shock to spread at different rates: a banking shock could travel faster than supply-chain contagion because the operational speed of the financial system is greater than inventory turn-over times.

Some countries’ role in trade is far more important to the globalized economy than others’. The more important the initially impacted region is, the greater the likelihood of spreading supply-chain contagion globally. Kali measured countries’ influence on global trade not only by trade volumes but by the influence a country has on the global trading system, using an Importance Index to rank countries. For example, they find that Thailand, which was at the center of the 1997-1998 Asian financial crisis, ranked 22nd in global trade share but 11th in importance. In another study, Garas used an epidemic model to look at the potential of any country to spread a crisis. One of their data sets is based upon international trade in 2007; it uses a measure of centrality to identify countries with the power to spread a crisis via their level of trade integration. As in the previous paper, centrality in the network does not necessarily correspond to the highest trade volumes. Twelve inner-core countries are identified; those named, in no particular order, are: China, Russia, Japan, Spain, UK, Netherlands, Italy, Germany, Belgium-Luxembourg, USA, and France.

Hidalgo used international trade data to look at two things: the diversity of products a country produces, and the exclusivity of what it produces. An exclusive product is one made by few other countries. Most countries in the world are non-diversified and make standard products. The most complex countries are diversified and make more exclusive products, and more exclusive products have less substitutability.

Financial system contagion outside the initially impacted region could be through banking networks, the bond market, the shadow banking system, currency volatility and confidence. Again the structure of financial networks and the centrality of the region with respect to financial assets and liabilities would determine the extent of any shock.

More broadly, if an economy were shattered and its forward-looking viability looked both precarious and uncertain, one would expect a collapse in the value of the country’s currency. Rather than helping exports (of which there would be very few, because the economy’s productive capacity had collapsed), this would hinder imports of emergency supplies and make debt in external currencies much more difficult to service. The economic damage and reduced economic prospects may then cause tightened credit conditions, spiraling bond yields and systemic bank failure.

There are also issues most pertinent to more complex societies. We imagine that after a pandemic wave people are again available for work. But people cannot become productive immediately, because other inputs are also needed, and those inputs are stalled because they rely upon still other inputs, and so on. More broadly, we may define recursion failure as “the inability of a complex economy to easily resume production and trade after a significant collapse, because in a complex and interdependent economy, production and trade must resume in order for production and trade to resume.”

Further, even if a government wanted to rebuild, it may be too complex to orchestrate resumption from the top down. This is because the economy has evolved by self-organization; nobody ever put its elements together in the first place. And even if it could be done, the systems of command, control and supply that might do it would be the very systems that had been undermined. Over time entropy would become an issue as engines rust, reagents become contaminated, and expected maintenance and repairs are left undone. This would all add to the cost and inputs needed for resumption.

The longer a socio-economic system spends in the critical regime, the more likely it is to undergo a complete systemic collapse and loss of basic function. In addition, the longer it spends in this state, the more difficult it may be to ever return to its pre-pandemic state. This is a complex society’s equivalent of a heart attack. When a person has a heart attack, there is a brief period during which CPR can revive the person. But beyond a certain point when there has been cascading failure in co-dependent life support systems, the person cannot be revived. This means that the socioeconomic system could be changed irretrievably and the job of society and government would be to both manage the crisis and plot a fundamentally different path.

To make the systems we depend upon more resilient ideally we would want more redundancy within critical systems and weaker coupling between them.

Localization and de-complexification of basic needs (food, water, waste, etc.) would provide some societal resilience if systems resilience were lost. We would want more buffering at all levels, that is, larger inventories throughout society. All this is the very opposite of the direction of economic forces.

The reason we have such tight inventories, tight coupling, and concentration in critical infrastructure is that they bring efficiency and competitive advantage. But when something goes wrong, this makes recovery harder. For example, during Superstorm Sandy, fuel shortages were exacerbated by low inventories that were the direct result of cost cutting arising from the financial crisis.

We are locked into socio-economic processes of increasing complexity that make us ever more vulnerable. Increasing vulnerability coupled with increasing hazard means that the risk of a major socio-economic collapse is rising.

Because a permanent state shift could occur, planning needs to consider how to deal with non-reversion to pre-shock conditions.


Chip Fab Plants need electricity 24 x 7. The electric grid needs chips. The Financial system needs both.

July 3, 2014     Alice Friedemann

The US Energy Department recently reported that “the nation’s aging electric grid cannot keep pace with innovations in the digital information and telecommunications network … Power outages and power quality disturbances cost the economy billions of dollars annually” (DOE). Val Jensen, a vice president at ComEd, says the current grid is “relatively dumb … the power put into the grid at the plant flows according to the laws of physics through all of the wires.”

But wait — that may be a good thing. The less dependent the electric power system is on computers, microcontrollers, processors, and SCADA, the more resilient and easy to repair the power system will be, and the less vulnerable to cyber attacks. The electric grid is already complicated enough, with 9,200 generation plants, 300,000 miles of transmission lines, and dozens of squabbling entities running it.

The Smart Grid will dramatically increase the dependency of the electric grid on microprocessors, and turn the electric system into a giant computer that will monitor itself, optimize power delivery, remotely control and automate processes, and increase communications between control centers, transformers, switches, substations, homes, and businesses.

Smart Grid devices have the potential of making the electric grid less stable: “Many of these devices must function in harsh electromagnetic environments typical of utility, industrial, and commercial locations. Due to an increasing density of electromagnetic emitters (radiated and conducted, intentional and unintentional), the new equipment must have adequate immunity to function consistently and reliably, be resilient to major disturbances, and coexist with other equipment.” (NIST)

The electric grid is vulnerable to disruptions from drought (especially for hydroelectricity), hurricanes, floods, cyberattack, terrorism, and soon rising sea levels and oil shocks (oil-fueled trains and barges deliver most coal to power plants). Making the electric grid even more dependent on microprocessors than it already is will make the grid more difficult and expensive to fix, and overly dependent on microprocessor production, the most vulnerable industry of all.

Chip fabrication can stop for weeks after even a short electric power disturbance or outage, potentially ruining an entire 30-hour batch of microprocessors and damaging manufacturing equipment. High-quality electricity must be available 24 hours a day, 7 days a week. Semiconductor chips are vulnerable to even tiny power disruptions because a single mistake anywhere in the dozens to hundreds of processing steps renders the product useless.

Chip fabrication plants cannot handle rolling blackouts

Electric service interruption is one of the major causes of semiconductor fab losses (Global). It can take a week or more for a fabrication plant to start up again (EPRI 2003).  There can be losses of millions of dollars an hour when a chip fabrication plant shuts down (Sheppard).

Chip fabrication & Financial system Interdependency

“The semiconductor industry is widely recognized as a key driver for economic growth in its role as a multiple lever and technology enabler for the whole electronics value chain. In other words, from a worldwide base semiconductor market of $213 billion in 2004, the industry enables the generation of some $1,200 billion in electronic systems business and $5 trillion in services, representing close to 10% of the world’s GDP” (wiki semiconductor industry).

Chip fabrication & Electric Grid Interdependency

Without microprocessors or electricity, infrastructure fails and civilization collapses. Just about everything that matters — financial systems, transportation, drinking water, sewage treatment, etc — is interdependent with both electricity and  microprocessors, which are found in just about every electronic device from toasters to computers.

Low Quality Electricity

The electric power system was designed to serve analog electric loads—those without microprocessors—and is largely unable to consistently provide the level of digital quality power required by digital manufacturing assembly lines and information systems, and, soon, even our home appliances. Achieving higher power quality places an additional burden on the power system.

Electricity disturbance causes:

  • Voltage sags can result from utility transmission line faults, or at a given business from motor start-ups, defective wiring, and short circuits, which reduce voltage until a protective device kicks in.
  • Transients happen due to utility capacitor bank switching or grounding problems at the energy user.
  • Harmonics and spikes often originate at end-user sites, from non-linear loads such as variable speed motor drives, arc furnaces, and fluorescent ballasts.

Any device with a microprocessor is vulnerable to the slightest disruption of electricity. Billions of microprocessors have been incorporated into industrial sensors, home appliances, and other devices. These digital devices are highly sensitive to even the slightest disruption (an outage of a small fraction of a single cycle can disrupt performance), as well as to variations in power quality due to transients, harmonics, and voltage surges and sags.

Voltage and frequency must be maintained within narrow limits

The generation and demand for electricity must be balanced over large regions to ensure that voltage and frequency are maintained within narrow limits (for frequency, usually 59.98 to 60.02 Hz). If not enough generation is available, the frequency will decrease below 60 Hz; when there is too much generation, the frequency will increase above 60 Hz. If voltage or frequency strays too far from this range, the resulting stress can damage power systems and users’ equipment, and may cause larger system outages.
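How quickly an imbalance pushes frequency out of band can be illustrated with the textbook aggregate swing approximation, df/dt = f0·ΔP/(2·H·S). The inertia constant and power figures below are made-up examples, not data about any real grid.

```python
# Illustrative frequency response to a generation/load imbalance, using the
# textbook aggregate swing approximation df/dt = f0 * dP / (2 * H * S).
# H (system inertia constant) and the power figures are made-up examples.

F0 = 60.0   # nominal frequency, Hz
H = 5.0     # aggregate inertia constant, seconds (assumed)
S = 1000.0  # system rated power, MVA (assumed)

def frequency_after(power_imbalance_mw, seconds):
    """Frequency after a sustained imbalance, ignoring governor response."""
    dfdt = F0 * power_imbalance_mw / (2 * H * S)  # Hz per second
    return F0 + dfdt * seconds

# Losing just 10 MW of generation (imbalance = -10 MW) for half a second:
f = frequency_after(-10.0, 0.5)
print(round(f, 3))   # already below the 59.98 Hz band quoted above
print(f < 59.98)
```

With these assumed numbers, a 1% generation shortfall drives frequency out of the quoted 59.98-60.02 Hz band in half a second, which is why generation and demand must be balanced continuously.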

Chip Fabrication plant shutdowns and consequences

Concern over the impact of utility power disturbances is probably greatest in the semiconductor wafer fabrication industry. Producing complex computer chips is an extremely delicate process that blends microelectronics with chemical and mechanical systems, requiring tolerances in microns. The process can take 30 to 50 days to complete and can be totally ruined in the blink of an eye (Energy User News).

Power outages frequently cause damage to chips, which are fabricated on silicon wafers about the size of dinner plates that may take eight to 12 weeks to process. Wafers that are inside processing machines at the time of an outage are often ruined. In some cases, a shutdown of the air-purifying and conditioning system that keeps air in a chip factory free of dust also could contaminate chips.

Here are a few examples:

2007. Samsung, the world’s biggest maker of memory chips, shut down 6 of its chip production lines after a power cut at its Kiheung plant, near Seoul, costing the company $43.4 million. A problem at a switchboard at a transformer substation caused the power outage. Some analysts had said the outage could wipe out as much as a month’s worth of Samsung’s total production of NAND flash memory chips, which are widely used for data storage in portable electronics. Chips that were already in the fabrication process when the outage hit were discarded, and ramping back up to the previous production level could take some time (So-eui).

2010. A drop in voltage caused a 0.07-second power disruption at a Toshiba NAND memory chip plant in Japan, which could raise prices on many devices, such as smartphones, tablet PCs and digital music players. NAND flash chips are fabricated on silicon wafers about the size of dinner plates and can take 8 to 12 weeks to process. If the power goes out at any point in that time frame, the entire batch can be destroyed (Clark).

2011. The earthquake and tsunami in Japan took out nearly 70% of global production of semiconductor silicon wafers, the platform on which computer chips are built (Dobosz). Production of the microchips that control car electronic operations was stopped at 10 Renesas factories, where about 40% of these microprocessors are made, mainly due to power outages rather than physical damage. Renesas did not expect to get back to pre-quake production levels for 4 months (SupplyChain Digital).

2011. The massive monsoon flooding of Thailand took out 25% of the world’s hard disk drive production (Thailand is the world’s #2 producer). One company, Western Digital, was out for 6 weeks and lost about $250 million.

2011. Due to the Fukushima nuclear power plant disaster, Japan had to institute rolling outages, which shut down chip manufacturing. Even a 3-hour outage can result in a stopped production line that can’t be restarted for a week or so. Analysts estimated this could cost $3.7 billion in losses (SIRIJ).

2013. DRAM supplies from Hynix’s fabrication plant in Wuxi, China, weren’t expected to return to normal until the following year after a fire severely damaged that facility, according to a report from analysts at investment bank Piper Jaffray. In the meantime, DRAM prices rose 35% as looming supply constraints prevailed and there appeared to be no rush by DRAM makers to sign new contracts. The fire, which blazed for almost two hours on September 4, damaged equipment used for making PC DRAM, sending memory prices skyrocketing. Hynix said it would make every effort to ramp up its Wuxi-based fab operations and return to normal DRAM production by that November, a prediction Piper Jaffray contested (Mearian).

Emergency and Backup Power

A supply of fluctuation-free electricity is critical. Chip fabrication plants and server farms must balance the expense of building independent electricity resources against the cost of equipment failures and network crashes caused by unreliable power. Hewlett-Packard has estimated that a 15-minute outage at a chip fabrication plant cost the company $30 million, about half the plant’s power budget for a year. Backup systems are so expensive that a survey of 48 companies revealed only four had backup power sources: three used generators and the fourth used solar (Hordeski).

It is too expensive to operate a dedicated onsite power plant. Fab plants use up to 60 megawatts of power, so putting a natural gas or coal power plant onsite would cost somewhere between $100 and $400 million.

Microprocessors and electricity are coupled

Microprocessors can’t be made if the electric grid is down. The electric grid can’t function without microprocessors — about 10% of total electrical demand in America is controlled by microprocessors, and by 2020 this level is expected to reach 30% or more (EPRI).

Related Articles:


Clark, D. Dec 10, 2010. Power Blip Jolts Supply of Gadget Chips. Wall Street Journal.

Dobosz, J. 15 March 2011. Japan Outages Serve Up Semiconductor Bargains On A Platter. Forbes.

DOE. July 2003. Grid 2030: A National Vision for Electricity’s Second 100 Years. United States Department of Energy.

EPRI (Electric Power Research Institute). 2003. Electricity Technology Roadmap: Meeting the Critical Challenges of the 21st Century: Summary and Synthesis. Palo Alto, Calif.: EPRI.

Energy User News Vol 26 #1. Jan 2001. Semiconductor Wafer Fab Plant Gets Premium Utility Power.

FM Global. 31 Oct 2003. Safeguarding the Semiconductor Fabrication Facility. Controlled Environments.

Hordeski, Michael F. 2005. Emergency and Backup Power Sources: Preparing for Blackouts and Brownouts. CRC press.

Mearian, L. 30 Sep 2013. DRAM prices up 35% since China fab plant fire.

NIST (National Institute of Standards and Technology). 24 Jan 2014. Electromagnetic Compatibility of Smart Grid Devices and Systems. U.S. Department of Commerce.

Sheppard, J. Oct 14, 2003. Reducing Risk with Enterprise Energy Management: Observations After the Biggest Blackout in US History.

SIRIJ. April 6, 2011. Rolling power outages make chip fabrication impossible.

So-eui, R. Aug 4, 2007. Samsung chip lines fully working. Reuters.

SupplyChain Digital. 11 May 2011. Renesas to renew operations June 1.


Lambert & Hall: Energy, EROI and quality of life

Excerpts from: Lambert, Jessica G., Hall Charles A. S. et al. 2014. Energy, EROI and quality of life. Energy Policy 64:153–167


Fig. 12. “Pyramid of Energetic Needs” representing the minimum EROI required for conventional oil, at the well-head, to perform various tasks required for civilization. The blue values are published values; the yellow values are increasingly speculative. If the EROI of a fuel (say oil) is 1.1:1, then all you can do is pump it out of the ground and look at it. Each increment in EROI allows more and more work to be done.

Abstract. The near- and long-term societal effects of declining EROI are uncertain, but probably adverse. To evaluate the possible linkages between societal well-being and net energy availability, we compare these preliminary estimates of energy availability: (1) EROI at a societal level, (2) energy use per capita, (3) multiple regression analyses and (4) a new composite energy index (Lambert Energy Index), to selected indicators of quality of life (HDI, percent of children underweight, health expenditures, Gender Inequality Index, literacy rate and access to improved water). Our results suggest that energy indices are highly correlated with a higher standard of living.

1. Introduction

Humans, as well as our complex societies, require food energy and now massive amounts of external energy to survive and reproduce. For all organisms it is the net energy, or the energy available to an organism or a society after investments to obtain that energy, that is important, indeed that may be the most important factor in determining the long-term survival and wellbeing of humans and society. The history of human cultural advancement can be examined from the perspective of the development of energy resources and the evolution of energy conversion technologies. Energy provided by the burning of fossil fuels has fostered the expansion of economic, social and environmental development. The availability of energy and the increased efficacy with which it is used has enabled humans to enhance their comfort, live longer and increase their numbers.

Because energy is used directly and indirectly in the production of all goods and services, energy prices have a significant impact on nearly every facet of economic performance. Economic analyses indicate that decline in the rate of increase in energy availability is likely to have serious effects. There is a strong correlation between per capita energy use and social indicators such as the UN’s Human Development Index.

1.1. Quality of energy. The quality of a unit of energy is the usefulness of that energy unit to society. The amount of work that can be performed by an available unit of energy (one not used directly or indirectly in the acquisition of the next unit of energy) influences the perception of quality but is not the only factor in ascertaining that unit of energy’s usefulness. For example, hydropower creates electricity that has greater economic utility than a similar amount of heat energy. However, electricity is less useful for smelting ore, as it would need to be translated into thermal energy for this task and would lose a good deal of its special properties in the process. Energy return on investment (EROI) is one measure for establishing the quality of a unit of energy.

We use EROI as a gauge of the effectiveness of human activity intended to satisfy fundamental physical needs, assist in achieving a sense of mental and psychological well-being, and accomplish the higher aspirations associated with the best of what the human species has to offer. Studies of early human culture suggest that hunter-gatherers such as the !Kung had a relatively large energy surplus (i.e. an EROI of 10:1), which allowed them to spend a great deal of time in leisure activities. Just as with the !Kung, the larger the surplus, i.e. the higher the EROI, the greater the societal welfare that can be generated. Hence the higher the EROI of a society, the greater the contributions possible to quality of life.

Anthropologist White (1959) was among the first to recognize the importance of surplus energy for art, culture, progress and indeed all the trappings of modern civilization.

Modern humans invest their own energy plus an enormously larger quantity of fossil fuel to produce food, to generate leisure and to do the plethora of activities and attributes we associate with modern society. Whether increased GDP is required is implicit but not proven; one can imagine a causative chain: higher EROI → higher GDP → higher social well-being.

An economy without sufficient domestic fuels of a type that it needs, such as oil for transport, must import these fuels and pay for them using an externally-accepted currency via some kind of surplus economic activity. This is especially the case if and as the nation develops industrially. Oil is usually the fuel of choice. The ability to purchase the oil used to maintain or grow an economy depends upon what an economy can generate to sell to the world, the oil required to grow or produce those products and their relative prices. Assume an economy that depends 100% on imported oil (e.g. for agriculture and transportation).

Costa Rica is an example. It has no domestic fossil fuels (although considerable hydroelectric power) but has a fairly energy-intensive economy, and to a large degree pays for its imported oil with exported agricultural products, e.g. bananas and coffee. These are commodities highly valued in the world and hence readily sold. They are also quite energy-intensive to produce, especially when produced to the quality that sells in rich countries. Costa Rica’s bananas require an amount of money equivalent to about half of their dockside purchase price to pay for the oil and petrochemicals required for their production and cosmetic quality. These production expenses consume a large portion of the economic “surplus” necessary to generate hard currency to pay for imported petroleum.

1.4. EROI and the net energy cliff

Fig. 1 below illustrates the possible distribution of energy employed to produce energy (light grey) and the outcome of this process, the energy available to society (dark grey), for various fuel sources ranked according to their EROI values. As EROI approaches 1:1, the ratio of the energy gained (dark grey) to the energy used (light grey) decreases exponentially. High EROI fuels allow a greater proportion of a fuel’s energy to be delivered to society: a fuel with an EROI of 100:1 (horizontal axis) will deliver 99% of its useful energy (vertical axis) to society. Conversely, lower EROI fuels deliver substantially less useful energy to society (e.g. a fuel with an EROI of 2:1 will deliver only 50% of its energy to society). Therefore, large shifts in high EROI values (e.g. from 100:1 to 50:1) may have little or no impact on society, while small variations in low EROI values (e.g. from 5:1 to 2.5:1) may have a far greater and potentially more “negative” impact on society.

Fig. 1. The “Net Energy Cliff”

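The relationship behind the cliff reduces to simple arithmetic: at an EROI of r, one unit of energy must be reinvested for every r units produced, so society keeps the fraction 1 − 1/r of the gross energy. A minimal sketch:

```python
# The "net energy cliff" in one line: society's share of gross energy
# at a given EROI is 1 - 1/EROI.

def net_fraction(eroi):
    """Fraction of a fuel's gross energy left for society."""
    return 1 - 1 / eroi

for eroi in (100, 50, 10, 5, 2.5, 2, 1.1):
    print(f"EROI {eroi:>5}:1 -> {net_fraction(eroi):.0%} to society")
```

This reproduces the figures quoted in the text (99% delivered at 100:1, 50% at 2:1) and shows why falling from 100:1 to 50:1 barely matters while falling from 5:1 to 2.5:1 does.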
The oil, gas and coal that dominate energy use today probably had EROI values of 30:1 to 100:1 or more in the past. Therefore, we did not need to be concerned with their EROIs or the potential political, economic and social ramifications of decreasing EROI values. Recently, we have become aware that the EROI, and hence the amount of net energy available to society, are in a general decline as the highest grade fossil fuel deposits are depleted.

“New” energy sources must be sufficiently abundant and have a large enough EROI value to power society, or much more time, effort, and attention must be paid to securing the next year’s supply of energy, leaving less money, energy, and labor available for discretionary purposes. The general decline in EROI for our most important fuels implies that depletion is a more powerful force than technological innovation.

Carbon capture and sequestration (CCS) and the use of hydrogen fuel cells are topics of interest to the energy community but are not considered within this discussion, as neither is a method of source energy production.

If the EROI values of traditional fossil fuel energy sources (e.g. oil) continue to decline and non-conventional energy resources fail to provide sufficient quantities of high EROI alternatives, we may be moving toward the “net energy cliff.” If EROI continues to decline over time, the surplus wealth that is used to perform valuable but perhaps less essential activities in a society (e.g. higher education, the arts, technologically advanced health care, etc.) will probably decline. Given this, we believe that declining EROI will play an increasingly important role in our future economy and quality of life.

1.5. Quality of life indices. We hypothesize that access to cheap and abundant fuel is related to an individual’s and a society’s ability to attain a “higher quality of life,” using some commonly used indicators of a society’s performance: the Human Development Index (HDI), percent of children underweight, average health expenditures per capita, percent female literacy, Gender Inequality Index (GII), and improved access to clean water for rural communities. These values convey an array of environmental and social features that help define the “quality of life” of the citizens of a nation.

The Human Development Index (HDI) is a commonly used composite index of well-being and is calculated using four measures of societal well-being: life expectancy at birth, adult literacy, combined educational enrollment, and per capita GDP. It has a possible range of 0 to 1. The world’s most affluent countries in 2009 had HDI values above 0.7; these include Norway (0.876), with the highest value, followed by Australia (0.864), Sweden (0.824), the Netherlands (0.818), and Germany (0.814). The lowest HDI values, below 0.35, tend to belong to the world’s least affluent countries (e.g. Ethiopia (0.216), Malawi (0.261), Mozambique (0.155)).
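As a sketch of how such a composite index works, the code below follows the general shape of the pre-2010 UNDP method described above: each dimension is scaled to [0, 1] between fixed goalposts and the dimension indices are averaged. The goalposts and the example input values are illustrative assumptions, not official figures.

```python
# Hedged sketch of the old (pre-2010) HDI construction. Goalposts and
# the example country profile are assumptions for illustration only.
import math

def scale(value, lo, hi):
    """Scale a raw value to [0, 1] between fixed goalposts."""
    return (value - lo) / (hi - lo)

def hdi(life_expectancy, adult_literacy, enrollment, gdp_per_capita):
    life = scale(life_expectancy, 25, 85)                       # years
    education = (2 / 3) * adult_literacy + (1 / 3) * enrollment  # literacy weighted 2:1
    income = scale(math.log(gdp_per_capita),                     # diminishing returns to income
                   math.log(100), math.log(40_000))
    return (life + education + income) / 3

# A hypothetical affluent-country profile lands near the top of the 0-1 range:
print(round(hdi(78, 0.99, 0.93, 30_000), 3))  # 0.935
```

The log on income captures the idea that an extra dollar matters less in a rich country, which is why affluent countries cluster near 1 while the poorest sit far below.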

Some scientists believe that energy scarcity is associated with constrained food production, poverty, limited production and conveyance of essential goods and services, and also generates strain on other limited environmental resources.

Results: Energy availability and quality of life

We find that many indices of human social well-being are well correlated with indices of energy availability and, as expected, GDP per capita. We also find that these quality of life indices are as well correlated with a composite index of energy use and distribution. Hence it appears that the quantity, quality and distribution of energy are key issues that influence quality of life.

3.4. EROI for imported oil for developing countries. Developing nations, defined in this paper as those with an EROI_SOC (societal EROI) of 20:1 or less, are also countries characterized as having high, and sometimes very high, population growth rates. As these populations grow, and as the bulk of these people become increasingly located within cities, the task of feeding these urban dwellers becomes impossible without industrialized agriculture. Agricultural products grown with high yield tend to be especially energy-intensive, whether grown for internal consumption or for export.

In addition, most of these emergent countries are developing their industries; exportation of agricultural and industrial products is often how they obtain foreign exchange to obtain needed industrial inputs. In general, as the GDP of a developing nation increases so does its energy use (or perhaps the converse). Consequently, for these and many other reasons fuel use in developing nations tends to increase rapidly. Most developing countries, however, do not have their own energy supplies, especially oil, which is needed to run their economic machine.

The implications for all nations

Traditionally, economists have viewed quality of life indices as a consequence of economic input and well-being. However, we find that EROI_SOC and per capita energy use are as strong a statistical predictor as traditional economic indices. Both energy per capita and EROI_SOC are independent measures of the influence of energy availability on the ability of an economy to do work, which includes the generation of economic well-being and “quality of life.”

The process of developing fuel-intensive domestic industries to generate exports has worked reasonably well for many developing nations in the past, when the price of oil was low compared to the prices of exports. However, the trends suggested by our data imply that the increasing oil prices observed over the past decade, if they continue, will substantially impact developing nations and their ability to produce goods. When oil prices increase, these oil-importing nations are “stuck” with the industrial investments that the people of those nations have become dependent upon. For a nation without domestic sources of fossil fuels, an environment of rising imported energy prices relative to the price of exports obligates that nation to dedicate more and more of its production (and therefore energy use) to obtaining the next unit of energy needed to run the economy. Large and increasing populations, mechanized agriculture and industrialization are all making developing nations increasingly dependent on foreign fuels. When the ratio of the price of oil to exports is low, times are good. When, inevitably, the relative price of oil increases, things become much tougher. Once a developing nation steps onto this “fossil fuel treadmill,” it becomes difficult to step off. If the price of oil continues to increase, and hence the EROI_IO (EROI of imported oil) declines, this is likely to correspond to lower quality of life indices for the citizens of these nations. Specifically, health expenditures per capita, HDI and GII are likely to decline.

Certainly history is littered with cities and entire civilizations that could not maintain a sufficient net energy flow, showing us that certain thresholds of surplus energy must be met in order for a society to exist and flourish. As a civilization flourishes and grows, it tends to generate more and more infrastructure, which requires additional flows of energy for its maintenance metabolism. The concept of a hierarchy of “energetic needs” required for the maintenance and perhaps growth of a typical “western” society is somewhat analogous to Maslow’s “pyramid of (human) needs”: humans must first meet their physiological and reproductive needs and then progressively less immediate but still important psychological needs. Like Maslow’s vision of a hierarchy of human needs, a society’s energy needs are hierarchically structured. In this theory, needs perceived as “lower” in the hierarchy, e.g. extraction and refining of fossil fuels, must be satisfied before needs “higher” in the hierarchy become important at a societal level. For example, the need to first extract and then refine fuels must be met in order to meet the need for transport of that energy to its point of use. In Western society, the energy required to, e.g., grow and transport sufficient food cannot be met without first fulfilling these first three needs (i.e. extraction, refining and transport of those fuels to their point of use). Energy for the maintenance of a family, the provision of basic education for the next generation of citizens, and healthcare for all citizens follows the same hierarchical structure; each progressive level of energy needs requires a higher EROI and must be fulfilled before the next can be met. Discretionary use of energy, e.g. for the performing arts and other social amenities, can be perceived as a societal energetic necessity only once all levels beneath it are fulfilled.

The rating of importance of “the arts” is probably related to the socio-economic position that individuals or societal groups hold, and may be operative only for those at the top of that society. A society’s pyramidal hierarchy of energetic needs represents the relative importance of various components of a society, ranked by importance to human survival and well-being, and the quality of energy devoted to the production and maintenance of the infrastructure required to support those components. The specific and concrete nature of the lower levels may appear increasingly obscure and ambiguous to those at “higher” levels but is absolutely essential for their support.

As we use up our best fossil fuels and the EROI of traditional fossil fuels continues to decline, countries with currently high EROI_SOC and energy use per capita values may find themselves in a deteriorating position, one with lower EROI_SOC and energy use per capita. Policy decisions that focus on improving energy infrastructure and energy efficiency, and that provide additional non-fossil energy sources (e.g. nuclear) within these nations, may stem the tide of declining energy quality.

Most alternatives to oil have a very low EROI and are not likely to generate as much net economic or social benefit. Improving the efficiency with which economies convert energy (and materials) into marketable goods and services is one means of improving energy security.

There is evidence, too, that once payments for energy rise above a certain threshold at the national level (e.g. approximately 10 percent in the United States), economic recessions follow.



From Wood (10,000 BC to 1750) to Coal (1750-1920) to Oil, Natural Gas, & Electricity to What?

Cutler J. Cleveland. Energy Quality, Net Energy, and the Coming Energy Transition. Department of Geography and Center for Energy and Environmental Studies, Boston University.

The level of health, food security and especially material standard of living that exists today throughout the world is made possible by the expansive use of fossil fuels. While many take this affluence for granted, a long run view illustrates that the fossil fuel era is relatively new and will last for a relatively short period of time. For thousands of years prior to the Industrial Revolution, human societies were powered by the products of photosynthesis, principally fuel wood and charcoal. Widespread use of coal did not develop until the 18th century, oil and gas not until the late 19th century.

In 1800, the nation was fueled by animal feed, which powered the draft animals on farms, and wood — used for domestic heating and cooking and by early industry.

Wood and animal feed rapidly disappeared as coal became the dominant fuel; animal feed declined after the introduction of the first tractor in 1911 began displacing draft animals.

The Industrial Revolution transformed the nation’s energy picture, substituting coal for wood on a massive scale.

By the time of World War I, coal accounted for nearly 75% of energy use. But coal’s place as the dominant fuel was fleeting as well.

Oil and natural gas quickly replaced coal, just as coal had replaced wood.

By the 1960s, oil and gas together accounted for more than 70% of total energy use; coal had dropped to less than 20%. Primary electricity has played a small but steadily growing role. Primary electricity refers to electricity generated by hydroelectric, nuclear, geothermal, solar, and other so-called “primary” sources. The increase in the share of primary electricity towards the end of the period is due to the rise in nuclear generating capacity.

This long run view of energy raises an important question: what guided these transitions in the past, and to what extent can such information inform us about the impending transition from fossil to renewable fuels?

The transition from one major energy system to the next is driven by a combination of energetic, economic, technological and institutional factors. The energy-related forces stem from the tremendous economic and social opportunities that new fuels, and their associated energy converters, offered compared to earlier ones.

Energy plays a critical role in nature.

All organisms must use energy to perform a number of life-sustaining tasks such as growth, reproduction, and defense from predators. The most fundamental task of all is using energy to obtain more energy from the environment. When energy is used to do useful work, energy is degraded from a useful, high quality state to a less useful, low quality state. This means that all systems must continuously replace the energy they use, and to do so takes energy.

This fundamental reality means that Energy Return on Investment (EROI) and net energy can be used to explain the foraging behavior of organisms, the distribution and abundance of organisms, and the structure and functioning of ecosystems.

For the overwhelming majority of their existence, humans obtained energy from the environment by hunting and gathering.

The EROI for food capture is the ratio of the caloric value of the food captured to the expenditure of energy in the capture or gathering process.
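That ratio can be written down directly. The numbers below are invented, but a 10:1 return matches the hunter-gatherer surplus cited in the Lambert & Hall excerpt above:

```python
# EROI of food capture: calories obtained divided by calories spent
# obtaining them. The example values are illustrative, not field data.

def food_eroi(kcal_captured, kcal_expended):
    return kcal_captured / kcal_expended

# e.g. a day's foraging yielding 10,000 kcal for 1,000 kcal of effort:
print(food_eroi(10_000, 1_000))  # 10.0
```

Anything below 1.0 means the forager burns more energy hunting than the food returns, which no organism can sustain for long.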

Natural ecosystems produce enough edible food energy to support hunter-gatherers at densities no greater than one person per square kilometer. Traditional agricultural societies support hundreds of people per square kilometer, enabling permanent settlements to grow in size and number. The greater surplus released labor from the land, creating the potential for people to move to urban areas and work in manufacturing and industry.

The economic usefulness of an energy converter is determined in part by its power, the rate at which it converts energy to do useful work.

Humans and draft animals convert energy to work at low power outputs. The energetic limits of people and draft animals set very definite economic and social limits.

The Industrial Revolution erased these limits with the introduction of the steam engine, which had a power output that dwarfed that of muscle power.

The higher power output of the steam engine enabled it to deliver a much larger energy surplus than human labor or draft animals.

Given the economic advantage offered by heat engines powered by fossil fuels, it is no surprise that labor and draft animals were rapidly replaced by heat engines once they became available.

The United States’ economy illustrates this transition. In 1850, more than 90% of the work done in the economy was accomplished by human labor and draft animals.

Over the next half-century, engines powered by wood and then coal rapidly displaced the animate converters.

By the 1950s, labor and animals had almost been completely displaced. Of the economic changes driven by the new fuels and machines, one of the most dramatic was the effect on labor productivity. In agriculture, for example, the productivity of labor increased more than 100-fold relative to rates possible prior to the Industrial Revolution. This increase in labor productivity reduced the need for farm labor and workers moved to industrial jobs.

How strong is the connection between energy use and economic growth?

One hypothesis is that the link is weak. First, as fossil fuels become scarcer, their price will rise, which in turn will trigger technological changes and substitutions that improve energy efficiency; indeed, many believe that the price shocks of 1973-74 and 1979-80 led to the adoption of many new energy-efficient technologies. Second, the shift to a service-oriented, dot-com economy will de-couple energy use from economic activity. A dollar’s worth of steel requires 93,000 Btu to produce in the United States; a dollar’s worth of financial services uses 9,500 Btu. Thus, it stands to reason that a shift towards less energy-intensive activities will reduce the need for energy.
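Using the two Btu-per-dollar figures just quoted, a toy calculation shows how shifting the output mix lowers economy-wide energy intensity. The two-sector economy and its output shares are hypothetical:

```python
# Economy-wide energy intensity as a weighted average of sectoral
# intensities, using the steel and financial-services figures from the
# text. The two-sector economy and its output shares are hypothetical.

STEEL_BTU_PER_DOLLAR = 93_000
FINANCE_BTU_PER_DOLLAR = 9_500

def avg_intensity(steel_share):
    """Btu per dollar of GDP for a two-sector steel/finance economy."""
    return (steel_share * STEEL_BTU_PER_DOLLAR
            + (1 - steel_share) * FINANCE_BTU_PER_DOLLAR)

print(avg_intensity(0.50))  # 51250.0 Btu per dollar
print(avg_intensity(0.25))  # 30375.0 Btu per dollar, after halving steel's share
```

Halving steel's share of output cuts average intensity by about 40% in this sketch, which is the structural-shift argument in miniature.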

A second hypothesis is that the connection between energy use and economic output is strong. The heat equivalent of a fuel is just one of the attributes of the fuel and ignores the context in which the fuel is used, and thus cannot explain, for example, why a thermal equivalent of oil is more useful in many tasks than a heat equivalent of coal.

Because of the variation in attributes among energy types, the various fuels and electricity are less than perfectly substitutable in production or consumption. For example, a Btu of coal is not perfectly substitutable with a Btu of electricity; since the electricity is cleaner, lighter, and of higher quality, most people are willing to pay a premium price per Btu of electricity.

Consider incoming solar energy. The land area of the lower 48 United States intercepts 500 times the nation’s annual energy use. But that energy is spread over nearly 3 million square miles of land, so the energy absorbed per unit area is very small. Plants, on average, capture only about 0.1% of the solar energy reaching the Earth. This means that actual plant biomass production in the United States is very small compared to the overall incoming solar energy.

Power density combines two attributes of energy sources: the rate at which energy can be produced from the source and the geographic area covered by the source. A coal mine in China, for example, can produce upwards of 10,000 watts per square meter of mine area. As the above examples indicate, most solar technologies have low power densities compared to fossil fuels.

A low energy and power density means that large amounts of capital, labor, energy and materials must be used to collect, concentrate and deliver solar energy to users.

This makes them more expensive than fossil fuels. The difference between solar and fossil energy is best represented by their energy return on investment (EROI). The EROI for fossil fuels tends to be large, while that for solar tends to be low. This is the principal reason that humans aggressively developed fossil fuels in the first place. Fossil fuels have allowed us to develop lifestyles that are also very energy-intensive. The places where we live, work and shop have very high power densities. Supermarkets, office buildings and private residences in industrial nations demand huge amounts of energy. This very energy-intensive way of living, working, and playing has been made possible by fossil fuel sources that are equally concentrated. Another quality difference between renewable fuels and fossil fuels is their energy density: the quantity of energy contained per unit mass of a fuel. For example, wood contains 15 MJ per kilogram; oil contains up to 44 MJ per kilogram.
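The energy-density gap at the end of that paragraph is easy to make concrete: how many kilograms of each fuel deliver one gigajoule of heat?

```python
# Mass of fuel needed per gigajoule, from the energy densities given
# in the text (wood ~15 MJ/kg, oil up to ~44 MJ/kg).

DENSITY_MJ_PER_KG = {"wood": 15, "oil": 44}

def kg_per_gj(fuel):
    return 1_000 / DENSITY_MJ_PER_KG[fuel]  # 1 GJ = 1,000 MJ

print(round(kg_per_gj("wood"), 1))  # 66.7 kg of wood per GJ
print(round(kg_per_gj("oil"), 1))   # 22.7 kg of oil per GJ
```

Roughly three kilograms of wood do the work of one kilogram of oil, which is part of why transport runs on liquid fuels rather than biomass.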


Among the countless technologies humans have developed, only two have increased our power over the environment in an essential way.

Georgescu-Roegen called these Promethean technologies. Promethean I was fire, unique because it was a qualitative conversion of energy (chemical to thermal) and because it generates a chain reaction that sustains itself so long as sufficient fuel is forthcoming. The mastery of fire enabled man not only to keep warm and cook food, but above all to smelt and forge metals, and to bake bricks, ceramics, and lime. No wonder that the ancient Greeks attributed to Prometheus (a demigod, not a mortal) the bringing of fire to us.

Promethean II was the heat engine. Like fire, heat engines achieve a qualitative conversion of energy (heat into mechanical work), and they sustain a chain reaction process by supplying surplus energy. Surplus energy (or net energy) is the gross energy extracted less the energy used in the extraction process itself. The Promethean nature of fossil fuels is due to the much larger surplus they deliver compared to muscle energy from draft animals or human labor.
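The surplus-energy idea reduces to simple bookkeeping. A sketch of the two definitions used here (the 20:1 and 3:1 returns are illustrative placeholders, not figures from the text):

```python
def net_energy(gross_out, energy_in):
    """Surplus (net) energy: gross energy extracted minus the
    energy used in the extraction process itself."""
    return gross_out - energy_in

def eroi(gross_out, energy_in):
    """Energy return on investment: gross energy delivered
    per unit of energy invested in getting it."""
    return gross_out / energy_in

# Illustrative numbers only: a source returning 20 units per unit
# invested versus one returning 3 units per unit invested.
for label, gross in [("high-EROI source", 20.0), ("low-EROI source", 3.0)]:
    print(label, "surplus:", net_energy(gross, 1.0), "EROI:", eroi(gross, 1.0))
```

The point of the comparison: at 20:1 nearly all extracted energy is surplus available to run the rest of society, while at 3:1 a third of gross output is consumed just keeping the energy system itself going.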

The energy surplus delivered by fossil fuel technologies is the energetic basis of the Industrial Revolution.


Posted in Wood | 2 Comments

Population, Fossil Fuels, Consequences

Now that oil, coal, and natural gas are at peak production or soon will be, how can any rational person argue there's no need for birth control and less immigration, when 5 of the 7 billion humans alive today exist only because of fossil fuels?

Not only that, we’re driving other species extinct now that we’re using up three-quarters of the Earth’s land:

  • 1%      Urban and infrastructure
  • 11.7%   Cropland
  • 26.8%   Forestry
  • 36%     Livestock grazing (livestock are 20% of all animal biomass on the planet; UNFAO 2006)

The 24.5% that we aren’t using is:

  • 12.5%   Rocky, desert, or covered with snow
  • 7.4%    Unproductive arctic and alpine tundra, grasslands
  • 4.6%    Pristine forests, including boreal and tropical rainforests
Source: (Erb, 2009 – doesn’t include Greenland or Antarctica)

A Litany of Evils caused by overpopulation and immigration

  • Aquifer depletion, especially northwestern India, northern and western China, northern Mexico, Iraq, Yemen, Pakistan, and Syria
  • Climate change
  • Climate refugees: New Orleans among the first climate refugees – soon those in London, New York, Washington, Miami, Shanghai, Kolkata, Cairo and Tokyo will join them.
  • Storms from higher surface water temperatures in Central America, the Caribbean, the Atlantic and Gulf coasts of the USA, East and Southeast Asia, Japan, China, Taiwan, the Philippines, Vietnam, and Bangladesh.
  • Desertification. Expanding deserts include the Sahara, pushing into Morocco, Tunisia, and Algeria, while the Sahel moves southward into Nigeria. Deserts are forcing migrations in Iran, Brazil, and Mexico. The expansion of deserts in China has accelerated since 1950; 24,000 villages have been abandoned partially or entirely. The Gobi desert grew by an area half the size of Pennsylvania in just 5 years and is within 150 miles of Beijing. The 1930s dust bowl forced 2 million people to leave Oklahoma, Texas, and Kansas. As the Ogallala aquifer continues to deplete, the conditions for even larger dust bowls grow more likely.
  • Extinction
  • Invasive species
  • Pollution refugees: Love canal, Times Beach Missouri, Chernobyl area, cancer villages in China, Fukushima
  • Rising oceans
  • Water shortages
  • Toxic pollutants in local environments
  • Wildlands lost, wildlife habitat fragmented, converted to farmland, reservoirs, power lines, roads, mines, logging, overgrazing, bottom trawling, urban sprawl

What’s at stake: 5 billion people dying; starvation and disease; genocide and madness on the scale of the Nazis, the Rwandan Hutu-Tutsi genocide, North Korea, Mao, and Stalin; nuclear war; chaos; and endless wars.

Feeding the next 3 billion means cutting down the remaining forests and unprecedented biodiversity destruction, as we take over what few bits of wild remain and replace them with soil-eroding, aquifer-guzzling, pesticide-polluting crops. Most scientists don’t think we can sustain the 7 billion we have now; ten billion is a sick fantasy.

One reason not even 7 billion can survive much longer is that we’re mining topsoil to grow enormous amounts of food now.  Soil exhaustion has always been a factor in the fall of past civilizations; it just took them longer to destroy their soil (on average 1,500 years) because they didn’t have mega-horsepower tractors to compact and till it so it could wash and blow away within 100-200 years.  Soil erosion is happening 17 times faster than new soil is being formed on 90% of farmland (IUGS 2013).  Future generations simply won’t be able to grow as much food.

Population growth relentlessly destroys past environmental victories.

A wild river that was once saved gets dammed. A freeway that was once prevented is built, ripping apart the ecosystem and tight-knit neighborhoods.

A million acres of prime farmland in America is paved over by sprawl every year

That’s 2.2 million acres if you include wild land.  Who benefits? Developers and businesses that can pay cheap wages.

Overpopulation was caused by coal, oil, and natural gas

Fossil fuels allowed us to intensify agriculture and, equally important, gave us the ability to harvest, preserve, and deliver food before it spoiled, in myriad ways:

  1. Iron made with coal rather than charcoal is what launched the industrial revolution and made combustion engines, tractors, vehicles, etc., possible
  2. Trains delivering food to inland areas of famine and later trucks that could deliver food and other essential goods anywhere
  3. Up to five times as much food grown with Haber Bosch nitrogen natural-gas fertilizers
  4. Public Health – clean water and food (i.e. sewage and water treatment, etc., raised average lifespans far more than medicine and continues to do so)
  5. Container ships, above all, made globalization possible (Levinson). America now imports half of its food.

Fossil Fuels have allowed us to go way past carrying capacity

Since the 1980s we’ve been using about 1.4 Earths by burning vast troves of oil, coal, and natural gas. This energy allowed us to go way beyond our carrying capacity by intensifying agriculture and using up resources that would have otherwise been preserved for future generations.

There are many other reasons why population went up

  • Wanting children is a biological drive
  • Abortions and birth control were hard to come by
  • Capitalism depends on endless growth
  • Religious leaders depend on endless growth of worshippers to amass power and wealth
  • Before oil-based weapons systems, the largest army was the most likely victor
  • Political, military, and business (especially real estate) leaders want more voters, the largest armies, and more consumers, which leads to abortions being banned and birth control being hard to come by
  • Humans don’t think very well, see my list of “Over 250 cognitive biases, fallacies and more” at energyskeptic, or read Carol Tavris’s book “Mistakes Were Made But Not by Me: Why We Justify Foolish Beliefs, Bad Decisions, & Hurtful Acts”
  • It is taboo to be realistic. Reality-based talk is labeled pessimism and dismissed. Happy endings to Hollywood movies, lack of critical thinking skills and science in schools, and other cultural factors in America have taken this “must always be optimistic” attitude to such a crazy level that “Positive Thinking” ought to be in the DSM-5. Some good books to read: Ehrenreich’s “Bright-sided: How the Relentless Promotion of Positive Thinking Has Undermined America” and Kunstler’s “Too Much Magic: Wishful Thinking, Technology, and the Fate of the Nation”
  • The business need to make products break so more products could be sold led to a much earlier peak of resources. Read Slade’s “Made to Break: Technology and Obsolescence in America”.
  • Even people who were aware of “The State Of The World” had children, hoping that “The Scientists Would Come Up With Something”.
  • We live in the moment. Today. People have a hard time imagining they’ll be hungry tomorrow after a large meal. Even if you could convince people that times would be hard decades ahead, that would not be a strong enough reason to refrain from having kids.

America could have stayed below 200 million

Several systems ecologists have estimated that the carrying capacity of the United States without fossil fuels is somewhere between 100 and 250 million people.  How do we get from over 317 million to 100 million in less than 20-30 years?  It’s already too late for zero immigration or one child per woman to do the trick, but still, both of these would help a bit.

Limiting our population in America would have a huge impact.

Americans consume 5 times as much as the average person, so 317 million Americans are the consumption equivalent of 1.58 billion Chinese.

Exponential Growth: Sustainability Impossible

Above all, if the concept of exponential growth had been taught in schools, or explained by journalists and environmental groups, Americans would be more willing to have fewer children.

Here’s how Albert Bartlett explains it: “The growth in one doubling time is greater than the total growth during all the preceding doubling times”.

An example he uses is oil production. For 100 years, world oil production grew 7% per year, doubling roughly every decade (the rule of 70). That’s 10 doublings, which means 1970 oil production was about a thousand times (2^10 = 1,024) larger than in 1870. So every decade, more oil was produced than in all preceding decades combined.

Similarly, with each doubling of population, we cause as much destruction as all of the preceding doublings.

It took 5,000 years for population to double from 1 to 2 million people between 20 and 15 thousand years ago, at a rate of almost zero growth. But it took only 46 years to go from 2 to 4 billion, between 1930 and 1976. Now, 37 years later, we haven’t quite doubled again, but we’re close — 7.13 billion. The rate of population growth has declined very slightly, but growth is still exponential, and orders of magnitude faster than the near-zero rate that prevailed for most of human history.
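Bartlett’s rule and the oil example above are easy to verify numerically. A sketch (the 7% growth rate and 100-year span are the figures used above; everything else is standard exponential-growth arithmetic):

```python
import math

def doubling_time(growth_rate_pct):
    """Years for an exponentially growing quantity to double
    at the given annual percentage growth rate."""
    return math.log(2) / math.log(1 + growth_rate_pct / 100)

# Oil production growing 7% per year doubles roughly every decade...
t_double = doubling_time(7)      # ~10.2 years, close to the rule of 70

# ...so a century of 7% growth is about ten doublings,
# i.e. on the order of a thousandfold (2**10 = 1024) growth.
factor = 1.07 ** 100

# Bartlett's rule: the growth during any one doubling exceeds
# the total growth during all the preceding doublings combined.
for n in range(1, 11):
    latest_doubling = 2 ** n - 2 ** (n - 1)   # growth in doubling n
    all_prior = 2 ** (n - 1) - 1              # total growth before it
    assert latest_doubling > all_prior

print(f"doubling time at 7%/yr: {t_double:.1f} years; 100-year factor: {factor:.0f}")
```

Note that compounding at exactly 7% for 100 years gives a factor of about 870, a bit under the round “thousand times” quoted in the text; the thousandfold figure corresponds to exactly ten doublings.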

Once Upon a Time, people understood population mattered

1963 President Johnson told the United Nations that “five dollars invested in population control is worth 100 dollars invested in economic growth” (Ehrlich 1970)

1968 President Eisenhower: “once as president, I thought and said that birth control was not the business of our Federal Government. The facts changed my mind…I have come to believe that the population explosion is the world’s most critical problem.”

1976 Gallup poll: 84% said they didn’t want more people in the United States (Hays). The population was 200 million back then.

The consequences: If journalists and environmental groups had kept population issues in print and in public awareness, more than half of the American people wouldn’t have to die of starvation, disease, or war in less than a generation (that’s how Mother Nature solves overpopulation).

The consequences: Sprawl and consequent lower carrying capacity

Sprawl is one of the largest environmental problems in America and worldwide. It increases energy and water consumption and air pollution, and it destroys wildlife. In the USA between 1982 and 2001, we lost 34,000,000 acres of forest, cropland, and pasture to development, an area the size of Illinois.

Posted in Overpopulation | 2 Comments