Why the demise of civilization is inevitable

MacKenzie, D. April 2, 2008. Why the demise of civilisation may be inevitable. New Scientist.

Every civilization in history has collapsed. Why should ours be any different?

Homer-Dixon doubts we can stave off collapse completely. He points to what he calls “tectonic” stresses that will shove our rigid, tightly coupled system outside the range of conditions it is becoming ever more finely tuned to. These include population growth, the growing divide between the world’s rich and poor, financial instability, weapons proliferation, disappearing forests and fisheries, and climate change. In imposing new complex solutions we will run into the problem of diminishing returns – just as we are running out of cheap and plentiful energy.

The stakes are high. Historically, collapse always led to a fall in population. “Today’s population levels depend on fossil fuels and industrial agriculture,” says Tainter. “Take those away and there would be a reduction in the Earth’s population that is too gruesome to think about.”

If industrialized civilization does fall, the urban masses – half the world’s population – will be most vulnerable. Much of our hard-won knowledge could be lost, too. “The people with the least to lose are subsistence farmers,” Bar-Yam observes, and for some who survive, conditions might actually improve. Perhaps the meek really will inherit the Earth.

A few researchers have been making such claims for years. Disturbingly, recent insights from fields such as complexity theory suggest that they are right. It appears that once a society develops beyond a certain level of complexity it becomes increasingly fragile. Eventually, it reaches a point at which even a relatively minor disturbance can bring everything crashing down.

Some say we have already reached this point, and that it is time to start thinking about how we might manage collapse.
Environmental mismanagement

History is not on our side. Think of Sumeria, of ancient Egypt and of the Maya. In his 2005 best-seller Collapse, Jared Diamond of the University of California, Los Angeles, blamed environmental mismanagement for the fall of the Mayan civilization and others, and warned that we might be heading the same way unless we choose to stop destroying our environmental support systems.

Lester Brown of the Earth Policy Institute in Washington DC agrees. He has long argued that governments must pay more attention to vital environmental resources.

Others think our problems run deeper, dating back to the moment our ancestors began to settle down. “For the past 10,000 years, problem solving has produced increasing complexity in human societies,” says Joseph Tainter, an archaeologist at Utah State University, Logan, and author of the 1988 book The Collapse of Complex Societies.

If crops fail because rain is patchy, build irrigation canals. When they silt up, organize dredging crews. When the bigger crop yields lead to a bigger population, build more canals. When there are too many for ad hoc repairs, install a management bureaucracy, and tax people to pay for it. When they complain, invent tax inspectors and a system to record the sums paid. That much the Sumerians knew.
Diminishing returns

There is, however, a price to be paid. Every extra layer of organization imposes a cost in terms of energy, the common currency of all human efforts, from building canals to educating scribes. And increasing complexity, Tainter realized, produces diminishing returns. The extra food produced by each extra hour of labor – or joule of energy invested per farmed hectare – diminishes as that investment mounts. We see the same thing today in a declining number of patents per dollar invested in research as that research investment mounts. This law of diminishing returns appears everywhere, Tainter says.

To keep growing, societies must keep solving problems as they arise. Yet each problem solved means more complexity. Success generates a larger population, more kinds of specialists, more resources to manage, more information to juggle – and, ultimately, less bang for your buck.
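Tainter’s law of diminishing returns can be sketched in a few lines of code. The production curve below is purely illustrative (a square-root curve standing in for any output that grows more slowly than the effort invested); the point is only that each extra unit of investment buys less than the one before.

```python
# Illustrative only: a hypothetical production curve exhibiting
# diminishing marginal returns, in the spirit of Tainter's argument.

def total_output(investment: float) -> float:
    """Total output for a given investment (output = 10 * sqrt(investment))."""
    return 10.0 * investment ** 0.5

def marginal_return(investment: float, step: float = 1.0) -> float:
    """Extra output gained from investing one more unit."""
    return total_output(investment + step) - total_output(investment)

for units in (1, 10, 100, 1000):
    print(f"at {units:>4} units invested, the next unit yields {marginal_return(units):.3f}")
```

Run it and the marginal yield falls from about 4.1 at the start to under 0.2 by the thousandth unit: the same effort, steadily less return.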

Eventually, says Tainter, the point is reached when all the energy and resources available to a society are required just to maintain its existing level of complexity. Then when the climate changes or barbarians invade, overstretched institutions break down and civil order collapses. What emerges is a less complex society, organized on a smaller scale or taken over by another group.

Tainter sees diminishing returns as the underlying reason for the collapse of all ancient civilizations, from the early Chinese dynasties to the Greek city state of Mycenae. These civilizations relied on the solar energy that could be harvested from food, fodder and wood, and from wind. When this had been stretched to its limit, things fell apart.
An ineluctable process

Western industrial civilization has become bigger and more complex than any before it by exploiting new sources of energy, notably coal and oil, but these are limited. There are increasing signs of diminishing returns: the energy required to get each new joule of oil is mounting and although global food production is still increasing, constant innovation is needed to cope with environmental degradation and evolving pests and diseases – the yield boosts per unit of investment in innovation are shrinking.
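The mounting energy cost of getting oil can be restated as a falling energy return on investment (EROI). The EROI values below are hypothetical, chosen only to show the shape of the problem: net energy erodes slowly at high EROI, then falls off a cliff.

```python
# Hypothetical EROI figures, not data from the article: net energy is what
# remains after subtracting the energy spent obtaining the energy.

def net_energy_fraction(eroi: float) -> float:
    """Fraction of gross output left for society: 1 - 1/EROI."""
    return 1.0 - 1.0 / eroi

for eroi in (100, 20, 10, 5, 2):
    print(f"EROI {eroi:>3}:1 -> {net_energy_fraction(eroi):.0%} of gross output is net energy")
```

Falling from 100:1 to 20:1 costs society only four percentage points of net energy; falling from 5:1 to 2:1 costs thirty.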

Is Tainter right? An analysis of complex systems has led Yaneer Bar-Yam, head of the New England Complex Systems Institute in Cambridge, Massachusetts, to the same conclusion that Tainter reached from studying history. Social organizations become steadily more complex as they are required to deal both with environmental problems and with challenges from neighboring societies that are also becoming more complex, Bar-Yam says. This eventually leads to a fundamental shift in the way the society is organized.

“To run a hierarchy, managers cannot be less complex than the system they are managing,” Bar-Yam says. As complexity increases, societies add ever more layers of management but, ultimately, in a hierarchy one individual has to try to get their head around the whole thing, and this starts to become impossible.
Increasing connectedness

Things are not that simple, says Thomas Homer-Dixon, a political scientist at the University of Toronto, Canada, and author of the 2006 book The Upside of Down. “Initially, increasing connectedness and diversity helps: if one village has a crop failure, it can get food from another village that didn’t.”

As connections increase, though, networked systems become increasingly tightly coupled. This means the impacts of failures can propagate: the more closely those two villages come to depend on each other, the more both will suffer if either has a problem. “Complexity leads to higher vulnerability in some ways,” says Bar-Yam. “This is not widely understood.”

The reason is that as networks become ever tighter, they start to transmit shocks rather than absorb them. “The intricate networks that tightly connect us together – and move people, materials, information, money and energy – amplify and transmit any shock,” says Homer-Dixon. “A financial crisis, a terrorist attack or a disease outbreak has almost instant destabilizing effects, from one side of the world to the other.”

For instance, in 2003 large areas of North America and Europe suffered blackouts when apparently insignificant nodes of their respective electricity grids failed. And this year China suffered a similar blackout after heavy snow hit power lines. Tightly coupled networks like these create the potential for propagating failure across many critical industries, says Charles Perrow of Yale University, a leading authority on industrial accidents and disasters.
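The difference between absorbing and transmitting shocks can be seen in a toy cascade model (my own sketch, not Perrow’s or Bar-Yam’s analysis). Nodes sit on a ring, and each fails once the fraction of its failed neighbors crosses a tolerance threshold; a lower threshold stands in for tighter coupling.

```python
# Toy model of cascading failure on a ring network. Illustrative only:
# the topology, thresholds, and seed are arbitrary choices.
import random

def cascade(n_nodes: int, k: int, threshold: float, seed: int = 1) -> int:
    """Fail one random node, propagate, return the final number of failed nodes."""
    rng = random.Random(seed)
    # ring lattice: each node depends on its k nearest neighbors on each side
    links = {i: [(i + d) % n_nodes for d in range(-k, k + 1) if d]
             for i in range(n_nodes)}
    failed = {rng.randrange(n_nodes)}
    changed = True
    while changed:  # keep sweeping until no new node fails
        changed = False
        for node in range(n_nodes):
            if node in failed:
                continue
            if sum(nb in failed for nb in links[node]) / len(links[node]) >= threshold:
                failed.add(node)
                changed = True
    return len(failed)

print(cascade(100, 2, threshold=0.6))   # loosely coupled: prints 1, the shock is absorbed
print(cascade(100, 2, threshold=0.25))  # tightly coupled: prints 100, the shock spreads everywhere
```

Nothing about the network changed except the tolerance of each node; that alone flips a contained local failure into a total cascade.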
Credit crunch

Perrow says interconnectedness in the global production system has now reached the point where “a breakdown anywhere increasingly means a breakdown everywhere”. This is especially true of the world’s financial systems, where the coupling is very tight. “Now we have a debt crisis with the biggest player, the US. The consequences could be enormous.”

“The networks that connect us can amplify any shocks. A breakdown anywhere increasingly means a breakdown everywhere”

“A networked society behaves like a multicellular organism,” says Bar-Yam. “Random damage is like lopping a chunk off a sheep.” Whether or not the sheep survives depends on which chunk is lost. And while we are pretty sure which chunks a sheep needs, it isn’t clear – it may not even be predictable – which chunks of our densely networked civilization are critical, until it’s too late.

“When we do the analysis, almost any part is critical if you lose enough of it,” says Bar-Yam. “Now that we can ask questions of such systems in more sophisticated ways, we are discovering that they can be very vulnerable. That means civilization is very vulnerable.”
Tightly coupled system

Scientists in other fields are also warning that complex systems are prone to collapse. Similar ideas have emerged from the study of natural cycles in ecosystems, based on the work of ecologist Buzz Holling, now at the University of Florida, Gainesville. Some ecosystems become steadily more complex over time: as a patch of new forest grows and matures, specialist species may replace more generalist species, biomass builds up and the trees, beetles and bacteria form an increasingly rigid and ever more tightly coupled system.

“It becomes an extremely efficient system for remaining constant in the face of the normal range of conditions,” says Homer-Dixon. But unusual conditions – an insect outbreak, fire or drought – can trigger dramatic changes as the impact cascades through the system. The end result may be the collapse of the old ecosystem and its replacement by a newer, simpler one.

Globalization is resulting in the same tight coupling and fine-tuning of our systems to a narrow range of conditions, he says. Redundancy is being systematically eliminated as companies maximize profits. Some products are produced by only one factory worldwide. Financially, it makes sense, as mass production maximizes efficiency. Unfortunately, it also minimizes resilience. “We need to be more selective about increasing the connectivity and speed of our critical systems,” says Homer-Dixon. “Sometimes the costs outweigh the benefits.”

 

Posted in 3) Fast Crash, Interdependencies

Effects of biodiesel on diesel engines: John Deere

[ Since petroleum is finite, the most important focus of U.S. energy research ought to be keeping trucks operating, because civilization ends when trucks stop running. Ideally this would be done with a “drop-in” fuel that can be burned in existing diesel engines. That way we would not have to scrap the trillions of dollars invested in trucks, transportation, and oil pipeline infrastructure, and we would save the enormous amount of energy, cost, and materials required to build a new fuel distribution system, service stations, and new or modified trucks.

Since the most important trucks are the ones that plant and harvest food, here’s what John Deere has to say about using biodiesel in their engines.

For an even more detailed account of how biodiesel affects engines made to burn petroleum-based diesel, see:

Radich, A. 2004. Biodiesel performance, costs, and use. Energy Information Administration. http://www.eia.gov/oiaf/analysispaper/biodiesel/

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts:  KunstlerCast 253, KunstlerCast278, Peak Prosperity]

John Deere. 2016. Using Biodiesel in John Deere Engines.

All John Deere engines can use biodiesel blends. B5 blends are preferred, but concentrations up to 20 percent (B20) can be used, provided the biodiesel in the fuel blend meets the standards set by the American Society for Testing and Materials (ASTM) D6751 or European Standard (EN) 14214.

John Deere engines with exhaust filters should not use biodiesel blends above B20. Concentrations above B20 may harm the engine’s emissions control system. Specific risks include, but are not limited to, more frequent regeneration, soot accumulation, and more frequent ash removal. For these engines, John Deere-approved fuel conditioners containing detergent/dispersant additives are required when using B20, and recommended when using lower biodiesel blends.

John Deere engines without exhaust filters can operate on biodiesel blends below and above B20 (up to 100 percent biodiesel); however, they should be operated at levels above B20 ONLY if the biodiesel is permitted by law and meets the EN 14214 specification. Engines operating on biodiesel blends above B20 may not fully comply with or be permitted by all applicable emissions regulations. For these engines, John Deere-approved fuel conditioners containing detergent/dispersant additives are required when using biodiesel blends of B20 or higher, and recommended when using lower biodiesel blends.

Biodiesel impacts all diesel engines — no matter what brand. In an effort to ensure every John Deere engine user enjoys a positive experience, we want you to know the benefits as well as the cautions of using biodiesel.

Material compatibility

  • Through repeated exposure, biodiesel can seep through certain seals, gaskets, hoses, elastomers, glues, and plastics. This is more of a problem in older engines.
  • Natural rubber, nitrile, and butyl rubber are particularly vulnerable to degradation.
  • Brass, bronze, copper, lead, tin, and zinc can accelerate the oxidation of biodiesel and create deposits in the engine.

Performance

  • Compared to conventional petroleum diesel fuel, B20 will result in slight reductions in power and fuel economy. Expect a 2% reduction in power and a 3% reduction in fuel economy when using B20 biodiesel. Expect up to a 12% reduction in power and an 18% reduction in fuel economy when using B100.
  • Biodiesel can accelerate the degradation of crankcase oil.
  • When using biodiesel fuel, the engine oil level must be checked daily.
  • In no instance should the fuel dilution of the oil be allowed to exceed 5%. OILSCAN™ can be used to verify fuel dilution levels.
  • Fuel should be sampled periodically to ensure a consistent percentage of biodiesel.
  • Biodiesel can reduce water separator efficiency.
  • Biodiesel can cause cold weather flow degradation.
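To put the power and fuel-economy figures above in concrete terms, here is a back-of-envelope calculation. The baseline horsepower and fuel economy are hypothetical, not John Deere numbers; only the percentage reductions come from the list above.

```python
# Hypothetical baseline figures; the percentage cuts are the ones quoted
# for B20 (2% power, 3% fuel economy) and B100 (12% power, 18% fuel economy).

def adjusted(value: float, percent_reduction: float) -> float:
    """Apply a percentage reduction to a baseline value."""
    return value * (1.0 - percent_reduction / 100.0)

baseline_power_hp = 300.0  # hypothetical engine rating
baseline_mpg = 6.5         # hypothetical fuel economy on petroleum diesel

for blend, power_cut, econ_cut in (("B20", 2.0, 3.0), ("B100", 12.0, 18.0)):
    print(f"{blend}: {adjusted(baseline_power_hp, power_cut):.0f} hp, "
          f"{adjusted(baseline_mpg, econ_cut):.2f} mpg")
```

On these assumed numbers, B20 costs the operator little, while B100 gives up 36 hp and nearly a fifth of the fuel economy.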

Storage and Handling

  • To improve storage of biodiesel fuels, John Deere recommends the use of a fuel conditioner. To be effective, the conditioner needs to be added when the fuel is fresh (close to the time it is produced). Periodic testing of the fuel is recommended to ensure it continues to meet specifications.
  • Fuel conditioners can improve fuel flow in cold temperatures and oxidation stability in the summer.
  • Tanks should be kept as full as possible to minimize condensation because water accelerates microbial growth.
  • Acceptable storage tank materials include aluminum, steel, fluorinated polyethylene, fluorinated polypropylene, Teflon®, and most fiberglass.
  • Sedimentation and water should be removed on a routine basis.
  • New fuel filters should be installed when biodiesel is introduced to older or used engines. For the first two changes, the fuel filter life will be half the standard.
  • Biodiesel might cause corrosion and deposit formation due to higher acidity.

Emissions

  • Users are responsible for compliance with local emissions regulations limiting the use of biodiesel in emissions-certified engines.
  • Biodiesel tends to increase NOx emissions while reducing smoke.
  • The use of biodiesel blends above B20 can impact the performance and maintenance of exhaust filters.

This list of considerations is not intended to be all-inclusive. Consult your engine operator’s manual, your local John Deere engine distributor or equipment dealer, or visit www.JohnDeere.com/biodiesel for more information.

Warranty

  • The John Deere warranty covers only defects in material and workmanship as manufactured and sold by John Deere. Failures caused by poor quality fuel of any type cannot be compensated under our warranty.
  • IMPORTANT: Raw pressed vegetable oils are NOT acceptable for use as fuel in any concentration in John Deere engines. Their use could cause engine failure.
Posted in Biodiesel, Trucks

Restore wild bison

[ Native wild animals have the least impact on ecosystems. While cattle plod along in line with one another, bison dance their own unique steps across the landscape, and don’t wear the deep rutted grooves into the soil that cattle do. As toxic invasive species spread across rangeland beyond the point where they can be controlled, domestic cattle will be less able to cope than bison. So let’s bring the bison back now, so our descendants can once again hunt them on horseback a century (or less) from now.

Alice Friedemann   www.energyskeptic.com ]

DeWoody, J.A. August 29, 2014. Heirloom genomes and bison conservation. A book review of James A. Bailey’s 2013 “American Plains Bison. Rewilding an Icon.” Science Vol. 345:1009

James A. Bailey, who was at Colorado State University, wrote this book “to describe some details of the breadth and depth of wildness in American plains bison, and to show how bison wildness is threatened by creeping domestication.”  It is a sobering assessment of bison biology today. Bailey’s thesis is that most bison advocates undervalue wildness to the detriment of bison and our sense of free will.

Early on, it becomes apparent that this book will not be a dusty academic treatise that only looks backward: Bailey quickly makes his foray into contemporary issues and notes that today, only two bison herds in the contiguous United States live in the presence of any nonhuman predators. How can we expect bison to continue a natural journey when we have removed predators as evolutionary drivers?

The book then proceeds to provide the obligatory summary of how humans devastated bison populations. Bailey does so in depressingly successful fashion, replete with photos and shameful anecdotes of our ancestors’ exploits.

Bailey drives home the point that the maintenance of bison in small pens is a travesty. Our society should instead retain elements of bison wildness by challenging them in a natural manner. This means with competitors, predators (wolves, bears, and humans), pathogens, and the environment itself. Wild plains bison evolved historically in the presence of pathogens and predators, and they remain the best means to control population sizes.

Bailey convincingly argues that traits necessary for the survival of wild animals are ill-adapted for captivity and vice versa. Animal production (i.e., farming or ranching) is insufficient to maintain the historical and biological integrity of wild animals. Many attributes of wild bison populations (e.g., dominance hierarchies) simply do not exist in most captive populations because artificial breeding schemes selectively avoid them. As a conservationist and hunter myself, I agree with this sentiment and particularly like his admonition that hunters and biologists must avoid artificial selection (e.g., for horn size) to the extent possible.

Bailey analyzes the 44 largest conservation herds of plains bison in the United States by considering a number of factors related to “wildness,” including range size, supplemental feeding, the presence of natural predators, sex ratio, and disease management (lack thereof being most consistent with wildness). The results are a bit disheartening: only two herds (Yellowstone and Utah’s Henry Mountains) stand out as justifiably “wild.”

In the remaining herds, inbreeding, hybridization, and artificial selection (domestication) reduce the wildness of native bison and diminish their capacity to provide natural ecosystem services that are crucial to many other plains species. A variety of other species are described (e.g., prairie chickens and Arkansas darters) that could benefit from the establishment of large bison herds.

Bailey then switches gears, from esoteric discussions about why we should conserve wild bison to how we actually do so. The vast majority of states have no recovery plan for bison, and their regulatory status impedes conservation in most Western states because bison are not managed like most wildlife species. Bailey does not wave his hands and say “Let’s rewild bison everywhere,” but he provides detailed suggestions about where and how to conserve wild bison. The practical problems that are apt to confound bison rewilding are clearly recognized, including the uncertainties of global warming, the (incomprehensible) fact that many states do not legally recognize bison as wildlife, and the tendency to relegate bison to poor-quality lands because they will be easiest (and cheapest) to acquire. He argues that an objective review of bison status under the Endangered Species Act would at least identify them as threatened and recognizes that it might take a century or more to widely restore wild plains bison in the United States.

Bailey’s musings about nature resonated deeply, in large part because of the author’s intuitive understanding of wildness. He writes that “Wildness is the most unique and irreplaceable characteristic of wildlife. … In a world where humans increasingly restrict their own freedom by crowding and monotonizing their environment, wild bison should be retained, at least as a symbol of what we have sacrificed in domesticating and civilizing ourselves.”

The plight of the bison should strike a chord with Bailey’s intended general audience, but ecologists, evolutionary biologists, and wildlife managers will find this book an effective primer on practical conservation biology. It should be required reading for all wildlife students; I know it will be for mine.

Posted in Farming & Ranching

A U.S. Senate hearing on T. Boone Pickens plans for natural gas and wind to reduce oil dependence

[ This session is unusual in that the words “peak oil” are spoken several times, and M. King Hubbert, James Howard Kunstler, and Matt Simmons are lauded.   Gal Luft points out that “10 years ago, Osama bin Laden predicted that oil would be $144 a barrel. Everybody laughed at him. Oil was only $12 a barrel at the time. He was right.”

Pickens’ ideas about running transportation on natural gas haven’t worked out so far. It was hoped that 20% of trucks would be running on natural gas by now, but only 3% are, for many reasons that I explain in my book “When Trucks Stop Running: Energy and the Future of Transportation.”

At least Pickens realizes that heavy-duty transportation, especially trucks, is what matters most. Yet cars dominate discussions and get the lion’s share of funding for “energy solutions.” And guess what: people drive more miles when cars get more efficient, undoing the oil saved. Even if people drove less, so what? Trucks, trains, and ships burn diesel; cars burn gasoline. Diesel engines can’t run on gasoline (or ethanol, diesohol, and many other fuels). Diesel engines are just as important to our high level of civilization as the diesel they burn, because they are twice as efficient as gasoline engines and far more powerful, lasting up to 40 years and a million miles.

Alice Friedemann   www.energyskeptic.com ]


Senate 110-1023. July 22, 2008. Energy security. An American imperative.  U.S. Senate hearing.

Excerpts from this 175-page hearing follow:

T. BOONE PICKENS, Founder & CEO, BP Capital Management

We had produced 1 trillion barrels of oil at the turn of the century. It is interesting because if you look at King Hubbert’s extension, peak oil, and what would happen, the guy was great, in my estimation. I am a disciple.  I don’t think there are 2 trillion barrels of oil.  You may say take the oil shale on the western slope and this and that and everything. You can add up a bunch of stuff. When you add it up, it is going to be very expensive oil. But in looking at conventional oil—I live and you live and everybody in this room lives in the hydrocarbon era, and that era started with the automobile in 1900.

Half of the oil that I see out there had been produced by the year 2000.

Now, we have another trillion barrels, and you say, well, that is another hundred years. No. You started slow, ramped up, and now the next trillion is going to go out of the system within the next 50 years. So you are going to be forced to abandon the hydrocarbon era.

Can you imagine researchers 500 years out that come back and look at us? They are going to say, ‘‘That was a strange crowd. They lived on oil as a fuel.’’

We are going to have to make it to the next fuel. But what is going to happen, if I am right on what I am trying to do, I am going to awaken the American people, and they are going to see what they are up against. When they walk out of a room, they will turn off the lights. They do not do that now.

The Pickens Plan starts with harnessing wind and building solar capabilities. We are blessed with some of the best wind and solar resources in the world.  The plan substitutes electricity generated by natural gas-fired plants with wind-generated electricity. Natural gas-fired is 22 percent; the wind is going to replace that 22 percent. The natural gas freed up is directed to transportation needs of the country. The natural gas is cheaper, cleaner than gasoline, and its supply is plentiful. And, most of all, it is American.

But natural gas is nothing more than a bridge to the next fuel because when you get to 2050, we are pretty well maxed out on hydrocarbons as a transportation fuel. I almost think it is divine intervention to have natural gas show up at such a critical time for this country, and to be able to use it as a bridge to the next fuel in the next 20 or 30 years.

And 70% of the oil is used for transportation. When a barrel of oil comes to the United States today, it will be moved to a refinery, refined, then go into marketing, then go into our cars, and in 4 months it is gone. It is gone. We burn it up. It is out of here. And so we have to get a hold of this situation.

 

Senator COLLINS. How much of the solution also should encompass energy conservation?

Mr. PICKENS. Oh, it has got to be on page 1, of course. We have got to conserve. There is no question about that. We have been very wasteful. But in our defense, we had cheap oil. And as long as we had cheap oil—I don’t know whether you have seen Jim Kunstler.  I went over to Southern Methodist University (SMU) and heard him the other night. He is worth hearing. He is a generalist, but he tells us where we made the mistakes. We did not develop our rail system. You look at the world today, we go places and we want to ride on a 200-mile-an-hour train. We have to go to a foreign country to do that. We don’t have that. Why don’t we have it? Because we had cheap oil. It didn’t make sense for us to. It was expensive. We were going to subsidize it.   And we built too far away from our work. He says you are going to move to your work now because of the cost of energy. And it was really interesting because this was 2 years ago and the guy nailed it. I listened to what he had to say. I watched what has happened, and he was right on.

If you go with my plan and get 400,000 megawatts of wind in the central part of the country, you have helped the economy. Now, what is the cost of your energy? I am guessing in 10 years you are going to be a long way down the track to an electric vehicle. But, remember, an electric vehicle does not do heavy duty. So you are going to have to continue to use natural gas with heavy duty vehicles.

Ethanol is a light-duty fuel. Ethanol cannot work for heavy duty. But natural gas can. So I am approaching it with the view that natural gas would be for heavy duty, first and all. Mandate to the fleets that they have got to go to natural gas. 38 percent of the fuel used in America is used to move goods. And that is done by trucks.

Geoffrey Anderson, President & CEO Smart Growth America

The real opportunity out there right now is to allow people to drive less and to be able to do more. We can do that by building more walkable and complete communities. A lot of the growth in oil use has been as a result of spread out landscapes that have no options besides driving.  There is a real move now to create more walkable communities where homes are closer to jobs, shops are closer to work, and all of these things can be reached either on foot, by bike, with transit, or by shorter car trips.

Real estate and our research indicate that about a third of the market is interested in having more walkable communities, more compact communities. The fact is that for the last 50 years, we have essentially built drive-only communities, so the two-thirds of the market that really is interested in that product is well provided for.

Work trips only account for 25 to 35% of trips a household takes.  Denser communities mean kids can walk to school (50% used to, just 11% now), and daily errands require shorter trips.  The current way we are building communities is locking in oil dependence in the transportation sector.

Senator Lieberman.  The near total dependence of our economy, the energy sector of it—and particularly the transportation sector—on oil is weakening our Nation’s position in the world while enriching and strengthening a lot of countries in the rest of the world, many of them volatile and some of them just plain hostile to the United States of America. For well over a generation, America’s leaders have seen this growing dependence on foreign oil but essentially sat back and watched passively as trillions of dollars of our American, hard-earned wealth has been used to buy that oil and thereby go to countries abroad. And during that more than a generation, America’s leaders have done little or nothing about that problem. Apparently, it took $4-a-gallon gasoline to wake up the American people and their leaders here in Washington, to make all of us angry and anxious enough to get serious about breaking our national dependency on foreign oil.

Senator Collins. Beyond the impact on countless families struggling with high costs, our growing dependence on foreign oil is a threat to our national and economic security. One of our witnesses, Mr. Pickens, has vividly illustrated our ever-increasing dependence on foreign sources of oil in the Middle East and Venezuela. We are impoverishing ourselves while enriching regimes that are in many cases hostile to America. Ending our dependence on foreign oil and securing our own energy future is an American imperative. Our Nation must embrace a comprehensive strategy to reduce, and ultimately eliminate, our reliance on Middle East oil. We must expand and diversify American energy resources, and while doing so, improve our environment.

Our Nation missed an enormous opportunity on another October day 35 years ago. On October 17, 1973, the Organization of Arab Petroleum Exporting Countries, the predecessor of the Organization of Petroleum Exporting Countries (OPEC), hit the United States with an oil embargo. The immediate results were soaring gasoline prices, fuel shortages, lines at filling stations, and an economic recession. Unfortunately, after the immediate crisis passed, the long-term result was a steady increase in oil imports and a dependence that worsens each day. The 1973 embargo was a wake-up call that we failed to heed. The current crisis is a fire alarm that we must not ignore.

It also requires action by government. From establishing a timeline for energy security, to undertaking critical investments to stimulate research in alternatives, to expanding the production and conservation tax credits, government has a critical role to play.

Mr. Pickens.  In 1945, we were exporting oil to our allies. By 1970, we were importing 24% of our oil. By the 1980s, it was 37%. And in 1991, during the Gulf War, it was 42%. Today, we are approaching 70%. Much of our dependency is on oil from countries that are not friendly, and some would even like to see us fail as a democracy and as the leader of the free world. I am convinced we are paying for both sides of the Iraq war. We are giving them tools to accomplish their mission without ever having to do anything but sell us oil. This is more than a disturbing trend line. It is a recipe for national disaster. It has gone on for 40 years now. This is a crisis that cannot be left to the next generation to solve, and it is a shame if we do not do something about it. And we can, without bringing our economy and way of life to a halt.

I will tell you what [the American people] do understand. They know there is something very bad about energy. They do not think they are being told the truth about energy, and it is confusing to them. I want to elevate this into the presidential debate, and it is not there yet. OK. Elevate it there. By the time we get the elections over, whoever wins, the American people are going to demand to know the truth about energy and what they are up against, and they will respond. We will see energy use go down dramatically when they see what it is going to cost. They can see that it does not have anything to do with Exxon or Chevron or anybody else running up the price. It does not have anything to do with some speculator on Wall Street. That is not what we are faced with. We are faced with 85 million barrels a day of production in the world, and we are using 25 percent of it, with 4 percent of the population, and we only have 3 percent of the reserves. In the United States, we have nothing to do with the price of oil. We only have 3 percent of the reserves.

Senator Voinovich. Wind produces about 1.5 percent of our energy in this country. Renewables are about 9 percent, most of it hydroelectric. How can you ramp that up over a quick period of time? And, second, as you know, down in Texas there have been times when the wind just stopped and you have had some reliability problems. If you are going to use wind and have reliability, you are going to have to back up that wind with some ordinary baseload energy generation.

Senator Domenici. You are so right that we must get the people to understand that the United States is sending so much of our resources to foreign countries just to acquire crude oil, and that intelligent people should doubt whether America can continue this kind of exportation of our assets and resources for 5 or 10 years. I actually do not believe we can. I believe we will become poorer and poorer as we send $500 to $700 billion a year overseas for crude oil. We are in a real mess. You are not against us opening more of the offshore assets of the United States, where 85 percent are locked up in a moratorium of one type or another and you cannot drill even if you wanted to. Are you on the side of those who say lift those and start drilling in an appropriate——

Mr. Pickens. I am saying do everything you can do to get off of foreign oil, is what I am saying.

Senator Domenici. And that is one.

Mr. Pickens. That is one. It is not going to do it. It is not big enough. You do not have enough reserves in the offshore to do it. I think you are going to get a rude awakening as to the value of the east and west coasts when they are opened up and put up for sale. When those tracts are put up for sale, I think you are going to be surprised at the [low] price you get for them [ because most of the remaining oil is in the Gulf ].

There is no question that if I am right on the peak oil at 85 million barrels, in 10 years we are going to have less than 85 million barrels available to the world. Now, the question is: What is the demand? I have to think in 10 years the demand for oil— because the price now is going up. In 10 years, you are going to have $300 a barrel oil. Maybe higher, I don’t know. But this is really— it is a tough question to look out 10 years on this one. But I can tell you this: In 10 years, if we continue to drift like we are drifting, you are going to be importing 80 percent of your oil. And I promise you, it will be over $300 a barrel.

Senator Voinovich. I went to some war games at the National Defense University, and they talked about the vulnerability that we have. And some folks out at Stanford said that in the next 10 years there is an 80-percent chance that a cut-off of oil will bring our economy to its knees. So there is a certain urgency right now to get on with this.

GAL LUFT, PH.D.  EXECUTIVE DIRECTOR, INSTITUTE FOR THE ANALYSIS OF GLOBAL SECURITY, AND CO-FOUNDER, SET AMERICA FREE COALITION

When we talk about national security, we need to realize that 63% of the world's natural gas reserves are in the hands of Russia, Iran, Qatar, Saudi Arabia, and the United Arab Emirates. These countries are now discussing the establishment of a natural gas cartel. So shifting our transportation sector from oil to natural gas is like jumping from the frying pan into the fire. It is a spectacularly bad idea to shift our transportation sector from one resource that we do not have to another that we do not have. We have only 3 percent of the world's reserves of natural gas. The situation is very similar to our situation with regard to oil.

Just to remind the Committee that 10 years ago, Osama bin Laden predicted that oil would be $144 a barrel. Everybody laughed at him. Oil was only $12 a barrel at the time. He was right, and as a result, we are exporting hundreds of billions of dollars. This is the first year that we actually are going to pay foreign countries more than we pay our own military to protect us.

In order to understand what should be the road to energy security, we must first understand why we are where we are. There are many reasons why we have the oil crisis now. Of course, strong demand in developing Asia, speculation, geological decline, and geopolitical risk have all contributed their share. But, in my view, by far the main culprit is OPEC's reluctance to ramp up production. This cartel owns 78 percent of the world's proven reserves and accounts for about 40 percent of the world's oil production.

Our energy security problem stems from the fact that our transportation sector is dominated by petroleum. And while being in a hole, we continue to dig.

We put on the road annually 16 million new cars, almost all of them gasoline only, each with an average street life of 16.8 years. A Senator elected in 2008 will witness the introduction of 102 million gasoline-only cars during his or her 6-year term.
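A quick check on the fleet-turnover arithmetic above: the per-year and per-term figures quoted don't quite line up, since 16 million cars per year over a 6-year term is 96 million, so the 102 million figure presumably assumes a slightly higher annual rate of about 17 million.

```python
# Fleet-turnover arithmetic from Luft's testimony (figures as quoted above).
cars_per_year = 16_000_000
term_years = 6

print(cars_per_year * term_years)  # cars added over one Senate term → 96000000
print(102_000_000 / term_years)    # annual rate implied by "102 million" → 17000000.0
```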

This means that neither efforts to expand petroleum supply nor those to crimp petroleum demand through increased Corporate Average Fuel Economy (CAFE) standards will be enough to reduce America's strategic vulnerability. Such non-transformational policies at best buy us a few more years of complacency, while ensuring a much worse dependence down the road when America's conventional oil reserves are even more depleted.

[ Luft then goes on with solutions for CARS: ethanol, methanol, open fuel standards, electric – whether the Pickens plan makes sense or not, at least Pickens understands it is heavy-duty transportation that needs to be kept running. Clearly Luft hasn't read Matt Simmons' "Twilight in the Desert: The Coming Saudi Oil Shock and the World Economy", which makes the case that the Saudis and other Middle Eastern oil-producing nations greatly exaggerated their reserves. And why should the Saudis produce oil as fast as we want them to, when they'd prefer their oil to last many more generations? Also, producing oil at too fast a rate can damage the oil field and reduce the ultimate amount of oil extracted – indeed, it's thought that the U.S. and British oil companies in the Middle East did this before they were kicked out. ]

HABIB J. DAGHER, PH.D., DIRECTOR, ADVANCED STRUCTURES AND COMPOSITES LABORATORY, UNIVERSITY OF MAINE

I would like to start this testimony by acknowledging… the inspiring role of Matt Simmons, who is well known for alerting our country to peak oil and peak oil issues.

You have heard about T. Boone Pickens' wonderful plan, but we sit in a corner of the country, not very close to the wind belt that runs up and down from Kansas to Texas. So what do we do? Pickens' plan utilizes the wind corridor from the Dakotas down to Texas to generate anywhere from 200 to 400 gigawatts, depending on how much you want to generate. But that leaves us out on the east coast and the west coast unless we build very expensive transmission systems. The majority of the U.S. population, close to 28 States around the coasts, uses more than 70 percent of the Nation's electricity. So the major demand for electricity is around the perimeter of the country. There are line losses, and of course transmission costs, and building transmission lines in heavily populated areas is very expensive from a permitting viewpoint as well. If you look at the population centers on the east coast, for example the Mid-Atlantic States and New England, it would be very costly to build transmission lines in those areas.

Wind speed is actually high when we need it. We need heat in the State of Maine and in the Northeast, and heating costs are our biggest issue. In the wintertime the wind blows twice as fast as it does in the summertime, and the power generated from wind scales as the cube of the wind speed. So in the wintertime we can generate, per month, 8 times as much power as we do in the summertime. You can think of wind off the coast of Maine as a seasonal crop that can help us heat the State of Maine. [ What will Maine do in the summer? ]
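The cube law Dagher cites can be sketched in a few lines. The formula below is the standard expression for kinetic power in wind; the air density and rotor area values are illustrative assumptions, not from the testimony (they cancel out of the ratio anyway).

```python
# Power available in wind scales with the cube of wind speed,
# so doubling the speed yields 8 times the power.
def wind_power_watts(v_mps, rho=1.225, area_m2=5000.0):
    """Kinetic power of wind through a rotor swept area (W): 0.5 * rho * A * v^3."""
    return 0.5 * rho * area_m2 * v_mps ** 3

summer = wind_power_watts(6.0)   # e.g. a 6 m/s summer average (illustrative)
winter = wind_power_watts(12.0)  # "twice as fast" in winter
print(winter / summer)           # → 8.0
```

The density and area constants drop out of the comparison, which is why the 2x-speed, 8x-power rule holds regardless of turbine size.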

Another hearing on Natural Gas

House 113-58. June 20, 2013.  U.S. energy abundance. Manufacturing competitiveness and America’s energy advantage. U.S. House hearing.

CICIO, PRESIDENT, INDUSTRIAL ENERGY CONSUMERS OF AMERICA. The shale gas revolution and lower natural gas and feedstock costs have launched the start of the manufacturing renaissance, with announced manufacturing investments of over $110 billion. This is the first wave of investment. The second wave will be from our downstream customers, who will relocate to be near their suppliers and reduce their costs. The Boston Consulting Group estimates that 5 million new jobs will be created in manufacturing by 2020. Every dollar's worth of natural gas run through our manufacturing economy creates up to $8 in added value. This is a superior economic use of natural gas to exporting LNG. The $110 billion investment will also create new natural gas demand of between 7 and 9 Bcf a day, about an 11% increase. This is all good news. The most significant threat to the fulfillment of the manufacturing renaissance will be the speed of LNG export terminal approvals and the volume of shipments, which brings me to the key points of my testimony. Doing it right can be a win-win for producers and consumers of natural gas. Doing it wrong will result in spiking natural gas and electricity prices and an end to the manufacturing renaissance. We need to avoid what happened in Australia.

IECA is not opposed to LNG exports but warns policymakers that careless due diligence by the DOE on the public interest determination of LNG export applications to non-free- trade countries is a real concern. LNG terminal approvals are for 30 years. A lot can happen in 30 years.

Domestic demand is accelerating, and LNG export demand is additive to that demand. For example, just six of the most likely export terminals would increase demand by 16%. That export demand would be on top of the AEO 2013 demand increase of 6% by 2020. Neither number includes the manufacturing renaissance's 11% demand increase. Combined, this is a 33% increase: a huge increase in a very short time frame, and it does not include new demand that will occur from the EPA's Utility MACT and greenhouse gas regulations.

The public interest determination for approval of LNG exports to non-free-trade countries is the law. The public interest test is really important, because it is a safeguard to ensure that decisions are being made correctly and with up-to-date information.

Dean Cordle, president, CEO of AC&S, a chemical manufacturer, appearing on behalf of the American Chemistry Council.   This shale gas revolution has transformed our company. We are putting steel in the ground, as we speak, we are nearing completion of a new production unit, and my focus right now on growth opportunities is certainly centered in the oil and gas industry and the downstream derivatives.

The U.S. chemical industry is highly energy intensive. We use energy inputs, mainly natural gas and natural gas liquids, as both our major fuel source and feedstock. About 75% of the cost of producing petrochemicals and plastics is related to the cost of energy-derived raw materials. Consequently, our ability to compete in global markets is largely determined by the price and availability of natural gas and gas liquids. The consulting firm IHS forecasts that the U.S. has a 100-year supply of natural gas.

This abundant and affordable supply of natural gas has transformed the U.S. chemical industry from the world's high-cost producer 5 years ago to the world's low-cost producer today. As a result, the U.S. enjoys a decisive competitive advantage in the cost of producing basic petrochemicals. For example, it costs less than $400 a ton to produce ethylene in the United States, compared with about $1,000 a ton in Europe and even more in Japan. As a result of this cost advantage, dozens of companies are making plans to invest in new U.S.-based chemical production capacity.

ACC estimates that more than $72 billion in new capital expenditures will be invested in the U.S. between 2012 and 2020. Roughly half of those investments will come from firms that are based outside of the U.S. The U.S. is emerging as the place to manufacture chemicals now. The supply response from shale gas will directly create tens of thousands of new jobs in the U.S. chemical industry. Policy will play an important role if we are to optimize our competitive advantage.

Andre de Ruyter, senior group executive for Sasol Limited. Sasol is an integrated international energy and chemicals company. We employ about 34,000 people in 38 countries worldwide. We operate large-scale fuel and chemical plants throughout the world, and we are listed on the Johannesburg and New York stock exchanges.

NOTE: At this time Sasol was planning to build a $22 billion GTL facility in Louisiana that could make 96,000 barrels a day: 30% naphtha and chemicals, and 70% diesel (67,000 barrels/day, or about 0.3% of U.S. daily oil consumption). But due to unfavorable oil and natural gas prices, this plant is not being built. A year before Sasol canceled its GTL plant, Royal Dutch Shell plc had also canceled a GTL plant in Louisiana because it was too expensive – the initial cost ballooned from $12.5 billion to more than $20 billion.

While natural gas is a major energy source for global power generation, it has up to now lacked the versatility to embrace transportation needs. With our proven GTL technology, we can fundamentally alter the chemistry of natural gas so that we can convert it to approximately 100,000 barrels per day of gas-to-liquids diesel for use in transportation.

Unlike other alternative fuels, GTL diesel is fully fungible with conventional diesel and requires no adjustment to engine technology or to distribution infrastructure. Diesel will continue to be the workhorse of the global economy for the foreseeable future, with demand expected to grow 65% by 2040 (ExxonMobil. The Outlook for Energy: A View to 2040. Irving, Texas: March 2013).

GTL diesel's high quality makes it highly suitable for use as a blend stock by crude oil refineries to upgrade their products into high-quality fuels. When gas-to-liquids diesel is used neat, it has the added benefit of lower emissions of particulates and other pollutants, because it contains essentially zero sulfur and very few aromatic compounds.

With our partner, Qatar Petroleum, we have produced more than 45 million barrels of diesel fuel for export into the international market since the commissioning of our ORYX gas-to-liquids facility in Qatar in 2007.

We have also supplied GTL jet fuel to the Department of Defense, which uses it for experimental purposes.

Mr. WHITFIELD. I notice today the Federal Reserve board yesterday, I guess, said they are going to kind of stop our easy money policy, so we may see interest rates start edging up soon. So the policies that the U.S. Government adopts are going to have a dramatic impact on the cost of energy. And energy costs are a key component for continuing to grow our manufacturing base and create jobs. And so when we talk about that, we are talking about the regulations, we are talking about an all-of-the-above energy policy, which many of you talked about specifically in your testimony, but I would remind everyone once again that the Obama administration says an all-of-the-above, but they systematically are trying to eliminate some fossil fuels, particularly coal. And I notice—I was reading the Federal Register footnotes on the proposed greenhouse gas new source performance standard for new electric generating units. And in the register, it says the Department of Energy National Energy Technology Laboratory estimates that when that rule becomes final, that the technology that the coal industry would have to use to meet the emissions standards would add 80 percent to the cost of electricity; that one standard, 80 percent increase. So we are all excited now and we feel good about these low energy costs, but as we move forward, we have to think about the policies and the impact, because I, for one, as many of you said in your testimony, do believe we need all of the above.

Green energy alone is not going to get it done.

Mr. SCALISE. I think the fact that we are here in a committee hearing in Congress talking about how technology and energy is revolutionizing our country, and not only creating tens of thousands of really good high-paying jobs, which is something that we ought to be focused on every single day, but also allowing our country to be energy independent. Here is one case where we have got the opportunity to reduce our dependence on, in many cases, Middle Eastern countries who don’t like us, where we are spending billions of dollars to countries who use that money against us, to kill Americans in many cases. And so the revolution in energy is, I think, one of the most important things if we want to get our economy back on track, get our country moving again, create jobs and create the energy security I think that Americans expect and deserve.

If we want everybody to live in squalor and poverty, you know, then we go with the old economy. If we actually want to create jobs and manufacture, make things in this country so that we can create jobs and increase everybody’s lifestyle, not just in America, but in other countries, it starts with energy, and safe and secure energy, and that is what this is all about.

Mr. MCNERNEY. How is the natural gas mostly used? Is it used as a chemical, as a solute? Is it used to create heat through burning, or is it used to create electricity?

Mr. CORDLE. In two primary ways. We use natural gas to fire our steam boilers in our chemical production facility. And the overall lowering of that cost has certainly helped us dramatically. In the overall chemical manufacturing industry, it is a raw material, it is an ingredient in what we make in terms of our products.  Natural gas, when it comes out of the ground, has several components:  ethane, propane, and a few others.  Ethane is the key raw material that is cracked and turned into ethylene, ethylene oxide, and then eventually it comes into polyethylene in the plastics that we use every day.

Mr. MCNERNEY. What is the energy balance of the GTL liquids; that is, energy in your product, divided by energy into the process and energy in the natural gas? What does the balance look like?

Mr. DE RUYTER. We use about 9.5 Bcf per day to produce 100,000 barrels of diesel per day. So you could work out the balance from that.  It is a ratio between natural gas in and diesel out on the other side of the process.
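The energy balance Mr. McNerney asks for can be worked out from the quoted figures, using standard conversion factors (my assumptions, not from the hearing): roughly 1,037 Btu per standard cubic foot of natural gas and roughly 5.8 million Btu per barrel of diesel. Note that typical GTL plants consume on the order of 10,000 cubic feet of gas per barrel of product (about 1 Bcf/day per 100,000 bbl/day), so the "9.5 Bcf" in the transcript may be a mis-transcription of roughly 0.95 Bcf; as written it implies an implausibly low efficiency.

```python
# Rough GTL energy balance, taking the transcript's figures at face value.
BTU_PER_SCF = 1037           # approx. heating value of pipeline natural gas
BTU_PER_BBL_DIESEL = 5.8e6   # approx. energy content of a barrel of diesel

def gtl_efficiency(bcf_in_per_day, bbl_out_per_day):
    """Energy out (diesel) divided by energy in (natural gas)."""
    energy_in = bcf_in_per_day * 1e9 * BTU_PER_SCF
    energy_out = bbl_out_per_day * BTU_PER_BBL_DIESEL
    return energy_out / energy_in

print(f"{gtl_efficiency(9.5, 100_000):.1%}")   # as transcribed: ~6%
print(f"{gtl_efficiency(0.95, 100_000):.1%}")  # at ~10,000 scf/bbl: ~59%
```

The second figure is in line with the roughly 60% thermal efficiency usually cited for GTL plants, which supports reading the transcript's number as ~0.95 Bcf/day.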

Mr. JOHNSON. There is an almost limitless supply of natural gas, if the Federal Government doesn't mess up the opportunity, and from a manufacturing perspective, if we aren't forced to use gas for power generation instead of cheaper coal. I would suggest your time and the time of your members would be better spent helping us make sure that the administration doesn't stamp out the coal industry, which is the most affordable, reliable form of energy on the planet.


Natural gas is a stupid transportation fuel

[ My comment: The only reason natural gas has come up as a transportation fuel at all is the false belief that there is 100 years of natural gas (a claim even this article repeats, though natural gas may last far less long, for reasons explained in other articles here).

Although this article focuses on cars, the same critique applies to heavy-duty trucks as well, which need even bigger, heavier tanks.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts:  KunstlerCast 253, KunstlerCast278, Peak Prosperity]

Service, R. F. October 31, 2014. Stepping on the gas. Science Vol. 346, Issue 6209, pp. 538-541 

At a conference on natural gas-powered vehicles Dane Boysen, head of a natural gas vehicle research program at the U.S. Department of Energy’s Advanced Research Projects Agency-Energy, said what industry stalwarts don’t want to hear:

“Honestly, natural gas is not that great of a transportation fuel.” In fact, he adds, “it’s a stupid fuel.”

This is because of the low energy density of natural gas. A liter of gasoline will propel a typical car more than 10,000 meters down the road; a liter of natural gas just 13 meters. Even when natural gas is chilled or jammed into a high-pressure tank—at a high cost of both energy and money—it still can’t match gasoline’s range.

Nevertheless, Boysen’s ARPA-E project, called Methane Opportunities for Vehicular Energy (MOVE), is in the middle of spending $30 million over 5 years to jump-start the development of natural gas-powered cars and light-duty trucks, which now burn over 60% of the oil used in transportation.

But as Stephen Yborra, who directs market development for NGVAmerica, puts it, “there are an awful lot of hurdles to overcome.” Honda, for example, already makes a natural gas version of its Civic sedan. But it has sold only 2,000 of them in the United States, compared with more than 1.5 million gasoline-powered cars a year. Major improvements in fuel tanks, pumps, and infrastructure will be needed before natural gas vehicles rule the road.

One by one, Boysen ticks off formidable technical challenges and the efforts engineers are making to solve them.

GAS TANK MATERIALS. The biggest problem goes back to the meager energy density of natural gas. At ambient temperature and pressure, it’s a mere 40,000 joules per liter, slightly more than 1/1000th that of gasoline. To carry enough fuel, a car needs an oversized fuel tank, which eats into its cargo space. As a result, Honda’s natural gas Civic has less than half the trunk volume of its gasoline counterpart. “Drivers hate this because they can’t pick up people at the airport,” Boysen says.
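The article's 40,000 J/L figure can be checked against a standard volumetric energy density for gasoline of about 34 MJ/L (an outside assumption, not stated in the article):

```python
# Energy-density comparison behind the "1/1000th" claim above.
GASOLINE_J_PER_L = 34e6    # ~34 MJ/L, typical gasoline (assumed reference value)
NG_AMBIENT_J_PER_L = 40e3  # the article's figure for natural gas at ambient conditions

ratio = GASOLINE_J_PER_L / NG_AMBIENT_J_PER_L
print(round(ratio))  # → 850
```

A ratio of roughly 850:1 means ambient natural gas holds a bit more than 1/1000th of gasoline's energy per liter, consistent with the article's "slightly more than 1/1000th".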

The fuel tanks also have to be pressurized—another source of headaches. Today’s tanks compress gas to 250 bar, about 250 times atmospheric pressure. To handle the stresses, tanks must be made either from thick metal—which makes them heavy—or from lighter but expensive carbon fiber. Current tanks add an average of $3500 to the cost of natural gas vehicles.

  • GAS TANK SHAPES. Spongelike fuel storage at modest pressures might free engineers to build tanks in shapes other than the now-standard high-pressure cylinder. That’s critical, because in a car, a cylinder occupies a box as big as its largest dimension, wasting a lot of space. For heavy-duty trucks and buses, which don’t have tight space constraints, an awkward tank shape is less of a problem. But it’s a killer for passenger cars.
  • GASSING UP. One challenge is the time it takes to fill up. Gasoline pumps can supply as much as 10 gallons (38 liters) of fuel per minute, an energy transfer rate equivalent to 20 megawatts of power. Today’s CNG systems can fill the equivalent of a 15-gallon (57-liter) tank in 5 minutes. But they are expensive and primarily service trucks and specialized fleets.
    Many advocates of natural gas cars dream of a low-pressure compressor that could be used for home refueling, as roughly half of U.S. homes—some 60 million—already have a natural gas line. If cars could be refueled at home, consumers would tolerate slower filling rates, as they do with electric vehicles. One such compressor is already on the market, Boysen notes. But it costs $5500.
  • But with so few vehicles on the road, compressor manufacturers have been unwilling to invest in new technologies. As a result, says Bradley Zigler, a combustion researcher at the National Renewable Energy Laboratory in Golden, Colorado, “right now there is a valley of death between research progress and commercially available technologies.”
  • INFRASTRUCTURE, INFRASTRUCTURE, INFRASTRUCTURE. Even if engineers do it all—come up with a cheap space-age crystal to hold gas in a low-pressure tank, a more efficient natural gas–burning engine to reduce the demand for a large tank, and a cheap new compressor—that still might not be enough. For drivers to gamble tens of thousands of dollars on a new kind of car, analysts say, they’ll need all of these technologies to be widely available at the same time. “It has to be in a box,” Youssef says. “To me, that’s the biggest hurdle. I’m afraid we’re not there yet.”
  • Even then, Boysen notes, natural gas vehicles would face competition from a more-than-viable alternative: the gasoline- and diesel-powered cars that now make up 93% of passenger vehicles on the road. Drivers will need to be convinced that a natural gas car will work at least as well as current cars do. They will need to know they can buy fuel wherever and whenever they want. And they will need a nationwide network of mechanics and parts suppliers to fix things when they break. Gasoline-powered and electric cars already cover the whole menu, but would-be competitors have far to go.
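The 20-megawatt figure in the "GASSING UP" item above can be checked with one line of arithmetic, again assuming a typical ~34 MJ/L energy density for gasoline (my assumption, not the article's):

```python
# Energy transfer rate of a gasoline pump dispensing 38 liters per minute.
LITERS_PER_MIN = 38
GASOLINE_MJ_PER_L = 34  # assumed typical volumetric energy density

power_mw = LITERS_PER_MIN * GASOLINE_MJ_PER_L / 60  # MJ per second = MW
print(round(power_mw, 1))  # → 21.5
```

About 21.5 MW, so the article's "equivalent to 20 megawatts" is a fair round number, and it illustrates why slow home CNG compressors are such a step down from a gasoline nozzle.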

This suite of demands is particularly acute for truly novel technologies, such as hydrogen-powered fuel cell vehicles. The lack of an existing fueling infrastructure for those cars makes it far less likely that drivers will embrace them. But the fact that such challenges are also proving daunting to natural gas-powered cars, with their sizable fuel cost advantage, underscores just how difficult it is to transform the way we drive. For Boysen and his colleagues, the allure of natural gas is stronger than ever. But they know reality can be unkind to even the most appealing technologies.


California could hit the solar wall

[ According to a Stanford University article below this introduction (followed by excerpts from two California Energy Commission reports), if California uses mainly solar power to meet a 50% Renewable Portfolio Standard (RPS), then on sunny days during most of the year, more power would be generated mid-day than is needed 23% of the time, with over-generation by solar PV ranging from 42 to 65%. On average, nearly 9% of solar or other electricity generation would have to be shut down. In other words, California could hit the solar wall.

Why would California mainly use solar rather than wind power as well? “Unlike the Midwest, California has a modest technical potential for wind and many of the best sites are already developed [my comment: any sites not developed yet are too expensive or far from the grid]. California does have a large offshore-wind resource but high costs and technological challenges remain. Importing electricity generated by onshore wind from neighboring states is promising, but some imports will require new high-voltage transmission lines that may take a decade to plan, site, permit, finance and build.”

Overall, California has already developed MOST of the best sites (NREL 2013):

“Prime renewable resources include wind (40% capacity factor or better), solar (7.5 DNI or better), and discovered geothermal potential. All other renewable resources are non-prime. California’s remaining options for easily developable in-state utility-scale renewables could be limited by 2025. Wind, geothermal, biomass, and small hydro projects under contract (either existing or under construction) are about equal to the total developable potential estimated for each of these technologies in California’s renewable energy zones. 

Solar projects to date, however, exceed the amount of developable prime and borderline prime resources estimated to exist within California’s zones. This suggests that California’s remaining solar resource areas tend to have less solar exposure than what has already been developed and might be less productive.”

Less productive = more expensive. This report also points out that California will have 44.3 million people in 2025, requiring renewable generation to grow even further. ]

The consequences of too much solar power include:

  • Since solar power provides energy when it is least needed (mid-day rather than the peak morning and late-afternoon hours), massive amounts of power from natural gas plants (tens of gigawatt-hours of energy) would need to ramp up QUICKLY as the sun's power rapidly fades in the afternoon, requiring large, expensive natural gas back-up plants. But natural gas is finite, so that is a temporary solution. The only other commercial source of dispatchable energy is (pumped) hydropower, but there are few spots to put new dams, and existing dams are limited much of the year by drought, fisheries, agriculture, and drinking water. Geothermal, nuclear, and coal are not dispatchable: they are baseload power running 24 x 7, and they cannot ramp up and down in less than 6 to 8 hours without damaging their equipment. When built, these plants were expected to generate a certain amount of power per day to pay back their cost, but solar and wind have first rights to provide power, and over-generation can drive electricity prices negative. Since ramping down can damage their equipment, coal, natural gas, and nuclear plants keep generating at a loss, which is why many nuclear and coal power plants are shutting down. Yet because solar and wind are unreliable, intermittent, and unpredictable, the electric grid needs these plants ready to fill in when the sun goes down and the wind dies.
  • Utility-scale battery energy storage is dispatchable, but far from commercial.
  • Large-scale curtailment of solar PV during times of over-generation reduces the value of solar capacity additions to investors.
  • Real-time pricing during times of over-generation could limit or eliminate the net-metering advantage of PV on residential and commercial-scale installations.

California peak and off-peak demand. Solar produces power when it’s least needed: from 7 a.m. to 4 p.m., during the Off Peak and Super Off Peak time frames. Adding more solar power makes the problem worse, forcing solar PV and other plants to shut down even more often.

[ California solar generation has reached the point where it’s producing so much power at the time of day when it’s least needed that it has to be shut down during the sunniest time of the year.  This is because year round, solar generates power when there is the least demand, and the least power when demand is highest.

Notice in the figure above that peak demand occurs after 4 p.m., which according to the California ISO is “when the sun is setting and solar output is declining. During July and August supplies are even more limited during peak hours”. Except in July and August, supply surplus occurs on weekends during “super off-peak” hours from 10 a.m. to 4 p.m., which is when solar generation is at its highest. Surplus conditions also occur during this same period on March and April weekdays, while the weather is still mild and there is no need for air-conditioning.

Because solar PV is so seasonal, it provides from 2% of California’s daily needs in winter to 10% in summer, but not when most needed, and at times far more than what is needed, so solar PV and/or other power generation has to be shut down. Additional solar PV capacity only makes the problem worse. Solar thermal with energy storage would help, but it’s mostly “smoke and mirrors”: less than a quarter of solar thermal plants have storage, and most of the time they produce less than half a percent of California’s daily power needs.

Related articles

2016-4-7 Texas and California have too much renewable energy. The rapid growth of wind and solar power in the states is wreaking havoc with energy prices. MIT Technology Review.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts:  KunstlerCast 253, KunstlerCast278, Peak Prosperity]

Benson, S., and Majumdar, A. July 12, 2016. On the path to deep decarbonization: Avoiding the solar wall. Stanford University.

Photo: Close-up of solar modules. Credit: NREL

Photo: NREL

While you might not have been paying attention, California’s electric grid has undergone a radical transformation. Today, more than 25 percent of the electricity used in California comes from renewable energy resources, not including the additional 10-15 percent from large-scale hydropower or all the electricity generated from roof-top solar photovoltaic (PV) panels. Compared to only five years ago, solar energy has grown 20-fold (Figure 1). In the middle of a typical springtime day, solar energy from utility-scale power plants provides an impressive 7 gigawatts (GW) of grid-connected power, accounting for about 20 percent of the electricity used across the state (Figure 2).

Last year, the total electricity from both in-state and out-of-state resources was 296,843 gigawatt-hours (GWh), including out-of-state generation from unspecified sources. An estimated 8 percent of California’s electricity consumption was generated from wind farms and 6 percent from solar power plants connected directly to the California Independent System Operator (CAISO) grid. This rapid growth is great news for the nascent renewable energy industry and can serve as a proof point for the scale-up of renewable energy. In addition to these utility-scale renewable energy power plants, California has an additional 3.5 GW of solar “self-generation” on the customer side of the meter that offsets demand for electricity from the grid when the sun is shining. And a growing third source is “community solar,” where residents and businesses invest in small, local solar plants.
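For scale, those percentages can be converted to energy terms using the generation total quoted above (a quick arithmetic check, not new data):

```python
# Converting the quoted shares of California's 2015 electricity mix into
# energy terms, using the 296,843 GWh total cited in the text.
total_gwh = 296_843            # total in-state + out-of-state generation
wind_gwh = 0.08 * total_gwh    # ~8% from wind farms on the CAISO grid
solar_gwh = 0.06 * total_gwh   # ~6% from utility-scale solar plants

print(f"wind ≈ {wind_gwh:,.0f} GWh, solar ≈ {solar_gwh:,.0f} GWh")
# → wind ≈ 23,747 GWh, solar ≈ 17,811 GWh
```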

Figure 1. Total electricity generation in California from solar and wind energy directly connected to the CAISO grid (California Energy Almanac).

By law, under its renewable portfolio standard (RPS) requirements, California must source 33% of its electricity from renewables by 2020 and 50% by 2030. Many in-state renewable energy projects are in the pipeline, including nearly 9 GW of solar PV and 1.8 GW of new wind projects that have received environmental permits. Power purchase agreements have already been signed for at least 1 GW of new solar projects. If all of these permitted projects were developed, California would have about 16 GW of solar-generating capacity by 2020.

While wind has provided a significant portion of California’s renewables to date, the majority of new additions for meeting the 2020 33% RPS requirement is forecast to come from direct grid-connected solar PV. California has an enormous and high-quality solar resource, with an estimated technical potential of more than 4,000 GW for utility-scale solar and 76 GW for rooftop solar. Unlike the Midwest, California has a modest technical potential for wind and many of the best sites are already developed. California does have a large offshore-wind resource, some of which is now in permitting, but high costs and technological challenges remain. Importing electricity generated by onshore wind from neighboring states is promising, but some imports will require new high-voltage transmission lines that may take a decade to plan, site, permit, finance and build.

Figure 2. California energy mix on May 29, 2016. Note that renewables provide more than 40% of the power during the middle of the day. Of this, more than 30% is from solar power (CAISO Daily Renewables Watch).

California has a major effort – the Renewable Energy Transmission Initiative (RETI) – that has successfully identified and built lines to meet its RPS requirements. This year, California launched a new phase of RETI to develop the additional transmission, both in-state and out-of-state, needed for the 50-percent RPS. A recent study supporting California Senate Bill 350 implementation (which includes the 50-percent RPS) showed $1 billion to $1.5 billion in annual savings from adding major transmission lines that would bring more out-of-state wind energy into California.

If, instead, California continues to rely mostly on solar resource for meeting the 2030 50-percent RPS, the total statewide solar-generating capacity would reach 30 to 40 GW under peak production, according to a report by Energy and Environmental Economics Inc. (E3). Under these conditions, on a sunny day, for most of the year, California would be generating more electric power than it needs during the middle of the day from solar energy alone. E3 calculates that this large amount of overgeneration could be a problem 23 percent of the time, resulting in curtailment of 8.9 percent of available renewable energy, with marginal overgeneration by solar PV of 42-65 percent. In other words, California could hit the solar wall. And this does not even consider that midday demand is likely to decrease due to the installation of additional residential and commercial solar PV systems “behind the electricity meter.”

Consequences of hitting the solar wall

Just a decade ago it would have been nearly unthinkable that solar energy could provide more electricity during the middle of the day than an economy as large as California’s needs. But supportive policies, rapid scale-up and decreasing costs have made this possibility a reality today. While from some perspectives this is very encouraging, there are real consequences of hitting the solar wall. For example:

  • Reliance on so much solar energy would require rapid ramping capacity of tens of gigawatts from natural gas power plants between 4:00 and 6:00 p.m., when the sun is going down and electricity demand rises as people return home.
  • Large back-up capacity from natural gas plants or access to other sources of dispatchable electricity would be required for days when the sun isn’t shining.
  • Zero marginal-cost solar generation could squeeze out other valuable low-carbon electricity sources that can provide baseload power, such as natural gas combined-cycle plants, geothermal energy and nuclear power, which cannot operate at zero marginal cost during these times.
  • Large-scale curtailment of solar PV during times of over-generation will reduce the value of solar capacity additions to investors.
  • Real-time pricing during times of over-generation could limit or eliminate the net-metering advantage of PV on residential and commercial-scale installations.

There is no doubt that California’s solar energy potential is invaluable, but we must take steps to avoid the solar wall.  Fortunately, these issues are being recognized and addressed at many levels in California.

Avoiding the solar wall

Numerous approaches to avoiding the solar wall are available today, and in the future more options will exist as we develop new technologies, policies and markets to take advantage of large solar-energy resources that exist around the world. In the short term, key actions include:

  • Develop a renewable energy-generation mix that is well-balanced among solar, wind and other forms of renewable generation. The right generation mix will be region-specific, but for California it should include increasing wind generation to provide nighttime power. [my comment: what other renewable generation? To reach an 80 to 100% renewable grid, most of the power has to come from solar and wind, with a little help from geothermal and hydropower]
  • Support regional generation markets across wide geographic areas to balance the variability of renewable generation. California has created an energy imbalance market with participants in Nevada, Wyoming and Oregon. Expansion of regional markets is being studied as part of the implementation of Senate Bill 350, California’s 50-percent RPS law.
  • Ensure adequate capacity of rapid-ramping natural gas plants to provide reliable supply during the morning and evening hours as the sun rises and sets. [my comment: natural gas is finite! Conventional natural gas peaked in 2001 in the U.S., shale gas is peaking both economically now and geologically by 2020, and we have only 4 Liquefied Natural Gas (LNG) import terminals.]
  • Expand use of load shifting through real-time pricing to incentivize using power during daytime hours when large amounts of solar power are available.
  • Encourage daytime smart charging of electric vehicles to take advantage of abundant and zero marginal-cost solar generation. Achieving this will require workplace charging stations and new business models. With transportation at about 40% of the state’s energy use, electrification of the transport sector could have the dual benefits of eliminating tailpipe emissions and providing demand for abundant and low-cost solar energy. [My comment: the math and computer algorithms needed for a smart grid are far from existing, batteries aren’t much better than they were 210 years ago, and trucks can’t run on batteries.]
  • Increase energy storage to avoid curtailment of solar over-generation during peak production periods. For now, few financial incentives exist for large-scale pumped-hydropower or compressed air storage projects [my comment: that’s not the problem!  There are very few places left to put pumped-hydro and no spots at all to put compressed air facilities, unless they’re above ground, which is crazy expensive]. Levelized costs of small-scale storage in batteries range from about $300 to more than $1,000/megawatt-hour (MWh) depending on the use-case and the technology. These are expensive compared to pumped-hydro storage at $190 to $270/MWh. For comparison, gas peaker plants have a levelized cost of $165 to $218/MWh. The business case for battery storage will be limited until prices come down significantly. Both R&D and scale-up will be needed to reduce costs. [my comment: utility scale battery storage is FAR from commercial, and only sodium sulfur (NaS) batteries have enough material on earth to store half a day of world electricity (see Barnhart, Charles J. and Benson, Sally M. January 30, 2013. On the importance of reducing the energetic and material demands of electrical energy storage. Energy Environ. Sci., 2013, 6, 1083-1092)]
  • [My comment: Furthermore, utility-scale battery storage is far from commercial. Using data from the Department of Energy’s energy storage handbook (DOE/EPRI 2013, “Electricity storage handbook in collaboration with NRECA”), I calculated that NaS batteries capable of storing 24 hours of electricity generation in the United States would cost $40.77 trillion, cover 923 square miles, and weigh in at a husky 450 million tons.
    Sodium Sulfur (NaS) Battery Cost Calculation:
    NaS Battery 100 MW. Total Plant Cost (TPC) $316,796,550. Energy
    Capacity @ rated depth-of-discharge 86.4 MWh. Size: 200,000 square feet.
    Weight: 7,000,000 lbs, Battery replacement 15 years (DOE/EPRI p. 245).
    128,700 NaS batteries needed for 1 day of storage = 11.12 TWh/0.0000864 TWh.
    $40.77 trillion to replace the batteries every 15 years = 128,700 NaS * $316,796,550 TPC.
    923 square miles = 200,000 square feet * 128,700 NaS batteries.
    450 million short tons = 7,000,000 lbs * 128,700 batteries/2000 lbs.
    Using similar logic and data from DOE/EPRI, Li-ion batteries would cost $11.9 trillion, take up 345 square miles, and weigh 74 million tons. Lead-acid (advanced) would cost $8.3 trillion, take up 217.5 square miles, and weigh 15.8 million tons.
  • Use electrolysis to produce hydrogen fuel to augment the natural gas grid, generate heat and power with fuel cells, or power hydrogen vehicles. [my comment: hydrogen is the least likely energy solution, even more unlikely than fusion]. Also, compared to storing electricity in batteries, hydrogen-based storage systems that combine electrolysis and fuel cells are only about one-third as efficient. In addition, these technologies are expensive today, and significant cost reductions will be required to make them competitive alternatives.
  • For the longer term, scientists are developing new methods to produce fuels from renewable energy. The SUNCAT Center and  the Joint Center for Artificial Photosynthesis are developing new materials to produce “zero net carbon fuels” from carbon dioxide, water and renewable energy that can be used for transportation or backing up the electric grid. While we don’t know if and when the needed breakthroughs will occur, the game-changing potential of net zero carbon fuels would unlock the full potential of solar energy and break through the solar wall.  [My comment: before reading further, if the fuel isn’t DIESEL to keep trucks running, then what’s the point? And given that we’re at peak oil, peak coal, and peak natural gas, we don’t have the time for breakthroughs to occur. You’d want to prepare at least 20 years ahead of time].
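The battery arithmetic in the comment above can be double-checked in a few lines of Python. This is a minimal sketch: the per-unit cost, capacity, footprint, and weight figures are the DOE/EPRI 2013 numbers quoted in the list, and 11.12 TWh is the one-day U.S. generation estimate used there.

```python
# Rough check of the NaS battery storage arithmetic quoted above,
# using the DOE/EPRI 2013 per-100-MW-plant figures cited in the text.
US_DAILY_TWH = 11.12        # ~1 day of U.S. electricity generation
UNIT_TWH = 86.4e-6          # 86.4 MWh usable per NaS plant, in TWh
UNIT_COST = 316_796_550     # total plant cost (TPC), dollars
UNIT_AREA_SQFT = 200_000    # footprint per plant
UNIT_WEIGHT_LBS = 7_000_000 # weight per plant

units = US_DAILY_TWH / UNIT_TWH                        # plants for 1 day
cost_trillions = units * UNIT_COST / 1e12
area_sq_miles = units * UNIT_AREA_SQFT / 27_878_400    # sq ft per sq mile
weight_mtons = units * UNIT_WEIGHT_LBS / 2000 / 1e6    # million short tons

print(f"{units:,.0f} plants, ${cost_trillions:.2f} trillion, "
      f"{area_sq_miles:.0f} sq mi, {weight_mtons:.0f} M short tons")
# → 128,704 plants, $40.77 trillion, 923 sq mi, 450 M short tons
```

The same template reproduces the Li-ion and lead-acid totals when their DOE/EPRI per-unit figures are substituted.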

Taking full advantage of the power from the sun

The global potential of solar energy is enormous and surely it can play a major role in a deeply decarbonized future energy system. In Thomas Edison’s words, “I’d put my money on the sun and solar energy. What a source of power! I hope we don’t have to wait until oil and coal run out before we tackle that.”*

We have work to do, but we are well on the way. Who would have imagined just five years ago that solar energy would provide 6% of California’s electricity, with that share on track to double, triple or more? But we need to be smart, avoiding running into the solar wall by balancing the generation mix, expanding regional markets, creating real-time markets to increase demand during solar peak-generating periods and creating new electricity demand, such as daytime charging for electric vehicles. In the longer term, electricity storage, hydrogen generation and zero net carbon fuels will further unlock the potential of solar energy.

*As quoted in Uncommon Friends: Life with Thomas Edison, Henry Ford, Harvey Firestone, Alexis Carrel & Charles Lindbergh (1987), by James Newton, p. 31

Meier, Alexandra von. (California Institute for Energy and Environment). 2010. Challenges to the Integration of Renewable Resources at High System Penetration. California Energy Commission. Publication number: CEC-500-2014-042.

More work is required to move from the status quo to a system with 33 percent of intermittent renewables. Work must proceed simultaneously on multiple fronts.

The complex nature of the grid, and the finer temporal and spatial coordination it requires, represent a profound departure from the capabilities of the legacy baseload system. Any “smart grid” development will require time for learning, especially by drawing on empirical performance data as they become available. Researchers concluded that time was of the essence in answering the many foundational questions: how to design and evaluate new system capabilities, how to rewrite standards and procedures accordingly, how to create incentives that elicit the most constructive behavior from market participants, and how to support operators in their efforts to keep the grid working reliably during these transitions.

Research needs for temporal coordination relate to: resource intermittence, forecasting and modeling on finer time scales; electric storage and implementation on different time scales; demand response and its implementation as a firm resource; and dynamic behavior of the alternating current grid, including stability and low-frequency oscillations, and the related behavior of switch-controlled generation. Different technologies, management strategies and incentive mechanisms are necessary to address coordination on different time scales.

A challenge to “smart grid” coordination is managing unprecedented amounts of data associated with an unprecedented number of decisions and control actions at various levels throughout the grid.

Renewable and distributed resources introduce space or location (spatial) and time (temporal) constraints on resource availability. It is not always possible to have the resources available where and when they are required.

New efforts will be required to coordinate these resources in space and time within the electric grid.

Time lag between solar generation peak and late afternoon demand peak.  The availability of solar power generally has an excellent coincidence with summer-peaking demand. However, while the highest load days are reliably sunny, the peak air-conditioning loads occur later in the afternoon due to the thermal inertia of buildings, typically lagging peak insolation by several hours.

Limited forecasting abilities. Rapid change of power output is especially problematic when it comes without warning.

On the technical side:  

  • Long-distance a.c. power transfers are constrained by stability limits (phase angle separation) regardless of thermal transmission capacity
  • Increased long-distance a.c. power transfers may exacerbate low-frequency oscillations (phase angle and voltage), potentially compromising system stability and security
  • Further expansion of long-distance power transfers, whether from renewable or other sources, will very likely require the increased use of newer technologies in transmission systems to overcome the dynamic constraints.

CEC. December 2016. Tracking resource flexibility. California Energy Commission.

The rapid growth in renewable resources in California has also brought new challenges for grid operators. As discussed in the Renewables Tracking Progress page, wind and solar resources have grown tremendously over the last decade. Solar in particular increased from a little more than 400 megawatts (MW) in 2001 to more than 7,000 MW in 2015. Rooftop solar photovoltaic (PV) has also seen dramatic growth with 4,400 MW installed statewide, nearly 2,000 MW of which was installed in 2014 and 2015. Maintaining the reliability of the electricity system while integrating larger amounts of variable wind and solar generation requires more flexible resources to balance supply and demand.

The continued projected growth of intermittent renewable generation to meet California’s 33 percent Renewables Portfolio Standard (RPS) by 2020 spurred several studies to determine the extent to which the system operator needs additional flexible capabilities to accommodate late afternoon upward ramps in energy demand. These studies and current system operating data also highlight the extent to which overgeneration has become a concern.

Any challenges in addressing intermittent generation at 33 percent are increased when planning to achieve 50 percent. Furthermore, because of expected changes in the natural gas-fired dispatchable fleet, the California Independent System Operator (California ISO) is concerned that it needs greater operational control over flexible capacity than is available through California Public Utilities Commission (CPUC) rules or existing California ISO tariffs.

A standard one-hour time resolution was once sufficient to match large amounts of renewable resources with firming resources that can compensate for the intermittency of renewables. However, operational concerns in the California electrical system are increasingly focused on much shorter time scales. For example, there may be plenty of reserve generation capacity but a lack of fast-responding resources that can follow a rapid change in generation and load.

http://docs.cpuc.ca.gov/PublishedDocs/Efile/G000/M064/K141/64141005.PDF

Analyses to date suggest that flexible capacity has to address variability in load and power production in three time scales: (1) seconds-to-minutes, (2) 5-10 minutes, and (3) multi-hour.

Variations in the seconds-to-minutes time scale can be addressed by expanding the existing regulation service, such as using automatic generation control on existing generators. Storage is increasingly seen as a possible solution to these regulation concerns.

The 5-10 minute flexibility requirements address discrepancies between the 5-minute real-time market schedules and actual loads or generation encountered during these intervals.

Multi-hour ramps up and down have been a feature of California’s electrical system for decades, but the introduction of large amounts of renewable capacity with strong diurnal cycles exacerbates these traditional patterns, especially in winter and spring months, and is the focus of flexible capacity efforts.

Scheduling renewables in smaller time intervals, such as the real-time market, can reduce the amount of reserves required since the opportunity for differences between forecast and actual generation is reduced from an hour to a shorter time interval. Also, expanding the geographic footprint of the market can help in two ways. First, greater diversity of renewable resources can reduce the coincidence of production patterns. Second, loads in larger regions can help absorb excess production and generating resources may be able to assist with upward ramping requirements.

The California ISO popularized a graphical depiction of the “net load curve” (the “duck chart”) that dispatchable generating resources must satisfy each hour.

Figure 1 illustrates the extent to which resources must be available to ramp up or down to satisfy this need. A net load curve shares many features with a total load curve but superimposes the hour-by-hour variability of wind and solar generation. The ramps up and down in the net load curve have become sharper and more exaggerated, faster than anticipated, given the rapid increase in behind-the-meter solar PV and progress toward the 2020 RPS goal.

By definition, a net load curve is total load less the production of wind and solar generating facilities. It can be computed with data of any time increment, most commonly hourly or for 1-minute increments.
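That definition is simple enough to sketch in a few lines; the hourly megawatt values below are invented for illustration and are not CAISO data:

```python
# Net load = total load minus wind and solar production, hour by hour.
# The MW values below are made up for illustration, not actual CAISO data.
load  = [22000, 21500, 23000, 26000, 29000, 27000]   # e.g. 10:00-15:00
wind  = [ 1500,  1400,  1300,  1200,  1500,  1800]
solar = [ 6000,  7000,  7500,  7000,  5000,  2000]

net_load = [l - w - s for l, w, s in zip(load, wind, solar)]
print(net_load)   # the duck's "belly": net load dips at midday,
                  # then rebounds as solar output falls off
```

Computed at 1-minute rather than hourly increments, the same subtraction yields the finer-grained curves used in ramping studies.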

Figure 1: The Duck Has Landed. Source: Fowlie, Meredith, “The Duck Has Landed,” Energy at Haas, U.C. Berkeley, May 2, 2016. California ISO hourly data, March 28-April 3, 2013-2016.

In 2013, the California ISO projected that net energy demand after subtracting behind-the-meter generation (net load) could be as low as 12,000 MW by 2020 and that meeting peak demand may require ramping up 13,000 MW in three hours. Two days in 2016 illustrate that the grid is already experiencing unprecedented operational fluctuations that grid operators were bracing for in 2020. On May 15, 2016, the net load reached a minimum of 11,663 MW, and on February 1, 2016, the three hour ramp was 10,892 MW, with the peak shifting to later hours in the day.

Overgeneration is the condition represented by the “belly” of the duck curve. Overgeneration exists when net load falls below the minimum generation level of other resources that must be on-line. The analyses submitted in the CPUC’s 2014 Long-Term Procurement Plan (LTPP) rulemaking identify spring months with high wind and solar production coupled with low loads as the prime time for overgeneration conditions to be encountered.

See California ISO report for a summary of overgeneration issues and its study results. http://docs.cpuc.ca.gov/PublishedDocs/Efile/G000/M152/K411/152411557.PDF

Some options to solve overgeneration suggest a need for more flexible generating facilities from either a physical or contractual perspective. Overgeneration also can be solved by curtailing renewable generation, retrofitting existing natural gas plants to reduce minimum generation levels, building load through demand response programs when overgeneration conditions are expected, shifting load using system condition-dependent TOU rates, or exporting power outside the California ISO balancing authority area. The development of a regional grid is another important tool to help integrate renewable resources beyond what can be achieved with the Energy Imbalance Market (EIM). The EIM started in 2014 with PacifiCorp but continues to expand, with NV Energy joining in 2015 and Arizona Public Service and Puget Sound Energy joining on October 1, 2016. The EIM is a mechanism to balance deviations in supply and demand and dispatch least-cost resources every five minutes. With the EIM, excess energy in the California ISO balancing area can be transferred to other areas in real time. If not for energy transfers facilitated by the EIM, the California ISO would have curtailed 272,000 MWh of renewable energy in the first two quarters of 2016, equivalent to 116,000 metric tons of carbon emissions.

Ramps

California ISO analyses completed in April 2016 show that the problem of rapidly increasing net load ramps is most severe in the winter months of November through March. Figure 2 provides an estimate of the maximum ramp over 180 minutes by month for three historical years and for 2017, based on renewable projects now in the pipeline. Figure 2 shows that maximum monthly 180-minute ramps were historically relatively uniform throughout the year but become much larger in the future for the eight non-summer months. The implication is the need for flexible resources to satisfy this increasing ramp in these non-summer months, the opposite of the traditional capacity planning focus on the summer peak months of July to September.

California ISO, http://www.caiso.com/Documents/FinalFlexibleCapacityNeedsAssessmentFor2017.pdf.
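The 180-minute ramp statistic behind Figure 2 can be computed from any hourly net-load series as the largest rise across a three-hour window. A minimal sketch, with an invented evening series rather than California ISO data:

```python
# Maximum 3-hour upward ramp of an hourly net-load series (MW values).
def max_ramp(net_load, hours=3):
    """Largest increase in net load over any `hours`-long window."""
    return max(net_load[i + hours] - net_load[i]
               for i in range(len(net_load) - hours))

# Invented evening profile: net load climbs from a midday low
# toward the demand peak as solar output fades.
series = [13000, 12000, 11700, 14000, 18500, 22500, 24000]
print(max_ramp(series))   # → 10800 (MW, from hour 2 to hour 5)
```

Run over a year of hourly data grouped by month, the same function reproduces the kind of monthly maximum-ramp comparison shown in Figure 2.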

Figure 2: Comparing Historical and Projected Maximum 3-Hour Ramps by Month

For the first time, the California ISO study of 2017 flexibility requirements included behind-the-meter PV generation, which increases the 3-hour ramps considerably. As noted earlier, the rapid growth in behind-the-meter PV capacity means that the load curve does not remain static but is itself lower during the middle hours of the day, creating ramping requirements where none would have existed without the behind-the-meter PV.

Flexible Resources.  Since the California ISO assessments assume that the great majority of renewable resources will continue to be “must take,” the California ISO wants to ensure that sufficient flexible capacity will be available to satisfy these growing ramping requirements.

Clearly, the total of nearly 35,000 MW of existing flexible capacity expected in 2017 exceeds the largest California ISO estimate of requirements in 2017. There are three concerns, however, suggesting that the balance between requirements and capabilities is tighter than it might appear in comparing Figure 2 with Table 1.

First, nearly all of the steam turbine capacity is very old, and most of it uses once-through cooling (OTC) technology. Facility owners must satisfy State Water Resources Control Board (SWRCB) OTC policy by retiring or retrofitting the power plants.

Responses to SWRCB information requests reveal that nearly all generator owners plan to comply by retiring, although many would prefer to repower if long-term contracts can be secured from load-serving entities (LSEs). Retiring all of the remaining natural gas steam boiler effective flexible capacity (EFC), 8,931 MW, would reduce the fleet’s remaining EFC to about 26,000 MW if nothing more were added.

Table 1: Effective Flexible Capacity by Generating Technology and Fuel Type (Megawatts)

Second, much of the fossil-fired generating fleet must shut down for annual maintenance, and the optimal time has typically been in the winter months, when loads have been low. The need for much larger amounts of flexible capacity in winter months means that there are now competing motivations for when to schedule maintenance.

Third, even if sufficient physical flexible capacity exists, such resources may not be available to the California ISO when flexibility is needed.

The California ISO wants greater control to ensure that it can dispatch capacity up or down to satisfy net loads. LSE/generator contracts with self-scheduling will still be allowed, but such capacity will not count as flexible. An LSE wishing to continue to self-schedule will be required to satisfy its share of the aggregate, or combined, flexible capacity requirements by nominating other capacity that is both physically flexible and can be dispatched up or down by the California ISO.

REFERENCES

NREL. August 2013. Beyond Renewable Portfolio Standards: An Assessment of Regional Supply and Demand Conditions Affecting the Future of Renewable Energy in the West. National Renewable Energy Laboratory.


Scientific American: Peak oil and coal may keep catastrophic climate change in check

[ Since conventional oil, 90% of supply, peaked in 2005-6, we’ve been on a plateau.  With conventional oil production set to decline exponentially while population is still increasing exponentially, it is unlikely that unconventional oil like shale “fracked” oil, tar sands, or arctic oil can fill the gap.  Coal production may have already peaked, or is close to peaking.

That’s good news, because if this is true, we are very unlikely to reach the worst case IPCC scenarios. According to the Intergovernmental Panel on Climate Change, 50% of carbon dioxide emitted by human activity will be removed from the atmosphere within 30 years, and a further 30% will be removed within a few centuries. The remaining 20% may stay in the atmosphere for many thousands of years (GAO. 2014. Climate Change: Energy Infrastructure Risks and Adaptation Efforts. United States Government Accountability Office. GAO-14-74).

]
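The removal fractions quoted above can be pictured as a toy three-pool decay model. The e-folding times below are illustrative assumptions chosen only to reproduce the quoted fractions; they are not parameters from any IPCC carbon-cycle model.

```python
import math

# Toy three-pool picture of the quoted figures: ~50% of an emissions
# pulse removed within ~30 years, a further ~30% over centuries, and
# ~20% persisting for many thousands of years. The (share, e-folding
# time in years) pairs are illustrative assumptions, not IPCC values.
POOLS = [(0.50, 10.0), (0.30, 300.0), (0.20, 50_000.0)]

def remaining(t_years):
    """Fraction of an emissions pulse still airborne after t years."""
    return sum(share * math.exp(-t_years / tau) for share, tau in POOLS)

print(f"{remaining(30):.2f} of a pulse remains after 30 years")  # → 0.50
```

Under these assumptions, roughly 20 percent of a pulse is still airborne after 1,000 years, in line with the long tail the GAO report describes.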

Ogburn, S. P. October 29, 2013.  Peak Oil may keep catastrophic climate change in check.  Scientific American.

Scientists suggest that the highest emissions scenarios are unlikely

Even as governments worldwide have largely failed to limit emissions of global warming gases, the decline of fossil fuel production may reduce those emissions significantly, experts said yesterday during a panel discussion at the Geological Society of America meeting.

Conventional production of oil has been on a plateau since 2005, said James Murray, a professor of oceanography at the University of Washington, who chaired the panel.

As production of conventional oil, which is far easier to get out of the ground, decreases, companies have turned to unconventional sources, such as those in deep water, tar sands or tight oil reserves, which have to be released by hydraulic fracturing.

But those techniques tend to lead to production peaks that tail off quickly, Murray said.

The panelists said these trends belie the high-end emission scenario from the Intergovernmental Panel on Climate Change (IPCC). That scenario, known as RCP 8.5, and often referred to as the “business as usual” scenario, has carbon dioxide emissions increasing through 2100.

“I just think it’s going to be really hard to achieve some of these really high CO2 scenarios,” Murray said.

David Rutledge, an engineering professor at the California Institute of Technology who studies world coal production, said the IPCC’s “business as usual” scenario is unrealistic because it essentially assumes that growth of fossil fuels like coal will continue apace, which is unlikely.

Recovery estimates may be too high

In reality, governments tend to overestimate their coal reserves, and much of these reserves will never be accessed, Rutledge said.

“There is little relationship between the RCPs and the actual historical experience of oil, gas and coal production,” Rutledge said.

Rutledge said of the four IPCC scenarios, he found the second RCP scenario, RCP 4.5, where carbon dioxide emissions flatten out around 2080, to be more plausible under a business-as-usual scenario for coal exploitation.

“4.5 would be the closest one if you look at the mining history,” Rutledge said. “My own opinion is that no one should use RCP 8.5 for any purpose at all.”

David Hughes, of Global Sustainability Research Inc., pointed out that production from tight oil plays like North Dakota’s Bakken and Texas’ Eagle Ford quickly reaches what he called “middle age,” when production begins to fall off.

He said it is likely that the Bakken Shale oil play will peak in 2015 or 2016 and that the Eagle Ford Shale play, another significant U.S. oil production area, will peak soon after.

“Long-term [production] sustainability is highly questionable, and environmental impacts are a major concern,” Hughes said.

Charles Hall, a professor at the State University of New York who researches energy and wealth, showed in graph after graph that almost every oil-producing country has passed its peak of oil production, even with a tripling of oil prices over the analysis period, Hall said.

Pieter Tans, a climate scientist at the National Oceanic and Atmospheric Administration who wrapped up the panel, said that while governments and policymakers should still aggressively pursue the goal of reducing greenhouse gas emissions, he did not believe that the most severe IPCC scenario, RCP 8.5, was likely.

From a climate perspective, there is some good news about the likely decline in the growth of fossil fuel production discussed by others at the panel, Tans said.

“It does decrease the chances of catastrophic climate change,” he said.


Ecological cost of new roads in Africa

[ What follows are excerpts from the January 8, 2014 issue of NewScientist’s “Africa’s road-building frenzy will transform continent” by Andy Coghlan. ]

China is funding most of the new roads in order to move the minerals it mines and to transport food from the millions of acres of agricultural land it has purchased.

These roads are expected to impact large regions of untouched natural habitat. A quarter of the 4,151 known mineral deposits in central Africa are in irreplaceable natural habitat, most of it unprotected.

There are few roads there now, and only 25% of them are paved: 204 km of road per 1,000 square kilometres of land, versus a world average of 944 km, of which more than half is paved.

The existing 6,000 miles of roads will be expanded 10-fold, to 60,000 miles.

Roads enable farmers to produce more because they can buy tools, fertilizer, and other supplies easily, so new roads should increase agricultural production: farmers 4 hours from a city reach 45% of their potential output, while farmers twice as far away (8 hours) produce just 5% of what they could.


Why the world is headed the way of Easter island

Petros Sekeris. 19 November 2014. Violence ahead as tragedies of the commons spread. NewScientist.

The world risks heading the way of Easter Island – a spiral into conflict as depleted natural resources are plundered.

There is a growing feeling that resources vital to sustain human life, such as fresh water, land and fossil fuels, are being used too fast to ensure our long-term presence on the planet. It seems obvious that nations should cooperate on this problem, and yet successful cross-border solutions and agreements are hard to find. Why don’t we act for the common good more often?

Look around the world and you can see instances of water-related inter-state tension and conflicts in many regions, including the Middle East (Jordan river basin, Tigris-Euphrates basin), Asia (Indus river), and Africa (the Nile).

“Fish wars” have erupted sporadically, such as Europe’s cod wars, and while these have been more contained, they could resurge amid decreasing stocks. In the same way, the shared resource of global climate continues to be threatened by the relentless burning of fossil fuels.

Our degradation of the environment is ominous and much evidence points to a clear link between the scarcity of vital resources and conflict. One wonders, then, why world leaders failed to reach a substantive agreement on climate change at the Copenhagen summit in 2009; or why fishing and hunting quotas for endangered species are so hard to implement; or why the use and pollution of river basins is not better regulated.

Explanations such as poor forecasting of resources, the short-term mindset of politicians, or simply the refusal to recognize the problem are usually given.

However, what if these are not the real reasons and something more fundamental is at work?

For example, imagine a depletable natural resource – such as a water basin – jointly owned by two countries. Both drain it for drinking, sanitation, irrigation and so on. Draining too quickly will result in it drying out. Most game theory work says that working for the common good is the optimum choice for both nations. But this does not square with conflicts we see, or the widely held view that more are inevitable.

To address this, I designed a simulation that allowed the use of violence to control resources (The Rand Journal of Economics, vol 45, p 521). In a world where force is a very real option and history suggests it is used or threatened more often than we might hope, this seemed reasonable.

The outcome offers an explanation for the gap between theory and reality. Having constructed a game-theoretical model, I found that when conflict is allowed it always occurs, but only once resources become heavily depleted.

And, crucially, the very expectation of impending conflict led to non-cooperation in the short term and sped up depletion of the common resource. I would argue that this resource-grabbing tallies with what we see in much of the world, be it disputes over fossil fuels, fresh water, land or marine resources.
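The dynamic Sekeris describes can be caricatured with a toy simulation. This is a deliberately stylized sketch, not his game-theoretic model: the stock size, extraction rates and two-player setup are invented for illustration only. Players who expect a fight over the remainder stop restraining themselves and extract faster, so the shared stock runs out sooner.

```python
# Stylized common-pool resource game: NOT Sekeris's model, just an
# illustration of how anticipated conflict speeds up depletion.

def periods_until_depletion(stock, extraction_per_player, players=2):
    """Count periods until a non-renewing shared stock is exhausted."""
    periods = 0
    while stock > 0:
        stock -= players * extraction_per_player
        periods += 1
    return periods

# Cooperative restraint: each player takes a modest amount per period.
cooperative = periods_until_depletion(100, 2)   # 25 periods
# Expecting conflict, each player grabs aggressively while it can.
grabbing = periods_until_depletion(100, 5)      # 10 periods
assert grabbing < cooperative
```

The gap between the two depletion times is the resource-grabbing effect: cooperation is individually rational only as long as players do not expect a fight over what is left.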

Are there any historical examples that illustrate this effect of “conflict expectation” and more rapid resource use? Possibly. The demise of the first society on Easter Island is salient. It is thought Polynesians were first to colonize this isolated, 160-square-kilometre Pacific island around AD 900. At its peak, 30,000 people may have lived there.  Their society was organized in hierarchical clans, peacefully competing for supremacy by displaying vast stone statues. To move them, the tallest trees needed to be felled and used as rollers. Deforestation resulted, says Diamond. Instead of reaching agreements, the islanders rapidly devastated their lands, and by the time the first Europeans arrived in 1722, no tree taller than 3 meters stood there.

An ecological disaster and dramatic deprivation must have occurred. According to Diamond, a sort of military coup took place, sparking prolonged conflict. It is reasonable to imagine that the clans realized that trees – also vital for things like fishing boats – were in short supply, and so grabbed what they could before the inevitable violence.

The conclusions I’ve drawn on the impact of over-use of resources today on future conflict are purely theoretical. So with economists Giacomo De Luca and Dominic Spengler of the University of York, UK, I am designing a lab experiment to see whether humans in a controlled environment do deplete resources faster when given the possibility to use violent control. Our early findings point that way. Such evidence would shed new light on the failure of international cooperation over the preservation of the environment.

What’s next? I have not yet considered human ingenuity in adapting to a changing environment. Whether that will be sufficient to achieve a sustainable path depends on the rate of depletion versus adaptation.

Inevitable conflict and accelerated use of depleted resources may be more likely to become a reality within weak states and in the international arena, where weak institutions are more likely. For example, signing a carbon emissions treaty today does not commit a country beyond mild sanctions that the global community may or may not impose. In addition, a change in government in a powerful country is sufficient for a treaty to be revised, curbing the incentives of others to join.

All this reinforces the need for stronger institutions and international bodies if we are to avert a tragedy of the commons in a violent world. Sadly, this will require overcoming the very problem we are trying to solve: a lack of international cooperation.

Petros Sekeris is an economist at the University of Portsmouth, UK


Ted Trainer criticizes Hatfield-Dodds CSIRO study in Nature that denies “Limits to Growth”

[This study denies “Limits To Growth”, and I’ve posted Ted Trainer’s objections below. It is alarming that Nature would publish such claptrap. Has Rupert Murdoch secretly purchased them? Alice Friedemann www.energyskeptic.com]

Ted Trainer.  November 2015. A brief critical response to the CSIRO study:

Hatfield-Dodds, et al., (2015) Australia is ‘free to choose’ economic growth and falling environmental pressures, Nature, 5th Nov, 527, pp. 49–53. doi:10.1038/nature16065

http://www.nature.com/nature/journal/v527/n7576/full/nature16065.html#affil-auth

This study (by eighteen authors) concludes that Australia can achieve sustainable levels of resource use and environmental impact by 2050 without interfering with economic growth and without any radical change in values or behavior. About twenty scenarios are modeled and reported in many detailed plots in the c. 25-page Nature article plus the Supplementary Information document. These credentials make it likely that the findings will be widely reported and accepted. There are, however, a number of problematic aspects of the study. Following are brief notes on some of these, supporting the view that the paper’s conclusions are mistaken. They contradict the now-large “limits to growth” literature, so it is very important that they be considered carefully.

The problem with “scenarios”.

The study reports on scenarios, mostly in the form of plots of trends on a baseline extending to 2050. Scenarios are commonly used but are of no value unless they are accompanied by full information on the assumptions on which they are based and full presentation of derivations. In this case we are given neither, meaning that the paper is little more than a set of unsupported claims. These might be correct, but the exercise would only be of value if we were able to assess this.

It is possible to prove just about anything by feeding specific assumptions into models, especially when a number of optimistic assumptions are combined. This is not to say dishonesty is involved. Estimates of future efficiencies and costs typically vary greatly in fields like renewable energy, emissions analysis, carbon sequestration, the hydrogen economy, and biomass technologies. If a set of relatively optimistic plausible numbers is taken it can produce conclusions many times more favorable than a set of plausible pessimistic numbers. In the case of this study it seems to me from the conclusions given that some quite implausibly optimistic assumptions have been made.

In other words, the paper does not explain how its claimed 2050 figures can be achieved; it simply states that they can be. The claims might be valid, but we can’t evaluate them. What we want to know is how and why it is thought that they can be achieved, so that we can rework the arithmetic to assess the validity of these conclusions and consider whether the assumptions underlying them are plausible. If an analysis does not provide us with the information enabling us to do these things it is not far from worthless. There are a number of analyses of this kind in the renewable energy field. I take a dim view of Nature’s poor standards in accepting such a paper, especially when it provides strong support for a much contested and I believe erroneous position on what is probably the most important issue we face: whether or not there are limits to growth.

“Decoupling”.

The study strongly accepts the “decoupling” thesis, i.e., that economic growth can be separated from increasing resource use and ecological impacts.

Reviews have found that at present there is virtually no satisfactory support for the claim that this is happening (Burton, 2015). Over the longer term, energy use, for instance, has tracked GDP growth almost exactly. It is not very helpful for this paper to say, “We find that substantial economic and physical decoupling is possible.” Even if substantial decoupling could be shown to be possible, the important question is: could the magnitude of the effect be sufficient?

There are impressive reasons for thinking that the effect could not be sufficiently powerful to achieve the outcomes this paper envisages. According to these authors, by 2050 Australian GDP can multiply by 2.7 while resource use falls 35%. That would leave a ratio of resource use to GDP that is around one-fifth of the present level. No evidence or reason is given to indicate why this is thought to be possible, in an era when just about all material, biological and ecological resource grades, costs and scarcity problems are deteriorating rapidly. Add to this the cumulative global resource depletion that will occur over the next 35 years, during which they estimate that GWP will multiply by 2.5.
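The implied intensity ratio can be checked directly from the two scenario figures just quoted (GDP multiplying by 2.7 while resource use falls 35%):

```python
# Resource intensity (resource use per unit of GDP) in 2050 relative to
# today, using only the two figures quoted from the CSIRO scenario.
gdp_multiple = 2.7          # GDP in 2050 as a multiple of today's
resource_multiple = 0.65    # resource use after a 35% fall
intensity = resource_multiple / gdp_multiple
print(round(intensity, 2))  # ~0.24 of the present level
```

The result, roughly a quarter of today’s intensity, is consistent with the “around one-fifth” quoted in the text.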

There are numerous well-known indices which show how enormous decoupling would have to be if economic growth were to continue while resource and ecological impacts became sustainable. For instance, the World Wildlife Fund’s “Footprint” analysis shows that the amount of productive land needed to provide an Australian with energy, food, water and shelter is about 7-8 ha. If 9.7 billion people were to live as we do then we’d need up to 78 billion ha of productive land … but that’s about ten times the amount there is on the planet. And if present loss rates continue we will have only half the present amount of cropland by 2050.
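The Footprint arithmetic can be reproduced in a few lines. The 8 ha per person is the upper end of the 7-8 ha range quoted above; the roughly 8 billion ha of productive land on Earth is my own assumption, chosen only because it is consistent with the article’s “about ten times” claim.

```python
per_capita_ha = 8.0        # productive land per Australian (upper figure)
world_population = 9.7e9   # projected world population
land_needed = per_capita_ha * world_population / 1e9  # billions of ha
print(land_needed)         # ~77.6 billion ha

# Compared with an assumed ~8 billion productive ha on Earth:
print(land_needed / 8.0)   # ~10x the planet's productive land
```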

Similarly, if by 2050 all 9.7 billion people had risen to the GDP per capita that Australians would have by then, given 3% p.a. economic growth, world economic output would be about 25 times as great each year as it is now. Is it plausible that “decoupling” could allow GWP (the amount of producing, purchasing and using up going on) to multiply by 20+ while rich-world per capita resource use is cut to one-tenth or one-twentieth of the present level? What is the case for thinking that anything like this could be done?

Given these kinds of multiples, a 35% reduction in materials demand (i.e., only 25% per capita given that the analysis envisages a 37 million population in 2050) would not get us far towards a global consumption rate that is sustainable and possible for all.

Presumably it is being assumed that the economy would be much more heavily centered on provision of services than at present rather than on producing resource-intensive commodities and goods, but services are remarkably energy and resource intensive, even when associated factors such as getting workers to offices, and training them in the first place, are not included. Again we would need to see assumptions and numbers.

I sent a draft of this critique to the main author. His only response regarding the decoupling issue was to say that a paper by Schandl et al. (2015) provides “more explanation.” But that paper does not provide any evidence or argument supporting the claim that decoupling is possible.  It isn’t even concerned with that question. What the paper does is make a basic assumption on carbon price and another on materials use efficiency, and then look at the effects on GDP etc. to 2050.

The Schandl et al. paper assumes that the efficiency of use of materials could improve at up to 4.5% p.a., compared with the historical rate said to be 1.5% p.a. No reason is given for thinking that this extremely high rate is realistically achievable. If it were achieved, then by 2050 materials used per unit of production would be around one-fifth of what it is now. To put it mildly, we would need a very convincing case before we could take this expectation seriously.

But the biggest problem with the Schandl et al. paper is that it is pretty clearly saying that if we implement a high carbon price, and achieve an up to 4.5% p.a. improvement in materials efficiency, then by 2050 there will be significant decoupling, without affecting GDP. But this is only saying that if we assume significant decoupling takes place each year from now on, then by 2050 we will have significantly decoupled (!). The paper is little more than an exploration of the effects of improving materials efficiency at the rates stated.

But the ultimate point about the Schandl et al. paper is that it clearly and emphatically says that none of the scenarios they explore result in absolute decoupling.

On p. 5 they say,

“Our results show that while relative decoupling can be achieved in some scenarios, none would lead to an absolute reduction in energy or materials footprint.”  (They do say carbon would go down.)

“…even strong carbon abatement and large investment into resource efficiency would see global energy use growing from…” (416 EJ/y to 1,128 EJ/y in 2050).

Note again: this paper was the sole reference given to me when I asked the CSIRO authors what the support for the decoupling thesis is (!)

By the way, that energy growth figure is far higher than I have seen anyone predict, even the IEA. Energy demand more than doubles in all three of their scenarios, so, to say the least, there is no absolute energy decoupling. To quote the paper again, “…energy use continues to be strongly coupled with economic activity in all three scenarios.” (p. 5.) We are left with the question of how we could sustainably provide 2.7 times the present world energy supply. The paper does not consider the difficulty of doing this via renewables. (I have published a number of papers arguing that this cannot be done affordably.)

Similarly, they say that global materials use would increase markedly, from 79 billion tonnes/y to 183 billion tonnes/y. This would represent only a small “relative” decoupling, and it would be 2.3 times the present burden on the planet due to resource extraction.

Thus it would seem that a) it is highly implausible that anything like the expected/assumed decoupling could be achieved, b) no reason is given to expect that it could, c) in fact even when Schandl et al. make very implausible assumptions they admit decoupling does not result, and d) even if the most optimistic CSIRO rate were achieved, it would leave Australian levels of resource and ecological impact far higher than those enabling a sustainable world (explained further below).

Bio-sequestration

The second of the two big assumptions the paper’s optimism depends on is the assumed potential for bio-sequestration of carbon. It says that in 2050 large quantities of carbon-based energy would still be being used and up to 59 million ha would be planted to take carbon from the atmosphere. (All our cropland is only c. 24 million ha, and all our agricultural land is about 85 million ha.) The yield assumption does not seem to be stated; is it 15 t/ha, or more like the 5 t/ha likely from a very large area of more or less average land? The main problem with the use of land to soak up carbon via plant growth is that after about 60 years the trees are more or less fully grown and will not take up any more carbon; what then?

The implications of this do not seem to be considered. It means that in the second half of the century an amount of new planting would be needed each year that was big enough to take out the amount of carbon emitted that year. Given that the economy in 2050 is expected to be 2.7 times bigger than it is now, and still growing at a normal rate, the area to be planted each year would be substantial, and increasing.

Fig. 2 shows that in 2050 a net 200 million tonnes of CO2 would be being taken out of the atmosphere each year. That is, in addition to taking out the emissions generated by the large amount of fossil fuels still being used in 2050 (which seems to be around 1.825 EJ), another 220 million tonnes would be taken out (the amount from power plus transport), making a total in the region of 450 million tonnes/y. Assuming 10 tonnes/ha/y forest growth (it would be more like 5 t/y for a large area), taking out approximately 36 tonnes of CO2/ha/y, the additional area to be planted each year would be 12.5 million ha, and more when it is to cope with an economy that is growing.
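The planting figure follows directly from the numbers in the paragraph above; the ~36 t CO2/ha/y uptake used in the text corresponds to 10 t of carbon fixed per hectare converted at the 44/12 CO2-to-carbon mass ratio.

```python
# Reproducing Trainer's planting arithmetic from the figures in the text.
carbon_fixed_t = 10                       # t carbon / ha / yr assumed growth
co2_per_ha = carbon_fixed_t * 44 / 12     # ~36.7 t CO2 / ha / yr
co2_uptake_per_ha = 36                    # the text's rounded figure
emissions_to_offset = 450e6               # t CO2 / yr still emitted in 2050
area_per_year_ha = emissions_to_offset / co2_uptake_per_ha
print(area_per_year_ha / 1e6)             # 12.5 million ha of new planting/yr
```

And because each cohort of trees saturates after a few decades, a comparable area would have to be found again every year thereafter, which is the crux of Trainer’s objection.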

How has the carbon embodied in the production and transport of imports been accounted for? It would seem that the 2050 economy would have to be even more dependent on services than the present economy, meaning there would be heavy importation of goods no longer produced in Australia. The energy, carbon, resource and Third World justice effects of imports are only beginning to be attended to, and the picture is disturbing. For instance, for a rich country the amount of carbon emissions due to imported goods is typically as great as or much greater than the amount released from energy production. (And it shows up on the books of the exporting country, not the rich country consuming the goods.) Has the amount of bio-sequestration needed to deal with this been included?

In a reply to my draft of this discussion the main author said that “… the carbon sequestered by plantings on currently cleared [land] saturates after a period and does not provide a permanent flow.” This is difficult to understand because it would seem to contradict their entire case. Their defence of the possibility of growth and affluence depends heavily on the capacity of bio-sequestration to take out as much CO2 each year as we are putting in, but this reply seems to admit that their strategy could only do that until around 2050.

Randers, one of the original Limits to Growth authors, doesn’t think we will run into limits problems by 2050, but he thinks by about 2070 they will be catastrophic. The time line isn’t crucial; the original book wasn’t concerned with when we will hit the wall; it was concerned that we are going to hit it. At the best the CSIRO paper provides some reason to think it will be later rather than sooner, but it doesn’t give us any good reason to think we won’t hit it. Yet the paper is being taken to mean there are no limits to growth to worry about.

What carbon price will do it?

The study seems to have assumed that power generators will find it economic to shift from carbon fuels to renewables when the price of carbon rises to about $50/tonne (i.e., rises at 4.5% p.a. from $15/t). Lenzen’s soon-to-be-published detailed study of Australian renewable potential is likely to indicate that the price needed to drive carbon out of the generating system is $500/tonne. His colleague working on the German situation says that there the price would be close to $1000/tonne. (The CSIRO paper does not assume close to complete elimination of carbon fuels.)

The study seems to have made the very common mistake of taking the cost of carbon at which it would become more economic for a generator to shift the generation of 1 kWh from a carbon-fuelled power station to a wind turbine. But this is not the right question. A power supply system with a large fraction of renewable input would have to have a very large amount of redundant generating capacity, most of it sitting idle most of the time, to be able to guarantee supply during periods of low wind or solar energy; or it would have to retain much carbon-fuelled capacity, again sitting idle most of the time. Either way, high capital costs are created for the system. The multiple for a 100% renewable system seems to be in the range of 4 to 10 times the amount of plant that would do the job if renewables worked at peak capacity all the time. So the price of carbon would have to be high before it became cheaper for power generators to shift to renewable technologies.
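The 4-to-10-times multiple can be made concrete with capacity factors. The 25% and 10% figures below are illustrative assumptions chosen to bracket the range Trainer cites, not numbers from the study; a real system analysis would also have to model storage, transmission and demand profiles.

```python
def nameplate_multiple(capacity_factor):
    """Crude nameplate capacity needed per unit of firm demand,
    ignoring storage: 1 / average capacity factor."""
    return 1.0 / capacity_factor

print(nameplate_multiple(0.25))  # 4x  -- a good wind/solar resource
print(nameplate_multiple(0.10))  # 10x -- a poor resource, no storage
```

This is why a carbon price calibrated to the marginal kWh understates the price needed to make a mostly-renewable system cheaper overall: the extra capital sits idle most of the time but must still be paid for.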

No analysis of renewables.

Renewable energy is claimed to provide a significant proportion of the power and transport energy but there is no reference to the many, difficult and unsettled associated problems of intermittency, redundant capacity, and storage, and the resulting total system capital costs. It is utterly impossible to derive conclusions about the viability and cost of sustainable alternative systems without carrying out detailed and convincing analyses of this field.

Conservation potential?

The plots show that it is being assumed that demand and impacts can be greatly reduced by conservation and efficiency effort. This is commonly assumed but few if any optimistic pronouncements take into account the significant energy, resource and environmental cost of saving energy, resources and environment. In other words claims are often only about gross reductions achievable and not net reductions.

Powerful examples of this are given by figures on housing and vehicles. Much attention is given to the German Passivhaus, which it is said can reduce energy consumption by 75% or more. However, this kind of claim usually refers only to energy consumed within the house, and does not take into account the energy used to install the typically elaborate insulation and heat-transfer equipment. The issue seems to be unsettled, but a recent study by Crawford and Stephan (2013) found that the total life-cycle energy cost for the Passivhaus is actually greater than for a normal German house.

Even more common is the claim that electric vehicles (assumed to make up 25% of transport energy use in this study) can reduce energy use by 75 – 80%, but this does not take into account the considerable energy costs of producing EVs. The State Government of Victoria’s trial of EVs found that they reduce emissions only if powered by renewable energy (Carey, 2012). Otherwise, life-cycle emissions taking into account all factors in addition to fuel are actually 29% greater than those of petrol-driven cars. Mateja (2000) finds that electric cars involve much higher embodied energy costs than normal cars. Bryce (2010) says 60% of the life-cycle energy and environmental cost of these cars is to do with their production and disposal, not their on-road performance.

Again it would be important to see what assumptions are being made by these authors in arriving at the optimistic conservation and efficiency claims being made.

Water.

It is said that water extraction might increase 101%, but that desalination would be important. What are the energy implications of this? Also, what would be the water implications of 59 million ha of growing trees? There is reference to the fact that this is an issue, but the implications and the magnitudes are not made clear.

What would the cost be?

It is one thing to show that something could be done but it is another to show that it could be afforded. The paper claims that no significant cost to GDP would be involved. Even if the decoupling and sequestration assumptions were valid we would want to know the cost of doing those things, e.g., of maintaining and harvesting 59 million ha, and of producing half the power by renewables. My understanding of Lenzen’s current study is that it seems to be indicating that a fully renewable power supply system would result in a production cost around four or five times the present cost of fossil fuelled power. (The CSIRO paper does say the cost of power production could double.) This would be affordable, but would have major disruptive effects, especially on GDP as energy costs feed into everything and have multiplier effects.

The post GFC stagnation, and wild fluctuation in oil prices seem to have shown how surprisingly fragile and sensitive the global economy is to resource input factors. Tverberg (2015) argues persuasively that resource limits to do with the increasing difficulty of providing oil and its deteriorating EROI led to the recent spectacular rise in its price, which in turn depressed the economy, which led to the present low oil demand and prices. This suggests how disruptive a significant rise in electricity price might be. This paper adds questions to do with the probable costs for all that bio-sequestration, and especially regarding the EROI assumed for biofuels which are assumed to provide 25% of transport energy. (Various studies find that it is around 1.4 or less for corn based ethanol, which suggests that option is not worth bothering with.)

Would it scale to 9.7 billion people?

The amount of land planted for bio-sequestration would not. The area assumed for the optimistic scenario, up to 59 million ha of forest plantation for sequestration plus 35 million ha for “biodiversity planting”, would total 2.2 ha per person (assuming population reaches 37 million by 2050). But Australia has much more potential forest area than most countries: the amount of forest on the planet now averages only about 0.45 ha per person, and is heading for 0.25 ha by 2050.

The expected 2050 consumption of petroleum and gas is considerable. Leaving aside whether there will be much of either left by then, the per capita use would be 35 GJ per person. Thus for 9.7 billion people demand would be 340 EJ which is about 1.7 times present world oil consumption … and therefore far from a plausible amount all could be consuming in 2050.
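The scaling arithmetic above is straightforward to verify. The ~200 EJ/y figure for present world oil consumption is my outside estimate, not from the article, chosen because it reproduces the “1.7 times” ratio stated.

```python
per_capita_gj = 35          # GJ of oil and gas per person, 2050 scenario
world_population = 9.7e9
total_ej = per_capita_gj * world_population / 1e9   # GJ -> EJ
print(round(total_ej))      # ~340 EJ/yr

# Against present world oil consumption of roughly 200 EJ/yr
# (an assumed outside figure):
print(round(total_ej / 200, 1))   # ~1.7 times
```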

These numbers mean that even if the optimistic scenario could be achieved it would fall far short of one that could save the planet. It would still leave Australians living at per capita levels of resource use that were many times higher than all could share.

Conclusions.

As noted above, it would be difficult to suggest an issue that is more important than whether or not the limits-to-growth thesis is valid. The case for it has been accumulating weight for at least fifty years and in my opinion has long been beyond serious challenge. All resource stocks are being depleted at significantly unsustainable rates, summarized by the WWF conclusion that 1.5 planet Earths would be needed to provide them sustainably. And only about 2 billion people are consuming at these rates; what happens when 11 billion (the UN’s 2100 expectation) rise to our levels of consumption … let alone the levels we will have by then given 3% growth … that is, levels that might be ten times as high as they are now?

This is the kind of arithmetic that is now leading large and increasing numbers of people to see the dominant obsession with affluence, growth and tech-fixes as absurd and suicidal, and to join De-growth and associated movements such as Voluntary Simplicity, eco-villages and transition towns. We who work in this area believe we know the only way the planet can be saved: to shift to ways of living that do not create the problems now destroying the planet, depleting resources, condemning billions to deprivation, causing resource wars and damaging the quality of life even in the richest countries. Our “Simpler Way” vision (http://thesimplerway.info) could be easily and quickly achieved, if that were what people wanted to do. It isn’t, and it will not be considered until the conditions presently devastating the lives of billions begin to impact supermarket shelves in the countries now living well on their grossly unfair share of world wealth. By which time it will probably be too late. The CSIRO paper says what just about everyone wants to hear: that there is no need to take The Simpler Way seriously.

References

Bryce, R., (2010), Power Hungry, Public Affairs, New York.

Burton, M., (2015), “The Decoupling Debate: Can Economic Growth Really Continue Without Emission Increases?”, The Leap, October 23.

Carey, A., (2012), “Electric cars make more emissions unless green powered”, The Age, December 4.

Crawford, R., and A. Stephan, (2013), “The significance of embodied energy in certified passive houses”, World Academy of Science, Engineering and Technology, 78, 589–595.

Mateja, D., (2000), “Hybrids aren’t so green after all”, www.usnews.com/usnews/biztech/articles/060331/31hybrids.htm

Schandl, H., et al., (2015), “Decoupling global environmental pressure and economic growth: Scenarios for energy use, materials use and carbon emissions”, Journal of Cleaner Production, in press.

Tverberg, G., (2015), “Oops! Low oil prices are related to a debt bubble”, Our Finite World, November 3.
