A U.S. Senate hearing on T. Boone Pickens' plans for natural gas and wind to reduce oil dependence

[ This session is unusual in that the words “peak oil” are spoken several times, and M. King Hubbert, James Howard Kunstler, and Matt Simmons are lauded.   Gal Luft points out that “10 years ago, Osama bin Laden predicted that oil would be $144 a barrel. Everybody laughed at him. Oil was only $12 a barrel at the time. He was right.”

Pickens' ideas about running transportation on natural gas haven't worked out so far. It was hoped that 20% of trucks would be running on natural gas by now, but only 3% are, for many reasons that I explain in my book "When Trucks Stop Running: Energy and the Future of Transportation."

At least Pickens realizes that it is heavy-duty transportation, especially trucks, that matters most. Yet cars dominate discussions and get the lion's share of funding for "energy solutions." And guess what: people drive more miles when cars get more efficient, undoing the oil saved. Even if people drove less, so what? Trucks, trains, and ships BURN DIESEL. Cars burn gasoline. Diesel engines can't run on gasoline (or ethanol, diesohol, and many other fuels). Diesel engines are just as important to our high level of civilization as the diesel they burn: they're twice as efficient as gasoline engines and far more powerful, lasting up to 40 years and a million miles.

Alice Friedemann   www.energyskeptic.com ]

T. Boone Pickens

Senate 110-1023. July 22, 2008. Energy security: An American imperative. U.S. Senate hearing.

Excerpts from this 175-page hearing follow:

T. BOONE PICKENS, Founder & CEO, BP Capital Management

We had produced 1 trillion barrels of oil at the turn of the century. It is interesting because if you look at King Hubbert’s extension, peak oil, and what would happen, the guy was great, in my estimation. I am a disciple.  I don’t think there are 2 trillion barrels of oil.  You may say take the oil shale on the western slope and this and that and everything. You can add up a bunch of stuff. When you add it up, it is going to be very expensive oil. But in looking at conventional oil—I live and you live and everybody in this room lives in the hydrocarbon era, and that era started with the automobile in 1900.

Half of the oil that I see out there had been produced by the year 2000.

Now, we have another trillion barrels, and you say, well, that is another hundred years. No. You started slow, ramped up, and now the next trillion is going to go out of the system within the next 50 years. So you are going to be forced to abandon the hydrocarbon era.

Can you imagine researchers 500 years out that come back and look at us? They are going to say, ‘‘That was a strange crowd. They lived on oil as a fuel.’’

We are going to have to make it to the next fuel. But what is going to happen, if I am right on what I am trying to do, I am going to awaken the American people, and they are going to see what they are up against. When they walk out of a room, they will turn off the lights. They do not do that now.

The Pickens Plan starts with harnessing wind and building solar capabilities. We are blessed with some of the best wind and solar resources in the world.  The plan substitutes electricity generated by natural gas-fired plants with wind-generated electricity. Natural gas-fired is 22 percent; the wind is going to replace that 22 percent. The natural gas freed up is directed to transportation needs of the country. The natural gas is cheaper, cleaner than gasoline, and its supply is plentiful. And, most of all, it is American.

But natural gas is nothing more than a bridge to the next fuel because when you get to 2050, we are pretty well maxed out on hydrocarbons as a transportation fuel. I almost think it is divine intervention to have natural gas show up at such a critical time for this country, and to be able to use it as a bridge to the next fuel in the next 20 or 30 years.

And 70% of the oil is used for transportation. When a barrel of oil comes to the United States today, it will be moved to a refinery, refined, then go into marketing, then go into our cars, and in 4 months it is gone. It is gone. We burn it up. It is out of here. And so we have to get a hold of this situation.

 

Senator COLLINS. How much of the solution also should encompass energy conservation?

Mr. PICKENS. Oh, it has got to be on page 1, of course. We have got to conserve. There is no question about that. We have been very wasteful. But in our defense, we had cheap oil. And as long as we had cheap oil—I don’t know whether you have seen Jim Kunstler.  I went over to Southern Methodist University (SMU) and heard him the other night. He is worth hearing. He is a generalist, but he tells us where we made the mistakes. We did not develop our rail system. You look at the world today, we go places and we want to ride on a 200-mile-an-hour train. We have to go to a foreign country to do that. We don’t have that. Why don’t we have it? Because we had cheap oil. It didn’t make sense for us to. It was expensive. We were going to subsidize it.   And we built too far away from our work. He says you are going to move to your work now because of the cost of energy. And it was really interesting because this was 2 years ago and the guy nailed it. I listened to what he had to say. I watched what has happened, and he was right on.

If you go with my plan and get 400,000 megawatts of wind in the central part of the country, you have helped the economy. Now, what is the cost of your energy? I am guessing in 10 years you are going to be a long way down the track to an electric vehicle. But, remember, an electric vehicle does not do heavy duty. So you are going to have to continue to use natural gas with heavy duty vehicles.

Ethanol is a light-duty fuel. Ethanol cannot work for heavy duty. But natural gas can. So I am approaching it with the view that natural gas would be for heavy duty, first of all. Mandate to the fleets that they have got to go to natural gas. 38 percent of the fuel used in America is used to move goods. And that is done by trucks.

Geoffrey Anderson, President & CEO, Smart Growth America

The real opportunity out there right now is to allow people to drive less and to be able to do more. We can do that by building more walkable and complete communities. A lot of the growth in oil use has been as a result of spread out landscapes that have no options besides driving.  There is a real move now to create more walkable communities where homes are closer to jobs, shops are closer to work, and all of these things can be reached either on foot, by bike, with transit, or by shorter car trips.

Real estate and our research indicate that about a third of the market is interested in having more walkable communities, more compact communities. The fact is that for the last 50 years, we have essentially built drive-only communities, so the two-thirds of the market that really is interested in that product is well provided for.

Work trips only account for 25 to 35% of trips a household takes.  Denser communities mean kids can walk to school (50% used to, just 11% now), and daily errands require shorter trips.  The current way we are building communities is locking in oil dependence in the transportation sector.

Senator Lieberman.  The near total dependence of our economy, the energy sector of it—and particularly the transportation sector—on oil is weakening our Nation’s position in the world while enriching and strengthening a lot of countries in the rest of the world, many of them volatile and some of them just plain hostile to the United States of America. For well over a generation, America’s leaders have seen this growing dependence on foreign oil but essentially sat back and watched passively as trillions of dollars of our American, hard-earned wealth has been used to buy that oil and thereby go to countries abroad. And during that more than a generation, America’s leaders have done little or nothing about that problem. Apparently, it took $4-a-gallon gasoline to wake up the American people and their leaders here in Washington, to make all of us angry and anxious enough to get serious about breaking our national dependency on foreign oil.

Senator Collins. Beyond the impact on countless families struggling with high costs, our growing dependence on foreign oil is a threat to our national and economic security. One of our witnesses, Mr. Pickens, has vividly illustrated our ever-increasing dependence on foreign sources of oil in the Middle East and Venezuela. We are impoverishing ourselves while enriching regimes that are in many cases hostile to America. Ending our dependence on foreign oil and securing our own energy future is an American imperative. Our Nation must embrace a comprehensive strategy to reduce, and ultimately eliminate, our reliance on Middle East oil. We must expand and diversify American energy resources, and while doing so, improve our environment.

Our Nation missed an enormous opportunity on another October day 35 years ago. On October 17, 1973, the Organization of Arab Petroleum Exporting Countries (OAPEC) hit the United States with an oil embargo. The immediate results were soaring gasoline prices, fuel shortages, lines at filling stations, and an economic recession. Unfortunately, after the immediate crisis passed, the long-term result was a steady increase in oil imports and a dependence that worsens each day. The 1973 embargo was a wake-up call that we failed to heed. The current crisis is a fire alarm that we must not ignore.

It also requires action by government. From establishing a timeline for energy security to undertaking critical investments to stimulate research in alternatives to expanding the production and conservation tax credits, government has a critical role to play.

Mr. Pickens.  In 1945, we were exporting oil to our allies. By 1970, we were importing 24% of our oil. By the 1980s, it was 37%. And in 1991, during the Gulf War, it was 42%. Today, we are approaching 70%. Much of our dependency is on oil from countries that are not friendly, and some would even like to see us fail as a democracy and as the leader of the free world. I am convinced we are paying for both sides of the Iraq war. We are giving them tools to accomplish their mission without ever having to do anything but sell us oil. This is more than a disturbing trend line. It is a recipe for national disaster. It has gone on for 40 years now. This is a crisis that cannot be left to the next generation to solve, and it is a shame if we do not do something about it. And we can, without bringing our economy and way of life to a halt.

I will tell you what [the American people] do understand. They know it is something very bad about energy. They do not think they are being told the truth about energy. And it is confusing to them. I think when we come out of this, by the time we get—I want to elevate this into the presidential debate, and it is not there yet. OK. Elevate it there. By the time we get the elections over, whoever wins, the American people are going to demand they know the truth about energy, they know what they are up against, and they will respond. We will see the energy use go down dramatically when they see what it is going to cost. They can see that it does not have anything to do with Exxon or Chevron or anybody else running up the price. It does not have anything to do with some speculator on Wall Street. That is not what we are faced with. We are faced with 85 million barrels a day of production in the world, and we are using 25 percent of it, with 4 percent of the population, and we only have 3 percent of the reserves. In the United States, we have nothing to do with the price of oil. We only have 3 percent of the reserves.

SENATOR VOINOVICH. Wind produces about 1.5 percent of our energy in this country. I think renewables are about—let’s see, about 9 percent, most of it is hydroelectric. How can you ramp that up over a quick period of time? And, second of all, as you know, down in Texas you have had some times when the wind just kind of stopped and you have had some reliability problems. And if you are going to use wind, you know that if you are going to have reliability, you are going to have to back up that wind with some ordinary baseload energy generation.

SENATOR DOMENICI.  You are so right that we must get the people to understand; that the United States is sending so much of our resources to foreign countries just to acquire crude oil; that it should be doubtful in the minds of intelligent people as to whether America can continue this kind of exportation of our assets, of our resources to foreign countries for 5 or 10 years. I actually do not believe we can. I believe we will become poorer and poorer and poorer as we send $500 to $700 billion a year overseas for crude oil. We are in a real mess. You are not against us opening more of the offshore assets of the United States where there are 85 percent that are locked up in a moratorium of one type or another and you cannot drill even if you wanted to. Are you on the side of those who say lift those and start drilling in an appropriate——

Mr. PICKENS. I am saying do everything you can do to get off of foreign oil, is what I am saying.

Senator DOMENICI. And that is one.

Mr. PICKENS. That is one. It is not going to do it.  It is not big enough. You do not have enough reserves in the offshore to do it. I think you are going to get a rude awakening as to value of the east and west coast when it is opened up and when it is put up for sale. When those tracts are put up for sale, I think you are going to be surprised at the [low] price you get for the tracts [ because most of the remaining oil is in the Gulf ].

There is no question that if I am right on the peak oil at 85 million barrels, in 10 years we are going to have less than 85 million barrels available to the world. Now, the question is: What is the demand? I have to think in 10 years the demand for oil— because the price now is going up. In 10 years, you are going to have $300 a barrel oil. Maybe higher, I don’t know. But this is really— it is a tough question to look out 10 years on this one. But I can tell you this: In 10 years, if we continue to drift like we are drifting, you are going to be importing 80 percent of your oil. And I promise you, it will be over $300 a barrel.

Senator VOINOVICH. I went to some war games at the National Defense University, and they talked about the vulnerability that we have. And some folks out at Stanford said that in the next 10 years there is an 80-percent chance that the cut-off of oil will bring our economy to its knees. So we have a certain urgency right now to get on with this.

GAL LUFT, PH.D.  EXECUTIVE DIRECTOR, INSTITUTE FOR THE ANALYSIS OF GLOBAL SECURITY, AND CO-FOUNDER, SET AMERICA FREE COALITION

When we talk about national security, we need to realize that 63% of the world’s natural gas reserves are in the hands of Russia, Iran, Qatar, Saudi Arabia, and United Arab Emirates. These countries are now in the process of developing and discussing the establishment of a natural gas cartel. So shifting our transportation sector from oil to natural gas is like jumping from the frying pan into the fire. This is a spectacularly bad idea for us to shift our transportation sector from one resource that we do not have to another that we do not have. And we only have 3 percent of the world reserves of natural gas. The situation is very similar to our situation with regards to oil.

Just to remind the Committee that 10 years ago, Osama bin Laden predicted that oil would be $144 a barrel. Everybody laughed at him. Oil was only $12 a barrel at the time. He was right, and as a result, we are exporting hundreds of billions of dollars. This is the first year that we actually are going to pay foreign countries more than we pay our own military to protect us.

In order to understand what the road to energy security should be, we must first understand why we are where we are. There are many reasons why we have the oil crisis now. Of course, strong demand in developing Asia, speculation, geological decline, geopolitical risk, all of them have contributed their share. But, in my view, by far the main culprit is OPEC's reluctance to ramp up production. This cartel owns 78 percent of the world's proven reserves and accounts for about 40 percent of the world's oil production.

Our energy security problem stems from the fact that our transportation sector is dominated by petroleum. And while being in a hole, we continue to dig.

We put on the road annually 16 million new cars, almost all of them gasoline only, each with an average street life of 16.8 years. A Senator elected in 2008 will witness the introduction of 102 million gasoline-only cars during his or her 6-year term.

This means that neither efforts to expand petroleum supply nor those to crimp petroleum demand through increased Corporate Average Fuel Economy (CAFE) standards will be enough to reduce America's strategic vulnerability. Such non-transformational policies at best buy us a few more years of complacency, while ensuring a much worse dependence down the road when America's conventional oil reserves are even more depleted.

[ Luft then goes on with solutions for CARS: ethanol, methanol, open fuel standards, electric. Whether the Pickens plan makes sense or not, at least Pickens understands it is heavy-duty transportation that needs to be kept running. Clearly Luft hasn't read Matt Simmons' "Twilight in the Desert: The Coming Saudi Oil Shock and the World Economy", which makes the case that the Saudis and other Middle Eastern oil-producing nations greatly exaggerated their reserves. And why should the Saudis produce oil as fast as we want them to, when they'd prefer their oil to last many more generations? Also, producing oil at too fast a rate can damage an oil field and reduce the ultimate amount of oil extracted; indeed, it's thought that the U.S. and British oil companies in the Middle East did exactly this before they were kicked out. ]

HABIB J. DAGHER, PH.D., DIRECTOR, ADVANCED STRUCTURES AND COMPOSITES LABORATORY, UNIVERSITY OF MAINE

I would like to start this testimony by acknowledging… the inspiring role of Matt Simmons, who is well known for alerting our country to peak oil and peak oil issues.

You have heard about T. Boone Pickens' wonderful plan, but we sit in the corner of the country, and we are not very close to the wind belt that runs up and down from Kansas to Texas. So what do we do? T. Boone Pickens' plan utilizes the wind corridor from the Dakotas down to Texas to generate anywhere from 200 to 400 gigawatts, depending on how much you want to generate. But that leaves us out, if you wish, on the east coast and on the west coast unless we build very expensive transmission systems. The majority of the U.S. population, close to 28 states around the coasts of the United States, uses more than 70 percent of the nation's electricity. So the major demand for electricity is around the perimeter of the country. There are line losses that take place, and, of course, there are transmission costs, and building transmission lines in heavily populated areas is very expensive as well from a permitting viewpoint and so forth. And if you look at the population centers on the east coast, for example, the Mid-Atlantic states and up in the New England area, it would be very costly to build transmission lines in those areas.

Wind speed is actually high when we need it. We need to heat ourselves in the State of Maine and in the Northeast, and heating costs are our biggest issue. But in the wintertime, the wind blows twice as fast as it does in the summertime, and the power generated from the wind scales as the cube of the wind speed. So in the wintertime, per month, we can generate 8 times as much power as we do in the summertime. You can think of wind off the coast of Maine as a seasonal crop right now that can help us heat the State of Maine. [ What will Maine do in the summer? ]
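
A quick check of Dagher's arithmetic (the cube scaling of wind power with wind speed is standard physics; the 2x winter wind speed is his figure):

$$P \propto v^{3} \quad\Rightarrow\quad \frac{P_{\text{winter}}}{P_{\text{summer}}} = \left(\frac{2v}{v}\right)^{3} = 8$$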

 


Natural gas is a stupid transportation fuel

[ My comment: The only reason natural gas has come up as a transportation fuel at all is the false belief that there is 100 years of natural gas (even this article assumes so, but natural gas may last far less long, for reasons explained in other articles here).

Although this article focuses on cars, the same critique applies to heavy-duty trucks as well, which need even bigger, heavier tanks.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts:  KunstlerCast 253, KunstlerCast278, Peak Prosperity]

Service, R. F. October 31, 2014. Stepping on the gas. Science Vol. 346, Issue 6209, pp. 538-541 

At a conference on natural gas-powered vehicles, Dane Boysen, head of a natural gas vehicle research program at the U.S. Department of Energy's Advanced Research Projects Agency-Energy (ARPA-E), said what industry stalwarts don't want to hear:

“Honestly, natural gas is not that great of a transportation fuel.” In fact, he adds, “it’s a stupid fuel.”

This is because of the low energy density of natural gas. A liter of gasoline will propel a typical car more than 10,000 meters down the road; a liter of natural gas just 13 meters. Even when natural gas is chilled or jammed into a high-pressure tank—at a high cost of both energy and money—it still can’t match gasoline’s range.
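
The ratio follows directly from volumetric energy densities. A rough check, using my round numbers (gasoline ~34 MJ/L; methane at ambient conditions ~0.04 MJ/L, the 40,000 joules per liter cited below):

$$\frac{0.04\ \text{MJ/L}}{34\ \text{MJ/L}} \approx \frac{1}{850}, \qquad 10{,}000\ \text{m} \times \frac{1}{850} \approx 12\ \text{m}$$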

Nevertheless, Boysen's ARPA-E project, called Methane Opportunities for Vehicular Energy (MOVE), is in the middle of spending $30 million over 5 years to jump-start the development of natural gas-powered cars and light-duty trucks, which now burn over 60% of the oil used in transportation.

But as Stephen Yborra, who directs market development for NGVAmerica, puts it, “there are an awful lot of hurdles to overcome.” Honda, for example, already makes a natural gas version of its Civic sedan. But it has sold only 2,000 of them in the United States, compared with more than 1.5 million gasoline-powered cars a year. Major improvements in fuel tanks, pumps, and infrastructure will be needed before natural gas vehicles rule the road.

One by one, Boysen ticks off formidable technical challenges and the efforts engineers are making to solve them.

  • GAS TANK MATERIALS. The biggest problem goes back to the meager energy density of natural gas. At ambient temperature and pressure, it's a mere 40,000 joules per liter, slightly more than 1/1000th that of gasoline. To carry enough fuel, a car needs an oversized fuel tank, which eats into its cargo space. As a result, Honda's natural gas Civic has less than half the trunk volume of its gasoline counterpart. "Drivers hate this because they can't pick up people at the airport," Boysen says.

    The fuel tanks also have to be pressurized—another source of headaches. Today's tanks compress gas to 250 bar, about 250 times atmospheric pressure. To handle the stresses, tanks must be made either from thick metal—which makes them heavy—or from lighter but expensive carbon fiber. Current tanks add an average of $3500 to the cost of natural gas vehicles.

  • GAS TANK SHAPES. Spongelike fuel storage at modest pressures might free engineers to build tanks in shapes other than the now-standard high-pressure cylinder. That's critical, because in a car, a cylinder occupies a box as big as its largest dimension, wasting a lot of space. For heavy-duty trucks and buses, which don't have tight space constraints, an awkward tank shape is less of a problem. But it's a killer for passenger cars.
  • GASSING UP. One challenge is the time it takes to fill up. Gasoline pumps can supply as much as 10 gallons (38 liters) of fuel per minute, an energy transfer rate equivalent to 20 megawatts of power (see the quick check after this list). Today's CNG systems can fill the equivalent of a 15-gallon (57-liter) tank in 5 minutes. But they are expensive and primarily service trucks and specialized fleets.
    Many advocates of natural gas cars dream of a low-pressure compressor that could be used for home refueling, as roughly half of U.S. homes—some 60 million—already have a natural gas line. If cars could be refueled at home, consumers would tolerate slower filling rates, as they do with electric vehicles. One such compressor is already on the market, Boysen notes. But it costs $5500.
    But with so few vehicles on the road, compressor manufacturers have been unwilling to invest in new technologies. As a result, says Bradley Zigler, a combustion researcher at the National Renewable Energy Laboratory in Golden, Colorado, "right now there is a valley of death between research progress and commercially available technologies."
  • INFRASTRUCTURE, INFRASTRUCTURE, INFRASTRUCTURE. Even if engineers do it all—come up with a cheap space-age crystal to hold gas in a low-pressure tank, a more efficient natural gas–burning engine to reduce the demand for a large tank, and a cheap new compressor—that still might not be enough. For drivers to gamble tens of thousands of dollars on a new kind of car, analysts say, they'll need all of these technologies to be widely available at the same time. "It has to be in a box," Youssef says. "To me, that's the biggest hurdle. I'm afraid we're not there yet."
    Even then, Boysen notes, natural gas vehicles would face competition from a more-than-viable alternative: the gasoline- and diesel-powered cars that now make up 93% of passenger vehicles on the road. Drivers will need to be convinced that a natural gas car will work at least as well as current cars do. They will need to know they can buy fuel wherever and whenever they want. And they will need a nationwide network of mechanics and parts suppliers to fix things when they break. Gasoline-powered and electric cars already cover the whole menu, but would-be competitors have far to go.
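
A quick check of the 20-megawatt figure in the list above (assuming gasoline at ~34 MJ/L; the 38 L/min flow rate is from the article):

$$38\ \text{L/min} \times 34\ \text{MJ/L} \approx 1{,}290\ \text{MJ/min} \approx 21.5\ \text{MJ/s} \approx 20\ \text{MW}$$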

This suite of demands is particularly acute for truly novel technologies, such as hydrogen-powered fuel cell vehicles. The lack of an existing fueling infrastructure for those cars makes it far less likely that drivers will embrace them. But the fact that such challenges are also proving daunting to natural gas-powered cars, with their sizable fuel cost advantage, underscores just how difficult it is to transform the way we drive. For Boysen and his colleagues, the allure of natural gas is stronger than ever. But they know reality can be unkind to even the most appealing technologies.


Methane hydrate apocalypse? Maybe not…

 [ I don't think there's enough evidence yet to decide for sure there won't be a gas hydrate apocalypse, but here's some evidence that this might not happen. Even Benton, who wrote "When Life Nearly Died", didn't pin the mass murder of the Permian extinction on gas hydrates.

I’ve also read Peter Ward’s “Under a Green Sky” and many other books and peer-reviewed articles, and will continue to try to follow the mystery of extinction events and who the mass murderers were. 

Until then, here are some of the articles I’ve run across that cast some doubt on this being a sure thing, though clearly more research needs to be done.

There's also the hope that peak oil will prevent us from going extinct, since oil is what makes all other resources available, including coal, natural gas and more oil itself: declining energy would reduce population back to about 1 billion or so (whatever it was before fossil fuels replaced wood and muscle power), and we wouldn't have the energy to cross all of the other 9 boundaries as well.

Alice Friedemann   www.energyskeptic.com  author of "When Trucks Stop Running: Energy and the Future of Transportation", 2015, Springer ]

SBC. June 2015. Gas Hydrates. Taking the heat out of the burning-ice debate. Potential and future of Gas Hydrates. SBC energy institute.

Recent studies (e.g. Whiteman et al.) have raised the alarm that methane emissions could occur in the Arctic, especially over the East Siberian Shelf and in Siberian lakes (e.g. Shakhova et al.). However, there is a vigorous academic debate on the origin and potential impact of these emissions. As acknowledged by the IPCC: "How much of this CH4 originates from decomposing organic carbon or from destabilizing hydrates is not known. There is also no evidence available to determine whether these sources have been stimulated by recent regional warming, or whether they have always existed since the last deglaciation. More research is therefore urgently needed."

The response of gas hydrates to climate change has only been investigated recently. Modeling in this field remains in its infancy. As a consequence, the likelihood, and impact, of gas-hydrate dissociation due to climate change is still poorly understood and more research is needed.

The first uncertainty is the amount of gas hydrates stored on Earth. Global gas-in-place estimates range over an order of magnitude, from 1,000 to 20,000 tcm, with most estimates around 3,000 tcm. Estimates are even more uncertain at the regional level. For instance, there are no models for Antarctic reservoirs, and estimates for Arctic permafrost have only been made recently.

In the permafrost, additional uncertainty arises from the origin of methane emissions, whereas in the case of ocean sediments, the mechanisms by which methane is released and its ability to reach the atmosphere are also disputed. So are the biochemical and chemical consequences that gas-hydrate releases would have on oxidation mechanisms e.g. there may be resource limitations hindering methane oxidation in the ocean.

Since gas hydrates are only stable under high pressures and at low temperatures, there have been concerns that climate change could result in gas-hydrate dissociation and the release of methane into the atmosphere. The response of gas hydrates to climate change has only been investigated recently. Modelling in this field is in its infancy and faces major uncertainties. Nevertheless, it is generally agreed that gas-hydrate dissociation is likely to be a regional phenomenon, rather than a global one, and more likely to occur in subsea permafrost and upper continental shelves than in deep-water reservoirs, which make up the majority of gas hydrates. Indeed, the latter are relatively well insulated from climate change because of the slow propagation of warming and the long ventilation time of the ocean. Moreover, the release of methane from gas-hydrate dissociation should be chronic rather than explosive, as was once assumed; and emissions to the atmosphere caused by hydrate dissociation should be in the form of CO2, because of the oxidation of methane in the water column.

Figure: Thermal diffusivity and ocean thermal response. Graphs adapted from Archer (2007), "Methane hydrate stability and anthropogenic climate change". In the graph on the right, the ventilation timescale corresponds to the timescale required by temperature (heat), pressure and solutes such as methane to diffuse through the sediments.

Ocean thermal response varies according to depth, as highlighted in the graph above (left), but also from place to place, especially in deep-water locations, due to ocean currents. In sediments, the diffusion of heat towards deeper layers takes time and varies primarily according to depth, but also according to the composition of the sediment and to the geothermal gradient. Heat can diffuse approximately 100 meters in about 300 years (point A). Solutes such as dissolved methane diffuse even more slowly (100 meters in about 30,000 years; point B), while pressure perturbations (e.g. following a sea-level rise) diffuse more quickly (100 meters in about 3 years; point C).
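
All three timescales are consistent with simple diffusive scaling, using typical diffusivities (my round values, not the report's):

$$t \sim \frac{L^{2}}{D}: \qquad D_{\text{heat}} \approx 10^{-6}\ \text{m}^2/\text{s} \Rightarrow t_{100\,\text{m}} \approx 300\ \text{yr}; \quad D_{\text{solute}} \approx 10^{-8} \Rightarrow \approx 30{,}000\ \text{yr}; \quad D_{\text{pressure}} \approx 10^{-4} \Rightarrow \approx 3\ \text{yr}$$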

As a result of thermal inertia, heat diffusion and the melting of permafrost take time, and should be slow enough to insulate most hydrate deposits from expected anthropogenic warming over a 100-year timescale. Nevertheless, temperature increases in high latitudes, such as the Arctic, are expected to be much higher than increases in the mean global temperature, and are therefore more likely to affect gas-hydrates reservoirs. Rises in sea level would result in pressure increases at the seafloor that may mitigate further dissociation of offshore gas-hydrate deposits. However, it is likely to be insufficient to negate the warming.

Even if warming were to reach the gas hydrate stability zone, the fate of any methane released would be uncertain. Gas could escape if the pressure exceeded the sediment's lithostatic pressure, but it might also remain in place. In addition, since gas-hydrate dissociation will start at the edge of the stability zone, even if gas were able to migrate, it might subsequently be trapped in newly formed hydrates.

Finally, even if methane were able to migrate towards the seafloor, it would probably not reach the atmosphere. Most methane is expected to be oxidized in the water column rather than released by bubble plumes or other “transport pathways” directly into the atmosphere as methane. Nevertheless, the oxidation of methane produces CO2, which will have an impact on ocean acidification and will remain in the atmosphere.

The susceptibility of gas-hydrate deposits to climate-change-induced dissociation varies significantly according to reservoir location. (1) Moridis et al. 2011. Challenges, uncertainties and issues facing production from gas hydrate deposits.

 

The risk of climate change causing gas-hydrate dissociation and methane leaks varies significantly by location. This can be explained by depth differentials, the existence of mitigation mechanisms such as water-column oxidation, or by the exposure of gas-hydrate deposits to varying regional warming phenomena. High-latitude warming is expected to be much greater than global-mean-temperature warming.

As a rule of thumb, gas hydrates held within subsea permafrost on the circum-Arctic ocean shelves and on upper continental slopes are the most prone to dissociation. Subsea permafrost, which was flooded by relatively warm waters as sea levels rose thousands of years ago, has been exposed to dramatic rises in temperature that have led to significant degradation of both the subsea permafrost and the gas hydrates within it. Upper continental slopes are believed to store a greater quantity of gas hydrates than subsea permafrost, but methane released there is less likely to reach the atmosphere directly because of oxidation in the water column.

However, it is very unlikely that climate warming will disturb gas-hydrate deposits held in deep-water reservoirs (around 95% of all deposits) on a millennial timescale. Finally, gas hydrates in seafloor mounds may also dissociate as a result of warming of the overlying water or pressure perturbation, but these account for a very limited share of gas hydrates in place.

The sensitivity of gas-hydrate deposits in onshore permafrost, especially at the top of the hydrate stability zone, is more uncertain and subject to greater debate.

Archer et al. calculated that between 35 and 940 GtC of methane could escape as a result of global warming of 3 °C, with maximum consequences of adding a further 0.5 °C to global warming. On top of the uncertainty reflected in the range above, there are other considerable uncertainties, notably concerning the effectiveness of mitigation mechanisms and the long-term outlook, since methane will continue to be released even if warming stops.

Reagan and Moridis (2007), “Oceanic gas hydrate instability and dissociation under climate change scenarios”;
Maslin et al. (2010), “Gas hydrates: past and future geohazard?”;
Shakhova et al. (2010), “Predicted Methane Emission on the East Siberian Shelf”;
Whiteman et al. (2013), "Climate science: Vast costs of Arctic change"

Ananthaswamy, A. May 20, 2015. Methane apocalypse? Defusing the Arctic's time bomb. NewScientist.

Do the huge craters pockmarking Siberia herald a release of underground methane that could exceed our worst climate change fears?  They look like massive bomb craters. So far 7 of these gaping chasms have been discovered in Siberia, apparently caused by pockets of methane exploding out of the melting permafrost. Has the Arctic methane time bomb begun to detonate in a more literal way than anyone imagined?

The “methane time bomb” is the popular shorthand for the idea that the thawing of the Arctic could at any moment trigger the sudden release of massive amounts of the potent greenhouse gas methane, rapidly accelerating the warming of the planet. Some refer to it in more dramatic terms: the Arctic methane catastrophe or methane apocalypse.

Some scientists have been issuing dire warnings about this. There is even an Arctic Methane Emergency Group. Others, though, think that while we are on course for catastrophic warming, the one thing we don’t need to worry about is the so-called methane time bomb. The possibility of an imminent release massive enough to accelerate warming can be ruled out, they say. So who is right?

Few scientists think there is any chance of limiting warming to 2 °C, even though many still publicly support this goal. Our carbon dioxide emissions are the main cause of the warming, but methane is a significant player.

Methane is a highly potent greenhouse gas – causing 86 times as much warming per molecule as CO2 over a 20-year period. Fortunately, there's very little of it in the atmosphere. Before humans arrived on the scene there was less than 1000 parts per billion. Levels started rising very slowly around 5000 years ago, possibly due to rice farming. They've gone up more since the industrial age began: the fossil fuel industry is by far the single biggest source, followed by farting farm animals, leaking landfills and so on. Only a tiny percentage comes from melting Arctic permafrost.

The level in the atmosphere is now nearing 1900 ppb, but that's still low. CO2 levels were much higher to start with, around 270,000 ppb before the industrial age, and have now shot up to 400,000 ppb. The main reason is that CO2 persists for hundreds of years, so even small increases in emissions lead to its buildup in the atmosphere, just as water dripping into a bath with the plug left in can fill the bath eventually.

Methane, by contrast, breaks down after just 12 years, so its level in the atmosphere can only increase if there are big ongoing emissions.
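
A minimal one-box model makes the bath analogy concrete. This is a sketch using round numbers of my own (the ~2.8 Tg-per-ppb conversion and 12-year lifetime are assumptions, not figures from the article):

```python
# One-box model: d(C)/dt = E - C / tau. At steady state C = E * tau, so a
# short-lived gas like methane needs large ONGOING emissions to stay elevated.

TG_PER_PPB = 2.8      # assumed: ~2.8 Tg of CH4 per ppb of atmospheric concentration
TAU_YEARS = 12.0      # approximate atmospheric lifetime of CH4

def steady_state_ppb(emissions_tg_per_year: float) -> float:
    """Concentration that constant emissions eventually sustain."""
    return emissions_tg_per_year * TAU_YEARS / TG_PER_PPB

print(steady_state_ppb(560))  # ~2400 ppb for ~560 Tg/yr: today's order of magnitude
```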

So for methane to cause a big jump in global warming there not only has to be a massive source, it has to be released very rapidly. Is there such a source?

Yes, claim a few scientists. They point to the Arctic permafrost, and specifically to the East Siberian Arctic shelf. This vast submerged shelf underlies a huge area of the Arctic Ocean, which is less than 100 meters deep in most places. During past ice ages, when sea level dropped 120 meters, the land froze solid.

This permafrost was covered by rising seas as the ice age ended around 15,000 years ago. The upper layer has been slowly melting as the relative warmth of the seawater penetrates down. But the frozen layer is still hundreds of meters thick. No one doubts that there is plenty of carbon locked away in and under it. The questions are, how much is there, how much will come out in the form of methane, and how fast?

Natalia Shakhova of the International Arctic Research Center at the University of Alaska Fairbanks, has been studying the East Siberian Arctic shelf for more than two decades. Her team has made more than 30 expeditions to the region, in winter and in summer, collected thousands of water samples and tons of seabed cores during four drilling campaigns and made millions of measurements of ambient levels of methane in the air.

Her team has estimated that there is a whopping 1750 gigatons of methane buried in and below the subsea permafrost, some of it in the form of methane hydrates – an ice-like substance that forms when methane and water combine under the right temperature and pressure. What’s more, they say that the permafrost is already beginning to thaw in places. “Our results show that… [the] subsea permafrost is perforating and opening gas migration paths for methane from the seabed to be released to the water column,” says Shakhova.

Her team’s work hit the headlines in 2010, when in a letter in the journal Science they reported finding more than 100 hot spots where methane was bubbling out from the seabed. But as others pointed out, it was not clear whether these emissions were something new or had been going on for thousands of years.

More sensational stuff was to follow. In another 2010 paper, the team explored the consequences of 50 gigatons of methane – 3% of their estimated total – entering the atmosphere (Doklady Earth Sciences, vol 430, p 190). If this happened over five years, methane levels could soar to 20,000 ppb, albeit briefly. Using a simple model, the team calculated that if the world was on course to warm 2 °C by 2100, the extra methane would lead to additional warming of 1.3 °C, so temperatures would hit 3.3 °C by 2100.
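
A back-of-envelope check of the 20,000 ppb figure (my round numbers, not the paper's model):

```python
# Convert a 50-gigaton CH4 release into an added atmospheric mixing ratio.
# Assumed round numbers: ~1.8e20 moles of air; CH4 molar mass 16 g/mol.

AIR_MOLES = 1.8e20            # approximate total moles of air in the atmosphere
CH4_KG_PER_MOL = 0.016        # molar mass of methane

release_kg = 50e9 * 1000      # 50 Gt -> 5.0e13 kg
release_moles = release_kg / CH4_KG_PER_MOL
added_ppb = release_moles / AIR_MOLES * 1e9
print(round(added_ppb))       # ~17,000 ppb, plus ~1,900 today -> roughly 20,000 ppb
```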

This study appeared in an obscure journal and did not get much attention at the time. But then Peter Wadhams of the University of Cambridge and colleagues decided to see how much difference a huge methane release between 2015 and 2025 would make when added to an existing model of the economic costs of global warming. "A 50-gigaton reservoir of methane, stored in the form of hydrates, exists on the East Siberian Arctic shelf," they stated in Nature, citing Shakhova's paper as evidence. "It is likely to be emitted as the seabed warms, either steadily over 50 years or suddenly." Understandably, this was big news.

But in reality the idea that 50 gigatons could suddenly be released, or that there's a store of 1750 gigatons in total, is very far from being accepted fact. On the contrary, Patrick Crill, a biogeochemist at Stockholm University in Sweden who studies methane release from the Arctic, says it is simply untenable. He wants Shakhova's team to be more open about how they came up with these figures. "The data aren't available," says Crill. "It's not very clear how those extrapolations are made, what the geophysics are that lead to those kinds of claims."

Shakhova now says, “We never stated that 50 gigatons is likely to be released in near or distant future.” It is true that the 2010 study explores the consequences of the release of 50 gigatons rather than explicitly claiming that this will happen. However, it has certainly been widely misunderstood both by other scientists and the media. And her team’s papers continue to fuel the idea that we should be worried about dramatic and damaging releases of methane from the Arctic.

But other researchers disagree. “The Arctic methane catastrophe hypothesis mostly works if you believe that there is a lot of methane hydrate,” says Carolyn Ruppel, who heads the gas hydrates project for the US Geological Survey in Woods Hole, Massachusetts. And her team estimates that there are only 20 gigatons of permafrost-associated hydrates in the Arctic (Journal of Chemical and Engineering Data, vol 60, p 429). If this is right, there’s little reason for concern.

The issue is not just how much methane hydrate there is, but whether it could be released rapidly enough to build up to high levels.

This could happen soon only if the hydrates are shallow enough to be destabilized by heat from the warming Arctic Ocean.

But David Archer of the University of Chicago says that hydrates could only exist hundreds of meters below the sea floor. That's far too deep for any surface warming to have a rapid impact. The heat will take thousands of years to work its way down to that depth, he calculated last year, and only then will the hydrates respond (Biogeosciences Discussions, vol 12, p 1). "There is no way to get it all out on a short timescale," says Archer. "That's the crux of my position."

This concerted push back against the idea of an impending methane bomb has led to something of a feud. Commenting on Archer's paper, for instance, Shakhova said he clearly knew nothing about the topic. She has repeatedly pointed out that her team has actual experience of collecting data on the East Siberian Arctic shelf, unlike her detractors.

But there is skepticism about Shakhova’s actual measurements, too. For instance, her team has reported that methane levels above some hotspots in the East Siberian shelf were as high as 8000 ppb. Last summer, Crill was aboard the Swedish icebreaker Oden, measuring levels of methane over the East Siberian shelf. Nowhere did he find levels this high. Even when the Oden ventured near the hotspots identified by Shakhova’s team, he never saw levels much beyond 2000 ppb. “There was no indication of any large-scale rapid degassing,” says Crill.

It's not clear why other teams are finding lower levels than Shakhova's. But to find out if a catastrophic release of methane is imminent, there is another line of evidence we can turn to. Thanks to ice cores from places like Greenland, we have a record of past methane levels going back hundreds of thousands of years. If there are lots of shallow hydrates in the Arctic poised to release methane as soon as it warms up a little, they should have done so in the past, and this should show up in the ice cores, says Gavin Schmidt of the NASA Goddard Institute for Space Studies in New York.

Around 6000 years ago, although the world as a whole was not warmer, Arctic summers were much warmer thanks to the peculiarities of Earth's orbit. There is no sign of any short-term spikes in methane at this time. "There's absolutely nothing," says Schmidt. "If those methane hydrates were there, they were there 6000 years ago. They weren't triggered 6000 years ago, so it's unlikely they'd be triggered imminently."

During the last interglacial period, 125,000 years ago, when temperatures in the Arctic were about 3 °C warmer than now, methane levels rose a little, as expected in warmer periods, but never exceeded 750 ppb. Again, there’s no sign of the kind of spike a large release would produce.

There is, then, no solid evidence to back the idea of a methane bomb and past climate records suggest there is no cause for alarm. Extraordinary claims require extraordinary proof, otherwise it’s going to undermine credibility and slow down our ability to actually make the decisions that we are going to have to make as a society.

No one is saying methane is not a concern. Levels are now the highest they’ve been for at least 800,000 years and climbing. The Intergovernmental Panel on Climate Change’s worst-case emissions scenario assumes a big rise in methane, to as much as 4000 ppb by 2100.

What about the gaping craters? They are certainly spectacular and scary-looking. The latest idea is that they are caused by the release of pockets of compressed methane as ice seals melt. But the amount of methane released per crater is minuscule in global terms. Around 20 million craters would have to form within a few years to release 50 gigatons of the gas.
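
The implied release per crater is simple division (using the article's own figures):

$$\frac{50\ \text{Gt}}{2\times10^{7}\ \text{craters}} = 2{,}500\ \text{t of methane per crater}$$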


California could hit the solar wall

[ According to this Stanford University article, if California uses mainly solar power to meet a 50% Renewable Portfolio Standard (RPS), then on sunny days, for most of the year, more power would be generated mid-day than needed; over-generation could be a problem 23% of the time, with marginal over-generation by solar PV of 42 to 65%. On average, nearly 9% of solar or other renewable generation would have to be shut down. In other words, California could hit the solar wall.

Why would California mainly use solar rather than wind power as well? “Unlike the Midwest, California has a modest technical potential for wind and many of the best sites are already developed [my comment: any sites not developed yet are too expensive or far from the grid]. California does have a large offshore-wind resource but high costs and technological challenges remain. Importing electricity generated by onshore wind from neighboring states is promising, but some imports will require new high-voltage transmission lines that may take a decade to plan, site, permit, finance and build.”

Overall, California has already developed MOST of the best sites (NREL 2013):

"Prime renewable resources include wind (40% capacity factor or better), solar (direct normal irradiance of 7.5 kWh/m2/day or better), and discovered geothermal potential. All other renewable resources are non-prime. California's remaining options for easily developable in-state utility-scale renewables could be limited by 2025. Wind, geothermal, biomass, and small hydro projects under contract (either existing or under construction) are about equal to the total developable potential estimated for each of these technologies in California's renewable energy zones. 

Solar projects to date, however, exceed the amount of developable prime and borderline-prime resources estimated to exist within California's zones. This suggests that California's remaining solar resource areas tend to have less solar exposure than what has already been developed and might be less productive."

Less productive = more expensive. This report also points out that California will have 44.3 million people in 2025, requiring renewable generation to grow even further. ]

The consequences of too much solar power include:

  • Since solar power provides energy when it is least needed (mid-day rather than the peak morning and late-afternoon hours), massive amounts of power from natural gas plants (tens of gigawatts) would need to ramp up QUICKLY as the sun's power rapidly fades in the afternoon, requiring large, expensive natural gas back-up plants. But natural gas is finite, so that's a temporary solution. The only other commercial source of dispatchable energy is (pumped) hydropower, but there are few spots to put new dams, and existing dams are limited much of the year by drought, fisheries, agriculture, and drinking water. Geothermal, nuclear, and coal are not dispatchable: they are baseload power running 24 x 7, and they can't ramp up and down in less than 6 to 8 hours without damaging their equipment. When built, they were expected to generate a certain percentage of power per day to pay back their cost, yet they must shut down quickly because solar and wind have first rights to provide power. This can drive electricity prices negative, and since ramping down can damage their equipment, coal, natural gas, and nuclear plants lose money by continuing to generate. This is why many nuclear and coal power plants are shutting down: they are losing money. But since solar and wind are unreliable, intermittent, and unpredictable, the grid needs those plants ready to fill in when the sun goes down and the wind dies.
  • Utility scale energy battery storage is dispatchable, but far from commercial.
  • Large-scale curtailment of solar PV during times of over-generation reduces the value of solar capacity additions to investors.
  • Real-time pricing during times of over-generation could limit or eliminate the net-metering advantage of PV on residential and commercial-scale installations.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts:  KunstlerCast 253, KunstlerCast278, Peak Prosperity]

Benson, S., and Majumdar, A. July 12, 2016. On the path to deep decarbonization: Avoiding the solar wall. Stanford University.

Photo: Close-up of solar modules. Credit: NREL

While you might not have been paying attention, California’s electric grid has undergone a radical transformation. Today, more than 25 percent of the electricity used in California comes from renewable energy resources, not including the additional 10-15 percent from large-scale hydropower or all the electricity generated from roof-top solar photovoltaic (PV) panels. Compared to only five years ago, solar energy has grown 20-fold (Figure 1). In the middle of a typical springtime day, solar energy from utility-scale power plants provides an impressive 7 gigawatts (GW) of grid-connected power, accounting for about 20 percent of the electricity used across the state (Figure 2).

Last year, the total electricity from both in-state and out-of-state resources was 296,843 gigawatt-hours (GWh), including out-of-state generation from unspecified sources. An estimated 8 percent of California's electricity consumption was generated from wind farms and 6 percent from solar power plants connected directly to the California Independent System Operator (CAISO) grid. This rapid growth is great news for the nascent renewable energy industry and can serve as a proof point for the scale-up of renewable energy. In addition to these utility-scale renewable energy power plants, California has an additional 3.5 GW of solar "self-generation" on the customer side of the meter that offsets demand for electricity from the grid when the sun is shining. And a growing third source is "community solar," where residents and businesses invest in small, local solar plants.

Figure 1. Total electricity generation in California from solar and wind energy directly connected to the CAISO grid (California Energy Almanac).

By law, California must source 33% of its electricity from renewables by 2020 and 50% by 2030 under its renewable portfolio standard (RPS) requirements. Many in-state renewable energy projects are in the pipeline, including nearly 9 GW of solar PV and 1.8 GW of new wind projects that have received environmental permits. Power purchase agreements have already been signed for at least 1 GW of new solar projects. If all of these permitted projects were developed, California would have about 16 GW of solar-generating capacity by 2020.

While wind has provided a significant portion of California’s renewables to date, the majority of new additions for meeting the 2020 33% RPS requirement is forecast to come from direct grid-connected solar PV. California has an enormous and high-quality solar resource, with an estimated technical potential of more than 4,000 GW for utility-scale solar and 76 GW for rooftop solar. Unlike the Midwest, California has a modest technical potential for wind and many of the best sites are already developed. California does have a large offshore-wind resource, some of which is now in permitting, but high costs and technological challenges remain. Importing electricity generated by onshore wind from neighboring states is promising, but some imports will require new high-voltage transmission lines that may take a decade to plan, site, permit, finance and build.

Figure 2. California energy mix on May 29, 2016. Note that renewables provide more than 40% of the power during the middle of the day. Of this, more than 30% is from solar power (CAISO Daily Renewables Watch).

California has a major effort – the Renewable Energy Transmission Initiative (RETI) – that has successfully identified and built lines to meet its RPS requirements. This year, California launched a new phase of RETI to develop the additional transmission, both in-state and out-of-state, needed for the 50-percent RPS. A recent study supporting California Senate Bill 350 implementation (which includes the 50-percent RPS) showed $1 billion to $1.5 billion in annual savings by adding major transmission lines that would bring more out-of-state wind energy into California.

If, instead, California continues to rely mostly on solar resource for meeting the 2030 50-percent RPS, the total statewide solar-generating capacity would reach 30 to 40 GW under peak production, according to a report by Energy and Environmental Economics Inc. (E3). Under these conditions, on a sunny day, for most of the year, California would be generating more electric power than it needs during the middle of the day from solar energy alone. E3 calculates that this large amount of overgeneration could be a problem 23 percent of the time, resulting in curtailment of 8.9 percent of available renewable energy, with marginal overgeneration by solar PV of 42-65 percent. In other words, California could hit the solar wall. And this does not even consider that midday demand is likely to decrease due to the installation of additional residential and commercial solar PV systems “behind the electricity meter.”
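
A toy midday snapshot shows how overgeneration arises. All numbers here are illustrative assumptions of mine (not E3's model): roughly 35 GW of solar at peak against an assumed 26 GW of demand and 10 GW of must-run baseload.

```python
# Toy midday snapshot: overgeneration = must-run supply minus demand.
# Illustrative numbers only, not from the E3 study.

demand_gw = 26.0          # assumed midday statewide load
baseload_gw = 10.0        # assumed must-run nuclear/geothermal/imports
solar_peak_gw = 35.0      # mid-range of the 30-40 GW buildout cited above

surplus_gw = solar_peak_gw + baseload_gw - demand_gw
curtailed_fraction = max(surplus_gw, 0) / solar_peak_gw
print(f"{surplus_gw:.0f} GW surplus -> {curtailed_fraction:.0%} of solar curtailed")
# -> 19 GW surplus -> 54% of solar curtailed, within the 42-65% range cited above
```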

Consequences of hitting the solar wall

Just a decade ago it would have been nearly unthinkable that during the middle of the day solar energy could provide more electricity than an economy as large as California’s needs. But supportive policies, rapid scale-up and decreasing costs have made this possibility a reality today. While from some perspectives this is very encouraging, there are real consequences of hitting the solar wall. For example:

  • Reliance on so much solar energy would require rapid ramping of tens of gigawatts of natural gas capacity from 4:00-6:00 p.m., when the sun is going down and electricity demand rises as people return home (a rough estimate of this ramp follows this list).
  • Large back-up capacity from natural gas plants or access to other sources of dispatchable electricity would be required for days when the sun isn’t shining.
  • Zero marginal-cost solar generation could squeeze out other valuable low-carbon electricity sources that provide baseload power, such as natural gas combined cycle plants, geothermal energy and nuclear power, which cannot compete at zero marginal cost during these times.
  • Large-scale curtailment of solar PV during times of over-generation, which will reduce the value of solar capacity additions to investors.
  • Real-time pricing during times of over-generation could limit or eliminate the net-metering advantage of PV on residential and commercial-scale installations.
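
As a rough illustration of the ramping problem in the first bullet: whatever solar was supplying in mid-afternoon must be replaced by dispatchable plants at just the moment demand is rising. The numbers below are illustrative assumptions, not CAISO data.

    # Back-of-the-envelope evening ramp under assumed (not measured) conditions
    solar_at_4pm_gw = 20.0   # assumed solar output in mid-afternoon
    demand_rise_gw = 6.0     # assumed evening demand increase as people return home
    hours = 3.0              # roughly 4:00-7:00 p.m.

    ramp_gw = solar_at_4pm_gw + demand_rise_gw
    print(f"ramp needed: {ramp_gw:.0f} GW over {hours:.0f} h = {ramp_gw / hours:.1f} GW per hour")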

There is no doubt that California’s solar energy potential is invaluable, but we must take steps to avoid the solar wall.  Fortunately, these issues are being recognized and addressed at many levels in California.

Avoiding the solar wall

Numerous approaches to avoiding the solar wall are available today, and in the future more options will exist as we develop new technologies, policies and markets to take advantage of large solar-energy resources that exist around the world. In the short term, key actions include:

  • Develop a renewable energy-generation mix that is well-balanced among solar, wind and other forms of renewable generation. The right generation mix will be region specific, but for California should include increasing wind generation to provide nighttime power. [my comment: what other renewable generation?  To reach an 80 to 100% renewable grid, most of the power has to come from solar and wind with a little help from geothermal and hydropower]
  • Support regional generation markets across wide geographic areas to balance the variability of renewable generation. California has created an energy imbalance market with participants in Nevada, Wyoming and Oregon. Expansion of regional markets is being studied as part of the implementation of Senate Bill 350, California’s 50-percent RPS law.
  • Ensure adequate capacity of rapid-ramping natural gas plants to provide reliable supply during the morning and evening hours as the sun rises and sets. [my comment: natural gas is finite! Conventional natural gas peaked in 2001 in the U.S., shale gas is peaking both economically now and geologically by 2020, and we have only 4 Liquefied Natural Gas (LNG) import terminals].
  • Expand use of load shifting through real-time pricing to incentivize using power during daytime hours when large amounts of solar power are available.
  • Encourage daytime smart charging of electric vehicles to take advantage of abundant and zero marginal-cost solar generation. Achieving this will require workplace charging stations and new business models. With transportation at about 40% of the state’s energy use, electrification of the transport sector could have the dual benefits of eliminating tailpipe emissions and providing demand for abundant and low-cost solar energy. [My comment: the math and computer algorithms to have a smart grid are far from existence, batteries aren’t much better than they were 210 years ago, and trucks can’t run on batteries. ]
  • Increase energy storage to avoid curtailment of solar over-generation during peak production periods. For now, few financial incentives exist for large-scale pumped-hydropower or compressed air storage projects [my comment: that’s not the problem!  There are very few places left to put pumped-hydro and no spots at all to put compressed air facilities, unless they’re above ground, which is crazy expensive]. Levelized costs of small-scale storage in batteries range from about $300 to more than $1,000/megawatt-hour (MWh) depending on the use-case and the technology. These are expensive compared to pumped-hydro storage at $190 to $270/MWh. For comparison, gas peaker plants have a levelized cost of $165 to $218/MWh. The business case for battery storage will be limited until prices come down significantly. Both R&D and scale-up will be needed to reduce costs. [my comment: utility scale battery storage is FAR from commercial, and only sodium sulfur (NaS) batteries have enough material on earth to store half a day of world electricity (see Barnhart, Charles J. and Benson, Sally M. January 30, 2013. On the importance of reducing the energetic and material demands of electrical energy storage. Energy Environ. Sci., 2013, 6, 1083-1092)]
  • [ My comment: Furthermore, utility-scale battery storage is far from being commercial. Using data from the Department of Energy (DOE/EPRI 2013) energy storage handbook “Electricity storage handbook in collaboration with NRECA”, I calculated that the cost of NaS batteries capable of storing 24 hours of electricity generation in the United States came to $40.77 trillion dollars, covered 923 square miles, and weighed in at a husky 450 million tons (this arithmetic is reproduced in the sketch after this list).
    Sodium Sulfur (NaS) Battery Cost Calculation:
    NaS Battery 100 MW. Total Plant Cost (TPC) $316,796,550. Energy
    Capacity @ rated depth-of-discharge 86.4 MWh. Size: 200,000 square feet.
    Weight: 7,000,000 lbs, Battery replacement 15 years (DOE/EPRI p. 245).
    128,700 NaS batteries needed for 1 day of storage = 11.12 TWh/0.0000864 TWh.
    $40.77 trillion dollars to replace the battery every 15 years = 128,700 NaS * $316,796,550 TPC.
    923 square miles = 200,000 square feet * 128,700 NaS batteries.
    450 million short tons = 7,000,000 lbs * 128,700 batteries/2000 lbs.
    Using similar logic and data from DOE/EPRI, Li-ion batteries would cost
    $11.9 trillion dollars, take up 345 square miles, and weigh 74 million tons. Lead–acid (advanced) would cost $8.3 trillion dollars, take up 217.5 square miles, and weigh 15.8 million tons.
  • Use electrolysis to produce hydrogen fuel to augment the natural gas grid, generate heat and power with fuel cells, or power hydrogen vehicles. [my comment: hydrogen is the least likely energy solution, even more unlikely than fusion]. Also, compared to storing electricity in batteries, hydrogen-based storage systems that combine electrolysis and fuel cells are only about one-third as efficient. In addition, these technologies are expensive today, and significant cost reductions will be required to make them competitive alternatives.
  • For the longer term, scientists are developing new methods to produce fuels from renewable energy. The SUNCAT Center and the Joint Center for Artificial Photosynthesis are developing new materials to produce “zero net carbon fuels” from carbon dioxide, water and renewable energy that can be used for transportation or backing up the electric grid. While we don’t know if and when the needed breakthroughs will occur, the game-changing potential of net zero carbon fuels would unlock the full potential of solar energy and break through the solar wall. [My comment: before reading further, if the fuel isn’t DIESEL to keep trucks running, then what’s the point? And given that we’re at peak oil, peak coal, and peak natural gas, we don’t have the time for breakthroughs to occur. You’d want to prepare at least 20 years ahead of time].
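
As flagged in the storage bullet above, here is that NaS arithmetic reproduced as a short Python sketch. The unit figures are the DOE/EPRI 2013 handbook numbers quoted in the list, and one day of U.S. generation is taken as 11.12 TWh, as in the comment; nothing here is independently verified.

    US_DAILY_TWH = 11.12               # one day of U.S. electricity generation
    UNIT_TWH = 86.4 / 1_000_000        # 86.4 MWh usable per 100 MW NaS unit
    UNIT_COST = 316_796_550            # total plant cost per unit, dollars
    UNIT_AREA_SQFT = 200_000
    UNIT_WEIGHT_LBS = 7_000_000
    SQFT_PER_SQMI = 5280 ** 2          # 27,878,400 sq ft per square mile

    units = US_DAILY_TWH / UNIT_TWH
    print(f"batteries needed: {units:,.0f}")                                      # ~128,700
    print(f"cost: ${units * UNIT_COST / 1e12:.2f} trillion every 15 years")       # ~$40.77T
    print(f"footprint: {units * UNIT_AREA_SQFT / SQFT_PER_SQMI:,.0f} sq miles")   # ~923
    print(f"weight: {units * UNIT_WEIGHT_LBS / 2000 / 1e6:,.0f} million tons")    # ~450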

Taking full advantage of the power from the sun

The global potential of solar energy is enormous and surely it can play a major role in a deeply decarbonized future energy system. In Thomas Edison’s words, “I’d put my money on the sun and solar energy. What a source of power! I hope we don’t have to wait until oil and coal run out before we tackle that.”*

We have work to do, but we are well on the way. Who would have imagined just five years ago that solar energy would provide 6% of California’s electricity, or that it would be on track to double, triple or go beyond? But we need to be smart: avoiding running into the solar wall by balancing the generation mix, expanding regional markets, creating real-time markets to increase demand during solar peak-generating periods and creating new electricity demand, such as daytime charging for electric vehicles. In the longer term, electricity storage, hydrogen generation and zero net carbon fuels will further unlock the potential of solar energy.

*As quoted in Uncommon Friends: Life with Thomas Edison, Henry Ford, Harvey Firestone, Alexis Carrel & Charles Lindbergh (1987), by James Newton, p. 31.



Why human waste should be used for fertilizer

[ At a John Jeavons Biointensive workshop back in 2003, I learned that phosphorus is limited and mostly being lost to oceans and other waterways after exiting sewage treatment plants. He said that using human waste as fertilizer is dangerous if done incorrectly, and that he wasn’t going to cover it at the workshop, but to keep it in mind for the future.

Fertilizer can increase crop production up to 5 times per acre. To give you an idea of how important fertilizer made from natural gas (which supplies both the feedstock and the energy to make it) is, here are a few paragraphs from Yeonmi Park’s recent book “In order to live: A North Korean girl’s journey to freedom”:

“One of the big problems in North Korea was a fertilizer shortage. When the economy collapsed in the 1990s, the Soviet Union stopped sending fertilizer to us and our own factories stopped producing it. Whatever was donated from other countries couldn’t get to the farms because the transportation system had also broken down. This led to crop failures that made the famine even worse. So the government came up with a campaign to fill the fertilizer gap with a local and renewable source: human and animal waste. Every worker and schoolchild had a quota to fill. Every member of the household had a daily assignment, so when we got up in the morning, it was like a war. My aunts were the most competitive.

“Remember not to poop in school! Wait to do it here!” my aunt in Kowon told me every day.

Whenever my aunt in Songnam-ri traveled away from home and had to poop somewhere else, she loudly complained that she didn’t have a plastic bag with her to save it.

The big effort to collect waste peaked in January so it could be ready for growing season. Our bathrooms were usually far from the house, so you had to be careful neighbors didn’t steal from you at night. Some people would lock up their outhouses to keep the poop thieves away. At school the teachers would send us out into the streets to find poop and carry it back to class. If we saw a dog pooping in the street, it was like gold. My uncle in Kowon had a big dog who made a big poop—and everyone in the family would fight over it.

Our problems could not be fixed with tears and sweat, and the economy went into total collapse after torrential rains caused terrible flooding that wiped out most of the rice harvest…as many as a million North Koreans died from starvation or disease during the worst years of the famine.

When foreign food aid finally started pouring into the country to help famine victims, the government diverted most of it to the military, whose needs always came first. What food did get through to local authorities for distribution quickly ended up being sold on the black market”

Below is a review of The Wastewater Gardener: Preserving the planet, one flush at a time, by Mark Nelson, Synergetic Press]

Barnett, A. August 2, 2014. Excellent excrement. Why do we waste human waste? We don’t have to. NewScientist.

Would you dine in an artificial wetland laced with human waste? In The Wastewater Gardener, Mark Nelson makes an inspiring case for a new ecology of water.

Rainforest destruction, melting glaciers, acid oceans, the fate of polar bears, whales and pandas. You can understand why we get worked up about them ecologically. But wastewater?

The problem is excrement. Psychologically, we seem to be deeply averse to the stuff and want to avoid contact whenever possible – we don’t even want to think about it, we just want it out of the way.

The solution, a universal pipe-based waste network, works well until domestic and industrial chemicals and other non-biological waste are mixed in. Treating the resulting toxic soup, as Mark Nelson explains in The Wastewater Gardener, is not only a major technological challenge, but also uses enormous amounts of one of the planet’s most limited resources: fresh water.

Each adult produces between 7 and 18 ounces of faeces per day. With our current population, that’s a yearly 500 million tonnes. Centralized sewage systems use between 1000 and 2000 tons of water to move each ton of faeces, and another 6000 to 8000 tons to process it.
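
Multiplying out the review’s figures gives a sense of the scale. A minimal sketch, using only the numbers quoted in the paragraph above (which are the article’s, not independently verified):

    faeces_t_per_year = 500e6                 # ~500 million tonnes of faeces a year
    move_low, move_high = 1_000, 2_000        # tonnes of water to move each tonne
    process_low, process_high = 6_000, 8_000  # tonnes of water to process each tonne

    low = faeces_t_per_year * (move_low + process_low)
    high = faeces_t_per_year * (move_high + process_high)
    print(f"water used: {low / 1e12:.1f} to {high / 1e12:.1f} trillion tonnes per year")
    # -> 3.5 to 5.0 trillion tonnes per year, most of it drinkable water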

Even then, this processed waste often ends up in waterways, affecting wildlife and communities downstream, and it eventually finds its way to the ocean. There it contributes to the process of eutrophication, which creates dead zones, killing coral reefs and other sea creatures.

But it doesn’t have to be like that. As head of Wastewater Gardens International, Nelson has traveled the world, developing and promoting artificial wetlands as the most logical way to use what we otherwise flush away.

Except that, as Nelson points out, with 7 billion-plus people, there really is no “away”. Besides, what the public purse pays to detox and dump can be put to profitable work, fertilising greenery for urban spaces and fruits and vegetables for domestic and commercial use, for example.

Less than 3% of Earth’s water is fresh, and only a tiny portion of that is easily available to us. Most of the water that standard sewage systems use to move human waste is drinkable. Diminishing water resources mean alternatives are pressingly needed. Wastewater gardens, where marsh plants are used to filter lavatory output and allow cleaned water to enter natural watercourses, are very much part of that solution.

Nelson clearly understands the yuck factor and goes to great lengths to show that having a shallow vat of human-waste-laced water nearby is far less vile than we might imagine, especially when it is covered by gravel and interlaced with plant roots. Restaurants with tables dotted between ponds containing the ever-filtering artificial wetlands provide convincing proof.

Constructed wetlands can take on big jobs, too: a mixture of papyrus, lotus and other plants has successfully and beautifully detoxified water from Indonesian batik-dyeing factories. This water had killed cows downstream and caused running battles between farmers and factory workers.

The Wastewater Gardener is not a “how to” story, but more a “how it was done” account. Nelson tells how these wetlands started to become mainstream in less than 30 years. With humility and humour, he recounts how, as a boy from New York City, he acquired hands-on ranching knowledge in New Mexico, then studied under American ecology guru, Howard Thomas Odum.

And stories of his experiences everywhere from urban Bali and the Australian outback to Morocco’s Atlas mountains and Mexico’s Cancún coast illustrate the gravelly, muddy evolution of his big idea. An inspiring read, not just for the smallest room.


Scientific American: Peak Oil may keep catastrophic climate change in check

[ “According to the Intergovernmental Panel on Climate Change, about 50% of carbon dioxide emitted by human activity will be removed from the atmosphere within 30 years, and a further 30% will be removed within a few centuries. The remaining 20% may stay in the atmosphere for many thousands of years” (GAO. 2014. CLIMATE CHANGE: Energy Infrastructure Risks and Adaptation Efforts GAO-14-74. United States Government Accountability Office). Alice Friedemann www.energyskeptic.com author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, and “Crunch! Whole Grain Artisan Chips and Crackers” ]

Ogburn, S. P. October 29, 2013.  Peak Oil may keep catastrophic climate change in check.  Scientific American.

Scientists suggest that the highest possible pollution rates are unlikely

Even as governments worldwide have largely failed to limit emissions of global warming gases, the decline of fossil fuel production may reduce those emissions significantly, experts said yesterday during a panel discussion at the Geological Society of America meeting.

Conventional production of oil has been on a plateau since 2005, said James Murray, a professor of oceanography at the University of Washington, who chaired the panel.

As production of conventional oil, which is far easier to get out of the ground, decreases, companies have turned to unconventional sources, such as those in deep water, tar sands or tight oil reserves, which have to be released by hydraulic fracturing.

But those techniques tend to lead to production peaks that tail off quickly, Murray said.

The panelists said these trends belie the high-end emission scenario from the Intergovernmental Panel on Climate Change (IPCC). That scenario, known as RCP 8.5, and often referred to as the “business as usual” scenario, has carbon dioxide emissions increasing through 2100.

“I just think it’s going to be really hard to achieve some of these really high CO2 scenarios,” Murray said.

David Rutledge, an engineering professor at the California Institute of Technology who studies world coal production, said the IPCC’s “business as usual” scenario is unrealistic because it essentially assumes that growth of fossil fuels like coal will continue apace, which is unlikely.

Recovery estimates may be too high

In reality, governments tend to overestimate their coal reserves, and much of these reserves will never be accessed, Rutledge said.

“There is little relationship between the RCPs and the actual historical experience of oil, gas and coal production,” Rutledge said.

Rutledge said of the four IPCC scenarios, he found the second RCP scenario, RCP 4.5, where carbon dioxide emissions flatten out around 2080, to be more plausible under a business-as-usual scenario for coal exploitation.

“4.5 would be the closest one if you look at the mining history,” Rutledge said. “My own opinion is that no one should use RCP 8.5 for any purpose at all.”

David Hughes, of Global Sustainability Research Inc., pointed out that production from tight oil plays like North Dakota’s Bakken and Texas’ Eagle Ford quickly reaches what he called “middle age,” when production begins to fall off.

He said it is likely that the Bakken Shale oil play will peak in 2015 or 2016 and that the Eagle Ford Shale play, another significant U.S. oil production area, will peak soon after.

“Long-term [production] sustainability is highly questionable, and environmental impacts are a major concern,” Hughes said.

Charles Hall, a professor at the State University of New York who researches energy and wealth, showed in graph after graph that almost every oil-producing country has reached its peak of oil production. This is even with a tripling of oil prices over the analysis period, Hall said.

Pieter Tans, a climate scientist at the National Oceanic and Atmospheric Administration who wrapped up the panel, said that while governments and policymakers should still aggressively pursue the goal of reducing greenhouse gas emissions, he did not believe that the most severe IPCC scenario, RCP 8.5, was likely.

From a climate perspective, there is some good news about the likely decline in the growth of fossil fuel production discussed by others at the panel, Tans said.

“It does decrease the chances of catastrophic climate change,” he said.


Ecological cost of new roads in Africa

[ What follows are excerpts from the January 8, 2014 issue of NewScientist’s “Africa’s road-building frenzy will transform continent” by Andy Coghlan. ]

China is funding most of the new roads in order to move the minerals it has mined and to transport food from the millions of acres of agricultural land it has purchased.

These roads are expected to impact large regions of untouched natural habitat. A quarter of the known 4,151 mineral deposits in central Africa are in irreplaceable natural habitat, most of them unprotected.

There are few roads there now, and only 25% of them are paved: 204 km of road per 1,000 square kilometers of land, versus a world average of 944 km, more than half of it paved.

The existing 6,000 miles of roads will be expanded 10-fold, to 60,000 miles.

Roads enable farmers to produce more because they can buy tools, fertilizer, and other supplies easily. So roads would increase agricultural production: farmers 4 hours from a city reach 45% of their potential output, while farmers twice as far away (8 hours) produce just 5% of what they could.


Why the world is headed the way of Easter Island

Petros Sekeris. 19 November 2014. Violence ahead as tragedies of the commons spread. NewScientist.

The world risks heading the way of Easter Island – a spiral into conflict as depleted natural resources are plundered.

There is a growing feeling that resources vital to sustain human life, such as fresh water, land and fossil fuels, are being used too fast to ensure our long-term presence on the planet. It seems obvious that nations should cooperate on this problem, and yet successful cross-border solutions and agreements are hard to find. Why don’t we act for the common good more often?

Look around the world and you can see instances of water-related inter-state tension and conflicts in many regions, including the Middle East (Jordan river basin, Tigris-Euphrates basin), Asia (Indus river), and Africa (the Nile).

“Fish wars” have erupted sporadically, such as Europe’s cod wars, and while these have been more contained, they could resurge amid decreasing stocks. In the same way, the shared resource of global climate continues to be threatened by the relentless burning of fossil fuels.

Our degradation of the environment is ominous and much evidence points to a clear link between the scarcity of vital resources and conflict. One wonders, then, why world leaders failed to reach a substantive agreement on climate change at the Copenhagen summit in 2009; or why fishing and hunting quotas for endangered species are so hard to implement; or why the use and pollution of river basins is not better regulated.

Explanations such as poor forecasting of resources, the short-term mindset of politicians, or simply the refusal to recognize the problem are usually given.

However, what if these are not the real reasons and something more fundamental is at work?

For example, imagine a depletable natural resource – such as a water basin – jointly owned by two countries. Both drain it for drinking, sanitation, irrigation and so on. Draining too quickly will result in it drying out. Most game theory work says that working for the common good is the optimum choice for both nations. But this does not square with conflicts we see, or the widely held view that more are inevitable.

To address this, I designed a simulation that allowed the use of violence to control resources (The Rand Journal of Economics, vol 45, p 521). In a world where force is a very real option and history suggests it is used or threatened more often than we might hope, this seemed reasonable.

The outcome offers an explanation for the gap between theory and reality. Having constructed a game-theoretical model, I found that when conflict is allowed it always occurs, but only once resources become heavily depleted.

And, crucially, the very expectation of impending conflict led to non-cooperation in the short term and sped up depletion of the common resource. I would argue that this resource-grabbing tallies with what we see in much of the world, be it disputes over fossil fuels, fresh water, land or marine resources.
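
Sekeris’s actual model is a formal game-theoretic analysis published in The Rand Journal of Economics; the toy simulation below only illustrates the mechanism described in these paragraphs, with every parameter invented for illustration. When conflict over a scarce stock is possible, each side over-extracts to pre-empt the other, and that over-extraction is what brings on the scarcity that triggers conflict.

    def run(conflict_allowed, rounds=20):
        stock = 100.0
        for t in range(1, rounds + 1):
            # If the resource is scarce and force is an option, the stronger
            # side grabs what remains.
            if conflict_allowed and stock < 40:
                return t, stock, "conflict: the remainder is grabbed by force"
            # Expectation effect: if conflict is possible, each side
            # over-extracts to pre-empt the other; otherwise both take a
            # modest, sustainable amount.
            take_each = 8.0 if conflict_allowed else 2.0
            stock = min(100.0, max(0.0, stock - 2 * take_each) * 1.05)  # 5% regrowth
            if stock <= 0.0:
                return t, stock, "resource exhausted"
        return rounds, stock, "cooperation sustained"

    for allowed in (False, True):
        t, stock, outcome = run(allowed)
        print(f"conflict allowed={allowed}: round {t}, stock {stock:.1f} -> {outcome}")

In the no-conflict run the stock holds steady indefinitely; in the conflict-allowed run, pre-emptive over-extraction depletes the stock within a few rounds and ends in a grab, which is exactly the gap between cooperative theory and observed behavior described above.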

Are there any historical examples that illustrate this effect of “conflict expectation” and more rapid resource use? Possibly. The demise of the first society on Easter Island is salient. It is thought Polynesians were first to colonize this isolated, 160-square-kilometre Pacific island around AD 900. At its peak, 30,000 people may have lived there. Their society was organized in hierarchical clans, peacefully competing for supremacy by displaying vast stone statues. To move them, the tallest trees needed to be felled and used as rollers. Deforestation resulted, says Jared Diamond. Instead of reaching agreements, the islanders rapidly devastated their lands, and by the time the first Europeans arrived in 1722, no tree taller than 3 meters stood there.

An ecological disaster and dramatic deprivation must have occurred. According to Diamond, a sort of military coup took place, sparking prolonged conflict. It is reasonable to imagine that the clans realized that trees – also vital for things like fishing boats – were in short supply, and so grabbed what they could before the inevitable violence.

The conclusions I’ve drawn on the impact of over-use of resources today on future conflict are purely theoretical. So with economists Giacomo De Luca and Dominic Spengler of the University of York, UK, I am designing a lab experiment to see whether humans in a controlled environment do deplete resources faster when given the possibility to use violent control. Our early findings point that way. Such evidence would shed new light on the failure of international cooperation over the preservation of the environment.

What’s next? I have not yet considered human ingenuity in adapting to a changing environment. Whether that will be sufficient to achieve a sustainable path depends on the rate of depletion versus adaptation.

Inevitable conflict and accelerated use of depleted resources may be more likely to become a reality within weak states and in the international arena, where weak institutions are more likely. For example, signing a carbon emissions treaty today does not commit a country beyond mild sanctions that the global community may or may not impose. In addition, a change in government in a powerful country is sufficient for a treaty to be revised, curbing the incentives of others to join.

All this reinforces the need for stronger institutions and international bodies if we are to avert a tragedy of the commons in a violent world. Sadly, this will require overcoming the very problem we are trying to solve: a lack of international cooperation.

Petros Sekeris is an economist at the University of Portsmouth, UK


Ted Trainer criticizes Hatfield-Dodds CSIRO study in Nature that denies “Limits to Growth”

[This study denies “Limits To Growth”, and I’ve posted Ted Trainer’s objections below.  It is alarming Nature would publish such claptrap.  Has Rupert Murdoch secretly purchased them? Alice Friedemann www.energyskeptic.com]

Ted Trainer.  November 2015. A brief critical response to the CSIRO study:

Hatfield-Dodds, et al., (2015) Australia is ‘free to choose’ economic growth and falling environmental pressures, Nature, 5th Nov, 527, pp. 49-53. doi:10.1038/nature16065

http://www.nature.com/nature/journal/v527/n7576/full/nature16065.html#affil-auth

This study (by eighteen authors) concludes that Australia can achieve sustainable levels of resource use and environmental impact by 2050 without interfering with economic growth and without any radical change in values or behavior. About twenty scenarios are modeled, and reported in many detailed plots in the c. 25 page Nature article plus the Supplementary Information document. These credentials make it likely that the findings will be widely reported and accepted. There are however a number of problematic aspects of the study. Following are brief notes on some of these, supporting the view that the paper’s conclusions are mistaken. They contradict the now large “limits to growth” literature so it is very important that they should be considered carefully.

The problem with “scenarios”.

The study reports on scenarios, mostly in the form of plots of trends on a baseline extending to 2050. Scenarios are commonly used but are of no value unless they are accompanied by full information on the assumptions on which they are based and full presentation of derivations. In this case we are given neither, meaning that the paper is little more than a set of unsupported claims. These might be correct, but the exercise would only be of value if we were able to assess this.

It is possible to prove just about anything by feeding specific assumptions into models, especially when a number of optimistic assumptions are combined. This is not to say dishonesty is involved. Estimates of future efficiencies and costs typically vary greatly in fields like renewable energy, emissions analysis, carbon sequestration, the hydrogen economy, and biomass technologies. If a set of relatively optimistic plausible numbers is taken it can produce conclusions many times more favorable than a set of plausible pessimistic numbers. In the case of this study it seems to me from the conclusions given that some quite implausibly optimistic assumptions have been made.

In other words, the paper does not explain how its claimed 2050 figures can be achieved; it simply states that they can be. The claims might be valid, but we can’t evaluate them. What we want to know is how/why it is thought that they can be achieved, to be able to rework the arithmetic to assess the validity of these conclusions, and to be able to consider whether the assumptions underlying them are plausible. If an analysis does not provide us with the information enabling us to do these things it is not far from worthless. There are a number of analyses of this kind in the renewable energy field. I take a dim view of Nature’s poor standards in accepting such a paper, especially when it provides strong support for a much contested and I believe erroneous position on what is probably the most important issue we face; viz. whether or not there are limits to growth.

“Decoupling”.

The study strongly accepts the “decoupling” thesis, i.e., that economic growth can be separated from increasing resource use and ecological impacts.

Reviews have found that at present there is virtually no satisfactory support for the claim that this is happening. (Burton, 2015.) Over the longer term energy use for instance has tracked almost exactly in parallel with GDP growth. It is not very helpful for this paper to say, “We find that substantial economic and physical decoupling is possible.” Even if substantial decoupling could be shown to be possible the important question is, could the magnitude of the effect be sufficient?

There are impressive reasons for thinking that the effect could not be sufficiently powerful to achieve the outcomes this paper envisages. According to these authors, by 2050 Australian GDP can multiply by 2.7 while resource use falls 35%. That would leave a ratio of resource use to GDP that is around one-fifth of the present level. No evidence or reason is given to indicate why this is thought to be possible, in an era when just about all material, biological and ecological resource grades, costs, scarcity problems etc. are deteriorating rapidly. Add to this the cumulative global resource depletion that will occur in the next 35 years, during which they estimate that GWP will multiply by 2.5.

There are numerous well known indices which show how enormous decoupling would have to be if economic growth could continue while resource and ecological impacts become sustainable. For instance the World Wildlife Fund’s “Footprint” analysis shows that the amount of productive land needed to provide an Australian with energy, food, water and shelter is about 7-8 ha. If 9.7 billion people were to live as we do then we’d need up to 78 billion ha of productive land … but that’s about ten times the amount there is on the planet. And if present loss rates continue we will have only half the present amount of cropland by 2050.

Similarly, if by 2050 all 9.7 billion people had risen to the GDP per capita that Australians would have by then (given 3% p.a. economic growth), world economic output would be about 25 times as great every year as it is now. Is it plausible that “decoupling” could allow GWP, the amount of producing, purchasing and using up going on, to multiply by 20+ while rich world per capita resource use is cut to one-tenth or one-twentieth of the present total? What is the case for thinking that anything like this could be done?
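
Trainer’s footprint arithmetic is easy to check. A minimal sketch using only the numbers given in the text (the global productive-land total is inferred from his “about ten times” ratio, not an independent figure):

    ha_per_person = 8.0            # productive land per Australian, upper figure
    population_2050 = 9.7e9
    productive_land_ha = 7.8e9     # implied by "78 billion ha is ~10x the planet"

    needed_ha = ha_per_person * population_2050
    print(f"land needed if all lived like Australians: {needed_ha / 1e9:.0f} billion ha")
    print(f"multiple of productive land available: ~{needed_ha / productive_land_ha:.0f}x")
    # -> 78 billion ha needed, roughly ten times what exists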

Given these kinds of multiples, a 35% reduction in materials demand (i.e., only 25% per capita given that the analysis envisages a 37 million population in 2050) would not get us far towards a global consumption rate that is sustainable and possible for all.

Presumably it is being assumed that the economy would be much more heavily centered on provision of services than at present rather than on producing resource-intensive commodities and goods, but services are remarkably energy and resource intensive, even when associated factors such as getting workers to offices, and training them in the first place, are not included. Again we would need to see assumptions and numbers.

I sent a draft of this critique to the main author. His only response regarding the decoupling issue was to say that a paper by Schandl et al. (2015) provides “more explanation.” But that paper does not provide any evidence or argument supporting the claim that decoupling is possible.  It isn’t even concerned with that question. What the paper does is make a basic assumption on carbon price and another on materials use efficiency, and then look at the effects on GDP etc. to 2050.

The Schandl et al. paper assumes that the efficiency of use of materials could improve at up to 4.5% p.a., compared with the historical rate said to be 1.5% p.a. No reason is given for thinking that this extremely high rate is realistically achievable. If it were achieved, then by 2050 materials used per unit of production would be around one-fifth of what they are now (0.955^35 ≈ 0.2). To put it mildly, we would need a very convincing case before we could take this expectation seriously.

But the biggest problem with the Schandl et al. paper is that it is pretty clearly saying that if we implement a high carbon price, and achieve an up to 4.5% p.a. improvement in materials efficiency, then by 2050 there will be significant decoupling, without affecting GDP. But this is only saying that if we assume significant decoupling takes place each year from now on, then by 2050 we will have significantly decoupled. (!) The paper is little more than an exploration of the effects of improving materials efficiency at the rates stated.

But the ultimate point about the Schandl et al. paper is that it clearly and emphatically says that none of the scenarios they explore result in absolute decoupling.

On p. 5 they say,

“Our results show that while relative decoupling can be achieved in some scenarios, none would lead to an absolute reduction in energy or materials footprint.”  (They do say carbon would go down.)

“…even strong carbon abatement and large investment into resource efficiency would see global energy use growing from … 416 EJ/y to 1128 EJ/y in 2050.”

Note again that this paper was the sole reference given to me when I asked the CSIRO authors what the support for the decoupling thesis is (!)

By the way, that energy growth figure is far higher than I have seen anyone predict, even the IEA. Energy demand more than doubles in all three of their scenarios, so to say the least, there is no absolute energy decoupling. To quote the paper again, “…energy use continues to be strongly coupled with economic activity in all three scenarios.” (p.5.) We are left with the question of how we could sustainably find 2.7 times the present world energy supply. The paper does not consider the difficulty of doing this via renewables. (I have published a number of papers arguing that this cannot be done affordably.)

Similarly they say that global materials use would increase markedly, from 79 billion tons/y to 183 billion tonnes/y. This would only be a small “relative” decoupling, but it would be 2.3 times the present burden on the planet due to resource extraction.

Thus it would seem that a) it is highly implausible that anything like the expected/assumed decoupling could be achieved, b) no reason is given to expect that it could, c) in fact even when Schandl et al. make very implausible assumptions they admit decoupling does not result, and d) even if the most optimistic CSIRO rate were achieved, it would leave Australian levels of resource and ecological impact far higher than those enabling a sustainable world (explained further below).

Bio-sequestration

The second of the two big assumptions the paper’s optimism depends on is the assumed potential for bio-sequestration of carbon. It says that in 2050 large quantities of carbon based energy would still be being used and up to 59 million ha would be planted to take carbon from the atmosphere. (All our cropland is only c. 24 million ha and all our agricultural land is about 85 million ha.) The yield assumption does not seem to be stated; is it 15 t/ha, or the more like 5 t/ha likely from a very large area of more or less average land? The main problem with the use of land to soak up carbon via plant growth is that after about 60 years the trees are more or less fully grown and will not take up any more carbon; what then?

The implications of this do not seem to be considered. It means that in the second half of the century an amount of new planting would be needed each year that was big enough to take out the amount of carbon emitted that year. Given that the economy in 2050 is expected to be 2.7 times bigger than it is now, and still growing at a normal rate, the area to be planted each year would be substantial, and increasing.

Fig. 2 shows that in 2050 a net 200 million tonnes of CO2 would be being taken out of the atmosphere each year. That is, in addition to taking out the emissions generated by the large amount of fossil fuels still being used in 2050 (which seems to be around 1.825 EJ), another 220 million tonnes would be taken out (the amount from power plus transport), making a total in the region of 450 million tonnes/y. Assuming 10 tonnes/ha/y forest growth (it would be more like 5 t/ha/y for a large area), taking out approximately 36 tonnes of CO2/ha/y, the additional area to be planted each year would be 12.5 million ha, and more when it is to cope with an economy that is growing.
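
That area arithmetic, reproduced as a sketch (the figures are Trainer’s, not independently verified):

    co2_to_remove_t = 450e6        # tonnes of CO2 to take out per year
    uptake_t_per_ha = 36.0         # tonnes CO2/ha/y at his 10 t/ha/y growth case

    new_planting_ha = co2_to_remove_t / uptake_t_per_ha
    print(f"new planting needed each year: {new_planting_ha / 1e6:.1f} million ha")
    # -> 12.5 million ha/y; at his more realistic 5 t/ha/y growth case the
    #    uptake halves and the required area doubles to ~25 million ha/y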

How has the carbon embodied in the production and transport of imports been accounted for? It would seem that the 2050 economy would have to be even more dependent on services than the present economy, meaning there would be heavy importation of goods no longer produced in Australia. The energy, carbon, resource and Third World justice effects of imports are only beginning to be attended to, and the picture is disturbing. For instance for a rich country the amount of carbon emissions due to imported goods is typically as great as or much greater than the amount released from energy production. (And it shows up on the books of the exporting country, not the rich country consuming the goods.) Has the amount of bio-sequestration needed to deal with this been included?

In a reply to my draft of this discussion the main author said that “… the carbon sequestered by plantings on currently cleared [land] satiates after a period and does not provide a permanent flow.” This is difficult to understand because it would seem to contradict their entire case. Their defence of the possibility of growth and affluence depends heavily on the capacity of bio-sequestration to take out as much CO2 each year as we are putting in, but this reply seems to be admitting that their strategy could only do that until around 2050.

Randers, one of the original Limits to Growth authors, doesn’t think we will run into limits problems by 2050, but he thinks by about 2070 they will be catastrophic. The time line isn’t crucial; the original book wasn’t concerned with when we will hit the wall; it was concerned that we are going to hit it. At the best the CSIRO paper provides some reason to think it will be later rather than sooner, but it doesn’t give us any good reason to think we won’t hit it. Yet the paper is being taken to mean there are no limits to growth to worry about.

What carbon price will do it?

The study seems to have assumed that power generators will find it economic to shift from carbon fuels to renewables when the price of carbon rises to about $50/tonne (i.e., rises at 4.5% p.a. from $15/t). Lenzen’s soon to be published detailed study of Australian renewable potential is likely to indicate that the price needed to drive carbon out of the generating system is $500/tonne. His colleague working on the German situation says that there the price would be close to $1000/tonne. (The CSIRO paper does not assume close to complete elimination of carbon fuels.)

The study seems to have made the very common mistake of using the cost of carbon that would make it more economic for a generator to shift the generation of 1 kWh from a carbon-fuelled power station to a wind turbine. But this is not the right question. A power supply system with a large fraction of renewable input would have to have a very large amount of redundant generating capacity, most of it sitting idle most of the time, to be able to guarantee supply during periods of low wind or solar energy, or it would have to retain much carbon-fuelled capacity, sitting idle most of the time. Either way high capital costs are created for the system. The multiple for a 100% renewable system seems to be in the range of 4 to 10 times the amount of plant that would do the job if renewables worked at peak capacity all the time. So the price of carbon would have to be high before it became cheaper for power generators to shift to renewable technologies.
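
A minimal sketch of why the per-kWh comparison understates the carbon price needed: guaranteeing supply from variable renewables requires nameplate capacity that is a large multiple of average demand. The capacity factor and redundancy multiple below are assumptions chosen only to illustrate the 4-10x range Trainer cites.

    avg_demand_gw = 25.0      # assumed average system demand
    capacity_factor = 0.30    # assumed fleet-average for wind + solar
    redundancy = 2.0          # assumed extra plant for low-wind/low-sun periods

    nameplate_gw = avg_demand_gw / capacity_factor * redundancy
    print(f"nameplate needed: {nameplate_gw:.0f} GW "
          f"({nameplate_gw / avg_demand_gw:.1f}x average demand)")
    # -> ~167 GW, a ~6.7x multiple; the capital cost of all that mostly-idle
    #    plant is what a simple per-kWh carbon-price comparison misses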

No analysis of renewables.

Renewable energy is claimed to provide a significant proportion of the power and transport energy but there is no reference to the many, difficult and unsettled associated problems of intermittency, redundant capacity, and storage, and the resulting total system capital costs. It is utterly impossible to derive conclusions about the viability and cost of sustainable alternative systems without carrying out detailed and convincing analyses of this field.

Conservation potential?

The plots show that it is being assumed that demand and impacts can be greatly reduced by conservation and efficiency effort. This is commonly assumed but few if any optimistic pronouncements take into account the significant energy, resource and environmental cost of saving energy, resources and environment. In other words claims are often only about gross reductions achievable and not net reductions.

Powerful examples of this are given by figures on housing and vehicles. Much attention is given to the German Passivhaus, which it is said can reduce energy consumption by 75% or more. However this kind of claim usually refers only to energy consumed within the house, and does not take into account the energy used to install the typically elaborate insulation and heat transfer equipment. The issue seems to be unsettled but a recent study by Crawford and Stephan (2013) found that the total life-cycle energy cost for the Passivhaus is actually greater than for a normal German house.

Even more common is the claim that electric vehicles (assumed to make up 25% of transport energy use in this study) can reduce energy use by 75 – 80%, but this does not take into account the considerable energy costs in producing EVs. The State Government of Victoria’s trial of EVs found that they reduce emissions only if powered by renewable energy. (Carey, 2012.) Otherwise life-cycle emissions taking into account all factors in addition to fuel are actually 29% greater than those of petrol driven cars. Mateja (2003) finds that electric cars involve much higher embodied energy costs than normal cars. Bryce (2010) says 60% of the life cycle energy and environmental cost of these cars is to do with their production and disposal, not their on-road performance.

Again it would be important to see what assumptions are being made by these authors in arriving at the optimistic conservation and efficiency claims being made.

Water.

It is said that water extraction might increase 101%, but desalination would be important. What are the energy implications of this? Also, what would be the water implications of 59 million ha of growing trees? There is reference to the fact that this is an issue, but the implications and the magnitudes are not made clear.

What would the cost be?

It is one thing to show that something could be done but it is another to show that it could be afforded. The paper claims that no significant cost to GDP would be involved. Even if the decoupling and sequestration assumptions were valid we would want to know the cost of doing those things, e.g., of maintaining and harvesting 59 million ha, and of producing half the power by renewables. My understanding of Lenzen’s current study is that it seems to be indicating that a fully renewable power supply system would result in a production cost around four or five times the present cost of fossil fuelled power. (The CSIRO paper does say the cost of power production could double.) This would be affordable, but would have major disruptive effects, especially on GDP as energy costs feed into everything and have multiplier effects.

The post-GFC stagnation and wild fluctuations in oil prices seem to have shown how surprisingly fragile and sensitive the global economy is to resource input factors. Tverberg (2015) argues persuasively that resource limits to do with the increasing difficulty of providing oil and its deteriorating EROI led to the recent spectacular rise in its price, which in turn depressed the economy, which led to the present low oil demand and prices. This suggests how disruptive a significant rise in electricity price might be. This paper adds questions to do with the probable costs for all that bio-sequestration, and especially regarding the EROI assumed for biofuels, which are assumed to provide 25% of transport energy. (Various studies find that it is around 1.4 or less for corn-based ethanol, which suggests that option is not worth bothering with.)

Would it scale to 9.7 billion people?

The amount of land planted for bio-sequestration would not. The area assumed for the optimistic scenario, up to 59 million ha forest plantation for sequestration plus 35 million ha for “biodiversity planting”, would total 2.2 ha per person (assuming population will reach 37 million by 2050). But Australia has much more potential forest area than most countries, and the amount of forest on the planet now averages only about 0.45 ha per person, and is heading for 0.25 ha by 2050.

The expected 2050 consumption of petroleum and gas is considerable. Leaving aside whether there will be much of either left by then, the per capita use would be 35 GJ per person. Thus for 9.7 billion people demand would be 340 EJ which is about 1.7 times present world oil consumption … and therefore far from a plausible amount all could be consuming in 2050.
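
Trainer’s scaling check, reproduced as a sketch (the present world oil consumption figure is backed out of his “about 1.7 times” ratio rather than taken from an independent source):

    gj_per_person = 35.0       # assumed 2050 Australian petroleum + gas use
    population = 9.7e9         # projected world population
    demand_ej = gj_per_person * population / 1e9   # 1 EJ = 1e9 GJ
    print(f"global demand at Australian per-capita use: {demand_ej:.0f} EJ/y")
    print(f"vs present world oil consumption: ~{demand_ej / 1.7:.0f} EJ/y")
    # -> ~340 EJ/y, about 1.7 times present world oil consumption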

These numbers mean that even if the optimistic scenario could be achieved it would fall far short of one that could save the planet. It would still leave Australians living at per capita levels of resource use that were many times higher than all could share.

Conclusions.

As noted above, it would be difficult to suggest an issue that is more important than whether or not the limits to growth thesis is valid. The case for it has been accumulating weight for at least fifty years and in my opinion has long been beyond serious challenge. All resource stocks are being depleted at significantly unsustainable rates, summarized by the WWF conclusion that 1.5 planet Earths would be needed to provide them sustainably. And only about 2 billion people are using them; what happens when 11 billion (the UN’s 2100 expectation) rise to our levels of consumption … let alone the levels we will have then given 3% growth … that is, levels that might be ten times as high as they are now?

This is the kind of arithmetic that is now leading considerable and increasing numbers of people to see the dominant obsession with affluence and growth and tech-fixes as absurd and suicidal, and to join the De-growth and associated movements such as Voluntary Simplicity, eco-villages and transition towns. We who are working in this area believe we know how to save the planet and we know the only way it can be saved. It is to shift to ways that do not create the problems now destroying the planet, depleting resources, condemning billions to deprivation, causing resource wars and damaging the quality of life in even the richest countries. Our “Simpler Way” vision (http://thesimplerway.info) would be easily and quickly achieved, if that was what people wanted to do. It isn’t and it will not be considered until the conditions presently devastating the lives of billions begin to impact supermarket shelves in the countries now living well on their grossly unfair proportion of world wealth. By which time it will probably be too late. The CSIRO paper is saying what just about everyone wants to hear, i.e., that there is no need to worry about any need to take The Simpler Way seriously.

References

Bryce, R., (2010), Power Hungry, Public Affairs, New York.

Burton, M., (2015), “The Decoupling Debate: Can Economic Growth Really Continue Without Emission Increases?”, The Leap, October 23.

Carey, A., 2012. Electric cars make more emissions unless green powered. The Age, 4th Dec.

Crawford, R., A. Stephan, (2013), “The significance of embodied energy in certified passive houses”, World Academy of Science, Engineering and Technology, 78, 589-595.

Mateja, D., (2000), ‘Hybrids aren’t so green after all’, www.usnews.com/usnews/biztech/articles/060331/31hybrids.htm

Schandl, H., et al., (2015), “Decoupling global environmental pressure and economic growth: Scenarios for energy use, materials use and carbon emissions.” J. of Cleaner Production, (In press.)

Tverberg, G., (2015) “Oops! Low oil prices are related to a debt bubble”, Our Finite World, November 3.


The latest monster ships could be a disaster

Gray, W. 20 November 2013. Don’t abandon ship! A new generation of monster ships will be even harder to rescue. NewScientist.

Should any of the new monster-sized ships run aground or sink, the resulting chaos could block a major shipping lane and create an environmental disaster that could bankrupt ship owners and the insurance industry alike.

With vessels of this size conventional salvage will be all but impossible. 

Despite a steady rise in air and road transport, our reliance on shipping remains overwhelming: ships move roughly 90% of all global trade, carrying billions of tons of manufactured goods and raw materials.

To cope, ship designers are paying close attention to fuel efficiency. Along with better engines and new hull designs, they are chasing economies of scale by constructing ever larger vessels that burn less fuel for each tonne of cargo they carry.

These monsters are already plying the seas. There are 29 bulk carriers about 360 meters long (1181 feet). Designed to feed Brazilian iron ore to furnaces in China and Europe, each is capable of carrying up to 400,000 tons. More are on order.

The most rapid increase in size has come with container ships. In the 1990s the largest carried about 5000 shipping containers; the Maersk Mc-Kinney Møller can carry 18,000. Shipyards will soon begin work on the next generation, some 40 meters longer and capable of carrying 20,000 containers, and there are rumors of even larger vessels to come.

But with record-breaking size comes the risk of eye-watering costs should anything go wrong. Roughly 1000 serious shipping incidents occur each year, and according to a recent analysis by a group of maritime insurers, the costs of repair – or in the worst-case scenario, wreck salvage and clean-up – are set to rise rapidly. The value of a single mega-ship’s cargo, for instance, can easily exceed $1 billion, while stricter environmental legislation in many parts of the world means that should a wreck create pollution, those liable can expect to be hit with mammoth clean-up bills.

When the Costa Concordia ran aground, “the easiest and cheapest way of removing the Concordia would have been to cut her up in situ and take her away in pieces,” says Mark Hoddinott of the International Salvage Union. However, the island of Giglio, where the Costa Concordia came to grief, is part of a marine park on one of Italy’s most environmentally sensitive coasts. As a result, the authorities insisted she be moved in one piece. The location of the wreck was fortunate: 2,380 tonnes of fuel could be removed rather than leaking into the sensitive environment.

The site is close to some of the biggest shipyards in Europe, so the salvage equipment could reach the wreck quickly. It is also relatively sheltered, making the key step of fuel removal easier, and since the Costa Concordia was designed for short cruises, it only carried small amounts of fuel.

Had it been a mega-ship it would have been a different story, even in such sheltered waters, says Sloane. Such vessels carry more than 20,000 tonnes of fuel, so removing it is a major operation. And since fuel must be removed first, any delay will exacerbate the disaster. “I don’t think there’s many places in the world where you could do an operation on this sort of scale,” Sloane says.

In many ways removal of cargo containers is even harder, as these 6-meter-long boxes can be stacked up to nine deep above and below deck. The lower decks often include built-in metal guideways designed to speed up loading and unloading in harbour, but with the hull at an angle, these can jam containers together. Several recent salvage operations have sent a shuddering warning through the industry.

In 2007, for example, a container ship called Napoli ran aground in Lyme Bay on the UK’s south coast after her engine room flooded. The cold conditions meant the vessel’s 3500 tonnes of fuel had to be warmed before it could be pumped out, so almost three weeks passed before the salvage teams could begin to remove the 2300 cargo containers. Even then, salvors had to man-handle lifting chains around each cargo container before removal so it took three and a half months to recover them all. Still unable to refloat due to damage, the hull was eventually blown apart with explosives and removed for scrap.

Worse came in 2011, when the container ship Rena ran aground off the coast of New Zealand. It was 11 days before salvors could begin controlled oil removal and a further month before the first container was removed. Eventually a giant crane was brought in but it was still slow going – just six containers per day were salvaged. Hit by bad weather, the wreck eventually broke up and the stern sank.

Compared with the latest ships, the Rena was a tiddler capable of carrying just 3351 containers, yet only 1007 were recovered in an operation that lasted more than a year. “Offshore, in a remote location, when the ship has anything over a 5-degree list, it’s almost impossible,” says Sloane. “You have to have bigger and bigger cranes, on barges, and it’s very slow and very challenging. The big ones are going to be a nightmare.”

In fact the gigantic Emma Maersk container ship has already hit trouble. In February this year, the 397-metre-long vessel lost power off the Egyptian coast. Luckily it was brought safely to port where almost 13,500 containers were unloaded in a two-week-long shore-based operation while the hull was repaired. In less favorable weather conditions and in a more remote location, things could have been very different. Industry experts suggest that unloading the cargo of a mega-ship in the open sea could take up to three years to complete, if indeed it can be done at all.
