Menhaden going extinct — the key food source at the bottom of the food web

[ This is a really long introduction so I can reply to a comment that menhaden are “neither overfished nor experiencing overfishing.” Huh? Both their numbers and range are small compared to their original population. If there were a sharp reduction in, or a 10-year moratorium on, menhaden fishing, the populations of bluefin tuna, striped bass, redfish, bluefish, and humpback whales, to name just a few of the 79 species that feed on menhaden, would explode and create far more commercial and sport-fishing jobs than the only active Atlantic coast menhaden processing plant (Omega Protein) at Reedville, Virginia, supports.

According to a letter to Robert H. Boyles Jr., Chairman of the ASMFC Menhaden Management Board, from a fisheries biologist with over 35 years of federal fishery conservation and management experience, nationally and internationally, including 20 years in the headquarters of the National Marine Fisheries Service (NMFS):

“I am concerned about the dwindling number of Atlantic menhaden and the impact that their disappearance has on the health of coastal ecosystems, fisheries and economies. Once abundant along the entire Eastern Seaboard, the menhaden population has reached an all-time low of less than 10% of its historic size. It is not enough to limit fishing mortality by the industrial-scale menhaden industry in order to provide a maximum sustainable yield for that fishery, because menhaden are much more important ecologically as prey for higher trophic level species of the Atlantic coast. Menhaden are the primary prey item for a majority of Atlantic coast predatory fishes. But they are also a direct link in converting the primary productivity of coastal marshes into fish biomass of the other species targeted extensively by both commercial and recreational fishermen. Menhaden are unique in that they are able to digest detritus (produced by the breakdown of marsh grass into small pieces, which develop a coating of bacteria that the menhaden strips off in its digestive tract and expels, later to develop another bacterial coating, etc.). Thus, menhaden are able to convert estuarine primary productivity (by marsh grasses) into fish flesh – first their own and later, when they are preyed upon, that of vast numbers of marine predators such as striped bass and bluefin tuna. Because menhaden are able to convert marsh productivity into fish flesh and they are the primary link in the Atlantic coastal fish food web, enlightened menhaden management is thus extremely important. Without menhaden, we would not have the abundance of marine fish that exist off the Atlantic coast. This is the conclusion of scientists at NMFS’ Southeast Fisheries Science Center Laboratory at Beaufort, NC, during my tenure with the agency. I understand that …the Board is under pressure to [increase] the quota for 2015. I urge you to explicitly provide for the needs of menhaden predators before increasing menhaden catch limits for 2015 or beyond.

It’s important to note that there has been no documented increase in menhaden abundance here in the Northeast. Despite industry claims that large schools of previously uncounted adult fish are present in northern waters, these sightings appear to end in Providence, Rhode Island, well short of menhaden’s historic range through Massachusetts, New Hampshire, and Maine. More troubling for us on outer Cape Cod is the lack of juveniles (peanut bunker) that used to frequent our ocean beaches, drawing Striped Bass and Bluefish into the surf zone to the delight of the many surfcasters who live and visit here. Since 2008, the last year peanut bunker were present in our waters, there have been zero such schools here, leaving the beaches mostly devoid of Striped Bass and Bluefish through much of our brief fishing season. This is just one example of the negative consequences a diminished supply of menhaden has for our way of life, our local economy, and the health and vitality of our marine ecosystem. It’s time to hold the line with menhaden conservation, and resist those interests who would like to see a relaxing of the catch limits we worked so hard to achieve. Let’s allow menhaden the chance to recover to former levels of abundance throughout their historic range, including Massachusetts, New Hampshire, and Maine.”

Andrew David Thaler in the article “Six reasons why Menhaden are the greatest fish we ever fished”:

“You could be forgiven if you thought that the American industrial revolution was powered by whale oil. The glossy lubricant was used primarily for lighting in pre-industrial America. By the time Herman Melville published Moby Dick, the golden age of whaling was already in decline. The Civil War was its death blow. Out of that conflict came the industrial menhaden industry. By 1880, half a billion menhaden were being rendered into oil and fertilizer. There were almost three times as many menhaden ships as whaling ships. A menhaden boat could produce more oil in a week than a whaling ship could during its entire, multi-year voyage, and it could do so close to shore and out of harm’s way.

The environmental movement and fisheries ecology rose from the first menhaden collapse. George Perkins Marsh’s Man and Nature, later retitled The Earth as Modified by Human Action, published in 1864, was the first major work to link the principles of naturalism with the rigor of ecology. Its publication marks the beginning of the modern environmental movement. In Extirpation of Aquatic Animals, he points to the decimation of menhaden as one of the key examples of our impact on the oceans. In 1879, George Brown Goode released his monumental work, A History of the Menhaden, the first, and still one of the most comprehensive, studies of an American fishery. In 1880, we were running out of menhaden. The schools that Goode had studied, primarily north of Cape Cod, were gone. Even today, the menhaden’s range is a fraction of what it once was. Yet we continue to fish for them, for one very compelling reason.

Menhaden are really, really good at making more menhaden. An adult female menhaden can produce more than 300,000 eggs in a year. They become sexually mature after 2 years. If you look up fecund in a dictionary, you’ll see a picture of a menhaden. That fact alone is the reason that, even after a major fisheries collapse in 1880, they are still a viable fishery, though we have fallen from half a billion tons at the peak to a measly 300,000 tons today. This profound fecundity also means that, with reduced pressure, they still have a chance to recover.”
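
That fecundity can be sketched with a toy projection. Only the figure of ~300,000 eggs per female per year comes from Thaler; the survival rates, harvest rates, and starting stock below are illustrative assumptions, and the 2-year delay to maturity is ignored for simplicity.

```python
# Toy projection of a menhaden stock under different harvest rates.
# Fecundity (~300,000 eggs/female/year) is from the text; everything
# else is an illustrative assumption, not fisheries data.

EGGS_PER_FEMALE = 300_000
EGG_TO_ADULT_SURVIVAL = 1e-5   # assumed: ~1.5 female recruits per female/year
NATURAL_ADULT_SURVIVAL = 0.4   # assumed annual survival absent fishing

def project_females(females, years, harvest_rate):
    """Project mature females forward, harvesting a fraction each year."""
    for _ in range(years):
        recruits = females * EGGS_PER_FEMALE * EGG_TO_ADULT_SURVIVAL * 0.5
        females = (females * NATURAL_ADULT_SURVIVAL + recruits) * (1 - harvest_rate)
    return females

# With no harvest this toy stock grows ~1.9x per year; at a 60% harvest
# rate the same stock shrinks every year.
unfished = project_females(1_000_000, 10, harvest_rate=0.0)
fished = project_females(1_000_000, 10, harvest_rate=0.6)
```

Even a crude model like this shows why Thaler's point holds: with high per-female fecundity, the difference between collapse and explosive recovery is the harvest rate.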

You have to wonder if NOAA and the Atlantic States Marine Fisheries Commission (ASMFC) are captured government agencies, since the Chesapeake Bay Ecological Foundation, Inc. writes that:

Chesapeake Bay Ecological Foundation (CBEF) disagrees with the Atlantic States Marine Fisheries Commission’s (ASMFC) 2015 menhaden stock assessment’s conclusion that menhaden have not been overfished in five decades. The informed public is confused and skeptical since the assessment does not address ecological overfishing (unsustainable harvest levels that disrupt the natural balance between predators and prey). Although the 2015 assessment indicated menhaden reproductive potential may be greater than previously thought, total abundance of this indispensable prey species is low and continuing to decline.

The Pew Charitable Trusts lists 10 reasons why the menhaden catch limit should not be increased in 2017; here are a few of them:

Menhaden remain far below their historic numbers in the Northern and Southern regions of the East Coast. However, if conservation efforts continue, they can be abundant again from Maine to Florida.

  • The vast majority of the menhaden fished coast-wide are caught in one relatively small area near the mouth of the Chesapeake Bay, which is also a vital nursery area for many ocean species. An increase in the 2017 quota would intensify fishing in this important and vulnerable estuary, which is already harmed by the loss of important habitats and poor water quality.
  • The latest peer-reviewed scientific assessment identified a growing number of mature menhaden, indicating that the population is poised to increase.   But it also suggests that menhaden are still vulnerable: The overall number in recent years was near historic lows, as was the number of young fish surviving long enough to reproduce and help the population grow.
  • Giving menhaden time under current catch limits to return to their historic population size and range will deliver the highest benefit to Atlantic coast ecosystems, economies, and fishermen.

As to whether menhaden improve water quality or not, I can find only one paper on this, a laboratory study from five years ago, so this doesn’t seem to be a settled matter at all.

Consider too that predatory fish can’t make omega-3s themselves; they are high in omega-3 fatty acids because they eat menhaden, which makes them an important part of a healthy diet.

We now use 10 calories of fossil fuels to provide 1 food calorie. As oil declines (tractors, harvesters, and food distribution run on diesel fuel), as natural gas declines (it is both the feedstock and the energy source for fertilizers that can grow up to 5 times as much food per acre as in the past), and as population and immigration continue to grow exponentially (driving down wages to enrich the rich), clearly there will be food shortages some day soon, since we are at peak oil, at or close to peak coal, and nearing peak natural gas as well. Allowing menhaden to increase by decreasing catch limits, or better yet a moratorium, is an easy way to lower future suffering by increasing fisheries, and menhaden are also a great fertilizer.

Alice Friedemann, author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report ]

Alice Friedemann. March 23, 2009. Meet menhaden – before this ecologically critical fish vanishes. Ethicurean.


Ever heard of menhaden? Probably not, although perhaps you’re familiar with the fish’s other names: bunker, pogies, mossbacks, bugmouths, alewives, and fat-backs. You may be surprised to learn they’re the most important fish in Atlantic and Gulf waters.

Menhaden are the vacuum cleaners of our coasts, filtering up to four gallons of water a minute to extract phytoplankton (algae and other tiny plants). They grow to a foot long at most, yet the weight of an entire school of menhaden can equal that of a blue whale.
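
That filtering rate adds up quickly. A minimal back-of-the-envelope sketch, using the four-gallons-a-minute figure above and an assumed round-number school size (the million-fish school is an illustration, not a measured value):

```python
# Back-of-the-envelope filtering capacity of a menhaden school.
# The 4 gallons/minute per-fish rate comes from the text; the
# school size is an assumed round number for illustration.

GALLONS_PER_MINUTE_PER_FISH = 4
school_size = 1_000_000          # assumption: a modest school
minutes_per_day = 24 * 60

gallons_per_day = GALLONS_PER_MINUTE_PER_FISH * school_size * minutes_per_day
print(f"{gallons_per_day:,} gallons filtered per day")  # 5,760,000,000
```

On those assumptions a single school filters several billion gallons a day, which is why losing the schools matters so much for water clarity.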

On land, plants are at the bottom of the food chain, eaten by many herbivores—mice, rabbits, cattle, insects, and so on. In the ocean, plants are also at the bottom of the food chain. The difference is, there’s only one main herbivore: menhaden. The other filter feeders—like baleen whales, herring, and shad—eat zooplankton (tiny animals).

This gives menhaden an extraordinary weight in the oceanic ecosystem: they are the main food source of the entire food web above, and the main species keeping the ecosystem healthy, by clearing the water of excess algae.

Unfortunately, as H. Bruce Franklin documents in “The Most Important Fish in the Sea: Menhaden and America,” they’re almost all gone. And one company, Omega Protein, is systematically eliminating the few that remain, for fishmeal and poultry feed.

Men had it

When the Pilgrims first arrived in the New World, they were astounded by the abundant sea life. The rivers and coasts were teeming with 6-foot-long salmon, foot-wide oysters, and schools of 140-pound striped bass. There were so many whales criss-crossing bays, estuaries, and the coast that they were a peril to ships.

The food chain for all of this cornucopia of life depended on billions of menhaden, once so plentiful that they formed a veritable river of flesh along the Atlantic coast, writes Franklin.

I’d never heard of menhaden until my husband, who grew up in Florida, mentioned them. Just half a century ago, when he and his friends were swimming and the menhaden came through, “they looked like the shadow of a large, approaching cloud—the water boiled with fish, and everyone got out as fast as they could because there were sharks slashing through them, biting at anything that moved.”

Franklin describes menhaden schools as acting like a single organism: “Flashes of silver with flips of forked tails and splashes, whirling swiftly…in moves more dazzling than those of a modern dancer, as they seek escape from hordes of bluefish below and gulls above…a breathtaking experience.”

Menhaden were eaten by dozens of kinds of fish, as well as sea mammals and birds. (Humans don’t choose to eat them because they smell awful and are too oily. But we do eat them indirectly when we dine on menhaden predators, such as tuna, cod, shark, and swordfish.)

The Native American word for menhaden translates to “fertilizer”: they buried these fish below the corn they planted. The Pilgrims copied them, and grew triple the corn they could have otherwise. Later generations forgot about using menhaden as fertilizer, until an article about the practice in 1792 changed all that. It wasn’t long before millions of tons of menhaden were caught and dragged as far as seven miles inland to be dumped on fields, saving farmers the enormous cost of importing guano from Peru. By 1880 menhaden had also replaced whales as a source of oil, and the bits that weren’t used for oil were made into fertilizer or animal feed and shipped all over the country.

Meanwhile, wealthy landowners had permanent nets strung across rivers abutting their property, scooping up all passing fish. Unsurprisingly, fish populations declined dramatically, and by 1870, 90% were gone. Commercial fishermen and citizens desperately tried to stop the permanent nets and the menhaden fleets, but wealthy interests were able to prevent any restrictions on fishing. By 1800 salmon had been fished out of New York and Connecticut, and by 1840 there were no salmon south of Maine. When the menhaden industry was finally banned in Maine in 1879, it was too late: the menhaden were gone, and the northern fishery collapsed.

Measuring from the 1860s to today, the combined weight of all the menhaden harvested is more than that of all other commercial fish—more than all the salmon, cod, tuna, halibut, herring, swordfish, flounder, snapper, anchovies, mackerel, and so on that humanity has dragged from the water in the last century and a half.

State by state, the commercial fishing industry wiped out menhaden and went bankrupt. But it has never died out completely, because the U.S. government has spent taxpayer money to keep the industry going in states where menhaden still existed. There was no reason to do this, Franklin writes: menhaden oil, animal feed, and fertilizer have all been replaced by much cheaper petroleum and soybean substitutes. The role that menhaden play in the ocean’s food chain, however, is irreplaceable.

The ocean’s hoovers, damned

Image of menhaden catch from NOAA

One company, Omega Protein, now catches the majority of menhaden, hunting down the last few remaining schools in two of the most productive fisheries, the Gulf of Mexico and Chesapeake Bay, both of which have suffered tremendous ecological damage and fishery destruction the past few decades. More than 30 Omega spotter planes direct a fleet of 61 ships to where the menhaden swim close to the surface. Omega Protein turns the aquatic herbivores into poultry feed and fishmeal for farmed salmon, two products for which there are cheaper and less devastating alternative sources.

Menhaden are not the only forage fish species being overharvested. According to the Marine Fish Conservation Network, “Globally, about 30% of all marine fish landed each year are forage fish (anchovies, sardines, hake, herring, pollock, squid, krill) that are processed directly into fishmeal and oil and used in livestock and aquaculture feeds.” The U.N. Food and Agriculture Organization (FAO) estimates that insatiable demand from the global aquaculture industry will outstrip the available supplies of fishmeal and fish oil within the next decade.

Not only are menhaden the main food item for many fish, but they play an even more critical role in the health of any aquatic ecosystem. They filter phytoplankton out, allowing sunlight to reach the depths where aquatic plants can prosper, which increases oxygen levels, allowing shellfish and fish to thrive. When algae aren’t consumed, they erupt into toxic algal blooms, die and sink to the bottom, smothering plants and depleting oxygen. This leads to massive die-offs of all sea life within these areas and is a major contributing factor, along with agricultural run-off from the Mississippi River, to the 8,000-square-mile dead zone in the Gulf of Mexico.

If it were somehow possible to shut down the menhaden industry entirely, Franklin says, and the pitifully few remaining populations were protected and nursed back to health, then the ocean and estuaries could be cleansed, shellfish and fish populations could recover, and a new sport and commercial fishing industry could emerge as the dozens of fish that feast on menhaden return. Oysters, crabs, striped bass, and many other tasty species of seafood might thrive again if the oceans were cleared of toxic algal blooms. Far more jobs would be created if menhaden schools were to recover than would be lost if Omega Protein were forced to get out of the menhaden business.

No conservation organizations are trying to get rid of Omega Protein entirely. But legislatively, it’s been difficult even to get the fishing limits cut back or lower the bycatch of other fish. Congressmen and commissions in Mississippi, Texas, and Virginia have bowed to pressure from Omega Protein and the aquaculture industry in their states. The Marine Fish Conservation Network reports that “on Jan. 20, the Mississippi Commission on Marine Resources voted not to change any regulations or impose a catch limit on the menhaden industry. Omega’s stock rose the next day.”

The MFCN, which represents over 200 conservation and fishing groups, has a comprehensive list of related legislation and a wealth of resources about menhaden and other fisheries issues. Other sites where you can learn more or take action:

Posted in Biodiversity Loss, Extinction, Fisheries, Starvation

Thorium in the news

[ When trucks stop running, civilization as we know it ends.  Nuclear electricity doesn’t matter a rat’s ass if trucks can’t be electrified to run on batteries or overhead wires — especially the tractors and harvesters that plant and harvest billions of acres of farmland.  Even if trucks can be electrified, it won’t matter if thorium or uranium nuclear power plants can’t be flexibly ramped up and down to balance the ever-increasing amounts of renewable power generated mainly by wind and solar as fossil fuels decline.  Nuclear (and coal) plants now typically take 6 to 8 hours to ramp up or down, but wind and solar require a response within seconds as they abruptly stop and start, which is now done with natural gas, a finite resource that is fast disappearing despite all the hoopla about 100 to 200 years of energy independence.  And finally, before we start building thorium power plants, money would be better spent on determining whether the energy returned on energy invested (EROI) to mine, process, build, and maintain them is at least 8 (Lambert, Hall 2011).  Alice Friedemann ]
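
The screening test suggested in the paragraph above can be sketched in a few lines: compute lifetime energy returned over lifetime energy invested and compare it to the ~8 threshold of Lambert and Hall. The threshold is theirs; the plant figures passed in below are placeholders, not real thorium-plant data.

```python
# Sketch of an EROI screening test. The threshold of ~8 is from
# Lambert and Hall (2011); the example energy figures are made up.

EROI_THRESHOLD = 8.0

def eroi(energy_out, energy_in):
    """Lifetime energy delivered divided by the lifetime energy spent
    to mine, process, build, and maintain the plant (same units)."""
    return energy_out / energy_in

def worth_building(energy_out, energy_in, threshold=EROI_THRESHOLD):
    """True if the plant clears the minimum-EROI bar."""
    return eroi(energy_out, energy_in) >= threshold

print(worth_building(800.0, 50.0))   # EROI 16 -> True
print(worth_building(800.0, 200.0))  # EROI 4  -> False
```

The point of the sketch is only that the decision rule is simple; the hard part is measuring the energy inputs honestly over the plant's full life cycle.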

Bagla, P. November 13, 2015. Thorium seen as nuclear’s new frontier. Science 350:  726-727.

In the 1950s, U.S. nuclear scientists proposed building a fleet of nuclear-powered airplanes. That was probably a bad idea, but the molten-salt reactor technology developed for it at Oak Ridge underpins much of today’s interest in thorium.

Thorium is 3 to 4 times more abundant than uranium, harder to divert to weapons production, and it yields less radioactive waste. But thorium can’t simply be swapped in for uranium in standard reactors.  Driving the interest in thorium is the latest in a string of accidents involving uranium-fueled power reactors: the meltdowns at the Fukushima Daiichi Nuclear Power Plant in Japan in March 2011 prompted many countries to take operating reactors offline and to scale back or scuttle plans to build new ones. India plans to have a thorium power reactor running within 10 years.

Thorium holds little appeal for bomb makers: Daughter isotopes, born as thorium naturally decays, are highly radioactive, emitting gamma rays that would fry weapon electronics and make thorium-derived bombs cumbersome to store. At the same time, thorium-based fuels yield much less high-level radioactive waste than uranium or plutonium, and molten-salt reactors are touted by their backers as meltdown proof.

The catch is that thorium itself is not fissile: it is like wood too soggy for a fire and must be converted into fissile material by bombarding it with neutrons, transmuting it into fissile uranium-233.  The disaster-scarred track record of uranium reactors casts a long shadow on thorium, too. Ever since the United States during the Cold War went whole hog into uranium, “the world has been paying a price for the wrong technology choice,” argues Jean-Pierre Revol, president of the international Thorium Energy Committee in Geneva, Switzerland.

Posted in Thorium

Why the demise of civilization is inevitable

MacKenzie, D. April 2, 2008. Why the demise of civilisation may be inevitable. NewScientist.

Every civilization in history has collapsed. Why should ours be any different?

Homer-Dixon doubts we can stave off collapse completely. He points to what he calls “tectonic” stresses that will shove our rigid, tightly coupled system outside the range of conditions it is becoming ever more finely tuned to. These include population growth, the growing divide between the world’s rich and poor, financial instability, weapons proliferation, disappearing forests and fisheries, and climate change. In imposing new complex solutions we will run into the problem of diminishing returns – just as we are running out of cheap and plentiful energy.

The stakes are high. Historically, collapse always led to a fall in population. “Today’s population levels depend on fossil fuels and industrial agriculture,” says Tainter. “Take those away and there would be a reduction in the Earth’s population that is too gruesome to think about.”

If industrialized civilization does fall, the urban masses – half the world’s population – will be most vulnerable. Much of our hard-won knowledge could be lost, too. “The people with the least to lose are subsistence farmers,” Bar-Yam observes, and for some who survive, conditions might actually improve. Perhaps the meek really will inherit the Earth.

A few researchers have been making such claims for years. Disturbingly, recent insights from fields such as complexity theory suggest that they are right. It appears that once a society develops beyond a certain level of complexity it becomes increasingly fragile. Eventually, it reaches a point at which even a relatively minor disturbance can bring everything crashing down.

Some say we have already reached this point, and that it is time to start thinking about how we might manage collapse.
Environmental mismanagement

History is not on our side. Think of Sumeria, of ancient Egypt and of the Maya. In his 2005 best-seller Collapse, Jared Diamond of the University of California, Los Angeles, blamed environmental mismanagement for the fall of the Mayan civilization and others, and warned that we might be heading the same way unless we choose to stop destroying our environmental support systems.

Lester Brown of the Earth Policy Institute in Washington DC agrees. He has long argued that governments must pay more attention to vital environmental resources.

Others think our problems run deeper. “For the past 10,000 years, problem solving has produced increasing complexity in human societies,” says Joseph Tainter, an archaeologist at Utah State University, Logan, and author of the 1988 book The Collapse of Complex Societies.

If crops fail because rain is patchy, build irrigation canals. When they silt up, organize dredging crews. When the bigger crop yields lead to a bigger population, build more canals. When there are too many for ad hoc repairs, install a management bureaucracy, and tax people to pay for it. When they complain, invent tax inspectors and a system to record the sums paid. That much the Sumerians knew.
Diminishing returns

There is, however, a price to be paid. Every extra layer of organization imposes a cost in terms of energy, the common currency of all human efforts, from building canals to educating scribes. And increasing complexity, Tainter realized, produces diminishing returns. The extra food produced by each extra hour of labor – or joule of energy invested per farmed hectare – diminishes as that investment mounts. We see the same thing today in a declining number of patents per dollar invested in research as that research investment mounts. This law of diminishing returns appears everywhere, Tainter says.
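
Tainter's law can be illustrated numerically. The square-root curve below is a stand-in, not a fitted model: any concave output curve shows the same effect, that each extra joule of investment buys a smaller gain than the one before.

```python
# Numeric illustration of diminishing returns: if output grows like the
# square root of the energy invested (a stand-in curve, chosen only
# because it is concave), each extra unit of investment yields less
# additional output than the previous one.
import math

def output(energy_invested):
    """Total output as a concave function of cumulative investment."""
    return 100 * math.sqrt(energy_invested)

# Marginal gain from each successive unit of investment.
marginal_gains = [output(i + 1) - output(i) for i in range(1, 6)]

# Each successive marginal gain is strictly smaller than the last.
assert all(a > b for a, b in zip(marginal_gains, marginal_gains[1:]))
print([round(g, 1) for g in marginal_gains])
```

The same shape shows up in Tainter's examples: patents per research dollar, food per farmed hectare, oil per joule spent extracting it.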

To keep growing, societies must keep solving problems as they arise. Yet each problem solved means more complexity. Success generates a larger population, more kinds of specialists, more resources to manage, more information to juggle – and, ultimately, less bang for your buck.

Eventually, says Tainter, the point is reached when all the energy and resources available to a society are required just to maintain its existing level of complexity. Then when the climate changes or barbarians invade, overstretched institutions break down and civil order collapses. What emerges is a less complex society, which is organised on a smaller scale or has been taken over by another group.

Tainter sees diminishing returns as the underlying reason for the collapse of all ancient civilizations, from the early Chinese dynasties to the Greek city state of Mycenae. These civilizations relied on the solar energy that could be harvested from food, fodder and wood, and from wind. When this had been stretched to its limit, things fell apart.
An ineluctable process

Western industrial civilization has become bigger and more complex than any before it by exploiting new sources of energy, notably coal and oil, but these are limited. There are increasing signs of diminishing returns: the energy required to get each new joule of oil is mounting and although global food production is still increasing, constant innovation is needed to cope with environmental degradation and evolving pests and diseases – the yield boosts per unit of investment in innovation are shrinking.

Is Tainter right? An analysis of complex systems has led Yaneer Bar-Yam, head of the New England Complex Systems Institute in Cambridge, Massachusetts, to the same conclusion that Tainter reached from studying history. Social organizations become steadily more complex as they are required to deal both with environmental problems and with challenges from neighboring societies that are also becoming more complex, Bar-Yam says. This eventually leads to a fundamental shift in the way the society is organized.

“To run a hierarchy, managers cannot be less complex than the system they are managing,” Bar-Yam says. As complexity increases, societies add ever more layers of management but, ultimately in a hierarchy, one individual has to try and get their head around the whole thing, and this starts to become impossible.
Increasing connectedness

Things are not that simple, says Thomas Homer-Dixon, a political scientist at the University of Toronto, Canada, and author of the 2006 book The Upside of Down. “Initially, increasing connectedness and diversity helps: if one village has a crop failure, it can get food from another village that didn’t.”

As connections increase, though, networked systems become increasingly tightly coupled. This means the impacts of failures can propagate: the more closely those two villages come to depend on each other, the more both will suffer if either has a problem. “Complexity leads to higher vulnerability in some ways,” says Bar-Yam. “This is not widely understood.”

The reason is that as networks become ever tighter, they start to transmit shocks rather than absorb them. “The intricate networks that tightly connect us together – and move people, materials, information, money and energy – amplify and transmit any shock,” says Homer-Dixon. “A financial crisis, a terrorist attack or a disease outbreak has almost instant destabilizing effects, from one side of the world to the other.”

For instance, in 2003 large areas of North America and Europe suffered blackouts when apparently insignificant nodes of their respective electricity grids failed. And this year China suffered a similar blackout after heavy snow hit power lines. Tightly coupled networks like these create the potential for propagating failure across many critical industries, says Charles Perrow of Yale University, a leading authority on industrial accidents and disasters.
Credit crunch

Perrow says interconnectedness in the global production system has now reached the point where “a breakdown anywhere increasingly means a breakdown everywhere”. This is especially true of the world’s financial systems, where the coupling is very tight. “Now we have a debt crisis with the biggest player, the US. The consequences could be enormous.”

“The networks that connect us can amplify any shocks. A breakdown anywhere increasingly means a breakdown everywhere”

“A networked society behaves like a multicellular organism,” says Bar-Yam, “random damage is like lopping a chunk off a sheep.” Whether or not the sheep survives depends on which chunk is lost. And while we are pretty sure which chunks a sheep needs, it isn’t clear – it may not even be predictable – which chunks of our densely networked civilization are critical, until it’s too late.

“When we do the analysis, almost any part is critical if you lose enough of it,” says Bar-Yam. “Now that we can ask questions of such systems in more sophisticated ways, we are discovering that they can be very vulnerable. That means civilization is very vulnerable.”
Tightly coupled system

Scientists in other fields are also warning that complex systems are prone to collapse. Similar ideas have emerged from the study of natural cycles in ecosystems, based on the work of ecologist Buzz Holling, now at the University of Florida, Gainesville. Some ecosystems become steadily more complex over time: as a patch of new forest grows and matures, specialist species may replace more generalist species, biomass builds up and the trees, beetles and bacteria form an increasingly rigid and ever more tightly coupled system.

“It becomes an extremely efficient system for remaining constant in the face of the normal range of conditions,” says Homer-Dixon. But unusual conditions – an insect outbreak, fire or drought – can trigger dramatic changes as the impact cascades through the system. The end result may be the collapse of the old ecosystem and its replacement by a newer, simpler one.

Globalization is resulting in the same tight coupling and fine-tuning of our systems to a narrow range of conditions, he says. Redundancy is being systematically eliminated as companies maximize profits. Some products are produced by only one factory worldwide. Financially, it makes sense, as mass production maximizes efficiency. Unfortunately, it also minimizes resilience. “We need to be more selective about increasing the connectivity and speed of our critical systems,” says Homer-Dixon. “Sometimes the costs outweigh the benefits.”


Posted in 3) Fast Crash, Interdependencies

Effects of biodiesel on diesel engines: John Deere

[ Since petroleum is finite, the most important focus of U.S. energy research ought to be keeping trucks operating, because civilization ends when trucks stop running.  Ideally this would be done with a “drop-in” fuel that can be burned in existing diesel engines. That way we don’t have to scrap the trillions of dollars invested in trucks, transportation, and oil pipeline infrastructure, and we save the enormous amount of energy, cost, and materials required to build a new fuel distribution system, service stations, new or modified trucks, and so on.

Since the most important trucks are the ones that plant and harvest food, here’s what John Deere has to say about using biodiesel in their engines.

For an even more detailed account of how biodiesel affects engines made to burn petroleum-based diesel, see:

Radich, A. 2004. Biodiesel performance, costs, and use. Energy Information Administration.

Alice Friedemann, author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: KunstlerCast 253, KunstlerCast 278, Peak Prosperity]

John Deere. 2016. Using Biodiesel in John Deere Engines.

All John Deere engines can use biodiesel blends. B5 blends are preferred, but concentrations up to 20 percent (B20) can be used, provided the biodiesel in the fuel blend meets the standards set by the American Society for Testing and Materials (ASTM) D6751 or European Standard (EN) 14214.

John Deere engines with exhaust filters should not use biodiesel blends above B20. Concentrations above B20 may harm the engine’s emissions control system. Specific risks include, but are not limited to, more frequent regeneration, soot accumulation, and more frequent ash removal. For these engines, John Deere-approved fuel conditioners containing detergent/dispersant additives are required when using B20, and recommended when using lower biodiesel blends.

John Deere engines without exhaust filters can operate on biodiesel blends below and above B20 (up to 100 percent biodiesel); however, they should be operated at levels above B20 ONLY if the biodiesel is permitted by law and meets the EN 14214 specification. Engines operating on biodiesel blends above B20 may not fully comply with or be permitted by all applicable emissions regulations. For these engines, John Deere-approved fuel conditioners containing detergent/dispersant additives are required when using biodiesel blends of B20 or higher, and recommended when using lower biodiesel blends.
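[ The blend rules above reduce to a simple decision. Here is a hedged sketch in Python; the function and its names are mine for illustration, not any John Deere API, and the thresholds are taken from the guidance quoted above: ]

```python
# Sketch of the John Deere blend limits described above (illustrative only).

def max_biodiesel_blend(has_exhaust_filter: bool,
                        meets_en14214: bool = False,
                        permitted_by_law: bool = False) -> int:
    """Return the maximum biodiesel percentage per the guidance above."""
    if has_exhaust_filter:
        return 20  # engines with exhaust filters: B20 is the ceiling
    # Engines without exhaust filters may exceed B20 only if the fuel
    # meets EN 14214 and local law permits it.
    if meets_en14214 and permitted_by_law:
        return 100
    return 20

def conditioner_required(blend_percent: int) -> bool:
    """Fuel conditioner is required at B20 and above, recommended below."""
    return blend_percent >= 20

print(max_biodiesel_blend(has_exhaust_filter=True))   # 20
print(max_biodiesel_blend(False, meets_en14214=True, permitted_by_law=True))  # 100
```

[ Note that a conditioner comes out required in every B20-or-higher case, which matches both paragraphs above. ]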

Biodiesel impacts all diesel engines — no matter what brand. In an effort to ensure every John Deere engine user enjoys a positive experience, we want you to know the benefits as well as the cautions of using biodiesel.

Material compatibility

  • Through repeated exposure, biodiesel can seep through certain seals, gaskets, hoses, elastomers, glues, and plastics. This is more of a problem in older engines.
  • Natural rubber, nitrile, and butyl rubber are particularly vulnerable to degradation.
  • Brass, bronze, copper, lead, tin, and zinc can accelerate the oxidation of biodiesel and create deposits in the engine.

Performance

  • Compared to conventional petroleum diesel fuel, B20 will result in slight reductions in power and fuel economy. Expect a 2% reduction in power and a 3% reduction in fuel economy when using B20 biodiesel. Expect up to a 12% reduction in power and an 18% reduction in fuel economy when using B100.
  • Biodiesel can accelerate the degradation of crankcase oil.
  • When using biodiesel fuel, the engine oil level must be checked daily.
  • In no instance should the fuel dilution of the oil be allowed to exceed 5%. OILSCAN™ can be used to verify fuel dilution levels.
  • Fuel should be sampled periodically to ensure a consistent percentage of biodiesel.
  • Biodiesel can reduce water separator efficiency.
  • Biodiesel can cause cold weather flow degradation.
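[ The derating figures above imply straightforward arithmetic. A sketch follows; the linear interpolation between the B20 and B100 data points is my assumption for illustration, not John Deere’s: ]

```python
# Derating figures from the text: B20 -> 2% power / 3% fuel economy loss,
# B100 -> up to 12% power / 18% fuel economy loss.
# Linear interpolation between those two points is an assumption.

def derate(blend_percent: float) -> tuple[float, float]:
    """Return (power_loss_pct, fuel_economy_loss_pct) for blends B20..B100."""
    if not 20 <= blend_percent <= 100:
        raise ValueError("interpolation only sketched for B20 through B100")
    frac = (blend_percent - 20) / 80
    power_loss = 2 + frac * (12 - 2)
    economy_loss = 3 + frac * (18 - 3)
    return power_loss, economy_loss

# Example: a 200 hp engine on B100 loses about 12% of its power.
power_loss, economy_loss = derate(100)
print(power_loss, economy_loss)          # 12.0 18.0
print(200 * (1 - power_loss / 100))      # 176.0
```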

Storage and Handling

  • To improve storage of biodiesel fuels, John Deere recommends the use of a fuel conditioner. To be effective, the conditioner needs to be added when the fuel is fresh (close to the time it is produced). Periodic testing of the fuel is recommended to ensure it continues to meet specifications.
  • Fuel conditioners can improve fuel flow in cold temperatures and oxidation stability in the summer.
  • Tanks should be kept as full as possible to minimize condensation because water accelerates microbial growth.
  • Acceptable storage tank materials include aluminum, steel, fluorinated polyethylene, fluorinated polypropylene, Teflon®, and most fiberglass.
  • Sedimentation and water should be removed on a routine basis.
  • New fuel filters should be installed when biodiesel is introduced to older or used engines. For the first two changes, the fuel filter life will be half the standard.
  • Biodiesel might cause corrosion and deposit formation due to higher acidity.

Emissions

  • Users are responsible for compliance with local emissions regulations limiting the use of biodiesel in emissions-certified engines.
  • Biodiesel tends to increase NOx emissions while reducing smoke.
  • The use of biodiesel blends above B20 can impact the performance and maintenance of exhaust filters.

This list of considerations is not intended to be all-inclusive. Consult your engine operator’s manual, your local John Deere engine distributor or equipment dealer, or the John Deere website for more information.

Warranty

  • The John Deere warranty covers only defects in material and workmanship as manufactured and sold by John Deere. Failures caused by poor quality fuel of any type cannot be compensated under our warranty.
  • IMPORTANT: Raw pressed vegetable oils are NOT acceptable for use as fuel in any concentration in John Deere engines. Their use could cause engine failure.

Restore wild bison

[ Native wild animals have the least impact on ecosystems. While cattle plod along in line with one another, bison dance their own unique steps across the landscape, and don’t wear the deep rutted grooves that erode soil the way cattle do.  As toxic invasive species spread across rangeland beyond our ability to control them, domestic cattle are less likely to cope. So let’s bring the bison back now, so our descendants can once again hunt them on horseback a century (or less) from now.


DeWoody, J.A. August 29, 2014. Heirloom genomes and bison conservation. A book review of James A. Bailey’s 2013 “American Plains Bison. Rewilding an Icon.” Science Vol. 345:1009

James A. Bailey, who was at Colorado State University, wrote this book “to describe some details of the breadth and depth of wildness in American plains bison, and to show how bison wildness is threatened by creeping domestication.”  It is a sobering assessment of bison biology today. Bailey’s thesis is that most bison advocates undervalue wildness to the detriment of bison and our sense of free will.

Early on, it becomes apparent that this book will not be a dusty academic treatise that only looks backward: Bailey quickly makes his foray into contemporary issues and notes that today, only two bison herds in the contiguous United States live in the presence of any nonhuman predators. How can we expect bison to continue a natural journey when we have removed predators as evolutionary drivers?

The book then proceeds to provide the obligatory summary of how humans devastated bison populations. Bailey does so in depressingly successful fashion, replete with photos and shameful anecdotes of our ancestors’ exploits.

Bailey drives home the point that the maintenance of bison in small pens is a travesty. Our society should instead retain elements of bison wildness by challenging them in a natural manner. This means with competitors, predators (wolves, bears, and humans), pathogens, and the environment itself. Wild plains bison evolved historically in the presence of pathogens and predators, and they remain the best means to control population sizes.

Bailey convincingly argues that traits necessary for the survival of wild animals are ill-adapted for captivity and vice versa. Animal production (i.e., farming or ranching) is insufficient to maintain the historical and biological integrity of wild animals. Many attributes of wild bison populations (e.g., dominance hierarchies) simply do not exist in most captive populations because artificial breeding schemes selectively avoid them. As a conservationist and hunter myself, I agree with this sentiment and particularly like his admonition that hunters and biologists must avoid artificial selection (e.g., for horn size) to the extent possible.

Bailey analyzes the 44 largest conservation herds of plains bison in the United States by considering a number of factors related to “wildness,” including range size, supplemental feeding, the presence of natural predators, sex ratio, and disease management (lack thereof being most consistent with wildness). The results are a bit disheartening: only two herds (Yellowstone and Utah’s Henry Mountains) stand out as justifiably “wild.”

In the remaining herds, inbreeding, hybridization, and artificial selection (domestication) reduce the wildness of native bison and diminish their capacity to provide natural ecosystem services that are crucial to many other plains species. A variety of other species are described (e.g., prairie chickens and Arkansas darters) that could benefit from the establishment of large bison herds.

Bailey then switches gears, from esoteric discussions about why we should conserve wild bison to how we actually do so. The vast majority of states have no recovery plan for bison, and their regulatory status impedes conservation in most Western states because bison are not managed like most wildlife species. Bailey does not wave his hands and say “Let’s rewild bison everywhere,” but he provides detailed suggestions about where and how to conserve wild bison. The practical problems that are apt to confound bison rewilding are clearly recognized, including the uncertainties of global warming, the (incomprehensible) fact that many states do not legally recognize bison as wildlife, and the tendency to relegate bison to poor-quality lands because they will be easiest (and cheapest) to acquire. He argues that an objective review of bison status under the Endangered Species Act would at least identify them as threatened and recognizes that it might take a century or more to widely restore wild plains bison in the United States.

Bailey’s musings about nature resonated deeply, in large part because of the author’s intuitive understanding of wildness. He writes that “Wildness is the most unique and irreplaceable characteristic of wildlife. … In a world where humans increasingly restrict their own freedom by crowding and monotonizing their environment, wild bison should be retained, at least as a symbol of what we have sacrificed in domesticating and civilizing ourselves.”

The plight of the bison should strike a chord with Bailey’s intended general audience, but ecologists, evolutionary biologists, and wildlife managers will find this book an effective primer on practical conservation biology. It should be required reading for all wildlife students; I know it will be for mine.


A U.S. Senate hearing on T. Boone Pickens plans for natural gas and wind to reduce oil dependence

[ This session is unusual in that the words “peak oil” are spoken several times, and M. King Hubbert, James Howard Kunstler, and Matt Simmons are lauded.   Gal Luft points out that “10 years ago, Osama bin Laden predicted that oil would be $144 a barrel. Everybody laughed at him. Oil was only $12 a barrel at the time. He was right.”

Pickens’ ideas about running transportation on natural gas haven’t worked out so far.  It was hoped that 20% of trucks would be running on natural gas by now, but only 3% are, for many reasons that I explain in my book “When Trucks Stop Running: Energy and the Future of Transportation.”

At least Pickens realizes that heavy-duty transportation, especially trucks, is what matters most.  Yet cars dominate discussions and get the lion’s share of funding for “energy solutions.”  And guess what: people drive more miles when cars get more efficient, undoing the oil saved.  Even if people drove less, so what?  Trucks, trains, and ships BURN DIESEL. Cars burn gasoline.  Diesel engines can’t run on gasoline (or ethanol, diesohol, and many other fuels).  Diesel engines are just as important to our high level of civilization as the diesel they burn, because they’re twice as efficient as gasoline engines and far more powerful, lasting up to 40 years and a million miles.

Alice Friedemann ]

T. Boone Pickens

Senate 110-1023. July 22, 2008. Energy security. An American imperative.  U.S. Senate hearing.

Excerpts from this 175 page hearing follow:

T. BOONE PICKENS, Founder & CEO, BP Capital Management

We had produced 1 trillion barrels of oil at the turn of the century. It is interesting because if you look at King Hubbert’s extension, peak oil, and what would happen, the guy was great, in my estimation. I am a disciple.  I don’t think there are 2 trillion barrels of oil.  You may say take the oil shale on the western slope and this and that and everything. You can add up a bunch of stuff. When you add it up, it is going to be very expensive oil. But in looking at conventional oil—I live and you live and everybody in this room lives in the hydrocarbon era, and that era started with the automobile in 1900.

Half of the oil that I see out there had been produced by the year 2000.

Now, we have another trillion barrels, and you say, well, that is another hundred years. No. You started slow, ramped up, and now the next trillion is going to go out of the system within the next 50 years. So you are going to be forced to abandon the hydrocarbon era.

Can you imagine researchers 500 years out that come back and look at us? They are going to say, ‘‘That was a strange crowd. They lived on oil as a fuel.’’

We are going to have to make it to the next fuel. But what is going to happen, if I am right on what I am trying to do, I am going to awaken the American people, and they are going to see what they are up against. When they walk out of a room, they will turn off the lights. They do not do that now.

The Pickens Plan starts with harnessing wind and building solar capabilities. We are blessed with some of the best wind and solar resources in the world.  The plan substitutes electricity generated by natural gas-fired plants with wind-generated electricity. Natural gas-fired is 22 percent; the wind is going to replace that 22 percent. The natural gas freed up is directed to transportation needs of the country. The natural gas is cheaper, cleaner than gasoline, and its supply is plentiful. And, most of all, it is American.

But natural gas is nothing more than a bridge to the next fuel because when you get to 2050, we are pretty well maxed out on hydrocarbons as a transportation fuel. I almost think it is divine intervention to have natural gas show up at such a critical time for this country, and to be able to use it as a bridge to the next fuel in the next 20 or 30 years.

And 70% of the oil is used for transportation. When a barrel of oil comes to the United States today, it will be moved to a refinery, refined, then go into marketing, then go into our cars, and in 4 months it is gone. It is gone. We burn it up. It is out of here. And so we have to get a hold of this situation.


Senator COLLINS. How much of the solution also should encompass energy conservation?

Mr. PICKENS. Oh, it has got to be on page 1, of course. We have got to conserve. There is no question about that. We have been very wasteful. But in our defense, we had cheap oil. And as long as we had cheap oil—I don’t know whether you have seen Jim Kunstler.  I went over to Southern Methodist University (SMU) and heard him the other night. He is worth hearing. He is a generalist, but he tells us where we made the mistakes. We did not develop our rail system. You look at the world today, we go places and we want to ride on a 200-mile-an-hour train. We have to go to a foreign country to do that. We don’t have that. Why don’t we have it? Because we had cheap oil. It didn’t make sense for us to. It was expensive. We were going to subsidize it.   And we built too far away from our work. He says you are going to move to your work now because of the cost of energy. And it was really interesting because this was 2 years ago and the guy nailed it. I listened to what he had to say. I watched what has happened, and he was right on.

If you go with my plan and get 400,000 megawatts of wind in the central part of the country, you have helped the economy. Now, what is the cost of your energy? I am guessing in 10 years you are going to be a long way down the track to an electric vehicle. But, remember, an electric vehicle does not do heavy duty. So you are going to have to continue to use natural gas with heavy duty vehicles.

Ethanol is a light-duty fuel. Ethanol cannot work for heavy duty. But natural gas can. So I am approaching it with the view that natural gas would be for heavy duty, first of all. Mandate to the fleets that they have got to go to natural gas. 38 percent of the fuel used in America is used to move goods. And that is done by trucks.

Geoffrey Anderson, President & CEO Smart Growth America

The real opportunity out there right now is to allow people to drive less and to be able to do more. We can do that by building more walkable and complete communities. A lot of the growth in oil use has been as a result of spread out landscapes that have no options besides driving.  There is a real move now to create more walkable communities where homes are closer to jobs, shops are closer to work, and all of these things can be reached either on foot, by bike, with transit, or by shorter car trips.

Real estate and our research indicate that about a third of the market is interested in having more walkable communities, more compact communities. The fact is that for the last 50 years, we have essentially built drive-only communities, so the two-thirds of the market that really is interested in that product is well provided for.

Work trips only account for 25 to 35% of trips a household takes.  Denser communities mean kids can walk to school (50% used to, just 11% now), and daily errands require shorter trips.  The current way we are building communities is locking in oil dependence in the transportation sector.

Senator Lieberman.  The near total dependence of our economy, the energy sector of it—and particularly the transportation sector—on oil is weakening our Nation’s position in the world while enriching and strengthening a lot of countries in the rest of the world, many of them volatile and some of them just plain hostile to the United States of America. For well over a generation, America’s leaders have seen this growing dependence on foreign oil but essentially sat back and watched passively as trillions of dollars of our American, hard-earned wealth has been used to buy that oil and thereby go to countries abroad. And during that more than a generation, America’s leaders have done little or nothing about that problem. Apparently, it took $4-a-gallon gasoline to wake up the American people and their leaders here in Washington, to make all of us angry and anxious enough to get serious about breaking our national dependency on foreign oil.

Senator Collins. Beyond the impact on countless families struggling with high costs, our growing dependence on foreign oil is a threat to our national and economic security. One of our witnesses, Mr. Pickens, has vividly illustrated our ever-increasing dependence on foreign sources of oil in the Middle East and Venezuela. We are impoverishing ourselves while enriching regimes that are in many cases hostile to America. Ending our dependence on foreign oil and securing our own energy future is an American imperative. Our Nation must embrace a comprehensive strategy to reduce, and ultimately eliminate, our reliance on Middle East oil. We must expand and diversify American energy resources, and while doing so, improve our environment.

Our Nation missed an enormous opportunity on another October day 35 years ago. On October 17, 1973, the Organization of Arab Petroleum Exporting Countries, the predecessor of the Organization of Petroleum Exporting Countries (OPEC), hit the United States with an oil embargo. The immediate results were soaring gasoline prices, fuel shortages, lines at filling stations, and an economic recession. Unfortunately, after the immediate crisis passed, the long-term result was a steady increase in oil imports and a dependence that worsens each day. The 1973 embargo was a wake-up call that we failed to heed. The current crisis is a fire alarm that we must not ignore.

It also requires action by government. From establishing a timeline for energy security to undertaking critical investments to stimulate research in alternatives to expanding the production and conservation tax credits, government has a critical role to play.

Mr. Pickens.  In 1945, we were exporting oil to our allies. By 1970, we were importing 24% of our oil. By the 1980s, it was 37%. And in 1991, during the Gulf War, it was 42%. Today, we are approaching 70%. Much of our dependency is on oil from countries that are not friendly, and some would even like to see us fail as a democracy and as the leader of the free world. I am convinced we are paying for both sides of the Iraq war. We are giving them tools to accomplish their mission without ever having to do anything but sell us oil. This is more than a disturbing trend line. It is a recipe for national disaster. It has gone on for 40 years now. This is a crisis that cannot be left to the next generation to solve, and it is a shame if we do not do something about it. And we can, without bringing our economy and way of life to a halt.

I will tell you what [the American people] do understand. They know it is something very bad about energy. They do not think they are being told the truth about energy. And it is confusing to them. I think when we come out of this, by the time we get—I want to elevate this into the presidential debate, and it is not there yet. OK. Elevate it there. By the time we get the elections over, whoever wins, the American people are going to demand they know the truth about energy, they know what they are up against, and they will respond. We will see the energy use go down dramatically when they see what it is going to cost. They can see that it does not have anything to do with Exxon or Chevron or anybody else running up the price. It does not have anything to do with some speculator on Wall Street. That is not what we are faced with. We are faced with 85 million barrels a day of production in the world, and we are using 25 percent of it, with 4 percent of the population, and we only have 3 percent of the reserves. In the United States, we have nothing to do with the price of oil. We only have 3 percent of the reserves.
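[ Pickens’ consumption figure is easy to check: 25 percent of 85 million barrels a day is about 21 million barrels a day, roughly in line with actual U.S. oil consumption in 2008. A quick arithmetic sketch, added for illustration: ]

```python
# Checking Pickens' testimony: world production of 85 million barrels/day,
# with the U.S. consuming 25 percent of it.
world_production = 85_000_000   # barrels per day
us_share = 0.25
us_consumption = world_production * us_share
print(us_consumption)  # 21250000.0 barrels per day
```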

SENATOR VOINOVICH. Wind produces about 1.5 percent of our energy in this country. I think renewables are about—let’s see, about 9 percent, most of it is hydroelectric. How can you ramp that up over a quick period of time? And, second of all, as you know, down in Texas you have had some times when the wind just kind of stopped and you have had some reliability problems. And if you are going to use wind, you know that if you are going to have reliability, you are going to have to back up that wind with some ordinary baseload energy generation.

SENATOR DOMENICI.  You are so right that we must get the people to understand; that the United States is sending so much of our resources to foreign countries just to acquire crude oil; that it should be doubtful in the minds of intelligent people as to whether America can continue this kind of exportation of our assets, of our resources to foreign countries for 5 or 10 years. I actually do not believe we can. I believe we will become poorer and poorer and poorer as we send $500 to $700 billion a year overseas for crude oil. We are in a real mess. You are not against us opening more of the offshore assets of the United States where there are 85 percent that are locked up in a moratorium of one type or another and you cannot drill even if you wanted to. Are you on the side of those who say lift those and start drilling in an appropriate——

Mr. PICKENS. I am saying do everything you can do to get off of foreign oil, is what I am saying.

Senator DOMENICI. And that is one.

Mr. PICKENS. That is one. It is not going to do it.  It is not big enough. You do not have enough reserves in the offshore to do it. I think you are going to get a rude awakening as to value of the east and west coast when it is opened up and when it is put up for sale. When those tracts are put up for sale, I think you are going to be surprised at the [low] price you get for the tracts [ because most of the remaining oil is in the Gulf ].

There is no question that if I am right on the peak oil at 85 million barrels, in 10 years we are going to have less than 85 million barrels available to the world. Now, the question is: What is the demand? I have to think in 10 years the demand for oil— because the price now is going up. In 10 years, you are going to have $300 a barrel oil. Maybe higher, I don’t know. But this is really— it is a tough question to look out 10 years on this one. But I can tell you this: In 10 years, if we continue to drift like we are drifting, you are going to be importing 80 percent of your oil. And I promise you, it will be over $300 a barrel.

Senator VOINOVICH. I went to some war games at the National Defense University, and they talked about the vulnerability that we have. And some folks out at Stanford said that in the next 10 years there is an 80-percent chance that a cut-off of oil will bring our economy to its knees. So we have a certain urgency right now to get on with this.

GAL LUFT, Institute for the Analysis of Global Security
When we talk about national security, we need to realize that 63% of the world’s natural gas reserves are in the hands of Russia, Iran, Qatar, Saudi Arabia, and United Arab Emirates. These countries are now in the process of developing and discussing the establishment of a natural gas cartel. So shifting our transportation sector from oil to natural gas is like jumping from the frying pan into the fire. This is a spectacularly bad idea for us to shift our transportation sector from one resource that we do not have to another that we do not have. And we only have 3 percent of the world reserves of natural gas. The situation is very similar to our situation with regards to oil.

Just to remind the Committee that 10 years ago, Osama bin Laden predicted that oil would be $144 a barrel. Everybody laughed at him. Oil was only $12 a barrel at the time. He was right, and as a result, we are exporting hundreds of billions of dollars. This is the first year that we actually are going to pay foreign countries more than we pay our own military to protect us.

In order to understand what should be the road to energy security, we must first understand why we are where we are. There are many reasons why we have the oil crisis now. Of course, strong demand in developing Asia, speculation, geological decline, geopolitical risk, all of them have contributed their share. But, in my view, by far the main culprit is OPEC’s reluctance to ramp up production. This cartel owns 78 percent of the world’s proven reserves, and it produces about 40 percent of its oil production.

Our energy security problem stems from the fact that our transportation sector is dominated by petroleum. And while being in a hole, we continue to dig.

We put on the road annually 16 million new cars, almost all of them gasoline only, each with an average street life of 16.8 years. A Senator elected in 2008 will witness the introduction of 102 million gasoline-only cars during his or her 6-year term.

This means that neither efforts to expand petroleum supply nor those to crimp petroleum demand through increased Corporate Average Fuel Economy (CAFE) standards will be enough to reduce America’s strategic vulnerability. Such non-transformational policies at best buy us a few more years of complacency, while ensuring a much worse dependence down the road when America’s conventional oil reserves are even more depleted.

[ Luft then goes on with solutions for CARS: ethanol, methanol, open fuel standards, electric. Whether or not the Pickens plan makes sense, at least Pickens understands that it is heavy-duty transportation that needs to be kept running. Clearly Luft hasn’t read Matt Simmons’ “Twilight in the Desert: The Coming Saudi Oil Shock and the World Economy”, which makes the case that the Saudis and other Middle Eastern oil-producing nations greatly exaggerated their reserves. And why should the Saudis produce oil as fast as we want them to, when they’d prefer their oil to last many more generations? Also, producing oil at too fast a rate can damage the oil field and reduce the ultimate amount of oil extracted; indeed, it’s thought that the U.S. and British oil companies in the Middle East did exactly this before they were kicked out. ]


I would like to start this testimony by acknowledging… the inspiring role of Matt Simmons, who is well known for alerting our country to peak oil and peak oil issues.

You have heard about T. Boone Pickens’ wonderful plan, but we sit in the corner of the country, and we are not very close to the wind belt that runs up and down from Kansas to Texas. So what do we do? T. Boone Pickens’ plan utilizes the wind corridor from the Dakotas down to Texas to generate anywhere from 200 to 400 gigawatts, depending on how much you want to generate. But that leaves us out, if you wish, on the east coast and on the west coast unless we build very expensive transmission systems. The majority of the U.S. population, spread across close to 28 states along the coasts, uses more than 70 percent of the nation’s electricity. So the major demand for electricity is around the perimeter of the country.  There are line losses that take place, and, of course, there are transmission costs, and building transmission lines in heavily populated areas is very expensive as well from a permitting viewpoint and so forth. And if you look at the population centers on the east coast, for example, the Midatlantic States and up in the New England area, it would be very costly to build transmission lines in those areas.

Wind speed is actually high when we need it. We need to heat ourselves in the State of Maine and in the Northeast, and the heating costs are our biggest issues. But in the wintertime, the wind blows twice as fast as it does in the summertime, and the power generated from the wind is proportional to the cube of the wind speed. So in the wintertime, per month, we can generate 8 times as much power as we do in the summertime. You can think of wind off the coast of Maine as a seasonal crop right now that can help us heat the State of Maine. [ What will Maine do in the summer? ]
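[ The “cube of the wind speed” claim is the standard wind-power relation, P ∝ v³, so doubling wind speed gives 2³ = 8 times the power. A quick illustrative check (the function is mine, not part of the testimony): ]

```python
# Wind power scales with the cube of wind speed: P proportional to v**3.
# If winter wind blows twice as fast as summer wind, power is 2**3 = 8x.

def power_ratio(v_winter: float, v_summer: float) -> float:
    """Relative power from two wind speeds, by the cube law."""
    return (v_winter / v_summer) ** 3

print(power_ratio(12.0, 6.0))  # 8.0
```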

Another hearing on Natural Gas

House 113-58. June 20, 2013.  U.S. energy abundance. Manufacturing competitiveness and America’s energy advantage. U.S. House hearing.

CICIO, PRESIDENT, INDUSTRIAL ENERGY CONSUMERS OF AMERICA. The shale gas revolution and lower natural gas and feed stock costs have launched the start of the manufacturing renaissance with announced manufacturing investments of over $110 billion. This is the first wave of investment. The second wave will be from our downstream customers who will relocate to be near their suppliers and reduce their costs. The Boston Consulting Group estimates that 5 million new jobs will be created in manufacturing by 2020. Every dollar’s worth of natural gas run through our manufacturing economy creates up to $8 in added value. This is a superior economic use of natural gas than exporting LNG. The $110 billion investment will also create new natural gas demand between 7 and 9 Bcf a day, about an 11% increase. This is all good news. The most significant threat to the fulfillment of the manufacturing renaissance will be determined by the speed of LNG export terminal approvals and the volume of its shipments, which brings me to the key points of my testimony. Doing it right can be a win-win for producers and consumers of natural gas. Doing it wrong will result in spiking natural gas and electricity prices and an end to the manufacturing renaissance. We need to avoid what happened in Australia.

IECA is not opposed to LNG exports but warns policymakers that careless due diligence by the DOE on the public interest determination of LNG export applications to non-free-trade countries is a real concern. LNG terminal approvals are for 30 years. A lot can happen in 30 years.

Domestic demand is accelerating, and LNG export demand is additive to that demand. For example, just six of the most likely export terminals would increase demand by 16%. The export demand would be on top of the AEO 2013 demand increase of 6% by 2020. Neither number includes the manufacturing renaissance’s 11% demand increase. Combined, this is a 33% increase. This is a huge increase in a very short time frame, and it does not include new demand that will occur from the EPA’s Utility MACT and greenhouse gas regulations.
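The "33%" figure treats the three sources of new demand as additive percentage-point increases, which checks out:

```python
# The testimony's demand arithmetic: export terminals (+16%), AEO 2013
# baseline growth (+6%), and the manufacturing renaissance (+11%) are
# treated as additive percentage-point increases on today's demand.

increases = {"LNG export terminals": 16, "AEO 2013 growth": 6, "manufacturing renaissance": 11}
total = sum(increases.values())
print(total)  # -> 33, the "33% increase" cited
```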

The public interest determination for approval of LNG exports to non-free-trade countries is the law. The public interest test is really important, because it is a safeguard to ensure that decisions are being made correctly and with up-to-date information.

Dean Cordle, president, CEO of AC&S, a chemical manufacturer, appearing on behalf of the American Chemistry Council.   This shale gas revolution has transformed our company. We are putting steel in the ground, as we speak, we are nearing completion of a new production unit, and my focus right now on growth opportunities is certainly centered in the oil and gas industry and the downstream derivatives.

The U.S. chemical industry is highly energy intensive. We use energy inputs, mainly natural gas and natural gas liquids, as both our major fuel source and feedstock. About 75% of the cost of producing petrochemicals and plastics is related to the cost of energy-derived raw materials. Consequently, our ability to compete in global markets is largely determined by the price and availability of natural gas and gas liquids. The consulting firm IHS forecasts that the U.S. has a 100-year supply of natural gas.

This abundant and affordable supply of natural gas has transformed the U.S. chemical industry from the world’s high-cost producer 5 years ago to the world’s low-cost producer today. As a result, the U.S. enjoys a decisive competitive advantage in the cost of producing basic petrochemicals. For example, it costs less than $400 a ton to produce ethylene in the United States, compared with about $1,000 a ton in Europe and even more in Japan. As a result of this cost advantage, dozens of companies are making plans to invest in new U.S.-based chemical production capacity.

ACC estimates that more than $72 billion in new capital expenditures will be invested in the U.S. between 2012 and 2020. Roughly half of those investments will come from firms that are based outside of the U.S. The U.S. is emerging as the place to manufacture chemicals now. The supply response from shale gas will directly create tens of thousands of new jobs in the U.S. chemical industry. Policy will play an important role if we are to optimize our competitive advantage.

Andre de Ruyter, senior group executive for Sasol Limited. Sasol is an integrated international energy and chemicals company. We employ about 34,000 people in 38 countries worldwide. We operate large-scale fuel and chemical plants throughout the world, and we are listed on the Johannesburg and New York stock exchanges.

NOTE: At this time Sasol was planning a $22 billion GTL facility in Louisiana that could make 96,000 barrels a day: roughly 30% naphtha and chemicals, and 70% diesel (67,000 barrels/day, or 0.3% of daily oil consumption). But due to low oil prices, which undermine the economics of converting gas into liquid fuels, this plant isn’t being built. A year before Sasol cancelled its GTL plant, Royal Dutch Shell plc also abandoned plans for a GTL plant in Louisiana because it was too expensive: the initial cost ballooned from $12.5 billion to more than $20 billion.

While natural gas is a major energy source for global power generation, it has up to now lacked the versatility to embrace transportation needs. With our proven GTL technology, we can fundamentally alter the chemistry of natural gas so that we can convert it to approximately 100,000 barrels per day of gas-to-liquids diesel for use in transportation.

Unlike other alternative fuels, GTL diesel is fully fungible with conventional diesel and requires no adjustment to engine technology or to distribution infrastructure. Diesel will continue to be the workhorse of the global economy for the foreseeable future, with demand expected to grow 65% by 2040 (ExxonMobil, The Outlook for Energy: A View to 2040, Irving, Texas: March 2013).

GTL diesel’s high quality makes it highly suitable for use as a blend stock by crude oil refineries to upgrade their products into high-quality fuels. When gas-to-liquids diesel is used neat, it has the added benefit of lower emissions of particulates and other pollutants, because it contains essentially zero sulfur and very low levels of aromatic compounds.

With our partner, Qatar Petroleum, we have produced more than 45 million barrels of diesel fuel for export into the international market since the commissioning of our ORYX gas-to-liquids facility in Qatar in 2007.

We have also supplied GTL jet fuel to the Department of Defense, who uses it for experimental purposes.

Mr. WHITFIELD. I notice today the Federal Reserve board yesterday, I guess, said they are going to kind of stop our easy money policy, so we may see interest rates start edging up soon. So the policies that the U.S. Government adopts are going to have a dramatic impact on the cost of energy. And energy costs are a key component for continuing to grow our manufacturing base and create jobs. And so when we talk about that, we are talking about the regulations, we are talking about an all-of-the-above energy policy, which many of you talked about specifically in your testimony, but I would remind everyone once again that the Obama administration says an all-of-the-above, but they systematically are trying to eliminate some fossil fuels, particularly coal. And I notice—I was reading the Federal Register footnotes on the proposed greenhouse gas new source performance standard for new electric generating units. And in the register, it says the Department of Energy National Energy Technology Laboratory estimates that when that rule becomes final, that the technology that the coal industry would have to use to meet the emissions standards would add 80 percent to the cost of electricity; that one standard, 80 percent increase. So we are all excited now and we feel good about these low energy costs, but as we move forward, we have to think about the policies and the impact, because I, for one, as many of you said in your testimony, do believe we need all of the above.

Green energy alone is not going to get it done.

Mr. SCALISE. I think the fact that we are here in a committee hearing in Congress talking about how technology and energy is revolutionizing our country, and not only creating tens of thousands of really good high-paying jobs, which is something that we ought to be focused on every single day, but also allowing our country to be energy independent. Here is one case where we have got the opportunity to reduce our dependence on, in many cases, Middle Eastern countries who don’t like us, where we are spending billions of dollars to countries who use that money against us, to kill Americans in many cases. And so the revolution in energy is, I think, one of the most important things if we want to get our economy back on track, get our country moving again, create jobs and create the energy security I think that Americans expect and deserve.

If we want everybody to live in squalor and poverty, you know, then we go with the old economy. If we actually want to create jobs and manufacture, make things in this country so that we can create jobs and increase everybody’s lifestyle, not just in America, but in other countries, it starts with energy, and safe and secure energy, and that is what this is all about.

Mr. MCNERNEY. How is the natural gas mostly used? Is it used as a chemical, as a solute? Is it used to create heat through burning, or is it used to create electricity?

Mr. CORDLE. In two primary ways. We use natural gas to fire our steam boilers in our chemical production facility, and the overall lowering of that cost has certainly helped us dramatically. In the overall chemical manufacturing industry, it is a raw material, an ingredient in what we make in terms of our products. Natural gas, when it comes out of the ground, has several components: ethane, propane, and a few others. Ethane is the key raw material that is cracked and turned into ethylene, then ethylene oxide, and eventually into the polyethylene in the plastics that we use every day.

Mr. MCNERNEY. What is the energy balance of the GTL liquids; that is, energy in your product, divided by energy into the process and energy in the natural gas? What does the balance look like?

Mr. DE RUYTER. We use about 9.5 Bcf per day to produce 100,000 barrels of diesel per day. So you could work out the balance from that.  It is a ratio between natural gas in and diesel out on the other side of the process.
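The balance can indeed be worked out, using standard heating values (roughly 1,030 Btu/scf for pipeline gas and 5.77 MMBtu per barrel of diesel; both are assumptions, not from the testimony). Taking the transcript's 9.5 Bcf/day at face value yields a diesel-out/gas-in ratio of only about 6%, far below the roughly 60% thermal efficiency typically cited for GTL plants, which suggests the printed figure may be a transcription artifact (0.95 Bcf/day would give about 59%):

```python
# Rough GTL energy balance from the figures quoted above, using assumed
# standard heating values (not stated in the testimony).

GAS_BTU_PER_SCF = 1030        # approximate HHV of pipeline natural gas
DIESEL_MMBTU_PER_BBL = 5.77   # ~137,000 Btu/gal * 42 gal/bbl

def gtl_efficiency(gas_bcf_per_day, diesel_bbl_per_day):
    """Energy out (diesel) divided by energy in (natural gas)."""
    energy_in = gas_bcf_per_day * 1e9 * GAS_BTU_PER_SCF           # Btu/day
    energy_out = diesel_bbl_per_day * DIESEL_MMBTU_PER_BBL * 1e6  # Btu/day
    return energy_out / energy_in

print(f"{gtl_efficiency(9.5, 100_000):.1%}")   # ~5.9%, as printed
print(f"{gtl_efficiency(0.95, 100_000):.1%}")  # ~59% if the figure were 0.95 Bcf/d
```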

Mr. JOHNSON. There is a nearly limitless supply of natural gas, if the Federal Government doesn’t mess up the opportunity, and from a manufacturing perspective, if we aren’t forced to use gas for power generation instead of cheaper coal. I would suggest your time and the time of your members would be better spent helping us make sure that the administration doesn’t stamp out the coal industry, which is the most affordable, reliable form of energy on the planet.


Natural gas is a stupid transportation fuel

[ My comment: The only reason natural gas has come up as a transportation fuel at all is the false belief that there is a 100-year supply of natural gas (a belief even this article repeats); in reality it may last far less long, for reasons explained in other articles here.

Although this article focuses on cars, the same critique applies to heavy-duty trucks as well, which need even bigger, heavier tanks.

Alice Friedemann, author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: KunstlerCast 253, KunstlerCast 278, Peak Prosperity]

Service, R. F. October 31, 2014. Stepping on the gas. Science Vol. 346, Issue 6209, pp. 538-541 

At a conference on natural gas-powered vehicles Dane Boysen, head of a natural gas vehicle research program at the U.S. Department of Energy’s Advanced Research Projects Agency-Energy, said what industry stalwarts don’t want to hear:

“Honestly, natural gas is not that great of a transportation fuel.” In fact, he adds, “it’s a stupid fuel.”

This is because of the low energy density of natural gas. A liter of gasoline will propel a typical car more than 10,000 meters down the road; a liter of natural gas just 13 meters. Even when natural gas is chilled or jammed into a high-pressure tank—at a high cost of both energy and money—it still can’t match gasoline’s range.

Nevertheless, Boysen’s ARPA-E project, called Methane Opportunities for Vehicular Energy (MOVE), is in the middle of spending $30 million over 5 years to jump-start the development of natural gas-powered cars and light-duty trucks, which now burn over 60% of the oil used in transportation.

But as Stephen Yborra, who directs market development for NGVAmerica, puts it, “there are an awful lot of hurdles to overcome.” Honda, for example, already makes a natural gas version of its Civic sedan. But it has sold only 2,000 of them in the United States, compared with more than 1.5 million gasoline-powered cars a year. Major improvements in fuel tanks, pumps, and infrastructure will be needed before natural gas vehicles rule the road.

One by one, Boysen ticks off formidable technical challenges and the efforts engineers are making to solve them.

GAS TANK MATERIALS. The biggest problem goes back to the meager energy density of natural gas. At ambient temperature and pressure, it’s a mere 40,000 joules per liter, slightly more than 1/1000th that of gasoline. To carry enough fuel, a car needs an oversized fuel tank, which eats into its cargo space. As a result, Honda’s natural gas Civic has less than half the trunk volume of its gasoline counterpart. “Drivers hate this because they can’t pick up people at the airport,” Boysen says.
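The range comparison follows from these energy densities. A back-of-envelope check, assuming gasoline at roughly 34 MJ/L (a typical value, not from the article) and equal engine efficiency on both fuels:

```python
# Range per liter scales with volumetric energy density if engine
# efficiency is the same. Gasoline ~34 MJ/L is an assumed typical value;
# 40,000 J/L for ambient natural gas is from the article.

GASOLINE_MJ_PER_L = 34.0   # assumed typical volumetric energy density
NATGAS_MJ_PER_L = 0.040    # 40,000 J/L, from the article

range_gasoline_m = 10_000  # meters per liter, from the article
range_natgas_m = range_gasoline_m * NATGAS_MJ_PER_L / GASOLINE_MJ_PER_L
print(round(range_natgas_m, 1))  # ~11.8 m, close to the article's 13 m
```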

The fuel tanks also have to be pressurized—another source of headaches. Today’s tanks compress gas to 250 bar, about 250 times atmospheric pressure. To handle the stresses, tanks must be made either from thick metal—which makes them heavy—or from lighter but expensive carbon fiber. Current tanks add an average of $3500 to the cost of natural gas vehicles.

  • GAS TANK SHAPES. Spongelike fuel storage at modest pressures might free engineers to build tanks in shapes other than the now-standard high-pressure cylinder. That’s critical, because in a car, a cylinder occupies a box as big as its largest dimension, wasting a lot of space. For heavy-duty trucks and buses, which don’t have tight space constraints, an awkward tank shape is less of a problem. But it’s a killer for passenger cars.
  • GASSING UP. One challenge is the time it takes to fill up. Gasoline pumps can supply as much as 10 gallons (38 liters) of fuel per minute, an energy transfer rate equivalent to 20 megawatts of power. Today’s CNG systems can fill the equivalent of a 15-gallon (57-liter) tank in 5 minutes. But they are expensive and primarily service trucks and specialized fleets.
    Many advocates of natural gas cars dream of a low-pressure compressor that could be used for home refueling, as roughly half of U.S. homes—some 60 million—already have a natural gas line. If cars could be refueled at home, consumers would tolerate slower filling rates, as they do with electric vehicles. One such compressor is already on the market, Boysen notes. But it costs $5500.
    But with so few vehicles on the road, compressor manufacturers have been unwilling to invest in new technologies. As a result, says Bradley Zigler, a combustion researcher at the National Renewable Energy Laboratory in Golden, Colorado, “right now there is a valley of death between research progress and commercially available technologies.”
  • INFRASTRUCTURE, INFRASTRUCTURE, INFRASTRUCTURE. Even if engineers do it all—come up with a cheap space-age crystal to hold gas in a low-pressure tank, a more efficient natural gas–burning engine to reduce the demand for a large tank, and a cheap new compressor—that still might not be enough. For drivers to gamble tens of thousands of dollars on a new kind of car, analysts say, they’ll need all of these technologies to be widely available at the same time. “It has to be in a box,” Youssef says. “To me, that’s the biggest hurdle. I’m afraid we’re not there yet.”
  • Even then, Boysen notes, natural gas vehicles would face competition from a more-than-viable alternative: the gasoline- and diesel-powered cars that now make up 93% of passenger vehicles on the road. Drivers will need to be convinced that a natural gas car will work at least as well as current cars do. They will need to know they can buy fuel wherever and whenever they want. And they will need a nationwide network of mechanics and parts suppliers to fix things when they break. Gasoline-powered and electric cars already cover the whole menu, but would-be competitors have far to go.
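The "20 megawatts" figure in the GASSING UP item above can be reproduced from the pump rate, again assuming gasoline at roughly 34 MJ/L (an assumed typical value):

```python
# Energy transfer rate of a gasoline pump: energy per liter divided by
# the time to dispense it. 38 L/min is the article's 10 gallons/minute.

GASOLINE_MJ_PER_L = 34.0     # assumed typical volumetric energy density
pump_rate_l_per_min = 38.0   # from the article

power_mw = pump_rate_l_per_min * GASOLINE_MJ_PER_L / 60.0  # MJ/s = MW
print(round(power_mw, 1))  # ~21.5 MW, consistent with the quoted ~20 MW
```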

This suite of demands is particularly acute for truly novel technologies, such as hydrogen-powered fuel cell vehicles. The lack of an existing fueling infrastructure for those cars makes it far less likely that drivers will embrace them. But the fact that such challenges are also proving daunting to natural gas-powered cars, with their sizable fuel cost advantage, underscores just how difficult it is to transform the way we drive. For Boysen and his colleagues, the allure of natural gas is stronger than ever. But they know reality can be unkind to even the most appealing technologies.


Methane hydrate apocalypse? Maybe not…

 [ I don’t think there’s enough evidence yet to decide for sure there won’t be a gas hydrate apocalypse, but here’s some evidence that this might not happen. Even Benton, who wrote “When Life Nearly Died” didn’t pin the mass murderer of the Permian extinction on gas hydrates. 

I’ve also read Peter Ward’s “Under a Green Sky” and many other books and peer-reviewed articles, and will continue to try to follow the mystery of extinction events and who the mass murderers were. 

Until then, here are some of the articles I’ve run across that cast some doubt on this being a sure thing, though clearly more research needs to be done.

There’s also the hope that peak oil, by limiting our ability to extract all other resources, including coal, natural gas, and oil itself, will prevent us from going extinct, as declining energy reduces population back to about 1 billion or so (whatever it was before fossil fuels replaced wood and muscle power) and we lack the energy to cross all of the other 9 planetary boundaries as well.

Alice Friedemann, author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer]

SBC. June 2015. Gas Hydrates. Taking the heat out of the burning-ice debate. Potential and future of Gas Hydrates. SBC energy institute.

Recent studies (e.g. Whiteman et al.) have raised the alarm that methane emissions could occur in the Arctic, especially over the East Siberian Shelf and in Siberian lakes (e.g. Shakhova et al.). However, there is a vigorous academic debate on the origin and potential impact of these emissions. As acknowledged by the IPCC: “How much of this CH4 originates from decomposing organic carbon or from destabilizing hydrates is not known. There is also no evidence available to determine whether these sources have been stimulated by recent regional warming, or whether they have always existed since the last deglaciation. More research is therefore urgently needed.”

The response of gas hydrates to climate change has only been investigated recently. Modeling in this field remains in its infancy. As a consequence, the likelihood, and impact, of gas-hydrate dissociation due to climate change is still poorly understood and more research is needed.

The first uncertainty is the amount of gas hydrates stored on Earth. Global gas-in-place estimates range over an order of magnitude, from 1,000 to 20,000 tcm, with most estimates around 3,000 tcm. Estimates are even more uncertain at the regional level. For instance, there are no models for Antarctic reservoirs, and estimates for Arctic permafrost have only been made recently.

In the permafrost, additional uncertainty arises from the origin of methane emissions, whereas in the case of ocean sediments, the mechanisms by which methane is released and its ability to reach the atmosphere are also disputed. So are the biochemical and chemical consequences that gas-hydrate releases would have on oxidation mechanisms e.g. there may be resource limitations hindering methane oxidation in the ocean.

Since gas hydrates are only stable under high pressures and at low temperatures, there have been concerns that climate change could result in gas-hydrate dissociation and the release of methane into the atmosphere. The response of gas hydrates to climate change has only been investigated recently. Modeling in this field is in its infancy and faces major uncertainties. Nevertheless, it is generally agreed that gas-hydrate dissociation is likely to be a regional phenomenon, rather than a global one, and more likely to occur in subsea permafrost and upper continental shelves than in deep-water reservoirs, which make up the majority of gas hydrates. Indeed, the latter are relatively well insulated from climate change because of the slow propagation of warming and the long ventilation time of the ocean. Moreover, the release of methane from gas-hydrate dissociation should be chronic rather than explosive, as was once assumed; and emissions to the atmosphere caused by hydrate dissociation should be in the form of CO2 because of the oxidation of methane in the water column.

[Figure: thermal diffusivity and ocean thermal response (no methane-hydrate apocalypse). Graphs adapted from Archer (2007), “Methane hydrate stability and anthropogenic climate change”. In the right-hand graph, the ventilation timescale is the time required for temperature (heat), pressure, and solutes such as methane to diffuse through the sediments.]

Ocean thermal response varies according to depth, as highlighted in the graph above (left), but also from place to place, especially in deep-water locations, due to ocean currents. In sediments, the diffusion of heat towards deeper layers takes time and varies primarily according to depth, but also according to the composition of the sediment and to the geothermal gradient. Heat can diffuse approximately 100 meters in about 300 years (point A). Solutes such as dissolved methane diffuse even more slowly, 100 meters in about 30,000 years (point B), while pressure perturbations (e.g. following a sea-level rise) diffuse more quickly, 100 meters in about 3 years (point C).
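The ~300-year figure for heat falls out of the standard diffusion scaling t ~ L²/D. A sketch assuming a thermal diffusivity of about 1e-6 m²/s, a typical value for marine sediments (the diffusivity is an assumption, not given in the source):

```python
# Diffusion timescale: t ~ L**2 / D. With an assumed sediment thermal
# diffusivity of ~1e-6 m^2/s, heat takes on the order of 300 years to
# penetrate 100 m, matching point A in the text.

SECONDS_PER_YEAR = 3.15e7

def diffusion_time_years(depth_m, diffusivity_m2_s):
    """Order-of-magnitude time for a perturbation to diffuse depth_m."""
    return depth_m**2 / diffusivity_m2_s / SECONDS_PER_YEAR

print(round(diffusion_time_years(100, 1e-6)))  # -> 317 years (heat)
```

Solutes and pressure would follow the same scaling with their own (much smaller and much larger) effective diffusivities, which is why the three timescales in the text differ by factors of roughly 100.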

As a result of thermal inertia, heat diffusion and the melting of permafrost take time, and should be slow enough to insulate most hydrate deposits from expected anthropogenic warming over a 100-year timescale. Nevertheless, temperature increases in high latitudes such as the Arctic are expected to be much higher than increases in the mean global temperature, and are therefore more likely to affect gas-hydrate reservoirs. Rises in sea level would increase pressure at the seafloor, which may mitigate further dissociation of offshore gas-hydrate deposits, but this is likely to be insufficient to negate the warming.

Even if warming were to reach the gas hydrate stability zone, the fate of any methane released would be uncertain. Gas could escape if the pressure exceeded the sediment’s lithostatic pressure, but it might also remain in place. In addition, since gas-hydrate dissociation will start at the edge of the stability zone, even if gas were able to migrate, it might subsequently be trapped in newly formed hydrates.

Finally, even if methane were able to migrate towards the seafloor, it would probably not reach the atmosphere. Most methane is expected to be oxidized in the water column rather than released by bubble plumes or other “transport pathways” directly into the atmosphere as methane. Nevertheless, the oxidation of methane produces CO2, which will have an impact on ocean acidification and will remain in the atmosphere.

The susceptibility of gas-hydrate deposits to climate-change-induced dissociation varies significantly according to reservoir location (Moridis et al. 2011, “Challenges, uncertainties and issues facing production from gas hydrate deposits”).


The risk of climate change causing gas-hydrate dissociation and methane leaks varies significantly by location. This can be explained by depth differentials, the existence of mitigation mechanisms such as water-column oxidation, or by the exposure of gas-hydrate deposits to varying regional warming phenomena. High-latitude warming is expected to be much greater than global-mean-temperature warming.

As a rule of thumb, gas hydrates held within subsea permafrost on the circum-Arctic ocean shelves and on upper continental slopes are the most prone to dissociation. Subsea permafrost, which was flooded by relatively warm waters as sea levels rose thousands of years ago, has been exposed to dramatic rises in temperature that have led to significant degradation both of the permafrost itself and of the gas hydrates within it. The upper continental slopes are believed to store a greater quantity of gas hydrates than the subsea permafrost, but methane released from them is less likely to reach the atmosphere directly because of oxidation in the water column.

However, it is very unlikely that climate warming will disturb gas-hydrate deposits held in deep-water reservoirs (around 95% of all deposits) on a millennial timescale. Finally, gas hydrates in seafloor mounds may also dissociate as a result of warming of overlying water or pressure perturbation, but these account for a very limited share of gas hydrates in place.

The sensitivity of gas-hydrate deposits in onshore permafrost, especially at the top of the hydrate stability zone, is more uncertain and subject to greater debate.

Archer et al. calculated that between 35 and 940 GtC of methane could escape as a result of global warming of 3 °C, with maximum consequences of adding a further 0.5 °C to global warming. On top of the uncertainty reflected in that range, there are other considerable uncertainties, notably concerning the effectiveness of mitigation mechanisms and the long-term outlook, since methane will continue to be released even if warming stops.

Reagan and Moridis (2007), “Oceanic gas hydrate instability and dissociation under climate change scenarios”;
Maslin et al. (2010), “Gas hydrates: past and future geohazard?”;
Shakhova et al. (2010), “Predicted Methane Emission on the East Siberian Shelf”;
Whiteman et al. (2013), “Climate science: Vast costs of Arctic change”

Ananthaswamy, A. May 20, 2015. Methane apocalypse? Defusing the Arctic’s time bomb. NewScientist.

Do the huge craters pockmarking Siberia herald a release of underground methane that could exceed our worst climate change fears?  They look like massive bomb craters. So far 7 of these gaping chasms have been discovered in Siberia, apparently caused by pockets of methane exploding out of the melting permafrost. Has the Arctic methane time bomb begun to detonate in a more literal way than anyone imagined?

The “methane time bomb” is the popular shorthand for the idea that the thawing of the Arctic could at any moment trigger the sudden release of massive amounts of the potent greenhouse gas methane, rapidly accelerating the warming of the planet. Some refer to it in more dramatic terms: the Arctic methane catastrophe or methane apocalypse.

Some scientists have been issuing dire warnings about this. There is even an Arctic Methane Emergency Group. Others, though, think that while we are on course for catastrophic warming, the one thing we don’t need to worry about is the so-called methane time bomb. The possibility of an imminent release massive enough to accelerate warming can be ruled out, they say. So who is right?

Few scientists think there is any chance of limiting warming to 2 °C, even though many still publicly support this goal. Our carbon dioxide emissions are the main cause of the warming, but methane is a significant player.

Methane is a highly potent greenhouse gas – causing 86 times as much warming per kilogram as CO2 over a 20-year period. Fortunately, there’s very little of it in the atmosphere. Before humans arrived on the scene there was less than 1000 parts per billion. Levels started rising very slowly around 5000 years ago, possibly due to rice farming. They’ve gone up more since the industrial age began: the fossil fuel industry is by far the single biggest source, followed by farting farm animals, leaking landfills and so on. Only a tiny percentage comes from melting Arctic permafrost.

The level in the atmosphere is now nearing 1900 ppb, but that’s still low. CO2 levels were much higher to start with, around 270,000 ppb before the industrial age. They have now shot up to 400,000 ppb today. The main reason is that CO2 persists for hundreds of years, so even small increases in emissions lead to its buildup in the atmosphere, just as water dripping into a bath with the plug left in can fill the bath eventually.

Methane, by contrast, breaks down after just 12 years, so its level in the atmosphere can only increase if there are big ongoing emissions.

So for methane to cause a big jump in global warming there not only has to be a massive source, it has to be released very rapidly. Is there such a source?

Yes, claim a few scientists. They point to the Arctic permafrost, and specifically to the East Siberian Arctic shelf. This vast submerged shelf underlies a huge area of the Arctic Ocean, which is less than 100 meters deep in most places. During past ice ages, when sea level dropped 120 meters, the land froze solid.

This permafrost was covered by rising seas as the ice age ended around 15,000 years ago. The upper layer has been slowly melting as the relative warmth of the seawater penetrates down. But the frozen layer is still hundreds of meters thick. No one doubts that there is plenty of carbon locked away in and under it. The questions are, how much is there, how much will come out in the form of methane, and how fast?

Natalia Shakhova of the International Arctic Research Center at the University of Alaska Fairbanks, has been studying the East Siberian Arctic shelf for more than two decades. Her team has made more than 30 expeditions to the region, in winter and in summer, collected thousands of water samples and tons of seabed cores during four drilling campaigns and made millions of measurements of ambient levels of methane in the air.

Her team has estimated that there is a whopping 1750 gigatons of methane buried in and below the subsea permafrost, some of it in the form of methane hydrates – an ice-like substance that forms when methane and water combine under the right temperature and pressure. What’s more, they say that the permafrost is already beginning to thaw in places. “Our results show that… [the] subsea permafrost is perforating and opening gas migration paths for methane from the seabed to be released to the water column,” says Shakhova.

Her team’s work hit the headlines in 2010, when in a letter in the journal Science they reported finding more than 100 hot spots where methane was bubbling out from the seabed. But as others pointed out, it was not clear whether these emissions were something new or had been going on for thousands of years.

More sensational stuff was to follow. In another 2010 paper, the team explored the consequences of 50 gigatons of methane – 3% of their estimated total – entering the atmosphere (Doklady Earth Sciences, vol 430, p 190). If this happened over five years methane levels could soar to 20,000 ppb, albeit briefly. Using a simple model, the team calculated that if the world was on course to warm 2 °C by 2100, the extra methane would lead to additional warming of 1.3 °C, so temperatures would hit 3.3 °C by 2100.

This study appeared in an obscure journal and did not get much attention at the time. But then Peter Wadhams of the University of Cambridge and colleagues decided to see how much difference a huge methane release between 2015 and 2025 would make when added to an existing model of the economic costs of global warming. “A 50-gigaton reservoir of methane, stored in the form of hydrates, exists on the East Siberian Arctic shelf,” they stated in Nature, citing Shakhova’s paper as evidence. “It is likely to be emitted as the seabed warms, either steadily over 50 years or suddenly.” Understandably, this was big news.

But in reality the idea that 50 gigatons could suddenly be released, or that there’s a store of 1750 gigatons in total, is very far from being accepted fact. On the contrary, Patrick Crill, a biogeochemist at Stockholm University in Sweden who studies methane release from the Arctic, says it is simply untenable. He wants Shakhova’s team to be more open about how they came up with these figures. “The data aren’t available,” says Crill. “It’s not very clear how those extrapolations are made, what the geophysics are that lead to those kinds of claims.”

Shakhova now says, “We never stated that 50 gigatons is likely to be released in near or distant future.” It is true that the 2010 study explores the consequences of the release of 50 gigatons rather than explicitly claiming that this will happen. However, it has certainly been widely misunderstood both by other scientists and the media. And her team’s papers continue to fuel the idea that we should be worried about dramatic and damaging releases of methane from the Arctic.

But other researchers disagree. “The Arctic methane catastrophe hypothesis mostly works if you believe that there is a lot of methane hydrate,” says Carolyn Ruppel, who heads the gas hydrates project for the US Geological Survey in Woods Hole, Massachusetts. And her team estimates that there are only 20 gigatons of permafrost-associated hydrates in the Arctic (Journal of Chemical and Engineering Data, vol 60, p 429). If this is right, there’s little reason for concern.

The issue is not just how much methane hydrate there is, but whether it could be released rapidly enough to build up to high levels.

This could happen soon only if the hydrates are shallow enough to be destabilized by heat from the warming Arctic Ocean.

But David Archer of the University of Chicago says that hydrates could only exist hundreds of meters below the sea floor. That’s far too deep for any surface warming to have a rapid impact. The heat will take thousands of years to work its way down to that depth, he calculated last year, and only then will the hydrates respond (Biogeosciences Discussions, vol 12, p 1). “There is no way to get it all out on a short timescale,” says Archer. “That’s the crux of my position.”

This concerted push back against the idea of an impending methane bomb has led to something of a feud. Commenting on Archer’s paper, for instance, Shakhova said he clearly knew nothing about the topic. She has repeatedly pointed out that her team has actual experience of collecting data on the East Siberian Shelf, unlike her detractors.

But there is skepticism about Shakhova’s actual measurements, too. For instance, her team has reported that methane levels above some hotspots in the East Siberian shelf were as high as 8000 ppb. Last summer, Crill was aboard the Swedish icebreaker Oden, measuring levels of methane over the East Siberian shelf. Nowhere did he find levels this high. Even when the Oden ventured near the hotspots identified by Shakhova’s team, he never saw levels much beyond 2000 ppb. “There was no indication of any large-scale rapid degassing,” says Crill.

It’s not clear why other teams are finding lower levels than Shakhova’s. But to find out if a catastrophic release of methane is imminent, there is another line of evidence we can turn to. Thanks to ice cores from places like Greenland, we have a record of past methane levels going back hundreds of thousands of years. If there are lots of shallow hydrates in the Arctic poised to release methane as soon as it warms up a little, they should have done so in the past, and this should show up in the ice cores, says Gavin Schmidt of the NASA Goddard Institute for Space Studies in New York.

Around 6000 years ago, although the world as a whole was not warmer, Arctic summers were much warmer thanks to the peculiarities of Earth’s orbit. There is no sign of any short-term spikes in methane at this time. “There’s absolutely nothing,” says Schmidt. “If those methane hydrates were there, they were there 6000 years ago. They weren’t triggered 6000 years ago, so it’s unlikely they’d be triggered imminently.”

During the last interglacial period, 125,000 years ago, when temperatures in the Arctic were about 3 °C warmer than now, methane levels rose a little, as expected in warmer periods, but never exceeded 750 ppb. Again, there’s no sign of the kind of spike a large release would produce.

There is, then, no solid evidence to back the idea of a methane bomb, and past climate records suggest there is no cause for alarm. Extraordinary claims require extraordinary proof; otherwise they undermine credibility and slow our ability to make the decisions that we are going to have to make as a society.

No one is saying methane is not a concern. Levels are now the highest they’ve been for at least 800,000 years and climbing. The Intergovernmental Panel on Climate Change’s worst-case emissions scenario assumes a big rise in methane, to as much as 4000 ppb by 2100.

What about the gaping craters? They are certainly spectacular and scary-looking. The latest idea is that they are caused by the release of pockets of compressed methane as ice seals melt. But the amount of methane released per crater is minuscule in global terms. Around 20 million craters would have to form within a few years to release 50 gigatons of the gas.
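The per-crater release implied by these numbers is easy to check:

```python
# How much methane would each crater have to vent if 20 million craters
# were needed to release 50 gigatons, per the figures in the text?
total_release_t = 50e9       # 50 gigatons, in tonnes
craters = 20e6

per_crater_t = total_release_t / craters
print(f"{per_crater_t:,.0f} tonnes per crater")  # 2,500 tonnes
# For scale, total global methane emissions run to hundreds of
# megatonnes per year, so a handful of craters is indeed negligible.
```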

Posted in CO2 and Methane, Methane Hydrates | Comments Off on Methane hydrate apocalypse? Maybe not…

California could hit the solar wall

[ According to this Stanford University article, if California uses mainly solar power to meet a 50% Renewable Portfolio Standard (RPS), then on sunny days, for most of the year, more power would be generated mid-day than needed. This over-generation could be a problem 23% of the time, with marginal over-generation by solar PV of 42 to 65%. On average, nearly 9% of solar or other renewable electricity generation would have to be shut down. In other words, California could hit the solar wall.

Why would California mainly use solar rather than wind power as well? “Unlike the Midwest, California has a modest technical potential for wind and many of the best sites are already developed [my comment: any sites not developed yet are too expensive or far from the grid]. California does have a large offshore-wind resource but high costs and technological challenges remain. Importing electricity generated by onshore wind from neighboring states is promising, but some imports will require new high-voltage transmission lines that may take a decade to plan, site, permit, finance and build.”

Overall, California has already developed MOST of the best sites (NREL 2013):

“Prime renewable resources include wind (40% capacity factor or better), solar (7.5 DNI or better), and discovered geothermal potential. All other renewable resources are non-prime. California’s remaining options for easily developable in-state utility-scale renewables could be limited by 2025. Wind, geothermal, biomass, and small hydro projects under contract (either existing or under construction) are about equal to the total developable potential estimated for each of these technologies in California’s renewable energy zones. 

Solar projects to date, however, exceed the amount of developable prime and borderline prime resources estimated to exist within California’s zones.  This suggests that California’s remaining solar resource areas tend to have less solar exposure than what has already been developed and might be less productive.”

Less productive = more expensive. This report also points out that California will have 44.3 million people in 2025, requiring renewable generation to grow even further. ]

The consequences of too much solar power include:

  • Since solar power provides energy when it is least needed (mid-day rather than the peak morning and late-afternoon hours), natural gas power plants must quickly ramp up tens of gigawatt-hours of energy as the sun’s output rapidly fades in the afternoon, requiring large, expensive natural gas back-up plants. But natural gas is finite, so that is a temporary solution. The only other commercial source of dispatchable energy is (pumped) hydropower, but there are few spots left for new dams, and existing dams are constrained much of the year by drought, fisheries, agriculture, and drinking water. Geothermal, nuclear, and coal are not dispatchable: they are baseload power, running 24 x 7, and cannot ramp up and down in less than 6 to 8 hours without damaging their equipment. These plants were built on the expectation of generating a certain amount of power per day to pay back their cost, but solar and wind have first rights to provide power, which can drive electricity prices negative. Since ramping down can damage their equipment, coal, natural gas, and nuclear plants keep generating at a loss. This is why many nuclear and coal power plants are shutting down: they are losing money. Yet because solar and wind are intermittent and unpredictable, the electric grid still needs these plants ready to fill in when the sun goes down and the wind dies.
  • Utility scale energy battery storage is dispatchable, but far from commercial.
  • Large-scale curtailment of solar PV during times of over-generation reduces the value of solar capacity additions to investors.
  • Real-time pricing during times of over-generation could limit or eliminate the net-metering advantage of PV on residential and commercial-scale installations.
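The mismatch between the mid-day solar peak and the evening demand peak described above can be illustrated with a toy net-load calculation; the hourly demand and solar figures below are invented for illustration, not CAISO data:

```python
# Hypothetical hourly demand and solar output (GW) for a sunny spring day,
# illustrating why gas plants must ramp steeply as the sun sets.
demand = [24, 23, 22, 22, 23, 25, 27, 28, 28, 27, 26, 26,
          26, 26, 26, 27, 28, 30, 32, 33, 32, 30, 28, 26]
solar  = [0, 0, 0, 0, 0, 1, 3, 5, 7, 8, 9, 9,
          9, 9, 8, 7, 5, 3, 1, 0, 0, 0, 0, 0]

# Net load: what dispatchable plants must cover after solar is subtracted.
net_load = [d - s for d, s in zip(demand, solar)]

# Steepest hour-to-hour increase in net load (the evening ramp).
ramp = max(b - a for a, b in zip(net_load, net_load[1:]))
hour = max(range(23), key=lambda h: net_load[h + 1] - net_load[h])
print(f"max ramp: {ramp} GW/h starting at hour {hour}")  # 4 GW/h around 4 p.m.
```

Even in this toy profile, net load bottoms out at mid-day and then climbs several GW per hour in the late afternoon, which is exactly the ramp that back-up gas capacity has to chase.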

Alice Friedemann, author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: KunstlerCast 253, KunstlerCast 278, Peak Prosperity]

Benson, S., and Majumdar, A. July 12, 2016. On the path to deep decarbonization: Avoiding the solar wall. Stanford University.

Photo: Close-up of solar modules. Credit: NREL


While you might not have been paying attention, California’s electric grid has undergone a radical transformation. Today, more than 25 percent of the electricity used in California comes from renewable energy resources, not including the additional 10-15 percent from large-scale hydropower or all the electricity generated from roof-top solar photovoltaic (PV) panels. Compared to only five years ago, solar energy has grown 20-fold (Figure 1). In the middle of a typical springtime day, solar energy from utility-scale power plants provides an impressive 7 gigawatts (GW) of grid-connected power, accounting for about 20 percent of the electricity used across the state (Figure 2).

Last year, the total electricity from both in-state and out-of-state resources was 296,843 gigawatt-hours (GWh), including out-of-state generation from unspecified sources. An estimated 8 percent of California’s electricity consumption was generated from wind farms and 6 percent from solar power plants connected directly to the California Independent System Operator (CAISO) grid. This rapid growth is great news for the nascent renewable energy industry and can serve as a proof point for the scale-up of renewable energy. In addition to these utility-scale renewable energy power plants, California has an additional 3.5 GW of solar “self-generation” on the customer side of the meter that offsets demand for electricity from the grid when the sun is shining. And a growing third source is “community solar,” where residents and businesses invest in small, local solar plants.

Figure 1. Total electricity generation in California from solar and wind energy directly connected to the CAISO grid (California Energy Almanac).

By law, California must source 33% of its electricity from renewables by 2020 and 50% by 2030 under its renewable portfolio standard (RPS) requirements. Many in-state renewable energy projects are in the pipeline, including nearly 9 GW of solar PV and 1.8 GW of new wind projects that have received environmental permits. Power purchase agreements have already been signed for at least 1 GW of new solar projects. If all of these permitted projects were developed, California would have about 16 GW of solar-generating capacity by 2020.

While wind has provided a significant portion of California’s renewables to date, the majority of new additions for meeting the 2020 33% RPS requirement is forecast to come from direct grid-connected solar PV. California has an enormous and high-quality solar resource, with an estimated technical potential of more than 4,000 GW for utility-scale solar and 76 GW for rooftop solar. Unlike the Midwest, California has a modest technical potential for wind and many of the best sites are already developed. California does have a large offshore-wind resource, some of which is now in permitting, but high costs and technological challenges remain. Importing electricity generated by onshore wind from neighboring states is promising, but some imports will require new high-voltage transmission lines that may take a decade to plan, site, permit, finance and build.

Figure 2. California energy mix on May 29, 2016. Note that renewables provide more than 40% of the power during the middle of the day. Of this, more than 30% is from solar power (CAISO Daily Renewables Watch).

California has a major effort – the Renewable Energy Transmission Initiative (RETI) – that has successfully identified and built lines to meet its RPS requirements. This year, California launched a new phase of RETI to develop the additional transmission, both in-state and out-of-state, needed for the 50-percent RPS. A recent study supporting California Senate Bill 350 implementation (which includes the 50-percent RPS) showed $1 billion to $1.5 billion in annual savings from adding major transmission lines that would bring more out-of-state wind energy into California.

If, instead, California continues to rely mostly on solar resource for meeting the 2030 50-percent RPS, the total statewide solar-generating capacity would reach 30 to 40 GW under peak production, according to a report by Energy and Environmental Economics Inc. (E3). Under these conditions, on a sunny day, for most of the year, California would be generating more electric power than it needs during the middle of the day from solar energy alone. E3 calculates that this large amount of overgeneration could be a problem 23 percent of the time, resulting in curtailment of 8.9 percent of available renewable energy, with marginal overgeneration by solar PV of 42-65 percent. In other words, California could hit the solar wall. And this does not even consider that midday demand is likely to decrease due to the installation of additional residential and commercial solar PV systems “behind the electricity meter.”

Consequences of hitting the solar wall

Just a decade ago it would have been nearly unthinkable that during the middle of the day solar energy could provide more electricity than an economy as large as California’s needs. But supportive policies, rapid scale-up and decreasing costs make this possibility a reality today. While from some perspectives this is very encouraging, in reality, there are consequences for hitting the solar wall. For example:

  • Reliance on so much solar energy would require rapid ramping capacity of tens of GW from natural gas power plants from 4:00-6:00 p.m., when the sun is going down and electricity demand goes up as people return home.
  • Large back-up capacity from natural gas plants or access to other sources of dispatchable electricity would be required for days when the sun isn’t shining.
  • Zero marginal-cost solar generation could squeeze out other valuable low-carbon electricity sources that provide baseload power: natural gas combined-cycle plants, geothermal energy and nuclear power, for example, cannot operate at zero marginal cost during these times.
  • Large-scale curtailment of solar PV during times of over-generation would reduce the value of solar capacity additions to investors.
  • Real-time pricing during times of over-generation could limit or eliminate the net-metering advantage of PV on residential and commercial-scale installations.

There is no doubt that California’s solar energy potential is invaluable, but we must take steps to avoid the solar wall.  Fortunately, these issues are being recognized and addressed at many levels in California.

Avoiding the solar wall

Numerous approaches to avoiding the solar wall are available today, and in the future more options will exist as we develop new technologies, policies and markets to take advantage of large solar-energy resources that exist around the world. In the short term, key actions include:

  • Develop a renewable energy-generation mix that is well-balanced among solar, wind and other forms of renewable generation. The right generation mix will be region specific, but for California should include increasing wind generation to provide nighttime power. [my comment: what other renewable generation?  To reach an 80 to 100% renewable grid, most of the power has to come from solar and wind with a little help from geothermal and hydropower]
  • Support regional generation markets across wide geographic areas to balance the variability of renewable generation. California has created an energy imbalance market with participants in Nevada, Wyoming and Oregon. Expansion of regional markets is being studied as part of the implementation of Senate Bill 350, California’s 50-percent RPS law.
  • Ensure adequate capacity of rapid-ramping natural gas plants to provide reliable supply during the morning and evening hours as the sun rises and sets. [my comment: natural gas is finite! Conventional natural gas peaked in 2001 in the U.S., shale gas is peaking both economically now and geologically by 2020, and we have only 4 Liquefied Natural Gas (LNG) import terminals].
  • Expand use of load shifting through real-time pricing to incentivize using power during daytime hours when large amounts of solar power are available.
  • Encourage daytime smart charging of electric vehicles to take advantage of abundant and zero marginal-cost solar generation. Achieving this will require workplace charging stations and new business models. With transportation at about 40% of the state’s energy use, electrification of the transport sector could have the dual benefits of eliminating tailpipe emissions and providing demand for abundant and low-cost solar energy. [My comment: the math and computer algorithms to have a smart grid are far from existence, batteries aren’t much better than they were 210 years ago, and trucks can’t run on batteries. ]
  • Increase energy storage to avoid curtailment of solar over-generation during peak production periods. For now, few financial incentives exist for large-scale pumped-hydropower or compressed air storage projects [my comment: that’s not the problem!  There are very few places left to put pumped-hydro and no spots at all to put compressed air facilities, unless they’re above ground, which is crazy expensive]. Levelized costs of small-scale storage in batteries range from about $300 to more than $1,000/megawatt-hour (MWh) depending on the use-case and the technology. These are expensive compared to pumped-hydro storage at $190 to $270/MWh. For comparison, gas peaker plants have a levelized cost of $165 to $218/MWh. The business case for battery storage will be limited until prices come down significantly. Both R&D and scale-up will be needed to reduce costs. [my comment: utility scale battery storage is FAR from commercial, and only sodium sulfur (NaS) batteries have enough material on earth to store half a day of world electricity (see Barnhart, Charles J. and Benson, Sally M. January 30, 2013. On the importance of reducing the energetic and material demands of electrical energy storage. Energy Environ. Sci., 2013, 6, 1083-1092)]
  • [ My comment: Furthermore, utility scale battery storage is far from being commercial. Using data from the Department of Energy (DOE/EPRI 2013) energy storage handbook “Electricity storage handbook in collaboration with NRECA”, I calculated that the cost of NaS batteries capable of storing 24 hours of electricity generation in the United States came to $40.77 trillion dollars, covered 923 square miles, and weighed in at a husky 450 million tons.
    Sodium Sulfur (NaS) Battery Cost Calculation:
    NaS Battery 100 MW. Total Plant Cost (TPC) $316,796,550. Energy
    Capacity @ rated depth-of-discharge 86.4 MWh. Size: 200,000 square feet.
    Weight: 7,000,000 lbs. Battery replacement: 15 years (DOE/EPRI p. 245).
    128,700 NaS batteries needed for 1 day of storage = 11.12 TWh/0.0000864 TWh.
    $40.77 trillion dollars to replace the battery every 15 years = 128,700 NaS * $316,796,550 TPC.
    923 square miles = 200,000 square feet * 128,700 NaS batteries.
    450 million short tons = 7,000,000 lbs * 128,700 batteries/2000 lbs.
    Using similar logic and data from DOE/EPRI, Li-ion batteries would cost
    $11.9 trillion dollars, take up 345 square miles, and weigh 74 million tons. Lead–acid (advanced) would cost $8.3 trillion dollars, take up 217.5 square miles, and weigh 15.8 million tons.
  • Use electrolysis to produce hydrogen fuel to augment the natural gas grid, generate heat and power with fuel cells, or power hydrogen vehicles. [my comment: hydrogen is the least likely energy solution, even more unlikely than fusion]. Also, compared to storing electricity in batteries, hydrogen-based storage systems that combine electrolysis and fuel cells are about three times less efficient. In addition, these technologies are expensive today, and significant cost reductions will be required to make them competitive alternatives.
  • For the longer term, scientists are developing new methods to produce fuels from renewable energy. The SUNCAT Center and  the Joint Center for Artificial Photosynthesis are developing new materials to produce “zero net carbon fuels” from carbon dioxide, water and renewable energy that can be used for transportation or backing up the electric grid. While we don’t know if and when the needed breakthroughs will occur, the game-changing potential of net zero carbon fuels would unlock the full potential of solar energy and break through the solar wall.  [My comment: before reading further, if the fuel isn’t DIESEL to keep trucks running, then what’s the point? And given that we’re at peak oil, peak coal, and peak natural gas, we don’t have the time for breakthroughs to occur. You’d want to prepare at least 20 years ahead of time].
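The NaS battery arithmetic in the comment above can be reproduced in a few lines; a minimal sketch using only the DOE/EPRI plant figures quoted there:

```python
# Reproduces the NaS battery calculation: one 100 MW plant stores 86.4 MWh,
# costs $316,796,550, occupies 200,000 sq ft, and weighs 7,000,000 lbs.
US_DAILY_GEN_TWH = 11.12          # one day of U.S. electricity generation
PLANT_TWH = 86.4 / 1e6            # 86.4 MWh per plant, in TWh
PLANT_COST = 316_796_550          # total plant cost (TPC), $
PLANT_SQFT = 200_000              # footprint per plant
PLANT_LBS = 7_000_000             # weight per plant
SQFT_PER_SQMI = 27_878_400        # square feet in a square mile

plants = US_DAILY_GEN_TWH / PLANT_TWH
print(f"plants: {plants:,.0f}")                                       # ~128,700
print(f"cost:   ${plants * PLANT_COST / 1e12:.2f} trillion")          # ~$40.77 trillion
print(f"area:   {plants * PLANT_SQFT / SQFT_PER_SQMI:,.0f} sq mi")    # ~923 sq mi
print(f"weight: {plants * PLANT_LBS / 2000 / 1e6:.0f} million tons")  # ~450 million short tons
```

The Li-ion and lead-acid figures follow the same logic with their respective DOE/EPRI per-plant cost, footprint and weight numbers swapped in.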

Taking full advantage of the power from the sun

The global potential of solar energy is enormous and surely it can play a major role in a deeply decarbonized future energy system. In Thomas Edison’s words, “I’d put my money on the sun and solar energy. What a source of power! I hope we don’t have to wait until oil and coal run out before we tackle that.”*

We have work to do, but we are well on the way.  Who would have imagined just five years ago that solar energy would provide 6% of California’s electricity and be on track to double, triple or go beyond? But we need to be smart – avoiding running into the solar wall by balancing the generation mix, expanding regional markets, creating real-time markets to increase demand during solar peak-generating periods and creating new electricity demand, such as daytime charging for electric vehicles. In the longer term, electricity storage, hydrogen generation and zero net carbon fuels will further unlock the potential of solar energy.

*As quoted in Uncommon Friends: Life with Thomas Edison, Henry Ford, Harvey Firestone, Alexis Carrel & Charles Lindbergh (1987), by James Newton, p. 31


NREL. August 2013. Beyond Renewable Portfolio Standards: An Assessment of Regional Supply and Demand Conditions Affecting the Future of Renewable Energy in the West. National Renewable Energy Laboratory.

Posted in Photovoltaic Solar, Renewable Integration | 7 Comments

Why human waste should be used for fertilizer

[ At John Jeavons Biointensive workshop back in 2003, I learned that phosphorous is limited and mostly being lost to oceans and other waterways after exiting sewage treatment plants.  He said that it’s dangerous if done incorrectly and wasn’t going to cover this at the workshop, but to keep it in mind for the future.

Modern fertilizers, made with the Nobel Prize-winning Haber-Bosch process using natural gas as feedstock and energy source, can increase crop production up to 5 times, but at a tremendous cost in soil health and pollution (see Peak soil). Fossil fuels will inevitably decline some day and force us back to organic agriculture and the use of human manure.

To give you an idea of how essential fertilizer is, here are a few paragraphs from Yeonmi Park’s recent book “In order to live: A North Korean girl’s journey to freedom”:

“One of the big problems in North Korea was a fertilizer shortage. When the economy collapsed in the 1990s, the Soviet Union stopped sending fertilizer to us and our own factories stopped producing it. Whatever was donated from other countries couldn’t get to the farms because the transportation system had also broken down. This led to crop failures that made the famine even worse. So the government came up with a campaign to fill the fertilizer gap with a local and renewable source: human and animal waste. Every worker and schoolchild had a quota to fill.  Every member of the household had a daily assignment, so when we got up in the morning, it was like a war. My aunts were the most competitive.

“Remember not to poop in school! Wait to do it here!” my aunt in Kowon told me every day.

Whenever my aunt in Songnam-ri traveled away from home and had to poop somewhere else, she loudly complained that she didn’t have a plastic bag with her to save it.

The big effort to collect waste peaked in January so it could be ready for growing season. Our bathrooms were usually far from the house, so you had to be careful neighbors didn’t steal from you at night. Some people would lock up their outhouses to keep the poop thieves away. At school the teachers would send us out into the streets to find poop and carry it back to class.  If we saw a dog pooping in the street, it was like gold. My uncle in Kowon had a big dog who made a big poop—and everyone in the family would fight over it.

Our problems could not be fixed with tears and sweat, and the economy went into total collapse after torrential rains caused terrible flooding that wiped out most of the rice harvest…as many as a million North Koreans died from starvation or disease during the worst years of the famine.

When foreign food aid finally started pouring into the country to help famine victims, the government diverted most of it to the military, whose needs always came first. What food did get through to local authorities for distribution quickly ended up being sold on the black market”

Despite this, North Korea is a barren, destroyed landscape that can grow little food and is yet another example of a nation that is collapsing in part from degraded topsoil, as Montgomery wrote about in his book “Dirt: the erosion of civilization”.

Below is a review of The Wastewater Gardener: Preserving the planet, one flush at a time, by Mark Nelson, Synergetic Press.

Alice Friedemann, author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: KunstlerCast 253, KunstlerCast 278, Peak Prosperity]

Barnett, A. August 2, 2014. Excellent excrement. Why do we waste human waste? We don’t have to. NewScientist.

Would you dine in an artificial wetland laced with human waste? In The Wastewater Gardener, Mark Nelson makes an inspiring case for a new ecology of water.

Rainforest destruction, melting glaciers, acid oceans, the fate of polar bears, whales and pandas. You can understand why we get worked up about them ecologically. But wastewater?

The problem is excrement. Psychologically, we seem to be deeply averse to the stuff and want to avoid contact whenever possible – we don’t even want to think about it, we just want it out of the way.

The solution, a universal pipe-based waste network, works well until domestic and industrial chemicals and other non-biological waste are mixed in. Treating the resulting toxic soup, as Mark Nelson explains in The Wastewater Gardener, is not only a major technological challenge, but also uses enormous amounts of one of the planet’s most limited resources: fresh water.

Each adult produces between 7 and 18 ounces of faeces per day. With our current population, that’s a yearly 500 million tonnes. Centralized sewage systems use between 1000 and 2000 tons of water to move each ton of faeces, and another 6000 to 8000 tons to process it.
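The water overhead implied by these figures is easy to total up, taking the midpoints of the quoted ranges:

```python
# Water required to move and treat one year of global faeces,
# using the midpoints of the ranges quoted in the review.
faeces_tonnes = 500e6                 # ~500 million tonnes per year
move_ratio = (1000 + 2000) / 2        # tonnes of water per tonne moved
process_ratio = (6000 + 8000) / 2     # tonnes of water per tonne processed

water_gt = faeces_tonnes * (move_ratio + process_ratio) / 1e9  # billions of tonnes
print(f"~{water_gt:,.0f} billion tonnes of water per year")    # ~4,250
```

On these figures, centralized sewage implies thousands of cubic kilometres of water per year, which is why the book treats fresh water, not the waste itself, as the scarce input.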

Even then, this processed waste often ends up in waterways, affecting wildlife and communities downstream, and it eventually finds its way to the ocean. There it contributes to the process of eutrophication, which creates dead zones, killing coral reefs and other sea creatures.

But it doesn’t have to be like that. As head of Wastewater Gardens International, Nelson has traveled the world, developing and promoting artificial wetlands as the most logical way to use what we otherwise flush away.

Except that, as Nelson points out, with 7 billion-plus people, there really is no “away”. Besides, what the public purse pays to detox and dump can be put to profitable work, fertilising greenery for urban spaces and fruits and vegetables for domestic and commercial use, for example.

Less than 3% of Earth’s water is fresh, and only a tiny portion of that is easily available to us. Most of the water that standard sewage systems use to move human waste is drinkable. Diminishing water resources mean alternatives are pressingly needed. Wastewater gardens, where marsh plants are used to filter lavatory output and allow cleaned water to enter natural watercourses, are very much part of that solution.

Nelson clearly understands the yuck factor and goes to great lengths to show that having a shallow vat of human-waste-laced water nearby is far less vile than we might imagine, especially when it is covered by gravel and interlaced with plant roots. Restaurants with tables dotted between ponds containing the ever-filtering artificial wetlands provide convincing proof.

Constructed wetlands can take on big jobs, too: a mixture of papyrus, lotus and other plants has successfully and beautifully detoxified water from Indonesian batik-dyeing factories. This water had killed cows downstream and caused running battles between farmers and factory workers.

The Wastewater Gardener is not a “how to” story, but more a “how it was done” account. Nelson tells how these wetlands started to become mainstream in less than 30 years. With humility and humour, he recounts how, as a boy from New York City, he acquired hands-on ranching knowledge in New Mexico, then studied under American ecology guru, Howard Thomas Odum.

And stories of his experiences everywhere from urban Bali and the Australian outback to Morocco’s Atlas mountains and Mexico’s Cancún coast illustrate the gravelly, muddy evolution of his big idea. An inspiring read, not just for the smallest room.

Posted in Soil, Waste, Water | 6 Comments