Natural gas: The fracking fallacy Nature 2014

Inman, Mason. December 3, 2014. Natural Gas: The fracking fallacy. Nature 516, 28-30


The EIA projects that production will rise by more than 50% over the next quarter of a century, and perhaps beyond, with shale formations supplying much of that increase.

But such optimism contrasts with forecasts developed by a team of specialists at the University of Texas, which is analyzing the geological conditions using data at much higher resolution than the EIA’s. The Texas team projects that gas production from four of the most productive formations will peak in the coming years and then quickly decline. If that pattern holds for other formations that the team has not yet analyzed, it could mean much less natural gas in the United States’ future.

Like all energy forecasts, the lower projections from the Texas team could turn out to be inaccurate. Technological advances in the next few decades could open up more resources at lower costs, driving US production even higher than the EIA has predicted. But it is also possible that the Texas forecasts are too high, and that gas production will fall off even faster than the team suggests.

The one certainty here is that the United States and other nations have invested relatively little in tracking and assessing their natural resources. The EIA has a total budget of US$117 million, less than the value of one day’s gas production from the country’s shale formations.

Natural gas: The fracking fallacy

The United States is banking on decades of abundant natural gas to power its economic resurgence. That may be wishful thinking.

When US President Barack Obama talks about the future, he foresees a thriving US economy fueled to a large degree by vast amounts of natural gas pouring from domestic wells. “We have a supply of natural gas that can last America nearly 100 years,” he declared in his 2012 State of the Union address.

Obama’s statement reflects an optimism that has permeated the United States. It is all thanks to fracking, or hydraulic fracturing, which has made it possible to coax natural gas at a relatively low price out of the fine-grained rock known as shale. Around the country, terms such as ‘shale revolution’ and ‘energy abundance’ echo through corporate boardrooms.

Companies are betting big on forecasts of cheap, plentiful natural gas. Over the next 20 years, US industry and electricity producers are expected to invest hundreds of billions of dollars in new plants that rely on natural gas. And billions more dollars are pouring into the construction of export facilities that will enable the United States to ship liquefied natural gas to Europe, Asia and South America.

All of those investments are based on the expectation that US gas production will climb for decades, in line with the official forecasts by the US Energy Information Administration (EIA). As agency director Adam Sieminski put it last year: “For natural gas, the EIA has no doubt at all that production can continue to grow all the way out to 2040.”

But a careful examination of the assumptions behind such bullish forecasts suggests that they may be overly optimistic, in part because the government’s predictions rely on coarse-grained studies of major shale formations, or plays. Now, researchers are analyzing those formations in much greater detail and are issuing more-conservative forecasts. They calculate that such formations have relatively small ‘sweet spots’ where it will be profitable to extract gas.

Tad Patzek, head of the University of Texas at Austin’s department of petroleum and geosystems engineering, says this is bad news: “We’re setting ourselves up for a major fiasco.”

If US natural-gas production falls, plans to export large amounts overseas could fizzle. And nations hoping to tap their own shale formations may reconsider. “If it begins to look as if it’s going to end in tears in the United States, that would certainly have an impact on the enthusiasm in different parts of the world,” says economist Paul Stevens of Chatham House, a London-based think tank.

The idea that natural gas will be abundant is a sharp turnaround from more pessimistic outlooks that prevailed until about five years ago. Throughout the 1990s, US natural-gas production had been stuck on a plateau. With gas supplying 25% of US energy, there were widespread worries that supplies would shrink and the nation would become dependent on imports. The EIA, which collects energy data and provides a long-term outlook for US energy, projected as recently as 2008 that US natural-gas production would remain fairly flat for the following couple of decades.

The shale boom caught everyone by surprise. It relied on fracking technology that had been around for decades — but when gas prices were low, the technology was considered too costly to use on shale. In the 2000s, however, prices rose high enough for companies to afford fracking shale formations. Combined with new techniques for drilling long horizontal wells, this pushed US natural-gas production to an all-time high, allowing the nation to regain a title it had previously held for decades: the world’s top natural-gas producer.

Rich rocks

Much of the credit for that goes to the Marcellus shale formation, which stretches across West Virginia, Pennsylvania and New York. Beneath thickly forested rolling hills, companies have sunk more than 8,000 wells over several years, and are adding about 100 more every month. Each well extends down for about 2 kilometers before veering sideways and snaking for more than a kilometer through the shale. The Marcellus now supplies 385 million cubic meters of gas per day, more than enough to supply half of the gas currently burned in US power plants.

A substantial portion of the rest of the US gas supply comes from three other shale plays — the Barnett in Texas, the Fayetteville in Arkansas and the Haynesville, which straddles the Louisiana–Texas border. Together, these ‘big four’ plays boast more than 30,000 wells and are responsible for two-thirds of current US shale-gas production.

The EIA — like nearly all other forecasters — did not see the boom coming, and has consistently underestimated how much gas would come from shale. But as the boom unfolded, the agency substantially raised its long-term expectations for shale gas. In its Annual Energy Outlook 2014, the ‘reference case’ scenario — based on the expectation that natural-gas prices will gradually rise, but remain relatively low — shows US production growing until 2040, driven by large increases in shale gas.

The EIA has not published its projections for individual shale-gas plays, but has released them to Nature. In the latest reference-case forecast, production from the big four plays would continue rising quickly until 2020, then plateau for at least 20 years. Other shale-gas plays would keep the boom going until 2040.

Petroleum-industry analysts create their own shale-gas forecasts, which generally fall in the neighborhood of the EIA assessment. “EIA’s outlook is pretty close to the consensus,” says economist Guy Caruso of the Center for Strategic and International Studies in Washington DC, who is a former director of the agency. However, these consultancies rarely release the details behind their forecasts. That makes it difficult to assess and discuss their assumptions and methods, argues Ruud Weijermars, a geoscientist at Texas A&M University in College Station. Industry and consultancy studies are “entirely different from the peer-reviewed domain”, he says.

To provide rigorous and transparent forecasts of shale-gas production, a team of a dozen geoscientists, petroleum engineers and economists at the University of Texas at Austin has spent more than three years on a systematic set of studies of the major shale plays. The research was funded by a US$1.5-million grant from the Alfred P. Sloan Foundation in New York City, and has been appearing gradually in academic journals1, 2, 3, 4, 5 and conference presentations. That work is the “most authoritative” in this area so far, says Weijermars.

If natural-gas prices were to follow the scenario that the EIA used in its 2014 annual report, the Texas team forecasts that production from the big four plays would peak in 2020, and decline from then on. By 2030, these plays would be producing only about half as much as in the EIA’s reference case. Even the agency’s most conservative scenarios seem to be higher than the Texas team’s forecasts. “Obviously they do not agree very well with the EIA results,” says Patzek.

The main difference between the Texas and EIA forecasts may come down to how fine-grained each assessment is.

  • The EIA breaks up each shale play by county, calculating an average well productivity for that area. But counties often cover more than 1,000 square kilometers, large enough to hold thousands of horizontal fracked wells.
  • The Texas team, by contrast, splits each play into blocks of one square mile (2.6 square kilometers), a resolution at least 20 times finer than the EIA’s.

Resolution matters because each play has sweet spots that yield a lot of gas, and large areas where wells are less productive. Companies try to target the sweet spots first, so wells drilled in the future may be less productive than current ones. The EIA’s model so far has assumed that future wells will be at least as productive as past wells in the same county. But this approach, Patzek argues, “leads to results that are way too optimistic”.
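The averaging bias Patzek describes can be seen in a toy sketch. All numbers below are invented for illustration, not drawn from any real play:

```python
# Toy illustration of the county-averaging bias: a "county" holds a few
# high-yield sweet-spot locations and many marginal ones, and operators
# drill the sweet spots first. Yields are in arbitrary units.

sweet_spot_wells = [10.0] * 20    # early wells, drilled in the sweet spots
marginal_wells = [2.0] * 80       # the acreage left for future drilling

# A county-level model projects future wells at the historical average:
historical_average = sum(sweet_spot_wells) / len(sweet_spot_wells)

# But the remaining locations are mostly marginal:
future_average = sum(marginal_wells) / len(marginal_wells)

print(historical_average)                   # 10.0
print(future_average)                       # 2.0
print(historical_average / future_average)  # 5.0 — a fivefold overestimate
```

In this toy case, assuming future wells match past wells overstates their yield fivefold; finer spatial resolution lets a model see that the cheap gas goes first.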

The high resolution of the Texas studies allows their model to distinguish the sweet spots from the marginal areas. As a result, says study co-leader Scott Tinker, a geoscientist at the University of Texas at Austin, “we’ve been able to say, better than in the past, what a future well would look like”.

The Texas and EIA studies also differ in how they estimate the total number of wells that could be economically drilled in each play. The EIA does not explicitly state that number, but its analysis seems to require more wells than the Texas assessment, which excludes areas where drilling would be difficult, such as under lakes or major cities. These features of the model were chosen to “mimic reality”, Tinker says, and were based on team members’ long experience in the petroleum industry.

Alternative futures

The lower forecasts from Texas mesh with a few independent studies that use simpler methods. Studies by the following researchers suggest that increasing production, as in the EIA’s forecasts, would require a significant and sustained increase in drilling over the next 25 years, which may not be profitable:

  1. geoscientist Ruud Weijermars6 of Texas A&M University
  2. Mark Kaiser7 of Louisiana State University in Baton Rouge
  3. David Hughes8, a retired Geological Survey of Canada geologist

Some industry insiders are impressed by the Texas assessment. Richard Nehring, an oil and gas analyst at Nehring Associates in Colorado Springs, Colorado, which operates a widely used database of oil and gas fields, says the team’s approach is “how unconventional resource assessments should be done”.

Patzek acknowledges that forecasts of shale plays “are very, very difficult and uncertain”, in part because the technologies and approaches to drilling are rapidly evolving. In newer plays, companies are still working out the best spots to drill. And it is still unclear how tightly wells can be packed before they significantly interfere with each other.

Yet in a working paper9 published online on 14 October, two EIA analysts acknowledge problems with the agency’s methods so far. They argue that it would be better to draw upon high-resolution geological maps, and they point to those generated by the Texas team as an example of how such models could improve forecasts by delineating sweet spots. The paper carries a disclaimer that the authors’ views are not necessarily those of the EIA — but the agency does plan to use a new approach along these lines when it assesses the Marcellus play for its 2015 annual report. (When Nature asked the authors of that paper for an on-the-record interview, they referred questions to John Staub of the EIA.)

Boom or bust

Patzek argues that actual production could come out lower than the team’s forecasts. He talks about it hitting a peak in the next decade or so — and after that, “there’s going to be a pretty fast decline on the other side”, he says. “That’s when there’s going to be a rude awakening for the United States.” He expects that gas prices will rise steeply, and that the nation may end up building more gas-powered industrial plants and vehicles than it will be able to afford to run. “The bottom line is, no matter what happens and how it unfolds,” he says, “it cannot be good for the US economy.”

If forecasting is difficult for the United States, which can draw on data for tens of thousands of shale-gas wells, the uncertainty is much larger in countries with fewer wells. The EIA has commissioned estimates of world shale potential from Advanced Resources International (ARI), a consultancy in Washington DC, which concluded in 2013 that shale formations worldwide are likely to hold a total of 220 trillion cubic meters of recoverable natural gas10. At current consumption rates — with natural gas supplying one-quarter of global energy — that would provide a 65-year supply. However, the ARI report does not state a range of uncertainty on its estimates, nor how much gas might be economical to extract.
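A quick sanity check on that 65-year figure, using only the numbers quoted above, shows the world gas-consumption rate it implies:

```python
# Implied world gas consumption behind the "65-year supply" claim.
# Inputs are the figures quoted in the text, not independent data.
recoverable_m3 = 220e12     # ARI estimate: 220 trillion cubic meters
supply_years = 65

implied_annual_m3 = recoverable_m3 / supply_years
print(round(implied_annual_m3 / 1e12, 1))  # ~3.4 trillion cubic meters per year
```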

Such figures are “extremely dubious”, argues Stevens. “It’s sort of people wetting fingers and waving them in the air.” He cites ARI’s assessments of Poland, which is estimated to have the largest shale-gas resources in Europe. Between 2011 and 2013, the ARI reduced its estimate for Poland’s most promising areas by one-third, saying that some test wells had yielded less than anticipated. Meanwhile, the Polish Geological Institute did its own study11, calculating that the same regions held less than one-tenth of the gas in ARI’s initial estimate.

If gas supplies in the United States dry up faster than expected — or environmental opposition grows stronger — countries such as Poland will be less likely to have their own shale booms, say experts.

For the moment, however, optimism about shale gas reigns — especially in the United States. And that is what worries some energy experts. “There is a huge amount of uncertainty,” says Nehring. “The problem is, people say, ‘Just give me a number’. Single numbers, even if they’re wrong, are a lot more comforting.”

The EIA is underfunded

Patzek says that the EIA’s method amounts to “educated guesswork”. But he and others are reluctant to come down too hard. The EIA is doing “the best with the resources they have and the timelines they have”, says Patzek. Its 2014 budget — which covers data collection and forecasting for all types of energy — totaled just $117 million, about the cost of drilling a dozen wells in the Haynesville shale. The EIA is “good value for the money”, says Caruso. “I always felt we were underfunded. The EIA was being asked to do more and more, with less and less.”

Representatives of the EIA defend the agency’s assessments and argue that they should not be compared with the Texas studies because they use different assumptions and include many scenarios. “Both modelling efforts are valuable, and in many respects feed each other,” says John Staub, leader of the EIA’s team on oil and gas exploration and production analysis. “In fact, EIA has incorporated insights from the University of Texas team,” he says.



  1. Patzek, T. W., Male, F. & Marder, M. Gas production in the Barnett Shale obeys a simple scaling theory. Proc. Natl Acad. Sci. USA 110, 19731–19736 (2013).

Ten years ago, US natural gas cost 50% more than that from Russia. Now, it is threefold less. US gas prices plummeted because of the shale gas revolution. However, a key question remains: At what rate will the new hydrofractured horizontal wells in shales continue to produce gas? We analyze the simplest model of gas production consistent with basic physics of the extraction process. Its exact solution produces a nearly universal scaling law for gas wells in each shale play, where production first declines as 1 over the square root of time and then exponentially. The result is a surprisingly accurate description of gas extraction from thousands of wells in the United States’ oldest shale play, the Barnett Shale.

The fast progress of hydraulic fracturing technology (SI Text, Figs. S1 and S2) has led to the extraction of natural gas and oil from tens of thousands of wells drilled into mudrock (commonly called shale) formations. The wells are mainly in the United States, although there is significant potential on all continents (1). The “fracking” technology has generated considerable concern about environmental consequences (2, 3) and about whether hydrocarbon extraction from mudrocks will ultimately be profitable (4). The cumulative gas obtained from the hydrofractured horizontal wells and the profits to be made depend upon production rate. Because large-scale use of hydraulic fracturing in mudrocks is relatively new, data on the behavior of hydrofractured wells on the scale of 10 y or more are only now becoming available.

There is more than a century of experience describing how petroleum and gas production declines over time for vertical wells. The geometry of horizontal wells in gas-rich mudrocks is quite different from the configuration that has guided intuition for the past century. The mudrock formations are thin layers, on the order of 30–90 m thick, lying at characteristic depths of 2 km or more and extending over areas of thousands of square kilometers. Wells that access these deposits drop vertically from the surface of the earth and then turn so as to extend horizontally within the mudrock for 1–8 km. The mudrock layers have such low natural permeability that they have trapped gas for millions of years, and this gas becomes accessible only after an elaborate process that involves drilling horizontal wells, fracturing the rock with pressurized water, and propping the fractures open with sand. Gas seeps from the region between each two consecutive fractures into the highly permeable fracture planes and into the wellbore, and it is rapidly produced from there.

Gas released by hydraulic fracturing can only be extracted from the finite volume where permeability is enhanced. Exponential decline of production once the interference time has been reached is inevitable, and extrapolations based upon the power law that prevails earlier are inaccurate. The majority of wells are too young to be displaying interference yet. The precise amount of gas they produce, and therefore their ultimate profitability, will depend upon when interference sets in.
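The two-regime decline law described above can be sketched numerically. This is a hedged toy model: the rate scale K, the interference time tau, and the choice of tau as the exponential decay constant are illustrative assumptions, not values fitted in the paper.

```python
import math

def production_rate(t, K=1.0, tau=8.0):
    """Single-well gas production rate at time t (years): K/sqrt(t) before
    the interference time tau, then an exponential tail matched at t = tau."""
    if t < tau:
        return K / math.sqrt(t)
    return (K / math.sqrt(tau)) * math.exp(-(t - tau) / tau)

# Extrapolating the early power law past tau overstates cumulative output.
# Crude Riemann sums over t = 0.1 to ~30 years:
dt = 0.01
actual = sum(production_rate(0.1 + i * dt) * dt for i in range(3000))
power_law = sum((1.0 / math.sqrt(0.1 + i * dt)) * dt for i in range(3000))
print(actual < power_law)  # True — the power-law extrapolation overshoots
```

The gap between the two sums is exactly the error made by forecasts that never let interference set in.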

For the moment, it is necessary to live with some uncertainty. Upper and lower bounds on gas in place are still far apart, even in the Barnett Shale with the longest history of production. Pessimists (4) see only the lower bounds, whereas optimists (19) look beyond the upper bounds. A detailed economic analysis based on the model presented here is possible, however, and is being published elsewhere (17, 18, 20, 21). The theoretical tools we are providing should make it possible to detect the onset of interference at the earliest possible date, provide increasingly accurate production forecasts as data become available, and assist with rational decisions about how hydraulic fracturing should proceed in light of its impact on the US environment and economy.

  2. Browning, J. et al. Oil Gas J. 111 (8), 62–73 (2013).
  3. Browning, J. et al. Oil Gas J. 111 (9), 88–95 (2013).
  4. Browning, J. et al. Oil Gas J. 112 (1), 64–73 (2014).
  5. Gülen, G., Browning, J., Ikonnikova, S. & Tinker, S. W. Energy 60, 302–315 (2013).
  6. Weijermars, R. Appl. Energy 124, 283–297 (2014).
  7. Kaiser, M. J. & Yu, Y. Oil Gas J. 112 (3), 62–65 (2014).
  8. Hughes, J. D. Drilling Deeper (Post Carbon Institute, 2014); and Hughes, J. D. Nature 494, 307–308 (2013).
  9. Cook, T. & Van Wagener, D. Improving Well Productivity Based Modeling with the Incorporation of Geologic Dependencies (EIA, 2014).
  10. US Energy Information Administration. Technically Recoverable Shale Oil and Shale Gas Resources (EIA, 2013).
  11. Assessment of Shale Gas and Shale Oil Resources of the Lower Paleozoic Baltic–Podlasie–Lublin Basin in Poland — First Report (Polish Geological Institute, 2012).



Life before Cars: When Pedestrians Ruled the Streets


December 2014, By Clive Thompson. Smithsonian Magazine.

When you visit any city in America today, it’s a sea of cars, with pedestrians dodging between the speeding autos. It’s almost hard to imagine now, but in the late 1890s, the situation was completely reversed. Pedestrians dominated the roads, and cars were the rare, tentative interlopers. Horse-drawn carriages and streetcars existed, but they were comparatively slow.

So pedestrians ruled. “The streets were absolutely black with people,” as one observer described the view in the nation’s capital. People strolled to and fro down the center of the avenue, pausing to buy snacks from vendors. They’d chat with friends or even “manicure your nails,” as one chamber of commerce wryly noted. And when they stepped off a sidewalk, they did it anywhere they pleased.

“They’d stride right into the street, casting little more than a glance around them…anywhere and at any angle,” as Peter D. Norton, a historian and author of Fighting Traffic: The Dawn of the Motor Age in the American City, tells me. “Boys of 10, 12 or 14 would be selling newspapers, delivering telegrams and running errands.” For children, streets were playgrounds.

At the turn of the century, motor vehicles were handmade, expensive toys of the rich, and widely regarded as rare and dangerous. When the first electric car emerged in Britain in the 19th century, the speed limit was set at four miles an hour so a man could run ahead with a flag, warning citizens of the oncoming menace, notes Tom Vanderbilt, author of Traffic: Why We Drive the Way We Do (And What It Says About Us).

Things changed dramatically in 1908 when Henry Ford released the first Model T. Suddenly a car was affordable, and a fast one, too: The Model T could zoom up to 45 miles an hour. Middle-class families scooped them up, mostly in cities, and as they began to race through the streets, they ran headlong into pedestrians—with lethal results. By 1925, auto accidents accounted for two-thirds of the entire death toll in cities with populations over 25,000.

An outcry arose, aimed squarely at drivers. The public regarded them as murderers. Walking in the streets? That was normal. Driving? Now that was aberrant—a crazy new form of selfish behavior.

“Nation Roused Against Motor Killings” read the headline of a typical New York Times story, decrying “the homicidal orgy of the motor car.” The editorial went on to quote a New York City traffic court magistrate, Bruce Cobb, who exhorted, “The slaughter cannot go on. The mangling and crushing cannot continue.” Editorial cartoons routinely showed a car piloted by the grim reaper, mowing down innocents.

When Milwaukee held a “safety week” poster competition, citizens sent in lurid designs of car accident victims. The winner was a drawing of a horrified woman holding the bloody corpse of her child. Children killed while playing in the streets were particularly mourned. They constituted one-third of all traffic deaths in 1925; half of them were killed on their home blocks. During New York’s 1922 “safety week” event, 10,000 children marched in the streets, 1,054 of them in a separate group symbolizing the number killed in accidents the previous year.

Drivers wrote their own letters to newspapers, pleading to be understood. “We are not a bunch of murderers and cutthroats,” one said. Yet they were indeed at the center of a fight that, clearly, could only have one winner. To whom should the streets belong?


By the early 1920s, anti-car sentiment was so high that carmakers and driver associations—who called themselves “motordom”—feared they would permanently lose the public.

You could see the damage in car sales, which slumped by 12 percent between 1923 and 1924, after years of steady increase. Worse, anti-car legislation loomed: Citizens and politicians were agitating for “speed governors” to limit how fast cars could go. “Gear them down to fifteen or twenty miles per hour,” as one letter-writer urged. Charles Hayes, president of the Chicago Motor Club, fretted that cities would impose “unbearable restrictions” on cars.

Hayes and his car-company colleagues decided to fight back. It was time to target not the behavior of cars—but the behavior of pedestrians. Motordom would have to persuade city people that, as Hayes argued, “the streets are made for vehicles to run upon”—and not for people to walk. If you got run over, it was your fault, not that of the motorist. Motordom began to mount a clever and witty public-relations campaign.

Their most brilliant stratagem: To popularize the term “jaywalker.” The term derived from “jay,” a derisive term for a country bumpkin. In the early 1920s, “jaywalker” wasn’t very well known. So pro-car forces actively promoted it, producing cards for Boy Scouts to hand out warning pedestrians to cross only at street corners. At a New York safety event, a man dressed like a hayseed was jokingly rear-ended over and over again by a Model T. In the 1922 Detroit safety week parade, the Packard Motor Car Company produced a huge tombstone float—except, as Norton notes, it now blamed the jaywalker, not the driver: “Erected to the Memory of Mr. J. Walker: He Stepped from the Curb Without Looking.”

The use of “jaywalker” was a brilliant psychological ploy. What’s the best way to convince urbanites not to wander in the streets? Make the behavior seem unsophisticated—something you’d expect from hicks fresh off the turnip truck. Car companies used the self-regarding snobbery of city-dwellers against themselves. And the campaign worked. Only a few years later, in 1924, “jaywalker” was so well-known it appeared in a dictionary: “One who crosses a street without observing the traffic regulations for pedestrians.”

Meanwhile, newspapers were shifting allegiance to the automakers—in part, Norton and Vanderbilt argue, because they were profiting heavily from car ads. So they too began blaming pedestrians for causing accidents.

“It is impossible for all classes of modern traffic to occupy the same right of way at the same time in safety,” as the Providence Sunday Journal noted in a 1921 article called “The Jay Walker Problem,” reprinted from the pro-car Motor magazine.

In retrospect, you could have predicted that pedestrians were doomed. They were politically outmatched. “There was a road lobby of asphalt users, but there was no lobby of pedestrians,” Vanderbilt says. And cars were a genuinely useful technology. As pedestrians, Americans may have feared their dangers—but as drivers, they loved the mobility.

By the early ’30s, the war was over. Ever after, “the street would be monopolized by motor vehicles,” Norton tells me. “Most of the children would be gone; those who were still there would be on the sidewalks.” By the 1960s, cars had become so dominant that when civil engineers made the first computer models to study how traffic flowed, they didn’t even bother to include pedestrians.


The triumph of the automobile changed the shape of America, as environmentalists ruefully point out. Cars allowed the suburbs to explode, and big suburbs allowed for energy-hungry monster homes. Even in midcentury, critics could see this coming too. “When the American people, through their Congress, voted for a twenty-six-billion-dollar highway program, the most charitable thing to assume is that they hadn’t the faintest notion of what they were doing,” Lewis Mumford wrote sadly in 1958.


Why Fusion Will Never Work

Fusion is not likely to work out, yet it is the only possible energy source that could replace fossil fuels (Hoffert et al.).

Be thankful it can’t be done: unlimited energy from fusion would lead to unlimited human reproduction and the depletion of every resource on earth.

The immense gravity of the sun creates fusion by pushing atoms together.  We can’t do that on earth, where the two choices (and the main projects pursuing them) are:

1) ITER: use magnetic fields to contain plasma until atoms collide and fuse. This has been compared to holding jello together with rubber bands.

But there is nothing to report yet from the International Thermonuclear Experimental Reactor, because it is still being built:

  • The cost so far is $22.3 billion
  • The original deadline was 2016; the latest target of 2027 is highly unlikely.
  • Their goal of a ‘burning plasma’ that produces more energy than the machine itself consumes is at least 20 years away
  • It’s so poorly run that a recent assessment found serious problems with the project’s leadership, management, and governance. The report was so damning the project’s governing body only allowed senior management to see it because they feared “the project could be interpreted as a major failure”.
  • The U.S. contribution to ITER will cost a total of $3.9 billion — four times the original estimate — according to a report released April 10, 2014.
  • Even if ITER does reach break-even someday, it will have produced just heat, not the ultimate aim, electricity. More work will be needed to hook it up to a generator. For ITER and tokamaks in general, commercialization remains several decades away.

2) The National Ignition Facility (NIF) at Lawrence Livermore National Laboratory is trying to use lasers to fuse hydrogen atoms together.

Despite the publicity from a recent test, this project is at least as far from achieving fusion as ITER is:

  • The cost so far is $5.3 billion
  • The original deadline was 2009. A physicist working on the project, Denise Hinkel, said of the recent 2014 test that “we’re so far away from fusion it may not be a useful way to talk about what’s happening here at Livermore”.

The goal of the NIF is to achieve “ignition”. That means that the fused hydrogen atoms need to generate as much energy as was used to run the lasers that bombarded them with heat and pressure.

According to Mark Herrmann, at Sandia National Laboratory, the pressures achieved in the recent test were “1,000 times lower” than needed to meet the criteria for ignition.

Well, actually, according to the June 2014 issue of Scientific American, it was a hell of a lot less than that (Biello):

  • 17,000 joules of energy were yielded by the fuel pellet
  • 500,000,000,000,000 joules (500 trillion joules) were required just to feed the lasers alone
  • the pellet would need to yield about 29.4 billion times more energy to reach ignition — not 1,000
  • put another way, the pellet returned just 0.0000000034% of the input energy (17,000 / 500,000,000,000,000 = 0.000000000034)
  • Biello concludes “A source of nearly unlimited, clean energy is still decades away”.
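The size of the gap follows directly from the two energy figures quoted above (taking those quoted figures at face value):

```python
# Arithmetic behind the ignition-gap bullets, using the quoted figures.
pellet_yield_j = 17_000                    # energy released by the fuel pellet
laser_input_j = 500_000_000_000_000        # energy quoted for feeding the lasers

fraction = pellet_yield_j / laser_input_j  # share of input returned as fusion
shortfall = laser_input_j / pellet_yield_j # factor by which yield must grow

print(fraction)                   # 3.4e-11, i.e. 0.0000000034 percent
print(round(shortfall / 1e9, 1))  # 29.4 — a gap of roughly 29 billion-fold
```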

When you consider what it would take to reach ignition, you will understand why many physicists don’t think NIF will ever work and is a total waste of money:

To reach ignition, 192 lasers in an area the size of 3 football fields will need to heat a tiny ball of hydrogen gas the size of a peppercorn to 50 million degrees Centigrade at 150 billion times the pressure of Earth’s atmosphere. Each of the 192 lasers must bombard the peppercorn at exactly the same time with perfect symmetry on all sides.  If there is any lack of symmetry, the peppercorn will be squeezed like a balloon, which creates escape holes for the hydrogen and no fusion.

To get to ignition, scientists would need to create a source of energy greater than all the energy pumped into the system by the facility’s 192 high-powered lasers – a goal some scientists say may be unachievable.

And if somehow NIF succeeded, practical fusion would still likely be decades away. NIF, at its quickest, fires once every few hours. The targets take weeks to build with artisan precision. A commercial laser fusion power plant would probably have to vaporize fuel pellets at a rate of 10 per second (Chang).

“You want to look at the big lie in each program,” says Edward C. Morse, a professor of nuclear engineering at the University of California, Berkeley. “The big lie in [laser-based] fusion is that we can make these target capsules for a nickel a piece.” The target capsules, the peppercorn-size balls of deuterium-tritium fuel, have to be exquisitely machined and precisely round to ensure that they compress evenly from all sides. Any bump on the pellet and the target won’t blow, which makes current iterations of the pellets prohibitively expensive. Although Livermore (LLNL), which plans to make its pellets on site, does not release anticipated costs, the Laboratory for Laser Energetics at the University of Rochester also makes similar deuterium-tritium balls. “The reality now is that the annual budget to make targets that are used at Rochester is several million dollars, and they make about six capsules a year,” Morse says. “So you might say those are $1 million a piece.” LLNL can only blast one pellet every few hours, but in the future, targets will need to cycle through the chamber with the speed of a Gatling gun, consuming almost 90,000 targets a day (Moyer).

[Although other smaller fusion breakthrough projects are announced frequently, I predict they will end by 2025 or sooner when the energy crisis strikes, the financial system breaks down, or some other “black swan” arrives.

Since this is a liquid fuels crisis, and diesel fuel is essential for freight transportation, which can’t be electrified [as explained in my upcoming book at Springer], electricity doesn’t buy us anything anyhow.

Below are many articles that go into the details of why fusion is so difficult to achieve. Since I’ve drastically reduced and edited them, plus taken out the pretty pictures, you might want to read the originals in their entirety.

Alice Friedemann at]

Fusion’s False Dawn

March/April 2010. by Michael Moyer. Scientific American.

Scientists have long dreamed of harnessing nuclear fusion—the power plant of the stars—for a safe, clean and virtually unlimited energy supply. Even as a historic milestone nears, skeptics question whether a working reactor will ever be possible

The deuterium-tritium fusion only kicks in at temperatures above 150 million degrees Celsius — 25,000 times hotter than the surface of the sun.

Yet the flash of ignition may be the easy part. The challenges of constructing and operating a fusion-based power plant could be more severe than the physics challenge of generating the fireballs in the first place.  A working reactor would have to be made of materials that can withstand temperatures of millions of degrees for years on end. It would be constantly bombarded by high-energy nuclear particles–conditions that turn ordinary materials brittle and radioactive. It would have to make its own nuclear fuel in a complex breeding process. And to be a useful energy-producing member of the electricity grid, it would have to do these things pretty much constantly–with no outages, interruptions or mishaps–for decades.

Fusion plasmas are hard to control. Imagine holding a large, squishy balloon. Now squeeze it down to as small as it will go. No matter how evenly you apply pressure, the balloon will always squirt out through a space between your fingers. The same problem applies to plasmas. Anytime scientists tried to clench them down into a tight enough ball to induce fusion, the plasma would find a way to squirt out the sides. It is a paradox germane to all types of fusion reactors–the hotter you make the plasma and the tighter you squeeze it, the more it fights your efforts to contain it.  So scientists have built ever larger magnetic bottles, but every time they did so, new problems emerged.

No matter how you make fusion happen–whether you use megajoule lasers (like at Lawrence Livermore National Laboratory) or the crunch of magnetic fields–energy payout will come in the currency of neutrons. Because these particles are neutral, they are not affected by electric or magnetic fields. Moreover, they pass straight through most solid materials as well.

The only way to make a neutron stop is to have it directly strike an atomic nucleus. Such collisions are often ruinous. The neutrons coming out of a deuterium-tritium fusion reaction are so energetic that they can knock out of position an atom in what would ordinarily be a strong metal–steel for instance. Over time these whacks weaken a reactor, turning structural components brittle.

Other times the neutrons will turn benign material radioactive. When a neutron hits an atomic nucleus, the nucleus can absorb the neutron and become unstable. A steady stream of neutrons—even if they come from a “clean” reaction such as fusion—would make any ordinary container dangerously radioactive, Baker says. “If someone wants to sell you any kind of nuclear system and says there is no radioactivity, hang onto your wallet.”

A fusion-based power plant must also convert energy from the neutrons into heat that drives a turbine. Future reactor designs make the conversion in a region surrounding the fusion core called the blanket. Although the chance is small that a given neutron will hit any single atomic nucleus in a blanket, a blanket thick enough and made from the right material—a few meters’ worth of steel, perhaps—will capture nearly all the neutrons passing through. These collisions heat the blanket, and a liquid coolant such as molten salt draws that heat out of the reactor. The hot salt is then used to boil water, and as in any other generator, this steam spins a turbine to generate electricity.

Except it is not so simple. The blanket has another job, one just as critical to the ultimate success of the reactor as extracting energy. The blanket has to make the fuel that will eventually go back into the reactor.

Although deuterium is cheap and abundant, tritium is exceptionally rare and must be harvested from nuclear reactions. An ordinary nuclear power plant can make two to three kilograms of it in a year, at an estimated cost of between $80 million and $120 million a kilogram. Unfortunately, a magnetic fusion plant will consume about a kilogram of tritium a week. “The fusion needs are way, way beyond what fission can supply,” says Mohamed Abdou, director of the Fusion Science and Technology Center at the University of California, Los Angeles.
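A rough sketch of that supply gap, using the figures quoted above (the per-plant output, weekly consumption, and price range are Abdou’s numbers; the rest is arithmetic):

```python
# Tritium supply vs. demand, using the figures quoted above.
fission_supply_kg_yr = 2.5           # one fission plant makes 2-3 kg/yr (midpoint)
fusion_demand_kg_wk = 1.0            # one magnetic fusion plant burns ~1 kg/week
price_low, price_high = 80e6, 120e6  # estimated cost per kilogram, dollars

annual_demand_kg = fusion_demand_kg_wk * 52
plants_needed = annual_demand_kg / fission_supply_kg_yr
annual_fuel_bill = (price_low * annual_demand_kg, price_high * annual_demand_kg)

print(f"one fusion plant needs ~{annual_demand_kg:.0f} kg/yr")
print(f"that is the output of ~{plants_needed:.0f} fission plants")
print(f"bought at market rates: ${annual_fuel_bill[0]/1e9:.1f}-{annual_fuel_bill[1]/1e9:.1f} billion/yr")
```

Roughly twenty fission plants would be needed just to fuel one fusion plant, which is why the blanket has to breed its own tritium.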

For a fusion plant to generate its own tritium, it has to borrow some of the neutrons that would otherwise be used for energy. Inside the blanket channels of lithium, a soft, highly reactive metal, would capture energetic neutrons to make helium and tritium. The tritium would escape out through the channels, get captured by the reactor and be reinjected into the plasma.

When you get to the fine print, though, the accounting becomes precarious. Every fusion reaction devours exactly one tritium ion and produces exactly one neutron. So every neutron coming out of the reactor must make at least one tritium ion, or else the reactor will soon run a tritium deficit—consuming more than it creates. Avoiding this obstacle is possible only if scientists manage to induce a complicated cascade of reactions. First, a neutron hits a lithium 7 isotope, which, although it consumes energy, produces both a tritium ion and a neutron. Then this second neutron goes on to hit a lithium 6 isotope and produce a second tritium ion.

Moreover, all this tritium has to be collected and reintroduced to the plasma with near 100 percent efficiency. “In this chain reaction you cannot lose a single neutron, otherwise the reaction stops,” says Michael Dittmar, a particle physicist at the Swiss Federal Institute for Technology in Zurich. “The first thing one should do [before building a reactor] is to show that the tritium production can function. It is pretty obvious that this is completely out of the question.”

“This is a very fancy gadget, this fusion blanket,” Hazeltine says. “It is accepting a lot of heat and taking care of that heat without overheating itself. It is accepting neutrons, and it is made out of very sophisticated materials so it doesn’t have a short lifetime in the face of those neutrons. And it is taking those neutrons and using them to turn lithium into tritium.”

ITER, unfortunately, will not test blanket designs. That is why many scientists—especially those in the U.S., which is not playing a large role in the design, construction or operation of ITER—argue that a separate facility is needed to design and build a blanket. “You must show that you can do this in a practical system,” Abdou says, “and we have never built or tested a blanket. Never.” If such a test facility received funding tomorrow, Abdou estimates that it would take between 30 and 75 years to understand the issues sufficiently well to begin construction on an operational power plant. “I believe it’s doable,” he says, “but it’s a lot of work.”

The Big Lie

Let’s say it happens. The year is 2050. Both the NIF and ITER were unqualified successes, hitting their targets for energy gain on time and under budget. Mother Nature held no surprises as physicists ramped up the energy in each system; the ever unruly plasmas behaved as expected. A separate materials facility demonstrated how to build a blanket that could generate tritium and convert neutrons to electricity, as well as stand up to the subatomic stresses of daily use in a fusion plant. And let’s assume that the estimated cost for a working fusion plant is only $10 billion. Will it be a useful option?

Even for those who have spent their lives pursuing the dream of fusion energy, the question is a difficult one to answer. The problem is that fusion-based power plants—like ordinary fission plants—would be used to generate baseload power. That is, to recoup their high initial costs, they would need to always be on. “Whenever you have any system that is capital-intensive, you want to run it around the clock because you are not paying for the fuel,” Baker says.

Unfortunately, it is extremely difficult to keep a plasma going for any appreciable length of time. So far reactors have been able to maintain a fusing plasma for less than a second. The goal of ITER is to maintain a burning plasma for tens of seconds. Going from that duration to around-the-clock operation is yet another huge leap. “Fusion will need to hit 90 percent availability,” says Baker, a figure that includes the downtime required for regular maintenance. “This is by far the greatest uncertainty in projecting the economic reliability of fusion systems.”

It used to be that fusion was [seen as] fundamentally different from dirty fossil fuels or dangerous uranium. It was beautiful and pure—a permanent fix, an end to our thirst for energy. It was as close to the perfection of the cosmos as humans were ever likely to get. Now those visions are receding. Fusion is just one more option and one that will take decades of work to bear fruit…the age of unlimited energy is not [in sight].

The Most Expensive Science Experiment Ever

June 27, 2013 By Daniel Clery, Popular Science

Some people have spent their whole working lives researching fusion and then retired feeling bitter at what they see as a wasted career. But that hasn’t stopped new recruits joining the effort every year, perhaps motivated by a sense that the need for fusion has never been greater, considering the twin threats of dwindling oil supplies and climate change.  ITER won’t generate any electricity, but designers hope to go beyond break-even and spark enough fusion reactions to produce 10 times as much heat as is pumped in to make it work.

To get there requires a reactor of epic proportions:

  • The building containing the reactor will be nearly 200 feet tall and extend 43 feet underground.
  • The reactor inside will weigh 23,000 tons.
  • The rare metal niobium will be combined with tin to make superconducting wires for the reactor’s magnets. When finished, they will have made 50,000 miles of wire, enough to wrap around the equator twice.
  • There will be 18 magnets, each 46 feet tall and weighing 360 tons (as much as a fully-laden jumbo jet) with  giant D-shaped coils of wire forming the electromagnets used to contain the plasma
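The wire figure in the list above checks out against Earth’s equatorial circumference (about 24,901 miles):

```python
# Sanity-check the superconducting-wire bullet above.
wire_miles = 50_000
equator_miles = 24_901   # Earth's equatorial circumference

wraps = wire_miles / equator_miles
print(f"the wire would circle the equator {wraps:.1f} times")
```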

ITER’s huge cost is, for the nations involved, a gamble against a future in which access to energy will become an issue of national security. Most agree that oil production is going to decline sharply during this century.  That doesn’t leave many options for the world’s future energy supplies. Conventional nuclear power makes people uneasy for many reasons, including safety, the problems of disposing of waste, nuclear proliferation and terrorism.

Alternative energy sources such as wind, wave and solar power will undoubtedly be a part of our energy future. It would be very hard, however, for our modern energy-hungry society to function on alternative energy alone because it is naturally intermittent–sometimes the sun doesn’t shine and the wind doesn’t blow–and also diffuse–alternative technologies take up a lot of space to produce not very much power.

Difficult choices lie ahead over energy and, some fear, wars will be fought in coming decades over access to energy resources, especially as the vast populations of countries such as China and India increase in prosperity and demand more energy. Anywhere that oil is produced or transported–the Strait of Hormuz, the South China Sea, the Caspian Sea, the Arctic–could be a flashpoint. Supporting fusion is like backing a long shot: it may not come through, but if it does it will pay back handsomely. No one is promising that fusion energy will be cheap; reactors are expensive things to build and operate. But in a fusion-powered world geopolitics would no longer be dominated by the oil industry, so no more oil embargoes, no wild swings in the price of crude and no more worrying that Russia will turn off the tap on its gas pipelines.

Star power: Small fusion start-ups aim for break-even

16 August 2011, by David Hambling.

The deuterium-tritium fusion only kicks in at temperatures above 150 million degrees Celsius — 25,000 times hotter than the surface of the sun. Not only does reaching such temperatures require a lot of energy, but no known material can withstand them once they have been achieved. The ultra-hot, ultra-dense plasma at the heart of a fusion reactor must instead be kept well away from the walls of its container using magnetic fields. Following a trick devised in the Soviet Union in the 1950s, the plasma is generated inside a doughnut or torus-shaped vessel, where encircling magnetic fields keep the plasma spiraling clear of the walls – a configuration known as a tokamak. This confinement is not perfect: the plasma has a tendency to expand, cool and leak out, limiting the time during which fusion can occur. The bigger the tokamak, the better the chance of extracting a meaningful amount of energy, since larger magnetic fields hold the plasma at a greater distance, meaning a longer confinement time.

Break-even is the dream ITER was conceived to realize.

With a huge confinement volume, it should contain a plasma for several minutes, ultimately producing 10 times as much power as is put in.  But this long confinement time brings its own challenges. An elaborate system of gutters is needed to extract from the plasma the helium produced in the reaction, along with other impurities. The neutrons emitted, which are chargeless and so not contained by magnetic fields, bombard the inside wall of the torus, making it radioactive and meaning it must be regularly replaced. These neutrons are also needed to breed the tritium that sustains the reaction, so the walls must be designed in such a way that the neutrons can be captured on lithium to make tritium. The details of how to do this are still being worked out.

The success of the project is by no means guaranteed.

“We know we can produce plasmas with all the right elements, but when you are operating on this scale there are uncertainties,” says David Campbell, a senior ITER scientist. Extrapolations from the performance of predecessors suggest a range of possible outcomes, he says. The most likely is that ITER will work as planned, delivering 10 times break-even energy. Yet there is a chance it might work better – or produce too little energy to be useful for commercial fusion.


Richard Wolfson, in “Nuclear Choices: A Citizen’s Guide to Nuclear Technology”:

“In the long run, fusion itself could bring on the ultimate climatic crisis. The energy released in fusion would not otherwise be available on Earth; it would represent a new input to the global energy flow. Like all the rest of the global energy, fusion energy would ultimately become heat that Earth would have to radiate into space. As long as humanity kept its energy consumption a tiny fraction of the global energy flow, there would be no major problem. But history shows that human energy consumption grows rapidly when it is not limited by shortages of fuel. Fusion fuel would be unlimited, so our species might expand its energy consumption to the point where the output of our fusion reactors became significant relative to the global input of solar energy. At that point Earth’s temperature would inevitably rise. This long-term criticism of fusion holds for any energy source that could add to Earth’s energy flow even a few percent of what the Sun provides. Only solar energy itself escapes this criticism” (page 274).

Robert L. Hirsch, author of the Department of Energy 2005 Peak Oil study, in his book “The Impending World Energy Mess”:

“Fusion has been in the research stage since the 1950s….Fusion happens when fuels are heated to hundreds of millions of degrees long enough for more energy to be released than was used to create the heat. Containment of fusion fuels on the sun is by gravity. Since gravity is not usable for fusion on earth, researchers have used magnetic fields, electrostatic fields, and inertia to provide containment. Thus far, no magnetic or electrostatic fusion concept has demonstrated success.”  Hirsch thinks this will never work out and it’s been a waste of tens of billions of dollars.


William Parkins, formerly the chief scientist at Rockwell International, asks in the 10 March 2006 edition of Science, “Fusion Power: Will It Ever Come?”

When I read Parkins’s article and translated some of the measurements into units more familiar to me, it was obvious that fusion would never see the light of day:

  • Fusion requires heating D-T (deuterium-tritium) to a temperature of 180 million degrees Fahrenheit — 6.5 times hotter than the core of the sun.
  • So much heat is generated that the reactor vacuum vessel has to be at least 65 feet long, and no matter what the material, will need to be replaced periodically because the heat will make the reactor increasingly brittle as it undergoes radiation damage.  The vessel must retain vacuum integrity, requiring many connections for heat transfer and other systems.  Vacuum leaks are inevitable and could only be solved with remotely controlled equipment.
  • A major part of the cost of a fusion plant is the blanket-shield component. Its area equals that of the reactor vacuum vessel, about 4,500 square yards in a 1000 MWe plant.  The surrounding blanket-shield, made of expensive materials, would need to be at least 5.5 feet thick and weigh 10,000 metric tons, conservatively costing $1.8 billion.

Here are some of the other difficulties Parkins points out in this article:

The blanket-shield component “amounts to $1,800/kWe of rated capacity—more than nuclear fission reactor plants cost today. This does not include the vacuum vessel, magnetic field windings with their associated cryogenic system, and other systems for vacuum pumping, plasma heating, fueling, “ash” removal, and hydrogen isotope separation. Helium compressors, primary heat exchangers, and power conversion components would have to be housed outside of the steel containment building—required to prevent escape of radioactive tritium in the event of an accident. It will be at least twice the diameter of those common in nuclear plants because of the size of the fusion reactor.

Scaling of the construction costs from the Bechtel estimates suggests a total plant cost on the order of $15 billion, or $15,000/kWe of plant rating. At a plant factor of 0.8 and total annual charges of 17% against the capital investment, these capital charges alone would contribute 36 cents to the cost of generating each kilowatt hour. This is far outside the competitive price range.
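Parkins’s 36-cent figure follows directly from those three assumptions, as a quick check shows:

```python
# Reproduce Parkins's capital-charge estimate for a 1,000-MWe fusion plant.
capital_per_kwe = 15_000   # plant cost, $/kWe of rated capacity
annual_charge_rate = 0.17  # total annual charges against the capital investment
plant_factor = 0.8         # fraction of the year the plant runs at full output

kwh_per_kwe_year = plant_factor * 8760             # 7,008 kWh per kWe-year
cents_per_kwh = 100 * capital_per_kwe * annual_charge_rate / kwh_per_kwe_year

print(f"capital charges alone: {cents_per_kwh:.0f} cents/kWh")  # ~36
```

For comparison, US retail electricity prices at the time were roughly a tenth of that, which is why Parkins calls the figure far outside the competitive range.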

The history of this dream is as expensive as it is discouraging. Over the past half-century, fusion appropriations in the U.S. federal budget alone have run at about a quarter-billion dollars a year. Lobbying by some members of the physics community has resulted in a concentration of work at a few major projects—the Tokamak Fusion Test Reactor at Princeton, the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory, and the International Thermonuclear Experimental Reactor (ITER), the multinational facility now scheduled to be constructed in France after prolonged negotiation. NIF is years behind schedule and greatly over budget; it has poor political prospects, and the requirement for waiting between laser shots makes it a doubtful source for reliable power.

Even if a practical means of generating a sustained, net power-producing fusion reaction were found, prospects of excessive plant cost per unit of electric output, requirement for reactor vessel replacement, and need for remote maintenance for ensuring vessel vacuum integrity lie ahead. What executive would invest in a fusion power plant if faced with any one of these obstacles? It’s time to sell fusion for physics, not power”.

Former House of Representatives Congressman Roscoe Bartlett (R-MD), head of the “Peak Oil Caucus”:

“…hoping to solve our energy problems with fusion is a bit like you or me hoping to solve our personal financial problems by winning the lottery. That would be real nice. I think the odds are somewhere near the same. I am about as likely to win the lottery as we are to come to economically feasible fusion.”

Bartlett’s full speech to Congress:

National Academy of Sciences. 2013. An Assessment of the Prospects for Inertial Fusion Energy

The 3 principal research efforts in the USA are all trying to implode fusion fuel pellets by: (1) lasers, including solid state lasers at the Lawrence Livermore National Laboratory’s (LLNL’s) NIF and the University of Rochester’s Laboratory for Laser Energetics (LLE), as well as the krypton fluoride gas lasers at the Naval Research Laboratory; (2) particle beams, being explored by a consortium of laboratories led by the Lawrence Berkeley National Laboratory (LBNL); and (3) pulsed magnetic fields, being explored on the Z machine at Sandia National Laboratories. The minimum technical accomplishment that would give confidence that commercial fusion may be feasible—the ignition of a fuel pellet in the laboratory—has not been achieved.

This 247-page report is chock-full of the problems that fusion must overcome – not just technical ones but also funding: billions of dollars would be needed in the unlikely event that any of the various flavors of fusion makes enough progress to scale up.  If you ever wanted to know the minutiae of why fusion will never work, this is a great document to read – if you can understand it, that is.  I spent about 10 minutes grabbing just a few of the hundreds of “challenges” that need to be overcome:

  • Making a reliable, long-lived chamber is challenging since the charged particles, target debris, and X-rays will erode the wall surface and the neutrons will embrittle and weaken the solid materials.
  • Unless the initial layer surfaces are very smooth (i.e., perturbations are smaller than about 20 nm), short-wavelength (wavelength comparable to shell thickness) perturbations can grow rapidly and destroy the compressing shell. Similarly, near the end of the implosion, such instabilities can mix colder material into the hot spot that must be heated to ignition. If too much cold material is injected into the hot spot, ignition will not occur. Most of the fuel must be compressed to high density, approximately 1,000 to 4,000 times solid density.
  • To initiate fusion, the deuterium and tritium fuel must be heated to over 50 million degrees and held together long enough for the reactions to take place. Drivers must deliver very uniform ablation; otherwise the target is compressed asymmetrically. If the compression of the target is insufficient, the fusion reaction rate is too slow and the target disassembles before the reactions take place. Asymmetric compression excites strong Rayleigh-Taylor instabilities that spoil compression and mix dense cold plasma with the less dense hot spot. Preheating of the target can also spoil compression. For example, mistimed driver pulses can shock heat the target before compression. Also, interaction of the driver with the surrounding plasma can create fast electrons that penetrate and preheat the target.
  • The technology for the reactor chambers, including heat exhaust and management of tritium, involves difficult and complicated issues with multiple, frequently competing goals and requirements.  Understanding the performance at the level of subsystems such as a breeding blanket and tritium management, and integrating these complex subsystems into a robust and self-consistent design will be very challenging.
  • Avoiding frequent replacement of components that are difficult to access and replace will be important to achieving high availability. Such components will need to achieve a very high level of operational reliability.
  • Experimental investigations of the fast-ignition concept are challenging and involve extremely high-energy-density physics: ultraintense lasers (>10^19 W/cm^2); pressures in excess of 1 Gbar; magnetic fields in excess of 100 MG; and electric fields in excess of 10^12 V/m. Addressing the sheer complexity and scale of the problem inherently requires high-energy and high-power laser facilities.


Biello, David.  June 2014. A Milestone on the Long and Winding Road to Fusion.  Scientific American.

Chang, Ken. Mar 18, 2014. Machinery of an Energy Dream. New York Times.

Clery, D. 28 February 2014. New Review Slams Fusion Project’s Management. Science: Vol. 343 no. 6174 pp. 957-8.

Hinkel, D.*, Springer, P.*, Standen, A., Krasny, M. Feb 13, 2014. Bay Area Scientists Make Breakthrough on Nuclear Fusion. Forum. (*) Scientists at Lawrence Livermore National Laboratory.

Moyer, M. March/April 2010. Fusion’s False Dawn. Scientific American.

Perlman, David. Feb 13, 2014. Livermore Lab’s fusion energy tests get closer to ‘ignition’. San Francisco Chronicle.


Natural Gas-to-Liquids (GTL) as a Drop-in Diesel fuel

[The reason GTL is a big deal is that it can substitute for diesel without having to modify a diesel engine to do so — therefore, it’s a “drop-in fuel” substitute.

However, GTL isn’t likely to be the answer, since it’s far more “economically attractive” to use natural gas to produce electricity and Liquefied Natural Gas (LNG), according to the U.S. Energy Information Administration.

Once oil begins to decline, a substitute for diesel fuel must be found so that the billions of diesel combustion engines in trucks, trains, ships, and agricultural tractors, harvesters, and the like can continue to operate, since they can’t run on diesohol or ethanol, and can at most use 5 to 20% biodiesel (though there are some truck diesel engines now warrantied for 100% biodiesel).  Yet even if all diesel engines were warrantied for 100% biodiesel, there could never be enough biodiesel made to be the sole provider of freight vehicle fuel, not even if all oilseeds (soybeans, etc.) now being used for food were also converted to biodiesel.

Currently there are only 5 GTL plants in the world, producing from 2,700 to 140,000 barrels/day.  Another is under construction, and 3 are proposed in the USA (only 1 large-scale).

There are 2 articles below. The 1st one explains the GTL process and the 2nd looks at the largest GTL plant in the world in Qatar. 

In the USA, a GTL plant can only be profitable if it maximizes wax production for the chemicals market rather than making diesel and other fuels.

Alice Friedemann]

Gas-to-liquids plants face challenges in the U.S. market

February 19, 2014. United States Energy Information Administration (EIA)

The most common GTL technique to convert natural gas to diesel and other liquid fuels (and waxes) is Fischer-Tropsch (F-T) synthesis.

F-T synthesis has been around for nearly a century. It is very expensive, but it has lately attracted interest because of the growing spread between the value of petroleum products and the cost of natural gas.

The first step is to convert natural gas into a mixture of hydrogen, carbon dioxide, and carbon monoxide (syngas), and then remove sulfur, water, and carbon dioxide to prevent catalyst contamination. The F-T reaction combines hydrogen with carbon monoxide to form various liquid hydrocarbons. These liquid products are then further processed using different refining technologies into liquid fuels.

The F-T reaction typically happens at high pressure (40 atmospheres) and temperature (500°–840°F) in the presence of an iron catalyst. The cost of building a reaction vessel to produce the required volume of fuel or products and to withstand these temperatures and pressures can be considerable ($18–19 billion for the Qatar facility).

[Diagram of the GTL process, as explained in the article text. Source: U.S. Energy Information Administration]

There are currently five GTL plants operating globally, with capacities ranging from 2,700 barrels per day (bbl/d) to 140,000 bbl/d: two in Malaysia, two in Qatar, and one in South Africa. One plant in Nigeria is currently under construction. Three plants are proposed in the United States, only one of them a large-scale GTL plant. In December 2013, Shell cancelled plans to build a large-scale GTL facility in Louisiana because of high capital costs. The Annual Energy Outlook 2014 does not include any large-scale GTL facilities in the United States through 2040. Other uses for available natural gas in industry, electric power generation, and exports of pipeline and liquefied natural gas are more economically attractive than GTL.

To improve the profitability of GTL plants, developers have reconfigured their designs to include the production of waxes and lubricating products. Because of the smaller size of the chemical market, smaller-scale GTL plants similar to those proposed in the Midwest are economically viable. F-T waxes are used in industries producing candles, paints and coatings, resins, plastic, synthetic rubber, tires, and other products.

High Costs Slow Quest For Ultraclean Diesel

February 23, 2007, by Russell Gold. Wall Street Journal 

[Although this article was published in 2007, it’s still true in 2014.

Updates: In 2012 the Shell Qatar plant reached full production of 140,000 GTL barrels/day (b/d), a drop in the bucket compared with the 90,000,000 b/d produced worldwide. Over its lifetime it will produce 3 billion barrels of oil equivalent (half GTL, half other products), roughly one month of world oil production.]
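Those scale comparisons are easy to verify (140,000 b/d of capacity and 3 billion barrels of lifetime output are the figures above; 90 million b/d is world production):

```python
# Put the Qatar GTL plant's output in world-production terms.
gtl_bpd = 140_000        # plant capacity, barrels/day
world_bpd = 90_000_000   # world oil production, barrels/day
lifetime_boe = 3e9       # lifetime output, barrels of oil equivalent

share = gtl_bpd / world_bpd       # plant's share of daily world output
days = lifetime_boe / world_bpd   # lifetime output in days of world supply

print(f"{share:.2%} of world production")                    # ~0.16%
print(f"lifetime output = {days:.0f} days of world supply")  # ~33 days
```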

The rush to build a new industry that turns natural gas into a transportation fuel is stumbling over rising costs, showing how tough it is for emerging fuels to compete with crude oil.

This past week, Exxon Mobil Corp. backed out of plans to build an enormous gas-to-liquids, or GTL, plant in Qatar. Yesterday, Royal Dutch Shell PLC broke ground on its own similarly sized GTL plant in Qatar, but said the cost might have tripled to as high as $18 billion.

  • The Hope: Energy companies have been investing in a potential petroleum substitute that turns natural gas into a liquid fuel.
  • The Problem: Gas-to-liquids projects have surged in cost due to overall oil-patch inflation.
  • The Result: Exxon Mobil this week joined other companies putting GTL projects on hold.

Escalating budgets are threatening to constrain the growth of the GTL industry, which produces a clear liquid that can run existing diesel engines without any of the sooty pollutants associated with diesel. The rising costs of steel, engineering and labor have led to steep inflation among major energy projects world-wide, underscoring how the rush to find new fuel sources is driving up the cost of developing them.

Higher costs have hit companies developing Canada’s oil sands, where crude is packed into tar-like deposits. Prices for corn and other crops have risen, in part, because of the U.S.’s increasing interest in ethanol.

Other than Shell’s Qatar facility, the only other GTL plant under construction is also facing cost pressures. Last year, Halliburton Co. took a charge to earnings because of delays and cost increases for the plant its KBR Inc. unit is building in Nigeria for Chevron Corp. and Sasol Ltd.

Exxon officials wouldn’t say whether rising costs were the main factor in the decision to drop plans for the Qatar GTL plant. “Deciding not to progress with GTL is in line with our investment approach, which is very disciplined,” said Exxon spokeswoman Jeanne Miller.

Other GTL proposals, including projects led by Marathon Oil Corp. and ConocoPhillips, were put on hold in recent years.

Exxon’s decision is likely to cause other companies to rethink their commitment to GTL. Bernard Picchi, an energy analyst for research and trading firm Wall Street Access, who keeps close tabs on GTL, said he expects other GTL hopefuls “to take a timeout, a deep breath and re-evaluate the cost and technology.”

Using technology developed in Nazi Germany, the process of turning natural gas into liquids had long been too expensive to be commercial. Small-scale GTL plants in Malaysia, operated by Shell, and South Africa, by Sasol, have been in operation for years. Several years ago, the Middle Eastern nation of Qatar decided to encourage large projects to turn its natural-gas resources into an exportable liquid fuel. The scale of the facilities, as well as rising oil costs, was expected to make the GTL fuel competitive. The Exxon and Shell projects, alongside a project by Sasol, were set to generate more than 300,000 barrels a day of the fuel.

Qatar hoped the plants could help GTL put a dent in crude oil’s near-monopoly on the world’s largest energy market — powering the world’s vehicles — by creating an alternative fuel.

Shell Chief Executive Jeroen van der Veer, flanked by Qatar’s energy minister and Prince Charles of Britain, said yesterday that Shell had an advantage over other competitors because of the GTL plant in Malaysia it has operated since 1993. “For us, GTL is proven technology,” he told reporters in Qatar, according to Reuters. He said the project remained inside its development-cost estimates of $4 to $6 per oil-equivalent barrel of production over a period of time.

Based on that, total project costs have been pegged as high as $18 billion based on estimated lifetime output of about three billion barrels of oil equivalent. A Shell spokesman said that is comparable to other big exploration and production projects it undertakes.

At the same time as announcing the end of its GTL project, Exxon said it had been selected by Qatar Petroleum to participate in a project to tap offshore natural gas for the industrial and power sector. The project, in which Exxon will own a 10% stake, will deliver 1.5 billion cubic feet of gas a day by 2012.



Weißbach, D., et al. April 2013. Energy intensities, EROIs, and energy payback times of electricity generating power plants. Energy. Vol 52: 1, 210–221

Producing natural gas from maize, so-called biogas, is energetically expensive due to the large electricity needs of the fermentation plants, on top of agriculture’s energy demand for fertilizers and machinery.

Biogas-fired plants, even though they need no buffering, have the problem of enormous fuel provisioning effort which brings them clearly below the economic limit with no potential of improvements in reach.

Smil, Vaclav. 2010. Energy Myths and Realities: Bringing Science to the Energy Policy Debate. AEI Press.

Smil has this to say about China before modernization:

“…biogas digesters were unable to produce enough fuel to cook rice three times a day, still less every day for four seasons.

The reasons were obvious to anyone familiar with the complexities of bacterial processes. Biogas generation, simple in principle, is a fairly demanding process to manage in practice.  Here are some of the pitfalls:

  1. The slightest leakage will destroy the anaerobic conditions required by methanogenic bacteria.
  2. Low temperatures (below 20°C), improper feedstock addition, poor mixing practices, or shortages of appropriate substrates will result in low (or no) fermentation rates.
  3. Undesirable carbon-to-nitrogen ratios and pH.
  4. Formation of heavy scum.

Unless it is assiduously managed, a biogas digester can rapidly turn into an expensive waste pit, which—unless emptied and properly restarted—will have to be abandoned, as millions were in China. Even widespread fermentation would have provided no more than 10 percent of rural household energy use during the early 1980s, and once the privatization of farming got underway, most of the small family digesters were abandoned.

More than half of humanity is now living in cities, and an increasing share inhabits megacities from São Paulo to Bangkok, from Cairo to Chongqing, and megalopolises, or conglomerates of megacities. How can these combinations of high population, transportation, and industrial density be powered by small-scale, decentralized, soft-energy conversions? How can the fuel for vehicles moving along eight- or twelve-lane highways be derived from crops grown locally?

How can the massive factories producing microchips or electronic gadgets for the entire planet be energized by attached biogas digesters or by tree-derived methanol? And while some small-scale renewable conversions can be truly helpful to a poor rural household or to a small village, they cannot support such basic, modern, energy-efficient industries as iron and steel making, nitrogen fertilizer synthesis by the Haber-Bosch process, and cement production.”



Wind power will not save us

SCALE.  Too many windmills need to be built to replace oil.

Worldwide, 32,850 wind turbines with 70-to-100-meter blades, each generating 1.65 MW, would have to be built every year for the next 50 years (1,642,500 in total) to replace the oil we burn in one year, at a cost of $3.3 trillion and covering 4,000 square miles (Cubic Mile of Oil).
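The totals follow from simple multiplication; a quick sketch using only the figures quoted (the per-turbine cost is implied by the totals, not stated in the source):

```python
turbines_per_year = 32_850
years = 50
total_turbines = turbines_per_year * years
print(f"{total_turbines:,} turbines")          # 1,642,500

total_cost = 3.3e12                            # dollars
cost_per_turbine = total_cost / total_turbines
print(f"${cost_per_turbine / 1e6:.1f} million per 1.65 MW turbine")  # ~$2.0 million
```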

The DOE estimates there are 18,000 square miles of good wind sites in the USA, which could produce 20% of America’s electricity in total. This would require over 140,000 1.5 MW towers costing at least $300 billion, and innumerable natural gas peaking plants to balance the load when the wind isn’t blowing. Despite all the happy fracking, natural gas is a limited resource, and as it is substituted for coal more and more, roughly around 2018 to 2025 (depending on the economy, how many natural gas burning trucks and cars are created, etc.), the fracking boom will end rather abruptly in many areas, and the price of natural gas will skyrocket.

Or consider just the wind power needed to replace offshore oil in the Gulf of Mexico:

At 5.8 million Btu of heat value in a barrel of oil and 3,412 Btu in a kWh, 1.7 million barrels per day of Gulf oil equals 2.9 billion kWh per day, or 1,059 billion kWh a year. Yet total 2008 wind generation was 14.23 billion kWh in Texas and 5.42 billion kWh in California. That means you’d need 195 Californias, or 74 Texases, of wind, and 20 years to build it (Nelder).
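Nelder’s arithmetic can be reproduced step by step from the conversion factors given (a sketch only; the 1,059 billion kWh/year figure comes from rounding the daily number to 2.9 billion before multiplying):

```python
BTU_PER_BARREL = 5.8e6       # heat content of a barrel of oil, Btu
BTU_PER_KWH = 3412           # Btu per kilowatt-hour

gulf_bpd = 1.7e6             # Gulf of Mexico production, barrels/day
kwh_per_day = gulf_bpd * BTU_PER_BARREL / BTU_PER_KWH
kwh_per_year = kwh_per_day * 365

print(f"{kwh_per_day / 1e9:.1f} billion kWh/day")     # 2.9
print(f"{kwh_per_year / 1e9:,.0f} billion kWh/year")  # ~1,055

tx_wind_2008 = 14.23e9       # Texas wind generation, kWh
ca_wind_2008 = 5.42e9        # California wind generation, kWh
print(f"{kwh_per_year / ca_wind_2008:.0f} Californias of wind")  # 195
print(f"{kwh_per_year / tx_wind_2008:.0f} Texases of wind")      # 74
```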

Windmills are useless without The Grid

Just as oil doesn’t do much useful work when not burned within a combustion engine, wind needs a vast, interconnected grid.  The larger the grid, the more wind that can be added to it.  But we don’t have that infrastructure — indeed, what we do have now is falling apart due to deregulation of utilities, with no monetary rewards for any player to maintain or upgrade the grid.

Most of the really good, strong wind areas are so far from cities that they’re useless: the energy to build a grid extending to these regions would exceed the energy the wind would provide.

Sure, oil and natural gas require pipelines too, but they’re already in place, built back when the EROEI of oil was 100:1.

The Grid Can’t Handle any more Wind Power

Power struggle: Green energy versus a grid that’s not ready. Minders of a fragile national power grid say the rush to renewable energy might actually make it harder to keep the lights on. Evan Halper, Dec 2, 2013. Los Angeles Times.

The grid is built on an antiquated tangle of market rules, operational formulas and business models.  Planners are struggling to plot where and when to deploy solar panels, wind turbines and hydrogen fuel cells without knowing whether regulators will approve the transmission lines to support them.

Energy officials worry a lot these days about the stability of the massive patchwork of wires, substations and algorithms that keeps electricity flowing. They rattle off several scenarios that could lead to a collapse of the power grid — a well-executed cyberattack, a freak storm, sabotage.

But as states race to bring more wind, solar and geothermal power online, those and other forms of alternative energy have become a new source of anxiety. The problem is that renewable energy adds unprecedented levels of stress to a grid designed for the previous century.

Green energy is the least predictable kind. Nobody can say for certain when the wind will blow or the sun will shine. A field of solar panels might be cranking out huge amounts of energy one minute and a tiny amount the next if a thick cloud arrives. In many cases, renewable resources exist where transmission lines don’t.

“The grid was not built for renewables,” said Trieu Mai, senior analyst at the National Renewable Energy Laboratory.

The role of the grid is to keep the supply of power steady and predictable

Engineers carefully calibrate how much juice to feed into the system. The balancing requires painstaking precision. A momentary overload can crash the system.

The California Public Utilities Commission last month ordered large power companies to invest heavily in efforts to develop storage technologies that could bottle up wind and solar power, allowing the energy to be distributed more evenly over time.

Whether those technologies will ever be economically viable on a large scale is hotly debated.

Windmills are too dependent on oil, from mining and fabrication to delivery and maintenance and fail the test of “can they reproduce themselves with wind power?”

Manufacturing wind turbines is an energy- and resource-intensive process. A typical wind turbine contains more than 8,000 different components, many of them made from steel, cast iron, and concrete, which take so much energy (and emit so much greenhouse gas) to produce that it is highly unlikely wind saves any carbon dioxide.

On top of that, wind power usually replaces hydropower, which is already carbon dioxide free. When wind displaces coal or natural gas, those plants are ramped down or switched to standby, and they’re still burning fuel and emitting carbon dioxide in these modes. Ramping up and frequent restarting cause thermal generators to run less efficiently and to emit more carbon dioxide and other toxic materials. It’s hard to tell whether wind power saves carbon dioxide or generates extra once you look at the entire life cycle and the need for fossil fuel plants to kick in when the wind isn’t blowing.

Oil-based combustion engines are used from start to finish: to mine the material for the windmill, fabricate it, and deliver its components to the installation site; to make, deliver, and pour the enormous amount of concrete the windmill is embedded in; and to run the trenching machines and other equipment that connect windmills to the grid. Maintenance vehicles run on oil, as do the giant road-grading equipment and other machines used to build and maintain the concrete, asphalt, and dirt roads to windmills and the electric grid, the cars of windmill employees, and the entire supply chain that delivers over 8,000 windmill components from worldwide parts manufacturers via oil-burning trucks, trains, and ships.

Because wind and solar are intermittent, natural gas peaking plants must be built that can fire up quickly when the wind dies down or the sun isn’t shining.

As Nate Hagens, former editor of The Oil Drum, has said, wind turbines are fossil fuel extenders, using huge material and human resources. Building more stuff, when we are about to have less stuff, just digs the hole deeper.

Not only would windmills have to generate enough power to reproduce themselves, but they have to make enough power to run civilization.  And how are they going to reproduce themselves? Think of the energy to make the cement and steel of a 300 foot tower with three 150 foot rotor blades sweeping an acre of air at 100 miles per hour.  The turbine housing alone weighs over 56 tons, the blade assembly 36 tons, and the whole tower assembly is over 163 tons.  Florida Power & Light says a typical turbine site is 42 by 42 foot area with a 30 foot hole filled with tons of steel rebar-reinforced concrete –about 1,250 tons to hold the 300 foot tower in place (Rosenbloom).

Plus you’d have to electrify all transportation — that’s an awfully long electric cord out to Siberia or Outer Mongolia to the mining trucks gathering the ore to make new windmills.

Supply Chain Failure and limited supply of Rare Metals

Rare metals are, well, RARE. We might run out of them sooner than oil, either geologically or politically, since 95% of them are mined in China, and they will certainly run out, or see extraction decline, as oil supplies continue to decline.

Wind turbines depend on neodymium and dysprosium.

Estimates of the exact amount of rare earth minerals in wind turbines vary, but in any case the numbers are staggering. According to the Bulletin of the Atomic Scientists, a 2 megawatt (MW) wind turbine contains about 800 pounds of neodymium and 130 pounds of dysprosium. An MIT study estimates that a 2 MW wind turbine contains about 752 pounds of rare earth minerals.

Tremendous environmental damage from mining material for windmills

Mining 1 ton of rare earth minerals produces about 1 ton of radioactive waste, according to the Institute for the Analysis of Global Security. In 2012, the U.S. added a record 13,131 MW of wind generating capacity. That means that between 4.9 million pounds (using MIT’s estimate) and 6.1 million pounds (using the Bulletin of the Atomic Scientists’ estimate) of rare earths were used in wind turbines installed in 2012. It also means that between 4.9 million and 6.1 million pounds of radioactive waste were created to make these wind turbines, more than America’s nuclear industry, which produces between 4.4 million and 5 million pounds of spent nuclear fuel each year.
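The 4.9-to-6.1-million-pound range works out directly from the per-turbine estimates cited above:

```python
mw_added_2012 = 13_131   # US wind capacity added in 2012, MW
turbine_mw = 2           # size of the reference turbine in both estimates

lb_mit = 752             # MIT estimate: lb of rare earths per 2 MW turbine
lb_bulletin = 800 + 130  # Bulletin estimate: neodymium + dysprosium, lb

equivalent_turbines = mw_added_2012 / turbine_mw
print(f"{equivalent_turbines * lb_mit / 1e6:.1f} million lb (MIT)")            # 4.9
print(f"{equivalent_turbines * lb_bulletin / 1e6:.1f} million lb (Bulletin)")  # 6.1
```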

Yet nuclear energy comprised about one-fifth of America’s electrical generation in 2012, while wind accounted for just 3.5 percent of all electricity generated in the United States.

Not only do rare earths create radioactive waste residue, but according to the Chinese Society for Rare Earths, “one ton of calcined rare earth ore generates 9,600 to 12,000 cubic meters (339,021 to 423,776 cubic feet) of waste gas containing dust concentrate, hydrofluoric acid, sulfur dioxide, and sulfuric acid, [and] approximately 75 cubic meters (2,649 cubic feet) of acidic wastewater.”

Not enough time to scale wind up

Like solar, wind accounts for only a tiny fraction of renewable energy consumption in the United States, about a tenth of one percent, and will be hard to scale up in the short time left. EIA. June 2006. Renewable Energy Annual.

Not enough materials to scale wind up

To build 3 TW of wind power you’d need double the world’s current steel production, half the world’s copper production, 30 times the world’s fiber glass production, and almost half of the world’s coal (Prieto).

To scale wind up to provide 20% of America’s electricity by 2030, you’d need 19,300 square miles of windmills that would use this much raw material:


  1. Concrete: 129,000,000
  2. Steel: 9,060,000
  3. Aluminum: 103,600
  4. Copper: 74,400
  5. Glass-reinforced plastic: 574,000

Source: Wiley, 2007

On top of that, you’d need to add thousands of Natural Gas Combustion Turbines to kick in when the wind died down, and the electric grid requires an enormous expansion, including 19,000 miles of new high-voltage transmission corridors (NAS 2009).

Most of wind will never be captured

Windmills can only capture a fraction of the wind blowing: too little or too much wind and the windmill shuts down.

Most of the really good wind is in remote locations far from the grid, and can’t be connected.

The wind above a windmill can’t be captured; you can’t harvest the wind from the ground up to a mile high.

Proposed windmill “kites” that extract wind from the jet stream sound wonderful; just 1% could supply all of our energy. But how do you harvest a hurricane? The winds up there can blow 125 mph, seven miles high with the jets; surely that can’t be easy to pull off.

Much of the time, over an entire region, there is no wind blowing at all, a huge problem for balancing the electricity on the grid, which has to be kept within a narrow range (about 10% of the electricity on the grid is never delivered to a customer, it’s there to balance the flow so that surges don’t cause blackouts leading to the loss of power for millions of people).

Globally we use energy at an average rate of about 12 terawatts. There are 85 terawatts of wind power, but most of it is over the deep ocean, or miles overhead, where we are unlikely ever to capture it. A giant windmill perched on a giant boat will have used more energy in its construction than it will ever generate before its vast mass of energy-intensive steel and other material rusts or sinks in violent storms. Plus you need to string cables from the windmill or ship to land.

The maximum extractable energy from high jet stream wind is 200 times less than reported previously, and trying to extract them would profoundly impact the entire climate system of the planet.  If we tried to extract the maximum possible 7.5 TW from the jet stream, “the atmosphere would generate 40 times less wind energy than what we would gain from the wind turbines, resulting in drastic changes in temperature and weather” according to Lee Miller, the author of the study (Miller).

Carlos De Castro, a professor of Applied Physics, estimates that 1 terawatt is the upper limit of the electrical potential of wind energy. This value is much lower than previous estimates (De Castro).

Betz’s law means you can never harvest more than 59% of the kinetic energy in the wind passing through a turbine, no matter how well you build a windmill.
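Betz’s limit is not an engineering shortfall but a consequence of fluid mechanics: for an ideal rotor, the power coefficient is Cp(a) = 4a(1-a)², where a is the fraction by which the turbine slows the incoming air, and the maximum is 16/27 ≈ 59.3%, reached at a = 1/3. A brief numerical check:

```python
def cp(a):
    """Ideal power coefficient of a wind turbine with axial induction factor a."""
    return 4 * a * (1 - a) ** 2

# Scan induction factors from 0 to 1 and find the maximum power coefficient.
best_a = max(range(1001), key=lambda i: cp(i / 1000)) / 1000
print(best_a, round(cp(best_a), 3))   # 0.333 0.593 -- the Betz limit, 16/27
```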

And another huge part of the wind is above the windmills on land.  So you can really only capture a very small part of the wind that’s blowing.

You also have to space the windmills far apart, because on the other side of a windmill that has just “captured” wind, there’s no wind left (Hayden).

For example, if the best possible wind strip along the coast between San Francisco and LA were covered with the maximum possible number of windmills (an area about 300 miles long by one mile deep) you’d get enough wind, when it was blowing, to replace only one of the dozens of power plants in California (Hayden).

A wind farm takes up 30 to 200 times the space of a gas plant (Paul Gipe, Wind Energy Comes of Age, p. 396). A 50 megawatt wind farm can take up anywhere from two to twenty-five square miles (Proceedings of National Avian-Wind Power Planning Meeting, p. 11).

Wind is only strong enough to justify windmills in a few regions

The wind needs to be at least force level 4 (13-18 mph) for as much of the year as possible to make a site economically viable. This means that a great deal of land is not practical for the purpose. The land that is most suitable already has windmills, or is too far from the grid to be connected.

A Class 3 windmill farm needs double the number of generators to produce the same amount of energy as windmills in a class 6 field (Prieto).

The 1997 US EIA/DOE study (2002) came to the remarkable conclusion that “…many non-technical wind cost adjustment factors … result in economically viable wind power sites on only 1% of the area which is otherwise technically available…”

The electric grid needs to be much larger than it is now to make wind feasible

Without a vastly expanded grid to balance the unpredictability of wind over a large area, wind can’t provide a significant portion of electrical generation.  But expanding the grid to the proper size would not only cost trillions of dollars and years we don’t have, now that we’re at peak, we’d have to ruin many national parks, wilderness areas, and other natural areas to install them.

And then, after the oil was gone, and there was no way to replace or maintain windmills, they’d sit there, our version of Easter Island heads, of absolutely no use to future generations, not even for hanging laundry.

Much of the land in the USA (the areas where there’s lots of wind) is quite far from population centers. And when you hook windmills to the grid, you lose quite a bit of energy over transmission lines, especially since most of the wind is far from cities.   It also takes a lot of energy to build and maintain the electric grid infrastructure itself. Remote wind sites often result in construction of additional transmission lines, estimated to cost as much as $300,000-$1 million per mile. (Energy Choices in a Competitive Era, Center for Energy and Economic Development Study, 1995 Study, p. 14). The economics of transmission are poor because while the line must be sized at peak output, wind’s low capacity factor ensures significant under-utilization.

Wind blows the strongest when customer demand is the weakest

In Denmark, where some of the world’s largest wind farms exist, wind blows the hardest when consumer demand is the lowest, so Denmark ends up selling its extra electricity to other countries for pennies, and then when demand is up, buys electricity back at much higher prices.  Denmark’s citizens pay some of the highest electricity rates on earth (Castelvecchi).

In Texas and California, wind and solar are too erratic to provide more than 20% of a region’s total energy capacity, because it’s too difficult to balance supply and demand beyond that amount.

Wind varies greatly depending on the weather. Often it hardly blows at all during some seasons. In California, we need electricity the most in summer, when peak loads are reached, but that’s the season the least wind blows. On our hottest days, wind capacity factors drop to as low as 0.02 at peak electric demand. At a time when the system most needs reliable base load capacity, wind base capacity is unavailable.

Wind is unreliable, requiring expensive natural gas peaking plants (rarely included in EROEI of wind and solar)

According to E.ON Netz, one of the four grid operators in Germany, for every 10 MW of wind power added to the system, at least 8 MW of back-up power must also be dedicated. So you’re not saving on fossil fuels, and often have to ADD fossil fuel plants to make up for the wind power when the wind isn’t blowing!

In other words, wind needs 100% back-up of its maximum output.

The first chart is the “Mona Lisa” of wind unreliability, measured at one of California’s largest wind farms. The second is from the California Independent System Operator, showing how wind power tends to be low when power demand is high (and vice-versa). Wind should play an important role, but unless there is a high-voltage, high-capacity, high-density grid to accompany it (as in Northern Europe), or electricity storage, the variability of wind means that co-located natural gas peaking plants are needed as well. The cost of such natural gas plants is rarely factored into the all-in costs of wind (Cembalest).

Wind surges, dies, stops, starts, so it has to be modulated in order to be usable by power companies, and ultimately, homes and businesses. This modulation means that the power grid can only use a maximum of 10% of its power from wind, or the network becomes too unstable and uncontrollable. Because of this problem,  windmills are built to capture wind only at certain speeds, so when the wind is light or too strong, power is not generated.

For example, California wind power operated at only 23 percent realized average capacity in 1994 (California Energy Commission, Wind Project Performance: 1994).

No way to store wind energy

We don’t have EROEI-positive batteries, compressed air, or enough pumped water dams to store wind energy and concentrate it enough to do useful work and generate power when the wind isn’t blowing.  There are no storage methods that can return the same amount of energy put into them, so having to store energy reduces the amount of energy returned.  Compressed air storage is inefficient because “air heats up when it is compressed and gets cold when it is allowed to expand.  That means some of the energy that goes into compression is lost as waste heat.  And if the air is simply let out, it can get so cold that it freezes everything it touches, including industrial-strength turbines.  PowerSouth and E.ON burn natural gas to create a hot gas stream that warms the cold air as it expands into the turbines, reducing overall energy efficiency and releasing carbon dioxide, which undermines some of the benefits of wind power” (Castelvecchi).

Wind Power surges harm industrial customers

Japan’s biggest wind power supplier may scrap a plan to build turbines on the northern island of Hokkaido after the regional utility cut proposed electricity purchases, blaming unreliable supply. Power surges can be a problem for industrial customers, said Hirotaka Hayashi, a spokesman at Hokkaido Electric. Utilities often need to cut back power generation at other plants to lessen the effect of excess power from wind energy.

“Continental European countries such as Germany and Denmark can transfer excess power from windmills to other countries,” said Arakawa. “The electricity networks of Japan’s 10 utilities aren’t connected like those in Europe. That’s the reason why it’s difficult to install windmills in Japan.”

To ensure steady supply, Tohoku Electric Power Co., Japan’s fourth-biggest generator, in March started requiring owners of new windmills to store energy in batteries before distribution rather than send the electricity direct to the utility, said spokesman Satoshi Arakawa. That requirement has increased wind project installation costs to 300,000 yen ($2,560) per kilowatt, from 200,000 yen, according to Toshiro Ito, vice president of EcoPower Co., Japan’s third-biggest wind power supplier (Takemoto).

Energy returned on Energy Invested is negative

Wind farms require vast amounts of steel and concrete, which in terms of mining, fabrication, and transportation to the site represent a huge amount of fossil fuel energy. The Zond 40-45 megawatt wind farm is composed of 150 wind turbines weighing 35 tons each — over 10 million pounds.

The 5,700 turbines installed in the United States in 2009 used 36,000 miles of steel rebar and 1.7 million cubic yards of concrete (enough to pave a four-foot-wide, 7,630-mile-long sidewalk). The gearbox of a 2-megawatt wind turbine contains 800 pounds of neodymium and 130 pounds of dysprosium, rare earth metals found in low-grade, hard-to-find deposits that are very expensive to mine and refine (American Wind Energy Association).

Materials like carbon fiber that would make blades more efficient cost several times more and use up a great deal more fossil fuel energy to fabricate than fiberglass.

Everything from mining the metals to make windmills, through their fabrication, delivery, and operation, to their maintenance, is heavily dependent on fossil fuel energy and fossil-fuel-driven machinery. Wind energy at best could increase the amount of energy generated while fossil fuels last, but it is too dependent on them to outlast the oil age.

After a few years, maintenance costs skyrocket.  The larger the windmill, the more complex maintenance is needed, yet the larger the windmill, the more wind can be captured.

Offshore Wind Farms likely to be destroyed by Hurricanes

The U.S. Department of Energy has estimated that if the United States is to generate 20% of its electricity from wind, over 50 GW will be required from shallow offshore turbines. Hurricanes are a potential risk to these turbines. Turbine tower buckling has been observed in typhoons, but no offshore wind turbines have yet been built in the United States.  In the most vulnerable areas now being actively considered by developers, nearly half the turbines in a farm are likely to be destroyed in a 20-year period.  (Rose).

Source: Rose, S. 2 June 2011. Quantifying the Hurricane Risk to Offshore Wind Turbines.  Carnegie Mellon University.

Offshore Windmills have other problems

Offshore windmills are battered by waves and wind, and ice is also a huge problem.

Offshore windmills need to be in water 60 meters deep or less; 15 meters or less is ideal economically, and also makes the windmills less susceptible to large waves and wind damage. But many states along the west coast don’t have shallow shelves where windmills can be built. California’s best wind, by far, is offshore, but the water is far too deep for windmills, and the best wind is in the northern part of the state, too far away to be connected to the grid.

Offshore windmills are a hazard to navigation of freighters and other ships.

The states that have by far the best wind resources and shallow depths offshore are North Carolina, Louisiana, and Texas, but they have 5 or more times the occurrence of hurricanes.

As climate change leads to rising sea levels over the next thousand years, windmills will be rendered useless.

Offshore windmills could conflict with other uses:

  1. Ship navigation
  2. Aquaculture
  3. Fisheries and subsistence fishing
  4. Boating, scuba diving, and surfing
  5. Sand and gravel extraction
  6. Oil and gas infrastructure
  7. Competition with potential wave energy devices

Offshore wind parks will affect sediment transport, potentially clogging navigation channels, causing erosion, depositing sediment on recreational areas, affecting shoreline vegetation, scouring sediments (with loss of habitat for benthic communities), and damaging existing seabed infrastructure.

Building windmills offshore can lead to chemical contaminants, smothering, suspended sediments, turbidity, substratum loss, scouring, bird strikes, and noise.

There is a potential for offshore wind farms to interfere with telecommunications, FAA radar systems, and marine communications (VHF [very high frequency] radio and radar).

Land use changes.  The windfarm offshore must be connected to the grid onshore, and roads are needed to set up onshore substations and transmission lines, plus industrial sites and ports to construct, operate, and decommission the windmills.  Roadways may need to be quite large to transport the enormous components of a windmill (Michel).

Operating and Maintenance costs too expensive

Offshore windmills will be subject to a tremendous amount of corrosion from the salt water and air.

Windmills are battered year round by hail storms, strong winds, blizzards, and temperature extremes from below freezing to hundred-degree heat in summer. Corrosion increases over time.

The same windmill can be beaten up unevenly, with the wind speed at the tip of one blade considerably stronger than the wind at the tip of the other.  This caused Suzlon blades to crack several years ago.

A windmill is only as strong as its weakest component, and the more components a windmill has, the more complex the maintenance.  Wind turbines are complex machines: each has around 7,000 or more components, according to Tom Maves, deputy director for manufacturing and supply chain at the American Wind Energy Association (Galbraith).

Maintenance costs start to rise after 2 years (it’s almost impossible to find out what these costs are from turbine makers). Vibration and corrosion damage the rotating blades, and the bearings, gear boxes, axles, and blades are subjected to high stresses.

Gearboxes can be the Achilles’ heel, costing up to $500,000 to fix due to the high cost of replacement parts, cranes (which can cost $75,000-$100,000), post-installation testing, re-commissioning, and lost power production.

If the electric grid were built up enough to balance the wind energy load better, maintaining windmills that break down in remote locations would take a huge amount of energy: trees must be kept cut back, and remote roads built and maintained, to deliver and service the turbine and grid infrastructure.

Large scale wind farms need to “overcome significant barriers”: costs overall are too high, and windmills in lower wind speed areas need to become more cost effective. Low wind speed areas are 20 times more common than high wind areas, and five times closer to the existing electrical distribution systems. Improvement is needed in integrating fluctuating wind power into the electrical grid with minimal impact on cost and reliability. Offshore wind facilities cost more to install, operate, and maintain than onshore windmills (NREL).

Windmills also wear out from ice storms, insect strikes, abrasion of the blades and structure by dust and sand, and so on.

We Can’t build large enough windmills

Useful energy increases with the square of the blade length, and there’s more wind the higher up you go, so ideally you’d build very tall wind towers with huge blades.  But conventional materials can’t handle these high wind conditions, and new, super-strong materials are too expensive.
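The scaling this paragraph describes can be sketched with the standard formula for power in a rotor’s swept area. The blade lengths, wind speeds, and power coefficient below are illustrative assumptions, not figures from the text:

```python
import math

def wind_power_watts(blade_length_m, wind_speed_ms, cp=0.4, air_density=1.225):
    """Power captured by a rotor: 1/2 * rho * A * v^3, scaled by an
    assumed power coefficient cp (the Betz limit caps cp below ~0.59)."""
    swept_area_m2 = math.pi * blade_length_m ** 2   # area grows with the SQUARE of blade length
    return 0.5 * air_density * swept_area_m2 * wind_speed_ms ** 3 * cp

# Doubling blade length quadruples power at the same wind speed:
print(wind_power_watts(80, 8) / wind_power_watts(40, 8))   # → 4.0

# And power grows with the CUBE of wind speed, which is why taller
# towers reaching faster wind are so attractive:
print(wind_power_watts(40, 10) / wind_power_watts(40, 8))  # → ~1.95
```

This is why the industry keeps pushing toward taller towers and longer blades despite the materials problem described above.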

As towers get to be 100 meters high and more, and blade length increases, shipping them gets challenging. Trucks carrying big towers and blades must sometimes move with police escorts and avoid certain overpasses or small roads (Galbraith).

Investment takes too long to pay back, if it ever does

There isn’t already a lot of wind power because investors aren’t willing to wait 20 to 30 years to get their money back when they can invest it in oil and natural gas drilling and get most of their money back in a few years.

Suckers who believe the wind proponents’ claimed EROEI of 20:1 don’t understand the factors left out, such as rare metals, natural gas peaking plants, grid infrastructure, maintenance costs, and so on.

Not In My Back Yard

There’s been a great deal of NIMBYism preventing windmills from being built so far. Some of the objections are visual blight, bird killing, noise, and erosion from service roads.

After 25 years of marriage, I still sometimes have to go downstairs to sleep when my husband snores too loudly, so I can imagine how annoying windmill noise might be.  And even more so after someone sent me a document entitled “Confidential issues report draft. Waubra & Other Victorian Wind Farm Noise Impact Assessments” that made the case that windmill noise affects the quality of life, disturbs sleep, and has adverse health effects. I especially liked the descriptions of possible noises: whooshes, rumble-thumps, whining, clunks, and swooshes.  Low frequency sounds can penetrate walls and windows and cause vibrations and pressure changes.  Many people affected would like a standard requiring windmill farms to be at least 2 kilometres away and not exceed a noise level of 35 dB(A) at any time outside neighboring dwellings.

Wind turbines depend on rare earth metals

Such as the neodymium used in turbine magnets.  Neodymium prices quadrupled this year, and that’s with wind still making up less than 3% of global electricity generation (Cembalest).

Environmental Impact

The environmental impact of mining the rare metals required for windmills makes their use questionable.  Inner Mongolia has large reserves of rare earth metals, especially neodymium, the element needed to make the magnets in wind turbines.  Its extraction has led to a 5-mile-wide poisonous tailings lake in northern China.  Nearby farmland for miles is now unproductive, and one of China’s key waterways is at risk. “This vast, hissing cauldron of chemicals is the dumping ground for seven million tons a year of mined rare earth after it has been doused in acid and chemicals and processed through red-hot furnaces to extract its components.  Rusting pipelines meander for miles from factories processing rare earths in Baotou out to the man-made lake where, mixed with water, the foul-smelling radioactive waste from this industrial process is pumped day after day” (Parry).

Local and Global Weather are affected

Scientists modeled the impact of a hypothetical large-scale wind farm in the Great Plains. Their conclusion, in the Journal of Geophysical Research, is that thousands of turbines concentrated in one area can affect local weather, creating warmer, drier conditions through the atmospheric mixing in the blades’ wake.  The warming and drying that occur when the upper air mass reaches the surface is a significant change, Dr. Baidya Roy said, and is similar to the kinds of local atmospheric changes that occur with large-scale deforestation (2 Nov 2004. Catch the Wind, Change the Weather. New York Times).

“We shouldn’t be surprised that extracting wind energy on a global scale is going to have a noticeable effect. … There is really no such thing as a free lunch,” said David Keith, a professor of energy and the environment at the University of Calgary and lead author of a report in the Proceedings of the National Academy of Sciences.

Specifically, if wind generation were expanded to the point where it produced 10% of today’s energy, the models predict cooling in the Arctic and warming across the southern parts of North America.

The exact mechanism for this is unclear, but the scientists believe it may have to do with the disruption of the flow of heat from the equator to the poles.


The Sierra Club in Maine is asking the Minerals Management Service to look at over a dozen aspects of offshore wind, including possible interference with known upwelling zones and/or important circulatory and current regimes that might influence the distribution or recruitment of marine species.

Wind affects the upwelling of nutrients and may be a key factor in booms and busts of the California sardine fishery and other marine species.

Ocean bottom

Installing offshore windmills requires excavation of the seafloor to create a level surface, and sinking the 250- to 350-ton foundations into the seabed. These foundations are very expensive to build, since they require scour protection from large stones, erosion control mats, and so on.

Potential Impacts to Currents and Tides

Wind turbine foundations can affect the flow velocity and direction and increase turbulence. These changes to currents can affect sediment transport, resulting in erosion or piles of sediments on nearby shorelines.  Modified currents also could change the distribution of salinity, nutrients, effluents, river outflows, and thermal stratification, in turn affecting fish and benthic habitats.    Changes to major ocean currents such as the Gulf Stream could affect areas well beyond the continental United States, affecting the climate of North America as well as other continents (Michel).

Lack of a skilled and technical workforce

Wind power officials see a much larger obstacle coming in the form of the industry’s own work force, a highly specialized group of technicians who combine working knowledge of mechanics, hydraulics, computers and meteorology with the willingness to climb 200 feet in the air in all kinds of weather (Twiddy).

Wind only produces electricity

We need liquid fuels for the immediate crisis at hand.

Wind has a low capacity Factor

In the very best windmill farms, the capacity factor is only 28 to 35%.

Wind turbines generate electrical energy when they are not shut down for maintenance, repair, or tours and the wind is between about 8 and 55 mph. Below a wind speed of around 30 mph, however, the amount of energy generated is very small.

A 100 MW rated wind farm is capable of producing 100 MW only during maximum peak winds.  Most of the time it will produce much less, or even no power at all when winds are light or not blowing.  In reality, 30 MW of power production or less is far more likely.  The ratio of what a wind farm actually produces to its rated capacity is called the CAPACITY FACTOR.

Quite often you will only hear that a new wind farm will generate 100 MW of power.  Ignore that and look for what the capacity factor is.

This makes a difference in how many homes are served. Per megawatt, a coal plant up 75% of the time provides enough power in the Northeast for 900 homes, while a wind plant up 30% of the time powers only 350 homes. The South has extremely voracious electricity consumers, so the numbers are much lower: 350 and 180 respectively.

Solar generators typically have a 25 percent capacity factor, because the generators do not produce electricity at night or on cloudy days.
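A minimal sketch of the arithmetic above. The average household load of 0.83 kW is my illustrative assumption, chosen so the coal number lands near the article’s 900 homes per megawatt:

```python
def average_output_mw(nameplate_mw, capacity_factor):
    """A plant's long-run average output is nameplate times capacity factor."""
    return nameplate_mw * capacity_factor

def homes_served(nameplate_mw, capacity_factor, avg_home_kw=0.83):
    """Homes served by the plant's average output (0.83 kW/home assumed)."""
    return int(average_output_mw(nameplate_mw, capacity_factor) * 1000 / avg_home_kw)

# A "100 MW" wind farm at a 30% capacity factor averages only 30 MW:
print(round(average_output_mw(100, 0.30), 1))  # → 30.0

# Per megawatt: coal at 75% uptime vs. wind at a 30% capacity factor
print(homes_served(1, 0.75))  # → 903 (the article rounds to 900)
print(homes_served(1, 0.30))  # → 361 (the article says ~350)
```

The point of the exercise: always divide the advertised nameplate rating by roughly three before comparing a wind farm to a conventional plant.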

Dead bugs and salt reduce wind power generation by 20 to 30%

Over time the build-up of dead insects and/or salt on offshore turbine blades reduces power by up to 30%.

Kite Windmills

There are several research groups looking at generating electricity using giant kites up in the jet stream. But it won’t be easy.  Jet streams move around and change their location, airplanes need to stay well away, and lightning and thunderstorms might require them to be brought down.

The strongest wind is 6 miles above us, where winds are typically 60 miles per hour.  Some scientists think there’s enough wind to generate 100 times current global energy demand.

But Axel Kleidon and Lee Miller of the Max Planck Institute for Biogeochemistry believe that’s a massive overestimate of the amount of energy that could be obtained. If they’re right that jet stream wind results from a lack of friction, then at most 7.5 TW of power could be extracted, and that would have a major effect on climate (Earth System Dynamics, vol 2, p 201).

Wind Power Can’t be scaled up

Denmark is often pointed to as a country that scaled wind up to provide 20% of its power.   Yet because wind is so intermittent, no conventional power plants have been shut down, because they need to step in when the wind isn’t blowing (enough).  The quick ramping up and down of these power plants actually increases greenhouse gas emissions.  And when the wind does blow enough, the power is surplus, and most is sold to other countries at an extremely cheap price.  And often the Danes have to import electricity!  They pay the highest electricity prices in Europe.  The actual capacity factor is 20%, not the 30% the BWEA and AWEA claim is possible (Rosenbloom).

Small windmills

According to the American Wind Energy Association, these are the challenges of small windmills:

  1. They’re too expensive for most people
  2. Insufficient product reliability
  3. Lack of consumer protection from unscrupulous suppliers
  4. Most local jurisdictions limit the height of structures to 35 feet (wind towers must be at least 60 feet high and higher than surrounding objects like trees)
  5. Utilities make it hard to connect to the grid and discourage people from doing so
  6. The inverters that modify the wildly fluctuating wind voltages into 60-cycle AC are too expensive
  7. They’re too noisy

Wind turbines can NOT help us avoid blackouts

This is because wind turbines need power from the grid to work. A blackout knocks them out, too.

Castelvecchi, D. March 2012. Gather the Wind. If renewable energy is going to take off, we need good ways of storing it for the times when the sun isn’t shining and the wind isn’t blowing.  Scientific American.

Cembalest, Michael. 21 Nov 2011. Eye on the Market: The quixotic search for energy solutions. J. P. Morgan.

Cubic Mile of Oil.  Wikipedia.

De Castro, C. 2011. Global Wind Power Potential: Physical and technological limits. Energy Policy.

E.ON Netz Corp. Wind Report 2004. Renewable Energy Foundation. E.ON Netz Wind Report 2005 shows UK renewables policy is mistaken.

Fisher, T. Oct 23, 2013. Big Wind’s Dirty Little Secret: Toxic Lakes and Radioactive Waste. Institute for Energy Research.

Galbraith, K. 7 Aug 2011. Wind Power Gains as Gear Improves. New York Times.

Mason, V. 2005. Wind power in West Denmark. Lessons for the UK.

Michel, J, et al. July 2007. Worldwide Synthesis and Analysis of Existing Information Regarding Environmental Effects of Alternative Energy Uses on the Outer Continental Shelf. U.S. Department of the Interior. Minerals Management Service. OCS STUDY MMS 2007- 038

Miller, L. M. et al. Jet stream wind power as a renewable energy resource: little power, big impacts. Earth System Dynamics, 2011; 2 (2): 201 DOI: 10.5194/esd-2-201-2011

Nelder, C. 31 May 2010. 195 Californias or 74 Texases to Replace Offshore Oil. ASPO Peak Oil Review.

Parry, Simon. 11 Jan 2012.  In China, the true cost of Britain’s clean, green wind power experiment: Pollution on a disastrous scale.

Prieto, P. A.  21 Oct 2008. Solar + Wind in Spain/ World. Closing the growing gap? ASPO International conference.

Rose, S., et al. 10 Jan 2012. Quantifying the hurricane risk to offshore wind turbines. Proceedings of the National Academy of Sciences.

Rosenbloom, E. 2006. A Problem With Wind Power.

Takemoto, Y. 31 Aug 2006. Eurus Energy May Scrap Wind Power Project in Japan.  Bloomberg.

Twiddy, D. 2 Feb 2008. Wind farms need techs to keep running. Associated Press.

Udall, Randy. How many wind turbines to meet the nation’s needs? Energyresources message 2202


More articles on wind problems in various areas (not cited above)

Clover, C. 9 Dec 2006. Wind farms ‘are failing to generate the predicted amount of electricity‘. Telegraph.


Not on the internet anymore:

Blackwell, R. Oct 30, 2005. How much wind power is too much? Globe and Mail.

Wind power has become a key part of Canada’s energy mix, with the number of installed wind turbines growing exponentially in recent months. But the fact the wind doesn’t blow all the time is creating a potential roadblock that could stall growth in the industry.

Alberta and Ontario, the two provinces with the most wind turbines up and whirling, face concerns that there are limits on how much power can be generated from the breeze before their electricity systems are destabilized.

Alberta recently put a temporary cap on wind generation at 900 megawatts — a level it could reach as early as next year — because of the uncertainty. And a report in Ontario released last week says that in some situations with more than 5,000 MW of wind power, stable operation of the power grid could be jeopardized.

Warren Frost, vice-president for operations and reliability at the Alberta Electric System Operator, said studies done over the past couple of years showed there can be problems when wind contributes more than about 10 per cent of the province’s electricity — about 900 MW — because of the chance the wind could stop at any time.

Each 100 MW of wind power is enough to supply a city about the size of Lethbridge, Alta.

If the power “disappears on you when the wind dies, then you’ve got to make it up, either through importing from a neighbouring jurisdiction or by ramping up generators,” Mr. Frost said.

But Alberta is limited in its imports, because the provincial power grid has connections only with British Columbia and Saskatchewan. And hydroelectric plants with water reservoirs, which can turn on a dime to start producing power, are limited in the province. Coal-fired plants and most gas-fired plants take time to get up to speed, making them less useful as backups when the wind fails.

There can also be a problem, Mr. Frost noted, when the wind picks up and generates more power than is being demanded — that potential imbalance also has to be accounted for.

There are a number of ways to allow wind power to make up a greater proportion of the electricity supply, but they require more study, Mr. Frost said. First, he said, the province can develop more sophisticated ways of forecasting the wind so the power it generates is more predictable.

The province could also build more plants that can quickly respond if the wind dies down during a peak period, for example. But building new gas-powered plants merely to help handle the variability of wind is certain to raise the ire of environmentalists.

The province could also increase its connections to other jurisdictions, where it would buy surplus power when needed. Alberta is already looking at links with some northwestern U.S. states, including Montana.

Over all, Alberta is committed to “adding as much wind as feasible,” Mr. Frost said. “What we’re balancing is the reliability [issue].”

Robert Hornung, president of the Canadian Wind Energy Association, which represents companies in the wind business, said he prefers to think of Alberta’s 900 MW limit as a “speed bump” rather than a fixed cap.

“We have every confidence they’ll be able to go further than that,” Mr. Hornung said, particularly if the industry and regulators put some effort into wind forecasting over the next year or so. That’s crucial, he said, because “we have projects of many, many more megawatts than 900 waiting to proceed in Alberta.”

In Ontario, the situation is less acute than in Alberta, but the wind study released last week — prepared for the industry and regulators — shows some similar concerns.

While wind power could be handled by the Ontario grid up to 5,000 MW — about 320 MW of wind turbines are currently in operation with another 960 MW in planning stages — the situation changes at higher levels, the study suggests.

Particularly during low demand periods when wind makes up a relatively high proportion of the power mix, “stable operation of the power system could be compromised” if backup systems can’t be ramped up quickly to deal with wind fluctuations, the report said.

But Ontario is in a better position than Alberta because it has far more interconnections with other provinces and states, where it can buy or sell power.

And it also has its wind turbines more geographically dispersed than Alberta, where most wind farms are in the south of the province. That means the chance of the wind failing everywhere at the same time is lower in Ontario.

Don Tench, director of planning and assessments for Ontario’s Independent Electricity System Operator, said he thinks better wind forecasting is the key to making the new source of power work effectively.

“If we have a few hours notice of a significant wind change, we can make plans to deal with it,” he said.

Mr. Frost, of the Alberta system operator, said European countries such as Denmark and Germany have been able to maintain a high proportion of wind power in their electricity systems mainly because they have multiple connections to other countries’ power grids. That gives them substantial flexibility to import or export power to compensate for wind fluctuation.

Germany, for example, has 39 international interconnections, he said, making variable wind conditions much easier to manage.

Wiley, L. 2007. Utility scale wind turbine manufacturing requirements. Presentation at the National Wind Coordinating Collaborative’s Wind Energy and Economic Development Forum, Lansing, Mich., April 24, 2007.


Population posts on the internet

[Below are posts I’ve run across on population that I liked. There are no doubt thousands more worth reading as well; send me your favorite links.  I agree with Ehrlich that we aren’t going to do a damn thing about controlling our numbers.  It will be left to Mother Nature to cut our numbers back to what the earth can support after fossil fuels decline.  In the brief 100 years or so the oil boom lasted, we have ravaged our atmosphere, oceans, and soil both chemically and physically with enormous diesel-powered machines that blew up and leveled mountains, destroyed biodiversity by clearing forests and wetlands to grow food, scarred the earth with mining, and paved the landscape with roads, parking lots, cities, and shopping malls. But after reading Alan Weisman’s “The World Without Us”, many of the scars will be gone 100 years from now, which is both wonderful and unbelievably sad, because much of our Enlightenment and knowledge is likely to disappear forever.  Alice Friedemann]

Reese, Richard Adrian. 21 Nov 2014. The Population Bomb – Revisited. What Is Sustainable.


Giant Oil Field Decline Rates

Summary of article 1, Cobb’s “Aging Giant Oil Fields” 2013

  • The world’s 507 giant oil fields comprise a little over 1% of all oil fields, but produce 60% of current world supply
  • Of the 331 largest fields, 261, or 79%, are declining at 6.5% per year.
  • Techno-fixes have made matters worse: they’ll increase the decline rate to 10% or more, because new technology gets us oil faster now that we would otherwise have gotten later
  • And that will make it harder for unconventional oil (tar sands, deep ocean, tight “fracked” oil, etc.) to replace it

Summary of article 2, Koppelaar’s “… future oil supply”:

Based on 3 studies, an average global oil decline rate of 4.5 to 6% is assumed. No problems until 2013, and even then only if there’s a rapid recovery of the economic system. Otherwise:
2014: in a weak recovery, oil starts to tighten
2017: weak recovery; growing demand can’t be met
2020: if there’s another economic downturn, there is ample supply for a decade

Aging giant oil fields, not new discoveries are the key to future oil supply

April 7, 2013  by Kurt Cobb

With all the talk about new oil discoveries around the world and new techniques for extracting oil in such places as North Dakota and Texas, it would be easy to miss the main action in the oil supply story: Aging giant fields produce more than half of global oil supply and are already declining as a group. Research suggests that their annual production decline rates are likely to accelerate.

Here’s what the authors of “Giant oil field decline rates and their influence on world oil production” concluded:

  1. The world’s 507 giant oil fields comprise a little over 1% of all oil fields, but produce 60% of current world supply (2005). (A giant field is defined as having more than 500 million barrels of ultimately recoverable resources of conventional crude. Heavy oil deposits are not included in the study.)
  2. “[A] majority of the largest giant fields are over 50 years old, and fewer and fewer new giants have been discovered since the decade of the 1960s.” The top 10 fields with their location and the year production began are: Ghawar (Saudi Arabia) 1951, Burgan (Kuwait) 1945, Safaniya (Saudi Arabia) 1957, Rumaila (Iraq) 1955, Bolivar Coastal (Venezuela) 1917, Samotlor (Russia) 1964, Kirkuk (Iraq) 1934, Berri (Saudi Arabia) 1964, Manifa (Saudi Arabia) 1964, and Shaybah (Saudi Arabia) 1998 (discovered 1968). (This list was taken from Fredrik Robelius’s “Giant Oil Fields -The Highway to Oil.”)
  3. The 2009 study focused on 331 giant oil fields from a database previously created for the groundbreaking work of Robelius mentioned above. Of those, 261 or 79 percent are considered past their peak and in decline.
  4. The average annual production decline for those 261 fields has been 6.5 percent. That means, of course, that the number of barrels coming from these fields on average is 6.5 percent less EACH YEAR.
  5. Now, here’s the key insight from the study. An evaluation of giant fields by date of peak shows that new technologies applied to those fields have kept their production higher for longer only to lead to more rapid declines later. As the world’s giant fields continue to age and more start to decline, we can therefore expect the annual decline in their rate of production to worsen. Land-based and offshore giants that went into decline in the last decade showed annual production declines on average above 10 percent.
  6. What this means is that it will become progressively more difficult for new discoveries to replace declining production from existing giants. And, though I may sound like a broken record, it is important to remind readers that the world remains on a bumpy production plateau for crude oil including lease condensate (which is the definition of oil), a plateau which began in 2005.
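The compounding in points 4 and 5 can be sketched with simple geometric decline (the year spans below are my choices for illustration; the 6.5% and 10% rates come from the study):

```python
def production_after(initial, annual_decline, years):
    """Output remaining after compounding a fixed annual decline rate."""
    return initial * (1 - annual_decline) ** years

# At the study's 6.5%/year average, a declining giant field loses about
# half its output in a decade:
print(round(production_after(100, 0.065, 10), 1))  # → 51.1

# At the ~10%/year seen in recently peaked giants, halving takes ~7 years:
print(round(production_after(100, 0.10, 7), 1))    # → 47.8
```

That difference between 6.5% and 10% is what makes the technology-induced steepening of decline curves so consequential for replacement requirements.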

[rest of article snipped from here on]

1 Mar 2010  Drawing the lower and upper boundaries of future oil supply

By Rembrandt Koppelaar, ASPO Netherlands

The oil supply challenge is often summarized in terms of the production volume equivalent of Saudi Arabia that needs to be replaced.

This popular metric is based on in-depth studies of global decline rates that show a decline range between 4.5 and 6 percent over the current 73 million barrels of crude oil produced per day. By using such literature values for all types of production, it can be shown that:

  • In the next 3 years there’s a sufficient oil supply for world demand under any economic scenario.
  • Supply constraints will arise if OPEC proves to be too slow in turning available capacity into production.
  • Only in the unlikely case of a rapid economic recovery will oil supply be unable to meet growing demand beyond 2013.
  • In case of a fairly weak economic recovery the oil market will begin to tighten in 2014 when production capacity begins to decline and growing demand can no longer be met around 2017.
  • If we suffer another economic downturn, ample oil supply will be available for a period of at least a decade.
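The “Saudi Arabia equivalents” metric can be made concrete. The Saudi output figure of roughly 9.7 million barrels per day is my illustrative round number, not one from the text; the 73 Mb/d and the 4.5-6% range are from the article:

```python
GLOBAL_CRUDE_MBD = 73.0   # current crude production, from the article
SAUDI_MBD = 9.7           # assumed illustrative Saudi Arabian output

for decline in (0.045, 0.06):
    lost_per_year = GLOBAL_CRUDE_MBD * decline     # Mb/d of capacity lost each year
    years_per_saudi = SAUDI_MBD / lost_per_year    # years to lose one "Saudi Arabia"
    print(f"{decline:.1%} decline: {lost_per_year:.2f} Mb/d lost per year, "
          f"one Saudi Arabia every {years_per_saudi:.1f} years")
```

Under these assumptions, the world must bring on the equivalent of a new Saudi Arabia roughly every two to three years just to stand still.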

Decline rates over current conventional production.
Several recent studies have examined the global decline rate of total conventional oil production, including fields with rising, declining, and plateau production.

1) Cambridge Energy Research Associates showed in 2007 that the average decline of oil fields under production was 4.5% per year (CERA 2007). This study used data from 811 oil fields representing two thirds of global oil production, obtained from the IHS Energy database. The selection comprised 400 fields, each with reserves of more than 300 million barrels, that produced half of global production in 2006, and 411 fields with less than 300 million barrels that produced only 8.5% of production in 2006.

2) Höök et al. (2009) estimated that the overall decline rate is 6% globally based on the finding that decline rates in smaller fields are equal or greater than those of giant fields.

Based on these studies, a starting point for current decline lies between 4.5% and 6%. Within this range a decline rate around 5% can be taken as a reasonable number. The value given by CERA (2007) of 4.5% probably over-represents giant and supergiant fields and hence is likely too low, as small fields have bigger decline rates. The value given by Höök et al. (2009a) of 6% is probably too high, as the total decline rate is inferred directly from post-peak decline of giant and supergiant fields on the assumption that smaller fields will tend to have an equal or higher decline, ignoring the effect of fields still on a plateau or in build-up.

Although 5% is a good starting point, the catch lies in knowing what will happen in the future. More supergiant and giant fields will go into decline due to depletion as time passes, causing an increase in the average decline rate that needs to be compensated. This was shown by Höök et al. (2009), who found that the world average decline rate of the 331 giant fields was near zero until 1960, after which the average decline rate increased by around 0.15% per year (Höök, M., Hirsch, R., Aleklett, K., 2009. Giant oil field decline rates and their influence on world oil production. Energy Policy 37, 2262-2272).

For scenario analysis we can take optimistic and pessimistic boundaries based on the studies described above. The most optimistic stance is to extrapolate the starting-point decline rate, estimated here at 5%, across the entire forecast horizon up to 2030. The most pessimistic view based on current information would be a rapid increase in decline over the next five to ten years up to 6.7%, as the production-weighted decline rate rapidly catches up with the average decline rate, followed by a smoother increase of 0.15% per year, as historically was the case, up to a value of 8.6% in 2030. The real decline will lie somewhere in between these two bounds.
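The two boundary scenarios can be sketched roughly as follows. The 2010 start year and the linear five-year ramp are my assumed reading of the description; only the 5%, 6.7%, 0.15%-per-year, and 8.6% figures come from the text:

```python
OPTIMISTIC_RATE = 5.0  # percent per year, held flat through 2030

def pessimistic_rate(year, start=2010):
    """Assumed shape: ramp from 5% to 6.7% over the first five years,
    then creep up 0.15%/year (the historical increase), reaching 8.6% in 2030."""
    t = year - start
    if t <= 5:
        return 5.0 + t * (6.7 - 5.0) / 5   # rapid catch-up phase
    return min(6.7 + (t - 5) * 0.15, 8.6)  # slow historical creep, capped

for year in (2010, 2015, 2020, 2030):
    print(year, OPTIMISTIC_RATE, round(pessimistic_rate(year), 2))
```

Even small differences in the assumed rate compound into very different replacement requirements by 2030, which is why the choice of boundary matters so much.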




Gail Tverberg: 8 pitfalls in evaluating green energy solutions

Eight Pitfalls in Evaluating Green Energy Solutions

Does the recent climate accord between US and China mean that many countries will now forge ahead with renewables and other green solutions? I think that there are more pitfalls than many realize.

Pitfall 1. Green solutions tend to push us from one set of resources that are a problem today (fossil fuels) to other resources that are likely to be problems in the longer term.  

The name of the game is “kicking the can down the road a little.” In a finite world, we are reaching many limits besides fossil fuels:

  1. Soil quality–erosion of topsoil, depleted minerals, added salt
  2. Fresh water–depletion of aquifers that only replenish over thousands of years
  3. Deforestation–cutting down trees faster than they regrow
  4. Ore quality–depletion of high quality ores, leaving us with low quality ores
  5. Extinction of other species–as we build more structures and disturb more land, we remove habitat that other species use, or pollute it
  6. Pollution–many types: CO2, heavy metals, noise, smog, fine particles, radiation, etc.
  7. Arable land per person, as population continues to rise

The danger in almost every “solution” is that we simply transfer our problems from one area to another. Growing corn for ethanol can be a problem for soil quality (erosion of topsoil) and fresh water (drawing down aquifers in Nebraska and Colorado). If farmers switch to no-till farming to prevent the erosion issue, then great amounts of Roundup are often used, leading to the loss of other species.

Encouraging use of forest products because they are renewable can lead to loss of forest cover, as more trees are made into wood chips. There can even be a roundabout reason for loss of forest cover: if high-cost renewables indirectly make citizens poorer, citizens may save money on fuel by illegally cutting down trees.

High tech goods tend to use considerable quantities of rare minerals, many of which are quite polluting if they are released into the environment where we work or live. This is a problem both for extraction and for long-term disposal.

Pitfall 2. Green solutions that use rare minerals are likely not very scalable because of quantity limits and low recycling rates.  

Computers, which are the heart of many high-tech goods, use almost the entire periodic table of elements.

Figure 1. Slide by Alicia Valero showing that almost the entire periodic table of elements is used for computers.

When minerals are used in small quantities, especially when they are used in conjunction with many other minerals, they become virtually impossible to recycle. Experience indicates that less than 1% of specialty metals are recycled.

Figure 2. Slide by Alicia Valero showing recycling rates of elements.

Green technologies, including solar panels, wind turbines, and batteries, have pushed resource use toward minerals that were little exploited in the past. If we try to ramp up usage, current mines are likely to deplete rapidly. We will eventually need to add new mines in areas where resource quality is lower and concern about pollution is higher. Costs will be much higher in such mines, making devices using such minerals less affordable, rather than more affordable, in the long run.

Of course, a second issue in the scalability of these resources has to do with limits on oil supply. As ores of scarce minerals deplete, more rather than less oil will be needed for extraction. If oil is in short supply, obtaining this oil is likely to be a problem as well, further inhibiting scalability of scarce mineral extraction. The issue with respect to oil supply may not be high price; it may be low price, for reasons I will explain later in this post.

Pitfall 3. High-cost energy sources are the opposite of the “gift that keeps on giving.” Instead, they often represent the “subsidy that keeps on taking.”

Oil that was cheap to extract (say $20 per barrel) was the true “gift that keeps on giving.” It made workers more efficient in their jobs, thereby contributing to efficiency gains. It made countries using the oil more able to create goods and services cheaply, thus helping them compete better against other countries. Wages tended to rise, as long as the price of oil stayed below $40 or $50 per barrel (Figure 3).

Figure 3. Average wages in 2012$ compared to Brent oil price, also in 2012$. Average wages are total wages based on BEA data adjusted by the CPI-Urban, divided by total population. Thus, they reflect changes in the proportion of population employed as well as wage levels.

More workers joined the work force, as well. This was possible in part because fossil fuels made contraceptives available, reducing family size. Fossil fuels also made tools such as dishwashers, clothes washers, and clothes dryers available, reducing the hours needed in housework. Once oil became high-priced (that is, over $40 or $50 per barrel), its favorable impact on wage growth disappeared.

When we attempt to add new higher-cost sources of energy, whether they are high-cost oil or high-cost renewables, they present a drag on the economy for three reasons:

  1. Consumers tend to cut back on discretionary expenditures, because energy products (including food, which is made using oil and other energy products) are a necessity. These cutbacks feed back through the economy and lead to layoffs in discretionary sectors. If they are severe enough, they can lead to debt defaults as well, because laid-off workers have difficulty paying their bills.
  2. An economy with high-priced sources of energy becomes less competitive in the world economy, because it must compete with countries using less expensive sources of fuel. This tends to lead to lower employment in countries whose mix of energy is weighted toward high-priced fuels.
  3. With (1) and (2) happening, economic growth slows. There are fewer jobs, and debt becomes harder to repay.

In some sense, the cost of producing an energy product is a measure of diminishing returns–that is, cost is a measure of the amount of resources that directly or indirectly go into making that device or energy product, with higher cost reflecting the increasing effort required. If more resources are used in producing high-cost energy products, fewer resources are available for the rest of the economy. Even if a country tries to hide this situation behind a subsidy, the problem comes back to bite the country. This issue underlies the reason that subsidies tend to “keep on taking.”

The dollar amount of subsidies is also concerning. Currently, subsidies for renewables (before the multiplier effect) average at least $48 per barrel of oil equivalent.[1] With the multiplier effect, the dollar amount of subsidies is likely more than the current cost of oil (about $80), and possibly even more than the peak cost of oil in 2008 (about $147). The subsidy (before multiplier effect) per metric ton of oil equivalent amounts to $351. This is far more than the charge for any carbon tax.
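The per-barrel and per-tonne subsidy figures above are related by the standard conversion of roughly 7.33 barrels of oil equivalent per metric ton of oil equivalent; a quick check confirms they are consistent:

```python
# Consistency check on the subsidy figures quoted above.
# Conversion: ~7.33 barrels of oil equivalent (boe) per metric ton of oil equivalent.
BARRELS_PER_TONNE = 7.33

subsidy_per_boe = 48.0                      # $/boe, from footnote 1
subsidy_per_tonne = subsidy_per_boe * BARRELS_PER_TONNE
print(f"Subsidy per tonne of oil equivalent: ${subsidy_per_tonne:.0f}")  # ≈ $352

# Footnote 1 also cites $121 billion of total 2013 renewable subsidies;
# dividing by $48/boe implies about 2.5 billion boe of subsidized output.
total_subsidies = 121e9
implied_output_boe = total_subsidies / subsidy_per_boe
print(f"Implied subsidized output: {implied_output_boe / 1e9:.1f} billion boe")
```

The small gap between $351 and $352 is just rounding in the quoted figures.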

Pitfall 4. Green technology (including renewables) can only be add-ons to the fossil fuel system.

A major reason why green technology can only be add-ons to the fossil fuel system relates to Pitfalls 1 through 3. New devices, such as wind turbines, solar PV, and electric cars aren’t very scalable because of high required subsidies, depletion issues, pollution issues, and other limits that we don’t often think about.

A related reason is the fact that even if an energy product is “renewable,” it needs long-term maintenance. For example, a wind turbine needs replacement parts from around the world. These are not available without fossil fuels. Any electrical transmission system transporting wind or solar energy will need frequent repairs, also requiring fossil fuels, usually oil (for building roads and for operating repair trucks and helicopters).

Given the problems with scalability, there is no way that all current uses of fossil fuels can be converted to run on renewables. According to BP data, in 2013 renewable energy (including biofuels and hydroelectric) amounted to only 9.4% of total energy use. Wind amounted to 1.1% of world energy use; solar amounted to 0.2% of world energy use.

Pitfall 5. We can’t expect oil prices to keep rising because of affordability issues.  

Economists tell us that if there are inadequate oil supplies there should be few problems:  higher prices will reduce demand, encourage more oil production, and encourage production of alternatives. Unfortunately, there is also a roundabout way that demand is reduced: wages tend to be affected by high oil prices, because high-priced oil tends to lead to less employment (Figure 3). With wages not rising much, the rate of growth of debt also tends to slow. The result is that products that use oil (such as cars) are less affordable, leading to less demand for oil. This seems to be the issue we are now encountering, with many young people unable to find good-paying jobs.

If oil prices decline, rather than rise, this creates a problem for renewables and other green alternatives, because needed subsidies are likely to rise rather than disappear.

The other issue with falling oil prices is that oil prices quickly become too low for producers. Producers cut back on new development, leading to a decrease in oil supply in a year or two. Renewables and the electric grid need oil for maintenance, so are likely to be affected as well. Related posts include Low Oil Prices: Sign of a Debt Bubble Collapse, Leading to the End of Oil Supply? and Oil Price Slide – No Good Way Out.

Pitfall 6. It is often difficult to get the finances for an electrical system that uses intermittent renewables to work out well.  

Intermittent renewables, such as electricity from wind, solar PV, and wave energy, tend to work acceptably well in certain specialized cases:

  • When there is a lot of hydroelectricity nearby to offset shifts in intermittent renewable supply;
  • When the amount added is sufficiently small that it has only a small impact on the grid;
  • When the cost of electricity from otherwise available sources, such as burning oil, is very high. This often happens on tropical islands. In such cases, the economy has already adjusted to very high-priced electricity.

Intermittent renewables can also work well supporting tasks that can be intermittent. For example, solar panels can work well for pumping water and for desalination, especially if the alternative is using diesel for fuel.

Where intermittent renewables tend not to work well is when

  1. Consumers and businesses expect to get a big credit for using electricity from intermittent renewables, but
  2. Electricity added to the grid by intermittent renewables leads to little cost savings for electricity providers.

For example, people with solar panels often expect “net metering,” a credit equal to the retail price of electricity for electricity sold to the electric grid. The benefit to the electric grid is generally a lot less than the credit for net metering, because the utility still needs to maintain the transmission lines and perform many of the functions it did in the past, such as sending out bills. In theory, the utility should still get paid for all of these functions, but it doesn’t. Net metering gives far too much credit to those with solar panels, relative to the savings to the electric companies. This approach runs the risk of starving the fossil fuel, nuclear, and grid portions of the system of needed revenue.
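A rough sketch of this mismatch, using hypothetical rates (the retail price, wholesale price, and export volume below are made up for illustration, not drawn from any actual utility):

```python
# Illustrative net-metering arithmetic with made-up numbers.
# A solar owner is credited at the retail rate for exported power, but the
# utility only avoids the wholesale cost of that power; the difference is
# revenue the utility loses while its fixed costs (wires, billing, backup
# capacity) stay the same.

retail_rate = 0.12      # $/kWh credited to the solar owner (hypothetical)
wholesale_rate = 0.04   # $/kWh of generation cost the utility avoids (hypothetical)
exported_kwh = 4000     # kWh exported to the grid per year (hypothetical)

credit_to_owner = exported_kwh * retail_rate
utility_savings = exported_kwh * wholesale_rate
revenue_shortfall = credit_to_owner - utility_savings

print(f"Credit to solar owner:  ${credit_to_owner:.2f}")
print(f"Utility's avoided cost: ${utility_savings:.2f}")
print(f"Revenue shortfall:      ${revenue_shortfall:.2f}")
```

With these illustrative numbers, the utility credits $480 but avoids only $160 of cost, leaving a $320 shortfall that must be recovered from other customers or absorbed as a loss.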

A similar problem can occur if an electric grid buys wind or solar energy on a preferential basis from commercial providers at wholesale rates in effect for that time of day. This practice tends to lead to a loss of profitability for fossil fuel-based providers of electricity. This is especially the case for natural gas “peaking plants” that normally operate for only a few hours a year, when electricity rates are very high.

Germany has been adding wind and solar, in an attempt to offset reductions in nuclear power production. Germany is now running into difficulty with its pricing approach for renewables. Some of its natural gas providers of electricity have threatened to shut down because they are not making adequate profits with the current pricing plan. Germany also finds itself using more cheap (but polluting) lignite coal, in an attempt to keep total electrical costs within a range customers can afford.

Pitfall 7. Adding intermittent renewables to the electric grid makes the operation of the grid more complex and more difficult to manage. We run the risk of more blackouts and eventual failure of the grid. 

In theory, we can change the electric grid in many ways at once. We can add intermittent renewables, “smart grids,” and “smart appliances” that turn on and off, depending on the needs of the electric grid. We can add the charging of electric automobiles as well. All of these changes add to the complexity of the system. They also increase the vulnerability of the system to hackers.

The usual assumption is that we can step up to the challenge–that we can handle this increased complexity. A recent report by The Institution of Engineering and Technology in the UK on the Resilience of the Electricity Infrastructure questions whether this is the case. It says such changes “. . . vastly increase complexity and require a level of engineering coordination and integration that the current industry structure and market regime does not provide.” Perhaps the system can be changed so that more attention is focused on resilience, but incentives would need to change to make resilience (and not profit) the top priority. It is doubtful this will happen.

The electric grid has been called the world’s largest and most complex machine. We “mess with it” at our own risk. Nafeez Ahmed recently published an article called The Coming Blackout Epidemic, discussing challenges grids are now facing. I have written about electric grid problems in the past myself: The US Electric Grid: Will it be Our Undoing?

Pitfall 8. A person needs to be very careful in looking at studies that claim to show favorable performance for intermittent renewables.  

Analysts often overestimate the benefits of wind and solar. Just this week a new report was published saying that the largest solar plant in the world has so far produced only half of the electricity originally anticipated since it opened in February 2014.

In my view, “standard” Energy Returned on Energy Invested (EROEI) and Life Cycle Analysis (LCA) calculations tend to overstate the benefits of intermittent renewables, because they do not include a “time variable,” and because they do not consider the effect of intermittency. More specialized studies that do include these variables show very concerning results. For example, Graham Palmer looks at the dynamic EROEI of solar PV, using batteries (replaced at eight-year intervals) to mitigate intermittency.[2] He did not include inverters–something that would be needed and would reduce the return further.

Figure 4. Graham Palmer's chart of Dynamic Energy Returned on Energy Invested from "Energy in Australia."

Palmer’s work indicates that because of the big energy investment initially required, the system is left in a deficit energy position for a very long time. The energy that is put into the system is not paid back until 25 years after the system is set up. After the full 30-year lifetime of the solar panel, the system returns 1.3 times the initial direct energy investment.
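The shape of Palmer's result can be illustrated with a toy energy ledger. The amounts below are invented round numbers, not Palmer's data; they are chosen only to produce a similarly late payback and a final return near 1.3:

```python
# Toy dynamic energy ledger for a solar-PV-plus-battery system.
# All quantities are in arbitrary energy units and are invented for
# illustration; they are NOT Palmer's actual inputs.

PANEL_COST = 15.0        # up-front energy to make and install the panel
BATTERY_COST = 2.0       # energy embodied in each battery set
BATTERY_LIFE = 8         # years between battery replacements
ANNUAL_OUTPUT = 1.0      # energy delivered to the user per year
LIFETIME = 30            # panel lifetime in years

invested = PANEL_COST
produced = 0.0
breakeven_year = None

for year in range(LIFETIME):
    if year % BATTERY_LIFE == 0:          # new battery at years 0, 8, 16, 24
        invested += BATTERY_COST
    produced += ANNUAL_OUTPUT
    if breakeven_year is None and produced >= invested:
        breakeven_year = year + 1

print(f"Energy payback in year {breakeven_year}")        # late in the lifetime
print(f"Lifetime return: {produced / invested:.2f}x")    # ≈ 1.3
```

Because the panel and first battery are paid for in energy before any output flows, the ledger sits in deficit for most of the system's life; the periodic battery replacements keep pushing the breakeven point later.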

One further catch is that the energy used in the EROEI calculations includes only a list of direct energy inputs. The total energy required is much higher; it includes indirect inputs that are not directly measured, as well as energy needed to provide necessary infrastructure, such as roads and schools. When these are considered, the minimum EROEI needs to be something like 10. Thus, the solar panel plus battery system modeled is really a net energy sink, rather than a net energy producer.

Another study by Weissbach et al. looks at the impact of adjusting for intermittency. (This study, unlike Palmer’s, doesn’t attempt to adjust for timing differences.) It concludes, “The results show that nuclear, hydro, coal, and natural gas power systems . . . are one order of magnitude more effective than photovoltaics and wind power.”


It would be nice to have a way around limits in a finite world. Unfortunately, this is not possible in the long run. At best, green solutions can help us avoid limits for a little while longer.

The problem we have is that statements about green energy are often overly optimistic. Cost comparisons are often just plain wrong–for example, the supposed near grid parity of solar panels is an “apples to oranges” comparison. An electric utility cannot possibly credit a user with the full retail cost of electricity for the intermittent period it is available, without going broke. Similarly, it is easy to overpay for wind energy, if payments are made based on time-of-day wholesale electricity costs. We will continue to need our fossil-fueled balancing system for the electric grid indefinitely, so we need to continue to financially support this system.

There clearly are some green solutions that will work, at least until the resources needed to produce these solutions are exhausted or other limits are reached. For example, geothermal may be a solution in some locations. Hydroelectric, including “run of the stream” hydro, may be a solution in some locations. In all cases, a clear look at trade-offs needs to be done in advance. New devices, such as gravity-powered lamps and solar thermal water heaters, may be helpful, especially if they do not use resources in short supply and are not likely to cause pollution problems in the long run.

Expectations for wind and solar PV need to be reduced. Solar PV and offshore wind are both likely net energy sinks because of storage and balancing needs, if they are added to the electric grid in more than very small amounts. Onshore wind is less bad, but it needs to be evaluated closely in each particular location. The need for large subsidies should be a red flag that costs are likely to be high, both short and long term. Another consideration is that wind is likely to have a short lifespan if oil supplies are interrupted, because of its frequent need for replacement parts from around the world.

Some citizens who are concerned about the long-term viability of the electric grid will no doubt want to purchase their own solar systems with inverters and back-up batteries. I see no reason to discourage people who want to do this–the systems may prove to be of assistance to these citizens. But I see no reason to subsidize these purchases, except perhaps in areas (such as tropical islands) where this is the most cost-effective way of producing electric power.


[1] In 2013, the total amount of subsidies for renewables was $121 billion according to the IEA. If we compare this to the amount of renewables (biofuels + other renewables) reported by BP, we find that the subsidy per barrel of oil equivalent was $48. These amounts are likely understated, because BP biofuels include fuel that doesn’t require subsidies, such as waste sawdust burned for electricity.

[2] Palmer’s work is published in Energy in Australia: Peak Oil, Solar Power, and Asia’s Economic Growth, published by Springer in 2014. This book is part of Prof. Charles Hall’s “Briefs in Energy” series.

Posted in Alternative Energy, Gail Tverberg

How Much Oil is Left?


[This is a complex question, because the quality of the oil matters.  We’ve gotten the good stuff, the light, easy oil. Much of the remaining oil is deep, nasty-gunky stuff, in arctic and other remote areas, and will take a lot more energy to produce and refine]

Ron Patterson. July 14, 2014. World Crude Oil Production by Geographical Area.

Check out the graph “World Less North America” at Peak Oil Barrel, which shows world oil production minus North American production is down by 2 million barrels per day. Are we starting to see the petticoats of the net energy cliff? As David Hughes wrote in Drilling Deeper: A Reality Check on U.S. Government Forecasts for a Lasting Tight Oil & Shale Gas Boom, both peak tight (fracked) oil and gas are likely to happen before 2020 in North America. Powers has also documented this in great detail in his book “Cold, Hungry and in the Dark: Exploding the Natural Gas Supply Myth,” and Arthur Berman discusses peaking oil and gas in the November 12, 2014 James Howard Kunstler podcast #260.

The latest estimate of oil production from ASPO: June 2014, The Oil Production Story: Pre- and Post-Peak Nations

In reviewing BP’s latest stats, the “Top 10” nations still dominate the realm of oil, producing 66% of the world total. Our summary table highlights two important pieces of the oil production story:

1) Nations that are past peak (see “Peak Year,” highlighted in turquoise)–because of geologic limits (e.g., Norway, the United Kingdom) or for above-ground reasons;

2) Nations that have yet to clearly peak.

It appears that about half of the Top 20 nations have seen their all-time highs in production. In a number of others, production is currently increasing, with America the record-setting poster child. Yet during 2013, only four nations increased production by over 100,000 barrels/day vs. 15 in 2004, while four nations experienced declines of roughly 100,000 barrels/day vs. three in 2004. And most importantly, Russia and China are likely near peak production.


Robert Rapier. Jun 25, 2012. How Much Oil Does the World Produce?

Cornucopians keep coming up with rosy predictions.  This article: Don’t worry, be happy, there’s plenty of oil, natural gas, & coal left has a list of articles that rebut their arguments, good summaries of how much oil is left and why peak oil is nearly upon us.

Finding More Oil

Deffeyes dismisses proposals to simply explore more or drill deeper. Oil was created by specific circumstances, and there just isn’t that much of it. First there had to be, in the dinosaur era, a shallow part of the sea where oxygen was low and prehistoric dead fish and fish poop could not completely decompose. Then the organic matter had to “cook” for 100 million years at the right depth, with the right temperature to break down the hydrocarbons into liquid without breaking them too far into natural gas. Almost all oil, he said, comes from between the hot-coffee warmth of 7,000 feet down and the turkey-basting scald of 15,000 feet down – a thin layer under the surface, and then only in limited areas. We could drill to the deepest oil, he said, as far back as the 1940s.

“More than 70% of remaining oil reserves are in five countries in the Middle East: Iran, Iraq, Kuwait, Saudi Arabia, Oman,” said Dean Abrahamson, professor emeritus of environment and energy policy at the University of Minnesota. “The expectation is that, within the next 10 years, the world will become almost completely dependent on those countries.”

“In 2000, there were 16 discoveries of oil ‘mega-fields,'” Aaron Naparstek noted in the New York Press earlier this year. “In 2001, we found 8, and in 2002 only 3 such discoveries were made. Today, we consume about 6 barrels of oil for every 1 new barrel discovered.”

The Power of Exponential Growth: Every ten years we have burned more oil than all previous decades

Study this picture. It is why we are going to hit a brick wall, also known as the “net energy cliff”:

[Figure: oil needed under 7% exponential growth]
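The arithmetic behind the decade claim is straightforward: at roughly 7% annual growth (a rate assumed here for illustration), consumption doubles about every ten years (the rule of 70), so each decade's total roughly equals everything consumed in all prior decades combined:

```python
# At a steady growth rate g, compare one decade's consumption with the
# cumulative consumption of all prior years.  With g = 7%/year, consumption
# doubles roughly every 10 years, so each decade uses about as much as all
# previous history combined.

g = 0.07            # assumed annual growth rate (illustrative)
years_elapsed = 50  # how much history precedes the decade we examine

prior_total = sum((1 + g) ** k for k in range(years_elapsed))
next_decade = sum((1 + g) ** k for k in range(years_elapsed, years_elapsed + 10))

ratio = next_decade / prior_total
print(f"Next decade / all prior consumption: {ratio:.2f}")
# The ratio comes out close to 1.0 — one decade matches all prior decades.
```

This is why steady percentage growth against a finite resource base ends abruptly rather than gradually: the final doubling consumes as much as the entire preceding history.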


Posted in How Much Left, Oil