41 Reasons why wind power cannot replace fossil fuels

Source: Leonard, T. 2012. Broken down and rusting, is this the future of Britain’s ‘wind rush’? https://www.dailymail.co.uk/news/article-2116877/Is-future-Britains-wind-rush.html

Preface. Electricity simply doesn’t substitute for all the uses of fossil fuels, so windmills will never be able to reproduce themselves from the energy they generate — they are simply not sustainable. Consider the life cycle of a wind turbine: giant diesel-powered mining trucks and machines dig deep into the earth for iron ore, fossil-fueled ships take the ore to a facility that crushes it and permeates it with toxic chemicals to extract the metal, and the metal travels by diesel truck or locomotive to a smelter that runs exclusively on fossil fuels 24 x 7 x 365 for up to 22 years (any stoppage causes the lining to shatter, so intermittent electricity won’t do). A wind turbine has over 8,000 parts, delivered over global supply chains via petroleum-fueled ships, rail, air, and trucks to the assembly factory. Finally, diesel cement trucks arrive at the wind turbine site to pour many tons of concrete, other diesel trucks carry segments of the wind turbine to the site, and workers who drove gas or diesel vehicles to the site assemble it.

Here are the topics covered below in this long post:

  1. Windmills require petroleum every single step of their life cycle. If they can’t replicate themselves using wind turbine generated electricity, they are not sustainable
  2. SCALE. Too many windmills needed to replace fossil fuels
  3. SCALE. Wind turbines can’t be scaled up fast enough to replace fossils
  4. Not enough rare earth metals and enormous amounts of cement, steel, and other materials required
  5. Not enough dispatchable power to balance wind intermittency and unreliability
  6. Wind blows seasonally, so for much of the year there wouldn’t be enough wind
  7. When too much wind is blowing for the grid to handle, it has to be curtailed and/or drives electricity prices to zero, driving natural gas, coal, and nuclear power plants out of business
  8. The best wind areas will never be developed
  9. The Grid Can’t Handle Wind Power without natural gas, which is finite
  10. The role of the grid is to keep the supply of power steady and predictable. Wind does the opposite, at some point of penetration it may become impossible to keep the grid from crashing.
  11. The grid blacks out when the supply of power varies too much. Eventually too much wind penetration will crash the grid.
  12. Windmills wouldn’t be built without huge subsidies and tax breaks
  13. Tremendous environmental damage from mining material for windmills
  14. Not enough time to scale wind up
  15. The best wind is too high or remote to capture
  16. Too many turbines could affect Earth’s climate negatively
  17. Wide-scale US wind power could cause significant global warming. A Harvard study raises questions about just how much wind should be part of a climate solution
    Less wind can be captured than thought (see Max Planck Society)
  18. Wind is only strong enough to justify windmills in a few regions
  19. The electric grid needs to be much larger than it is now
  20. Wind blows the strongest when customer demand is the weakest
  21. No utility scale energy storage in sight
  22. Wind Power surges harm industrial customers
  23. Energy returned on Energy Invested is negative
  24. Windmills take up too much space
  25. Wind Turbines break down too often
  26. Large-scale wind energy slows down winds and reduces turbine efficiencies
  27. Offshore Wind Farms likely to be destroyed by Hurricanes
  28. The costs of lightning damage are too high
  29. Wind doesn’t reduce CO2
  30. Turbines increase the cost of farming
  31. Offshore Windmills battered by waves, wind, ice, corrosion, a hazard to ships and ecosystems
  32. Wind turbines are far more expensive than they appear to be
  33. Wind turbine makers are already going out of business and fewer turbines are being built in Europe
  34. TRANSPORTATION LIMITATIONS: Windmills are so huge they’ve reached the limits of land transportation by truck or rail
  35. Windmills may only last 12 to 15 years, or at best 20 years
  36. Not In My Back Yard – NIMBYism
  37. Lack of a skilled and technical workforce
  38. Wind only produces electricity, but what we face is a liquid fuels crisis
  39. Wind has a low capacity factor
  40. Dead bugs and salt reduce wind power generation by 20 to 30%
  41. Small windmills too expensive, too noisy, unreliable, and height restricted

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report.

Windmills require petroleum every single step of their life cycle. If they can’t replicate themselves using wind turbine generated electricity, they are not sustainable

Fossil fuels are essential for making wind turbines, as Robert Wilson explains in Can You Make a Wind Turbine Without Fossil Fuels?

Oil is used from start to finish — from mining and crushing the ore, to smelting it, to delivering the metal to the fabrication plant, to the supply chains for the 8,000 parts in a turbine, to the final delivery site. Cement trucks drive to the site over roads built by diesel-powered road equipment; fossil-made cement and steel rebar form the foundations wind turbines sit on; diesel trucks haul the components of the turbine to the installation place; and diesel cranes lift the turbine sections and 8,000 parts upward. There are no electric blast furnaces to make cement and most steel, only fossil-fueled ones; nor are there electric mining trucks, electric long-haul trucks to deliver the 8,000 parts made all over the world, electric cement trucks, electric cranes, and so on. That means even if a wind turbine could generate enough energy to replicate itself, it wouldn’t matter: the process from start to finish would need to be electrified.

Not only would windmills have to generate enough power to reproduce themselves, but they have to make enough power above and beyond that to fuel the rest of civilization. Think of the energy to make the cement and steel of a 300 foot tower with three 150 foot rotor blades sweeping an acre of air at 100 miles per hour.  The turbine housing alone weighs over 56 tons, the blade assembly 36 tons, and the whole tower assembly is over 163 tons.  Florida Power & Light says a typical turbine site is 42 by 42 foot area with a 30 foot hole filled with tons of steel rebar-reinforced concrete –about 1,250 tons to hold the 300 foot tower in place (Rosenbloom).

The fossil fuels to construct offshore wind turbines is even greater (Anderson 2017):

“The precise volume of fuel consumed when constructing and operating offshore wind farms significantly varies depending on vessel size, weather conditions, load, etc., but a jack-up vessel (used to install turbine foundations) uses approximately 2,640 gallons per day of marine fuel – or 63 barrels per day – according to guidance provided by consultancy BVG Associates. Constructing a 500 MW installation requires between 200 and 300 days of jack-up rig time, which means between 12,571 barrels (bbls) and 18,857 bbls of marine fuel consumed during construction. For comparison, Amtrak consumed about 1.6 million bbls of diesel fuel in 2014, according to the Bureau of Transportation statistics. So, the jack-up rig fuel requirements of building a 500 MW offshore wind farm account for 0.8% to 1.2% of the fuel annually consumed by Amtrak.

In addition to driving the foundations into the seabed, offshore wind farm construction and maintenance activities include laying export and array cables; port construction; offshore substation installation; turbine installation; and crew transfer for the 20-year lifespans of these installations. These activities are completed using vessels, that for the most part, run on marine fuel. Developing the potential offshore wind project sites identified to date along the east coast alone – not to mention the west and Hawaiian coasts – would require tens of thousands of barrels of petroleum.”
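
The arithmetic in that passage is easy to check. Here is a minimal sketch in Python using only the figures quoted above (the 42-gallon barrel is the only added assumption):

    GALLONS_PER_BARREL = 42            # standard US barrel (assumption)

    daily_gallons = 2_640              # marine fuel per jack-up vessel per day (BVG Associates)
    daily_barrels = daily_gallons / GALLONS_PER_BARREL     # ~63 bbl/day

    low_days, high_days = 200, 300     # jack-up rig time for a 500 MW installation
    low_bbl = daily_barrels * low_days                     # ~12,571 bbl
    high_bbl = daily_barrels * high_days                   # ~18,857 bbl

    amtrak_bbl = 1_600_000             # Amtrak diesel consumption, 2014 (BTS)
    print(f"{low_bbl:,.0f}-{high_bbl:,.0f} bbl = "
          f"{low_bbl / amtrak_bbl:.1%}-{high_bbl / amtrak_bbl:.1%} of Amtrak's annual fuel")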

SCALE.  Too many windmills needed to replace fossil fuels

See point #4 in Energy Overview. Oil is butter-fried-steak wrapped in bacon. Alternative Energy is lettuce.

Consider the scale of trying to replace fossil fuels. Every hour 3.7 million barrels of oil are pumped from wells; 932,000 tons of coal are dug; 395 million cubic meters of natural gas are piped from the ground.  In that same hour, another 9,300 people are added to the global population. By 2100, the world will be home to 11 billion of us (Jones Energy Policy).

Consider just the wind power needed to replace offshore oil in the Gulf of Mexico: At 5.8 MBtu heat value in a barrel of oil and 3,412 Btu in a kWh, 1.7 million barrels per day of gulf oil equals 2.9 billion kWh per day, or 1,059 billion kWh a year. Yet total 2008 wind generation was 14.23 billion kWh in Texas and 5.42 billion kWh in California. Which means you’d need 195 Californias or 74 Texases of wind, and 20 years to build it (Nelder).
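
Nelder’s multiples can be reproduced in a few lines. Everything below comes from the figures in the paragraph above:

    BTU_PER_BARREL = 5.8e6         # heat value of a barrel of oil
    BTU_PER_KWH = 3_412

    gulf_bbl_per_day = 1.7e6       # Gulf of Mexico oil production
    kwh_per_day = gulf_bbl_per_day * BTU_PER_BARREL / BTU_PER_KWH   # ~2.9 billion kWh
    kwh_per_year = kwh_per_day * 365                                # ~1,059 billion kWh

    texas_wind_2008 = 14.23e9      # kWh, total 2008 wind generation in Texas
    california_wind_2008 = 5.42e9  # kWh, total 2008 wind generation in California

    print(f"{kwh_per_year / california_wind_2008:.0f} Californias, "
          f"{kwh_per_year / texas_wind_2008:.0f} Texases of 2008 wind output")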

And then the process starts over every 20 years, the very short lifespan of a wind turbine (15 years for offshore turbines, due to corrosion), when energy will be required to dismantle the old one and replace it with a new one.

SCALE.  Wind turbines can’t be scaled up fast enough to replace fossils

Wind turbines come in many shapes and sizes, and their numbers are growing. Yet, they provide only 4.4 % of United States electricity, 181,791,000 MWh. That’s equivalent to the power generated by 32,428 2-MW wind turbines. To supply half of America’s power with wind, another 332,600 2-MW turbines are needed.

In the 11 states of the Western grid, if 35% wind and solar were integrated, their intermittency would require 100% backup from conventional sources such as natural gas and large hydro to maintain system reliability (NREL 2010a). So getting 100% of electricity generation from wind, which on average is blowing only a third of the time (the capacity factor), requires building three times more capacity to keep the system stable and back up wind when it isn’t blowing (CCST 2011).

So our 365,000 wind turbines need to be multiplied by at least three (plus energy storage) for a grand total of 1,095,000 wind turbines in a mostly renewable grid, and somehow connect even the most distant wind turbines in the remote regions of the Great Plains. That seems like the game is stacked against wind and solar, and that someone is moving the goalposts. Why three times more capacity? Read on. The capacity factor is how much power is actually generated, which is always a fraction of the theoretical maximum amount of power that could be obtained if the wind always blew at the optimum speed or every day was the sunniest day of the year.

Since the average wind capacity factor over a year is around 33%, you need to build three times as much wind capacity, and somehow connect it all, if the goal is to replace nuclear and fossil generation, which theoretically could run at 100% capacity factors if grid operators chose to run them around the clock with no downtime for maintenance. But that’s just an average: there will be days when 300% of what is needed is generated and the excess must be stored or curtailed, and other days when less than 33% is generated. The triple overbuild is for reliability, to compensate for transmission losses, and to make it more likely that half of electric power generation is available when needed. The record year for wind construction in the U.S. was 2012, equivalent to 4,819 2-MW turbines. If we tilted toward windmills and built that many a year starting in 2016, it would take 220 years to build the 1,095,000 needed to generate half of our electricity. Since their lifespan is 20 years, even more time and windmills would actually be needed (Davidsson et al. 2014).
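
A rough sketch of that build-out arithmetic, using the text’s own inputs (a ~32% capacity factor, the 3x overbuild, and the 2012 record build rate). Rounding the capacity factor is why the result lands near, not exactly on, the 220-year figure:

    us_wind_mwh = 181_791_000        # wind's 4.4% share of US generation (above)
    capacity_factor = 0.32           # roughly the ~33% average cited
    mwh_per_turbine = 2 * 8760 * capacity_factor   # annual MWh from one 2-MW turbine

    existing = us_wind_mwh / mwh_per_turbine       # ~32,400 turbines today
    for_half_of_us = existing * (50 / 4.4)         # ~368,000 to supply half of US power
    with_overbuild = for_half_of_us * 3            # ~1.1 million with 3x overbuild

    record_rate = 4_819              # 2-MW-equivalents built in the record year 2012
    print(f"{with_overbuild:,.0f} turbines / {record_rate:,} per year "
          f"= {with_overbuild / record_rate:.0f} years")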

And these statistics are for the U.S. alone. Multiply by 10 or more to provide wind energy to the whole world.

Not enough rare earth metals and enormous amounts of cement, steel, and other materials required

Rare earth metals are rare. China controls 95% of them, and Trump’s trade wars would be a great reason to withhold them from the U.S. They could become a problem even before oil decline makes all goods and services scarcer.

Wind turbines depend on neodymium and dysprosium. Estimates of the exact amount of rare earth minerals in wind turbines vary, but in any case the numbers are staggering. According to the Bulletin of the Atomic Scientists, a 2 megawatt (MW) wind turbine contains about 800 pounds of neodymium and 130 pounds of dysprosium. An MIT study estimates that a 2 MW wind turbine contains about 752 pounds of rare earth minerals. Neodymium prices quadrupled this year, and that’s with wind still making up less than 3% of global electricity generation (Cembalest).

Energy decline will make it more and more costly to build turbines as the price of oil becomes unaffordable. The word “wind” belies the fact that these machines are not light and airy, nor easily popped up. Each 2-MW wind turbine weighs 3,375,000 pounds (Guezuraga 2012; USGS 2011), equal to 102 18-wheeler trucks. The wind may be free, but wind turbines surely are not. Check out these videos showing the enormous amount of material and fossil energy required to build just one windmill, and the pictures in this post, which capture the 900 short tons of materials used per wind turbine. Now imagine 1,641,999 more of them for the United States alone to provide half our power.

The limits to growth due to lack of material resources, especially rare earth metals for off-shore turbines (to reduce maintenance of gearboxes) would be reached long before then.

Not enough dispatchable power to balance wind intermittency and unreliability 

The DOE estimates there are 18,000 square miles of good wind sites in the USA, which could produce 20% of America’s electricity in total. This would require over 140,000 1.5-MW towers costing at least $300 billion, plus innumerable natural gas peaking plants to balance the load when the wind isn’t blowing. Natural gas production is likely to peak as soon as 2020, and we don’t have the Liquefied Natural Gas (LNG) facilities to import natural gas (NG) from other countries. NG is finite, and LNG even more so, since it takes so much energy to chill it and keep it chilled at -260 F (-160 C), and an LNG ship burns 20% of its LNG cargo to deliver it (and some escapes along the way as well).

There isn’t enough dispatchable renewable power from batteries, pumped hydro storage, biomass, or Compressed Air Energy storage to balance, provide peak power and store power for even one day of U.S. electricity generation (11.12 TWh).

Wind blows seasonally, so for much of the year there wouldn’t be enough wind

See my post on seasonal wind here.

As well-known energy analyst Vaclav Smil (2010) puts it:

“The Wind Energy Association’s website deals with “the intermittency myth” by claiming that “there is little overall impact if the wind stops blowing somewhere—it is always blowing somewhere else”. True, but that “somewhere else” may be hundreds or thousands of miles away with no high-voltage transmission lines in between. A new worldwide system where wind would be the single largest source of electricity would require such vast intra- and intercontinental extensions of HV transmission lines to create sufficiently dense and powerful interconnections to deal with wind’s intermittency that both its cost and its land claims would be forbidding.

The North American continent has a relatively high frequency of prolonged calms, especially in the Southeast, which is basically a wind desert year round. The Southeast would have to import large blocks of electricity from the Great Midwest—but this arrangement would require a number of additional long-distance high-voltage lines.”

When too much wind is blowing for the grid to handle, it has to be curtailed and/or drives electricity prices to zero, driving natural gas, coal, and nuclear power plants out of business

According to the MIT Technology Review (Martin 2016):

“In places with abundant wind and solar resources, like Texas and California, the price of electricity is dipping more and more frequently into negative territory. In other words, utilities that operate big fossil-fuel or nuclear plants, which are very costly to switch off and ramp up again, are running into problems when wind and solar farms are generating at their peaks. With too much energy supply to the grid, spot prices for power turn negative and utilities are forced to pay grid operators to take power off their hands.

That’s happened on about a dozen days over the past year in sunny Southern California, according to data from Bloomberg, and it’s liable to happen more often in the future. “In Texas, power at one major hub traded below zero for almost 50 hours in November and again in March,” according to the state’s grid operator. In Germany, negative energy prices have become commonplace, dramatically slashing utility revenues despite renewable energy subsidies that bolster electricity prices much more than in the United States.

The first solution to below-zero prices is to build more transmission to ship the power to places where demand is high. Germany now makes close to 2 billion euros a year off energy exports to neighboring countries, according to Berlin’s Fraunhofer Institute. But building out new long-distance, high-voltage transmission lines is expensive: Texas has spent $7 billion on transmission lines to ship power from the windy flatlands of west Texas to Dallas and Houston.”

The best wind areas will never be developed

Most of the best wind in the U.S. will never be developed — it is too far from cities, too far from existing transmission lines to harvest, or offshore the west coast where the ocean is too deep to build windmills.

Wind resources and densely populated hubs in the US. It appears that only Minneapolis is near good (but not excellent, outstanding, or superb) wind. Source: NREL 2012. Download from the dynamic maps, GIS data, & analysis tools webpage (http://www.nrel.gov/gis/wind.html)

Many sites with the nation’s best wind power resources have minimal or no access to electrical transmission facilities.

The best wind is far from the electric grid, and remote wind farms often need millions, or even billions, of dollars in transmission lines. One overall $8 billion project will send power from a 2,100 MW, $4 billion Wyoming wind farm to California: $1.5 billion for a new CAES facility in Utah — the only salt cavern in the West big enough to do this — and $2.6 billion to run 525 miles of transmission lines from Wyoming to Utah to California (DATC, Gruver). CAES is fossil fuel dependent — it’s basically a gas turbine that needs 40-60% less natural gas. The storage facility could yield 1,200 MW of electricity, enough to power 1.2 million California homes. So whatever happened to the other 900 MW generated by the wind farm? Add on operation and maintenance costs, the short longevity of a wind farm — 20 years at best — and the fossil fuel energy to fabricate all this steel, cement, and aluminum, power the CAES, etc., and you have to wonder how sustainable “wind” power really is when it’s so fossil fuel dependent.

Just as oil doesn’t do much useful work unless burned within a combustion engine, wind needs a vast, interconnected grid or immense energy storage technologies (batteries, natural gas combustion turbines, etc.). The larger the grid, the more wind that can be added to it. But we don’t have that infrastructure — indeed, what we do have now is falling apart due to deregulation of utilities, with no monetary rewards for any player to maintain or upgrade the grid.

Most of the really good, strong wind areas are so far from cities that they’re useless: the energy to build a grid extending to these regions would exceed the energy the wind would provide.

Sure, oil and natural gas require pipelines too, but they’re already in place, built back when the EROEI of oil was 100:1.

We now turn to the matter of adequate interconnections, which in theory looks fairly promising. A study by the National Renewable Energy Laboratory found that the United States has 175 GW of potential wind capacity located within 5 miles of existing lines carrying up to 230 kV, 284 GW within 10 miles of such lines, and 401 GW within 20 miles of such lines. But what matters more than distance to the nearest transmission line is that line’s capacity, and in this respect it is obvious that the situation in the United States is much inferior to that in Europe.

Europe has strong and essentially continent-wide north–south as well as east–west connections, while the United States does not have a comparably capable national network: high-voltage connections from the heart of the continent, where the wind potential is highest, to either coast are minimal or nonexistent. Consequently, the Dakotas could not become a major supplier to California or the Northeast without massive infrastructural additions. Jacobson and Masters argue that with an average cost of $310,000/km (an unrealistically low mean; see the next section), the construction of 10,000 km of new HV lines would cost only $3.1 billion, or less than 1 percent of the cost of 225,000 new turbines, and that HV direct current lines would be even cheaper. As with any entirely conceptual megaproject, these estimates are highly questionable; moreover, such an expansion is not very likely, given that the existing grid (aging, overloaded, and vulnerable) is overdue for extensive, and very expensive, upgrading, and that securing rights of way may be a greater challenge than arranging the needed financing (Smil 2010).

The Grid Can’t Handle Wind Power without natural gas, which is finite

According to E.ON Netz, one of the four grid managers in Germany, for every 10 MW of wind power added to the system, at least 8 MW of backup power must also be dedicated. So you’re not saving on fossil fuels, and often have to ADD fossil fuel plants to make up for the wind power when the wind isn’t blowing! In other words, wind needs almost 100% backup of its maximum output.

Denmark is often pointed to as a country that scaled wind up to provide 20% of its power. Yet because wind is so intermittent, no conventional power plants have been shut down, because they need to step in when the wind isn’t blowing (enough). The quick ramping up and down of these power plants actually increases greenhouse gas emissions. And when the wind does blow enough, the power is surplus and most is sold to other countries at an extremely cheap price; often the Danes must then import electricity back, paying the highest electricity prices in Europe. The actual capacity factor is 20%, not the 30% the BWEA and AWEA claim is possible (Rosenbloom).

Power struggle: Green energy versus a grid that’s not ready. Minders of a fragile national power grid say the rush to renewable energy might actually make it harder to keep the lights on. Evan Halper, Dec 2, 2013. Los Angeles Times.

The grid is built on an antiquated tangle of market rules, operational formulas and business models.  Planners are struggling to plot where and when to deploy solar panels, wind turbines and hydrogen fuel cells without knowing whether regulators will approve the transmission lines to support them.

Energy officials worry a lot these days about the stability of the massive patchwork of wires, substations and algorithms that keeps electricity flowing. They rattle off several scenarios that could lead to a collapse of the power grid — a well-executed cyberattack, a freak storm, sabotage.

But as states race to bring more wind, solar and geothermal power online, those and other forms of alternative energy have become a new source of anxiety. The problem is that renewable energy adds unprecedented levels of stress to a grid designed for the previous century.

Green energy is the least predictable kind. Nobody can say for certain when the wind will blow or the sun will shine. A field of solar panels might be cranking out huge amounts of energy one minute and a tiny amount the next if a thick cloud arrives. In many cases, renewable resources exist where transmission lines don’t.

“The grid was not built for renewables,” said Trieu Mai, senior analyst at the National Renewable Energy Laboratory.

The first chart below is the “Mona Lisa” of wind unreliability, measured at one of California’s largest wind farms. The second is from the California Independent System Operator, showing how wind power tends to be low when power demand is high (and vice versa). Wind should play an important role, but unless there is a high-voltage, high-capacity, high-density grid to accompany it (as in Northern Europe), or electricity storage, the variability of wind means that co-located natural gas peaking plants are needed as well. The cost of such natural gas plants is rarely factored into the EROI or LCOE costs of wind (Cembalest).

Also: German grid aching under solar power

The grid blacks out when the supply of power varies too much. Eventually too much wind penetration will crash the grid.

To keep the grid from crashing, supply has to be kept within a narrow range. To do this, about 10% of the electricity on the grid is never delivered to a customer; it’s there to balance the flow so that surges don’t cause blackouts that cut power to millions of people.

Engineers carefully calibrate how much juice to feed into the system. The balancing requires painstaking precision. A momentary overload can crash the system.

Wind and solar generate just a tiny amount of energy now, but at some point their unpredictability, surges to nothingness, and intermittency will be too large a percent of the grid to manage, and crash the grid.

Windmills wouldn’t be built without huge subsidies and tax breaks

Wind speed matters.  Wind power increases with the cube of the wind speed. Doubling the wind speed gives eight times more wind power. Therefore, the selection of a high-wind-speed location is very important. For example, the difference between wind blowing at 10 mph and 12.6 mph is 100%  (IEC).
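
Since wind power scales with the cube of wind speed, both numbers in that paragraph fall out of one line of arithmetic. A minimal sketch:

    def relative_power(v1: float, v2: float) -> float:
        """Ratio of wind power at speed v2 to wind power at v1 (P ~ v**3)."""
        return (v2 / v1) ** 3

    print(relative_power(10, 20))    # 8.0  -> doubling wind speed gives 8x the power
    print(relative_power(10, 12.6))  # ~2.0 -> 12.6 mph carries 100% more power than 10 mph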

Clearly wind farms built where wind speeds are class 4 or higher will be more profitable. California, Oregon, and Washington have already built out their best class 4+ resources. Building transmission lines to remote high-wind areas, a prerequisite for new wind farms there, could take 10 years, or never happen at all if there were another financial crash or if declining fossil fuels were allocated mainly to agriculture and emergency services (DOE).

As it is, a great deal of wind farms wouldn’t be built without subsidies.  Warren Buffet has said that he only invests in wind energy because “we get a tax credit if we build a lot of wind farms. That’s the only reason to build them. They don’t make sense without the tax credit” (Pfotenhauer).

Todd Kiefer: “Just crunched an EIA report to Congress on energy subsidies (http://www.eia.gov/analysis/requests/subsidy/pdf/subsidy.pdf). In 2010 wind was subsidized at 2.16 cents/kWh and solar at 3.13 cents/kWh. In 2013 (latest data available), wind was subsidized at 1.31 cents/kWh and solar at 6.36 cents/kWh. Makes it easier to see why there are solar PPAs out there for 4 cents/kWh.

Full table of subsidies normalized to units of energy delivered is below. M$ is millions of dollars. Quad is quadrillion Btu. BOE is barrels of oil equivalent. Subsidies do not take into account offsetting federal revenues such as fees, permits, leases, excise taxes, corporate income taxes, etc. Oil and gas generates a 2,000% return on these subsidies in federal corporate income and excise taxes alone (> $9/barrel). Then there are the taxes from the 185,000 people directly employed in the oil and gas industry. I haven’t researched coal in as much detail, but I’m sure the government gets a positive return. Non-hydro renewables, on the other hand, are surely net negative.”

Energy resource subsidies from http://www.eia.gov/analysis/requests/subsidy/pdf/subsidy.pdf

Tremendous environmental damage from mining material for windmills

Mining 1 ton of rare earth minerals produces about 1 ton of radioactive waste, according to the Institute for the Analysis of Global Security. In 2012, the U.S. added a record 13,131 MW of wind generating capacity. That means that between 4.9 million pounds (using MIT’s estimate) and 6.1 million pounds (using the Bulletin of Atomic Science’s estimate) of rare earths were used in wind turbines installed in 2012. It also means that between 4.9 million and 6.1 million pounds of radioactive waste were created to make these wind turbines — more than America’s nuclear industry, which produces between 4.4 million and 5 million pounds of spent nuclear fuel each year.
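
Those tonnage figures follow directly from the two per-turbine estimates cited above. A quick sketch:

    installed_2012_mw = 13_131             # record US wind additions, 2012
    turbines = installed_2012_mw / 2       # ~6,566 2-MW-equivalent turbines

    mit_lb = turbines * 752                # MIT estimate: ~4.9 million lb of rare earths
    bulletin_lb = turbines * (800 + 130)   # Bulletin estimate: ~6.1 million lb

    # ~1 ton of radioactive waste per ton of rare earths mined (IAGS),
    # so the waste figure tracks the rare-earth figure one-for-one.
    print(f"{mit_lb / 1e6:.1f} to {bulletin_lb / 1e6:.1f} million lb")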

Yet nuclear energy comprised about one-fifth of America’s electrical generation in 2012, while wind accounted for just 3.5 percent of all electricity generated in the United States.

Not only do rare earths create radioactive waste residue, but according to the Chinese Society for Rare Earths, “one ton of calcined rare earth ore generates 9,600 to 12,000 cubic meters (339,021 to 423,776 cubic feet) of waste gas containing dust concentrate, hydrofluoric acid, sulfur dioxide, and sulfuric acid, [and] approximately 75 cubic meters (2,649 cubic feet) of acidic wastewater.”

The environmental impact of mining the rare metals required for windmills makes their use questionable.  Mongolia has large reserves of rare earth metals, especially neodymium, the element needed to make the magnets in wind turbines.  Its extraction has led to a 5-mile wide poisonous tailings lake in northern China.  Nearby farmland for miles is now unproductive, and one of China’s key waterways is at risk. “This vast, hissing cauldron of chemicals is the dumping ground for seven million tons a year of mined rare earth after it has been doused in acid and chemicals and processed through red-hot furnaces to extract its components.  Rusting pipelines meander for miles from factories processing rare earths in Baotou out to the man-made lake where, mixed with water, the foul-smelling radioactive waste from this industrial process is pumped day after day” (Parry).

Not enough time to scale wind up

Like solar, wind accounts for only a tiny fraction of renewable energy consumption in the United States, about a tenth of one percent, and will be hard to scale up in the short time left (EIA. June 2006. Renewable Energy Annual).

The best wind is too high or remote to capture

Only the winds moving in the lowest few hundred meters above the surface can be intercepted; all the wind above a windmill’s reach is unavailable. And according to Betz’s law, no matter how well built a wind turbine is, at most 59% of the wind’s energy can ever be captured.
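
That 59% ceiling is Betz’s law. Here is a minimal numerical sketch, assuming the standard actuator-disc formula Cp(b) = (1/2)(1 - b^2)(1 + b), where b is the ratio of downstream to upstream wind speed:

    def cp(b: float) -> float:
        """Power coefficient for downstream/upstream speed ratio b (actuator-disc theory)."""
        return 0.5 * (1 - b**2) * (1 + b)

    best_b = max((i / 1000 for i in range(1001)), key=cp)
    print(f"Cp peaks at {cp(best_b):.4f} when b = {best_b:.3f}")   # 0.5926 = 16/27 at b = 1/3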

Windmill “kites” were proposed decades ago but are unlikely to work out: winds can be too strong; kites are difficult to take down before a hurricane, tornado, or large storm; they could fall onto developed areas; airplane traffic would need to be rerouted; and the line from the kite to the electricity generator is likely to be too heavy for the kite, or to short-circuit if too thin.

There are several research groups looking at generating electricity using giant kites up in the jet stream. But it won’t be easy.  Jet streams move around and change their location, airplanes need to stay well away, and lightning and thunderstorms might require them to be brought down.

The strongest wind is 6 miles above us, where winds are typically 60 miles per hour.  Some scientists think there’s enough wind to generate 100 times current global energy demand.

But Axel Kleidon and Lee Miller of the Max Planck Institute for Biogeochemistry believe that’s a massive overestimate of the amount of energy that could be obtained. If they’re right that jet stream wind results from a lack of friction, then at most 7.5 TW of power could be extracted, and that would have a major effect on climate (Earth System Dynamics, vol 2, p 201).

Globally we use energy at an average rate of about 12 terawatts. There are 85 terawatts of wind, but most of it is over the deep ocean or miles overhead, where we are unlikely ever to capture it.

And on top of all these problems, windmills are built to capture wind only within a certain range of speeds, so when the wind is too light or too strong, no power is generated.

Too many turbines could affect Earth’s climate negatively

Scientists do not know the maximum share of global atmospheric circulation that could be converted into electricity without changing the earth’s climate. Models of atmospheric circulation indicate that the very large-scale extraction of wind (requiring installed capacities on a TW scale needed to supply at least a quarter of today’s demand) reduces wind speeds and consequently lowers the average power density of wind-driven generation to around 1 W/m2 on scales larger than about 100 km (Smil).

The maximum extractable energy from high jet stream wind is 200 times less than imagined initially, and trying to extract them would profoundly impact the entire climate system of the planet.  If we tried to extract the maximum possible 7.5 TW from the jet stream, “the atmosphere would generate 40 times less wind energy than what we would gain from the wind turbines, resulting in drastic changes in temperature and weather” according to Lee Miller, the author of the study (Miller).

Scientists modeled the impact of a hypothetical large-scale wind farm in the Great Plains. Their conclusion, in the Journal of Geophysical Research, is that thousands of turbines concentrated in one area can affect local weather by creating warmer, drier conditions through the atmospheric mixing in the blades’ wake. The warming and drying that occur when the upper air mass reaches the surface is a significant change, Dr. Baidya Roy said, and is similar to the kinds of local atmospheric changes that occur with large-scale deforestation (Nov 2, 2004. Catch the Wind, Change the Weather. New York Times).

“We shouldn’t be surprised that extracting wind energy on a global scale is going to have a noticeable effect. … There is really no such thing as a free lunch,” said David Keith, a professor of energy and the environment at the University of Calgary and lead author of a report in the Proceedings of the National Academy of Sciences.

Specifically, if wind generation were expanded to the point where it produced 10% of today’s energy, the models predict cooling in the Arctic and warming across the southern parts of North America.

The exact mechanism for this is unclear, but the scientists believe it may have to do with the disruption of the flow of heat from the equator to the poles.

Wide-scale US wind power could cause significant global warming. A Harvard study raises questions about just how much wind should be part of a climate solution (Temple 2018)

Researchers found that a high amount of wind power could mean more climate warming regionally and in the decades ahead, raising serious questions about how much nations should look to wind power to clean up electricity systems.  The study, published in the journal Joule, found that if wind power supplied all US electricity demands, it would warm the surface of the continental United States by 0.24 °C. That could significantly exceed the reduction in US warming achieved by decarbonizing the nation’s electricity sector this century, which would be around 0.1 °C.

Our analysis suggests that it may make sense to push a bit harder on developing solar power and a bit less hard on wind, since the warming effect from wind was 10 times greater than the climate effect from solar farms, which can also have a tiny warming effect.

Less wind can be captured than thought (see Max Planck Society)

Large wind farms with a high density of installed capacity slow down the wind and generate less electricity than previously thought. Less energy can be withdrawn from wind than was assumed up to now. For example, a previous prediction from a 2013 study by the German Federal Environmental Agency concluded that almost seven watts of electrical power per square metre could be generated from wind energy. However, an international research team led by scientists from the Max Planck Institute for Biogeochemistry in Jena has now shown that the amount of energy actually possible from wind power is considerably lower. These researchers calculated that a maximum of 1.1 watts of electricity could be generated per square metre over a large (100,000 km2) wind farm in the windy state of Kansas, United States (Miller 2014).

Wind is only strong enough to justify windmills in a few regions

The wind needs to blow at force level 4 (13-18 mph) or more for as much of the year as possible to make a site economically viable. This means that a great deal of land is not practical for the purpose. The land that is most suitable already has windmills, or is too far from the grid to be connected.

A Class 3 windmill farm needs double the number of generators to produce the same amount of energy as windmills in a class 6 field (Prieto).

The 1997 US EIA/DOE study (2002) came to the remarkable conclusion that “…many non-technical wind cost adjustment factors … result in economically viable wind power sites on only 1% of the area which is otherwise technically available…”

The electric grid needs to be much larger than it is now

Without a vastly expanded grid to balance the unpredictability of wind over a large area and across seasons, wind can’t provide a significant portion of electrical generation. But expanding the grid to the proper size would cost trillions of dollars. NIMBY opposition has stopped this from happening and will continue to, and states don’t like transmission lines crossing their borders. So it is very unlikely we will ever have a national grid, which would also make us more vulnerable to terrorism and accidents, and could actually be less, not more, stable than the regional grids we have today.

Much of the land in the USA (the areas where there’s lots of wind) is quite far from population centers. And when you hook windmills to the grid, you lose quite a bit of energy over transmission lines.

It also takes a lot of energy to build and maintain the electric grid infrastructure itself. Remote wind sites often result in construction of additional transmission lines, estimated to cost as much as $300,000-$1 million per mile. (Energy Choices in a Competitive Era, Center for Energy and Economic Development Study, 1995 Study, p. 14). The economics of transmission are poor because while the line must be sized at peak output, wind’s low capacity factor ensures significant under-utilization.

As you can see in the chart below, a large balancing area (and sub-hourly energy markets) are the most important factors in integrating wind into the power grid:

System flexibility increases as the color of the numbered boxes progresses from red to green, and as the number increases from 1 to 10. The items at the top of the table are those attributes that help efficiently integrate wind power into power systems operation. Although the table uses a simplistic 1–10 scoring system, it has proven useful as a high-level, qualitative tool. The red, yellow, and green result cells show the ease (green) or difficulty (red) that a hypothetical system would likely have integrating large amounts of wind power. RTO is regional transmission organization; ISO is independent system operator. Source: Milligan, M.; et al. Oct 22, 2013. Wind Integration Cost and Cost-Causation. 12th Annual International Workshop on Large-Scale Integration of Wind Power into Power Systems as Well as on Transmission Networks for Offshore Wind Power Plants. NREL/CP-5D00-60411. National Renewable Energy Laboratory

Wind blows the strongest when customer demand is the weakest

In Denmark, where some of the world’s largest wind farms exist, wind blows the hardest when consumer demand is the lowest, so Denmark ends up selling its extra electricity to other countries for pennies, and then when demand is up, buys electricity back at much higher prices.  Denmark’s citizens pay some of the highest electricity rates on earth (Castelvecchi).

In Texas and California, wind and solar are too erratic to provide more than 20% of a region’s total energy capacity, because it’s too difficult to balance supply and demand beyond that amount.

Wind varies greatly depending on the weather, and often hardly blows at all during some seasons. In California, we need electricity the most in summer, when peak loads are reached, but that’s the season the least wind blows. On our hottest days, wind capacity factors drop as low as 0.02 at peak electric demand. At the time the system most needs reliable baseload capacity, wind capacity is unavailable.

No utility scale energy storage in sight

We don’t have EROEI-positive batteries, compressed air, or enough pumped water dams to store wind energy and concentrate it enough to do useful work and generate power when the wind isn’t blowing.  There are no power to gas, hydrogen, or any other fantasy storage methods even close to commercial development.

Nor are there ever going to be storage methods that can return the same amount of energy put into them, so having to store energy reduces the amount of energy returned.

Compressed air storage is inefficient because “air heats up when it is compressed and gets cold when it is allowed to expand.  That means some of the energy that goes into compression is lost as waste heat.  And if the air is simply let out, it can get so cold that it freezes everything it touches, including industrial-strength turbines.  PowerSouth and E.ON burn natural gas to create a hot gas stream that warms the cold air as it expands into the turbines, reducing overall energy efficiency and releasing carbon dioxide, which undermines some of the benefits of wind power” (Castelvecchi).

Wind Power surges harm industrial customers

Japan’s biggest wind power supplier may scrap a plan to build turbines on the northern island of Hokkaido after the regional utility cut proposed electricity purchases, blaming unreliable supply. Power surges can be a problem for industrial customers, said Hirotaka Hayashi, a spokesman at Hokkaido Electric. Utilities often need to cut back power generation at other plants to lessen the effect of excess power from wind energy.

“Continental European countries such as Germany and Denmark can transfer excess power from windmills to other countries,” said Arakawa. “The electricity networks of Japan’s 10 utilities aren’t connected like those in Europe. That’s the reason why it’s difficult to install windmills in Japan.”

To ensure steady supply, Tohoku Electric Power Co., Japan’s fourth-biggest generator, in March started requiring owners of new windmills to store energy in batteries before distribution rather than send the electricity direct to the utility, said spokesman Satoshi Arakawa. That requirement has increased wind project installation costs to 300,000 yen ($2,560) per kilowatt, from 200,000 yen, according to Toshiro Ito, vice president of EcoPower Co., Japan’s third-biggest wind power supplier (Takemoto).

Energy returned on Energy Invested is negative

If the energy costs of intermittency, back-up conventional plant, and grid connection were added to the “cost” of windfarms, the EROEI would be far lower than current EROEI studies show.

Wind farms require vast amounts of steel and concrete, which in terms of mining, fabrication, and transportation to the site represent a huge amount of fossil fuel energy. The Zond 40-45 megawatt wind farm is composed of 150 wind turbines weighing 35 tons each — over 10 million pounds.

The 5,700 turbines installed in the United States in 2009 used 36,000 miles of steel rebar and 1.7 million cubic yards of concrete (enough to pave a four-foot-wide, 7,630-mile-long sidewalk). The generator of a 2-megawatt wind turbine contains 800 pounds of neodymium and 130 pounds of dysprosium — rare earth metals found in low-grade, hard-to-find deposits that are very expensive to extract and refine (American Wind Energy Association).

Materials like carbon fiber that would make blades more efficient cost several times more, and take a great deal more fossil fuel energy to fabricate, than fiberglass blades.

Everything from mining the metals, to fabrication, delivery, and operation, to maintenance is deeply dependent on fossil fuel energy and fossil-fuel-driven machinery. Wind energy at best could increase the amount of energy generated while fossil fuels last, but it is too dependent on them to outlast the oil age.

After a few years, maintenance costs skyrocket. The larger the windmill, the more wind can be captured, but also the more complex the maintenance required.

Windmills take up too much space

A wind farm takes up 30 to 200 times the space of a natural gas electrical generation plant (Paul Gipe, Wind Energy Comes of Age, p. 396). A 50 megawatt wind farm can take up anywhere from two to twenty-five square miles (Proceedings of National Avian-Wind Power Planning Meeting, p. 11).

Vast amounts of land are required for wind turbines, which have to be spaced far apart since on the other side of a windmill that has just “captured” wind, there’s no wind left.  For example, if the best possible wind strip along the coast between San Francisco and LA were covered with the maximum possible number of windmills (an area about 300 miles long by one mile deep) you’d get enough wind, when it was blowing, to replace only one of the dozens of power plants in California (Hayden).

Development of a wind power plant results in a variety of temporary and permanent disturbances, including land occupied by wind turbine pads, access roads, substations, service buildings, and other infrastructure which physically occupy land area, or create impermeable surfaces. Additional direct impacts are associated with development in forested areas, where trees must be cleared around each turbine. Land modified for wind farms represents a potentially significant degradation in ecosystem quality (Arnett).

Supplying half of today’s electricity—that is, about 9 PWh—by wind would thus require about 4.1 TW of wind turbines; with 2 W/m2, they would claim about 2 million km2 (772,204 square miles), or an area roughly four times the size of France or larger than Mexico. With average power density of just 1 W/m2, the required area would rise to more than 4 million km2 (1,544,408 square miles), roughly an equivalent of half of Brazil or the combined area of Sudan (Africa’s largest country) and Iran. These calculations indicate that deriving substantial shares of the world’s electricity from wind would have large-scale spatial impacts. Obviously, only a small portion of those areas would be occupied by turbine towers and transforming stations, so that crop planting and animal grazing could take place close to a tower’s foundations. But even when assuming a large average turbine size of 2–3 MW, the access roads (which are required to carry heavy loads, as the total weight of foundations, tower, and turbine is more than 300 tons per unit) needed to build roughly 2 million turbines and new transmission lines to conduct their electricity would make a vastly larger land claim than the footprint of the towers; and a considerable energy demand would be created by keeping these roads, often in steep terrain, protected against erosion and open during inclement weather for servicing access (Smil).

The U.S. energy infrastructure, including the right of way for all high-voltage transmission lines, now occupies up to about 25,000 km2 (9,650 square miles), or 0.25 percent of the country’s area, roughly equal to the size of Vermont (Smil 2008). And the country’s entire impervious surface area of paved and built-up surface reached about 113,000 km2 (43,630 square miles) by the year 2000 (Elvidge). In contrast, relying on large wind turbines to supply all U.S. electricity demand (about 4 PWh) would require installing about 1.8 TW of new generating capacity, which (even when assuming an average of 2 W/m2) would require about 900,000 km2 of land (347,500 square miles)—nearly a tenth of the country’s land, or roughly the area of Texas and Kansas combined (Smil).
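
Smil’s U.S. land claim is straightforward unit conversion. A sketch using his inputs (1.8 TW of capacity at 2 W per square meter; the U.S. land area is an added round-number assumption):

    installed_tw = 1.8            # wind capacity needed for ~4 PWh/yr (Smil)
    watts_per_m2 = 2.0            # Smil's assumed average power density

    area_m2 = installed_tw * 1e12 / watts_per_m2
    area_km2 = area_m2 / 1e6                    # ~900,000 km2
    area_sq_mi = area_km2 / 2.59                # ~347,500 square miles

    us_land_km2 = 9.1e6           # approximate US land area (assumption)
    print(f"{area_km2:,.0f} km2 = {area_sq_mi:,.0f} sq mi "
          f"= {area_km2 / us_land_km2:.0%} of US land")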

In practice, the area per windmill varies quite a bit, averaging about 50 acres per megawatt of capacity because they interfere with each other and need to be widely spaced apart.

aweo.org estimates:

Tom Gray of the American Wind Energy Association has written, “My rule of thumb is 60 acres per megawatt for wind farms on land.”

That may still not be enough for maximum efficiency. More recent research at Johns Hopkins University by Charles Meneveau suggests that large turbines in an array need to be spaced 15 rotor diameters apart, increasing the above examples to 185-250 acres required per installed megawatt.

Note that larger turbines are not substantially more efficient than small ones, because they require proportionally more space.

Remember that capacity is different from actual output. Typical average output is only 25% of capacity, so the area required for a megawatt of actual output is four times the area listed here for a megawatt of capacity. And because three-fifths of the time wind turbines produce power at a rate far below average, even more (2.5×, perhaps, for a total of 10×) — dispersed across a wide geographic area — would be needed for any hope of a steady supply.

Wind power is a good example of how the target of “industrial scale” energy production wastes land, and creates a public backlash against renewable energy in the process. The larger the wind turbine, the further apart turbines must be spaced within wind farms, and consequently the lower the energy yield per hectare of land. Working theoretically, a large 2.3 MW wind turbine (such as a Nordex N90) spaced five hub heights apart (an average separation distance) from other turbines has a capacity of 108 kilowatts per hectare (kW/ha). However, three 850 kW turbines (such as the Vestas V52) would occupy the same area of land, and even though they are 40% shorter they produce more power—111 kW/ha (note, this figure includes a weighting that reflects the V52’s lower height). The reason wind farm developers are building ever larger turbines is quite simple: whilst capital costs can be discounted over future years, maintenance costs are always at present value. Consequently the development of fewer, larger turbines increases the power output whilst reducing maintenance costs—increasing the return on the capital invested.

Taking the 111 kW/ha figure as a representative energy density for wind, to match the UK’s major electricity generators’ 73,308 megawatts (MW) of net installed capacity [DUKES, 2005g], and assuming that the turbines generated for 30% of the time and that an additional 40% of capacity was required to charge batteries/fuel cells to provide a continuous power output, just over 3,000,000 hectares of turbines would be required—equivalent to around 13% of the UK’s land area. Theoretically, then, we could generate our power requirements from wind turbines. But, as noted above, electricity is less than one-fifth of the UK’s total energy consumption, so this would answer only a small part of the UK’s energy problem—for a total solution we’d have to densely cover half the UK’s land area in wind turbines (parliament.uk).
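
That UK estimate can be reproduced from the stated inputs (111 kW/ha, 73,308 MW to match, generation 30% of the time, and a 40% overbuild for storage charging; the UK land area is an added round-number assumption):

    uk_capacity_kw = 73_308_000     # major generators' net installed capacity (DUKES 2005)
    generating_fraction = 0.30      # turbines assumed to generate 30% of the time
    storage_overbuild = 1.40        # +40% capacity to charge batteries/fuel cells
    density_kw_per_ha = 111         # representative wind energy density from the text

    hectares = uk_capacity_kw / generating_fraction * storage_overbuild / density_kw_per_ha
    uk_area_ha = 24_400_000         # UK land area, ~244,000 km2 (assumption)
    print(f"{hectares / 1e6:.1f} million ha = {hectares / uk_area_ha:.0%} of the UK")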

Even in windy regions (power class 4, 7–7.5 m/s at 50 meters above ground) such as the Dakotas, northern Texas, western Oklahoma, and coastal Oregon, where wind strikes the rotating blades with power density averaging 450 W/m2, the necessary spacing of wind turbines (at least five, and as much as ten, rotor diameters apart, depending on the location, to reduce excessive wake interference) creates much lower power densities per unit of land. For example, a large 3 MW Vestas machine with a rotor diameter of 112 meters spaced six diameters apart will have peak power density of 6.6 W/m2, but even if an average load factor were fairly high (at 30%), its annual rate would be reduced to only about 2 W/m2 (Smil).
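
The Vestas example in that excerpt works out as stated. Here is the spacing-to-density arithmetic as a sketch:

    rated_w = 3e6            # 3 MW machine
    rotor_d = 112.0          # rotor diameter in meters
    spacing = 6 * rotor_d    # six rotor diameters between turbines

    land_per_turbine = spacing ** 2            # ~451,584 m2 claimed per machine
    peak_density = rated_w / land_per_turbine  # ~6.6 W/m2 at full output

    load_factor = 0.30
    print(f"peak {peak_density:.1f} W/m2, "
          f"annual average {peak_density * load_factor:.1f} W/m2")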

Wind Turbines break down too often

DOE. 2014. Wind Vision: A New Era for Wind Power in the United States. Department of Energy:

  1. Gearbox Reliability. A 2013 summary of insurance claims revealed that the average total cost of a gearbox failure was $380,000. An analysis of 1000 turbines over a 10-year period reported that 5% of turbines per year required a gearbox replacement [29]. Gearbox reliability remains a challenge for utility-scale wind turbines.
  2. Generator Reliability. A generator failure in 2013 was estimated to cost $310,000, while an estimated 3.5% of turbines required a generator replacement.
  3. Rotor Reliability. Average replacement costs for a blade failure are estimated at $240,000, with 2% of turbines requiring blade replacements annually. With larger blades being used on wind turbines, weight and aeroelastic limitations have put added pressure on blade design and manufacturing, which may be one of the explanations for the uptick in rotor-driven downtime. Blade failure can arise from manufacturing and design flaws, transportation, and operational damage. Manufacturing flaws include fiber misalignment, porosity, and poor bonding.  During transport from the manufacturing plant to the wind plant site, blades can undergo several lifts, which result in localized loads that can cause damage if not properly executed. Operational damage is primarily related to either lightning strikes or erosion of blade leading edges.

Large-scale wind energy slows down winds and reduces turbine efficiencies

2016-11-15. Phys.org. Original paper: Lee M. Miller et al. Wind speed reductions by large-scale wind turbine deployments lower turbine efficiencies and set low generation limits, Proceedings of the National Academy of Sciences (2016).

A new study published by scientists from the Max Planck Institute for Biogeochemistry in Jena, Germany, lowers the expectations of wind energy when used at large scales.

Every turbine removes energy from the winds, so many turbines operating over large scales should reduce the speeds of the atmospheric flow. With many turbines, this effect extends beyond the immediate wake behind each turbine and results in a general reduction of wind speeds. This wind speed reduction is critical, as it lowers the amount of energy that each turbine can extract from the winds.

Dr. Lee Miller, first author of the study, explains: “One should not assume that wind speeds are going to stay the same with a lot of wind turbines in a region. Wind speeds in climate models may not be completely realistic, but climate models can simulate the effect that many wind turbines have on wind speeds while observations cannot capture their effect.” The wind speed reduction would dramatically lower the efficiency by which turbines generate electricity. The authors calculated that when wind energy is used at its maximum potential in a given region, each turbine in the presence of many other turbines generates on average only about 20% of the electricity compared to what an isolated turbine would generate.

On land, they determined that only 3-4% of land areas have the potential to generate more than 1.0 watt of electricity per square meter of land surface, with a more typical potential of about 0.5 watt per square meter or less.
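To see what 0.5 W/m2 implies at national scale, here is an illustrative sketch; the 450 GW average US electric load is an assumed round number for illustration, not a figure from the study:

```python
# What 0.5 W/m^2 implies at national scale. The 450 GW average US
# electric load is an assumed round number, not from the study.
avg_us_load_w = 450e9         # assumption: ~450 GW average US electric demand
density_w_per_m2 = 0.5        # typical large-scale potential per the study

area_km2 = avg_us_load_w / density_w_per_m2 / 1e6
print(f"Land required: {area_km2:,.0f} km^2")  # ~900,000 km^2, larger than Texas
```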

Offshore Wind Farms likely to be destroyed by Hurricanes

The U.S. Department of Energy has estimated that if the United States is to generate 20% of its electricity from wind, over 50 GW will be required from shallow offshore turbines. Hurricanes are a potential risk to these turbines. Turbine tower buckling has been observed in typhoons, but no offshore wind turbines have yet been built in the United States.  In the most vulnerable areas now being actively considered by developers, nearly half the turbines in a farm are likely to be destroyed in a 20-year period (Rose).

Source: Rose, S. 2 June 2011. Quantifying the Hurricane Risk to Offshore Wind Turbines.  Carnegie Mellon University.
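As a rough illustration of what "nearly half the turbines destroyed in a 20-year period" implies, a constant annual destruction probability can be backed out of the binomial survival formula; this is my arithmetic, not Rose's model:

```python
# If nearly half the turbines in a farm are destroyed within 20 years,
# the implied constant annual destruction probability p solves
#   1 - (1 - p)**20 = 0.5
p_annual = 1 - (1 - 0.5) ** (1 / 20)
print(f"Implied annual destruction probability: {p_annual:.1%}")  # ~3.4%
```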

During summer and early fall, global circulation brings frequent hurricanes that can affect the coastal and nearby inland regions extending from Texas to Nova Scotia. These would require repeated shutdown of all wind-generating facilities for a number of consecutive days and would repeatedly expose all turbines and their towers to serious risk of damage and possible prolonged repairs. One can argue that turbines should simply not be sited in these risky regions and that the needed power should come from the continent’s interior, where the machines would not be exposed to hurricanes, though they would remain vulnerable to frequent tornadoes (Smil 2010).

The costs of lightning damage are too high (Smith 2016, Froese 2015, Gromicko 2018)

Wind turbines are lightning magnets, and strikes can cause a great deal of damage: blades explode; generators and control system electronics fry. Climate change is expected to increase lightning by 50%. The taller the windmill, the more energy can be captured, since the strongest winds are at great heights, but lightning danger also increases with turbine height.

  • According to a German study, lightning strikes accounted for 80% of wind turbine insurance claims.
  • During its first full year of operation, 85% of the down time experienced by one southwestern commercial wind farm was lightning-related. Total lightning-related damage exceeded $250,000.
  • The German electric power company Energieerzeugungswerke Helgoland GmbH shut down and dismantled their Helgoland Island wind power plant after being denied insurance against further lightning losses. They had been in operation three years and suffered more than $540,000 (USD) in lightning-related damage.

Potential damage:

  • CONTROL SYSTEM: This includes sensors, actuators, and the motors for steering the equipment into the wind. According to the updated National Fire Protection Association handbook: “While physical blade damage is the most expensive and disruptive damage caused by lightning, by far the most common is damage to the control system”
  • ELECTRONICS: Wind turbines are deceptively complex, housing a transformer station, frequency converter, switchgear elements, and other expensive, sensitive equipment in a relatively small space
  • BLADE DAMAGE: A lightning strike to an unprotected blade will raise its temperature tremendously, perhaps as high as 54,000° F (30,000° C), and result in an explosive expansion of the air within the blade. This expansion can cause delamination, damage to the blade surface, melted glue, and cracking on the leading and trailing edges. Much of the damage may go undetected while significantly shortening the blade’s service life. One study found that wood epoxy blades are more lightning-resistant than GRP/glass epoxy blades. Generators can also be damaged, and batteries can be destroyed, or even detonated, by a lightning strike.

Even nearby lightning strikes weaken the blades and other components, leading to serious turbine damage and downtime.

Wind doesn’t reduce CO2

See “In energy policy, Minnesota “Green” energy fails every test” or the original research paper “Energy Policy in Minnesota: The High Cost of Failure”.

Turbines increase the cost of farming

Building and maintaining a turbine requires heavy equipment that damages the drainage tiles under fields, which affects drainage in surrounding fields. Drainage problems can hurt crop yields and even stop a farmer from being able to plant in the first place. A turbine also makes it more difficult, or sometimes impossible, for crop dusters to fly over nearby fields to spray the pesticides that protect crops (Mensching 2017).

Offshore Windmills battered by waves, wind, ice, corrosion, a hazard to ships and ecosystems

Offshore windmills are battered by waves and wind, and ice is also a huge problem.

They must be much more reliable due to their vastly more challenging accessibility; they rely on subsea power cable networks and substations far from land; and they are coupled to a range of support structures, including floating systems that are highly dependent on water depth (DOE 2014).

Offshore windmills need special new vessels because offshore turbines are much larger than onshore ones, with cranes whose maximum lift heights approach 130 m and lifting capacities run between 600 and 1,200 tons, blade lengths up to 80 meters, and rotors up to 165 meters. They require state-of-the-art composite fabrication facilities and special handling to ship blades to the project site.

In the United States, more than 60% of the offshore wind resource lies over water deeper than 60 meters, but offshore windmills need to be in water 60 meters deep or less. In 2008, all installations were in shallow water less than 30 meters deep.

Fifteen meters or less is ideal economically, and also makes the windmills less susceptible to large waves and wind damage. But many states along the west coast don’t have shallow shelves where windmills can be built. California’s best wind, by far, is offshore, but the water is far too deep for windmills, and the best wind is in the northern part of the state, too far away to be connected to the grid.

Offshore windmills are a hazard to navigation of freighters and other ships.

The states that have by far the best wind resources and shallow depths offshore are North Carolina, Louisiana, and Texas, but they have 5 or more times the occurrence of hurricanes.

As climate change leads to rising sea levels over the next thousand years, windmills will be rendered useless.

Offshore windmills could conflict with other uses:

  1. Ship navigation
  2. Aquaculture
  3. Fisheries and subsistence fishing
  4. Boating, scuba diving, and surfing
  5. Sand and gravel extraction
  6. Oil and gas infrastructure
  7. Potential wave energy devices

Offshore windparks will affect sediment transport, potentially clogging navigation channels, causing erosion, depositing sediment on recreational areas, affecting shoreline vegetation, scouring sediments (a loss of habitat for benthic communities), and damaging existing seabed infrastructure.

Building windmills offshore can lead to chemical contaminants, smothering, suspended sediments, turbidity, substratum loss, scouring, bird strikes, and noise.

There is a potential for offshore wind farms to interfere with telecommunications, FAA radar systems, and marine communications (VHF [very high frequency] radio and radar).

Land use changes.  The offshore windfarm must be connected to the grid onshore, and roads are needed to set up onshore substations and transmission lines, plus industrial sites and ports to construct, operate, and decommission the windmills. Roadways may need to be quite large to transport the enormous components of a windmill (Michel).

Floating offshore turbines, tethered to fixed spots on the ocean floor rather than mounted directly to the seabed, exist only in prototype and concept stages of development. In addition to withstanding the greater corrosive properties of the marine environment, offshore turbines must be capable of withstanding a more complex structural vibration environment. Fleet availability has generally been lower and O&M costs higher for offshore installations. Further complicating offshore operations is the fact that maintenance access is more difficult and costly. In addition, balance-of-station costs in the form of complex foundations and underwater power collection and transmission systems are much greater for offshore wind energy projects (NREL 2014).

Installing offshore windmills requires excavating the seafloor to create a level surface and sinking 250- to 350-ton foundations into the seabed, which are very expensive to build, since they require scour protection from large stones, erosion control mats, and so on.

Wind turbine foundations can affect the flow velocity and direction and increase turbulence. These changes to currents can affect sediment transport, resulting in erosion or piles of sediments on nearby shorelines.  Modified currents also could change the distribution of salinity, nutrients, effluents, river outflows, and thermal stratification, in turn affecting fish and benthic habitats.    Changes to major ocean currents such as the Gulf Stream could affect areas well beyond the continental United States, affecting the climate of North America as well as other continents (Michel).

Wind turbines are far more expensive than they appear to be

Willem Post goes into great detail about the true costs of wind energy, with details from many wind projects around the world, in A More Realistic Cost of Wind Energy. The high cost of wind is hidden by enormous subsidies, beneficial tax rates, accelerated depreciation, and by not having to pay wind’s share of energy storage, transmission lines, the 7% loss of electricity over long transmission lines, and the new power plants needed to back wind energy up when it isn’t producing. Nor does wind pay the additional costs imposed on coal, nuclear, and natural gas plants: increased frequency of start/stop operation, keeping gas and coal plants available in cold standby mode, keeping some gas plants in synchronous (3,600 rpm) standby mode, and operating more hours in part-load-ramping mode (extra Btu/kWh, extra CO2/kWh).

George Taylor argues in The Hidden Costs of Wind Electricity (2012) that wind in reality costs at least twice as much as reported, because reported costs exclude the conventional power plants that back wind up, subsidies, tax depreciation, and so on; the full cost of wind generation is therefore unlikely to match the cost of natural gas, coal, or nuclear generation.

Corrosion costs aren’t added in either. Offshore windmills are subject to a tremendous amount of corrosion from salt water and salt air. Windmills are battered year round by hail storms, strong winds, blizzards, and temperature extremes from below freezing to hundred-degree summer heat. Corrosion increases over time.

The same windmill can be beaten up unevenly, with the wind at the tip of one blade considerably stronger than the wind at the tip of another. This caused Suzlon blades to crack several years ago.

Complexity: A windmill is only as strong as its weakest component, and the more components a windmill has, the more complex the maintenance. Wind turbines are complex machines: each has around 7,000 or more components, according to Tom Maves, deputy director for manufacturing and supply chain at the American Wind Energy Association (Galbraith).

Maintenance costs start to rise after 2 years (it’s almost impossible to find out what these costs are from turbine makers). Vibration and corrosion damage the rotating blades, and the bearings, gear boxes, axles, and blades are subjected to high stresses.

Gearboxes can be the Achilles’ heel, costing up to $500,000 to fix due to the high cost of replacement parts, cranes (which can cost $75,000-$100,000), post installation testing, re-commissioning and lost power production.

If the electric grid were built up enough to balance the wind energy load better, windmills breaking down in remote locations would require a huge amount of energy to keep trees cut back and remote roads built and maintained in order to deliver and service turbine and grid infrastructure.

Large-scale wind farms need to “overcome significant barriers”: costs overall are too high, and windmills in lower wind speed areas need to become more cost effective. Low wind speed areas are 20 times more common than high wind areas, and five times closer to existing electrical distribution systems. Improvement is needed in integrating fluctuating wind power into the electrical grid with minimal impact on cost and reliability. Offshore wind facilities cost more to install, operate, and maintain than onshore windmills (NREL).

Windmills wear out as ice storms, insect strikes, dust, and sand abrade the blades and structure.

Wind turbine companies are already going out of business and fewer turbines are being built in Europe

Investment in wind power is falling worldwide, especially in developing countries like China, which stopped building new turbines last month because most of the energy was being wasted. Wind power capacity is growing slowly because large numbers of people simply cannot get much of their electricity from wind.

Hundreds of wind turbines in the Netherlands are operating at a loss and could soon be demolished, according to an article published Thursday by the Dutch financial newspaper Financieele Dagblad.  Subsidies for generating wind energy aren’t cost effective anymore, according to the paper’s analysis. Most of Europe’s modern wind turbines are struggling to be profitable due to the inefficient subsidy structure.  Financieele Dagblad is extremely worried about the failure of the Dutch wind industry, because the Netherlands is already behind its green energy targets.

Dutch financial issues with wind power aren’t unique to the Netherlands. Globally, the wind power industry is slowing down and will continue to slow, according to a 2015 report by the International Energy Agency. The wind industry is growing at the slowest rate in years due to changes in the structure of subsidies, issues with reliability, and consistently high prices.

TRANSPORTATION LIMITATIONS: Windmills are so huge they’ve reached the limits of land transportation by truck or rail

The best sites with class 4+ wind (good to superb) near transmission and cities are gone.  Wind Turbines to capture class 3 (fair) or class 4+ at 100 meters are too big for roads and rail. The Department of Energy would like to make wind turbines 140 meters or higher to capture the greater windspeeds at that height, but limits to growth are already being hit for 100 meter turbines.

Wind blades over 53 meters (174 feet) are too big for roads. Source: DOE. 2014. Wind vision: a new era for wind power in the U.S.

The U.S. market has expanded to include lower wind speed sites (average wind speeds <7.5 m/s) closer to population centers. This is in part because of technological advancements and policy drivers. In some regions, it is also due to limited access to available transmission lines. As a result, from 1998 to 2013, the average estimated quality of the wind resource at 80 m for newly installed wind projects dropped by approximately 10%. This trend has increased the complexity and cost of transportation logistics because components such as blades and towers have increased in size to capture the resource at lower wind sites. As a result, existing transportation infrastructure is increasingly impacting component designs to balance energy production with transportability.

Useful energy increases with the square of the blade length, and there’s more wind the higher up you go, so ideally you’d build very tall wind towers with huge blades.  But conventional materials can’t handle these high wind conditions, and new, super-strong materials are too expensive.
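These scaling claims can be made concrete with the standard wind power equation, P = ½ρAv³Cp: power grows with the square of the blade length (through swept area) and the cube of wind speed. A minimal sketch; the power coefficient of 0.4 is an assumed typical value:

```python
import math

def wind_power_watts(radius_m: float, wind_speed_ms: float,
                     cp: float = 0.4, air_density: float = 1.225) -> float:
    """Standard wind power equation: P = 0.5 * rho * A * v^3 * Cp.

    cp is the power coefficient (Betz limit ~0.593; ~0.4 is an assumed
    typical real-world value).
    """
    swept_area = math.pi * radius_m ** 2   # power scales with radius squared
    return 0.5 * air_density * swept_area * wind_speed_ms ** 3 * cp

# Doubling blade length quadruples power; doubling wind speed gives 8x.
print(wind_power_watts(56, 8) / wind_power_watts(28, 8))   # -> 4.0
print(wind_power_watts(56, 16) / wind_power_watts(56, 8))  # -> 8.0
```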

Transportation Logistics. Installed turbine power ratings have continued to rise, to an average of 1.95 MW in 2012, including multiple models at 2 MW and above [53]. As OEMs seek to capture more wind at lower wind speed sites, average rotor diameters have increased rapidly. Tower components have also increased in size and weight to access better winds higher above the ground. Wind turbine blades longer than 53 m begin to present a transportation obstacle due to their large turning radius, which hinders right-of-way or encroachment areas within corners or curves on roads or railways. Tower sections are generally limited to 4.3 m in diameter, or 4.6 m where routes permit, to fit under overhead obstructions.

The increased size, mass, and quantity of wind components have resulted in more actively managed wind turbine transportation logistics, making use of a variety of land transportation methods and modes. This has increased project costs by up to 10% of capital costs for some projects.

Design Impacts. Transportation constraints increasingly impact the design of wind turbine components, leading to higher capital costs resulting from suboptimal design. A prime example can be found in the industry-standard rolled steel wind turbine towers, which are limited to a structurally sub-optimal 4.3 meters (14.1 feet) diameter to comply with size and weight limits of U.S. roads. While it is possible to construct towers with hub heights up to 160 m at this constrained diameter, this height results in an exponential increase in the mass and cost of rolled steel towers as shown below.

Figure 2-39. Estimates of trucking and capital costs for conventional tubular towers, 2013. Source: DOE. 2014. Wind vision: a new era for wind power in the U.S.

As towers get to be 100 meters high and more, and blade length increases, shipping them gets challenging. Trucks carrying big towers and blades must sometimes move with police escorts and avoid certain overpasses or small roads (Galbraith).

Installation. Because of the lift height and mass, hoisting a wind turbine nacelle onto its tower requires the largest crane capacity of all wind turbine construction and installation phases. The masses of a 3-MW nacelle assembly and a 5-MW nacelle assembly are approximately 78 metric tons (t) and 130 t, respectively, without the gearbox and generator (104 t and 173 t with those components installed). Continued increases in tower heights and machine ratings are driving higher nacelle and blade weights. As a result, the availability, scheduling, and logistics of larger cranes have become increasingly challenging.

Because mobile cranes capable of installing the majority of turbines deployed in the United States are of a common size used for construction and other industries, an ample supply of such cranes existed into 2014. As the number of turbines installed at 100 m hub heights and above has increased, however, concerns about the availability of larger capacity cranes have grown.

Another challenge with larger crane classes is difficulty transporting them to and maneuvering them within the wind plant, especially in complex terrain. A 1,600-ton crane has a width of nearly 13 m (41 feet), wider than a two-lane interstate highway (including shoulders), and requires more than 100 semi-tractor trailers to transport it between projects. This makes transportation between turbines difficult and costly.

Department of Energy. 2014. Wind Vision: a new era for wind power in the U.S. APPENDIX J:

Over-the-road transportation has limitations because of the length, width, height, and weight of loads that vary across the United States (Table E-5).

Table E–5. Summary of Key Minimum Logistics Constraints

Constraint              Road    Rail
Mass (metric tonnes)    75      >163
Length (meters)         53      53
Width (meters)          4.11    4.27
Height (meters)         4.57    >4.57

Most nacelles and large components are shipped on common 13-axle trailers, which have a load constraint of about 165,000 pounds. As weights move above that threshold, the number of available trailers drops dramatically and the use of dual-lane or line trailers is required. These trailers have diminishing returns in terms of cargo capacity because they are heavier. For example, the capacity of a 19-axle trailer (the largest conventional trailer) is approximately 225,000 pounds (102 metric tonnes), which is roughly equivalent to a 4-MW wind turbine nacelle with the drive train removed.

Wind turbine blades above 53 m in length also present a transportation obstacle due to the large turning radius, which hinders right-of-way or encroachment areas within corners or curves. Blade and tower transportation barriers are caused by the difficulty of trucking long blades with wide chords on U.S. roads (in the future, transportation of large diameter root sections will have similar concerns). This barrier limits the length of blade that can be transported over roadways to 53–62 m, depending on design characteristics of the blade, such as the amount of pre-curve and type of airfoils used in the region of the maximum chord dimension.

In addition to the physical limits, each state along a transportation route has different permit requirements. This problem is exacerbated by higher volumes of shipments to more widely dispersed locations as wind turbine deployments have increased in number. States are also shifting the burden of proof for the safety of large high-volume shipments onto the wind industry. The increased complexity and resulting costs and delays associated with these challenges have led the American Wind Energy Association’s Transportation & Logistics Working Group to coordinate with the American Association of State Highway and Transportation Officials in standardizing the permitting process across states.

Constraints on road transport have also led to an increased use of rail as an alternative for heavy wind components, such as the nacelle; high-volume components; and long-distance shipments. Rail is capable of shipping very heavy loads, greater than 163 metric tonnes, and does not generally require permits for each state. However, rail imposes its own length and width constraints and is not available in every location in which wind energy is being developed.

Trade-offs between rail and road transportation can also be constrained by cargo widths. Rail clearances are affected by overall shape of the cargo but begin to be restrictive on widths greater than 4.27 m (14 feet [ft]). Road transportation is subject to lane clearing constraints on loads exceeding 4.11 m (13 ft, 6 inches). A few select lanes can be cleared for widths up to 4.57 m (15 ft) for towers, but this is not a common occurrence. Road transport cost is affected by width but roads are generally capable of moving widths up to 4.87 m (16 ft). Widths in excess of 3.66 m (12 ft) require escorts. Widths in excess of 4.57 m (15 ft) may also include police escorts, which escalate cost and complexity.
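These width thresholds amount to a small decision table. A hypothetical helper function, encoding only the road figures quoted in the paragraph above:

```python
def road_width_constraints(width_m: float) -> list[str]:
    """Flags the road-transport width constraints quoted above (DOE 2014)
    that apply to a cargo of the given width. Illustrative only."""
    notes = []
    if width_m > 4.87:
        notes.append("exceeds ~4.87 m (16 ft): generally beyond road capability")
    if width_m > 4.57:
        notes.append("over 4.57 m (15 ft): may require police escorts")
    if width_m > 4.11:
        notes.append("over 4.11 m (13 ft 6 in): lane-clearing constraints")
    if width_m > 3.66:
        notes.append("over 3.66 m (12 ft): escorts required")
    return notes or ["no special width constraints"]

print(road_width_constraints(4.3))  # e.g. a standard tower section
```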

Height can be a challenge in road transport, but rail is often capable of accommodating tall cargo without issue. Most wind turbines require a loaded height (cargo plus trailer deck height) of 4.72–4.77 m (15 ft, 6 inches–15 ft, 8 inches) in order to clear the tallest cargo (e.g., the nacelle or tower). This height is often at the upper limits of many areas of the country for road transport. Tower diameters that exceed 4.57 m (15 ft) often complicate the ability to find a clear route to site.

The numbers in this section are representative constraints; specific routes around the country may be more or less restricted. The key point is that transportation logistics issues are increasing, which can cause delays and added costs, as well as suboptimal component design.

Crane Availability. The availability of smaller (120–150 metric tonnes) “support” crawler cranes may also become more limited as the number of installed turbines increases. These small cranes are used to off-load turbine components, and to support the larger cranes required for the heaviest of nacelles or greater than 100-m hub-heights. These small crawlers are used in all forms of construction, especially infrastructure, and as infrastructure projects gain momentum, the supply of these cranes should increase. With the decline in wind installations in 2013, crane manufacturers have realigned to supply ultra-large crawler cranes to power generation and petrochemical facilities. While development of machines to improve capacities at heights required by the wind industry continues, the pace of such investments has fallen considerably.

Windmills may only last 12 to 15 years, or at best 20 years

Mackay, M. 29 Dec 2012. Wind turbines’ lifespan far shorter than believed, study suggests. The Courier.

A study commissioned by the Renewable Energy Foundation has found that the economic life of onshore wind turbines could be far less than that predicted by the industry. The “groundbreaking” research was carried out by academics at Edinburgh University and looked at 3,000 onshore wind turbines and years of wind farm performance data from the UK and Denmark. The results appear to show that the output from windfarms — allowing for variations in wind speed and site characteristics — declines substantially as they get older. By 10 years of age, the report found that the contribution of an average UK wind farm towards meeting electricity demand had declined by a third. That reduction in performance leads the study team to believe that it will be uneconomic to operate windfarms for more than 12 to 15 years — at odds with industry predictions of a 20- to 25-year lifespan. They may then have to be replaced with new machinery — a finding that the foundation believes has profound consequences for investors and government alike.

Scotland’s landscape could be blighted by the rotting remains of a failed generation of wind farms, according to a scathing new report.

Mendick, R. 30 Dec 2012. Wind Farm Turbines wear sooner than expected. The Telegraph

The study estimates that routine wear and tear will more than double the cost of electricity being produced by wind farms in the next decade.

  • Older turbines will need to be replaced more quickly than the industry estimates while many more will need to be built onshore if the Government is to meet renewable energy targets by 2020.
  • The extra cost is likely to be passed on to households, which already pay about £1 billion a year in a consumer subsidy that is added to electricity bills.
  • The report concludes that a wind turbine will typically generate more than twice as much electricity in its first year as when it is 15 years old.
  • Author Prof Gordon Hughes, an economist at Edinburgh University and a former energy adviser to the World Bank, discovered that the “load factor” — the efficiency rating of a turbine based on the percentage of electricity it actually produces compared with its theoretical maximum — is reduced from 24 per cent in the first 12 months of operation to just 11 per cent after 15 years.
  • The decline in the output of offshore wind farms, based on a study of Danish wind farms, appears even more dramatic. The load factor for turbines built on platforms in the sea is reduced from 39 per cent to 15 per cent after 10 years.
  • The study also looked at onshore turbines in Denmark and discovered that their decline was much less dramatic even though its wind farms tended to be older. Prof Hughes said that may be due to Danish turbines being smaller than British ones and possibly better maintained.
  • He said: “I strongly believe the bigger turbines are proving more difficult to manage and more likely to interfere with one another. British turbines have got bigger and wind farms have got bigger and they are creating turbulence which puts more stress on them. It is this stress that causes the breakdowns and maintenance requirements that is underlying the problem in performance that I have been seeing.”
  • Prof Hughes examined the output of 282 wind farms — about 3,000 turbines in total — in the UK and a further 823 onshore wind farms and 30 offshore wind farms in Denmark.
  • “Bluntly, wind turbines onshore and offshore still cost too much and wear out far too quickly to offer the developing world a realistic alternative to coal.”

Prof Hughes said his analysis had uncovered a “hidden” truth that was not even known to the industry. His report was sent to an independent statistician at University College London who confirmed its findings.
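Assuming the decline is a steady geometric one (my assumption; Hughes’ fitted curve may differ), his load factor figures of 24% in year one and 11% in year 15 imply an average loss of roughly 5% per year:

```python
# Implied average annual decline in load factor from Prof Hughes' figures
# (24% in the first year, 11% after 15 years), assuming a steady geometric
# decline over the 14 intervening years.
lf_year1, lf_year15, years = 0.24, 0.11, 14
annual_decline = 1 - (lf_year15 / lf_year1) ** (1 / years)
print(f"Implied decline: {annual_decline:.1%} per year")  # ~5.4%/year
```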

Not In My Back Yard – NIMBYism

There’s been a great deal of NIMBYism preventing windmills from being built so far. Some of the objections are visual blight, bird killing, noise, and erosion from service roads.

After 25 years of marriage, I still sometimes have to go downstairs to sleep when my husband snores too loudly, so I can imagine how annoying windmill noise might be. And even more so after someone sent me a document entitled “Confidential issues report draft. Waubra & Other Victorian Wind Farm Noise Impact Assessments” that made the case that windmill noise affects the quality of life, disturbs sleep, and has adverse health effects. I especially liked the descriptions of possible noises: whooshes, rumble-thumps, whining, clunks, and swooshes. Low frequency sounds can penetrate walls and windows and cause vibrations and pressure changes. Many people affected would like a standard requiring windmill farms to be at least 2 kilometres from dwellings and not exceed a noise level of 35 dB(A) at any time outside neighboring dwellings.

Offshore wind turbines could affect fisheries

The Sierra Club in Maine is asking the Minerals Management Service to look at over a dozen aspects of offshore wind, including possible interference with known upwelling zones and/or important circulatory and current regimes that might influence the distribution or recruitment of marine species. Wind affects the upwelling of nutrients and may be a key factor in booms and busts of the California sardine fishery and other marine species.

Fishermen are worried about the impact offshore wind turbines will have on them, sharing some of the same concerns as the maritime transportation sector: safety of navigation, ensuring turbines are spaced widely enough to allow maneuvering, and misleading radar echoes that can be generated by turbine rotors. Mobile gear fishermen fear that large parts of the leases could be effectively off-limits to them because towers are too close together to safely trawl or dredge.

Lack of a skilled and technical workforce

Wind power officials see a much larger obstacle coming in the form of its own work force, a highly specialized group of technicians that combine working knowledge of mechanics, hydraulics, computers and meteorology with the willingness to climb 200 feet in the air in all kinds of weather (Twiddy).

Wind only produces electricity, but what we face is a liquid fuels crisis

We need liquid fuels for the immediate crisis at hand.

Wind has a low capacity factor

In the very best windmill farms, the capacity factor is only 28 to 35%.

Wind turbines generate electrical energy when they are not shut down for maintenance, repair, or tours and the wind is between about 8 and 55 mph. Below a wind speed of around 30 mph, however, the amount of energy generated is very small.

A 100 MW rated wind farm is capable of producing 100 MW only during maximum peak winds. Most of the time it produces much less, or even no power at all when winds are light or absent. In reality, 30 MW of production or less is far more likely. The ratio of what a wind farm actually produces to its rated maximum is called the CAPACITY FACTOR.

Quite often you will only hear that a new wind farm will generate 100 MW of power.  Ignore that and look for what the capacity factor is.

This makes a difference in how many homes are served. Per megawatt, a coal plant up 75% of the time provides enough power in the Northeast for 900 homes; a wind plant up 30% of the time powers only 350 homes. The South has extremely voracious electricity consumers, so the numbers are much lower: 350 and 180 respectively.

Solar generators typically have a 25 percent capacity factor, because the generators do not produce electricity at night or on cloudy days.
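A minimal sketch of the capacity factor arithmetic; the per-home draw is a back-calculated assumption chosen to match the 350-homes-per-megawatt Northeast figure above, not a published value:

```python
# Converting nameplate capacity and capacity factor into homes served.
# The 0.86 kW average per-home draw is back-calculated from the
# 350-homes-per-MW Northeast figure quoted above (an assumption).
nameplate_mw = 100
capacity_factor = 0.30
avg_home_draw_kw = 0.86

avg_output_mw = nameplate_mw * capacity_factor            # 30 MW on average
homes_served = avg_output_mw * 1000 / avg_home_draw_kw    # ~35,000 homes
annual_mwh = avg_output_mw * 8760                         # ~262,800 MWh/year

print(f"{avg_output_mw:.0f} MW average, ~{homes_served:,.0f} homes, "
      f"{annual_mwh:,.0f} MWh/yr")
```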

Dead bugs and salt reduce wind power generation by 20 to 30%

Over time the build-up of dead insects and/or salt on offshore turbine blades reduces power by up to 30%.

Small windmills too expensive, too noisy, unreliable, and height restricted

According to the American Wind Energy Association, these are the challenges of small windmills:

  • They’re too expensive for most people.
  • There’s insufficient product reliability.
  • There’s a lack of consumer protection from unscrupulous suppliers.
  • Most local jurisdictions limit the height of structures to 35 feet, while wind towers must be at least 60 feet high and higher than objects around them like trees.
  • Utilities make it hard and discourage people from connecting to the grid.
  • The inverters that modify the wildly fluctuating wind voltages into 60-cycle AC are too expensive.
  • They’re too noisy.

REFERENCES

Anderson, J. March 1, 2017. You Can’t Have Offshore Wind Power Without Oil. Forbes.

Castelvecchi, D. March 2012. Gather the Wind. If renewable energy is going to take off, we need good ways of storing it for the times when the sun isn’t shining and the wind isn’t blowing.  Scientific American.

Cembalest, Michael. 21 Nov 2011. Eye on the Market: The quixotic search for energy solutions. J. P. Morgan.

Cubic Mile of Oil.  Wikipedia.

DATC. September 23, 2014. $8-billion green energy initiative proposed for Los Angeles. Duke American Transmission Co.

De Castro, C. 2011. Global Wind Power Potential: Physical and technological limits. Energy Policy.

E.ON Netz Corp. Wind Report 2004.

Renewable Energy Foundation. E.ON Netz Wind Report 2005 shows UK renewables policy is mistaken.

Elvidge, C. D. 2004. U.S. constructed area approaches the size of Ohio. Eos 85:233-34.

Fisher, T. Oct 23, 2013. Big Wind’s Dirty Little Secret: Toxic Lakes and Radioactive Waste. Institute for Energy Research.

Froese, M. 2015. Damage control: effects of near-lightning strikes on turbine blades. Windpowerengineering.com

Galbraith, K. 7 Aug 2011. Wind Power Gains as Gear Improves. New York Times

Gromicko, N. 2018. Wind turbines and lightning. Nachi.org  https://www.nachi.org/wind-turbines-lightning.htm

Gruver, M. September 24, 2014. Renewable energy plan hinges on huge Utah caverns. Associated Press.

IEC. 2014. Wind speed and power. Iowa Wind Center.

Martin, R. April 7, 2016. Texas and California have too much renewable energy. The rapid growth of wind and solar power in the states is wreaking havoc with energy prices. MIT Technology Review.

Mason, V. 2005. Wind power in West Denmark. Lessons for the UK.

Mensching, L. M. 2017. Wind energy isn’t a breeze. Slate.com

Michel, J, et al. July 2007. Worldwide Synthesis and Analysis of Existing Information Regarding Environmental Effects of Alternative Energy Uses on the Outer Continental Shelf. U.S. Department of the Interior. Minerals Management Service. OCS STUDY MMS 2007- 038

Miller, L. M., et al. May 16, 2014.  Two methods for estimating limits to large-scale wind power generation. Proceedings of the National Academy of Sciences.

Miller, L. M. et al. Jet stream wind power as a renewable energy resource: little power, big impacts. Earth System Dynamics, 2011; 2 (2): 201 DOI: 10.5194/esd-2-201-2011

Nelder, C. 31 May 2010. 195 Californias or 74 Texases to Replace Offshore Oil. ASPO Peak Oil Review.

NREL. 2014. Renewable Electricity Futures Study Exploration of High-Penetration Renewable Electricity Futures. National Renewable Energy Laboratory.

Parliament.uk. 21 Sep 2005. Memorandum submitted by Paul Mobbs, Mobbs’ Environmental Investigations. Select committee on environmental audit.

Parry, Simon. 11 Jan 2012.  In China, the true cost of Britain’s clean, green wind power experiment: Pollution on a disastrous scale. Dailymail.co.uk

Pfotenhauer, N. May 12, 2014. Big Wind’s bogus subsidies. U.S. News.

Rose, S., et al. 10 Jan 2012. Quantifying the hurricane risk to offshore wind turbines. Proceedings of the National Academy of Sciences.

Rosenbloom, E. 2006. A Problem With Wind Power. aweo.org

Smil, V. 2008. Energy in nature and society. MIT Press.

Smil, V. 2010. Energy myths and realities. AEI press.

Smith, K. J. 2016. Surge protector. Scientific American.

Takemoto, Y. 31 Aug 2006. Eurus Energy May Scrap Wind Power Project in Japan.  Bloomberg.

Temple, J. 2018. Wide-scale US wind power could cause significant global warming. MIT Technology Review.

Trainer, T., 2012. A critique of Jacobson and Delucchi’s proposals for a world renewable energy supply. Energy Policy 44, 476–481.

Twiddy, D. 2 Feb 2008. Wind farms need techs to keep running. Associated Press.

Udall, Randy. How many wind turbines to meet the nation’s needs? Energyresources message 2202

More articles on wind problems in various areas (not cited above)

Clover, C. 9 Dec 2006. Wind farms ‘are failing to generate the predicted amount of electricity’. Telegraph.

Means, E. Jan 12, 2015. Scotland Gagging on Wind Power. Energy Matters.

Not on the internet anymore:

Blackwell, R. Oct 30, 2005. How much wind power is too much? Globe and Mail.

Wind power has become a key part of Canada’s energy mix, with the number of installed wind turbines growing exponentially in recent months. But the fact the wind doesn’t blow all the time is creating a potential roadblock that could stall growth in the industry.

Alberta and Ontario, the two provinces with the most wind turbines up and whirling, face concerns that there are limits on how much power can be generated from the breeze before their electricity systems are destabilized.

Alberta recently put a temporary cap on wind generation at 900 megawatts — a level it could reach as early as next year — because of the uncertainty. And a report in Ontario released last week says that in some situations with more than 5,000 MW of wind power, stable operation of the power grid could be jeopardized.

Warren Frost, vice-president for operations and reliability at the Alberta Electric System Operator, said studies done over the past couple of years showed there can be problems when wind contributes more than about 10 per cent of the province’s electricity — about 900 MW — because of the chance the wind could stop at any time.

Each 100 MW of wind power is enough to supply a city about the size of Lethbridge, Alta.

If the power “disappears on you when the wind dies, then you’ve got to make it up, either through importing from a neighbouring jurisdiction or by ramping up generators,” Mr. Frost said.

But Alberta is limited in its imports, because the provincial power grid has connections only with British Columbia and Saskatchewan. And hydroelectric plants with water reservoirs, which can turn on a dime to start producing power, are limited in the province. Coal-fired plants and most gas-fired plants take time to get up to speed, making them less useful as backups when the wind fails.

There can also be a problem, Mr. Frost noted, when the wind picks up and generates more power than is being demanded — that potential imbalance also has to be accounted for.

There are a number of ways to allow wind power to make up a greater proportion of the electricity supply, but they require more study, Mr. Frost said. First, he said, the province can develop more sophisticated ways of forecasting the wind so the power it generates is more predictable.

The province could also build more plants that can quickly respond if the wind dies down during a peak period, for example. But building new gas-powered plants merely to help handle the variability of wind is certain to raise the ire of environmentalists.

The province could also increase its connections to other jurisdictions, where it would buy surplus power when needed. Alberta is already looking at links with some northwestern U.S. states, including Montana.

Over all, Alberta is committed to “adding as much wind as feasible,” Mr. Frost said. “What we’re balancing is the reliability [issue].”

Robert Hornung, president of the Canadian Wind Energy Association, which represents companies in the wind business, said he prefers to think of Alberta’s 900 MW limit as a “speed bump” rather than a fixed cap.

“We have every confidence they’ll be able to go further than that,” Mr. Hornung said, particularly if the industry and regulators put some effort into wind forecasting over the next year or so. That’s crucial, he said, because “we have projects of many, many more megawatts than 900 waiting to proceed in Alberta.”

In Ontario, the situation is less acute than in Alberta, but the wind study released last week — prepared for the industry and regulators — shows some similar concerns.

While wind power could be handled by the Ontario grid up to 5,000 MW — about 320 MW of wind turbines are currently in operation with another 960 MW in planning stages — the situation changes at higher levels, the study suggests.

Particularly during low demand periods when wind makes up a relatively high proportion of the power mix, “stable operation of the power system could be compromised” if backup systems can’t be ramped up quickly to deal with wind fluctuations, the report said.

But Ontario is in a better position than Alberta because it has far more interconnections with other provinces and states, where it can buy or sell power.

And it also has its wind turbines more geographically dispersed than Alberta, where most wind farms are in the south of the province. That means the chance of the wind failing everywhere at the same time is lower in Ontario.

Don Tench, director of planning and assessments for Ontario’s Independent Electricity System Operator, said he thinks better wind forecasting is the key to making the new source of power work effectively.

“If we have a few hours notice of a significant wind change, we can make plans to deal with it,” he said.

Mr. Frost, of the Alberta system operator, said European countries such as Denmark and Germany have been able to maintain a high proportion of wind power in their electricity systems mainly because they have multiple connections to other countries’ power grids. That gives them substantial flexibility to import or export power to compensate for wind fluctuation.

Germany, for example, has 39 international interconnections, he said, making variable wind conditions much easier to manage.


What on earth is exergy?

Preface. This is one of the best explanations of exergy I’ve been able to find. The paper argues that exergy ought to be considered by just about every industry and government to achieve greater energy efficiency, and that in many ways exergy, combined with mineral depletion, is a more valuable measure than energy use.

My favorite example was:

“The need to take the quality of energy into account can be shown with a simple everyday example.  Take an office space and a car battery. The energy contained in the movement of air molecules in a 68 degree 20 cubic meter office is more than the energy stored in three standard 12 volt car batteries. But you can only use the energy in the air to keep yourself warm, while the energy in the batteries will start your car, cook your lunch, and run your computer.  The reason is that even if their quantities are the same, the quality – or usefulness – of the energy in the air and in the battery is different. In the air, the energy is randomly distributed, not readily accessible, and not easily used for anything other than keeping you warm.  But the electric battery energy is concentrated, controllable, and available for all sorts of uses. This difference is taken into account by exergy.”

But you really ought to go to the original source: https://www.scienceeurope.org/wp-content/uploads/2016/06/SE_Exergy_Brochure.pdf since I’ve left out the explanatory charts, graphs, and about a quarter of the information, especially the pages of how exergy should be used in policy-making, which those of you who are trying to slow down or lessen the impact of the Great Simplification might find the most interesting.

Alice Friedemann  www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

***

Brockway, P., et al. 2016. In a resource-constrained world: Think Exergy, not energy. Science Europe.

Exergy

It’s necessary to measure improved energy and resource efficiencies, but how? Of course, the amount of energy and raw materials that go into making something, or that go into services such as heating, communication, or transport, can be easily measured.  However, that does not consider the quality of the energy nor the rarity of the materials used. In order to account for the quality and not just the quantity of energy, as well as factoring in the raw materials used, we need to measure exergy.

Exergy can be considered to be useful energy, or the ability of energy to do work. Exergy can be measured not only for individual processes, but also for entire industries, and even for whole national economies. It provides a firm basis from which to judge the effect of policy measures taken to improve energy and resource efficiency, and to mitigate the effects of climate change.

Exergy as a Measure of Energy Quality

The need to take the quality of energy into account can be shown with a simple everyday example.  Take an office space and a car battery. The energy contained in the movement of air molecules in a 68 degree 20 cubic meter office is more than the energy stored in three standard 12 volt car batteries. But you can only use the energy in the air to keep yourself warm, while the energy in the batteries will start your car, cook your lunch, and run your computer.  The reason is that even if their quantities are the same, the quality – or usefulness – of the energy in the air and in the battery is different. In the air, the energy is randomly distributed, not readily accessible, and not easily used for anything other than keeping you warm.  But the electric battery energy is concentrated, controllable, and available for all sorts of uses. This difference is taken into account by exergy.
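A back-of-envelope check of this example; the 50 Ah battery capacity and the crude use of air’s heat capacity at constant pressure, measured down to absolute zero, are my assumptions:

```python
# Back-of-envelope version of the office-air vs. car-battery comparison.
# Battery capacity (50 Ah assumed) and the thermal-energy convention are
# assumptions; the point is only that the two quantities are comparable.
rho_air, c_p, volume_m3, temp_k = 1.2, 1005.0, 20.0, 293.0  # 68 F office
air_energy_j = rho_air * volume_m3 * c_p * temp_k   # thermal energy of the air

battery_energy_j = 3 * 12 * 50 * 3600               # three 12 V, 50 Ah batteries

print(f"Air:       {air_energy_j / 1e6:.1f} MJ")    # ~7.1 MJ
print(f"Batteries: {battery_energy_j / 1e6:.1f} MJ")  # ~6.5 MJ
# Similar quantity of energy, vastly different quality (exergy): the
# room-temperature air can do almost no useful work.
```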

Thermodynamics is the Science of Energy

The concept of exergy is inextricably contained within the basic physical laws governing energy and resources: thermodynamics. These laws cannot be ignored: they are fundamental. Two of the basic laws of thermodynamics need to be considered:

First – Energy is conserved.

Second – Heat cannot be fully converted into useful energy. This second law concerns the concept of exergy. Every energy-conversion process destroys exergy. Take for example a conventional fossil fuel power station. Such a station transforms the chemical energy stored in coal to produce steam in a boiler, which is then converted by a turbine into mechanical energy and finally by a generator into electricity. In this process, only 30–35% of the chemical energy contained in the coal is converted into electrical energy; the remaining 65–70% is lost in the form of heat. Exergy analysis of this power generation plant identifies the boiler and turbine as the major sources of exergy loss. In order to improve the exergy efficiency, the boiler and turbine systems need to be altered through technical design and operational changes.
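A short sketch of why exergy analysis points at the boiler and turbine rather than at the rejected heat stream: heat near ambient temperature carries almost no exergy. The temperatures are assumed round numbers for illustration:

```python
# Heat at temperature T, with ambient T0, carries an exergy fraction of
# (1 - T0/T), the Carnot factor. Temperatures below are assumed.
T0 = 293.0        # ambient temperature (K)
T_waste = 320.0   # assumed condenser / waste-heat temperature (K)

carnot_factor = 1 - T0 / T_waste
print(f"Exergy fraction of waste heat: {carnot_factor:.1%}")  # ~8.4%

# A plant converting 33% of fuel energy to electricity rejects ~67% as
# heat, but that stream holds only ~0.67 * 8.4% = ~5.7% of the fuel's
# exergy; the rest was destroyed by irreversibilities in boiler and turbine.
waste_heat_share = 0.67
print(f"Exergy carried by waste heat: {waste_heat_share * carnot_factor:.1%}")
```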

Exergy as a Measure of Resource Quality

Exergy can also be applied in order to take the quality of resources into account. A diluted resource is much more difficult to use than a concentrated one, as it first has to be collected or refined. The measure to take the concentration of a resource into account is its chemical potential (or chemical exergy).  The chemical potential of pure iron is much higher than the chemical potential of an iron ore diluted by other rocks.

An exergy consideration of any process takes into account the chemical potential of the resources used in the process. The problem with chemical potentials, however, is that it is only possible to measure their difference. In order to study the chemical potential of a specific resource, a reference point is needed. An interesting proposal as a reference point for natural minerals is the concept of ‘thanatia’, a hypothetical version of our planet where all mineral deposits have been exploited and their materials have been dispersed throughout the crust. Using thanatia as a model, it is possible to determine the exergy content of the Earth’s resources. By adding up all exergy expenditures, the rarity of resources and their products can be assessed.

Exergy Destruction in the Process Industry

Industry is a large user of both material and energy resources. Typically, an industrial production process needs inputs of materials and of energy to transform those materials into products. Much of this input ends up being discarded: the materials as waste, and the energy as heat. This is exergy destruction, since – recalling the Second Law of thermodynamics – not all inputs can be fully recovered as useful energy.

Methanol, for example, is a primary liquid petrochemical manufactured from natural gas. It is a key component of hundreds of chemicals that are integral parts of our daily lives such as plastics, synthetic fibers, adhesives, insulation, paints, pigments, and dyes. Before methanol production even begins, 10% of the natural gas is used to warm the chemical reactor. Subsequently, during production further reactor losses amount to 50%. This contributes to the exergy destruction footprint of methanol production and of all its products.

How can we Increase the Energy Efficiency of Production?

While exergy destruction for any process is never zero, it can be minimized. Every process has a characteristic exergy-destruction footprint. Knowledge of this footprint can be used to rationalize resource choices before production begins and to monitor the use of energy and resources during production. In a full life-cycle approach, it can be used to consider the total energy and resource ‘cost’ of a product: essentially its exergy-destruction footprint.

An example of a process where reducing exergy destruction can increase energy efficiency is distillation. Distillation is the most commonly applied separation technology in the world, responsible for up to 50% of both capital and operating costs in industrial processes. It is a process used to separate the different substances from a liquid mixture by selective evaporation and condensation. Commercially, distillation has many applications; in the previous example of methanol production, it is used to purify the methanol by removing reaction byproducts from it, such as water. The conventional separation of chemicals by distillation occurs in a column that is heated from below by a boiler, with the desired product (referred to as the condensate) produced from a condenser at the top. The exergy efficiency of this distillation setup is about 30%.

The obvious question is whether the same distillation results can be achieved with a higher exergy efficiency by operating the column differently. The answer is yes, as there are better ways to add heat to the column than by a boiler. The boiler and condenser can be replaced by a series of heat exchangers along the column, producing a more exergy-efficient heating pattern. This arrangement minimizes the exergy destruction in the system, reducing the exergy footprint of the process. In this way, the same product can be obtained with only 60% of the original exergy loss. This of course requires investment in replacing or retrofitting the technology, but in the long run such costs are compensated by lower operating costs. Financial benefits aside, the potential impact of technological development driven by exergy analysis on the energy and material efficiency of industry is enormous.
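One consistent reading of these figures, as arithmetic: if the useful separation exergy is held fixed, a 30% efficient column that cuts its losses to 60% of the original reaches roughly 42% exergy efficiency. This reconciliation is mine, not stated in the paper:

```python
# Reconciling "30% exergy efficiency" with "60% of the original exergy loss",
# assuming the useful separation exergy is unchanged (my assumption).
eta_old = 0.30
useful = 1.0                    # normalize useful exergy output to 1 unit
input_old = useful / eta_old    # 3.33 units of exergy in
loss_old = input_old - useful   # 2.33 units destroyed

loss_new = 0.60 * loss_old      # heat-exchanger arrangement: 60% of old loss
input_new = useful + loss_new
print(f"New exergy efficiency: {useful / input_new:.1%}")  # ~41.7%
```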

The Exergy Destruction Footprint – Developing More Environmentally Friendly Technologies

When exergy analysis is performed on a process, the exergy losses can be identified and the exergy-destruction footprint can be minimized. In the fossil fuel industry, single- and two-stage crude oil distillation are used to obtain materials from crude oil for fuels and for chemical feedstocks.

A single-stage system consists of a single heating furnace and a distillation column; a two-stage system adds another furnace (to heat the product of the first unit) and a second column. Tests have shown that the two-stage system has a much higher efficiency – 31.5% versus 14% for a single-stage process. This is because the two-stage system can be better controlled than the one-stage system. Adding more stages gives even better control.

It is important to keep in mind that there is no production without an exergy destruction footprint. 

A Large-scale Problem Needs a Common-scale Solution

In 2013, industry accounted for 25% of the EU’s total final energy consumption, making it the third-largest end-user after buildings and transport. Over 50% of industry’s total final energy consumption is attributed to just three sectors: iron and steel, chemical and pharmaceutical, and petroleum and refineries.

Between 2001 and 2011, EU industry reduced its energy intensity by 19%. However, significant efficiency potential remains. As previous examples of several industrial processes have shown, exergy analysis offers a guide to the development of more energy-efficient technologies and provides an objective basis for the comparison of sustainable alternatives. Energy analysis explains that electric and thermal energy are equivalent according to the First Law of thermodynamics, and that heating by an electric resistance heater can be 100% efficient. Exergy analysis, however, explains that heating by an electric heater wastes useful energy. When we know about this kind of waste, we can start to reduce it by minimizing exergy destruction. While the given examples have focused on industrial processes, exergy analysis can also tackle the energy and resource efficiency of larger consumers of energy, such as the buildings and transport sectors. It is important to highlight that exergy analysis can be used not only to quantify the historical resource use, efficiency and environmental performance, but also to explore future transport pathways, building structures and industrial processes.
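The resistance-heater example in numbers: electricity is pure exergy, but heat delivered at room temperature retains only the Carnot fraction of it. Temperatures are assumed for illustration:

```python
# First-law (energy) efficiency of a resistance heater is ~100%, but the
# exergy of heat delivered at room temperature is only the Carnot fraction.
T0 = 273.0      # outdoor ambient (K), assumed 0 C
T_room = 293.0  # indoor temperature (K), assumed 20 C

exergy_efficiency = 1 - T0 / T_room
print(f"Exergy efficiency of resistance heating: {exergy_efficiency:.1%}")  # ~6.8%
# A heat pump exploits this gap, delivering several units of heat per
# unit of electricity.
```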

As explained in the Opinion Paper “A Common Scale for Our Common Future: Exergy, a Thermodynamic Metric for Energy”, a major roadblock to implementing – or even finding – solutions to our societal challenges is the fact that energy and resource efficiency are commonly defined in economic, environmental, physical, and even political terms. Exergy is the resource of value, and considering it as such requires a cultural shift to the thermodynamic-metric approach of energy analysis. Exergy provides an apolitical scale to guide our judgement on the road to sustainability. Exergy is a first step toward a common-scale solution to our large-scale problems.

ADOPTING EXERGY EFFICIENCY AS THE COMMON NATIONAL ENERGY-EFFICIENCY METRIC

Energy Efficiency as a Key Climate Policy: the Need to Measure Progress with Exergy

Improving the efficiency of energy use and transitioning to renewable energy are the two main climate policies aimed at meeting global carbon-reduction targets. The 2009 Renewable Energy Directive mandates that 20% of energy consumed in the EU should be renewable by 2020. At the same time, the EU’s 2012 Energy Efficiency Directive sets a 20% reduction target for energy use. Progress towards the renewable-energy target is straightforward to measure, since national energy use by renewable sources is collected and readily available. Indeed, for many citizens, the proportion of domestic electrical energy generated from renewable sources appears clearly defined on their electricity bills. In contrast, national-scale energy efficiency remains unclear, and a qualitative comparison of renewable sources is lacking. A central problem is that there is no single, universal definition of national energy efficiency. In this void, a wide range of metrics is inconsistently adopted, based on economic activity, physical intensity, or hybrid economic–physical indicators.

None of these methods is based on thermodynamics, however, making them inherently incapable of measuring energy efficiency in a meaningful way. As such, they are unable to contribute to evidence-based policy making or to measure progress towards energy-efficiency targets. The EU is not alone: there is currently no national-scale, thermodynamics-based reporting of energy efficiency by any country in the world. Second-Law thermodynamic efficiency – in other words, exergy efficiency – stands alone in offering a common scale for national, economy-wide energy-efficiency measurement, applicable at all scales and across all sectors.

NATURAL RESOURCE CONSUMPTION

From Gaia to Thanatia: How to Assess the Loss of Natural Resources

As technology today uses an increasing number of elements from the periodic table, the demand for raw materials profoundly impacts the mining sector. As ever lower grades of ore are extracted from the earth, the use of energy, water and waste rock per unit of extracted material increases, resulting in greater environmental and social impact. Globally, the metal sector requires about 10% of total primary energy consumption, mostly provided by fossil fuels. By 2050, the demand for many minerals, including gold, silver, indium, nickel, tin, copper, zinc, lead, and antimony, is predicted to be greater than their current reserves. Regrettably, many rare elements are profusely used, with limited recycling.

The loss of natural resources cannot be expressed in money, which is a volatile unit of measurement, too far removed from the objective reality of physical loss. Neither can it be expressed in tonnage or energy alone, as these do not capture quality and value. Exergy can resolve these shortcomings and be applied to resource consumption through the idea of ‘exergy cost’: the embodied exergy of any material, which takes the concentration of resources into account, measured with reference to the ‘dead state’ of Thanatia.

Thanatia – from the Greek personification of death – is a hypothetical dead state of the anthroposphere: an ultimate landfill where all mineral resources are irreversibly lost and dispersed, or in other words, at an evenly distributed crustal composition. If our society is squandering the natural resources that the Sun and the geological evolution of the Earth have stored, we are converting their chemical exergy into a degraded environment that progressively becomes less able to support usual economic activities and eventually will fail to sustain life itself. The end state would be Thanatia, a possible end to the ‘Anthropocene’ period. It does not represent the end of life on our planet, but it does imply that mineral resources are no longer available in a concentrated form.

An Essential Approach to Making Better Use of Our Mineral Resources: the Application of Mineral Exergy Rarity

The exergy of a mineral resource as calculated with Thanatia as a reference can be measured as the minimum energy that would be needed to extract that resource from bare rock, instead of from its current mineral deposit. This is an essential approach, since the European Commission’s communication ‘Towards a Circular Economy: A Zero Waste Programme for Europe’ states that “valuable materials are leaking from our economies” and that “pressure on resources is causing greater environmental degradation and fragility; Europe can benefit economically and environmentally from making better use of those resources.” Applied to minerals, we can define a ‘mineral exergy rarity’ (in kWh) as “the amount of exergy resources needed to obtain a mineral commodity from bare rocks, using prevailing technologies”. The ‘exergy rarity’ concept is thus able to quantify the rate of mineral capital depletion, taking a completely resource-exhausted planet as a reference. This rarity assessment allows for a complete vision of mineral resources via a cradle-to-grave analysis. Exergy rarity is, in fact, a measure of the exergy-destruction footprint of a mineral, taking Thanatia as a reference.
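
To make the rarity idea concrete, here is a minimal sketch of the thermodynamic floor it builds on (idealized assumptions, not the full Valero methodology): the minimum reversible exergy to concentrate a substance from Thanatia’s dispersed crustal concentration up to the grade of a workable ore, using the concentration exergy of an ideal binary mixture. Real extraction consumes orders of magnitude more than this floor, which is why rarity is defined using prevailing technologies; the mole fractions below are hypothetical.

```python
import math

R = 8.314     # J/(mol K), gas constant
T0 = 298.15   # K, dead-state temperature

def concentration_exergy(x: float) -> float:
    """Molar exergy (J/mol) tied to a substance's concentration x in an
    ideal binary mixture; larger when the substance is more dispersed."""
    return -R * T0 * (math.log(x) + ((1 - x) / x) * math.log(1 - x))

x_thanatia = 1e-6   # hypothetical mole fraction in the dispersed crust
x_ore = 1e-2        # hypothetical mole fraction in a mineral deposit

# Minimum exergy that nature "saved" us by pre-concentrating the mineral,
# i.e., the reversible cost of going from Thanatia back to the ore grade:
bonus = concentration_exergy(x_thanatia) - concentration_exergy(x_ore)
print(f"Minimum concentration exergy: {bonus / 1000:.1f} kJ/mol")
```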

Given a certain state of technology, the exergy rarity is an identifying property of any commodity incorporating metals. Hence, exergy rarity (in kWh/kg) may be assessed for all mineral resources and artefacts thereof, from raw materials and chemical substances to electric and electronic appliances, renewable energies, and new materials, especially those made with critical raw materials, whose recycling and recovery technologies should be further enhanced. Such thinking is a step towards “a better preservation of the Earth’s resources endowment and the use of the Laws of Thermodynamics for the assessment of energy and material resources as well as the planet’s dissipation of useful energy”. More than ever, the issue of dwindling resources needs an integrated global approach. Issues such as assessing exhaustion, dispersal, or scarcity are absent from economic considerations. An annual exergy account of not only production, but also the depletion and dispersion of raw materials, would enable sound management of our material resources. Unfortunately, as with the problem of inconsistent national energy-efficiency measurement, there is also a lack of consistency in natural-resource assessment, which is necessary for effective policy making.

It is time to charge for exergy use rather than for energy use. In the future, consumers should be informed about products and services in terms of their exergy content and destruction footprints, in much the same way as they are about carbon emissions, and pay the price accordingly. That gives a scientific basis for charging for the loss of valuable resources.

The energy and exergy used in production, operation and decommissioning must be paid back during a system’s lifetime for it to be sustainable. Life cycle exergy analysis (LCEA) shows that solar thermal plants have a much longer exergy payback time than energy payback time: 15.4 versus 3.5 years, respectively. Energy-based analysis may therefore lead to false conclusions when evaluating the sustainability of renewable energy systems.
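
The payback arithmetic itself is a one-line division, as the sketch below shows; the plant figures are illustrative numbers chosen only to reproduce the 3.5-year energy and 15.4-year exergy results quoted above, since the embodied inputs are nearly pure exergy while the delivered low-temperature heat contains little.

```python
# Illustrative payback-time arithmetic for a solar thermal plant.
# Inputs are hypothetical, scaled to match the quoted LCEA result.

energy_invested = 3500.0             # MJ embodied in production/operation/disposal
energy_delivered_per_year = 1000.0   # MJ/yr of heat output

exergy_invested = 3500.0             # MJ; fuels and electricity are ~pure exergy
exergy_delivered_per_year = 227.0    # MJ/yr; low-temperature heat has little exergy

print(f"Energy payback: {energy_invested / energy_delivered_per_year:.1f} years")   # 3.5
print(f"Exergy payback: {exergy_invested / exergy_delivered_per_year:.1f} years")   # ~15.4
```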

References

  1. Science Europe Scientific Committee for the Physical, Chemical and Mathematical Sciences, “A Common Scale for Our Common Future: Exergy, a Thermodynamic Metric for Energy”, http://scieur.org/op-exergy
  2. A. Valero Capilla and A. Valero Delgado, “Thanatia: The Destiny of the Earth’s Mineral Resources, a Thermodynamic Cradle-to-Cradle Assessment”, World Scientific Publishing: Singapore, 2014.
  3. S. Kjelstrup, D. Bedeaux, E. Johannessen, J. Gross, “Non-Equilibrium Thermodynamics for Engineers”, World Scientific, 2010, see chapter 10 and references therein.
  4. H. Al-Muslim, I. Dincer and S.M. Zubair, “Exergy Analysis of Single- and Two-Stage Crude Oil Distillation Units”, Journal of Energy Resources Technology 125(3), 199–207, 2003.
  5. SET-Plan Secretariat, SET-Plan Action n°6, Draft Issues Paper, “Continue efforts to make EU industry less energy intensive and more competitive”, 25/01/2016, https://setis.ec.europa.eu/system/files/issues_paper_action6_ee_industry.pdf
  6. European Parliament. Directive 2009/28/EC of the European Parliament and of the Council of 23 April 2009. Official Journal of the European Union L140/16, 23.04.2009, pp. 16–62.
  7. European Parliament. Directive 2012/27/EU of the European Parliament and of the Council of 25 October 2012 on energy efficiency. Official Journal of the European Union L315/1, 25.10.2012.
  8. P.E. Brockway, J.R. Barrett, T.J. Foxon, and J.K. Steinberger, “Divergence of trends in US and UK aggregate exergy efficiencies 1960–2010”, Environmental Science and Technology 48, 9874–9881, 2014.
  9. P.E. Brockway, J.K. Steinberger, J.R. Barrett, and T.J. Foxon, “Understanding China’s past and future energy demand: an exergy efficiency and decomposition analysis”, Applied Energy 155, 892–903, 2015.
  10. Presentation of the “World Energy Outlook – 2015 Special Report on Energy and Climate”, presented by the International Energy Agency’s Executive Director Fatih Birol at the EU Sustainable Energy Week, 2015.
  11. C.J. Koroneos, E.A. Nanaki and G.A. Xydis, “Sustainability Indicators for the Use of Resources – the Exergy Approach”, Sustainability 4, 1867–1878, 2012.
  12. http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52014DC0398
  13. Appeal to UN and EU by researchers who attended the 12th Biannual Joint European Thermodynamics Conference, held in Brescia, Italy, from July 1, International Journal of Thermodynamics 16(3), 2013.
  14. Federal Nonnuclear Energy Research and Development Act of 1974, Public Law 93–577, http://legcounsel.house.gov/comps/Federal%20nonnuclear%20Energy%20research%20and%20Development%20act%20Of%201974.pdf
  15. D. Favrat, F. Marechal and O. Epelly, “The challenge of introducing an exergy indicator in a local law on energy”, Energy 33, 130–136, 2008.

Why Concentrated Solar Power is not a good choice for a 100% Renewable Energy System

Preface.  The authors conclude that:

In light of the obtained results — a low capacity factor and Energy Returned on Invested (EROI), an intensive use of materials (some scarce), and significant seasonal intermittence — the potential contribution of current CSP technologies in a future 100% RES system seems very limited.

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

Castro, C., et al. 2018. Concentrated solar power: actual performance and foreseeable future in high penetration scenarios of renewable energies. Biophysical Economics and Resource Quality.

Analyses proposing a high share of concentrated solar power (CSP) in future 100% renewable energy scenarios rely on the ability of this technology, through storage and/or hybridization, to partially avoid the problems associated with the hourly/daily (short-term) variability of other variable renewable sources such as wind or solar photovoltaic. However, data used in the scientific literature are mainly theoretical values. In this work, the actual performance of CSP plants in operation from publicly available data from four countries (Spain, the USA, India, and the United Arab Emirates) has been estimated for three dimensions: capacity factor (CF), seasonal variability, and energy return on energy invested (EROI).

The authors used real data from 34 CSP plants to find actual capacity factors, which were much lower than had been assumed.

OVERALL AVERAGE: ACTUAL CF 0.15–0.3, ASSUMED 0.25–0.75

CSP plant | Technology | Storage hours | Expected CF | Literature CF | Real CF
Nevada Solar One | Parabolic | 0.5 | 0.20 | 0.42–0.51 | 0.18
Solana Generating | Parabolic | 6 | 0.38 | 0.42–0.51 | 0.27
Genesis | Parabolic | No | 0.26 | 0.25–0.5 | 0.28
Martin Next Generation | Parabolic | No | 0.24 | 0.25–0.5 | 0.16
Mohave | Parabolic | No | 0.24 | 0.25–0.5 | 0.21
SEGS III–IX | Parabolic | No | – | 0.25–0.5 | 0.17
Crescent Dunes | Tower | 10 | 0.52 | 0.55–0.71 | 0.14
Ivanpah 1, 2, 3 | Tower | No | 0.31 | 0.25–0.28 | 0.19
Maricopa | Dish Stirling | No | – | 0.25–0.28 | 0.19

Table 2 (United States only; UAE, Spain, and India not shown). Estimates of the CF of several individual CSP plants, sets of plants, and the global USA and Spanish CSP systems: expected values from industry, values used in the scientific literature, and the results obtained in this work for real plants.
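
The capacity factor itself is simple arithmetic, which makes the gap between promised and real values all the more striking: it is the energy actually generated divided by what the nameplate rating could deliver running flat out all year. A sketch of the calculation applied to public data (the annual generation figure below is a hypothetical input chosen to match the real CF of ~0.14 in the table):

```python
# Capacity factor = actual generation / maximum possible generation.
HOURS_PER_YEAR = 8760

def capacity_factor(mwh_generated: float, nameplate_mw: float) -> float:
    """Fraction of the year's maximum possible output actually delivered."""
    return mwh_generated / (nameplate_mw * HOURS_PER_YEAR)

# Crescent Dunes: 110 MW nameplate tower plant with 10 hours of storage.
cf = capacity_factor(mwh_generated=135_000, nameplate_mw=110)
print(f"Real CF ~ {cf:.2f} (expected 0.52)")   # ~0.14
```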

In fact, the results obtained show that the actual performance of CSP plants is significantly worse than that projected by constructors and considered by the scientific literature in the theoretical studies:

Low standard EROI of 1.3:1–2.4:1; 12 other researchers gave a range of 9.6 to 67.6 (see Table 7). Given that CSP plants cost more than any other kind of RES, it’s not surprising that the EROI is so low.

Other significant issues for CSP

  • Intensive use of materials, some scarce
  • Substantial seasonal intermittence

Conclusion

Analyses proposing a high share of CSP in future 100% RES scenarios rely on the ability of this technology, through storage and/or hybridization, to partially avoid the problems associated with the hourly/ daily (short-term) variability of other renewable variable sources, such as wind or PV.

But this advantage seems to be more than offset by the overall performance of real CSP plants. The results from CSP plants in operation, using publicly available data from four countries (Spain, the USA, India, and UAE), show that actual performance is significantly worse than projected by the builders and by the scientific literature, which has been using theoretical numbers. The exaggeration in the scientific literature is paradoxical, given that public data for many power plants have been available for years.

Because the capacity factor is overestimated, the life cycle analyses built on it (estimates of energy and material requirements, EROI, environmental impacts, and economic costs) are distorted as well.

The capacity factor turns out to be quite low, on the same order as wind and PV. CSP also has a very low EROI, intensive use of materials (some scarce), and significant seasonal intermittence problems, with seasonal variability worse than for wind or PV in Spain and the USA, where output can be zero for many days in winter.

Since CSP plants have to be put in hot deserts with a lot of sunlight, they’re vulnerable to damage from wind, dust, sand, extreme temperatures, water scarcity, and more.


Solar PV requires too much land to replace fossils

Preface. This is a brief summary of the Capellan-Perez paper that calculates the land needed for solar to replace electricity, as well as the land needed if solar were to replace all of society’s energy use (i.e., transportation, manufacturing, industry, heating of homes and buildings, and so on).  The land needed was estimated for each of these cases in 40 different nations.

Another study I stumbled on looking for more insight into this paper estimates that 16 of 48 states in the U.S. have insufficient land for solar power to replace fossil fuels (Li 2018).

The authors’ estimates of the land needed, while five to ten times higher than other researchers’, are still quite generous.  They don’t subtract land totally unsuitable for solar farms, which require level ground, preferably south-facing, near high-capacity transmission lines, within a power grid that can handle the excess capacity produced, not in a sensitive or protected area, and able to overcome all opposition such as military objections and NIMBY, while remaining financially feasible, since speculators often drive up land prices in areas with several solar farms. So whatever their land estimates, the actual suitable land is probably much less.

Here is the press release from Universidad de Valladolid that does a good job of summarizing the paper which I found at the last minute:

“While fossil fuels represent concentrated underground deposits of energy, renewable energy sources are spread and dispersed along the territory. Hence, the transition to renewable energies will intensify the global competition for land. In this analysis, we have estimated the land-use requirements to supply all currently consumed electricity and final energy with domestic solar energy for 40 countries (27 member states of the European Union (EU-27), and 13 non-EU countries: Australia, Brazil, Canada, China, India, Indonesia, Japan, South Korea, Mexico, Russia, Turkey, and the USA). We focus on solar since it has the highest power density and biophysical potential among renewables.

The results show that for many advanced capitalist economies the land requirements to cover their current electricity consumption would be substantial, the situation being especially challenging for those located in northern latitudes with high population densities and high electricity consumption per capita. Replication of the exercise to explore the land-use requirements associated with a transition to a 100% solar powered economy indicates this transition may be physically unfeasible for countries such as Japan and most of the EU-27 member states. Their vulnerability is aggravated when accounting for the electricity and final energy footprint, i.e., the net embodied energy in international trade. If current dynamics continue, emerging countries such as India might reach a similar situation in the future.

Overall, our results indicate that the transition to renewable energies maintaining the current levels of energy consumption has the potential to create new vulnerabilities and/or reinforce existing ones in terms of energy, food security and biodiversity conservation.”

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

***

Capellan-Perez, I., et al. 2017. Assessing vulnerabilities and limits in the transition to renewable energies: Land requirements under 100% solar energy scenarios. Renewable and Sustainable Energy Reviews 77: 760-782 

https://www.researchgate.net/publication/316643762_Assessing_vulnerabilities_and_limits_in_the_transition_to_renewable_energies_Land_requirements_under_100_solar_energy_scenarios

The transition to renewable energies will intensify the global competition for land because wind and solar energy are highly dispersed and need large areas to capture this energy.  Yet most analyses have concluded that land will not pose a problem.  We focus on solar alone because it has a higher power density than wind, hydro, or biomass.

In this paper we estimate the land-use requirements to supply all currently consumed electricity and final energy with domestic solar energy for 40 countries considering two key issues that are usually not taken into account: (1) the need to cope with the variability of the solar resource from the highs of summer to the lows of winter, and (2) a realistic estimate of the land solar technologies will occupy.

Our results show that for many advanced capitalist economies the land requirements to cover their current electricity consumption would be substantial, the situation being especially challenging for those located in northern latitudes with high populations and electricity consumption per capita.

Assessing the implications in terms of land availability (i.e., land not already used for human activities), to generate electricity only, the EU-27 requires half of its available land.

If solar power were to supply all energy used, not just electricity – in other words, the energy contained in oil, coal, and natural gas used for transportation, industry, chemicals, cement, steel, mining, and myriad other endeavors, there isn’t enough land in Japan and most of the EU-27 states.

Why?  Because the power density of solar is a tiny fraction of what fossils provide us now. Fossils are very concentrated energy that can be consumed at high power rates of up to 11,000 averaged electric watts per square meter (We/m2).  But the net power density of solar power plants is just 2–10 We/m2, which is 1,100 to 5,500 times less than fossils.   Wind requires even more space than solar at 0.5–2 We/m2, as does hydropower at 0.5–7 We/m2, with biomass coming in dead last at ~0.1 We/m2, over one hundred thousand times less power per square meter than fossils.

Solar power is intermittent and has high seasonal variability, so redundant capacity as well as storage capacity is essential.  For redundant capacity: if one megawatt of solar is produced on 6–8 acres of land, at least three times more land would be needed to gather solar power for the majority of the day (in the United States solar availability averages 4.8 hours/day) when there’s little or no sun, and during winter.  Additional land would also be needed for energy storage, especially for the only commercial solution that exists, hydroelectric and pumped hydro storage, whose reservoirs take up a great deal of land.  For these reasons and many others, the authors estimate that a realistic land area is five to ten times higher than what other scientists have estimated.
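
The core arithmetic behind these land estimates is a division of average power demand by net power density, inflated for intermittency. A minimal sketch with round, illustrative inputs (the demand figure and factors below are my assumptions, not the paper’s country data):

```python
# Land needed ~ average power demand / net power density, scaled up for
# night, winter, and cloud (redundant capacity). Illustrative numbers only.

annual_demand_twh = 3000.0    # assumed electricity consumption, roughly EU-27 scale
avg_demand_watts = annual_demand_twh * 1e12 / 8760   # average electric watts

net_power_density = 5.0       # We/m2, mid-range of the 2-10 We/m2 cited above
overbuild = 3.0               # ~3x extra capacity for when the sun isn't shining

area_m2 = avg_demand_watts / net_power_density * overbuild
print(f"Average demand: {avg_demand_watts / 1e9:.0f} GW")
print(f"Land required: {area_m2 / 1e6:,.0f} km2, before adding storage land")
```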

The authors also note that their calculations are very conservative since they don’t take into account the International Energy Agency (IEA) estimate that world electricity demand will grow 2.1% per year on average between 2012 and 2040 (i.e., +80% cumulative growth in the period). In that case the amount of land needed is much higher than our estimate for current electricity consumption.

Another disadvantage of solar PV farms is that they compete with agriculture for land, since both need level land, and solar can also reduce biodiversity wherever it’s placed.

When the authors say “available land”, that means solar farms are competing with all the other uses we have for land: building homes and infrastructure, growing fiber and food, and so on.

Conclusion

Solar to replace all electricity generation only

Our findings show that the land needed is substantial, especially for countries in northern latitudes with high population densities and high electricity consumption per capita, such as the Netherlands, Belgium, the UK, Luxembourg, South Korea, Germany, Finland, Taiwan, Denmark and Japan (10–50% of available land). Moreover, accounting for the electricity footprint, i.e., the net energy embodied in international trade (in a world fueled only by solar power, the energy now expended in China and India on these 40 nations’ behalf would have to be generated at home), increases the amount of land to 11–60%.

Solar to replace all energy used by society

This is not possible for many nations within their own borders, especially the Netherlands, Luxembourg, Belgium, the UK, Denmark, Germany, South Korea, Taiwan, Finland, Japan, Ireland, Czech Republic, Sweden, Poland, Estonia and Italy.

End note: I had some trouble understanding this paper, partly because of the English used, and the academic language nearly all papers are written in, which is always a battle to translate.  I’m sure I missed a lot of good stuff because of it, so read the paper if this interests you.

Reference

Li, Y., et al. 2018. Land availability, utilization, and intensification for a solar powered economy. Proceedings of the 13th international symposium on process systems engineering.


75% of Earth’s Land Areas Are Degraded. Environmental damage threatens 3.2 billion people.

Source: United Nations University

Preface. Yikes: by 2050, 95% of Earth’s land could be degraded, reducing or even ending food production in many places and forcing hundreds of millions to migrate.

Whatever you’ve read in the past about the State of the World, it’s gotten even worse:

More than 75% of terra firma has been altered by humans, a figure that will likely rise to more than 90% by 2050, according to the first comprehensive assessment of land degradation and its impacts. The report, released this week by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services, was prepared by more than 100 experts from around the world. Crops and livestock affect the greatest area—a third of all land—by contributing to soil erosion and water pollution. Wetlands are among the most impacted of ecosystems; 87% have been destroyed over the past 3 centuries (Science 2018).

An even longer and more detailed report than that in the National Geographic below is here.

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

Leahy, S. 2018. 75% of Earth’s Land Areas Are Degraded. A new report warns that environmental damage threatens the well-being of 3.2 billion people.  National Geographic.

More than 75% of Earth’s land areas are substantially degraded, undermining the well-being of 3.2 billion people, according to the world’s first comprehensive, evidence-based assessment. These lands have either become deserts, are polluted, or have been deforested and converted to agricultural production; this degradation is also a main cause of species extinctions.

If this trend continues, 95% of the Earth’s land areas could become degraded by 2050. That would potentially force hundreds of millions of people to migrate, as food production collapses in many places, the report warns. (Learn more about biodiversity under threat.)

“Land degradation, biodiversity loss, and climate change are three different faces of the same central challenge: the increasingly dangerous impact of our choices on the health of our natural environment,” said Sir Robert Watson, chair of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES), which produced the report (launched Monday in Medellin, Colombia).

IPBES is the “IPCC for biodiversity”—a scientific assessment of the status of non-human life that makes up the Earth’s life-support system. The land degradation assessment took three years and more than 100 leading experts from 45 countries.

Rapid expansion and unsustainable management of croplands and grazing lands is the main driver of land degradation, causing significant loss of biodiversity and impacting food security, water purification, the provision of energy, and other contributions of nature essential to people. This has reached “critical levels” in many parts of the world, Watson said in an interview.

Underlying Causes

Wetlands have been hit hardest, with 87% lost globally in the last 300 years. Some 54% have been lost since 1900. Wetlands continue to be destroyed in Southeast Asia and the Congo region of Africa, mainly to plant oil palm.

Underlying drivers of land degradation, says the report, are the high-consumption lifestyles in the most developed economies, combined with rising consumption in developing and emerging economies. High and rising per capita consumption, amplified by continued population growth in many parts of the world, is driving unsustainable levels of agricultural expansion, natural resource and mineral extraction, and urbanization.

Land degradation is rarely considered an urgent issue by most governments. Ending land degradation and restoring degraded land would get humanity one third of the way to keeping global warming below 2°C, the target climate scientists say we need to avoid the most devastating impacts. Deforestation alone accounts for 10 percent of all human-induced emissions.

Reference

News at a Glance. 2018. Alarm over land degradation. Science 359: 1444.


One less worry: the magnetic field flipping between north and south poles is not the end of the world

Preface.  The geomagnetic field reversal of polarity has occurred thousands of times in the geological past. We are overdue for another. Indeed, Earth’s dipole has decreased in strength by nearly 10% since it was first measured in 1840. It could happen within the next 2,000 years.

If the magnetic poles flip, it is likely that solar radiation storms will crash power grids, satellites, and electronic communications for the roughly 10,000 years a reversal takes, based on what we know of past reversals.

But not to worry: by 2100 there won’t be an electric grid, satellites, or electronic communications, because there won’t be enough oil, coal, and natural gas left to run them.  Nor wind and solar power, which also depend on fossils at every single step of their life cycle.

By the time the poles flip, we’ll be back to horse-drawn carriages, so not having GPS won’t be a big deal.   In a world that’s gone back to wood as the main energy and infrastructure resource, as in all past civilizations before fossils, no one is likely to even notice the magnetic field is weak. Though we should feel sorry for migrating birds; it might throw them for a loop.

Theoretical physicist Richard Feynman once tried to describe what a magnetic field looked like: “Is it any different from trying to imagine a room full of invisible angels? No, it’s not like imagining invisible angels. It requires a much higher degree of imagination to understand the electromagnetic field than to understand invisible angels.”

Perhaps Feynman would have had a better idea of what a magnetic field looks like if he’d gone to the Arctic Circle in the winter — auroras are electromagnetic fields shimmering and dancing across the night sky.

Feynman’s image of angels is apt, though, because the study of magnetism was once part of religion, magic and natural philosophy. If the author had written this book a few hundred years ago, she might have been burned at the stake for heresy.

Sure, if the poles flipped within the next 50 years, it would be a real disaster, just see my posts on an electromagnetic pulse here for details.  But the odds are good your great grandchild won’t even know it’s happened.

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

***

Buffett, B. 2018. A candid portrait of the scientists studying Earth’s declining magnetism warns of potential peril if the poles swap places. Science.

A book review of Alanna Mitchell, 2018, “The Spinning Magnet: The Electromagnetic Force That Created the Modern World–and Could Destroy It”, Dutton.

Earth’s magnetic field protects the environment from the harsh conditions of space, and its strength has been declining since Carl Friedrich Gauss first measured it in the 1830s. The decline suggests that the magnetic field may flip in less than 2,000 years.  The last time this happened was 780,000 years ago.

The outcome would be a substantial lowering of our protective shield. Should that happen again, the weak magnetic field would wreak havoc on our power grids and other infrastructure.

Recent examples of failures in this protective barrier (Kappenman 1997) serve to highlight the problem. A large solar storm in March 1989 sent high levels of charged particles streaming toward Earth. These particles impinged on the magnetic field and induced electric currents through power grids in Quebec, Canada. The ensuing blackout affected 6 million customers. A reduction in the field strength would allow charged particles to penetrate deeper into the Earth system, causing greater damage with even modest solar storms. A substantial and sustained collapse of the magnetic field during a reversal would likely end our present system of power distribution.

Throughout the book, there is a clear and effective attempt to cast a spotlight on the individuals who have contributed to our understanding of Earth’s magnetic field. Mitchell has a sharp eye for mannerisms and a vivid way of bringing personalities to the page. Her explanations are aimed at a nontechnical audience, and the analogies she uses to describe complex scientific ideas are always entertaining. For example, a crowded washroom at a “beer-soaked” sporting event serves as the starting point for an illustration of Pauli’s exclusion principle. Her enthusiasm for the book’s subject matter shines throughout.

There is little doubt that the magnetic field will reverse again. In the meantime, The Spinning Magnet gives readers a nontechnical description of electromagnetism and a measured assessment of the possible consequences for our modern world if it does so in the near future.

Reference

Kappenman, J. G., et al. 1997. Space weather from a user’s perspective: Geomagnetic storm forecasts and the power industry. American Geophysical Union 78: 37–45.


Crash alert: China’s resource crisis could be the trigger

Preface.  Way to go, Nafeez Ahmed: your second home run of reality-based reporting on the energy crisis this week.  Countless economists within the mainstream media predict an economic crisis worse than that of 2008, but they totally ignore energy. How refreshing to see an article where energy is front and center in explaining why there may be an economic crash in the future.

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

Nafeez Ahmed. September 12, 2018. The next financial crash is imminent, and China’s resource crisis could be the trigger. Over three decades, the value of energy China extracts from its domestic oil, gas and coal supplies has plummeted by half. Medium.com

China’s economic slowdown could be a key trigger of the coming global financial crisis, but one of its core drivers — China’s dwindling supplies of cheap domestic energy — is little understood by mainstream economists.

All eyes are on China as the world braces itself for what a growing number of financial analysts warn could be another global economic recession.

In a BBC interview to mark the 10th anniversary of the global financial crisis, Bank of England Governor Mark Carney described China as “one of the bigger risks” to global financial stability.

The Chinese “financial sector has developed very rapidly, and it has many of the same assumptions that were made in the run-up to the last financial crisis,” he warned:

“Could something like this happen again?… Could there be a trigger for a crisis — if we’re complacent, of course it could.”

Since 2007, China’s debts have quadrupled. According to the IMF, its total debt is now about 234% of GDP, which could rise to 300% by 2022. British financial journalist Harvey Jones catalogues a range of observations from several economists essentially warning that official data may not reflect how badly China’s economy is actually decelerating.

The great hope is that all this is merely a temporary blip as China transitions from a focus on manufacturing and exports toward domestic consumption and services.

Meanwhile, China’s annual rate of growth continues to decline. The British Foreign Office (FCO) has been monitoring China’s economic woes closely, and in a recent spate of monthly briefings this year has charted what appears to be its inevitable decline.

Last month, the FCO’s China Economics Network based out of the British Embassy in Beijing documented that China’s economy had “further softened… with indicators weakening across the board”.

The report found that: “Investment, industrial production, and retail sales all weakened, despite easing measures”; and noted that high-level Chinese measures to sustain economic growth were running out of steam.

China’s economic slowdown, moreover, coincides with brewing expectations that Wall Street’s longest running stock market bull run could be about to end soon.

One analysis of this sort came from Wall Street veteran Mark Newton, former Chief Technical Analyst at multi-billion dollar hedge fund Greywolf Capital, and prior to that a Morgan Stanley technical strategist.

Newton predicts that US stocks are close to peaking out, leading to a massive 40–50 percent plunge starting in the spring of 2019 or by 2020 at the latest. He explained that:

“Technically there have started to be warning signs with regards to negative momentum divergence (an indicator that can signal a pending trend reversal), which have appeared prior to most major market tops, including 2000 and 2007.”

Newton’s forecast is similar to a prediction made by US economist Professor Robert Aliber of the University of Chicago Booth School of Business. Earlier this year, INSURGE reported exclusively on Aliber’s forecast of a 40-50 percent stock market crash (in or shortly after 2018), based on examining the dynamic of previous banking crises.

The vulnerability of both the US and Chinese economies — not to mention the string of other vulnerabilities in numerous other countries from Brexit to Turkey to Italy — demonstrates that whatever the actual trigger might be, the resulting impact is likely to have a domino effect across multiple interconnected vulnerabilities.

This could well lead to a global financial crash scenario far worse than what began in 2008.

But financial analysts have completely missed a deeper biophysical driver of China’s economic descent: energy.

Last October, INSURGE drew attention to a new scientific study led by the China University of Petroleum in Beijing, which found that China is about to experience a peak in its total oil production as early as 2018.

Without finding an alternative source of “new abundant energy resources”, the study warned, the 2018 peak in China’s combined conventional and unconventional oil will undermine continuing economic growth and “challenge the sustainable development of Chinese society.”

These conclusions have been corroborated by a new paper published this February in the journal Energy, once again led by a team at the China University of Petroleum.

The study applies the measure of Energy Return On Investment (EROI), a simple but powerful ratio to calculate how much energy is being invested to extract a particular quantity of energy.

The team attempted a more refined EROI calculation, noting that standard calculations look at energy obtained at the wellhead compared to what is used to extract it; whereas a more precise measure would look at energy available at ‘point of use’ (so, after extraction from the wellhead, processing and transportation until it is actually used for something tangible in society).

Using this approach to EROI, the study finds that over a period of around three decades (between 1987 and 2012), the value of the energy extracted from China’s domestic fossil fuel base declined by more than half from 11:1 to 5:1.

This means that more and more energy is being expended to extract a decreasing amount of energy: a process that is gradually undermining the rate of economic growth.

A similar finding extends to China’s coal consumption:

“In 1987, the energy production sectors consumed 1 ton of standard coal equivalent (TCE) in energy inputs for every 10.01 TCE of net energy produced. However, in 2012, this number declined to 4.25.”
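
Point-of-use EROI simply widens the denominator: delivered energy is divided not by extraction energy alone but by everything spent along the chain. A sketch of the ratio (the individual energy flows are illustrative, scaled so the totals reproduce the paper’s roughly 10:1 and 4.25:1 figures):

```python
def point_of_use_eroi(delivered: float, extraction: float,
                      processing: float, transport: float) -> float:
    """EROI measured where energy is actually used: net energy delivered
    per unit of energy invested across extraction, processing, transport."""
    return delivered / (extraction + processing + transport)

# Hypothetical flows in tons of standard coal equivalent (TCE); the three
# input terms sum to the 1 TCE of inputs quoted above.
print(point_of_use_eroi(delivered=10.01, extraction=0.60,
                        processing=0.25, transport=0.15))   # ~10.0, as in 1987
print(point_of_use_eroi(delivered=4.25, extraction=0.60,
                        processing=0.25, transport=0.15))   # 4.25, as in 2012
```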

The study uses this data to simulate the impact on China’s GDP, and concludes that China’s declining GDP is directly related to the declining EROI or energy value of its domestic hydrocarbon resource base.

But it isn’t just China experiencing an EROI decline. This is a global phenomenon, one that was recently noted by a scientific report to the United Nations that I covered for VICE, which warned that the global economy as a whole is shifting to a new era of declining resource quality.

This doesn’t mean we are ‘running out’ of fossil fuels — but it means that as the resource quality of those fuels decline, we increase the costs on our environment and systems of production, all of which increasingly impact on the health of the global economy.

As long as mainstream economic institutions remain blind to the fundamental biophysical basis of economics, as masterfully articulated by Charles Hall and Kent Klitgaard in their seminal book, Energy and the Wealth of Nations: An Introduction to BioPhysical Economics, they will remain in the dark about the core structural reasons why the current configuration of global capitalism is so prone to recurrent crisis and collapse.

Dr. Nafeez Ahmed is the founding editor of INSURGE intelligence. Nafeez is a 17-year investigative journalist, formerly of The Guardian where he reported on the geopolitics of social, economic and environmental crises. Nafeez reports on ‘global system change’ for VICE’s Motherboard, and on regional geopolitics for Middle East Eye. He has bylines in The Independent on Sunday, The Independent, The Scotsman, Sydney Morning Herald, The Age, Foreign Policy, The Atlantic, Quartz, New York Observer, The New Statesman, Prospect, Le Monde diplomatique, among other places. He has twice won the Project Censored Award for his investigative reporting; twice been featured in the Evening Standard’s top 1,000 list of most influential Londoners; and won the Naples Prize, Italy’s most prestigious literary award created by the President of the Republic. Nafeez is also a widely-published and cited interdisciplinary academic applying complex systems analysis to ecological and political violence.


The coming crash in 2020 from high diesel prices driven by clean-emissions rules for oceangoing ships

Preface.  Ships made globalization possible, and play an essential role in our high standard of living, carrying 90% of global goods traded. But the need for a new, cleaner fuel may cause the next economic crisis.  What follows are excerpts from P. K. Verleger’s 2018 article “$200 Crude, the economic crisis of 2020, and policies to prevent catastrophe”.

Here are a few summary paragraphs from this paper:

The global economy likely faces an economic crash of horrible proportions in 2020 due to a lack of low-sulfur diesel fuel for oceangoing ships when a new International Maritime Organization rule takes effect January 1, 2020. Until now, ships have burned “the dregs” of crude oil, full of sulfur and other pollutants, because it was the least expensive fuel available.

The economic collapse I predict will occur because the world’s petroleum industry lacks the capacity needed to supply additional low-sulfur fuel to the shipping industry while meeting the requirements of existing customers such as farmers, truckers, railroads, and heavy equipment operators.

Operators of simple refineries, in theory, could survive the IMO 2020 transition by changing the crude oil they process to “light sweet” crudes that can yield high volumes of low sulfur distillate, crudes such as those from Nigeria.  There is, though, a market constraint to this option. Volumes of low-sulfur crude oil are limited, and supplies are less certain because these crudes are produced primarily in Nigeria, a country that suffers frequent, politically induced market disruptions. Thus, when the inflexible refiners begin bidding for Nigerian oil, prices will rise, perhaps as much as three or four-fold.

IEA economists explained at the time that the oil price rise from 2007 to 2008 resulted in part from the frenzied bidding for limited quantities of low-sulfur crude oil, especially supplies from Nigeria. Then, as today, many refineries could not manufacture low-sulfur diesel from other crude-oil types, such as the Middle East’s light crude oils, because they lacked the needed equipment. In 2008, such refiners contentiously bid for low-sulfur crude, driving prices higher as they sought to avoid closure. This inability to process higher-sulfur crude oils created a peculiar situation. Ships loaded with such crudes were stranded on the high seas because the cargo owners could not find buyers.

At the same time, prices for light sweet crudes rose to record levels. The desperate need for low-sulfur crudes caused buyers to bid their prices higher and higher. This situation will reoccur in 2020. The global refining industry will not be able to produce the additional volumes of low-sulfur diesel and low-sulfur fuel oil required by the maritime industry. In some cases, refiners will close because they cannot find buyers for the high-sulfur fuel they had sold as ship bunkers. In others, refiners will seek lighter, low-sulfur crude oils, bidding up prices as they did in 2008. This price increase may be double the 2008 rise, however, because the magnitude of the fuel shift is greater and the refining industry is less prepared.

The crude price rise will send all product prices higher. Diesel prices will lead, but gasoline and jet fuel will follow. US consumers could pay as much as $6 per gallon for gasoline and $8 or $9 per gallon for diesel fuel.

Below are excerpts about peak diesel from this article: Antonio Turiel, Ugo Bardi. 2018. For whom is peak oil coming? If you own a diesel car, it is coming for you! Cassandra’s legacy.

Six years ago we commented on this same blog that, of all the fuels derived from oil, diesel was the one that would probably see its production decline first. The reason why diesel production was likely to recede before that of, for example, gasoline had to do with the fall in conventional crude oil production since 2005 and the increasing weight of the so-called “unconventional oils,” bad substitutes not always suitable to produce diesel.

…since 2007 (and therefore before the official start of the economic crisis) the production of fuel oils has declined.

Surely, in this shortage, we can start to note the absence of some 2.5 Mb/d of conventional oil (more versatile for refining and therefore more suitable for producing fuel oil), as the International Energy Agency told us in its last annual report. This explains the urgency to get rid of diesel that has lately shaken the chancelleries of Europe: they hide behind real environmental problems (which have always troubled diesel, but were never given much attention) to attempt a quick adaptation to a situation of scarcity. A shortage that could be brutal, since nothing was done to prepare for a situation that has long been seen coming.

The production of heavy gas oil has been dropping since 2007, when there was not as much regulatory interest as there seems to be now. There is one aspect of the new regulations worth highlighting here: from 2020 onwards, all ships will have to use fuel with a lower sulfur content. Since the large freighters typically use very heavy fuel oils, that requirement, they say, raises fears of a diesel shortage. In fact, from what we have discussed in this post, what seems to be happening is that heavy fuel oils are declining very fast and ships will have no choice but to switch to diesel. That this is going to cause diesel shortages is more than evident. It is an imminent problem, even more so than the oil price spikes that, according to the IEA, will appear by 2025.

Fracked oil mainly serves to make gasoline, and that is why the diesel problem remains.

That is why, dear reader, when you are told that the taxes on your diesel car will be raised in a brutal way, now you know the reason. It is preferred to adjust these imbalances with a mechanism that looks like a market (though it is actually less free and more managed) rather than telling the truth. From now on, what can be expected is a real persecution of cars with internal combustion engines (gasoline will be next, a few years after diesel).

And more from this author in a different article:

Conventional crude oil arrived at a peak in 2005 (followed by minor peaks in 2012, 2015 and 2016, confirming a plateau – translator’s note; data from Art Berman). This is a recognized fact, acknowledged even by the International Energy Agency (IEA) in its World Energy Outlook (WEO) of 2010.

Conventional crude oil is still most of the oil we consume today worldwide, more than 70%, but its production is declining: 69.5 Mb/d was produced in 2005, versus some 67 Mb/d today. That is, some 2.5 Mb/d less.

Conventional crude is the easiest to extract and also the most versatile, the oil with the widest range of uses. Specifically, it is the most suitable for refining into diesel.

To compensate for the decline of conventional crude, the good oil, several substitutes were gradually introduced, of the most diverse kinds: biofuels, bitumen, light tight oil, liquid fuels from natural gas, and more. All of them share two characteristics: they are more costly to extract, and their production is quite limited and cannot rise much.

Besides, most of these so-called “non-conventional oils” are not suitable for refining or distilling into diesel. That’s why we have the present problems with diesel. The more conventional crude production falls, the more diesel production will drop.

In addition, the latest IEA 2018 report says that if oil companies continue not to invest in oil exploration and production, as they have for the past few years, by 2025 we are likely to be short 34 million barrels per day, about a third of all liquid fuels we consume today.

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

Verleger, P. K., Jr. July 2018. $200 Crude, the economic crisis of 2020, and policies to prevent catastrophe. Pkverlegerllc.com

The proverb “For want of a nail” ends by warning that a kingdom was lost “all for want of a horseshoe nail.” The proverb dates to 1230, and as Wikipedia explains (https://tinyurl.com/n7sb629), the aphorism warns of the importance of logistics, of having sufficient supplies of critical materials.  The global economy likely faces an economic crash of horrible proportions in 2020, not for want of a nail but for want of low-sulfur diesel fuel. The lack of adequate supplies promises to send the price of this fuel—which is critical to the world’s agricultural, trucking, railroad, and shipping industries—to astoundingly high levels. Economic activity will slow and, in some places, grind to a halt. Food costs will climb as farmers, unable to pay for fuel, reduce plantings. Deliveries of goods and materials to factories and stores will slow or stop. Vehicle sales will plummet, especially those of gas-guzzling sport utility vehicles (SUVs). One or more major US automakers will face bankruptcy, even closure. Housing foreclosures will surge in the United States, Europe, and other parts of the world. Millions will join the ranks of the unemployed as they did in 2008. All for the want of low-sulfur diesel fuel or gasoil.

The International Maritime Organization (IMO) decreed that oceangoing ships must adopt measures to limit sulfur emissions or burn fuels containing less than 0.5% sulfur—in other words, switch to low-sulfur diesel fuel. The sulfur rule takes effect January 1, 2020.

The economic collapse I predict will occur because the world’s petroleum industry lacks the capacity needed to supply additional low-sulfur fuel to the shipping industry while meeting the requirements of existing customers such as farmers, truckers, railroads, and heavy equipment operators. These users purchase diesel fuel or gasoil, the petroleum product that accounts for the largest share of products consumed. In most countries, they must buy low-sulfur diesel fuel to reduce pollution.

Economists at the International Energy Agency have warned that these prices must increase 20 to 30%.

While higher prices are worrisome, they should not by themselves lead to a major recession. After all, diesel fuel prices have increased more than 30% at various times this decade. However, these estimates assume that crude prices do not change.

Difficulties will arise because crude oil is not a homogeneous commodity like, for example, bottles of Jack Daniels Kentucky sour mash. Instead, crude oils vary regarding their qualities and composition, and these differences exceed those of most other goods.

Two important distinguishing factors among crude oils are how much sulfur they contain and the diesel fuel volume they produce when refined.  Some crude oils—the light sweet varieties—contain minimal sulfur and produce large amounts of low-sulfur diesel. A far greater number—the heavy sour crudes— contain a higher percentage of sulfur and do not produce diesel that meets environmental sulfur content standards without expensive additional processing.

While many world refineries can produce low-sulfur diesel fuel from heavy sour crudes, a large number have not been equipped to do this yet and thus cannot help in meeting the IMO 2020 requirements.

Much of the incremental crude that will be supplied in 2019 as world production increases will be Arab Heavy. The distillate produced from this crude contains between 1.8 and 2% sulfur.

Much of the sulfur in crude is not removed during refining but rather ends up in “fuel oil,” the “dregs” or residue left over after all the high-value products have been distilled out. It is the cheapest liquid fuel available. It is also viscous (it must be heated before use) and contains many pollutants, particularly sulfur, that are harmful to humans, animals, and plants. Since the turn of the 21st century, most fuel oil has been consumed by the shipping industry due to the environmental restrictions on other uses. It was only a matter of time before those restrictions came to marine fuel.

In order to make enough clean fuel available to vessels, very large price hikes may be required to suppress non-maritime use.

Refiners will need to “destroy” or find new markets for up to two million barrels per day of high-sulfur fuel oil. Some of it will be sold to oil-burning power plants such as those in the Middle East, which could and likely will shift to residual fuel oil to save money.

Other volumes of high-sulfur fuel oil will be sold to refiners configured with cokers, where they will be “destroyed,” to use the oil industry’s language. Cokers split heavy fuel or heavy crude into light products and coke. ExxonMobil’s new coker at its Antwerp refinery, for example, will “turn high sulfur oils created as a byproduct of the refining process into various types of diesel, including shipping fuels that will meet new environmental laws.” These units will be critical in converting fuel that can no longer be burned in ships into marketable products. The rub is that cokers are very expensive (ExxonMobil’s will cost more than $1 billion) and require significant construction time.

The magnitude of the coming oil market transformation is unprecedented. It is this historic increase in demand for low-sulfur diesel, combined with the equally historic need to dispose of unwanted fuel oil, that will, absent moderating actions by nations and the IMO, cause an economic collapse in 2020.

Today, the high sulfur fuel oil price is roughly 90% of the crude price. In 2020, it could fall as low as 10% of the crude price. As a result, the price of low-sulfur distillate, which today sells for 120% of the crude price, would need to rise to perhaps 200% of the crude price to compensate the owners of refineries with limited flexibility that can produce some low-sulfur diesel along with equal or larger volumes of high sulfur fuel oil. Should prices of low-sulfur distillate fail to rise to such levels, these facilities will have to close.
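
To see how a roughly 200% distillate price falls out of this, consider the revenue per barrel of a simple refinery. The yields below are illustrative assumptions (not Verleger’s figures): once high-sulfur fuel oil collapses from 90% to 10% of the crude price, the distillate price must rise until per-barrel revenue recovers.

```python
# Break-even sketch for a simple refinery; all prices as fractions of crude.
# Yields are assumed for illustration, not taken from Verleger's paper.
yields = {"distillate": 0.35, "hs_fuel_oil": 0.45, "other": 0.20}
OTHER_PRICE = 1.10   # assumed price of remaining products vs. crude

def revenue(distillate_price: float, fuel_oil_price: float) -> float:
    """Revenue per barrel of crude, as a multiple of the crude price."""
    return (yields["distillate"] * distillate_price
            + yields["hs_fuel_oil"] * fuel_oil_price
            + yields["other"] * OTHER_PRICE)

today = revenue(distillate_price=1.20, fuel_oil_price=0.90)

# After IMO 2020, fuel oil fetches only 10% of crude. Solve for the
# distillate price that restores today's revenue per barrel:
needed = (today - yields["hs_fuel_oil"] * 0.10
          - yields["other"] * OTHER_PRICE) / yields["distillate"]
print(f"Distillate must sell at ~{needed:.0%} of crude")   # roughly 220%
```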

Owners of simple refineries could attempt to procure a different crude feedstock. The only way for these refineries to vary their output is by changing the crude processed. Some crude oils, as mentioned, produce more low-sulfur diesel and less high-sulfur fuel oil than others. Operators of simple refineries, in theory, could survive the IMO 2020 transition by changing the crude oil they process to “light sweet” crudes that can yield high volumes of low sulfur distillate, crudes such as those from Nigeria.  There is, though, a market constraint to the third option. Volumes of low-sulfur crude oil are limited, and supplies are less certain because these crudes are produced primarily in Nigeria, a country that suffers frequent, politically induced market disruptions. Thus, when the inflexible refiners begin bidding for Nigerian oil, prices will rise, perhaps as much as three or four-fold.

Economist James Hamilton asserts strongly, for instance, that the oil price increase in 2008 would have caused a recession on its own. The price rise had already exacerbated a significant downturn in the US automobile industry. General Motors, Ford, and Chrysler had begun closing plants and laying off workers early in the year as sales of SUVs and many autos all but stopped due to lack of demand.

IEA economists explained at the time that the oil price rise from 2007 to 2008 resulted in part from the frenzied bidding for limited quantities of low-sulfur crude oil, especially supplies from Nigeria. Then, as today, many refineries could not manufacture low-sulfur diesel from other crude-oil types, such as the Middle East’s light crude oils, because they lacked the needed equipment. In 2008, such refiners contentiously bid for low-sulfur crude, driving prices higher as they sought to avoid closure. This inability to process higher-sulfur crude oils created a peculiar situation. Ships loaded with such crudes were stranded on the high seas because the cargo owners could not find buyers.

At the same time, prices for light sweet crudes rose to record levels. The desperate need for low-sulfur crudes caused buyers to bid their prices higher and higher. This situation will recur in 2020. The global refining industry will not be able to produce the additional volumes of low-sulfur diesel and low-sulfur fuel oil required by the maritime industry. In some cases, refiners will close because they cannot find buyers for the high-sulfur fuel they had sold as ship bunkers. In others, refiners will seek lighter, low-sulfur crude oils, bidding up prices as they did in 2008. This price increase may be double the 2008 rise, however, because the magnitude of the fuel shift is greater and the refining industry is less prepared.

The crude price rise will send all product prices higher. Diesel prices will lead, but gasoline and jet fuel will follow. US consumers could pay as much as $6 per gallon for gasoline and $8 or $9 per gallon for diesel fuel.

The high petroleum product prices will have two impacts. First, prices of everything consumed in the economy will rise. Second, high prices will force consumers to spend less on other goods and services, which will depress demand for airline travel, restaurant dinners, and new automobiles, to mention just a few. The potential impact of higher fuel prices on everything purchased across the economy is obvious. They will raise costs in the agricultural sector, leading to higher food prices. They will boost delivery costs and airline ticket prices.

Sadly, the economic losses could be much greater than any experienced in the prior five decades. The US economy will be further handicapped by the federal government’s debt: the ratio of US debt to GDP has increased from 60% in 2008 to 103% today.

The increase in debt, combined with the tax cuts enacted in 2017, leaves the country with little room to address a recession. Instead, a large oil price increase could lead to an extraordinarily difficult downturn.

The government might find it impossible to fund an infrastructure program. Many states might be unable to provide income supplements to the unemployed. Emerging market nations would suffer as well. These nations would be especially exposed because they already face significant economic weakness as a strengthening dollar and rising US interest rates cause large declines in bond and equity markets in countries such as Brazil and Turkey.

If it were a country, the global shipping industry would rank as the 6th largest emitter of greenhouse gases worldwide.

The IMO adopted a rule in 2008 that contemplated removing most sulfur from fuels used in the world’s oceangoing vessels, which number more than sixty thousand.

Oil production in Venezuela, a major player in the global oil market, collapsed. OPEC, Russia, and several other producing countries reduced output to force inventory liquidations and raise prices. To top it off, in 2018 the United States seems intent on reinstating sanctions on Iran, possibly removing a crude supply source that might be essential in cushioning price increases. These events and actions will all influence market developments in 2020 when the IMO rule becomes effective.

The amount of crude available for refining has a direct impact on the availability of diesel fuel. At the most basic level, world refiners can produce roughly 560,000 barrels of diesel from every million barrels of crude refined, according to Morgan Stanley analysts, so 1.8 million barrels per day of crude must be refined to produce one million barrels per day of diesel.

Global crude production of one hundred million barrels per day in 2020 would require an 8% increase in output from 2017. The annual rate of increase would need to be 3% per year, three times the rate of increase for the last decade. Achieving this boost will be difficult, if not impossible, should the changes in the global supply situation noted at the start of the section— Venezuela’s production decline, OPEC’s output restraint, and the reinstatement of US sanctions on Iran—remain unchanged.
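A quick check of the arithmetic in both paragraphs (the 2017 baseline is inferred from the stated 8% increase rather than quoted directly):

```python
# Crude needed per barrel of diesel, from the ~56% yield quoted above.
diesel_yield = 560_000 / 1_000_000
print(f"crude per 1 Mbpd diesel: {1 / diesel_yield:.1f} Mbpd")  # ~1.8

# Required growth: 100 Mbpd in 2020 vs. an implied 2017 baseline.
baseline_2017 = 100 / 1.08                     # ~92.6 Mbpd, inferred
annual = (100 / baseline_2017) ** (1 / 3) - 1  # compounded over 3 years
print(f"required annual growth: {annual:.1%}")  # ~2.6%, roughly 3%
```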

The collapse of Venezuela’s oil production was not anticipated in 2016. Oil output from the country totaled around two million barrels per day when the IMO program was ratified. Two years later, output had declined to 1.5 million barrels per day. By 2020, Venezuela may be producing no crude at all, which would remove a further 1.5 million barrels per day from the global market.

Taken together, the loss of Venezuelan output, the inventory reduction engineered by OPEC, Russia, and a few other producers, and the renewed sanctions on Iran will subtract 2.5 to three million barrels per day from the market.

These estimates assume consumers in every country accept the higher prices. This assumption is questionable, however. Recently, truck drivers in Brazil brought the nation to a standstill while demanding lower diesel prices. The Brazilian government eventually gave in to the drivers’ demands when gasoline stations ran dry and grocery store shelves emptied. The president cut the diesel price by twelve percent, reduced the road tolls paid by trucks, and offered other benefits to end the strike. Truck drivers in other countries could respond in the same way to high prices.

Believe it or not, this prediction must be viewed as optimistic even though the economic consequences of oil selling for $130 per barrel would be terrible. It is optimistic because it assumes market disruptions will be limited to a loss of Iranian crude and the collapse of Venezuelan output. It also assumes the pipeline constraints that keep US “light tight” crude oil (LTO) away from the market today will be resolved and that world refiners will be able to process the LTOs. Finally, it assumes that production in Canada, Libya, and Nigeria continues uninterrupted and that no other disruptive events occur.

US LTOs may create problems for refineries even if they get to market. These crudes are very light. Many refiners must blend other crudes with them before processing. The analysis here assumes this obstacle will be overcome.

A large oil price increase could create a catastrophe where debt cannot be serviced, and a situation such as the Asian debt crisis of 1997 could result.

Any action taken would probably occur only after the economic collapse was well under way, just as the financial problems that caused the 2008 meltdown were addressed only after the crisis struck.

Many IMO member states see global warming as a serious issue and strongly favor the Paris Agreement adopted in 2015. The United States withdrew from that agreement in 2017. Thus, one can envision the IMO members refusing to moderate the 2020 rule unless the United States reverses course and rejoins the Paris climate agreement. The United States has no control over the IMO and so can do nothing on its own; it is part of a very small minority there.

The Trump administration’s trade policy will further weaken the willingness of other nations to ease restrictions to help the US. The United States has followed an aggressive unilateral trade strategy since Donald Trump became president. His administration’s policies have left many frustrated and angry. The upcoming economic squeeze tied to the IMO rule provides them a way to even the score.

Economic policies being followed by the Trump administration threaten to reduce the amount of goods moving in international trade. Ironically, a trade war could decrease the amount of fuel used in international commerce, which would lessen the sulfur rule’s impact.

The IMO regulation on marine-fuel sulfur content, if left unchanged, will likely have widespread impacts on the petroleum sector. Crude oil prices could rise to $160 per barrel or higher as the rule takes effect, assuming no market disruptions. Prices could rise much higher with any disruption, even a moderate one. The higher prices will slow economic growth. If they breach $200 per barrel, they would likely lead to a recession or worse.

Posted in By People, Crash Coming Soon, Peak Oil, Ships and Barges

India wants to build dangerous fast breeder reactors

Preface. India was planning to build six fast breeder reactors in 2016, but now, in 2018, it has reduced the number to two. This is despite the high cost, instability, danger, and accident record of the 16 previous fast breeders attempted worldwide, all of which have shut down, including the Monju fast breeder in Japan, which began decommissioning in 2018.

Breeders that produce commercial power don’t exist. There are only four small experimental prototypes operating.

Breeder reactors are much closer to being bombs than conventional reactors – the effects of an accident would be catastrophic economically and in the number of lives lost if it failed near a city (Wolfson).

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

Ramana, M. V. 2016. A fast reactor at any cost: The perverse pursuit of breeder reactors in India. Bulletin of the Atomic Scientists.

Projections for the country’s nuclear capacity produced by India’s Department of Atomic Energy (DAE) call for constructing literally hundreds of breeder reactors by mid-century. For a variety of reasons, these projections will not materialize, making the pursuit of breeder reactors wasteful.

But first, some history. The DAE’s fascination with breeder reactors goes back to the 1950s. The founders of India’s atomic energy program, in particular physicist Homi J. Bhabha, did what most people in those roles did around that time: portray nuclear energy as the inevitable choice for providing electricity to millions of Indians and others around the world. At the first major United Nations-sponsored meeting in Geneva in 1955, for example, Bhabha argued for “the absolute necessity of finding some new sources of energy, if the light of our civilization is not to be extinguished, because we have burnt our fuel reserves. It is in this context that we turn to atomic energy for a solution… For the full industrialization of the under-developed countries, for the continuation of our civilization and its further development, atomic energy is not merely an aid; it is an absolute necessity.” Consequently, Bhabha proposed that India expand its production of atomic energy rapidly.

There was a problem, though. India had a relatively small amount of good quality uranium ore that could be mined economically. But it was known that the country did have large reserves of thorium, a radioactive element that was considered a “great potential source of energy.” Yet despite all the praise one often hears about it, thorium has a major shortcoming: It cannot be used to fuel a nuclear reactor directly but must first be converted into the chain-reacting element uranium-233 through a series of nuclear reactions. To produce uranium-233 in large quantities, Bhabha proposed a three-step plan that involved starting with the more readily available uranium ore. The first stage of this three-phase strategy involves the use of uranium fuel in heavy water reactors, followed by reprocessing the irradiated spent fuel to extract the plutonium. In the second stage, the plutonium is used to provide the startup cores of fast breeder reactors, and these cores would then be surrounded by “blankets” of either depleted or natural uranium to produce more plutonium. If the blanket were thorium, it would produce chain-reacting uranium-233. Finally, the third stage would involve breeder reactors using uranium-233 in their cores and thorium in their blankets. Breeder reactors, therefore, formed the basis of two of the three stages.

Bhabha was hardly alone in thinking of breeders. The first breeder reactor concept was developed in 1943 by Leó Szilárd, who was responding to concerns, shared by colleagues engaged in developing the first nuclear bomb, that uranium would be scarce. The idea of a phased program involving uranium and thorium had also been proposed in October 1954 by François Perrin, the head of the French Atomic Energy Commission, who argued that France would “have to use for power production both primary reactors [using natural or slightly enriched uranium] and secondary breeder reactors [fast neutron plutonium reactors] … in the slightly more distant future … this second type of reactor … may be replaced by slow neutron breeders using thorium and uranium-233. We have considered this last possibility very seriously since the discovery of large deposits of thorium ores in Madagascar.” (At that time, Madagascar was a French colony, achieving independence only in 1960.)

That was then. In the more than 60 years that have passed since the adoption of the three-phase plan, we have learned a lot about breeder reactors. Three of the important lessons are that fast breeder reactors are costly to build and operate; they have special safety problems; and they have severe reliability problems, including persistent sodium leaks.

These problems were observed in countries around the world, and have not been solved despite spending over $100 billion (in 2007 dollars) on breeder reactor research and development, and on constructing prototypes.

India’s own experience with breeders so far consists of one, small, pilot-scale fast breeder reactor, whose operating history has been patchy. The budget for the Fast Breeder Test Reactor (FBTR) was approved by the Department of Atomic Energy in 1971, with an anticipated commissioning date of 1976. But it was October 1985 before the reactor finally attained criticality, and a further eight years (i.e., 1993) elapsed before its steam generator began operating. The final cost was more than triple the initial cost estimate. But the reactor’s troubles were just beginning.

The FBTR’s operations have been marred by several accidents of varying intensity. Dealing with even relatively minor accidents has been complicated, and the associated delays have been long. As of 2013, the FBTR had operated for only 49,000 hours in 26 years, or barely 21 percent of the maximum possible operating time. Although the FBTR was originally designed to generate 13.2 megawatts of electricity, the most it has achieved is 4.2 megawatts. But rather than recognizing that the FBTR’s performance was typical of breeders elsewhere and learning the appropriate lesson—that they are unreliable and susceptible to shutdowns—the DAE characterizes this history as a “successful operation of FBTR” and describes the “development of Fast Breeder Reactor technology” as “one of the many salient successes” of the Indian nuclear power program.
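The quoted operating record can be sanity-checked directly (the 8,766 hours per year is a standard calendar average, not a figure from the article):

```python
HOURS_PER_YEAR = 8766  # calendar average, including leap years

fraction = 49_000 / (26 * HOURS_PER_YEAR)
print(f"share of maximum possible operating time: {fraction:.0%}")  # ~21%

# Peak electrical output achieved vs. the design rating:
print(f"peak vs. design output: {4.2 / 13.2:.0%}")  # ~32%
```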

Even before the Fast Breeder Test Reactor had been constructed, India’s Department of Atomic Energy embarked on designing a much larger reactor, the previously mentioned Prototype Fast Breeder Reactor, or PFBR. Designed to generate 500 megawatts of electricity, the PFBR would be nearly 120 times larger than its testbed cousin, the FBTR. The difficulties of such scaling-up are apparent when one considers the French experience in building the 1,240 megawatt Superphenix breeder reactor; that reactor was designed on the basis of experience with both a test and a 250-megawatt demonstration reactor and still proved a complete failure. Nonetheless, the DAE pressed on.

Full steam ahead. Work on designing the PFBR started in 1981, and nearly a decade later, the trade journal Nucleonics Week reported that the Indian government had “recently approved the reactor’s preliminary design and … awarded construction permits” and that the reactor would be on line by the year 2000.

That was not to be. After multiple delays, construction of the PFBR finally started in 2004; then, the reactor was projected to become critical in 2010. The following year, the director announced that the project “will be completed 18 months ahead of schedule.”

The saga since then has involved a series of delays, followed by promises of imminent project completion. The current promise is for a 2017 commissioning date. Regardless of whether that happens, the PFBR has already taken more than twice as long to construct as initially projected. Alongside the lengthy delay comes a cost increase of nearly 63 percent—so far.

Even at the original cost estimate, and assuming high prices for uranium ($200 per kilogram) and heavy water (around $600 per kilogram), my former colleague J. Y. Suchitra, an economist, and I showed several years ago that electricity from the PFBR will be about 80 percent more expensive than electricity from the heavy water reactors that the DAE itself is building. These assumptions were intended to make the PFBR look economically more attractive than it really will be. A lower uranium price makes electricity from heavy water reactors cheaper. On the global market, current spot prices of uranium are around $50 per kilogram and declining; they have not exceeded $100 per kilogram for many years. Likewise, the assumed heavy water cost was quite high; the United States recently purchased heavy water from Iran at $269 per kilogram, not $600.

The calculation also assumed that breeder reactors operate extremely reliably, with a load factor of 80%. (Load factors are the ratio of the actual amount of electrical energy generated by a reactor to what it would have produced had it operated continuously at its design level.) No breeder reactor has achieved an 80% load factor; in the real world, the UK’s Prototype Fast Reactor and France’s Phenix had load factors of 26.9% and 40.5%, respectively.
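Written out, the definition in parentheses is simply generated energy divided by rated power times elapsed hours. A minimal sketch, using an assumed 250 MW rating of the Phénix class for illustration:

```python
def load_factor(energy_mwh, rated_mw, hours):
    """Actual generation divided by what continuous full-power
    operation over the same period would have produced."""
    return energy_mwh / (rated_mw * hours)

# Illustrative: a 250 MW reactor producing 887,000 MWh in a year
# lands near Phenix's reported 40.5% load factor.
print(f"{load_factor(887_000, 250, 8766):.1%}")  # ~40.5%
```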

Consequently, even with very optimistic assumptions about the cost and performance of India’s Prototype Fast Breeder Reactor, and the deliberate choice of high costs for the inputs used in heavy water reactors, the PFBR cannot compete with nuclear electricity from the other kinds of reactors that India’s Department of Atomic Energy builds. With more realistic values, and after accounting for the significant construction cost escalation, electricity from the Prototype Fast Breeder Reactor could be 200 percent more expensive than that from heavy water reactors.

But such arguments don’t resonate with DAE officials. As one unnamed official told sociologist Catherine Mei Ling Wong, “India has no option … we have very modest resources of uranium. Suppose tomorrow, the import of uranium is banned … then you will have to live with this modest uranium. So … you have to have a fast reactor at any cost. There, economics is of secondary importance.” This argument is misleading because India’s uranium resource base is not a single fixed number. The resource base increases with continued exploration for new deposits, as well as technological improvements in uranium extraction. In addition, as with any other mineral, at higher prices it becomes economic to mine lower quality and less accessible ores. In other words, if the price offered for uranium is higher, the amount of uranium available will be larger, at least for the foreseeable future.

One must keep these factors in mind when making economic comparisons between breeder reactors and heavy water reactors. Even under the earlier set of assumptions, without the dramatic cost increase of the PFBR factored in, breeders become competitive only when uranium prices exceed $1,375 per kilogram—a truly astronomical figure, given the current spot price of $50 per kilogram. Significantly larger quantities of uranium would become available at such a price. In other words, the pursuit of breeder reactors will not be economically justified even when uranium becomes really, really scarce—which is not going to happen for decades, perhaps even centuries, given that nuclear power globally is not growing all that much.
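The comparison is stark when reduced to a ratio (simple arithmetic on the figures above, nothing assumed beyond them):

```python
# Uranium price at which breeders break even vs. today's spot price.
breakeven, spot = 1_375, 50  # dollars per kilogram, from the text
print(f"breeders break even at ~{breakeven / spot:.0f}x the spot price")  # ~28x
```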

The DAE, of course, claims that future breeder reactors will be cheaper. But that decline in costs will likely come with a greater risk of severe accidents. This is because the PFBR, and other breeder reactors, are susceptible to a special kind of accident called a core disassembly accident. In these reactors, the core where the nuclear reactions take place is not in its most reactive—or energy producing—configuration. An accident involving the fuel moving around within the core (when some of it melts, for example) could lead to more energy production, which leads to more core melting, and so on, potentially leading to a large, explosive energy release that might rupture the reactor vessel and disperse radioactive material into the environment. The PFBR, in particular, has not been designed with a containment structure capable of withstanding such an accident. Making breeder reactors cheaper could well increase the likelihood and impact of such core disassembly accidents.

What of the DAE’s projections of large numbers of breeder reactors to be constructed by mid-century? It turns out that the methodology used by the DAE in its projections suffers from a fundamental error, and the DAE’s calculations have not accounted properly for the future availability of plutonium that will be necessary to construct the many, many breeder reactors the DAE proposes to build. What the DAE has omitted in its calculations is the lag period between the time a certain amount of plutonium is committed to a breeder reactor and when it reappears (along with additional plutonium) for refueling the same reactor, thus contributing to the start-up fuel for a new breeder reactor. A careful calculation that takes into account the constraints flowing from plutonium availability leads to drastically lower projections. The projections could be even lower if one takes into account the potential delays because of infrastructural and manufacturing problems. The bottom line: Even if all was going well, the breeder reactor strategy will simply not fulfill the DAE’s hopes of supplying a significant fraction of India’s electricity.
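The lag effect Ramana describes can be made concrete with a toy growth model, sketched below. All parameters (a 25 percent plutonium surplus per cycle, a 10-year turnaround, two starting reactors) are illustrative assumptions, not DAE or Ramana figures.

```python
# Toy model: a breeder fleet can multiply by (1 + surplus) only once per
# cycle, where the cycle is the time a core spends in the reactor plus
# cooling and reprocessing. Ignoring that lag inflates projections.

def fleet(initial, surplus_per_cycle, cycle_years, horizon_years):
    cycles = horizon_years / cycle_years
    return initial * (1 + surplus_per_cycle) ** cycles

# Realistic ~10-year turnaround: growth stays modest.
print(round(fleet(2, 0.25, cycle_years=10, horizon_years=40)))  # ~5

# Pretend the plutonium comes back in 2 years (the DAE-style error):
print(round(fleet(2, 0.25, cycle_years=2, horizon_years=40)))   # ~173
```

The point is not the particular numbers but the exponent: shortening the assumed cycle from ten years to two turns a handful of reactors into hundreds, which is exactly the kind of inflation a proper accounting of plutonium availability deflates.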

Ulterior motives? For all the praises it sings of breeder reactors, there is one reason for its attraction to the PFBR that the DAE does not talk much about, except indirectly. Consider this interview by the Indian Express, a national newspaper, with Anil Kakodkar, then-secretary of the DAE, about the US-India nuclear deal: “Both from the point of view of maintaining long-term energy security and for maintaining the minimum credible deterrent, the fast breeder programme just cannot be put on the civilian list. This would amount to getting shackled and India certainly cannot compromise one [security] for the other.” (There is some code language here. “Minimum credible deterrent” is a euphemism for India’s nuclear weapons arsenal. “Put on the civilian list” means that the International Atomic Energy Agency will not safeguard the reactor, and so it is possible for fissile materials from the reactor to be diverted to making nuclear weapons.)

What this points to is the possibility that breeder reactors like the PFBR can be used as a way to quietly increase the Department of Atomic Energy’s weapons-grade plutonium production capacity several-fold. But as mentioned earlier, this is not a reason that the DAE likes to publicly admit. Nevertheless, the significance of keeping the PFBR outside of safeguards has not been lost, especially on Pakistan.

Breeder reactors have always underpinned the DAE’s claims about generating large quantities of electricity. That promise has been an important source of its political power. For this reason, India’s DAE is unlikely to abandon its commitment to breeder reactors. But given the troubled history of breeder reactors, both in India and elsewhere, the more appropriate strategy to follow would be to simply abandon the three-phase strategy. The DAE’s reliance on a technology shown to be unreliable suggests that the organization is incapable of learning the appropriate lessons from its past and makes it more likely that nuclear power will never become a major source of electricity in India.

References

NP. 2018. India slashes plans for new nuclear reactors by two-thirds. Neutronbytes.com

Wolfson, R. 1993. Nuclear Choices: A Citizen's Guide to Nuclear Technology. MIT Press.

Posted in Nuclear Power

Germany’s wind energy mess: As subsidies expire, thousands of turbines to close

Preface. This means that the talk about renewables being so much cheaper than anything else isn’t necessarily true. If wind were profitable, more turbines would be built to replace the old ones without any subsidies needed. Unless they can be dumped in the third world, they’ll be modern civilization’s Easter Island heads.

Summary: A large number of Germany’s 29,000 turbines are approaching 20 years old and, for the most part, they are outdated [my note: 20 years is the lifespan of wind turbines]. The generous subsidies granted at the time of their installation are slated to expire soon, making the turbines unprofitable. By 2020, 5,700 turbines with an installed capacity of 4.5 GW will see their subsidies run out. After 2020, thousands more turbines will lose their subsidies with each passing year, which means they will be taken offline and mothballed. So with new turbines coming online only slowly, it’s entirely possible that wind energy output in Germany will decline in the coming years.

It’s impossible to recycle composite materials because the large blades are made of fiberglass composite materials whose components cannot be separated from each other. Burning the blades is extremely difficult, toxic, and energy-intensive. So naturally, there’s a huge incentive for German wind park operators to dump the old contraptions onto third-world countries, and to let them deal later with the garbage.

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

***

April 23, 2018. Germany’s wind energy mess: As subsidies expire, thousands of turbines to close. Climate Change Dispatch.

As older turbines see subsidies expire, thousands are expected to be taken offline due to lack of profitability.

Green nightmare: Wind park operators eye shipping thousands of tons of wind turbine litter to third world countries – and leaving their concrete rubbish in the ground.

The Swiss national daily Baseler Zeitung here recently reported how Germany’s wind industry is facing a potential “abandonment”.

Approvals tougher to get

This is yet another blow to Germany’s Energiewende (transition to green energies). A few days ago, I reported here how the German solar industry had seen a monumental jobs bloodbath, with investments slashed to a tiny fraction of what they once were.

Over the years, Germany has made approvals for new wind parks more difficult as the country reels from an unstable power grid and growing protests against the blighted landscapes and health hazards.

Now that the wind energy boom has ended, the Baseler Zeitung reports that “the shutdown of numerous wind turbines could soon lead to a drop in production” after years of robust growth.

Subsidies for old turbines run out

Today a large number of Germany’s 29,000 total turbines nationwide are approaching 20 years old and, for the most part, they are outdated.

Worse: the generous subsidies granted at the time of their installation are slated to expire soon, which will make the turbines unprofitable.

After 2020, thousands of these turbines will lose their subsidies with each passing year, which means they will be taken offline and mothballed.

The Baseler Zeitung writes that some 5,700 turbines with an installed capacity of 4.5 GW will see their subsidies run out by 2020. In the years that follow, another 2,000 to 3,000 turbines will lose their state subsidies annually. The German Wind Energy Association estimates that by 2023 around 14,000 MW of installed capacity will go out of production, more than a quarter of Germany’s onshore wind power capacity. Dismantling is expected to cost roughly 30,000 euros per megawatt of installed capacity, according to the association.
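Taken at face value, the association’s figures imply a sizable dismantling bill. A rough multiplication, assuming the 30,000 euros is indeed a per-megawatt dismantling cost:

```python
capacity_mw = 14_000      # capacity losing subsidies by 2023
cost_per_mw_eur = 30_000  # assumed dismantling cost per megawatt
print(f"~{capacity_mw * cost_per_mw_eur / 1e6:.0f} million euros")  # ~420
```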

The Swiss daily reports further that with new turbines coming online only slowly, it’s entirely possible that wind energy output in Germany will recede in the coming years, making the country appear even less serious about climate protection.

Wind turbine dump in Africa?

So what happens to the old turbines that will get taken offline?

Wind park owners hope to send their scrapped wind turbine clunkers to third-world buyers, Africa for example. But if these buyers instead opt for new energy systems, then German wind park operators will be forced to dismantle and recycle them – a costly endeavor, reports the Baseler Zeitung.

Impossible to recycle composite materials

The problem here is the large blades, which are made of fiberglass composite materials whose components cannot be separated from each other. Burning the blades is extremely difficult, toxic, and energy-intensive.

So naturally, there’s a huge incentive for German wind park operators to dump the old contraptions onto third-world countries, and to let them deal later with the garbage.

Sweeping garbage under the rug

Next, the Baseler Zeitung brings up the disposal of the massive 3,000-tonne reinforced concrete turbine base, which according to German law must be removed. The complete removal of the concrete base can quickly cost hundreds of thousands of euros.

Some of these concrete bases reach depths of 20 meters and penetrate multiple ground layers, the Baseler Zeitung reports. Already, wind park operators are circumventing this huge expense by removing only the top two meters of the concrete and steel base and hiding the rest under a layer of soil.

In the end, most of the concrete base will remain as garbage buried in the ground, and the above-ground turbine litter will likely get shipped to third-world countries.

That’s Germany’s Energiewende and contribution to protecting the environment and climate!

Posted in Electric Grid, Energy, Wind