Saudi oil infrastructure at risk from drone attacks

Preface. This NYT article was published 4 months ago, and its warning just came true. Quite prescient!

Drones make it pretty easy to anonymously attack the thousands of miles of pipelines across the Arabian peninsula, as well as oil tankers, pumping stations, and refineries. The Saudis counter that they’ve spent quite a bit to protect their infrastructure, but now that drones can be launched from as far as 1,000 miles away and still hit targets accurately, those protections may not be enough: drones can evade the kingdom’s main air defenses, which are intended to repel missiles and aircraft rather than smaller objects.

At least as great a threat is Iran or some other nation using cyber warfare to damage the petroleum infrastructure of Saudi Arabia and its neighbors.

Peak oil production doesn’t only happen for geological reasons. Politics (war) can also bring peak production about, making the collapse of civilization happen that much sooner and perhaps leaving a lot of oil in the ground, which climate activists should love.

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

***

Reed, S. May 17, 2019. Saudi Oil Infrastructure at Risk as Small Attacks Raise Potential for Big Disruption. New York Times.

Saudi Arabia spent heavily to protect its oil production lines, but rapid changes in technology may mean ports and pipelines are increasingly exposed in the turbulent region.

Across the Arabian peninsula, thousands of miles of pipes run above and below the desert in one of the world’s most sophisticated production lines for pumping oil from the ground and distributing it around the world. This vast system of oil fields, refineries and ports has largely run like clockwork despite political turbulence across the region.

Then a drone strike claimed by Houthi rebels this week forced the Saudis to temporarily halt the flow of a crucial oil artery to the west side of the country. The assault came a day after mysterious incidents damaged two Saudi tankers and two other ships in a key port in the United Arab Emirates.

These were perhaps the most serious attacks on the kingdom’s oil infrastructure since Al Qaeda militants were thwarted trying to blow up a key Saudi facility at Abqaiq in 2006.

While American officials are still trying to determine whether Iran was behind these incidents, the question for the oil market is how well the Saudi and Persian Gulf infrastructure is protected and whether, with tensions building in the region, it could survive a conflict with Iran.

Analysts and executives of Saudi Aramco, the national oil company, say the kingdom has spent heavily to protect the industry that is its lifeblood. Key Saudi installations are tightly guarded and protected by missile batteries and other weaponry. “Security systems were bulked up in the 2000s amid the Al Qaeda threat, including the 2006 attack on the Abqaiq facility,” said Ben Cahill, manager for research and advisory at Energy Intelligence, a research firm. “The country’s oil fields, refineries and pipelines are blanketed by surveillance and remote sensing.”

In light of that security effort, Mr. Cahill and other analysts concede that it was eye-opening, even shocking, that a drone apparently launched from as far as 500 miles away in Yemen managed to cross deep into Saudi Arabia and cause damage.

It was also worrisome and even embarrassing that someone managed to damage tankers in waters off Fujairah, a vital port in the United Arab Emirates where ships take on fuel and provisions on their way in and out of the Gulf.

Despite the security spending of the last decade, rapid changes in technology may mean that the Saudi infrastructure is more exposed than previously thought, analysts say. United Nations experts have estimated, for instance, that drones used by the Houthis have a range of nearly 1,000 miles, allowing them to reach well into Saudi Arabia. “The simple fact that they managed to reach tankers and a pipeline” is meaningful, said Riccardo Fabiani, a geopolitical analyst at Energy Aspects, a market research firm. “It means they could strike at the heart of Saudi interests if they wanted to.”

Iran is well placed to inflict pain in the region’s no-war-no-peace existence. Analysts say it is proficient at using relatively cheap unconventional weapons like drones and speedboats, and at covering its tracks. It can also make use of proxies, including the Houthi rebels, who claimed responsibility for the pipeline attack.

Analysts say that drones could prove to be a nuisance for producers like the Saudis. It would be difficult if not impossible to protect an entire pipeline system, and even concentrating air defense units around key points like pumping stations, which were hit this week, would mean taking these defenses from somewhere else.

Drones may also be able to evade the kingdom’s main air defenses, which are intended to repel missiles and aircraft rather than smaller objects. Jeremy Binnie, a Middle East and Africa defense specialist at Jane’s Defence Weekly, said that satellite imagery showed that the key Saudi export terminal at Ras Tanura was guarded by batteries of sophisticated United States-made Hawk surface-to-air missiles. But these weapons “might not be able to engage the UAVs (drones) that Iran has developed with small radar cross sections,” he said.

Another concern is that Iran, which is regarded as skilled in digital hacking, could use cyber warfare to damage the petroleum infrastructure of Saudi Arabia and its neighbors.

At Saudi Aramco, activities like drilling wells, pumping oil to the surface, and loading the fuel on tankers can all be monitored and managed remotely. Such sophistication, though, may also create openings for attack. “A lot of those movements are run out of a central command center at Saudi Aramco headquarters,” said Phillip Cornell, a fellow at the Atlantic Council, a Washington-based research institution, who previously worked at Aramco as a senior corporate planning adviser.

Mr. Cornell said that Aramco officials suspected Iran was responsible for a cyber attack earlier in this decade and that “there has been a lot of investment to reinforce those cyber security defenses.”

However, analysts say the cyber vulnerabilities remain a major worry. “I think cyber is the really underappreciated risk,” said Helima Croft, an oil analyst at RBC Capital Markets, an investment bank.



Why “fracked” shale oil and gas will not save us


Preface. As early as 2011 experts were questioning how large fracked natural gas reserves were.

The latest IEA 2018 report predicts shale oil/gas production could start to decline by 2025, and all global oil production as soon as 2023.

Shale oil and gas might not even exist without super-low interest rates making it quite easy to borrow money, as Bethany McLean writes in her book “Saudi America”. And even though these companies are $300 billion in debt, as long as they can get money they’ll continue to drill. Someday the dumb middle-class money will be surprised to find its 401Ks and other high-yield mutual funds and bonds have crashed in the next economic downturn.

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

***

Cunningham, N. 2019. The Shale Boom Is About To Go Bust. oilprice.com

The shale industry faces an uncertain future as drillers try to outrun the treadmill of precipitous well declines.

For years, companies have deployed an array of drilling techniques to extract more oil and gas out of their wells, steadily intensifying each stage of the operation. Longer laterals, more water, more frac sand, closer spacing of wells – pushing each of these to their limits, for the most part, led to more production. Higher output allowed the industry to outpace the infamous decline rates from shale wells.

In fact, since 2012, average lateral lengths have increased 44 percent to over 7,000 feet and the volume of water used in drilling has surged more than 250 percent, according to a new report for the Post Carbon Institute. Taken together, longer laterals and more prodigious use of water and sand means that a well drilled in 2018 can reach 2.6 times as much reservoir rock as a well drilled in 2012, the report says.

That sounds impressive, but the industry may simply be frontloading production. The suite of drilling techniques “have lowered costs and allowed the resource to be extracted with fewer wells, but have not significantly increased the ultimate recoverable resource,” warned J. David Hughes, an earth scientist and author of the Post Carbon report. Technological improvements “don’t change the fundamental characteristics of shale production, they only speed up the boom-to-bust life cycle,” he said.

For a while, there was enough acreage to allow for a blistering growth rate, but the boom days eventually have to come to an end. There are already some signs of strain in the shale patch, where intensification of drilling techniques has begun to see diminishing returns. Putting wells too close together can lead to less reservoir pressure, reducing overall production. The industry is only now reckoning with this so-called “parent-child” well interference problem.

Also, more water and more sand and longer laterals all have their limits. Last year, major shale gas driller EQT drilled a lateral that exceeded 18,000 feet. The company boasted that it would continue to ratchet up the length to as long as 20,000 feet. But EQT quickly found out that it had problems when it exceeded 15,000 feet. “The decision to drill some of the longest horizontal wells ever in shale rocks turned into a costly misstep costing hundreds of millions of dollars,” the Wall Street Journal reported earlier this year.

Ultimately, precipitous decline rates mean that huge volumes of capital are needed just to keep output from declining. In 2018, the industry spent $70 billion on drilling 9,975 wells, according to Hughes, with $54 billion going specifically to oil. “Of the $54 billion spent on tight oil plays in 2018, 70% served to offset field declines and 30% to increase production,” Hughes wrote.

As the shale play matures, the field gets crowded, the sweet spots are all drilled, and some of these operational problems begin to mushroom. “Declining well productivity in some plays, despite application of better technology, are a prelude to what will eventually happen in all plays: production will fall as costs rise,” Hughes said. “Assuming shale production can grow forever based on ever-improving technology is a mistake—geology will ultimately dictate the costs and quantity of resources that can be recovered.”

There are already examples of this scenario unfolding. The Eagle Ford and Bakken, for instance, are both “mature plays,” Hughes argues, in which the best acreage has been picked over. Better technology and an intensification of drilling techniques have arrested decline, and even led to a renewed increase in production. But ultimate recovery won’t be any higher; drilling techniques merely allow “the play to be drained with fewer wells,” Hughes said. And in the case of the Eagle Ford, “there appears to be significant deterioration in longer-term well productivity through overcrowding of wells in sweet spots, resulting in well interference and/or drilling in more marginal areas that are outside of sweet-spots within counties.”

In other words, a more aggressive drilling approach just frontloads production, and leads to exhaustion sooner. “Technology improvements appear to have hit the law of diminishing returns in terms of increasing production—they cannot reverse the realities of over-crowded wells and geology,” Hughes said.

The story is not all that different in the Permian, save for the much higher levels of spending and drilling. Post Carbon estimates that the Permian requires 2,121 new wells each year just to keep production flat, and in 2018 the industry drilled 4,133 wells, leading to a big jump in output. At such frenzied levels of drilling, the Permian could continue to see production growth in the years ahead, but the steady increases in water and frac sand “have reached their limits.” As a result, “declining well productivity as sweet-spots are exhausted will require higher drilling rates and expenditures in the future to maintain growth and offset field decline,” Hughes warned.
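To make the treadmill concrete, here is a minimal back-of-envelope sketch (in Python) using only the figures quoted above from Hughes’ report; the fractions it prints are simple arithmetic on those figures, not additional data.

```python
# Reduce the quoted 2018 figures to maintenance-vs-growth fractions.
wells_drilled_2018 = 4133    # Permian wells drilled in 2018 (as quoted)
wells_to_stay_flat = 2121    # wells per year needed just to hold output flat

maintenance_share = wells_to_stay_flat / wells_drilled_2018
print(f"Permian wells that merely offset decline: {maintenance_share:.0%}")  # ~51%

tight_oil_capex = 54e9       # dollars spent on tight oil plays in 2018
decline_offset = 0.70        # share of that spending that offset declines
print(f"Capex spent standing still: ${tight_oil_capex * decline_offset / 1e9:.1f}B")  # $37.8B
```

In other words, roughly half of 2018’s Permian wells, and about $38 billion of tight-oil spending, bought no growth at all.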

7/29/2017 ASPO newsletter: Last week a $2 billion private-equity fund heavily invested in oil and gas wells went bust, raising the question of whether this is about to happen to other investors. With oil prices in the $40s, many drilling operators are losing money with every barrel of oil they produce. These losses ultimately flow back to the investors that acquired their oilfield assets back when oil was selling for over $100 a barrel.

If spectacular increases in US oil production are to come in the next five to eight years, most analysts say they must come from the Permian Basin, which is the only shale oil play currently experiencing rapid growth. The Eagle Ford and Bakken shale oil deposits peaked four years ago and have not enjoyed much growth recently, and without a substantial increase in investment there is little hope of substantial growth in the Gulf of Mexico oil fields.

This leaves the Permian Basin as the one oil play that could give America “energy dominance” by pushing up US shale oil production from 5.5 million b/d to 12 million. For this to happen, there must be enough oil in the Permian oil play that can be exploited at prevailing prices. A recent analysis of likely Permian reserves by Arthur Berman concludes that the total recoverable oil in the Permian Basin is likely to be on the order of 3.7 billion barrels, not the 160 billion barrels that the CEO of Pioneer Natural Resources recently claimed could be recovered from the Permian. The two leading producers of Permian shale oil are anticipating peak production in 2019, which is not that far away.

3/19/2015 ASPO newsletter: Reining in overseas drilling for shale oil: After spending more than five years and billions of dollars trying to re-create the US shale boom overseas, some of the world’s biggest oil companies are starting to give up amid a world-wide collapse in crude prices. Chevron Corp., Exxon Mobil Corp. and Royal Dutch Shell PLC have packed up nearly all of their hydraulic fracturing wildcatting in Europe, Russia, and China. The reasons vary from sanctions in Russia, a ban in France, a moratorium in Germany and poor results in Poland to crude prices below what it can cost to produce a barrel of shale oil.

Here are just a few of many articles on this topic:

  1. 2016-4-30 Australian Public Broadcaster ABC unable to look at oil statistics
  2. May 10, 2015. Einhorn’s Fracking Concerns Are Nothing New, But They Matter For Investors (Part 1). SeekingAlpha.com
  3. R. Heinberg. Chapter 5 of How Fracking’s False Promise of Plenty Imperils Our Future: The Economics of Fracking: Who Benefits? October 2013.
  4. Hall, C. Are we Entering the Second Half of the Age of Oil? Some empirical constraints on optimists’ predictions of an oil-rich future. 2013 Geological Society of America.
  5. Richard Heinberg. 12 Nov 2012. Museletter #246: Gas Bubble Leaking, About to Burst.
  6. Gail Tverberg. 17 Oct 2012. Why Natural Gas isn’t Likely to be the World’s Energy Savior.
  7. Jeff Goodell. 1 Mar 2012. Politics: The Big Fracking Bubble: The Scam Behind the Gas Boom. Rolling Stone.
  8. James Howard Kunstler. 19 Nov 2012. Epic Disappointment.
  9. David Hughes. 29 May 2011. Will Natural Gas Fuel America in the 21st Century?
  10. James Stafford. 12 Nov 2012. Shale Gas Will be the Next Bubble to Pop – An Interview with Arthur Berman.
  11. Peter Coy. 12 Nov 2012. U.S. the New Saudi Arabia? Peak Oilers Scoff.  Bloomberg.
  12. Kelly, S. 29 Apr 2013. Faster Drilling, Diminishing Returns in Shale Plays Nationwide?

October 29, 2013 Tom Whipple. The Peak Oil Crisis: The Shale Oil Bubble.  Falls Church News-Press.

“fracked oil is very expensive, requiring circa $80 a barrel to cover the costs of extraction. Production from fracked oil wells drops off quickly, so new wells have to be drilled constantly to maintain production. Until recently information about just how fast our fracked oil wells were depleting was hard to come by, so the hype about the US becoming energy independent and a major oil exporter became conventional wisdom for most.

Nearly all of the growth in U.S. onshore crude production these days is coming from North Dakota’s Bakken field and Texas’s Eagle Ford. They account for nearly 2 million of the 2.4 million b/d increase in oil production that the US has seen in recent years. It sure looks as if the increase in production in these fields will only just keep up with the rate of decline within the next 12 to 18 months, and US shale oil production will no longer be growing. While it is possible that a surge of investment will increase drilling enough to keep up with declines in production from the older wells, this is expensive, and for now it looks as if oil prices are heading for a level where fracked oil production is not profitable. Outside geologists with access to proprietary data on decline rates have been forecasting for some time now that as the number of wells increases and their quality declines, the shale boom will be coming to an end in the next two years. The release of EIA data seems to confirm these predictions.”

David Hughes at the 2013 Geological Society of America: These are heady times for U.S. oil bulls, with projections of production from tight oil rising to five million barrels per day, or more, by 2019, from essentially nothing just a few years ago. This compares to total U.S. oil production of less than seven million barrels per day as recently as 2008. Declarations of near term “energy independence” are commonplace in the main stream media.

Notwithstanding the substantial contribution of this new supply made possible by the combination of multi-stage hydraulic fracturing and horizontal drilling, the U.S. burns more than 18 million barrels per day. Even five million barrels per day of tight oil production is highly unlikely to free the U.S. from the need for imported oil. Furthermore, tight oil fields are characterized by high decline rates and the need for continual high rates of drilling to maintain production levels. The long term sustainability of tight oil production is thus of paramount concern.

An analysis of the Bakken Field, of North Dakota and Montana, and the Eagle Ford Field, of Texas, which together comprise more than half of projected tight oil production, reveals static field production declines of about 40 percent annually. Moreover, these fields are far from homogeneous in terms of well productivity, with “sweet spots” of high productivity comprising a small proportion of the touted productive area. These sweet spots are targeted first, resulting in the spectacular ramp-up in production observed in these plays, but the steep decline rates inevitably take their toll. Production in the Bakken Field, which is the poster child for tight oil, has plateaued in the past few months and requires 120 new wells each month to maintain production. The Eagle Ford is still growing rapidly, with 3,000 new wells added each year, but it is only a question of time before the sweet spots are exhausted.
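As a rough consistency check on that 120-wells-a-month figure, one can combine the ~40 percent field decline quoted above with a per-well output. The numbers below are assumptions chosen for illustration (the ~85,000 barrels of first-year output per well is Rune Likvern’s estimate, cited later in this post; the field size is approximate), so this is a sanity check, not a reconstruction of Hughes’ calculation.

```python
# Hypothetical sanity check of the wells-per-month treadmill (assumptions noted).
field_output = 660_000   # b/d, approximate Bakken output at the time (assumption)
annual_decline = 0.40    # static field decline quoted above
first_year_bbl = 85_000  # bbl per well in year one (Likvern's figure, cited below)

lost_per_month = field_output * annual_decline / 12  # b/d lost to decline each month
new_well_avg = first_year_bbl / 365                  # ~233 b/d average first-year rate
print(f"New wells needed per month: ~{lost_per_month / new_well_avg:.0f}")  # ~94
```

That lands in the same ballpark as the 120 wells per month quoted, which is about as much agreement as a back-of-envelope check can claim.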

Tight oil is an important contributor to U.S. energy supply, but its long term sustainability is questionable. It should not be viewed as a panacea for business-as-usual in future U.S. energy security planning.

December 11, 2013   California shale.  Tom Whipple.

[Note: the US EIA states that this California shale holds 65% of the total recoverable shale oil resource base in the country, with 400 billion barrels of oil in place and 15 billion barrels recoverable using today’s technology.  But it doesn’t look like that will work out].

We now have a second look at the Monterey shale and things don’t look so rosy. First, the geology of California is similar to a bowl of spaghetti with the earth squeezed into folds and steep inclines, not the 20,000 sq. miles of flat-lying shale deposits found in North Dakota. The Monterey shales are thick and complex, and do not lend themselves to drilling the long horizontal wells that can be fracked so productively in other places. Much of the shale oil in California appears to have drained over the years into conventional oil reservoirs and has already been extracted by many of the 238,000 oil wells that have been drilled in the state during the last century.

A new study by an experienced Canadian geologist, who has already examined the productivity of other shale oil formations in the US, concludes that the government and its contractor’s study is absurdly optimistic about the prospects for shale oil production in California. Despite the use of all the latest drilling and production techniques, oil production in California has fallen from 1.1 million b/d 30 years ago to 500,000 b/d today. It is highly unlikely that this will be turned around given the geology of the region.

The Department of Energy’s report starts with the assumption that California’s shale is much like that in Texas and North Dakota. It posits that the oil industry will only have to drill 28,000 new wells, each yielding a ridiculously large 550,000 barrels of oil, to extract California’s shale oil. This is simply not supported by the recent history of drilling in the state and is unlikely to happen. We will be lucky if California’s oil production does not continue to decline, for its geology is simply not the same.

D. Rogers. Feb 2013. Shale & Wall Street: Was the Decline in Natural Gas Prices Orchestrated?

Excerpts from a 32-page report:

Leases were bundled and flipped on unproved shale fields in much the same way as mortgage-backed securities had been bundled and sold on questionable underlying mortgage assets prior to the economic downturn of 2007.

In 2011, shale mergers and acquisitions (M&A) accounted for $46.5B in deals and became one of the largest profit centers for some Wall Street investment banks. This anomaly bears scrutiny since shale wells were considerably underperforming in dollar terms during this time. Analysts and investment bankers, nevertheless, emerged as some of the most vocal proponents of shale exploitation. By ensuring that production continued at a frenzied pace, in spite of poor well performance (in dollar terms), a glut in the market for natural gas resulted and prices were driven to new lows. In 2011, U.S. demand for natural gas was exceeded by supply by a factor of four. It is highly unlikely that market-savvy bankers did not recognize that by overproducing natural gas a glut would occur with a concomitant severe price decline. This price decline, however, opened the door for significant transactional deals worth billions of dollars and thereby secured further large fees for the investment banks involved. In fact, shales became one of the largest profit centers within these banks in their energy M&A portfolios since 2010. The recent natural gas market glut was largely effected through overproduction of natural gas in order to meet financial analysts’ production targets and to provide cash flow to support operators’ imprudent leverage positions.

Wall Street promoted the shale gas drilling frenzy, which resulted in prices lower than the cost of production and thereby profited [enormously] from mergers & acquisitions and other transactional fees.

U.S. shale gas and shale oil reserves have been overestimated by a minimum of 100% and by as much as 400-500% by operators according to actual well production data filed in various states.

Shale oil wells are following the same steep decline rates and poor recovery efficiency observed in shale gas wells.

The price of natural gas has been driven down largely due to severe overproduction to meet financial analysts’ targets of production growth for share appreciation, coupled with and exacerbated by imprudent leverage and a concomitant need to produce to meet debt service.

Due to extreme levels of debt, stated proved undeveloped reserves (PUDs) may not have been in compliance with SEC rules at some shale companies because of the threat of collateral default for those operators.

Industry is demonstrating reticence to engage in further shale investment, abandoning pipeline projects, IPOs and joint venture projects in spite of public rhetoric proclaiming shales to be a panacea for U.S. energy policy.

Exportation is being pursued for the differential between the domestic and international prices in an effort to shore up ailing balance sheets invested in shale assets.

It is imperative that shale be examined thoroughly and independently to assess the true value of shale assets, particularly since policy on both the state and national level is being implemented based on production projections that are overtly optimistic (and thereby unrealistic) and wells that are significantly underperforming original projections.

For more than a decade the largest oil and gas producers (the “Majors” as they are collectively called) have not been able to materially expand their reserve replacement ratios. In fact, approximately one quarter of their reserve growth has come from acquisitions rather than the drill bit, such as ExxonMobil’s acquisition of XTO Energy. This constitutes consolidation rather than organic growth.

To give another example, in 2010 Chevron replaced less than one fourth of the oil and gas it had sold the prior year. This is highly problematic for the future share price of these companies and explains the exuberant share repurchase programs they have engaged in recently, buying back as much as $5 billion of shares a quarter in the case of ExxonMobil. This is, of course, highly problematic for the future health of global economies, and for the share prices of the individual fossil fuel companies.

Further, there are various grades and types of hydrocarbons, some much more efficient as fuels than others. Additionally, some hydrocarbons simply require such an expenditure of energy to extract and produce that their use becomes questionable.

In order for a publicly traded oil and gas company to grow extensively, it must manage not only its core business but also the relationship it enjoys with its investment bankers. Thus, publicly traded oil and gas companies have essentially two sets of economics. There is what may be called field economics, which addresses the basic day to day operations of the company and what is actually occurring out in the field with regard to well costs, production history, etc.; the other set is Wall Street or “Street” economics. This entails keeping a company attractive to financial analysts and investors so that the share price moves up and access to the capital markets is assured. “Street” economics has more to do with the frenzy we have seen in shales than does actual well performance in the field.

Before the mortgage crisis, once the extent of the appetite for credit default swaps was realized, representatives of the capital markets worldwide embraced the new products. The fees generated were immense. It was similar with shale. Land was bid up to ridiculous prices, with signing bonuses reaching nearly $30,000/acre and leases on unproven fields being flipped for as much as $25,000/acre, multiples of the original investment. There seemed an unending appetite.

In another example of parallels: credit default swaps were not traded on any exchange, so transparency became a paramount issue. It proved very difficult to accurately measure the underlying fundamentals with such a lack of transparency. It was the same with shales. Due to the new technology of hydrofracture stimulation, shale results could not be verified for a number of years. There simply was not enough historical production data available to make a reasonable assessment. It wasn’t until Q3 of 2009 that enough production history on shale wells in the Barnett had been filed with the Texas Railroad Commission that well performance could be checked. What emerged was significantly different from the operators’ original rosy projections. Of further interest is the fact that once numbers could begin to be verified in a play, operators sold assets quickly. This has followed in each play in the U.S. as it matured. The dismal performance numbers were recognized as a potential drag on company share prices. A good example would be the operators in the Barnett play in Texas. The primary players were Chesapeake Energy (significant portion of assets sold or JV’ed), Range Resources (all Barnett assets sold), Encana (all Barnett assets sold), and Quicksilver Resources (attempting to monetize all Barnett assets via MLP or asset sale since 2011; in that time frame its stock has plunged from about $15/share to $2.50/share).

The issue of well performance disclosure has continued to mask problems in shale production. States such as Pennsylvania and Ohio do not release well performance data on a timely basis, which makes it very difficult to get a true picture of actual well history.

WRITE-DOWNS

In the lead up to the mortgage crisis, there were hints of things to come in the form of asset write downs.

Similar hints have been emerging with regard to shale (several examples listed)

This is of particular interest. Pipeline projects are expensive and require that a steady and consistent stream of gas or oil can be counted on for a long period of time in order to recoup initial capital outlay. Once initial capital is recouped, however, they tend to be cash cows. Given the steep decline curves for shale oil that are now readily apparent, it appears that operators  recognize that the Bakken will not be a long-term play. As such, they are not prepared to invest the needed capital upfront for a pipeline: again, a distinct lack of confidence in the long term viability of shales.

December 2013. Drilling California: A Reality Check on the Monterey Shale, by J. David Hughes.

It is not clear that hydraulic fracturing techniques like those used on the Bakken and Eagle Ford will work on the Monterey shale.

Bakken/Eagle Ford

  • Deposits less than a few hundred feet thick
  • Flat and gently dipping
  • Bakken: 20,000 square miles
  • Eagle Ford: 8,000 square miles

Monterey

  • Complex and unpredictable
  • Much thicker deposits of up to 2,000 feet
  • Much deeper – anywhere from the surface to 18,000 feet down
  • 2,000 square miles
  • 1,363 wells have been drilled in shale reservoirs of the Monterey Formation. Oil production from these wells peaked in 2002, and as of February 2013 only 557 wells were still in production. Most of these wells appear to be recovering migrated oil, not “tight oil” from or near source rock as is the case in the Bakken and Eagle Ford plays.

The EIA/INTEK report assumed that 28,032 tight oil wells could be drilled over 1,752 square miles (16 wells per square mile) and that each well would recover 550,000 barrels of oil. The data suggest, however, that these assumptions are extremely optimistic for the following reasons:

  • Initial productivity per well from existing Monterey wells is on average only a half to a quarter of the assumptions in the EIA/INTEK report. Cumulative recovery of oil per well from existing Monterey wells is likely to average a third or less of that assumed by the EIA/INTEK report.
  • Existing Monterey shale fields are restricted to relatively small geographic areas. The widespread regions of mature Monterey shale source rock amenable to high tight oil production from dense drilling assumed by the EIA/INTEK report (16 wells per square mile) likely do not exist.

Thus the EIA/INTEK estimate of 15.4 billion barrels of recoverable oil from the Monterey shale is likely to be highly overstated. Certainly some additional oil will be recovered from the Monterey shale, but this is likely to be only modest incremental production—even using modern production techniques such as high volume hydraulic fracturing and acidization. This may help to temporarily offset California’s longstanding oil production decline, but it is not likely to create a statewide economic boom.

Art Berman: There is about eight years’ worth of shale gas supply available in the United States. The math: If you divide the “technically recoverable resource” of about 1,900 Tcf (trillion cubic feet) of gas, as identified by the Potential Gas Committee’s (PGC’s) report, by annual U.S. consumption, you come up with 90 years. However, the PGC’s report says the “probable recoverable resource” is only 550 Tcf — 25% of the “technically recoverable resource.” Then, if you divide the 550 Tcf “probable recoverable resource” by 3, which represents the amount of the resource that is actually provided by shale gas, you get about 180 Tcf. (Nov-Dec 2012. A Contrarian on Shale Gas. American Public Power Association.)
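Berman’s division is easy to reproduce; the sketch below simply restates it in Python, with annual consumption inferred from his own 90-year figure rather than taken from EIA data.

```python
# Art Berman's shale gas supply arithmetic, using the figures quoted above.
technically_recoverable = 1900.0                   # Tcf, PGC estimate
annual_consumption = technically_recoverable / 90  # ~21 Tcf/yr implied by "90 years"

probable_recoverable = 550.0                       # Tcf, PGC "probable" estimate
shale_share = 1 / 3                                # portion actually supplied by shale

shale_tcf = probable_recoverable * shale_share     # ~183 Tcf
years = shale_tcf / annual_consumption             # ~8.7 years
print(f"Shale gas: {shale_tcf:.0f} Tcf, about {years:.1f} years of US consumption")
```

That works out to roughly eight to nine years, matching Berman’s conclusion.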

August 2013. Shale Truth interview with Arthur Berman, segments 1-5.

Some paraphrased excerpts from these videos:

The high rate of drilling means someone is willing to spend money and there is gas, but not necessarily a lot of profit. My concern is: what if there’s an economic contraction? They’re spending far more than their earnings, twice as much as they’re making, and they have to find money to drill more wells. A person couldn’t sustain that, and neither can they if capital goes away. The assumption is they must be making money. No: other reasons are to keep your leases, and to maintain production growth so Wall St thinks you’re a good company and the stock price doesn’t go down. If people give E&P companies money, they’ll spend it. Lots of wells doesn’t persuade me they’re making money.

Question: if not profitable, then what do you think it will take for politicians and public to understand that?

Art: There’s not big money being made, but being spent. They think that over time they’ll figure out a way to make money at it; their experts see demand growing and price increasing, especially in North America where we’ve nearly gone through our oil reserves, so our economy will be more and more NG oriented. There is a lot of gas but most isn’t commercial, though if prices go up, more will be profitable. By the end it will be the Exxons, Chevrons, Anadarkos, Apaches, and Statoils, with the deep pockets, that get through this unprofitable period. Everyone is losing money right now. I’ve looked at all their balance sheets; none are making money. If you’re making money it ought to show up in the SEC filings. Chesapeake for 2012 is clearly the shale gas leader: they started it, have the most land, and hold the 2nd largest reserves of natural gas after Exxon. But their SEC filing is a train wreck: they lost a billion last year, had to write down reserves, and 3/4 of earnings are from asset sales, not operations. If CHK is the paragon, then we’re in trouble. Forget about reserves and resources; clearly no one is making any money.

Q: We hear about a GLUT of shale NG?

No, it’s been flat since Dec 2011, 15 months flat. Do we have an oversupply? Not really; we have an equilibrium that’s keeping prices somewhat low, though they’ve doubled since April. The question is how long it will take until there’s less NG than we demand; then the price will come up. I think it’ll be sooner than the others are predicting. Over the past 7 decades we’ve had 5 complete fiascos based on predictions everyone agreed on. Now we’re told gas is cheap forever, but what about when we were building LNG terminals to import it? Now we think the situation is reversed, no more problems.

Shale is about 35% of our NG supply. Where’s the other 65%? Conventional gas, in terminal decline. No one is drilling those wells. 2/3 of our gas is declining, and shale is not increasing. The Barnett, Haynesville, etc. have flattened; only the Marcellus is continuing to increase.

Q: What should we do?

Change our behavior. No silver bullet solutions. There are no solutions. The solutions, if they are out there, are very long term. So there are only 2 things to do. Efficiency: use the fuels we have more efficiently. The more important thing is our own behavior: how do we use energy? How many drive alone to work? Leave lights on, not enough insulation — simple things, not solar panels. People don’t feel like they need to conserve when they hear we’re energy independent. Even if we had 100 years of gas, why should we use it up fast?

Several years ago, real estate shenanigans (subprime, securitized mortgages, credit default swaps, and so on) were responsible for economic collapse. This is similar to some of the shale gas ventures. There are some striking comparisons. We’re told it’ll get better and bigger forever and ever. S&P and Moody’s graded mortgages as grade-A investments, but they were wrong; now all the experts say everyone’s making money in shale gas.

But look at the financial position of CHK: it’s kind of a train wreck, reminiscent of Enron. Many natural gas companies are run well and expect they’ll make money, but whether we believe that is another matter; I’m just raising a cautionary note. I believe the financial collapse had as much to do with energy as real estate. Oil got to nearly $150/barrel at the same time subprime reached its crescendo, and that has an effect. Energy isn’t a sector separate from economics and politics – it’s what ties everything together.

 

Bill Powers: The U.S. has nowhere close to a 100-year supply.

In his new book “Cold, Hungry and in the Dark: Exploding the Natural Gas Supply Myth”, Powers concludes the USA has a 5 to 7 year supply of shale gas.  People and companies who benefit economically are behind the promotion of the shale gas myth. In reality, many corporations are taking write-downs of their reserves. (Peter Byrne. 8 Nov 2012. US Shale Gas Won’t Last Ten Years: Bill Powers. The Energy Report.)

James Howard Kunstler, 24 Sep 2012 “Duty” 

“In all the monumental yammer of the media sages surrounding the candidates they follow, and among the freighted legions of meticulously trained economists who try so hard to fit their equations and models over the spilled chicken guts of daily events, there is no sense of the transience of things. Tom Friedman over at The New York Times still thinks that the petroleum-saturated present he calls “the global economy” is a permanent condition of human life, and so does virtually every elected and appointed official in Washington, not to mention every broadcaster in Manhattan. Someone told all these clowns about 14 months ago that we will be able to keep running WalMart on shale oil and shale gas virtually forever, and they swallowed the story whole, and then force-fed it down the distracted public’s throat. In reality – that alternative universe to flat-screen America – all the mechanisms that allow us to keep running this wondrous show teeter on a razor’s edge of extreme fragility.  We’re one bomb-vest or High Frequency Trading keystroke away from a possible dark age…”

Richard Heinberg (excerpt from July 2012 Museletter #242: The End of Growth Update Part 2):

whether the latest financial news is giddy or dismal, whether oil prices are up or down, the game of growing the economy by increasing the production of affordable transport fuel is now officially over. Previously, we enjoyed both a growing economy and low fuel prices, with the latter feeding the former; now we see “cheap” oil only when the economy is in a tailspin of demand destruction.

Yes, “fracking” has given America temporarily inexpensive and abundant natural gas—so why not oil? In the case of natural gas, record-high prices back in 2006-2007 (due to depletion of conventional gas deposits) led to truly heroic rates of drilling and a temporary supply glut. Gas has become so cheap, in fact, that the shale gas industry is imploding, starting with Chesapeake Energy, the biggest fracker of them all. Producers are losing money on each well, so they’re pulling back on drilling even if that hurts their company’s share value (which it does). Next we’ll see a consolidation of the industry, rising prices, and falling production—against nearly everyone’s recent expectations. This is all clearly and persuasively explained in David Hughes’s recent report for Post Carbon Institute, “Will Natural Gas Fuel America in the 21st Century?”
 
The media storm touting America’s energy resurgence has been truly surreal. During the past 12 months oil prices were at their highest sustained level in history, while the rate of world crude oil production has been flat-lined for seven years. Available oil exports are disappearing from world markets as exporting countries use ever more of their product domestically. The data shout “Peak Oil!” but the news media demand happy talk. And so pundits seize upon a temporary production increase in North Dakota achieved by fracking oil-bearing shale as a “game changer.” Once again we’re told that technology will save us!
 

In reality, virtually all the easy, cheap oil has already been found and put into production; what’s left to find and produce will be hard, nasty, and expensive. Oil-bearing shales have been known to geologists for decades, and fracking has been part of the technical arsenal of the industry since the 1980s, but the cost of development was considered too high. Meanwhile, despite its “miraculous” growth in domestic oil production, the United States saw its trade deficit in oil increase to $327 billion in 2011, accounting for 58 percent of the total trade deficit, the highest-ever annual share.

Not necessarily. As the economy tanks, that will cut demand for oil and the price will fall below the new-supply break-even level; when that happens, companies will cancel or delay new projects (as they did in late 2008 when the per-barrel price fell to $40). But if, for the moment, the economic news looks good, demand will grow and oil prices must inevitably return to levels that justify new supply. And those price levels are just high enough to begin undermining economic growth, as a spate of recent economic research has shown.

Huffington Post, 27 Mar 2013: Deep water and tight oil (like what comes from the Bakken and Eagle Ford) have extremely high depletion rates.

Deep water wells deplete about 10-20 percent a year. Tight oil depletes at about 40 percent annually the first few years. Think about that latter number; it’s where most of our new oil extraction is coming from. What if you had a part-time job, but within two years you would only be making about a quarter of your current income? You’d probably need a new part-time job, right? But that one does the same thing. Before you know it, you need 40,000 part-time jobs. But even that doesn’t help, because they’re all still depleting at 40 percent. Give a thought to how much you would have to work to maintain your original income after five years, then ten years, then twenty years…  Source: The Reward for Being Right About Peak Oil: Scorn Heaped With Derision
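The part-time-job analogy is just compound decline, and a few lines of Python make the treadmill explicit. This is a deliberately crude model (a flat 40 percent loss per year, where real wells follow steeper-then-flattening hyperbolic curves), so treat it as an illustration of the quoted numbers, not a forecast.

```python
# Crude treadmill model: every well loses 40% of its output each year,
# so new wells must be drilled annually just to hold total output flat.
DECLINE = 0.40
TARGET = 100.0    # total output to maintain (arbitrary units)
NEW_WELL = 1.0    # first-year output of one new well (arbitrary units)

cohorts, drilled = [], 0.0
for year in range(1, 11):
    cohorts = [c * (1 - DECLINE) for c in cohorts]  # age the existing wells
    need = (TARGET - sum(cohorts)) / NEW_WELL       # new wells to fill the gap
    cohorts.append(need * NEW_WELL)
    drilled += need
    print(f"year {year:2d}: drill {need:5.1f} new wells ({drilled:6.1f} drilled so far)")
```

Output never grows, yet the well count climbs without limit: after ten years some 460 wells have been drilled just to keep doing the work the first 100 did.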

ASPO newsletter 19 Nov 2012:

The International Energy Agency’s 2012 World Energy Outlook forecast that US shale oil production would continue to grow more rapidly than expected for the rest of the decade, leading to the US becoming the world’s largest oil producer by around 2020 and largely energy independent (though not for oil) by 2035.

The world’s press trumpeted that the energy crisis was now far in the future and that all would be well for the next 20 years. The few writers who consulted people in the peak oil community buried their skeptical comments at the bottom of their stories.

The issue is how much longer shale oil production can continue to grow at the spectacular rates of the past few years before it too peaks and starts to decline. Oil production from the Bakken Shale in North Dakota is about 660,000 b/d, Eagle Ford in Texas about 600,000 b/d and growing. Some say the 2 fields will produce 2 million b/d in the next year or so. The IEA seems to be saying that tight oil production in the US will peak at about 5 million b/d around 2020.

Most people in the peak oil community who have looked into the issue have major problems with these forecasts, believing that a peak in shale oil production around 3 million b/d is more realistic. Remember, demand is increasing at about 750,000 b/d each year, so 8 years from now an additional 6 million b/d of new production will be required worldwide, plus another 3-4 million b/d every year to replace the depletion from existing fields.

The first problem with the IEA’s estimate is the rapid depletion of fracked oil wells. Despite limited experience with this relatively new technology, some are calculating that production from many of these wells is dropping by 40% or more a year.

We know the average daily production from North Dakota’s 4,630 producing wells is currently 143 b/d. If we assume that the Bakken oil fields are to produce 2.5 million b/d by the end of the decade, then they will need some 18,000 wells, each producing the average of 140 b/d. While this is not an inconceivable number, when one takes into account that most, if not all, of these wells will have to be redrilled twice in the next 8 years, the number becomes improbable. We shall have to drill more and more wells just to maintain the same level of production.
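The newsletter’s well count is simple division, reproduced below under its own assumption that new wells perform like today’s average well:

```python
# The ASPO newsletter's Bakken arithmetic, using the figures quoted above.
avg_per_well = 143     # b/d, current average across North Dakota's producing wells
current_wells = 4630
target_output = 2.5e6  # b/d, hypothesized end-of-decade Bakken production

current_output = current_wells * avg_per_well  # ~662,000 b/d
print(f"Current output: {current_output / 1e3:.0f} thousand b/d")
print(f"Wells needed at the current average: {target_output / avg_per_well:,.0f}")  # ~17,500
```

And, as the newsletter notes, that ~18,000 figure understates the drilling required, since declining wells must be replaced along the way.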

The second problem with the optimism over tight oil is the very high cost of the horizontal drilling and fracking of these wells, which may run 3 to 4 times that of a conventional well. Some people put the cost of producing a barrel of oil from the Bakken at $80-90, which is just about where oil is currently selling in the region. Should the global economy continue to contract, the selling price of fracked oil could well fall below the cost of production, bringing a marked slowdown to further drilling.

Roger Blanchard: A Closer Look at Bakken and U.S. Oil Production:

  • Oil production outside of Texas and North Dakota has actually declined in the last few years
  • Bakken extends over a large area of North Dakota, Montana and Saskatchewan, but just 4 counties in North Dakota account for 80.8% of all the oil production. Even within that area, some spots are better than others.
  • “Oil wells in the Bakken region decline rapidly. From data I’ve seen, the average decline in the first year is ~60%. The only way to maintain or increase Bakken oil production is to rapidly increase the number of wells. As the industry has to drill in less fruitful areas, being able to maintain production will become an increasing challenge.”
  • “I expect oil production in the Bakken to peak in 2013 to 2015. I expect Texas oil production to have a secondary peak around 2014 (Texas oil production peaked in 1972 at 3.57 mb/d while it’s presently ~1.5 mb/d). If oil production in both Texas and North Dakota begins to decline around 2015, I expect U.S. oil production as a whole to begin to decline in that same time frame.” (my comment: Blanchard also says that oil production is declining now in the Gulf of Mexico!)

ASPO Newsletter Nov 26, 2012   Who do you believe, Likvern or the North Dakota official?

Rune Likvern

  • performed an in-depth analysis of data from fracked wells in North Dakota and concluded that the fracked wells are depleting so fast production from the region is unlikely to get much beyond 600,000-700,000 b/d.
  • the average Bakken well produces 85,000 barrels of oil in its first year
  • production has held steady, at about 143 b/d per well, only because of an accelerating rate of drilling. Wells added September to September: 590 in 2009-2010, 1,010 in 2010-2011, 1,762 in 2011-2012
  • Each well costs $10 million – how long can that be sustained?
  • Hess’s cost was $13 million per well drilled and fracked
  • Unless the geology is significantly different, what happens in the Bakken over the next few years will be similar to Texas shales. A recent study of 1000 wells in the Eagle Ford, Texas field shows that each well will produce about 120,000 barrels over its lifetime. This is a long way from the 600,000 barrels North Dakota claims each well will yield.

Director of the North Dakota Oil & Gas Division:

  • production may reach anywhere from 900,000 to 1.2 million b/d in the next 3 years and sustain this level until 2020 or even 2025 before tapering off to 650-700,000 b/d by 2050.
  • the average Bakken well produces 329,000 barrels of oil in its first year
  • Predicts revenue three times higher over the lifetime of a well than Likvern does: over a 45-year lifetime, each well will produce 615,000 barrels of oil, easily covering the $9 million it costs to drill and frack. If average production is 329,000 barrels the first year, even spectacular rates of depletion allow a well to produce 600,000 barrels in 5 or 6 years. If Likvern is right and the average well yields 85,000 barrels the first year, then it would produce only about 200,000 barrels (see the revenue sketch after this list).
  • Platts estimates each well generates $20 million in profits
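The gap between these two lifetime-recovery estimates is what decides whether a $9-10 million well ever pays out. The sketch below compares gross revenue under each; the $85/barrel price is our assumption (roughly the regional price cited earlier in this post), and gross revenue ignores royalties, taxes, and operating costs, so the real economics are worse in both cases.

```python
# Gross revenue per well under the two lifetime-recovery estimates above.
OIL_PRICE = 85.0  # $/bbl -- an assumption, not a figure from either source
WELL_COST = 10e6  # $ to drill and frack (Likvern; the ND Director says $9M)

estimates = {
    "Likvern (Eagle Ford analogy)": 120_000,  # bbl over the well's lifetime
    "ND Oil & Gas Division": 615_000,
}
for source, lifetime_bbl in estimates.items():
    revenue = lifetime_bbl * OIL_PRICE
    print(f"{source}: ${revenue / 1e6:.1f}M gross vs ${WELL_COST / 1e6:.0f}M well cost")
```

Under the lower estimate a well barely grosses its drilling cost before any other expense; under the higher one it is comfortably profitable. That is why the dispute over decline rates matters so much.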

An energy expert (I don’t have permission to give attribution) on oil and gas shales:

  • Barnett (once our great natural gas savior) has peaked (at least for now)
  • Haynesville has peaked
  • Montana Bakken oil has peaked and is half way down
  • North Dakota is increasing rapidly but gets most of its oil out of two sweet spots — Bakken is not nearly as big as it appears on maps. So far all the oil drilling is concentrated in 3 sweet spots: Parshall and the Nesson anticline in North Dakota, and Elm Coulee in Montana. These areas, about 5-10 percent of the Bakken area on the map, are packed with oil wells; there are essentially none in other areas, and according to the USGS those other wells produce little oil.
  • The question is: is it all only about sweet spots? How many more sweet spots are there? An early estimate of the EROI of these sites is about the same as US oil now (~10:1), for the sweet spots only.
  • Also from a Texas oil man: “Few are making profits in the Eagle Ford. It’s a vast Ponzi scheme.”

Chris Nelder: “… the decline rates of shale gas wells are steep. They vary widely from play to play, but the output of shale gas wells commonly falls by 50% to 60% or more in the first year of production. This is why I have called it a treadmill: you have to keep drilling furiously to maintain flat output.

In the U.S., the aggregate decline of natural gas production from both conventional and unconventional sources is now 32% per year, so 22 bcf/d of new production must be added every year to keep overall production flat, according to Canadian geologist David Hughes. That’s close to the total output of U.S. shale gas, after nearly a decade of its development. It will require thousands more shale gas and tight oil wells to keep domestic gas production flat.”
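Hughes’ two numbers also imply the size of the production base being replaced; the one-line inference below is ours, not his:

```python
# If 32% of output is lost each year, and replacing it takes 22 bcf/d of new
# supply, the underlying production base is about 22 / 0.32 = ~69 bcf/d.
aggregate_decline = 0.32  # fraction of US gas production lost per year
replacement = 22.0        # bcf/d of new production needed annually

print(f"Implied total US gas production: ~{replacement / aggregate_decline:.0f} bcf/d")
```

A figure in the high 60s of bcf/d is consistent with total US gas production at the time, which is a useful cross-check on the quoted decline rate.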

American Geophysical Union conference 2012: TITLE: The Future of Fossil Fuels: A Century of Abundance or a Century of Decline?

ABSTRACT: Horizontal drilling, hydraulic fracturing, and other advanced technologies have spawned a host of new euphoric forecasts of hydrocarbon abundance. Yet although the world’s remaining oil and gas resources are enormous, most of them are destined to stay in the ground due to real-world constraints on price, flow rates, investor appetite, supply chain security, resource quality, and global economic conditions. While laboring under the mistaken belief that it sits atop a 100-year supply of natural gas, the U.S. is contemplating exporting nearly all of its shale gas production even as that production is already flattening due to poor economics. Instead of bringing “energy independence” to the U.S. and making it the top oil exporter, unrestricted drilling for tight oil and in the federal outer continental shelf would cut the lifespan of U.S. oil production in half and make it the world’s most desperate oil importer by mid-century. And current forecasts for Canadian tar sands production are as unrealistic as their failed predecessors. Over the past century, world energy production has moved progressively from high quality resources with high production rates and low costs to lower quality resources with lower production rates and higher costs, and that progression is accelerating. Soon we will discover the limits of practical extraction, as production costs exceed consumer price tolerance. Oil and gas from tight formations, shale, bitumen, kerogen, coalbeds, deepwater, and the Arctic are not the stuff of new abundance, but the oil junkie’s last dirty fix. This session will highlight the gap between the story the industry tells about our energy future, and the story the data tells about resource size, production rates, costs, and consumer price tolerance. It will show why it’s time to put aside unrealistic visions of continued dependence on fossil fuels, face up to a century of decline, and commit ourselves to energy and transportation transition.

Bill Powers: “There is production decline in the Haynesville and Barnett shales. Output is declining in the Woodford Shale in Oklahoma. Some of the older shale plays, such as the Fayetteville Shale, are starting to roll over. As these shale plays reverse direction and the Marcellus Shale slows down its production growth, overall U.S. production will fall.  At the same time, Canadian production is falling. And Canada has historically been the main natural gas import source for the U.S. In fact, Canada has already experienced a significant decline in gas production — about 25%, since a peak in 2002 — and has dramatically slowed its exports to the United States.”

Art Berman: in 2011 he published a report showing industry reserves had been overstated by at least 100%, based on a detailed review of both individual well and group decline profiles for the Barnett, Fayetteville, and Haynesville Shale plays.

 

Ian Urbina. 25 Jun 2011. Insiders Sound an Alarm Amid a Natural Gas Rush. New York Times.

[Natural] gas may not be as easy and cheap to extract from shale formations deep underground as [energy] companies are saying, according to hundreds of industry e-mails and internal documents and an analysis of data from thousands of wells.

In the e-mails, energy executives, industry lawyers, state geologists and market analysts voice skepticism about lofty forecasts and question whether companies are intentionally, and even illegally, overstating the productivity of their wells and the size of their reserves. Many of these e-mails also suggest a view that is in stark contrast to more bullish public comments made by the industry, in much the same way that insiders have raised doubts about previous financial bubbles.

“Money is pouring in” from investors even though shale gas is “inherently unprofitable,” an analyst from PNC Wealth Management, an investment company, wrote to a contractor in a February e-mail. “Reminds you of dot-coms.”

“The word in the world of independents is that the shale plays are just giant Ponzi schemes and the economics just do not work,” an analyst from IHS Drilling Data, an energy research company, wrote in an e-mail on Aug. 28, 2009.

Company data for more than 10,000 wells in three major shale gas formations raise further questions about the industry’s prospects. There is undoubtedly a vast amount of gas in the formations. The question remains how affordably it can be extracted.

The data show that while there are some very active wells, they are often surrounded by vast zones of less-productive wells that in some cases cost more to drill and operate than the gas they produce is worth. Also, the amount of gas produced by many of the successful wells is falling much faster than initially predicted by energy companies, making it more difficult for them to turn a profit over the long run.

If the industry does not live up to expectations, the impact will be felt widely…if natural gas ultimately proves more expensive to extract from the ground than has been predicted, landowners, investors and lenders could see their investments falter, while consumers will pay a price in higher electricity and home heating bills.

There are implications for the environment, too. The technology used to get gas flowing out of the ground — called hydraulic fracturing, or hydrofracking — can require over a million gallons of water per well, and some of that water must be disposed of because it becomes contaminated by the process. If shale gas wells fade faster than expected, energy companies will have to drill more wells or hydrofrack them more often, resulting in more toxic waste.

The e-mails were obtained through open-records requests or provided to The New York Times by industry consultants and analysts who say they believe that the public perception of shale gas does not match reality.

Studying the Data

Deborah Rogers, a former stockbroker with Merrill Lynch, said she started studying well data from shale companies in October 2009 after attending a speech by the chief executive of Chesapeake, Aubrey McClendon. The math was not adding up: her research showed that wells were petering out faster than expected.

In May 2010, the Federal Reserve Bank of Dallas called a meeting to discuss the matter after prodding from Ms. Rogers. One speaker was Kenneth B. Medlock III, an energy expert at Rice University, who described a promising future for the shale gas industry in the United States. When he was done, Ms. Rogers peppered him with questions.

Might growing environmental concerns raise the cost of doing business? If wells were dying off faster than predicted, how many new wells would need to be drilled to meet projections?

Mr. Medlock conceded that production in the Barnett shale formation — or “play,” in industry jargon — was indeed flat and would probably soon decline.

Bubbling Doubts

Some doubts about the industry are being raised by people who work inside energy companies, too.

“In these shale gas plays no well is really economic right now, they are all losing a little money or only making a little bit of money,” one company geologist wrote in an e-mail. Around the same time, Mr. McClendon, Chesapeake’s chief executive, told investors, “It’s time to get bullish on natural gas.”

In September 2009, a geologist from ConocoPhillips, one of the largest producers of natural gas in the Barnett shale, warned in an e-mail to a colleague that shale gas might end up as “the world’s largest uneconomic field.”

Forecasting these reserves is a tricky science. Early predictions are sometimes lowered because of drops in gas prices, as happened in 2008. Intentionally overbooking reserves, however, is illegal because it misleads investors. Industry e-mails, mostly from 2009 and later, include language from oil and gas executives questioning whether other energy companies are doing just that.

The e-mails do not explicitly accuse any companies of breaking the law. But the number of e-mails, the seniority of the people writing them, the variety of positions they hold and the language they use — including comparisons to Ponzi schemes and attempts to “con” Wall Street — suggest that questions about the shale gas industry exist in many corners.

“Do you think that there may be something suspicious going on with the public companies in regard to booking shale reserves?” a senior official from Ivy Energy, an investment firm specializing in the energy sector, wrote in a 2009 e-mail.

A former Enron executive wrote in 2009 while working at an energy company: “I wonder when they will start telling people these wells are just not what they thought they were going to be?” He added that the behavior of shale gas companies reminded him of what he saw when he worked at Enron.

Production data, provided by companies to state regulators and reviewed by The Times, show that many wells are not performing as the industry expected. In three major shale formations — the Barnett in Texas, the Haynesville in East Texas and Louisiana and the Fayetteville, across Arkansas — less than 20 percent of the area heralded by companies as productive is emerging as likely to be profitable under current market conditions, according to the data and industry analysts.

Richard K. Stoneburner, president and chief operating officer of Petrohawk Energy, said that looking at entire shale formations was misleading because some companies drilled only in the best areas or had lower costs. “Outside those areas, you can drill a lot of wells that will never live up to expectations,” he added.

Although energy companies routinely project that shale gas wells will produce gas at a reasonable rate for anywhere from 20 to 65 years, these companies have been making such predictions based on limited data and a certain amount of guesswork, since shale drilling is a relatively new practice.

Most gas companies claim that production will drop sharply after the first few years but then level off, allowing most wells to produce gas for decades. Gas production data reviewed by The Times suggest that many wells in shale gas fields do not level off the way many companies predict but instead decline steadily.
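The disagreement is easy to state in decline-curve terms. Below is a minimal sketch contrasting the two readings using the standard Arps decline formulas; the initial rate and decline parameters are invented for illustration and are not the Times’ or any company’s actual figures.

```python
import numpy as np

def hyperbolic(qi, di, b, t):
    """Arps hyperbolic decline: a steep early drop that flattens with age."""
    return qi / (1.0 + b * di * t) ** (1.0 / b)

def exponential(qi, di, t):
    """Arps exponential decline: the same fractional loss every year."""
    return qi * np.exp(-di * t)

years = np.arange(0, 16)   # well age in years
qi = 1000.0                # initial rate in Mcf/day (illustrative)

# Industry-style forecast: ~70%/yr initial decline that "levels off"
level_off = hyperbolic(qi, di=0.70, b=1.5, t=years)
# Steady-decline reading of the data: ~35%/yr, year after year
steady = exponential(qi, di=0.35, t=years)

for t in (1, 5, 10, 15):
    print(f"year {t:2d}: level-off {level_off[t]:6.1f}  steady {steady[t]:6.1f}  Mcf/day")

# Lifetime output is what pays back the well, and the profiles diverge badly:
# the steady-decline profile delivers only about two-thirds of the gas.
print(f"steady-decline output is {steady.sum() / level_off.sum():.0%} of the level-off forecast")
```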

“This kind of data is making it harder and harder to deny that the shale gas revolution is being oversold,” said Art Berman, a Houston-based geologist who worked for two decades at Amoco and has been one of the most vocal skeptics of shale gas economics.

The Barnett shale, which has the longest production history, provides the most reliable case study for predicting future shale gas potential. The data suggest that if the wells’ production continues to decline in the current manner, many will become financially unviable within 10 to 15 years.

A review of more than 9,000 wells, using data from 2003 to 2009, shows that — based on widely used industry assumptions about the market price of gas and the cost of drilling and operating a well — less than 10% of the wells had recouped their estimated costs by the time they were 7 years old.
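For intuition about that payback finding, here is a toy cash-flow sketch under stated assumptions: a flat gas price, made-up drilling and operating costs, and a steep-then-slower decline. None of these numbers come from the Times review; they only show how a fast-declining well can fail to recoup its cost in seven years.

```python
# Toy well economics: does seven years of declining output cover the
# drilling cost? All inputs are invented, illustrative round numbers.
gas_price = 4.00          # $/Mcf, assumed flat
opex = 1.00               # $/Mcf lifting and gathering cost
well_cost = 5_000_000     # $ to drill and complete

rate = 2000.0             # initial production, Mcf/day
margin_to_date = 0.0
for year in range(1, 8):
    decline = 0.60 if year == 1 else 0.25   # steep first year, slower after
    avg_rate = rate * (1 - decline / 2)     # rough mid-year average rate
    margin_to_date += avg_rate * 365 * (gas_price - opex)
    rate *= 1 - decline

print(f"margin after 7 years: ${margin_to_date:,.0f}")
print("recouped the $5M well cost?", margin_to_date >= well_cost)  # False here
```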

In private exchanges, many industry insiders are skeptical, even cynical, about the industry’s pronouncements. “All about making money,” an official from Schlumberger, an oil and gas services company, wrote in a July 2010 e-mail to a former federal regulator about drilling a well in Europe, where some United States shale companies are hunting for better market opportunities.

“Looks like crap,” the Schlumberger official wrote about the well’s performance, according to the regulator, “but operator will flip it based on ‘potential’ and make some money on it.”

David Hughes at the 2012 American Geophysical Union conference: Shale Gas and Tight Oil: A Panacea for the Energy Woes of America?

ABSTRACT: Shale gas has been heralded as a game changer in the struggle to meet America’s demand for energy. The Pickens Plan of Texas oil and gas pioneer T. Boone Pickens suggests that gas can replace coal for much of U.S. electricity generation, and oil for, at least, truck transportation. Industry lobby groups such as ANGA declare that the dream of clean, abundant, home-grown energy is now reality. In Canada, politicians in British Columbia are racing to export the virtual bounty of shale gas via LNG to Asia, despite the fact that Canadian gas production is down 16% from its 2001 peak. And the EIA has forecast that the U.S. will become a net exporter of gas by 2021. Similarly, recent reports from Citigroup and Harvard suggest that an oil glut is on the horizon thanks in part to the application of fracking technology to formerly inaccessible low-permeability tight oil plays. The fundamentals of well costs and declines belie this optimism. Shale gas is expensive gas. In the early days it was declared that continuous plays like shale gas were manufacturing operations, and that geology didn’t matter. One could drill a well anywhere, it was suggested, and expect consistent production. Unfortunately, Mother Nature always has the last word, and inevitably the vast expanses of purported potential shale gas resources contracted to core areas, where geological conditions were optimal. The cost to produce shale gas ranges from $4.00 per thousand cubic feet (mcf) to $10.00, depending on the play. Natural gas production is a story about declines, which now amount to 32% per year in the U.S. So 22 billion cubic feet per day of production now has to be replaced each year just to keep overall production flat. At current prices of $2.50/mcf, industry is short about $50 billion per year in cash flow to make this happen. As a result I expect falling production and rising prices in the near to medium term. Similarly, tight oil plays in North Dakota and Texas have been heralded as a new Saudi Arabia of oil. Growth in production has been spectacular, but currently amounts to just one million barrels per day, which is less than 15% of US oil and other liquids production. Tight oil is offsetting declines in conventional crude oil production as well as contributing to a modest production increase from the 40-year US crude oil production low of 2008. The mantra that natural gas is a transition fuel to a low-carbon future is false. The environmental costs of shale gas extraction have been documented in legions of anecdotal and scientific reports: methane and fracture-fluid contamination of groundwater, induced seismicity from fracture-water injection, industrialized landscapes and air emissions, and the fact that near-term emissions from shale gas generation of electricity are worse than coal. Tight oil also comes with environmental costs but has been a saviour in that it has at least temporarily arrested a terminal decline in US oil production. A sane energy security strategy for America must focus on radically reducing energy consumption through investments in infrastructure that provide alternatives to our current high energy throughput. Shale gas and tight oil will be important contributors to future energy requirements, given that other gas and oil sources are declining, but there is no free lunch.
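Hughes’ replacement arithmetic can be checked on the back of an envelope. The sketch below takes the 32 percent decline rate, the $2.50/mcf price, and the $50 billion shortfall from his abstract; the roughly 69 bcf/d production figure is my assumed input, chosen to match U.S. dry gas output at the time.

```python
production_bcfd = 69.0   # assumed U.S. dry gas production, bcf/day
decline = 0.32           # Hughes' fleet-wide annual decline rate

replace_bcfd = production_bcfd * decline
print(f"output to replace each year: ~{replace_bcfd:.0f} bcf/d")   # ~22

# What cost-minus-price gap per mcf would leave industry $50 billion
# short per year, as Hughes estimates?  (1 bcf = 1e6 mcf)
mcf_per_year = replace_bcfd * 1e6 * 365
gap = 50e9 / mcf_per_year
print(f"implied gap: ${gap:.2f}/mcf")   # ~$6, i.e. costs near the top of
                                        # his $4-$10 range vs a $2.50 price
```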

Charles A. Hall at the 2012 American Geophysical Union conference: Quantity vs quality of oil: Implications for the future economy

ABSTRACT: There has been considerable interest recently in various indications of important changes in the technology of oil production and its impact on US oil production. The data indicate a clear increase in oil production for the US after 40 years of year-by-year decline. This has led some commentators to predict that the US will become a net oil exporter before long. Maps showing the enormous extent of, e.g., the Bakken formation in North Dakota and Montana, and our ability to now exploit this oil using the new techniques of horizontal drilling and fracking, give the impression that there are enormous new oil reserves that can satisfy our wants indefinitely. Other assessments indicate that the amount of oil still available globally is 3, 4 or more times the usual assessments of about 1 trillion barrels. But “oil” is not a single substance, but rather a suite of materials of widely varying qualities and hence utility. One important index is the energy return on investment (EROI), the ratio of energy returned to energy used to get it. EROI reflects the balance of the countervailing impacts of depletion and technology and ultimately determines the price of a fuel. This ratio is declining all around the world, and gives a practical limit to how much oil we can exploit at an energy and economic profit. Bringing quality of oil into the equation gives a much more restrictive estimate of how much oil we are likely to be able to exploit for fuel.
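Hall’s EROI point is easy to quantify: since EROI is energy delivered per unit of energy invested, the fraction of gross output left over for society is 1 − 1/EROI, which collapses as EROI approaches 1. A few lines of arithmetic (my illustration, not from the abstract) show the so-called net energy cliff:

```python
# Net energy fraction as a function of EROI: 1 - 1/EROI.
for eroi in (100, 50, 20, 10, 5, 3, 2, 1.5):
    net = 1 - 1 / eroi
    print(f"EROI {eroi:5.1f}: {net:5.1%} of gross energy is net to society")
# Falling from EROI 100 to 10 costs little net energy (99% -> 90%),
# but falling from 5 to 1.5 is catastrophic (80% -> 33%).
```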

The “shale revolution” has often been touted as a game changer in energy production (1). Indeed, during the past few years, the increasing production trend of shale (or “tight”) gas in the US has generated a wave of optimism invading the media and the Web. However, not everyone has joined the chorus, and several commentators have predicted that the trend would be short lived (see, e.g., Sorrell (2), Laherrere (3), Hughes (4), and Turiel (5)). Some have flatly stated that the effort in gas production in the US is simply a financial bubble, destined to deflate soon (see, e.g., Orlov (6) and Berman (7)). Some, such as R. P. Siegel (8), even argue that the bursting of the gas bubble might bring about a financial collapse not unlike the one of 2008.

While optimism about the future of natural gas seems to be still prevalent, the data show that the gas bubble may already be bursting. The most recent data from the EIA (9) show that total US gas production has not been growing for the past 1-2 years and shows signs of declining. Fitted with a Gaussian curve, it shows a peak taking place around the end of 2012.
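The curve-fitting step described here is straightforward to reproduce. Below is a sketch of the method on a synthetic stand-in for the EIA monthly series rather than the real data, assuming scipy is available:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, peak, t_peak, width):
    """Bell-shaped curve used as a simple peaking model for production."""
    return peak * np.exp(-((t - t_peak) ** 2) / (2.0 * width ** 2))

months = np.arange(120)   # a ten-year monthly series
rng = np.random.default_rng(1)
# Synthetic production: growth that flattens near the end of the record
series = gaussian(months, 72.0, 118.0, 55.0) + rng.normal(0.0, 0.8, months.size)

(peak, t_peak, width), _ = curve_fit(gaussian, months, series, p0=(60, 100, 40))
print(f"fitted peak at month {t_peak:.0f} of the series")
# A fitted peak near the end of the record is the curve's way of saying
# production is topping out about now.
```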

The declining trend is not yet very pronounced and specific data about shale gas production after 2011 are not available in the EIA site (9). However, since the production of conventional gas has been declining since 2007, the production of shale gas may not be declining yet, but it is surely not growing any more at the rates that were common just a few years ago.

In any case, there are data indicating that the decline of total gas production in the US was to be expected. The number of gas drilling rigs has been plummeting during the past few years (data from Baker Hughes (10)).

Obviously, one can’t extract anything without having drilled first to find it. Since the lifetime of shale gas wells is on the order of a few years, it was unavoidable that the drop in the number of gas drilling rigs would result in a production decline, which is what we are seeing today.

Basically, these data seem to confirm the interpretation that we are facing a financial “gas bubble”, rather than a robust trend of development of new resources. The gas glut produced by the rush to gas of the past few years has lowered prices to the point that companies have been extracting gas without making any profit, actually losing money in the process (7). That couldn’t last forever.

In the near future, the decline in gas production in the US may lead to an increase in prices which, in turn, may direct the industry to restart drilling for gas. But it remains to be seen if prices high enough to generate a profit are affordable for consumers. In any case, the idea of a “gas revolution” that will bring for us an age of abundance is rapidly fading.

In the end, what we are doing with gas is simply one more step along a path that we are forced to follow. With the gradual disappearance of high-grade mineral resources, we must extract the minerals we need from lower-grade resources, and this is more expensive and more polluting. That’s exactly what is happening with gas, but it is much more general. As described in the most recent report of the Club of Rome (Plundering the Planet (11)), the gradual depletion of high-grade mineral resources is leading us to a world where mineral commodities will be rarer and more expensive. We will have to adapt to this empty new world.

1. http://belfercenter.ksg.harvard.edu/files/Oil-%20The%20Next%20Revolution.pdf

2. http://www.theoildrum.com/node/9327

3. http://www.theoildrum.com/node/9495

4. http://shalebubble.org/drill-baby-drill/

5. http://crashoil.blogspot.it/2010/12/un-mar-de-gas-natural.html

6. http://cluborlov.blogspot.com.es/2012/05/shale-gas-view-from-russia.html

7. http://www.youtube.com/watch?v=cDDseuvO2SM&feature=share&list=PL3yVl0q9sFIwgSJfLSBEA3pLrDsxMDMwe

8. http://www.triplepundit.com/2013/02/shale-gas-bubble-threatens-second-economic-collapse/

9. http://www.eia.gov/naturalgas/

10. http://www.bakerhughes.com/rig-count

11. http://www.clubofrome.org/?p=6166

————————————————————————————————-


Why technology can’t solve all of our problems

I have many posts about energy contraptions that have generated hundreds of thousands of breakthrough stories in the media, yet here we are without batteries, hydrogen fuel cells, and the like powerful enough to replace fossil fuels, despite all the exciting news that never actually panned out.

Below is a post by Kevin Cameron, who points out that inventions can take a long time to come to fruition, such as the carbon fiber he saw in 1976 that didn’t appear in a commercial flight until 2011—35 years later. And carbon fiber still hasn’t appeared in production motorcycles.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Kevin Cameron. 2019. Electric bikes, rocket fuel, and the possibility of disappointment. Yearning for a breakthrough that may never come.  Cycleworld.com

Our expectations of what technology can give us result from fast changes in areas such as computing. We are impressed by social change caused by the coming of phones that are really pocket-sized computer terminals connected to “an internet of everything.” This tells us that technology can give us whatever we want and soon.

We know that around 1975 John B. Goodenough devised a workable rechargeable battery of high energy density based upon lithium ions. By 1992, the concept had been commercialized by Sony, and since then, with increasing intensity, hundreds of academic and commercial laboratories have striven night and day to be the first to create for electric vehicles a better battery that has it all.

Having it all means being affordable, free of high-priced materials, safe in operation, fast-charging, capable of producing large current, long-lasting, and able to store enough energy in a lightweight package to compete heads-up with the range and performance of vehicles powered by internal-combustion engines.

Many different electrode chemistries exist for the basic lithium-ion process, but no single one of them has yet achieved a “round” performance offering all of the listed desirable attributes.

One way to look at the future is to say, “Well, all these hundreds of labs have been hard at it for many years, so the big breakthrough will come any minute. We deserve it. All it takes is for some brilliant chemist to add a pinch of this or that and we’re home.”

The Development of Rocket Fuels

I was fascinated to tear through the late John D. Clark’s little book Ignition!, which describes the intensity, creativity, and results of rocket-fuels research from the end of World War II to the mid-1960s. Military planners knew that any future world war would be decided by nuclear ICBMs, with results final in 20 minutes. Driven by that sharp spur, world governments poured treasure and resources into the development of rocket fuels that might give them Cold War leverage.

Chemists can compute the ideal energy yields of even quite complex fuel and oxidizer molecules. But it’s quite another thing to realize ideals in practice. Some fuels, such as powdered aluminum, burn too slowly to react completely before they leave the nozzle. Others failed to meet the services’ standards for storability or low freezing temperatures. Some compounds slowly deteriorated over time, generating gas that could burst storage containers. Little by little, useful combinations were discovered, but the exotic promise of super fuels kept research going, trying to create ultra-high-energy combinations based on boron, fluorine (it eats through glass), and even remarkably poisonous mercury. Standards were created to measure sensitivity; some fuels, especially monopropellants, detonated if poured, or if they touched dust, or from micro-cavitation. Countless thousands of compounds were proposed, produced in labs, tested, and, if possible, fired in research rocket engines. Micro-contamination of containers by trace elements led to terrible surprises. Researchers were injured or killed.

At the same time, the US Air Force worked to develop range-extending boron-based fuels for gas turbines. One class of fuels showed promise, until its combustion product turned out to be a viscous material like melted glass, which plugged up the engines in which it was burned. There were explosions with no discoverable cause. Eventually the materials under study proved just too dangerous, and the program was ended.

Here we are in 2019 with heavy rocket boosters still burning RP-1—a kind of standardized kerosene—and liquid oxygen, as in the Apollo program. Higher-performing engines burn liquid hydrogen and liquid oxygen, and the storable propellants are either conventional solids or based on good old nitrogen tetroxide and acids. In the mid-1960s, computer programs were run on mainframes to exhaustively grind through all possible chemistries. The result? The same compounds that had been discovered by conventional means, so many of which were too sensitive to be used.

There are other ongoing programs that have yet to succeed. One is thermonuclear fusion, a potentially unlimited power source. For much of my lifetime, I have read that, “The problem of controlled fusion will be solved within the next 50 years.” Another is the “room-temperature” superconductor, which would enable super-efficient electrical devices and lossless transmission of power. In each case, success would be of great benefit to humankind. We want success very much. We feel we deserve it.


Challenges to making California’s grid renewable

The critical role of natural gas in meeting electricity demand with intermittent wind and solar resources. 2013. Velerity

What follows is a report from the California Energy Commission. But in less bureaucratic language, this may summarize it better (Petersen 2019):

“I’ve always been amazed at a strange mental disconnect that’s common among renewable power advocates. On one hand, they freely acknowledge that industrial societies can’t function without stable power grids to supply electricity on demand, 24 hours a day, seven days a week, 365 days a year, and with 99.999% reliability. On the other hand, they insist that we have a moral duty to use non-dispatchable, intermittent, and generally unreliable power from renewables despite the fact that intermittency is the mortal enemy of a stable electric grid. The electrons from wind turbines and solar panels may be green and squeaky clean, but their intermittent electric current is the grid equivalent of sewage in a mountain stream.

According to the California Independent System Operator, or CAISO, the biggest challenge of managing a greener grid is maintaining a precise balance between supply and demand as the percentage of intermittent power from renewables increases. To meet the challenge, CAISO is working overtime to develop a fleet of flexible power resources with the capacity to:

  • Sustain upward and downward ramp;
  • Respond for a defined period of time;
  • Change ramp directions quickly;
  • Store energy or modify energy use;
  • React quickly to meet expected operating levels;
  • Start with short notice from a zero or low-electricity operating level;
  • Start and stop multiple times per day; and
  • Accurately forecast operating capability.

While the contract price for green electricity from a wind- or solar-farm may be cheaper than the contract price for electricity from a conventional power plant, the downstream cost of making green power stable, reliable, and useful in the electric grid can be immense, which is why electricity in states that have implemented renewable portfolio standards is often more costly than it is in states that haven’t implemented RPS programs.”
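To see what CAISO’s “flexible resources” must actually do, consider the evening hand-off from solar to everything else. The hourly numbers below are invented for illustration, but the arithmetic is the real problem: net load (demand minus renewables) sets the ramp the dispatchable fleet must climb.

```python
# Toy evening ramp calculation (all MW figures invented for illustration)
hours = [15,    16,    17,    18,    19,    20]     # hour of day
load  = [30000, 31000, 33000, 36000, 38000, 37000]  # total demand, MW
solar = [9000,  7000,  4000,  1500,  200,   0]      # solar output, MW

net_load = [l - s for l, s in zip(load, solar)]
ramps = [b - a for a, b in zip(net_load, net_load[1:])]

print("net load (MW):          ", net_load)
print("hourly ramp needed (MW):", ramps)
# Dispatchable plants, storage, or demand response must cover every one
# of these steps, including the steep climbs as the sun goes down.
```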

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Meier, A. May 2014. Challenges to the integration of renewable resources at high system penetration. California Energy Commission.

Energy Research and Development Division Final Report, 2014. California Energy Commission. Prepared by Alexandra von Meier, California Institute for Energy and Environment, University of California.

Excerpts

Successfully integrating renewable resources into the electric grid at penetration levels to meet a 33 percent Renewables Portfolio Standard for California presents diverse technical and organizational challenges.

Renewable and distributed resources introduce space (spatial) and time (temporal) constraints on resource availability and are not always available where or when they are wanted.

Although every energy resource has limitations, the constraints associated with renewables may be more stringent and different from those constraints that baseload power systems were designed and built around.

These unique constraints must be addressed to mitigate problems and overcome difficulties while maximizing the benefits of renewable resources. New efforts are required to coordinate time and space within the electric grid at greater resolution or with a higher degree of refinement than in the past. This requires measuring and actively controlling diverse components of the power system on smaller time scales while working toward long‐term goals. These smaller time scales may be hourly or by the minute, but could also be in the milli‐ or even microsecond range.

To cope with intermittent renewables, the grid needs:

  • reserve generation capacity (at least a day ahead)
  • dispatchable generation with high ramp rates in MW/s
  • generation with regulation capability
  • dispatchable electric storage
  • electric demand response (from customers)
  • direct load control down to a 5-second time scale without impacting end-use (!) (their exclamation mark, not mine; see http://uc-ciee.org/downloads/Renewable_Energy_2010.pdf)

It is also important to plan and design around the diverse details of local distribution circuits while considering systemic interactions throughout the Western interconnect. Simultaneously coordinating or balancing these resources in an electric grid across a variety of time and distance scales, independent of any specific technology used to assist, is what defines a “smart grid.”

Temporal coordination specifically addresses the renewable resources’ time‐varying behavior and how this intermittency interacts with other components on the grid, where not only quantities of power but rates of change and response times are crucially important.

Research needs for temporal coordination relate to:

  • resource intermittence;
  • forecasting and modeling on finer time scales;
  • electric storage and implementation on different time scales;
  • demand response and its implementation as a firm resource;
  • and dynamic behavior of the alternating current grid, including stability and low‐frequency oscillations, and the related behavior of switch‐controlled generation.

Different technologies, management strategies and incentive mechanisms are necessary to address coordination on different time scales.

Spatial coordination refers to how resources are interconnected and connected to loads through the transmission and distribution system. This means connecting remote resources and also addressing the location‐specific effects of a given resource being connected in a particular place. The latter is particularly relevant for distributed generation, which includes numerous smaller units interconnected at the distribution rather than the transmission level.

Research needs for spatial coordination relate to: technical, social and economic challenges for

  • long‐distance transmission expansion;
  • problematic aspects of high‐penetration distributed generation on distribution circuits, including clustering, capacity limitations, modeling of generation and load, voltage regulation, circuit protection, and prevention of unintentional islanding;
  • microgrids and potential strategic development of microgrid concepts, including intentional islanding and variable power quality and reliability.

A challenge to “smart grid” coordination is managing unprecedented amounts of data associated with an unprecedented number of decisions and control actions at various levels throughout the grid.

This report outlined substantial challenges on the way to meeting these goals.

More work is required to move from the status quo to a system with 33 percent intermittent renewables. The complex nature of the grid, and the fact that refining temporal and spatial coordination represents a profound departure from the capabilities of the legacy baseload system, mean that any “smart grid” development will require time for learning.

Researchers concluded that time was of the essence in answering the many foundational questions about how to design and evaluate new system capabilities, how to re‐write standards and procedures accordingly, how to create incentives to elicit the most constructive behavior from market participants and how to support operators in their efforts to keep the grid working reliably during these transitions. Addressing these questions early may help prevent costly mistakes and delays later on.

CHAPTER 1: Introduction to the Coordination Challenge

Successfully integrating renewable resources in the electric grid at high penetration levels – that is, meeting a 33 percent renewables portfolio standard for California – presents diverse technical and organizational challenges. Some of these challenges have been well‐recognized in the literature, while others are emerging from more recent observations. What these challenges have in common is that they can be characterized as a coordination challenge. Renewable and distributed resources introduce space or location (spatial) and time (temporal) constraints on resource availability. It is not always possible to have the resources available where and when they are required.

New efforts will be required to coordinate these resources in space and time within the electric grid.

A combination of economic and technical pressures has made grid operators pay more attention to the grid’s dynamic behaviors, some of which occur within a fraction of an alternating current cycle (one‐sixtieth of a second). The entire range of relevant time increments in electric grid operation and planning spans fifteen orders of magnitude: from the micro‐second interval on which a solid‐state switching device operates, to the tens of years (on the order of a billion seconds) it may take to bring a new fleet of generation and transmission resources online.

[Figure: time scales relevant to a California grid with 33 percent renewables]

In the spatial dimension, it is also the case that power systems have expanded geographically and become strongly interdependent over long distances, while local effects such as power quality are simultaneously gaining importance. About six orders of magnitude are covered – from the very proximate impacts of harmonics (on the scale of an individual building) to the wide‐area stability and reliability effects that reach across the Western Interconnect, on the scale of a thousand miles.

Because of their unique properties, any effort to integrate renewable resources to a high penetration level will push outward the time and distance scales on which the grid is operated. For example, it will force distant resource locations to be considered, as well as unprecedented levels of distributed generation on customer rooftops.

The physical characteristics of these new generators will have important implications for system dynamic behavior.

In extending the time and distance scales for grid operations and planning, integrating renewable resources adds to and possibly compounds other, pre‐existing technical and economic pressures.

This suggests at least a partial definition for what has recently emerged as a “Holy Grail,” the “smart grid.” The “smart grid” is one that allows or facilitates managing electric power systems simultaneously on larger and smaller scales of distance and time.

Special emphasis is on the smaller end of each scale, where a “smart grid” allows managing energy and information at higher resolution than the legacy baseload system.

The fact that solar and wind power are intermittent and non‐dispatchable is widely recognized.

More specifically, the problematic aspects of intermittence include the following:

  1. High variability of wind power. Not only can wind speeds change rapidly, but because the mechanical power contained in the wind is proportional to wind speed cubed, a small change in wind speed causes a large change in power output from a wind rotor (a cube-law sketch follows this list).

  2. High correlation of hourly average wind speed among prime California wind areas. With many wind farms on the grid, the variability of wind power is somewhat mitigated by randomness: especially the most rapid variations tend to be statistically smoothed out once the output from many wind areas is summed up. However, while brief gusts of wind do not tend to occur simultaneously everywhere, the overall daily and even hourly patterns for the best California wind sites tend to be quite similar, because they are driven by the same overall weather patterns across the state.
  3. Time lag between solar generation peak and late afternoon demand peak. The availability of solar power generally has an excellent coincidence with summer‐peaking demand. However, while the highest load days are reliably sunny, the peak air‐conditioning loads occur later in the afternoon due to the thermal inertia of buildings, typically lagging peak insolation by several hours.
  4. Rapid solar output variation due to passing clouds. Passing cloud events tend to be randomized over larger areas, but can cause very rapid output variations locally. This effect is therefore more important for large, contiguous photovoltaic arrays (that can be affected by a cloud all at once) than for the sum of many smaller, distributed PV arrays. Passing clouds are also less important for solar thermal generation than for PV because the ramp rate is mitigated by thermal inertia (and because concentrating solar plants tend to be built in relatively cloudless climates, since they can only use direct, not diffuse, sunlight).
  5. Limited forecasting abilities. Rapid change of power output is especially problematic when it comes without warning. In principle, intermittence can be addressed by firming resources (reserve generation capacity, dispatchable generation with high ramp rates, generation with regulation capability, dispatchable electric storage, and electric demand response) that can be used in various combinations to offset the variability of renewable generation output. Vital characteristics of these firming resources include not only the capacity they can provide, but their response times and ramp rates.
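The cube law in item 1 is worth seeing in numbers. A minimal sketch, using the textbook rotor power formula P = ½·ρ·A·Cp·v³ with invented turbine parameters:

```python
# Cube law: a 10% wind-speed change is roughly a 30% power change.
rho = 1.225        # air density, kg/m^3
area = 5027.0      # swept area of an ~80 m diameter rotor, m^2
cp = 0.45          # assumed power coefficient

def rotor_power_mw(v):
    """Mechanical power captured by the rotor at wind speed v (m/s)."""
    return 0.5 * rho * area * cp * v ** 3 / 1e6

for v in (7.2, 8.0, 8.8):   # -10%, nominal, +10%
    print(f"{v:4.1f} m/s -> {rotor_power_mw(v):4.2f} MW")
# 8.8 m/s delivers ~33% more power than 8.0 m/s; 7.2 m/s delivers ~27% less.
```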

Solar and wind power forecasting obviously hinges on the ability to predict temperature, sunshine and wind conditions. While weather services can offer reasonably good forecasts for larger areas within a resolution of hours to days, ranges of uncertainty increase significantly for very local forecasts. Ideally, advance warning could be provided at the earliest possible time before variations in solar and wind output occur, to provide actionable intelligence to system operators.

Needed:

Real‐time forecasting tools for wind speed, temperature, total insolation (for PV) and direct normal insolation (for concentrating solar), down to the time scale of minutes

Tools for operators that translate weather forecasts into renewable output forecasts and action items to compensate for variations.

A related question is the extent to which the variability of renewable resources will cancel or compound at high penetration levels, locally and system‐wide. Specifically, we wish to know how rapidly aggregate output will vary for large and diverse collections of solar and wind resources.

Needed:

  • Analysis of short‐term variability for solar and wind resources, individually and in aggregate, to estimate the quantity and ramp rates of firming resources required.
  • Analysis of wide‐area deployment of balancing resources such as storage, shared among control areas, to compensate effectively for short‐term variability.

2.1.3 Background: Firming Resources

Resources to “firm up” intermittent generation include

  • reserve generation capacity
  • dispatchable generation with high ramp rates

The various types of firming generation resources are distinguished by the time scale on which they can be called to operate and the rate at which they can ramp power output up or down.
The most responsive resources are hydroelectric generators and gas turbines.

The difficult question is how much of each might be needed.

Electric storage includes a range of standard and emerging technologies:

  • pumped hydro
  • stationary battery banks
  • thermal storage at solar plants
  • electric vehicles
  • compressed air (CAES)
  • supercapacitors
  • flywheels
  • superconducting magnetic (SMES)
  • hydrogen from electrolysis or thermal decomposition of H2O

An inexpensive, practical, controllable, scalable and rapidly deployable storage technology would substantially relieve systemic constraints related to renewables integration.

The spectrum of time scales for different storage applications is illustrated in Figure 5.

  • months: seasonal energy storage (hydro power)
  • 4‐8 hours: demand shifting
  • 2 hours: supplemental energy dispatch
  • 15‐30 minutes: up‐ and down‐regulation
  • seconds to minutes: solar & wind output smoothing
  • sub‐milliseconds: power quality adjustment; flexible AC transmission system (FACTS) devices that shift power within a single cycle

Given that storing electric energy is expensive compared to the intrinsic value of the energy, the pertinent questions at this time concern what incentives there are for electric storage, at what level or type of implementation, and for what time target.

Alternating‐current (a.c.) power systems exhibit behavior distinct from direct‐current (d.c.) circuits. Their essential characteristics during steady‐state operation, such as average power transfer from one node to another, can usually be adequately predicted by referring to d.c. models. But as a.c. systems become larger and more complex, and as their utilization approaches the limits of their capacity, peculiar and transient behaviors unique to a.c. become more important.

 

3. Eto, Joe, et al. 2008. Real Time Grid Reliability Management. California Energy Commission, PIER Transmission Research Program. CEC‐500‐2008‐049.

The increased need to manage California’s electricity grid in real time is a result of the ongoing transition from a system operated by vertically integrated utilities serving native loads to one operated by an independent system operator supporting competitive energy markets. During this transition period, the traditional approach to reliability management—construction of new transmission lines—has not been pursued due to unresolved issues related to the financing and recovery of transmission project costs. In the absence of investments in new transmission infrastructure, the best strategy for managing reliability is to equip system operators with better real-time information about actual operating margins so that they can better understand and manage the risk of operating closer to the edge.

Traditional rotating generators support grid stability by resisting changes in rotational speed, both due to magnetic forces and their own mechanical rotational inertia. Through their inherent tendency to keep rotating at a constant speed, these generators give the entire AC system a tendency to return to a steady operating state in the face of disturbances. Legacy power systems were designed with this inertial behavior in mind.

Large fossil fuel and nuclear generators naturally promote 60-Hz grid stability because their rotational speed is held constant by magnetic forces and inertia. Despite disturbances, they tend to revert to a steady operating state. But the inverters that renewable energy sources use to supply AC power depend on very rapid on-off switching within solid-state semiconductor materials. It is possible that at some point, when a larger percentage of power comes from renewables, these inverters will destabilize grid voltages, frequencies, and oscillations by not responding well collectively to temporary disturbances, and that we will need to keep large rotating generators online to maintain stability.
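The role of rotating inertia can be sketched with the textbook swing equation: after a sudden loss of generation, grid frequency falls at a rate inversely proportional to the system inertia constant H. The parameters below are invented, and this is my illustration rather than a calculation from the CEC report, but it shows why a low-inertia, inverter-heavy grid leaves far less time to respond.

```python
# Swing-equation sketch: time for frequency to fall from 60 Hz to a
# typical 59.5 Hz trip threshold after losing 5% of generation.
f0 = 60.0         # nominal frequency, Hz
delta_p = 0.05    # sudden generation deficit, per unit of system load
dt = 0.001        # integration step, seconds

for H in (6.0, 3.0, 1.0):          # system inertia constant, seconds
    f, t = f0, 0.0
    while f > 59.5:
        dfdt = -delta_p * f0 / (2.0 * H)   # Hz per second (constant here)
        f += dfdt * dt
        t += dt
    print(f"H = {H:3.1f} s: 59.5 Hz reached after {t:4.2f} s")
# Less rotating inertia (small H) means the grid hits trip limits sooner,
# so inverters would have to detect and respond within fractions of a second.
```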

Unlike conventional rotating generators, inverters produce alternating current by very rapid on‐off switching within solid‐state semiconductor materials. Inverters are used whenever 60‐Hz AC power is supplied to the grid from

  • d.c. sources such as PV modules, fuel cells or batteries
  • variable‐speed generators, such as wind turbines whose output is conditioned by successive a.c.‐d.c.‐a.c. conversion (this does not include all wind generators, but a significant fraction of newly installed machines)

What we do not understand well are the dynamic effects on a.c. systems of switch‐controlled generation:
  • How will switch‐controlled generators collectively respond to temporary disturbances, and how can they act to stabilize system voltage and frequency?
  • What will be the effect of switch‐controlled generation on wide‐area, low‐frequency oscillations?
  • Can inverters “fake” inertia and what would it take to program them accordingly?
  • What is the minimum system‐wide contribution from large, rotating generators required for stability?

Needed:

  • Modeling of high‐penetration renewable scenarios on a shorter time scale, including dynamic behavior of generation units that impacts voltage and frequency stability
  • Generator models for solar and wind machines
  • Inverter performance analysis, standardization and specification of interconnection requirements that includes dynamic behavior
  • Synchro‐phasor measurements at an increased number of locations, including distribution circuits, to diagnose problems and inform optimal management of inverters

CHAPTER 3: Spatial Coordination

Relevant distance scales in power system operation span six orders of magnitude, from local effects of power quality on the scale of an individual building to hundreds or even thousands of miles across interconnected systems. A “smart grid” with high penetration of renewables will require simultaneous consideration of small‐ and large‐scale compatibilities and coordination.

3.1 Transmission Level: Long-distance Issues

3.1.1 Background: Transmission Issues

The need for transmission capacity to remote areas with prime solar and wind resources is widely recognized. It is worth noting that renewable resources are not unique in imposing new transmission requirements. For example, a new fleet of nuclear power plants would likely be constrained by siting considerations that would similarly require the construction of new transmission capacity. In the case of solar and wind power, however, we know where the most attractive resources are – and they are not where most people live. Challenges for transmission expansion include social, economic and technical factors. Social and economic challenges include:

  • Long project lead times for transmission siting, sometimes significantly exceeding lead times for generation
  • NIMBY resistance to transmission siting based on aesthetics and other concerns (e.g., exposure to electromagnetic fields)
  • Higher cost of alternatives to visible overhead transmission
  • Uncertainty about future transmission needs and economically optimal levels

On the technical side:

  • Long‐distance a.c. power transfers are constrained by stability limits (phase angle separation) regardless of thermal transmission capacity
  • Increased long‐distance a.c. power transfers may exacerbate low‐frequency oscillations (phase angle and voltage), potentially compromising system stability and security

Both of the above technical constraints can in theory be addressed with a.c.‐d.c. conversion, at significant cost. The crucial point, however, is that simply adding more, bigger wires will not always provide increased transmission capacity for the grid. Instead, it appears that legacy a.c. systems are reaching or have reached a maximum of geographic expansion and interconnectivity that still leaves them operable in terms of the system’s dynamic behavior. Further expansion of long‐distance power transfers, whether from renewable or other sources, will very likely require the increased use of newer technologies in transmission systems to overcome the dynamic constraints.

3.1.2 Research Needs Related to Transmission

On the social‐political and economic side, research needs relate to the problems of deciding how much transmission is needed where, and at what reasonable cost to whom. In addition, options for addressing siting constraints can be expanded by making transmission lines less visible or otherwise less obtrusive. Needed:

  • Analysis of economic costs and benefits to communities hosting rights of way
  • Political evaluation of accelerated siting processes
  • Continuing analysis to identify optimal investment level in transmission capacity relative to intermittent generation capacity, and to evaluate incentives
  • Public education, including interpretation of findings regarding EMF exposure
  • Continuing R&D on lower‐visibility transmission technologies, including compact designs and underground cables

Needed:

  • Dynamic system modeling on a large geographic scale (WECC), providing analysis of likely stability problems to be encountered in transmission expansion scenarios and the benefit potential of various d.c. link options
  • Continuing R&D on new infrastructure materials, devices and techniques that enable transmission capacity increases, including: dynamic thermal rating; power flow control (e.g., FACTS devices); fault current controllers; intelligent protection systems (e.g., adaptive relaying); stochastic planning and modeling tools; new conductor materials and engineered line and system configurations

CHAPTER 4: Overarching Coordination Issues

Refinement of both spatial and temporal coordination – in other words, “smartness” – demands a substantial increase of information flow among various components on the electric grid. This information flow has implications for system control strategies, including the role of human operators. Some of this coordination is specifically associated with renewable and distributed resources, requiring increased information volume for:

  • mitigating intermittence of renewable resources
  • accommodating siting constraints for renewable and distributed generation

Problematic issues in the context of information aggregation include the following:

  • How much data volume is manageable for both operators and communications systems?
  • What level of resolution needs to be preserved?
  • What data must be monitored continuously, and what opportunities exist to filter data by exceptional events?
  • How can information best be presented to operators to support situational awareness?

Once data have been selected and aggregated into manageable batches, they must be translated or somehow used to frame and inform action items for operators. For example, we might ask what local information goes into an operator’s decision to switch a particular feeder section, or to dispatch demand response, generation or storage. Operating procedures are necessarily based on the particular sets of information and control tools available to operators. The introduction of significant volumes of new data as well as potential control capabilities on more refined temporal and spatial scales also forces decisions about how this information is to be used, strategically and practically. Issues concerning actionable items include the following:

  • What new tasks and responsibilities are created for grid operators, especially distribution operators, by distributed resources?
  • How are these tasks defined?
  • What control actions may be taken by parties other than utility operators?

Needed:

  • Modeling of distribution circuit operation with high penetration of diverse distributed resources, including evaluation of control strategies.

Locus of Control

A question related to the definition of action items is who, exactly, is taking the action. With large amounts of data to be evaluated and many decisions to be made in potentially a short time frame, it is natural to surmise that some set of decisions would be made and actions initiated by automated systems of some sort, whether they be open‐loop with human oversight or closed‐loop “expert systems” that are assigned domains of responsibility. Such domains may range from small to substantial: for example, automation may mean a load thermostat that automatically resets itself in response to an input (e.g. price or demand response signal); distributed storage that charges or discharges in response to a schedule, signal or measurement of circuit conditions; or it could mean entire distribution feeders being switched automatically.

Finally, it would be naive to expect any substantial innovation in a technical system as complex as the electric grid to proceed without setbacks, or for an updated and improved system to operate henceforth without failures. Rather than wishing away mistakes and untoward events, the crucial question is what corrective feedback mechanisms are available, not if but when failures do occur. This includes, for example, contingency plans in response to failures of hardware, communications or control algorithms, cyber‐security breach, or any other unexpected behavior on the part of a system component, human or machine. A higher degree of spatial and temporal resolution in coordinating electric grids – more information, more decisions, and more actions – means many more opportunities for intervention and correction, but first it means many more opportunities for things to go wrong.

CHAPTER 5: Conclusion

The effective integration of large amounts of new resources, including distributed and renewable resources, hinges on the ability to coordinate the electric grid in space and time on a wide range of scales. The capability to perform such coordination, independent of any particular technology used to accomplish it, can be taken to define a “smart grid.”

Ultimately, “smart” coordination of the grid should serve to:

  • mitigate technical difficulties associated with renewable resources, thereby enabling California to meet its policy goals for a renewable portfolio
  • maximize the beneficial functions renewable generation can perform toward supporting grid stability and reliability

Much work lies between the status quo and a system with 33 percent of intermittent renewables. Due to the complex nature of the grid, and because the refinement of temporal and spatial coordination represents a profound departure from the capabilities of our legacy system, any “smart grid” development will require time for learning, and will need to draw on empirical performance data as they become available. Time is of the essence, therefore, in answering the many foundational questions about how to design and evaluate new system capabilities, how to re‐write standards and procedures accordingly, how to incentivize the most constructive behavior from market participants, and how to support operators in their efforts to keep the grid working reliably in the face of these transitions. With all the research needs detailed in this white paper, the hope is that questions addressed early may help prevent costly mistakes and delays later on. The more aggressively these research efforts are pursued, the more likely California will be able to meet its 2020 goals for renewable resource integration.

References

Petersen, J. 2019. CAISO Data Highlights Critical Flaws In The Evolving Renewables Plus Storage Mythology. Seeking Alpha.

National Renewable Energy Laboratory. Western Wind and Solar Integration Study. May 2010. http://wind.nrel.gov/public/WWIS/

Vittal, Vijay, “The Impact of Renewable Resources on the Performance and Reliability of the Electricity Grid.” National Academy of Engineering Publications, Vol. 40 No. 1, March 2010. http://www.nae.edu/Publications/TheBridge/Archives/TheElectricityGrid/18587.aspx


One million plant & animal species at risk of extinction

As usual, no mention of birth control or carrying capacity.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

Plumer, B. 2019. Humans Are Speeding Extinction and Altering the Natural World at an ‘Unprecedented’ Pace. New York Times.

Extinction rates are tens to hundreds of times higher than they have been in the past 10 million years.

Over the past 50 years, global biodiversity loss has primarily been driven by activities like the clearing of forests for farmland, the expansion of roads and cities, logging, hunting, overfishing, water pollution and the transport of invasive species around the globe.

All told, three-quarters of the world’s land area has been significantly altered by people, the report found, and 85 percent of the world’s wetlands have vanished since the 18th century.

Humans are transforming Earth’s natural landscapes so dramatically that as many as one million plant and animal species are now at risk of extinction, posing a dire threat to ecosystems that people all over the world depend on for their survival, a sweeping new United Nations assessment has concluded.

The 1,500-page report, compiled by hundreds of international experts and based on thousands of scientific studies, is the most exhaustive look yet at the decline in biodiversity across the globe and the dangers that creates for human civilization.

Its conclusions are stark. In most major land habitats, from the savannas of Africa to the rain forests of South America, the average abundance of native plant and animal life has fallen by 20 percent or more, mainly over the past century. With the human population passing 7 billion, activities like farming, logging, poaching, fishing and mining are altering the natural world at a rate “unprecedented in human history.”

At the same time, a new threat has emerged: Global warming has become a major driver of wildlife decline, the assessment found, by shifting or shrinking the local climates that many mammals, birds, insects, fish and plants evolved to survive in. When combined with the other ways humans are damaging the environment, climate change is now pushing a growing number of species, such as the Bengal tiger, closer to extinction.

As a result, biodiversity loss is projected to accelerate through 2050, particularly in the tropics, unless countries drastically step up their conservation efforts.

The report is not the first to paint a grim portrait of Earth’s ecosystems. But it goes further by detailing how closely human well-being is intertwined with the fate of other species.

“For a long time, people just thought of biodiversity as saving nature for its own sake,” said Robert Watson, chair of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services, which conducted the assessment at the request of national governments. “But this report makes clear the links between biodiversity and nature and things like food security and clean water in both rich and poor countries.”

A previous report by the group had estimated that, in the Americas, nature provides some $24 trillion of non-monetized benefits to humans each year. The Amazon rain forest absorbs immense quantities of carbon dioxide and helps slow the pace of global warming. Wetlands purify drinking water. Coral reefs sustain tourism and fisheries in the Caribbean. Exotic tropical plants form the basis of a variety of medicines.

But as these natural landscapes wither and become less biologically rich, the services they can provide to humans have been dwindling.

Humans are producing more food than ever, but land degradation is already harming agricultural productivity on 23 percent of the planet’s land area, the new report said. The decline of wild bees and other insects that help pollinate fruits and vegetables is putting up to $577 billion in annual crop production at risk. The loss of mangrove forests and coral reefs along coasts could expose up to 300 million people to increased risk of flooding.

The authors note that the devastation of nature has become so severe that piecemeal efforts to protect individual species or to set up wildlife refuges will no longer be sufficient.


Global wildlife populations have fallen 60% in just 40 years

Below is a summary of the World Wildlife Fund report.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Hale, T. 2018. Global Wildlife Populations Have Fallen By 60 Percent In 40 Years, WWF Report Reveals. iflscience.com

The World Wildlife Fund (WWF) has just released its biennial Living Planet Report 2018, a colossal effort to track the health of the world’s wildlife populations. All in all, it paints a truly damning picture of “runaway human consumption” and the damage it is inflicting on the world’s biodiversity.

Global populations of monitored vertebrate species have declined in size by 60 percent on average between 1970 and 2014, according to the report, which utilizes data from the Zoological Society of London’s (ZSL) Living Planet Index and the IUCN Red List of threatened species, among others. The leading driver behind this steep decline is human consumption, which has led to the degradation of habitat through agriculture, as well as the direct overexploitation of wildlife, such as overfishing and poaching.

“Science is showing us the harsh reality our forests, oceans, and rivers are enduring at our hands. Inch by inch and species by species, shrinking wildlife numbers and wild places are an indicator of the tremendous impact and pressure we are exerting on the planet, undermining the very living fabric that sustains us all: nature and biodiversity,” Marco Lambertini, Director General of WWF International, said in a statement.

It’s worth highlighting what that “60 percent” figure means exactly – because it doesn’t mean there were 60 percent fewer animals on the planet in 2014 compared to 1970. The report tracked 16,704 different populations of over 4,000 vertebrate species from 1970 to 2014. Across all of these populations, on average, the populations declined by 60 percent. Some small populations could theoretically suffer a 90 percent loss just by a handful of individuals dying. Even if most larger populations only decline by a tiny percentage, the small populations’ large losses will bring the total average up.
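
To make the arithmetic concrete, here is a minimal Python sketch using made-up numbers (not data from the report): two tiny populations crash by 90 percent while one large population dips only 5 percent, and the average per-population decline comes out near 62 percent even though the total number of animals falls by only about 5 percent.

# Made-up numbers: why a 60% *average* decline across populations
# is not the same as 60% fewer animals overall.
pops_1970 = [10, 10, 100_000]   # two tiny populations, one large one
pops_2014 = [1, 1, 95_000]      # tiny ones lose 90%, the large one loses 5%

declines = [(a - b) / a for a, b in zip(pops_1970, pops_2014)]
avg_decline = sum(declines) / len(declines)
total_decline = 1 - sum(pops_2014) / sum(pops_1970)

print(f"average per-population decline: {avg_decline:.0%}")   # prints 62%
print(f"decline in total animals:       {total_decline:.0%}") # prints 5%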

Freshwater wildlife has seen the most dramatic decline of all, with an average population drop of 83 percent since 1970. The tropics are also among the hardest-hit regions, with South and Central America suffering average population declines of 89 percent.


The Coming Copper Peak

Elon Musk told a closed-door Washington conference of miners, regulators and lawmakers that he sees a shortage of EV minerals coming, including copper and nickel (Scheyder 2019). Other rare metals used in cars include neodymium, lanthanum, terbium, and dysprosium (Gorman 2009).


***

Richard A. Kerr. February 14, 2014. The Coming Copper Peak.  Science 343:722-724.

Production of the vital metal will top out and decline within decades, according to a new model that may hold lessons for other resources.

If you take social unrest and environmental factors into account, the peak could be as early as the 2020s.

As a crude way of taking account of social and environmental constraints on production, Northey and colleagues reduced the amount of copper available for extraction in their model by 50%. Then the peak that came in the late 2030s falls to the early 2020s, just a decade away.

After peak copper

Whenever it comes, the copper peak will bring change. Thomas Graedel and his Yale colleagues reported in a paper published on 2 December 2013 in the Proceedings of the National Academy of Sciences that copper is one of four metals—chromium, manganese, and lead being the others—for which “no good substitutes are presently available for their major uses.”

If electrons are the lifeblood of a modern economy, copper makes up its blood vessels. In cables, wires, and contacts, copper is at the core of the electrical distribution system, from power stations to the internet. A small car has 20 kilograms (44 lbs) of copper in everything from its starter motor to the radiator; hybrid cars have twice that. But even in the face of exponentially rising consumption—reaching 17 million metric tons in 2012—miners have for 10,000 years met the world’s demand for copper.

But perhaps not for much longer. A group of resource specialists has taken the first shot at projecting how much more copper miners will wring from the planet. In their model runs, described this month in the journal Resources, Conservation and Recycling, production peaks by about mid-century even if copper is more abundant than most geologists believe.

Predicting when production of any natural resource will peak is fraught with uncertainty. Witness the running debate over when world oil production will peak (Science, 3 February 2012, p. 522).

The team is applying its depletion model to other mineral resources, from oil to lithium, that also face exponentially escalating demand drawing on a depleting base.

The world’s copper future is not as rosy as a minimum “125-year supply” might suggest, however. For one thing, any future world will have more people in it, perhaps a third more by 2050. And the hope, at least, is that a larger proportion of those people will enjoy a higher standard of living, which today means a higher consumption of copper per person. Sooner or later, demand will outrun what can be extracted from much-depleted deposits; at that point, production will peak and eventually go into decline—a pattern seen in the early 1970s with U.S. oil production.

For any resource, the timing of the peak depends on a dynamic interplay of geology, economics, and technology. But resource modeler Steve Mohr of the University of Technology, Sydney (UTS), in Australia, waded in anyway. For his 2010 dissertation, he developed a mathematical model for projecting production of mineral resources, taking account of expected demand and the amount thought to be still in the ground. In concept, it is much like the Hubbert curves drawn for peak oil production, but Mohr’s model is the first to be applied to other mineral resources without the assumption that supplies are unlimited.

Exponential growth

Increasing the amount of accessible copper by 50% to account for what might yet be discovered moves the production peak back only a few years, to about 2045. Even doubling the copper pushes peak production back only to about 2050, and quadrupling delays the peak only until 2075.
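
That sluggish response of the peak to extra resource is characteristic of Hubbert-style logistic curves, which Mohr’s model resembles. Below is a minimal Python sketch, illustrative only and not Mohr’s actual model; the growth rate, the cumulative-production anchor, and the base resource figure are all hypothetical, chosen just to show that multiplying the ultimately recoverable resource pushes the peak year back only by the logarithm of the multiplier.

import math

def peak_year(urr, k=0.05, produced_so_far=0.6e9):
    # Logistic cumulative production: Q(t) = urr / (1 + exp(-k*(t - t_peak))).
    # Annual production (the derivative of Q) peaks when Q reaches urr/2.
    # Anchor the curve so Q(2014) = produced_so_far, then solve for t_peak.
    return 2014 + math.log(urr / produced_so_far - 1) / k

BASE_URR = 1.8e9  # tonnes of ultimately recoverable copper (hypothetical)
for mult in (1.0, 1.5, 2.0, 4.0):
    print(f"resource x{mult}: production peaks around {peak_year(mult * BASE_URR):.0f}")

With these toy numbers the peak lands around 2028, and multiplying the resource by 1.5, 2, or 4 pushes it back only to roughly 2039, 2046, and 2062. Each doubling buys about ln(2)/k, or 14 years, which is why even quadrupling the copper in the ground delays the peak by only a few decades.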

Copper trouble spots

The world has been so thoroughly explored for copper that most of the big deposits have probably already been found. Although there will be plenty of discoveries, they will likely be on the small side.

“The critical issues constraining the copper industry are social, environmental, and economic,” resource researcher Gavin Mudd writes in an e-mail. Any process intended to extract a kilogram of metal locked in a ton of rock buried hundreds of meters down inevitably raises issues of energy and water consumption, pollution, and local community concerns.

Civil war and instability make many large copper deposits unavailable

Mudd has a long list of copper mining trouble spots. The Reko Diq deposit in northwestern Pakistan close to both Iran and Afghanistan holds $232 billion of copper, but it is tantalizingly out of reach, with security problems and conflicts between local government and mining companies continuing to prevent development. The big Panguna mine in Bougainville, Papua New Guinea, has been closed for 25 years, ever since its social and environmental effects sparked a 10-year civil war that left about 20,000 dead.

Are we about to destroy the largest salmon fishery in the world for copper?

On 15 January the U.S. Environmental Protection Agency issued a study of the potential effects of the yet-to-be-proposed Pebble Mine on Bristol Bay in southwestern Alaska. Environmental groups had already targeted the project, and the study gives them plenty of new ammunition, finding that it would destroy as much as 150 kilometers of salmon-supporting streams and wipe out more than 2000 hectares of wetlands, ponds, and lakes.

Gold and Oil have already peaked

Copper is far from the only mineral resource in a race between depletion—which pushes up costs—and new technology, which can increase supply and push costs down. Gold production has been flat for the past decade despite a soaring price (Science, 2 March 2012, p. 1038). Much crystal ball–gazing has considered the fate of world oil production. “Peakists” think the world may be at or near the peak now, pointing to the long run of $100-a-barrel oil as evidence that the squeeze is already on.

Coal likely to peak in 2034, all fossil fuels by 2030, according to Mohr’s model

Fridley, Heinberg, Patzek, and other scientists believe Peak Coal is already here or likely by 2020.

Coal will begin to falter soon after, his model suggests, with production most likely peaking in 2034. The production of all fossil fuels, the bottom line of his dissertation, will peak by 2030, according to Mohr’s best estimate. Only lithium, the essential element of electric and hybrid vehicle batteries, looks to offer a sufficient supply through this century. So keep an eye on oil and gold the next few years; copper may peak close behind.

References

Gorman, S. August 30, 2009. As hybrid cars gobble rare metals, shortage looms. Reuters.

Scheyder, E. 2019. Exclusive: Tesla expects global shortage of electric vehicle battery minerals. Reuters.


Why did people vote for Trump?

Before the election, it was widely known that Trump was a gangster who bragged about grabbing women’s asses, lied constantly, went bankrupt 4 times, and much more. So how could people have voted for him? To find out I read many books, such as “Deer Hunting with Jesus,” “Hillbilly Elegy,” “Strangers in Their Own Land,” and “White Trash.”
The best explanation I’ve seen can be found in the Psychology Today article below that I’ve shortened and reworded.


***

Bobby Azarian Ph.D. Dec 27, 2018. A Complete Psychological Analysis of Trump’s Support.  Science can help us make sense of the president’s political invincibility. Psychology Today.

What is most baffling is Trump’s apparent political invincibility. As he himself said even before he won the presidential election, “I could stand in the middle of 5th Avenue and shoot somebody and I wouldn’t lose voters.” Unfortunately for the American people, this wild-sounding claim appears to be truer than not. It should also motivate us to explore the science underlying such peculiar human behavior, so we can learn from it, and potentially inoculate against it.

We should be asking why his inflammatory rhetoric and numerous scandals haven’t sunk him. We are talking about a man who was caught on tape saying, “When you’re a star, they let you do it. You can do anything. Grab them by the pussy.” Politically surviving that video is not normal, or anything close to it, and such a revelation would likely have been the end of Barack Obama or George Bush had it surfaced weeks before the election.

While dozens of psychologists have analyzed Trump himself, explaining the man’s political invincibility requires understanding the minds of his staunch supporters. While various popular articles have illuminated a multitude of reasons for his unwavering support, there appears to be no comprehensive analysis that contains all of them. Since there seems to be a real demand for this information, I have tried to provide that analysis below.

This list will begin with the more benign reasons for Trump’s intransigent support. As the list goes on, the explanations become increasingly worrisome, and toward the end, border on the pathological. The psychological phenomena described below mostly pertain to those supporters who would follow Trump off a cliff. These are the people who will stand by his side no matter what scandals come to light, or what sort of evidence for immoral and illegal behavior surfaces.

1. Practicality Trumps Morality

For some wealthy people, it’s simply a financial matter. Trump offers tax cuts for the rich and wants to do away with government regulation that gets in the way of businessmen making money, even when that regulation exists for the purpose of protecting the environment. Others, such as blue-collar workers, like the fact that the president is trying to bring jobs back to America from places like China. Some people who genuinely are not racist (those who are will be discussed later) simply want stronger immigration laws because they know that a country with open borders is not sustainable. These people have put their practical concerns above their moral ones. To them, it does not make a difference if he’s a vagina-grabber, or if his campaign team colluded with Russia to help him defeat his political opponent. It is unknown whether these people are eternally bound to Trump in the way others are, but we may soon find out if the Mueller investigation is allowed to come to completion.

2. The Brain’s Attention System Is More Strongly Engaged by Trump

According to a study that monitored brain activity while participants watched 40 minutes of political ads and debate clips from the presidential candidates, Donald Trump is unique in his ability to keep the brain engaged. While Hillary Clinton could only hold attention for so long, Trump kept both attention and emotional arousal high throughout the viewing session. This pattern of activity was seen even when Trump made remarks that individuals didn’t necessarily agree with. His showmanship and simple language clearly resonate with some at a visceral level.

3. America’s Obsession with Entertainment and Celebrities

Essentially, the loyalty of Trump supporters may in part be explained by America’s addiction to entertainment and reality TV. To some, it doesn’t matter what Trump actually says because he’s so amusing to watch. With the Donald, you are always left wondering what outrageous thing he is going to say or do next. He keeps us on the edge of our seat, and for that reason, some Trump supporters will forgive anything he says. They are happy as long as they are kept entertained.

4. “Some Men Just Want to Watch the World Burn.”

Some people are supporting Trump simply to be rebellious or to introduce chaos into the political system. They may have such distaste for the establishment and democrats like Hillary Clinton that their support for Trump is a symbolic middle finger directed at Washington. These people may have other issues, like an innate desire to troll others or an obsession with schadenfreude.

5. The Fear Factor: Conservatives Are More Sensitive to Threat

Science has shown that the conservative brain has an exaggerated fear response when faced with stimuli that may be perceived as threatening. A 2008 study in the journal Science found that conservatives have a stronger physiological reaction to startling noises and graphic images compared to liberals. A brain-imaging study published in Current Biology revealed that those who lean right politically tend to have a larger amygdala — a structure that is electrically active during states of fear and anxiety. And a 2014 fMRI study found that it is possible to predict whether someone is a liberal or conservative simply by looking at their brain activity while they view threatening or disgusting images, such as mutilated bodies. Specifically, the brains of self-identified conservatives generated more activity overall in response to the disturbing images.

These brain responses are automatic and not influenced by logic or reason. As long as Trump continues to portray Muslims and Hispanic immigrants as imminent threats, many conservative brains will involuntarily light up like light bulbs being controlled by a switch. Fear keeps his followers energized and focused on safety. And when you think you’ve found your protector, you become less concerned with offensive and divisive remarks.

6. The Power of Mortality Reminders and Perceived Existential Threat

A well-supported theory from social psychology, known as Terror Management Theory, explains why Trump’s fear mongering is doubly effective. The theory is based on the fact that humans have a unique awareness of their own mortality. The inevitability of one’s death creates existential terror and anxiety that is always residing below the surface. In order to manage this terror, humans adopt cultural worldviews — like religions, political ideologies, and national identities — that act as a buffer by instilling life with meaning and value.

Terror Management Theory predicts that when people are reminded of their own mortality, which happens with fear mongering, they will more strongly defend those who share their worldviews and national or ethnic identity, and act out more aggressively towards those who do not. Hundreds of studies have supported this hypothesis, and some have specifically shown that triggering thoughts of death tends to shift people towards the right.

Not only do death reminders increase nationalism, they may influence voting habits in favor of more conservative presidential candidates. And more disturbingly, in a study with American students, scientists found that making mortality salient increased support for extreme military interventions by American forces that could kill thousands of civilians overseas. Interestingly, the effect was present only in conservatives.

By constantly emphasizing existential threat, Trump may be creating a psychological condition that makes the brain respond positively rather than negatively to bigoted statements and divisive rhetoric.

7. The Dunning-Kruger Effect: Humans Often Overestimate Their Political Expertise

Some who support Donald Trump are under-informed or misinformed about the issues at hand. When Trump tells them that crime is skyrocketing in the United States, or that the economy is the worst it’s ever been, they simply take his word for it.

The Dunning-Kruger effect explains that the problem isn’t just that they are misinformed; it’s that they are completely unaware that they are misinformed, which creates a double burden.

Studies have shown that people who lack expertise in some area of knowledge often have a cognitive bias that prevents them from realizing that they lack expertise. As psychologist David Dunning puts it in an op-ed for Politico, “The knowledge and intelligence that are required to be good at a task are often the same qualities needed to recognize that one is not good at that task — and if one lacks such knowledge and intelligence, one remains ignorant that one is not good at the task. This includes political judgment.” These people cannot be reached because they mistakenly believe they are the ones who should be reaching others.

8. Relative Deprivation — A Misguided Sense of Entitlement

Relative deprivation refers to the experience of being deprived of something to which one believes one is entitled. It is the discontent people feel when they compare their position in life with that of others whom they see as equals or inferiors but who have unfairly had more success.

Common explanations for Trump’s popularity among non-bigoted voters involve economics. There is no doubt that some Trump supporters are simply angry that American jobs are being lost to Mexico and China, which is certainly understandable, although these loyalists often ignore the fact that some of these careers are actually being lost due to the accelerating pace of automation.

These Trump supporters are experiencing relative deprivation, and are common in swing states like Ohio, Michigan, and Pennsylvania. This kind of deprivation is specifically referred to as “relative,” as opposed to “absolute,” because the feeling is often based on a skewed perception of what one is entitled to.

9. Lack of Exposure to Dissimilar Others

Intergroup contact refers to contact with members of groups that are outside one’s own, which has been experimentally shown to reduce prejudice. As such, it’s important to note that there is growing evidence that Trump’s white supporters have experienced significantly less contact with minorities than other Americans. For example, a 2016 study found that “…the racial and ethnic isolation of Whites at the zip-code level is one of the strongest predictors of Trump support.” This correlation persisted while controlling for dozens of other variables. In agreement with this finding, the same researchers found that support for Trump increased with the voters’ physical distance from the Mexican border. These racial biases might be more implicit than explicit; the latter is addressed in #14.

10. Trump’s Conspiracy Theories Target the Mentally Vulnerable

While the conspiracy theory crowd — who predominantly support Donald Trump and crackpot allies like Alex Jones and the shadowy QAnon — may appear to just be an odd quirk of modern society, some of them may suffer from psychological illnesses that involve paranoia and delusions, such as schizophrenia, or are at least vulnerable to them, like those with schizotypal personalities.

The link between schizotypy and belief in conspiracy theories is well-established, and a recent study published in the journal Psychiatry Research has demonstrated that such belief is still very prevalent in the population. The researchers found that those who were more likely to believe in outlandish conspiracy theories, such as the idea that the U.S. government created the AIDS epidemic, consistently scored high on measures of “odd beliefs and magical thinking.” One feature of magical thinking is a tendency to make connections between things that are actually unrelated in reality.

Donald Trump and media allies target these people directly. All one has to do is visit alt-right websites and discussion boards to see the evidence for such manipulation.

11. Trump Taps into the Nation’s Collective Narcissism

Collective narcissism is an unrealistic shared belief in the greatness of one’s national group. It often occurs when a group that believes it represents the ‘true identity’ of a nation — the ‘ingroup,’ in this case White Americans — perceives itself as being disadvantaged compared to outgroups who are getting ahead of them ‘unrightfully.’ This psychological phenomenon is related to relative deprivation (#8).

A study published last year in the journal Social Psychological and Personality Science found a direct link between national collective narcissism and support for Donald Trump. This correlation was discovered by researchers at the University of Warsaw, who surveyed over 400 Americans with a series of questionnaires about political and social beliefs. Where individual narcissism causes aggressiveness toward other individuals, collective narcissism involves negative attitudes and aggression toward ‘outsider’ groups (outgroups), who are perceived as threats.

Donald Trump exacerbates collective narcissism with his anti-immigrant, anti-elitist, and strongly nationalistic rhetoric. By referring to his supporters, an overwhelmingly white group, as being “true patriots” or “real Americans,” he promotes a brand of populism that is the epitome of “identity politics,” a term that is usually associated with the political left. Left-wing identity politics, as misguided as they may sometimes be, are generally aimed at achieving equality, while the right-wing brand is based on a belief that one nationality or race is superior or entitled to success and wealth for no other reason than identity.

12. The Desire to Want to Dominate Others

Social dominance orientation (SDO) — which is distinct from but related to authoritarian personality (#13) — refers to people who have a preference for the societal hierarchy of groups, specifically with a structure in which the high-status groups have dominance over the low-status ones. Those with SDO are typically dominant, tough-minded, and driven by self-interest.

In Trump’s speeches, he appeals to those with SDO by repeatedly making a clear distinction between groups that have a generally higher status in society (Whites), and those groups that are typically thought of as belonging to a lower status (immigrants and minorities). A 2016 survey of 406 American adults published in the journal Personality and Individual Differences found that those who scored high on both SDO and authoritarianism were more likely to vote for Trump in the election.

13. Authoritarian Personality 

Authoritarianism refers to the advocacy or enforcement of strict obedience to authority at the expense of personal freedom, and is commonly associated with a lack of concern for the opinions or needs of others. Authoritarian personality is characterized by belief in total and complete obedience to authority. Those with this personality often display aggression toward outgroup members, submissiveness to authority, resistance to new experiences, and a rigid hierarchical view of society. Authoritarianism is often triggered by fear, making it easy for leaders who exaggerate threat or fear monger to gain their allegiance.

Although authoritarian personality is found among liberals, it is more common on the political right around the world. President Trump’s speeches, which are laced with absolutist terms like “losers” and “complete disasters,” are naturally appealing to those with such a personality.

While research showed that Republican voters in the U.S. scored higher than Democrats on measures of authoritarianism before Trump emerged on the political scene, a 2016 Politico survey found that high authoritarians greatly favored then-candidate Trump, which led to a correct prediction that he would win the election, despite the polls saying otherwise.

14. Racism and Bigotry

It would be grossly unfair and inaccurate to say that every one of Trump’s supporters has prejudice against ethnic and religious minorities, but it would be equally inaccurate to say that few do. The Republican Party, going at least as far back as Richard Nixon’s “southern strategy,” has historically used tactics that appealed to bigotry, such as lacing speeches with “dog whistles” — code words signaling prejudice toward minorities, designed to be heard by racists but no one else.

While the dog whistles of the past were subtler, Trump’s signaling is sometimes shockingly direct. There’s no denying that he routinely appeals to racist and bigoted supporters when he calls Muslims “dangerous” and Mexican immigrants “rapists” and “murderers,” often in a blanket fashion. Perhaps unsurprisingly, a recent study has shown that support for Trump is correlated with a standard scale of modern racism.


Climate change risks could cause an American “Fukushima”

Preface. Nuclear power plants need a constant supply of electric power to pump cool water into a reactor’s core.

Ninety percent of them, 54 plants, have at least one flood risk exceeding their design.

If flooding cuts off that power supply for long enough, as happened at Fukushima, the core can overheat and melt through its container. The nearby spent-fuel pools, which unlike the core sit in the open air, can also overheat and release deadly levels of radiation.

Related post:  A Nuclear spent fuel fire at Peach Bottom in Pennsylvania could force 18 million people to evacuate


***

Some excerpts from:

Flavelle, C., et al. 2019. U.S. Nuclear Power Plants Weren’t Built for Climate Change. Bloomberg.

The U.S. Nuclear Regulatory Commission (NRC) directed the operators of the 60 or so working U.S. nuclear power plants to evaluate their current flood risk, using the latest weather modeling technology and accounting for the effects of climate change. Companies were told to compare those risks with what their plants, many almost 50 years old, were built to withstand, and, where there was a gap, to explain how they would close it.

That process has revealed a lot of gaps. But Gregory Jaczko, a former chairman of the NRC, and others say that the commission’s new leadership, appointed by President Donald Trump, hasn’t done enough to require owners of nuclear power plants to take preventive measures—and that the risks are increasing as climate change worsens.

Ninety percent of plants, 54 of them, have at least one flood risk exceeding their design. Fifty-three weren’t built to withstand their current risk from intense precipitation; 25 didn’t account for current flood projections from streams and rivers; 19 weren’t designed for their expected maximum storm surge; 19 face three or more threats that they weren’t designed to handle.

The industry argues that rather than redesign facilities to address increased flood risk, which Jaczko advocates, it’s enough to focus mainly on storing emergency generators, pumps, and other equipment in on-site concrete bunkers, a system they call Flex, for Flexible Mitigation Capability. Not only did the NRC agree with that view, it ruled on Jan. 24 that nuclear plants wouldn’t have to update that equipment to deal with new, higher levels of expected flooding. It also eliminated a requirement that plants run Flex drills.

The commission’s three members appointed by President Trump wrote that existing regulations were sufficient to protect the country’s nuclear reactors. Jaczko disagrees as do the two Democratic appointees. “The majority of the commission has decided that licensees can ignore these reevaluated hazards,” commissioner Jeff Baran wrote in dissent. His colleague Stephen Burns called the decision “baffling.” Through a spokesman, the Republican appointees declined to comment.

“Nuclear power is weird—it exists to produce electricity, and at the same time it can’t exist without electricity,” says Allison Macfarlane, who chaired the NRC from 2012 through 2014. Plants need constant power to pump cool water into a reactor’s core; if flooding interrupts that power supply for long enough, as happened in Fukushima, the core can overheat, melting through its container and releasing deadly levels of radiation.

The true risk to U.S. nuclear facilities may be even greater than what the documents from the nuclear commission show. The commission allowed nuclear plant operators not only to perform their own estimates of current flood risk but also to decide what assumptions to make—for example, the maximum likely hurricane speed or how much rain would fall in an extreme storm. (The commission reviews that work.) The commission also rejected a recommendation from its own staff that would have required nuclear power plants to update their risk assessments periodically to reflect the advancing threat of climate change.

Whatever the likelihood of a Fukushima-style disaster, the aftermath offers a glimpse of the costs of failure. Eight years later, much of the adjacent city of Okuma remains uninhabitable; in 2016 the Japanese government estimated total cleanup and compensation costs would approach $200 billion.


China is deforesting Russia

Preface. Here’s more than half of a New York Times article about China deforesting Russia. Yikes! Peak oil had better come soon before we denude the earth.


***

Myers, S. L. 2019. China’s Voracious Appetite for Timber Stokes Fury in Russia and Beyond. After sharply restricting logging in its own forests, China turned to imports, overwhelming even a country with abundant resources: Russia. New York Times.

From the Altai Mountains to the Pacific Coast, logging is ravaging Russia’s vast forests, leaving behind swathes of scarred earth studded with dying stumps.

The culprit, to many Russians, is clear: China. Chinese demand is also stripping forests elsewhere — from Peru to Papua New Guinea, Mozambique to Myanmar.

Since China began restricting commercial logging in its own natural forests two decades ago, it has increasingly turned to Russia, importing huge amounts of wood in 2017 to satisfy the voracious appetite of its construction companies and furniture manufacturers.

“In Siberia, people understand they need the forests to survive,” said Eugene Simonov, an environmentalist who has studied the impact of commercial logging in Russia’s Far East. “And they know their forests are now being stolen.”

Russia has been a witting collaborator, too, selling Chinese companies logging rights at low cost and, critics say, turning a blind eye to logging beyond what is legally allowed.

In the Solomon Islands, the current pace of logging by Chinese companies could exhaust the country’s once pristine rain forests by 2036, according to Global Witness, an environmental group. In Indonesia, activists warn that illegal logging linked to a company with Chinese partners threatens one of the last strongholds for orangutans on the island of Borneo.

Environmentalists say China has simply shifted the harm of unbridled logging from home to abroad, even as it reaps the economic benefits. Some warn that the scale of logging today could deplete what unspoiled forests remain, contributing to global warming.

At the same time, China is protecting its own woodlands.

Two decades ago, concerns about denuded mountains, polluted rivers and devastating floods along the Yangtze River made worse by damaged watersheds prompted the Communist government to begin restricting commercial logging in the nation’s forests.

The country’s demand for wood did not diminish, however. Nor did the world’s demand for plywood and furniture, the main wood products that China makes and exports.

It is one thing for Chinese demand to overwhelm small, poor nations desperate for cash, but it is another for it to drain the resources of a far larger country, one that regards itself as a superpower and a strategic partner to China.

The trade has instead underscored Russia’s overreliance on natural resources and provoked a popular backlash that strains the otherwise warm relations between the countries’ two leaders, Vladimir Putin and Xi Jinping.

Protests have erupted in many cities. Members in Russia’s upper house of parliament have assailed officials for ignoring the environmental damage in Siberia and the Far East. Residents and environmentalists complain that logging is spoiling Russian watersheds and destroying the habitats of the endangered Siberian tiger and Amur leopard.

China’s stunning economic transformation over the last four decades has driven its demand. It is now the world’s largest importer of wood; the United States is second. China is also the largest exporter of wood products — turning much of the wood it imports into goods headed to Home Depots and Ikeas around the world.

More than 500 companies operate in Russia now, often with Russian partners, according to a report by Vita Spivak, a scholar on China for the Carnegie Moscow Center. Russia once delivered almost no wood to China; it now accounts for more than 20 percent of China’s imports by value.

Russia sells such logging concessions at prices that vary by region and type of wood, but on average, they cost roughly $2 a hectare, or 80 cents an acre, per year, according to Mr. Shmatkov of the World Wildlife Fund. That is far below the cost in other countries.

Government corruption, criminality and the lack of economic development in Siberia and the Far East have made the crisis worse.

In many rural areas of the Russian Far East and Siberia, there are few ways to make a living other than stripping the vast surrounding forests of their natural resources. Logging without contracts is common, and arsonists are suspected of setting forest fires because scorched trees can be legally culled and sold.
