Nafeez Ahmed: Venezuela’s collapse is a window into how the Oil Age will unravel

Preface. Ahmed is one of the best writers on the energy crisis and other biophysical calamities. He has written about why many states are failing now, in part due to peak oil but also drought and other biophysical factors, in his book "Failing States, Collapsing Systems: BioPhysical Triggers of Political Violence". Below is his take on Venezuela, where oil production peaked in 1997.

What happened there may be how events unfold in the United States as well, so it is worth reading how collapsing states like Venezuela fail if you’re curious about your own future.

Alice Friedemann, www.energyskeptic.com, author of "When Trucks Stop Running: Energy and the Future of Transportation" (2015, Springer) and "Crunch! Whole Grain Artisan Chips and Crackers". Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

***

Nafeez Ahmed. 2019. Venezuela’s collapse is a window into how the Oil Age will unravel. medium.com

For some, the crisis in Venezuela is all about the endemic corruption of Nicolás Maduro, continuing the broken legacy of Chavez’s ideological experiment in socialism under the mounting insidious influence of Putin. For others, it’s all about the ongoing counter-democratic meddling of the United States, which has for years wanted to bring Venezuela — with its huge oil reserves — back into the orbit of American power, and is now interfering again to undermine a democratically elected leader in Latin America.

Neither side truly understands the real driving force behind the collapse of Venezuela: we have moved into the twilight of the Age of Oil.

So how does a country like Venezuela with the largest reserves of crude oil in the world end up incapable of developing them? While various elements of socialism, corruption and neoliberal capitalism are all implicated in various ways, what no one’s talking about — especially the global oil industry — is that over the last decade, we’ve shifted into a new era. The world has moved from largely extracting cheap, easy crude, to becoming increasingly dependent on unconventional forms of oil and gas that are much more difficult and expensive to produce.

Oil isn’t running out, in fact, it’s everywhere — we’ve more than enough to fry the planet. But as the easy, cheap stuff has plateaued, production costs have soared. And as a consequence the most expensive oil to produce has become increasingly unprofitable.

In a country like Venezuela, emerging from a history of US interference, plagued by internal economic mismanagement, and facing intensifying external pressure from US sanctions, this decline in profitability has become fatal.

Since Hugo Chavez’s election in 1999, the US has continued to explore numerous ways to interfere in and undermine his socialist government. This is consistent with the track record of US overt and covert interventionism across Latin America, which has sought to overthrow democratically elected governments which undermine US interests in the region, supported right-wing autocratic regimes, and funded, trained and armed far-right death squads complicit in wantonly massacring hundreds of thousands of people.

For all the triumphant moralizing in parts of the Western media about the failures of Venezuela’s socialist experiment, there has been little reflection on the role of this horrific counter-democratic US foreign policy in paving the way for a populist hunger for nationalist and independent alternatives to US-backed cronyism.

Before Chavez

Venezuela used to be a dream US ally, model free-market economy, and a major oil producer. With the largest reserves of crude oil in the world, the conventional narrative is that its current implosion can only be due to colossal mismanagement of its domestic resources.

Described back in 1990 by the New York Times as “one of Latin America’s oldest and most stable democracies”, the newspaper of record predicted that, thanks to the geopolitical volatility of the Middle East, Venezuela “is poised to play a newly prominent role in the United States energy scene well into the 1990’s”. At the time, Venezuelan oil production was helping to “offset the shortage caused by the embargo of oil from Iraq and Kuwait” amidst higher oil prices triggered by the simmering conflict.

But the NYT had camouflaged a deepening economic crisis. As noted by leading expert on Latin America, Javier Corrales, in ReVista: Harvard Review of Latin America, Venezuela had never recovered from currency and debt crises it had experienced in the 1980s. Economic chaos continued well into the 1990s, just as the Times had celebrated the market economy’s friendship with the US, explained Corrales: “Inflation remained indomitable and among the highest in the region, economic growth continued to be volatile and oil-dependent, growth per capita stagnated, unemployment rates surged, and public sector deficits endured despite continuous spending cutbacks.”

Prior to the ascension of Chavez, the entrenched party-political system so applauded by the US, and courted by international institutions like the IMF, was essentially crumbling. “According to a recent report by Data Information Resources to the Venezuelan-American Chamber of Commerce, in the last 25 years the share of household income spent on food has shot up to 72%, from 28%,” lamented the New York Times in 1996. “The middle class has shrunk by a third. An estimated 53 percent of jobs are now classified as ‘informal’ — in the underground economy — as compared with 33% in the late 1970’s”.

The NYT piece cynically put all the blame for the deepening crisis on “government largesse” and interventionism in the economy. But even here, within the subtext the paper acknowledged a historical backdrop of consistent IMF-backed austerity measures. According to the NYT, even the ostensibly anti-austerity president Rafael Caldera — who had promised more “state-financed populism” as an antidote to years of IMF-wrought austerity — ended up “negotiating for a $3 billion loan from the IMF” along with “a second loan of undisclosed size to ease the social impact of any hardships imposed by an IMF agreement.”

So it is convenient that today’s loud and self-righteous moral denunciations of Maduro ignore the instrumental role played by US efforts to impose market fundamentalism in wreaking economic and social havoc across Venezuelan society. Of course, outside the fanatical echo chambers of the Trump White House and the likes of the New York Times, the devastating impact of US-backed World Bank and IMF austerity measures is well-documented among serious economists.

In a paper for the London School of Economics, development economist Professor Jonathan DiJohn of the UN Research Institute for Social Development found that US-backed economic “liberalization not only failed to revive private investment and economic growth, but also contributed to a worsening of the factorial distribution of income, which contributed to growing polarisation of politics.”

Neoliberal reforms further compounded already existing centralized nepotistic political structures vulnerable to corruption. Far from strengthening the state, they led to a collapse in the state’s regulative power. Analysts who hark back to a Venezuelan free market golden age ignore the fact that far from reducing corruption, “financial deregulation, large-scale privatizations, and private monopolies create[d] large rents, and thus rent-seeking/corruption opportunities.”

Instead of leading to meaningful economic reforms, neoliberalisation stymied genuine reform and entrenched elite power. And this is precisely how the West helped create the Chavez it loves to hate. In the words of Corrales in the Harvard Review: “economic collapse and party system collapse—are intimately related. Venezuela’s repeated failure to reform its economy made existing politicians increasingly unpopular, who in turn responded by privileging populist policies over real reforms. The result was a vicious cycle of economic and political party decay, ultimately paving the way for the rise of Chavez.”

Dead oil

While it is now fashionable to blame the collapse of the Venezuelan oil industry solely on Chavez’s socialism, Caldera’s privatization of the oil sector was unable to forestall the decline in oil production, which peaked in 1997 at around 3.5 million barrels a day. By 1999, Chavez’s first actual year in office, production had already dropped dramatically by around 30 percent.

A deeper look reveals that the causes of Venezuela's oil problems are slightly more complicated than the 'Chávez killed it' meme. Since peaking around 1997, Venezuelan oil production has declined over the last two decades, but in recent years has experienced a precipitous fall. There can be little doubt that serious mismanagement in the oil industry has played a role in this decline. However, there is a fundamental driver other than mismanagement which the press has consistently ignored in reporting on Venezuela's current crisis: the increasingly fraught economics of oil.

The vast bulk of Venezuela's oil is not conventional crude, but unconventional "heavy oil", a highly viscous liquid that requires unconventional techniques to extract and make flow, often using heat from steam and/or mixing it with lighter forms of crude in the refining process. Heavy oil thus has a higher cost of extraction than normal crude, and fetches a lower market price due to the refining difficulties. In theory, heavy oil can still be brought from below break-even to a profit, but getting to that point requires greater up-front investment.

The higher costs of extraction and refining have played a key role in making Venezuela’s oil production efforts increasingly unprofitable and unsustainable. When oil prices were at their height between 2005 and 2008, Venezuela was able to weather the inefficiencies and mismanagement in its oil industry due to much higher profits thanks to prices between $100 and $150 a barrel. Global oil prices were spiking as global conventional crude oil production began to plateau, causing an increasing shift to unconventional sources.

That global shift did not mean that oil was running out, but that we were moving deeper into dependence on more difficult and expensive forms of unconventional oil and gas. The shift can be best understood through the concept of Energy Return on Investment (EROI), pioneered principally by the State University of New York environmental scientist Professor Charles Hall, a ratio which measures how much energy is used to extract a particular quantity of energy from any resource. Hall has shown that as we are consuming ever larger quantities of energy, we are using more and more energy to do so, leaving less ‘surplus energy’ at the end to underpin social and economic activity.

This creates a counter-intuitive dynamic — even as production soars, the quality of the energy we are producing declines, its costs are higher, industry profits are squeezed, and the surplus available to sustain continued economic growth dwindles. As the surplus energy available to sustain economic growth is squeezed, in real terms the biophysical capacity of the economy to continue buying the very oil being produced reduces. Economic recession (partly induced by the previous era of oil price spikes) interacts with the lack of affordability of oil, leading the market price to collapse.

That in turn renders the most expensive unconventional oil and gas projects potentially unprofitable, unless they can find ways to cover their losses through external subsidies of some kind, such as government grants or extended lines of credit. And this is the key difference between Venezuela and countries like the US and Canada, where extremely low EROI levels for production have been sustained largely through massive multi-billion dollar loans — fueling an energy boom that is likely to come to a catastrophic end when the debt-turkey comes home to roost.

“It’s all a bit reminiscent of the dot-com bubble of the late 1990s, when internet companies were valued on the number of eyeballs they attracted, not on the profits they were likely to make,” wrote Bethany McLean recently (once again in the New York Times), a US journalist well-known for her work on the Enron collapse. “As long as investors were willing to believe that profits were coming, it all worked — until it didn’t.”

A number of scientists have previously estimated the EROI of heavy oil production at around 9:1 (with room for variation up or down depending on how inputs are accounted for and calculated; the unfashionable but probably more accurate approach, counting both direct and indirect energy costs, would push the figure downwards, closer to 6:1). Compare this to the EROI of about 20:1 for conventional crude prior to 2000, which gives an indication of the challenge Venezuela faced — a country which, unlike the US and Canada, had emerged into the Chavez era from a history of neoliberal devastation and debt-expansion that already made further investments or subsidies to Venezuela's oil industry a difficult ask.
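To make the EROI arithmetic above concrete, here is a minimal sketch in Python (not from Hall's work; it simply assumes the common convention that EROI is gross energy returned per unit of energy invested, so the surplus share is 1 - 1/EROI, using the ratios quoted above):

```python
def surplus_fraction(eroi):
    """Fraction of gross energy output left over after paying the energy
    cost of extraction, for a given EROI (energy returned on energy invested)."""
    return 1.0 - 1.0 / eroi

# EROI estimates cited above: ~20:1 for pre-2000 conventional crude,
# ~9:1 for heavy oil, ~6:1 once indirect energy costs are included.
estimates = [
    ("conventional crude, pre-2000", 20),
    ("heavy oil, direct costs only", 9),
    ("heavy oil, direct + indirect", 6),
]
for label, eroi in estimates:
    print(f"{label}: EROI {eroi}:1 -> {surplus_fraction(eroi):.0%} surplus")
# conventional crude, pre-2000: EROI 20:1 -> 95% surplus
# heavy oil, direct costs only: EROI 9:1 -> 89% surplus
# heavy oil, direct + indirect: EROI 6:1 -> 83% surplus
```

Note that although the surplus share falls only from 95% to 83%, the share of output that must be reinvested in getting the next barrel more than triples (from 5% to 17%), which is where the squeeze on profits and on the wider economy described above comes from.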

Venezuela, in that sense, was ill-prepared to adapt to the post-2014 oil price collapse, compared to its wealthier, Western competitors in other forms of unconventional oil and gas. To be sure, then, the collapse of Venezuela's oil industry cannot be reduced to geological factors, though there can be little doubt that those factors and their economic ramifications tend to be underplayed in conventional explanations. Above-ground factors were clearly a major problem in terms of chronic inadequacy of investment and the resulting degradation of production infrastructure. A balanced picture thus has to acknowledge both that Venezuela's vast reserves are far more expensive and difficult to bring to market than standard conventional oil; and that Venezuela's very specific economic circumstances in the wake of decades of failed IMF austerity put the country in an extremely weak position to keep its oil show on the road.

Since 2008, oil production has declined by more than 350,000 barrels per day, and more than 800,000 per day since its peak level in 1997. This has driven the collapse of net exports by over 1.1 million barrels per day since 1998. Meanwhile, to sustain refining of heavy oil, Venezuela has increasingly imported light oil to blend with heavy oil as well as for domestic consumption. Currently, only extra-heavy oil production in the Orinoco Oil Belt has been able to increase, while conventional oil production continues to rapidly decline. Despite significant proved conventional reserves, these still require more expensive enhanced recovery techniques and infrastructure investments — which are unavailable. But profit margins from exports of extra-heavy crude are much smaller due to the higher costs of blending, upgrading and transportation, and the heavy discounts in international refining markets. In summary, oil industry expert Professor Francisco Monaldi at the Center for Energy and the Environment at IESA in Venezuela concludes: “oil production in Venezuela is comprised of increasingly heavier oil and thus less profitable, PDVSA’s operated production is falling more rapidly, and the production that generates cash-flow is almost half of the total production. These trends were problematic enough at peak oil prices, but with prices falling they become much more acute.”

The folly of endless growth

Unfortunately, much like his predecessors, Chavez didn’t appreciate the complexities, let alone the biophysical economics, of the oil industry. Rather, he saw it simplistically through the short-term lens of his own ideological socialist experiment.

From 1998 until his death in 2013, Chavez’s application of what he called ‘socialism’ to the oil industry succeeded in reducing poverty from 55 to 34 percent, helped 1.5 million adults become literate, and delivered healthcare to 70% of the population with Cuban doctors. All this apparent progress was enabled by oil revenues. But it was an unsustainable pipe-dream.

Instead of investing oil revenues back into production, Chavez spent them on his social programs during the heyday of the oil price spikes, with no thought for the industry he was drawing from — and in the mistaken belief that prices would stay high. By the time prices collapsed due to the global shift to difficult oil described earlier — reducing Venezuela's state revenues (96 percent of which come from oil) — Chavez had no currency reserves to fall back on.

Chavez had thus dramatically compounded the legacy of problems he had been left with. He had mimicked the same mistake made by the West before 2008, pursuing a path of ‘progress’ based on an unsustainable consumption of resources, fueled by debt, and bound to come crashing down.

So when he ran out of oil money, he did what governments effectively did worldwide after the 2008 financial crash through quantitative easing: he simply printed money.

The immediate impact was to drive up inflation. He simultaneously fixed the exchange rate to the dollar, hiked up the minimum wage, and forced prices of staple goods like bread to stay low. This of course turned businesses selling such staple goods, or involved at any link in their production chains, into unprofitable enterprises, which could no longer afford to pay their own employees due to hemorrhaging income. Meanwhile, he slashed subsidies to farmers and other industries, while imposing quotas on them to maintain production. Instead of producing the desired result, many businesses ended up selling their goods on the black market in an attempt to make a profit.

As the economic crisis escalated, and as oil production declined, Chavez pinned his hopes on the potential transformation that could be ushered in by massive state investment in a new type of economy based on nationalized, self-managed or cooperatively managed industries. Those investments, too, produced little in the way of results. Dr Asa Cusack, an expert on Venezuela at the London School of Economics, points out that "even though the number of cooperatives exploded, in practice they were often as inefficient, corrupt, nepotistic, and exploitative as the private sector that they were supposed to displace."

Meanwhile, with its currency reserves depleted, the government has had to slash imports by over 65 percent since 2012, while simultaneously reducing social spending to even lower levels than under the IMF austerity reforms of the 1990s. Chavista crisis-driven 'socialism' began with unsustainable social spending and has now switched to catastrophic levels of austerity that make neoliberalism look timid.

In this context, the rise of the black market and organized crime, exploited by both the government and the opposition, became a way of life while the economy, food production, health-care and basic infrastructure collapsed with frightening speed and ferocity.

Climate wild cards

Amidst this perfect storm, the wild card of climate impacts pushed Venezuela over the edge, accelerating an already dizzying spiral of crises. In March 2018, on the back of hyperinflation and recession, the government enforced electricity rationing across six western states. In the city of San Cristóbal, residents reported 14-hour stretches without power after water levels in reservoirs used for hydroelectric plants were reduced due to drought. A similar crisis had erupted two years earlier when water levels behind the Guri Dam, which provides well over half the country's electricity, hit record lows.

Venezuela generates around 65% of its electricity from hydropower, with a view to leave as much oil available as possible for export. But this has made electricity supplies increasingly vulnerable to droughts induced by climate change impacts.

It is well known that the El Niño-Southern Oscillation, the biggest fluctuation in the earth's climate system, comprising a cycle of warm and cold sea-surface temperatures in the tropical Pacific Ocean, is increasing in frequency and intensity due to climate change. A new study on the impact of climate change in Venezuela finds that between 1950 and 2004, 12 out of 15 El Niño events coincided with years in which "mean annual flow" of water in the Caroni River basin, affecting the Guri reservoir and hydroelectric power, was "smaller than the historical mean."

From 2013 to 2016, an intensified El Niño cycle meant that there was little rain in Venezuela, culminating in a crippling rainfall deficit in 2015. It was the worst drought in almost half a century, severely straining the country's aging and poorly managed energy grid and resulting in rolling blackouts.

According to Professor Juan Carlos Sanchez, a co-recipient of the 2007 Nobel Peace Prize for his work with the Intergovernmental Panel on Climate Change (IPCC), these trends will deteriorate dramatically under a business-as-usual scenario. Large areas of Venezuelan states that are already water scarce, such as Falcon, Sucre, Lara and Zulia, including the north of the Guajira peninsula, will undergo desertification. Land degradation and decreased rainfall would devastate production of corn, black beans and plantains across much of the country. Sanchez predicts that some regions of the country will receive 25 percent less water than today. And that means even less electricity. By mid-century, climate models indicate an overall 18 percent decrease in rainfall in the Caroni River basin that feeds the Guri Dam.

Unfortunately, no Venezuelan government has ever taken seriously its climate pledges, preferring to escalate as much as possible its oil production, and even intensifying the CO2 intensive practice of gas flaring. Meanwhile, escalating climate change is set to exacerbate Venezuela’s electricity blackouts, infrastructure collapse and agricultural crisis.

Economic war

The crisis convergence unfolding in Venezuela gives us a window into what can happen when a post-oil future is foisted upon you. As domestic energy supplies dwindle, the state’s capacity to function recedes in unprecedented ways, opening the way for state-failure. As the state collapses, new smaller centers of power emerge, competing for control of diminishing resources.

In this context, reports of food-trafficking as a mechanism of 'economic war' are real, but they are not exclusive to either political side. All sides have become incentivized to hoard products and sell them on the black market as a direct result of the collapsing economy, retrograde government price controls and wildly speculative prices.

Venezuelan state-owned media have pinpointed cases where private companies engaged in hoarding have close ties to the opposition. In response, the government has appropriated vast assets, farmland, bakeries, other businesses — but has failed to lift production.

On the other hand, Katiuska Rodriguez, a journalist investigating shortages at El Nacional, a pro-opposition newspaper, said that there is little clear evidence of hoarding being a result of an ‘economic war’ by capitalist business elites against the government. Although real, she explained, hoarding is driven largely by commercial interests in survival.

And yet, there is mounting evidence that the Maduro government is complicit in not just hoarding, but mass embezzlement of public funds. Sociologist Chris Carlson of the City University of New York Graduate Center points out that a number of former senior Chavista government officials have come on record to confirm how powerful elites within the government have exploited the crisis to extract huge profits for themselves. "A gang was created that was only interested in getting their hands on the oil revenue," said Hector Navarro, former Chavista minister and socialist party leader. Similarly, Chavez's former finance minister, Jorge Giordani, estimated that some $300 billion was embezzled in this way.

And yet, the real economic war is not really going on inside Venezuela. It has been conducted by the US against Venezuela, through a draconian sanctions regime which has exacerbated the arc of collapse. Francisco Rodriguez, Chief Economist at Torino Economics in New York, points out that a major drop in Venezuela’s production numbers occurred precisely “at the time at which the United States decided to impose financial sanctions on Venezuela.”

He argues that: “Advocates of sanctions on Venezuela claim that these target the Maduro regime but do not affect the Venezuelan people. If the sanctions regime can be linked to the deterioration of the country’s export capacity and to its consequent import and growth collapse, then this claim is clearly wrong.” Rodriguez marshals a range of evidence suggesting this might well be the case.

Others with direct expertise have gone further. Former UN special rapporteur to Venezuela, Alfred de Zayas, who finished his term at the UN in March 2018, criticised the US for engaging in “economic warfare” against Venezuela. On his fact-finding mission to the country in late 2017, he confirmed the role of overdependence on oil, poor governance and corruption, but blamed the US, EU and Canadian sanctions for worsening the economic crisis and “killing” Venezuelans.

US goals are fairly transparent. In an interview with FOX News that has been completely ignored by the press, Trump’s National Security Advisor John Bolton explained the focus of US attention: “We’re looking at the oil assets. That’s the single most important income stream to the government of Venezuela. We’re looking at what to do to that.” He continued: “… we’re in conversation with major American companies now… I think we’re trying to get to the same end result here… It will make a big difference to the United States economically if we could have American oil companies really invest in and produce the oil capabilities in Venezuela.”

The coming oil crisis

It is not entirely surprising that Bolton is particularly eager at this time to extend US energy companies into Venezuela.

North American exploration and production companies have seen their net debt balloon from $50 billion in 2005 to nearly $200 billion by 2015. “[The fracking] industry doesn’t make money…. It’s on much shakier financial footing than most people realize,” said McLean, who has just authored the book, Saudi America: The Truth About Fracking and How It’s Changing the World. Indeed, there is serious gulf between oil industry claims about opportunities for profit, and what is actually happening in those companies: “When you look at oil companies’ presentations, there’s something that doesn’t make sense because they show their investors these beautiful investor decks with gorgeous slides indicating that they will produce an 80% or 60% internal rate of return. And then you go to the corporate level and you see that the company isn’t making money, and you wonder what happened between point A and point B.”

In short, cheap debt-money has permitted the industry to grow — but how long that can continue is an open question. "Part of the point in writing my book was just to make people aware that as we trumpet American energy independence, let's think about some of the foundation of this [industry] and how insecure it actually is, so that we're also planning for the future in different ways", adds McLean.

Indeed, US shale oil and gas production is forecast to peak in around a decade — or in as little as four years. It’s not just the US. Europe as a continent is already well into the post-peak phase, and Russian oil ministry officials privately anticipate an imminent peak within the next few years. As China, India and other Asian powers experience further demand growth, everyone will be looking increasingly for a viable energy supply, whether from the Middle East or Latin America. But it won’t come cheap, or easy. And it won’t be healthy for the planet.

Whatever its ultimate causes, the horrifying collapse of Venezuela offers insights into a possible future for today's major oil producers — including the United States. The US is enjoying a revival in its oil industry, but how long it will last and how sustainable it is are awkward questions that few pundits dare to ask; McLean is one of the brave exceptions.

This does not necessarily mean oil production will simply slowly grind to a halt. As production limits are reached using current techniques, new techniques might be brought into play to try to mine vast reserves of more difficult resources. However, whatever technological innovations emerge they are unlikely to be able to avert the trajectory of increasing costs of extraction, refining and processing before getting fossil fuels to market. And this means that the surplus energy available to devote to the delivery of public goods familiar to modern industrial consumerist societies will become smaller and smaller.

As we shift into a post-carbon era, we will have to adopt new economic thinking, and restructure our ways of life from the ground up.

Right now the Venezuelan people find themselves locked into a vicious cycle of ill-conceived human systems collapsing into violent in-fighting, in the face of the earth system crisis erupting beneath them. It is not yet too late for the rest of the world to learn a lesson. We can either be dragged into a world after oil kicking and screaming, or we can roll up our sleeves and walk there in a manner of our own choosing. It really is up to us. Venezuela should function as a warning sign as to what can happen when we bury our heads in the (oil) sands.


Climate change effects on hydropower in California

[Image: drought at the Lake Oroville dam]

Preface. In California, the main impact of climate change on electricity will fall on hydropower, the state's largest source of renewable electric power. Besides natural gas, it is the only dispatchable form of power available to balance unreliable, intermittent wind and solar power.

But hydropower is often unavailable (e.g. during droughts, when reservoirs are low, or when water must be held back for months to supply agriculture and drinking water or to protect fisheries).


CEC. September 2014. Climate change impacts on generation of wind, solar, and hydropower in California. California Energy Commission / Lawrence Livermore National Laboratory.

Excerpts:

The study findings for hydroelectric power generation show significant reductions that are a consequence of the large predicted reduction in annual mean precipitation in the global climate models used. Reduced precipitation and resulting reductions in runoff result in reduced hydropower generation in all months and elevation bands. These results indicate that a future that is both drier and warmer would have important impacts on the ability to generate electricity from hydropower.

Increased production of electricity from renewables, although desirable from environmental and other viewpoints, may create difficulties in consistently meeting demand for electricity and may complicate the job of operating the state's transmission system. This would be true of any major change in the electrical supply portfolio, but is especially so when the proportion of weather-dependent renewables, which are subject to uncontrolled fluctuations, is increased.

Climate change may affect the ability to generate needed amounts of electricity from weather-dependent renewable resources. This could compromise California’s ability to meet renewable targets. For example, it is well documented that climate change is affecting the seasonal timing of river flows such that less hydropower is generated during months of peak demand and maximum electricity value. Generating solar and wind power may also be impacted by long-term changes in climate.

A changing climate may bring increases or decreases in mean wind speeds, as well as greater or lesser variation in wind speeds. These changes could make long-term planning for wind energy purposes problematic. Some regions where continued wind development is occurring, such as California and the Great Plains, may be especially susceptible to climate change because the wind regimes of these regions are dominated by one particular atmospheric circulation pattern.

CHAPTER 4: Hydropower

Numerous modeling studies, starting with Gleick (1987), have predicted that anthropogenic climate change will have significant impacts on the natural hydrology of California, with implications for water scarcity, flood risk, and hydropower generation. The best-known of these impacts are straightforward consequences of increased temperatures: a reduced fraction of precipitation falling as snow, reduced snow extent and snow-water equivalent, and earlier melting of snow. An increased fraction of precipitation as rain in turn results in increased wintertime runoff and river flow; earlier and reduced snowmelt results in reduced late-season runoff and river flow. Despite the well-known lack of consensus among global climate models about future changes in annual California precipitation amounts, the effects just mentioned are robustly predicted because they result from warming, about which there is consensus. Confidence in these predictions is increased by observational studies that show these changes to be underway as well as by studies involving both observations and modeling that indicate that observed changes in western U.S. hydrology are too rapid to be explained entirely by natural causes.

One possible consequence of human-caused changes in mountain hydrology in the western United States is changes in hydropower production, especially from high-altitude facilities on watersheds that have historically been snow-dominated. This concern is especially acute, since a majority of the state’s hydropower is produced in facilities of this type. Furthermore, these high-elevation facilities have relatively little storage capacity, implying limited capability to adapt to changes in climate.

A shift toward earlier-in-the-year snowmelt and runoff would tend to produce similar changes in the timing of hydropower generation. In particular, in the absence of adequate storage capacity, it might become difficult to produce power at the end of the dry season, when demand for electricity can be very high.

On the other hand, a large enough reservoir could store enough water to effectively buffer this problem and allow power generation throughout the dry season. This means that the effects of climate change on hydropower generation will depend strongly on reservoir size. And of course on altitude, being greatest at intermediate altitudes where slight warming will raise the temperature above freezing. Watersheds that are already rain-dominated, or are well below freezing, will not exhibit the effects discussed here in the near future.

Of course, besides issues of seasonal timing, a significant increase or decrease in annual total precipitation would be an important benefit or detriment (respectively) to hydropower generation.

The published literature largely supports this picture.

Madani and Lund (2009) looked at hydropower generation in 137 high-elevation systems under three simple climate change scenarios: wet, dry, and warming only. They found that existing storage capacity is sufficient to largely compensate for expected changes in the seasonal timing of snowmelt, runoff, and river flow. A hypothetical decrease in annual total runoff, however, translates more directly into a corresponding reduction in energy generation. The predicted response to a hypothetical increase in annual runoff is not symmetrical: that scenario results in increased spill and very little increase in energy production.

Results

The research team's results for optimized energy generation are driven primarily by large projected reductions in precipitation in the future climate scenario. In the study area, annual mean precipitation in the future period is reduced by as much as 30 percent compared to the historical reference period. Because of the complex relationships among precipitation, evapotranspiration, and runoff, these already-large precipitation decreases produce proportionately larger reductions in runoff and streamflow. In other words, the percentage reductions in runoff and river flow exceed those in precipitation.

This phenomenon is exaggerated by the tendency for warming to result in increased evaporation.

Disproportionate decreases in runoff in a dry future-climate scenario are seen in other modeling studies. Jones et al. (2005) investigated changes in runoff in several surface hydrology models in response to a hypothetical 1% change in precipitation and found responses ranging from 1.8 to 4.1%; that is, the percentage response in runoff was anywhere from roughly double to roughly four times the percentage change in precipitation.
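A minimal sketch (in Python, purely illustrative and not from the CEC study) of the precipitation-to-runoff amplification reported by Jones et al. (2005), treating their range as a simple elasticity factor:

```python
def runoff_response(precip_change_pct, elasticity):
    """Approximate % change in runoff for a small % change in precipitation,
    assuming a constant elasticity (only valid for modest perturbations)."""
    return elasticity * precip_change_pct

# Jones et al. (2005): a 1% change in precipitation produced roughly a
# 1.8% to 4.1% change in runoff, depending on the hydrology model used.
for elasticity in (1.8, 4.1):
    change = runoff_response(-1.0, elasticity)
    print(f"elasticity {elasticity}: 1% less precipitation -> {change:.1f}% runoff")
# elasticity 1.8: 1% less precipitation -> -1.8% runoff
# elasticity 4.1: 1% less precipitation -> -4.1% runoff
```

The same amplification, applied (non-linearly, in the full hydrology models) to the roughly 30 percent precipitation decline in the study scenario, is why the resulting declines in runoff and river flow are proportionately larger still.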

Reduced precipitation and resulting reductions in runoff result in reduced hydropower generation in all months and elevation bands

These results indicate that a future that is both drier and warmer would have important impacts on the ability to generate electricity from hydropower.
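As a rough physical illustration of why less runoff means less generation: a hydroelectric plant's output is approximately efficiency × water density × gravity × flow × head. The plant figures below are invented for illustration and are not taken from the CEC study.

```python
def hydro_power_mw(flow_m3s, head_m, efficiency=0.9):
    """Approximate hydroelectric output in megawatts:
    P = efficiency * water density * gravity * flow * head."""
    rho, g = 1000.0, 9.81   # water density (kg/m^3), gravity (m/s^2)
    return efficiency * rho * g * flow_m3s * head_m / 1e6

# Hypothetical plant: 100 m head, 500 m^3/s flow; then the same plant with runoff halved.
print(f"baseline: {hydro_power_mw(500, 100):.0f} MW")        # ~441 MW
print(f"runoff halved: {hydro_power_mw(250, 100):.0f} MW")   # ~221 MW
```

Because output scales linearly with flow for a given head, any sustained reduction in runoff feeds directly through to lost generation unless storage can shift water between seasons.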


Hydropower can’t help with the energy crisis

Preface. When fossil fuels are gone, there aren't many ways to balance the unreliable, intermittent, and often absent-for-weeks-at-a-time power from wind and solar. Biofuels and burning biomass are one solution: they are dispatchable and can kick in at any time to make up for a lack of wind and solar. But using biomass as a power source is one of the most destructive ways to generate power, as I explain in "Peak Soil", and probably has a negative energy return on energy invested.

So Plan B for renewable power would have to be hydropower. That was the main proposal Stanford professor Mark Jacobson had to keep the electric grid stable and up and running. But in 2017, a group of scientists pointed out that Jacobson's proposal rested upon the assumption that we can increase the amount of power from U.S. hydroelectric dams 10-fold when, according to the Department of Energy and all major studies, the real potential is just 1 percent of that. And since dams are so ecologically destructive, there would be a great deal of opposition to building even that 1% of the dams Jacobson proposed.

Plus, most states don't have much hydropower. Ten states have 80% of U.S. hydropower, with Washington state alone accounting for a whopping 25% of hydro-electricity.

Hydropower isn’t always available.  A lot of water has to be held back to provide agriculture and cities with water, so there will be many times of the year when it can’t be released to keep the electric grid up.

And hydropower isn’t renewable, dams have a lifespan of 50 to 200 years.

Without all that additional hydroelectricity, the 100% renewables proposal falls apart. There is no Plan C because of all the shortcomings of battery technologies.


***

  1. Ultimately dams silt up, usually within 25 to 200 years, so hydro-power is not a renewable source of power.
  2. Eventually the rebar in dams will rust and break apart the cement, causing the dam to fail (A Century from Now Concrete Will be Nothing But Rubble)
  3. We’ve already dammed up the best rivers. There are now more than 45,000 dams around the world, affecting more than half — 172 out of 292 — of the globe’s large river systems.  The largest are 1,000 feet high.
  4. Damming prevents salmon and other fish migration.
  5. We’ve built dams in more than half of the large river systems and have decreased the amount of sediment flowing to the world’s coasts by nearly 20%. This is causing long-term harm to the world’s river ecosystems and raising risks that many coastal areas — sometimes hundreds of miles from the dams — will be flooded soon because they are deprived of sediments that help offset soil erosion. The harmful effects of ebbing soil deposits will be accelerated by the rising sea levels caused by global warming, say the researchers. More than 37% of the world’s population, or over 2.1 billion people, live within 93 miles of a coast.
  6. Dams reduce biodiversity
  7. Dams create habitats more easily invaded by invasive plants, fish, snails, insects, and animals
  8. Dams can increase greenhouse gases as impounded water gets choked with rotting vegetation
  9. Dams, interbasin transfers, and water withdrawals for irrigation have fragmented 60% of the world’s rivers
  10. It can take years to build even a small run-of-river project.  Below are the permits/agencies AMP needed to build 4 run-of-river turbines in the Ohio River:

LIST OF PERMITS/APPROVAL/LICENSES/EVALUATIONS

  1. OPSB Certificate, Ohio Power Siting, Certificates for 50MW+ projects and T-line
  2. Preliminary Permit, FERC, Permit to prepare and submit a License App.
  3. License, FERC, Comprehensive energy project license
  4. NEPA, EPA, Compliance with statute on federal projects
  5. Section 404/10, Army Corps, Impacts to jurisdictional water
  6. Section 408, Army Corps, Permission to impair federal structure
  7. Section 401, OEPA, Impacts to wetlands/streams
  8. Water withdrawal registration, ODNR, Withdrawal of water
  9. NPDES, EPA/OEPA, Discharge of industrial water
  10. Stormwater Permit, OEPA, Manage site/construction stormwater
  11. Historic Preservation Act, SHPO, Evaluation of cultural/historic resources
  12. Endangered Species Evaluation, ODNR/USF&W, Evaluation of endangered/threatened species
  13. License, FAA, Transmission Tower approval for aviation
  14. ODOT Permit, ODOT, Roadway considerations/crossings
  15. Flood Impact Approval, FEMA, To ensure no impacts to flood waters

OTHER REQUIRED/POTENTIAL CONSULTING AGENCIES

  1. U.S Dept. of Agriculture-Forestry
  2. National Park Service
  3. U.S. Bureau of Land Management
  4. Federal Emergency Management Agency
  5. U.S. Geological Survey
  6. U.S. Department of Commerce

OTHER REQUIREMENT: Regional Transmission Organization Interconnection Process (more than 20 MW) – PJM or MISO in our region

References

Juan Pablo Orego. "River Killers: The False Solution of Mega-dams." In Tom Butler et al., eds., The Energy Reader: Overdevelopment and the Delusion of Endless Growth, 2012.

Patrick McCully. 2001. Silenced Rivers: The Ecology and Politics of Large Dams. Zed Books.

World Commission on Dams. 2000. Dams and Development: A New Framework for Decision-Making. Earthscan.

LeRoy Poff, et al. April 3, 2007. “Homogenization of Regional River Dynamics by Dams and Global Biodiversity Implications,” Proceedings of the National Academy of Sciences 104, no. 14 pp 5732–5737.

Edward Goldsmith and Nicholas Hildyard. 1984.  The Social and Environmental Effects of Large Dams. Sierra Club Books

Fred Pearce. 1992. The Dammed: Rivers, Dams, and the Coming World Water Crisis.  Bodley Head.

International Energy Agency, Key World Energy Statistics (Paris: IEA, 2010).



High-Tech can’t last: there are limited essential elements

[Image: rare earth minerals in an iPhone] There are 17 rare earth elements in the periodic table. About nine of those elements go into every iPhone sold, and if China were suddenly to disappear from the map tomorrow, Apple would lose about 90% of those elements. Source: Brownlee 2013.

Preface. This long post describes the rare metals and minerals phones, laptops, cars, microchips, and other essential high-tech products civilization depends on.

Metals and minerals aren’t just physically limited, they can be economically limited by a financial collapse, which dries up credit and the ability to borrow for new projects to mine and crush ores. Economic collapse drives companies and even nations out of business, disrupting supply chains.

Supply chains can also be disrupted by energy shortages and natural disasters. The more complex a product is, and the more minerals, metals, other materials, machines, and chemicals it depends on, the greater the odds of disruption.

Minerals and metals can also be politically limited: China controls over 90% of some critical elements.

And of course, they’re energetically limited.  Once oil begins to decline, so too will mining and all other manufacturing steps, which all depend on fossil energy.

The next war over resources is likely to be fought via cyber-attacks that take down an opponent's electric grid, which would affect nearly all other essential infrastructure, such as agriculture; defense; energy; healthcare; banking and finance; drinking water and water treatment systems; commercial facilities; dams; emergency services; nuclear reactors; information technology; communications; postal and shipping; transportation systems; government facilities; and critical manufacturing (NIPP).


***

Rare Earth metals are used in many products:

  1. Magnets (Neodymium, Praseodymium, Terbium, Dysprosium): Motors, disc drives, MRI, power generation, microphones and speakers, magnetic refrigeration
  2. Metallurgical alloys (Lanthanum, Cerium, Praseodymium, Neodymium, Yttrium): NimH batteries, fuel cells, steel, lighter flints, super alloys, aluminum/magnesium
  3. Phosphors (Europium, Yttrium, Terbium, Neodymium, Erbium, Gadolinium, Cerium, Praseodymium): display phosphors CRT, LPD, LCD; fluorescent lighting, medical imaging, lasers, fiber optics
  4. Glass and Polishing (Cerium, Lanthanum, Praseodymium, Neodymium, Gadolinium, Erbium, Holmium): polishing compounds, decolorizers, UV resistant glass, X-ray imaging
  5. Catalysts (Lanthanum, Cerium, Praseodymium, Neodymium): petroleum refining, catalytic converter, diesel additives, chemical processing, industrial pollution scrubbing
  6. Other applications:
  • Nuclear (Europium, Gadolinium, Cerium, Yttrium, Samarium, Erbium)
  • Defense (Neodymium, Praseodymium, Dysprosium, Terbium, Europium, Yttrium, Lanthanum, Lutetium, Scandium, Samarium)
  • Water Treatment
  • Pigments
  • Fertilizers
  • Fuel cells (SOFCs use lanthanum, cerium, praseodymium)

iPhones (Stone 2019)

200 million iPhones are sold a year, each of them containing 75 of the 118 elements in the periodic table, many of them rare, many of them sourced only from China. The minerals mentioned in this article were tungsten, tantalum, copper, tin, gold, silver, palladium, aluminum, cobalt, neodymium, and gallium, all of which produce toxic byproducts during mining and refining into metals.

And less than one percent of these metals are recycled, due to how difficult it is to collect enough electronic devices to make recycling worthwhile, and to extract the extremely minute quantities of metals from them.

Each element was extracted from ores using hands, shovels and hammers, heavy machinery, and explosives, then smelted and refined into metals before being molded, cut, screwed, glued, and soldered into products that are stuffed into packages and shipped worldwide for sale. Every step in this production process requires fossil fuel energy.

Recycling is very expensive, and iPhones would need to cost $5,000 to recover the extreme costs recycling would entail.  And recycling also generates a lot of waste as acids and other chemicals are used to try to separate the various metals from each other.  Recycling also takes energy, and today it’s basically impossible to extract all the metals that went into a phone. 

Apple's parts are soldered and glued into place before being fastened together with proprietary screws, which makes basic repairs like swapping out a broken screen or replacing a dead battery a headache, and makes it difficult for anyone lacking a half dozen robotic arms to tear apart an iPhone to recycle the components. This is why most e-waste recyclers still mainly recycle CRT TVs and other bulky, pre-smartphone-era devices. They don't have the precision equipment to take apart a phone or tablet, which were made difficult to tear apart and can potentially explode during the process.

For Apple, this may be a feature rather than a bug: Documents obtained by Motherboard in 2017 revealed that the company requires its recycling partners to shred iPhones and MacBooks so that their components cannot be reused, further reducing the value recyclers can get out.

Microchips: 60 minerals & metals

Microchips are nearly as essential as fossil fuels to maintaining civilization, yet they depend on 60 minerals and metals, chemicals, high-tech machines, etc., making them more vulnerable than any other product to supply chain and cascading failures.

While just 12 minerals were used to fabricate microchips initially, now over 60 different kinds of minerals are required (NMA 2017):

    • The U.S. is 100% dependent on imports for 19 different minerals and over 50% for another 43 minerals.  These trends are unsustainable in a highly competitive world economy in which the demand for minerals continues to grow and supply stability is a growing concern.
    • Many of these minerals are both rare and past peak production
    • Many of them come from only one country (single-source failure)
    • China is the sole source for many of these minerals, and other suppliers, such as the failed state of the Democratic Republic of Congo, are not reliable.

Laptops need 44 raw materials from 27 Countries (Ruffle 2010)

Laptop supply chain: Geographical

Aluminum, Antimony, Arsenic, Barium, Beryllium, Bismuth, Boron, Bromine, Cadmium, Chromium, Cobalt, Copper, Europium, Ferrite, Gallium, Germanium, Gold, Indium, Lead, Lithium, Magnesium, Manganese, Mercury, Nickel, Niobium, Palladium, Petroleum, Phosphorus, Platinum, Refined Gallium, Rhodium, Ruthenium, Selenium, Silicon, Silver, Stainless steel, Steel, Tantalum, Terbium, Tin, Titanium, Vanadium, Yttrium,  Zinc

Argentina, Australia, Belgium, Brazil, Canada, Chile, China, Colombia, Democratic Republic of Congo, Egypt, Ethiopia, France, Israel, Japan, Kazakhstan, Malaysia, Mexico, Namibia, Nigeria, Norway, Peru, Russia, Saudi Arabia, South Africa, Sudan, Ukraine, USA

Source: laptop supply chain assembly process documented in Bonanni et al. (2010).

We're dependent on China for 100% of these metals and minerals: Arsenic, Asbestos, Bauxite, Alumina, Cesium, Fluorspar, Gallium, Graphite (natural), Indium, Manganese, Mica (sheet, natural), Niobium (columbium), Quartz crystal (industrial), Rubidium, Strontium, Tantalum, Thallium, Thorium, Vanadium, Yttrium

Percent dependency on imports for these minerals: 99% gemstones; 96% Vanadium; 92% Bismuth; 91% Platinum; 90% Germanium; 88% Iodine; 85% Diamond (natural industrial stone); 87% Antimony; 86% Rhenium; 83% Barite; 77% Titanium mineral concentrates; 81% Potash (essential for agriculture); 78% Cobalt; 78% Rhenium; 75% Tin; 73% Silicon carbide (crude); 72% Zinc; 70% Chromium; 65% Garnet (industrial); 64% Titanium (sponge); 62% Peat; 57% Silver; 54% Palladium; 49% Nickel; 46% Magnesium compounds; 42% Tungsten; 36% Silicon; 35% Copper; 35% Nitrogen (fixed, Ammonia: essential for industrial agriculture)

Eight Rare Earth Metals are used in hybrid electric vehicles

[Image: rare earth metals in a hybrid electric car] Source: REE applications in a hybrid electric vehicle. Molycorp Inc., 2010.
  1. Cerium: UV cut glass, Glass and mirrors, polishing powder, LCD screen, catalytic converter, hybrid NiMH battery, Diesel fuel additive
  2. Dysprosium: Hybrid electric motor and generator
  3. Europium: LCD screen
  4. Lanthanum: Catalytic Converter, Hybrid NiMH battery, diesel fuel additive
  5. Neodymium: magnets in 25+ electric motors throughout vehicle, Headlight Glass, Hybrid electric motor and generator
  6. Praseodymium: Hybrid electric motor and generator
  7. Terbium: Hybrid electric motor and generator
  8. Yttrium: LCD screen, component sensors

Rare Earth Elements

Rare earth elements (and platinum group metals) are essential for high-tech products: e.g. hybrid cars, computers, cell phones, televisions — anything with a microchip, even toasters. They are finite, mostly controlled by China (up to 97% by some estimates), the remaining resources are mainly in war-torn failed states in Africa, Afghanistan, etc., and they are vulnerable to supply chain failure.

"To provide most of our power through renewables would take hundreds of times the amount of rare earth metals that we are mining today," according to Thomas Graedel at the Yale School of Forestry & Environmental Studies.

So renewable energy resources like windmills and solar PV may not be able to replace fossil fuels, since there’s not enough of many essential minerals to scale this technology up.

There are no substitutes for rare earth minerals and metals.

Computer chips are dependent on 60 minerals, many rare, which is why this will be one of the first technologies to fail in the future as a series of cascading failures, supply chain breakdowns, and other problems arise when fossil fuels start to decline at exponential rates within the next decade. Computer chips are also vulnerable to Liebig's Law of the Minimum: if even one of these 60 minerals is missing, the chip can't be manufactured (see the sketch below).
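Liebig's Law of the Minimum, applied to a bill of materials, can be sketched as a simple min() over input availability: production is capped by the scarcest required input, and a single missing mineral caps it at zero. The minerals and quantities below are placeholders for illustration, not an actual chip recipe.

```python
def buildable_units(bill_of_materials, stock):
    """Units that can be built, limited by the scarcest input
    (Liebig's Law of the Minimum); any missing input yields zero."""
    return min(stock.get(mineral, 0) // qty
               for mineral, qty in bill_of_materials.items())

# Hypothetical bill of materials (grams per chip) and warehouse stock (grams).
bom   = {"silicon": 10, "gallium": 2, "tantalum": 1, "neodymium": 1}
stock = {"silicon": 10_000, "gallium": 4_000, "tantalum": 5_000}  # no neodymium

print(buildable_units(bom, stock))   # 0 -- one missing mineral halts production
```

The point of the sketch is that abundance of 59 inputs is irrelevant: output tracks the single scarcest one, which is why single-source minerals are the weak link in the chain.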

Since mining is one of the most energy intensive and polluting enterprises, the decline of fossil fuels will cause many mines to shut down, hastening the end of hi-tech products as needed rare metals — even common ones at some point down the energy ladder — are no longer available. We mined the highest concentration ores at a time when fossil fuels were plentiful, now we’re down to low-grade ore at a time when the RATE of fossil fuel extraction is about to exponentially decline.

China controls many of these rare metals, Russia has 80% of palladium supplies, another potential source of supply chain breakdowns if they’re withheld from world markets.

Why are rare metals rare?

By and large they make up a few parts per billion of Earth’s crust, and we don’t know where they are, according to Murray Hitzman, an economic geologist at the Colorado School of Mines.  Some of these minerals are byproducts of mining for aluminium, zinc and copper.

An element’s price isn’t the only problem. The rare earth group of elements, to which many of the most technologically critical belong, are generally found together in ores that also contain small amounts of radioactive elements such as thorium and uranium. In 1998, chemical processing of these ores was suspended at the only US mine for rare earth elements in Mountain Pass, California, due to environmental concerns associated with these radioactive contaminants. The mine is expected to reopen with improved safeguards later this year, but until then the world is dependent on China for nearly all its rare-earth supplies. Since 2005, China has been placing increasingly stringent limits on exports, citing demand from its own burgeoning manufacturing industries.

That means politicians hoping to wean the west off its ruinous oil dependence are in for a nasty surprise: new and greener technologies are hardly a recipe for self-sufficiency.

So what can we do? Finding more readily available materials that perform the same technological tricks is not likely, says Karl Gschneidner, a metallurgist at the DoE's Ames Laboratory. Europium has been used to generate red light in televisions for almost 50 years, he says, while neodymium magnets have been around for 25. "People have been looking ever since day one to replace them, and nobody's done it yet."

Technological concerns and environmental permits can delay extraction for 15 years after an ore deposit is discovered.

Cerium (see Lanthanum) is used in catalytic converters, oil refining

Dysprosium  has magnetic properties that don’t go away in high temperatures, essential for high-performance magnets in turbines, hard discs, and many other products. The US navy has used it in an advanced active sonar transducer, producing and then picking up high-powered “pings” underwater.

According to the US DoE, there are no suitable replacements, so it’s the most critical element for emerging clean energy technologies. China is the only country with significant known deposits; mines in Australia and Canada have only small quantities. Shortfalls of dysprosium were expected before 2015.

Erbium is essential for the optical fibers used to transport light-encoded information around the world, because it amplifies light signals as they fade along the way.

Europium  is essential for lighting, so far no substitutes have been found. Everything from fluorescent light bulbs to laptop and iPhone screens relies on small but critical amounts of europium to generate a pleasant red color and terbium to make green.

“There are only 100 elements known to man, and we know what colors all of them produce, and those are the only ones that produce those particular shades,” says Alex King, director of Ames Laboratory, a rare-earth research center.

Europium and terbium combined help to produce the images on most television screens. Yttrium plays a supporting role as well.

According to the DoE, europium could be in short supply as early as 2015 – and terbium even sooner. For yttrium we have already reached crunch time: demand outstripped supply in 2010.

Gadolinium (Gd) is used in TV screens, X-ray and MRI scanning systems. In nuclear power plants it’s used in boiling water reactors to even out performance.  Gadolinium oxide is also used to absorb neutrons as the uranium oxide fuel gets used up.

Hafnium  has amazing heat resistance so it was used as part of the alloy used in the nozzle of rocket thrusters fitted to the Apollo lunar module. It’s also used in the transistors of powerful computer chips because hafnium oxide is a highly effective electrical insulator. Compared with silicon dioxide, which is conventionally used to switch transistors on and off, it is much less likely to let unwanted currents seep through. It also switches 20% faster, allowing more information to pass. This has enabled transistor size to shrink from 65 nanometres with silicon dioxide  to 32 nm.  Such innovations also keep smartphones small.

Indium is used in touchscreens, PV thin films, and solar cells.  China has 73% of the world’s Indium reserves and refines half of it. China limits indium exports. The USA has been 100% dependent on indium imports since 1972.

Without expanded production after 2015, the DoE says reductions in “non-clean energy demand” will be needed “to prevent shortages and price spikes”. In other words, we might need to choose which is the more important – smartphones or solar cells.

Lanthanum   

  • Is the metal in nickel-metal-hydride batteries used in hybrid cars.
  • Used as a catalyst in oil refining to separate oil into products like gasoline, jet fuel and heating oil
  • Added to swimming-pool cleaner as an algae remover; it absorbs phosphate from the water, starving algae of its fundamental food source
  • Used in camera and telescope lenses, carbon lighting in studios, and cinema projection

Lithium-ion batteries are unsurpassed in energy density, and dominate the market in laptops, cellphones and other devices where a slimline figure is all-important.

Yet they are also rather explosive characters: computer manufacturer Dell recalled four million lithium laptop batteries in 2006 amid fears they might burst into flames if overheated. That risk makes them unsuitable for use in electric and hybrid electric cars, leaving the market to the less explosion-prone nickel-metal-hydride batteries.

This is where lanthanum and cerium come in. They are the main components of a “mischmetal” mixture of rare earth elements that makes up the nickel-metal-hydride battery’s negative electrode. The increased demand for electric cars, and the elements’ subsidiary roles as phosphorescents in energy-saving light bulbs, place lanthanum and cerium on the US DoE’s short-term “near-critical” list for green technologies – a position also assumed by lithium in the medium term.

Neodymium (Nd) 

  • Used in magnets in generators in wind turbines, hybrid cars, laptops, loudspeakers, and computer hard drives
  • Used in high-temp dry film lubricant that works at 2,000 degrees Fahrenheit
  • Used in welding goggles to cut out the yellow-green wavelength of light, which would burn your retina

Neodymium is used in the magnets that keep the motors of both wind turbines and electric cars turning. When mixed with iron and boron, neodymium makes magnets 12 times stronger than conventional iron magnets.

These numerous uses make for a perfect storm threatening future supplies. In its Critical Materials Strategy, which assesses elements crucial for future green-energy technologies, the US Department of Energy estimates that wind turbines and electric cars could make up 40 per cent of neodymium demand in an already overstretched market. Together with increasing demand for the element in personal electronic devices, that makes for a clear “critical” rating.

Praseodymium (Pr) is used to create strong metals for aircraft engines and in the glass that protects welders and glass makers.

Promethium (Pm)

Rhenium is used in compact fluorescent light bulbs, and is a byproduct of copper. It’s one of the scarcest elements, and helps steel retain its shape and hardness even under extreme force and high temperatures.

Samarium (Sm)

Scandium (Sc)

Technetium is very rare because, though present within uranium ores in Earth’s crust, it quickly falls apart through radioactive decay. Globally, around 30 million medical procedures involving technetium are performed each year. But two new Canadian reactors which were to secure supplies of technetium and other medical isotopes have been mothballed, so it is questionable whether these procedures can continue at the same rate (New Scientist, 16 January 2010, p 30). For now, a handful of aging reactors supplies the world’s hospitals.

Tellurium  In 2009, solar cells made from thin films of cadmium telluride became the first to undercut bulky silicon panels in cost per watt of electricity generating capacity. That points to a cheaper future for solar power – perhaps.

Both cadmium and tellurium are mining by-products – cadmium from zinc, and tellurium from copper. Cadmium’s toxicity means it is in plentiful supply: zinc producers are obliged to remove it during refining, and it has precious few other uses.

For tellurium, the situation is reversed. Because the global market for the element has been minute compared with that for copper – some $100 million against over $100 billion – there has been little incentive to extract it. That will change as demand grows, but better extraction methods are expected to only double the supply, which will be nowhere near enough to cover the predicted demand if the new-style solar cells take off. The US DoE anticipates a supply shortfall by 2025.

Terbium (Tb) (see Europium) Used in energy-efficient lighting

Yttrium (Y) (also see Europium)

  • Used in a ceramic called yttria-stabilized zirconia, or YSZ, which has the structural strength of a diamond and is used to make wind-turbine blades
  • Powderized YSZ is used as an electrolyte in fuel cells
  • Yttrium phosphors are used in fluorescent lamps


References

BBC. 13 March 2012. What are ‘rare earths’ used for?

British Geological Survey. Rare Earth Elements. Natural environment research council.

British Geological Survey, World Mineral Production 2005. Available at http://www.bgs.ac.uk/mineralsuk/commodity/world/home.html

Brownlee, J. 2013. Read About China’s “Apocalyptic, Toxic” Stranglehold On The iPhone’s Rare Earth Elements. cultofmac.com.

Cheng, Z., Dedrick, J. and Kraemer, K. Technology and Organizational Factors in the Notebook Industry Supply Chain. Institute for Supply Management 2006.

Crow, James Mitchell.  20 June 2011. 13 exotic elements we can’t live without.    New Scientist

Dean, J. 2005. The Laptop Trail The Modern PC Is a Model Of Hyperefficient Production And Geopolitical Sensitivities. The Wall Street Journal.

Emsley, J. 2001. Nature’s Building Block, Oxford University Press. 12 Metric Tons of Ruthenium are mined each year out of a global supply estimated at 5,000. Ruthenium is used to harden platinum and palladium for electrical contacts and fountain pen nibs, and as a coating in hard disks. 75% of worldwide Polysilicon production is located in the US and Japan [The Prometheus Institute]. Silicon is available in almost every country, primarily in the form of sand.  http://pcic.merage.uci.edu/papers/2006/CAPSenglish.pdf

Metallurgical Plants Database for Google Earth. Available at http://www.pyro.co.za/MetPlants/

NIPP.  2013. The National Infrastructure Protection Plan. Department of Homeland Security.

NMA. 2017. Minerals: America’s strength. National Mining Association.

Ruffle, S. 2010. System shock framework: resilient international supply chains. University of Cambridge.

Stone, M. 2019. Behind the Hype of Apple’s Plan to End Mining. earther.gizmodo.com

Tweney, D. 14 Mar 2007. What’s Inside Your Laptop? PCMag.com

US Department of Energy. Critical Materials Strategy. bit.ly/eLFwuo

American Physical Society and Materials Research Society. Energy Critical Elements.

US Geological Survey. Mineral Commodity Summaries.

Williams, E. D., Ayres, R. U., Heller, M. 2002. The 1.7 kilogram microchip: Energy and material use in the production of semiconductor devices. Environ. Sci. Technol. 36 (24), 5504.
www.it-environment.org/publications/1.7%20kg%20microchip.pdf

 

Laptop supply chain: Geographical

Black starting the grid after a power outage

Toronto during the 2003 Northeast blackout, which required black-starting of generating stations. Source: https://en.wikipedia.org/wiki/Black_start

Black starts

Large blackouts can be quite devastating and it isn’t easy to restart the electric grid again.

This is typically done by designated black start units of natural gas, coal, hydro, or nuclear power plants that can restart themselves using their own power with no help from the rest of the electrical grid.  Not all power plants can restart themselves.

After a brief introduction to black starts, I have a recent example of one in Venezuela to give you an idea of how hard restarting a grid can be.

Clearly a renewable grid running mainly on wind and solar will crash a lot, and without hydropower or fossil fuels to restart the grid (both of which are finite and won’t be available at some point), the idea that we can just do things when the grid is up and wait it out when the grid is down isn’t going to work.  This is a huge problem for a 100% renewable system that may not be solvable.  Microgrids don’t solve anything, because manufacturing and industry require mind-boggling amounts of electricity to stay in business.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

In regions lucky enough to have hydropower (just 10 states have 80% of the hydropower in the U.S.) this is usually the designated black start source since a hydroelectric station needs very little initial power to start, and can put a large block of power on line very quickly to allow start-up of fossil-fuel or nuclear stations.

Wind turbines are not suitable for black start because wind may not be available when needed (Fox 2007) and likewise solar power plants suffer from the same problem.

The impact of a blackout increases exponentially with its duration, and the duration of restoration decreases exponentially with the availability of initial sources of power. For several time-critical loads, quick restoration (minutes rather than hours or even days) is crucial. Black-start generators, which can be started without any connection to the grid, are a key element in restoring service after a widespread outage. These initial sources of power range from pumped-storage hydropower, which can take 5-10 minutes to start, to certain types of combustion turbines, which take on the order of hours.

For a limited outage, restoration can be rapid, which will then allow sufficient time for repair to bring the system to full operability, although there may be a challenge for subsurface cables in metropolitan areas. On the other hand, in widespread outages, restoration itself may be a significant barrier, as was the case in the 1965 and 2003 Northeast blackouts. Natural disasters, however, can also lead to significant issues of repair—after Hurricanes Rita and Katrina, full repair of the electric power system took several years (NAS)

Restoring a system from a blackout requires a very careful choreography: re-energizing transmission lines from generators still online inside the blacked-out area or from systems outside it, restoring station power to off-line generating units so they can be restarted, synchronizing the generators to the interconnection, and then constantly balancing generation and demand as additional generating units and customer loads are restored to service.

Many may not realize that it takes days to bring nuclear and coal-fired power plants back on-line, so restoring power was done with gas-fired plants normally used for peak periods to cover baseload needs normally met by coal and nuclear plants. The diversity of our energy systems proved invaluable (CR).

Restarting the grid after the 2003 power outage was especially difficult.

The blackout shut down over 100 power plants, including 22 nuclear reactors, cut off power for 50 million people in 8 states and Canada, including much of the Northeast corridor and the core of the American financial network, and showed just how vulnerable our tightly knit network of generators, transmission lines, and other critical infrastructure is.

The dependence of major infrastructural systems on the continued supply of electrical energy, and of oil and gas, is well recognized. Telecommunications, information technology, and the Internet, as well as food and water supplies, homes and worksites, are dependent on electricity; numerous commercial and transportation facilities are also dependent on natural gas and refined oil products.

Newman, L. H. 2018. Why it’s so hard to restart Venezuela’s power grid. Wired.com

Venezuela’s massive nationwide power outages, which began on Thursday, have so far resulted in at least 20 deaths, looting, and loss of access to food, water, fuel, and cash for many of the country’s 31 million residents. Late Monday, the United States said its diplomats would leave the US embassy in Caracas, citing deteriorating conditions. As the societal impacts intensify and Venezuela’s internal power struggle continues, the country is clearly struggling to restart its grid and meaningfully restore power—a problem exacerbated by its aging infrastructure.

Reenergizing a dead grid, a process known as a black start, is challenging under any circumstances.

Government statements and reports indicate that the blackout stems from a problem at the enormous Guri dam hydropower plant in eastern Venezuela, which generates 80 percent of the country’s electricity. And the already arduous process of restoring power seems hobbled by years of system neglect. It’s also unclear whether Venezuela has the specialists, workforce, and spare equipment available on the ground to triage the situation quickly.

“The challenge with black start is always just knowing specifically what happened,” says Nathan Wallace, director of cyber operations and a staff engineer at secure grid companies Cybirical and Ampirical Solutions. “It sounds like there may be lack of maintenance and some mismanagement. And typically if a system hasn’t been maintained, that means they really don’t have the visualization needed to understand the state of the system in real time. If the procedure for black start is not accurately representing the state of the system, there can be problems.”

A black start generally involves seeding power from an independent source—like small diesel generators or natural gas turbines—to restart power plants in an otherwise dead transmission network. This process is often called bootstrapping. Hydroelectric plants in particular can be designed to essentially black-start themselves. In these plants, water—often from a dam, as in the case of Guri—flows through a turbine, spinning it and powering an electric generator. Since it takes relatively little independent energy to open the water intake gates and potentially generate a lot of power very quickly, hydroelectric plants can work well for black start. It is unclear whether Venezuela’s Guri plant is designed with this scenario in mind.

What makes any black-start process especially complicated is the need to load balance a system, so that as power surges through, the supply from the generator matches the demand. Otherwise the generation plant will run too fast or be exhausted, causing the system to fail again.

It’s a large stepwise process: build up load, build up generation, build up more load, build up more generation, until there is enough reliability to go to the next element of the system. If a utility has maintenance issues, a history of operational problems, no plan or an outdated plan, or a poor understanding of the limitations of its grid, everything it attempts becomes far more difficult.
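As a rough illustration of that stepwise balancing act, here is a minimal toy sketch. The generator capacities, load blocks, and reserve margin are invented for illustration, not a real restoration plan; the point is simply that load is picked up only when supply stays ahead of demand by a safety margin.

```python
# Toy black-start sequencing: alternate between energizing generation blocks
# and picking up load blocks, never letting demand exceed available supply.
# All capacities and loads below are invented for illustration only.

generation_blocks = [5, 20, 40, 80]      # MW each unit can add once started
load_blocks       = [3, 10, 25, 60, 40]  # MW of customer demand, restored in pieces

def restore(gen_blocks, load_blocks, reserve_margin=0.1):
    supply, demand = 0.0, 0.0
    gen, loads = list(gen_blocks), list(load_blocks)
    steps = []
    while gen or loads:
        # Pick up the next load block only if supply still covers it with margin.
        if loads and supply >= (demand + loads[0]) * (1 + reserve_margin):
            demand += loads.pop(0)
            steps.append(("load", supply, demand))
        elif gen:
            supply += gen.pop(0)   # otherwise bring another generator online
            steps.append(("generation", supply, demand))
        else:
            break                  # generation exhausted; remaining load must wait
    return steps

for action, supply, demand in restore(generation_blocks, load_blocks):
    print(f"add {action:<10} supply={supply:6.1f} MW  demand={demand:6.1f} MW")
```

In this toy run the last 40 MW of load is never restored because no generation is left to cover it with margin, which is the flavor of problem a poorly maintained system runs into at much larger scale.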

Venezuela’s grid is based on a classic model of bulk power generation. From a centralized plant—in this case, Guri—substations transform electricity from low to high voltage so it can be transmitted all over the country and then converted back down to lower voltage for local distribution. This is fairly typical in small countries, though some prioritize adding diverse generation or connecting with neighboring grids to increase redundancy. Black-start researchers and practitioners say, though, that any model has pros and cons. While distributed systems don’t have a single point of generation failure, they can be more difficult to black start if they do go down, since more generation sites need to be bootstrapped and there are more loads to balance.

Regardless of the setup, the crucial component of all black starts is understanding what caused the outage, having the ability to fix it, and working with a system that can handle the power surges and fluctuations involved in bringing power back online. Without all of these elements in place, says Tim Yardley, a senior researcher at the University of Illinois focused on industrial control crisis simulations, black starts can be prohibitively difficult to execute.

“Reenergizing a grid in some ways is more of a shock to the system than it operating in its norm,” Yardley says. “If infrastructure is aging, and there’s a lack of maintenance and repairs, as you try to turn it back on and try to balance the loads you may have stuff that’s not going to come back up, infrastructure that’s been physically damaged or that was in such a bad state of repair that reenergizing it causes other problems.”

Crews attempting to deal with black-starting a frail and brittle grid also face major safety considerations, like explosions. “You have a maintenance issue and a manpower issue, because it’s extremely dangerous to reenergize a system if you have gear that hasn’t been maintained well,” Yardley notes.

Venezuela has faced years of power instability since about 2009, including two major blackouts in 2013 and a power and water crisis in 2016. At times the blackouts were caused in part by weather conditions like El Niño, but overall they have established a pattern of poor planning, mismanagement, and lack of investment on the part of the government. President Maduro has repeatedly overseen rationing efforts resulting in erratic power and has even set official national clocks back to put the country’s morning commute in daylight.

References

CR. September 4 & 23, 2003. Implications of power blackouts for the nation’s cybersecurity and critical infrastructure protection. Congressional Record, House of Representatives. Serial No. 108–23. Christopher Cox (California), Chairman, Select Committee on Homeland Security.

Fox, Brendan et al; Wind Power Integration – Connection and System Operational Aspects, Institution of Engineering and Technology, 2007 page 245

NAS. 2012. Terrorism and the Electric Power Delivery System. National Academy of Sciences.

NAS. 2013. The Resilience of the Electric Power Delivery System in Response to Terrorism and Natural Disasters. National Academy of Sciences.


Rare Earth – why we may be the only intelligent species in the universe

Preface. I think that Ward & Brownlee’s 2000 book “Rare Earth: why complex life is uncommon in the universe” is one of the most profound and important books I’ve ever read.  What if we are the only intelligent species in the galaxy, or universe?  There are many strong reasons to think so.  Bacteria, on the other hand, are a dime a dozen, probably splattered all over planets within a reasonable Goldilocks zone from their star.

It was clear to me in college from my ecology classes that we were destroying our life support systems here on earth so we could “Shop shop shop and Grow Grow Grow!”, and that this could possibly drive us extinct, though I have always thought someone would survive, perhaps a remote rainforest tribe or Amish farmers in Patagonia.  But they’d have to live on a ruined planet that might take millions of years to recover, as was the case after the mother of all extinctions, the Permian.

This is an issue many people are aware of, but not worried about. We can always go to Mars.  Not!  As I write about in “Escaped to Mars after we’ve trashed the Earth?”

Nearly all of the damage we’ve done to the planet is due to our use of fossil fuels, which magnify our puny muscle power by many orders of magnitude and have bloated our population to nearly 8 billion people. A healthy human being pedaling quickly on a bicycle can produce enough energy to light a 100-watt bulb. If this person works eight hours a day, five days a week, it would take 8.6 years of human labor to produce the energy stored in one barrel of oil. The world today consumes 89 million barrels of oil, every single day.   So when we are back to our muscles rather than machines equal to hundreds of horses, we simply won’t be able to do nearly as much (more about energy slaves here).
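The arithmetic behind that figure is easy to check. Here is a minimal sketch, assuming a barrel of oil holds roughly 1,700 kWh of chemical energy (a commonly cited round number; the exact value depends on the crude):

```python
# Rough check of the "years of human labor per barrel of oil" figure.
# Assumption: one barrel of crude oil contains about 1,700 kWh of energy
# (a commonly quoted approximation, not an exact value).

barrel_kwh   = 1700.0
human_watts  = 100.0           # sustained cycling output, about one light bulb
hours_per_yr = 8 * 5 * 52      # 8 h/day, 5 days/week, 52 weeks

human_kwh_per_year = human_watts * hours_per_yr / 1000.0   # ~208 kWh/year
years_per_barrel   = barrel_kwh / human_kwh_per_year

print(round(human_kwh_per_year), "kWh per year of labor")
print(round(years_per_barrel, 1), "years of labor per barrel")  # ~8.2, close to the 8.6 cited
```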

As oil declines, population will shrink to 1 billion (what it was before fossil fuels), the tremendously destructive machines and the manufacture of pesticides and other toxic chemicals will cease, and our ability to make plastics, pollute air, land, and water, catch the last fish in the sea, and deplete fresh water and topsoil – all of the harm we’re doing – will diminish tremendously.  However, we’ve already degraded the planet so much that the final population after energy descent may be well below 1 billion.

When it comes to life in the universe, I have no doubt that one-celled life forms are all over the galaxy, but there is a good chance we are the only intelligent, aware species in our galaxy or even universe. 

So energy descent is depressing, but from the bigger-picture view of how fossil fuels are rendering our planet uninhabitable, their disappearance may be the only way to preserve our species, which may be the only lifeform gazing out on the universe in awe and wonder.  What a tragedy if we destroyed ourselves (though we could still blow it with nuclear wars over the remaining oil).

I doubt I’ll survive energy descent, but knowing that this tragedy may allow our species to survive gives me great comfort.

Below is the latest news on how rare intelligent life in the universe may be, and a long extract from Wikipedia’s summary of the “Rare Earth Hypothesis”. But you really ought to go to the original Wikipedia article, since I didn’t include its pictures, counter-arguments, or references; better yet, read the book “Rare Earth”.

At the end, I also have an article from The New Yorker about space aliens visiting earth, perhaps it’s a good thing they almost certainly don’t exist.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

2019. New study dramatically narrows the search for advanced life in the universe. Phys.org.

Scientists may need to rethink their estimates for how many planets outside our solar system could host a rich diversity of life. In a new study, a UC Riverside–led team discovered that a buildup of toxic gases in the atmospheres of most planets makes them unfit for complex life as we know it. Accounting for predicted levels of certain toxic gases narrows the safe zone for complex life by at least half—and in some instances eliminates it altogether.

Using computer models to study atmospheric climate and photochemistry on a variety of planets, the team first considered carbon dioxide. Any scuba diver knows that too much of this gas in the body can be deadly. But planets too far from their host star require carbon dioxide—a potent greenhouse gas—to maintain temperatures above freezing. Earth included.

“To sustain liquid water at the outer edge of the conventional habitable zone, a planet would need tens of thousands of times more carbon dioxide than Earth has today,” said Edward Schwieterman, the study’s lead author and a NASA Postdoctoral Program fellow working with Lyons. “That’s far beyond the levels known to be toxic to human and animal life on Earth.”

Carbon dioxide toxicity alone restricts simple animal life to no more than half of the traditional habitable zone. For humans and other higher order animals, which are more sensitive, the safe zone shrinks to less than one third of that area.

What is more, no safe zone at all exists for certain stars, including two of the sun’s nearest neighbors, Proxima Centauri and TRAPPIST-1. The type and intensity of ultraviolet radiation that these cooler, dimmer stars emit can lead to high concentrations of carbon monoxide, another deadly gas. Carbon monoxide cannot accumulate on Earth because our hotter, brighter sun drives chemical reactions in the atmosphere that destroy it quickly.

If life exists elsewhere in the solar system, Schwieterman explained, it is deep below a rocky or icy surface. So, exoplanets may be our best hope for finding habitable worlds more like our own.

“I think showing how rare and special our planet is only enhances the case for protecting it,” Schwieterman said. “As far as we know, Earth is the only planet in the universe that can sustain human life.”

Source: Schwieterman, E. W., et al. 2019. A limited habitable zone for complex life. The Astrophysical Journal.

Wikipedia. 2019. Rare Earth Hypothesis. https://en.wikipedia.org/wiki/Rare_Earth_hypothesis

Requirements for complex life

The Rare Earth hypothesis argues that the evolution of biological complexity requires a host of fortuitous circumstances, such as a galactic habitable zone, a central star and planetary system having the requisite character, the circumstellar habitable zone, a right-sized terrestrial planet, the advantage of a gas giant guardian like Jupiter and a large natural satellite, conditions needed to ensure the planet has a magnetosphere and plate tectonics, the chemistry of the lithosphere, atmosphere, and oceans, the role of “evolutionary pumps” such as massive glaciation and rare bolide impacts, and whatever led to the appearance of the eukaryote cell, sexual reproduction and the Cambrian explosion of animal, plant, and fungi phyla. The evolution of human intelligence may have required yet further events, which are extremely unlikely to have happened were it not for the Cretaceous–Paleogene extinction event 66 million years ago removing dinosaurs as the dominant terrestrial vertebrates.

In order for a small rocky planet to support complex life, Ward and Brownlee argue, the values of several variables must fall within narrow ranges. The universe is so vast that it could contain many Earth-like planets. But if such planets exist, they are likely to be separated from each other by many thousands of light years. Such distances may preclude communication among any intelligent species evolving on such planets, which would solve the Fermi paradox: “If extraterrestrial aliens are common, why aren’t they obvious?”

The right location in the right kind of galaxy

Rare Earth suggests that large parts of the known universe, including much of our galaxy, are “dead zones” unable to support complex life. Those parts of a galaxy where complex life is possible make up the galactic habitable zone, primarily characterized by distance from the Galactic Center. As that distance increases:

  1. Star metallicity declines. Metals (which in astronomy means all elements other than hydrogen and helium) are necessary to the formation of terrestrial planets.
  2. The X-ray and gamma ray radiation from the black hole at the galactic center, and from nearby neutron stars, becomes less intense. Thus the early universe, and present-day galactic regions where stellar density is high and supernovae are common, will be dead zones.
  3. Gravitational perturbation of planets and planetesimals by nearby stars becomes less likely as the density of stars decreases. Hence the further a planet lies from the Galactic Center or a spiral arm, the less likely it is to be struck by a large bolide which could extinguish all complex life on a planet.
  4. Item #1 rules out the outer reaches of a galaxy; #2 and #3 rule out galactic inner regions. Hence a galaxy’s habitable zone may be a ring sandwiched between its uninhabitable center and outer reaches.
  5. Also, a habitable planetary system must maintain its favorable location long enough for complex life to evolve. A star with an eccentric (elliptic or hyperbolic) galactic orbit will pass through some spiral arms, unfavorable regions of high star density; thus a life-bearing star must have a galactic orbit that is nearly circular, with a close synchronization between the orbital velocity of the star and of the spiral arms. This further restricts the galactic habitable zone within a fairly narrow range of distances from the Galactic Center. Lineweaver et al. calculate this zone to be a ring 7 to 9 kiloparsecs in radius, including no more than 10% of the stars in the Milky Way, about 20 to 40 billion stars. Gonzalez, et al. would halve these numbers; they estimate that at most 5% of stars in the Milky Way fall in the galactic habitable zone.
  6. Approximately 77% of observed galaxies are spiral, two-thirds of all spiral galaxies are barred, and more than half, like the Milky Way, exhibit multiple arms. According to Rare Earth, our own galaxy is unusually quiet and dim (see below), representing just 7% of its kind. Even so, this would still represent more than 200 billion galaxies in the known universe.
  7. Our galaxy also appears unusually favorable in suffering fewer collisions with other galaxies over the last 10 billion years, which can cause more supernovae and other disturbances. Also, the Milky Way’s central black hole seems to have neither too much nor too little activity (Scharf 2012).
  8. The orbit of the Sun around the center of the Milky Way is indeed almost perfectly circular, with a period of 226 Ma (million years), closely matching the rotational period of the galaxy. However, the majority of stars in barred spiral galaxies populate the spiral arms rather than the halo and tend to move in gravitationally aligned orbits, so there is little that is unusual about the Sun’s orbit. While the Rare Earth hypothesis predicts that the Sun should rarely, if ever, have passed through a spiral arm since its formation, astronomer Karen Masters has calculated that the orbit of the Sun takes it through a major spiral arm approximately every 100 million years. Some researchers have suggested that several mass extinctions do correspond with previous crossings of the spiral arms.

Orbiting at the right distance from the right type of star

According to the hypothesis, Earth has an improbable orbit in the very narrow habitable zone around the Sun.

The terrestrial example suggests that complex life requires liquid water, requiring an orbital distance neither too close nor too far from the central star, another scale of habitable zone or Goldilocks Principle: The habitable zone varies with the star’s type and age.

For advanced life, the star must also be highly stable, which is typical of middle star life, about 4.6 billion years old. Proper metallicity and size are also important to stability. The Sun has a low 0.1% luminosity variation. To date no solar twin star, with an exact match of the sun’s luminosity variation, has been found, though some come close. The star must have no stellar companions, as in binary systems, which would disrupt the orbits of planets. Estimates suggest 50% or more of all star systems are binary. The habitable zone for a main sequence star very gradually moves out over its lifespan until it becomes a white dwarf and the habitable zone vanishes.

The liquid water and other gases available in the habitable zone bring the benefit of greenhouse warming. Even though the Earth’s atmosphere contains a water vapor concentration from 0% (in arid regions) to 4% (in rain forest and ocean regions) and – as of February 2018 – only 408.05 parts per million of CO2, these small amounts suffice to raise the average surface temperature by about 40 °C, with the dominant contribution being due to water vapor, which together with clouds makes up between 66% and 85% of Earth’s greenhouse effect, with CO2 contributing between 9% and 26% of the effect.

Rocky planets must orbit within the habitable zone for life to form. Although the habitable zone of such hot stars as Sirius or Vega is wide, hot stars also emit much more ultraviolet radiation that ionizes any planetary atmosphere. They may become red giants before advanced life evolves on their planets. These considerations rule out the massive and powerful stars of type F6 to O (see stellar classification) as homes to evolved metazoan life.

Small red dwarf stars conversely have small habitable zones wherein planets are in tidal lock, with one very hot side always facing the star and another very cold side; and they are also at increased risk of solar flares (see Aurelia). Life therefore cannot arise in such systems. Rare Earth proponents claim that only stars from F7 to K1 types are hospitable. Such stars are rare: G type stars such as the Sun (between the hotter F and cooler K) comprise only 9% of the hydrogen-burning stars in the Milky Way.

Such aged stars as red giants and white dwarfs are also unlikely to support life. Red giants are common in globular clusters and elliptical galaxies. White dwarfs are mostly dying stars that have already completed their red giant phase. Stars that become red giants expand into or overheat the habitable zones of their youth and middle age (though theoretically planets at a much greater distance may become habitable).

An energy output that varies with the lifetime of the star will likely prevent life (e.g., as Cepheid variables). A sudden decrease, even if brief, may freeze the water of orbiting planets, and a significant increase may evaporate it and cause a greenhouse effect that prevents the oceans from reforming.

All known life requires the complex chemistry of metallic elements. The absorption spectrum of a star reveals the presence of metals within, and studies of stellar spectra reveal that many, perhaps most, stars are poor in metals. Because heavy metals originate in supernova explosions, metallicity increases in the universe over time. Low metallicity characterizes the early universe: globular clusters and other stars that formed when the universe was young, stars in most galaxies other than large spirals, and stars in the outer regions of all galaxies. Metal-rich central stars capable of supporting complex life are therefore believed to be most common in the quiet suburbs of the larger spiral galaxies—where radiation also happens to be weak.

With the right arrangement of planets

Rare Earth proponents argue that a planetary system capable of sustaining complex life must be structured more or less like the Solar System, with small and rocky inner planets and outer gas giants. Without the protection of ‘celestial vacuum cleaner’ planets with strong gravitational pull, a planet would be subject to more catastrophic asteroid collisions.

Observations of exo-planets have shown that arrangements of planets similar to our Solar System are rare. Most planetary systems have super Earths, several times larger than Earth, close to their star, whereas our Solar System’s inner region has only a few small rocky planets and none inside Mercury’s orbit. Only 10% of stars have giant planets similar to Jupiter and Saturn, and those few rarely have stable nearly circular orbits distant from their star. Konstantin Batygin and colleagues argue that these features can be explained if, early in the history of the Solar System, Jupiter and Saturn drifted towards the Sun, sending showers of planetesimals towards the super-Earths which sent them spiralling into the Sun, and ferrying icy building blocks into the terrestrial region of the Solar System which provided the building blocks for the rocky planets. The two giant planets then drifted out again to their present position. However, in the view of Batygin and his colleagues: “The concatenation of chance events required for this delicate choreography suggest that small, Earth-like rocky planets – and perhaps life itself – could be rare throughout the cosmos.”

A continuously stable orbit

Rare Earth argues that a gas giant must not be too close to a body where life is developing. Close placement of gas giant(s) could disrupt the orbit of a potential life-bearing planet, either directly or by drifting into the habitable zone.

Newtonian dynamics can produce chaotic planetary orbits, especially in a system having large planets at high orbital eccentricity.

The need for stable orbits rules out stars with systems of planets that contain large planets with orbits close to the host star (called “hot Jupiters“). It is believed that hot Jupiters have migrated inwards to their current orbits. In the process, they would have catastrophically disrupted the orbits of any planets in the habitable zone. To exacerbate matters, hot Jupiters are much more common orbiting F and G class stars.

A terrestrial planet of the right size

It is argued that life requires terrestrial planets like Earth, and since gas giants lack such a surface, complex life cannot arise there.

A planet that is too small cannot hold much atmosphere, making surface temperature low and variable and oceans impossible. A small planet will also tend to have a rough surface, with large mountains and deep canyons. The core will cool faster, and plate tectonics may be brief or entirely absent. A planet that is too large will retain too dense an atmosphere, like Venus. Although Venus is similar in size and mass to Earth, its surface atmospheric pressure is 92 times that of Earth, and its surface temperature is 735 K (462 °C; 863 °F). Earth had an early atmosphere similar to Venus’s, but may have lost it in the giant impact event.

With plate tectonics

Rare Earth proponents argue that plate tectonics and a strong magnetic field are essential for biodiversity, global temperature regulation, and the carbon cycle. The lack of mountain chains elsewhere in the Solar System is direct evidence that Earth is the only body with plate tectonics, and thus the only nearby body capable of supporting life.

Plate tectonics depend on the right chemical composition and a long-lasting source of heat from radioactive decay. Continents must be made of less dense felsic rocks that “float” on underlying denser mafic rock. Taylor emphasizes that tectonic subduction zones require the lubrication of oceans of water. Plate tectonics also provides a means of biochemical cycling.

Plate tectonics, and as a result continental drift and the creation of separate land masses, would create diversified ecosystems and biodiversity, one of the strongest defences against extinction. An example of species diversification and later competition on Earth’s continents is the Great American Interchange. North and Middle America drifted into South America around 3.5 to 3 Ma. The fauna of South America had evolved separately for about 30 million years, since Antarctica separated. Many species, mainly in South America, were subsequently wiped out by competing North American animals.

Diamonds: bad for life. The planets circling some stars may be too diamond-rich, as much as 50% pure diamond. Their mantle might consist of a hard, brittle diamond that is incapable of flowing. Whereas iron and silicon trap heat inside our planet, resulting in geothermal energy, diamonds transfer heat so readily that the planet’s interior would quickly freeze. Without geothermal energy, there couldn’t be any plate tectonics, magnetic field, or atmosphere. Panero describes these diamond super-earths as “very cold, dark” worlds (Wilkins 2011).

A large moon

The Moon is unusual because the other rocky planets in the Solar System either have no satellites (Mercury and Venus), or only tiny satellites which are probably captured asteroids (Mars).

The Giant-impact theory hypothesizes that the Moon resulted from the impact of a Mars-sized body, dubbed Theia, with the young Earth. This giant impact also gave the Earth its axial tilt (inclination) and velocity of rotation. Rapid rotation reduces the daily variation in temperature and makes photosynthesis viable. The Rare Earth hypothesis further argues that the axial tilt cannot be too large or too small (relative to the orbital plane). A planet with a large tilt will experience extreme seasonal variations in climate. A planet with little or no tilt will lack the stimulus to evolution that climate variation provides. In this view, the Earth’s tilt is “just right”. The gravity of a large satellite also stabilizes the planet’s tilt; without this effect the variation in tilt would be chaotic, probably making complex life forms on land impossible.

If the Earth had no Moon, the ocean tides resulting solely from the Sun’s gravity would be only half that of the lunar tides. A large satellite gives rise to tidal pools, which may be essential for the formation of complex life, though this is far from certain.

A large satellite also increases the likelihood of plate tectonics through the effect of tidal forces on the planet’s crust. The impact that formed the Moon may also have initiated plate tectonics, without which the continental crust would cover the entire planet, leaving no room for oceanic crust. It is possible that the large scale mantle convection needed to drive plate tectonics could not have emerged in the absence of crustal inhomogeneity. A further theory indicates that such a large moon may also contribute to maintaining a planet’s magnetic shield by continually acting upon a metallic planetary core as dynamo, thus protecting the surface of the planet from charged particles and cosmic rays, and helping to ensure the atmosphere is not stripped over time by solar winds.

Atmosphere

A terrestrial planet of the right size is needed to retain an atmosphere, like Earth and Venus. On Earth, once the giant impact of Theia thinned Earth’s atmosphere, other events were needed to make the atmosphere capable of sustaining life. The Late Heavy Bombardment reseeded Earth with water lost after the impact of Theia. The development of an ozone layer formed protection from ultraviolet (UV) sunlight. Nitrogen and carbon dioxide are needed in a correct ratio for life to form. Lightning is needed for nitrogen fixation. The carbon dioxide gas needed for life comes from sources such as volcanoes and geysers. Carbon dioxide is only needed at low levels (currently around 400 ppm); at high levels it is poisonous. Precipitation is needed to have a stable water cycle. A proper atmosphere must reduce diurnal temperature variation.

One or more evolutionary triggers for complex life

Regardless of whether planets with physical attributes similar to Earth’s are rare or not, some argue that life usually remains simple bacteria. Biochemist Nick Lane argues that simple cells (prokaryotes) emerged soon after Earth’s formation, but since almost half the planet’s lifetime had passed before they evolved into complex cells (eukaryotes), all of which share a common ancestor, this event can only have happened once. In some views, prokaryotes lack the cellular architecture to evolve into eukaryotes because a bacterium expanded to eukaryotic proportions would have tens of thousands of times less energy available; two billion years ago, one simple cell incorporated itself into another, multiplied, and evolved into mitochondria, which supplied the vast increase in available energy that enabled the evolution of complex life. If this incorporation occurred only once in four billion years, or is otherwise unlikely, then life on most planets remains simple. An alternative view is that mitochondrial evolution was environmentally triggered, and that mitochondria-containing organisms appeared soon after the first traces of atmospheric oxygen. Oxygen was needed to power aerobic respiration in both plants and animals.

The evolution and persistence of sexual reproduction is another mystery in biology. The purpose of sexual reproduction is unclear, as in many organisms it has a 50% cost (fitness disadvantage) in relation to asexual reproduction. Mating types (types of gametes, according to their compatibility) may have arisen as a result of anisogamy (gamete dimorphism), or the male and female genders may have evolved before anisogamy. It is also unknown why most sexual organisms use a binary mating system, and why some organisms have gamete dimorphism. Charles Darwin was the first to suggest that sexual selection drives speciation; without it, complex life would probably not have evolved.

The right time in evolution

While life on Earth is thought to have appeared relatively early in the planet’s history, the evolution from multicellular to intelligent organisms took around 800 million years. Civilizations on Earth have existed for about 12,000 years, and radio communication reaching space has existed for less than 100 years. Relative to the age of the Solar System (~4.57 Ga) this is a short time, in which extreme climatic variations, super volcanoes, and large meteorite impacts were absent. Such events would severely harm intelligent life, as well as life in general. For example, the Permian-Triassic mass extinction, caused by widespread and continuous volcanic eruptions in an area the size of Western Europe, wiped out 95% of known species around 251.2 Ma ago. About 65 million years ago, the Chicxulub impact at the Cretaceous–Paleogene boundary (~65.5 Ma) on the Yucatán peninsula in Mexico led to a mass extinction of the most advanced species of that time.

If there were intelligent extraterrestrial civilizations able to make contact with Earth, they would have to exist within the same 12,000-year window of the 800-million-year evolution of complex life.
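To put a rough number on that coincidence, here is a trivial back-of-the-envelope calculation using the 12,000-year and 800-million-year figures quoted above:

```python
# How small a slice of evolutionary time our "contactable" civilization occupies,
# using the figures quoted above (12,000 years of civilization out of roughly
# 800 million years of multicellular evolution).

civilization_years = 12_000
complex_life_years = 800_000_000

fraction = civilization_years / complex_life_years
print(f"{fraction:.2e}")      # ~1.5e-05, i.e. about 1 part in 67,000
print(round(1 / fraction))    # ~66667
```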

Chain of improbable coincidences (Gribbin 2018)

Many things had to go right for us to exist. Serendipity in the timing and location of our home star and planet as well as lucky conditions on earth and fortuitous developments in the evolution of life, resulted in human beings.

Timing. If the sun and earth had been born any earlier in galactic history, our planet would likely have had too few metals to form life. These elements are created during stellar deaths, and it took billions of years for enough stars to form and die to enrich the materials that built our solar system.

Location. The sun lies in a Goldilocks zone within the Milky Way – not too close to the galactic center, where stars are more crowded and dangerous events such as supernovae and gamma-ray bursts are common, and not too far, where stars are too sparse for enough metals to build up to form rocky planets.

Technological Civilization. Once multicellular life arose, the development of an intelligent species was far from assured, and our species may have come close to extinction several times. Evolution doesn’t have a goal of creating intelligence; if you asked an elephant what the goal of evolution was, she would probably tell you it was to evolve her extraordinary trunk, with its thousands of muscles and consequent exquisite flexibility.  And without fossil fuels, we would have a civilization like that of the 14th century.  To become who we are today required language, an opposable thumb, the invention of fire, and much more, all very unlikely to have happened, yet here we are.

Snowball Earth (Ward & Brownlee)

It is possible that the extreme conditions of Snowball Earth were required to force multicellular life to evolve 650 million years ago, when the Earth’s surface became entirely or nearly frozen at least once.

Complex life evolved just once. All complex life is descended from a single common ancestor. Why? Nick Lane says that natural selection normally favors fast replication, keeping simple cells simple. Then a freak event occurred: an archaeon engulfed a bacterium and the 2 cells formed a symbiotic relationship. That transformed the dynamics of evolution, leading to a period of rapid change that produced innovations such as sex. The incorporated bacterium eventually evolved into mitochondria, the energy generators of complex cells.  So there was nothing inevitable about the rise of the sophisticated organisms from which we evolved. “The unavoidable conclusion is that the universe should be full of bacteria, but more complex life will be rare” (NS 2010).

If an alien civilization does arise, it will wipe itself out (Williams 2016)

‘Stargazing Live’ presenter Brian Cox believes the search for celestial life will ultimately prove futile. Cox believes that any alien civilization is destined to wipe itself out shortly after it evolves.

“One solution to the Fermi paradox is that it is not possible to run a world that has the power to destroy itself and that needs global collaborative solutions to prevent that,” Cox said.

The physicist explained that advances in science and technology would rapidly outstrip the development of institutions capable of keeping them under control, leading to the civilization’s self-destruction: “It may be that the growth of science and engineering inevitably outstrips the development of political expertise, leading to disaster. We could be approaching that position.”

References

Gribbin, J. 2018. Why we are probably the only intelligent life in the galaxy. Scientific American.

NS. 2010. An unlikely story. New Scientist.

Scharf, C. 2012. The benevolence of black holes. Scientific American.

Wilkins, A. 2011. The galaxy could be full of diamond planets. Gizmodo.

Williams, O. 2016. Brian Cox Explains Why He Thinks We’ll Never Find Aliens. His answer doesn’t bode well for the future of humanity. Huffington Post.

Bonus article

This website explains why renewables can’t replace fossil fuels or keep trucks running.  At a science writers’ conference many people asked me, well, what can we do then, and I threw up my hands and said, “The space aliens will have to save us.”  Let’s hope it’s not these space aliens, though.

Paul Simms. 2009. Attention, people of earth. The New Yorker.

We are on our way to your planet. We will be there shortly. But in this, our first contact with you, our “headline” is: We do not want your gravel.

We are coming to Earth, first of all, just to see if we can actually do it. Second, we hope to learn about you and your culture(s). Third—if we end up having some free time—we wouldn’t mind taking a firsthand look at your almost ridiculously bountiful stores of gravel. But all we want to do is look.

You’re probably wondering if we mean you harm. Good question! So you’re going to like the answer, which is: We mean you no harm. Truth be told, there is a faction of us who want to completely annihilate you. But they’re not in power right now. And a significant majority of us find their views abhorrent and almost even barbaric.

But, thanks to the fact that our government operates on a system very similar to your Earth democracy, we have to tolerate the views of this “loyal opposition,” even while we hope that they never regain power, which they probably won’t (if the current poll tracking numbers hold up).

By the way, if we do take any of your gravel, it’s going to be such a small percentage of your massive gravel supply that you probably won’t even notice it’s gone.

You may be wondering how we know your language. We are aware that there’s a theory on your planet that we (or other alien species from the far reaches of the galaxy) have been able to learn your language from your television transmissions. This is not the case, because most of us don’t really watch TV. Most of our knowledge about your Earth TV comes from reading Zeitgeisty think pieces by our resident intellectuals, who watch it not for fun but for ideas for their print articles about how Earth TV holds a mirror up to Earth society, and so on. We mean, we’ll watch Earth TV sometimes—if it happens to be on already—but, generally, we prefer to read a good book or revive the lost art of conversation.

Sadly, Earth TV is like a vast wasteland, as the Earthling Newton Minow once said. But, for those of you who can understand things only in TV terms, just think of us as being very similar to Mork from Ork, in that he was a friendly, non-gravel-wanting alien who visited Earth just to find out what was there, and not to harvest gravel.

Speaking of a vast wasteland, you might want to start picking out and clearing off a place for our spacecraft to land. Our spacecraft, as you will see shortly, is huge. Do not be alarmed; this does not mean that each one of us is that much bigger than each one of you. It’s just that there were so many of us who wanted to come that we had to build a really huge spacecraft.

So, again, no cause for alarm.

(Full disclosure: each of us actually is much bigger than each of you, and there’s nothing we can do about it. So please don’t use any of your Earth-style discrimination against us. This is just how we are, and it’s not our fault.)

Anyway, re our spacecraft: it’s kind of gigantic. The deceleration thrusters alone are sort of, like . . . well, imagine four of your Vesuvius volcanoes (but bigger), turned upside down.

We don’t want to hurt anyone, so, if you could just clear off one continent, we think we can keep unintended fatalities to a minimum. Australia would probably work. (But don’t say Antarctica. Because we’d just melt it, and then you’d all end up underwater. Which would make it virtually impossible for us to learn about your hopes and your dreams, and your culture, and to harvest relatively small, sample-size amounts of your gravel, just for scientific study.)

A little bit about us: our males have two penises, while our females have only one. So, gender-wise, if you use simple math, we’re pretty much identical to you.

And, as far as protocol goes, we’re a pretty informal species. If you want to put together a welcoming ceremony with all your kings and queens and Presidents and Prime Ministers and leading gravel-owners, that’s fine. But please don’t feel like you have to.

Technically, it would be possible for us to share our space-travel technology with you, so that you could build a spacecraft and travel to our planet also. But, for right now, it just feels like it would be better if we came to your place.

Speaking of gravel, one thing we can’t tell from our monitoring of Earth is how your gravel tastes. It’s just something we’re curious about, for no real reason. Is it salty? It looks salty.

Maybe you could form a commission of scientists/gravel-tasters to look into this and let us know. Just have them collect all the gravel you have and put it in one big pile. (There are some pretty big empty parts of Utah, New Mexico, and Russia that might be good spots for such a large gravel pile, but that’s just an F.Y.I.)

Then, if you could have your top scientists/gravel-tasters go through this gravel pile, tasting each and every piece, that would be great. Also, if it’s not too much of a hassle, have them put all the saltier-tasting pieces in a separate pile.

Anyway, that about wraps up this transmission! Looking forward to seeing you very soon. (Sorry we couldn’t have given you more notice, but we didn’t want you Earth people going crazy and looting stuff and having sex in the streets out of panic about losing all your delicious gravel, which is something that is definitely not going to happen, because, when it comes down to it, what is gravel really but just a bunch of baby rocks?)

Our E.T.A. on Earth is sometime in the next four hundred and fifty to five hundred years, which we know is a blink of an eye in your Earth time, so start getting ready! Let’s have fun with this.

Yours,

A Species from a Galaxy You Haven’t Even Noticed Yet

P.S.—We saw that you sent some people to your moon recently. Good job! But, just to let you know, don’t waste your time with the moon. There’s no gravel there. We already checked.


Threats to America’s oil pipeline grid


Preface. At some point in energy decline there will be Americans who tap into pipelines to get scarce oil for themselves and to sell on black markets. Just look at the massive amount of oil being stolen in Nigeria. And the rate of theft is increasing: 9,000 barrels a day were stolen there in 2017 versus 6,000 in 2016, thefts that also often result in messy spills.

The United States has 150,000 miles of crude oil pipelines, while Nigeria has just 2,800 miles and can’t protect them from theft (Wikipedia 2015).

One of the best and most effective ways governments can help their citizens cope with energy decline is to ration oil, giving agriculture whatever it needs, and after that other essential services and citizens. If oil theft can’t be prevented, the descent of civilization will be even faster.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Hampton, L., et al. 2016. Bolt cutters expose vulnerability of North America’s oil pipeline grid. Reuters.

All it took was a pair of bolt cutters and the elbow grease of a few climate activists to carry out an audacious act of sabotage on North America’s massive oil and gas pipeline system.

For an industry increasingly reliant on gadgets such as digital sensors, infrared cameras and drones to monitor security and check for leaks, the sabotage illustrated how vulnerable pipelines are to low-tech attacks.

On Tuesday, climate activists broke through fences and cut locks and chains simultaneously in several states and simply turned the pipelines off.

All they had to do was twist shut giant valves on five cross-border pipelines that together can send 2.8 million barrels a day of crude to the United States from Canada – equal to about 15 percent of daily U.S. consumption.

The activists did no damage to the pipelines, which operating companies shut down as a precaution for checks before restarting.

The United States is the world’s largest energy market, and the infrastructure to drill, refine, store and deliver that energy to consumers is connected by millions of miles of pipeline that are impossible to protect entirely from attack.

“You’re not manning these things on a permanent basis. It’s not viable,” said Stewart Dewar, a project manager at Senstar, an Ottawa-based company that authored a 2012 white paper on pipeline security. “It’s too expensive.”

References

Wikipedia. 2015. List of countries by total length of pipelines.


Vanishing open spaces: population growth and sprawl in America


Before the fossil fuel age began, about 80 to 90% of people farmed to make a living. Since the end of the oil age will send us back toward that past, farmland and farmers will once again make up the greatest share of people. So it’s alarming that on the cusp of peak global oil production, we’re losing farmland to development at such a fast clip. We need all the land we can get – in the Great Depression people were hungry, back when there was just a quarter of the population we have now and 25% of people were still farmers, unlike the 2% today.

Cities were originally built where the best farmland and water existed. As cities and towns grow, they sprawl outwards over this prime farmland – in fact, that’s where 85% of developmental sprawl happens. The United Nations calls this soil sealing – the permanent covering of soil with impermeable materials such as asphalt or structures. Sealing means a total loss of the soil’s ability to produce food and fiber and to absorb, hold, and purify water; it often increases flooding and urban heat (from the loss of vegetation) and reduces carbon sequestration and biodiversity (FAO 2015).

Between 1945 and 1975, enough farms disappeared beneath concrete to pave Nebraska (Montgomery 2007), about 49.5 million acres (77,350 square miles).

Between 1982 and 2010 the U.S. lost 41.4 million acres of rural land to development. That’s equal to 65,000 square miles, an area as large as Maine, New Hampshire, Vermont, Massachusetts, Connecticut, Rhode Island, Delaware, New York, and Pennsylvania combined. Over the same period, cropland itself fell by 14%, from 420 million to 361 million acres.

Over a third of all land that has ever been developed was developed in the last 25 years. If we keep paving over cropland at this rate, it will all be gone in about 200 years.

It’s hard to imagine this ending without energy decline.  In the U.S., population grew by 18.4 million people from 2010 to 2018 (10,700,000 births, 7,700,000 legal migrations).  So not counting illegal immigration, every year another 2.3 million people arrive who need to be fed, housed, provided with clean water, sewage systems, roads, stores, and much more.  

That is equivalent to building a new Houston every year from scratch.  Or a new Miami, St. Louis, Pittsburgh, Cincinnati, Cleveland, AND Atlanta. Every year. At this rate of population growth, there’ll be 184 million more people in the U.S. by 2100, nearly twice as many people as in the largest 311 cities in the U.S. (USC 2018, Wikipedia 2017). 

Not only is soil being paved over, it’s being degraded. The Midwest has lost over half of its topsoil in just 100 years because of intensive industrial agricultural soil mining. Globally, over 75% of Earth’s land is significantly degraded, affecting 3.2 billion people. At current rates, 95% of Earth’s land could be degraded by 2050, forcing millions to migrate as food production fails (Leahy 2018).

As it is, there isn’t enough land to grow more than a fraction of the biofuels needed to replace diesel fuel, and with land vanishing to development and topsoil loss, even that is an empty hope for the future.

Kolankiewicz’s “Vanishing open spaces” is also sprawling at 170 pages, so I didn’t begin to cover everything in it.  If you’re interested in learning more, or want to see where your town or city ranks on the sprawl charts, you might want to delve into this more.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Kolankiewicz, L., et al. 2014. Vanishing Open Spaces: Population Growth and Sprawl in America. NumbersUSA.

The massive destruction of America’s open spaces continued during the first decade of this new century. In just the eight years from 2002 to 2010, over 8.3 million acres (approximately 13,000 square miles) of farmland and natural habitat succumbed to the bulldozer’s blade. That is an area larger than the entire state of Maryland – cleared, scraped, filled, paved and built over – in less than a decade.

This update of our previous studies about sprawl at the end of the 20th century relies on the latest painstaking surveys by government agencies. They have tracked how, since the beginning of the 21st century, America’s population is growing by tens of millions more residents and sprawling over vast new expanses of woodlands, wetlands, fields and pastures. These are the open spaces on which the country’s human residents depend for food, fiber and the nourishment of their spirits, and to which the non-human inhabitants often tenuously cling for life itself.

This study finds that around 70% of those losses around Urbanized Areas over the last decade were related to the nation’s continuing trend of high population growth. Yet, there is little sign that the nation is ready to substantially change this population trend – or even to much discuss it – although the open-space destruction it is driving is not sustainable over the long term.

Sprawl Data and Analysis for Each City (2000-2010)

Residents of all 497 of the Urbanized Areas designated by the U.S. Bureau of the Census can find in this report answers to the following questions about their home city over the past decade:

TOTAL SPRAWL: How many square miles of farmland and natural habitat were destroyed as your city expanded outward? Where does that rank among all other cities in open-space loss?

Roughly 85 percent of all destruction of farmland and natural habitat nationwide occurred around the edges of the 497 mostly-sprawling Urbanized Areas. And much of the rest of the losses are due to urban residents’ demands for rural second homes, rural recreation development and rural transportation.

Many of our findings about the relationship between population growth and sprawl may strike the reader as unsurprising and as simple common sense. We agree that, for example, it just makes sense that the cities with the largest population growth would tend to have the largest sprawl. But the need for this study is found in the fact that few in the news media or in organizations that express concern about the loss of farmland and natural habitat identify population growth as a major factor, or even a factor to be modified at all.

The 3.8% increase in per capita land consumption during the 2000-2010 decade compares with the 11.5% increase in population (See Figure ES-2). That slowing of consumption growth would appear to be the result of a combination of factors, including smart growth efforts, higher gasoline prices, fiscal and budgetary constraints (limiting new road-building, for example), various changes in the per capita factors listed earlier, and the recession-inducing mortgage meltdown.

Cumulative Results of Sprawl Are Piling Up For Farm and Forest

The country’s cropland provides a frightening example of where these trends will lead if dramatic changes are not made. In 1980, we had an average of 1.9 acres of cropland for each American. But 90 million more people have been added to the country since then. Not only are there far more people to share the cropland, but there is far less cropland to share because of the way cities have sprawled to accommodate the extra population. By 2010, the average amount of cropland per American had fallen from 1.9 acres to 1.2 acres. If this trend were to continue, there would be only 0.7 acre of cropland per American in 2050 and only 0.3 acre in 2100. That seems unfathomable, but it is the trajectory the country is traveling.
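The per-capita arithmetic behind these projections is easy to reproduce. Here is a minimal sketch, assuming the NRI cropland figures quoted later in this post (420 million acres in 1982, 361 million in 2010) and the roughly 2.5 million net new residents per year the study cites elsewhere; the study’s own projection may use slightly different inputs.

```python
# Rough reproduction of the cropland-per-capita projection.
# Illustrative assumptions, not the study's exact inputs.
CROPLAND_2010 = 361e6                  # acres of cropland in 2010 (NRI figure cited below)
LOSS_PER_YEAR = (420e6 - 361e6) / 28   # ~2.1 million acres lost per year, 1982-2010
POP_2010 = 309e6                       # 2010 Census population
POP_GROWTH_PER_YEAR = 2.5e6            # assumed net growth, per the study's ~2.5-3 million/yr

def cropland_per_person(year):
    """Acres of cropland per American, extrapolating both trends from 2010."""
    years = year - 2010
    cropland = CROPLAND_2010 - LOSS_PER_YEAR * years
    population = POP_2010 + POP_GROWTH_PER_YEAR * years
    return cropland / population

for year in (2010, 2050, 2100):
    print(year, round(cropland_per_person(year), 2))
# Prints roughly 1.17, 0.68, and 0.32 -- close to the report's 1.2, 0.7, and 0.3 acres.
```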

From 1982 to 2010, 41.4 million acres (approximately 65,000 square miles) – an area equivalent to the state of Florida – of previously undeveloped non-federal rural land was paved over to accommodate our growing cities.

Of these 41 million lost acres of open space, over 17 million acres were forestland, 11 million acres cropland, and 12 million acres pasture and rangeland.

As the Natural Resources Conservation Service of the U.S. Department of Agriculture put it in their 2007 summary report that reviewed the 1982-2007 quarter-century: “The net change of rural land into developed land has averaged 1.6 million acres per year over the last 25 years, resulting in reduced agricultural land, rangeland, and forest land. Loss of prime farmland, which may consist of agriculture land or forest land, is of particular concern due to its potential effect on crop production and wildlife.”

The NRCS also observed that “more than one-third of all land that has ever been developed in the lower 48 states was developed during the last quarter-century.”

The total area of developed land grew from 71.9 million acres (112,356 square miles) in 1982 to 113.3 million acres (177,096 square miles) in 2010. This latter area is about equal in size to the entire states of Maine, New Hampshire, Vermont, Massachusetts, Connecticut, Rhode Island, Delaware, New York, and Pennsylvania. All of this land was developed from either agricultural land or natural habitat.

Since the NRCS began its National Resources Inventory (NRI) in 1982, its data reveal that every new person added to the United States population entails on average the elimination of about half an acre of farmland or natural habitat.

According to the World Wildlife Fund, habitat loss poses the greatest threat to endangered species. The United States is home to over 1,000 endangered or threatened animal and plant species that are seriously harmed by ever-encroaching development.

The single greatest type of land developed in each period was forest land. Forest land is, of course, wildlife habitat. More broadly, it is a type of “natural capital” that provides a range of ecological services and socioeconomic benefits, among them climate regulation, watershed protection, soil conservation, flood prevention, streamflow moderation, wood products, aesthetic qualities, and it serves as a magnet for outdoor recreation such as hunting, fishing, hiking, and wildlife observation and photography.   [AND IT KEEPS TOPSOIL FROM WASHING AWAY]

National Opinion Survey for This Study Finds Americans Concerned

With the subject of sprawl largely absent from the news for several years now, we commissioned Pulse Opinion Research to poll likely American voters on their attitudes. The full survey and results can be viewed in Appendix K. Key findings:

• 92% say it is important (71% “very important”) to protect farmland from development to ensure the ability to feed the U.S. population in the future.
• By a 3-1 margin, Americans think it is unethical to pave over good cropland, rather than legitimate in order to provide housing for a growing population.

It also discusses ways that local officials trying to protect surrounding open spaces can slow local population growth through such means as requiring developers to pay the full costs of the population growth they attract (developer’s or impact fees).

Local officials supportive of growth control can hope only to slow population growth in their jurisdictions if national population continues to increase by some 2.5 to 3 million additional residents each year. These 25-30 million additional Americans each decade will nearly all settle in some community, inevitably leading to additional sprawl as far and as long as the eye can see.

Nearly all long-term population growth in the United States is in the hands of federal policy makers, because nearly all long-term population growth is related to federal immigration policies that have increased the annual settlement of immigrants from one-quarter million in the 1950s and 1960s to more than a full million per year since 1990. Until the numerical level of national immigration is addressed, even the best local plans and political commitment will be unable to stop sprawl. Any serious efforts to halt the loss of farmlands and wildlife habitats must include reducing the volume of U.S. population growth. And a presidential commission on sustainability concluded that the U.S. population cannot be stabilized without deep reductions in annual legal immigration and more effective control of illegal immigration.

That would appear to be a popular option among most Americans, according to this study’s national survey (Appendix K):

• 68% of likely voters said the government should “reduce immigration to slow down population growth.”
• In light of concerns about sprawl, 40% of respondents said they would like annual immigration to be cut from one million a year to either 100,000 or zero per year. (63% said to cut immigration at least in half, to 500,000 a year.)

While there is more than one way to define sprawl, our studies consider it to be the conversion of open spaces like farmland and natural habitat into developed land holding man-made structures and surfaces on the expanding edges of urban areas or elsewhere.

1.1 Still a Problem After All These Years (and Americans Still Concerned)

When the first edition of this study was published in 2001, sprawl was a hot topic with many environmental organizations, and the general public worried about the impacts of ever-expanding cities and the nation’s steadily disappearing rural land. Thirteen years later, sprawl is still devouring valuable farm and forestland, but national environmental groups, by and large, have shifted their focus to global issues and away from the loss of habitat and open space due to the unsustainable growth of cities.

Table 1 lists the top 10 Urbanized Areas that eliminated the most rural land over the past decade (2000-2010). Clearing, scraping, paving, and building over thousands of square miles of America’s woodlands, wetlands, croplands, prairies, pastures, range, deserts, and fields, they truly earned the dubious distinction as the nation’s “Top Sprawlers.” It is noteworthy, and surely not a coincidence, that four of the Top Ten Sprawlers are in Texas, the state that grew far more (adding the greatest number of people) than any other state in the country from 2000 to 2010 – 4.2 million compared to California’s 3.2 million and Florida’s 2.8 million.

By 2002, the more than 1.5 million legal and illegal immigrants who settled in the country each year along with 750,000 yearly births to immigrants caused 87% of the annual increase in the U.S. population.

Contrary to the common perception, about half the country’s immigrants lived in the nation’s suburbs. The pull of the suburbs is even greater in the second generation. Of the children of immigrants who settled down and purchased a home, only 24 percent did so in the nation’s central cities.  The suburbanization of immigrants and their children was a welcome sign of integration. But it also meant they contribute to sprawl just like other Americans.

“In short,” concluded the 2003 study, “Smart Growth efforts to slow or stop the increase in per capita land use are being negated by population growth. Immigration-driven population growth, in effect, is ‘out-smarting’ Smart Growth initiatives by forcing continued rural land destruction.”

American sprawl is more than a domestic issue. It also has global implications. The relentless and accelerating disappearance of natural habitats dominated by communities of wild plants and animals, replaced by biologically impoverished artificial habitats dominated by human structures and communities, contributes cumulatively to what may become a “state shift” or “tipping point” in Earth’s biosphere.

This would be an uncontrollable, rapid transition to a less desirable condition in which the biosphere’s ability to sustain us and other species would be severely compromised. A 2012 paper in the prestigious British scientific journal Nature reviews the evidence that: “…such planetary scale critical transitions have occurred previously in the biosphere, albeit rarely, and that humans are now forcing another such transition, with the potential to transform Earth rapidly and irreversibly into a state unknown in human experience.”

1.4 National Security Implications of Farmland Loss

Development is not the only factor responsible for the degradation and disappearance of high-quality agricultural land. Arable land is also vulnerable to other damaging natural and anthropogenic forces such as soil erosion from wind and water, and salinization and waterlogging from irrigation, which can compromise the fertility, productivity, and depth of soils, and possibly even lead to their premature withdrawal from agriculture. Many of these adverse effects are due to over-exploitation by intensive agricultural practices needed to constantly raise agricultural productivity (yield per acre).

Thus, the potent combination of unrelenting development and land degradation from soil erosion and other factors is reducing America’s productive agricultural land base even as the demands on that same land base from a growing population are increasing.

The NRI estimates that the amount of cropland in the United States declined from 420 million acres in 1982 to 361 million acres in 2010, a decrease of nearly 60 million acres (14 percent) in just 28 years (Figure 5). Some of this cropland (cumulatively, 27 million acres in 2010) was withheld from active farming with federal government support and subsidies and placed into the Conservation Reserve Program (CRP), but these tend to be marginal or fragile sites on which cultivation is not deemed to be sustainable in any case. Even with the federal ethanol mandate and strong financial incentives over much of the last decade to grow corn in order to produce ethanol as fuel for vehicles, the amount of cropland dropped by seven million acres in the eight years between 2002 and 2010, increasing slightly between 2007 and 2010. The land uses into which cropland was converted are depicted in Figure 6 (Cropland Converted to Other Land Uses, 2007 to 2010).

If the same rate of cropland conversion and loss that prevailed from 1982 to 2010 were to continue to the year 2100, the United States will have lost an additional 193 million acres of its remaining 361 million acres of cropland, for a total cumulative loss of 253 million acres. Only 168 million acres would then remain – about 40 percent of the original allotment – and none of this acreage would be in pristine condition after two centuries or so of intensive exploitation. Its soils and nutrients, while perhaps not exhausted, would require even greater inputs of costly fertilizers. Two of the most crucial fertilizers – ammonium nitrate, produced from natural gas, and phosphorus, produced from phosphate mines – may be far more expensive, perhaps prohibitively so, in 2100 than at present, due to the inexorable depletion of the highest-quality reserves of these non-renewable resources.

However, this dire scenario is unlikely to come to pass, even if the United States continues to reject population stabilization as an acceptable course of action or to enact more aggressive farmland protection measures. This is because rising demand and prices for foodstuffs would increase the value of land maintained as cropland vis-à-vis developed land, and because conversion from other types of lands to cropland, including pastureland, rangeland, forested land and other natural areas, would certainly occur (Figure 8). This actually did happen from 2007 to 2010, during which the area in cropland increased by 1.9 million acres; most of this was CRP land called back into production because high agricultural commodity prices encouraged farmers to plant it. Again, in an ideal world, erosive or sensitive CRP lands should not be cultivated and would best be conserved as wildlife habitat; that is why the voluntary Conservation Reserve Program was established in the first place in the 1980s.

Furthermore, the decrease from 1982 to 2010 in the acreage of highest quality soils classified as Prime Farmland, which constitutes only 23 percent (or 316 million acres) of the non-Federal rural land base, was “only” 13 million acres, compared to the nearly 60-million-acre decrease in cropland. NRCS states that “most of this loss was due to development.”

As shown in Figure 9, not all designated Prime Farmland is cultivated as cropland; indeed, only 64 percent of it is cropland; the rest is in other non-developed land uses or cover types.

Figure 9. Prime Farmland by Type in 2010

Ominous, divergent trends – an increasing population, a decreasing arable land base, diversions of water supplies needed for irrigated agriculture to urban populations, and a modern, mechanized agriculture that is heavily dependent on limited fossil fuels at all stages – have led some scientists to conclude that someday within this century the United States may cease to be a net food exporter. Food grown in this country would be needed for domestic consumption. By mid-century, the ratio of arable land per capita may have dropped to the point that, “the diet of the average American will, of necessity, include more grains, legumes, tubers, fruits and vegetables, and significantly less animal products.” While this may in fact constitute a healthier diet, it would also represent a significant loss of choice for a country that has always prided itself on its abundant agriculture, plentiful consumer options, and comparative freedom from want.

Preserving farmland and maintaining its fertility is more than a question of producing an adequate supply of food and engendering a healthy diet for Americans; it is a matter of national security. According to Brig. Gen. (Ret.) W.E. King, Ph.D., P.E., Dean of Academics, U.S. Army Command and General Staff College, Fort Leavenworth, Kansas, without a sustainable environment and resources that meet basic human needs, instability and insecurity will be the order of the day.

As Oxford ecology professor Norman Myers noted in a now-classic 1986 article: “…national security is not just about fighting forces and weaponry. It relates to watersheds, croplands, forests, genetic resources, climate and other factors that rarely figure in the minds of military experts and political leaders…”

One of the lasting effects on the world food system of the global crisis in food prices from 2007 to 2008 has been the accelerating acquisition of farmland in poorer countries by wealthier countries which seek to ensure their food supplies.

By 2009, foreign governments and investors had already purchased more than 50 million acres (78,000 square miles) of farmland – an area the size of Nebraska – in Africa and Latin America.

Finally, U.S. agriculture and related food industries contribute nearly $1 trillion to our national economy annually. They comprise more than 13 percent of the GDP and employ 17 percent of the labor force.

Between 2000 and 2010, the country’s urban population grew by 12.1%, in comparison with total U.S. population growth of 9.7% during the same period. In other words, America’s urban areas grew at a faster pace than the country as a whole, continuing a demographic trend – a relative shift or migration of the population from rural to urban areas – that has been underway for more than a century. This trend is evident around the entire world.

NRI’s category of developed land differs from that used by other federal data collection entities. While other studies and inventories emphasize characteristics of human populations (e.g., Census of Population) and housing units (e.g., American Housing Survey), for the NRI, the intent is to identify which lands have been permanently eliminated from the rural land base. The NRI Developed Land category includes: (a) large tracts of urban and built-up land; (b) small tracts of built-up land less than 10 acres in size; and (c) land outside of these built up areas that is in a rural transportation corridor (roads, interstates, railroads, and associated rights-of-way).

An urban area’s population growth today is much more likely to be the result of enticing residents from elsewhere. Local and state governments can and do create many incentives that encourage people to move into a city. These include aggressive campaigns to persuade industries to move their jobs from another location, public subsidies for the infrastructure that supports businesses, expansion of water service and sewage lines into new areas, new housing developments and new residents, and general public relations that increase the attractiveness of a city to outsiders. Even without trying, a city can attract new residents just by maintaining amenities and a high quality of life, especially if the nation’s population is growing significantly, as continues to be the case today.

Even the best Smart Growth, New Urbanism, and LEED strategies were able to engineer only so much population density. As long as population is still growing, the land area taken up by our cities will almost certainly continue to grow.

Dr. John Holdren, Assistant to the President for Science and Technology and Director of the White House Office of Science and Technology Policy since 2009, developed and applied this methodology in a scientific paper evaluating how much of the increase in energy consumption in the United States in recent decades was due to population growth, and how much to increasing per capita energy consumption.

Given this apportionment or breakdown, opponents of sprawl in the nation’s worst sprawling Urbanized Area, for example, can know that nearly their entire problem has been the inability to stabilize the Atlanta area’s population. In contrast, a relatively small part of the problem (15%) has been the inability to stabilize the per capita land use of the area.


Figure 17 illustrates the results of applying the Holdren method to the entire population and land area of the 96 largest Urbanized Areas (corresponding to the 100 largest UAs in the 1990 UA delineation and our earlier 2001 and 2003 studies). Of the 57,055 square miles of total sprawl, 30.5% of the lost rural land was related to the growth in per capita land consumption by the residents of those cities. In contrast, 69.5% of the lost rural land, more than two-thirds, was related to the fact that an additional 17 million people, net, moved into or were born in those cities. It is worth noting that from 1970 to 1990, for these same UAs, Population Growth accounted for about half of Overall Sprawl, and Per Capita Sprawl for the other half. For the most recent 2000-2010 period, in contrast, Population Growth has obviously become the dominant factor, accounting for about seven out of every ten acres converted from rural land to urban land.

Figure 18 shows us that of the aggregate 8,844,435 acres of rural land lost to sprawl between 2000 and 2010, 73 percent, or roughly 6,450,000 acres, were lost due to population increase. Only 27%, or roughly 2,000,000 acres, were lost due to the increase in per capita land consumption between 2000 and 2010.
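The study does not print the formula it borrows from Holdren, but the standard version apportions growth logarithmically: the share of total growth attributed to population is ln(P2/P1) / ln(A2/A1), where A is total developed land and a = A/P is per-capita land consumption. Below is a minimal sketch of that apportionment applied to a hypothetical urbanized area; treat it as an illustration of the method rather than the study’s exact procedure.

```python
import math

def holdren_shares(pop_start, pop_end, land_start, land_end):
    """Split growth in total developed land (A = P * a) between population growth
    and growth in per-capita land consumption, using logarithmic apportionment:
    share_pop = ln(P2/P1) / ln(A2/A1)."""
    total_growth = math.log(land_end / land_start)
    pop_share = math.log(pop_end / pop_start) / total_growth
    per_capita_share = 1.0 - pop_share   # equals ln(a2/a1) / ln(A2/A1)
    return pop_share, per_capita_share

# Hypothetical urbanized area: population 1.0M -> 1.3M, developed land 400 -> 560 sq mi
pop_share, pc_share = holdren_shares(1_000_000, 1_300_000, 400, 560)
print(f"population growth: {pop_share:.0%}, per capita consumption: {pc_share:.0%}")
# -> roughly 78% / 22%
```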

Conclusions

There is a broad correlation between population size and sprawl: generally, the larger a city or state’s population, the larger the land area it will sprawl across.

The positive (upward tilting toward the right) slope of the best-fit line means that as a state’s population increases, the area of built-up, developed land increases as well. This demolishes the whimsical notion entertained by some that there is no connection between population size or growth rates and environmental impact.

Although the pace of sprawl may have peaked in the late 1990s and early 2000s, as recently as the late 2000s and in all likelihood today as well, sprawl continues at a rate that exceeds that of even the 1980s and early 1990s.

At this pace, sprawl would continue to convert an additional 15 million acres (23,000 square miles) of agricultural land and wildlife habitat into built-up land every decade. By 2050, another 60 million acres (92,000 square miles) of rural lands will have been paved or covered with subdivisions, office parks, and commercial strips, at great cost to our agricultural potential, wildlife habitat, livability, and general environmental quality.

Smart growth efforts, higher gasoline prices, fiscal and budgetary constraints (limiting new road-building, for example), and the recession-inducing mortgage meltdown may have all played roles in slowing the rate of sprawl late in the first decade of this century. The extent to which any of these and still other unforeseen factors may affect the rate of sprawl in the coming decades is unknown and unpredictable.

In the West, water scarcity is also likely to restrict far-flung, never-ending development.

Population growth, during the decade just passed (2000-2010), accounted for approximately 70-90% of sprawl; declining density or increasing per capita land consumption accounted for about 10-30%.

A central goal of Smart Growth is to preserve open space, farmland, natural beauty and critical environmental areas by preventing declining density. Thus, places where population density increases should be hailed as success stories. Between 2000 and 2010, there were 192 urbanized areas (39% of all UAs) whose density either remained the same or increased – in other words, their per capita land consumption remained constant or decreased. However, many of these cities still experienced significant sprawl, a couple of thousand square miles in total between 2000 and 2010.

No city better exemplifies this phenomenon than Portland, Oregon.  Despite being lauded for its urban growth boundary (UGB), extensive light rail infrastructure, and high-density mixed-use developments, even Portland UA still sprawled outward an additional 50.4 square miles. The addition of 266,760 people during the decade was more than enough to wipe out the increased population density and cause the city area to swell by an additional 11%.

Salem, Oregon, whose urbanized area population grew by 14% from 2000 to 2010, has quickly become the second largest city in Oregon. Raleigh, North Carolina, grew by over 300,000 people, causing the Raleigh Urbanized Area to become more densely populated. But despite Raleigh’s drop in per capita acreage, its 63% increase in population caused it to sprawl out over 198.5 square miles in these 10 years. The drop in per capita land consumption can be explained by the efforts of city planners to tame sprawl by directing development toward certain centers within the Urbanized Area. These were not enough to prevent the construction of new suburban neighborhoods, the development of retail centers, and the creation of roads and highways to connect these sprawl products.

The decline of the steel industry left parts of the city abandoned as “brownfields”, driving residents to build outward into the suburbs.

Following the logic of this study’s findings it isn’t hard to conclude that even the most aggressive and well-intentioned policies promoting smarter growth, better urban planning, and higher residential densities cannot escape the immense population pressures facing many communities around our rapidly growing nation.

It seems as though even the best-intentioned and politically palatable urban planning policies are only able to slow, not halt, Urban Sprawl. Using this approach, a given patch of open space beyond the existing periphery of a typical rapidly expanding city would fall to sprawl in ten years instead of seven, but fall to sprawl it would. Under Smart Growth alone, city boundaries will never stop devouring countryside.

Simply stated, the results of this study indicate that population growth has more than twice the impact on sprawl as do all other factors combined. Neglecting the population factors in the anti-sprawl fight would be to ignore more than two-thirds of the problem.

Local Influence on Sprawl

Local policy makers truly trying to curb sprawl in our cities have a number of policy actions to pursue. While most local officials see population growth as an indicator of the vibrancy and vitality of their respective communities, there is little evidence to suggest that unfettered population growth is any of those things. Well-known sprawl critic and urban planner Eben Fodor challenged this very notion in his 2010 study “Relationship between Growth and Prosperity in 100 Largest U.S. Metropolitan Areas.” Fodor’s study found that rapidly expanding metropolitan areas did not hold up well in terms of standard economic indicators such as unemployment, per capita income, and poverty rates in comparison with slower growing metropolitan areas. Yet, despite this, local officials and city planners continue to offer subsidies and tax breaks to attract new residents, investment and development. Many times these subsidies are borne unfairly by existing residents, who see their property taxes rise and are stuck with the bill to pay for sprawling highways, new schools, water and wastewater treatment, and energy grids farther from the urban core.

Sprawl in the Sunbelt, and especially the Southwest, is of particular concern because of the hot desert climates of many of these cities. Southwestern metropolises like Phoenix are some of the most energy intensive cities and their growth puts added pressure on already scant water resources. In order for cities to properly address sprawl, taxpayer subsidies need to be removed and the true costs of development need to be borne by those developing the land. Also, as Harvard economist Edward Glaeser suggests, the true social costs of activities such as driving should be paid for.

National Influence on Population Growth

Beyond the short term, local officials supportive of growth control can hope only to slow population growth in their jurisdictions if national population continues to increase by some 2.5 to 3 million additional residents each year. These 25-30 million additional Americans each decade will nearly all settle in some community, inevitably leading to additional sprawl as far and as long as the eye can see.

In essence there are only three sources of national population growth: native fertility (in conjunction with slowly increasing life spans), immigration, and immigrant fertility. We know the following about their contribution to long-term growth:

  • Native fertility: At 1.9 births per woman, it remains below the replacement level of 2.1 and has not been a source of long-term population growth in the U.S. since 1971.
  • Immigration: The sole source of long-term population growth in the United States is immigration, due both to new immigrants (arriving at a rate about four times higher than the “replacement level” where immigration equals emigration) and to immigrants’ fertility, which despite declines during the recession has remained well above replacement level.

Thus, long-term population growth in the United States is in the hands of federal policy makers. It is they who have increased the annual settlement of immigrants from one-quarter million in the 1950s and 1960s to over a million since 1990. Until the numerical level of national immigration is addressed, even the best local plans and political commitment will be unable to stop sprawl. Any serious efforts to halt the loss of farmland and wildlife habitat must include reducing the volume of population growth, which requires lowering the level of immigrants entering the country each year.

A far more sustainable immigration level would be the approximately half-million a year recommended in 1995 by the bi-partisan U.S. Commission on Immigration Reform, established by President Clinton and chaired by former Congresswoman Barbara Jordan.

This lower level of immigration at around 500,000 a year would drive far less sprawl than the present levels exceeding a million a year. But unless Americans decide to lower their birth rates to far below replacement level, the 500,000 a year would still drive considerable population growth and sprawl indefinitely.

That is why another federal commission recommended far greater reductions in immigration. The President’s Council on Sustainable Development in 1996 recommended that the United States stabilize its population in order to meet various environmental and quality-of-life goals, and it called for reducing immigration to a level that would allow for a stable population. At current just below-replacement native fertility rates, that would require a return down to at least the quarter-million level of immigration in the 1950s and 1960s.

The Population and Consumption Task Force of President Clinton’s Council on Sustainable Development concluded in 1996: “This is a sensitive issue, but reducing immigration levels is a necessary part of population stabilization and the drive toward sustainability.” The 2014 Pulse Opinion Research poll did not give voters a choice of 250,000, but 40% of voters chose the options of 100,000 or zero. The full results on “how many legal immigrants should the government allow each year” were:

• Two million – 7%
• One million – 14%
• Half a million – 23%
• 100,000 – 20%
• Zero – 20%

A key way in which growth from immigration has a somewhat smaller effect on sprawl is the lower average income level and, thus, a lower consumption level of the average immigrant. But we found that an assumption about immigrants having less of an effect because they presumably prefer central cities to suburbs was false. The majority of immigrants now live in suburbs where the sprawl occurs. And the adult children of immigrants were found to be just as likely to shun living in core cities as the adult children of natives. In fact, the lower incomes were causing immigrants to move to the edges of cities and even to rural settlements beyond the cities to find cheaper housing.

On a local level, the sprawl pressures of population growth are similar regardless of where the new residents originate. But very few Urbanized Areas are likely to be able to subdue population growth and sprawl if the federal government continues policies that add around 20 million people to the nation each decade, all of whom have to settle in some locality. The reality – which can only be mitigated but not eliminated by good planning or Smart Growth – is that these localities all occupy lands that were formerly productive agricultural lands or irreplaceable natural habitats.

References

FAO. 2015. Status of the World’s Soil Resources Main Report. Food and Agriculture Organization of the United Nations and Intergovernmental Technical Panel on Soils, Rome, Italy.

Leahy, S. 2018. 75% of Earth’s Land Areas Are Degraded. A new report warns that environmental damage threatens the well-being of 3.2 billion people.  National Geographic.

USC. 2018. Table 4. Cumulative Estimates of the Components of Resident Population Change for the United States, Regions, States, and Puerto Rico: April 1, 2010 to July 1, 2018.  United States Census.

Wikipedia. 2017. List of United States cities by population. Wikipedia.org.


Book review of Mikhail’s “The beekeeper: rescuing the stolen women of Iraq”


Preface. This is a gruesome post you may want to skip.

My main interest in this book was what will happen to the hundreds of millions forced to flee in the future because of the crash of civilization – as oil declines, topsoil vanishes, sea levels rise, fresh water disappears, droughts, hurricanes, tornadoes, fires, invasive species, and pests ruin crop production, and a hundred other calamities unfold. Potentially you, if you live long enough…

But mainly oil decline will be at the root of it all, since with oil fires can be fought, fresh water pumped from over a thousand feet down, topsoil amended with natural gas-based fertilizers, pests crushed with oil-based pesticides, and so on. Fossil fuels allow 6.5 billion extra people to be alive today.

Although this book is about Yazidi and Christian families in Iraq and their Muslim terrorist oppressors (the Daesh, who call themselves the Islamic State), these atrocities are a common pattern I’ve seen in other books about what refugees go through. In the future even more civil wars will erupt everywhere that depends on oil in any way as resources and energy decline. Various groups will try to take control of regions and kill those not part of the in-group.

Basically what happens, and has occurred throughout human history, is that men and older women are killed, nubile women are sold as wives, and children become slaves. There are many examples to be found in the Old Testament, such as when the Israelites war against Midian and “slew every male”. They take captive the women and children, take all cattle, flocks and goods as loot, and burn all cities and camps. When they return to Moses, he is angered, and commands “Now therefore kill every male among the little ones, and kill every woman that hath known man by lying with him. But all the women children, that have not known man by lying with him, keep alive for yourselves” (Numbers 31). There are many more examples from the Bible in the Wikipedia article “The Bible and violence”.

Those without a place to go, or water and food along the way, often don’t make it. Many of the survivors randomly knocked on doors seeking help and got it, though we don’t know about those who knocked on the wrong door and were turned in to their captors.

Read the depressing accounts of escapees below for details.

Dunya Mikhail. 2018. The Beekeeper: Rescuing the Stolen Women of Iraq. New Directions.

I didn’t ask my students if they knew that the letter was now being written in red on doors, notifying residents that they must leave their homes or else face death. Reduced to an N, those Nasara — “Christians” — were shaken out of sleep by megaphones blaring all over town that they had 24 hours to get out, and that they couldn’t take anything with them; and just like that, with the stroke of a red marker across their doors, they would have to abandon the houses they’d lived in for over 1,500 years. They’d leave their doors ajar and turn their backs on houses that would become Property of the Islamic State. But I didn’t explain any of this. My job is to teach Arabic,

Abdullah translated what Nadia said into Arabic for me: I was at home when my husband, moving the telephone away from his ear, told us, “We have to leave now, Daesh is nearby.” That was a Sunday morning, the first Sunday in August, when we fled our home in the village of Sawlakh, east of Sinjar, along with our neighbors and their families. I walked with my husband and our three children alongside a caravan of nearly 200 people

It was very hot outside and we had departed without any water or food or diapers. We headed up into the mountains, stopping every hour so that we could rest a bit, especially for the sake of the exhausted children. We found a vegetable farm and stopped to pick tomatoes — we were so thirsty. That’s when we were surrounded by Daesh fighters. First they loaded the men, then the women and children, onto big trucks, taking us to Mosul.

When they unloaded us in Mosul, they separated the virgins from the married women; they also set apart children over the age of twelve.

Then they took us to a school in Talafar where we stayed for eighteen days, studying Quran. They forced us to recite verses in that filthy place, even as we were dying of hunger and thirst. They told us that we were infidels, that we must convert to Islam because it’s “the true faith,” and that we’d have to get married. Then they transferred us to another building near Raqqa, in Syria, where they put us up for auction.

They handed me a slip of paper with the name of the buyer written on it, informing me that it was my marriage certificate. I had no idea what they’d done with my husband and his father and his brother and all the rest of our relatives who’d been with us in the convoy. The man who’d bought me told me I was now his wife.

We stayed there for three months, and during that time we made hundreds of rockets. My children and I worked twelve hours a day for them. They gave my five-year-old daughter the most dangerous job, tying together the detonation lines.

At any moment a mistake could explode the bomb right in her face. Along with another female captive, I would load the rockets into a truck. She was a Yazidi from my village, and she had two children. We became so close that we conspired to escape together.

The seven of us stood in front of the bakery with both anxiety and hope.

A man gestured for us to get into his car. He took us to Manbij province, northeast of Aleppo, then to the Euphrates. The plan was for us to cross over to Kobani in a skiff. But we saw dead people lying in the road, which sent our children into a panic, making them shake and cry. I felt like I was going to throw up and my friend covered her eyes. The driver had to take us back to Manbij, where we spent the night in a house whose inhabitants seemed to have fled. The smuggler explained to us that most of the homes there had been abandoned after Daesh’s assault. It was a very small house that still smelled of people, as if they had just left. We stayed the night there, but too nervous that the Daeshis would find us, we counted the minutes until morning, unable to sleep. After the smuggler picked us up, we headed for a rural area east of the Euphrates. There he instructed us to get out of the car and walk toward the river. We followed his instructions, continuing our journey on foot. After about half an hour of walking, we heard the sound of gunshots. We hid among the reeds in the marshes, huddled there for hours, afraid of what might happen at any moment. The smuggler was still with us but he had become extremely tense, especially when the children started crying. He ordered us to stay absolutely silent.

Once the sound of gunfire had subsided, we continued walking to the edge of the river, crossing in a skiff over to Kobani, on the Turkish border. There we were greeted by a group of people, mostly women. They took us to a hotel where we were able to rest for a few days. They gave us fresh clothes and then drove us to Dohuk Province in Iraq, where Abdullah and my mother-in-law lived. Now I live with her. She prays every day for the return of her son, my husband, my real husband.

Our work isn’t without danger, of course. Daesh gruesomely executed one of our drivers when he was caught. We were extremely sad to lose him. He was a young man, and I depended on him very much. In fact, up until now, we’ve lost twelve smugglers.” “How?” “Sometimes Daesh will propose letting the sabaya return to their families in exchange for a large sum of money. Those who are serious will release their sabaya in exchange for the money; yet there are others who claim they’re willing to go through with the exchange but then ambush the go-between when he shows up, killing him despite their previously agreed-upon arrangement.

About 25% of direct purchases from Daesh ended up with our smugglers getting ambushed.

I instruct the family to give the captive my telephone number so that I can make arrangements with her directly. Then we come up with a plan based on where she is. I use Google Maps to scope out the area — the old map of Syria I used back when I was selling honey is no good anymore because many of those regions have changed. Now I know all the neighborhoods in Raqqa, building by building. When the captive calls me, I pick a specific rendezvous point and a code word,

Once they get far enough away, she’ll be moved into a safe house, the same houses where smugglers warehoused cigarettes in the past. She’ll stay there for a few days, until the commotion caused by her disappearance dies down

After two or three days the driver will come back to the safe house and they’ll continue their journey by car, then on foot for another five or six hours. Sometimes the operation will include crossing the river to Turkey in a skiff and, finally, spending about twelve hours in another car in order to reach the northern border of Iraq, where her family will finally greet her. Sometimes I’ll follow the mission step by step; sometimes I cross over into Syria to meet with the smugglers, guiding and encouraging them. There’s no need for me to welcome back those captives but often I tag along with the family to the border region between Iraq and Syria because I love being a part of these moments. It’s indescribable, everyone bursting with ecstasy and tears and hugs; I’ve witnessed this over seventy times, and every time I can’t keep myself from crying.

Marwa opened the door at four in the morning, then closed it behind her and walked out into the street, flagged down a cab and got in. The taxi driver was stunned. As you know it’s rare to find a young lady hailing a taxi in the street at such an early hour. Where are you going? he asked her. She broke down in tears, telling him that she had just escaped from Daesh. Kill me please, I beg you, just don’t take me back to them. I can take you to a neighborhood where the clans are sure to offer you shelter, he said. When they open the door, tell them: ‘I’m at your mercy.’ Arab clans won’t turn away anyone who knocks on their door and says that. Dawn was extremely quiet as Marwa approached a large house and knocked on the front door. A woman opened the door. As soon as she listened to Marwa’s story she invited her inside.

But when the woman’s husband heard that she had run away from Daesh he refused to take her in. He didn’t want to shoulder the responsibility; he said that he would have to hand her over to the police. The wife pleaded with her husband to just let the girl be on her way; eventually she apologized as she said goodbye to Marwa at the door. Marwa headed somewhere else, this time knocking on the door of a smaller house. A man opened the door with his wife and children behind him. When she told them she was running away from Daesh they invited her inside. They sat down in a circle around her and asked her to tell them what had happened. She wept even as they tried to calm her down, telling her they weren’t going to abandon her. Their house and their furniture signaled extreme poverty — they didn’t even have a telephone. They promised that as soon as the shops opened in the morning they’d take her to the Internet café so she could use the phone. When Marwa called me, I didn’t have a functioning network yet, but I decided to make a few calls and find her a smuggler. Marwa ended up staying with that family for fifteen days. They shared their food with her and told her repeatedly that she was safe with them. By the time I found a smuggler we’d run into a snag: the owner of the Internet café found out that she’d escaped from Daesh and threatened the generous family that he would send her back if they didn’t pay him $7,500. The family agreed to the ransom even though they had nothing, asking the man to give them time to scrape the money together. The members of the family went from house to house, managing to raise $7,000. When they went to give the money to the Internet café owner, they asked him to forgive the remaining five hundred; he agreed and let Marwa leave with the driver. Marwa came back alone, without her mother or father or sisters or brothers. My brother and my sister and fifty-six members of my family, including cousins, are still missing.

We heard the booming sound of artillery. We had never heard such blasts, even in times of war. Twenty-eight of us gathered together — my mother, my siblings, and their families — all of us hesitant to flee. It isn’t so easy for a person to give up their home.

A lot of people died on the journey, including the ill, whose families had to leave them behind.

Those who had tried to go home were captured by Daesh after the withdrawal of Peshmerga forces.

My sister, my brother, my cousins, and all of their families were among those who had gone back and fallen into the trap. The worst thing I heard was that Daesh had separated the elderly from everyone else and had buried them all alive

We managed to reach the Syrian border on a road that was being protected by the People’s Defense Brigades. To tell you the truth, it was an unusual protection force, as it was mostly made up of women. Throughout that harsh and difficult journey we’d hoped an American or European plane would come to airlift us all to safety, but that never happened. Our convoy had about 350 people, including women on the verge of giving birth, disabled people who were barely able to walk,

They threw us down there in shifts. Every 15 minutes they would lower down about a dozen men from the outcropping and open fire on them. They arranged us into rows, telling us to line up next to each other so it would be easier for them to shoot us. My brother was in the first shift. My other brother was in the second shift. I was in the third. I knew everyone down there with me; they were my neighbors and friends.

After they shouted Allahu Akbar, the sound of gunfire rang out, and once they had finished shooting us one by one, I was swimming in a pool of blood. They shot at us again, then a third time. I shut my eyes and prepared to die, as one must.” “How long did you stay like that?” “I was bleeding there for almost five hours.” “Where were you shot?” “In three different places. Once in my foot and twice in my hand.” “And did everyone else die?” “All except for one other man, Idrees, a childhood friend of mine. His feet were injured.

“You need tricks,” Badia told me when I asked her how she’d managed to escape Daesh. The first trick was to stop bathing for an entire month, until she smelled so bad that the fighters would stay away from her, refusing to buy her. The second trick was to claim that she was married, and that the little child beside her was her son. It took longer for married women to be sold. The third trick was to pretend she was pregnant in order to avoid being raped, even if only temporarily.

We were a big family living in the village of Kocho — my mother and father, and my five brothers and five sisters. In the beginning we heard that Daesh had occupied Mosul; we heard that they were killing people there, raping women; we heard that they were coming toward us, that they were going to do the same thing to us. We didn’t believe it.

Daesh was a lie. And even if it wasn’t a lie, they would never make it to Kurdistan because the Peshmerga fighters would stop them. We had a hundred soldiers. Surely they would be able to protect us. We shared these rumors until late into the night. At two in the morning my father’s telephone rang. It was his friend from the village of Siba Sheikh Khidr. He said: “You have to leave. Daesh has reached our land. They’re going to kill us all.

We would take a few steps toward the door, then retreat. We’d make up our minds to leave, but then remain where we were.

A caravan of thirty families emerged and headed toward the mountains. We decided to do the same. We joined our relatives and friends, but just as we were about to leave, a group of Peshmerga fighters arrived, saying they would put Daesh in their crosshairs and stop them in their tracks. Everyone was fired up, including my father. We decided to stay and assist the Peshmerga, or fight alongside them.

Then we heard the terrible news that those thirty families that had set out before us had been stopped by Daesh, that they had killed all the men and enslaved the women and children. At that point the Peshmerga made up their minds to go assess the situation and then report back to us. They advised us to stay where we were until they returned with an update. They left and never came back. They didn’t send any word. They left us there, adrift. We never learned what happened to them.

Everybody was calling their relatives who had fled, trying to find out whatever they could about what was going on. None of the men picked up their phones. The women who answered their phones said that the men had all been killed.

Daesh had surrounded the area, and it was too late to get away. At 4 p.m. on August 3, 2014, Daesh came to our homes. Our first shock was seeing men we knew among them. They didn’t live far from our village. We even used to consider them friends. But now they had joined the ranks of Daesh. They behaved as if they were our enemies.

At midnight, all the children who were older than six were taken away from their mothers and sent to a training camp. In the morning they took all the older women, even the pregnant ones, and killed them all. They dumped them into fishponds.

Some of the women and children died of thirst. At that point a man showed up with a bucket of water. But before we could drink any of it, he threw in a dirty diaper. I don’t know why he did this, but we drank the water anyway, despite the filth. We nearly died of thirst. I think they put some kind of chemical in the water because all of us got dizzy and nauseous and tired.

Someone they called “the Caliph” came and announced that we would have to marry the fighters. We said: “We’re already married.” The Caliph said: “We killed all of your men. So now you’re for sale on the market.”

They ordered us to bathe, but I went into the bathroom and came back out again without washing. I knew they were going to come and smell me, and cleanliness was dangerous in that situation. A month passed, and every day I began to smell worse. I didn’t even wash my face despite the fact that my eyes were itching from crying so much. They brought us fresh clothes to make us more enticing to the customers. They said: “Put on these beautiful clothes. The photographer will be here any minute.

 “Nobody wants you, so we’re going to send you to Syria.” They moved us to a building in Raqqa. There I was reunited with my sisters, my brother’s wife, and my friends — they said they’d been there for two weeks. After thirteen days they sold us off, ten of us for each man. An American came and bought me along with nine other women. He took us to his house in Aleppo. His guards there all called him “the American Emir.” The first thing he ordered us to do was bathe. He pointed toward the bathroom, saying: “Get in line. Each and every one of you has to take a bath. Or else.” Then he brought us new clothes and told us to put them on.

He introduced himself to me, and said, in formal Arabic: “I’m an American. Tell me, when was your last period?” “Why are you asking?” “Because we don’t marry pregnant women.” “It’s been five months.” “Well then, I won’t marry you today. Tomorrow I’ll take you to the doctor to see whether or not you’re pregnant.” I went back to our room and Nada looked at me inquisitively. I said: “We’ve got to get out of here tomorrow. Otherwise the Emir is going to find out I lied, and then he’ll rape me.” Nada agreed that we would run away the next morning, as soon as the Emir left the house — he went out every day at 10 a.m., and didn’t come back until nine at night.

The Emir showed me photos of his family on his computer: his American wife, his one-year-old son, and his infant daughter. The two children were playing on swings in a park. He said he’d been a teacher in an elementary school. “Isn’t it haram for you to abandon two small children who might be wondering where their father is?” “I go to America every once in a while, to see my family, then I come back.

The next night he drugged me and then raped me five times. When he woke up in the morning, he said: “Don’t tell anyone that the boy isn’t yours. If the members of the organization find out, they’ll kill me. This has to be our secret.” “Whatever you say.” “We’ll raise him together, you and I. But I’m going to sell Nada.” “No. Please. I need her. I don’t have anyone else. You go to work all day — I can’t bear to be here without Nada.

After living with him for two months we tried to run away, unsuccessfully. We tried to run away four times, but the Daesh police brought us back each time. And each time he punished me with a beating. On the fourth time he was so angry that he strung me up by my feet and beat me mercilessly. Even worse, he left with my nephew, and when he came back the boy wasn’t with him. I was beside myself. I begged. I wept. But he didn’t care. A week went by and he wouldn’t speak to me. He didn’t tell me what he had done with my nephew.

It was 9 p.m. when he called me into his room. He said: “We’re going to Kobani to fight. We might be gone four or five days. I’m going to lock the doors. You can’t go out — not at all, not even to buy bread. Do you need me to bring you anything before we go?” “No. We have everything we need. Thanks.” We made a plan to break down the door and run away.

We got our Islamic clothes ready and started looking for something to break down the door. We found some small metal tools and used them to smash it. We had to work at that for hours. We didn’t go to sleep until we managed to finally break down the door at four o’clock in the morning, but waited until eight so we wouldn’t raise any suspicions. We hurried as far away from the house as we could. After about two hundred yards we saw a cell phone shop with a sign that read “International Phone.” We went inside. I still remembered the phone number given to me by a woman in that building where we’d been detained before we were sold. She told me: “Memorize this number.” Then I gave it to somebody else and told her: “Memorize this number.” I repeated the number in my head every day so I would never forget it. We were a few steps away from the phone. I told the shopkeeper that we wanted to use the phone but we didn’t have any money. He said: “Sorry. No free calls.” I asked him: “Do you know the Emir Abu Abdullah the American? I’m his wife. He went to Kobani. I need to call him to make sure he’s okay. I’m new here. I don’t know anybody else.

The little boy I told you about made it, the one who was in Daesh’s camp. He arrived with his mother and his younger brother.” “They were training to fight, right?” “Yes. Ragheb was forced to train for four hours every day, learning how to kill, how to chop off people’s heads. They would also teach him Quran for two hours a day, and fiqh for another hour. They have classes on everything, from how to wash your hands to sex education, from impurity to handling an animal, from genetics to just about anything you can imagine — and things you can’t even imagine. And finally a personalized sermon to convince him to die for God, so that he’ll be rewarded in heaven. They have special passes to get into heaven that are handed out at the end.

Both routes would eventually lead to Mount Sinjar, the same mountain refuge that had protected them from harm every time. They’d done this many times over the course of history: the people of the region, in times of danger, wouldn’t think about going anywhere else, they wouldn’t think twice.

The half of the caravan heading west reached the mountain, and survived, but the other half heading east, including Elias’s family, never made it there. The Daeshis were waiting in their path and they were captured. Daesh took them to Mosul.

A week later the Mosulli driver brought Kamy three packs of cigarettes, which she kept carefully hidden. As soon as the Daeshis left, Kamy opened a pack, took pleasure in a kind of luxury, and breathed out some of her repressed anger. She found herself smiling at the generosity shown to her by the Mosulli driver. But the next day she saw something she would never forget: a Daeshi holding up two severed hands in front of the captives. He said those were the hands of the tanker driver who’d brought the captives cigarettes. Kamy nearly choked, as if she had inhaled all of the tobacco of the world in a single moment, thinking, I wish I were dead, I wish I hadn’t asked him for anything.

I ended up spending a year confronting those beasts along with the other young female captives from my village, in a house in the Deir al-Zor area in Syria. They raped us, beat us; they forced us to cook and clean and wash their clothes. During the day, they would take their weapons and go out. At night, they would come back and gather together to take drugs and recite religious verses. When they told us it was time for “Quran lessons,” this also meant that they were going to rape us, because they typically did that right after prayers. They would take naked pictures of us with their cell phones, and before starting each “Quran lesson,” they’d exchange pictures of us with one another to see whether there was anyone who wanted to swap with them.

The main motivation for these Daesh men was sexual: they would kill anyone in order to rape women. In the end they would kill themselves to meet their houris in heaven.

Whenever Abu Nasir needed money, he would give me to someone temporarily, loaning me and then taking me back later. All I could think about was escaping, but it took seventy days before I was able to steal the key from Abu Nasir. I managed to escape, but the terrible realization was that my family was all missing.


Book review of Jaczko’s “Confessions of a Rogue Nuclear Regulator”

Preface. After presenting a lot of evidence for why nuclear power plants are inherently unsafe, Jaczko concludes: “There is only one logical answer: we must stop generating nuclear waste, and that means we must stop using nuclear power. You would think that it would make sense to suspend nuclear power projects until we know what to do with the waste they create”.

Jaczko isn’t the first to sound the alarm on the safety of nuclear power plants. There’s also the 128-page report by Hirsch called “Nuclear Reactor Hazards: Ongoing Dangers of Operating Nuclear Technology in the 21st Century”, or my summary of this paper at energyskeptic, “Summary of Greenpeace Nuclear Reactor Hazards”.

I read this book hoping Jaczko would explain why he shut Yucca Mountain down. The 2013 book “Too Hot to Touch: The Problem of High-Level Nuclear Waste” by William M. Alley & Rosemarie Alley (Cambridge University Press) goes into great detail about why Yucca Mountain is the ideal place to put nuclear waste.

I have a lot of problems with Yucca being shut down. How is it safer to have 70,000 tons of spent nuclear reactor fuel and 20,000 giant canisters of high-level radioactive waste at 121 sites across 39 states, with another 70,000 tons on the way before reactors reach the end of their life? 

Spent fuel pools in America’s 104 nuclear power plants hold, on average, 10 times more radioactive fuel than was stored at Fukushima, and most of them are so full they contain four times the amount they were designed to hold.

All of this waste will harm future generations for at least a million years, and all of these above-ground sites are vulnerable to terrorists, tsunamis, floods, rising sea levels, hurricanes, electric grid outages, earthquakes, tornadoes, and other disasters.

So Yucca Mountain isn’t perfect? Not making a choice about where to store nuclear waste is a choice. We will expose many future generations to toxic radioactive wastes if we don’t clean them up now.

Here is what Jaczko has to say for why he shut down Yucca Mountain:

“There were many technical, political, and safety reasons why the site was not ideal, in fact Yucca failed to meet the original geological criteria. The rock that would hold the nuclear waste allowed far too much water to penetrate; water would eventually free the radiation and carry it elsewhere. In addition safety studies that showed the site to be acceptable were based on infeasible computer simulations projecting radiation hazards over millions of years. Realistically forecasting the complex, long-term behavior of spent nuclear fuel in underground facilities is scientifically impossible. After 35 years, the Yucca mountain project was over.” 

Yet Jaczko knows his decision to leave nuclear waste at 121 sites is dangerous:

“As waste piles up, we leave behind dangerous materials that later generations will eventually have to confront. The short-term solution—leaving it where it is—can certainly be accomplished with minimal hazard to the public. But such solutions require active maintenance and monitoring by a less than willing industry. This is already an organizational and financial burden. In 30,000 years when these companies no longer exist who will be responsible for this material?” [my comment: or even 30 years after a financial crash or oil decline]

Thousands of scenarios were modeled at Yucca Mountain, covering every combination of earthquake, volcanic intrusion and eruption, upwelling water, increased rainfall, and much more. Jaczko offers no countering scientific evidence, which I expected to find in his book. Yucca Mountain passed with flying colors; here are just a few reasons why:

  • Volcanic activity stopped millions of years ago
  • Earthquakes mainly affect the land surface — not deep underground storage
  • Waste could be stored 1,000 feet below the land surface yet still be 1,000 feet above the water table in an area with little water and only a few inches of rain a year.  Rain was not likely to travel 1,000 feet down.
  • The entire area is a closed basin. No surface water leaves the area.  The Colorado River is more than 100 miles away.
  • There’s no gold, silver, or oil to tempt future generations to dig or drill into the nuclear waste.
  • The mountain is made of a rock that makes tunneling easy yet at the same time tough enough to form stable walls that are unlikely to collapse.

If Jaczko’s secret motive was to stop Yucca waste storage so states wouldn’t build more nuclear power plants (6 states won’t allow new plants until there’s nuclear waste disposal), he shouldn’t have worried. The upfront cost to build a nuclear power plant is four times that of an equivalent natural gas plant, so banks aren’t going to lend money; no money will be coming in for the minimum of ten years it takes to get permission and fight off lawsuits and NIMBYism; there are uninsurable liabilities; and there are limited uranium reserves left.

And once peak oil production hits, most likely within the next 5 years according to the latest IEA 2018 report, the odds that we’ll spend dwindling energy on nuclear waste disposal to protect thousands of future generations are nil. That rapidly disappearing oil (declining at an exponential 6% per year) is going to be spent on growing food and fighting wars.
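
To put that 6% figure in perspective, here is a rough back-of-the-envelope sketch (my own illustration, not a calculation from the IEA report) of what a steady 6% exponential decline does to production over time:

    import math

    decline_rate = 0.06  # assumed: 6% per year exponential decline
    halving_time = math.log(2) / decline_rate
    print(f"Production halves roughly every {halving_time:.0f} years")  # ~12 years

    for years in (5, 10, 20, 30):
        remaining = math.exp(-decline_rate * years)
        print(f"After {years:>2} years: about {remaining:.0%} of today's production")

In other words, a seemingly modest 6% decline leaves less than a third of today’s output within twenty years.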

Jaczko spends a few paragraphs on the hazards of spent nuclear fuel pools and points out that terrorism, floods, earthquakes, tornadoes, mudslides, and hurricanes could affect them enough for another Fukushima to happen here.

But if his agenda is to stop new nuclear power plants, he should have mentioned the 2016 National Research Council report “Lessons Learned from the Fukushima Nuclear Accident for Improving Safety and Security of U.S. Nuclear Plants,” which found that if electric power were out 12 to 31 days (depending on how hot the stored fuel was), the fuel from the reactor core cooling down in a nearby spent fuel pool could catch on fire and cause millions to flee from thousands of square miles of contaminated land, because these pools aren’t in a containment vessel.

The National Research Council estimated that if a spent nuclear fuel fire happened at the Peach Bottom nuclear power plant in Pennsylvania, nearly 3.5 million people would need to be evacuated and 12,000 square miles of land would be contaminated. A Princeton University study that looked at the same scenario concluded it was more likely that 18 million people would need to be evacuated and 39,000 square miles of land contaminated (see my post on this here).

In the worst case, nearly all U.S. reactors would be involved if there were a nuclear-bomb-generated electromagnetic pulse, which could take the electric grid down for a year or more (see the U.S. House hearing testimony of Dr. Pry). The EMP Commission estimates that a nationwide blackout lasting one year could kill up to 9 out of 10 Americans through starvation, disease, and societal collapse.

Okay, enough criticizing. Overall this book will interest anyone who is concerned about nuclear power, which comes up a lot now as a potential part of the Green New Deal and a way to provide power without CO2.

Here are some excerpts from the first half of the book; the second half is worth reading too.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Gregory Jaczko. 2019. Confessions of a Rogue Nuclear Regulator. Simon & Schuster.   

The problem was that I wasn’t the kind of leader the NRC was used to: I had no ties to the industry, no broad connections across Washington, and no political motivation other than to respect the power of nuclear technology while also being sure it is deployed safely. I knew my scientific brain could stay on top of the facts. I knew to do my homework and to work hard. But I could also be aggressive when pursuing the facts, sometimes pressing a point without being sensitive to the pride of those around me. This may have had something to do with why I eventually got run out of town. But I also think that happened because I saw things up close that I was not meant to see: an agency overwhelmed by the industry it is supposed to regulate and a political system determined to keep it that way.

And I was especially determined to speak up after the nuclear disaster at Fukushima in Japan, which happened while I was chairman of the NRC. This cataclysm was the culmination of a series of events that changed my view about nuclear power. When I started at the NRC, I gave no thought to the question of whether nuclear power could be contained. By the end, I no longer had that luxury. I know nuclear power is a failed technology. This is the story of how I came to this belief.

The next step in my nomination, beyond excitedly telling my parents, was to wait. And wait. And wait. And wait. Nominees to commission positions become hostages for leverage in the U.S. Senate, as the confirmation process creates the opportunity for senators to fulfill other related—or unrelated—goals by placing a hold on a nomination until they get what they want. In my case, the confirmation process took two years.

Up until that point, I had been a surrogate for Senator Reid and for Congressman Markey, with very little record of my own. Since both of these legislators had been antagonists of the nuclear power industry for decades, I was guilty by association. With little to go on, the industry had to assume the worst: that my bosses’ views were my views. That triggered relentless opposition from the industry and its standard-bearers in the U.S. Senate.

The blunt message I would get over the next two years of Senate stalling was that honesty and integrity mean nothing if you are perceived to be critical of nuclear power.

Frustrated with the two years of obstruction, Reid decided to place holds on every nominee waiting to pass through the Senate’s approval process—more than three hundred people—until I was confirmed. But even this muscular action—which made for great headlines in Nevada, where Reid was seen as fighting for the interests of the state—was not enough. There was one hold on my nomination he could not get released, that of Pete Domenici. The New Mexico senator was known as “Saint Pete” among nuclear proponents because of his prolific and unflinching support of the nuclear energy and nuclear weapons industries. In the mid-1990s he had made a very simple threat to the NRC: Reduce your intrusiveness by adopting more industry-friendly approaches to regulation, or your budget will be slashed.

The Nuclear Regulatory Commission oversees all the commercial nuclear power plants in the United States. It is part of the family of government agencies known as independent regulatory commissions.

To ensure that each commission has, at least in theory, a diversity of views, no more than three of its members can belong to any one political party.

Each commissioner serves a term of 5 years and the terms are staggered, so one member leaves the commission every year as a new one is seated. These agencies are designed to be independent of but not isolated from the president, whose power comes from the fact that the president chooses each board’s chair. This chair wields tremendous authority.

When I chaired the NRC, that meant having executive responsibility for nearly 4,000 staff members and a budget of over $1 billion. Congress, however, has even greater control than the president over the independent regulatory commissions, because it oversees and funds them.

Because these regulatory commissions wield enormous power over industries like telecommunications, commercial banking, investment, and electricity, the commissioners are often the subject of intense fighting in Washington.

In the case of the NRC, powerful electric utilities strongly influence the choice of commissioners, as they depend on allies on the board for their livelihood; no nuclear power plant can operate without the agency’s approval. For the past several years, this has meant that the NRC’s board has been made up primarily of industry-backing commissioners. Prospective commissioners who might make safety a priority—or even dare to oppose nuclear power—don’t survive the Senate confirmation process.

Although people talk about the nuclear power industry as if it were a monolith, nuclear power is produced by many different companies in many different sectors of the economy. Some of their names are familiar: General Electric, Westinghouse, Toshiba. Most of them make products, plants, and services that create all types of electricity, not just from nuclear power, using a combination of traditional and renewable energy resources.

What all of these disparate electricity producers, suppliers, and distributors have in common is membership in the Nuclear Energy Institute, the lobbying organization representing the industry’s interests.

When it comes to influencing laws and regulations, NEI members have a history of acting as one. This solidarity gives them tremendous influence with Congress.

Killing regulations, or even modifying them slightly, can produce savings of millions of dollars per year in operating costs, equipment purchases, and technical analysis. With millions to spend and a unified message, NEI shapes every NRC regulation, guidance, and policy. In some instances, NEI works through formal channels, commenting on documents produced for the public. In others, it exerts its power through informal meetings with commissioners. In any given month, I could be visited by as many representatives of the industry as I would be by public interest groups across my entire seven and a half years on the commission.

A typical visit from a representative of NEI or a utility company would start at the middle manager level and end with the commissioners. That way, if NEI heard troubling news from midlevel staff, they could raise the issue with one or more friendly commissioners, and actions would be taken. I saw this happen all the time, even though staff members were repeatedly told to not take direction from commissioners or industry executives.

 “Health care and energy are the president’s two most important issues. And nuclear power is crucial to his energy program. We don’t need any distractions from that basic goal. So don’t fuck it up.” I took this to mean that I shouldn’t be too hard on the industry because the president needed its support to address his climate change goals.

Although I had already spent more than four years at the agency, I had kept my distance from industry leaders. I knew them and they knew me, but I believed it would be easier to make objective safety decisions if I didn’t get too friendly with them.

Then bigger issues came along. The first arose when I pushed to make good on the president’s promise to end the program to store nuclear waste in Nevada. 

No one can design a safety system that will work perfectly. Reactor design is inherently unsafe because a nuclear plant’s power—if left unchecked—is sufficient to cause a massive release of radiation. So nuclear power plant accidents will happen. Not every day. Not every decade. Not predictably. But they will happen nonetheless.

The designers of nuclear facilities would not agree that accidents are inevitable. When building their safety backups, they essentially say, “Whatever you need, double or triple it.” If it takes one pump to move water during an accident, for example, then put in another pump somewhere in the plant. However, this fail-safe setup only reduces the chance of an accident; it does not eliminate it. What if a failure disables both pumps simultaneously? And what about the problems that no engineer, scientist, or safety regulator can foresee? No amount of planning can prepare a plant for every situation. Every disaster makes its own rules—and humans cannot learn them in advance. Who would have thought a tsunami would cause a nuclear disaster in Japan?
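
[My comment: a toy probability sketch (my own illustration, with made-up numbers) shows why “double everything” only goes so far — redundancy multiplies safety only while failures are independent, and a single common-cause event, like a flood that drowns both pump rooms, wipes out most of that benefit.]

    # Made-up failure probabilities, for illustration only
    p_pump = 0.01      # assumed chance one pump fails when demanded
    p_common = 0.001   # assumed chance of an event that disables both pumps at once

    both_fail_independent = p_pump ** 2
    both_fail_with_common_cause = p_common + (1 - p_common) * p_pump ** 2

    print(f"Both pumps fail, independent faults only: {both_fail_independent:.4%}")
    print(f"Both pumps fail, with common cause:       {both_fail_with_common_cause:.4%}")
    # The common-cause term dominates: the second pump no longer buys a
    # factor-of-100 improvement in safety.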

Uncertainty about when an accident will happen is exactly why the industry makes the argument for doing nothing. “Why spend billions of dollars to prevent something that might not happen for thousands of years, if at all?” they say. But the accident at the Fukushima plant is a rebuttal to that argument: despite decades of advances in safety systems, reactor physics knowledge, and nuclear plant operator performance, a catastrophic accident shocked most of the world simply by happening. Maybe another accident won’t happen for thousands of years. Or maybe it will happen tomorrow.

Many tried to dismiss Fukushima as a result of Japanese unwillingness to challenge authority. Their engineers simply didn’t push back against the norms that stand in the way of safety, people said. But that same obeisance to the powerful is exactly what I saw at home in the NRC.

When I realized how flawed the safety technology was—not just in Japan but at U.S. nuclear facilities—I decided I would do everything I could to fix it. My determination set up a major conflict between my fellow commissioners and me. Following the Fukushima accident they appeared to me most concerned with preventing the agency from inflicting pain on an industry now struggling to respond to a major nuclear power plant accident in a country far away.

American politicians had long ago been led to believe that these kinds of calamities were no longer possible. And so pressure was placed on the agency—even after the disaster—to do just enough to say safety was taken care of, but not so much that it forced the industry to make meaningful changes. From my prime seat at the most significant contest over the future of nuclear power, I saw the industry and its allies continue to try to thwart even the most basic and commonsense safety reforms.

In hindsight, the Fukushima incident revealed what has long been the sad truth about nuclear safety: the nuclear power industry has developed too much control over the NRC and Congress. In the aftermath of the accident, I found myself moving from my role as a scientist impressed by nuclear power to a fierce nuclear safety advocate. I now believe that nuclear power is more hazardous than it is worth. Because the industry relies too much on controlling its own regulation, the continued use of nuclear power will lead to catastrophe in this country or somewhere else in the world. That is a truth we all must confront.

Three nuclear accidents:

  • Three Mile Island, Pennsylvania, 1979
  • Chernobyl nuclear power plant, Soviet Union, 1986
  • The troubled Davis-Besse nuclear power plant, Ohio, 2002

The problem is that with each new accident, all the people in charge of nuclear safety seemed to revert to the belief that this one would be the last one. As chairman of the NRC I battled nearly every day against this instinct to believe the worst was over. You can prepare for the next accident only if you can get all the players to admit that a next one is coming, even if when and where are impossible to predict.

Three Mile Island

It started on March 28 at around 4:00 a.m., when a water pump stopped working. The failed pump affected the steam generators, large cylinders filled with many tiny metal tubes that help turn hot water from the nuclear engine into steam so that the turbines can create electricity. When the flow of water was cut off, this massive heat exchange stopped working, creating the conditions for a serious accident. The reactor engine was immediately turned off. But so long as the reactor fuel remained hot (which it would for quite some time), its natural radioactive decay would continue, producing enough heat (called “decay heat”) to melt through the metal containers enclosing the reactor fuel. (This same problem would later affect the Fukushima plant.) The failure of the main feedwater pump was not in and of itself a serious crisis. But the systems responsible for removing the decay heat—and the people operating those systems—did not respond correctly.
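
[My comment: for a sense of scale, here is a rough estimate of that decay heat using the textbook Way-Wigner approximation — my own back-of-the-envelope sketch, not a figure from the book; the assumed 2,700 MW thermal rating is roughly that of a TMI-era reactor.]

    def decay_heat_fraction(t_shutdown_s, t_operating_s=3.15e7):
        """Way-Wigner approximation: fraction of full thermal power still being
        produced as decay heat, t_shutdown_s seconds after shutdown (rough, illustrative)."""
        return 0.066 * (t_shutdown_s ** -0.2 - (t_shutdown_s + t_operating_s) ** -0.2)

    thermal_power_mw = 2700  # assumed reactor thermal rating
    for label, seconds in [("1 minute", 60), ("1 hour", 3600), ("1 day", 86400)]:
        frac = decay_heat_fraction(seconds)
        print(f"{label:>8} after shutdown: {frac:.1%} of full power, "
              f"about {frac * thermal_power_mw:.0f} MW of heat to remove")

Even a day after shutdown the core is still shedding on the order of ten megawatts of heat, which is why losing the cooling systems matters so much.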

As the reactor shut down, the closed cooling system suddenly no longer had anywhere to deposit its energy. This caused a significant spike in pressure in the pipes circulating water to cool the reactor. Plants of this type are outfitted with a large tank of water designed to regulate this pressure; it’s called a pressurizer. Like a bob on a fishing line, the pressurizer water level rises and falls to keep the pressure consistent. When it gets too high, a valve opens to release some of that pressure. During the initial phase of the accident, this safety valve did something it wasn’t supposed to do: it stayed open after the pressure had been relieved. Operators can fix a stuck pilot-operated relief valve, as this pesky component is called. But the people running the plant were let down by their instruments. The control panel, with all its lights, knobs, and switches, told them the valve had closed.

The open valve allowed essential water to pour out of the pressurizer, draining the reactor vessel, exposing the nuclear fuel to air. These hot fuel rods now lacked the necessary cooling to keep from melting.

Seeing the pressurizer appear to go solid—as they were taught to expect—the operators reduced the water in the reactor cooling system. This made the reactor fuel even hotter. As the pressure dropped throughout the system, the immense pumps that circulate water through the plant began to vibrate fiercely. To protect the pumps, the operators turned them on and off, further reducing the heat removal capability of the limited amount of water left in the reactor vessel. The fuel began to melt, releasing a burst of radioactive material into the containment structure.

By evening the reactor’s normal cooling had been restored, but the damage was done.

Outside the walls of the Three Mile Island plant, the confusion was just beginning.

The first signal that something serious might be happening came when a general emergency (the highest level of safety alert) was declared around 7:00 a.m. Because of ineffective communication, however, this alert did not reach the NRC’s regional staff outside Philadelphia for another forty-five minutes. Contacting government officials—even in an emergency—is never easy, and this was before cell phones and text messaging. Since the NRC rarely required power plants to notify the agency about less significant issues, these communication challenges were only now becoming apparent. It would take a few more hours before the White House learned about the situation. Nothing about this communication failure is unique. As I learned in the wake of the Fukushima accident, crises on this scale are often characterized by incoherent communication and conflicting information. Both the Three Mile Island and Fukushima disasters featured contradictory assessments of the state of the reactor, a limited appreciation of the fact that the damage to the reactor had occurred very early, and rapidly changing statements from elected officials. To the public, these statements can appear to suggest prevarication or incompetence. But when government officials—imperfect human beings like everyone else—try to make sense of the complicated physics of a nuclear reactor accident, they will invariably make mistakes in communication

After a general emergency was declared at the Three Mile Island plant, the governor of Pennsylvania, Dick Thornburgh, chose not to execute an evacuation. Although state officials are responsible for such decisions, they rarely have the background in nuclear technology to accurately assess the situation and instead rely on experts at the plant or the NRC, who are also scrambling to understand what is going on. Of course, communication between these disparate groups is never perfect. Elected officials in Harrisburg received updates from the press instead of the plant.  

The accident was over, but more than ten years would pass before the plant would be cleaned up. Over $1 billion was spent to recover and dispose of the damaged reactor fuel. The nation may have avoided a nuclear catastrophe, but the costs were high—and Americans had lost confidence in nuclear power.

The Three Mile Island accident exposed serious weaknesses in the control rooms, communication and safety systems, and operations of nuclear power plants, leading the NRC to add or modify countless regulations to address these shortcomings. Control room layouts, emergency procedures, and operations practices were changed. More alerts and information panels dotted the control boards.

Chernobyl

The Soviet Union’s nuclear plants were also technologically and operationally different from most in the West, which meant that what went wrong at Chernobyl did not exactly apply elsewhere. While water, for example, performs many of the operational and safety functions in American reactor systems, the Chernobyl reactor relied on graphite, which significantly increased the accident’s radiation contamination. What’s more, the accident read like a handbook of everything not to do when operating a nuclear power plant. Even the most ardent nuclear opponents would have had a hard time believing the people who controlled nuclear plants in the West would be so careless.

Chernobyl was not used as a learning opportunity. The NRC’s final assessment of the disaster found that no changes should be required by American plants. The world’s most significant commercial nuclear reactor accident would have no discernible impact on the nuclear industry in the United States.

Davis-Besse

Before Fukushima, the most prominent nuclear incident in recent times took place at the Davis-Besse nuclear power plant near Toledo, Ohio. As so often happens, Davis-Besse’s problem had begun years before it was finally discovered. The designers of the first wave of nuclear plants had limited experience with the metals and other materials used to build these structures, so some of their choices turned out to perform worse than expected in the high heat, harsh radiation, and extreme chemical environment of nuclear reactors.

Throughout that decade each additional probe into Alloy 600 conditions had identified new physical evidence suggesting the problem was worse than the models and many nuclear safety professionals had predicted. This is one of the more important implications of Davis-Besse: despite decades spent evaluating nuclear reactors, we can always discover new problems that surprise us. This challenges the idea that professionals can ever really know for sure what’s safe when it comes to a nuclear plant.

After the NRC’s first formal notice about the vulnerability of Alloy 600, plants responded in a variety of ways. Some made modifications quickly; others asked for more time. This second approach is typical in the nuclear industry. No issue ever appears to be pressing because there is a mistaken belief that early warnings inside the plants themselves will always preface a major incident. Leaks will appear well before pipes ever break. Inspections will catch cracks before they grow big enough to affect the performance of vital safety equipment. Fires will be caught and extinguished before they can spread. The operators of the Davis-Besse plant shared this complacency.

The issue at Davis-Besse started with the reactor pressure vessel head, which had parts made of Alloy 600. This large steel lid caps the container housing the reactor fuel, making it one of the most important barriers keeping radioactive material out of the environment. Like most barriers in a nuclear plant, the vessel head has openings to allow equipment to access the reactor fuel and measure the status of the reactor engine. One of these penetrations that dot the top of the lid like a series of chimneys was severely corroded. The cause of the corrosion was boric acid, which had leaked through cracks in the Alloy 600. (Boric acid is added to the water used to cool the reactor to help control the nuclear fission process.) The corrosion made the surface of the metal look like popcorn—not a difficult sign to spot.

Indeed the signs of boric acid corrosion are so unmissable that the NRC was confident operators would notice any prospective problem long before it posed a hazard. But at Davis-Besse, if anyone noticed, no one said a word. Earlier in 2001 the NRC had asked all plants to send data on the conditions of parts made from Alloy 600 and the ability of inspection programs to identify cracks long before they became a cause for alarm. This information was due in December. But Davis-Besse delayed responding to the agency’s request. The operators planned to gather the information the following spring, when the plant would shut down to perform routine maintenance.

Worried about the risk of waiting until spring, the NRC ordered Davis-Besse to stop operations.

Subsequent inspections revealed extensive damage: the six-inch steel vessel head had corroded away completely. During the inspection, the chimney-like protrusion where the leak originated toppled over like a domino, hitting the one next to it. The only remaining barrier to the reactor was a thin piece of steel not designed to hold back the pressures that would come during operation. Had Davis-Besse been in operation, a significant accident would likely have occurred.

The incident was a tremendous embarrassment to the industry and the agency. Warning sign after warning sign from inspection after inspection had indicated that there was a leak in the reactor pressure vessel head, yet neither the NRC nor the plant owner took action. While the Three Mile Island accident was the result of a minor equipment malfunction followed by human error, the problem at Davis-Besse was in some ways much more serious. The damage to the reactor vessel was so significant that had the thin steel liner failed, there would have been no easy remedy, no matter what the operators did.

There followed the usual round of hand-wringing, report writing, and penance serving. The Davis-Besse plant owners received a record fine of $5.5 million from the agency and $28 million from the Department of Justice, a pittance compared to the cost of the accident that would likely have occurred. At a time when some nuclear plants were generating profits of nearly $1 million a day, this was hardly a significant penalty. No senior executives were held responsible.

The NRC launched a massive effort, the Davis-Besse Lessons Learned Task Force, to try to prevent this kind of systematic human failure from happening again. The program lasted for more than a decade, well into the time I served on the commission. It is difficult to prevent the kinds of systematic failures that characterized the Davis-Besse accident, especially since the false information provided by the people criminally charged made it harder to identify what actually went wrong.

Fire is one of the biggest hazards inside a nuclear plant. With duplicate and triplicate safety systems throughout, the worst dangers come from events that can take out all these systems at one time—a “common cause failure” in industry jargon. A plant’s maze of hallways and passageways provides an easy environment for heat and flame to sweep through, causing potentially unfixable damage to safety systems. The flames’ most vulnerable targets are the data and power cables that supply information about vital plant systems and make those systems work. In the late 1990s, calculation after calculation by modern computer models confirmed that fire brought the most significant risk of complete breakdown at many nuclear power plants. Yet the industry and the regulators were slow to grasp the importance of these models, so slow that by the time I became NRC chairman in 2009 this issue was still unresolved.

My attempt to improve the ability of nuclear power plants to deal with fires turned into a drama featuring industry foot-dragging, obfuscation, and downright resistance.

Despite their formidable size, the containment structures of many nuclear power plants, designed to corral dangerous radiation in the event of an accident, are punctured by vents and ducts. These penetration points are the weak spots that can undermine an otherwise airtight containment shell. A leak in one of these areas is a significant problem.

The workers were searching for a possible leak in the walls separating the reactor from the public. To determine the location of a draft—which could serve as an escape route for dangerous radioactive material—a technician held a candle up to places where there might be holes and watched to see if the flame wiggled in the slight breeze of outward-flowing air. While performing this low-tech examination, the technician held the candle too close to a nearby cable; its insulation started to burn. Over the next several hours, the fire raced along cables like a fuse on a stick of dynamite in a cartoon, taking out not only many of the safety systems of the reactor where the fire occurred, but also those of a second reactor whose cables shared this spreading room. As the fire burned the plastic insulation coating off the cables, the raw metal wire—now exposed—could easily touch other wires, leading to electrical shorts that disabled vital safety equipment. It took hours for plant engineers and operators to determine how best to arrest the blaze, confusion that wasted precious time and allowed more and more systems to burn. As we all learn as children, water and live electric wires can be a dangerous combination, and so the plant operators feared that water used to douse the flames would react with the exposed wiring of the now-burned cables. Eventually they did use water, and the fire was extinguished, but not before causing significant damage to the plant’s vital systems, despite the fact that the actual fire progressed only a short distance. The primary emergency cooling systems were rendered useless, forcing the plant to shut down for over a year.

The incident alerted the industry and the NRC to the fact that fires could no longer be treated as merely a company problem. They were a public safety threat.

This realization led to a comprehensive rewrite of the agency’s fire safety standards—standards that would then go unenforced for decades.

After the Browns Ferry fire, the agency designed a straightforward approach to safeguard plants against a typical fire that could spread throughout the facility, wiping out many systems. The rules were simple, so simple that I could easily remember and recite them. As the Browns Ferry fire showed, the plant’s most vulnerable elements were the power and control cables that ran throughout the building like nerves in the human body. To address this, the new deterministic rules called for separation: keeping combustibles far away from one another. That way a fire confined to one spot might disable some but not all of the safety systems in a plant. The problem was that not all systems could be separated. Unless plants were going to be completely redesigned to isolate each independent safety system in a separate control room, all the cables for all the equipment would coalesce in one room. This meant that in addition to separating everything that could be separated, you needed a way to prevent fires from spreading in places where you could not achieve separation. So the agency added another requirement: systems that could not be sufficiently separated had to be protected against fires. Either safety systems had to be separated from one another by twenty feet, or the plant had to have each system protected by a barrier that could withstand a nearby fire for three hours, or the plant had to have systems protected by a barrier that could withstand a fire for one hour if there was also a fire suppression and detection system nearby. There was one more requirement too: there had to be an alternate control room in case the main control room was disabled.
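
[My comment: to make the structure of those options concrete, here is a small sketch of the logic as described above — my own illustration only; the actual regulation (10 CFR Part 50, Appendix R) contains far more detail and many exemption paths.]

    def train_protected(separation_ft, barrier_rating_hr, detection_and_suppression):
        """One redundant safety train is acceptable if it meets any of the three options."""
        return (separation_ft >= 20                       # option 1: 20 feet of separation
                or barrier_rating_hr >= 3                 # option 2: 3-hour fire barrier
                or (barrier_rating_hr >= 1 and detection_and_suppression))  # option 3

    def plant_compliant(trains, alternate_control_room):
        return alternate_control_room and all(train_protected(*t) for t in trains)

    # Example: one train relies on distance, another on a 1-hour barrier plus suppression.
    print(plant_compliant([(25, 0, False), (10, 1, True)], alternate_control_room=True))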

In principle, twenty feet of separation between vital safety equipment seems reasonable; if one piece of equipment is fifteen feet away from another, simply move one of them another five feet. But this becomes difficult when the room the equipment is in is only fifteen feet wide. And if the room is locked in like the middle piece in a jigsaw puzzle amid other rooms inside the fortress that is a nuclear power plant, then moving walls to accommodate a greater need for separation is nearly impossible.

So almost as soon as the new fire safety rules were enacted, the industry challenged them in court as unworkable—not to mention a financial burden.

Finally, after years of debate, the courts eventually upheld the rules put in place after Browns Ferry, but only because the NRC promised to be flexible, allowing companies exemptions to pursue alternative approaches to preventing fires from spreading. And so the great fire regulation exemption marathon began. Over the subsequent decades, some plants would have hundreds of exemptions, many of them never even reviewed by the NRC.

Compare, for example, the threat of nuclear disaster with other hazards, like driving a car. Surely, the nuclear power supporters argued, a public that understood they were more likely to die in a car accident than from an accident at a nuclear power plant would come to embrace nuclear technology.

During Senator Pete Domenici’s push to weaken the authority of the NRC in the late 1990s, he advocated for more reliance on voluntary, risk-informed, performance-based standards, shifting the responsibility for oversight from the agency to the industry.

The NRC at the time agreed with Domenici. When I joined the commission in 2005, it was still trying to encourage power plant owners to adopt these voluntary safety standards. Of course, voluntary standards would be accepted only if they worked in the industry’s favor—namely, when they reduced regulation and saved money. In contrast, the new fire protection rules determined by computer modeling would cost money—tens of millions of dollars per plant—making them unattractive to most power plant owners.

It’s worth emphasizing: these were fire safety regulations the nuclear power industry itself had developed. Why was it so difficult to convince them to support their own standards? Because the nation had been living with imperfect fire safety regulations for 30 years. Waiting a little longer couldn’t hurt. Also, it was hard to find safety experts who understood the new cutting-edge simulations, another good reason to delay.
