World’s Oceans Are Losing Oxygen Rapidly

Preface. Yikes, add deoxygenation to your list of worries. Oxygen levels in the world’s oceans declined by roughly 2% between 1960 and 2010. The decline was largely due to climate change, though other human activities, such as nutrient runoff from farms into waterways, added to the problem.

That’s a deadly big deal. An increase in the water temperature of the world’s oceans of around six degrees Celsius — which some scientists predict could occur as soon as 2100 — could stop oxygen production by phytoplankton by disrupting photosynthesis. About two-thirds of the planet’s total atmospheric oxygen is produced by ocean phytoplankton, so a cessation would deplete atmospheric oxygen on a global scale, resulting in a mass die-off of humans and other creatures (Sekerci 2015).

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer), Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

***

Pierre-Louis, K. 2019. World’s Oceans Are Losing Oxygen Rapidly, Study Finds. New York Times.

The world’s oceans are gasping for breath, a report issued Saturday at the annual global climate talks in Madrid has concluded.

The report represents the combined efforts of 67 scientists from 17 countries and was released by the International Union for Conservation of Nature. It found that oxygen levels in the world’s oceans declined by roughly 2 percent between 1960 and 2010. The decline, called deoxygenation, is largely attributed to climate change, although other human activities are contributing to the problem. One example is so-called nutrient runoff, when too many nutrients from fertilizers used on farms and lawns wash into waterways.

Water holds less oxygen by volume than air does. And as ocean temperatures increase, the warmer water can’t hold as much gas, including oxygen, as cooler water.  Warming temperatures also affect the ability of ocean water to mix, so that the oxygen absorbed on the top layer doesn’t properly get down into the deeper ocean. And what oxygen is available gets used up more quickly because marine life uses more oxygen when temperatures are warmer.
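The temperature effect can be illustrated with approximate dissolved-oxygen saturation values for fresh water. These are illustrative textbook numbers, not figures from the report, and the exact values vary with salinity and pressure:

```python
# Approximate dissolved-oxygen saturation in fresh water at 1 atm (mg/L).
# Illustrative values only: warmer water holds noticeably less oxygen.
o2_saturation_mg_per_l = {0: 14.6, 10: 11.3, 20: 9.1, 30: 7.6}

for temp_c, o2 in sorted(o2_saturation_mg_per_l.items()):
    loss_vs_0c = (1 - o2 / o2_saturation_mg_per_l[0]) * 100
    print(f"{temp_c:2d} C: {o2:5.1f} mg/L ({loss_vs_0c:4.1f}% less than at 0 C)")
```

Seawater holds less oxygen than fresh water at the same temperature, but the downward trend with warming is the same, and that trend is the mechanism the article describes.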

The decline might not seem significant because, “we’re sort of sitting surrounded by plenty of oxygen and we don’t think small losses of oxygen affect us,” said Dan Laffoley, the principal adviser in the conservation union’s global marine and polar program and an editor of the report. “But if we were to try and go up Mount Everest without oxygen, there would come a point where a 2 percent loss of oxygen in our surroundings would become very significant.”

“The ocean is not uniformly populated with oxygen,” he added. One study in the journal Science, for example, found that water in some parts of the tropics had experienced a 40 to 50 percent reduction in oxygen.

We see this along the coast of California with these mass fish die-offs as the most dramatic example of this kind of creep of deoxygenation on the coastal ocean.

According to Dr. Laffoley, if the heat absorbed by the oceans since 1955 had gone into the lower levels of the atmosphere instead, land temperatures would be warmer by 65 degrees Fahrenheit, or 36 degrees Celsius.

References

Sekerci, Y., et al. 2015. Mathematical Modelling of Plankton–Oxygen Dynamics Under the Climate Change. Bulletin of Mathematical Biology.


Abrupt Impacts of Climate Change


Preface. This is a summary of the National Research Council’s 2013 study of abrupt impacts of climate change.

Related:

2019-12-6. Research reveals past rapid Antarctic ice loss due to ocean warming.  “…the sensitive West Antarctic Ice Sheet collapsed during a warming period just over a million years ago when atmospheric carbon dioxide levels were lower than today.”

2015-8-5. The Point of No Return: Climate Change Nightmares Are Already Here.  The worst predicted impacts of climate change are starting to happen — and much faster than climate scientists expected. Rolling Stone.


***

NRC. 2013. Abrupt Impacts of Climate Change: Anticipating Surprises. National Research Council, National Academies Press.

Abrupt climate change is generally defined as occurring when some part of the climate system passes a threshold or tipping point, resulting in a rapid change that produces a new state lasting decades or longer (Alley et al., 2003). In this case “rapid” refers to timescales of a few years to decades.

Abrupt climate change can occur on a regional, continental, hemispheric, or even global basis. Even a gradual forcing of a system with naturally occurring and chaotic variability can cause some part of the system to cross a threshold, triggering an abrupt change. Therefore, it is likely that gradual or monotonic forcings increase the probability of an abrupt change occurring.

Climate is changing, forced out of the range of the last million years by levels of carbon dioxide and other greenhouse gases not seen in Earth’s atmosphere for a very long time.

It is clear that the planet will be warmer, sea level will rise, and patterns of rainfall will change. But the future is also partly uncertain—there is considerable uncertainty about how we will arrive at that different climate. Will the changes be gradual, allowing natural systems and societal infrastructure to adjust in a timely fashion? Or will some of the changes be more abrupt, crossing some threshold or “tipping point” to change so fast that the time between when a problem is recognized and when action is required shrinks to the point where orderly adaptation is not possible?

A study of Earth’s climate history suggests the inevitability of “tipping points”— thresholds beyond which major and rapid changes occur when crossed—that lead to abrupt changes in the climate system.

The history of climate on the planet—as read in archives such as tree rings, ocean sediments, and ice cores—is punctuated with large changes that occurred rapidly, over the course of decades to as little as a few years.

There are many potential tipping points in nature, as described in this report, and many more that we humans create in our own systems. The current rate of carbon emissions is changing the climate system at an accelerating pace, making the chances of crossing tipping points all the more likely.

Scientific research has already helped us reduce this uncertainty in two important cases: potential abrupt changes in ocean deep water formation and the release of carbon from frozen soils and ices in the polar regions, once of serious near-term concern, are now understood to be less imminent, although still worrisome as slow changes over longer time horizons. In contrast, the potential for abrupt changes in ecosystems, weather and climate extremes, and groundwater supplies critical for agriculture now seems more likely, severe, and imminent.

In addition to a changing climate, multiple other stressors are pushing natural and human systems toward their limits, making them more sensitive to small perturbations that can trigger large responses. Groundwater aquifers, for example, are being depleted in many parts of the world, including the southeast of the United States. Groundwater is critical for farmers to ride out droughts, and if that safety net comes to an abrupt end, the impact of droughts on the food supply will be even larger.

Levels of carbon dioxide and other greenhouse gases in Earth’s atmosphere are exceeding levels recorded in the past millions of years, and thus climate is being forced beyond the range of the recent geological era.

The paleoclimate record—information on past climate gathered from sources such as fossils, sediment cores, and ice cores—contains ample evidence of abrupt changes in Earth’s ancient past, including sudden changes in ocean and air circulation and abrupt extinction events. One such abrupt change came at the end of the Younger Dryas, a period of cold climatic conditions and drought in the north that occurred about 12,000 years ago. Following a millennium-long cold period, the Younger Dryas terminated abruptly, in a few decades or less, and is associated with the extinction of 72 percent of the large-bodied mammals in North America. Some abrupt climate changes are already underway, including the rapid decline of Arctic sea ice over the past decade due to warmer polar temperatures.

Scientific research has advanced sufficiently that it is possible to assess the likelihood of some abrupt changes; for example, the probability of a rapid shutdown of the Atlantic Meridional Overturning Circulation (AMOC) within this century is now understood to be low.

Human infrastructure is built with certain expectations of useful life expectancy, but even gradual climate changes may trigger abrupt thresholds in their utility, such as rising sea levels surpassing sea walls or thawing permafrost destabilizing pipelines, buildings, and roads.

The primary timescale of concern is years to decades. A key characteristic of these changes is that they can come faster than expected, planned, or budgeted for, forcing more reactive, rather than proactive, modes of behavior.

Table S.1 summarizes the state of knowledge about potential abrupt changes. This table includes potential abrupt changes to the ocean, atmosphere, ecosystems, and high-latitude regions that are judged to meet the above criteria. For each abrupt change, the Committee examined the available evidence of potential impact and likelihood. Some abrupt changes are likely to occur within this century—making these changes of most concern for near-term societal decision making and a priority for research.

[Table S.1 is not reproduced here; its footnotes follow.]

1 Change could be either abrupt or non-abrupt.

2 The Committee assesses the near-term outlook that sea level will rise abruptly before the end of this century as Low; this is not in contradiction to the assessment that sea level will continue to rise steadily, with estimates of between 0.26 and 0.82 m by the end of this century (IPCC, 2013).

3 Methane is a powerful but short-lived greenhouse gas

4 Limited by ability to predict methane production from thawing organic carbon

5 No mechanism proposed would lead to abrupt release of substantial amounts of methane from ocean methane hydrates this century.

6 Limited by uncertainty in hydrate abundance in near-surface sediments, and the fate of CH4 once released

7 Species distribution models (Thuiller et al., 2006) indicate that 10–40% of mammals now found in African protected areas will be extinct or critically endangered by 2080 as a result of modeled climate change. Analyses by Foden et al. (2013) and Ricke et al. (2013) suggest 41% of bird species, 66% of amphibian species, and between 61% and 100% of corals that are not now considered threatened with extinction will become threatened due to climate change sometime between now and 2100.

Disappearance of Late-Summer Arctic Sea Ice

Recent dramatic changes in the extent and thickness of the ice that covers the Arctic sea have been well documented. Satellite data for late summer (September) sea ice extent show natural variability around a clearly declining long-term trend (Figure S.1). This rapid reduction in Arctic sea ice already qualifies as an abrupt change with substantial decreases in ice extent occurring within the past several decades. Projections from climate models suggest that ice loss will continue in the future, with the full disappearance of late-summer Arctic sea ice possible in the coming decades. The impacts of rapid decreases in Arctic sea ice are likely to be considerable. More open water conditions during summer would have potentially large and irreversible effects on various components of the Arctic ecosystem, including disruptions in the marine food web, shifts in the habitats of some marine mammals, and erosion of vulnerable coastlines. Because the Arctic region interacts with the large-scale circulation systems of the ocean and atmosphere, changes in the extent of sea ice could cause shifts in climate and weather around the northern hemisphere. The Arctic is also a region of increasing economic importance for a diverse range of stakeholders, and reductions in Arctic sea ice will bring new legal and political challenges as navigation routes for commercial shipping open and marine access to the region increases for offshore oil and gas development, tourism, fishing and other activities.

Increases in Extinction Threat for Marine and Terrestrial Species

The rate of climate change now underway is probably as fast as any warming event in the past 65 million years, and its pace over the next 30 to 80 years is projected to be even faster and more intense. These rapidly changing conditions make survival difficult for many species. Biologically important climatic attributes—such as the number of frost-free days, the length and timing of growing seasons, and the frequency and intensity of extreme events (such as the number of extremely hot days or severe storms)—are changing so rapidly that some species can neither move nor adapt fast enough.

The distinct risks of climate change exacerbate other widely recognized and severe extinction pressures, especially habitat destruction, competition from invasive species, and unsustainable exploitation of species for economic gain, which have already elevated extinction rates to many times above background rates. If unchecked, habitat destruction, fragmentation, and over-exploitation, even without climate change, could result in a mass extinction within the next few centuries equivalent in magnitude to the one that wiped out the dinosaurs. With the ongoing pressures of climate change, comparable levels of extinction conceivably could occur before the year 2100; indeed, some models show a crash of coral reefs from climate change alone as early as 2060 under certain scenarios. Loss of a species is permanent and irreversible, and has both economic impacts and ethical implications. The economic impacts derive from loss of ecosystem services, revenue, and jobs, for example in the fishing, forestry, and ecotourism industries. Ethical implications include the permanent loss of irreplaceable species and ecosystems as the current generation’s legacy to the next generation.

Abrupt Changes of Unknown Probability

Destabilization of the West Antarctic Ice Sheet

The volume of ice sheets is controlled by the net balance between mass gained (from snowfall that turns to ice) and mass lost (from iceberg calving and the runoff of meltwater from the ice sheet). Scientists know with high confidence from paleoclimate records that during the planet’s cooling phases, water from the ocean is traded for ice on land, lowering sea level by tens of meters or more, and during warming phases, land ice is traded for ocean water, raising sea level, again by tens of meters or more. The rates of ice and water loss from ice stored on land directly affect the speed of sea level rise, which in turn directly affects coastal communities. Of greatest concern among the stocks of land ice are those glaciers whose bases are well below sea level, which includes most of West Antarctica, as well as smaller parts of East Antarctica and Greenland. These glaciers are sensitive to warming oceans, which help to thermally erode their base, as well as rising sea level, which helps to float the ice, further destabilizing them. Accelerated sea level rise from the destabilization of these glaciers, with rise rates several times faster than those observed today, is a scenario that has the potential for very serious consequences for coastal populations, but its probability is currently not well known.

Research to understand ice sheet dynamics is particularly focused on the boundary between the floating ice and the grounded ice, usually called the grounding line (see Figure S.3). The exposed surfaces of ice sheets are generally warmest on ice shelves, because these sections of ice are at the lowest elevation, furthest from the cold central region of the ice mass and closest to the relatively warmer ocean water. Locations where meltwater forms on the ice shelf surface can wedge open crevasses and cause ice-shelf disintegration—in some cases, very rapidly.

Because air carries much less heat than an equivalent volume of water, physical understanding indicates that the most rapid melting of ice leading to abrupt sea-level rise is restricted to ice sheets flowing rapidly into deeper water capable of melting ice rapidly and carrying away large volumes of icebergs. In Greenland, such deep water contact with ice is restricted to narrow bedrock troughs where friction between ice and fjord walls limits discharge. Thus, the Greenland ice sheet is not expected to destabilize rapidly within this century. However, a large part of the West Antarctic Ice Sheet (WAIS), representing 3–4 m of potential sea-level rise, is capable of flowing rapidly into deep ocean basins. Because the full suite of physical processes occurring where ice meets ocean is not included in comprehensive ice-sheet models, it remains possible that future rates of sea-level rise from the WAIS are underestimated, perhaps substantially.

Abrupt Changes Unlikely to Occur This Century

These include disruption to the Atlantic Meridional Overturning Circulation (AMOC) and potential abrupt changes of high-latitude methane sources (permafrost soil carbon and ocean methane hydrates). Although the Committee judges the likelihood of an abrupt change within this century to be low for these processes, should they occur even next century or beyond, there would likely be severe impacts. Furthermore, gradual changes associated with these processes can still lead to consequential changes.

However, it is important to keep a close watch on this system and to make observations of the North Atlantic to monitor how the AMOC responds to a changing climate, both because slow changes are likely to have real impacts and to update our understanding of the slight possibility of a major event.

Potential Abrupt Changes due to High-Latitude Methane

Large amounts of carbon are stored at high latitudes in potentially labile reservoirs such as permafrost soils and methane-containing ices called methane hydrate or clathrate, especially offshore in ocean marginal sediments. Owing to their sheer size, these carbon stocks have the potential to massively affect Earth’s climate should they somehow be released to the atmosphere. An abrupt release of methane is particularly worrisome because methane is many times more potent than carbon dioxide as a greenhouse gas over short time scales. Furthermore, methane is oxidized to carbon dioxide in the atmosphere, representing another carbon dioxide pathway from the biosphere to the atmosphere.

According to current scientific understanding, Arctic carbon stores are poised to play a significant amplifying role in the century-scale buildup of carbon dioxide and methane in the atmosphere, but are unlikely to do so abruptly, i.e., on a timescale of one or a few decades.

Although comforting, this conclusion is based on immature science and sparse monitoring capabilities. Basic research is required to assess the long-term stability of currently frozen Arctic and sub-Arctic soil stocks, and the possibility of increased release of methane gas bubbles from currently frozen marine and terrestrial sediments as temperatures rise.

The Committee examined a number of other possible changes. These included sea level rise due to thermal expansion or ice sheet melting (except WAIS—see above), decrease in ocean oxygen (expansion in oxygen minimum zones (OMZs)), changes to patterns of climate variability, changes in heat waves and extreme precipitation events (droughts/floods/ hurricanes/major storms), disappearance of winter Arctic sea ice (distinct from late summer Arctic sea ice—see above), and rapid state changes in ecosystems, species range shifts, and species boundary changes.

Early studies of ice cores showed that very large changes in climate could happen in a matter of a few decades or even years, for example, local to regional temperature changes of a dozen degrees or more, a doubling or halving of precipitation rates, and dust concentrations changing by orders of magnitude.

What has become clearer recently is that the issue of abrupt change cannot be confined to a geophysical discussion of the climate system alone. The key concerns are not limited to large and abrupt shifts in temperature or rainfall, for example, but also extend to other systems that can exhibit abrupt or threshold-like behavior even in response to a gradually changing climate. The fundamental concerns with abrupt change include those of speed—faster changes leave less time for adaptation, either economically or ecologically—and of magnitude—larger changes require more adaptation and generally have greater impact.

This report offers an updated look at the issue of abrupt climate change and its potential impacts, and takes the added step of considering not only abrupt changes to the climate system itself, but also abrupt impacts and tipping points that can be triggered by gradual changes in climate. This examination of the impacts of abrupt change brings the discussion into the human realm, raising questions such as: Are there potential thresholds in society’s ability to grow sufficient food? Or to obtain sufficient clean water? Are there thresholds in the risk to coastal infrastructure as sea levels rise?

Bark beetles are a natural part of forested ecosystems, and infestations are a regular force of natural change. In the last two decades, though, the bark beetle infestations that have occurred across large areas of North America have been the largest and most severe in recorded history, killing millions of trees across millions of hectares of forest from Alaska to southern California (Bentz, 2008); see Figure B. Bark beetle outbreak dynamics are complex, and a variety of circumstances must coincide and thresholds must be surpassed for an outbreak to occur on a large scale. Climate change is thought to have played a significant role in these recent outbreaks by maintaining temperatures above a threshold that would normally lead to cold-induced mortality.

When there are consecutive warm years, this can speed up reproductive cycles and increase the likelihood of outbreaks (Bentz et al., 2010). Similar to many of the issues described in this report, climate change is only one contributing factor to these types of abrupt climate impacts, with other human actions such as forest history and management also playing a role.

They noted that events that did not meet the common criterion of a semi-permanent change in state could still force other systems into a permanent change, and thus qualify as an abrupt change. For example, a mega-drought may be followed by the return of normal precipitation rates, such that no baseline change occurred, but if that drought caused the collapse of a civilization, a permanent, abrupt change occurred in the system impacted by climate.

The 2002 NRC study introduced the important issue of gradual climate change causing abrupt responses in human or natural systems, noting “Abrupt impacts therefore have the potential to occur when gradual climatic changes push societies or ecosystems across thresholds and lead to profound and potentially irreversible impacts.” The 2002 report also noted that “…the more rapid the forcing, the more likely it is that the resulting change will be abrupt on the time scale of human economies or global ecosystems” and “The major impacts of abrupt climate change are most likely to occur when economic or ecological systems cross important thresholds.”

Changes occurring over a few decades, i.e., a generation or two, begin to capture the interest of most people because it is a time frame that is considered in many personal decisions and relates to personal memories. Also, at this time scale, changes and impacts can occur faster than the expected, stable lifetime of systems about which society cares. For example, the sizing of a new air conditioning system may not take into consideration the potential that climate change could make the system inadequate and unusable before the end of its useful lifetime (often 30 years or more). The same concept applies to other infrastructure, such as airport runways, subway systems, and rail lines. Thus, even if a change is occurring over several decades, and therefore might not at first glance seem “abrupt,” if that change affects systems that are expected to function for an even longer period of time, the impact can indeed be abrupt when a threshold is crossed. “Abrupt” then, is relative to our “expectations,” which for the most part come from a simple linear extrapolation of recent history, and “expectations” invoke notions of risk and uncertainty. In such cases, it is the cost associated with unfulfilled expectations that motivates discussion of abrupt change. Finally, changes occurring over one to a few years are abrupt, and for most people, would also be alarming if sufficiently large and impactful.

The rate of greenhouse gas addition to the atmosphere continues to increase, with many policies in place that accelerate rising greenhouse gases (IMF, 2013). It is sobering to consider that about one-fifth of all fossil fuels ever burned were burned since the 2002 report was released. The sum of global emissions from 1751 through 2009 inclusive is 355,676 million metric tons of carbon; the sum from 2002 through 2009 inclusive is 64,788 million metric tons (Boden et al., 2011). Emissions during 2002–2009 thus account for more than 18% of the 1751–2009 total.
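The 18 percent figure can be checked directly from the two totals the report cites:

```python
# Emissions totals in million metric tons of carbon (Boden et al., 2011).
total_1751_2009 = 355_676   # all emissions, 1751-2009 inclusive
total_2002_2009 = 64_788    # emissions since the 2002 report, 2002-2009

share = total_2002_2009 / total_1751_2009
print(f"2002-2009 share of all emissions since 1751: {share:.1%}")  # ~18.2%
```

About 18 percent, consistent with the “about one-fifth” framing in the text.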

Abrupt Changes of Primary Concern

These are the changes currently believed to be the most likely and the most impactful, those predicted to potentially cause severe impacts but with uncertain likelihood, or those considered unlikely to occur but widely discussed in the literature or media.

It is very unlikely that the AMOC will undergo an abrupt transition or collapse in the 21st century. Delworth et al. (2008) pointed out that for an abrupt transition of the AMOC to occur, the sensitivity of the AMOC to forcing would have to be far greater than that seen in current models. Alternatively, significant ablation of the Greenland ice sheet greatly exceeding even the most aggressive of current projections would be required. As noted in the ice sheet section later in this chapter, Greenland ice holds about 7.3 m equivalent of sea level rise, which, if melted over 1000 years, yields an annual rise rate of about 7 mm/yr from Greenland alone, roughly twice today’s rate from all sources, and more than 10 times faster than the rate from Greenland over 2000–2011 (Shepherd et al., 2012). Although neither possibility can be excluded entirely, it is unlikely that the AMOC will collapse before the end of the 21st century because of global warming.

Rising sea level increases the likelihood that a storm surge will overtop a levee or damage other coastal infrastructure, such as coastal roads, sewage treatment plants, or gas lines—all with potentially large, expensive, and immediate consequences.

A separate but key question is whether sea-level rise itself can be large, rapid, and widespread. In this regard, the rate of change is assessed relative to the rate of societal adaptation. Available scientific understanding does not answer this question fully, but observations and modeling studies do show that a much faster sea-level rise than that observed recently (~3 mm/yr over recent decades) is possible (Cronin, 2012). Rates peaked at more than 10 times the modern rate during Meltwater Pulse 1A, during the warming from the most recent ice age, a time with more ice on the planet to contribute to sea-level rise but slower forcing than the human-caused rise in CO2 (Figures 2.5 and 2.6). One could term a rise “rapid” if the response or adaptation time is significantly longer than the rise time. For example, a rise rate of 15 mm/yr (within the range of projections) could qualify as rapid.
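The “rapid relative to adaptation” idea can be sketched numerically. The 100-year adaptation timescale below is an illustrative assumption, not a figure from the report:

```python
# Years required for 1 m of sea-level rise at the two rates mentioned in
# the text, compared against a notional adaptation timescale.
rates_mm_per_yr = {"recent observed": 3.0, "faster scenario": 15.0}
target_rise_mm = 1000.0        # 1 m of rise
adaptation_time_yr = 100.0     # illustrative assumption

for label, rate in rates_mm_per_yr.items():
    years = target_rise_mm / rate
    verdict = "rapid" if years < adaptation_time_yr else "gradual"
    print(f"{label}: 1 m in {years:.0f} yr -> {verdict}")
```

At 3 mm/yr a meter of rise takes over three centuries; at 15 mm/yr it arrives in under 70 years, well inside the lifetime of coastal infrastructure.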

Projections of sea-level rise remain notably uncertain even if the increase in greenhouse gases is specified accurately, but many recently published estimates include within their range of possibilities a rise of 1 m by the end of this century (reviewed by Moore et al., 2013). For low-lying metropolitan areas, such as Miami and San Francisco, such a rise could lead to significant flooding.

Thirty-nine percent of the US population lives in coastal shoreline counties. This population grew by 39 percent between 1970 and 2010, and is projected to grow by 8.3 percent by 2020. The population density of coastal counties is 446 people per square mile, over four times that of inland counties. Just under half of the annual GDP of the United States is generated in coastal shoreline counties, an annual contribution of $6.6 trillion in 2011. If counted as their own country, these counties would rank as the world’s third largest economy, after the United States and China. Some portions of these counties are well above sea level and not vulnerable to flooding (e.g., Cadillac Mountain, Maine, in Acadia National Park, at 470 m). But the interconnected nature of roads and other infrastructure within political divisions means that sea-level rise would cause problems even for the higher parts of these counties. The following statistics, from NOAA’s State of the Coast, highlight the wealth and infrastructure at risk from rising seas:

  • $6.6 trillion: Contribution to GDP of the coastal shoreline counties, just under half of US GDP in 2011.

  • 446 persons/mi2: Average population density of the coastal watershed counties (excluding Alaska). Inland density averages 61 persons per square mile.

In many cases, such areas would be difficult to defend by dikes and dams, and such a large sea level rise would require responses ranging from potentially large and expensive engineering projects to partial or near-complete abandonment of now-valuable areas as critical infrastructure such as sewer systems, gas lines, and roads is disrupted, perhaps crossing tipping points for adaptation (Kwadijk et al., 2010). Miami was founded little more than one century ago, and could face the possibility of sea level rise high enough to potentially threaten the city’s critical infrastructure in another century (Strauss et al., 2013). In terms of modern expectations for the lifetime of a city’s infrastructure, this is abrupt. If sometime in the coming centuries sea level should rise 20 to 25 m, as suggested for the Pliocene Epoch, 3 to 5 million years ago (see Figure 2.5), when CO2 is estimated to have had levels similar to today’s of roughly 400 parts per million, most of Delaware, the first State in the Union, would be under water without very large engineering projects (Figure B). In terms of the expected lifetime of a State, this could also qualify as abrupt.

FIGURE B: The long-term worst-case sea-level rise from ice sheets could be more than 60 m if all of Greenland’s and Antarctica’s ice melts. A 20 m rise, equivalent to loss of all of Greenland’s ice, all of the ice in West Antarctica, and some coastal parts of East Antarctica, is shown here. This may approximate sea level during the Pliocene Epoch (3–5 million years ago), the last time that CO2 levels are thought to have been 400 ppm. This figure emphasizes the large areas of coastal infrastructure that are potentially at risk if substantial ice sheet loss were to occur. SOURCE: http://geology.com/sea-level-rise/washington.shtml

In addition, compaction following removal of groundwater or fossil fuels, or possibly inflation from injection of fluids, may change land elevation.

Most mountain glaciers worldwide are losing mass, contributing to sea-level rise. However, the amount of water stored in this ice is estimated to be less than 0.5 m of sea-level equivalent (Lemke et al., 2007), so the contribution to sea-level rise cannot be especially large before the reservoir is depleted. On the other hand, the reservoir in the polar ice sheets is sufficient to raise global sea level by more than 60 m (Lemke et al., 2007).

Beyond some threshold of a few degrees Celsius of warming, Greenland’s ice sheet would be almost completely lost. However, the timescale for this is expected to be many centuries to millennia. Even so, it could produce a relatively rapid rate of sea-level rise. Greenland’s ice holds about 7.3 m of sea-level equivalent (Lemke et al., 2007), which, if melted over 1,000 years (a representative rather than limiting case), yields a rise of about 7 mm/yr from Greenland alone, slightly more than twice the recent rate of rise from all sources, including melting of Greenland’s ice.
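The arithmetic behind this comparison can be checked directly. A minimal sketch; the ~3 mm/yr figure for the recent rate of rise from all sources is an assumption, chosen to be consistent with the text’s “slightly more than twice as fast”:

```python
# Rate of sea-level rise if Greenland's ice were lost over a millennium.
greenland_sle_m = 7.3        # sea-level equivalent of Greenland ice (Lemke et al., 2007)
melt_duration_yr = 1000      # a representative rather than limiting case

rate_mm_per_yr = greenland_sle_m * 1000.0 / melt_duration_yr
print(rate_mm_per_yr)        # 7.3 mm/yr from Greenland alone

# Assumed recent rate of rise from ALL sources combined (~3 mm/yr);
# the Greenland-only scenario is slightly more than twice this.
recent_rate_mm_per_yr = 3.0
print(rate_mm_per_yr / recent_rate_mm_per_yr)
```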

Mass loss by flow of ice into the ocean is less well understood, and it is arguably the frontier of glaciological science where the most could be gained in understanding the threat to humans of rapid sea-level rise. Increased ice-sheet flow can raise sea level by shifting non-floating ice into icebergs or into floating-but-still-attached ice shelves, which can melt both from beneath and on the surface. Rapid sea-level rise from these processes is limited to regions where the bed of the ice sheet is well below sea level and thus capable of feeding ice shelves or directly calving icebergs rapidly, but this still represents notable potential contributions to sea-level rise, including the deep fjords in Greenland (roughly 0.5 m; Bindschadler et al., 2013), parts of the East Antarctic ice sheet (perhaps as much as 20 m; Fretwell et al., 2013), and especially parts of the West Antarctic ice sheet (just over 3 m).

The loss of land ice, particularly from marine-based ice sheets such as the West Antarctic Ice Sheet—possibly in response to gradual ocean warming—could trigger rates of sea-level rise much higher than the ongoing rate. Paleoclimatic rates at least 10 times larger than recent rates have been documented, and similar or possibly higher rates cannot be excluded in the future. This time scale is also roughly that of human-built infrastructure such as roads, water treatment plants, tunnels, and homes. Deep uncertainty persists about the likelihood of a rapid ice-sheet “collapse” contributing to a major acceleration of sea-level rise; for the coming century, the probability of such an event is generally considered to be low but not zero.

The impacts of ocean acidification on ocean biology have the potential to cause rapid (multi-decade) changes in ecosystems and to be irreversible when they contribute to extinction events. Specifically, the increase in CO2 and HCO3– availability might increase photosynthetic rates in some photosynthetic marine organisms, while the decrease in CO32– availability makes it increasingly difficult for calcifying organisms (such as some phytoplankton, corals, and bivalves) to build their calcareous shells and affects pH-sensitive physiological processes (NRC, 2010c, 2013). As such, ocean acidification could represent an abrupt climate impact when thresholds are crossed below which organisms lose the ability to create their shells by calcification, or when pH changes affect survival rates.

Of more immediate concern is the expansion of Oxygen Minimum Zones (OMZs). Photosynthesis in the sunlit upper ocean produces O2, which escapes to the atmosphere; it also produces particles of organic carbon that sink into deeper waters before they decompose and consume O2. The net result is a subsurface oxygen minimum, typically found at 200–1,000 m water depth, called an Oxygen Minimum Zone. Warming ocean temperatures lead to lower oxygen solubility. A warming surface ocean is also likely to increase the density stratification of the water column (i.e., Steinacher et al., 2010), altering the circulation and potentially increasing the isolation of waters in an OMZ from contact with the atmosphere, hence increasing the intensity of the OMZ. Thus, oxygen concentrations in OMZs fall to very low levels due to the consumption of organic matter (and associated respiration of oxygen) and weak replenishment of oxygen by ocean mixing and circulation. Furthermore, a hypothetical warming of 1ºC would decrease the oxygen solubility by 5 µM (a few percent of the saturation value). This would result in the expansion of the hypoxic zone by 10 percent, and a tripling of the extent of the suboxic zone (Deutsch et al., 2011). With a 2ºC warming, the solubility would decrease by 14 µM, resulting in a large expansion of areas depleted of dissolved oxygen and turning large areas of the ocean into places where aerobic life disappears.
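The solubility figures above can be expressed as fractions of a typical saturation value. A sketch restating the quoted Deutsch et al. (2011) numbers; the 250 µM saturation concentration is an assumed round illustrative value, chosen to match the text’s “a few percent”:

```python
# Quoted O2-solubility decreases for uniform ocean warming (Deutsch et al., 2011).
solubility_drop_uM = {1.0: 5.0, 2.0: 14.0}  # warming (deg C) -> O2 decrease (uM)

saturation_uM = 250.0  # assumed illustrative surface O2 saturation value
for warming_C, drop_uM in solubility_drop_uM.items():
    pct = 100.0 * drop_uM / saturation_uM
    print(f"{warming_C:.0f} degC warming: -{drop_uM:.0f} uM ({pct:.1f}% of saturation)")
```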

Hypoxia is the environmental condition in which dissolved oxygen (DO) in the water column drops below concentrations considered the minimal requirement for animal life; suboxia is an even further depletion of oxygen, and anoxia is the complete absence of oxygen. Paleo records have shown the extinctions of many benthic species during past periods of hypoxia. These periods have coincided with both a rise in temperature and sea level. Records also indicate long recovery times for ecosystems affected by hypoxic events (Danise et al., 2013). In addition, when the oxygen in seawater is depleted, bacterial respiration of organic matter turns to alternate electron acceptors with which to oxidize organic matter, such as dissolved nitrate (NO3–). A by-product of this “denitrification” reaction is the release of N2O, a powerful greenhouse gas with an atmospheric lifetime of about 150 years. Low-oxygen environments, in the water column and in the sediments, are the main removal mechanism for nitrate from the global ocean. An intensification of oxygen depletion in the ocean therefore also has the potential to alter the global ocean inventory of nitrate, affecting photosynthesis in the ocean. However, the lifetime of nitrate in the global ocean is thousands of years, so any change in the global nitrate inventory would also take place on this long time scale.

Likelihood of Abrupt Changes

Changes in global ocean oxygen concentrations have the potential to be abrupt because of the threshold to anoxic conditions, below which a region becomes uninhabitable for aerobic organisms, including fish and benthic organisms. Once this tipping point is reached in an area, anaerobic processes would be expected to dominate, resulting in a likely increase in the production of the greenhouse gas N2O. Some regions, like the Bay of Bengal, already have low oxygen concentrations today.

OMZs have also been intensified in many areas of the world’s coastal oceans by runoff of plant fertilizers from agriculture and incomplete wastewater treatment. These “dead zones” have spread significantly since the middle of the last century and pose a threat to coastal marine ecosystems (Diaz and Rosenberg, 2008). This expansion of OMZs due to nutrient runoff makes the ocean more vulnerable to the decreasing solubility of O2 in a warmer ocean. Indeed, as warming of the ocean intensifies, the decrease in oxygen availability might become non-linear, as indicated by the expansion of the size of the oxygen minimum zones.

ABRUPT CHANGES IN THE ATMOSPHERE

Atmospheric Circulation

The climate system exhibits variability on a range of spatial and temporal scales. On large (i.e., continental) scales, variability in the climate system tends to be organized into distinct spatial patterns of atmospheric and oceanic variability that are largely fixed in space but fluctuate in time. Such patterns are thought to owe their existence to internal feedbacks within the climate system. Prominent patterns of large-scale climate variability include:

• the El Niño/Southern Oscillation (ENSO),
• the Madden-Julian Oscillation (MJO),
• the stratospheric Quasi-Biennial Oscillation,
• the Pacific-North American pattern, and
• the Northern and Southern annular modes.

Given the definition of abrupt change in this report (see Box 1.2), there is little evidence that the atmospheric circulation and its attendant large-scale patterns of variability have exhibited abrupt change, at least in the observations. The atmospheric circulation exhibits marked natural variability across a range of timescales, and this variability can readily mask the effects of climate change (e.g., Deser et al., 2012a, 2012b). As noted above, patterns of large-scale variability in the extratropical atmospheric wind field exhibit variations on timescales from weeks to decades (Hartmann and Lo, 1998; Feldstein, 2000).

Weather and Climate Extremes

Extreme weather and climate events include heat waves, droughts, floods, hurricanes, blizzards, and other events that occur rarely.

Extreme weather and climate events are among the most deadly and costly natural disasters. For example, tropical cyclone Bhola in 1970 caused about 300,000-500,000 deaths in East Pakistan (today’s Bangladesh) and West Bengal, India. Hurricane Katrina caused more than 1,800 deaths and $96-$125 billion in damages in the Southeast United States in 2005. Worldwide, more than 115 million people are affected and more than 9,000 people are killed annually by floods, most of them in Asia (Figure 2.9; see also, for example, the Emergency Events Database). Heat waves contributed to more than 70,000 deaths in Europe in 2003 (e.g., Robine et al., 2008) and more than 730 deaths and thousands of hospitalizations in Chicago in 1995 (Chicago Tribune, July 31, 1995; Centers for Disease Control and Prevention, 1995). Heat waves are one of the largest weather-related sources of mortality in the United States annually.

TABLE 2.1 Billion-dollar weather and climate disasters in the United States from 1980 to 2011 by type. Total damages are in consumer-price-index-adjusted 2012 dollars. Note that the impacts of droughts are difficult to determine precisely, so those figures may be underestimated.

The potential for abrupt regime shifts was raised in NRC (2002), which highlighted the transitions into and out of the 1930s Dust Bowl as prime examples.

The impacts of extreme events on societal tipping points have been more clearly appreciated (Lenton et al., 2008; Nel and Righarts, 2008).

Extreme warm temperatures in summer can greatly increase the risks of mega-fires in temperate forests, boreal forests, and savanna ecosystems, leading to abrupt changes in species dominance and vegetation type, regional water yield and quality, and carbon emissions (e.g., Adams, 2013), even before the gradual increase of surface temperature crosses the threshold for abrupt ecosystem collapse.

Extreme events could lead to a tipping point in regional politics or social stability. In Africa, extreme droughts and high temperatures have been linked to an increased risk of civil conflict and large-scale humanitarian crises.

Generally, extreme climate events alone do not cause conflict. However, they may act as an accelerant of instability or conflict, placing a burden to respond on civilian institutions and militaries around the world (NRC, 2012b). For example, the devastating tropical cyclone Bhola in 1970 heightened dissatisfaction with the ruling government and strengthened the Bangladesh separatist movement. This led eventually to civil war and the independence of Bangladesh in 1971.

Historically, extreme climate events such as decadal mega-droughts may have triggered the collapse of civilizations, such as the Maya (Hodell et al., 1995; Kennett et al., 2012) or large scale civil unrest that ended the Ming dynasty (Shen et al., 2007).

ABRUPT CHANGES AT HIGH LATITUDES

Potential Climate Surprises Due to High-Latitude Methane and Carbon Cycles

Interest in high-latitude methane and carbon cycles is motivated by the existence of very large stores of carbon (C) in potentially labile reservoirs: soil organic carbon in permafrost (frozen) soils, and methane-containing ices called methane hydrates or clathrates, especially offshore in ocean-margin sediments. Owing to their sheer size, these carbon stocks have the potential to massively impact the Earth’s climate, should they somehow be released to the atmosphere. An abrupt release of methane (CH4) is particularly worrisome because it is many times more potent as a greenhouse gas than carbon dioxide (CO2) over short time scales. Furthermore, methane is oxidized to CO2 in the atmosphere, representing another CO2 pathway from the biosphere to the atmosphere in addition to the direct release of CO2 from aerobic decomposition of carbon-rich soils.

Permafrost Stocks

Frozen northern soils contain enough carbon to drive a powerful carbon-cycle feedback to a warming climate (Schuur et al., 2008). These stocks across large areas of Siberia comprise mainly yedoma (an ice-rich, loess-like deposit averaging ~25 m deep [Zimov et al., 2006b]), peatlands (i.e., histels and gelisols), and river delta deposits. Published estimates of permafrost soil carbon have tended to increase over time, as more field datasets are incorporated and deposits deeper than 1 m are considered. Estimates of the total soil-carbon stock in Arctic permafrost range from 1,700–1,850 Gt C (Gt C = gigatons of carbon) (Tarnocai et al., 2009).

To put the Arctic soil carbon reservoir into perspective, the carbon it contains exceeds current estimates of the total carbon content of all living vegetation on Earth (approximately 650 Gt C), the atmosphere (730 Gt C, up from ~360 Gt C during the last ice age and 560 Gt C prior to industrialization; Denman et al., 2007), and proved reserves of recoverable conventional oil and coal (about 145 Gt C and 632 Gt C, respectively), and even approaches geological estimates of all fossil fuels contained within the Earth (~1,500-5,000 Gt C). It represents more than two and a half centuries of our current rate of carbon release through fossil fuel burning and cement production (nearly 9 Gt C per year; Friedlingstein et al., 2010). These vast deposits exist largely because microbial breakdown of organic soil carbon is generally slow in cold climates, and virtually halted when frozen in permafrost. Despite slow rates of plant growth in the Arctic and sub-Arctic latitudes, massive deposits of peat have accumulated there since the last glacial maximum (Smith et al., 2004; MacDonald et al., 2006).

Potential Response to a Warming Climate

Permafrost soils in the Arctic have been thawing for centuries, reflecting the rise of temperatures since the last glacial maximum (~21 kyr ago) and the Little Ice Age (1350-1750).
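The reservoir-versus-emissions comparison above reduces to simple division. A sketch using only figures quoted in the text:

```python
# Years of current fossil-fuel and cement emissions represented by the
# Arctic permafrost soil-carbon stock (figures as quoted in the text).
permafrost_stock_GtC = (1700.0, 1850.0)  # range from Tarnocai et al. (2009)
annual_emissions_GtC = 9.0               # nearly 9 Gt C/yr (Friedlingstein et al., 2010)

for stock_GtC in permafrost_stock_GtC:
    years = stock_GtC / annual_emissions_GtC
    print(f"{stock_GtC:.0f} Gt C = {years:.0f} years of current emissions")
```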

FIGURE 2.12 Top: Approximate inventories of carbon in various reservoirs (see text for references).

Melting has accelerated in recent decades, and can be attributed to human-induced warming (Lemke et al., 2007). Under business-as-usual climate forcing scenarios, much of the upper permafrost is projected to thaw within a time scale of about a century (Camill, 2005; Lawrence and Slater, 2005). Exactly how this will proceed is uncertain.

It is clear that the time scale for deep permafrost thaw is measured in centuries, not years. Furthermore, unlike methane hydrates (see below), the very large stocks of permafrost soil carbon (i.e., the 1,672 Gt C of Tarnocai et al., 2009) must first undergo anaerobic microbial fermentation to produce methane, itself a gradual decomposition process. There are no currently proposed mechanisms that could liberate a climatically significant amount of methane or CO2 from frozen permafrost soils within an abrupt time scale of a few years, and it appears that gradual increases in carbon release from warming soils can be at least partially offset by rising vegetation net primary productivity.

A related idea is the possibility of rising soil temperatures triggering a “compost bomb instability” (Wieczorek et al., 2011)—possibly including combustion—and a prime example of a rate-dependent tipping point (Ashwin et al., 2012). Such possibilities would represent a rapid breakdown of the Arctic’s very large soil carbon stocks and warrant further research. Even absent an abrupt or catastrophic mobilization of CO2 or methane from permafrost carbon stocks, it is important to recognize that Arctic emissions of these critical greenhouse gases are projected to increase gradually for many decades to centuries, thus helping to drive the global climate system more quickly towards other abrupt thresholds examined in this report.

Methane Hydrates in the Ocean

Stocks Under conditions of high pressure, high methane concentration, and low temperature, water and methane can combine to form icy solids known as methane hydrates or clathrates in ocean sediments.

Throughout most of the world ocean, a water depth of about 700 m is required for hydrate stability. In the Arctic, due to colder-than-average water temperatures, only about 200 m of water depth is required, which increases the vulnerability of those methane hydrates to a warming Arctic Ocean. The Arctic is also a focus of concern because of the wide expanse of continental shelf (25 percent of the world’s total), much of which is still frozen owing to its exposure to the frigid atmosphere during the lowered sea levels of the last glacial maximum (see above). The inventory of methane in ocean margin sediments is large but not well constrained, with a generally agreed-upon range of 1,000-10,000 Gt C (Archer, 2007; Boswell, 2007; Boswell et al., 2012). One inventory places the total Arctic Ocean hydrates at about 1,600 Gt C by extrapolation of an estimate from Shakhova et al. (2010a) to the entire Arctic shelf region (Isaksen et al., 2011) (see Figure 2.12). The geothermal increase in temperature with depth in the sediment column restricts methane hydrate to within a few hundred meters of thickness near the upper surface of the sediments.

Warming bottom waters in deeper parts of the ocean, where surface sediment is much colder than freezing and the hydrate stability zone is relatively thick, would not thaw hydrates near the sediment surface, but downward heat diffusion into the sediment column would thin the stability zone from below, causing basal hydrates to decompose, releasing gaseous methane. The time scale for this mechanism of hydrate thawing is on the order of centuries to millennia, limited by the rate of anthropogenic heat diffusion into the deep ocean and sediment column.

The proportion of this gas production that will reach the atmosphere as CH4 is likely to be small. To reach the atmosphere, the CH4 would have to avoid oxidization within the sediment column (a chemical trap) and re-freezing within the stability zone shallower in the sediment column (a cold trap).

Most of the methane gas that emerges from the sea floor dissolves in the water column and oxidizes to CO2 instead of reaching the atmosphere. Bubble plumes tend to dissolve over a height scale of tens of meters even in the cold Arctic Ocean, and methane hydrate is only stable below about 200 m water depth, making for an inefficient pathway to the atmosphere at best.

Over time scales of centuries and millennia, the ocean hydrate pool has the potential to be a significant amplifier of the anthropogenic fossil fuel carbon release. Because the chemistry of the ocean equilibrates with that of the atmosphere (on time scales of decades to centuries), methane oxidized to CO2 in the water column will eventually increase the atmospheric CO2 burden (Archer and Buffett, 2005). As with decomposing permafrost soils, such release of carbon from the ocean hydrate pool would represent a change to the Earth’s climate system that is irreversible over centuries to millennia.

Impacts of Arctic Methane on Global Climate

Although attention is often focused on methane when considering a potential Arctic carbon release, methane is a short-lived gas in the atmosphere (CH4 oxidizes to CO2 within about a decade), so ultimately a methane problem is a CO2 problem. It does matter how rapidly methane is released, however, and the impacts of a spike versus chronic emissions are discussed in Box 2.4. Because methane emissions from permafrost degradation will also be accompanied by larger fluxes of CO2, Arctic carbon stores clearly have the potential to be a significant amplifier of the human release of carbon.

Speculations about potential methane releases in the Arctic have ranged up to about 75 Gt C from the land (Isaksen et al., 2011) and 50 Gt C from the ocean (Shakhova et al., 2010a). A release of 50 Gt C of methane from the Arctic to the atmosphere over 100 years would increase Arctic CH4 emissions by about a factor of 25, and would make the present-day permafrost area about twice as productive of CH4, on average, as wetlands are today. Postulating such a methane release over a more abrupt 10-year time scale, the emission rates from present-day permafrost would have to exceed those from wetlands by a seemingly implausible factor of 20, supporting a longer, century time scale for this process and making methane emission from polar regions an unlikely candidate for a tipping point in the climate system. Nonetheless, as can be seen in Box 2.4, releasing 50 Gt C of methane over 100 years would have a significant impact on Earth’s climate. The atmospheric CH4 concentration would roughly quadruple, with a resulting total radiative forcing from CH4 of about 3 Watts/m2. The magnitude of this forcing is comparable to that from doubling the atmospheric CO2 concentration, but the impact of the methane forcing would be strongly attenuated by its short duration (see Box 2.4).
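The emission-rate arithmetic behind these scenarios can be sketched as follows. The ~0.02 Gt C/yr baseline for present Arctic CH4 emissions is derived here from the factor-of-25 statement, not given directly in the text:

```python
# Emission-rate arithmetic for the hypothesized Arctic methane release.
release_GtC = 50.0                    # ocean methane release (Shakhova et al., 2010a)

century_rate = release_GtC / 100.0    # sustained release over 100 years
print(century_rate)                   # 0.5 Gt C/yr

# The text states this is ~25x present Arctic CH4 emissions, which implies a
# baseline of roughly 0.02 Gt C/yr (derived here, an assumption).
baseline_rate = century_rate / 25.0
print(baseline_rate)                  # 0.02 Gt C/yr

# The same release compressed into an abrupt 10-year window:
decade_rate = release_GtC / 10.0
print(decade_rate / baseline_rate)    # 250x the Arctic baseline
# (The text's "factor of 20" compares this rate to the larger global wetland source.)
```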

Summary and the Way Forward

Arctic carbon stores are poised to play a significant amplifying role in the century-timescale buildup of CO2 and methane in the atmosphere, but are unlikely to do so abruptly, on a time scale of one or a few decades.

Boreal forests appear susceptible to rapid transition to sparse woodland or treeless landscapes as temperature and precipitation patterns shift

At the global scale, observations show that the transitions from forests to savanna and from savanna to grassland tend to be abrupt when annual rainfall ranges from 1,000 to 2,500 mm and from 750 to 1,500 mm, respectively (Hirota et al., 2011; Mayer and Khalyani, 2011; Staver et al., 2011). Such rainfall regimes cover nearly half of the global land, where either a gradual climate change across the ecosystem thresholds or a strong perturbation due to extreme climate events, land use, or diseases could trigger abrupt ecosystem changes. The latter could in turn amplify the original climate change in areas where land surface feedback is important to climate.

Amazon forests represent the world’s largest terrestrial biome and potentially the tropical ecosystem most vulnerable to abrupt change in response to future climate change in concert with agricultural development (e.g., Cox et al., 2000; Lenton et al., 2008).

The forests are characterized by a tall canopy of broadleaved trees, 30-40 m high, sometimes with impressive emergent trees up to 55 m or taller. The Brazilian portion of the Amazon comprises 4 × 10⁶ km², less than 1 percent of global land area, but is disproportionately important in terms of aboveground terrestrial biomass (15 percent of global terrestrial photosynthesis; Field et al., 1998) and number of species (~25 percent; Dirzo and Raven, 2003). Direct human intervention via deforestation represents an existential threat to this forest: despite recent moderation of rates of deforestation, the Amazon forest is on track to be 50 percent deforested within 30 years—arguably by itself an abrupt change of global importance (Fearnside).

Lenton et al. (2008) and Nobre and Borma (2009) have summarized current understanding of “tipping points” in Amazonian forests. Global and regional models do indeed simulate hysteresis and collapse of Amazonian forests. Models exhibit these shifts for a range of perturbations: temperature increases of 2-4°C, precipitation decreases of ~40 percent (1,100 mm, according to Lenton et al., 2008), and/or deforestation that replaces large swathes of the forest with agriculture.

Thresholds may occur much closer to current conditions, for example, if precipitation falls below 1,600-1,700 mm (Nobre and Borma, 2009). Indeed, long-lasting damage to Amazonian forests may have occurred after the single severe drought in 2005.

The committee concludes that credible possibilities of thresholds, hysteresis, indirect effects, and interactions amplifying deforestation make abrupt (50-year) change plausible in this globally important system. Rather modest shifts in climate and/or land cover may be sufficient to initiate significant migration of the ecotone defining the limit of equatorial closed-canopy forests in Amazonia, potentially affecting large areas.

In the context of this report, extinction is recognized as “abrupt” in two respects. First, the numbers of individuals and populations that ultimately compose a species may fall below critical thresholds such that the likelihood for species survival becomes very low. This kind of abrupt change is often cryptic, in that the species at face value remains alive for some time after the extinction threshold is crossed, but becomes in effect a “dead clade walking” (Jablonski, 2001). Such losses of individuals that take species towards critical viability thresholds can be very fast—within three decades or less, as already evidenced by many species now considered at risk of extinction due to causes other than climate change by the International Union for the Conservation of Nature.

The abrupt impact of climate change on causing extinctions of key concern, therefore, is its potential to deplete population sizes below viable thresholds within just the next few decades, whether or not the last individual of a species actually dies.

From the late 20th to the end of the 21st century, climate has been and is expected to continue changing faster than many living species, including humans and most other vertebrate animals, have experienced since they originated. Consequently, the predicted “velocity” of climate change—that is, how fast populations of a species would have to shift in geographic space in order to keep pace with the shift of the organisms’ current local climate envelope across the Earth’s surface—is also unprecedented (Diffenbaugh and Field, 2013; Loarie et al.).

Climate change now is proceeding “at a rate that is at least an order of magnitude and potentially several orders of magnitude more rapid than the changes to which terrestrial ecosystems have been exposed during the past 65 million years.”

Moreover, the overall temperature of the planet is rapidly rising to levels higher than most living species have experienced (Figure 2.19). Consequently, all the populations in some species, and many populations in others, will be exposed to local climatic conditions they have never experienced (so-called “novel climates”), or will see the climatic conditions that have been an integral part of their local habitats disappear (“disappearing climates”) (Williams et al., 2007). Models suggest that by the year 2100, novel and disappearing climates will affect up to a third and up to half of Earth’s land surface, respectively (Williams et al., 2007), as well as a large percentage of the oceans.

Thus, many species will experience unprecedented climatic conditions across their geographic range. If those conditions exceed the tolerances of local populations, and those populations cannot migrate or evolve fast enough to keep up with climate change, extinction will be likely. These impacts of rapid climate change will moreover occur within the context of an ongoing major extinction event that has up to now been driven primarily by anthropogenic habitat destruction.

Recent work suggests that up to 41 percent of bird species, 66 percent of amphibian species, and between 61 percent and 100 percent of corals that are not now considered threatened with extinction will become threatened due to climate change sometime between now and 2100 (Foden et al., 2013; Ricke et al., 2013), and that in Africa, 10-40 percent of mammal species now considered not to be at risk of extinction will move into the critically endangered or extinct categories by 2080, possibly as early as 2050

A critical consideration is that the biotic pressures induced by climate change will interact with other well-known anthropogenic drivers of extinction to amplify what are already elevated extinction rates. Even without putting climate change into the mix, recent extinction has proceeded at least 3-80 times above long-term background rates (Barnosky et al., 2011) and possibly much more (Pimm and Brooks, 1997; Pimm et al., 1995; WRI, 2005), primarily from human-caused habitat destruction and overexploitation of species. The minimally estimated current extinction rate (3 times above background), if unchecked, would in as little as three centuries result in a mass extinction equivalent in magnitude to the one that wiped out the dinosaurs (Barnosky et al., 2011) (see Box 2.4). Importantly, this baseline estimate assumes no effect from climate change. A key concern is whether the added pressure of climate change would substantially increase overall extinction rates such that a major extinction episode would become a fait accompli within the next few decades, rather than something that potentially would play out over centuries. Known mechanisms by which climate change can cause extinction include the following:

1. Direct impact of an abrupt climatic event—for example, flooding of a coastal ecosystem by storm surges as seas rise to levels discussed earlier in this report.

2. Gradually changing a climatic parameter until some biological threshold is exceeded for most individuals and populations of a species across its geographic range—for example, increasing ambient temperature past the limit at which an animal can dissipate metabolic heat, as is happening with pikas at higher elevations in several mountain ranges (Grayson, 2005).
Populations of ocean corals (Hoegh-Guldberg, 1999; Mumby et al., 2007; Pandolfi et al., 2011; Ricke et al., 2013) and tropical forest ectotherms (Huey et al., 2012) also inhabit environments close to their physiological thermal limits and may thus be vulnerable to climate warming. Another potential threshold phenomenon is decreasing ocean pH to the point that the developmental pathways of many invertebrates (NRC, 2011a; Ricke et al., 2013) and vertebrate species are disrupted, as is already beginning to happen (see examples below).

3. Interaction of pressures induced directly by climate change with non-climatic anthropogenic factors, such as habitat fragmentation, overharvesting, or eutrophication, that magnify the extinction risk for a given species—for example, the checkerspot butterfly subspecies Euphydryas editha bayensis became extinct in the San Francisco Bay area as housing developments destroyed most of its habitat, followed by a few years of locally unfavorable climate conditions in its last refuge at Jasper Ridge, California (McLaughlin et al., 2002).

4. Climate-induced change in biotic interactions, such as loss of mutualist partner species, increases in disease or pest incidence, phenological mismatches, or trophic cascades through food webs after decline of a keystone species. Such effects can be intertwined with the extinction pressures noted in mechanism 3 above. In fact, the disappearance of checkerspot butterflies from Jasper Ridge occurred because unusual precipitation events altered the timing of overlap of the butterfly larvae and their host plants (McLaughlin et al., 2002).

BOX 2.4 MASS EXTINCTIONS Mass extinctions are generally defined as times when more than 75 percent of the known species of animals with fossilizable hard parts (shells, scales, bones, teeth, and so on) become extinct in a geologically short period of time (Barnosky et al., 2011; Harnik et al., 2012; Raup and Sepkoski, 1982). Several authors suggest that the extinction crisis is already so severe, even without climate change included as a driver, that a mass extinction of species is plausible within decades to centuries. This possible extinction event is commonly called the “Sixth Mass Extinction,” because biodiversity crashes of similar magnitude have happened previously only five times in the 550 million years that multicellular life has been abundant on Earth: near the end of the Ordovician (~443 million years ago), Devonian (~359 million years ago), Permian (251 million years ago), Triassic (~200 million years ago), and Cretaceous (~66 million years ago) Periods. Only one of the past “Big Five” mass extinctions (the dinosaur extinction event at the end of the Cretaceous) is thought to have occurred as rapidly as would be the case if currently observed extinction rates were to continue (Alvarez et al., 1980; Barnosky et al., 2011; Robertson et al., 2004; Schulte et al., 2010), but the minimal span of time over which past mass extinctions actually took place is impossible to determine, because geological dating typically has error bars of tens of thousands to hundreds of thousands of years. After each mass extinction, it took hundreds of thousands to millions of years for biodiversity to build back up to pre-crash levels.

Data also indicate that continued climate change at its present pace would be detrimental to many species of marine clams and snails, fish, tropical ectotherms, and some species of plants (examples and citations below). For such species, continuing the present trajectory of climate change would very likely result in extinction of most, if not all, of their populations by the end of the 21st century. The likelihood of extinction from climate change is low for species that have short generation times, produce prodigious numbers of offspring, and have very large geographic ranges. However, even for such species, the interaction of climate change with habitat fragmentation may cause the extirpation of many populations. Even local extinctions of keystone species may have major ecological and economic impacts.

The interaction of climate change with habitat fragmentation has high potential for causing extinctions of many populations and species within decades (before the year 2100 if not sooner). The paleontological record and historical observations of species indicate that in the past species have survived climate change by their constituent populations moving to a climatically suitable area, or, if they cannot move, by evolving adaptations to the new climate. The present condition of habitat fragmentation limits both responses under today’s shifting climatic regime. More than 43 percent of Earth’s currently ice-free lands have been changed into farms, rangelands, cities, factories, and roads (Barnosky et al., 2012; Foley et al., 2011; Vitousek et al., 1986, 1997), and in the oceans many continental-shelf areas have been transformed by bottom trawling (Halpern et al., 2008; Jackson, 2008; Hoekstra et al., 2010). This extent of habitat destruction and fragmentation means that even if individuals of a species can move fast enough to cope with ongoing climate change, they will have difficulty dispersing into suitable areas because adequate dispersal corridors no longer exist. If individuals are confined to climatically unsuitable areas, population decline becomes more likely, and extinction becomes highly likely if population size falls below critical values, through processes such as random fluctuations in population size.

Novel climates are those that are created by combinations of temperature, precipitation, seasonality, weather extremes, etc., that exist nowhere on Earth today. Disappearing climates are combinations of climate parameters that will no longer be found anywhere on the planet. Modeling studies suggest that by the year 2100, between 12 percent and 39 percent of the planet will have developed novel climates, and current climates will have disappeared from 10 percent to 48 percent of Earth’s surface (Williams et al., 2007). These changes will be most prominent in what are today’s most important reservoirs of biodiversity.

The end-Permian extinction started from a different continental configuration and global climate, so an exact reproduction is not to be expected.

The climatic warming at the last glacial-interglacial transition was coincident with the extinction of 72 percent of the large-bodied mammals in North America and 83 percent of the large-bodied mammals in South America—in total, 76 genera including more than 125 species for the two continents. Many of these extinctions occurred within and just following the Younger Dryas, and generally they are attributed to an interaction between climatic warming and human impacts. The magnitude of climatic warming, about 5°C, was about the same as currently living species are expected to experience within this century, although the end-Pleistocene rate of warming was much slower. Also similar to today, the end-Pleistocene extinction event played out on a landscape where human population sizes began to grow rapidly, and when people began to exert extinction pressures on other large animals. The main differences today, with respect to extinction potentials, are that anthropogenic climate change is much more rapid and is moving global climate outside the bounds in which living species evolved, and the global human population, and the pressures people place on other species, are orders of magnitude higher than was the case at the last glacial-interglacial transition (Barnosky et al., 2012).

Many of the extinction impacts in the next few decades could be cryptic, that is, reducing populations to below-viable levels, destining the species to extinction even though extinction does not take place until later in the 21st or following century. The losses would have high potential for changing the function of existing ecosystems and degrading ecosystem services (see Chapter 3). The risk of widespread extinctions over the next three to eight decades is high in at least two critically important ecosystems where much of the world’s biodiversity is concentrated: tropical and sub-tropical areas, especially rainforests, and coral reefs. The risk of climate-triggered extinctions of species adapted to high, cool elevations and high-latitude conditions also is high.

Abrupt climate impacts may have detrimental effects on ecological resources that are critical to human well-being. Such resources are called “ecosystem services” (Box 3.1), which basically are attributes of ecosystems that fulfill the needs of people. For example, healthy diverse ecosystems provide the essential services of moderating weather, regulating the water cycle and delivering clean water, protecting and keeping agricultural soils fertile, pollinating plants (including crops), providing food (particularly seafood), disposing of wastes, providing pharmaceuticals, controlling spread of pathogens, sequestering greenhouse gases from the atmosphere, and providing recreational opportunities.

Largely due to water-delivery issues related to climate change, cereal crop production is expected to fall in areas that now have the highest population density and/or the most undernourished people, notably most of Africa and India (Dow and Downing, 2007). In the United States, key crop-growing areas, such as California, which provides half of the fruits, nuts, and vegetables for the United States, will experience uneven effects across crops, requiring farmers to adapt rapidly by changing what they plant.

Fisheries

Degradation of coral reefs by ocean warming and acidification will negatively affect fisheries, because reefs are required as habitat for many important food species, especially in poor parts of the world. For example, in the poorest countries of Africa and south Asia, fisheries largely associated with coral reefs provide more than half of the protein and mineral intake for more than 400 million people (Hughes et al., 2012). On a broader scale, many fisheries around the world can be expected to experience changes as ocean temperatures, acidity, and currents change (Allison et al., 2009; Jansen et al., 2012; Powell and Xu, 2012), with attendant socio-economic impacts (Pinsky and Fogarty, 2012). One study suggests climate change, combined with other pressures on fisheries, may result in a 30–60 percent reduction in fish production by 2050 in areas such as the eastern Indo-Pacific and the areas fed by the northern Humboldt and North Canary Currents (Blanchard et al., 2012). Because other pressures, notably over-fishing, already stress fisheries, a small climatic stressor can contribute strongly to hastening collapse.

Forest diebacks (Anderegg et al., 2013) and reduced tree biodiversity (Cardinale et al., 2012) can be expected to have major impacts on timber production. Such is already the case for millions of square miles of beetle-killed forests throughout the American West. Drought-enhanced desertification of dryland ecosystems may cause famines and migrations of environmental refugees.

Regulatory Services

Also of concern is the potential loss of regulatory services, which buffer the effects of environmental change (Reid et al., 2005). For example, tropical forest ecosystems slow the rate of global warming both by absorbing atmospheric carbon dioxide and through latent heat flux (Anderson-Teixeira et al., 2012). Coastal saltmarsh and mangrove wetlands buffer shorelines against storm surge and wave damage (Gedan et al., 2011). Grassland biodiversity stabilizes ecosystem productivity in response to climate variation (see Cardinale et al., 2012 and references therein). Climate change has the clear potential to exacerbate losses of these critical ecosystem services (for instance, shrinking rainforests and desertification), with attendant impacts on human societies.

Direct Economic Impacts

Some species currently at risk of extinction, and some that will be further imperiled by ongoing climate change, provide significant economic benefits to people who live in the surrounding areas, as well as significant aesthetic and emotional benefits to millions of others, primarily through ecotourism, hunting, and fishing. At the international level, for example, ecotourism—largely to view elephants, lions, cheetahs, and other threatened species—supplied around 14 percent of Kenya’s GDP as of 2013 (USAID, 2013) and 13 percent of Tanzania’s in 2001 (Honey, 2008). Yet in a single year, 2009, an extreme drought decimated the elephant population and populations of many other large animals in Amboseli Park, Kenya. Increased frequency of such extreme weather events could erode the ecotourism base on which the local economies depend. Other international examples include ecotourism in the Galapagos Islands—driven in large part by the opportunity to view unique, threatened species—which contributed 68 percent of the 78 percent growth in the Galapagos GDP that took place from 1999–2005 (Taylor et al., 2008).
Within the United States, direct economic benefits of ecosystem services also are substantial; for example, commercial fisheries provide approximately one million jobs and $32 billion in income nationally (NOAA, 2013). Ecotourism also generates substantial revenues and jobs in the United States—visitors to national parks added $31 billion to the national economy and supported more than 258,000 jobs in 2010 (Stynes, 2011).

Less obviously, there are also systems whose useful lifetimes are cut short by gradual changes in baseline climate. Such systems experience abrupt impacts when they are built to last a certain period of time, and priced so that they can be amortized over that lifetime, but their actual lifetime is cut short by climate change. One example would be a large air conditioning system for computer server rooms. If maximum high temperatures rise faster than planned for, the lifetime of such a system would be cut short, and a new system would need to be installed at added cost to the owner of the servers.
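The economics of that example can be sketched with simple straight-line amortization. All numbers below are hypothetical, chosen only to illustrate how a climate-shortened lifetime raises the annualized cost of an asset:

```python
# Straight-line amortization (interest ignored): the annual cost of owning
# an asset is its capital cost spread over the years it actually lasts.

def annualized_cost(capital_cost: float, lifetime_years: float) -> float:
    return capital_cost / lifetime_years

# Hypothetical server-room cooling system
capital = 2_000_000   # purchase and installation cost, dollars
planned_life = 20     # design lifetime the price was amortized over, years
shortened_life = 12   # actual lifetime after hotter summers, years

print(annualized_cost(capital, planned_life))    # 100000.0 dollars/year as designed
print(annualized_cost(capital, shortened_life))  # about 166,667 dollars/year if replaced early
```

Cutting the lifetime from 20 to 12 years raises the effective annual cost by two-thirds, before counting any disruption from the early replacement itself.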

Another example is storm runoff drains in cities and towns. These systems are sized to handle large storms that precipitate a certain amount of water in a certain period of time. Rare storms, such as a 1000-year event, are typically not considered when choosing the size of pipes and drains, but the largest storms that occur annually up to once per decade or so are considered. As the atmosphere warms and can hold more moisture, the amount of rain per event is increasing (Westra et al., 2013), changing the baseline used to size storm runoff systems, and thus their utility, generally long before the systems are considered to have reached the end of their design lifetimes.

Another type of infrastructure problem associated with abrupt change is the infrastructure that does not exist, but will need to after an abrupt change. The most glaring example today is the lack of US infrastructure in the Arctic as the Arctic Ocean becomes more and more ice free in the summer. For example, the United States lacks sufficient icebreakers that can patrol waters that, while seasonally open in many places, will still have extensive wintertime ice cover. Servicing and protecting our activities in this resource-rich region is now a challenge, one that only recently, and abruptly, emerged. This challenge has illustrated a time scale issue associated with abrupt change: it will take years to rebuild our fleet of icebreakers, but because of the rapid loss of sea ice in 2007 and more recently, the need for these ships is now (NRC, 2007; O’Rourke, 2013).

Coastal Infrastructure

Globally, about 40 percent of the world’s population lives within 100 km of the world’s coasts. While complete inventories are lacking, the accompanying infrastructure—from the obvious, such as roads and buildings, to the less obvious but no less critical, such as underground services (e.g., natural gas and electric lines)—is easily valued in the trillions of dollars, and this does not include ecosystem services such as fresh water supplies, which are threatened as sea level rises. A nearly equal percentage of the US population lives in Coastal Shoreline Counties. In addition, coastal counties are more densely populated than inland ones. The National Coastal Population Report, Population Trends from 1970 to 2020 (NOAA, 2013), reports that coastal county population density is over six times that of inland counties (Figure 3.1).
Consequently, the United States has a large amount of physical assets located near coasts and currently vulnerable to sea level rise and storm surges exacerbated by rising seas (See Chapter 2 and especially Box 2.1 for additional discussion of this issue.) For example, the National Flood Insurance Program (NFIP) currently has insured assets of $527 billion in the coastal floodplains of the United States, areas that are vulnerable to sea level rise and storm surges.

Nearly half of the US gross domestic product, or GDP, was generated in the Coastal Shoreline Counties along the oceans and Great Lakes (see NOAA’s State of the Coast). Despite the ongoing rise of sea level, and the frequent, high-profile illustrations of the value and vulnerability of coastal assets at risk, there is no systematic, ongoing, and updated cataloging of coastal assets that are in harm’s way as sea level rises. Overall, there is a need to shift to more holistic planning, investment, and operation for global sea ports (Becker et al., 2013).

Permafrost, or permanently frozen ground, is ubiquitous around the Arctic and sub-Arctic latitudes, the continental interiors of eastern Siberia and Canada, the Tibetan Plateau, and alpine areas. As such, it is a substrate upon which numerous pipelines, buildings, roads, and other infrastructure have been (or could be) built, so long as these structures are properly designed so as not to thaw the underlying permafrost. For areas underlain by ice-rich permafrost, severe damage to permanent infrastructure can result from settlement of the ground surface as the permafrost thaws (Nelson

Over the past 40 years, significant losses (>20 percent) in ground load-bearing capacity have been computed for large Arctic population and industrial centers, with the largest decrease to date observed in the Russian city of Nadym where bearing capacity has fallen by more than 40 percent (Streletskiy et al., 2012). Numerous structures have become unsafe in Siberian cities, where the percentage of dangerous buildings ranges from at least 10 percent to as high as 80 percent of building stock in Norilsk, Dikson, Amderma, Pevek, Dudina, Tiksi, Magadan, Chita, and Vorkuta (ACIA, 2005).

The second way in which milder winters and/or deeper snowfall reduce human access to cold landscapes is through reduced viability of winter roads (also called ice roads, snow roads, seasonal roads, or temporary roads). Like permafrost, winter roads are negatively impacted by milder winters and/or deeper snowfall (Hinzman et al., 2005; Prowse et al., 2011). However, the geographic range of their use is much larger, extending to seasonally frozen land and water surfaces well south of the permafrost limit. They are most important in Alaska, Canada, Russia, and Sweden, but are also used to a lesser extent (mainly river and lake crossings) in Finland, Estonia, Norway, and the northern US states. These are seasonal features, used only in winter when the ground and/or water surfaces freeze sufficiently hard to support a given vehicular weight. They are critically important for trucking, construction, resource exploration, community resupply, and other human activities in remote areas. Because the construction cost to build a winter road is <1 percent that of a permanent road (e.g., ~$1300/km versus $0.5–1M/km, Smith, 2010), winter roads enable commercial activity in remote northern areas that would otherwise be uneconomic. Since the 1970s, winter road season lengths on the Alaskan North Slope have declined from more than 200 days/year to just over 100 days/year (Hinzman et al., 2005). Based on climate model projections, the world’s eight Arctic countries are all projected to lose significant land areas (losses of 11 percent to 82 percent) currently possessing climates suitable for winter road construction (Figure 3.3), with Canada (400,000 km²) and Russia (618,000 km²) experiencing the greatest losses in absolute land area terms.
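The “<1 percent” construction-cost claim follows directly from the per-kilometer figures cited from Smith (2010), as a quick check shows:

```python
# Winter-road vs permanent-road construction cost, per the figures cited
# from Smith (2010): ~$1,300/km versus $0.5-1M/km for a permanent road.

winter_cost = 1_300  # dollars per km

for permanent_cost in (500_000, 1_000_000):  # low and high estimates, dollars per km
    ratio = winter_cost / permanent_cost
    print(f"winter road costs {ratio:.2%} of a permanent road")
```

The ratio works out to between 0.13 percent and 0.26 percent, consistent with the “<1 percent” figure in the text.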

Although the prospect of such trans-Arctic routes materializing has attracted considerable media attention (and indeed, 46 vessels transited the Northern Sea Route during the 2012 season), it is important to point out that these routes would operate only in summer, and numerous other non-climatic factors remain to discourage trans-Arctic shipping, including lack of services, infrastructure, and navigation control; poor charts; high insurance and escort costs; the unknown competitive response of the Suez and Panama Canals; and other economic factors.

This section briefly describes several other human health-related impacts—heat waves, vector-borne and zoonotic diseases, and waterborne diseases—but there are others, including potential impacts from reduced air quality, impacts on human health and development, impacts on mental health and stress-related disorders, and impacts on neurological diseases and disorders.

Heat waves cause heat exhaustion, heat cramps, and heat stroke; heat waves are one of the most common causes of weather-related deaths in the United States (USGCRP, 2009). Summertime heat waves will likely become longer, more frequent, more severe, and more relentless, with decreased potential to cool down at night. Increases in heat-related deaths due to climate change are likely to outweigh decreases in deaths from cold snaps (Åström et al., 2013; USGCRP, 2009). In general, heat waves and the associated health issues disproportionately affect more vulnerable populations such as the elderly, children, those with existing cardiovascular and respiratory diseases, and those who are economically disadvantaged or socially isolated (Portier et al., 2010). Increasing temperature and humidity levels can cross thresholds beyond which it is unsafe for individuals to perform heavy labor (a direct physiological limit). Recent work has shown that environmental heat stress has already reduced labor capacity in the tropics and mid-latitudes during peak months of heat stress by 10 percent, and another 10 percent decrease is projected by 2050 (Dunne et al., 2013), with much larger decreases further into the future.

Areas of Concern for Humans from Abrupt Changes

Heavy rainfall and flooding can enhance the spread of water-borne parasites and bacteria, potentially spreading diseases such as cholera, polio, Guinea worm, and schistosomiasis. “Outbreaks of waterborne diseases often occur after a severe precipitation event (rainfall, snowfall). Because climate change increases the severity and frequency of some major precipitation events, communities—especially in the developing world—could be faced with elevated disease burden from waterborne diseases” (Portier et al., 2010).

Vector-borne diseases are those in which an organism carries a pathogen from one host to another. The carrier is often an insect, tick, or mite, and well-known examples include malaria, yellow fever, dengue, murine typhus, West Nile virus, and Lyme disease. Zoonotic diseases are those that are transmitted from animals to humans by either contact with the animals or through vectors that carry zoonotic pathogens from animals to humans; examples include avian flu and H1N1 (swine flu). Changes in climate may shift the geographic ranges of carriers of some diseases. For example, the geographic range of ticks that carry Lyme disease is limited by temperature. As air temperatures rise, the range of these ticks is likely to continue to expand northward (Confalonieri et al., 2007).

National Security

The topic of climate and national security has been treated in depth elsewhere, including in a recent review entitled Climate and Social Stresses: Implications for Security Analysis (NRC, 2012b), as well as in the excellent discussion of this topic by Schwartz and Randall (2003).

Conflicts over water issues may become more numerous as droughts become more frequent. In addition, famine and food scarcity have the potential to cause international humanitarian issues and even conflicts, as do health security issues from epidemics and pandemics (also see previous section). These impacts from climate change may present national security challenges through humanitarian crises, disruptive migration events, political instability, and interstate or internal conflict. The impacts on national security are likely to present abruptly, in the sense that the eruption of any crisis represents an abrupt change.

An example of an abrupt change that affects the national infrastructure of a number of countries is the opening of shipping lanes in the Arctic as a result of the retreating sea ice. There are geopolitical ramifications related to possible shipping routes and territorial claims, including potential oil, mineral, and fishing rights.

Rapid or catastrophic methane release from sea-floor or permafrost reservoirs has also been shown to be much less worrisome than first considered possible.

Fast changes in atmospheric methane concentrations recorded in ice cores from glacial times correlate with abrupt climate changes (e.g., Chappellaz et al., 1993). However, subsequent research has revealed that the variations in methane through the glacial cycles (1) originated in large part from low-latitude wetlands, and were not dominated by high-latitude sources that could potentially be much larger, and (2) produced a relatively small radiative forcing relative to the temperature changes, serving as a small feedback to climate changes rather than a primary driver.

Methane was also proposed as the origin of the Paleocene–Eocene thermal maximum event, 55 million years ago, in which carbon isotopic compositions of CaCO3 shells in deep sea sediments reflect the release of some isotopically light carbon source (like methane or organic carbon), and various temperature proxies indicate warming of the deep ocean and hence the Earth’s surface. But the longevity of the warm period has shown that CO2 was the dominant active greenhouse gas, even if methane was one of the important sources of this CO2, and the carbon isotope spike shows that if the primary release reservoir were methane, the amount of CO2 that would be produced by this spike would be insufficient to explain the extent of warming, unless the climate sensitivity of Earth was much higher than it is today (Pagani et al., 2006).

The collected understanding of these threats is summarized in Table 4.1. For example, the West Antarctic Ice Sheet (WAIS) is a known unknown, with at least some potential to shed ice at a rate that would in turn raise sea level at a pace that is several times faster than is happening today. If WAIS were to rapidly disintegrate, it would challenge adaptation plans, impact investments into coastal infrastructure, and make rising sea level a much larger problem than it already is now. Other unknowns include the rapid loss of Arctic sea ice and the potential impacts on Northern Hemisphere weather and climate that could potentially come from that shift in the global balance of energy, the widespread extinction of species in marine and terrestrial systems, and the increase in the frequency and intensity of extreme precipitation events and heat waves.

Anticipating the potential for climatically-induced abrupt change in social systems is even more difficult, given that social systems are actually extremely complex systems, the dynamics of which are governed by a network of interactions between people, technology, the environment, and climate. The sheer complexity of such systems makes it difficult to predict how changes in any single part of the network will affect the overall system, but theory indicates that changes in highly-connected nodes of the system have the most potential to propagate and cause abrupt downstream changes. Climate connects to social stability through a wide variety of nodes, including availability of food and water, transportation (for instance, opening Arctic seaways), economics (insurance costs related to extreme weather events or rising sea level, agricultural markets, energy production), ecosystem services (pollination, fisheries), and human health (spread of disease vectors, increasing frequency of abnormally hot days that cause physiological stress). Reaching a climatic threshold that causes rapid change in any one of these arenas therefore has high potential to trigger rapid changes throughout the system.


Nuclear waste will last a lot longer than climate change

Preface. One of the most tragic aspects of peak oil is that once energy descent begins, it is very unlikely that oil will be expended to clean up our nuclear mess. No one wants the spent fuel! New Mexico is suing the U.S. over a proposed site there, as described in Bryan (2021) below.

Anyone who survives peak fossil fuels and then rising sea levels and temperatures plus extreme weather from climate change, will still be faced with nuclear waste as a deadly pollutant and potential weapon. 

According to Archer (2008): “… there are components of nuclear material that have a long lifetime, such as the isotopes plutonium 239 (24,000 year half-life), thorium 230 (80,000 years), and iodine 129 (15.7 million years). Ideally, these substances must be stored and isolated from reaching ground water until they decay, but the lifetimes are so immense that it is hard to believe or to prove that this can be done”.
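What those half-lives mean in practice follows from the standard decay law, N(t)/N0 = (1/2)^(t/T½). A minimal sketch using the half-lives Archer quotes (the 10,000-year horizon below is a common repository design benchmark, not a figure from Archer):

```python
# Fraction of a radioactive isotope remaining after t years,
# from the half-life decay law: N(t)/N0 = (1/2) ** (t / half_life).

def fraction_remaining(half_life_years: float, t_years: float) -> float:
    return 0.5 ** (t_years / half_life_years)

half_lives = {
    "plutonium-239": 24_000,    # years (Archer 2008)
    "thorium-230": 80_000,
    "iodine-129": 15_700_000,
}

horizon = 10_000  # years: a common repository design/regulatory horizon
for isotope, hl in half_lives.items():
    print(isotope, round(fraction_remaining(hl, horizon), 4))
```

After the full 10,000-year horizon, roughly 75 percent of the plutonium-239, about 92 percent of the thorium-230, and essentially all of the iodine-129 would still remain.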

Below are articles about nuclear waste in the news.

Alice Friedemann   www.energyskeptic.com  author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, 2021, Springer; “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

***

Geranios NK (2021) US: Nuclear Waste Tank in Washington State May Be Leaking. Associated Press.

Officials say an underground nuclear waste storage tank in Washington state that dates to World War II appears to be leaking contaminated liquid into the ground.

It’s the second tank believed to be leaking waste left from the production of plutonium for nuclear weapons at the Hanford Nuclear Reservation. The first was discovered in 2013. Many more of the 149 single-walled storage tanks at the site are suspected of leaking.

Tank B-109, the latest suspected of leaking, holds 123,000 gallons (465,000 liters) of radioactive waste. The giant tank was constructed during the Manhattan Project that built the first atomic bombs and received waste from Hanford operations from 1946 to 1976.

The Hanford site near Richland in the southeastern part of the state produced about two-thirds of the plutonium for the nation’s nuclear arsenal, including the bomb dropped in 1945 on Nagasaki, Japan, and now is the most contaminated radioactive waste site in the nation.

A multibillion dollar environmental cleanup has been underway for decades at the sprawling Hanford site.

Bryan SM (2021) New Mexico sues US over proposed nuclear waste storage plan. ABCnews.

Nuclear reactors across the country produce more than 2,000 metric tons of radioactive waste a year, with most of it remaining on-site because there’s nowhere else to put it; about 83,000 metric tons of spent fuel now sit at temporary storage sites in nearly three dozen states.

New Mexico is suing the U.S. Nuclear Regulatory Commission over concerns that the federal agency hasn’t done enough to vet plans for a multibillion-dollar facility to store spent nuclear fuel in the state, arguing that the project would endanger residents, the environment and the economy.

New Jersey-based Holtec International wants to build a complex in southeastern New Mexico where tons of spent fuel from commercial nuclear power plants around the nation could be stored until the federal government finds a permanent solution. State officials worry that New Mexico will become a permanent dumping ground for the radioactive material.

The state cited the potential for surface and groundwater contamination and for disruption of oil and gas development in one of the nation’s most productive basins.

Ro, C. 2019. The Staggering Timescales Of Nuclear Waste Disposal. Forbes.

This most potent form of nuclear waste needs to be safely stored for up to a million years. Yet existing and planned nuclear waste sites operate on much shorter timeframes: often 10,000 or 100,000 years. These are still such unimaginably vast lengths of time that regulatory authorities decide on them, in part, based on how long ice ages are expected to last.

Strategies remain worryingly short-term, on a nuclear timescale. Chernobyl’s destroyed reactor no. 4, for instance, was encased in July 2019 in a massive steel “sarcophagus” that will only last 100 years. Not only will containers like this one fall short of the timescales needed for sufficient storage, but no country has allotted enough funds to cover nuclear waste disposal. In France and the US, according to the recently published World Nuclear Waste Report, the funding allocation only covers a third of the estimated costs. And the cost estimates that do exist rarely extend beyond several decades.

Essentially, we’re hoping that things will work out once future generations develop better technologies and find more funds to manage nuclear waste. It’s one of the most striking examples of the dangers of short-term thinking.

Fred Pearce. 7 March 2012. Resilient reactors: Nuclear built to last centuries. New Scientist.

All nuclear plants have to be shut down within a few decades because years of neutron bombardment make their steel so brittle it is likely to crumble.

Decommissioning can take longer than the time the plant was operational.  This is why only 17 reactors have been decommissioned while more than 400 await decommissioning (110 commercial plants, 46 prototypes, 250 research reactors), yet meanwhile we keep building more of them.

Building longer lasting new types of nuclear power plants

Fast-breeders were among the first research reactors, but they have never been used for commercial power generation. There’s just one problem: Burke says the new reactors aren’t being designed with greater longevity in mind, and the intense reactions in a fast-breeder could reduce its lifetime to just a couple of decades. A critical issue is finding materials that can better withstand the stresses created by the chain reactions inside a nuclear reactor. Uranium atoms are bombarded with neutrons that they absorb; the splitting uranium atoms release energy and more neutrons to split yet more atoms, a process that eventually erodes the steel reactor vessel and plumbing.

The breakdown that leads to a reactor’s decline happens on the microscopic level when the steel alloys of the reactor vessels undergo small changes in their crystalline structures. These metals are made up of grains, single crystals in which atoms are lined up, tightly packed, in a precise order. The boundaries between the grains, where the atoms are slightly less densely packed, are the weak links in this structure. Years of neutron bombardment jar the atoms in the crystals until some lose their place, creating gaps in the structure, mostly at the grain boundaries. The steel alloys – which contain nickel, chromium and other metals – then undergo something called segregation, in which these other metals and impurities migrate to fill the gaps. These migrations accumulate until, eventually, they cause the metal to lose shape, swell, harden and become brittle. Gases can accumulate in the cracks, causing corrosion.

A reactor that does not need to be shut down after a few decades will do a lot to limit the world’s stockpile of nuclear waste. But eventually, even these will need to be decommissioned, a process that generates vast volumes of what the industry calls “intermediate-level” waste.

Despite its innocuous name, intermediate-level waste is highly radioactive and will one day have to be packaged and buried in rocks hundreds of meters underground, while its radioactivity decays over thousands of years. It is irradiated by the same mechanism that erodes the machinery in a nuclear power plant, namely neutron bombardment.

Toxic legacy

Nuclear waste is highly radioactive, remains lethal for thousands of years, and is without doubt nuclear energy’s biggest nightmare. Efforts to “green” nuclear energy have focused almost exclusively on finding ways to get rid of it. The most practical option is disposal in repositories deep underground. Yet, seven decades into the nuclear age, not one country has built a final resting place for its most toxic nuclear junk. So along with the legacy waste of cold-war-era bomb making, it will accumulate in storage above ground – unless the new reactors can turn some of that waste back into fuel.

Without a comprehensive clean-up plan, the wider world is unlikely to embrace any dreams of a nuclear renaissance.


Half of U.S. Coal runs out in 30 years, not 250

Preface. The USGS did a survey of coal in the U.S. in 1974 and announced that America had 250 years of coal left.  In 2007, the National Research Council wrote a report suggesting 100 years was more likely due to “a combination of increased rates of production…transportation issues, recoverability, and location”, and that the USGS ought to re-survey the U.S. to find out.

Not until 2015 was a new survey done on the Powder River Basin (PRB) in Wyoming and Montana, which supplies 45% of U.S. coal.  The USGS found that at best, 40 years of coal were left (35 years as of 2020).   Here’s how the USGS calculated this, in billions of short tons (BST):

  • 1,156 BST original resources (mostly coal that isn’t economically or technologically obtainable)
  • 1,148 BST remaining after subtracting previously mined coal
  • 179 BST after subtracting coal under geological constraints and environmental, societal, and technological restrictions
  • 162 BST after also subtracting coal that is too deep, too thin, or beyond stripping-ratio and mining technology limitations
  • 25 BST (2% of the original resource estimate) after subtracting coal that costs more to mine than its market value
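That funnel from resources down to reserves can be checked with quick arithmetic; a minimal Python sketch using the BST figures above:

```python
# USGS Powder River Basin coal funnel, in billions of short tons (BST),
# using the figures quoted above.
stages = [
    ("original resources", 1156),
    ("remaining after previously mined coal", 1148),
    ("after restrictions and geological constraints", 179),
    ("after depth, thickness, and stripping-ratio limits", 162),
    ("economic reserves at market prices", 25),
]

original = stages[0][1]
for name, bst in stages:
    print(f"{name}: {bst} BST ({100 * bst / original:.1f}% of original)")
# The final line confirms reserves are about 2% of the original estimate.
```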

You would think that this would be huge news, but the only major news media it appeared in were U.S. News and World Report and Pittsburgh Post-Gazette.

Then in 2017, the Little Snake River and Red Desert coal fields were reassessed.  Originally there were 19.37 BST in resources, but only 1% of that original resource remains as reserves that are economically and technologically obtainable: 167 million short tons (Shaffer 2017).

Two other basins have a lot of coal but have not been reassessed, the Appalachian and Illinois Basins. Plus the Raton and Piceance Basins in the Rocky Mountain Province.

Lignite has such low energy density that it is not worth evaluating the lignite basins of Williston in the Northern Great Plains Province and the Gulf Coast Province (USGS 2017b).

The QUALITY needs to be considered. Tad Patzek, former chairman of the Department of Petroleum and Geosystems Engineering at the University of Texas, Austin, found that in terms of energy content, global coal may already have peaked in 2011 (Patzek et al. 2010). There is still a lot of coal left, but to the extent that production depends on diesel trucks, other petroleum inputs, and plentiful water, it is likely to decline as oil declines. Also, as overburden increases, so does the “stripping ratio”, the tons of earth that must be removed per ton of coal, so mining will take more and more energy at a time when energy is declining. Many thick coal seams curve deeper into the earth, making them more energy-intensive to mine.

Alice Friedemann  www.energyskeptic.com  Author of Life After Fossil Fuels: A Reality Check on Alternative Energy; When Trucks Stop Running: Energy and the Future of Transportation”, Barriers to Making Algal Biofuels, & “Crunch! Whole Grain Artisan Chips and Crackers”.  Women in ecology  Podcasts: WGBH, Jore, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 &278, Peak Prosperity,  Index of best energyskeptic posts

***

Matthew Brown. Feb. 23, 2016. Amid coal market struggles, less fuel worth mining in US. Associated Press.

This AP article is based on a USGS report that “presents the final results of the first assessment of both coal resources and reserves for all significant coal beds in the entire Powder River Basin, northeastern Wyoming and southeastern Montana. The basin covers about 19,500 square miles, and contains the largest resources of low-sulfur, low-ash, subbituminous coal in the United States. It is the single most important coal basin in the United States. In 2012, almost 420 million short tons were produced from this basin, which was about 42 percent of the total coal production in the United States.”

(AP) — Vast coal seams dozens of feet thick that lie beneath the rolling hills of the Northern Plains once appeared almost limitless, fueling boasts that domestic reserves were sufficient to power the U.S. for centuries.

But an exhaustive government analysis says that at current prices and mining rates the country’s largest coal reserves, located along the Montana-Wyoming border, will be tapped out in just a few decades.

The finding by the U.S. Geological Survey upends conventional wisdom on the lifespan for the nation’s top coal-producing region, the Powder River Basin.

“You’re looking at a forty-year life span, maximum, for Powder River coal,” said USGS geologist Jon Haacke, one of the authors of the analysis.

Claims that the U.S. had reserves sufficient to last as long as 250 years came from greatly inflated estimates of how much coal could be mined, Haacke added. They were based on U.S. Energy Department data that was last updated comprehensively in the 1990s.

USGS study leader James Luppens said the Energy Department estimates were in “desperate need of revision.” But there are no immediate plans to do so or to incorporate the new findings, said Lance Harris, a supervisor with the Energy Department’s coal team.

For decades, the agency has made little distinction between coal reserves that reasonably could be mined and those that could not.

The perception of coal’s abundance began to shift in 2008, when the USGS team released initial data that called into question the longevity of U.S. supplies.

Yet assertions that America was the “Saudi Arabia of coal” persisted, including in 2010 by President Barack Obama and continuing in recent months by industry supporters. The Department of Energy states on its website that based on current mining rates, “estimated recoverable coal reserves would last about 261 years.”

Leslie Glustrom, an environmental activist from Boulder, Colorado, who has urged the Energy Department to change how it tallies up the nation’s untapped resources, said she believes the end for the Powder River Basin is coming even more rapidly than the USGS study suggests. And she said it has little to do with a “war on coal” that Republicans frequently accuse the Obama administration of waging.

“This is not a political problem. It’s a geologic problem,” Glustrom said.

It’s been four decades since its low-sulfur content first made Powder River Basin coal the fuel of choice among electric utilities that needed to cut their sulfur dioxide pollution. Sprawling strip mines in the region have since removed more than 11 billion tons of coal, the equivalent of 95 million loaded rail cars.

To gauge how much coal remains, USGS researchers since 2004 have analyzed the geology from minerals removed by 30,000 holes drilled deep into the earth. The data revealed almost 1.1 trillion tons of coal buried across the 20,000-square mile Powder River Basin. Of that, only 162 billion tons is within coal seams considered thick enough and close enough to the surface to make extracting them worthwhile.

The amount drops even more drastically when the coal’s quality is factored in and compared against current prices. When the USGS data was first compiled, in 2013, Powder River Basin coal was selling for $10.90 a ton, resulting in about 23 billion tons being designated as economically-recoverable.

With coal prices down to $9.55 a ton, the reserve estimate has plummeted to just 16 billion tons, Haacke said. That’s equivalent to 40 years at the current production pace of 400 million tons annually from the basin’s 16 mines in Wyoming and Montana.
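The 40-year figure is a simple reserves-to-production ratio; a sketch using the article’s numbers:

```python
# Reserve lifetime implied by the article's figures: economically
# recoverable reserves divided by the current production rate.
reserves_tons = 16_000_000_000          # 16 billion tons at $9.55/ton
production_tons_per_year = 400_000_000  # ~400 million tons mined annually

lifetime_years = reserves_tons / production_tons_per_year
print(lifetime_years)  # 40.0 years, matching Haacke's "forty-year life span"
```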

Meanwhile, mining costs have trended up. That’s been driven by an increase in the “stripping ratio” — how many tons of earth must be removed to mine a ton of coal as the region’s thick coal seams curve gradually deeper into the earth.

“It became two to one, then three to one, then three-and-a-half to one,” Haacke said of the stripping ratio. “That becomes a dirt-moving operation rather than a coal-moving operation.”
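To see why rising stripping ratios matter: at an N:1 ratio, each ton of coal requires moving N tons of overburden plus the coal itself. A simplified sketch, assuming the ratio means tons of earth per ton of coal:

```python
# Material handled per ton of coal at the stripping ratios Haacke quotes.
for ratio in (2.0, 3.0, 3.5):
    total_moved = ratio + 1.0          # overburden plus the ton of coal
    dirt_share = ratio / total_moved   # fraction of handling that is dirt
    print(f"{ratio}:1 -> {total_moved} tons handled per ton of coal, "
          f"{dirt_share:.0%} of it dirt")
```

At 3.5:1, nearly four-fifths of everything moved is dirt, which is what Haacke means by a “dirt-moving operation”.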

Luppens, James A., et al. 2015. Coal Geology and Assessment of Coal Resources and Reserves in the Powder River Basin, Wyoming and Montana. USGS.

This report presents the final results of the first assessment of both coal resources and reserves for all significant coal beds in the entire Powder River Basin, northeastern Wyoming and southeastern Montana. The basin covers about 19,500 square miles, exclusive of the part of the basin within the Crow and Northern Cheyenne Indian Reservations in Montana. The Powder River Basin, which contains the largest resources of low-sulfur, low-ash, subbituminous coal in the United States, is the single most important coal basin in the United States. The U.S. Geological Survey used a geology-based assessment methodology to estimate an original coal resource of about 1.16 trillion short tons for 47 coal beds in the Powder River Basin; in-place (remaining) resources are about 1.15 trillion short tons. This is the first time that all beds were mapped individually over the entire basin. A total of 162 billion short tons of recoverable coal resources (coal reserve base) are estimated at a 10:1 stripping ratio or less. An estimated 25 billion short tons of that coal reserve base met the definition of reserves, which are resources that can be economically produced at or below the current sales price at the time of the evaluation. The total underground coal resource in coal beds 10–20 feet thick is estimated at 304 billion short tons.

This report is groundbreaking as it provides the first published maps of the individual coal beds for the entire PRB.

Prior resource assessments relied on net coal thickness maps for only selected beds. Although net thickness maps are sufficient for estimating in-place (remaining) resources, the mapping of all individual beds is necessary for conducting economic studies to determine the coal reserve base for the Powder River Basin. The coal reserve base includes those resources that are currently (October 2014) economic (reserves), but also may encompass those parts of a resource that have a reasonable potential for becoming economically available. Thus, the coal reserve base provides a more realistic estimate of the portion of in-place resources that are potentially recoverable, which is important from a national energy standpoint. A key to the success of this current assessment was incorporating as much data as practical from the recent, extensive coal bed methane development in the basin. The interpretation of these new data proved critical to the development of a comprehensive geologic model needed for estimating coal resources and reserves in the Powder River Basin. A total of 29,928 drill holes were used for this assessment.

There is often confusion regarding the use of the terms coal resources and coal reserves as they relate to assessments. Although the two terms have been used interchangeably, there are significant differences between the definitions. Coal resources include those in-place tonnage estimates determined by summing the volumes for identified resources and hypothetical resources, using coal zones of a minimum thickness and within certain depth limits (commonly 0–2,000 feet [ft] deep) (Pierce and Dennen, 2009). Coal reserves are a subset of coal resources and are considered economically minable at the time of classification (Wood and others, 1983).

The cumulative results from the four PRB assessment areas are 24.5 BST of coal reserves and a total recoverable coal resource (coal reserve base) of 162 BST in coal beds greater than 5 ft in thickness and less than a 10:1 stripping ratio.

So far 11 billion tons of coal filling 95 million rail cars have been removed. Yes, there’s a lot of coal down there: 1.1 trillion tons, but only 162 billion tons are thick and close enough to the surface to justify mining them.  Remember, money is an abstract concept that can’t move your car even an inch if stuffed into the gas tank.  No matter what the price of coal, if it takes more energy to mine and transport than the energy contained within the coal, it’s an energy sink and the mine will be shut down.

References

NRC. 2007. Coal: Research and development to support national energy policy. National Research Council.

Patzek, T., et al. 2010. A global coal production forecast with multi-Hubbert cycle analysis. Energy 35: 3109–3122.

Shaffer, B. N., et al. 2017. Assessment of coal resources and reserves in the Little Snake River Coal Field and Red Desert Assessment Area, Greater Green River Basin, Wyoming. Fact Sheet 2019-3053. United States Geological Survey.

Singh, S. 2021. China power crunch spreads, shutting factories and dimming growth outlook. Reuters. https://www.reuters.com/world/china/chinas-power-crunch-begins-weigh-economic-outlook-2021-09-27/

USGS. 2017b. Assessing U.S. coal resources and reserves. Fact Sheet 2017-3067. United States Geological Survey.

Xu, M. 2022. Analysis: Quantity over quality – China faces power supply risk despite coal output surge. Reuters. https://www.reuters.com/markets/commodities/quantity-over-quality-china-faces-power-supply-risk-despite-coal-output-surge-2022-06-21/


Were other humans the first victims of the 6th mass extinction?

Preface. This article makes a good case that we did indeed wipe out other hominids. “…Yet the extinction of Neanderthals, at least, took a long time—thousands of years. While Neanderthals lost the war, to hold on so long they must have fought and won many battles against us, suggesting a level of intelligence close to our own.”

I seriously doubt we’ll drive ourselves extinct, though the carrying capacity of the earth is at best 1 billion people (pre-fossil fuels), and likely less given topsoil erosion, deforestation, pollution, climate change, and so on.


***

Longrich, N. 2019. Were other humans the first victims of the sixth mass extinction? The conversation.

Nine human species walked the Earth 300,000 years ago. Now there is just one. The Neanderthals, Homo neanderthalensis, were stocky hunters adapted to Europe’s cold steppes. The related Denisovans inhabited Asia, while the more primitive Homo erectus lived in Indonesia, and Homo rhodesiensis in central Africa.

Several short, small-brained species survived alongside them: Homo naledi in South Africa, Homo luzonensis in the Philippines, Homo floresiensis (“hobbits”) in Indonesia, and the mysterious Red Deer Cave People in China. Given how quickly we’re discovering new species, more are likely waiting to be found.

By 10,000 years ago, they were all gone. The disappearance of these other species resembles a mass extinction. But there’s no obvious environmental catastrophe—volcanic eruptions, climate change, asteroid impact—driving it. Instead, the extinctions’ timing suggests they were caused by the spread of a new species, evolving 260,000-350,000 years ago in Southern Africa: Homo sapiens.

The spread of modern humans out of Africa has caused a sixth mass extinction, an event spanning more than 40,000 years, from the disappearance of Ice Age mammals to the destruction of rainforests by civilisation today. But were other humans the first casualties?

We are a uniquely dangerous species. We hunted woolly mammoths, ground sloths and moas to extinction. We destroyed plains and forests for farming, modifying over half the planet’s land area. We altered the planet’s climate. But we are most dangerous to other human populations, because we compete for resources and land.

History is full of examples of people warring, displacing and wiping out other groups over territory, from Rome’s destruction of Carthage, to the American conquest of the West and the British colonization of Australia. There have also been recent genocides and ethnic cleansing in Bosnia, Rwanda, Iraq, Darfur and Myanmar. Like language or tool use, a capacity for and tendency to engage in genocide is arguably an intrinsic, instinctive part of human nature. There’s little reason to think that early Homo sapiens were less territorial, less violent, less intolerant—less human.

Optimists have painted early hunter-gatherers as peaceful, noble savages, and have argued that our culture, not our nature, creates violence. But field studies, historical accounts, and archaeology all show that war in primitive cultures was intense, pervasive and lethal. Neolithic weapons such as clubs, spears, axes and bows, combined with guerrilla tactics like raids and ambushes, were devastatingly effective. Violence was the leading cause of death among men in these societies, and wars saw higher casualty levels per person than World Wars I and II.

Old bones and artifacts show this violence is ancient. The 9,000-year-old Kennewick Man, from North America, has a spear point embedded in his pelvis. The 10,000-year-old Nataruk site in Kenya documents the brutal massacre of at least 27 men, women, and children.

It’s unlikely that the other human species were much more peaceful. The existence of cooperative violence in male chimps suggests that war predates the evolution of humans. Neanderthal skeletons show patterns of trauma consistent with warfare. But sophisticated weapons likely gave Homo sapiens a military advantage. The arsenal of early Homo sapiens probably included projectile weapons like javelins and spear-throwers, throwing sticks and clubs.

Complex tools and culture would also have helped us efficiently harvest a wider range of animals and plants, feeding larger tribes, and giving our species a strategic advantage in numbers.

The ultimate weapon

But cave paintings, carvings, and musical instruments hint at something far more dangerous: a sophisticated capacity for abstract thought and communication. The ability to cooperate, plan, strategize, manipulate and deceive may have been our ultimate weapon.

The incompleteness of the fossil record makes it hard to test these ideas. But in Europe, the only place with a relatively complete archaeological record, fossils show that within a few thousand years of our arrival, Neanderthals vanished. Traces of Neanderthal DNA in some Eurasian people prove we didn’t just replace them after they went extinct. We met, and we mated.

Elsewhere, DNA tells of other encounters with archaic humans. East Asian, Polynesian and Australian groups have DNA from Denisovans. DNA from another species, possibly Homo erectus, occurs in many Asian people. African genomes show traces of DNA from yet another archaic species. The fact that we interbred with these other species proves that they disappeared only after encountering us.

But why would our ancestors wipe out their relatives, causing a mass extinction—or, perhaps more accurately, a mass genocide?

The answer lies in population growth. Humans reproduce exponentially, like all species. Unchecked, we historically doubled our numbers every 25 years. And once humans became cooperative hunters, we had no predators. Without predation controlling our numbers, and little family planning beyond delayed marriage and infanticide, populations grew to exploit the available resources.
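Doubling every 25 years corresponds to an annual growth rate of 2^(1/25) − 1, about 2.8% per year; a small sketch of the arithmetic, assuming steady exponential growth:

```python
# Annual growth rate implied by "doubling every 25 years".
doubling_time_years = 25
annual_rate = 2 ** (1 / doubling_time_years) - 1
print(f"{annual_rate:.2%} per year")  # about 2.81%

# Over a century that compounds to four doublings, a 16-fold increase:
print(round((1 + annual_rate) ** 100))  # 16
```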

Further growth, or food shortages caused by drought, harsh winters, or the overharvesting of resources, would inevitably lead tribes into conflict over food and foraging territory. Warfare became a check on population growth, perhaps the most important one.

Our elimination of other species probably wasn’t a planned, coordinated effort of the sort practiced by civilizations, but a war of attrition. The end result, however, was just as final. Raid by raid, ambush by ambush, valley by valley, modern humans would have worn down their enemies and taken their land.

Yet the extinction of Neanderthals, at least, took a long time—thousands of years. This was partly because early Homo sapiens lacked the advantages of later conquering civilizations: large numbers, supported by farming, and epidemic diseases like smallpox, flu, and measles that devastated their opponents. But while Neanderthals lost the war, to hold on so long they must have fought and won many battles against us, suggesting a level of intelligence close to our own.

Today we look up at the stars and wonder if we’re alone in the universe. In fantasy and science fiction, we wonder what it might be like to meet other intelligent species, like us, but not us. It’s profoundly sad to think that we once did, and now, because of it, they’re gone.


Movie review of Michael Moore’s “Planet of the Humans”

Preface. This documentary was made by Jeff Gibbs, a writer and environmentalist, with Michael Moore as the executive producer. The movie is worth watching, and is an entertaining and quick way to understand why rebuildable “renewables” are neither green nor a solution for replacing fossil fuels.

I watched the movie and then read 20 criticisms of it. None were any good; it is as if the reviewers had watched an entirely different movie. Most simply call it names, rather than offering legitimate criticism of what was wrong, and they attack it for things it never said.  A lot of howling can be heard, like an ox that’s been gored.  McKibben is especially angry about his portrayal. Here is Gibbs’ response to Bill McKibben.

All of the dozens of critiques zero in on something trivially incorrect, like some remark that solar panels only last 10 years. I do wish the film makers had left out questionable bits, but none of the attacks on this movie address the main points:

  • Renewables aren’t replacing natural gas and coal plants, because those plants are needed as backup and not enough energy storage exists, especially not batteries.
  • Renewables require stunning amounts of fossil fuels to generate the high heat to smelt metal ores. Nothing I have ever written or could write is as effective and stunning in conveying the ginormous amount of fossils needed to construct renewable contraptions than the sequence of dozens of metals being smelted (I wish the movie had also shown the fossils to mine the ore, transport it to the smelter, crush it, fabricate into pieces, ship and truck transport of pieces to assembly factory, truck transport to final destination, and so on).
  • Electricity in Germany and elsewhere is a tiny fraction of OVERALL energy use.

The only legitimate criticism, if it is ever offered, would need to come from scientists, who understand that you can’t rant, rave, and call a film names; you have to actually state what was wrong and cite peer-reviewed evidence to back it up.  You can’t cherry-pick some random fact that makes wind or solar look good as a rebuttal.

The Guardian is more reasonable, but accuses the film of not offering a solution, and asks what about nuclear power. It’s not fair to say a 100-minute film should have covered nuclear and dozens of other topics.

So far the best reviews, which make many points I didn’t mention, are by Robert Bryce (here), Richard Heinberg (here), McClennen at Salon (here), and episode 24, “Banana Town”, of the delightful podcast “Crazy Town” (here).

I’ve been writing since 2001 about peak oil, the coming energy crisis, and the other deaths by a thousand cuts that will eventually lead to the collapse of the world’s fossil-fueled civilization, which sadly means going back to the wood-based energy and infrastructure of past societies. The film sure got it right that burning biomass and making biofuels are quite destructive.

And finally, William Rees, professor at the University of British Columbia, wrote me to point out that even if renewables were ‘the answer’, even if we could contrive a cheap plentiful substitute for fossil fuels — it would be a catastrophe. We would simply use the energy bounty to completely dismember the Earth.


***

Michael Moore. 2020. Planet of the Humans. Youtube.com

Gibbs starts out by asking “Why are we still addicted to fossil fuels? So I began to follow the green energy movement.”

He went to a solar fair that ran on solar power until it rained; then biodiesel generators were turned on, which didn’t work, so they plugged into the electric grid.  Other “green” events later in the movie that claim to run on solar power are actually using diesel generators and the grid.

Famous, rich, powerful people support greenness. Obama gave hope that the green movement would ramp up.  Al Gore shared ideas with Obama, and Sir Richard Branson invested in renewables, as did Vinod Khosla, major banks, and investment groups; Bloomberg gave $50 million to the Sierra Club to fight coal.

Then he shows how “green” technology may not be.  GM introduced the Volt, a new line of electric vehicles, in Lansing, Michigan.  Gibbs points out that much of the electricity in this region is produced with coal, which isn’t very green.  Electric cars need rare earth metals, whose ores often occur with radioactive material that has to be disposed of somehow, as well as many other minerals that require massive amounts of energy to mine, smelt, and fabricate.

Then he shows a huge field of solar panels that the owner said could power 10 homes at best.  Critics of the movie dismiss this, saying the latest solar panels are far more powerful, but even if they were five times better, if it takes this large an area to power just 50 homes when the sun is shining, it’s not hard to imagine the millions of acres of panels required to power a city.
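To put that scaling in perspective, here is a back-of-the-envelope sketch; the 50-homes-per-field figure is the generous concession above, and the city size is hypothetical:

```python
# How many solar fields like the one shown would a city need?
homes_per_field = 50      # assumes panels 5x better than the array shown
city_homes = 1_000_000    # hypothetical large city

fields_needed = city_homes / homes_per_field
print(f"{fields_needed:,.0f} fields")  # 20,000 fields, daytime only,
                                       # before any storage for nights
```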

He interviews an environmental health and safety employee at a mountain, loved for its beauty and hiking, where a wind plant might be installed. The employee points out that the turbines will still require a backup fossil-fuel power plant idling on standby 100% of the time, ready to step in when the wind dies and ramp down when it surges, and that such a plant uses more energy on standby than if it were simply kept running.  The forested mountain also protects the watershed, but it won’t any longer if it is deforested for turbines.

The point that renewables don’t replace fossils was then made more strongly by Richard York, at the University of Oregon, who published an article in Nature Climate Change titled “Do alternative energy sources displace fossil fuels?” showing that green energy did not replace fossil fuels. I just looked at the article; it’s much worse than that: “each unit of electricity generated by non-fossil-fuel sources displaced less than one-tenth of a unit of fossil-fuel-generated electricity.”  So renewables add to energy generation, but aren’t replacing fossil generation.

On top of that, fossil fuels were used to mine the materials for wind and solar, crush the ore, smelt out the metal, fabricate it into pieces, transport each piece to the assembly factory, and deliver the wind turbine or solar panel to its destination, plus ongoing maintenance.  So we aren’t making a transition to something else, or kicking our addiction to fossil fuels at all. We’re just expanding the amount of electrical energy produced a tiny bit.

Coal plants certainly aren’t being replaced by solar and wind, but with much larger natural gas plants fueled by the largest expansion of fossil fuel production in American history. The Sierra Club’s “beyond coal” campaign may have helped get many coal plants closed, but it did not reduce consumption of fossil fuels.

Gibbs asks if we are so desperate to find a green solution that we don't look closely enough at the candidates.  At U.C. Berkeley he's shown how solar panels are made: first, quartz is dynamited out of mountains, then coal is used to melt the silicon out of the quartz at 1,800 degrees F.  That is decidedly not green.

Even solar companies admitted they weren't entirely green, since making solar panels requires mining, and panels only produce maximum power for a few hours a day when the sun is up.  As with wind, natural gas plants have to back solar up most of the time, according to Philip Moeller, a commissioner at the Federal Energy Regulatory Commission. This is not efficient, and it causes wear and tear on fossil and nuclear plants, which weren't designed to cycle like this, shortening their lifespans and increasing maintenance costs.

Without battery storage, fossil plants have to provide baseload power and balancing power.  The world uses 546,000,000 Giga BTU of energy, while all the batteries in the world can store just 51 Giga BTU, according to the International Energy Agency (IEA).  And then they degrade.  Many critics castigate this claim without offering a citation to prove it false. If anything, the problem is far worse than the film portrayed. In my book "When Trucks Stop Running", I show that the only battery for which there are enough materials on earth to store even half a day of global electricity generation is the sodium-sulfur (NaS) battery. Using data from the Department of Energy energy storage handbook (DOE/EPRI 2013), I calculated that NaS batteries capable of storing 24 hours of U.S. electricity generation would cost $40.77 trillion, cover 923 square miles, and weigh in at a husky 450 million tons. And after 15 years you'd need to replace them.
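The scale mismatch in these storage figures is easy to check with simple arithmetic, taking the film's numbers at face value (units as quoted; treating the 546,000,000 Giga BTU figure as annual is my assumption):

```python
# Film's figures: global energy use vs. total world battery storage.
world_energy_gbtu = 546_000_000  # Giga BTU, global energy use (as quoted)
battery_storage_gbtu = 51        # Giga BTU, all world batteries (IEA, per film)

fraction = battery_storage_gbtu / world_energy_gbtu
print(f"Batteries hold about 1 part in {world_energy_gbtu // battery_storage_gbtu:,}")

# If the usage figure is annual, storage covers only seconds of demand:
seconds = fraction * 365 * 24 * 3600
print(f"≈ {seconds:.0f} seconds of global energy demand")
```

On these numbers, every battery on Earth combined stores roughly one ten-millionth of global energy use, about three seconds' worth if the consumption figure is annual.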

Concentrated solar power (CSP) plants exist only in deserts.  They need to burn natural gas for hours to run their turbines before the sun comes up and after it goes down.  They were built with fossil fuels, and in my research I found that they cost about $1 billion each.  The sun is renewable, but the solar arrays are not.  More fossil fuels go into building these facilities than the energy they'll ever produce.  Gibbs points out that if you were to criticize a CSP plant you'd be called evil, yet it is the evil Koch brothers who make almost every component of the glass, steel, and other parts, using some of the most toxic industrial processes ever invented.

The Ivanpah CSP plant takes up over 5 square miles of beautiful desert that was destroyed to build it. Only a few years later, things began falling apart.

You'll hear that Germany has 35%, even 50% renewable power, but Germany is still Europe's largest consumer of coal, and these figures are at best the highest days of electricity generation, not overall power use. Electricity is only 20% of energy consumption; fossil fuels power German manufacturing, transportation, heating, and other non-electric needs. In addition, Germany has just built a large liquefied natural gas terminal to import US gas.

Elon Musk promised that his Tesla factory in Sparks, Nevada, would run off of solar, wind, and geothermal, but that is not true: the factory is connected to the electric grid.  In fact, no factory anywhere in the world runs entirely on 100% renewable energy.

Then comes a dizzying series of film clips depicting dozens of mining operations for the minerals and metals needed to make wind and solar plants, plus the coal and other fossil fuels required, and equipment and vehicles running on diesel, all of it decidedly NOT green.

So why are bankers, industrialists and environmental leaders only focused on green technology? 

Gibbs asks Sheldon Solomon of Skidmore College whether this is about denying death: the right has religion and endless fossil fuels, while the left says no worries, we have solar and wind. Yes, he confirms.  We know we're mortal, and we don't like that we're animals, so we envelop ourselves in protective beliefs: religion, culture, and so on.  Hearing points of view that contradict your comfortable illusions creates anxiety.

The McNeil biomass power plant, the biggest source of renewable power in Vermont, burns trees: 30 cords per hour, 400,000 tons of wood a year, emitting a great deal of CO2 and toxic metals. Not clean, not green.  It took a lot of fossil fuel to cut the trees down, chip them, and truck them in; this biomass plant simply couldn't exist without fossil fuels.  It's made worse by the old tires, creosote, and other wastes added to the mix, since green wood doesn't burn well.

Environmental groups have touted for years that forests are renewable and will grow back.  Sure, if you wait a century.  And if all of America's trees were cut down and burned, they would power the country for only a year.

Many universities have decided to go "green" by burning biomass.  At a North Carolina college, Bruce Nilles, the director of the Sierra Club's "Beyond Coal" project, proudly announced this. "Out of bed with coal companies, and into bed with logging companies?" Gibbs asks.  Bill McKibben spoke with great favor and fervor at a college in Vermont that planned to burn wood.

To create 40 million gallons of ethanol, a project in Michigan proposed using a million tons of green wood, which would consume more natural-gas-based fertilizer to grow replacement trees than the energy the ethanol would provide.

Wood chips from America are being exported all over the world. Burning wood is by far the largest source of "green energy" in the world. Plenty of environmentalists realize this, yet leaders have at times promoted it, calling it sustainable and renewable.  When Gibbs asked Sierra Club, 350.org, and other leaders directly, they all dodged the question.  Only one rejected biomass outright: Vandana Shiva of India.

Gibbs then addresses the profit motive.  Businesses are making a lot of money hiding under the cover of "green" energy: Bloomberg; Jeremy Grantham, who sells forests; Richard Branson, who ran an airplane on rainforest-destroying coconut oil; Vinod Khosla, who makes ethanol from wood chips; and too many more to list.  Several environmental leaders and groups were mentioned who promoted "green" funds that actually put only a very small amount of money into green projects and much more into non-green investments.

How is 350.org funded? McKibben says they don't get funds from large entities.  The film never accuses him of taking such money, yet McKibben issued an angry rebuttal denying an accusation the film didn't make.

We must accept that infinite growth on a finite planet isn't possible, and we must take control away from billionaires; they are not our friends.

Many of those interviewed brought up population as the main issue, along with the need to consume less.  If we don't, we crash; this happens to species all the time.  Fossil fuels allowed our population and energy consumption to grow until our impact is 100 times greater than it was just 100 years ago.  Steven Running, an ecologist, talked about the limits we're reaching: fisheries declining, farmland declining, groundwater and rivers vanishing, and numerous others.  It is not just CO2 destroying the planet; it's us and everything we're doing.

To learn more from the film makers, see the discussion at: “Planet of the Humans” Earth Day Live Stream w/ Michael Moore, Jeff Gibbs & Ozzie Zehner

Afternote: Here are some articles that rebut many of the criticisms with peer-reviewed evidence, rather than with random information about this-or-that and straw-man arguments about things the film never actually said. Also, to expect a 100-minute film to cover EVERYTHING is absurd.

Fossil-fueled industrial heat hard to impossible to replace with renewables

Why solar power can’t save us from the coming energy crisis

48 Reasons why wind power can not replace fossil fuels

Utility scale energy storage has a long way to go to make renewables possible

Pumped Hydro Storage (PHS)

Who Killed the Electric Car and more importantly, the Electric Truck?

More posts about electric cars (topics include self-driving, lithium shortages, etc).

CSP Barriers and Obstacles

NREL. April 2012. Geothermal power and interconnection. The Economics of Getting to Market.

Nuclear power is too expensive and 37 reactors likely to shut down because of that

A Nuclear spent fuel fire at Peach Bottom in Pennsylvania could force 18 million people to evacuate

Peak Uranium by Ugo Bardi from Extracted: How the Quest for Mineral Wealth Is Plundering the Planet

Peak soil: Industrial agriculture destroys ecosystems and civilizations. Biofuels make it worse.

Wood, the fuel of preindustrial societies, is half of EU renewable energy

And finally, my book "When Trucks Stop Running" makes the case that civilization ends when trucks stop; EVs simply don't matter. Here's what would happen if trucks stopped (see the links at the end for why trucks can't be electrified, and read my book for why trucks can't run on electricity, batteries, hydrogen, biofuels, natural gas, liquefied coal, etc.):

What would happen if trucks stopped running?


How sand transformed civilization

Preface. No wonder we’re reaching peak sand. We use more of this natural resource than of any other except water. Civilization consumes nearly 50 billion tons of sand & gravel a year, enough to build a concrete wall 88 feet (27 m) high and 88 feet wide right around the equator.  

Alice Friedemann  www.energyskeptic.com  Author of "Life After Fossil Fuels: A Reality Check on Alternative Energy", "When Trucks Stop Running: Energy and the Future of Transportation", "Barriers to Making Algal Biofuels", and "Crunch! Whole Grain Artisan Chips and Crackers".  Women in ecology.  Podcasts: WGBH, Financial Sense, Jore, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 & 278, Peak Prosperity.  Index of best energyskeptic posts

***

Vince Beiser. 2018. The World in a Grain. The Story of Sand and How It Transformed Civilization. Riverhead Books.

Riverbeds and beaches around the world are being stripped bare of their precious grains. Farmlands and forests are being torn up. And people are being imprisoned, tortured, and murdered. All over sand.

In 1950, some 746 million people—less than one-third of the world's population—lived in cities. Today, the number is almost 4 billion.

The overwhelming bulk of it goes to make concrete, by far the world’s most important building material. In a typical year, according to the United Nations Environment Programme, the world uses enough concrete to build a wall 88 feet high and 88 feet wide right around the equator.    
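The UNEP wall figure roughly checks out against the nearly 50 billion tons of sand and gravel cited above. In this sanity check, the equator's length and the bulk density of sand and gravel are my assumptions, not the book's:

```python
# Rough sanity check of the "88-foot wall around the equator" figure.
equator_m = 40_075_000     # circumference of the equator, meters
side_m = 26.8              # 88 feet in meters (cross-section 88 ft x 88 ft)
density_t_per_m3 = 1.7     # assumed bulk density of sand & gravel, tonnes/m^3

volume_m3 = side_m * side_m * equator_m
mass_tonnes = volume_m3 * density_t_per_m3
print(f"Wall mass ≈ {mass_tonnes / 1e9:.0f} billion tonnes")  # on the order of 50 billion
```

With these assumed values the wall works out to roughly 49 billion tonnes, consistent with the consumption figure.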

There is such intense need for certain types of construction sand that places like Dubai, which sits on the edge of an enormous desert in the Arabian Peninsula, are importing sand from Australia.

Sand mining tears up wildlife habitat, fouls rivers, and destroys farmland.

Thieves in Jamaica made off with 1,300 feet of white sand from one of the island’s finest beaches in 2008. Smaller-scale beach-sand looting is ongoing in Morocco, Algeria, Russia, and many other places around the world.

The damage being done to beaches is only one facet, and not even the most dangerous one, of the damage being done by sand mining around the world. Sand miners have completely obliterated at least two dozen Indonesian islands since 2005. Hauled off boatload by boatload, the sediment forming those islands ended up mostly in Singapore, which needs titanic amounts of sand to continue its program of artificially adding territory by reclaiming land from the sea.

The city-state has created an extra 50 square miles in the past 40 years and is still adding more, making it by far the world’s largest sand importer. The demand has denuded beaches and riverbeds in neighboring countries to such an extent that Indonesia, Malaysia, Vietnam, and Cambodia have all restricted or completely banned exports of sand to Singapore.

Sand miners are increasingly turning to the seafloor, vacuuming up millions of tons with dredges the size of aircraft carriers. One-third of all aggregate used in construction in London and southern England comes from beneath the United Kingdom’s offshore waters. Japan relies on sea sand even more heavily, pulling up around 40 million cubic meters from the ocean floor each year. That’s enough to fill up the Houston Astrodome thirty-three times.

Hauling all those grains from the seafloor tears up the habitat of bottom-dwelling creatures and organisms. The churned-up sediment clouds the water, suffocating fish and blocking the sunlight that sustains underwater vegetation.

The dredging ships dump grains too small to be useful, creating further waterborne dust plumes that can affect aquatic life far from the original site.

Dredging of ocean sand has also damaged coral reefs in Florida and many other places, and threatens important mangrove forests, sea grass beds, and endangered species such as freshwater dolphins and the Royal Turtle. One round of dredging may not be significant, but the cumulative effect of several can be. Large-scale ocean sand mining is new enough that there hasn’t been a lot of research on it, meaning that no one knows for sure what the long-term environmental impacts will be. We’re sure to find out in the coming years, however, given how fast the practice is expanding.

What is sand?

The average grain of sand is a tad larger than the width of a human hair. Those grains can be made by glaciers grinding up stones, by oceans degrading seashells and corals (many Caribbean beaches are made of decomposed shells), even by volcanic lava chilling and shattering upon contact with air or water.

Nearly 70% of all sand grains on Earth are quartz. These are the ones that matter most to us.

Silicon and oxygen are the most abundant elements in the Earth's crust, so it's no surprise that quartz is one of the most common minerals on Earth. It is found abundantly in the granite and other rocks that form the world's mountains and other geologic features.

Most of the quartz grains we use were formed by erosion. Wind, rain, freeze-thaw cycles, microorganisms, and other forces eat away at mountains and other rock formations, breaking grains off their exposed surfaces. Rain then washes those grains downhill, sweeping them into rivers that carry countless tons of them far and wide. This waterborne sand accumulates in riverbeds, on riverbanks, and on the beaches where the rivers meet the sea. Over the centuries, rivers periodically overflow their banks and shift their courses, leaving behind huge deposits of sand.

Quartz is tremendously hard, which is why quartz grains survive this long, bruising journey intact while other mineral grains disintegrate.

Over millions of years, sands are often buried under newer layers of sediment, uplifted into new mountains, then eroded and transported once again.

Quartz always comes mixed with bits of other materials: iron, feldspar, whatever other minerals prevail in the local geology. (Pure quartz is transparent.)

A certain amount of those other substances needs to be filtered out before the sand can be used to make concrete, glass, or other products.

Sand is deployed on its own to make other construction materials like mortar, plaster, and roofing components.

Marine sands—the naval wing of the army, found on the ocean floor—are of similar composition, making them useful for artificial land building, such as Dubai’s famous palm-tree-shaped man-made islands. These underwater grains can also be used for concrete, but that requires washing the salt off them—an expensive step most contractors would rather avoid.

Silica sands are purer—at least 95%.  These are the sands you need to make glass.  Silica sands are also used to help make molds for metal foundries, add luster to paint, and filter the water in swimming pools, among many other tasks. Some of the unique properties of industrial sands suit them for highly specific jobs. The silica sands of western Wisconsin, for instance, have a particular shape and structure that make them ideal for use in fracking for oil and gas.

Small amounts of extremely high-purity quartz form a tiny, elite group possessed of rare attributes that enable them to perform extraordinary feats. These particles are made into high-tech equipment essential for manufacturing computer chips. Some are also used to create the sparkling sand traps of exclusive golf courses or to line Persian Gulf horse-racing tracks.

Underwater sands are easier to mine, since there's no intervening earth, known as overburden, to scrape away. They also come largely cleansed of dust-sized particles. On land, sand is usually quarried from open pits. Sometimes that requires using explosives and crushing machines to break apart sandstone.

Harvesting sand

Raw sand needs to be washed and run through a series of screens to sort it by size.

In the United States, some 4,100 companies and government agencies harvest aggregate from about 6,300 locations in all fifty states.

The harm done by sand mining

Colossal amounts of more ordinary construction sand are dredged up from riverbeds or dug from nearby floodplains. In central California, floodplain sand mining has diverted river waters into dead-end detours and deep pits that have proven fatal traps for salmon.

Dredging sand from riverbeds, as from seabeds, can destroy habitat and muddy waters to a lethal degree for anything living in the water. Kenyan officials shut down all river sand mines in one western province in 2013 because of the environmental damage they were causing. In Sri Lanka, sand extraction has left some riverbeds so deeply lowered that seawater intrudes into them, damaging drinking water supplies.

India’s Supreme Court warned in 2011 that “the alarming rate of unrestricted sand mining” was disrupting riparian ecosystems all over the country, with fatal consequences for fish and other aquatic organisms and “disaster” for many bird species.

In Vietnam, researchers with the World Wildlife Fund believe sand mining on the Mekong River is a key reason the 15,000-square-mile Mekong Delta—home to 20 million people and source of half of all the country's food and much of the rice that feeds the rest of Southeast Asia—is gradually disappearing. The ocean is overtaking the equivalent of one and a half football fields of this crucial region's land every day. Already, thousands of acres of rice farms have been lost.

For centuries, the delta has been replenished by sediment carried down from the mountains of Central Asia by the Mekong River. But in recent years, in each of the several countries along its course, miners have begun pulling huge quantities of sand from the riverbed to use for the construction of Southeast Asia’s surging cities. Nearly 50 million tons of sand are being extracted annually. “The sediment flow has been halved,” says Marc Goichot, a researcher with the World Wildlife Fund’s Greater Mekong Programme. That means that while natural erosion of the delta continues, its natural replenishment does not. At this rate, nearly half the Mekong delta will be wiped out by the end of this century.

Sand extraction from rivers has also caused untold millions of dollars' worth of damage to infrastructure around the world. The stirred-up sediment clogs up water supply equipment, and all the earth removed from riverbanks leaves the foundations of bridges exposed and unsupported. A 1998 study found that each ton of aggregate mined from the San Benito River on California's central coast caused an estimated $11 in infrastructure damage—costs that are borne by taxpayers. In many countries, sand miners have dug up so much ground that they have dangerously exposed the foundations of bridges and hillside buildings, putting them at risk of collapse.

Fisherfolk from Cambodia to Sierra Leone are losing their livelihoods as sand mining decimates the populations of fish and other aquatic creatures they rely on. In some places, mining has made riverbanks collapse, taking out agricultural land and causing floods that have displaced whole families. In Vietnam in 2017 alone, so much soil slid into heavily mined rivers, taking with it the crops and homes of hundreds of families, that the government shut down sand extraction completely in two provinces.

And in Houston, Texas, government officials say that sand mining in the nearby San Jacinto River—much of it illegal—seriously exacerbated flooding damage during 2017’s Hurricane Harvey.  It seems that sand miners stripped away so much vegetation along the river banks that huge amounts of silt were left exposed, and were then washed into the river by Harvey’s rains. That silt then piled up in riparian bottlenecks and at the bottom of Lake Houston, the city’s principal source of drinking water, causing them to overflow into nearby neighborhoods.

River-bottom sand also plays an important role in local water supplies. It acts like a sponge, catching the water as it flows past and percolating it down into underground aquifers. But when that sand has been stripped away, instead of being drawn underground, the water just keeps on moving to the sea, leaving aquifers to shrink. As a result, there are parts of Italy and southern India where river sand mining has drastically depleted local drinking water supplies. Elsewhere, the lack of water is killing crops.

In 2015, New York state authorities slapped a $700,000 fine on a Long Island contractor who had illegally gouged thousands of tons of sand from a 4.5-acre patch of land near the town of Holtsville and then refilled the pit with toxic waste.

In Morocco, fully half the sand used for construction is estimated to be mined illegally; whole stretches of beach in that country are disappearing.

India is a vast country of more than 1 billion people. It hides hundreds, most likely thousands, of illegal sand mining operations. Corruption and violence will stymie many of even the best-intentioned attempts to crack down on them. And it’s not just India.

There is large-scale illegal sand extraction going on in dozens of countries. One way or another, sand is mined in almost every country on Earth. India is only the most extreme manifestation of a slow-building crisis that affects the whole world.

Concrete is the skeleton of the modern world, the scaffold on which so much else is built. It gives us the power to dam enormous rivers, erect buildings of Olympian height, and travel to all but the remotest corners of the world with an ease that would astonish our ancestors. Measured by the number of lives it touches, concrete is easily the most important man-made material ever invented.

Cement is not the same thing as concrete. Cement is an ingredient of concrete. It’s the glue that binds the gravel and sand together. Cements (there are many forms) are typically made by crushing up clay, lime, and other minerals, firing them in a kiln at temperatures up to 2,700 degrees, then milling the result into a silky-fine gray powder. Mix that powder with water and you get a paste. The paste doesn’t simply dry, like mud; it “cures,” meaning the powder’s molecules bond together via a process called hydration, its chemical components gripping each other ever tighter, making the resulting substance extremely strong. Reinforced with a platoon of sand, that paste thickens into mortar, the stuff used to hold bricks together.

Concrete is made by adding “aggregate”—sand and gravel—to the mix of cement and water. Typical concrete is about 75% aggregate, 15% water, and 10% cement.

Roman engineers developed sophisticated techniques to improve on basic concrete. Concrete shrinks as it hardens, which can cause it to crack. Water seeping into the cracks expands when it freezes, widening those cracks and further weakening the concrete. Adding horsehair helped with shrinkage, the Romans found, and putting a bit of blood or animal fat in the mix helped the concrete withstand the effects of freezing water.

Today, there are hundreds of formulas for making cement tailored to specific weather conditions, project types, and other variables.

95% of the roughly 83 million tons of cement manufactured in America is Portland cement.

On its own, concrete is basically artificial stone. Reinforced with iron or steel, though, it becomes a building material unlike anything found in nature, one that combines the strengths of both metal and stone. That’s what makes it so useful for so many purposes.

By 1906 there were very few reinforced concrete buildings in California. That was largely thanks to bitter opposition from powerful building trade unions, especially on Ransome's home turf of San Francisco. Bricklayers, stonemasons, and others, correctly seeing in concrete a mortal threat to their professions, denounced it as unproven and unsafe. Just a few months before the quake, a group of bricklayers and steelworkers in Los Angeles tried to convince the city council to forbid the construction of any more concrete buildings within municipal limits. The tradesmen also made a case against concrete on the grounds that it was plain ugly.

Concrete made possible the Panama Canal, begun in 1903, which reshaped an entire nation's landscape and the world's shipping routes. It was used to make bunkers for millions of troops in World War I.

One million tons of it were deployed to anchor San Francisco’s Golden Gate Bridge.

Every mile of US interstate highway is made with some 15,000 tons of concrete. Throw in the medians, overpasses, ramps, and road base, and all told, an estimated 1.5 billion tons of gravel and sand went into making the national highway system. That's more than enough concrete to build a sidewalk reaching to the moon and back—twice.
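The sidewalk claim can be sanity-checked with rough numbers; the sidewalk dimensions and material density below are illustrative assumptions of mine, not the book's figures:

```python
# Check the "sidewalk to the moon and back, twice" claim from
# 1.5 billion tons of highway aggregate.
aggregate_tons = 1.5e9
density_t_per_m3 = 1.8        # assumed bulk density of sand & gravel
volume_m3 = aggregate_tons / density_t_per_m3

sidewalk_width_m = 0.9        # assumed ~3 ft wide
sidewalk_thick_m = 0.10       # assumed ~4 in thick
m3_per_meter = sidewalk_width_m * sidewalk_thick_m

sidewalk_length_m = volume_m3 / m3_per_meter
moon_round_trips = sidewalk_length_m / (2 * 384_400_000)  # Earth-Moon distance, m
print(f"≈ {moon_round_trips:.0f} round trips to the Moon")
```

Even with these modest sidewalk dimensions, the aggregate comfortably exceeds two round trips, so the book's "more than enough" phrasing is, if anything, conservative.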

Modern asphalt pavement is often more than 90% sand and gravel.

One advantage asphalt had over wood was that it didn’t soak up urine from the endless parade of horses that were the primary form of transport at the time. And unlike brick or stone, asphalt had no gaps between blocks for manure to get stuck in, a serious health hazard.

These days, asphalt producers like to boast that 93% of all 2.2 million miles of America's paved roads are surfaced with their product. They don't mention that it's often just an overlay on top of a concrete base.

Both asphalt and concrete are basically just gravel and sand stuck together. The difference is the binding agent. In concrete, it’s cement. In asphalt pavement, it’s bitumens.

The basic trade-off is that in general, asphalt is cheaper to lay down and to maintain, and provides a smoother, quieter ride. Concrete, on the other hand, lasts longer and doesn’t need as much repairing in the first place. The choice often comes down to how much money a given government agency has handy.

Both types of pavement began creeping over city streets in the late 1800s, but outside of urban areas at that time, there was almost nothing but dirt to travel on. Roads just weren’t that important. For most of American history, if you wanted to move lots of people or large quantities of goods any significant distance, you did it via water. Rivers, lakes, canals, and seacoasts carried trade and travelers between settlements. Then along came the railroads in the mid-1800s. Trains connected existing centers and made it easier for people to settle further inland.

Roads, such as they were, were for local travel and hauling small loads via horse, wagon, or foot.

By 1912, there were nearly a million cars on American roads—10 percent of them Model T’s. They jostled for space with the new trucks that farmers were investing in to haul their produce, and which businesses were turning to as an alternative to railroads. At the time, there were still 21 million horses hauling people and cargo, but it was clear automobiles were becoming ever more important.

One of the central difficulties in building those first highways was getting the armies of sand to where they were needed. Each mile of paved road required around 2,000 tons of sand and 3,000 tons of gravel. Hauling all that aggregate out to the rural areas where most of the new highways were being built was no small feat; after all, at the time there were hardly any trucks, and no existing roads on which to transport the aggregate from the mines to the new roadbeds. Builders had to rely on horses and wagons, or build special rail lines to bring trains to the roadbeds. Locomotives would haul in carloads of rock, sand, and cement to be mixed on-site.

Roads became a major industry unto themselves. Hundreds of thousands of men worked building them (including chain-ganged prisoners forced to break rocks for roads). More jobs were created in the gas stations, repair shops, restaurants, hotels, and motels that grew up alongside the new highways. Hundreds of other businesses grew fat supplying the raw materials to the road makers—cement, asphalt, gravel, and of course, sand.

11 million tons of sand and gravel were needed to build California’s Shasta Dam. Kaiser figured it would be simple, since he already owned a sizable aggregate mine near the dam site north of Redding; all he had to do was load it up on trains and pay for the transport. But the local railroad quoted a price Kaiser thought too high. So he came up with an audacious work-around. He built a conveyor belt nearly ten miles long, the longest the world had ever seen, to carry a thousand tons of sand and rock per hour up and down rugged hills and across several creeks to the dam site. Later, Kaiser parlayed his expertise with aggregate into a prize gig as one of the main contractors building the Hoover Dam.

The road network is also far more resilient than rail lines. Trucks can drive around bomb craters, after all, but trains can't get past damaged track. Trucks carry 70% of all US freight, seven times more than trains.

In addition to all the grains embedded in the 11 inches of concrete on the roads’ surface, a further 21 inches of aggregates were needed for the underlying road base.

Consumption of sand and gravel in the US hit a record high of nearly 700 million tons in 1958, a figure almost twice the 1950 total. By then, according to a federal Bureau of Mines report, so much had already been used that “sources of aggregate were limited in some states” and “nearly depleted in other areas.” Entire new types of monster dump trucks, capable of carrying huge loads off-road, were designed to meet the need to move all that aggregate.

Figuring out exactly how to build those roads took some doing. The Bureau of Public Roads set up a testing center near Chicago where researchers experimented with different types and proportions of sand, gravel, cement, and other ingredients to figure out how much of a beating from heavily loaded trucks each paving mixture could stand up to and for how long. They built a series of looping test tracks composed of various asphalt and concrete mixes, and then set a company of soldiers to drive trucks over them—19 hours a day, every day for two years. The bureau used the data to set pavement design standards.

Whatever else you can say about suburbs, their low density and dependence on cars make them an especially sand-intensive form of settlement. Think of all the sand that goes into those wide roads and all those low-slung, spread-out houses, each with its own driveway. Every one of those houses contains hundreds of tons of sand and gravel, from its asphalt driveway to its concrete foundation to its stuccoed walls to the grains on its roof shingles.

The open spaces of suburbia also made possible an explosive proliferation of swimming pools, which require large amounts of sand in the form of concrete.

American sand and gravel production grew in step with the spread of suburbs.

Glass can be shaped and molded into almost any form, from twenty-ton slabs to strands thinner than a human hair, from delicate crystal to bulletproof shields. It makes fiber-optic cables and beer bottles, microscope lenses and fiberglass kayaks, the skins of skyscrapers and the teeny camera lenses on your cell phone.

Glass is the thing that lets us see everything. Without it, we’d have no photographs, films, or television, “no understanding of the world of bacteria and viruses, no antibiotics and no revolution in molecular biology from the discovery of DNA.”

A more refined breed of grain is required than the common construction sand used for concrete. Glass sand belongs to a category called industrial, or silica, sand. The best silica sands are also relatively uniform in size. Grains that are too big won’t melt as easily, and ones that are too small will be blown away by air currents in the furnaces.

Construction sand grains retain their form when made into concrete; they are cemented together with countless legions of their fellow grains and their big brothers, gravel pieces, perpetually working together. The grains that become glass, however, are actually transmuted, losing their individual bodies as they are fused together to form a completely different substance.

Glass

Getting them to do that, however, is not easy. It takes temperatures topping 1,600 degrees Celsius to melt silica grains. But mixing sand with additives known as flux, such as soda (aka sodium carbonate), lowers that melting point dramatically. Throw in a little calcium, in the form of powdered limestone or seashell fragments, melt it all together, and when the mixture cools, you have basic glass.
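The batch arithmetic implied by that recipe is simple enough to sketch. The proportions and conversion factors below are illustrative assumptions (typical modern soda-lime figures and textbook molar masses), not numbers from the book:

```python
# Hypothetical batch calculation for a basic soda-lime glass melt.
# Oxide proportions are assumed, typical modern values.
target_kg = 100.0
oxide_fractions = {"SiO2": 0.72, "Na2O": 0.14, "CaO": 0.10, "other": 0.04}

# The carbonate raw materials lose CO2 in the furnace, so more must be
# weighed in than ends up in the glass:
#   soda ash Na2CO3 (106 g/mol) yields Na2O (62 g/mol)
#   limestone CaCO3 (100 g/mol) yields CaO (56 g/mol)
batch = {
    "silica sand": target_kg * oxide_fractions["SiO2"],             # 72.0 kg
    "soda ash":    target_kg * oxide_fractions["Na2O"] * 106 / 62,  # ~23.9 kg
    "limestone":   target_kg * oxide_fractions["CaO"] * 100 / 56,   # ~17.9 kg
}
for material, kg in batch.items():
    print(f"{material}: {kg:.1f} kg")
```

The carbonate conversion factors matter because the furnace drives off CO2: some 40 percent of the weighed-in soda ash and limestone mass never makes it into the finished glass.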

Glassmaking developed into such a profitable art in Venice that in 1291 the city-state’s rulers ordered all of the city’s glassmakers to move to the island of Murano. There they were treated like aristocrats—but not allowed to leave, lest they take their coveted craft secrets to rival nations.

“The invention of spectacles increased the intellectual life of professional workers by fifteen years or more,” write Macfarlane and Martin. Eyeglasses likely abetted the surge of knowledge in Europe from the fourteenth century on. “Much of the later work of great writers such as Petrarch would not have been completed without spectacles. The active life of skilled craftsmen, often engaged in very detailed close work, was also almost doubled,” Macfarlane and Martin maintain. The ability to read into one’s old age became even more important once the printing press came into widespread use from the middle of the fifteenth century.

To manufacture glass profitably, glassmakers need easy access to high-quality sand, cheap energy to run the furnaces, and a transportation network to get the product to market.

It insulated the Alaskan oil pipeline,

In the single year following the introduction of the bottle-making machine, silica sand production in the United States leapt from 1.1 million tons to 4.4 million tons. Clawing all those grains from the earth wreaked considerable damage on the environment. Starting in 1890, sand miners completely dismantled the Hoosier Slide, a 200-foot-tall Indiana dune near Michigan City that was once a tourist attraction, hauling its grains away in wheelbarrows to sell to glassmakers.

Lake Michigan shoreline dunes, some as high as 300 feet, were also mined out of existence until public outcry forced the state government to protect them in the 1970s and 1980s.

Elsewhere in Indiana, the Gary Evening Post complained in 1913 that “sand sucker” boats were “stealing the bottom” of Lake Michigan to sell to glassmakers. At the time, no permit or payment was required; anyone was free to dredge as much sand as they liked. (Indiana sand also provided fill for the site of the 1893 Chicago World’s Fair, and to reclaim the land on which Chicago’s famous Lincoln Park was built.)

Owens’s machine quickly and completely wiped out jobs for another class of workers: children. The unions suddenly became crusaders for eliminating child labor—partly because their low pay dragged down wages for everyone, at a time when workingmen’s livelihoods were already in jeopardy. But more important, kids simply were no longer needed in the factories. The dangerous, repetitive tasks that had been given to children were now better handled by machines. In 1880, nearly one-quarter of all glass industry workers were children; by 1919, fewer than 2 percent were.

The irony of all this was that Owens himself didn’t see much wrong with child labor. He always insisted his own early career was a fine one for any stouthearted lad. In a 1922 magazine interview, he expounded: “One of the greatest evils of modern life is the growing habit of regarding work as an affliction. When I was a youngster I wanted to work. . . . A great deal of the trouble to day is with the mothers. Too many boys are being brought up by sentimental women. The first fifteen or twenty years of their lives are spent in playing. . . . When they finally start to work, they are so useless and so helpless that it is positively pathetic. The young man who has begun to work when he was a boy has them handicapped. . . . The hard work I did as a boy never injured me.” He added: “I went through all the jobs the boys performed, and I enjoyed every bit of the experience.”

Before 1900, beer and whiskey were distributed in kegs to taverns; if you wanted some to take home, you had to supply your own jug. Milk was stored in metal cans delivered by milk wagons; it was served in pitchers. There was no such thing as a baby bottle. Glass is a near-perfect material for packaging food and beverages. It is nonporous and impermeable, and almost nothing reacts with it chemically, which means a bottle will not interact with whatever is inside it. It won’t rust or leach BPAs or impart a plasticky taste; the liquid inside will retain its aroma and flavor for a very long time. So the sudden availability of cheap high-quality bottles was a colossal gift to makers of soft drinks, beer, medicines, and other bottled consumables.

Owens’s mass-manufactured bottles hit the market at the same time that automobiles were taking over the country and paved roads were spreading. Both developments made it easier than ever to distribute products like bottled drinks far and wide. Trucks loaded with products packaged in sand rolled smoothly from shop to shop on roads made of sand.

By 1916 they had a good enough model to launch a new company selling sheet glass. Its impact was as profound as that of the bottle machine, turning windows for houses and cars, as well as glass tableware, from luxury items into everyday basics.

Glass-skinned skyscrapers took over city skylines. Plate glass production worldwide mushroomed twenty-five-fold between 1980 and 2010. Today, more than 11 billion square yards of flat glass are consumed every year—more than enough to glaze over the entire city of Houston six times.
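That Houston comparison is easy to sanity-check. A quick sketch, assuming a round figure of 600 square miles for the city’s land area (an assumption; the rest is unit conversion):

```python
# Sanity check: can 11 billion square yards of flat glass per year
# really cover Houston six times? (Houston's area is an assumed ~600 mi^2.)
flat_glass_yd2 = 11e9            # square yards consumed per year
m2_per_yd2 = 0.9144 ** 2         # a yard is exactly 0.9144 m
houston_mi2 = 600.0              # approximate land area of Houston
m2_per_mi2 = 1609.344 ** 2       # a mile is exactly 1609.344 m

glass_m2 = flat_glass_yd2 * m2_per_yd2
houston_m2 = houston_mi2 * m2_per_mi2
print(f"could cover Houston ~{glass_m2 / houston_m2:.1f} times")  # ~5.9
```

So the claim holds up: roughly six Houstons’ worth of glazing per year.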

Owens-Illinois employees in the 1930s developed a threadlike form of glass that is flexible, strong, lightweight, waterproof, and heat resistant, which they dubbed Fiberglas. (Yes, with one s. Later, other companies brought their own versions to market and the stuff became known generically as fiberglass.) Others had spun glass into threads before, but the new process allowed for the creation of strands as thin as four microns around and thousands of feet long. As is true of all glass products, it owes its existence to sand. To make fiberglass, silica is melted down along with other substances—boron, calcium oxide, magnesia—to make it more workable and give it other properties desired for specific products, such as greater tensile strength. This molten glass is extruded through a metal sleeve set with tiny holes, and the streams are caught on high-speed winders that spin them into filaments. Once cooled and coated with chemical resin, these strands can be used in all kinds of ways, from pipe insulation to kayaks. Highly efficient insulation made with fiberglass also helped make possible the movement of millions of people into America’s South and Southwest, areas too unpleasantly hot in summer for most folks to consider without a reliable way to keep the heat out. Sand in the form of fiberglass made it easier for people to move to the sand-strewn deserts of Arizona and Nevada.

(Ceramics, incidentally, are also largely composed of sand; ground silica provides the skeleton to which the clay and other additives are attached.)

Glass has long since lost its premier position as the world’s beverage container material of choice; plastic bottles and metal cans now make up 80 percent of the market.

The industry’s center of gravity today is China, which is now both the world’s largest producer and consumer of glass, churning out and gobbling up more than half of all the world’s flat glass.

Computer Chips

Spruce Pine, it turns out, is the source of the purest natural quartz ever found on Earth. This ultra-elite corps of silicon dioxide particles plays a key role in manufacturing the silicon used to make computer chips. In fact, there’s an excellent chance the chip that makes your laptop or cell phone work was made using quartz from this obscure Appalachian backwater. “It’s a billion-dollar industry here,” said Glover with a hooting laugh. “Can’t tell by driving through here. You’d never know it.”

Mica used to be prized for wood- and coal-burning stove windows and for electrical insulation in vacuum tube electronics. It’s now used mostly as a specialty additive in cosmetics and things like caulks, sealants, and drywall joint compound.

Step one is to take high-purity silica sand, the kind used for glass. (Lump quartz is also sometimes used.) That quartz is then blasted in a powerful electric furnace, creating a chemical reaction that separates out much of the oxygen. That leaves you with what is called silicon metal, which is about 99 percent pure silicon. But that’s not nearly good enough for high-tech uses. Silicon for solar panels has to be 99.999999 percent pure—six 9s after the decimal. Computer chips are even more demanding. Their silicon needs to be 99.99999999999 percent pure—eleven 9s.
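Those strings of 9s are easier to grasp as counts of foreign atoms. A rough illustration, assuming about 5 × 10^22 silicon atoms per cubic centimeter (an order-of-magnitude figure, not from the source):

```python
# What the "nines" of purity translate to in foreign atoms per cm^3.
# The atom density is an assumed order-of-magnitude value for silicon.
ATOMS_PER_CM3 = 5e22

grades = {
    "silicon metal (99%)": 0.99,
    "solar grade (six 9s after the decimal)": 0.99999999,
    "chip grade (eleven 9s after the decimal)": 0.9999999999999,
}
for name, purity in grades.items():
    impurity = 1.0 - purity
    print(f"{name}: ~{impurity * ATOMS_PER_CM3:.0e} foreign atoms per cm^3")
```

Even at eleven 9s, billions of stray atoms remain in every cubic centimeter; purity in this business is strictly relative.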

The next step is to melt down the polysilicon. But you can’t just throw this exquisitely refined material in a cook pot. If the molten silicon comes into contact with even the tiniest amount of the wrong substance, it causes a ruinous chemical reaction. You need crucibles made from the one substance that has both the strength to withstand the heat required to melt polysilicon, and a molecular composition that won’t infect it. That substance is pure quartz. This is where Spruce Pine quartz comes in. It’s the world’s primary source of the raw material needed to make the fused-quartz crucibles in which computer-chip-grade polysilicon is melted. A fire in 2008 at one of the main quartz facilities in Spruce Pine for a time all but shut off the supply of high-purity quartz to the world market, sending shivers through the industry.

A 2017 study by the US Geological Survey warned that unless something is done, as much as two-thirds of Southern California’s beaches may be completely eroded by 2100.

Massive coastal development—marinas, jetties, ports—blocks the flow of ocean-borne sand.

River dams also cut off the flow of sand that used to feed beaches.

Southern California’s beaches have lost as much as four-fifths of the sediment that rivers used to bring them, thanks to dams.

Louisiana loses an estimated sixteen square miles of wetlands every year—a crucial natural defense against hurricanes—because levees and canals on the Mississippi have reduced the flow of sediment that used to replenish them. Egypt’s Aswan Dam has done a similar number on the shore of the Nile Delta. China’s colossal Three Gorges Dam project is expected to have an even greater impact.

Sand mining makes the problem worse. Dams combined with upriver sand mining are decimating the supply of replenishing sediment to Vietnam’s Mekong Delta, home to 20 million people and source of half that country’s food supply.

Illegal beach sand mining has been reported all over the world. In Morocco and Algeria, illegal miners have stripped entire beaches for construction sand, leaving behind rocky moonscapes. Thieves in Hungary made off with hundreds of tons of sand from an artificial river beach in 2007. Five miles of beach was stripped down to its clay foundation in Russian-occupied Crimea in 2016. Smugglers in Malaysia, Indonesia, and Cambodia pile beach sand onto small barges in the night and sell them in Singapore. Beaches have been torn up in India and elsewhere.

Government officials in Puerto Rico have had to restrict beach sand mining because so many grains were being taken to build tourist hotels that the very beaches those tourists came for were disappearing.

Add rising seas to shrinking beaches and you have a serious problem worldwide.

Beach nourishment, also known as beach replenishment, has become a major industry. More than $7 billion has been spent in the United States in recent decades on artificially rebuilding hundreds of miles of beach nationwide. Almost all of the costs are covered by taxpayers; much of it is overseen by the federal US Army Corps of Engineers. Florida accounted for about a quarter of the total.

Eastman Aggregate would dump a million tons of new sand on Broward’s beaches over the course of several months. The grains are mined from an inland quarry a couple of hours’ drive away. Trucks haul that sand down the highway, squeeze their way in between the villas and hotels, and dump it on the shore. Excavators load the freshly delivered sand into hulking yellow dump trucks, which ferry it to the edge of the renourishment zone. Small bulldozers then push the grains into place, extending an evenly proportioned beach out into the surf.

Hauling and placing sand with trucks is both considerably slower and far more expensive than the more common method, which is to dredge sand from the sea bottom and blast it onto the shore through floating pipes. The problem is that over the last four decades since beach nourishment began in earnest, Broward County has used up all the sea sand it is legally and technically able to lay its hands on. Nearly 12 million cubic yards of underwater grains have been stripped off the ocean bottom and thrown onto Broward’s shores. There are still some pockets of sand on the seabed, but dredging them is forbidden because it could damage the coral reefs they sit next to.

The same goes for Miami-Dade County to the south.

There is lots of sand left off the coasts of three other Florida counties farther north. They haven’t worked their beaches quite as hard as the tourist meccas to the south, and the continental shelf up there extends further out before dropping into the deep ocean, giving them a larger area to dredge from. Miami-Dade has asked for help, but the northern counties have so far refused to share. They don’t want to find themselves in Miami’s position thirty years from now.

Even Olympic beach volleyball players get specially sourced sand. To make sure their bare feet come into contact only with grains of just the right size and shape, sand was brought in from Hainan Island for the 2008 Beijing Games, and from a quarry in Belgium for the 2004 Athens Games.

This particular beach is only expected to last about six years before it needs more upkeep.

In Broward County, they make no bones about it. “Beaches are a form of infrastructure,” said Sharp. “You pave your potholes, we pave our beaches with sand.”

For most of human history, beaches weren’t places to relax, but to work. The sandy shores were where fishermen launched their boats and cleaned their catch, where small traders unloaded their cargo. Coastal people built their homes a safe distance from the unpredictable weather and waves of the shoreline, often facing away from the sea for added protection. “When Europeans and Americans first settled the coasts, they largely ignored, indeed avoided, what are today’s most coveted stretches of shore,” writes historian John R. Gillis in The Human Shore, an account of our changing relationship with our coasts. “The beach was used for landing but not for settlement. Its featureless barrenness was not only inhospitable but repulsive.”

“1820s-era England is responsible for a turning point in the history of seaside resorts, as this was when the first major bathing establishments were constructed for the specific purpose of bathing, relaxation, and play,” writes University of Florida scholar Tatyana Ressetar in her master’s thesis.

The popularity of beaches grew through the late 1800s among the burgeoning middle class, with their newfound leisure time, and as railroads made the shores accessible to lower-class city dwellers who previously had no way to reach them.

The rich began building private seaside mansions, and the middle class copied them on a smaller scale, until by the 1930s there were seaside towns all over Europe and North America. The rise of the automobile and post–World War II prosperity brought unprecedented numbers to the beach, more and more of whom chose to retire there as time went on.

A century ago, Hawaii’s Waikiki Beach was a narrow ribbon of sand fringed by marsh; it was beefed up to its current expansive size with grains barged in from other Hawaiian islands, and at one point in the 1930s with sand shipped from California. Today it still requires regular renourishing.

Many of Spain’s Canary Island beaches were just rocky coastlines until developers dumped tons of sand imported from the Caribbean and Morocco on them.

The glamorization of the sandy beach gave rise to cities like Miami Beach and Fort Lauderdale. Roads built of sand made it possible for people to drive to them. Concrete made it possible to build whole cities in the middle of nowhere to house them all. Later, concrete built the vast theme parks—Walt Disney World, Universal Studios—which attracted even more people. Sand abetting sand abetting sand.

Washington subsidizes local governments and homeowners who build in imperiled coastal areas to the tune of billions of dollars in the form of insurance guarantees, disaster bailouts, and other protections. Taxpayer-funded beach nourishment also has the perverse effect of shoring up property values, a recent study found.

NOTE: to read further, be sure to buy the book; I left a lot out of the above.

Posted in Concrete, Peak Sand | Comments Off on How sand transformed civilization

Far out power #1: human fat, playgrounds, solar wind towers, perpetual motion, thermal depolymerization

Preface. Plans for hydrogen, wind, solar, wave and all the other re-buildable contraptions that use fossil fuels in every single step of their short 15-25 year life cycles, and hence are non-renewable, are just as silly as the ideas below. Yet those schemes, despite their negative energy return and their inability to make themselves without fossil fuels, are written about in respectable scientific journals, unlike the proposals that follow.

I’ve been writing about this since 2001, now Michael Moore has made a film called “Planet of the Humans” that explains this as well.

***

Liposuction fat

Mr. Bethune thinks the use of human fat as an energy source has some potential. “There’s an interesting business model: link a biodiesel plant with the cosmetic surgeons,” says Mr. Bethune. “In Auckland we produce about 330 pounds of fat per week from liposuction, which would make about 40 gallons of fuel. If it is going to be chucked out, why not?” (Schouten 2005)
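Bethune’s numbers roughly check out. A back-of-the-envelope sketch, with assumed values for the transesterification yield and biodiesel density (both hypothetical round figures, not from the source):

```python
# Back-of-the-envelope check: does 330 lb of liposuction fat per week
# really yield about 40 gallons of biodiesel? Yield and density are assumed.
LB_TO_KG = 0.4536
L_TO_GAL = 1 / 3.785

fat_kg = 330 * LB_TO_KG        # ~150 kg of fat per week
yield_by_mass = 0.95           # assumed transesterification yield
density_kg_per_l = 0.88        # assumed biodiesel density, kg per liter

biodiesel_l = fat_kg * yield_by_mass / density_kg_per_l
print(f"~{biodiesel_l * L_TO_GAL:.0f} gallons per week")  # prints "~43 gallons per week"
```

That lands within a few gallons of the figure Bethune quotes.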

At an Exxon conference, the Yes Men pulled a prank: they presented a proposal for a new fuel, Vivoleum, to be made from humans killed by climate change, and handed out hundreds of candles made of human hair that smelled like dead people (Yes Men 2015).

Auckland, New Zealand, adventurer Peter Bethune plans to break the round-the-world powerboat speed record in a boat powered by biodiesel fuel partly manufactured from human fat. The lean Mr. Bethune had about three ounces of fat extracted from his body yesterday in a liposuction procedure, and he is seeking volunteers to donate more (Schouten 2005).

Playground power

The only place I could find this actually existing is in Ghana, Africa, where Empower Playgrounds provides merry-go-rounds to schools that generate and store electricity as they are spun around (Brownlee 2013).

Perpetual motion

Perpetual motion machines violate the laws of physics and thermodynamics; even the patent office got wise and won’t accept any applications (Wikipedia, Park 2000).

Thermal depolymerization

Garbage and landfills can be turned into biogas. But as energy declines, there will be less and less garbage, not only because there won’t be the fuel to take it to a landfill, but also because people will be burning anything they can get their hands on to cook and heat with.

Solar Wind Towers (Slav 2019)

More than 30 years ago a giant tower was built in Manzanares, Spain, to produce electricity in a way that at the time must have seemed even more eccentric than it does now: by harnessing the power of air movement. The Manzanares tower was, sadly, toppled by a storm. In the decades since, several other firms have tried to replicate the idea, but none has succeeded. Why?

The idea behind the so-called solar wind towers is pretty straightforward. The more popular version is the solar updraft tower, which works as follows:

On the ground, around the hollow tower, there is a solar energy collector—a transparent surface suspended a little above ground—which heats the air underneath.

As the air heats up, it is drawn into the tower, also called a solar chimney, since hot air is lighter than cold air. It enters the tower and moves up it to escape through the top. In the process, it activates a number of wind turbines located around the base of the tower. The main benefit over other renewable technologies? Doing away with the intermittency of PV solar, since the air beneath the collector could stay hot even when the sun is not shining.
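The physics above can be put into a rough power estimate. In the standard idealization, the fraction of collected heat a chimney can convert to work is g·H/(cp·T). A sketch with assumed figures loosely based on the Manzanares prototype (the chimney height and collector area are approximate; the efficiencies are guesses, not from the source):

```python
# Rough output estimate for a solar updraft tower.
g = 9.81            # m/s^2
cp = 1005.0         # J/(kg K), specific heat of air
T_ambient = 293.0   # K, roughly 20 C

tower_height_m = 195.0      # Manzanares chimney, approximate
collector_area_m2 = 46_000.0  # Manzanares collector, approximate
solar_flux_w_m2 = 1000.0    # bright midday sun
collector_eff = 0.3         # assumed: fraction of sunlight heating the air
turbine_eff = 0.5           # assumed turbine/generator efficiency

# Ideal chimney efficiency grows linearly with height: g*H / (cp*T)
chimney_eff = g * tower_height_m / (cp * T_ambient)  # ~0.65%
power_kw = (solar_flux_w_m2 * collector_area_m2
            * collector_eff * chimney_eff * turbine_eff) / 1000
print(f"chimney efficiency ~{chimney_eff:.2%}, output ~{power_kw:.0f} kW")
```

With these guesses the estimate lands near the Manzanares prototype’s rated 50 kW, and it makes the “taller is better” point concrete: ideal efficiency grows linearly with chimney height, yet even a 200-meter tower converts well under 1 percent of the collected heat.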

But building one is simply too expensive, and investors are wary of the problems related to the very tall structures required (the taller, the better).

References

Brownlee, J. 2013. A Merry-Go-Round That Turns The Power Of Play Into Electricity. fastcompany.

Park, R. 2000. Perpetual Motion: still going around. Washington Post.

Schouten, H. 2005. Earthrace biofuel promoter to power boat using human fat. calorielab.com

Slav, I. 2019. The fatal flaw in a perfect energy solution. oilprice.com

Posted in Far Out | 4 Comments

How a pandemic or bioweapon could take civilization down

Preface. I just listened to a 3.5-hour podcast on pandemics and bioweapons with the best up-to-date coverage I know of, and more interesting to listen to than reading a book or article. Just one of many scary problems: synthetic biology and CRISPR tools are on their way to being accessible to the public within 20 years or less (Cross 2018, Sharma et al 2020). That would make it possible for just one person to assemble a virus like the bird flu (H5N1) and let it loose.

2021-4-3 Engineering the Apocalypse by Rob Reid & Sam Harris

Rob Reid’s podcast has suggestions for what we could do, such as creating universal flu and coronavirus vaccines, as well as vaccines for other viruses we know of. There are many ways to monitor the rise of a pandemic through testing, air sampling and more.

I would guess there are many possible motivations: perhaps someone who is suicidal or crazy, like the mass shooters. Or a nation. North Korea comes to mind, but a nation at war that has developed bioweapons and a vaccine to counteract its engineered virus might inoculate its own population before unleashing the virus on the world. Or a deep ecologist protecting biodiversity and climate. Or a billionaire with a New Zealand bunker who wants to carry on with his non-negotiable way of life by killing billions to delay limits to growth and the end of fossil fuel production.

I think it is more likely that civilization will fail from energy shortages, now that peak oil is upon us, ending the precision machine tools, supply chains, and technology that could create a bioweapon or a vaccine.

And who knows what Russia has and might use? In 1973 the Soviet Union decided it would be much cheaper to develop bioweapons than nuclear missiles, and its Biopreparat program successfully weaponized smallpox, bubonic plague, anthrax, Venezuelan equine encephalitis, tularemia, influenza, brucellosis, Marburg virus, Machupo virus, Veepox (a hybrid of Venezuelan equine encephalitis and smallpox), and Ebolapox (a hybrid of Ebola and smallpox).

Below is an article about whether a pandemic could bring civilization down. The main way this would happen is if the death rate is so high that essential workers would stay home.

Alice Friedemann   www.energyskeptic.com  author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, 2021, Springer; “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

MacKenzie, D. April 5, 2008. Will a pandemic bring down civilization? NewScientist.

For years we have been warned that a pandemic is coming. It could be flu, it could be something else. We know that lots of people will die. As terrible as this will be, on an ever more crowded planet, you can’t help wondering whether the survivors might be better off in some ways. Wouldn’t it be easier to rebuild modern society into something more sustainable if, perish the thought, there were fewer of us?

Yet would life ever return to something resembling normal after a devastating pandemic? Virologists sometimes talk about their nightmare scenarios – a plague like ebola or smallpox – as “civilization ending”. Surely they are exaggerating. Aren’t they?

Many people dismiss any talk of collapse as akin to the street-corner prophet warning that the end is nigh. In the past couple of centuries, humanity has innovated its way past so many predicted plagues, famines and wars – from Malthus to Dr Strangelove – that anyone who takes such ideas seriously tends to be labeled a doom-monger.

There is a widespread belief that our society has achieved a scale, complexity and level of innovation that make it immune from collapse. “It’s an argument so ingrained both in our subconscious and in public discourse that it has assumed the status of objective reality,” writes biologist and geographer Jared Diamond of the University of California, Los Angeles, author of the 2005 book Collapse. “We think we are different.”

Ever more vulnerable

A growing number of researchers, however, are coming to the conclusion that far from becoming ever more resilient, our society is becoming ever more vulnerable. In a severe pandemic, the disease might only be the start of our problems.

No scientific study has looked at whether a pandemic with a high mortality rate could cause social collapse – at least none that has been made public. The vast majority of plans for weathering a pandemic fail even to acknowledge that crucial systems might collapse, let alone take that possibility into account.

There have been many pandemics before, of course. In 1348, the Black Death killed about a third of Europe’s population. Its impact was huge, but European civilization did not collapse. After the Roman empire was hit by a plague with a similar death rate around AD 170, however, the empire tipped into a downward spiral towards collapse. Why the difference? In a word: complexity.

In the 14th century, Europe was a feudal hierarchy in which more than 80% of the population were peasant farmers. Each death removed a food producer, but also a consumer, so there was little net effect. “In a hierarchy, no one is so vital that they can’t be easily replaced,” says Yaneer Bar-Yam, head of the New England Complex Systems Institute in Cambridge, Massachusetts. “Monarchs died, but life went on.”

Individuals matter

The Roman empire was also a hierarchy, but with a difference: it had a huge urban population – not equaled in Europe until modern times – which depended on peasants for grain, taxes and soldiers. “Population decline affected agriculture, which affected the empire’s ability to pay for the military, which made the empire less able to keep invaders out,” says anthropologist and historian Joseph Tainter at Utah State University in Logan. “Invaders in turn further weakened peasants and agriculture.”

A high-mortality pandemic could trigger a similar result now, Tainter says. “Fewer consumers mean the economy would contract, meaning fewer jobs, meaning even fewer consumers. Loss of personnel in key industries would hurt too.”

Bar-Yam thinks the loss of key people would be crucial. “Losing pieces indiscriminately from a highly complex system is very dangerous,” he says. “One of the most profound results of complex systems research is that when systems are highly complex, individuals matter.”

The same conclusion has emerged from a completely different source: tabletop “simulations” in which political and economic leaders work through what would happen as a hypothetical flu pandemic plays out. “One of the big ‘Aha!’ moments is always when company leaders realize how much they need key people,” says Paula Scalingi, who runs pandemic simulations for the Pacific Northwest economic region of the US. “People are the critical infrastructure.”

Vital hubs

Especially vital are “hubs” – the people whose actions link all the rest. Take truck drivers. When a strike blocked petrol deliveries from the UK’s oil refineries for 10 days in 2000, nearly a third of motorists ran out of fuel, some train and bus services were cancelled, shops began to run out of food, hospitals were reduced to running minimal services, hazardous waste piled up, and bodies went unburied. Afterwards, a study by Alan McKinnon of Heriot-Watt University in Edinburgh, UK, predicted huge economic losses and a rapid deterioration in living conditions if all road haulage in the UK shut down for just a week.

What would happen in a pandemic when many truckers are sick, dead or too scared to work? Even if a pandemic is relatively mild, many might have to stay home to care for sick family or look after children whose schools are closed. Even a small impact on road haulage would quickly have severe knock-on effects.

One reason is just-in-time delivery. Over the past few decades, people who use or sell commodities from coal to aspirin have stopped keeping large stocks, because to do so is expensive. They rely instead on frequent small deliveries.

Cities typically have only three days’ worth of food, and the old saying about civilizations being just three or four meals away from anarchy is taken seriously by security agencies such as MI5 in the UK.

How long would your stocks last if shops emptied and your water supply dried up? Even if everyone were willing, US officials warn that many people might not be able to afford to stockpile enough food.

Two-day supply

Hospitals rely on daily deliveries of drugs, blood and gases. “Hospital pandemic plans fixate on having enough ventilators,” says public health specialist Michael Osterholm at the University of Minnesota in Minneapolis, who has been calling for broader preparation for a pandemic. “But they’ll run out of oxygen to put through them first. No hospital has more than a two-day supply.” Equally critical is chlorine for water purification plants.

It’s not only absentee truck drivers that could cripple the transport system; new drivers can be drafted in and trained fairly quickly, after all. Trucks need fuel, too. What if staff at the refineries that produce it don’t show up for work?

Some models suggest absenteeism sparked by a 1918-type pandemic could cut the workforce by half at the peak of a pandemic wave.

Critical infrastructure

All the companies that provide the critical infrastructure of modern society – energy, transport, food, water, telecoms – face similar problems if key workers fail to turn up. According to US industry sources, one electricity supplier in Texas is teaching its employees “virus avoidance techniques” in the hope that they will then “experience a lower rate of flu onset and mortality” than the general population.

The fact is that the best way for people to avoid the virus will be to stay home. But if everyone does this – or if too many people try to stockpile supplies after a crisis begins – the impact of even a relatively minor pandemic could quickly multiply.

Planners for pandemics tend to overlook the fact that modern societies are becoming ever more tightly connected, which means any disturbance can cascade rapidly through many sectors. For instance, many businesses have contingency plans that count on some people working online from home. Models show there won’t be enough bandwidth to meet demand.

And what if the power goes off? This is where complex interdependencies could prove disastrous. Refineries make diesel fuel not only for trucks but also for the trains that deliver coal to electricity generators, which now usually have only 20 days’ reserve supply, Osterholm notes. Coal-fired plants supply 30% of the UK’s electricity, 50% of the US’s and 85% of Australia’s.

Powerless

The coal mines need electricity to keep working. Pumping oil through pipelines and water through mains also requires electricity. Making electricity depends largely on coal; getting coal depends on electricity; they all need refineries and key people; the people need transport, food and clean water. If one part of the system starts to fail, the whole lot could go. Hydro and nuclear power are less vulnerable to disruptions in supply, but they still depend on highly trained staff.

With no electricity, shops will be unable to keep food refrigerated even if they get deliveries. Their tills won’t work either. Many consumers won’t be able to cook what food they do have. With no chlorine, water-borne diseases could strike just as it becomes hard to boil water. Communications could start to break down as radio and TV broadcasters, phone systems and the internet fall victim to power cuts and absent staff. This could cripple the global financial system, right down to local cash machines, and will greatly complicate attempts to maintain order and get systems up and running again.

Even if we manage to struggle through the first few weeks of a pandemic, long-term problems could build up without essential maintenance and supplies. Many of these problems could take years to work their way through the system. For instance, with no fuel and markets in disarray, how do farmers get the next harvest in and distributed?
Closing borders

As a plague takes hold, some countries may be tempted to close their borders. But quarantine is not an option any more. “These days, no country is self-sufficient for everything,” says Lay. “The worst mistake governments could make is to isolate themselves.” The port of Singapore, a crucial shipping hub, plans to close in a pandemic only as a last resort, he says. Yet action like this might not be enough to prevent international trade being paralysed as other ports close for fear of contagion or lack of workers, ships’ crews sicken, and exporters’ assembly lines grind to a halt without staff, power, transport, fuel or supplies.

Osterholm warns that most medical equipment and 85% of US pharmaceuticals are made abroad, and this is just the start. Consider food packaging. Milk might be delivered to dairies if the cows get milked and there is fuel for the trucks and power for refrigeration, but it will be of little use if milk carton factories have ground to a halt or the cartons are an ocean away.

“No one in pandemic planning thinks enough about supply chains,” says Osterholm. “They are long and thin, and they can break.” When Toronto was hit by SARS in 2003, the major surgical mask manufacturers sent everything they had, he says. “If it had gone on much longer they would have run out.”

The trend is for supply chains to get ever longer, to take advantage of economies of scale and the availability of cheap labour. Big factories produce goods more cheaply than small ones, and they can do so even more cheaply in countries where labour is cheap.
Flawed assumptions

Disaster planners usually focus on single-point events: industrial accidents, hurricanes or even a nuclear attack. But a pandemic happens everywhere at the same time, rendering many such plans useless.

The most questionable assumption concerns how deadly a pandemic could be. Many national plans are based on mortality rates from the mild 1957 and 1968 pandemics. “No government pandemic plans consider the possibility that the death rate might be higher than in 1918,” says Tim Sly of Ryerson University in Toronto, Canada.
Death rate

The 1918 scenario assumes around 3% of those who fall ill die. Of all the people known to have caught H5N1 bird flu so far, 63% have died. “It seems negligent to assume that H5N1, if it goes pandemic, will necessarily become less deadly,” says Sly. And flu is far from the only viral threat we face.
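The stakes of that assumption are easy to see with back-of-the-envelope arithmetic. A rough sketch, using assumed illustrative figures (a US-sized population and a 30% attack rate, neither of which comes from the article): deaths scale linearly with the case fatality rate, so swapping a 1918-like 3% for H5N1’s observed 63% multiplies the toll twenty-one-fold.

```python
# Illustrative arithmetic only: the population and attack rate below are
# assumptions for the sketch, not figures from the article.

def expected_deaths(population: int, attack_rate: float, cfr: float) -> float:
    """Deaths = people infected (population * attack_rate) * case fatality rate."""
    return population * attack_rate * cfr

POP = 330_000_000    # roughly a US-sized population (assumption)
ATTACK_RATE = 0.30   # 30% of people infected, a common planning figure (assumption)

print(expected_deaths(POP, ATTACK_RATE, 0.03))  # 1918-like 3% CFR -> 2,970,000 deaths
print(expected_deaths(POP, ATTACK_RATE, 0.63))  # H5N1's observed 63% -> 62,370,000 deaths
```

The point is not the exact numbers but the linearity: every other uncertainty in a pandemic plan is dwarfed by the choice of death rate.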

The ultimate question is this: what if a pandemic does have huge knock-on effects? What if many key people die, and many global balancing acts are disrupted? Could we get things up and running again? “Much would depend on the extent of the population decline,” says Tainter. “Possibilities range from little effect to a mild recession to a major depression to a collapse.”



Fall of Indus valley & Akkadian civilizations from climate change

Preface. Any civilization or region that survives energy decline must then survive climate change for many centuries. As for the wind-system shifts that collapsed the Akkadian empire, something similar is already happening:

“Greenhouse gases are increasingly disrupting the jet stream, a powerful river of winds that steers weather systems in the Northern Hemisphere. That’s causing more frequent summer droughts, floods and wildfires, a new study says. The findings suggest that summers like 2018, when the jet stream drove extreme weather on an unprecedented scale across the Northern Hemisphere, will be 50% more frequent by the end of the century if emissions of carbon dioxide and other climate pollutants from industry, agriculture and the burning of fossil fuels continue at a high rate” (Berwyn 2018).

Alice Friedemann www.energyskeptic.com author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, April 2021, Springer, “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

***

Malik N (2020) Uncovering transitions in paleoclimate time series and the climate driven demise of an ancient civilization. Chaos: An Interdisciplinary Journal of Nonlinear Science.

There are several theories about why the Indus Valley Civilization declined, including invasion by nomadic Indo-Aryans and earthquakes, but climate change appears to be the most likely cause. Shifting monsoon patterns led to the demise of the Indus Valley Civilization, a Bronze Age civilization contemporary with Mesopotamia and ancient Egypt.

Bressan, D (2019) Climate Change Caused the World’s First Empire To Collapse. Forbes

The Akkadian Empire was the first ancient empire of Mesopotamia, centered around the lost city of Akkad. Akkad is sometimes regarded as the first empire in history, as it developed a central government and elaborate bureaucracy to rule over a vast area comprising modern Iraq, Syria, parts of Iran and central Turkey. Established around 4,600 years ago, it abruptly collapsed two centuries later as settlements were suddenly abandoned. New research published in the journal Geology argues that shifting wind systems contributed to the demise of the empire.

The region of the Middle East is characterized by strong northwesterly winds known locally as shamals. This weather effect occurs one or more times a year. The resulting wind typically creates large sandstorms that impact the climate of the area. To reconstruct the temperature and rainfall patterns of the area around the ancient metropolis of Tell-Leilan, the researchers sampled 4,600- to 3,000-year-old fossil Porites corals, deposited by an ancient tsunami on the northeastern coast of Oman.

The genus Porites builds a stony skeleton from the mineral aragonite (CaCO3). By studying the chemical and isotopic signatures of the carbon and oxygen incorporated by the living coral, researchers can reconstruct sea-surface temperature conditions, and from these the precipitation and evaporation balance of a region located near the sea.
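The logic of such a reconstruction can be sketched with a toy calculation. This is an illustration of the general oxygen-isotope method, not the paper’s actual procedure: it assumes a commonly cited calibration slope of roughly -0.2‰ in coral δ18O per +1 °C for Porites, and it ignores the seawater δ18O (salinity) corrections a real study would apply.

```python
# Toy sketch of a coral delta-18O paleothermometer. The slope is an
# assumed textbook-style calibration, not a value from the study.

D18O_SLOPE = -0.2  # permil change in coral delta-18O per +1 deg C (assumption)

def sst_anomaly(d18o_sample: float, d18o_reference: float) -> float:
    """Estimate the sea-surface temperature anomaly (deg C) implied by
    the difference between a sample's delta-18O and a reference value."""
    return (d18o_sample - d18o_reference) / D18O_SLOPE

# A skeleton 0.4 permil isotopically lighter than the reference implies
# water roughly 2 deg C warmer when that layer of coral grew.
print(sst_anomaly(-4.9, -4.5))
```

Warmer water also means more evaporation, which is how the temperature record doubles as a proxy for the precipitation-evaporation balance the researchers were after.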

The fossil evidence shows a prolonged winter shamal season with frequent shamal days lasting from 4,500 to 4,100 years ago, coinciding with the collapse of the Akkadian empire 4,400 years ago. The dust storms and lack of rainfall would have caused major agricultural problems, likely leading to famine and social instability. Weakened from the inside, the Akkadian Empire became an easy target for the opportunistic tribes living nearby. Hostile invasions, helped by the shifting climate, finally brought an end to the first empire in history.

The collapse of the Akkadian Empire also coincides with the proposed onset of the Meghalayan Age, an age marked by mega-droughts on a global scale that crushed a number of civilizations worldwide.

References

Berwyn B (2018) Global Warming Is Messing with the Jet Stream. That Means More Extreme Weather. insideclimatenews.org
