Population explosion to destroy 11% of remaining ecosystems and biodiversity

Preface. According to a recent paper in Nature Sustainability (Williams et al. 2020), we are on the verge of destroying 11% of Earth's remaining ecosystems by 2050 to grow more food. We are already using 75% of Earth's land. What a species! It reminds me of the ecology phrase "Are humans smarter than yeast?"

But I have several criticisms of this research.

Proposed remedies include increasing crop yields, but we are at peak food, so that isn't going to happen. We are also at peak pesticides: we are running out of new toxic chemicals, and pests adapt within five years on average. The second idea is for Homo sapiens to stop eating meat and adopt a plant-based diet. As long as meat is available and affordable, that simply won't happen. The third way is to cut food waste and loss. Given human nature, that would require all of us to live in dire poverty, and then we'd all chop away at the remaining wild lands to grow more food. And finally, the fourth solution would be to export food to the nations that are going to destroy the most creatures and forests, which in turn would lead to expanding populations in those regions. Malthus was right that food is the ultimate limit on population. And it would be difficult to export food when there are 83 million more mouths to feed every year globally.

This research article doesn’t even mention family planning and birth control as a solution.

Or point out the huge increase in greenhouse gases that would be emitted. From "Life After Fossil Fuels: A Reality Check on Alternative Energy":

The idea that biofuels generate less CO2 than gasoline stems from the fact that biofuels are derived from plants that absorb carbon dioxide. But land typically supports plant growth regardless of whether it's being used to grow corn or not. Corn grown for ethanol for use in gasoline has a net benefit of storing around three tons of carbon dioxide per hectare. But if the land had not been used for ethanol, we'd be better off: if reforested, 7.5-12 tons of CO2 would be stored per hectare, while a corn ethanol field that was formerly a forest will emit 12 to 35 tons of CO2 per hectare a year for 30 years (NRC 2014). By contrast, a wetland stores 81-216 tons of carbon per acre (TCF 2020).

In sum, corn doesn't sequester carbon; at best it recycles it, releasing CO2 when made into ethanol and absorbing CO2 in the next corn crop. Every year, when land is tilled or cleared to grow crops, greenhouse gases are emitted from the soil. Soil is a carbon storehouse, storing 4.5 times more carbon than vegetation (Lal 2004). Agriculture emits 30% of all global greenhouse gas emissions.

Alice Friedemann, www.energyskeptic.com, Women in Ecology. Author of "Life After Fossil Fuels: A Reality Check on Alternative Energy" (2021), "When Trucks Stop Running: Energy and the Future of Transportation" (2015), "Barriers to Making Algal Biofuels," and "Crunch! Whole Grain Artisan Chips and Crackers." Podcasts: Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity

***

Williams DR, Clark M, Buchanan GM, et al (2020) Proactive conservation to prevent habitat losses to agricultural expansion. Nature Sustainability.

If current trends continue, land clearing for agriculture will eat away at the habitats of nearly 90% of land animals by 2050. Humans have already appropriated over 75% of Earth's land for farms, ranches, cities and other endeavors, leaving just 11.6 million of the planet's 57.3 million square miles of land to house the wealth of global biodiversity (Watson et al. 2016). Humans are likely to convert 1.3 million of those remaining 11.6 million square miles of ecosystems to agriculture by 2050. Williams et al. (2020) estimate that the conversion to cropland will further shrink the habitats of more than 17,000 species of land vertebrates, mainly in sub-Saharan Africa as well as South and Southeast Asia.

References

Watson JEM, Shanahan DF, Di Marco M, et al (2016) Catastrophic Declines in Wilderness Areas Undermine Global Environment Targets. Current Biology 26: 2929-2934.


Negative energy return of solar PV in Northern Europe

Preface. I once yanked this paper after huge blowback, almost as bad as from nuclear proponents, but in the years since I have found no reason to doubt Ferroni and Hopkirk's methods, boundaries, or conclusions, so I'm putting this post back. If you automatically dismiss this paper because of Raugei's rebuttal, then you need to read Ferroni et al.'s rebuttal of Raugei: "Further considerations to: Energy Return on Energy Invested (ERoEI) for photovoltaic solar systems in regions of moderate insolation."

An ERoEI of less than 1 means there is a net energy loss. In this paper Ferroni and Hopkirk found the ERoEI of solar PV in countries north of the Swiss Alps to be below unity, just 0.82 (±15%), making PV there an energy sink rather than an energy source.

The problem with ERoEI is that there is endless arguing over the boundaries. For example, Prieto and Hall's 2013 book, "Spain's Photovoltaic Revolution: The Energy Return on Investment," had energy data for over 20 activities outside the production process of the modules that are typically NOT included in ERoEI studies. But these steps are necessary, or the solar PV installation won't happen. Pedro Prieto built several large installations and was in charge of the finances, so he knew everything required: the road built to access the site, the new transmission lines, the security fence and system, and more that EROI studies typically don't include.

This paper goes beyond Prieto and Hall's boundaries, because they deliberately left out labor and other costs to mollify solar proponents. It didn't do any good; solar proponents still tried to get Springer not to publish their book. But this paper includes labor, the energy required to integrate and buffer intermittent PV electricity in the grid (i.e., storage via pumped hydro or batteries, or natural gas or coal backup plants), and the energy embodied in faulty equipment. If Prieto & Hall had included these, their paper too would have found an ERoEI below 1, as Prieto wrote here. Also important: Prieto and Hall's EROI of 2.6:1 in sunny Spain is still far less than the EROI of 10 to 14 that many scientists believe is necessary to maintain our current civilization.

Another important finding of this paper is that, based on recycling rates of PV in Germany, solar panel lifespan is closer to 17 or 18 years than 25. And that doesn't count the solar panels that are abandoned or tossed in the trash. The paper nevertheless sticks with a 25-year lifespan rather than 17-18 years; with the shorter lifespan, the ERoEI would be considerably less than the 0.82 calculated. If the real lifespan is 17 years, then the lifetime output assumed in other solar PV papers needs to be cut by about 43%, because solar PV EROI research typically assumes a lifespan of 30 years.

Other items of interest:

  1. The capacity factor during the winter period is only about 3% (or more recently in Germany during January 2015, only 2%)
  2. In the winter PV is producing at peak power for only 1.7 hours per day on average, and in the summer only 3.3 hours daily
  3. The consumption of material resources using photovoltaic technology is at least 64 times that of nuclear energy
  4. The production of PV modules requires a process consisting of approximately 200 steps [and as I have written in many of my posts on EROI – every step takes energy and lowers the energy return on investment ]
  5. Many potentially hazardous chemicals are used during the production of solar modules. To be mentioned here is, for instance, nitrogen trifluoride (NF3), (Arnold et al., 2013), a gas used for the cleaning of the remaining silicon-containing contaminants in process chambers. According to the IPCC (Intergovernmental Panel on Climate Change) this gas has a global warming potential of approximately 16,600 times that of CO2.

In order to keep civilization running, transportation comes first, because you need to deliver tens of thousands of parts to solar PV farms, wind turbine factories, and biorefineries and transport the final contraption after assembly to its living quarters.

You'll also need specialized trucks to build and maintain the millions of miles of transmission and distribution lines of the electric grid. And as I make a case for in my book "When Trucks Stop Running," trucks can't be electrified because batteries have a terribly low energy density, roughly one-fortieth that of oil kilogram for kilogram, so batteries will always be too heavy when scaled up to a truck that weighs 80,000 pounds fully loaded.

So in the end, the EROEI of solar PV doesn’t matter, and EROEI distracts from the fact that civilization will end unless a drop-in fuel for diesel engines, which can only burn diesel #2, is made in massive quantities with an EROI above 10.

And with world peak oil in 2018, time is running out.



***

Ferroni F, Hopkirk RJ (2016) Energy Return on Energy Invested (ERoEI) for photovoltaic solar systems in regions of moderate insolation. Energy Policy 94: 336–344

Many people believe renewable energy sources to be capable of substituting for fossil or nuclear energy. However, there exist very few scientifically sound studies which apply due diligence to substantiating this impression. In the present paper, the case of photovoltaic power sources in regions of moderate insolation is analyzed critically using the concept of Energy Return on Energy Invested (ERoEI, also called EROI). But the methodology for calculating the ERoEI differs greatly from author to author. The main difference for solar PV systems is between the current ERoEI and what is called the extended ERoEI (ERoEI_EXT). The current methodology recommended by the International Energy Agency is not strictly applicable for comparing photovoltaic (PV) power generation with other systems. The main reasons are that, on the one hand, solar electricity is very material-, labor- and capital-intensive, and on the other hand, solar radiation exhibits a rather low power density.

Publications in increasing numbers have started to raise doubts as to whether the commonly promoted, renewable energy sources can replace fossil fuels, providing abundant and affordable energy. Trainer (2014) stated: “Many reports have claimed to show that it is possible and up to now the academic literature has not questioned the faith. Therefore, it is not surprising that all Green agencies as well as the progressive political movements have endorsed the belief that the replacement of fossil fuels with renewables is feasible”.

However, experience from more than 20 years of real operation of renewable power plants such as photovoltaic installations and the deficient scientific quality and validity of many studies, specifically aimed at demonstrating the effective sustainability of renewable energy sources, indicate precisely the contrary.

A meta-analysis by Dale and Benson (2013) has been concerned with the global photovoltaic (PV) industry's energy balance and is aimed at discovering whether or not the global industry is a net energy producer. It contains reviews of cumulative energy demand (CED) from 28 published reports, each concerning a different PV installation using one of the currently available technologies. The majority use either single-crystal or multi-crystalline silicon solar panels, which together effectively comprise around 90% of the market. The huge scatter in the reported CEDs is itself a strong indication that the authors of the 28 publications studied were not following the same criteria: in determining the boundaries of the PV system, in setting the criteria for the calculation of the values of the embodied energy of the various materials, in the calculation of the energy invested for the necessary labor, in the calculation of the energy invested for obtaining and servicing the required capital, and in defining the conversion factors for the system's inputs and outputs consistently in terms of coherent energy and monetary units.

In fact, the CEDs show a range from a maximum of 2000 kWh_e/m² of module area down to a minimum of 300 kWh_e/m², with a median value of 585 kWh_e/m². For such cases, a meta-analysis would require an additional investigation to explain the system boundary conditions leading to the more extreme values.

Pickard (2014) expresses concerns similar to those of Trainer. He examines: “the open question of whether mankind, having run through its dowry of fossil fuels, will be able to maintain its advanced global society. Given our present knowledge base, no definite answer can be reached”. His conclusion is: “it appears that mankind may be facing an obligatory change to renewable fuel sources, without having done due diligence to learn whether, as envisioned, those renewable sources can possibly suffice”.

We wish at this point to emphasize the significance of the factor ERoEI (often abbreviated elsewhere to EROI), which lies at the heart of the present paper. Arithmetically, it is most simply expressed as a ratio – the quotient obtained by dividing the total energy returned (or energy output) from a system by the total energy invested (the energy input or the system’s CED). If the quotient is larger than one, then the system can be considered to be an energy source and if the quotient is lower than one, then the same system must be considered to be an energy sink. Clearly, the difference between the total energy returned and the total energy invested is equal in absolute units to the net energy produced during system lifetime. The words “TOTAL” and “NET” are critical here.
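The ratio and the net-energy identity described above can be sketched in a few lines. The energy-returned value below is the paper's lifetime figure of 2203 kWh_e/m²; the energy-invested value is a placeholder back-calculated from the headline ERoEI of 0.82, not a number stated directly in this excerpt:

```python
def eroei(energy_returned, energy_invested):
    """Energy Return on Energy Invested: total output / total input."""
    return energy_returned / energy_invested

def net_energy(energy_returned, energy_invested):
    """Net energy over system lifetime: total output minus total input."""
    return energy_returned - energy_invested

# Lifetime figures in kWh_e per m2 of panel; 'invested' is an approximation
# implied by the paper's ratio of 0.82, used here only for illustration.
returned, invested = 2203.0, 2687.0
ratio = eroei(returned, invested)
surplus = net_energy(returned, invested)
print(ratio < 1)   # True: a quotient below 1 marks the system as an energy sink
print(surplus < 0) # True: the lifetime net energy is negative
```

A quotient above 1 would make the same system an energy source; the whole dispute in this literature is over what counts inside "total energy invested."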

In this paper the ERoEI analysis is applied to systems including the PV installations located in regions of modest insolation in Europe, in particular in Switzerland and Germany. The energy returned and the energy invested will be treated separately. Sufficient data records are now available for the regions of interest, from which the electrical (i.e. secondary) energy returned can be derived.

The energy invested, on the other hand, is based on the actual industrial situation for the production of silicon-based PV modules, for their transport, their installation, their maintenance and their financing. Due to the elevated costs and local environmental restrictions in Europe, PV module/panel manufacture takes place primarily in China.

Let us consider first the energy returned as the specific electrical energy produced per unit of PV-panel surface (annually, in kWh_e/m²·yr, and over plant lifetime, in kWh_e/m²).

Energy returned per unit of photovoltaic panel surface

There are two ways of approaching the calculation of yearly average and lifetime levels of electrical energy production.

The first starts with the yearly total of global horizontal irradiation, used currently as an indicator for the insolation levels at a site. The average value for Switzerland of this primary (thermal) energy (Haeberlin, 2010) lies between 1000 and 1400 kWh_t/m²·yr. However, the pyranometer measurements from which these values are derived do not take into consideration the reduction of irradiation, and hence of solar cell performance, due to accumulations of dust, fungus and bird droppings in the course of real operation; due to surface damage, ageing and wear; and finally due to atmospheric phenomena like snow, frost and condensing humidity.

We therefore use the published statistical data for thermal collectors actually in operation as an indicator for the insolation. Such data are measured as a function of the surface given in square meters. The data are available in the Swiss annual energy statistics (Swiss Federal Office of Energy, 2015), prepared and published in German and French by the Swiss Federal Office of Energy (Bundesamt für Energie), and show an average value of 400 kWh_t/m²·yr (the suffix "t" means "thermal") for the last 10 years.

This is an indication of the rather low effective level of the insolation in Switzerland. It is to be noted that further to the North, in Germany, the value is about 5% lower than this.

The uptake from the incoming solar radiation is converted into electrical energy by the photovoltaic effect. The conversion process is subject to the Shockley-Queisser limit, which for silicon technology indicates a maximum theoretical energy conversion efficiency of 31%. Since the maximum measured efficiency under standard test conditions (vertical irradiation and temperature below 25 °C) is lower, at approximately 20%, the yearly energy return derived by this first method, in the form of electricity generated, amounts to only 80 kWh_e/m²·yr.
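The arithmetic of this first method is simply the field-measured insolation times the cell efficiency:

```python
# First method: field-measured insolation times measured cell efficiency.
insolation_kWht_per_m2_yr = 400.0  # Swiss thermal-collector average (paper)
cell_efficiency = 0.20             # max measured under standard test conditions
annual_yield = insolation_kWht_per_m2_yr * cell_efficiency
print(annual_yield)  # 80 kWh_e per m2 per year
```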

An alternative route to obtaining the energy return starts with the published statistical data of the PV installations themselves. The values measured are the electrical energy flow after conversion in the inverter from direct to alternating low-voltage current, together with the kWp peak rating of the installed PV system. In this case, applying the module surface per installed peak kWp, it is possible to calculate the electricity production per square meter of module. According to the official Swiss energy statistics (Swiss Federal Office of Energy, 2015), an average for the last 10 years of 106 kWh_e/m²·yr is obtained for relatively new modules.

At this stage, we need to define the operational lifetime of a PV installation. This requires an assumption. Currently, vendors of PV installations quote a lifetime of 30 years, but the warranty for the material is normally limited to 5 years, and all damaging events, such as damage due to incorrect installation or maintenance, hail, snow, storm, etc., lie outside the scope of the vendor's responsibility. Modules which have failed during transport, installation or operation are collected for disposal by the European association PV CYCLE (PV CYCLE Operational Status Report, Europe, 2015), which publishes its figures monthly. Over the whole of Europe, 13,239 tonnes of failed or exhausted modules had been collected by the end of December 2015.

We must concentrate here on the history in Germany, where the records are most complete. Table 1, below, shows the peak power of PV systems installed and the weight of the modules at a range of dates starting in 1985. It is necessary to compare these figures with the mass of module material from Germany treated so far (by the end of 2015). This was 7637 tonnes. A module of 1 m² weighs 16 kg, and 1 kWp of peak rating needs 9 m²; consequently, scaling up, 1 MWp of modules weighs approximately 144 tonnes.
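The mass-per-capacity scaling used here follows directly from the two stated figures:

```python
module_mass_kg_per_m2 = 16.0   # weight of a 1 m2 module (paper)
area_m2_per_kWp = 9.0          # module area needed per kWp of peak rating

kg_per_kWp = module_mass_kg_per_m2 * area_m2_per_kWp  # 144 kg per kWp
tonnes_per_MWp = kg_per_kWp * 1000 / 1000             # 1000 kWp/MWp, 1000 kg/t
print(tonnes_per_MWp)  # 144.0 tonnes of modules per MWp
```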

The source of the values of installed capacity has been Report IEA-PVPS T1-18: 2009 “Trends in Photovoltaic Application.” This is a survey report concerning selected IEA countries between 2002 and 2008.

If the system lifetime were 30 or 25 years, the quantity of dismantled modules (Table 1) should be practically zero, since by 1985 or 1990 (30 or 25 years ago) practically no PV systems had been installed. Yet by the end of 2015, modules corresponding to some 53 MWp, the peak power capacity installed by 1998 (17 to 18 years earlier), had already been dismantled. Therefore the average lifetime could be said to be nearer 17 than 30 years, because the quantity of treated material by the end of 2015 (7637 tonnes) corresponds to the capacity installed by 1998. In more recent years the quantity of new installations has increased very sharply, and the quality of installation design and construction may be improving, or may have improved, but an extended lifetime remains to be demonstrated.

Table 1. Installed PV module capacities and weights between 1985 and 1998 in Germany  

Years ago   End of year   Installed capacity (MWp)   Weight of installed modules (tonnes)
   30          1985              0.5                            72
   25          1990              2.0                           288
   20          1995             17.7                          2549
   19          1996             27.8                          4003
   18          1997             41.8                          6019
   17          1998             53.8                          7747
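The paper's lifetime inference from Table 1 can be replayed numerically: find the installation year whose cumulative module weight first covers the mass already dismantled. A sketch in Python (values transcribed from Table 1):

```python
# (years_ago, end_of_year, installed_MWp, cumulative_weight_tonnes), Table 1
table = [
    (30, 1985, 0.5, 72),
    (25, 1990, 2.0, 288),
    (20, 1995, 17.7, 2549),
    (19, 1996, 27.8, 4003),
    (18, 1997, 41.8, 6019),
    (17, 1998, 53.8, 7747),
]
dismantled_tonnes = 7637  # German module mass treated by end of 2015

# Walk down the table until the cumulative installed weight first reaches
# the mass already treated; that row's age is the implied average lifetime.
implied_lifetime = next(
    years for years, _, _, weight in table if weight >= dismantled_tonnes
)
print(implied_lifetime)  # 17: capacity installed by 1998 is already scrapped
```

Since 7637 tonnes falls just short of the 7747 tonnes installed by 1998, the implied average lifetime lands at 17 to 18 years rather than 25 or 30.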

There are also other, external factors which can reduce PV module lifetime, for instance the site, the weather and indeed climatic conditions. These aspects do not appear to have been treated in the scientific literature in connection with photovoltaic energy usage. The thermal cycling effects of passing clouds, the alternating night and day air temperatures varying strongly with season, the corrosion effects of humidity, and the surface loading due to snow, ice and hailstone impacts should be studied and accounted for.

Furthermore, the performance during operation of PV installations has not been problem-free. For instance, in the "Quality Monitor, 2013" of the TUV Rheinland, it is stated that 30% of the modules installed in Germany have serious deficiencies. A further review published in the January 2013 issue of the magazine PHOTON states that about 70% have minor defects. It is clear that these faults influence lifetimes, downtimes and efficiencies of PV installations. Considering that many installations are not maintenance-friendly, it can be expected that such figures will continue to be seen. For the remainder of the present study a lifetime of 25 years is assumed, realizing that this too, based on the above data, tends to be optimistic.

Experience has shown that, on average, efficiency and hence performance degradations of around 1% per year of operation must be expected (Jordan and Kurtz, 2012).

In addition, module failures have been found to cause operational downtime of some 5% per year (Jahn et al., 2005).

Please note that this does not include electric grid losses.

The total energy returned over plant lifetime is 88.1 times 25, or 2203 kWh_e/m². The analysis continues using this, the higher and more optimistic of the two values derived earlier.
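The excerpt does not spell out the derivation of the 88.1 kWh_e/m²·yr figure, but it is consistent with the new-module yield of 106 kWh_e/m²·yr reduced by the 5% downtime and by the 1%/yr degradation averaged linearly over the 25-year lifetime. A sketch of that reading (the averaging choice is my inference, not the authors' stated formula):

```python
new_module_yield = 106.0  # kWh_e/m2/yr, Swiss statistics for new modules
downtime = 0.05           # ~5%/yr operational downtime (Jahn et al., 2005)
degradation_rate = 0.01   # ~1%/yr performance loss (Jordan and Kurtz, 2012)
lifetime = 25             # years, the lifetime assumed in the paper

# Linear 1%/yr degradation averages to half the total loss over the lifetime.
avg_degradation = degradation_rate * lifetime / 2          # 12.5%
avg_yield = new_module_yield * (1 - downtime) * (1 - avg_degradation)
lifetime_return = avg_yield * lifetime
print(round(avg_yield, 1), round(lifetime_return))  # 88.1 2203
```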

The photovoltaic technology is material, labor and capital intensive.  For general information on the photovoltaic technology we refer to the “White Paper – Towards a Just and Sustainable Solar Energy Industry”– (Silicon Valley Toxics Coalition, 2009).

In Sections 3.1, 3.2 and 3.3 of this chapter we shall evaluate separately the characteristics relevant for comparing the energy invested in PV plants with that necessary for other energy sources. This is important, as it enables us to understand the relative position of PV energy in the energy mix, imposed by the limited power density of the incoming radiation, by the level of efficiency of its conversion to electricity, and by the intermittent and frequently non-deliverable nature of the power output. Since most data offered by the solar energy industry refer to the installed peak power and not to the potential deliverable electrical energy, it is necessary to convert the power-based data to electrical energy relationships.

The average weight of a photovoltaic module is 16 kg/m², and the weight of the support system, inverter and the balance of the system is at least 25 kg/m² (Myrans, 2009), whereby the weight of concrete is not included. Also, most chemicals used, such as acids/bases, etchants, elemental gases, dopants, photolithographic chemicals etc., are not included, since quantities are small. But we must add hydrochloric acid (HCl): the production of the solar-grade silicon for one square meter of panel area requires 3.5 kg of concentrated hydrochloric acid. Summarizing, we have a total weight of used materials per square meter of PV module panel area of: 16 kg (module) + 25 kg (balance of plant) + 3.5 kg (significant chemicals) = 44.5 kg/m²

Since the total lifetime energy return is 2203 kWh_e/m², we obtain a material flow of 20.2 g per kWh_e (principally steel, aluminum and copper). To compare this number with the corresponding numbers for other low-CO2-emission power sources, we use the values for a nuclear power plant adapted from the Environmental Product Declaration of Electricity from Sizewell B Nuclear Power Station (EDF Energy, 2009) for a modern power plant rated at 1500 MWe and with a design lifetime of 60 years.

The resulting material flow (principally steel) amounts to 0.31 g per kWh_e for a load factor of at least 85%. Thus the consumption of material resources using photovoltaic technology is at least 64 times that of nuclear energy. This will also have a great influence on the energy invested during transport, which is not included in the usual type of energy balance analysis.
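The material-intensity comparison is a direct division of the two flows derived above:

```python
materials_kg_per_m2 = 44.5         # module + balance of plant + HCl (paper)
lifetime_return_kWh_per_m2 = 2203  # lifetime electricity per m2 (paper)

pv_g_per_kWh = materials_kg_per_m2 * 1000 / lifetime_return_kWh_per_m2
nuclear_g_per_kWh = 0.31           # Sizewell B figure adapted in the paper

print(round(pv_g_per_kWh, 1))                    # 20.2 g per kWh_e
print(round(pv_g_per_kWh / nuclear_g_per_kWh))   # ~65, i.e. "at least 64 times"
```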

The data used in this section have been published by the solar or nuclear industries and may be biased. Important however, is that the differences in the energy balances be known in their orders of magnitude rather than in great detail.

Methodology for the calculation of the energy invested

The suppliers involved in the renewable energies industry advertise their capability to create many new jobs. The European Photovoltaic Industry Association (EPIA Job Creation, 2012) gives the value of 10 direct and indirect jobs needed for installation, operation and decommissioning per MWp installed. This refers to the peak power of a PV system.

Scaled by the ratio of capacity factors between baseload plants and PV (85%/9% ≈ 9.44), job creation with respect to the energy actually produced corresponds to 94.4 jobs per MW. Comparison with an estimate for job creation by nuclear power plants is significant: our study finds 13 jobs created per MW installed for the site construction, operation, maintenance and decommissioning of a nuclear power plant. The human resources involved in the photovoltaic industry are thus revealed to be rather high; PV technology is more than 7 times (94.4/13 ≈ 7.3) more labor-intensive than other energy sources.
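The job-intensity figures can be checked numerically; the capacity-factor scaling of 9.44 matches the factor the paper uses for capital below:

```python
pv_jobs_per_MWp = 10.0        # EPIA: install, operate, decommission
pv_capacity_factor = 0.09     # PV, regions of moderate insolation
baseline_capacity_factor = 0.85  # fossil/nuclear plants

cf_correction = baseline_capacity_factor / pv_capacity_factor  # ~9.44
pv_jobs_energy_basis = pv_jobs_per_MWp * cf_correction         # ~94.4

nuclear_jobs_per_MW = 13.0
print(round(pv_jobs_energy_basis / nuclear_jobs_per_MW, 2))    # ~7.26
```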

Use of capital

The actual capital cost for a sample group of fully installed PV units, 2/3 roof-mounted and 1/3 free-field-mounted, in Switzerland lies at or above 1000 CHF/m², with large cost variations of up to 30% due principally to the uncertainty in the price developments of PV modules. The NREL (National Renewable Energy Laboratory of the U.S. DOE) reports capital costs for fully installed PV units at the lower end of the price range given above. The 1000 CHF/m² cost, translated into specific cost per installed peak power, is 6000 CHF/kWp, a result of the personal experience of the authors. Now we can compensate for the differing capacity factors of PV (9%) and fossil or nuclear (85%) plants by multiplying by 9.44. This enables a comparison between PV and a nuclear power plant, which is itself much more capital-intensive than other, fossil-fueled plants. The overnight cost of a large, advanced nuclear power plant is currently estimated at 5500 CHF/kW (International Energy Agency, Projected Costs of Generating Electricity, 2015 Edition, Table 8.2).

The capital resource taken by the PV technology is therefore around 10 times that of a nuclear power plant and nearly 45 times that of fossil-fueled power plants.
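The "around 10 times" conclusion follows from the same capacity-factor adjustment applied to the capital cost:

```python
pv_cost_per_kWp = 6000.0     # CHF, fully installed (paper, Swiss sample)
cf_correction = 0.85 / 0.09  # ~9.44, capacity-factor adjustment as above

# CHF per kW of baseload-equivalent capacity
pv_cost_equivalent = pv_cost_per_kWp * cf_correction

nuclear_overnight_cost = 5500.0  # CHF/kW (IEA 2015, Table 8.2)
print(round(pv_cost_equivalent / nuclear_overnight_cost, 1))  # ~10.3
```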

The purpose of this section is to define and present the calculations for the total energy invested. For this, it is important first to define the system under investigation, its boundaries and what flows across them – i.e. materials, money and energy. There are several stages in the life cycle of an energy system.

These include the production of the necessary materials, the manufacture of the components, their transportation, installation, commissioning, operation and maintenance, decommissioning, financing, administration, their integration in the electricity supply system duly revised according to the needs of the users, and finally the essential accompanying research and development work. It is important with respect to this latter point that the quality of the energy produced be considered.

Photovoltaic plants are material, labor and capital intensive, but provide only intermittent and irregular energy production.

These characteristics have a significant and clear effect on the total energy, which must be invested in each system, whereby a system must be understood to consist of a segment of the production and manufacturing industries and then of a unit-sized PV plant and the contribution demanded by it from the revised electricity supply infrastructure.

There are many definitions of the energy invested for the ERoEI. The article “Year in review-EROI or energy return on (energy) invested” (Murphy and Hall, 2010) outlines some definitions for the EI such as: a) The energy required to collect the energy to be returned, or b) The energy required to collect, deliver, and use that energy.

Most ERoEI analyses are not very clear in defining the system boundary for the energy invested. Here we consider, on the one hand, the methodology used by the IEA, which in principle uses definition a) for the calculation of the ERoEI (which we shall refer to as ERoEI_IEA), and on the other hand our own methodology, using definition b) for the calculation of the extended ERoEI, referred to by Murphy and Hall as ERoEI_EXT.

The reader will note that the costs for the use of materials, labor and capital are all expressed in terms of equivalent electrical energy. Per unit of electricity produced, PV technologies consume 64 times more material resources, 7 times more human resources and 10 times more capital than nuclear technology.

This is a clear indication of the extreme inefficiency of the PV technologies in regions of moderate insolation in helping to achieve the objective of providing an efficient electricity supply system, which consumes a minimum of resources.

We have still not considered the fact that in the winter period PV produces at its peak power for the equivalent of only 1.7 hours per day on average (and in the summer period still only 3.3 hours daily), nor that, due to the intermittent nature of the electricity produced, a parallel electricity supply infrastructure also has to be provided.

The Report IEA-PVPS T12-03: 2011 (IEA-PVPS T12, 2011) has been prepared as a document of the International Energy Agency (IEA) by a group of experts involved in the photovoltaic industry and is more suitable for a comparison of the different PV technologies than for the determination of the efficiency and sustainability of the PV system as an energy source. For the determination of the ERoEI, the guideline has the following deficiencies:

  1. The energy flux across the system boundaries and invested for the labor is not included.
  2. The energy flux across the system boundaries and invested for the capital is not included.
  3.  The energy invested for integration of the PV-generated electricity into a complex and flexible electricity supply and distribution system is not included (energy production does not follow the needs of the customer).  
  4. The IEA guidelines specify the use of "primary energy equivalent" as a basis. However, since the energy returned is measured as secondary electrical energy, the energy carrier itself; since some 64% to 67% of the energy invested for the production of solar-silicon and PV modules is also in the form of electricity (Weissbach et al., 2013); and since, moreover, the rules for the conversion from carrier or secondary energy back to primary energy are not scientifically perfect (Giampietro and Sorman, 2013), it is both easier and more appropriate to express the energy invested as electrical energy. The direct contribution of fossil fuel, for instance in providing energy for process heating, also has to be converted into secondary energy. The conversion from a fossil fuel's internal chemical energy to electricity is achieved in modern power plants with an efficiency of 38% according to the BP statistical protocol (BP Statistical Review of World Energy, June 2015). In the present paper, in order to avoid conversion errors, we shall continue to use electrical (i.e. secondary) energy in kWh_e/m² as our basic energy unit.
  5. The recommended plant lifetime of 30 years, based on the experiences to date, must be regarded as unrealistic.  
  6. The energy returned can and should be based on actual experimental data measured in the field. Use of this procedure will yield values in general much lower than the electricity production expected by investors and politicians.

Estimated ERoEI values for a variety of cases have been calculated by several authors following the IEA guidelines: 5.8 was given, for example, by Brandt et al. (2013), and 5.9 by Raugei et al. (2012). Weissbach et al. (2013), in Table 3 of their paper, indicated an EROI of 4.95 expressed in coherent units. The tendency when using the IEA methodology is to adopt ideal parameter values, which in turn yield optimistic EROI levels.

In the authors’ opinion the IEA guideline is not suitable for evaluating the ERoEI of PV systems against non-PV systems, because, as stated above, PV technology is extremely material-, labor- and capital-intensive, and the capacity factor during the winter period is only about 3% (or, more recently in Germany during January 2015, only 2%). The methodology is only suitable for comparing the various PV technologies with each other.

Methodology based on “extended ERoEI”

Historically, the methodology for the “extended ERoEI” derives from the work of the ecologist Howard T. Odum, who introduced a generalized approach to analysing energy systems, the concept of the “net energy” of renewable and non-renewable energy sources, and the concept of “emergy” as an expression of all the energy and material resources used in the work processes that generate a product or service (now termed embodied energy). In his book “Environmental Accounting: Emergy and Environmental Decision Making” (Odum, 1995), he showed that, once the labour associated with construction, operation and decommissioning is considered, no “net energy” is obtained from a PV installation. Charles Hall and his team developed the concept of ERoEI further in Hall et al. (2009), Murphy and Hall (2010) and Murphy and Hall (2011). They have suggested that a technology with an ERoEIEXT of less than 5 be considered unsustainable.

In the extended ERoEI, the system’s boundaries are defined so as to encompass all energy-relevant activities related to the ability to deliver a reliable, flexible and available product to the consumer on demand.

The first set of factors concerns “upstream” activities, such as, for example, the energy it took to construct the plant for the purification of silicon to solar-grade silicon. According to the Hemlock Semiconductor Group (HSC), the investment required for the construction of such a plant with a yearly production of 21,000 tonnes was approximately US $4 billion. Given the high flow of materials necessary to produce 1 kWhe from photovoltaic installations in comparison with other types of energy sources, such factors should, strictly speaking, be taken into consideration. Only vague data are available at present, so they have not been included in the present study. This (optimistically) reduces the amount of energy input during the “upstream” phase. The remaining factors for the ERoEIEXT are the “downstream” energy fluxes and losses attributable to PV.

The book “Spain’s Photovoltaic Revolution – The Energy Return on Investment” (Prieto and Hall, 2013) indicates more than 20 activities or tasks, outside the production process of the modules, which should be included in defining the system boundary and the energy or equivalent-energy fluxes that cross it. The activities are based on the comprehensive experience gained by Pedro A. Prieto during the construction of several photovoltaic projects in Spain. The estimated ERoEI including labor and financing, as given in Section 7 of Prieto and Hall’s book and expressed in coherent units, is 2.45. According to our calculations, their values for the specific embodied energy of the modules, inverters and Balance of Plant are somewhat too low. Moreover, PV installations in Spain typically achieve 1.9 times the annual productive operational hours of PV installations in Switzerland or Germany, so it can be deduced that PV technology is not sustainable in these regions with their more modest levels of insolation.

Apart from the work of Prieto and Hall, only a few other studies have corrected any of the weak points of the IEA methodology. One of these was by Weissbach et al. (2013), in which an energy storage capacity of 10 full-load days was estimated to be necessary for a system’s service target to be met. Adding this storage capacity to a system, according to Table 3 in Weissbach et al. (2013), results in an additional 10 years of equivalent energy payback time and a dramatic EROI reduction to 2.3, in coherent units. Such a result cannot be ignored and is a sound justification for working with the ERoEIEXT.

In this section the calculations made for the energy invested are reported. In addition to the system boundary as recommended in the IEA-guidelines, the following additional factors have been considered: 1. The integration of intermittent, PV-generated electricity into the grid, 2. The labour and the capital requirements.

The treatments and detail used for the estimations presented here correspond closely to those described by Prieto and Hall (2013). “Upstream” activities, such as the energy invested in building manufacturing plants, have not been included in either case. The resulting reduction of the invested energy again represents an optimistic assumption.

Cumulative energy demand (CED) or energy invested in the PV-based system

As shown in the review by Dale and Benson (2013), the 28 cases reported exhibit considerable scattering of CED values. Our analysis of these studies indicates that those originally done in Japan, India, China and Malaysia all show a higher CED with limited scattering. While a large part of the solar module production industry was located in Europe before 2010, including companies such as Q-Cells, SolarWorld, BP Solar, Siemens, Bosch and REC, today almost all European companies have either been closed, suffered huge losses or undergone bankruptcy. Leadership of the solar industry has been taken over by Chinese companies, which now represent over 70% of current world production. The main reason for this shift is the high cost of electricity in Europe, which matters greatly for the energy-intensive solar industry.

The production of PV modules requires a process consisting of approximately 200 steps [I’ve reformatted the paragraph to emphasize the main steps below. Every step takes ENERGY]:

  • starting from crystalline silica mining,
  • upgrading silica sand to metallurgical grade silicon,
  • upgrading metallurgical grade silicon to solar grade silicon.
  • The pulverized metallurgical grade is combined with hydrochloric acid to produce trichlorosilane.
  • This is subjected to a multistage distillation process, referred to commonly as the Siemens process, to obtain polysilicon.
  • Solar cells are produced by transforming polysilicon into cylindrical ingots of monocrystalline silicon,
  • which are then shaped and sliced into thin wafers.
  • Next a textured pattern is imparted to the surface of the wafer in order to maximize the absorption of light.
  • The wafer is then doped at high temperature with phosphorus oxychloride,
  • provided with an anti-reflective coating of silicon nitride
  • and finally printed with a silver paste (lead should be avoided) to facilitate the transport of electrical energy away from the cell.
  • A typical PV module consists of several cells wired together and encapsulated in a protective material, commonly made of ethylene vinyl acetate.
  • To provide structural integrity the encapsulated cells are mounted on a substrate frequently made of polyvinyl fluoride.
  • A transparent cover, commonly hardened glass, further protects these components.
  • The entire module is held together in an aluminum frame.

The cumulative energy demand (CED) values of some of the oriental-based cases reviewed by Dale and Benson (2013) have been analyzed and the results transformed into our coherent units, kWhe per square meter, in Table 2.

Table 2. CED for production of PV-systems

Reference of the study      kWhe/m²   Notes
Nawaz and Tiwari (2006)     1380      Roof-installed
Nawaz and Tiwari (2006)     1710      Free-field
Lu and Yang (2010)          1237      Roof-mounted
Kannan et al. (2006)        1224      Roof-mounted
Kato et al. (1998)          1291      Only modules, no Balance of System
Ferroni (2014)              1287      2/3 roof, 1/3 free-field
Lundin (2013)               1317      No support included

Many potentially hazardous chemicals are used during the production of solar modules. One example is nitrogen trifluoride (NF3) (Arnold et al., 2013), a gas used to clean residual silicon-containing contaminants from process chambers. According to the IPCC (Intergovernmental Panel on Climate Change), this gas has a global warming potential approximately 16,600 times that of CO2.

Two other similarly undesirable greenhouse gases that appear are hexafluoroethane (C2F6) and sulphur hexafluoride (SF6). For further information on the chemicals involved in the solar industry, see the White Paper “Toward a Just and Sustainable Solar Energy Industry” by the Silicon Valley Toxics Coalition (Silicon Valley Toxics Coalition, 2009).

It is stressed that, in addition to the flow of materials necessary for production and installation, estimated at 44.5 kg per square meter of panel, one must also account for the energy used to treat and transport all used chemicals and the sludge waste to a final repository; these are estimated at a further 20 kg per square meter.

Therefore, the energy required to transport the total quantity of material, 64.5 kg per square meter of panel, cannot be neglected.

For the evaluation made in the present paper of a hypothetical situation in Switzerland, a production volume was assumed in which 2/3 of the PV installations were destined for roof mounting and the remaining 1/3 for free-field placement. The CED value is approximately 1300 kWhe/m², consistent with the other examples in the table.
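As a quick sanity check, using only the values quoted in Table 2, the mean CED of the listed studies can be computed and compared with the ~1300 kWhe/m² assumed here:

```python
# Mean cumulative energy demand of the studies collected in Table 2.
# All values are taken directly from the text, in kWhe per m2 of panel.
ced_values = [1380, 1710, 1237, 1224, 1291, 1287, 1317]
mean_ced = sum(ced_values) / len(ced_values)
print(round(mean_ced))  # 1349, the same order as the ~1300 kWhe/m2 assumed
```

The 1710 kWhe/m² free-field case pulls the mean slightly above the assumed value; the roof-mounted cases cluster tightly around 1220-1320 kWhe/m².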

Integration of the intermittent PV-electricity into the existing grid

The intermittent generation of energy by photovoltaic and wind sources implies the need for a mixture of backup power plants, mainly fossil-fueled, and for large-scale energy storage systems.

Many concepts for energy storage are available, such as hydroelectric pumped storage, pressurized-air storage, hydrogen production by electrolysis and storage, or batteries. Here we consider only the pumped-storage option, since this system has the lowest energy losses: 25% for pumping the water up and then letting it down through the turbine. Our estimation further assumes that 25% of the electricity generated by the PV system will be used to pump water into an upper storage lake, to be discharged when consumers need electricity. In addition, losses of an estimated 2.1% due to conversion from low to high voltage for the pumps must be included. Furthermore, to guarantee a reliable electricity system, back-up power, preferably from gas-turbine generating plants, and a smart grid will have to be devised and constructed. This too implies energy invested, and energy needed for the operation of the smart grid: a smart grid cannot save energy, but consumes energy to fulfil its task. The existing grid itself, of course, needs adaptation to the different electricity supply.

In Table 3, we list the calculated energy losses and extra energy to be invested in order that the customers are served according to their requirements in an integrated power supply system.

Table 3. Principal energy losses and extra energy investments due to plant and grid integration

kWhe/m²   Losses or energy invested for additional infrastructure
149       Losses due to the pump-storage hydroelectric system: 2203 (el. production) × 25% × 27.1% (efficiency losses)
100       Construction of pump-storage systems (1 m³ concrete → 300 kWhe)
25        Construction of back-up gas-turbine power plant
25        Grid adaptation (1 kg copper → 11 kWhe)
50        Operation of smart-grid infrastructure
-------   -------------------------------------------------------
349       TOTAL
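The storage-loss entry in Table 3 can be reproduced from the figures given in the text; this is only an arithmetic check of the quoted numbers, not an independent estimate:

```python
# 25% of the lifetime PV output (2203 kWhe/m2) is assumed to pass through
# pumped storage; 27.1% of that routed energy is lost (25% round-trip
# pumping loss plus 2.1% low-to-high voltage conversion loss).
pv_output = 2203.0            # kWhe/m2 over the plant lifetime
routed_share = 0.25           # fraction of output sent through storage
loss_fraction = 0.25 + 0.021  # combined storage and conversion losses

storage_loss = pv_output * routed_share * loss_fraction
print(round(storage_loss))    # 149 kWhe/m2, matching Table 3

# Adding the infrastructure items reproduces the Table 3 total.
total = round(storage_loss) + 100 + 25 + 25 + 50
print(total)                  # 349 kWhe/m2
```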

Energy intensity in an advanced economy

It is a widely held assumption that energy consumption is related to economic activity and plays a key role in economic growth. The ratio of energy to GDP (Gross Domestic Product) is termed the “energy intensity”: the energy required to produce a unit of income or GDP. This provides the connection between monetary units and energy units. The publication “The underestimated contribution of energy to economic growth” (Ayres et al., 2013) underlines that “the rather standard assumption that economic growth is independent of energy availability must be discarded absolutely”, and that neither labor nor capital can function in an advanced economy without inputs of energy to the different sectors such as materials, manufacturing and services.

This interdependence is seen clearly in the work of Gaël Giraud, Research Director at the CNRS (Centre national de la recherche scientifique) in Paris. The presentation by Giraud and Kahraman (2014) summarizes the literature on the subject, showing that primary energy consumption is indeed a key factor of growth in OECD countries.

The comprehensive study “Energy and Growth: the Stylized Facts” (Csereklyei et al., 2016) analyses the energy-to-GDP data of 99 countries from 1971 to 2010. The main findings are that over the last 40 years there has been a stable relationship between per-capita energy use and per-capita income. Furthermore, energy intensity has declined globally as the world economy has grown, and the figures for wealthy nations have converged (see Figure 18 of the study) towards 7.4 MJ/USD, which converts to 2.05 kWhth of primary energy per dollar. This value has remained stable in recent years thanks to global technological progress in using energy more efficiently. It is, of course, related to the overall make-up of the economy, which includes energy-intensive sectors as well as less energy-intensive ones, such as service industries.
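The unit conversion behind the 7.4 MJ/USD figure is a one-liner (1 kWh = 3.6 MJ); note that the text truncates rather than rounds:

```python
# Converting the converged global energy intensity to the paper's units.
mj_per_usd = 7.4                   # primary energy intensity, MJ per USD
kwh_th_per_usd = mj_per_usd / 3.6  # 1 kWh = 3.6 MJ exactly
print(round(kwh_th_per_usd, 2))    # 2.06 (quoted as 2.05 kWhth/USD in the text)
```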

No statistical data are available for the energy intensity of the installation, operation, repair, servicing and decommissioning of PV systems. Since the manufacturing sector, a sector similar in content to the diverse activities necessary for a PV system, exhibits an energy intensity higher than the overall value, we assume as a conservative value for the energy intensity of labor the typical overall value of an advanced economy. For the energy intensity of capital generation, it is reasonable again to assume the overall value of an advanced economy. Capital is the result of energy invested in previous economic activities for housing, transport, food, goods, services and other areas. Therefore, knowing the amount of money required and the energy intensity, it is possible to calculate the energy use.

For this analysis, since we are using the higher Swiss costs of labor and goods, we also determine the Swiss secondary energy intensity separately, to avoid the statistical weak points explained by Giampietro and Sorman (2013). The internal national secondary energy consumption for the year 2014 can be extracted from the Swiss annual energy statistics (Swiss Federal Office of Energy, 2015). It is the sum of the primary energy of imported fossil fuels, converted to secondary energy assuming a 38% conversion efficiency according to the BP statistical protocol, and the electricity produced inland, mainly by hydroelectric or nuclear power, the figures for which are already available in terms of secondary energy.

Furthermore, it is necessary to consider the nature of the Swiss economy, which has gone through a process of de-industrialization and now has practically no energy-intensive industries, but huge imports of energy embodied in the materials used in products made inland, such as metals, plastics, paper and construction materials. To estimate this value, using Fig. 18 of the (German-language) study “Climate Change in Switzerland” by the Swiss Federal Office of the Environment (2013), and assuming that the net energy imported is proportional to the net CO2 emissions (i.e. CO2 imports minus exports), it follows that the internal Swiss energy consumption must be multiplied by a factor of 2.17 to determine the total energy consumption. It is important to note that national energy statistics do not pay sufficient attention to the European de-industrialization process, giving the impression that we are saving energy. In reality we are relocating energy-intensive industries to regions offering low-priced energy and labor. This, for instance, has been the case for the energy-intensive production of solar-grade silicon.

Using the Swiss GDP, a secondary energy intensity of 0.43 kWhe/CHF is obtained. Note that this value is lower than the primary global value of 2.05 kWhth/USD, which, converted to secondary energy at an efficiency of 38%, would give 0.78 kWhe/CHF. Low energy intensity indicates high energy efficiency, that is to say, the generation of more units of GDP per unit of energy consumed. The higher efficiency in Switzerland is also due to the fact that energy consumption there correlates strongly with the proportion of energy used in the form of electricity. Use is made here of the BP statistical protocol for both the USA and Switzerland; the comparison shows that the proportion of electrical energy consumed is 48% in Switzerland as against 40% in the USA.
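The comparison of the two intensities uses the same 38% fossil-to-electricity conversion efficiency applied throughout the paper:

```python
# Global primary energy intensity converted to secondary (electrical)
# energy at the 38% power-plant efficiency of the BP statistical protocol.
global_primary = 2.05    # kWhth per USD (converged value for wealthy nations)
efficiency = 0.38        # fossil-to-electricity conversion efficiency
global_secondary = global_primary * efficiency
print(round(global_secondary, 2))  # 0.78, versus 0.43 kWhe/CHF for Switzerland
```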

Energy invested for the labor

An additional factor neglected in the majority of ERoEI studies is the human labor associated with the installation, operation, decommissioning and final disposal of the hazardous materials used in producing the PV plant and the modules themselves, in which materials such as Cd, Ga and Pb are present. As shown in Section 3.2, the labor involved is proportionately so much higher for PV systems than for other types of energy generation systems that it must be taken into account. Equally, the human resources involved in back-up power plants and power storage systems must be considered; this, again optimistically, has not been included in the present study, due to the high degree of uncertainty in the chosen development plans. Based on the authors’ experience, typical local labor costs per square meter of PV module are: project management (10% of capital cost), installation (506 CHF per m²), operation for 25 years including insurance (1.67% of capital cost per year), and decommissioning (30% of installation). The total labor costs amount to 1175 CHF/m².

To derive the energy involved from these cost figures, we use the energy intensity for Switzerland calculated in Section 5.3.1, which is 0.43 kWhe/CHF. The amount of energy invested for the human resources is therefore an optimistic 505 kWhe/m². Faulty modules and inverters appearing during the lifetime of the PV installation must be considered a loss of embodied energy. According to the experience in Spain (Prieto and Hall, 2013), about 2% of the modules were returned or scrapped during installation. In Switzerland many modules have been damaged by the weight of snow or the intensity of hail impacts.

In addition, inverters too are subject to failure, and during the plant’s operational lifetime an inverter often has to be replaced. The embodied energy calculated for the faulty modules and inverters amounts to 90 kWhe/m².
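The labor figure follows directly from the quoted cost total and the Swiss energy intensity derived above; a one-line check:

```python
# Labor cost converted to invested energy via the Swiss secondary
# energy intensity of 0.43 kWhe/CHF derived in the text.
labor_cost = 1175.0   # CHF per m2 over the plant lifetime
intensity = 0.43      # kWhe per CHF
print(round(labor_cost * intensity))  # 505 kWhe/m2
```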

Energy invested for the capital

We have seen that solar electricity is capital-intensive compared to other energy sources. Capital is the result of labor previously performed and therefore of energy previously consumed.

We assume an average capital requirement of 1100 CHF/m² for a mix of PV plants consisting of two-thirds roof installations and one-third free-field installations, including project management activities. For simplicity, we neglect the capital necessary for the construction of the back-up power sources and the power storage system, as well as the capital for the land needed to install all the equipment. We apply the method of constant annuity to calculate the cost of servicing the necessary capital of 1100 CHF/m², assuming an amortization period of 25 years and an average interest rate of 5%. The annuity is 7.1%; subtracting the 4% amortization of the energy invested over the 25 years leaves 3.1%. The total capital necessary to service the capital invested for 25 years is 872 CHF/m² or 436 kWhe/m².
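The 7.1% annuity can be verified with the standard constant-annuity formula; this checks only the factor itself, not the resulting CHF/m² figures:

```python
# Constant-annuity factor for a 25-year amortization at 5% interest:
#   a = r / (1 - (1 + r)^-n)
rate, years = 0.05, 25
annuity = rate / (1 - (1 + rate) ** -years)
print(round(annuity * 100, 1))  # 7.1 (% of invested capital per year)
```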

Table 4. Summary of the components of the total energy investments

kWhe/m²   Principal energy investments
1300      Cumulative energy demand (CED) for the production of the PV system
349       Integration of the intermittent PV electricity into the grid and buffering
505       Energy invested for the labor
90        Energy embodied in faulty equipment
420       Energy invested for the capital
-------   ---------------------------------------------------------------
2664      Total

The renewable-energy supply will have to pay the same taxes, duties and levies as the existing electric power supply system. In Switzerland these amount to 0.0424 CHF/kWhe, with the addition of Value Added Tax for the maintenance work. The total amounts to 127 CHF/m² or 54 kWhe/m².

We see now that the total energy required for obtaining and servicing the capital necessary for a PV system is the sum of 366 and 54, i.e. 420 kWhe/m².

Total energy invested

Table 4 summarizes the calculated essential energy investments for a PV system that can guarantee a reliable electricity supply to customers. The energy contributions of associated activities, such as research and development for the PV industry, have not been included. Also excluded are the additional personnel employed within the utility companies and the state-owned renewable-energy agency, the energy required for the final disposal of the hazardous conditioned material, and the energy loss due to the dumping of excess energy. Such energy dumping is necessary to stabilize the grid during summer weekends, when, for instance, excess energy is dissipated by heating railway tracks or by disconnecting hydraulic turbines that use river water.

Conclusion and policy implications

The calculated value for ERoEI is dimensionless, being the energy return (2203 kWhe/m²) divided by the energy invested (2664 kWhe/m²): a ratio of 0.82.
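The headline ratio follows from the Table 4 components and the lifetime energy return; the quotient rounds to 0.83 (the text truncates it to 0.82):

```python
# ERoEI = lifetime electricity returned / total energy invested (Table 4).
invested = 1300 + 349 + 505 + 90 + 420   # kWhe/m2, Table 4 components
returned = 2203.0                        # kWhe/m2, lifetime energy return
print(invested)                          # 2664
print(round(returned / invested, 2))     # 0.83
```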

It is estimated that these numbers could have an error of ±15%, so that, despite a string of optimistic choices resulting in low values of energy investment, the ERoEI is significantly below 1. In other words, an electrical supply system based on today’s PV technologies cannot be termed an energy source, but is rather a non-sustainable energy sink, or a non-sustainable NET ENERGY LOSS.

The methodology recommended by the expert working group of the IEA appears to yield EROI levels between 5 and 6, but these are not really meaningful for determining the efficiency, sustainability and affordability of an energy source. The main conclusions to be drawn are:

The result of rigorously calculating the “extended ERoEI” for regions with moderate insolation levels, as experienced in Switzerland and Germany, proves very revealing. It indicates that, at least at today’s state of development, PV technology offers not an energy source but a NET ENERGY LOSS, since its ERoEIEXT is not only very far from the minimum value of 5 for sustainability suggested by Murphy and Hall (2011), but is less than 1.

Our advanced societies can only continue to develop if a surplus of energy is available, but it has become clear that photovoltaic energy at least will not help in any way to replace fossil fuel. On the contrary, we find ourselves with increased dependence on fossil energy. Even if we were to choose, or be forced, to live in a simpler, less rapidly expanding economic environment, photovoltaic technology would not be a wise choice for delivering affordable, environmentally favorable and reliable electricity in regions of low, or even moderate, insolation, since it involves an extremely high expenditure of material, human and capital resources.

References

Arnold, T., Harth, C.M., Mühle, J., Manning, A.J., Salameh, P.K., Kim, J., Ivy, D.J., Steele, L.P., Petrenko, V.V., Severinghaus, J.P., Baggenstos, D., Weiss, R. F., 2013. Nitrogen trifluoride global emissions estimated from updated atmospheric measurements. In: Proceedings of the National Academy of Sciences 110, no. 6 (February 5, 2013): pp. 2029–2034.

Ayres, R.U., van den Bergh, J.C.J.M., Lindenberger, D., Warr, B., 2013. The underestimated contribution of energy to economic growth. Struct. Change Econ. Dyn. 27 (2013), 79–88.

BP Statistical Review of World Energy, June 2015.

Brandt, A.R., Dale, M., Barnhart, C.J., 2013. Calculating systems-scale energy efficiency and net energy return: a bottom-up matrix-based approach. Energy 62, 235–247.

Csereklyei, Z., Rubio Varas, Md.M., Stern, D.I., 2016. Energy and Economic Growth: the Stylized Facts. The Energy Journal. International Association for Energy Economics, Vol. 0 (2).

 Dale, M., Benson, S.M., 2013. Energy balance of the global photovoltaic (PV) industry – is the PV industry a net electricity producer? Environ. Sci. Technol. 2013 (47), 3482–3489.

EDF Energy, 2009. Environmental Product Declaration of electricity from Sizewell B nuclear power station, A study for EDF Energy undertaken by AEA.

EPIA – Job creation, 2012. European Photovoltaic Industry Association – EPIA FACT SHEET – September.

Ferroni, F., 2014. Photovoltaic installations in Switzerland are energy sinks (in German – Photovoltaik-Stromanlagen in der Schweiz sind Energievernichter), Presentation to the Technische Gesellschaft Zürich (TGZ – Zürich Technical Society), 3rd March 2014. http://bit.ly/1QP6aK8.

Giampietro, M., Sorman, A.H., 2013. Are energy statistics useful for making energy scenarios? Energy 37 (2012) 5-1.

Giraud, G., Kahraman, Z., 2014. How Dependent is Output Growth from Primary Energy? Presentation given at the Paris School of Economics, 28th March 2014. www.parisschoolofeconomics.eu/IMG/pdf/13juin-pse-ggiraud-presentation-1.pdf

Haeberlin, H., 2010. Photovoltaik-Strom aus Sonnenlicht für Verbundnetz und Inselanlagen, electrosuisse Verlag, 710 pp.

Hall, C.A.S., Balogh, S., Murphy, D.J.R., 2009. What is the minimum EROI that a sustainable society must have? Energies 2009 (2), 25–47. http://dx.doi.org/10. 3390/en20100025.

IEA: 2015. Projected Costs of Generating Electricity, Edition 2015.

IEA-PVPS T1-18: 2009. Trends in Photovoltaic Application.

IEA-PVPS T12-03: 2011. Methodology Guidelines on the Life Cycle Assessment of Photovoltaic Electricity.

Jahn, U., Nordmann, T., Clavadetscher, L., 2005. Performance of Grid- Connected PV Systems: Overview of PVPS Task 2 Results. IEA PVPS 2 Meeting, Florida, USA.

Jordan, D.C., Kurtz, S.R., 2012. Photovoltaic Degradation Rates – An Analytical Review. NREL/JA-5200-51664.

Kannan, R., Leong, K.C., Osman, R., Ho, H.K., Tso, C.P., 2006. Life cycle assessment study of solar PV systems: an example of a 2,7 kWp distributed solar PV system in Singapore. Sol. Energy 80 (2006), 555–563.

Kato, K., Murata, A., Sakuta, K., 1998. Energy pay-back time and life-cycle CO2 emission of residential PV power system with silicon PV module. Prog. Photovolt. Res. Appl. 6 (105–115), 1998.

Lu, I., Yang, H.X., 2010. Environmental payback time analysis of a roof-mounted building- integrated photovoltaic (BIPV) system in Hong Kong. Appl. Energy 87 (2010), 3625–3631.

Lundin, J., 2013. EROI of Crystalline Silicon Photovoltaics by Johan Lundin, Student Thesis, Master Programme in Energy Systems Engineering, University of Uppsala, 51 pp.

Murphy, D.J.R., Hall, C.A.S., 2010. Year in review-EROI or energy return on (energy) invested. Ann. N. Y. Acad. Sci. Spec. Issue Ecol. Econ. Rev. 1185, 102–118.

Murphy, D.J.R., Hall, C.A.S., 2011. Energy return on investment, peak oil and the end of economic growth. Ann. N.Y. Acad. Sci. Spec. Issue Ecol. Econ. 1219, 52–72.

Myrans, K., 2009. Comparative Energy and Carbon Assessment of Three Green Technologies for a Toronto Roof. University of Toronto, Department of Geography and Center for Environment.

Nawaz, I., Tiwari, G.N., 2006. Embodied energy analysis of photovoltaic (PV) system based on macro- and micro-level. Energy Policy 34 (17), 3144–3152.

Odum, H.T., 1995. Environmental Accounting: Emergy and Environmental Decision Making. John Wiley & Sons, Inc.

Pickard, W.F., 2014. Energy return on energy invested (EROI): a quintessential but possibly inadequate metric for sustainability in a solar-powered world. Proc. IEEE 102 (8), 1118–1122.

Prieto, P.A., Hall, C.A.S., 2013. Spain’s Photovoltaic Revolution – The Energy Return on Investment. By Pedro A. Prieto and Charles A.S. Hall, Springer.

PV CYCLE – Operational Status Report, Europe – 12/2015 (www.pvcycle.org).

Raugei, M., Fullana-i-Palmer, P., Fthenakis, V., 2012. The energy return on energy investment (EROI) of photovoltaic: methodology and comparisons with fossil fuel cycles. Energy Policy 45, 576–582.

Silicon Valley Toxics Coalition – White Paper – Toward a Just and Sustainable Solar Energy Industry – January 14, 2009 (www.svtc.org).

Swiss Federal Office of Energy, 2015. (Bundesamt für Energie-BFE). Schweizerische Eidgenossenschaft, Schweizerische Gesamtenergiestatistik, 2015 (Complete Swiss Energy Statistics, 2015).

Swiss Federal Office of the Environment – Climate change in Switzerland – 2013. Schweizerische Eidgenossenschaft, Bundesamt für Umwelt, BAFU – Klimaänderung in der Schweiz – 2013).

Trainer, T., 2014. Some inconvenient theses. Energy Policy 64 (2014), 168–174.

Weissbach, D., Ruprecht, G., Huke, A., Czerski, K., Gottlieb, S., Hussein, A., 2013. Energy intensities, EROIs (energy returned on invested), and energy payback times of electricity generating power plants. Energy 52, 210–221.

Several of the rebuttal’s authors, and others in a similarly pro-renewables group, told me that I would lose all respect in the energy community for having written about a paper that was, in their view, clearly flawed and incorrect, as argued in this rebuttal written by Marco Raugei, Vasilis Fthenakis, Ugo Bardi, Charles Barnhart, Michael Carbajales-Dale, and about 20 other authors: “Energy Return on Energy Invested (ERoEI) for photovoltaic solar systems in regions of moderate insolation: A comprehensive response.”

 

Posted in Photovoltaic Solar, Solar EROI | Comments Off on Negative energy return of solar PV in Northern Europe

Government plans to reduce dependency on fossil fuels won’t work

Preface. Yikes!  These government plans from 2009 won’t help the energy crisis much!  I do like these ideas though:

  • Get Yucca Mountain ready to take nuclear waste. We need to sequester nuclear wastes while there is still energy to do so, rather than expose future generations to radioactive materials for hundreds of thousands of years.
  • IV. Reducing demand for oil: improving efficiency A. Aggressively implement fuel-economy standards established in the Energy Independence and Security Act of 2007 (EISA).  

But electrify transportation and enhance the national electric system? I explain why that won’t work in “When Trucks Stop Running: Energy and the Future of Transportation”. Worst of all is enhancing the biofuels system, because the energy return is negative, and biofuels destroy ecosystems and topsoil, drain aquifers, poison the land, air, and water with pesticides, and much more, covered at great length in “Life After Fossil Fuels: A Reality Check on Alternative Energy”.

Many items in “Increasing energy access: expanding domestic supply” won’t work. Oil shale and coal-to-liquids are far from commercial, have a negative energy return, and don’t substitute for diesel. Nor will methane hydrates ever be commercial (see the post “Why we aren’t mining methane hydrates now – or perhaps ever”).

Nor will Arctic and Alaskan oil, coal, and natural gas be exploited, because icebergs will knock out offshore drills, and permafrost will buckle roads and topple drills, bridges, and buildings (see the arctic oil posts here).

The section “V. Managing risks and global issues” scares me. Sounds like there will be more wars in the Middle East over oil.

And nothing mentioned at all about how to keep trucks running.  If there are plans to cope with the coming energy crisis, perhaps they are classified at Homeland Security or some other agency.

Alice Friedemann, www.energyskeptic.com. Author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”; “When Trucks Stop Running: Energy and the Future of Transportation”; “Barriers to Making Algal Biofuels”; and “Crunch! Whole Grain Artisan Chips and Crackers”. Women in ecology. Podcasts: WGBH, Jore, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 & 278, Peak Prosperity. Index of best energyskeptic posts.

***

Senate Hearing 111-2 (2009) Testimony on Current Energy Security Challenges. Hearing before the committee on energy and natural resources. U.S. Senate. 

The Oil Dependence Crisis

Oil is the lifeblood of the U.S. economy, providing nearly 40 percent of our primary energy needs, more than any other fuel.  Within the transportation sector, petroleum fuels account for 93 percent of delivered energy, and there are currently no substitutes available at scale.  This severe oil dependence ties the fate of our economy to the global oil market—and jeopardizes both our national security and economic prosperity as a result.

Outline of the energy security leadership council’s national strategy for energy security: Recommendations to the nation on reducing U.S. Oil Dependence

I. Diversify energy supplies for the transportation sector

A. Electrification of the transportation sector

1. Establish development of advanced battery technology as a top research priority and spend at least $500 million per year toward their development.

2. Replace existing vehicle tax credits with new tax credits of up to $8,000 per vehicle for the first two million domestically produced highly efficient vehicles.

3. Federal government should help create a market and exercise leadership by purchasing highly efficient vehicles.

4. Establish production tax incentives to aid in the retooling of U.S. vehicles manufacturing facilities and to create and maintain a domestic capacity to manufacture advanced batteries.

5. To encourage business participation, extend and modify federal subsidies for hybrid medium-duty vehicles (Classes 3–6) and heavy-duty vehicles (Classes 7–8) to 2012 and remove the cap on the number of eligible vehicles.

6. Grants to municipalities and tax credits to commercial real estate developers to encourage the installation of public recharging stations.

B. Enhancing the nation’s electrical system

a. Increasing Nuclear Power Generation and Addressing Waste Storage

1. Continue licensing process for Yucca Mountain while initiating a program of interim storage as an alternative to Yucca Mountain.

2. Extend the deadline and increase the funding levels for loan guarantees for new nuclear generation.

b. Deploying Advanced Coal Technology

1. Significantly increase investment in advanced coal R&D including development of carbon capture and storage technology and policy frame-work.

2. Increase funding for loan guarantees for advanced coal generation.

c. Promoting Renewable Energy

1. Reform and extend the Production Tax Credit (PTC) and the Investment Tax Credit (ITC) through December 31, 2013, while providing certain guidance for the transition to a fundamentally improved, next-generation incentives program.

d. Development of a Robust Transmission Grid to Move Power to Where It is Needed

1. Extend backup federal eminent domain for transmission lines to help expand the use of renewable power and to enhance reliability by moving power from surplus to deficit regions.

2. Require the Federal Energy Regulatory Commission (FERC) to approve enhanced rates of return on investments to modernize electrical grid system.

e. Transforming Consumer Demand for Electricity

1. Direct states to implement time-of-day pricing for electricity, and grant FERC backstop authority to implement time-of-day pricing if states will not.

2. Require utilities to install smart meters for all new installations after a specified date.

C. Reforming the biofuels program

a. Shift focus of biofuels deployment by concentrating R&D and commercialization efforts on next-generation biofuels, fostering competition among fuels derived from differing feedstocks.

b. Require increasing production of Flexible Fuel Vehicles (FFVs).

c. Accelerate Department of Energy and Environmental Protection Agency testing and performance validation of unmodified gasoline engines running on intermediate-levels, first- and second generation biofuels blends.

d. Replace the 45-cents-per-gallon ethanol tax credit with a ‘smart subsidy’.

e. Eliminate tariffs on imported ethanol over a period of three years.

II. Increasing energy access: expanding domestic supply

A. Target federal policy and resources to encourage the expanded use of carbon dioxide for enhanced oil recovery.

B. Support federal investment in technologies that can limit the adverse environmental impacts of oil shale and coal-to-liquids (CTL) production to ensure long-term viability before undertaking public investment in production.

C. Increase access to U.S. oil and natural gas reserves on the Outer Continental Shelf (OCS) with sharply increased and expanded environmental protections.

D. Increase access to U.S. resources in the Arctic and Alaska.

E. Federal support for construction of a natural gas pipeline from Alaska to the continental United States.

F. Expand federal R&D initiatives studying the opportunities to exploit methane hydrates, including the initiation of small-scale production tests.

III. Accelerating the development and deployment of new energy-related technology

A. Annual public investment in energy R&D should be increased by roughly an order of magnitude to approximately $30 billion.

B. Reform the existing institutions and processes governing federal R&D spending.

C. Develop a more effective federal R&D investment strategy.

D. Establish new institutions to provide funding for early-stage R&D and for later-stage deployment and commercialization.

E. Invest in the next-generation workforce for the energy industry.

IV. Reducing demand for oil: improving efficiency

A. Aggressively implement fuel-economy standards established in the Energy Independence and Security Act of 2007 (EISA).

B. Increase allowable weight to 97,000 lbs. gross vehicle weight for tractor-trailer trucks that have a supplementary sixth axle installed but which replicate current stopping distances and do not fundamentally alter current truck architecture. In addition, government should study further the safety impacts of significantly longer and heavier tractor-trailers used in conjunction with slower speed limits.

C. Require the Federal Aviation Administration (FAA) to implement and fund improvements to commercial air-traffic routing in order to increase safety and decrease fuel consumption.

V. Managing risks and global issues

A. Direct the Department of Energy to develop workable guidelines for the use of the Strategic Petroleum Reserve and evaluate its proper size based on those criteria.

B. Work with foreign governments to eliminate fuel subsidies.

C. Promote a robust China-U.S. partnership on carbon capture and storage that focuses on private-sector collaboration and sharing of best practices.

D. Establish a National Energy Council at the White House to coordinate the development of the nation’s energy policy and to advise the president with regard to energy policy.

E. The National Intelligence Council should complete a comprehensive National Intelligence Estimate on energy security that assesses the most vulnerable aspects of the infrastructure critical to delivering global energy supplies and the future stability of major energy suppliers.

F. Working with the Department of State, the Department of Justice should bolster programs designed to train national police and security forces to defend and secure energy infrastructure in key countries.

G. As called for in its recent Maritime Strategy, the U.S. Navy should leverage the maritime forces of other countries to provide protection against terrorists and pirates for oil tankers in vulnerable regions.

H. The Department of Defense should engage NATO and other allies in focused negotiations with the intention of creating an architecture that improves the security of key strategic terrain.

I. The intelligence community should bolster collection and analysis capabilities on potential strategic conflicts that could disrupt key energy supplies. The State Department should improve its capacity to intervene diplomatically in conflicts that impact U.S. energy security.

J. The intelligence community should expand the collection of intelligence on national oil companies and their energy reserves in order to allow policy-makers to make better decisions about future alliances and the nation’s strategic posture on energy suppliers.

The Energy Security Leadership Council (ESLC) brings together some of America’s most prominent business and military leaders to support a comprehensive, long-term policy to reduce U.S. oil dependence and improve energy security.  

Corporate members include:

  • Frederick W. Smith, Chairman, President, and CEO of FedEx Corporation
  • David Steiner, CEO of Waste Management
  • Jeffrey Sprecher, Chairman of the New York Stock Exchange & Intercontinental Exchange
  • Herbert Kelleher, Chairman and founder of Southwest Airlines
  • Eric Schwartz, Goldman Sachs Asset Management

Military members include:

  • John Lehman, former Secretary of the U.S. Navy
  • General James Conway (Ret.), former Marine Corps Commandant,
  • General P.X. Kelley (Ret.), former Marine Corps Commandant and member of the Joint Chiefs of Staff.
  • General John Handy, U.S. Air Force (Ret.)
  • General John Keane, U.S. Army (Ret.)
Posted in Government on what to do, U.S. Congress Energy Dependence | 3 Comments

Why we aren’t mining methane hydrates now — or perhaps ever

Preface. Methane hydrates are far from being commercial, and probably always will be. Scientists and companies have been trying to exploit them since the first energy crisis in 1973, to no avail. Nor are they likely to trigger a runaway greenhouse, as I show in “Methane Apocalypse. Not Likely“.

Methane hydrate extraction in the news:

NREL 2021, Japan’s phase 4 methane hydrate research: There is still a long way to go to achieve the project’s goal of introducing marine methane hydrates into the Japanese domestic resource portfolio. The last two phases had operational problems with sand control, flow assurance, and achieving a production rate high enough to be commercially viable. Nonetheless, phase 4 will run from 2019 to 2022.

***

Gas-hydrate technologies remain at an early stage of development, despite the maturity of many of the individual exploration technologies being used. While some technologies may be widely deployed in the conventional oil and gas industry, most are not mature in the context of gas hydrates. For example, while core recovery is common practice in the oil and gas industry, coring technologies had to be adapted to enable gas-hydrate coring, and none of the pressure corers have yet reached a commercial scale. [Figure notes and maturity scale: 2 = addressing issues relating to operations, e.g. number and type of wells, and size of drilling vessels; 3 = Controlled-Source Electromagnetic Methods; 4 = lab work / theoretical research; 5 = bench-scale; 6 = pilot-scale; 7 = proved commercial-scale process, with optimization work in progress; 8 = commercial-scale, widely deployed, with limited optimization potential. Source: SBC Energy Institute analysis]

Methane hydrates are crystalline structures that are mostly water: four methane molecules per 23 water molecules. The methane is trapped within this matrix of ice, so hydrates rarely amass in commercial quantities, and the majority are too spread out to harvest for energy.

Their formation depends on low temperatures, high pressures, and water.  They’re found 2,000 to 8,000 feet deep in the ocean, often in thin and discontinuous layers, or below 600 to 3,000 foot layers of permafrost in high latitudes.

Big oil companies have known about them since 1970 yet so far haven’t found a way to extract them.

The United States Geological Survey estimates the total energy content of natural gas in methane hydrates is greater than all of the known oil, coal, and gas deposits in the world.

But that’s a wild-ass guess, since we can’t measure this resource, for reasons such as coring equipment that can’t handle the expansion of the gas hydrate as it’s brought to the surface. And even if you work around this problem, there’s tremendous variability within the same area (Riedel). Since less than 1% of it is potentially extractable, there’s no point in throwing around large numbers and getting the energy illiterate excited.

According to petroleum engineer Jean Laherrère, there is no way methane hydrates dwarf fossil fuels: “Most hydrates are located in the first 600 meters of recent oceanic sediments at an average water depth of 500 meters or more, which represents just a few million years. Fossil fuel sediments were formed over a billion years and are much thicker — typically over 6,000 meters” (Laherrère).

So here it is 2014, with no commercially produced gas hydrate, despite 30 years of research at hundreds of universities, government agencies, and energy companies in the United States, Japan, Brazil, Canada, Germany, India, Norway, South Korea, China, and Russia.

Japan alone has spent about $700 million on methane-hydrate R&D over the past decade (Mann) and gotten $16,000 worth of natural gas out of it (Nelder). I think this reflects the likely EROI of methane hydrates: .0000229 (16,000/700,000,000; and yes, I know money and EROI aren’t the same). But EROI doesn’t capture the insanity as understandably as money does. Basically, for every $43,750 you spend, you get $1 back ($700,000,000 / $16,000).
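The back-of-envelope arithmetic can be checked directly. This is a dollar ratio, not a true EROI, and uses only the $700 million and $16,000 figures quoted above:

```python
# Sanity check of the money-in vs. money-out figures quoted above
# (Mann, Nelder). A dollar ratio is a proxy, not a true EROI.

spent = 700_000_000      # ~$700 million of Japanese R&D over a decade
recovered = 16_000       # ~$16,000 worth of natural gas produced

return_ratio = recovered / spent       # dollars back per dollar spent
cost_per_dollar = spent / recovered    # dollars spent per dollar back

print(f"return ratio: {return_ratio:.7f}")                 # 0.0000229
print(f"spent per $1 recovered: ${cost_per_dollar:,.0f}")  # $43,750
```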

Of course, it’s all theoretical.  Maybe you get $500 or $5,000 back.  Who knows? There is no commercial production now or in the foreseeable future. And we’ve tried all kinds of thermal techniques to unleash it — hot brine injection, steam injection, cyclic steam, fire flooding, and electromagnetic heating — all of them too inefficient and expensive to scale up to a commercial project (DOE 2009).

Heating them requires just 7% of the energy content released by burning them; the problem is distributing the heat in the gas hydrate layer, because “the normal pore space within the sediments is plugged up by the gas hydrates, so simple injection of a hot fluid into the hydrate layer probably will not work”. Another method would be to coax the water toward a substance more attractive than the methane, an “anti-freeze”. This has been tried with methanol to no effect (Deffeyes).

Even if we found a way to get some of them, they’re so thin and dispersed that the most we could hope for is about 100 Tcfg (trillion cubic feet of gas), about 1% of the present gas URR, despite the fact that the total resources are orders of magnitude higher (Boswell).

1) Gas hydrates are cotton candy crystals mainly found in dispersed, deeply buried impermeable marine shale.

Figure 1. Methane hydrate crystals form from dodecahedral clusters of water that create a cage around a single methane molecule. Source: Ken Jordan. 2005. Water Water Everywhere. Projects in Scientific Computing.

In Figure 2 below, methane hydrates (yellow) in porous sands are the only resource with any chance of being exploited — a very small fraction of the overall methane hydrate resource.  Most methane hydrates are locked up in marine shales (gray) where they’ll probably remain forever because:

  • The average concentrations are extremely low, about .9 to 1.5% by volume, even in the less than 1% of highly porous sediments where there’s any chance of extracting them
  • Marine shales are impermeable, very deep, widely dispersed, with very low concentrations of methane hydrate  (Moridis et al., 2008).
  • Clathrates are far from the oil and gas infrastructure needed to store and deliver the methane
  • The infrastructure, technology, and equipment to extract gas hydrates hasn’t been invented yet
  • The energy required to get the methane hydrate out has negative Energy Returned on Energy Invested (EROEI).  It takes too much energy to heat them in order to release them plus break the bonds between the hydrates’ water molecules.
  • Inhibitor injection requires significant quantities of fairly expensive chemicals

Figure 2. Methane hydrate resource pyramid. Source: Boswell, Ray, et al. 14 Sep 2010. Current perspectives on gas hydrate resources. Energy Environ. Sci., 2011, 4, 1206–1215.

2) Methane Hydrates are Explosive Cotton Candy

As temperature rises or pressure drops when these ice cubes are brought to the surface, the gas hydrates expand to 164 times their original size. Most are the size of sugar grains mixed in with other sediments.
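The 164-fold figure can be roughly reproduced from the 4 methane : 23 water composition given earlier. The density, molar masses, and molar gas volume below are textbook approximations (my assumptions, not figures from this article), and the calculation assumes fully occupied structure I hydrate and ideal-gas methane at STP:

```python
# Rough reproduction of the ~164x expansion from the 4 CH4 : 23 H2O
# composition (i.e. CH4·5.75H2O). Density and molar volume are
# textbook approximations, not figures from the article.

M_CH4, M_H2O = 16.04, 18.02          # g/mol
molar_mass = M_CH4 + 5.75 * M_H2O    # g/mol per methane "unit" of hydrate
density = 0.91                        # g/cm3, typical methane hydrate
V_molar = 22_400                      # cm3/mol of ideal gas at STP

methane_mol_per_cm3 = density / molar_mass
expansion = methane_mol_per_cm3 * V_molar
print(f"~{expansion:.0f}x")  # ~170x at 100% cage occupancy; real cages
                             # are only ~96% full, which gives ~164x
```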

Methane hydrates bubbling up to the surface

3) How do you store and get these giant gas bubbles to market?

If you could keep the gas hydrates small, crystalline, and pacified, there would still be that niggling worry you might offend them into their 164-fold fury.  So it’s best to let that happen — but now where are you going to store all this gas and how will you deliver it?

You’d have to use oil and gas infrastructure in the Arctic and other questionable places where ownership isn’t settled, potentially creating geopolitical tensions.

And imagine how Exxon will feel about that!  Their oil rigs are already dodging icebergs.   Oil companies avoid drilling through methane hydrates because they can fracture and disrupt bottom sediments, wrecking the wellbore, pipelines, rig supports, and potentially take out a billion dollar offshore platform as well as other oil and gas production equipment and undersea communication cables.

4) The Mining of Gas Hydrates can cause Landslides…

Eastman states that normally, the pressure of hundreds of meters of water above keeps the frozen methane stable. But heat flowing from oil drilling and pipelines has the potential to slowly destabilize it, with possibly disastrous results: melting hydrate might trigger underwater landslides as it decomposes and the substrate becomes lubricated…

5) Which can Trigger Tsunamis

Landslides can create tsunamis that might result in fatalities, long-term health effects, and destruction of property and infrastructure.

6) Methane is a greenhouse gas 23 times more potent than carbon dioxide

Climate scientists like James E. Hansen worry that methane hydrates in permafrost may be released due to global warming, unleashing powerful feedback loops that could cause uncontrollable runaway climate change.

Scientists believe that sudden, massive releases of methane hydrates may have led to mass extinction events in the past.

Considering that the amount of methane onshore and offshore could be 3,000 times as much as in the atmosphere, it ought to be studied a bit more before proceeding, don’t you think? (Whiteman 2013, Kvenvolden 1999).

7) Ecological Destruction

They’re dispersed across vast areas at considerable depths, which makes them very ecologically destructive to mine, since you have to sift through millions of cubic yards of silt to get a few chunks of hydrate.

8) Toxic Waste

The current state of technology uses existing oil drilling techniques, which generate wastes including produced formation water (PFW), drilling fluid chemicals, oil and water-based drilling muds and cuttings, crude oil from extraction processes and fuel/diesel from ships and equipment (Holdway 2002).

9) EROI

There are only two studies on EROI, both by Callarotti, and he looks only at the heat energy used to free the clathrates. It’s published in a journal called Sustainability, which would better be named Gullibility when it comes to energy, a topic outside its specialty. He comes up with an EROI of 4/3 to 5/3 using just that one parameter. Callarotti knows this is a dishonest figure because he says, “If one were to consider the energy required for the construction of the heaters, the pipes, and the pipe and the installation process, the total EROI would be even less.”

Is he kidding?  What about the energy used to mine and crush the ore to get the metals to build the pipelines, drilling, dredging and sifting through the sediment equipment, methane hydrate processing plant, the vessel and the diesel burned to get to the remote (arctic) location, and so on.

10) Technical challenges (House 2009)

Gas hydrate wells will be more complex than most conventional and unconventional gas wells due to a number of technical challenges, including:

  1. Maintaining commercial gas flows with high water production rates
  2. Operating with low temperatures and low pressures in the well-bore
  3. Controlling formation sand production into the well-bore
  4. Ensuring well structural integrity with reservoir subsidence

Technologies exist to address all of these issues, but will add to development costs. Gas hydrate development also has one distinct challenge compared to other unconventional resources, and that is the high cost of transportation to market.

Most gas fields require some compression to maximize reserve recovery, but this typically occurs later in the life of the field after production starts to fall below the plateau rate. For a gas hydrate development, the required pressure to cause dissociation will require the use of inlet compression throughout the life of the field including the plateau production time. This will require a larger capital investment for compression at the front end of the project, and will also result in higher operating costs over the life of the project.

Water production is not uncommon in gas wells; however, water rates are typically less than, say, 10 bbls/MMscf (barrels of water per million standard cubic feet of gas) for water of condensation and/or free water production. Wells that produce excessive amounts of water are typically worked over to eliminate water production, or shut in as non-economic. The water production from a gas hydrate reservoir could be highly variable; water:gas ratios in excess of 1,000 bbls/MMscf are possible. This water must be removed from the reservoir and wellbore to continue the dissociation process. On this basis, a gas hydrate development will require artificial lift, such as electric submersible pumps or gas lift, which will also increase capital and operating costs over the life of the field. But it is important to highlight that the water in gas hydrate contains no salts or impurities; it is fresh water and may be a valuable coproduced product of a gas hydrate development.
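To put those water:gas ratios in perspective, here is a sketch of the daily water-handling burden. The 5 MMscf/day well rate is a hypothetical assumption of mine; the ratios are the ones quoted above:

```python
# Scale of the water-handling burden implied by the water:gas ratios
# quoted above. The 5 MMscf/day well rate is a hypothetical assumption.

GAS_RATE = 5            # MMscf of gas per day (assumed)
BBL_TO_LITERS = 159     # one barrel is ~159 liters

for label, wgr in [("conventional gas well", 10),    # bbl/MMscf, typical upper bound
                   ("gas hydrate well", 1_000)]:     # bbl/MMscf, possible for hydrates
    bbl_per_day = GAS_RATE * wgr
    print(f"{label}: {bbl_per_day:,} bbl/day "
          f"(~{bbl_per_day * BBL_TO_LITERS:,} L) of water to lift")
```

At a 1,000 bbl/MMscf ratio, such a well must lift a hundred times more water than a typical conventional well for the same gas, which is why artificial lift dominates the cost picture.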

The combination of low operating pressures and high water rates will require larger tubing and flowlines for a gas hydrate development, in order to minimize friction losses and maximize production. Additional water handling facilities and water disposal will also be required. Larger inhibitor volume (such as glycol) will be required to prevent freezing and hydrate formation in tubing and flow-lines. Other items such as sand control, reservoir subsidence, down-hole chemical injection, possible requirements for near well-bore thermal stimulation, etc., will also require additional capital and operating costs for gas hydrate developments compared to conventional gas developments.

Onshore gas hydrates in North America are located on the North Slope of Alaska and on the Mackenzie Delta in Canada. These resources, along with significant volumes of already discovered conventional gas, are stranded without a pipeline to market. In order to compete for pipeline capacity, the economics of onshore gas hydrate developments must be attractive at prevailing gas prices.

By all estimates, the majority of gas hydrates considered for production are located in sandstone reservoirs in deepwater environments. Deepwater drilling technology and experience continues to evolve, and the worldwide deepwater fleet continues to expand. However the deepwater environment is still a very high cost and very high risk area of operation. Offshore gas hydrate developments must have strong economic drivers in order to compete with other deepwater exploration and development opportunities. Adding on the risk of gas hydrates is yet another level of risk to add onto the existing high-risk drilling in deep water.

Significant scientific and exploration work must be completed before gas hydrates can be considered as a viable source of natural gas. Critical among these tasks remains the validation reservoir and well performance through extended field testing that demonstrates the ability to produce gas hydrates at commercial rates with current technology.

So far the small-scale experiments have not been able to bring gas hydrates as far as the surface of the ocean.

On the basis of the studies done to date, gas hydrate developments will have capital and operating costs significantly higher than other unconventional or conventional developments due to well productivity, low operating pressures and temperatures, and high water production rates. Surface facilities for gas hydrate developments will also be higher due to the requirements for larger surface flowlines and inlet facilities (required because of low pressures and water production rates) and the requirement for inlet compression into the processing plant.

Methane hydrate production rates peak years into production, while conventional natural gas wells peak immediately. Unconventional hydrocarbons are so called because they are found in formations other than the typical sandstone or carbonate reservoirs, i.e. extremely low-permeability (“tight”) reservoirs, shale, or coal beds; in those formations the hydrocarbons are in their normal fluid condition and can typically flow without undergoing a fundamental change (except, of course, for bitumen). The types of reservoirs targeted for gas hydrate testing (and eventual development) are relatively high-permeability conventional sandstone reservoirs, but the methane is locked in a solid gas hydrate crystal, so it is actually the gas that is unconventional, not the reservoir. Based on simulation studies, the maximum gas production rate therefore occurs not on day one, as with conventional gas reservoirs, but some time, typically years, into the future.

All gas reservoirs, conventional or unconventional, are capable of their maximum rate on day one of operation. This is because the reservoir pressure is at its maximum (average reservoir pressure declines with production for most reservoirs), the gas that initially flows into the well is in the near-wellbore area, and of course the gas is continuous throughout the reservoir. As gas production continues, the gas that flows into the wellbore travels through the reservoir rock from greater and greater distances away. Flowing gas through the reservoir rock results in additional pressure loss, and the production rate begins to decline. Some gas wells in high-permeability conventional reservoirs can flow at a more or less constant rate, or steady-state condition, for some time, but eventually the production rate will decline. Unconventional gas reservoir production rates typically decline quite rapidly, and may never actually reach any sort of steady-state production, although the rate of decline will drop and the wells may produce for many years.

A gas hydrate reservoir, by contrast, has no free gas at the start of production; it is all locked up in the hydrate crystals in the pore space of the reservoir rock. The hydrate must first be dissociated, and only then can the water and free gas flow to the well. Because water and gas flow simultaneously (termed multi-phase flow), the pressure loss through the reservoir will be higher than if gas alone were flowing. Gas and water saturations throughout the dissociated region will change with time, and gravity will affect the gas and water phases, so the flow mechanism will be quite complex.

Conclusion

You don’t have to be a scientist to see how difficult the problem is: 

  • Somehow you’ve got to capture the energy in thousands of square miles of exploding grains of sugar that erupt into a gas 164 times their size. 
  • There are huge deposits of natural gas that are easier to get at and far more valuable that aren’t being exploited because they’re stranded (not near pipeline infrastructure), so who’s going to invest in a resource of much lower quality at the bottom of the pyramid with such dismal prospects?
  • We can’t even drill for oil in most of the Arctic (Patzek) which is where a lot of the methane hydrates are, and that infrastructure has to be there to even think of trying to get at the methane hydrates.
  • Most of the hydrates are in a thin film on the deep ocean floor.  Are you going to build a thousand square mile blanket to trap the bubbles like a school of fish? Or use expensive fracking & coalbed methane techniques?
  • Permafrost gas hydrate is so shallow there’s not enough pressure to get it to flow fast enough to be worth mining.
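
The “164 times their size” figure in the first bullet can be sanity-checked from hydrate chemistry. The ideal stoichiometry CH4·5.75H2O and a hydrate density near 910 kg/m³ used below are textbook approximations, not numbers from this post:

```python
# Rough check of the gas expansion factor for methane hydrate.
# Stoichiometry CH4.5.75H2O and density ~910 kg/m^3 are approximations.

M_CH4, M_H2O = 16.04, 18.02        # molar masses, g/mol
hydrate_density = 910.0            # kg per m^3 of solid hydrate
molar_volume_stp = 0.0224          # m^3 per mol of ideal gas at 0 C, 1 atm

mass_fraction_ch4 = M_CH4 / (M_CH4 + 5.75 * M_H2O)
mol_ch4_per_m3 = hydrate_density * mass_fraction_ch4 * 1000 / M_CH4
expansion = mol_ch4_per_m3 * molar_volume_stp

print(f"~{expansion:.0f} m^3 of gas per m^3 of hydrate")
```

That lands in the commonly quoted 160–180 range, so the 164× figure is plausible.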

Gas hydrates are stranded in distant regions and deep oceans. It would be far cheaper to go after large natural gas reservoirs than attempt to go after mostly small deposits of methane hydrates we don’t even know how to extract yet.

Despite all the happy talk that says we can meet these challenges by 2025 if only there were more funding, we’re out of time.

It’s highly unlikely that methane hydrates will ever fuel the diesel engines that do the actual work of civilization, all of them screaming “Feed Me!” as oil declines.

[Image: methane hydrate “Little Shop of Horrors” boats]

References

Arango, S. O. May 7, 2013. Canada drops out of race to tap methane hydrates: Funding ended for research into how to exploit world’s largest fossil energy resource. CBC News.

Benton, M. J. 2003. When Life Nearly Died: The Greatest Mass Extinction of All Time. Thames & Hudson.

BBC. 5 December 2002. The Day The Earth Nearly Died. Permian-Triassic Extinction Event

Boswell, R. 2009. Is gas hydrate energy within reach? Science.

Callarotti, R. C. 2011. Energy Return on Energy Invested (EROI) for the Electrical Heating of Methane Hydrate Reservoirs. Sustainability 3.

Collett T. S. April 19-23, 2002. “Detailed analysis of gas hydrate induced drilling and production hazards,” Proceedings of the Fourth International Conference on Gas Hydrates, Yokohama, Japan.

Carrington, Damian. 23 Nov 1999. Fossil fuel revolution begins.

Deffeyes, K.S. 2005. Beyond Oil. The View from Hubbert’s Peak. Hill and Wang.

DOE. 2009. International Energy Outlook 2009. U.S. Department of Energy.

Eastman, Q. 2004. Energy Saviour? Or Impending Disaster? Science Notes.

Holdway, D. A. 2002. The acute and chronic effects of wastes associated with offshore oil and gas production on temperate and tropical marine ecological processes. Marine Pollution Bulletin, Vol 44: 185-203.

House. 2009. Unconventional Fuels Part II: The Promise of Methane Hydrates. U.S. House of Representatives.

Jayasinghe, A.G. 2007. Gas hydrate dissociation under undrained unloading conditions. P. 61 in Submarine Mass Movements and Their Consequences. Vol. IGCP-511. UNESCO.

Kaneshiro-Pineiro, M. et al. Dec 4, 2009. Report on the Science, Issues, Policy, and Law of Gas Hydrates as an Alternative Energy Source. East Carolina University. Coastal Resources Management Program.

Kvenvolden, K. A. 1999. Potential effects of gas hydrate on human welfare. Proceedings of the National Academy of Sciences USA 96: 3420–3426.

Laherrère, Jean. July 17, 2009. Update on US Gulf of Mexico: Methane Hydrates. The Oil Drum: Europe.

Mann, C. C. May 2013. What If We Never Run Out of Oil? New technology and a little-known energy source suggest that fossil fuels may not be finite. This would be a miracle—and a nightmare. The Atlantic.

Moridis, George. 2006. “Geomechanical implications of thermal stresses on hydrate-bearing sediments,” Fire in the Ice, Methane Hydrate R&D Program Newsletter.

Moridis, G. J., et al. 2008. Toward production from gas hydrates: Current status, assessment of resources, and simulation-based evaluation of technology and potential. Paper SPE 114163. Presented at the SPE Unconventional Reservoirs Conference, Keystone, Colo., February 10–12, 2008.

Nelder, C. 2013. Are Methane Hydrates Really Going to Change Geopolitics? The Atlantic.

Office of Naval Research. 5 Nov 2002. Fiery Ice From The Sea: A New World Energy Source?

NAS. 2009. America’s Energy Future: Technology and Transformation. National Academy of Sciences, National Research Council, National Academy of Engineering.

Patzek, Tad. 29 Dec 2012. Oil in the Arctic. LifeItself blog.

Riedel, M., and the Expedition 311 Scientists. 2006. Proceedings of the IODP, 311. Washington, DC: Integrated Ocean Drilling Program Management International, Inc.

Whiteman, G. et al. 25 July 2013. Vast costs of Arctic change. Nature, 499, 401-3.

Posted in Alternative Energy, Global Warming, Methane Hydrates | Comments Off on Why we aren’t mining methane hydrates now — or perhaps ever

Blackouts, firestorms, and energy use

Preface. Blackouts are more and more likely in the future from fires, hurricanes, natural gas shortages and more. Below is an account from a friend who had to evacuate due to a wildfire.

Blackouts in the news:

2024: Half a million Victorian customers without power as Loy Yang A coal-fired station shuts down and storms damage infrastructure.

2021: Texas Was Seconds Away From Going Dark for Months.

Alice Friedemann, www.energyskeptic.com, Women in Ecology, author of 2021 “Life After Fossil Fuels: A Reality Check on Alternative Energy” (best price here); 2015 “When Trucks Stop Running: Energy and the Future of Transportation”; “Barriers to Making Algal Biofuels”; and “Crunch! Whole Grain Artisan Chips and Crackers.” Podcasts: Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity.

***

This is a letter from a friend about his experiences when PG&E cut his power off (and 2.5 million others).

Last Saturday around 2 pm we received notice that our area was under an evacuation warning owing to the huge Kincade fire that erupted on Wednesday evening (which we watched in terror and awe from our front porch). At 6:30 pm the order became mandatory. In the end, nearly 200,000 people, or about a third of the population of Sonoma County, were evacuated.

This was our first experience having to plan and prepare to leave on a moment’s notice. We found refuge with a friend in San Francisco, where we stayed until the order was downgraded to a warning on the following Tuesday. The experience highlighted a number of lessons for us.

First and foremost, do not ever evacuate without taking your dog’s favorite toy with you. This oversight necessitated a trip to a pet store to find the item in question. Having a dog certainly helped us keep focused and calmer, although I know she sensed that we were quite out of sorts for days.

Second, we discovered that fuel disappears quickly. We went out 15 minutes after the initial warning was issued, and the closest gasoline station already had 7 of 8 pumps taped closed. The second station had fuel, but long lines coming in from each direction. Of course, once the power went off, there was no fuel to be had at all.

Third, having PV was useless. Although Sonoma County is one of the most heavily PV’d counties in the state, nearly all is grid-tied and thus rendered inoperable in a blackout. And EV owners were out of luck and had to head to SF or the central valley to find electricity.

Fourth, it completely reinforced my understanding that “you can’t do just one thing”. Our power utility (PG&E) in October started implementing what they termed PSPS or “Public Safety Power Shutoffs,” or plainly, power blackouts, to avoid sparking additional fires if the high winds (which reached gusts of 103 mph on Sunday) blew trees into energized lines. But after the power went off Saturday night to nearly 2.5 million people, it started a cascading series of failures of complex systems. The county’s largest cable and internet provider failed, and even the copper-wire landline went dead (we keep a landline because of frequent winter blackouts); neither was restored until a day after the power returned 5 days later. This led to a huge range of consequences, including the near-complete shutdown of commerce, and mundane problems such as repair shops unable to release vehicles to owners because state law requires an invoice and the invoicing system is cloud-based. We also discovered that the battery backup on our garage door (now state-mandated for new houses), which we got after 5 people died in the 2017 fires because they couldn’t get out of their garages, went dead itself: it requires a trickle charge and goes offline after 2 days without power. And most critically, in my region of the county nearly everyone relies on a well, so without power there is no water. Fortunately, we didn’t lose any crops on our drip irrigation, though some were quite stressed from lack of water.

Fifth, evacuations and firefighting are very energy intensive. With 200,000 people leaving the county, that probably involved 75,000 or so cars, trucks, and RVs on the road, and people headed north to Eureka, inland to Sacramento, and south to San Jose. CalFire deployed 10 Super Huey helicopters, 445 fire engines, 41 dozers, and 64 water tenders in addition to the airtankers and the Global Supertanker, a modified 747 with retardant tanks. Air and ground assistance came from as far away as Montana. We saw fire trucks from Fullerton and Santa Barbara in southern California and some from Oregon, all of which drove to the fire zone. To then turn the power back on, PG&E had to deploy over 600 trucks and numerous helicopters to inspect every mile of every distribution line in the county for damage.

Without even speculating on what this means to the viability of living in California, it hardened my belief that folks are completely delusional in their efforts to design “resilient and sustainable cities” with programs that rely heavily on cloud-based sensors reporting traffic, home appliance usage, and requiring big-data crunching to work. I know I’m going to be even more of a gadfly at meetings where this comes up in the future. It just won’t work.

***

When PG&E told us the power would be out for two days, here are a few things we did: freeze as many water bottles as possible, since stores sell out of ice more than a day ahead of time. Put some frozen bottles into a cooler along with all of the ice from the ice maker, or the ice will melt and flood the floor. Add refrigerator food for the next 2 days to the cooler so that you never have to open the refrigerator door. Better than candles are battery lanterns. Charge laptops, phones, Kindles, and a battery pack to recharge them. Be sure to have matches to light the natural gas burners on the stove. I bet those of you who get hurricanes could add a lot to this list of how to cope!

Posted in Blackouts Electric Grid, Wildfire | 2 Comments

Book Review of Richard Heinberg’s 2011 “The End of Growth”

Preface. This is not a book review really, it’s more a few of my kindle notes. Heinberg writes so well, so clearly, that I am sure history will remember him as the most profound and wide-ranging expert on energy and ecological overshoot. Just a few of the topics in this book include:

  • The depletion of important resources including fossil fuels and minerals
  • The proliferation of environmental impacts arising from both the extraction and use of resources (including the burning of fossil fuels)—leading to snowballing costs from both these impacts themselves and from efforts to avert them and clean them up
  • Financial disruptions due to the inability of our existing monetary, banking, and investment systems to adjust to both resource scarcity and soaring environmental costs—and their inability (in the context of a shrinking economy) to service the enormous piles of government and private debt that have been generated over the past couple of decades.

As always, I noted only what interested me. So much is left out, so do buy this book! And not just for yourself — I write a lot about why the electric grid will eventually come down for good in both of my Springer books, so buy it for your grandchildren to preserve knowledge and so that future generations will understand why collapse happened.


***

Richard Heinberg. 2011. The End of Growth: Adapting to Our New Economic Reality.  New Society Publishers.

The Deepwater Horizon incident also illustrates to some degree the knock-on effects of depletion and environmental damage upon financial institutions. Insurance companies have been forced to raise premiums on deepwater drilling operations, and impacts to regional fisheries have hit the Gulf Coast economy hard.

payments forced the company to reorganize and resulted in lower stock values and returns to investors. BP’s financial woes in turn impacted British pension funds that were invested in the company. This is just one event—admittedly a spectacular one. If it were an isolated problem, the economy could recover and move on. But we are, and will be, seeing a cavalcade of environmental and economic disasters, not obviously related to one another, that will stymie economic growth in more and more ways. These will include but are not limited to:

  • Climate change leading to regional droughts, floods, and even famines;
  • Shortages of water and energy; and
  • Waves of bank failures, company bankruptcies, and house foreclosures.

Each will be typically treated as a special case, a problem to be solved so that we can get “back to normal.” But in the final analysis, they are all related, in that they are consequences of growing human population striving for higher per-capita consumption of limited resources (including non-renewable, climate-altering fossil fuels), all on a finite and fragile planet.

The result: we are seeing a perfect storm of converging crises that together represent a watershed moment in the history of our species. We are witnesses to, and participants in, the transition from decades of economic growth to decades of economic contraction.

we are adding about 70 million new “consumers” each year. That makes further growth even more crucial: if the economy stagnates, there will be fewer goods and services per capita to go around.
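
The arithmetic behind this note can be made concrete with a toy calculation; the $70 trillion world GDP and 7 billion population below are round numbers of my own choosing, not Heinberg’s:

```python
# Flat world output divided among a growing population: output per
# person must fall. GDP and population figures are illustrative only.

gdp = 70e12             # world GDP in dollars, held constant
population = 7.0e9
for year in range(6):
    print(f"year {year}: ${gdp / population:,.0f} per person")
    population += 70e6  # roughly 70 million net new people per year
```

Five years of 70 million extra people against flat output erodes per-capita income by almost 5 percent, which is why stagnation alone feels like decline.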

We harnessed the energies of coal, oil, and natural gas to build and operate cars, trucks, highways, airports, airplanes, and electric grids—all the essential features of modern industrial society. Through the one-time-only process of extracting and burning hundreds of millions of years’ worth of chemically stored sunlight, we built what appeared (for a brief, shining moment) to be a perpetual-growth machine. We learned to take what was in fact an extraordinary situation for granted. It became normal.

But as the era of cheap, abundant fossil fuels comes to an end, our assumptions about continued expansion are being shaken to their core. The end of growth is a very big deal indeed. It means the end of an era, and of our current ways of organizing economies, politics, and daily life. Without growth, we will have to virtually reinvent human life on Earth.

World leaders, if they are deluded about our actual situation, are likely to delay putting in place the support services that can make life in a non-growing economy survivable, and they will almost certainly fail to make needed, fundamental changes to monetary, financial, food, and transport systems. As a result, what could have been a painful but endurable process of adaptation could become history’s greatest tragedy. We can survive the end of growth, but only if we recognize it for what it is and act accordingly.

As early as 1998, petroleum geologists Colin Campbell and Jean Laherrère were discussing a Peak Oil impact scenario that went like this. Sometime around the year 2010, they theorized, stagnant or falling oil supplies would lead to soaring and more volatile petroleum prices, which would precipitate a global economic crash. This rapid economic contraction would in turn lead to sharply curtailed energy demand, so oil prices would then fall; but as soon as the economy regained strength, demand for oil would recover, prices would again soar, and as a result of that the economy would relapse. This cycle would continue, with each recovery phase being shorter and weaker, and each crash deeper and harder, until the economy was in ruins. Financial systems based on the assumption of continued growth would implode, causing more social havoc than the oil price spikes would themselves generate.

Meanwhile, volatile oil prices would frustrate investments in energy alternatives: one year, oil would be so expensive that almost any other energy source would look cheap by comparison; the next year, the price of oil would have fallen far enough that energy users would be flocking back to it, with investments in other energy sources looking foolish. But low oil prices would discourage exploration for more petroleum, leading to even worse fuel shortages later on. Investment capital would be in short supply in any case because the banks would be insolvent due to the crash, and governments would be broke due to declining tax revenues. Meanwhile, international competition for dwindling oil supplies might lead to wars between petroleum importing nations, between importers and exporters, and between rival factions within exporting nations.

But what happened next riveted the world’s attention to such a degree that the oil price spike was all but forgotten: in September 2008, the global financial system nearly collapsed. The reasons for this sudden, gripping crisis apparently had to do with housing bubbles, lack of proper regulation of the banking industry, and the over-use of bizarre financial products that almost nobody understood. However, the oil price spike had played a critical (if largely overlooked) role in initiating the economic meltdown.

In the immediate aftermath of that global financial near-death experience, both the Peak Oil impact scenario proposed a decade earlier and the Limits to Growth standard-run scenario of 1972 seemed to be confirmed with uncanny and frightening accuracy. Global trade was falling. The world’s largest auto companies were on life support. The U.S. airline industry had shrunk by almost a quarter. Food riots were erupting in poor nations around the world. Lingering wars in Iraq (the nation with the world’s second-largest crude oil reserves) and Afghanistan (the site of disputed oil and gas pipeline projects) continued to bleed the coffers of the world’s foremost oil-importing nation.

Meanwhile, the debate about what to do to rein in global climate change exemplified the political inertia that had kept the world on track for calamity since the early ’70s. It had by now become obvious to nearly every person of modest education and intellect that the world has two urgent, incontrovertible reasons to rapidly end its reliance on fossil fuels: the twin threats of climate catastrophe and impending constraints to fuel supplies. Yet at the Copenhagen climate conference in December, 2009, the priorities of the most fuel-dependent nations were clear: carbon emissions should be cut, and fossil fuel dependency reduced, but only if doing so does not threaten economic growth.

We must convince ourselves that life in a non-growing economy can be fulfilling, interesting, and secure. The absence of growth does not necessarily imply a lack of change or improvement. Within a non-growing or equilibrium economy there can still be continuous development of practical skills, artistic expression, and certain kinds of technology. In fact, some historians and social scientists argue that life in an equilibrium economy can be superior to life in a fast-growing economy: while growth creates opportunities for some, it also typically intensifies competition—there are big winners and big losers, and (as in most boom towns) the quality of relations within the community can suffer as a result. Within a non-growing economy it is possible to maximize benefits and reduce factors leading to decay, but doing so will require pursuing appropriate goals: instead of more, we must strive for better; rather than promoting increased economic activity for its own sake, we must emphasize whatever increases quality of life without stoking consumption. One way to do this is to reinvent and redefine growth itself.

 “Classical” economic philosophers such as Adam Smith (1723–1790), Thomas Robert Malthus (1766–1834), and David Ricardo (1772–1823) introduced basic concepts such as supply and demand, division of labor, and the balance of international trade.

These pioneers set out to discover natural laws in the day-to-day workings of economies. They were striving, that is, to make of economics a science. They admired the ability of physicists, biologists, and astronomers to demonstrate the fallacy of old church doctrines, and to establish new universal “laws” by means of inquiry and experiment.

Economic philosophers, for their part, could point to price as arbiter of supply and demand, acting everywhere to allocate resources far more effectively than any human manager or bureaucrat could ever possibly do—surely this was a principle as universal and impersonal as the force of gravitation!

The classical theorists gradually adopted the math and some of the terminology of science. Unfortunately, however, they were unable to incorporate into economics the basic…

Economic theory required no falsifiable hypotheses and demanded no repeatable controlled experiments. Economists began to think of themselves as scientists, while in fact their discipline remained a branch of moral philosophy—as it largely does to this day.

Importantly, these early philosophers had some inkling of natural limits and anticipated an eventual end to economic growth. The essential ingredients of the economy were understood to consist of labor, land, and capital. There was on Earth only so much land (which in these theorists’ minds stood for all natural resources), so of course at some point the expansion of the economy would cease. Both Malthus and Smith explicitly held this view. A somewhat later economic philosopher, John Stuart Mill (1806-1873), put the matter as follows: “It must always have been seen, more or less distinctly, by political economists, that the increase in wealth is not boundless: that at the end of what they term the progressive state lies the stationary state…”

But starting with Adam Smith, the idea that continuous “improvement” in the human condition was possible came to be generally accepted.

A key to this transformation was the gradual deletion by economists of land from the theoretical primary ingredients of the economy (increasingly, only labor and capital really mattered—land having been demoted to a sub-category of capital). This was one of the refinements that turned classical economic theory into neoclassical economics; others included the theories of utility maximization and rational choice.

While this shift began in the 19th century, it reached its fruition in the 20th through the work of economists who explored models of imperfect competition, and theories of market forms and industrial organization, while emphasizing tools such as the marginal revenue curve (this is when economics came to be known as “the dismal science”—partly because its terminology was, perhaps intentionally, increasingly mind-numbing).

Meanwhile, however, the most influential economist of the 19th century, a philosopher named Karl Marx, had thrown a metaphorical bomb through the window of the house that Adam Smith had built. In his most important book, Das Kapital, Marx proposed a name for the economic system that had evolved since the Middle Ages: capitalism. It was a system founded on capital. Many people assume that capital is simply another word for money, but that entirely misses the essential point: capital is wealth—money, land, buildings, or machinery—that has been set aside for production of more wealth. If you use your entire weekly paycheck for rent, groceries, and other necessities, you may have money but no capital. But even if you are deeply in debt, if you own stocks or bonds, or a computer that you use for a home-based business, you have capital. Capitalism, as Marx defined it, is a system in which productive wealth is privately owned. Communism (which Marx proposed as an alternative) is one in which productive wealth is owned by the community, or by the nation on behalf of the people. In any case, Marx said, capital tends to grow.

Marx also wrote that capitalism is inherently unsustainable, in that when the workers become sufficiently impoverished by the capitalists, they will rise up and overthrow their bosses and establish a communist state (or, eventually, a stateless workers’ paradise). The ruthless capitalism of the 19th century resulted in booms and busts, and a great increase in inequality of wealth—and therefore an increase in social unrest. With the depression of 1893 and the crash of 1907, and finally the Great Depression of the 1930s, it appeared to many social commentators of the time that capitalism was indeed failing, and that Marx-inspired uprisings were inevitable; the Bolshevik revolt in 1917 served as a stark confirmation of those hopes or fears (depending on one’s point of view).

The next few decades saw a three-way contest between the Keynesian social liberals, the followers of Marx, and temporarily marginalized neoclassical or neoliberal economists who insisted that social reforms and Keynesian meddling by government with interest rates, spending, and borrowing merely impeded the ultimate efficiency of the free Market.

the fall of the Soviet Union at the end of the 1980s, Marxism ceased to have much of a credible voice in economics. Its virtual disappearance from the discussion created space for the rapid rise of the neoliberals, who for some time had been drawing energy from widespread reactions against the repression and inefficiencies of state-run economies. Margaret Thatcher and Ronald Reagan both relied heavily on advice from neoliberal economists of the Chicago School

One of the most influential libertarian, free-market economists of recent decades was Alan Greenspan (b. 1926), who, as U.S. Federal Reserve Chairman from 1987 to 2006, argued for privatization of state-owned enterprises and de-regulation of businesses—yet Greenspan nevertheless ran an activist Fed that expanded the nation’s money supply in ways and to degrees that neither Friedman nor Hayek would have approved of.

There is a saying now in Russia: Marx was wrong in everything he said about communism, but he was right in everything he wrote about capitalism. Since the 1980s, the nearly worldwide re-embrace of classical economic philosophy has predictably led to increasing inequalities of wealth within the U.S. and other nations, and to more frequent and severe economic bubbles and crashes. Which brings us to the global crisis that began in 2008. By this time all mainstream economists (Keynesians and neoliberals alike) had come to assume that perpetual growth is the rational and achievable goal of national economies. The discussion was only about how to maintain it—through government intervention or a laissez-faire approach that assumes the Market always knows best. But…

It is clearly a challenge to the neoliberals, whose deregulatory policies were largely responsible for creating the housing bubble whose implosion is generally credited with stoking the crisis. But it is a conundrum also for the Keynesians, whose stimulus packages have failed in their aim of increasing employment and general economic activity. What we have, then, is a crisis not just of the economy, but also of economic theory and philosophy.

The ideological clash between Keynesians and neoliberals (represented to a certain degree in the escalating all-out warfare between the U.S. Democratic and Republican political parties) will no doubt continue and even intensify. But the ensuing heat of battle will yield little light if both philosophies conceal the same fundamental errors. One such error is of course the belief that economies can and should perpetually grow. But that error rests on another that is deeper and subtler. The subsuming of land within the category of capital by nearly all post-classical economists had amounted to a declaration that Nature is merely a subset of the human economy—an endless pile of resources to be transformed into wealth. It also meant that natural resources could always be substituted with some other form of capital—money or technology. The reality, of course, is that the human economy exists within, and entirely depends upon Nature, and many natural resources have no realistic substitutes. This fundamental logical and philosophical mistake, embedded at the very heart of modern mainstream economic philosophies, set society directly upon a course toward the current era of climate change and resource depletion, and its persistence makes conventional economic theories—of both Keynesian and neoliberal varieties—utterly incapable of dealing with the economic and environmental survival threats to civilization in the 21st century.

For help, we can look to the ecological and biophysical economists, whose ideas have been thoroughly marginalized by the high priests and gatekeepers of mainstream economics.

…the spectacular growth of debt—in obvious and subtle forms—that has occurred during the past few decades. That phenomenon in turn must be seen in light of the business cycles that characterize economic activity in modern industrial societies, and the central banks that have been set up to manage them.

We’ve already noted how nations learned to support the fossil fuel-stoked growth of their physical economies by increasing their money supply via fractional reserve banking. As money was gradually (and finally completely) de-linked from physical substance (i.e., precious metals), the creation of money became tied to the making of loans by commercial banks.
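
The money-creation mechanism described here can be sketched with the textbook fractional-reserve loop. The 10 percent reserve ratio and $1,000 initial deposit below are arbitrary illustration values, not figures from the book:

```python
# Toy fractional-reserve loop: a bank keeps a 10% reserve on each new
# deposit and lends out the rest, which gets redeposited and lent again.
# Reserve ratio and initial deposit are arbitrary illustration values.

reserve_ratio = 0.10
deposit = 1000.0       # initial deposit, dollars
total_money = 0.0      # total deposits created across all rounds
for _ in range(200):   # enough rounds for the series to converge
    total_money += deposit
    deposit *= (1 - reserve_ratio)  # the re-lent, redeposited portion

print(round(total_money))  # converges toward 1000 / 0.10 = 10000
```

Each round adds less new money than the last, and the total converges to the initial deposit divided by the reserve ratio; when lending slows and old loans are repaid, the same arithmetic runs in reverse and the money supply contracts.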

This meant that the supply of money was entirely elastic—as much could be created as was needed, and the amount in circulation could contract as well as expand. And the growth of money was tied to the growth of debt. The system is dynamic and unstable, and this instability manifests in the business cycle. In the expansionary phase of the cycle, businesses see the future as rosy, and therefore take out loans to build more productive capacity and hire new workers. Because many businesses are doing this at the same time, the pool of available workers shrinks; so, to attract and keep the best workers, businesses have to raise wages. With wages rising, worker-consumers have more money in their pockets. Worker-consumers spend much of that money on products from the businesses that hire them, helping spread even more optimism about the future. Amid all this euphoria, worker-consumers go into debt based on the expectation that their wages will continue to grow, making it easy to repay loans. Businesses go into debt expanding their productive capacity. Real estate prices go up because of rising demand (former renters deciding they can now afford to buy), which means that houses are worth more as collateral if existing homeowners want to take out big loans to do some remodeling or to buy a new car. All of this borrowing and spending increases the money supply and the velocity of money.

At some point, however, the overall mood of the country changes. Businesses have invested in as much productive capacity as they are likely to need for a while. They feel they have taken on as much debt as they can handle, and don’t feel the need to hire more employees. Upward pressure on wages ceases, and that helps dampen the general sense of optimism about the economy. Workers likewise become shy about taking on more debt, as they are unsure whether they will be able to make payments. Instead, they concentrate on paying off existing debts.

With fewer loans being written, less new money is being created; meanwhile, as earlier loans are paid off, money effectively disappears from the system. The nation’s money supply contracts in a self-reinforcing spiral. But if people increase their savings during this downward segment of the cycle, they eventually will feel more secure and therefore more willing to begin spending again. Also, businesses will eventually have liquidated much of their surplus productive capacity and thereby reduced their debt burden. This sets the stage for the next expansion phase.

A bubble consists of trade in high volumes at prices that are considerably at odds with intrinsic values, but the word can also be used more broadly to refer to any instance of rapid expansion of currency or credit that’s not sustainable over the long run. Bubbles always end with a crash—a rapid, sharp decline in asset values.

The upsides and downsides of the business cycle are reflected in higher or lower levels of inflation. Inflation is often defined in terms of higher wages and prices, but (as the Austrian economists have persuasively argued) wage and price inflation is actually just the symptom of an increase in the money supply relative to the amounts of goods and services being traded, which in turn is typically the result of exuberant borrowing and spending. The downside of the business cycle, in the worst instance, can produce the opposite of inflation, or deflation. Deflation manifests as declining wages and prices, consequent upon a declining money supply relative to goods and services traded, due to a contraction of borrowing and spending.
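
The Austrian point can be stated compactly with the equation of exchange, MV = PQ (money supply times velocity equals price level times real output traded). A minimal sketch with made-up numbers, showing how money growth alone pushes prices up when velocity and output are flat:

```python
# Equation of exchange: M * V = P * Q
# M = money supply, V = velocity, P = price level, Q = real output traded.
# All figures below are illustrative, not historical data.

def price_level(money_supply, velocity, real_output):
    """Solve MV = PQ for the price level P."""
    return money_supply * velocity / real_output

p0 = price_level(money_supply=1000, velocity=2.0, real_output=2000)  # P = 1.0
p1 = price_level(money_supply=1100, velocity=2.0, real_output=2000)  # P = 1.1

# A 10% increase in M, with V and Q unchanged, shows up as 10% price inflation.
inflation = p1 / p0 - 1.0
print(f"price inflation: {inflation:.0%}")
```

Deflation is the same identity run in reverse: when borrowing contracts and M shrinks relative to Q, the price level must fall.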

As we have seen, bubbles are a phenomenon generally tied to speculative investing. But in a larger sense our entire economy has assumed the characteristics of a bubble—even a Ponzi scheme. That is because it has come to depend upon staggering and continually expanding amounts of debt: government and private debt; debt in the trillions, and tens of trillions, and hundreds of trillions of dollars; debt that, in aggregate, has grown by 500 percent since 1980; debt that has grown faster than economic output (measured in GDP) in all but one of the past 50 years; debt that can never be repaid; debt that represents claims on quantities of labor and resources that simply do not exist.

Looking at the problem close up, the globalization of the economy looms as a prominent factor. In the 1970s and ’80s, with stiffer environmental and labor standards to contend with domestically, corporations began eyeing the regulatory vacuum, cheap labor, and relatively untouched natural resource base of less-industrialized nations as a potential goldmine. International investment banks started loaning poor nations enormous sums to pay for ill-advised infrastructure projects (and, incidentally, to pay kickbacks to corrupt local politicians), later requiring these countries to liquidate their natural resources at fire-sale prices so as to come up with the cash required to make loan payments. Then, prodded by corporate interests, industrialized nations pressed for the liberalization of trade rules via the World Trade Organization (the new rules almost always subtly favored the wealthier trading partner).

All of this led predictably to a reduction of manufacturing and resource extraction in core industrial nations, especially the U.S. (many important resources were becoming depleted in the wealthy industrial nations anyway), and a steep increase in resource extraction and manufacturing in several “developing” nations, principally China. Reductions in domestic manufacturing and resource extraction in turn motivated investors within industrial nations to seek profits through purely financial means. As a result of these trends, there are now as many Americans employed in manufacturing as there were in 1940, when the nation’s population was roughly half what it is today—while the proportion of total U.S. economic activity deriving from financial services has tripled during the same period. And speculative investing has become an accepted practice that is taught in top universities and institutionalized in the world’s largest corporations.

The most important financial development during the 1970s was the growth of securitization—a financial practice of pooling various types of contractual debt (such as residential mortgages, commercial mortgages, auto loans, or credit card debt obligations) and selling it to investors in the form of bonds, pass-through securities, or collateralized mortgage obligations (CMOs). The principal and interest on the debts underlying the security are paid back to investors regularly. Securitization provided an avenue for more investors to fund more debt. In effect, securitization caused (or allowed) claims on wealth to increase far above previous levels.

In 1970 the top 100 CEOs earned about $45 for every dollar earned by the average worker; by 2008 the ratio was over 1,000 to one.

In the 1990s, as the surplus of financial capital continued to grow, investment banks began inventing a slew of new securities with high yields. In assessing these new products, rating agencies used mathematical models that, in retrospect, seriously underestimated their levels of risk. Until the early 1970s, bond credit ratings agencies had been paid for their work by investors who wanted impartial information on the creditworthiness of securities issuers and their various offerings. Starting in the early 1970s, the “Big Three” ratings agencies (Standard & Poor’s, Moody’s, and Fitch) were paid instead by the securities issuers for whom they issued those ratings. This eventually led to ratings agencies actively encouraging the issuance of collateralized debt obligations (CDOs).

The Clinton administration adopted “affordable housing” as one of its explicit goals (this didn’t mean lowering house prices; it meant helping Americans get into debt), and over the decade the percentage of Americans owning their homes increased 7.8 percent. This initiated a persistent upward trend in real estate prices.

In the late 1990s investors piled into Internet-related stocks, creating a speculative bubble. The dot-com bubble burst in 2000 (as with all bubbles, it was only a matter of “when,” not “if”), and a year later the terrifying crimes of September 11, 2001 resulted in a four-day closure of U.S. stock exchanges and history’s largest one-day decline in the Dow Jones Industrial Average. These events together triggered a significant recession. Seeking to counter a deflationary trend, the Federal Reserve lowered its federal funds rate target from 6.5 percent to 1.0 percent, making borrowing more affordable.

Downward pressure on interest rates was also coming from the nation’s high and rising trade deficit. Every nation’s balance of payments must sum to zero, so if a nation is running a current account deficit it must balance that amount by earning from foreign investments, by running down reserves, or by obtaining loans from other countries. In other words, a country that imports more than it exports must borrow to pay for those imports. Hence American imports had to be offset by large and growing amounts of foreign investment capital flowing into the U.S. That inflowing capital bid bond prices up, and there is an inevitable inverse relationship between bond prices and interest rates, so trade deficits tend to force interest rates down.

Foreign investors had plenty of funds to lend, either because they had very high personal savings rates (in China, up to 40 percent of income saved), or because of high oil prices (think OPEC). A torrent of funds—it’s been called a “Giant Pool of Money” that roughly doubled in size from 2000 to 2007, reaching $70 trillion—was flowing into the U.S. financial markets. While foreign governments were purchasing U.S. Treasury bonds, thus avoiding much of the impact of the eventual crash, other foreign investors, including pension funds,
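
The inverse relationship between bond prices and interest rates is easiest to see with a perpetual bond, whose yield is simply its fixed coupon divided by its price. The numbers below are toy figures, not actual Treasury data:

```python
# A perpetuity pays a fixed coupon forever, so: price = coupon / yield,
# which rearranges to: yield = coupon / price. Bidding the price up
# mechanically pushes the yield (the interest rate) down.

coupon = 50.0  # fixed annual payment in dollars (illustrative)

def yield_at(price):
    return coupon / price

y_before = yield_at(1000.0)  # 5.0% when the bond trades at $1,000
y_after = yield_at(1250.0)   # 4.0% after inflowing capital bids it to $1,250

print(f"yield falls from {y_before:.1%} to {y_after:.1%}")
```

This is the mechanism by which the flood of foreign capital described above pressed U.S. interest rates downward.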

By this time a largely unregulated “shadow banking system,” made up of hedge funds, money market funds, investment banks, pension funds, and other lightly-regulated entities, had become critical to the credit markets and was underpinning the financial system as a whole. But the shadow “banks” tended to borrow short-term in liquid markets to purchase long-term, illiquid, and risky assets, profiting on the difference between lower short-term rates and higher long-term rates. This meant that any disruption in credit markets would result in rapid deleveraging, forcing these entities to sell long-term assets (such as mortgage-backed securities, or MBSs) at depressed prices.
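
The borrow-short, lend-long spread, and why a credit disruption forces rapid deleveraging, can be sketched with illustrative numbers (the leverage ratio, rates, and fire-sale discount are all assumptions, not figures from the text):

```python
# A stylized shadow-bank carry trade: fund long-term assets with cheap
# short-term borrowing, then see what a forced sale does to thin equity.

assets = 100.0      # $100M of long-term assets (e.g. MBSs)
equity = 5.0        # thin capital cushion: 20x leverage
short_rate = 0.02   # cost of rolling short-term funding
long_yield = 0.05   # yield on the long-term assets

# Profit on the spread: 100*0.05 - 95*0.02 = $3.1M/yr, a 62% return on equity.
annual_carry = assets * long_yield - (assets - equity) * short_rate

# If credit markets seize up and the assets must be sold at a 10% discount,
# the loss exceeds the entire equity cushion:
fire_sale_loss = assets * 0.10
remaining_equity = equity - fire_sale_loss  # negative: the entity is wiped out

print(f"carry: {annual_carry:.1f}M/yr, equity after fire sale: {remaining_equity:.1f}M")
```

The same leverage that makes the spread so profitable in good times is what turns a modest price decline into insolvency.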

Between 1997 and 2006, the price of the typical American house increased by 124%.

People bragged that their houses were earning more than they were, believing that the bloating of house values represented a flow of real money that could be tapped essentially forever. In a sense this money was being stolen from the next generation: younger first-time buyers had to burden themselves with unmanageable debt in order to enter the market, while older homeowners who bought before the bubble were able to sell, downsize, and live on the profit.

For a brief time between 2006 and mid-2008, investors fled toward futures contracts in oil, metals, and food, driving up commodities prices worldwide. Food riots erupted in many poor nations, where the cost of wheat and rice doubled or tripled. In part, the boom was based on a fundamental economic trend: demand for commodities was growing—due in part to the expansion of economies in China, India, and Brazil—while supply growth was lagging. But speculation forced prices higher and faster than physical shortage could account for. For Western economies, soaring oil prices had a sharp recessionary impact, with already cash-strapped new homeowners now having to spend eighty to a hundred dollars every time they filled the tank in their SUV. The auto, airline, shipping, and trucking industries were sent reeling.

The U.S. real estate bubble of the early 2000s was the largest (in terms of the amount of capital involved) in history. And its crash carried an eerie echo of the 1930s: Austrian and Post-Keynesian economists have argued that it wasn’t the stock market crash that drove the Great Depression so much as farm failures making it impossible for farmers to make mortgage payments—along with housing bubbles in Florida, New York, and Chicago.

Real estate bubbles are essentially credit bubbles, because property owners generally use borrowed money to purchase property (this is in contrast to currency bubbles, in which nations inflate their currency to pay off government debt). The amount of outstanding debt soars as buyers flood the market, bidding property prices up to unrealistic levels and taking out loans they cannot repay. Too many houses and offices are built, and materials and labor are wasted in building them. Real estate bubbles also lead to an excess of homebuilders, who must retrain and retool when the bubble bursts. These kinds of bubbles lead to systemic crises affecting the economic integrity of nations.

Indeed, the housing bubble of the early 2000s had become the oxygen of the U.S. economy—the source of jobs, the foundation for Wall Street’s recovery from the dot-com bust, the attractant for foreign capital, the basis for household wealth accumulation and spending. Its bursting changed everything.

And there is reason to think it has not fully deflated: commercial real estate may be waiting to exhale next. Over the next five years, about $1.4 trillion in commercial real estate loans will reach the end of their terms and require new financing. Commercial property values have fallen more than 40 percent nationally since their 2007 peak, so nearly half the loans are underwater. Vacancy rates are up and rents are down.

The impact of the real estate crisis on banks is profound, and goes far beyond defaults upon outstanding mortgage contracts: systemic dependence on MBSs, CDOs, and derivatives means many of the banks, including the largest, are effectively insolvent and unable to take on more risk (we’ll see why in more detail in the next section).

The demographics are not promising for a recovery of the housing market anytime soon: the oldest of the Baby Boomers are 65 and entering retirement. Few have substantial savings; many had hoped to fund their golden years with house equity—and to realize that, they must sell. This will add more houses to an already glutted market, driving prices down even further.

With regard to debt, what are those limits likely to be and how close are we to hitting them? There are practical limits to debt within such a system, and those limits are likely to show up in somewhat different ways for each of the four categories of debt indicated in the graph.

With government debt, problems arise when required interest payments become a substantial fraction of tax revenues. Currently for the U.S., the total Federal budget amounts to about $3.5 trillion, of which 12 percent (or $414 billion) goes toward interest payments. But in 2009, tax revenues amounted to only $2.1 trillion; thus interest payments currently consume nearly one-fifth of tax revenues.

By the time the debt reaches $20 trillion, roughly ten years from now, interest payments may constitute the largest Federal budget outlay category, eclipsing even military expenditures. If Federal tax revenues haven’t increased by that time, interest payments will be consuming a considerably larger share of them than today’s one-fifth.
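
The arithmetic can be checked with the figures given above: $414 billion of interest against $2.1 trillion of revenue today, then a projected $20 trillion of debt at a few assumed average interest rates (the text gives no rate; the rates below are illustrative assumptions):

```python
revenues = 2.1e12      # 2009 Federal tax revenues (from the text)
interest_now = 414e9   # current annual interest payments (from the text)

share_now = interest_now / revenues  # already almost one-fifth of revenues

# Project interest on $20T of debt at assumed average rates:
for avg_rate in (0.03, 0.04, 0.05):
    share = 20e12 * avg_rate / revenues
    print(f"at {avg_rate:.0%} average interest: {share:.0%} of revenues")
```

Even at a modest 3 percent average rate, interest on $20 trillion would absorb roughly 29 percent of unchanged revenues; at 4 percent and above it crosses the 30-percent-of-receipts threshold discussed below.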

Once 100 percent of tax revenues have to go toward interest payments and all government operations have to be funded with more borrowing—on which still more interest will have to be paid—the system will have arrived at a kind of financial singularity: a black hole of debt, if you will. But in all likelihood we would not have to get to that ultimate impasse before serious problems appear.

Many economic pundits suggest that when government has to spend 30 percent of tax receipts on interest payments, the country is in a debt trap from which there is no easy escape. Given current trajectories of government borrowing and interest rates, that 30 percent mark could be hit in just a few years. Even before then, U.S. creditworthiness will suffer and interest costs will rise.

However, some argue that limits to government debt (due to snowballing interest payments) need not be a hard constraint—especially for a large nation, like the U.S., that controls its own currency. The United States government is constitutionally empowered to create money, including creating money to pay the interest on its debts. Or, the government could in effect loan the money to itself via its central bank, which would then rebate interest payments back to the Treasury (this is in fact what the Treasury and Fed are doing with Quantitative Easing 2).

The most obvious complication that might arise is this: If at some point general confidence that external U.S. government debt (i.e., money owed to private borrowers or other nations) could be repaid with debt of equal “value” were deeply and widely shaken, potential buyers of that debt might decide to keep their money under the metaphorical mattress (using it to buy factories or oilfields instead), even if doing so posed its own set of problems. Then the Fed would become virtually the only available buyer of government debt, which might eventually undermine confidence in the currency, possibly igniting a rapid spiral of refusal that would end only when the currency failed. There are plenty of historic examples of currency failures, so this would not be a unique occurrence.

But as long as deficit spending doesn’t exceed certain bounds, and as long as the economy resumes growth in the not-too-distant future, then it can be sustained for quite some time. Ponzi schemes theoretically can continue forever—if the number of potential participants is infinite. The absolute size of government debt is not necessarily a critical factor, as long as future growth will be sufficient so that the proportion of debt relative to revenues remains the same. Even an increase in that proportion is not necessarily cause for alarm, as long as it is only temporary. This, at any rate, is the Keynesian argument. Keynesians would also point out that government debt is only one category of total debt, and that U.S. government debt hasn’t grown proportionally relative to other categories of debt to any alarming degree (until the current recession).

Baby Boomers (the most numerous demographic cohort in the nation’s history, encompassing 70 million Americans) are reaching retirement age, which means that their lifetime spending cycle has peaked. It’s not that Boomers won’t continue to buy things (everybody has to eat), but their aggregate spending is unlikely to increase, given that cohort members’ savings are, on average, inadequate for retirement (one-third of them have no savings whatever). Out of necessity, Boomers will be saving more from now on, and spending less. And that won’t help the economy grow.  

When demand for products declines, corporations aren’t inclined to borrow to increase their productive capacity. Even corporate borrowing aimed at increasing financial leverage has limits. Too much corporate debt reduces resiliency during slow periods—and the future is looking slow for as far as the eye can see. Durable goods orders are down, housing starts and new home sales are down, savings are up. As a result, banks don’t want to lend to companies, because the risk of default on such loans is now perceived as being higher than it was a few years ago; in addition, the banks are reluctant to take on more risk of any sort given the fact that many of the assets on their balance sheets consist of now-worthless derivatives and CDOs.

Meanwhile, ironically and perhaps surprisingly, U.S. corporations are sitting on over a trillion dollars because they cannot identify profitable investment opportunities and because they want to hang onto whatever cash they have in anticipation of continued hard times.

If only we could get to the next upside of the business cycle, then more corporate debt would be justified for both lenders and borrowers. But so far confidence in the future is still weak.

One of the main reforms enacted during the Great Depression, contained in the Glass–Steagall Act of 1933, was a requirement that commercial banks refrain from acting as investment banks. In other words, they were prohibited from dealing in stocks, bonds, and derivatives. This prohibition was based on an implicit understanding that there should be some sort of firewall within the financial system separating productive investment from pure speculation, or gambling. This firewall was eliminated by the passage of the Gramm–Leach–Bliley Act of 1999 (for which the financial services industry lobbied tirelessly). As a result, all large U.S. banks have for the past decade been deeply engaged in speculative investment, using both their own and their clients’ money.

With derivatives, since there is no requirement to own the underlying asset, and since there is often no requirement of evidence of ability to cover the bet, there is no effective limit to the amount that can be wagered. It’s true that many derivatives largely cancel each other out, and that their ostensible purpose is to reduce financial risk. Nevertheless, if a contract is settled, somebody has to pay—unless they can’t.

In the heady years of the 2000s, even the largest and most prestigious banks engaged in what can only be termed criminally fraudulent behavior on a massive scale. As revealed in sworn Congressional testimony, firms including Goldman Sachs deliberately created flawed securities and sold tens of billions of dollars’ worth of them to investors, then took out many more billions of dollars’ worth of derivatives contracts essentially betting against the securities they themselves had designed and sold. They were quite simply defrauding their customers, which included foreign and domestic pension funds. To date, no senior executive with any bank or financial services firm has been prosecuted for running these scams. Instead, most of the key figures are continuing to amass immense personal fortunes, confident no doubt that what they were doing—and in many cases continue to do—is merely a natural extension of the inherent logic of their industry.

The degree and concentration of exposure on the part of the biggest banks with regard to derivatives was and is remarkable: as of 2005, JP Morgan Chase, Bank of America, Citibank, Wachovia, and HSBC together accounted for 96 percent of the $100 trillion of derivatives contracts held by 836 U.S. banks.

Even though many derivatives were insurance against default, or wagers that a particular company would fail, to a large degree they constituted a giant bet that the economy as a whole would continue to grow (and, more specifically, that the value of real estate would continue to climb). So when the economy stopped growing, and the real estate bubble began to deflate, this triggered a systemic unraveling that could be halted (and only temporarily) by massive government intervention.  

Suddenly “assets” in the form of derivative contracts that had a stated value on banks’ ledgers were clearly worth much less. If these assets had to be sold, or if they were “marked to market” (valued on the books at the amount they could actually sell for), the banks would be shown to be insolvent. Government bailouts essentially enabled the banks to keep those assets hidden, so that banks could appear solvent and continue carrying on business.

Despite the proliferation of derivatives, the financial system still largely revolves around the timeworn practice of receiving deposits and making loans. Bank loans are the source of money in our modern economy. If the banks go away, so does the rest of the economy.

But as we have just seen, many banks are probably actually insolvent because of the many near-worthless derivative contracts and bad mortgage loans they count as assets on their balance sheets.

One might well ask: If commercial banks have the power to create money, why can’t they just write off these bad assets and carry on? Ellen Brown explains the point succinctly in her useful book Web of Debt: “[U]nder the accountancy rules of commercial banks, all banks are obliged to balance their books, making their assets equal their liabilities. They can create all the money they can find borrowers for, but if the money isn’t paid back, the banks have to record a loss; and when they cancel or write off debt, their assets fall. To balance their books . . . they have to take the money either from profits or from funds invested by the bank’s owners [i.e., shareholders]; and if the loss is more than its owners can profitably sustain, the bank will have to close its doors.”
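
Brown’s point reduces to the basic balance-sheet identity: assets = liabilities + equity. A minimal sketch with invented figures shows how a write-off lands on shareholder equity and can push a bank into insolvency:

```python
# Illustrative bank balance sheet (all figures invented, in $M).
loans = 90.0          # loans carried as assets
other_assets = 20.0   # cash, securities, buildings
deposits = 100.0      # owed to depositors (liabilities)

equity = loans + other_assets - deposits  # 10.0: a solvent bank

# Writing off bad mortgage loans shrinks assets; deposits are still owed
# in full, so the entire loss comes out of equity.
write_off = 15.0
loans -= write_off
equity = loans + other_assets - deposits  # -5.0: insolvent

print(f"equity after write-off: {equity}M")
```

Creating new loans cannot fix this, because every new loan adds a matching liability; only profits or new shareholder capital can rebuild equity.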

So, given their exposure via derivatives, bad real estate loans, and MBSs, the banks aren’t making new loans because they can’t take on more risk. The only way to reduce that risk is for government to guarantee the loans. Again, as long as the down-side of this business cycle is short, such a plan could work in principle.

But whether it actually will in the current situation is problematic. As noted above, Ponzi schemes can theoretically go on forever, as long as the number of new investors is infinite. Yet in the real world the number of potential investors is always finite. There are limits. And when those limits are hit, Ponzi schemes can unravel very quickly.

The shadow banks can still write more derivative contracts, but that doesn’t do anything to help the real economy and just spreads risk throughout the system. That leaves government, which (if it controls its own currency and can fend off attacks from speculators) can continue to run large deficits, and the central banks, which can enable those deficits by purchasing government debt outright—but unless such efforts succeed in jump-starting growth in the other sectors, that is just a temporary end-game strategy.

Remember: in a system in which money is created through bank loans, there is never enough money in existence to pay back all debts with interest. The system only continues to function as long as it is growing.

So, what happens to this mountain of debt in the absence of economic growth? Answer: Some kind of debt crisis. And that is what we are seeing.

Debt crises have occurred frequently throughout the history of civilizations, beginning long before the invention of fractional reserve banking and credit cards. Many societies learned to solve the problem with a “debt jubilee”: According to the Book of Leviticus in the Bible, every fiftieth year is a Jubilee Year, in which slaves and prisoners are to be freed and debts are to be forgiven.
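
The claim that loan-created money can never cover principal plus interest can be illustrated with a toy model (the numbers and the zero-growth assumption are ours, and the model deliberately ignores that banks re-spend some interest income as wages and dividends):

```python
# Each period, new loans create new money; the outstanding claims accrue
# interest. With lending held constant (no growth), compare what exists
# with what is owed.

rate = 0.05   # interest on outstanding loans
money = 0.0   # money in circulation (all of it loaned into existence)
debt = 0.0    # principal plus accrued interest owed back to banks

for year in range(10):
    new_loans = 100.0           # constant lending, i.e. zero growth
    money += new_loans          # loans create money...
    debt = (debt + new_loans) * (1 + rate)  # ...but claims compound faster

shortfall = debt - money  # what is owed but does not exist
print(f"money: {money:.0f}, owed: {debt:.0f}, gap: {shortfall:.0f}")
```

After ten years roughly $1,000 exists while about $1,321 is owed, and the gap widens every period; only continually accelerating lending (growth) can keep borrowers, in aggregate, able to service their debts.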

For householders facing unaffordable mortgage payments or a punishing level of credit card debt, a jubilee may sound like a capital idea. But what would that actually mean today, if carried out on a massive scale—when debt has become the very fabric of the economy? Remember: we have created an economic machine that needs debt like a car needs gas.

Realistically, we are unlikely to see a general debt jubilee in coming years; what we will see instead are defaults and bankruptcies that accomplish essentially the same thing—the destruction of debt. Which, in an economy like ours, effectively means a destruction of wealth and claims upon wealth. Debt will have to be written off in enormous amounts—by the trillions of dollars. Over the short term, government will attempt to stanch this flood of debt-shedding in the household, corporate, and financial sectors by taking on more debt of its own—but eventually it simply won’t be able to keep up, given the inherent limits on government borrowing discussed above.

We began with the question, “How close are we to hitting the limits to debt?” The evident answer is: we have already probably hit realistic limits to household debt and corporate debt; the ratio of U.S. total debt-to-GDP is probably near or past the danger mark; and limits to government debt may be within sight, though that conclusion is more controversial and doubtful.

For the U.S., actions undertaken by the Federal government and the Federal Reserve bank system have so far resulted in totals of $3 trillion actually spent and $11 trillion committed as guarantees. Some of these actions are discussed below; for a complete tally of the expenditures and commitments, see the online CNN Bailout Tracker.

The New Deal had cost somewhere between $450 and $500 billion and had increased government’s share of the national economy from 4 percent to 10 percent. ARRA represented a much larger outlay that was spent over a much shorter period, and increased government’s share of the economy from 20 percent to 25 percent.

At the end of 2010, President Obama and congressional leaders negotiated a compromise package of extended and new tax cuts that, in total, would reduce potential government revenues by an estimated $858 billion. This was, in effect, a third stimulus package.

Critics of the stimulus packages argued that transitory benefits to the economy had been purchased by raising government debt to frightening levels. Proponents of the packages answered that, had government not acted so boldly, an economic crisis might have turned into complete and utter ruin.

While the U.S. government stimulus packages were enormous in scale, the actions of the Federal Reserve dwarfed them in terms of dollar amounts committed.

During the past three years, the Fed’s balance sheet has swollen to more than $2 trillion through its buying of bank and government debt. Actual expenditures included $29 billion for the Bear Stearns bailout; $149.7 billion to buy debt from Fannie Mae and Freddie Mac; $775.6 billion to buy mortgage-backed securities, also from Fannie and Freddie; and $109.5 billion to buy hard-to-sell assets (including MBSs) from banks. However, the Fed committed itself to trillions more in insuring banks against losses, loaning to money market funds, and loaning to banks to purchase commercial paper. Altogether, these outlays and commitments totaled a minimum of $6.4 trillion.

Documents released by the Fed on December 1, 2010 showed that more than $9 trillion in total had been supplied to Wall Street firms, commercial banks, foreign banks, and corporations, with Citigroup, Morgan Stanley, and Merrill Lynch borrowing sums that cumulatively totaled over $6 trillion. The collateral for these loans was undisclosed but widely thought to be stocks, CDSs, CDOs, and other securities of dubious value.

In one of its most significant and controversial programs, known as “quantitative easing,” the Fed twice expanded its balance sheet substantially, first by buying mortgage-backed securities from banks, then by purchasing outstanding Federal government debt (bonds and Treasury certificates) to support the Treasury debt market and help keep interest rates down on consumer loans. The Fed essentially creates money on the spot for this purpose (though no money is literally “printed”), thus monetizing U.S. government debt.

In November 2008 China announced a stimulus package totaling 4 trillion yuan ($586 billion) as an attempt to minimize the impact of the global financial crisis on its domestic economy. In proportion to the size of China’s economy, this was a much larger stimulus package than that of the U.S. Public infrastructure development made up the largest portion, nearly 38 percent, followed by earthquake reconstruction, funding for social welfare plans, rural development, and technology advancement programs.

What’s the bottom line on all these stimulus and bailout efforts? In the U.S., $12 trillion of total household net worth disappeared in 2008, and there will likely be more losses ahead, largely as a result of a continued fall in real estate values though increasingly as a result of job losses as well. The government’s stimulus efforts, totaling less than $1 trillion, cannot hope to make up for this historic evaporation of wealth. While indirect subsidies may temporarily keep home prices from falling further, that just keeps houses unaffordable for workers whose incomes are shrinking. Meanwhile, the bailouts of banks and shadow banks have been characterized as government throwing money at financial problems it cannot solve, rewarding the very people who created them. Rather than being motivated by the suffering of American homeowners or of governments in over their heads, the bailouts of Fannie Mae and Freddie Mac in the U.S., and of Greece and Ireland in the E.U., were (according to critics) essentially geared toward securing the investments of the banks and wealthy bondholders.

The stimulus-bailout efforts of 2008–2009—which in the U.S. cut interest rates from 5 percent to zero, ran the budget deficit up to 10 percent of GDP, and guaranteed $6.4 trillion to shore up the financial system—arguably cannot be repeated. These constituted quite simply the largest commitments of funds in world history, dwarfing the total amounts spent in all the wars of the 20th century in inflation-adjusted terms (for the U.S., the cost of World War II amounted to $3.2 trillion). Not only the U.S., but Japan and the European nations as well have exhausted their arsenals.

But more will be needed as countries, states, counties, and cities near bankruptcy due to declining tax revenues. Meanwhile the U.S. has lost 8.4 million jobs—and if loss of hours worked is considered, that adds the equivalent of another 3 million; the nation will need to generate an extra 450,000 jobs each month for three years to get back to pre-crisis levels of employment. The only way these problems can be allayed (not fixed) is through more central bank money creation and government spending.

Once a credit bubble has inflated, the eventual correction (which entails destruction of credit and assets) is of greater magnitude than government’s ability to spend. The cycle must sooner or later play itself out.

There may be a few more arrows in the quiver of economic policy makers: central bankers could try to drive down the value of domestic currencies to stimulate exports, and the Fed could engage in more quantitative easing. But sooner or later these measures will merely undermine currencies.

Further, the way the Fed at first employed quantitative easing in 2009 was minimally productive.

QE1 amounted to adding about a trillion dollars to banks’ balance sheets, with the assumption that banks would then use this money as a basis for making loans.[2] The “multiplier effect” (in which banks make loans in amounts many times the size of deposits) should theoretically have resulted in the creation of roughly $9 trillion within the economy. However, this did not happen: because there was reduced demand for loans (companies didn’t want to expand in a recession and families didn’t want to take on more debt), the banks just sat on this extra capital. A better result could arguably have been obtained if the Fed were somehow to have distributed the same amount of money directly to debtors, rather than to banks, because then at least the money would either have circulated to pay for necessities, or helped to reduce the general debt overhang.
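The multiplier arithmetic behind that $9 trillion figure can be sketched in a few lines. The 10 percent reserve ratio is an illustrative assumption (the text only implies a multiplier of roughly nine); the function name is mine:

```python
def potential_new_credit(base_deposits, reserve_ratio):
    """Textbook money multiplier: total deposits = base / reserve ratio.
    New credit created is the total minus the original base deposits."""
    total_deposits = base_deposits / reserve_ratio
    return total_deposits - base_deposits

# QE1: roughly $1 trillion added to bank balance sheets, at an assumed
# 10% reserve requirement (an illustrative figure, not from the text)
new_credit = potential_new_credit(1.0e12, 0.10)
print(f"${new_credit / 1e12:.0f} trillion of potential new credit")  # → $9 trillion
```

The point of the sketch is that the multiplier only operates if banks actually lend; with loan demand collapsed, the theoretical $9 trillion never materialized.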

QE2 was about funding Federal government debt interest-free. Because the Federal Reserve rebates its profits (after deducting expenses) to the Treasury, creating money to buy government debt obligations is an effective way of increasing that debt without increasing interest payments. Critics describe this as the government “printing money” and assert that it is highly inflationary; however, given the extremely deflationary context (trillions of dollars’ worth of write-downs in collateral and credit), the Fed would have to “print” far more than it is doing to result in real inflation. Nevertheless, as we will see in Chapter 5 in a discussion of “currency wars,” other nations view this strategy as a way to drive down the dollar so as to decrease the value of foreign-held dollar-denominated debt—in effect forcing them to pay for America’s financial folly.

Central banks and governments are barely keeping the wheels on society, but their actions come with severe long-term costs and risks. And what they can actually accomplish is most likely limited anyway.

Deflation represents a disappearance of credit and money, so that whatever money remains has increased purchasing power. Once the bubble began to burst back in 2007-2008, say the deflationists, a process of contraction began that inevitably must continue to the point where debt service is manageable and prices for assets such as homes and stocks are compelling based on long-term historical trends.   However, many deflationists tend to agree that the inflationists are probably right in the long run: at some point, perhaps several years from now, some future U.S. administration will resort to truly extraordinary means to avoid defaulting on interest payments on its ballooning debt, as well as to avert social disintegration and restart economic activity. There are several scenarios by which this might happen—including government simply printing money in enormous quantities and distributing it directly to banks or citizens. The net effect would be the same in all cases: a currency collapse.

In general, what we are actually seeing so far is neither dramatic deflation nor hyperinflation. Despite the evaporation of trillions of dollars in wealth during the past four years, and despite government and central bank interventions with a potential nameplate value also running in the trillions of dollars, prices (which most economists regard as the signal of inflation or deflation) have remained fairly stable. That is not to say that the economy is doing well: the ongoing problems of unemployment, declining tax revenues, and business and bank failures are obvious to everyone. Rather, what seems to be happening is that the efforts of the U.S. Federal government and the Federal Reserve have temporarily more or less succeeded in balancing out the otherwise massively deflationary impacts of defaults, bankruptcies, and falling property values. With its new functions, the Fed is acting as the commercial bank of last resort, transferring debt (mostly in the form of MBSs and Treasuries) from the private sector to the public sector.

The Fed’s zero-interest-rate policy has given a huge hidden subsidy to banks by allowing them to borrow Fed money for nothing and then lend it to the government at a 3 percent interest rate. But this is still not inflationary, because the Federal Reserve is merely picking up the slack left by the collapse of credit in the private sector. In effect, the nation’s government and its central bank are together becoming the lender of last resort and the borrower of last resort—and (via the military) increasingly also both the consumer of last resort and the employer of last resort.

While leaders will make every effort to portray this as a gradual return to growth, in fact the economy will be losing ground and will remain fragile, highly vulnerable to upsetting events that could take any of a hundred forms—including international conflict, terrorism, the bankruptcy of a large corporation or megabank, a sovereign debt event (such as a default by one of the European countries now lined up for bailouts), a food crisis, an energy shortage or temporary grid failure, an environmental disaster, a curtailment of government-Fed intervention based on a political shift in the makeup of Congress, or a currency war.

Extreme social unrest would be an inevitable result of the gross injustice of requiring a majority of the population to forego promised entitlements and economic relief following the bailout of a small super-wealthy minority on Wall Street. Political opportunists can be counted on to exacerbate that unrest and channel it in ways utterly at odds with society’s long-term best interests. This is a toxic brew.

Growth requires not just energy in the most general sense, but forms of energy with specific characteristics. After all, the Earth is constantly bathed in energy—indeed, the amount of solar energy that falls on Earth’s surface each hour is greater than the amount of fossil-fuel energy the world uses every year. But sunlight energy is diffuse and difficult to use directly. Economies need sources of energy that are concentrated and controllable, and that can be made to do useful work. From a short-term point of view, fossil fuels proved to be energy sources with highly desirable characteristics: they could be extracted from Earth’s crust quite cheaply (at least in the early days), they were portable, and they delivered a lot of energy per unit of weight and/or volume—in most instances, far more than the firewood that people had been accustomed to using.

In 2009, Post Carbon Institute and the International Forum on Globalization undertook a joint study to analyze 18 energy sources (from oil to tidal power) using 10 criteria (scalability, renewability, energy density, energy returned on energy invested, and so on).

The results were published as Searching for a Miracle: Net Energy Limits and the Fate of Industrial Societies.

Our conclusion was that there is no credible scenario in which alternative energy sources can entirely make up for fossil fuels as the latter deplete.

Given oil’s pivotal role in the economy, high prices did more than reduce demand; they helped undermine the economy as a whole in the 1970s and again in 2008. Economist James Hamilton of the University of California, San Diego, has assembled a collection of studies showing a tight correlation between oil price spikes and recessions during the past 50 years. Seeing this correlation, every attentive economist should have forecast a steep recession beginning in 2008, as the oil price soared.

By mid-2009 the oil price had settled within the “Goldilocks” range—not too high (so as to kill the economy and, with it, fuel demand), and not too low (so as to scare away investment in future energy projects and thus reduce supply). That just-right price band appeared to be between $60 and $80 a barrel. How long prices can stay in or near the Goldilocks range is anyone’s guess, but as declines in production in the world’s old super-giant oilfields continue to accelerate and exploration costs continue to mount, the lower boundary of that just-right range will inevitably continue to migrate upward. And while the world economy remains frail, its vulnerability to high energy prices is more pronounced, so that even $80-85 oil could gradually weaken it further, choking off signs of recovery. In other words, oil prices have effectively put a cap on economic recovery. This problem would not exist if the petroleum industry could just get busy and make a lot more oil, so that each unit would be cheaper. But despite its habitual use of the terms “produce” and “production,” the industry doesn’t make oil, it merely extracts the stuff from finite stores in the Earth’s crust. As we have already seen, the cheap, easy oil is gone. Economic growth is hitting the Peak Oil ceiling.

As more and more resources acquire the Goldilocks syndrome, general commodity prices will likely spike and crash repeatedly, making a hash of efforts to stabilize the economy.

There are three main solutions to the problem of Peak Phosphate: composting of human wastes, including urine diversion; more efficient application of fertilizer; and farming in such a way as to make existing soil phosphorus more accessible to plants.

It’s worth noting that for the past few decades a vocal minority of farmers, agricultural scientists, and food system theorists including Wendell Berry, Wes Jackson, Vandana Shiva, Robert Rodale, and Michael Pollan, has argued against centralization, industrialization, and globalization of agriculture, and for an ecological agriculture with minimal fossil fuel inputs. Where their ideas have taken root, the adaptation to Peak Oil and the end of growth will be easier. Unfortunately, their recommendations have not become mainstream, because industrialized, globalized agriculture has proved capable of producing larger short-term profits for banks and agribusiness cartels. Even more unfortunately, the available time for a large-scale, proactive food system transition before the impacts of Peak Oil and economic contraction arrive is gone. We’ve run out the clock. In his book Dirt, David Montgomery makes a powerful case that soil erosion was a major cause of the Roman economy’s decline.

Data from the U.S. Geological Survey shows that within the U.S. many mineral resources are well past their peak rates of production.[4] These include bauxite (whose production peaked in 1943), copper (1998), iron ore (1951), magnesium (1966), phosphate rock (1980), potash (1967), rare earth metals (1984), tin (1945), titanium (1964), and zinc (1969).[5]

There are 17 rare earth elements (REEs) with names like lanthanum, neodymium, europium, and yttrium. They are critical to a variety of high-tech products including catalytic converters, color TV and flat panel displays, permanent magnets, batteries for hybrid and electric vehicles, and medical devices; to manufacturing processes like petroleum refining; and to various defense systems like missiles, jet engines, and satellite components. REEs are even used in making the giant electromagnets in modern wind turbines. But rare earth mines are failing to keep up with demand. China produces 97 percent of the world’s REEs, and has issued a series of contradictory public statements about whether, and in what amounts, it intends to continue exporting these elements.

Indium is used in indium tin oxide, which is a thin-film conductor in flat-panel television screens. Armin Reller, a materials chemist, and his colleagues at the University of Augsburg in Germany have been investigating the problem of indium depletion. Reller estimates that the world has, at best, 10 years before production begins to decline; known deposits will be exhausted by 2028, so new deposits will have to be found and developed. Some analysts are now suggesting that shortages of energy minerals including indium, REEs, and lithium for electric car batteries could trigger trade wars. 

Armin Reller and his colleagues have also looked into gallium supplies. Discovered in 1875, gallium is a blue-white metal with certain unusual properties, including a very low melting point and an unwillingness to oxidize. These make it useful as a coating for optical mirrors, a liquid seal in strongly heated apparatus, and a substitute for mercury in ultraviolet lamps. Gallium is also essential to making liquid-crystal displays in cell phones, flat-screen televisions, and computer monitors. With the explosive profusion of LCD displays in the past decade, supplies of gallium have become critical; Reller projects that by about 2017 existing sources will be exhausted.

Palladium (along with platinum and rhodium) is a primary component in the autocatalysts used in automobiles to reduce exhaust emissions. Palladium is also employed in the production of multi-layer ceramic capacitors in cellular telephones, personal and notebook computers, fax machines, and auto and home electronics. Russian stockpiles have been a key component in world palladium supply for years, but those stockpiles are nearing exhaustion, and prices for the metal have soared as a result.

Uranium is the fuel for nuclear power plants and is also used in nuclear weapons manufacturing; small amounts are employed in the leather and wood industries for stains and dyes, and as mordants of silk or wool. Depleted uranium is used in kinetic energy penetrator weapons and armor plating. In 2006, the Energy Watch Group of Germany studied world uranium supplies and issued a report concluding that, in its most optimistic scenario, the peak of world uranium production will be achieved before 2040. If large numbers of new nuclear power plants are constructed to offset the use of coal as an electricity source, then supplies will peak much sooner. Tantalum for cell phones. Helium for blimps. The list could go on. Perhaps it is not too much of an exaggeration to say that humanity is in the process of achieving Peak Everything.

Accidents and natural disasters have long histories; therefore it may seem peculiar at first to think that these could now suddenly become significant factors in choking off economic growth. However, two things have changed.   First, growth in human population and proliferation of urban infrastructure are leading to ever more serious impacts from natural and human-caused disasters.

There are also limits to the environment’s ability to absorb the insults and waste products of civilization, and we are broaching those limits in ways that can produce impacts of a scale far beyond our ability to contain or mitigate. The billions of tons of carbon dioxide that our species has released into the atmosphere through the combustion of fossil fuels are not only changing the global climate but also causing the oceans to acidify. Indeed, the scale of our collective impact on the planet has grown to such an extent that many scientists contend that Earth has entered a new geologic era—the Anthropocene. Humanly generated threats to the environment’s ability to support civilization are now capable of overwhelming civilization’s ability to adapt and regroup.

GDP impacts from the 2010 disasters were substantial. BP’s losses from the Deepwater Horizon gusher (which included cleanup costs and compensation to commercial fishers) have so far amounted to about $40 billion. The Pakistan floods caused damage estimated at $43 billion, while the financial toll of the Russian wildfires has been pegged at $15 billion.[4] Add in other events listed above, plus more not mentioned, and the total easily tops $150 billion for GDP losses in 2010 resulting from natural disasters and industrial accidents.[5] This does not include costs from ongoing environmental degradation (erosion of topsoil, loss of forests and fish species). How does this figure compare with annual GDP growth? Assuming world annual GDP of $58 trillion and an annual growth rate of three percent, annual GDP growth would amount to $1.74 trillion. Therefore natural disasters and industrial accidents, conservatively estimated, are already costing the equivalent of 8.6 percent of annual GDP growth.   As resource extraction moves from higher-quality to lower-quality ores and deposits, we must expect worse environmental impacts and accidents along the way. There are several current or planned extraction projects in remote and/or environmentally sensitive regions that could each result in severe global impacts equaling or even surpassing the Deepwater Horizon blowout. These include oil drilling in the Beaufort and Chukchi Seas; oil drilling in the Arctic National Wildlife Refuge; coal mining in the Utukok River Upland, Arctic Alaska; tar sands production in Alberta; shale oil production in the Rocky Mountains; and mountaintop-removal coal mining in Appalachia.
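The 8.6 percent figure can be checked directly from the numbers given in the paragraph above (world GDP of $58 trillion, 3 percent growth, roughly $150 billion in disaster losses):

```python
world_gdp = 58e12          # assumed world annual GDP, from the text
growth_rate = 0.03         # assumed annual growth rate
disaster_losses = 150e9    # estimated 2010 losses from disasters and accidents

annual_growth = world_gdp * growth_rate        # $1.74 trillion of new GDP per year
loss_share = disaster_losses / annual_growth   # fraction of growth consumed by losses
print(f"{loss_share:.1%} of annual GDP growth")  # → 8.6% of annual GDP growth
```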

Since climate is changing mostly because of the burning of fossil fuels, averting climate change is largely a matter of reducing fossil fuel consumption.[9] But as we have seen (and will confirm in more ways in the next chapter), economic growth depends on increasing energy consumption. Due to the inherent characteristics of alternative energy sources, it is extremely unlikely that society can increase its energy production while dramatically curtailing fossil fuel use.

Another environmental impact that is relatively slow and ongoing and even more difficult to put a price tag on is the decline in the number of other species inhabiting our planet. According to one recent study, one in five plant species faces extinction as a result of climate change, deforestation, and urban growth.

Non-human species perform ecosystem services that only indirectly benefit our kind, but in ways that turn out to be crucial. Phytoplankton, for example, are not a direct food source for people, but comprise the base of oceanic food chains—in addition to supplying half of the oxygen produced each year by nature. The abundance of plankton in the world’s oceans has declined 40 percent since 1950, according to a recent study, for reasons not entirely clear. This is one of the main explanations for a gradual decline in atmospheric oxygen levels recorded worldwide. A 2010 study led by Pavan Sukhdev, a former banker, that set out to put a price on the world’s environmental assets concluded that the annual destruction of rainforests entails an ultimate cost to society of $4.5 trillion—$650 for each person on the planet. But that cost is not paid all at once; in fact, over the short term, forest cutting looks like an economic benefit as a result of the freeing up of agricultural land and the production of timber. Like financial debt, environmental costs tend to accumulate until a crisis occurs and systems collapse.
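As a consistency check, the two figures from the Sukhdev study imply a world population of roughly seven billion, which matches the period around 2010:

```python
annual_cost = 4.5e12   # ultimate annual cost to society of rainforest destruction
per_person = 650       # quoted per-capita cost

implied_population = annual_cost / per_person
print(f"{implied_population / 1e9:.1f} billion people")  # → 6.9 billion people
```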

Declining oxygen levels, acidifying oceans, disappearing species, threatened oceanic food chains, changing climate—when considering planetary changes of this magnitude, it may seem that the end of economic growth is hardly the worst of humanity’s current problems. However, it is important to remember that we are counting on growth to enable us to solve or respond to environmental crises. With economic growth, we have surplus money with which to protect rainforests, save endangered species, and clean up after industrial accidents. Without economic growth, we are increasingly defenseless against environmental disasters—many of which paradoxically result from growth itself.

Talk of limits typically elicits dismissive references to the failed warnings of Thomas Malthus—the 18th century economist who reasoned that population growth would inevitably (and soon) outpace food production, leading to a general famine. Malthus was obviously wrong, at least in the short run: food production expanded throughout the 19th and 20th centuries to feed a fast-growing population. He failed to foresee the introduction of new hybrid crop varieties, chemical fertilizers, and the development of industrial farm machinery. The implication, whenever Malthus’s ghost is summoned, is that all claims that environmental limits will overtake growth are likewise wrong, and for similar reasons. New inventions and greater efficiency will always trump looming limits.

The main advantages of electrics are that their energy is used more efficiently (electric motors translate nearly all their energy into motive force, while internal combustion engines are much less efficient), they need less drive-train maintenance, and they are more environmentally benign (even if they’re running on coal-derived electricity, they usually entail lower carbon emissions due to their much higher energy efficiency). The drawbacks of electric vehicles have to do with the limited ability of batteries to store energy, as compared to conventional liquid fuels. Gasoline carries 45 megajoules per kilogram, while lithium-ion batteries can store only 0.5 MJ/kg. Improvements are possible, but the theoretical limit of chemical energy storage is still only about 3 MJ/kg. This is why we’ll never see battery-powered airliners: the batteries would be way too heavy to allow planes to get off the ground.
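The energy-density figures above translate into a stark weight penalty. This sketch compares the battery mass needed to match one kilogram of gasoline, both for today’s lithium-ion cells and at the stated theoretical limit of chemical storage:

```python
GASOLINE_MJ_PER_KG = 45.0    # energy density of gasoline, from the text
LI_ION_MJ_PER_KG = 0.5       # today's lithium-ion batteries
CHEM_LIMIT_MJ_PER_KG = 3.0   # stated theoretical limit of chemical storage

# Kilograms of battery needed to store the energy in 1 kg of gasoline
kg_today = GASOLINE_MJ_PER_KG / LI_ION_MJ_PER_KG      # 90 kg of batteries today
kg_limit = GASOLINE_MJ_PER_KG / CHEM_LIMIT_MJ_PER_KG  # 15 kg even at the limit
```

Even at the theoretical limit, batteries would weigh fifteen times as much as the gasoline they replace (and electric motors’ higher efficiency offsets only part of that gap), which is why battery-powered airliners remain implausible.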

The low energy density (by weight) of batteries tends to limit the range of electric cars. This problem can be solved with hybrid power trains—using a gasoline engine to charge the batteries, as in the Chevy Volt, or to push the car directly part of the time, as with the Toyota Prius—but that adds complexity and expense.  

Posted in Richard Heinberg | Comments Off on Book Review of Richard Heinberg’s 2011 “The End of Growth”

U.S. Army new jobs: quell social unrest from climate change, help get arctic oil

Preface. Of all the branches of government, the military is the most on top of climate change, peak oil, pandemics, power grid failure, and other disasters. I guess that shouldn’t be surprising; it’s their job to defend the U.S. against threats.

What I found interesting was that given the coming threats, the military is proposing new job opportunities for itself in addition to fighting wars abroad. They anticipate that disorder from pandemics, climate change, financial crashes, and more might require them to be here in the U.S. to maintain order. The Army also proposes to enable and defend arctic hydrocarbon resources, which climate change may make more available.

This study examines the implications of climate change over the next 50 years for the United States Army, assuming IPCC RCP 4.5 as the most likely future scenario for predicting expected outcomes.

Related: you might want to read Nafeez Ahmed’s take on this report here: U.S. Military Could Collapse Within 20 Years Due to Climate Change, Report Commissioned By Pentagon Says. The report says a combination of global starvation, war, disease, drought, and a fragile power grid could have cascading, devastating effects.

Alice Friedemann   www.energyskeptic.com   author of 2021 “Life After Fossil Fuels: A Reality Check on Alternative Energy”; 2015 “When Trucks Stop Running: Energy and the Future of Transportation”; “Barriers to Making Algal Biofuels”; & “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

***

Brosig M, Frawley CP, Hill A, et al (2019) Implications of climate change for the U.S. army. U.S. Army War College.  52 pages

Sea level rise, changes in water and food security, and more frequent extreme weather events are likely to result in the migration of large segments of the population. Rising seas will displace tens (if not hundreds) of millions of people, creating massive, enduring instability. This migration will be most pronounced in those regions where climate vulnerability is exacerbated by weak institutions and governance and underdeveloped civil society. Recent history has shown that mass human migrations can result in increased propensity for conflict and turmoil as new populations intermingle with and compete against established populations. More frequent extreme weather events will also increase demand for military humanitarian assistance.

Salt water intrusion into coastal areas and changing weather patterns will also compromise or eliminate fresh water supplies in many parts of the world. Additionally, warmer weather increases hydration requirements. This means that in expeditionary warfare, the Army will need to supply itself with more water. This significant logistical burden will be exacerbated on a future battlefield that requires constant movement due to the ubiquity of adversarial sensors and their deep strike capabilities.

My caption: New jobs for the military

A warming trend will also increase the range of insects that are vectors of infectious tropical diseases. This, coupled with large scale human migration from tropical nations, will increase the spread of infectious disease. The Army has tremendous logistical capabilities, unique in the world, for working in austere or unsafe environments. In the event of a significant infectious disease outbreak (domestic or international), the Army is likely to be called upon to assist in the response and containment. The report proposes working closely with the CDC on response and relief plans.

As the electorate becomes more concerned about climate change, it follows that elected officials will, as well. This may result in significant restrictions on military activities (in peacetime) that produce carbon emissions. The Department of Defense (DoD) does not currently possess an environmentally conscious mindset. Political and social pressure will eventually force the military to mitigate its environmental impact in both training and wartime. Implementation of these changes will be costly in effort, time and money.

All of the plans require energy; here are the plans that are directly energy-related:

In light of these findings, the military must consider changes in doctrine, organization, equipping, and training to anticipate changing environmental requirements. Lagging behind public and political demands for energy efficiency and minimal environmental footprint will significantly hamstring the Department’s efforts to face national security challenges. The Department will struggle to maintain its positive public image and that will impact the military’s ability to receive the required funding to face the growing number of security challenges.

[My comment: In a sly way, this study seems to acknowledge peak oil, though it’s stated as if the cause for lack of fuel will be the public’s awareness of climate change: “Problem: potential disruptions to readiness due to restrictions on fuel use”]

The decrease in Arctic sea ice and associated sea level rise will bring conflicting claims to newly accessible natural resources. It will also introduce a new theater of direct military contact between an increasingly belligerent Russia and other Arctic nations, including the U.S. The opening of the Arctic will also increase commercial opportunities. Whether due to increased commercial shipping traffic or expanded opportunities for hydrocarbon extraction, increased economic activity will drive a requirement for increased military expenditures specific to that region. The study recommends training and equipment to conduct future Arctic operations.

Power grid vulnerabilities: improve grid near military installations and fund internal power generation from solar/battery farms and small nuclear reactors.

The Arctic

According to the Intergovernmental Panel on Climate Change (IPCC), since satellite monitoring of the Arctic began in 1979, Arctic sea ice extent has decreased at a rate of 3.5-4.1% per decade (“Climate Change 2014 Synthesis Report,” Intergovernmental Panel on Climate Change, 2015, http://ipcc.ch/report/ar5/syr/).

According to a 2008 U.S. Geological Survey assessment, the Arctic likely holds approximately one quarter of the world’s undiscovered hydrocarbon reserves, with 20% of them potentially in U.S. territory.

Since territories aren’t well defined, this is mainly a Navy and Air Force issue; however, the Army will be tasked with wide area security and reconnaissance roles as part of any joint efforts to secure Arctic interests.

Russia has embarked on a rapid build-up in the Arctic, including expensive refurbishment of Soviet era Arctic bases. Russia’s current Arctic plans include the opening of ten search and rescue stations, 16 deep water ports, 13 airfields and ten air defense sites.  These developments create not only security outposts for Russia, but also threats to the U.S. mainland. Russia’s recent development of KH-101/102 air launched cruise missiles and SSC-8 ground launched cruise missiles potentially put much of the United States at risk from low altitude, radar evading, nuclear capable missiles.   

POWER GRID STRESS

The power grid that serves the United States is aging and continues to operate without a coordinated and significant infrastructure investment. Vulnerabilities exist to electricity-generating power plants, electric transmission infrastructure and distribution system components. Power transformers average over 40 years of age and 70 percent of transmission lines are 25 years or older. The U.S. national power grid is susceptible to coordinated cyber or physical attacks; electromagnetic pulse (EMP) attacks; space weather; and other natural events, to include the stressors of a changing climate (“Transmission & Distribution Infrastructure: A Harris Williams & Co. White Paper,” Harris Williams & Co., 2014).

If the power grid infrastructure collapsed:

  • Loss of perishable foods and medications
  • Loss of water and wastewater distribution systems
  • Loss of heating/air conditioning and electrical lighting systems
  • Loss of computer, telephone, and communications systems (including airline flights, satellite networks, and GPS services)
  • Loss of public transportation systems
  • Loss of fuel distribution systems and fuel pipelines
  • Loss of all electrical systems that do not have back-up power

There are 16 critical infrastructure sectors (here) that would be affected by a blackout: chemical, commercial facilities, communications, critical manufacturing, dams, defense industrial base, emergency services, energy, financial services, food and agriculture, government facilities, healthcare and public health, information technology, nuclear reactors / materials / waste, transportation systems, water and wastewater systems.

The Congressional Electro-Magnetic Pulse (EMP) Commission, in 2008, estimated it would cost $2 billion to harden just the grid’s critical nodes. The Task Force on National and Homeland Security calculates an additional $10 to $30 billion and many years necessary for a complete grid overhaul. The EMP Commission further cited that some of the very improvements of network interconnectedness created through the updated Supervisory Control and Data Acquisition (SCADA) network, which control power distribution around the country, introduced additional weaknesses to cyber-attack.

Department of Defense installations are 99 percent reliant on the U.S. power grid for electrical power generation due to the decommissioning of autonomous power generation capability as a budgetary cost-saving measure over the last two decades.

Global reductions in demand for hydrocarbons mean that gasoline, diesel, and jet fuel should become less expensive. On the other hand, reduced demand tends to reduce incentives to explore potential oil fields or build new refining facilities. Much of the U.S.’s domestic oil extraction is unprofitable at oil prices below $30 a barrel. Technological advances tend to push this number lower, but exhaustion of oil fields tends to push the number higher. In all scenarios, global declines in oil consumption increase the sensitivity of oil markets to the choices of large consumers like the U.S. DoD.

The automated, A.I.-enhanced force of the Army’s future is one that runs on electricity, not jet fuel (JP-8). More efficient or resilient production of electricity through micro-nuclear power generation or improved solar arrays can fundamentally alter the mobility and the logistical challenges of a mechanized force. Light, quick-charging batteries (super-capacitors) have tremendous value in such a force; so does the wireless transmission of electrical current.

[many pages on climate change]

Then a request for $100 million for fighting in Middle Eastern deserts: “The U.S. Army is precipitously close to mission failure concerning hydration of the force in a contested arid environment. The experience and best practices of the last 17 years of conflict in Afghanistan, Iraq, Syria, and Africa rely heavily on logistics force structures to support the warfighter with water, mostly procured through contracted bottled water, local wells, and Reverse Osmosis Water Purification Units (ROWPU). The ability to supply this amount of water in the most demanding environment is costly in money, personnel, infrastructure, and force structure. The calculation for water (8.34 pounds per gallon) in an arid environment equates to 66 pounds of water per soldier. Water is 30-40% of the force sustainment requirement. The Army must develop advanced technologies to capture ambient humidity.”

Daily water requirement per soldier: temperate 12.2 gallons, tropical 15.4, arid 15.8.
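The report’s planning figures are easy to sanity-check. A minimal sketch (assuming only the numbers quoted above: 8.34 pounds per gallon and the daily gallon rates) converts each daily ration to pounds per soldier:

```python
# Convert the Army's daily water planning rates (gallons per soldier)
# into pounds per soldier, at 8.34 lb per US gallon.
LB_PER_GALLON = 8.34

DAILY_GALLONS = {"temperate": 12.2, "tropical": 15.4, "arid": 15.8}

def daily_water_weight_lb(gallons: float) -> float:
    """Weight in pounds of a soldier's daily water ration."""
    return gallons * LB_PER_GALLON

for climate, gallons in DAILY_GALLONS.items():
    print(f"{climate}: {gallons} gal/day = "
          f"{daily_water_weight_lb(gallons):.0f} lb/day")
```

At these rates an arid-climate ration weighs about 132 pounds, while the excerpt’s 66-pound figure corresponds to roughly 8 gallons (66 / 8.34 ≈ 7.9); the report excerpt does not reconcile the two numbers.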

Current planning methodologies remain heavily vested in bottled water, meaning a larger force is needed to transport it.

In the 2000s in Iraq, over 864,000 bottles of water were consumed each month at one Forward Operating Base (FOB), with that number doubling during hotter months. Browne, Mathuel. “Marines Invest in New System to Purify Water on the Go.” Armed with Science: The Official US Defense Department Science Blog. 2017. http://science.dodlive.mil/2017/02/01/marines-invest-in-new-system-to-purify-water-on-the-go/.

ARCTIC OIL

Increased accessibility to the region for economic activity will consequently increase the security requirements and competition in the region. Currently Russia is rapidly expanding its Arctic military capabilities and capacity. The U.S. military must immediately begin expanding its capability to operate in the Arctic to defend economic interests and to partner with allies across the region.

As ice melts there will be increased shipping, population shifts to the region, and increased competition to extract the vast hydrocarbon resources that become more readily available as the ice sheets contract. These changes will drive an expansion of security efforts from nations across the region as they vie to claim and protect its economic resources.

The competition for resources in the Arctic will increase security requirements and the potential for conflict. The Army will not be excluded from those requirements or any conflict that develops; it will simply be unprepared for the mission and the environment in which it will occur. As Russian activity expands in the Arctic, both the Navy and the Air Force will compete for resources to meet the Russian threat. The Army must compete as well.

The Army needs to focus on developing an infantry carrier vehicle with low ground pressure to maximize maneuverability in adverse terrain. An amphibious-capable vehicle that distributes its weight broadly across its contact patches (whether wheeled or tracked) will increase the speed of maneuver necessary for units to conduct wide-area security across greater coverage areas.

PANDEMICS AND DISEASE (from climate change, yet more jobs for the army):  As the largest source of potential capacity and capability to respond to widespread disease outbreaks in the United States, the military should be prepared to execute defense support to civil authority (DSCA) missions of this type.

NUCLEAR POWER INDUSTRY

Currently, the Department of Energy produces tritium in two to four commercial pressurized water reactors (PWRs) run by the Tennessee Valley Authority (TVA). This commercial capability currently meets U.S. stockpile tritium requirements; however, given the overall age of the U.S. nuclear power industry, future PWRs may not be available to continue tritium production. The loss of tritium production would directly reduce the effectiveness of the U.S. nuclear stockpile by reducing or hindering the yield of its warheads. Without an effective nuclear stockpile, the U.S. cannot deter peer nuclear competitors and rogue nuclear states, increasing the risk of all-out war against the United States.

Directly tied to tritium production is the future of the nuclear power industry, an aging fleet of reactors built in the late 1960s and 1970s. Most were licensed by the Nuclear Regulatory Commission (NRC) to operate for 30 years on average, but many have received or are seeking extensions to 40 and 50 years. The age of the fleet and the lack of new reactors coming online create a significant risk to both the environment and the maintenance of the U.S. nuclear stockpile. “The highest priority of nuclear innovation policy should be to promote the availability of an advanced nuclear power system 15 to 20 years from now.”

Increasing U.S. baseline nuclear power generation from a mere 20% (and declining) to more than 80% (to cover the 60% currently supplied by coal) could significantly reduce greenhouse gases. The government will need to lead this expansion, which runs against fossil fuel business paradigms that have existed for more than 100 years. Any nuclear industry expansion must include a long-term review of tritium production requirements and analyze how the government will maintain its required tritium production capability.

[natters on and on about the need for nuclear power and tritium for bombs, with no mention of how to dispose of nuclear waste, or the lesson learned from Fukushima that spent nuclear fuel pools, which sit outside the containment vessel, are the real hazard (see “A nuclear spent fuel fire at Peach Bottom in Pennsylvania could force 8.8 million people to evacuate”)]

CONCLUSION

It is useful to remind ourselves regularly of the capacity of human beings to persist in stupid beliefs in the face of significant, contradictory evidence.  Mitigation of new large-scale stresses requires a commitment to learning, systematically, about what is happening.

Life is full of the unexpected, or the overlooked obvious. The term “black swan event” describes surprises of an especially momentous and nasty type. It was popularized by the mathematician Nassim Nicholas Taleb in his 2007 book of the same title, in which he argued that black swan events have three characteristics: “rarity, extreme impact, and retrospective (though not prospective) predictability.” In recent years, the concept of black swan events has gained currency in political, military, and financial contexts.

The black swan has a venerable history as an illustration of the ancient epistemological problem of induction: simply stated, no number of observations of a given relationship is sufficient to prove that a different relationship cannot occur. No amount of white swan sightings can guarantee that a different color swan is not out there waiting to be seen.

Three maxims can help us avoid dangerous failures of recognition, and speed learning when unexpected things happen.

1. Everything we believe about the world is provisional – “serving for the time being.” Adding the words “so far” to assertions about reality reminds us of this.

2. Unjustified certainty is very costly. The greater your certainty that you are right when you are wrong, the longer it will take you to recognize and incorporate new data into your system of belief, and to change your mind. General Douglas MacArthur was a confident man, and this confidence usually served him well, such as when he undertook the risky landings at Incheon in the Korean War. Yet MacArthur’s confidence betrayed him when China entered the war. He was certain that this would not happen, and MacArthur’s certainty delayed his recognition of a key change, exposing forces under his command to terrible risk. Confidence in your beliefs is valuable only insofar as it results in different choices (e.g., I choose A or B). Beyond that point, confidence has increasing costs.

3. Pay special attention to data that is unlikely in light of your current beliefs; it has much more information per unit, all else equal. In this sense, information content is measured as the potential to change how you think about the world. Information that is probable in light of your beliefs will have minimal effects on your understanding. Improbable information, if incorporated, will change it.

Posted in Military | Comments Off on U.S. Army new jobs: quell social unrest from climate change, help get arctic oil

Reforestation for the return to biomass after fossil fuels

Preface. Below are excerpts from a New York Times article about forests.

My book “Life After Fossil Fuels: A Reality Check on Alternative Energy” explains why the myriad ways we use fossil fuels can’t be electrified (or hydrogenized or anything else). Not even the electric grid can be 100% renewable.

Only biomass can do it all, obviously, since the 5,000 years of civilizations that preceded fossil fuels used biomass for energy as well as infrastructure. The least we could do for our descendants is to plant forests so they don’t freeze in the dark, can build homes, carts, and more, and can rebuild anew (and bury nuclear wastes).

Alice Friedemann    www.energyskeptic.com   author of 2021 Life After Fossil Fuels: A Reality Check on Alternative Energy best price here; 2015 When Trucks Stop Running: Energy and the Future of Transportation”, Barriers to Making Algal Biofuels, & “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity, XX2 report

***

Jabr F (2020) The Social Life of Forests. Trees appear to communicate and cooperate through subterranean networks of fungi. What are they sharing with one another? New York Times.

When Europeans arrived on America’s shores in the 1600s, forests covered one billion acres of the future United States — close to half the total land area. Between 1850 and 1900, U.S. timber production surged to more than 35 billion board feet from five billion. By 1907, nearly a third of the original expanse of forest — more than 260 million acres — was gone. As of 2012, the United States had more than 760 million forested acres. The age, health and composition of America’s forests have changed significantly, however. Although forests now cover 80 percent of the Northeast, for example, less than 1 percent of its old-growth forest remains intact.

And though clearcutting is not as common as it once was, it is still practiced on about 40 percent of logged acres in the United States and 80 percent of them in Canada. In a thriving forest, a lush understory captures huge amounts of rainwater, and dense root networks enrich and stabilize the soil. Clearcutting removes these living sponges and disturbs the forest floor, increasing the chances of landslides and floods, stripping the soil of nutrients and potentially releasing stored carbon to the atmosphere. When sediment falls into nearby rivers and streams, it can kill fish and other aquatic creatures and pollute sources of drinking water. The abrupt felling of so many trees also harms and evicts countless species of birds, mammals, reptiles and insects.

Humans have relied on forests for food, medicine and building materials for many thousands of years. Forests have likewise provided sustenance and shelter for countless species over the eons. But they are important for more profound reasons too. Forests function as some of the planet’s vital organs. The colonization of land by plants between 425 and 600 million years ago, and the eventual spread of forests, helped create a breathable atmosphere with the high level of oxygen we continue to enjoy today. Forests suffuse the air with water vapor, fungal spores and chemical compounds that seed clouds, cooling Earth by reflecting sunlight and providing much-needed precipitation to inland areas that might otherwise dry out. Researchers estimate that, collectively, forests store somewhere between 400 and 1,200 gigatons of carbon, potentially exceeding the atmospheric pool.

Crucially, a majority of this carbon resides in forest soils, anchored by networks of symbiotic roots, fungi and microbes. Each year, the world’s forests capture more than 24 percent of global carbon emissions, but deforestation — by destroying and removing trees that would otherwise continue storing carbon — can substantially diminish that effect. When a mature forest is burned or clear-cut, the planet loses an invaluable ecosystem and one of its most effective systems of climate regulation. The razing of an old-growth forest is not just the destruction of magnificent individual trees — it’s the collapse of an ancient republic whose interspecies covenant of reciprocation and compromise is essential for the survival of Earth as we’ve known it.

By the time she was in grad school at Oregon State University, however, Simard, today 60 years old and a professor of ecology at the University of British Columbia, understood that commercial clearcutting had largely superseded the sustainable logging practices of the past. Loggers were replacing diverse forests with homogeneous plantations, evenly spaced in upturned soil stripped of most underbrush. Without any competitors, the thinking went, the newly planted trees would thrive. Instead, they were frequently more vulnerable to disease and climatic stress than trees in old-growth forests. In particular, Simard noticed that up to 10 percent of newly planted Douglas fir were likely to get sick and die whenever nearby aspen, paper birch and cottonwood were removed. The reasons were unclear. The planted saplings had plenty of space, and they received more light and water than trees in old, dense forests. So why were they so frail?

Simard suspected that the answer was buried in the soil. Underground, trees and fungi form partnerships known as mycorrhizas: Threadlike fungi envelop and fuse with tree roots, helping them extract water and nutrients like phosphorus and nitrogen in exchange for some of the carbon-rich sugars the trees make through photosynthesis. Research had demonstrated that mycorrhizas also connected plants to one another and that these associations might be ecologically important, but most scientists had studied them in greenhouses and laboratories, not in the wild. For her doctoral thesis, Simard decided to investigate fungal links between Douglas fir and paper birch in the forests of British Columbia. Apart from her supervisor, she didn’t receive much encouragement from her mostly male peers. “The old foresters were like, Why don’t you just study growth and yield?” Simard told me. “I was more interested in how these plants interact. They thought it was all very girlie.”

Simard has studied webs of root and fungi in the Arctic, temperate and coastal forests of North America for nearly three decades. Her initial inklings about the importance of mycorrhizal networks were prescient, inspiring whole new lines of research that ultimately overturned longstanding misconceptions about forest ecosystems. By analyzing the DNA in root tips and tracing the movement of molecules through underground conduits, Simard has discovered that fungal threads link nearly every tree in a forest — even trees of different species. Carbon, water, nutrients, alarm signals and hormones can pass from tree to tree through these subterranean circuits. Resources tend to flow from the oldest and biggest trees to the youngest and smallest. Chemical alarm signals generated by one tree prepare nearby trees for danger. Seedlings severed from the forest’s underground lifelines are much more likely to die than their networked counterparts. And if a tree is on the brink of death, it sometimes bequeaths a substantial share of its carbon to its neighbors.

Although Simard’s peers were skeptical and sometimes even disparaging of her early work, they now generally regard her as one of the most rigorous and innovative scientists studying plant communication and behavior. David Janos, co-editor of the scientific journal Mycorrhiza, characterized her published research as “sophisticated, imaginative, cutting-edge.” Jason Hoeksema, a University of Mississippi biology professor who has studied mycorrhizal networks, agreed: “I think she has really pushed the field forward.” Some of Simard’s studies now feature in textbooks and are widely taught in graduate-level classes on forestry and ecology. She was also a key inspiration for a central character in Richard Powers’s 2019 Pulitzer Prize-winning novel, “The Overstory”: the visionary botanist Patricia Westerford. In May, Knopf will publish Simard’s own book, “Finding the Mother Tree,” a vivid and compelling memoir of her lifelong quest to prove that “the forest was more than just a collection of trees.”

Since Darwin, biologists have emphasized the perspective of the individual. They have stressed the perpetual contest among discrete species, the struggle of each organism to survive and reproduce within a given population and, underlying it all, the single-minded ambitions of selfish genes. Now and then, however, some scientists have advocated, sometimes controversially, for a greater focus on cooperation over self-interest and on the emergent properties of living systems rather than their units.

Before Simard and other ecologists revealed the extent and significance of mycorrhizal networks, foresters typically regarded trees as solitary individuals that competed for space and resources and were otherwise indifferent to one another. Simard and her peers have demonstrated that this framework is far too simplistic. An old-growth forest is neither an assemblage of stoic organisms tolerating one another’s presence nor a merciless battle royale: It’s a vast, ancient and intricate society. There is conflict in a forest, but there is also negotiation, reciprocity and perhaps even selflessness. The trees, understory plants, fungi and microbes in a forest are so thoroughly connected, communicative and codependent that some scientists have described them as superorganisms. Recent research suggests that mycorrhizal networks also perfuse prairies, grasslands, chaparral and Arctic tundra — essentially everywhere there is life on land. Together, these symbiotic partners knit Earth’s soils into nearly contiguous living networks of unfathomable scale and complexity. “I was taught that you have a tree, and it’s out there to find its own way,” Simard told me. “It’s not how a forest works, though.”

In some of her earliest and most famous experiments, Simard planted mixed groups of young Douglas fir and paper birch trees in forest plots and covered the trees with individual plastic bags. In each plot, she injected the bags surrounding one tree species with radioactive carbon dioxide and the bags covering the other species with a stable carbon isotope — a variant of carbon with an unusual number of neutrons. The trees absorbed the unique forms of carbon through their leaves. Later, she pulverized the trees and analyzed their chemistry to see if any carbon had passed from species to species underground. It had. In the summer, when the smaller Douglas fir trees were generally shaded, carbon mostly flowed from birch to fir. In the fall, when evergreen Douglas fir was still growing and deciduous birch was losing its leaves, the net flow reversed. As her earlier observations of failing Douglas fir had suggested, the two species appeared to depend on each other. No one had ever traced such a dynamic exchange of resources through mycorrhizal networks in the wild. In 1997, part of Simard’s thesis was published in the prestigious scientific journal Nature — a rare feat for someone so green. Nature featured her research on its cover with the title “The Wood-Wide Web,” a moniker that eventually proliferated through the pages of published studies and popular science writing alike.

In 2002, Simard secured her current professorship at the University of British Columbia, where she continued to study interactions among trees, understory plants and fungi. In collaboration with students and colleagues around the world, she made a series of remarkable discoveries. Mycorrhizal networks were abundant in North America’s forests. Most trees were generalists, forming symbioses with dozens to hundreds of fungal species. In one study of six Douglas fir stands measuring about 10,000 square feet each, almost all the trees were connected underground by no more than three degrees of separation; one especially large and old tree was linked to 47 other trees and projected to be connected to at least 250 more; and seedlings that had full access to the fungal network were 26 percent more likely to survive than those that did not.
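The “degrees of separation” finding is a shortest-path property of a network: how few fungal links connect any two trees. As a purely illustrative sketch (a made-up toy graph, not Simard’s data), a breadth-first search counts those links:

```python
from collections import deque

# Hypothetical mycorrhizal links between trees (an undirected graph).
# "hub" stands in for a large, old, highly connected tree.
LINKS = {
    "hub": ["A", "B", "C"],
    "A": ["hub", "D"],
    "B": ["hub"],
    "C": ["hub", "E"],
    "D": ["A"],
    "E": ["C"],
}

def degrees_of_separation(start: str, goal: str) -> int:
    """Fewest fungal links connecting two trees (BFS shortest path)."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        tree, hops = queue.popleft()
        if tree == goal:
            return hops
        for neighbor in LINKS[tree]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, hops + 1))
    return -1  # not connected

print(degrees_of_separation("D", "B"))  # D -> A -> hub -> B: prints 3
```

In this toy network every tree reaches every other through the hub in three links or fewer, which is the pattern the Douglas fir study reports: a few large old trees act as highly connected hubs that keep path lengths short.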

Depending on the species involved, mycorrhizas supplied trees and other plants with up to 40 percent of the nitrogen they received from the environment and as much as 50 percent of the water they needed to survive. Below ground, trees traded between 10 and 40 percent of the carbon stored in their roots. When Douglas fir seedlings were stripped of their leaves and thus likely to die, they transferred stress signals and a substantial sum of carbon to nearby ponderosa pine, which subsequently accelerated their production of defensive enzymes. Simard also found that denuding a harvested forest of all trees, ferns, herbs and shrubs — a common forestry practice — did not always improve the survival and growth of newly planted trees. In some cases, it was harmful.

At this point other researchers have replicated most of Simard’s major findings. It’s now well accepted that resources travel among trees and other plants connected by mycorrhizal networks. Most ecologists also agree that the amount of carbon exchanged among trees is sufficient to benefit seedlings, as well as older trees that are injured, entirely shaded or severely stressed, but researchers still debate whether shuttled carbon makes a meaningful difference to healthy adult trees. On a more fundamental level, it remains unclear exactly why resources are exchanged among trees in the first place, especially when those trees are not closely related.

“Darwin’s theory of evolution by natural selection is obviously 19th-century capitalism writ large,” wrote the evolutionary biologist Richard Lewontin.

As Darwin well knew, however, ruthless competition was not the only way that organisms interacted. Ants and bees died to protect their colonies. Vampire bats regurgitated blood to prevent one another from starving. Vervet monkeys and prairie dogs cried out to warn their peers of predators, even when doing so put them at risk. At one point Darwin worried that such selflessness would be “fatal” to his theory. In subsequent centuries, as evolutionary biology and genetics matured, scientists converged on a resolution to this paradox: Behavior that appeared to be altruistic was often just another manifestation of selfish genes — a phenomenon known as kin selection. Members of tight-knit social groups typically share large portions of their DNA, so when one individual sacrifices for another, it is still indirectly spreading its own genes.

Kin selection cannot account for the apparent interspecies selflessness of trees, however — a practice that verges on socialism. Some scientists have proposed a familiar alternative explanation: Perhaps what appears to be generosity among trees is actually selfish manipulation by fungi. Descriptions of Simard’s work sometimes give the impression that mycorrhizal networks are inert conduits that exist primarily for the mutual benefit of trees, but the thousands of species of fungi that link trees are living creatures with their own drives and needs. If a plant relinquishes carbon to fungi on its roots, why would those fungi passively transmit the carbon to another plant rather than using it for their own purposes? Maybe they don’t. Perhaps the fungi exert some control: What looks like one tree donating food to another may be a result of fungi redistributing accumulated resources to promote themselves and their favorite partners.

“Where some scientists see a big cooperative collective, I see reciprocal exploitation,” said Toby Kiers, a professor of evolutionary biology at Vrije Universiteit Amsterdam. “Both parties may benefit, but they also constantly struggle to maximize their individual payoff.” Kiers is one of several scientists whose recent studies have found that plants and symbiotic fungi reward and punish each other with what are essentially trade deals and embargoes, and that mycorrhizal networks can increase conflict among plants. In some experiments, fungi have withheld nutrients from stingy plants and strategically diverted phosphorous to resource-poor areas where they can demand high fees from desperate plants.

Several of the ecologists I interviewed agreed that regardless of why and how resources and chemical signals move among the various members of a forest’s symbiotic webs, the result is still the same: What one tree produces can feed, inform or rejuvenate another. Such reciprocity does not necessitate universal harmony, but it does undermine the dogma of individualism and temper the view of competition as the primary engine of evolution.

The most radical interpretation of Simard’s findings is that a forest behaves “as though it’s a single organism,” as she says in her TED Talk. Some researchers have proposed that cooperation within or among species can evolve if it helps one population outcompete another — an altruistic forest community outlasting a selfish one, for example. The theory remains unpopular with most biologists, who regard natural selection above the level of the individual to be evolutionarily unstable and exceedingly rare. Recently, however, inspired by research on microbiomes, some scientists have argued that the traditional concept of an individual organism needs rethinking and that multicellular creatures and their symbiotic microbes should be regarded as cohesive units of natural selection. Even if the same exact set of microbial associates is not passed vertically from generation to generation, the functional relationships between an animal or plant species and its entourage of microorganisms persist — much like the mycorrhizal networks in an old-growth forest. Humans are not the only species that inherits the infrastructure of past communities.

When a seed germinates in an old-growth forest, it immediately taps into an extensive underground community of interspecies partnerships. Uniform plantations of young trees planted after a clear-cut are bereft of ancient roots and their symbiotic fungi. The trees in these surrogate forests are much more vulnerable to disease and death because, despite one another’s company, they have been orphaned. Simard thinks that retaining some mother trees, which have the most robust and diverse mycorrhizal networks, will substantially improve the health and survival of future seedlings — both those planted by foresters and those that germinate on their own.

Since at least the late 1800s, North American foresters have devised and tested dozens of alternatives to standard clearcutting: strip cutting (removing only narrow bands of trees), shelterwood cutting (a multistage process that allows desirable seedlings to establish before most overstory trees are harvested) and the seed-tree method (leaving behind some adult trees to provide future seed), to name a few. These approaches are used throughout Canada and the United States for a variety of ecological reasons, often for the sake of wildlife, but mycorrhizal networks have rarely if ever factored into the reasoning.

Ryan told me about the 230,000-acre Menominee Forest in northeastern Wisconsin, which has been sustainably harvested for more than 150 years. Sustainability, the Menominee believe, means “thinking in terms of whole systems, with all their interconnections, consequences and feedback loops.” They maintain a large, old and diverse growing stock, prioritizing the removal of low-quality and ailing trees over more vigorous ones and allowing trees to age 200 years or more — so they become what Simard might call grandmothers. Ecology, not economics, guides the management of the Menominee Forest, but it is still highly profitable. Since 1854, more than 2.3 billion board feet have been harvested — nearly twice the volume of the entire forest — yet there is now more standing timber than when logging began. “To many, our forest may seem pristine and untouched,” the Menominee wrote in one report. “In reality, it is one of the most intensively managed tracts of forest in the Lake States.”

Diverse microbial communities inhabit our bodies, modulating our immune systems and helping us digest certain foods. The energy-producing organelles in our cells known as mitochondria were once free-swimming bacteria that were subsumed early in the evolution of multicellular life. Through a process called horizontal gene transfer, fungi, plants and animals — including humans — have continuously exchanged DNA with bacteria and viruses. From its skin, fur or bark right down to its genome, any multicellular creature is an amalgam of other life-forms. Wherever living things emerge, they find one another, mingle and meld.

Five hundred million years ago, as both plants and fungi continued oozing out of the sea and onto land, they encountered wide expanses of barren rock and impoverished soil. Plants could spin sunlight into sugar for energy, but they had trouble extracting mineral nutrients from the earth. Fungi were in the opposite predicament. Had they remained separate, their early attempts at colonization might have faltered or failed. Instead, these two castaways — members of entirely different kingdoms of life — formed an intimate partnership. Together they spread across the continents, transformed rock into rich soil and filled the atmosphere with oxygen.

Eventually, different types of plants and fungi evolved more specialized symbioses. Forests expanded and diversified, both above- and below ground. What one tree produced was no longer confined to itself and its symbiotic partners. Shuttled through buried networks of root and fungus, the water, food and information in a forest began traveling greater distances and in more complex patterns than ever before. Over the eons, through the compounded effects of symbiosis and coevolution, forests developed a kind of circulatory system. Trees and fungi were once small, unacquainted ocean expats, still slick with seawater, searching for new opportunities. Together, they became a collective life form of unprecedented might and magnanimity.

Posted in Deforestation | Comments Off on Reforestation for the return to biomass after fossil fuels

The History of Drunkenness

Preface. This is a book review of “A short history of Drunkenness” by Mark Forsyth.

I expect alcohol to be a big part of life postcarbon, not only because most cultures have embraced alcohol, but to drown the sorrows and memories of the time when we lived like Gods & Goddesses during the brief oil age. Those of you who survive The Great Simplification may find brewing a good way to make a living.

Taxation of alcohol is also how governments pay for wars and how elites grow rich, and alcohol plays a large role in many religions:

There is, in the Western world, no tradition of religious drunkenness. But it is a practice found across history and across the globe. From Mexico to the Pacific islands to Ancient China there is or has been drunken mysticism, god found at the bottom of a bottle.

“The sway of alcohol over mankind is unquestionably due to its power to stimulate the mystical faculties of human nature, usually crushed to earth by the cold facts and dry criticisms of the sober hour. Sobriety diminishes, discriminates, and says no; drunkenness expands, unites, and says yes. It is in fact the great exciter of the Yes function in man. It brings its votary from the chill periphery of things to the radiant core. It makes him for the moment one with truth. Not through mere perversity do men run after it. To the poor and the unlettered it stands in the place of symphony concerts and of literature. The drunken consciousness is one bit of the mystic consciousness.”

Alice Friedemann   www.energyskeptic.com   author of 2021 “Life After Fossil Fuels: A Reality Check on Alternative Energy”; 2015 “When Trucks Stop Running: Energy and the Future of Transportation”; “Barriers to Making Algal Biofuels”; & “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

***

Mark Forsyth. 2018. A Short History of Drunkenness: How, Why, Where, and When Humankind Has Gotten Merry from the Stone Age to the Present.

Drunkenness

Drunkenness is near universal. Almost every culture in the world has booze. The only ones that weren’t too keen—North America and Australia—have been colonized by those who were. And at every time and in every place, drunkenness is a different thing. It’s a celebration, a ritual, an excuse to hit people, a way of making decisions or ratifying contracts, and a thousand other peculiar practices. When the Ancient Persians had a big political decision to make they would debate the matter twice: once drunk, and once sober. If they came to the same conclusion both times, they acted.

History books like to tell us that so-and-so was drunk, but they don’t explain the minutiae of drinking. Where was it done? With whom? At what time of day? Drinking has always been surrounded by rules, but they rarely get written down. In present-day Britain, for example, though there is no law in place, absolutely everybody knows that you must not drink before noon, except, for some reason, in airports and at cricket matches.

All we know for sure is that if a male fruit fly has his romantic advances spurned by a cruel and disdainful female fruit fly, he ups his alcohol consumption dramatically. Unfortunately for animals, alcohol doesn’t occur naturally in large enough quantities to allow for a proper party.  Though sometimes it does. There’s an island off Panama where the mantled howler monkey can feast happily on the fallen fruit of the astrocaryum palm (4.5 percent ABV). They get boisterous and noisy, and then they get sleepy and stumbly, and then sometimes they fall out of trees and injure themselves. If you adjust their alcohol intake for bodyweight, they can get through the equivalent of two bottles of wine in thirty minutes. But they are a rarity.

What happens if you give a whole colony of rats an open bar? Actually, they’re rather civilized. Though not for the first few days, when they go a bit crazy, but then most of them settle down to two drinks a day: one just before feeding (which the scientists refer to as the cocktail hour) and one just before bedtime (the nightcap). Every three or four days there’s a spike in alcohol consumption as all the rats get together for little rat parties.  Rat colonies usually have one dominant male, the King Rat. The King Rat is a teetotaler. Alcohol consumption is highest among the males with the lowest social status. They drink to calm their nerves, they drink to forget their worries, they drink, it seems, because they’re failures.

To get elephants drunk, researchers would load a couple of barrels of beer onto the back of a pickup truck, drive to somewhere near the elephants, take the lids off and let them have a sip. There’s usually a bit of jostling and the big bull elephants take most of it. But you can then observe them stumbling around and falling asleep and it’s all rather amusing. Even this, though, can go wrong. One scientist who allowed a dominant bull to get a bit too pissed found himself having to break up a fight between a soused elephant and a rhino. Usually, elephants don’t attack rhinos, but the beer makes them quarrelsome.

Darwin reported that on the following morning monkeys who drank were very cross and dismal; they held their aching heads with both hands and wore a most pitiable expression: when beer or wine was offered them, they turned away with disgust, but relished the juice of lemons. If, Darwin thought, man and monkey both react the same way to hangovers, they must be related. This wasn’t his only proof, but it was a start in proving that bishops were primates. From the New Yorker: In “Descent of Man,” Darwin states, “Many kinds of monkeys have a strong taste for . . . spirituous liquors.” And he cites the reported effects of the monkeys’ being exposed to strong beer—“cross and dismal . . . aching heads . . . a most pitiable expression”—as suggestive evidence for the evolutionary affinity between humans and primates. “These trifling facts prove how similar the nerves of taste must be in monkeys and man, and how similarly their whole nervous system is affected”—by alcohol.

Humans are designed to drink. We’re really damned good at it. Better than any other mammal, except maybe the Malaysian tree shrew. Never get into a drinking contest with a Malaysian tree shrew; or, if you do, don’t let them insist that you adjust for bodyweight. They can take nine glasses of wine and be none the worse for it. That’s because they’ve evolved to survive on fermented palm nectar. For millions of years evolution has been naturally selecting the best shrew drinkers in Malaysia and now they’re champions. But we are the same. We evolved to drink. Ten million years ago our ancestors came down from the trees. Why they did this is not entirely clear, but it may well be that they were after the lovely overripe fruit that you find on the forest floor. That fruit has more sugar in it and more alcohol. So we developed noses that could smell the alcohol at a distance. The alcohol was a marker that could lead us to the sugar.

Alcohol has led us to our food, alcohol has made us want to eat our food, but now we need to process the alcohol; otherwise we’ll just become food for somebody else. It’s hard enough to fight off a prehistoric predator when you’re sober, but trying to punch a saber-toothed tiger when you’re five sheets to the wind is a nightmare.

So now that we’d acquired the taste, we needed—evolutionarily—to develop a coping mechanism. There is one quite precise genetic mutation that occurred ten million years ago that makes us process alcohol nearly as well as a Malaysian shrew. It’s to do with a particular enzyme that we started to produce. Humans (or the ancestors of humans) were suddenly able to drink all the other apes under the table. For a modern human, 10% of the enzyme machinery in your liver is devoted to converting alcohol into energy. From the internet: Once alcohol has entered your bloodstream it remains in your body until it is processed. About 90-98% of alcohol that you drink is broken down in your liver; the other 2-10% is removed in your urine, breathed out through your lungs or excreted in your sweat. The average person will take about an hour to process 10 grams of alcohol, which is the amount of alcohol in a standard drink. So if you drink alcohol faster than your body can process it, your blood alcohol level will continue to rise.
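The arithmetic above can be sketched as a back-of-the-envelope model. The numbers come straight from the figures just quoted (roughly 10 grams of alcohol per standard drink, cleared at roughly 10 grams per hour); the function names are mine, and real clearance rates vary a lot from person to person.

```python
# Rough model of alcohol processing, using the figures quoted above:
# a standard drink is ~10 g of alcohol and the liver clears roughly
# 10 g per hour, at a constant (zero-order) rate.
GRAMS_PER_STANDARD_DRINK = 10.0   # grams of alcohol in one standard drink
CLEARANCE_G_PER_HOUR = 10.0       # rough average processing rate

def hours_to_sober(drinks):
    """Hours needed to clear `drinks` standard drinks at a constant rate."""
    return drinks * GRAMS_PER_STANDARD_DRINK / CLEARANCE_G_PER_HOUR

def unprocessed_grams(drinks, hours_elapsed):
    """Alcohol still in the body after `hours_elapsed`, if all drinks
    were consumed at time zero (never below zero)."""
    consumed = drinks * GRAMS_PER_STANDARD_DRINK
    cleared = CLEARANCE_G_PER_HOUR * hours_elapsed
    return max(0.0, consumed - cleared)

print(hours_to_sober(4))          # 4 standard drinks -> 4.0 hours
print(unprocessed_grams(4, 1.5))  # after 90 minutes, 25.0 g remain
```

Drink faster than one standard drink per hour and `unprocessed_grams` keeps growing, which is the book's point: blood alcohol rises whenever intake outpaces clearance.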

Benjamin Franklin, Founding Father of the United States, famously observed that the existence of wine was “proof that God loves us, and loves to see us happy.” He also made a significant observation about human anatomy: “To confirm still more your piety and gratitude to Divine Providence, reflect upon the situation which it has given to the elbow. You see in animals who are intended to drink the waters that flow upon the earth, that if they have long legs, they have also a long neck, so that they can get at their drink without kneeling down. But man, who was destined to drink wine, is framed in a manner that he may raise the glass to his mouth. If the elbow had been placed nearer the hand, the part in advance would have been too short to bring the glass up to the mouth; and if it had been nearer the shoulder, that part would have been so long, that when it attempted to carry the wine to the mouth it would have overshot the mark, and gone beyond the head.”

Most of the early drinks wouldn’t so much have been invented as discovered. A pleasant theory involves bees. Imagine a bees’ nest in the hollow of a tree. Then there’s a storm, the tree falls over and the nest is flooded with rainwater. So long as you have roughly one part honey to two parts rainwater, fermentation ought to kick in pretty soon.   More prosaically you simply need to be picking and storing fruit somewhere reasonably watertight. The juice at the bottom will start to bubble and pretty soon you’ll have a very primitive wine. For that you would probably need pottery. More importantly you need to remain in the same place for a while, and all of the evidence suggests that our ancestors were mostly on the move.

It looks like there was beer, and, importantly, it looks like there was beer before there were temples and before there was farming. This leads to the great theory of human history: that we didn’t start farming because we wanted food—there was loads of that around. We started farming because we wanted booze. This makes a lot more sense than you might think, for six reasons. 1) beer is easier to make than bread as no hot oven is required, 2) beer contains vitamin B, which humans require if they’re going to be healthy and strong. Hunters get their vitamin B by eating other animals. On a diet of bread and no beer, grain farmers will all turn into anemic weaklings and be killed by the big healthy hunters. But fermentation of wheat and barley produces vitamin B. 3) beer is simply a better food than bread. It’s more nutritious because the yeast has been doing some of the digesting for you.

From NPR: Charlie Bamforth is a professor of brewing sciences at the University of California, Davis. Though it’s been blamed for many a paunch, beer is more nutritious than most other alcoholic drinks, Bamforth says. “There’s a reason people call it liquid bread,” he says. Beer, he says, has more selenium, B vitamins, phosphorus, folate and niacin than wine. Beer also has significant protein and some fiber. And it is one of a few significant dietary sources of silicon, which research has shown can help thwart the effects of osteoporosis. There are about 150 calories in your typical 12-ounce serving of 5 percent-alcohol beer. A 12-ounce bottle of 9.6 percent beer has 300 calories, 200 of them from the alcohol.
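Those calorie figures can be roughly cross-checked. The reference values below (ethanol at about 0.789 g/ml and about 7 kcal per gram, 355 ml in 12 ounces) are standard numbers rather than figures from the article, and the function name is my own:

```python
# Rough check of the beer calorie figures quoted above.
# Assumed reference values (not from the article):
ETHANOL_DENSITY_G_PER_ML = 0.789  # grams of ethanol per ml
KCAL_PER_GRAM_ETHANOL = 7.0       # food energy of ethanol
ML_PER_12_OZ = 355.0              # a 12-ounce serving in ml

def alcohol_kcal(abv):
    """Calories from the alcohol alone in a 12-ounce serving,
    where abv is alcohol by volume (0.05 for 5 percent)."""
    grams = ML_PER_12_OZ * abv * ETHANOL_DENSITY_G_PER_ML
    return grams * KCAL_PER_GRAM_ETHANOL

print(round(alcohol_kcal(0.05)))   # ~98 kcal of the quoted 150 total
print(round(alcohol_kcal(0.096)))  # ~188 kcal, close to the quoted 200
```

The remainder of the quoted 150 and 300 calorie totals comes from the carbohydrates and protein in the beer itself, which is consistent with Bamforth's "liquid bread" point.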

4) beer can be stored and consumed later, 5) the alcohol in beer purifies the water that was used to make it, killing all the nasty microbes.  6) The biggest argument is that to really change behavior you need a cultural driver. If beer was worth traveling for (which Göbekli Tepe suggests it was) and if beer was a religious drink (which Göbekli Tepe suggests it was), then even the most ardent huntsman might be persuaded to settle down and grow some good barley to brew it with.

And so in about 9000 BC, we invented farming because we wanted to get drunk on a regular basis.

Cities are the result of farmers working too hard. In fact, history is the result of farmers working too hard. If you have a job that doesn’t involve food-production (and you’re alive), that means that somewhere there’s a farmer producing more food than he needs. The second that happens you get specialized jobs, because ultimately you’ve got to be providing something to the farmer in exchange for the food, whether it’s clothes or housing or protection or accountancy services.

The sure sign of agricultural surplus is that there are populated places that produce no food at all. Such places are called cities, inhabited by citizens. The Latin for citizen was civis, and from that we get the words civil and civilization. When we give the farmers something in return, it’s called trade, and trade causes disputes, and the people who solve these disputes are called the government. The government requires money to spend on important things like thrones, armies and fact-finding trips. And because it’s terribly hard to remember who’s paid their tax and who hasn’t, tax requires writing. Writing causes Prehistory to stop, and History to begin.

Everybody drank beer. Kings drank it on their thrones. Priests drank it in temples.

There was a myth that civilization had only come about through beer. The story went that Enki, the god of wisdom, had sat down with the goddess of hanky-panky, whose name was Inana. At the time, humans had no skills or knowledge. So it came about that Enki and Inana were drinking beer together in the abzu, and enjoying the taste of sweet wine. The bronze aga vessels were filled to the brim, and the two of them started a competition, drinking from the bronze vessels of Uraš. Long story short: Inana wins. While Enki is passed out drunk, she steals all the wisdom from heaven and takes it down to earth. When Enki wakes up, he notices that all the wisdom is missing and throws a fit, but by then it’s too late.

The most famous Sumerian myth of all, The Epic of Gilgamesh, starts with a wild man called Enkidu who lives among the animals like a Mesopotamian Mowgli, until a priestess of Inana turns up and tries to make him human. She does this by having sex with him, and then giving him a drink (not the usual order).

SUMERIA: So now we sit down at a table and the beer is brought to us in an amam jar, along with two straws. Beer has to be drunk through a straw. This is because Sumerian beer is not like our lovely modern clear amber nectar. It’s a sort of fizzing barley porridge with lots of solid stuff floating on the surface. A straw lets us go below the surface and suck out the sweet liquid. There are lots of representations of Sumerians doing this, and people still do it with palm wine in parts of central Africa.

RELIGION AND ALCOHOL


The Greeks didn’t drink beer, they drank wine; but they watered it down by a ratio of about two or three parts water to one part wine, which made it almost exactly the same strength as beer. The Persians drank beer; that made them barbarians. The Thracians drank undiluted wine; that made them barbarians. The Greeks were the only people who had it just right, according to the Greeks.
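A quick sanity check on that dilution claim. The 13 percent ABV figure for undiluted wine is my assumption (natural fermentation typically lands somewhere in the 10 to 15 percent range); with it, two or three parts water brings the mix down to roughly beer strength:

```python
# Check the claim that watering Greek wine down two or three to one
# left it at roughly beer strength. The 13% ABV figure for undiluted
# wine is an assumption, not from the text.
UNDILUTED_WINE_ABV = 13.0  # percent, assumed

def diluted_abv(parts_water, parts_wine=1):
    """ABV after mixing `parts_water` of water with `parts_wine` of wine."""
    return UNDILUTED_WINE_ABV * parts_wine / (parts_water + parts_wine)

print(round(diluted_abv(2), 1))  # ~4.3% with two parts water
print(round(diluted_abv(3), 1))  # ~3.2% with three parts water
```

Either ratio lands in the 3 to 4.5 percent range, which is indeed about the strength of an ordinary beer, so the book's claim checks out arithmetically.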

It’s rather intriguing that the Greek god of wine and the Egyptian goddess of beer were both said to arrive from the exotic south with a dancing menagerie of humans, animals and spirits, but it’s probably just a coincidence.

The myths about Dionysus mostly fall into two categories. (1) There are the stories of people who don’t recognize him, and don’t even realize that he is a god. Who these people are varies from pirates to princes, but their fate is usually the same. Dionysus punishes them by turning them into animals. The moral of the stories is reasonably clear. When you’re dealing with wine you need to remember that you are dealing with something powerful, something divine. This is no ordinary drink. It is holy. Moreover, alcohol, if you’re not careful, can bring out the beast in you.

The only fully human friends Dionysus had were the maenads. Maenads were women who worshipped Dionysus. They did this by going out into the mountains wearing next to nothing and getting very, very drunk. Then they would dance and let their hair down and rip animals to pieces in a sort of terrifying Arcadian hen party. Nobody is quite sure whether maenads ever actually existed, or whether they were just a sexual fantasy of Greek men, like the Amazons.  The maenads, though, were terribly important in the second type of Dionysus myth.  Dionysus didn’t like teetotalers. This is unsurprising for a god of wine, but Dionysus being Dionysus he tends to kill them cruelly. The most famous example is a play by Euripides where the King tries to outlaw maenadism so Dionysus makes his maenads believe that the King is a lion and they rip him limb from limb (the group is led by the King’s mother). There’s another story about Orpheus wandering the countryside. His wife has died and he wants to have a good cry. Unfortunately, he comes across a group of maenads who are all getting plastered and want him to join in. Orpheus politely declines and they rip him limb from limb as well.

There are a lot of stories like this and they all end the same way. The moral is pretty clear: you should recognize that drinking is dangerous and that it might turn you into a wild beast, but you should still drink. Never turn down an invitation to a party.

CHRISTIANITY. Paul notes that people were getting drunk at communion. He has to point out that communion is for drinking, not for getting drunk, which must have come as something of a shock to the Corinthians. Once you start to look for it, you find this problem a lot in early Christianity. The poor apostles were going out preaching the good news of a new religion that required you to drink wine. And people seem to have got the wrong impression. The Acts of the Apostles opens with Pentecost and the Holy Spirit descending upon the Christians, who proceed to speak in tongues. The people in the crowd that gathered: asked one another, “What does this mean?” Some, however, made fun of them and said, “They have had too much wine.” And poor St. Peter has to jump up and explain: Fellow Jews and all of you who live in Jerusalem, let me explain this to you; listen carefully to what I say. These people are not drunk, as you suppose. It’s only nine in the morning! When you think about it, the drink would have made a perfect stick with which to beat early Christianity. It would be so easy to caricature this strange new sect as a group of drunkards, a Jewish version of the cult of Dionysus, that it would be surprising if pagans didn’t do this.

Greek drinking

Plato, quite specifically, says that getting drunk is like going to the gym: the first time you do it you’ll be really bad and end up in pain. But practice makes perfect. If you can drink a lot and still behave yourself, then you are an ideal man. If you can do this in company, then you can show the world that you are an ideal man, because you are displaying the great virtue of self-control even under the influence. Self-control, said Plato, was like bravery.

A chap who spends his days fighting battles can train himself to be brave. A man who spends his evenings getting drunk can train himself to ever higher levels of self-control.

Let us say that you were a lady in classical Athens and you wanted to get drunk. You couldn’t. Women weren’t allowed at symposiums. Or, to be more precise, women might be allowed but not ladies.

So it was the men who gathered, and they gathered at somebody’s private house. Not at a bar. For a typical symposium you might have a dozen chaps over. A really large one might be up to thirty fellows, but that was unusual. First, you had supper. This was a plain meal that was consumed pretty quickly and pretty silently. The food was not the thing—it was only really there to soak up the wine. Arranged in a circle around the room were couches with cushions on them. The men would lie down on the couches with a pillow under one arm. Young men, though, were not allowed to lie down.

It may then have been necessary to choose a symposiarch—the leader of the evening’s drinking. This would almost always be the host, whose first job was to choose the wine. Usually, this would be from his private estate, as most Athenian gentlemen owned a vineyard; indeed, the class system in Athens was built around how big your vineyard was. The lowest level was 7 acres or less; the highest had over 25. If it was summer, the wine would have been cooled by lowering it into a well, or by burying it.

At a symposium you got deliberately, methodically and publicly drunk. Everybody was given a bowl of wine. Everybody had to drink their bowl of wine before there was a refill. Just as the guests at a symposium didn’t get to choose how much they drank, so they didn’t get to choose what they talked about, or indeed if they talked at all. The symposiarch would name a subject and then each guest in turn would have to give their opinion on it. Each guest was meant to launch into a long and detailed answer.

There would be none of the free flow of conversation that we associate with a drinking session, and no opportunity simply to remain silent.

A game that Athenians played at symposiums was called kottabos. You took the last few drops of wine in your drinking bowl and tried to flick it at something. Sometimes a special bronze target would be brought in and everyone would flick their wine at it. Sometimes the target was a bowl floating in a pot of water and your aim was to sink it. Sometimes the target was a person. It all sounds rather messy, and old people used to complain about it and say that young men should be doing something constructive instead.

The comic poet Eubulus has the god Dionysus himself lay down the rules: “For sensible men I prepare only three kraters: one for health (which they drink first), the second for love and pleasure, and the third for sleep. After the third one is drained, wise men go home. The fourth krater is not mine any more—it belongs to bad behavior; the fifth is for shouting; the sixth is for rudeness and insults; the seventh is for fights; the eighth is for breaking the furniture; the ninth is for depression; the tenth is for madness and unconsciousness.”

ROMAN EMPIRE

Early Rome was a very stern and sober place. In the days of the high republic (we’re talking about 200 BC–ish), they were all clean-shaven, short-haired militaristic types. Drunkenness was frowned upon. Sternly. It was associated with the long-haired, bearded, luxurious Greeks, whom the Romans were busy defining themselves against.

The Roman Empire was, in essence, a system whereby the entire wealth of the known world was funneled back to one city. This produced possibly the wealthiest city that the earth has ever known. Money corrupts and huge amounts of money are huge amounts of fun. The result, as every schoolboy learns, was decadence. Roman men started enjoying wine more than water. Then they even let their womenfolk try some. Then they finally read some Greek books and realized they were rather good. And then they thought they’d give homosexuality a go, and that was a big hit. By the time you got to the mid-first century AD those stern senators of 186 BC would have been turning in their graves.

So how did you get in on the fun? The problem with Roman money was that, though there was an awful lot of it, it arrived at the very top of society and flowed down. If you wanted a bit of wealth and wine, you had to find yourself a patron, somebody to sponge off. This sounds horribly parasitical, and in a sense it was, but it was all out in the open. There were patrons with money, and there were dependents with flattery. Everyone knew what was going on. So long as you were prepared to sell your dignity, you got paid in good food and wine. The central component of the system was a banquet called the convivium. Not everybody liked the system. The poet Juvenal asked: “Is a dinner worth all the insults with which you have to pay for it? Is your hunger so importunate, when it might, with greater dignity, be shivering where you are, and munching dirty scraps of dog’s bread?” And most people said yes.

The Roman convivium was not about being convivial. The Roman convivium was all about showing off, and about asserting who was on the top and who was right down at the bottom. You are not here to have fun. You’re here to learn your place, to applaud those above you, and to sneer at those below you. This was accomplished through seating, slaves, quality of wine, quantity of wine, food, what the wine was served in and where that was thrown.

The dining room contained one big table. One side was left empty as that was the side where the slaves, those endless crowds of slaves, served the brimming platters, and took away the empties. The other three sides had a couch each, and each couch held three people, lying down, because the Romans liked to drink horizontally. Looked at from the slaves’ point of view, the couch on the right was for inferior guests, with the least honored guest nearest to you. That corner of the table, diagonally opposite the host and his friend, could be covered with inferior food and inferior wine for the clearly inferior guest. If you were seated there, you weren’t really welcome, and you certainly weren’t honored. The host was telling you that he didn’t give a galley-slave’s cuss about you. And you still had to say thank you. That’s the point of the convivium.

The whole house is crawling with crawling slaves. They had to crawl, or they got whipped. Hosts would whip their slaves in front of their guests as a demonstration of power.  

THE DARK AGES

The monks of the Dark Ages, indeed the people of the Dark Ages, needed booze because the alternative was water. Water requires a well-maintained well, or preferably an aqueduct, and that requires effective organization and government and all the things that the Dark Ages are not best known for. In the absence of these, your best source of water is the nearest stream, and for most of us, those who don’t live high in the mountains, that is a murky prospect.

Water drawn from the nearest stream was barely transparent. It was liable to contain creeping things, whatever they were—worms or leeches. One Anglo-Saxon book recommends a cure for swallowing creeping things: immediately drink some hot sheep’s blood. This tells us two things: (a) water was disgusting; (b) people did nonetheless drink it sometimes. Sometimes you had to, you were thirsty and you could afford nothing better. The standard Anglo-Saxon attitude to the subject is summed up in Abbot Aelfric’s dictum: “Ale if I have it, water if I have no ale.”

Wine, continued Aelfric in a wistful tone, was way too dear for the average English monk. Instead, the standard ration was a mere gallon of ale a day (and more on feast days).

THE VIKINGS

Most polytheistic religions have one chief god, and then a god of drunkenness/wine/brewing, etc., somewhere on the side. Enlil was superior to Ninkasi; Amun to Hathor; Zeus to Dionysus. The drunken god turns up, causes some fun and chaos, but is always subject to the wiser ways and greater powers of the chief god, who usually has a beard. You don’t need to be the sharpest theologian to interpret this as drunkenness having to find its niche within society, its little spot where it can be tamed and controlled. But with the Vikings the chief god is the drunk god. The chief god is actually called “the drunk one.” There is no other Viking god of alcohol. It’s Odin. That’s because alcohol and drunkenness didn’t need to find their place within Viking society, they were Viking society. Alcohol was authority, alcohol was family, alcohol was wisdom, alcohol was poetry, alcohol was military service and alcohol was fate.

There were only three kinds of Viking booze. There was wine, which was immensely expensive; almost nobody could get hold of it. The next drink down the pecking order was mead, fermented honey, sweet and reasonably expensive. Almost everybody almost all the time just drank ale, which was much less expensive. Their ale was probably slightly stronger than ours at about 8 percent ABV.

If you wanted to set yourself up as a lord, you needed to build a mead hall, even if all you ever served in it was ale. You still called it a mead hall for appearances’ sake. Your mead hall could even be quite small—some were only about 10 by 15 feet. Others were huge, a hundred yards in length. In Beowulf when Hrothgar wants to become a mighty king, he builds Heorot, the biggest mead hall that anyone has ever seen, filled with pillars and gold.

The mead hall makes you a lord because the very first duty of a lord is to provide booze to his warriors. This was the formal way in which you showed your lordship. And conversely, if you went to somebody’s mead hall and drank their mead, you were honor-bound to protect them militarily.

Alcohol was, literally, power. It was how you swore people to loyalty. A king without a mead hall would be like a banker with no money or a library with no books.

You also needed a queen, because, strange as it may seem, women were a rather important (if a trifle subjugated) part of the mead hall feast. Women—or peace-weavers as the Vikings called them—were the ones who kept the formal footing of the feast going, who lubricated the rowdy atmosphere and provided a healthy dose of womanly calm. They were in charge of the logistics of the sumbl, which was the Norse name for a drunken feast. They may even have enjoyed the beginning of the evening, the first three drinks which were to Odin (for victory), to Njord and Freya (for peace and good harvest), and then the minnis-öl, the “memory-ale” to spirits of ancestors and of dead friends.

There’s a funny kind of Viking frost-cup that archaeologists call a funnel glass. That’s because archaeologists aren’t poets. A funnel glass is about 5 inches tall and is shaped just as you might imagine it, which means that it can’t be put down on a table. It would just fall over. This is quite deliberate as the idea is to make you down your whole drink in one. This was immensely important to the Vikings as downing drinks made you a real man. This was also the purpose of the more traditional drinking horn: to test your virility by reference to your ability to swallow.

There’s a story about Thor (the god of warfare and hammers) and Loki (the god of mischief). Loki challenged Thor to drink a horn of ale. Thor, who could never resist a challenge, accepted and Loki had a horn brought to the table and told Thor that a real man could down it in one. Thor grabbed the horn, put it to his mouth, and drank, and drank, and drank, and, when he could drink no more, the horn was still almost full. Loki looked disappointed and said that a normal chap might need to do it in two. So Thor tried again, and again his godlike drinking had almost no effect. Loki murmured that a weakling could do it in three. Same thing happened. This left Thor feeling rather ashamed and effeminate, until Loki revealed that he had tricked him, and that the other end of the horn was connected to the sea. Thor had drunk so much that he had brought the whole level of the world’s oceans down, and that, according to the Vikings, was the origin of tides.

Along with the drinking competitions, Vikings did an awful lot of boasting. This was not seen as a bad thing. A Viking chap was meant to boast. He was meant to recount all of his great rapacious deeds. And then another Viking was meant to outdo him. These boasts were not quick one-liners either. They were long affairs that waxed poetic and lyrical. It was a big, formal occasion, much like a modern rap battle, or so I am informed. Moreover, your boasting was in deadly earnest. You were expected to stand by anything you said, whether it was a claim of something you had done in the past, or of something that you were merely planning on. There was no possibility of excusing yourself the next morning by saying, as we would, that that was just the drink talking.

It was a viciously violent society: a hall full of warriors being made to drink much too quickly, engaged in ceremonial bragging and insulting, all of them carrying swords. The result of all this can best be summed up in the Viking/Anglo-Saxon epic Beowulf, where the poet is trying to explain just what a wonderful man Beowulf was. He lavishes praise on him, and the highest praise of all is that Beowulf “never killed his friends when he was drunk”.

There’s a lovely mythical creature called the Heron of Oblivion (I’ve no idea why) that was said to come down and hover over the sumbl until everybody dozed off. Nobody went home. You stayed in your lord’s mead hall until you could stay awake no longer and then you lay down on a bench or a table or whatever you could find and you fell fast asleep.

SWEDEN

There was, apparently, an eighth-century Swedish king called Ingjald who invited all the neighboring kings to his coronation. When the bragarfull came round, he swore to enlarge his kingdom by half in every direction. Everyone drank. Everyone got drunk. The Heron of Oblivion did his restful work, and when everyone else was asleep, Ingjald went outside, locked the doors and burned down his own mead hall with all the other kings in it. I’d like to say that that was a one-off, but it wasn’t. There are a fair few accounts of burning down mead halls with everyone in them. There’s even one of a queen doing it to her husband, which seems fair.

ENGLAND

Taverns sold wine. Wine, because it had to be imported, was very, very expensive. Taverns were for wealthy men who wanted to splash a bit of cash, which meant that they were almost all in London. It also meant that taverns could have a rather degenerate side. This is where you’d find prostitutes and gamblers because, by definition, if you could afford wine you could afford other sinful luxuries.

Shakespeare, I’m pretty sure, was a wine-drinker. His works have over a hundred references to wine and sack, and only sixteen to ale.

In England in the year 1200 there was no such thing as a pub. Villages simply did not have drinking establishments. This may seem strange. Imagining England without a village pub is like imagining Russia with no vodka (there was, at this time, no vodka in Russia; but we’ll come to that in another chapter).

There were no pubs, because there was no need for pubs. Everybody was drinking at work. Often it was part of the pay. A carter, for example, might expect to have 3 pints and some food thrown in with his wages. When a lord employed laborers to work his land, he had to give them some booze. Medieval Englishwomen and children also drank. Water was still pretty dangerous, and only for the very poor.

Not that people got drunk. A few pints spread out over the course of a hard day’s toil in the fields won’t do that. But it will nourish you. Ale is, after all, liquid bread. People drank in church as well. The medieval village church was not so much a place of worship as a community center (with some worship thrown in on Sundays). Opportunities to cadge booze in church were neither few nor far between.

A husband would expect his wife to cook and clean and look after children, and brew, and spin. Spinning wool into cloth and brewing ale had the added advantage that they could make you extra money. A wife would weave the cloth to clothe her husband, and, if there was any left over, she could sell it. This was almost the only way that the average medieval single woman could get an income. And it was so common that an unmarried woman is, to this day, called a spinster. 

A woman who brewed would be called a brewster. A woman who brewed for profit could also be called an alewife. Medieval ale had a very short shelf life. It would go off after two or three days. So when an alewife had brewed more than her family needed, she would put up an ale stake above her front door. This was just a horizontal stick with a sprig of bush tied to the end. She would put the barrel outside her house, and sell to passersby who would turn up with a flagon and some pennies. They could then stroll off and drink it at work, at their own home or in church.

That’s how things were all the way up to the beginning of the 14th century. Then several things happened at once. First, people stopped drinking in churches. This was not because they didn’t like drinking in church, but because the church didn’t like people drinking in it.

Once upon a time, a nobleman employed people to till his fields. But in the 14th century noblemen decided that it was simpler just to rent plots of land out to the peasants and let them farm it for themselves. This meant that any peasant who didn’t have a good alewife now had to go and buy ale, which was good news for alewives. Thirsty laborers would show up after work, they wanted ale, but they also wanted somewhere to sit down and drink it. So alewives started to let people into their kitchens. Thus the pub was born.

Finally, beer was invented. Throughout this chapter I’ve been talking about ale, which was made with barley and water. It was not a very pleasant substance. Nutritious? Yes. Alcoholic? Yes. Tasty and pure and fizzy and refreshing? No. It was a sort of sludgy porridge with bits in it. The only way to make it taste nice was to flavor it with herbs and spices—horseradish was a favorite. But you were trying to disguise the taste. Trying to make something vile into something drinkable. Then hops arrived. When you add them to ale you get beer.

Most people much preferred the taste of hoppy beer. And beer had one other massive advantage over ale: it didn’t go off. You could keep beer for a year or so and, as long as the barrel was well sealed, it would still be good. Because of this, beer could be mass-produced. In every major town, breweries were set up which could produce lots of lovely beer that could then be sold to all the local alehouses (they continued to be called alehouses, long after the awful sludgy porridge had been forgotten).

The breweries could filter the beer and make a much better product.

Let us suppose that we are travelers sometime around the end of the 15th century. To find an alehouse we’d look for an ale stake. Pub signs (and by extension pub names) don’t come in until the 1590s. We might also spot an ale bench, which, as you may have guessed, was a bench just outside the door where, in fine weather, you could sit and drink in the sunshine. It’s also quite possible that we’ll spot some people playing games—bowls was a favorite—and betting on them. The door will be open. This was a legal requirement, except in the depths of winter. The idea was that any passing authority figure should be able to see inside an alehouse and thus check that nothing naughty was going on, while also not having to sully themselves by actually going in.

One of the great advantages of visiting an alehouse was that there was usually a fire blazing away. Many medieval peasants simply couldn’t afford such a luxury in their own homes. One of the first differences we’ll notice from a modern pub is that there is no bar. Countertop bars, the sort of thing we know and love, don’t actually come in until the 1820s. This place doesn’t look like a pub. It looks like somebody’s kitchen, which is basically what it is. There’s a barrel of beer somewhere in the room. And there are a few stools and benches, perhaps a trestle table or two. But the total value of the furniture isn’t more than a few shillings. We are in somebody’s house, but it’s public.

The person whose house we’re in is almost certainly a woman.  There’s also a good chance that she’s a widow. Running an alehouse was still one of the only ways that a woman could make money, and, in the days before pensions, alehouse licenses would be granted to widows out of pity. It was that or she would have to throw herself upon the parish, which the parish found inconvenient.

Women usually went to alehouses in groups. A woman on her own might be talked about. A group of respectable matrons, though, was in the clear. People also went on dates to alehouses. If a couple were known to be courting, then going out for a drink was considered perfectly normal and respectable.

Alehouses were only for the poorest in society. Even moderately well-off people like yeoman farmers were still drinking at home. The alehouse was a place of escape. Servants came here for the same reason as lovers; it was what anthropologists call the Third Place. It wasn’t work, where you have to obey your boss, and it wasn’t home, where you have to obey your parents or your spouse. That’s also why the place is full of teenagers. Medieval England was an edenic place where there were absolutely no laws about underage drinking.

Not that people will actually get that drunk, unless it’s a Sunday. Just as we think of Friday night as the standard time for drinking, the medievals liked to get sloshed on a Sunday morning. This makes a lot of sense, if you think about it, as you get to be buzzed all day. But it does mean that there is a permanent war between the alehouse and the church for attendance on a Sunday morning. A war that the alehouse tended to win.

The standard greeting for a stranger arriving in an alehouse was “What news?” In the days before newspapers and even television, travelers were the main way to find out what was going on in the world. Who was king? Were we at war? Had we been invaded? Alehouses actually developed a rather bad reputation for spreading absolute lies. In 1619 the whole of Kent was sent into a panic by the news that the Spanish had taken Dover Castle; and, very curiously, the alehouse drinkers of Leicester heard the news of Elizabeth I’s death forty-eight hours before it happened.

AZTECS

But if drinking was so very, very illegal, how did it have such a central place in Aztec culture? And it did. They had gods of drinking. Several of them. Mayahuel, who was the goddess of the agave plant, was said to have married Patecatl, who was the god of fermentation. Mayahuel had 400 breasts, which was probably fun for Patecatl, but was also useful because she gave birth to 400 divine little rabbits, the Centzon Totochtin. The reason that there were 400 of them is that the Aztecs counted in base twenty. Four hundred is twenty squared and so the number had much the same place in their culture that 100 (ten squared) does in ours.

So, to recap, booze is ferociously forbidden and punishable by death. Booze is ubiquitous. Booze is revered and central to the culture and religion. Booze is legal for the elderly. This combination has left historians somewhat confused, and indeed inclined toward a quick dose of teonanacatl, the Aztec hallucinogen of choice that was entirely legal. There is, though, a theory that makes sense of all this. Anthropologists who study drunkenness draw a distinction between what they call “wet cultures” and “dry cultures.” In wet cultures people are terribly relaxed about alcohol. They sip it all day and have a terribly pleasant time, and very rarely get properly, falling-over drunk. Dry cultures are the opposite. They aren’t dry in the sense of being alcohol-free; they’re called dry because people are very wary of alcohol and have strict rules about when you can’t drink it. Then, when it is permitted, they get trollied.

But on the day of a religious festival—for example, one devoted to the 400 drunken rabbits—they got absolutely hammered. They got apocalyptically and religiously drunk, and, like the Ancient Egyptians and the Ancient Chinese before them, they used alcohol to give them an experience of the divine. And then for the rest of the month they didn’t drink at all.

It was the relaxation of the rules and the disorientation of society produced by Christianity which pushed the conquered to perpetual pulque.

The people of Zumbagua in Ecuador drink in order to communicate with ancestral spirits, and, indeed, believe that when you drink so much that you throw up, the vomit becomes food for the ghosts of the dead. To this day there is a phrase in Mexico: “As drunk as 400 rabbits.”

DISTILLING

Ancient Greeks definitely knew about distilling over 2,000 years ago, but there’s no evidence that they distilled alcohol. Instead, they wasted their invention on producing drinkable water.

You start to get, in the 15th century, mentions of distilled alcohol being used as a medicine in very small doses.

James IV of Scotland bought several barrels of whisky, or aqua vitae as it was called, from a monastery in 1495.  A hundred years later, there was one bar in England—just outside London—that served aqua vitae. It was still a novelty drink that most people would never even have heard of. And then, in the second half of the 17th century, western Europe went crazy for spirits. The French suddenly got into brandy.

Come the Restoration, the English aristocracy stampeded back from France with a newfound taste for all sorts of funny foreign drinks: champagne, vermouth, and brandy. These became the drinks of the nobility.

Gin became popular in England for four reasons: monarchy, soldiers, religion and an end to world hunger. Some historians would add “hatred of the French,” which makes five. First, monarchy. King William III liked gin because he was Dutch and all Dutch people liked gin. Second, soldiers. Dutch soldiers liked gin for two reasons. Because they were Dutch and because gin infused Dutch soldiers with a peculiar form of bravery, which to this day we refer to as Dutch courage. Third, during this period European countries were constantly going to war with each other, usually on a Protestant vs. Catholic basis. England and Holland were both Protestant, so English soldiers fought alongside the Dutch, and drank alongside the Dutch, and came home with a hangover and a taste for gin. Gin was thus soldierly and Protestant. Fourth, an end to world hunger. From time immemorial, and probably before, every country in the world had had a problem with Bad Harvests. In a normal year farmers produced just enough grain to feed everybody. They didn’t produce any more than that, because they wouldn’t be able to sell it. Every so often, though, you got a year with a Bad Harvest. When this happened there wasn’t enough grain to go around, and farmers were not in the slightest bit upset. A funny aspect of the economics of farming is that a Bad Harvest means less grain; less grain means higher grain prices; and these higher prices meant that farmers made just as much money from a Bad Harvest as they did from a good one, and it was less work.
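The Bad Harvest arithmetic can be sketched numerically. This is a toy model with made-up numbers (the demand curve, the `elasticity` parameter, and the quantities are illustrative assumptions, not figures from the text): with unit-elastic demand, a 20% smaller harvest fetches a price exactly high enough that total farm revenue is unchanged.

```python
# Toy sketch of the "Bad Harvest" economics described above.
# Hypothetical constant-elasticity demand curve (all numbers illustrative):
#   price = base_price * (quantity / base_quantity) ** (-1 / elasticity)

def revenue(quantity, base_quantity=100.0, base_price=1.0, elasticity=1.0):
    """Farm revenue (price * quantity) for a harvest of `quantity` units."""
    price = base_price * (quantity / base_quantity) ** (-1.0 / elasticity)
    return price * quantity

good_year = revenue(100)  # normal harvest, sold at the base price
bad_year = revenue(80)    # Bad Harvest: 20% less grain, higher price

print(f"good year: {good_year:.1f}")  # 100.0
print(f"bad year:  {bad_year:.1f}")   # 100.0 -- same money, less work
```

With demand more inelastic than this (elasticity below 1, which is closer to how staple grain markets behave), `revenue(80, elasticity=0.5)` actually comes out *higher* than the good year, which is why the farmers "were not in the slightest bit upset."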

William III thought he had this problem solved. Gin is made out of grain, and the quality of the grain doesn’t particularly matter. Once the stuff has been fermented and distilled, you can’t taste the difference. Therefore, if he could make gin popular in England he would produce a great big market for excess grain during normal years; and that meant that when a Bad Harvest came round there would be an excess to cover it. It might not be the highest-quality excess, but it would be edible. Thus he could end starvation forever.

But to do so he’d have to make gin really, really popular. To do that, you’d have to make gin more readily available than beer. You’d have to make it completely tax-free and unregulated and let anybody who wants to start distilling distill. Also, you’d have to ban the import of French brandy.

Where did a poor Londoner actually go to get gin? And when? And from whom? The answer is absolutely everywhere. To set up shop you went to a distiller and got a gallon or so, distilled it a second time to make it even stronger, and added flavorings—juniper, turpentine, sulfuric acid, whatever you liked. Many drank far too much and died.

Gin arrived in England in the 1690s and by the 1720s the streets of London were full of unconscious drunks who had sold their clothes for gin, so authorities tried to cut consumption by taxing it and requiring a license, which people ignored.

AUSTRALIA

Lord Sydney had a utopian idea of what Australia would be – hard work, fresh air, nature, and no alcohol or money. But the sailors refused to sail without booze. And home-brewing began on day one of the convict ships’ arrival, mainly rum. The sailors sold rum to the convicts at a markup of 1,200 percent.

The economy was a barter economy, with work exchanged for food or other goods. Most of the population were convicts doing forced labor; to get them to do a speck more than they were required to, you had to offer them something. That something was rum, which became the Governor’s chief instrument of social control. Rum was the one and only lever of power.

The British government was not at all OK with this, and sent the famous Captain Bligh, of Mutiny on the Bounty fame, to dry out Australia as the next Governor and get rid of the militia who controlled the rum trade. He began by confiscating the stills of Captain John Macarthur, the richest man in the colony, and took him to court as well. When Macarthur showed up, the jury cheered him, as did the hundreds of soldiers gathered outside the courthouse. Bligh was absolutely furious, and ordered Major Johnston to get his men under control, but Johnston replied that he was sorry, he’d been so drunk the night before that he’d crashed his carriage, so he couldn’t intervene. Later that day, Johnston arrested Bligh and took control of the colony. Effigies of Bligh were burned in the street, and the crowd celebrated with a roast sheep and rum barbecue.

So the government sent a new Governor called Macquarie, who took control by realizing that everyone was a crook and out-crooking them all. He began by offering exclusive rights to import rum for three years in exchange for a new hospital, and so began Australia’s health care system.

AMERICA

In 1799 George Washington owned the largest distillery in the country, producing 11,000 gallons of whiskey a year, and he had won his first election after handing out free booze to voters. His military success came from doubling his men’s rum rations.

Although Hollywood usually has just one giant saloon in the center of town, which forces the hero and villain to confront each other, in real life there were many saloons in a town—so many that the hero and villain might never bump into each other. The doors were solid, not swinging, and instead of a large room, bars were narrow, with the bar usually on the left, usually with a large mirror that let those at the bar see anyone approaching them from behind. Although there are bottles of wine and crème de menthe, no one orders them. Everyone’s drinking whiskey and beer, though mainly whiskey. Another odd thing is that no one ever asks how much drinks cost or gets change, because everyone knows the charge. It’s one bit (about 12 cents) at the poor saloons, and two bits at a fancier one with floor shows and a chandelier.

It’s mainly white men. A black man might be tolerated, Native Americans were banned by law, and most unwelcome of all were the Chinese. Respectable women never went into a saloon. The women who were there mostly weren’t for rent; why do that when you could earn $10 a week chatting with lonely men? At the back, the card game would be faro, not poker—a very simple game of pure chance, and easy to cheat at.

Prohibition was meant to get rid of saloons, which were perceived, especially in the Midwest, as the root of many evils. Husbands drank their salaries, beat their wives, and died young. Saloons were places decent women didn’t go, though the gals who were there often weren’t prostitutes but were paid in whiskey (actually cold tea) to talk to men. Along the bottom of the bar ran a brass rail, with a spittoon for every four people. Horses were parked outside in huge piles of manure, since naturally, while their owners drank, they pooped. In a one-bit saloon, you plopped down a bit (12.5 cents—since no such coin existed, you really put down a quarter and had two drinks). Or, most often, you bought someone else a drink, and the favor would be returned later by a newcomer.

Prohibition succeeded in getting rid of saloons. That was its purpose, not stopping all alcohol, and Germans and other ethnic groups that made beer and wine weren’t worried about it. But then the Volstead Act defined alcoholic beverages as anything over half a percent. So for 13 years the U.S. lost the skills to make wine and beer, or even whiskey, well, and it took 50 years to recover. Speakeasies were quite unlike saloons: pretty much anything from someone’s living room where pasta might also be served, to the glamorous movie versions of New York City. And women went too, unlike saloons.

RUSSIA

Traditions there were good at getting everyone to drink: a toast was made, and all were expected to participate. Ivan the Terrible began this in the 1500s, using drunkenness as a form of political control. Scribes attended who wrote down what everyone said while drunk and read it back to him in the morning, with punishments handed out. He started state-run drinking houses to collect as much tax money as possible. While most countries try to limit the crimes, riots, broken homes, and ruined health caused by drunkards, Russia was too keen on the revenue to discourage drinking in any way.

In 1914, Tsar Nicholas II outlawed vodka. In 1918 he and his family were executed. These two facts are not unrelated. The ban was poorly timed, too: WWI was beginning, and a quarter of all state revenue came from taxes on alcohol. And being sober, the population could see what their government was doing to them. Today in Russia nearly a quarter of all deaths are related to alcohol.

Stalin ruled with terror and drunkenness. He’d invite his politburo to dinner and make them drink and drink and drink, which they couldn’t refuse to do. At one dinner there were 22 toasts before any food arrived. He would tap out his pipe on Khrushchev’s bald head and order him to do a Cossack dance. He loved to push one of the commissars into a pond. But Stalin was mainly drinking water himself. He did this to humiliate them, to set their tongues against each other, and to make it hard for them to plot against him. Even Peter the Great was known for forcing drinks on others: if he caught someone not drinking, they were forced to drink 1.5 liters of wine in one go. The head of Peter’s secret police had a tame bear that would offer guests a glass of vodka and attack if they refused.

Posted in Advice, Agriculture, Human Nature | Comments Off on The History of Drunkenness

Pentagon report: collapse within 20 years from climate change

Preface. The report that the article by Ahmed below is based on is: Brosig, M., et al. 2019. Implications of climate change for the U.S. Army. United States Army War College.

It was written in 2019, before covid-19 and so quite prescient: The two most prominent risks are a collapse of the power grid and the danger of disease epidemics.

It is basically a long argument to increase the military budget so it can help cope with epidemics, water and food shortages, electric grid outages, flooding, and protect the (oil and gas) resources in the arctic.

Since I see energy decline as a far more immediate threat than climate change, and the military knows this, it is odd so little is written about energy in this report. But then I looked at the pages about the arctic, and though the word oil doesn’t appear, you can see that the military is very aware of the resources (oil) there and the chance of war with Russia. Therefore they propose that the military patrol this vast area with ships, aircraft, and new vehicles that can traverse the bogs and marshes of melted permafrost. They propose sending more soldiers to the arctic for training, satellites for navigation, developing new ways of fighting, enhancing batteries and other equipment to be able to function in the cold arctic environment, and more.

Alice Friedemann  www.energyskeptic.com  Author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”; “When Trucks Stop Running: Energy and the Future of Transportation”; “Barriers to Making Algal Biofuels”; & “Crunch! Whole Grain Artisan Chips and Crackers”.  Women in ecology.  Podcasts: WGBH, Financial Sense, Jore, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 & 278, Peak Prosperity.  Index of best energyskeptic posts

***

Ahmed, N. 2019. U.S. Military Could Collapse Within 20 Years Due to Climate Change, Report Commissioned By Pentagon Says. vice.com

According to a new U.S. Army report, Americans could face a horrifically grim future from climate change involving blackouts, disease, thirst, starvation and war. The study found that the US military itself might also collapse. This could all happen over the next two decades.

The senior US government officials who wrote the report are from several key agencies including the Army, Defense Intelligence Agency, and NASA. The study called on the Pentagon to urgently prepare for the possibility that domestic power, water, and food systems might collapse due to the impacts of climate change as we near mid-century.

The report was commissioned by General Mark Milley, Trump’s new chairman of the Joint Chiefs of Staff, making him the highest-ranking military officer in the country (the report also puts him at odds with Trump, who does not take climate change seriously).

The report, titled Implications of Climate Change for the U.S. Army, was launched by the U.S. Army War College in partnership with NASA in May at the Wilson Center in Washington DC. The report was commissioned by Gen. Milley during his previous role as the Army’s Chief of Staff. It was made publicly available in August via the Center for Climate and Security, but didn’t get a lot of attention at the time.

The two most prominent scenarios in the report focus on the risk of a collapse of the power grid within “the next 20 years,” and the danger of disease epidemics. Both could be triggered by climate change in the near-term, it notes.

The report also warns that the US military should prepare for new foreign interventions in Syria-style conflicts, triggered due to climate-related impacts. Bangladesh in particular is highlighted as the most vulnerable country to climate collapse in the world. “The permanent displacement of a large portion of the population of Bangladesh would be a regional catastrophe with the potential to increase global instability. This is a potential result of climate change complications in just one country. Globally, over 600 million people live at sea level.”

Without urgent reforms, the report warns that the US military itself could end up effectively collapsing as it tries to respond to climate collapse. It could lose capacity to contain threats in the US and could wilt into “mission failure” abroad due to inadequate water supplies.

The report paints a frightening portrait of a country falling apart over the next 20 years due to the impacts of climate change on “natural systems such as oceans, lakes, rivers, ground water, reefs, and forests.”

Current infrastructure in the US, the report says, is woefully under prepared: “Most of the critical infrastructures identified by the Department of Homeland Security are not built to withstand these altered conditions.”

Some 80 percent of US agricultural exports and 78 percent of imports are water-borne. This means that episodes of flooding due to climate change could leave lasting damage to shipping infrastructure, posing “a major threat to US lives and communities, the US economy and global food security,” the report notes.

At particular risk is the US national power grid, which could shut down due to “the stressors of a changing climate,” especially changing rainfall levels:

“The power grid that serves the United States is aging and continues to operate without a coordinated and significant infrastructure investment. Vulnerabilities exist to electricity-generating power plants, electric transmission infrastructure and distribution system components,” it states.

As a result, the “increased energy requirements” triggered by new weather patterns like extended periods of heat, drought, and cold could eventually overwhelm “an already fragile system.”

The report’s grim prediction has already started playing out, with utility PG&E cutting power to more than a million people across California to avoid power lines sparking another catastrophic wildfire. While climate change is intensifying the dry season and increasing fire risks, PG&E has come under fire for failing to fix the state’s ailing power grid.

The US Army report shows that California’s power outage could be a taste of things to come, laying out a truly dystopian scenario of what would happen if the national power grid was brought down by climate change. One particularly harrowing paragraph lists off the consequences bluntly:

“If the power grid infrastructure were to collapse, the United States would experience significant:

  • Loss of perishable foods and medications
  • Loss of water and wastewater distribution systems
  • Loss of heating/air conditioning and electrical lighting systems
  • Loss of computer, telephone, and communications systems (including airline flights, satellite networks and GPS services)
  • Loss of public transportation systems
  • Loss of fuel distribution systems and fuel pipelines
  • Loss of all electrical systems that do not have back-up power”

Also at “high risk of temporary or permanent closure due to climate threats” are US nuclear power facilities.

There are currently 99 nuclear reactors operating in the US, supplying nearly 20% of the country’s utility-scale energy. But the majority of these, some 60%, are located in vulnerable regions which face “major risks” including sea level rise, severe storms, and water shortages.

“Climate change is introducing an increased risk of infectious disease to the US population. It is increasingly not a matter of ‘if’ but of when there will be a large outbreak.”

Water is currently 30-40% of the costs required to sustain a US military force operating abroad, according to the new Army report. A huge infrastructure is needed to transport bottled water for Army units. So the report recommends major new investments in technology to collect water from the atmosphere locally, without which US military operations abroad could become impossible. The biggest obstacle is that this is way outside the Pentagon’s current funding priorities.

Bizarrely for a report styling itself around the promotion of environmental stewardship in the Army, the report identifies the Arctic as a critical strategic location for future US military involvement: to maximize fossil fuel consumption.

Noting that the Arctic is believed to hold about a quarter of the world’s undiscovered hydrocarbon reserves, the authors estimate that some 20% of these reserves could be within US territory, noting a “greater potential for conflict” over these resources, particularly with Russia.

The melting of Arctic sea ice is depicted as a foregone conclusion over the next few decades, implying that major new economic opportunities will open up to exploit the region’s oil and gas resources as well as to establish new shipping routes: “The US military must immediately begin expanding its capability to operate in the Arctic to defend economic interests and to partner with allies across the region.”

Senior US defense officials in Washington clearly anticipate a prolonged role for the US military, both abroad and in the homeland, as climate change wreaks havoc on critical food, water and power systems. Apart from causing fundamental damage to our already strained democratic systems, the bigger problem is that the US military is itself a foremost driver of climate change, being the world’s single biggest institutional consumer of fossil fuels.

The prospect of an ever expanding permanent role for the Army on US soil to address growing climate change impacts is a surprisingly extreme scenario which goes against the grain of the traditional separation of the US military from domestic affairs.

In putting this forward, the report inadvertently illustrates what happens when climate is seen through a narrow ‘national security’ lens. Instead of encouraging governments to address root causes through “unprecedented changes in all aspects of society” (in the words of the UN’s IPCC report this time last year), the Army report demands more money and power for military agencies while allowing the causes of climate crisis to accelerate. It’s perhaps no surprise that such dire scenarios are predicted, when the solutions that might avert those scenarios aren’t seriously explored.

Rather than waiting for the US military to step in after climate collapse—at which point the military itself could be at risk of collapsing—we would be better off dealing with the root cause of the issue skirted over by this report: America’s chronic dependence on the oil and gas driving the destabilization of the planet’s ecosystems.

Posted in Arctic, Blackouts, Climate Change, Infrastructure & Fast Crash, Military, Over Oil | 2 Comments