HSBC bank report predicts another financial crisis in 2018

[ Bill Hill of the Hill's Group predicted in June 2016 (at a peakoil.com forum): “We expect to have reached permanent depression by the end of 2017. The reduction will not hit all nations the same way. The richer Western countries will be able to afford fuels for longer than smaller, poorer countries. But how that will feed back into their general economies is yet an unknown. It will definitely have a negative impact, and perhaps a gigantic one. Like the S&P collapsing, an explosion of corporate bankruptcies, and supply chains breaking. But all in all we will just have to wait and see. It has been four years since petroleum hit its energy halfway point. We should not have to wait much longer. ]

Is an Economic Oil Crash Around the Corner? By Nafeez Ahmed, January 2017, Alternet.

A report by HSBC shows that contrary to industry mythology, even amidst the glut of unconventional oil and gas, the vast bulk of the world’s oil production has already peaked and is now in decline, while European government scientists show that the value of energy produced by oil has declined by half within the first 15 years of the 21st century.

The upshot? Welcome to a new age of permanent economic recession driven by ongoing dependence on dirty, expensive, difficult oil—unless we choose a fundamentally different path.

Last September, a few outlets were reporting the counterintuitive findings of a new HSBC research report on global oil supply. Unfortunately, the true implications of the HSBC report were largely misunderstood.

New scientific research suggests that the world faces an imminent oil crunch, which will trigger another financial crisis.

The HSBC research note — prepared for clients of the global bank — found that contrary to concerns about too much oil supply and insufficient demand, the situation was opposite: global oil supply in coming years will be insufficient to sustain rising demand.

Yet the full, striking import of the report, concerning the world’s permanent entry into a new age of global oil decline, was never really explained. The report didn’t just go against the grain of the industry’s hype about “peak demand”: it vindicated what is routinely lambasted by the industry as a myth: peak oil, the concurrent peak and decline of global oil production.

The HSBC report you need to read

Insurge Intelligence obtained a copy of the report in December 2016, and for the first time we are exclusively publishing the entire report in the public interest. Read and/or download the full HSBC report.

Headquartered in London, HSBC is the world’s sixth largest bank, holding assets of $2.67 trillion. So when it produces a research report for its clients, we should listen. Among the report’s most shocking findings is that, “81% of the world’s total liquids production is already in decline.”

Between 2016 and 2020, non-OPEC production will be flat due to declines in conventional oil production, even though OPEC will continue to increase production modestly. This means that by 2017, deliverable spare capacity could be as little as 1% of global oil demand.

This heightens the risk of a major global oil supply shock around 2018 which could “significantly affect oil prices.”

The report asserts that peak demand (the idea that demand will stop growing leaving the world awash in too much supply), while certainly a relevant issue due to climate change agreements and disruptive trends in alternative technologies, is not the most imminent challenge:

“Even in a world of slower oil demand growth, we think the biggest long-term challenge is to offset declines in production from mature fields. The scale of this issue is such that in our view rather there could well be a global supply squeeze some time before we are realistically looking at global demand peaking.”

Under the current supply glut driven by rising unconventional production, falling oil prices have damaged industry profitability and led to dramatic cut backs in new investments in production. This, HSBC says, will exacerbate the likelihood of a global oil supply crunch from 2018 onwards.

Four Saudi Arabias, anyone?

The HSBC report examines two main datasets from the International Energy Agency and the University of Uppsala’s Global Energy Systems Program in Sweden.

The latter has consistently advocated a global peak oil scenario for many years — the HSBC report confirms the accuracy of this scenario, and shows that the IEA’s data supports it.

The rate and nature of new oil discoveries has declined dramatically over the last few decades, reaching almost negligible levels on a global scale, the report finds. Compare this to the report’s warning that just to keep production flat against increasing decline rates, the world will need to add four Saudi Arabias’ worth of production by 2040. North American production, despite remaining the most promising in terms of potential, will simply not be able to fill this gap.

Business Insider, the Telegraph and other outlets that covered the report last year acknowledged the supply gap, but failed to properly clarify that HSBC’s devastating findings basically forecast the long-term scarcity of cheap oil due to global peak oil, from 2018 to 2040.

The report revises the way it approaches the concept of peak oil — rather than forecasting it as a single global event, the report uses a disaggregated approach focusing on specific regions and producers. Under this analysis, 81% of the world’s oil supply has peaked in production and so now “is post-peak.”

Using a more restrictive definition puts the quantity of global oil that has peaked at 64%. But either way, well over half the world’s global oil supply consists of mature and declining fields whose production is inexorably and irreversibly decreasing:

“If we assumed a decline rate of 5%pa [per year] on global post-peak supply of 74 mbd — which is by no means aggressive in our view — it would imply a fall in post-peak supply of c.38mbd by 2030 and c.52mbd out to 2040. In other words, the world would need to find over four times the size of Saudi Arabia just to keep supply flat, before demand growth is taken into account.”

What’s worse is that when demand growth is taken into account — and the report notes that even the most conservative projections forecast a rise in global oil demand by 2040 of more than 8 mbd above that of 2015 — then even more oil would be needed to fill the coming supply gap.

But with new discoveries at an all-time low and continuing to diminish, the implication is that oil can simply never fill this gap.
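
The “four Saudi Arabias” figure is straightforward compound-decline arithmetic. The sketch below reproduces the report’s numbers from the 74 mbd post-peak base and 5% annual decline quoted above; the roughly 12 mbd figure used for Saudi output in the final comment is my assumption, not HSBC’s.

```python
# Compound decline on the post-peak supply base quoted in the HSBC report.
post_peak_supply = 74.0   # mbd of post-peak production (report figure, 2016)
decline_rate = 0.05       # 5% per year, the decline rate assumed in the quote

def remaining(years: int) -> float:
    """Post-peak supply left after `years` of compounding decline."""
    return post_peak_supply * (1 - decline_rate) ** years

for target_year in (2030, 2040):
    years = target_year - 2016
    lost = post_peak_supply - remaining(years)
    print(f"{target_year}: ~{lost:.0f} mbd of post-peak supply lost")
# 2030: ~38 mbd lost; 2040: ~52 mbd lost -- matching the report's c.38 and c.52 mbd.
# At an assumed Saudi output of roughly 12 mbd, 52 mbd is a bit over four Saudi
# Arabias, before the >8 mbd of demand growth the report cites is even counted.
```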

Technological innovation exacerbates the problem

Much-trumpeted improvements in drilling rates and efficiency will not make things better, because they only accelerate production in the short term while more rapidly depleting existing reserves. In this case, the report concludes: “the decline-delaying techniques are only masking what could be significantly higher decline rates in the future.”

This does not mean that peak demand should be dismissed as a serious concern. As Michael Bradshaw, professor of global energy at Warwick University’s Sloan Business School, told me for my previous Vice article, any return to higher oil prices will have major economic consequences.

Price spikes, economic recession

Firstly, oil price spikes would have an immediate recessionary effect on the global economy, by amplifying inflation and leading to higher costs for social activity at all levels, driven by the higher underlying energy costs.

Secondly, even as spikes may temporarily return some oil companies to potential profitability, such higher oil prices will drive consumer incentives to transition to cheaper renewable energy technologies like solar and wind, which are already becoming cost-competitive with fossil fuels.

That means a global oil squeeze could end up having a dramatic impact on continued demand for oil, as twin crises of peak oil and peak demand end up intensifying and interacting in unfamiliar ways.

The demise of fossil fuels

But the HSBC report’s specific forecasts of global oil supply and demand are part of a wider story of global net energy decline.

A new scientific research paper authored by a team of European government scientists, published on Cornell University’s Arxiv website in October 2016, warns that the global economy has entered a new era of slow and declining growth. This is because the value of energy that can be produced from the world’s fossil fuel resource base is declining inexorably.

The paper—currently under review with an academic journal—was authored by Francesco Meneguzzo, Rosaria Ciriminna, Lorenzo Albanese, and Mario Pagliaro, who collectively conduct research on climate change, energy, physics and materials science at the Italian National Research Council, Italy’s premier government agency for scientific research.

According to HSBC, oil prices are likely to rise and stabilize for some time around the $75 per barrel mark. But the Italian scientists find that this is still too high to avoid destabilizing recessionary effects on the economy.

The Italian study offers a new model combining “the competing dynamics of population and economic growth with oil supply and price,” with a view to evaluate the near-term consequences for global economic growth.

Data from the past 40 years shows that economic recessions coincide with oil prices above $60 per barrel, while periods of economic growth correspond to prices below $40 a barrel. This means that prices above $60 will inevitably induce recession. Therefore, the scientists conclude that to avoid recession, “the oil price should not exceed a threshold located somewhat between $40/b [per barrel] and $50/b, or possibly even lower.”

More broadly, the scientists show that there is a direct correlation between global population growth, economic growth and total energy consumption. As the latter has steadily increased, it has literally fueled the growth of global wealth.

But even so, the paper finds that the world is experiencing: “declining average EROIs [Energy Return on Investment] for all fossil fuels; with the EROI of oil having likely halved in the short course of the first 15 years of the 21st century.”

EROI measures how much energy a resource delivers relative to the energy that must be invested to obtain it: the quantity of energy extracted divided by the quantity of energy put in to enable the extraction.

This means that even though total liquids production is increasing, the energy value it delivers is declining while the energy cost of extraction is simultaneously rising. This is acting as an increasing geophysical brake on global economic growth. And it means the more the economy remains dependent on fossil fuels, the more the economy is tied to the recessionary impact of global net energy decline: “The chance of future economic growth matching the current trajectory of the human population is inextricably bound to the wide and growing availability of highly concentrated energy sources enjoying broad applicability to energy end uses.”
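
A declining EROI squeezes the net energy left over for the rest of the economy, and the squeeze is nonlinear. The sketch below is purely illustrative: the paper says only that oil’s EROI roughly halved over the first 15 years of the century, so the absolute EROI values used here (20 falling to 10, then lower) are assumed for illustration.

```python
# Illustrative only: the paper states oil's EROI roughly halved over 2000-2015
# but gives no absolute values, so 20 -> 10 (and lower) are assumed examples.
def net_energy_share(eroi: float) -> float:
    """Fraction of gross energy left over after paying the energy cost of extraction."""
    return 1 - 1 / eroi

def reinvestment_per_net_unit(eroi: float) -> float:
    """Energy that must be reinvested in extraction per unit of net energy delivered."""
    return 1 / (eroi - 1)

for eroi in (20, 10, 5, 2):
    print(f"EROI {eroi:>2}: net share {net_energy_share(eroi):.0%}, "
          f"reinvestment per net unit {reinvestment_per_net_unit(eroi):.2f}")
# Halving EROI from 20 to 10 roughly doubles the energy reinvested per delivered
# unit, and the penalty accelerates sharply as EROI keeps falling, which is why
# a halved EROI acts as a growing brake on growth.
```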

The problem is that since the 1980s, the share of oil in the global energy mix has declined. To make up for this, economic growth has increasingly had to rely on clever financial instruments based on debt: in effect, the world is borrowing from the future to sustain our present consumption levels.

In an interview, lead author Francesco Meneguzzo explained:  “Global conventional oil peaked around the year 2005. All the following supply increase was due to unconventional oil exploitation and, since 2009, basically to U.S. shale (tight) oil, which in turn peaked around March, 2015.

“What looks like to be even more important is the fact that global oil supply has failed to keep the pace with the increase in total energy consumption, which ‘natural’ growth requires to be approximately proportional to population increase, leading to the decline of the oil share in the energy mix. While governments have struggled to fuel their economies with ever increasing energy supply, other sources have steadily replaced oil in the energy mix, such as coal in China. Yet, no other conventional source has proved to be a valuable substitute for oil, hence the need for debt in order to replace the vanishing oil share.”

On a business-as-usual trajectory then, the economy can quite literally never recover — unless it transitions to a truly viable new energy source which can substitute for oil.

“In order to avoid the [oil] price affordable by the global economy falling below the extraction cost, debt piling (borrowing from the future) becomes a necessity, yet it is a mere trick to gain some time while hoping for something positive to happen,” said Meneguzzo. “The reality is that debt, basically as a substitute for oil, does not work to produce real wealth, as apparent for example from the decline of the industry value added as a percentage of GDP.”

Where will this end up?

“Recently, debt has started shrinking, basically because it has failed to generate real wealth. Assuming no meaningful (and fast) transition to renewable energy, the economic growth can only deteriorate further and further.”

Basically, this means, Meneguzzo adds, “delocalizing manufacturing to economies using local, cheaper and dirtier energy sources (such as coal in China) as well as lower wages, further shrinking domestic aggregate demand and fueling a downward spiral of deflation and/or debt.”

Is there a way out? Not within the current trajectory: “Unless that debt is immediately used to exploit renewable sources on a massive scale, along with ‘accessories’ such as storage making them as qualified as oil, social and political derangements, even before an economic crash, look to be unavoidable.”

Crisis convergence

Seen in this broader scientific context, the HSBC global oil supply report provides stunning confirmation that for the most part, global oil production is already post-peak, and that after 2018 this is going to manifest not simply as a global supply shock, but as a world in which cheap, high-quality fossil fuels are increasingly hard to find.

What will this mean? One possible scenario is that by 2018 or shortly thereafter, the world will face a similar convergence of global crises that occurred a decade earlier.

In this scenario, oil price hikes would have a recessionary effect that destabilizes the global debt bubble, which for some years has been higher than pre-2008 crash levels, now at a record $152 trillion.

In 2008, oil price shocks played a key role in creating pre-crisis economic conditions for consumers in which rising living costs helped trigger debt-defaults in housing markets, which rapidly spiraled out of control.

In or shortly after 2018, economic and energy crisis convergence would drive global food prices up, regenerating the contours of the triple crunch we saw ravage the world from 2008 to 2011, the debilitating impacts of which we have yet to recover from.

2018 is likely to be crunch year for another reason. Jan. 1, 2018 is the date when a host of new regulations are set to come into force, which will “constrain lending ability and prompt banks to only advance money to the best borrowers, which could accelerate bankruptcies worldwide,” according to Bloomberg. Other rules coming into play will require banks to stop using their own international risk assessment measures for derivatives trading.

Ironically, the introduction of similar well-intentioned regulation in January 2008 (through Basel II) laid the groundwork to rupture the global financial architecture, making it vulnerable to that year’s banking collapse.

In fact, two years earlier in July 2006, David Martin, an expert on global finance, presciently forecast that Basel II would interact with the debt bubble to convert a collapse of the housing bubble into a global financial conflagration. Just a month after that warning, I was told by a former senior Pentagon official with wide-ranging high-level access to the U.S. military, intelligence and financial establishment that a global banking collapse was imminent, and would likely occur in 2008.

My source insisted that the event was bound up with the peak of global conventional oil production about two years earlier (which according to the U.K.’s former chief government scientist Sir David King did indeed occur around 2005, even though unconventional oil and gas production has offset the conventional decline so far).

Having first outlined my warning of a 2008 global banking collapse in August 2006, I re-articulated the warning in November 2007, citing Martin’s forecast and my own wider systems analysis at a lecture at Imperial College, London. In that lecture, I predicted that a housing-triggered banking crisis would be sparked in the context of the new era of expensive fossil fuels.

I called it then, and I’m calling it now. Some time after January 2018, we are likely to see a new crisis convergence in global energy, economic and food systems, similar to what occurred in 2008.

Today, we are all supposed to quietly believe that the economy is in recovery, when in fact it is merely transitioning through a fundamental global systemic phase-shift in which the unsustainability of prevailing industrial structures is being increasingly laid bare. The truth is that the cycles of protracted economic crisis are symptomatic of a deeper global systemic process.

One way we can brace ourselves for the next crash is to recognize it for what it is: a symptom of global system failure, and therefore of the inevitable transition to a post-carbon, post-capitalist future. The future we are stepping into simply doesn’t work the way we are accustomed to.

The old, industrial era rules for the dying age of energy and technological super-abundance must be re-written for a new era beyond fossil fuels, beyond endless growth at any environmental cost, beyond debt-driven finance.

This year, we can prepare for the post-2018 resurgence of crisis convergence by planting seeds — however small — for that future in our own lives, and with those around us, from our families, to our communities and wider societies.
Nafeez Ahmed is an investigative journalist and international security scholar. He writes the System Shift column for VICE’s Motherboard, and is the winner of a 2015 Project Censored Award for Outstanding Investigative Journalism for his former work at the Guardian. He is the author of A User’s Guide to the Crisis of Civilization: And How to Save It (2010), and the scifi thriller novel Zero Point, among other books.

Peak coal 2013-2045 — most likely 2025-2030

Dennis Coyne. March 11, 2016. Coal Shock Model. peakoilbarrel.com

Coal is an important energy resource, but we do not know the size of the economically recoverable resource that will eventually be recovered. The mainstream view is that there are extensive coal resources that are economically recoverable. But research by Rutledge, Mohr, and Laherrere contradicts this view.

My estimates of the coal URR are based on the work of David Rutledge and Steve Mohr. Recent work by Jean Laherrere has coal URR estimates which are higher than mine: his medium scenario (650 Gtoe) is higher than my high case (630 Gtoe), and his estimates are usually conservative. My estimate may be too conservative, though my medium case (URR=510 Gtoe) is somewhat higher than the best estimate of Steve Mohr (465 Gtoe), whose work on coal is the best that I have found.

The average of the best estimate of Mohr and Laherrere’s medium case is about 550 Gtoe, a little higher than my medium case and similar to Laherrere’s low case. Based on the recent work by Laherrere, my best estimate would be 560 Gtoe (570 Gtoe is the average of my medium and high cases and 550 Gtoe is the average of the Mohr and Laherrere medium cases, the average of all 4 is 560 Gtoe).
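
The arithmetic behind the 560 Gtoe best estimate is just averaging; a trivial sketch using only the estimates quoted above:

```python
# URR estimates quoted above, in Gtoe
my_medium, my_high = 510, 630
mohr_best, laherrere_medium = 465, 650

avg_mine = (my_medium + my_high) / 2                     # 570 Gtoe
avg_mohr_laherrere = (mohr_best + laherrere_medium) / 2  # ~558 Gtoe ("about 550")
best_estimate = (avg_mine + avg_mohr_laherrere) / 2      # ~564 Gtoe, rounded to 560
print(avg_mine, avg_mohr_laherrere, best_estimate)
```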

The peak for world coal output will come sooner than most people think: the range is 2013 to 2045, and my estimate is 2025 to 2030, with peak output between 4 and 5 Gtoe/year (2014 output was about 4 Gtoe/year).

The eventual peak in World fossil fuel output is a potentially serious problem for human civilization. Many people have studied this problem, including Jean Laherrere, Steve Mohr, Paul Pukite (aka Webhubbletelescope), and David Rutledge.

I have found Steve Mohr’s work the most comprehensive as he covered coal, oil, and natural gas from both the supply and demand perspective in his PhD Thesis. Jean Laherrere has studied the problem extensively with his focus primarily on oil and natural gas, but with some exploration of the coal resource as well. David Rutledge has studied the coal resource using linearization techniques on the production data (which he calls logit and probit).

Paul Pukite introduced the Shock Model with dispersive discovery which he has used primarily to look at how oil and natural gas resources are developed and extracted over time. In the past I have attempted to apply Paul Pukite’s Shock Model (in a simplified form) to the discovery data found in Jean Laherrere’s work for both oil and natural gas, using the analysis of Steve Mohr as a guide for the URR of my low and high scenarios along with the insight gleaned from Hubbert Linearization.

In the current post I will apply the Shock model to the coal resource, again trying to build on the work of Mohr, Rutledge, Laherrere, and Pukite.

A summary of URR estimates for World coal is below:

The “Laherrere+Rutledge” estimate uses the Rutledge best estimate for the low case and Laherrere’s low and medium cases for the medium and high cases. Laherrere also has a high case of 750 Gtoe for the World coal URR, which seems too optimistic in my opinion. The “high” estimate of Steve Mohr has been reduced from his “Case 3” estimate of 670 Gtoe by 40 Gtoe because I have assumed lignite and black coal resources are lower than his high estimate.

An update of David Rutledge’s estimate using the latest BP data through 2014 gives a URR of about 400 billion tonnes of oil equivalent (Gtoe) for coal. The Rutledge 2009 estimate was about 350 Gtoe.

My initial estimate was in billions of tonnes (Gt) of coal: 800 Gt for the low estimate (a round number near Steve Mohr’s low estimate of 770 Gt) and 1300 Gt for the high estimate (about the same as Steve Mohr’s high estimate); my medium estimate was simply the average of the two. I came across Jean Laherrere’s estimate after I had developed my model; surprisingly, his medium estimate is a little higher than my guess, which is usually not the case (for other fossil fuels).

I do not have access to discovery data for coal, but based on World Resource estimates gathered by David Rutledge, most coal resources had been discovered by the 1930s. I developed simple dispersive discovery models with peak discovery around 1900 for each of the three cases. These are rough estimates; all I really know is that coal was discovered over time. The cumulative coal discovery models in Gtoe are shown in the chart below for the low, medium and high URR cases.

In each case about 75% of coal discovery was prior to 1940. Coal resources have been developed very slowly, especially since the discovery of oil and natural gas. As a simplification I assume that the rate at which the discovered coal is developed remains constant over time.

A maximum entropy probability density function with a mean time from discovery to first production of 100 years is used to approximate how quickly new proved developed producing reserves are added to any reserves already producing each year. For example a 1000 million tonne of oil equivalent (1 Gtoe) coal discovery would be developed (on average) as shown in the chart below:

Reading from the chart, about 9 Mtoe of new producing reserves would be developed from this 1850 discovery in 1860 and about 5 Mtoe of new producing reserves would be developed in 1920. About half of the 1000 Mt discovered in 1850 would have become producing reserves by 1920, so the median time from discovery to producing reserve is about 70 years (the mean is 100 years due to the long tail of the exponential probability density function).
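
The maximum entropy density for a positive time with a fixed mean is the exponential distribution, so the figures read off the chart can be reproduced directly. A minimal sketch, assuming only the 100-year mean stated above:

```python
import math

MEAN_YEARS = 100.0  # mean time from discovery to first production

def pdf(t: float) -> float:
    """Exponential (maximum entropy) development density, per year."""
    return math.exp(-t / MEAN_YEARS) / MEAN_YEARS

discovery_mtoe = 1000.0  # the 1 Gtoe discovery made in 1850 in the example

print(discovery_mtoe * pdf(10))        # ~9 Mtoe of new producing reserves in 1860
print(discovery_mtoe * pdf(70))        # ~5 Mtoe of new producing reserves in 1920
print(1 - math.exp(-70 / MEAN_YEARS))  # ~0.50: half developed by 1920
print(MEAN_YEARS * math.log(2))        # ~69 years: median lag (mean is 100 years)
```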

The model takes all the discoveries for each year and applies the probability density function (pdf) above to each year’s discoveries (the chart shows the pdf multiplied by 1000, so that the new producing reserves from a 1000 Mtoe discovery are shown in Mtoe). Then the new producing reserves from each year’s discoveries are simply added together in a spreadsheet; this is not complicated, just an accounting exercise. The new producing reserves curve (when everything is added up) is shown below for the medium URR case (510 Gtoe):

Each year new producing reserves are added to the pool of producing reserves while some of these reserves are produced and become fossil fuel output. This is indicated schematically below:

If the fossil fuel output is less than the new producing reserves added in any year, then the producing reserves increase during that year; if the reverse is true, they decrease.

The fossil fuel output divided by the producing reserves is called the extraction rate.
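
Putting the pieces together, the shock model is essentially an accounting loop: spread each year’s discoveries forward in time with the exponential density to get new producing reserves, then apply an extraction rate to the running balance of producing reserves. The sketch below is a simplified illustration, not the spreadsheet used for this post; the single-discovery input and the constant 2% extraction rate in the toy example are my assumptions, whereas the real model uses the dispersive discovery curves above and the extrapolated extraction-rate trend described below.

```python
import math

MEAN_YEARS = 100.0  # mean time from discovery to first production, as above

def shock_model(discoveries: dict[int, float],
                extraction_rate: dict[int, float],
                end_year: int) -> dict[int, float]:
    """Yearly output from the simplified shock model (same units as discoveries)."""
    years = range(min(discoveries), end_year + 1)
    # 1. Spread each year's discoveries forward with the exponential density
    #    to get the new producing reserves added in each year.
    new_producing = {y: sum(d * math.exp(-(y - y0) / MEAN_YEARS) / MEAN_YEARS
                            for y0, d in discoveries.items() if y >= y0)
                     for y in years}
    # 2. Step the producing-reserve balance forward, extracting a fraction each year.
    reserves, output = 0.0, {}
    for y in years:
        reserves += new_producing[y]
        output[y] = extraction_rate.get(y, 0.02) * reserves  # 2%/yr default (assumed)
        reserves -= output[y]
    return output

# Toy example (not this post's data): one 500 Gtoe "discovery" in 1900, constant 2% rate.
out = shock_model({1900: 500.0}, {}, 2100)
print(max(out, key=out.get), round(max(out.values()), 2))  # peak year and peak output
```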

Using data from David Rutledge for fossil fuel output to 1980 and data from BP’s Statistical Review of World Energy from 1981 to 2014, I extrapolated the extraction rate trend from 2000 to 2014 to estimate future coal output. The chart below shows the discovery curve, new producing reserves curve, and the output curve for the scenario with a URR of 510 Gtoe.

Note that when new producing reserves exceed output, producing reserves increase (up to 1986); after 1993 output is higher than the new producing reserves added each year, so producing reserves start to decrease. Producing reserves are shown in the following chart for the medium scenario (URR=510 Gtoe).

The fall in producing reserves combined with increased World output of coal from 2000 to 2013 required an increase in extraction rates from 1.5% to 2.9%. I assume after 2014 that this increase in extraction rates continues at a similar rate until reaching 4% in 2026 and then extraction rates gradually flatten, reaching 5.1% in 2070.

Clearly I do not know the future extraction rate; this is an estimate assuming recent trends continue. For this scenario with a coal URR of 510 Gtoe, output peaks in 2026 at about 4250 Mtoe/year.

For the low and high URR cases the details of the analysis are covered at the end of the post. The extraction rate trend from 2000 to 2014 was also extended until a peak was reached, and then the increase in extraction rates was assumed to lessen until a constant rate of extraction was reached.

The three scenarios (low, medium, and high) are presented in the chart below.

The low scenario peaks in 2013 at about 4 Gtoe/a, the medium scenario peaks in 2025 at about 4.3 Gtoe/a, and the high scenario peaks in 2045 at about 4.9 Gtoe/a. Note that the medium scenario is not my best estimate, it is simply a scenario between possible low or high URR cases, reality might fall on any path between the high and low scenarios, depending on the eventual URR and extraction rates in the future.

A blog post by Luis de Sousa covered Jean Laherrere’s estimate of future coal output with URR between 550 Gtoe and 750 Gtoe.

For comparison, I have adjusted my chart (shown above) to have a similar scale as Jean Laherrere’s chart.

Note that only the two higher scenarios in my chart can be roughly compared with the lower two scenarios in Laherrere’s chart (510 compared with 550 Gtoe and 630 compared with 650 Gtoe). My scenarios peak at higher output at a later year and decline more steeply as a result.

The chart below is Steve Mohr’s medium independently dynamic scenario, where supply responds to coal demand.

The chart above labelled C Case 2 is figure 5-8 from page 69 of Steve Mohr’s PhD dissertation; the peak output is 210 EJ/year in 2019 (from Table 5-7 on page 71), and Case 2 has a URR of 19.4 ZJ or 465 Gtoe (ZJ = zettajoule = 1E21 J). My medium scenario (URR of 21.3 ZJ) has a lower peak output of 180 EJ/year, which occurs 6 years later than Mohr’s scenario. (1 Gtoe = 41.868 EJ = 4.1868E-2 ZJ).
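
The EJ, ZJ, and Gtoe figures can be cross-checked with the stated conversion factor; a quick sketch, assuming nothing beyond the numbers quoted above:

```python
GTOE_TO_EJ = 41.868   # 1 Gtoe = 41.868 EJ; 1 ZJ = 1000 EJ

print(510 * GTOE_TO_EJ / 1000)  # ~21.4 ZJ: the 510 Gtoe medium-scenario URR
print(465 * GTOE_TO_EJ / 1000)  # ~19.5 ZJ: Mohr's Case 2 URR of 465 Gtoe
print(210 / GTOE_TO_EJ)         # ~5.0 Gtoe/year: Mohr's 210 EJ/year peak
print(180 / GTOE_TO_EJ)         # ~4.3 Gtoe/year: the medium scenario's 180 EJ/year peak
```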

It is interesting that Jean Laherrere’s larger URR scenario (550 Gtoe) has a peak of 4 Gtoe/year, while Mohr’s smaller URR (465 Gtoe) has a peak of 5 Gtoe/year. Mohr’s scenario was created in 2010 before the 2014 slowdown in Chinese coal consumption and he may have assumed that China and India would resume their rapid increase in coal consumption from 2010 to 2025. Jean Laherrere’s scenario was created in 2015 and in his 550 Gtoe scenario he may assume that the recent decrease in World coal output (in 2014) will continue in the future.

My medium scenario (510 Gtoe) is between Mohr’s medium (case 2) scenario and Laherrere’s low scenario. I have created two new scenarios using a URR of 510 Gtoe which match the peak output of Laherrere’s 550 Gtoe scenario and Mohr’s 465 Gtoe scenario. I have also created a “plateau” scenario with URR=510 Gtoe with World output remaining at the 2014 level until 2025. The various scenarios are presented in the chart below.

The extraction rates in the 4 different 510 Gtoe scenarios can be compared in the chart that follows.

Generally a higher peak in output leads to steeper annual decline rates; the chart below compares annual decline rates for the 4 different 510 Gtoe URR scenarios.

Works Cited

  • De Sousa, Luis. “Peak Coal in China and the World, by Jean Laherrère.” attheedgeoftime.blogspot.com. Web. 11 March 2016.
  • Mohr, Steve. Projection of world fossil fuel production with supply and demand interactions. PhD thesis, 2010. Web. 11 March 2016.
  • Oil Conundrum. theoilconundrum.com. Web. 11 March 2016.
  • Rutledge, David. “Estimating long-term world coal production with logit and probit transforms.” International Journal of Coal Geology 85 (2011): 23-33. Web. 11 March 2016.

Appendix with details of Low and High cases

With links to Excel files at end of appendix

Low case-URR=390 Gtoe

High Case- URR=630 Gtoe


Why Nuclear Power is not an alternative to fossil fuels

[ Economic reasons are the main hurdle to new nuclear plants now, with capital costs so high it’s almost impossible to get a loan, especially when natural gas is so much cheaper and less risky. But there are other reasons nuclear power is in trouble as well. Far more plants are in danger of closing than are being built (37 or more may close).

This is a liquid transportation fuels crisis. The Achilles heel of civilization is our dependency on trucks of all kinds, which run on diesel fuel because diesel engines are far more powerful than steam, gasoline, electric or any other engine on earth (Vaclav Smil. 2010. Prime Movers of Globalization: The History and Impact of Diesel Engines and Gas Turbines. MIT Press). Billions of trucks (and equipment) are required to keep going the supply chains that every person and business on earth depends on, as well as mining, agriculture, road construction, logging, and so on. Since trucks can’t run on electricity, anything that generates electricity is not a solution, nor is it likely that the electric grid can ever be 100% renewable (read “When Trucks Stop Running”; this can’t be explained in a sound-bite), or that we could replace billions of diesel engines in the short time left. According to a study for the Department of Energy, society would need to prepare for the peaking of world oil production 10 to 20 years ahead of time (Hirsch 2005). But conventional oil peaked in 2005 and has been on a plateau since then. Here we are 12 years later, totally unprepared, the public is still buying gas guzzlers whenever oil prices drop, and freeway speed limits are still over 55 mph.

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report ]

Nuclear power costs too much

U.S. nuclear power plants are old and in decline. By 2030, U.S. nuclear power generation might supply just 10% of electricity, half its share of production now, because 38 reactors producing a third of nuclear power are past their 40-year life span, and another 33 reactors producing a third of nuclear power are over 30 years old. Although some will have their licenses extended, 37 reactors that produce half of nuclear power are at risk of closing because of economics, breakdowns, unreliability, long outages, safety, and expensive post-Fukushima retrofits (Cooper 2013; see also “Nuclear power is too expensive, 37 costly reactors predicted to shut down” and “A third of Nuclear Reactors are going to die of old age in the next 10-20 years”).

New reactors are not being built because it takes years to get permits and $8.5–$20 billion in capital must be raised for a new 3400 MW nuclear power plant (O’Grady, E. 2008. Luminant seeks new reactor. London: Reuters.). This is almost impossible since a safer 3400 MW gas plant can be built for $2.5 billion in half the time. What utility wants to spend billions of dollars and wait a decade before a penny of revenue and a watt of electricity is generated?

In the USA there are 104 nuclear plants (largely constructed in the 1970s and 1980s) contributing 19% of our electricity. Even if all operating plants over 40 years old receive renewals to operate for 60 years, it’s unlikely they can be extended another 20 years beyond that, so starting in 2028 reactors will begin to retire and by 2050 nearly all nuclear plants will be out of business.

Joe Romm’s “The Nukes of Hazard: One Year After Fukushima, Nuclear Power Remains Too Costly To Be A Major Climate Solution” explains in detail why nuclear power is too expensive, for example:

  • New nuclear reactors are expensive. Recent cost estimates for individual new plants have exceeded $5 billion (for example, see Scroggs, 2008; Moody’s Investor’s Service, 2008).
  • New reactors are intrinsically expensive because they must be able to withstand virtually any risk that we can imagine, including human error and major disasters
  • Based on a 2007 Keystone report, we’d need to add an average of 17 plants each year, while building an average of 9 plants a year to replace those that will be retired, for a total of one nuclear plant every two weeks for four decades — plus 10 Yucca Mountains to store the waste
  • Before 2007, price estimates of $4000/kw for new U.S. nukes were common, but by October 2007 a Moody’s Investors Service report, “New Nuclear Generation in the United States,” concluded, “Moody’s believes the all-in cost of a nuclear generating facility could come in at between $5,000 – $6,000/kw.”
  • That same month, Florida Power and Light, “a leader in nuclear power generation,” presented its detailed cost estimate for new nukes to the Florida Public Service Commission. It concluded that two units totaling 2,200 megawatts would cost from $5,500 to $8,100 per kilowatt – $12 billion to $18 billion total!
  • In 2008, Progress Energy informed state regulators that the twin 1,100-megawatt plants it intended to build in Florida would cost $14 billion, which “triples estimates the utility offered little more than a year ago.” That would be more than $6,400 a kilowatt.  (And that didn’t even count the 200-mile, $3 billion transmission system the utility needs, which would bring the price up to a staggering $7,700 a kilowatt.)

Extract from Is Nuclear Power Our Energy Future, Or in a Death Spiral? March 6th, 2016, By Dave Levitan, Ensia:

In general, the more experience accumulated with a given technology, the less it costs to build. This has been dramatically illustrated with the falling costs of wind and solar power. Nuclear, however, has bucked the trend, instead demonstrating a sort of “negative learning curve” over time.

According to the Union of Concerned Scientists, the actual costs of 75 of the first nuclear reactors built in the U.S. ran over initial estimates by more than 200 percent. More recently, costs have continued to balloon. Again according to UCS, the price tag for a new nuclear power plant jumped from between US$2 billion and US$4 billion in 2002 all the way to US$9 billion in 2008. Put another way, the price shot from below US$2,000 per kilowatt in the early 2000s up to as high as US$8,000 per kilowatt by 2008.

Steve Clemmer, the director of energy research and analysis at UCS, doesn’t see this trend changing. “I’m not seeing much evidence that we’ll see the types of cost reductions [proponents are] talking about. I’m very skeptical about it — great if it happens, but I’m not seeing it,” he says.

Some projects in the U.S. seem to face delays and overruns at every turn. In September 2015, a South Carolina effort to build two new reactors at an existing plant was delayed for three years. In Georgia, a January 2015 filing by plant owner Southern Co. said that its additional two reactors would jump by US$700 million in cost and take an extra 18 months to build. These problems have a number of root causes, from licensing delays to simple construction errors, and no simple solution to the issue is likely to be found.

In Europe the situation is similar, with a couple of particularly egregious examples casting a pall over the industry. Construction began for a new reactor at the Finnish Olkiluoto 3 plant in 2005 but won’t finish until 2018, nine years late and more than US$5 billion over budget. A reactor in France, where nuclear is the primary source of power, is six years behind schedule and more than twice as expensive as projected.

“The history of 60 years or more of reactor building offers no evidence that costs will come down,” Ramana says. “As nuclear technology has matured costs have increased, and all the present indications are that this trend will continue.”

Nuclear plants require huge grid systems, since they’re far from energy consumers. The Financial Times estimates that this would require ten thousand billion dollars ($10 trillion) to be invested worldwide in electric power systems over the next 30 years (Hoyos).

In summary, investors aren’t going to invest in new reactors because:

  • of the billions in liability after a meltdown or accident
  • there may only be enough uranium left to power existing plants
  • the cost per plant ties up capital too long (it can take 10 billion dollars over 10 years to build a nuclear power plant)
  • the costs of decommissioning are very high
  • properly dealing with waste is expensive
  • There is no place to put waste — in 2009 Secretary of Energy Chu shut down Yucca Mountain and there is no replacement in sight.

Nor will the U.S. government pay for new nuclear reactors, given that public opinion is against it: in an E&E News poll, 72% said no, they weren’t willing for the government to pay for nuclear power reactors through billions of dollars in new federal loan guarantees.

Cembalest, an analyst at J.P. Morgan, wrote “In some ways, nuclear’s goose was cooked by 1992, when the cost of building a 1 GW plant rose by a factor of 5 (in real terms) from 1972” (Cembalest).

Peak Uranium

Energy experts warn that an acute shortage of uranium is going to hit the nuclear energy industry. Dr Yogi Goswami, co-director of the Clean Energy Research Centre at the University of Florida, warns that proven reserves of uranium will last less than 30 years, and that by 2050 all proven and undiscovered reserves of uranium will be exhausted. Current nuclear plants consume around 67,000 tonnes of high-grade uranium per year. With present world uranium reserves of 5.5 million tons, we have enough to last 42 years. If more nuclear plants are built, then we have less than 30 years left (Coumans).

Uranium production peaked in the 1980s but supplies continued to meet demand because weapons decommissioned after the Cold War were converted to commercial fuel. Those sources are now drying up, and a new demand-driven peak may be on the horizon.

The only way we could extend our supplies of uranium is to build breeder reactors.  But we don’t have any idea how to do that and we’ve been trying since the 1950s.

China switched on its 19th nuclear power reactor as it rushes to increase nuclear generation. The country plans to switch on 8.64 gigawatts of nuclear generating capacity in 2014 as compared to 3.24 gigawatts of new capacity in 2013. The availability of uranium for China’s nuclear industry is becoming an issue. Beijing may have to import some 80 percent of its uranium by 2020, as compared to the current 60 percent.

There may not even be enough uranium to power existing plants

Source: Colorado Geological Survey

Nuclear power is Way too Dangerous

In 2016, the top journal Science, drawing on a National Academy of Sciences study of lessons learned from Fukushima, reported that a nuclear spent-fuel fire at Peach Bottom in Pennsylvania could force 18 million people to evacuate.  This is because there’s still nowhere to put nuclear waste, so it’s stored in pools of water on-site that are not under the containment dome but open to the air, a prime target for terrorists at over 100 locations.  If electric power were ever down more than 10 days due to a natural disaster, an electromagnetic pulse from a nuclear weapon or solar flare, or any other reason, these nuclear pools would catch on fire and spew out radiation over many square miles and force millions of people to evacuate.  Also see: Shocking state of world’s riskiest nuclear waste sites

The dangers of nuclear waste are the main reason California and many other states won’t allow new nuclear power plants to open. To find out more about the dangers of nuclear waste and why we have nowhere to store it, read my book review of “Too Hot to Touch“.

Greenpeace has a critique of nuclear power called Nuclear Reactor Hazards (2005) which makes the following points:

  1. As nuclear power plants age, components become embrittled, corroded, and eroded. This can happen at a microscopic level which is only detected when a pipe bursts. As a plant ages, the odds of severe incidents increase. Although some components can be replaced, failures in the reactor pressure vessel would lead to a catastrophic release of radioactive material. The risk of a nuclear accident grows significantly each year after 20 years. The average age of power plants now, world-wide, is 21 years.
  2. In a power blackout, if the emergency backup generators don’t kick in, there is the risk of a meltdown. This happened at the Forsmark power station in Sweden in 2006. A former director said “It was pure luck that there was not a meltdown. Since the electricity supply from the network didn’t work as it should have, it could have been a catastrophe.” Another few hours and a meltdown could have occurred. It should not surprise anyone that power blackouts will become increasingly common and long-lasting as energy declines.
  3. 3rd generation nuclear plants are pigs wearing lipstick – they’re just gussied up 2nd generation — no safer than existing plants.
  4. Many failures are due to human error, and that will always be the case, no matter how well future plants are designed.
  5. Nuclear power plants are attractive targets for terrorists now and future resource wars. There are dozens of ways to attack nuclear and reprocessing plants. They are targets not only for the huge number of deaths they would cause, but as a source of plutonium to make nuclear bombs. It only takes a few kilograms to make a weapon, and just a few micrograms to cause cancer.

If Greenpeace is right about risks increasing after 20 years, then there’s bound to be a meltdown incident within ten years, which would make it almost impossible to raise capital. (And indeed there was, Fukushima had a meltdown in 2011).

It’s already hard to raise capital, because the owners want to be completely exempt from the costs of nuclear meltdowns and other accidents. That’s why no new plants have been built in the United States for decades.

The Energy Returned on Energy Invested may be too low for investors as well. When you consider the energy required to build a nuclear power plant, which needs a tremendous amount of cement, steel pipes, and other infrastructure, it could take a long time for the returned energy to pay back the energy invested. The construction of 1970’s U.S. nuclear power plants required 40 metric tons of steel and 190 cubic meters of concrete per average megawatt of electricity generating capacity (Peterson 2003).

The amount of greenhouse gases emitted during construction is another reason many environmentalists have turned away from nuclear power.

The costs of treating nuclear waste have skyrocketed. An immensely expensive treatment plant to clean up the Hanford nuclear site went from costing $4.3 billion in 2000 to $12.2 billion today. If the final treatment plant is ever built, it will be twelve stories high and four football fields long (Dininny 2006).

Nuclear power plants take too long to build

It often takes 10 years to build a nuclear power plant because it takes years to get licensed and fabricate components, and another 4 to 7 years to actually build it. That’s too long for investors to wait; they want far more immediate returns. Techno-optimists can argue that some new-fangled kind of reactor could be built more quickly.  But the public is afraid of reactors (rightly so), so it’s bound to go slowly as protestors demand stringent inspections every step of the way.  The public also is concerned with the issues of long-term nuclear waste storage.  So even a small, simple reactor would have many hurdles to overcome.

Financial markets are wary of investments in new nuclear plants until it can be demonstrated they can be constructed on budget and on schedule. Nuclear plants have not been built in the United States for decades, but there are unpleasant memories, because construction of some of the currently operating plants was associated with substantial cost overruns and delays. There is also a significant gap between when construction is initiated and when return on investment is realized.

A crisis will harden public opinion against building new Nuclear Power Plants

I wrote this section before the Fukushima disaster, and there will be more disasters as aging nuclear power plants, extended beyond their lifetimes and pushed to produce electricity full-tilt, succumb to the many hazards detailed in the Greenpeace International report “Nuclear Reactor Hazards“.  It’s only a matter of time before one of our aging reactors melts down.  When that happens, the public will fight the development of more nuclear power plants.  Other factors besides aging that could cause a disaster are natural disasters, failure of the electric grid, increased and more severe flooding, drought, severe and unstable weather from climate change, and lack of staffing as older workers retire with few educated engineers available to replace them.

Even Edward Teller, father of the hydrogen bomb, thought Nuclear Power Plants were dangerous and should be put underground for safety in case of a failure and to make clean-up easier.

Five of the six reactors at the Fukushima plant in Japan were Mark 1 reactors. Thirty-five years ago, Dale G. Bridenbaugh and two of his colleagues at General Electric quit after they became convinced that the Mark 1 nuclear reactor design they were reviewing was so flawed it could lead to a devastating accident (Mosk).

Nuclear power plants are extremely attractive targets for terrorists and in a war.  Uranium is not only stored in the core, but also in the “waste” area near the plant, providing plenty of material for “dirty” or explosive atom bombs.

For details, read the original document or my summary of the Greenpeace report.

EROEI and decommissioning

See: Decommissioning a nuclear reactor

The energy to build, decommission, dispose of wastes, etc., may be more than the plant will ever generate, i.e., a negative Energy Returned on Energy Invested (EROEI).  A review by Charles Hall et al. of net energy studies of nuclear power found the data to be “idiosyncratic, prejudiced, and poorly documented,” and concluded the most reliable EROEI information was too old to be useful (results ranged from 5 to 8:1). Newer data was either unjustifiably optimistic (15:1 or more) or pessimistic (low, even less than 1:1).  One of the main reasons EROEI is low is the enormous amount of energy used to construct nuclear power plants, which also creates a great deal of GHG emissions.

Scale

“To produce enough nuclear power to equal the power we currently get from fossil fuels, you would have to build 10,000 of the largest possible nuclear power plants. That’s a huge, probably nonviable initiative, and at that burn rate, our known reserves of uranium would last only for 10 or 20 years.” (Goodstein). Are there enough sites for 10,000 plants near water for cooling, yet not so low-lying that rising sea levels destroy them or droughts remove their cooling water supplies?

Staffing

Nuclear power has been unpopular for such a long time that there aren’t enough nuclear engineers, plant operators and designers, or manufacturing companies to scale up quickly (Torres 2006).  The number of American Society of Mechanical Engineers (ASME) nuclear certificates held around the world fell from 600 in 1980 to 200 in 2007. There is also an insufficient supply of people with the requisite education or training at a time when vendors, contractors, architects, engineers, operators, and regulators will be seeking to build up their staffs. In addition, 35% of the staff at U.S. nuclear utilities are eligible for retirement in the next 5–10 years.

There could be shortages in certain parts and components (especially large forgings), as well as in trained craft and technical personnel, if nuclear power expands significantly worldwide.

There are fewer suppliers of nuclear parts and components now than in the past.

Nuclear Proliferation & terrorism targets

Can we really prevent crazed dictators, for 30,000 years, from using plutonium and other wastes to wage war?  Even if a nuclear bomb is beyond the capabilities of society in the future, the waste could be used to make a dirty bomb. Meanwhile, reactors make good targets for terrorists who do have the money to hire scientists to help them make a nuclear bomb from stolen uranium or plutonium.

Water 

Nuclear plants must be built near water for cooling, and use a tremendous amount of water. Scientists are certain that global warming will raise sea levels — about half of existing power plants would be flooded.  Climate change will cause longer and more severe droughts, with the potential for not enough water to cool the plant down, and more severe storms will bring more hurricanes and tornadoes.

NIMBYism

Never underestimate NIMBYism, which is already preventing nuclear power plants from being built. The political opposition to building thousands of nuclear plants will be impossible to overcome.

No good way to store the energy

One of the most critical needs for power is a way to store it. Utility-scale storage batteries have not been invented despite decades of research, and only enough materials exist on earth to build NaS batteries at a cost of over $44 trillion that would take up 945 square miles of real estate (Friedemann 2015).

A great deal of the electric power generated would need to be used to replace the billions of combustion engine machines and vehicles rather than providing heat, cooling, cooking power and light to homes and offices. It takes decades to move from one source of power to another. It’s hard to see how this could be accomplished without great hardship and social chaos, which would slow the conversion process down. Desperation is likely to lead to stealing of key components of the new infrastructure to sell for scrap metal, as is already happening in Baltimore where 30-foot tall street lights are being stolen (Gately 2005).

Related posts:  Energy Storage

Breeder reactors. You’d need 24,000 Breeder Reactors, each one a potential nuclear bomb (Mesarovic)

  • We’ve known since 1969 that we needed to build breeder reactors to stretch the lifetime of radioactive material to tens of thousands of years, and to reduce the radioactive wastes generated, but we still don’t know how to do this. (NAS)
  • If we ever do succeed, these reactors are much closer to being bombs than conventional reactors – the effects of an accident would be catastrophic economically and in the number of lives lost if it failed near a city (Wolfson).
  • The by-product of the breeder reaction is plutonium. Plutonium 239 has a half-life of 24,000 years. How can we guarantee that no terrorist or dictator will ever use this material to build a nuclear or dirty bomb during this time period?

Assume, as the technology optimists want us to, that in 100 years all primary energy will be nuclear. Following historical patterns, and assuming a not unlikely quadrupling of population, we will need, to satisfy world energy requirements, 3,000 “nuclear parks” each consisting of, say, 8 fast-breeder reactors. These 8 reactors, working at 40% efficiency, will produce 40 million kilowatts of electricity collectively. Therefore, each of the 3,000 nuclear parks will be converting primary nuclear power equivalent to 100 million kilowatts thermal. The largest nuclear reactors presently in operation convert about 1 million kilowatts (electric), but we will give progress the benefit of doubt and assume that our 24,000 worldwide reactors are capable of converting 5 million kilowatts each. In order to produce the world’s energy in 100 years, then, we will merely have to build, in each and every year between now and then, 4 reactors per week! And that figure does not take into account the lifespan of nuclear reactors. If our future nuclear reactors last an average of thirty years, we shall eventually have to build 2 reactors per day to replace those that have worn out.  By 2025, sole reliance on nuclear power would require more than 50 major nuclear installations, on the average, in every state in the union.

For the sake of this discussion, let us disregard whether this rate of construction is technically and organizationally feasible in view of the fact that, at present, the lead time for the construction of much smaller and simpler plants is seven to ten years. Let us also disregard the cost of about $2000 billion per year — or 60 percent of the total world output of $3400 billion — just to replace the worn-out reactors and the availability of the investment capital. We may as well also assume that we could find safe storage facilities for the discarded reactors and their irradiated accessory equipment, and also for the nuclear waste. Let us assume that technology has taken care of all these big problems, leaving us only a few trifles to deal with.

In order to operate 24,000 breeder reactors, we would need to process and transport, every year, 15 million kilograms (16,500 tons) of plutonium-239, the core material of the Hiroshima atom bomb. Only 10 pounds are needed to construct a bomb.  If inhaled, just ten micrograms (.00000035 ounce) of plutonium-239 is likely to cause fatal lung cancer. A ball of plutonium the size of a grapefruit contains enough poison to kill nearly all the people living today. Moreover, plutonium-239 has a radioactive life of more than 24,000 years. Obviously, with so much plutonium on hand, there will be a tremendous problem of safeguarding the nuclear parks — not one or two, but 3000 of them. And what about their location, national sovereignty, and jurisdiction? Can one country allow inadequate protection in a neighboring country, when the slightest mishap could poison adjacent lands and populations for thousands and thousands of years? And who is to decide what constitutes adequate protection, especially in the case of social turmoil, civil war, war between nations, or even only when a national leader comes down with a case of bad nerves. The lives of millions could easily be beholden to a single reckless and daring individual.
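
The build rates in the excerpt above follow from simple division; a quick check using only the stated assumptions (3,000 parks of 8 reactors built over 100 years, with an assumed 30-year reactor lifespan):

```python
parks = 3000
reactors_per_park = 8
reactors = parks * reactors_per_park   # 24,000 breeder reactors
build_years = 100                      # the assumed build-out period
lifespan = 30                          # assumed average reactor life, in years

print(reactors / build_years / 52)     # ~4.6 new reactors per week for a century
print(reactors / lifespan / 365)       # ~2.2 replacement reactors per day thereafter
```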

References

Cembalest, M. 21 Nov 2011. Eye on the Market: The quixotic search for energy solutions. J.P. Morgan.

Coumans, C.  4 Sep 2010. Uranium reserves to be over by 2050. Deccan Chronicle

Dininny, S. 7 Sep 2006. Cost for Hanford waste treatment plant grows to $12.2 billion. The Olympian / Associated Press.

Friedemann, A. 2015. When Trucks Stop Running: Energy and the Future of Transportation. Springer.

Gately, G. 25 Nov 2005. Light poles vanishing — believed sold for scrap by thieves; 130 street fixtures in Baltimore have been cut down. New York Times.

Goodstein, D. April 29, 2005. Transcript of The End of the Age of Oil talk

(Greenpeace) H. Hirsch, et al. 2005. Nuclear Reactor Hazards: Ongoing Dangers of Operating Nuclear Technology in the 21st Century http://www.greenpeace.org/raw/content/international/press/reports/nuclearreactorhazards.pdf

Heinberg, Richard. September 2009. Searching for a Miracle. “Net Energy” Limits & the Fate of Industrial Society. Post Carbon Institute.

Hirsch, R. L., et al. February 2005. Peaking of World Oil Production: Impacts, mitigation, & risk management. Department of Energy.

Hoyos, C. 19 Oct 2003. Power sector 'to need $10,000 bn in next 30 years'. Financial Times.

Mesarovic, Mihajlo, et al. 1974. Mankind at the Turning Point: The Second Club of Rome Report. E.P. Dutton. pp. 132-135.

Mosk, M. 15 Mar 2011. Fukushima: Mark 1 Nuclear Reactor Design Caused GE Scientist To Quit In Protest. ABC World News.

(NAS) “It is clear, therefore, that by the transition to a complete breeder-reactor program before the initial supply of uranium 235 is exhausted, very much larger supplies of energy can be made available than now exist. Failure to make this transition would constitute one of the major disasters in human history." National Academy of Sciences. 1969. Resources & Man. W.H.Freeman, San Francisco. 259.

Peterson, P. 2003. Will the United States Need a Second Geologic Repository? The Bridge 33 (3), 26-32.

Torres, M. “Uranium Depletion and Nuclear Power: Are We at Peak Uranium?” http://www.theoildrum.com/node/2379#more

Wolfson, R. 1993. Nuclear Choices: A Citizen's Guide to Nuclear Technology. MIT Press

To see what plants are open, closing, or being built (excel):

United States Nuclear Regulatory Commission 2014-2015 Information Digest. Nuclear materials, radioactive waste, nuclear reactors, nuclear security.


Civilization goes over the net energy cliff in 2022 — just 6 years away

[ Below are excerpts from 3 posts by Louis Arnoux (see the full versions here) and a 1-hour video explaining the Hill's Group report here. Basically this explains the Net Energy Cliff and why net energy drops off so quickly rather than declining along a bell curve.

Oil extraction costs have been shooting up and can only become higher as nearly all of the ‘easy oil’ has been found. Once more energy is used than gained, exploration and production end.

For the average barrel of oil this may happen in 2022 — just 6 years away.

So by 2022 half the oil industry is likely to be out of business. Oil production won’t end — there will still be “above average” barrels produced, but dramatically less and less as we fall over the energy cliff, with the tail end around 2095.

  • The rapid end of the Oil Age began in 2012 and will be over within some 10 years. By 2022 the number of service stations in the US will have shrunk by 75%.
  • The critical parameter to consider is not the million barrels produced per day, but the net energy from oil per head of global population, since when this gets too close to nil we must expect complete social breakdown, globally.
  • We are in an unprecedented situation.  As stressed by Tainter, no previous civilization has ever managed to survive the kind of predicament we are in.  However, the people living in those civilizations were mostly rural and had a safety net, in that their energy source was 100% solar, photosynthesis for food, fiber and timber – they always could keep going even though it may have been under harsh conditions.  We no longer have such a safety net; our entire food systems are almost completely dependent on the net energy from oil that is in the process of dropping to the floor and our food supply systems cannot cope without it.

The Hill's Group has a model that predicted the decline in the price of oil before it began in 2014, and several other models arrive at the same conclusion: the age of oil ends for most of us around 2030 — though really 2022, since the 2030 date assumes total energy efficiency. If you need a kick in the pants to change your life and location, I can't imagine a more important document to read (ignore the math; the methods and the results in the charts are clear, and besides, it's good brain exercise to prevent Alzheimer's).

The Hill's Group. March 1, 2015. Depletion: A Determination for the World's Petroleum Reserve. Reserve status report #HC3-433, Version 2.

Or for an easier read look at this short summary of Dr. Alister Hamilton’s talk “Brexit, Oil and the World Economy” here, and view the hour video here  on YouTube. 

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report ]

Louis Arnoux. July 12, 2016. Some reflections on the Twilight of the Oil Age – part I. cassandralegacy.blogspot.com

Introduction

Since at least the end of 2014 there has been increasing confusion about oil prices, about whether so-called "Peak Oil" has already happened or will happen in the future and when, about EROI (or EROEI) values for current energy sources and for alternatives, about climate change and the phantasmatic 2°C warming limit, and about the feasibility of shifting rapidly to renewable or sustainable sources of energy supply. Overall, it matters a great deal whether a reasonable time horizon to act is, say, 50 years, meaning that the troubles we are contemplating take place mostly past 2050, or whether we are already in deep trouble and the timeframe to try and extricate ourselves is some 10 years. Answering this kind of question requires paying close attention to system boundary definitions and scrutinizing all matters taken for granted.

It took over 50 years for climatologists to be heard and for politicians to reach the Paris Agreement on climate change (CC) at the close of COP21 late last year. As you no doubt can gather from the title, I am of the view that we do not have 50 years to agonise about oil. In the three sections of this post I will first briefly take stock of where we are oil-wise; I will then consider how this situation calls on us to do our utter best to extricate ourselves from the current prevailing confusion and think straight about our predicament; and in the third part I will offer a few considerations concerning the near term, the next ten years – how to approach it, what cannot work and what may work, and the urgency to act, without delay.

Part 1 – Alice looking down the end of the barrel

In his recent post, Ugo contrasted the views of the Doomstead Diner's readers with those of energy experts regarding the feasibility of replacing fossil fuels within a reasonable timeframe. In my view, the Doomstead's guests had a much better sense of the situation than the "experts" in Ugo's survey. To be blunt, along current prevailing lines we are not going to make it. I am not just referring here to "business-as-usual" (BAU) parties holding on for dear life to fossil fuels and nukes. I also include all current efforts at implementing alternatives and combating CC. Here is why.

The energy cost of system replacement

What a great number of energy technology specialists miss are the challenges of whole system replacement – moving from fossil-based to 100% sustainable over a given period of time. Of course, the prior question concerns the necessity or otherwise of whole system replacement. For those of us who have already concluded that this is an urgent necessity, if only due to climate change, there is no need to discuss the matter here. For those who are perhaps not yet clear on this point, the matter will hopefully become a lot clearer a few paragraphs down.

So coming back for now to whole system replacement, the first challenge most remain blind to is the huge energy cost of such a replacement, in terms of both the 1st principle of thermodynamics (i.e. how much net energy is required to develop and deploy a whole alternative system, while the old one has to be kept going and progressively replaced) and the 2nd principle (i.e. the waste heat involved in the whole system substitution process). The implied issues are, first, to figure out how much total fossil primary energy is required by such a shift, in addition to what is required for ongoing BAU business and until such a time when any sustainable alternative has managed to become self-sustaining, and second, to ascertain where this additional fossil energy may come from.

The end of the Oil Age is now

If we had a whole century ahead of us to transition, it would be comparatively easy. Unfortunately, we no longer have that leisure since the second key challenge is the remaining timeframe for whole system replacement. What most people miss is that the rapid end of the Oil Age began in 2012 and will be over within some 10 years. To the best of my knowledge, the most advanced material in this matter is the thermodynamic analysis of the oil industry taken as a whole system (OI) produced by The Hill’s Group (THG) over the last two years or so (http://www.thehillsgroup.org).

THG is a group of seasoned US oil industry engineers led by B.W. Hill. I find its analysis elegant and rock hard. For example, one of its outputs concerns oil prices: over a 56-year period, its correlation with historical data is 0.995. In consequence, THG began to warn in 2013 about the oil price crash that started in late 2014 (see: http://www.thehillsgroup.org/depletion2_022.htm). In what follows I rely on THG's report and my own work.
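The 0.995 figure is THG's own; purely as an illustration of what a "correlation factor with historical data" means, this is the kind of check one would run on a model's price series (the numbers below are placeholders, not THG data):

    # Illustrative only: measuring how well a modelled price series tracks history.
    # The arrays are placeholders, not The Hill's Group data.
    import numpy as np

    historical = np.array([25.0, 31.0, 54.0, 72.0, 97.0, 61.0, 79.0, 111.0, 109.0, 99.0])
    modelled   = np.array([27.0, 33.0, 51.0, 70.0, 95.0, 64.0, 81.0, 108.0, 110.0, 96.0])

    r = np.corrcoef(historical, modelled)[0, 1]   # Pearson correlation coefficient
    print(round(r, 3))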

Three figures summarize the situation we are in rather well, in my view.

Figure 1 – End Game


For purely thermodynamic reasons net energy delivered to the globalized industrial world (GIW) per barrel by the oil industry (OI) is rapidly trending to zero. By net energy we mean here what the OI delivers to the GIW, essentially in the form of transport fuels, after the energy used by the OI for exploration, production, transport, refining and end products delivery have been deducted.
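Stated as a simple accounting identity (the specific numbers below are illustrative placeholders, not THG values; only the structure of the calculation follows the definition above):

    # Net energy per barrel = gross energy content minus everything the oil industry
    # itself spends to get transport fuels to the GIW. Figures are illustrative only.
    gross_gj_per_barrel = 6.1                # a barrel of crude holds roughly 6 GJ of thermal energy

    oi_energy_spent_gj = {                   # energy spent by the OI per barrel (made-up split)
        "exploration_and_production": 0.9,
        "transport": 0.3,
        "refining": 0.7,
        "end_product_delivery": 0.2,
    }

    net_gj_per_barrel = gross_gj_per_barrel - sum(oi_energy_spent_gj.values())
    print(round(net_gj_per_barrel, 1))       # what is actually delivered net to the GIW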

However, things break down well before reaching “ground zero”; i.e. within 10 years the OI as we know it will have disintegrated. Actually, a number of analysts from entities like Deloitte or Chatham House, reading financial tea leaves, are progressively reaching the same kind of conclusions.[1]

The Oil Age is finishing now, not in a slow, smooth, long slide down from “Peak Oil”, but in a rapid fizzling out of net energy. This is now combining with things like climate change and the global debt issues to generate what I call a “Perfect Storm” big enough to bring the GIW to its knees.

In an Alice world

Under the prevailing paradigm, there is no known way to exit from the Perfect Storm within the emerging time constraint (available time has shrunk by one order of magnitude, from 100 to 10 years). This is where I think that Doomstead Diner’s readers are guessing right. Many readers are no doubt familiar with the so-called “Red Queen” effect illustrated in Figure 2 – to have to run fast to stay put, and even faster to be able to move forward. The OI is fully caught in it.

Figure 2 – Stuck on a one track to nowhere


The top part of Figure 2 highlights that, due to declining net energy per barrel, the OI has to keep running faster and faster (i.e. pumping oil) to keep supplying the GIW with the net energy it requires. What most people miss is that, due to that same rapid decline of net energy per barrel towards nil, the OI can't keep "running" for much more than a few years – e.g. B.W. Hill considers that within 10 years the number of petrol stations in the US will have shrunk by 75%.

What people also neglect, depicted in the bottom part of Figure 2, is what I call the inverse Red Queen effect (1/RQ). Building an alternative whole system takes energy that, to a large extent, initially has to come from the present fossil-fueled system. If the shift takes place too rapidly, the net energy drain literally kills the existing BAU system.[2] The shorter the transition time, the more severe the 1/RQ.

I estimate the maximum viable growth rate for the alternative whole system at about 7% per year. So the growth rates seen for solar and wind, well above 20% and in some cases over 60%, are not viable globally. And the kind of growth rates, in the order of 35%, that would be required for a very short transition under the Perfect Storm timeframe are even less viable. As the last part of Figure 2 suggests, there is a way out by focusing on the current huge energy waste, but presently this is the road not taken.

On the way to Olduvai

In my view, given that nearly everything within the GIW requires transport and that said transport is still about 94% dependent on oil-derived fuels, the rapid fizzling out of net energy from oil must be considered the defining event of the 21st century – it governs the operation of all other energy sources, as well as that of the entire GIW. Therefore the critical parameter to consider is not the absolute amount of oil extracted (as even "peakoilers" do), such as millions of barrels produced per year, but net energy from oil per head of global population, since when this gets too close to nil we must expect complete social breakdown, globally.

The overall picture, as depicted in Figure 3, is that of the "Mother of all Senecas" (to use Ugo's expression). It presents net energy from oil per head of global population.[3] The Olduvai Gorge as a backdrop is a wink to Dr. Richard Duncan's scenario (he used barrels of oil equivalent, which was a mistake) and to stress the dire consequences if we do reach the "bottom of the Gorge": a kind of "postmodern hunter-gatherer" fate.

Oil has been in use for thousands of years, in limited fashion, at locations where it seeped naturally or where small wells could be dug by hand. Oil sands began to be mined industrially in 1745 at Merkwiller-Pechelbronn in northeastern France (the birthplace of Schlumberger). From such very modest beginnings to a peak in the early 1970s, the climb took over 220 years. The fall back to nil will have taken about 50 years.

The amazing economic growth in the three post-WWII decades was actually fueled by a 321% growth in net energy per head. The peak of about 18 GJ/head around 1973 was actually in the order of some 40 GJ/head for those who had access to oil at the time, i.e. the industrialized fraction of the global population.
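A quick consistency check on those per-head numbers, under the simplifying assumption that the non-industrialized majority used close to no oil at the time:

    # If the 1973 global average was ~18 GJ/head while the industrialized fraction
    # saw ~40 GJ/head, and the rest of the world used close to none, the implied
    # share of the global population with real access to oil is:
    global_avg_gj_per_head = 18
    industrialized_gj_per_head = 40
    implied_share = global_avg_gj_per_head / industrialized_gj_per_head
    print(round(implied_share, 2))           # ~0.45, i.e. a bit under half of humanity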

Figure 3 – The “Mother of all Senecas”

In 2012 the OI began to use more energy per barrel in its own processes (from oil exploration to transport fuel deliveries at the petrol stations) than what it delivers net to the GIW. We are now down below 4 GJ/head and dropping fast.

This is what is now actually driving the oil prices: since 2014, through millions of trade transactions (functioning as the “invisible hand” of the markets), the reality is progressively filtering that the GIW can only afford oil prices in proportion to the amount of GDP growth that can be generated by a rapidly shrinking net energy delivered per barrel, which is no longer much. Soon it will be nil. So oil prices are actually on a downtrend towards nil.

To cope, the OI has been cannibalizing itself since 2012. This trend is accelerating but cannot continue for very long. Even mainstream analysts have begun to recognize that the OI is no longer replenishing its reserves. We have entered fire-sale times, as shown by the recent announcements by Saudi Arabia (whose main field, Ghawar, is probably over 90% depleted) that it will sell part of Aramco and make a rapid shift away from near-100% dependence on oil and towards "solar".

Given what Figures 1 to 3 depict, it should be obvious that resuming growth along BAU lines is no longer doable, and that incurring ever more debt that can never be repaid is no longer a solution, not even short-term.

Part 2 – Inquiring into the appropriateness of the question

Let's acknowledge it: the situation we are in is complex. As many commentators like to state, there is still plenty of oil, coal, and gas left "in the ground". Since 2014, debates have been raging concerning the assumed "oil glut", how low oil prices may go, how high prices may rebound as demand possibly picks up and the "glut" vanishes, and, in the face of all this, what may or may not happen regarding "renewables". However, my Part 1 data indicate that most of what's left in terms of fossil fuels is likely to stay where it is, underground, because this is what thermodynamics dictates.

We can now venture a little bit further if we keep firmly in mind that the globalized industrial world (GIW), and by extension all of us, do not “live” on fossil resources but on net energy delivered by the global energy system; and if we also keep in mind that, in this matter, oil-derived transport fuels are the key since, without them, none of the other fossil and nuclear resources can be mobilized and the GIW itself can’t function.

In my experience, most often, when faced with such a broad spectrum of conflicting views, especially involving matters pertaining to physics and the social sciences, the lack of agreement is indicative that the core questions are not well formulated. Physicist David Bohm liked to stress: “In scientific inquiries, a crucial step is to ask the right question. Indeed each question contains presuppositions, largely implicit. If these presuppositions are wrong or confused, the question itself is wrong, in the sense that to try to answer it has no meaning. One has thus to inquire into the appropriateness of the question.”

Here it is important, in terms of system analysis, to differentiate between the global energy industry (GEI) and the GIW. The GEI bears the brunt of thermodynamics directly, and within the GEI, the oil industry (OI) is key since, as seen in Part 1, it is the first to reach the thermodynamic limit of resource extraction, and since it conditions the viability of the GEI's other components – in their present state and within the remaining timeframe, they can't survive the OI's eventual collapse. On the other hand, the GIW is impacted by thermodynamic decline with a lag, in the main because it is buffered by debt – so that by the time the impact of the thermodynamic collapse of the OI becomes undeniable it's too late to do much about it.

At the micro level, debt can be “good” – e.g. a company borrows to expand and then reimburses its debt, etc… At the macro level, it can be, and has now become, lethal, as the global debt can no longer be reimbursed (I estimate the energy equivalent of current global debt, from states, businesses, and households to be in the order of some 10,700 EJ, while current world energy use is in the order of 554 EJ; it is no longer doable to “mind the gap”).
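To make the "mind the gap" remark concrete, a one-line calculation using the author's own estimates:

    # The author's estimates, taken at face value.
    debt_energy_equivalent_ej = 10_700       # energy equivalent of global debt (states, businesses, households)
    annual_world_energy_use_ej = 554         # current world energy use per year
    years_to_cover = debt_energy_equivalent_ej / annual_world_energy_use_ej
    print(round(years_to_cover, 1))          # ~19.3 years of total world energy use just to "repay" the debt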

Crude oil prices are dropping to the floor

Figure 4 – The radar signal for an Oil Pearl Harbor


In brief, the GIW has been living on ever growing total debt since around the time net energy from oil per head peaked in the early 1970s. The 2007-08 crisis was a warning shot. Since 2012, we have entered the last stage of this sad saga – when the OI began to use more energy within its own production chains than what it delivers to the GIW. From this point onwards retrieving the present financial fiat system is no longer doable.

This 2012 point marked a radical shift in price drivers.[4] Figure 4 combines THG's (The Hill's Group's) analyses and mine. In late 2014 I saw the beginning of the oil price crash as a signal on a radar screen. Being well aware that EROIs for oil and gas combined had already passed below the minimum threshold of 10:1, I understood that this crash was different from previous ones: prices were on their way right down to the floor. I then realized that THG had anticipated this trend months earlier, that their analysis was robust, and that it was being corroborated by the market there and then.

Until 2012, the determining price driver was the total energy cost incurred by the OI. Until then the GIW could more or less happily sustain the translation of these costs into high oil prices, around or above $100/bbl. This is no longer the case. Since 2012, the determining oil price driver is what the GIW can afford to pay in order to still be able to generate residual GDP growth (on borrowed time) under the sway of a Red Queen that is running out of thermodynamic “breath”. I call the process we are in an “Oil Pearl Harbor”, taking place in a kind of eerie slow motion. This is no longer retrievable. Within roughly ten years the oil industry as we know it will have disintegrated. The GIW is presently defenseless in the face of this threat.

The Oil Fizzle Dragon-King

Figure 5 – The “Energy Hand”


To illustrate how the GEI works I often compare its energy flows to the five fingers of one hand: all are necessary and all are linked (Figure 5). Under the Red Queen, the GEI is progressively losing its "knuckles" one by one, like a kind of unseen leprosy – unseen so far because of the debt "veil" that hides the progressive losses, and more fundamentally because of what I refer to at the bottom of Figure 5, namely that we are in what I call the Oil Fizzle Dragon-King.

A Dragon-King (DK) is a statistical concept developed by Didier Sornette of the Swiss Federal Institute of Technology, Zurich, and a few others to differentiate high probability and high impact processes and events from Black Swans, i.e. events that are of low probability and high impact. I call it the Oil Fizzle because what is triggering it is the very rapid fizzling out of net energy per barrel. It is a DK, i.e. a high probability, high impact unexpected process, purely because almost none of the decision-making elites is familiar with the thermodynamics of complex systems operating far from equilibrium; nor are they familiar with the actual social workings of the societies they live in. Researchers have been warning about the high likelihood of something like this at least since the works of the Meadows in the early 1970s.[5]

The Oil Fizzle DK is the result of the interaction between this net energy fizzling out, climate change, debt and the full spectrum of ecological and social issues that have been mounting since the early 1970s – as I noted on Figure 1, the Oil Fizzle DK is in the process of whipping up a “Perfect Storm” strong enough to bring the GIW to its knees. The Oil Pearl Harbor marks the Oil Fizzle DK getting into full swing.

To explain this further, with reference to Figure 5, oil represents some 33% of global primary energy use (BP data). Fossil fuels represented some 86% of total primary energy in 2014. However, coal, oil, and gas are not like three boxes neatly set side by side from which energy is supplied magically, as most economists would have it.

In the real world (i.e. outside the world economists live in), energy supply chains form networks, rather complex ones.  For example, it takes electricity to produce many products derived from oil, coal, and gas, while electricity is generated substantially from coal and gas, and so on.  More to the point, as noted earlier, because 94% of all transport is oil-based, oil stands at the root of the entire, complex, globalized set of energy networks.  Coal mining, transport, processing, and use depend substantially on oil-derived transport fuels; ditto for gas.[6]   The same applies to nuclear plants. So the thermodynamic collapse of the oil industry, that is now underway, not only is likely to be completed within some 10 years but is also in the process of triggering a falling domino effect (aka an avalanche, or in systemic terms, a self-organising criticality, a SOC).

Presently, and for the foreseeable future, we do not have substitutes for oil derived transport fuels that can be deployed within the required time frame and that would be affordable to the GIW. In other words, the GIW is falling into a thermodynamic trap, right now. As B. W. Hill recently noted, “The world is now spending $2.3 trillion per year more to produce oil than what is received when it is sold. The world is now losing a great deal of money to maintain its dependence on oil.”

In the longer run, the end effect of the Oil Fizzle DK is likely to be an abrupt decline of GHG emissions.

However, the danger I see is that meanwhile the GEI, and most notably the OI, is not going to just "curl up and die". I think we are in a "die hard" situation. Since 2012, we are already seeing what I call a Big Mad Scramble (BMS) by a wide range of GEI actors that try to keep going while they still can, flying blind into the ground. The eventual outcome is hard to avoid with a GEI operating at only about 12% energy efficiency, i.e. some 88% of current primary energy use is wasted. The GIW's agony is likely to result in a big burst of GHG emissions while net energy fizzles out. The high danger is that the old quip will eventuate on a planetary scale: "the operation was successful but the patient died"… Hence my call for "inquiring into the appropriateness of the question" and for systemic thinking. We are in deep trouble. We can't afford to get this wrong.

Part 3 – Standing slightly past the edge of the cliff

At least since the early 1970s and the Meadows’ work, we have known that the globalized industrial world (GIW) is on a self-destructive path, aka BAU (Business as usual). We now know that we are living through the tail end of this process, the end of the Oil Age, precipitating what I have called the Oil Fizzle Dragon-King, Seneca style, that is, after a slow, relatively smooth climb (aka “economic growth”) we are at the beginning of an abrupt fall down a thermodynamic cliff.

The chief issue is whole system change. This means thinking in whole systems terms where the thermodynamics of complex systems operating far from equilibrium is the key.  Understanding the situation requires moving repeatedly from the particulars, the details, to the whole system, improving our understanding of the whole and from this going back to the particulars, improving our understanding of them, going back to considering the whole, and so on.

Whole system replacement, i.e. going 100% renewable, requires a huge embodied-energy investment that is not feasible along current lines. Having the "Energy Hand" in mind (Figure 5), where does this required energy come from, in a context of sharp decline of net energy from oil and the Red Queen effect, and, for renewables, of the inverse Red Queen/cannibalization effects?

Solely considering the performances and cost of this or that alternative energy technology won’t suffice.  Short of addressing the complexities of whole system replacement, the situation we are in is some kind of “Apocalypse now”.  The chief challenge I see is thus how to shift safely, with minimal loss of life (substantial loss of life there will be; this has become unavoidable), from fossil-BAU (and nuclear) …

We currently have some 17 TW of power installed globally (mostly fossil with some nuclear), i.e. about 2.3 kW/head, but with some 4 billion people who at best are grossly energy stressed, many of whom have no access to electricity at all and only limited transport, in a context of an overall efficiency of global energy systems in the order of 12%.[9]

Going “green” and surviving it (i.e. avoiding the inverse Red Queen effect) means increasing our Energy Hand from 17 TW to 50 TW (as a rough order of magnitude), with efficiencies shifting from 12% to over 80%.
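The orders of magnitude above are easy to verify (the population figure of roughly 7.4 billion is an assumption for the time of writing; the rest comes from the text):

    # Rough check of the installed-power and useful-power figures above.
    installed_tw = 17
    population = 7.4e9                                   # assumed world population at time of writing
    kw_per_head = installed_tw * 1e12 / population / 1e3 # ~2.3 kW per head

    useful_now_tw = installed_tw * 0.12                  # ~2 TW of useful power at 12% whole-system efficiency
    useful_target_tw = 50 * 0.80                         # ~40 TW useful under the 50 TW / 80% target

    print(round(kw_per_head, 1), round(useful_now_tw, 1), round(useful_target_tw, 1))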

It should be clear that under this predicament something would have to give: either some of us would get even more energy stressed and die, or, as the Chinese and Indians have been doing, we would use much more of the remaining fossil resources, but then this would accelerate global warming and many other nasties.

Whole system replacement (in a "do or die" mode) requires considering whole production chain networks, from mining the ores, through making the metals, cement, etc., to making the machines, to using them to produce the stuff we require to go 100% sustainable. Given the very short time window, we can't afford to get it wrong about how we might get out of this – we have hardly enough time for one go at it.

Remaining time frame

We no longer have 35 years (say, up to around 2050). We have at best 10 years, not to debate and agonize but to actually do, with the next three years being key. The thermodynamics of this, summarized in Part 1, is rock hard. This timeframe, combined with the Oil Pearl Harbor challenge and the inverse Red Queen constraints, means in my view that none of the current "doings" renewable-wise can cut it.

Weak links

Notwithstanding its apparent power, the GIW is in fact extremely fragile.  It embodies a number of very weak links in its networks.  I have highlighted the oil issue, an issue that defines the overall time frame for dealing with “Apocalypse now”.  In addition to that and to climate change, there are a few other challenges that have been variously put forward by a range of researchers in recent years, such as fresh water availability, massive soil degradation, trace pollutants, degradation of life in oceans (about 99% of life is aquatic), staple food threats (e.g. black stem rust, wheat blast, ground level ozone, etc.), loss of biodiversity and 6th mass extinction, all the way to Joseph Tainter’s work concerning the links between energy flows, power (in TW), complexity and overshoot to collapse.[11]  

These weak links are currently in the process of breaking or are about to break, the breaks forming a self-reinforcing avalanche (SOC) or Perfect Storm.  All have the same key time-frame of about 10 years as an order of magnitude for acting.  All require a fair “whack” of energy as a prerequisite to handling them (the “whack” being a flexible and elastic unit of something substantial that usually one does not have).

Cognitive failure

The “Brexit” saga is perhaps the latest large-scale demonstration of cognitive failure in a very long series.  That is to say, the failure on the part of decision-making elites to make use of available knowledge, experience, and expertise to tackle effectively challenges within the time-frame required to do so.

Cognitive failure is probably most blatant, but largely remaining unseen, concerning energy, the Oil Fizzle DK and matters of energy returns on energy investments (EROI or EROEI).  What we can observe is a triple failure of BAU, but also of most current “green” alternatives (Figure 7): (1) the BAU development trajectory since the 1950s failed; (2) there has been a failure to take heed of over 40 years of warnings; and (3) there has been a failure to develop viable alternatives.

Figure 8 – The necessity of very high EROIs

  • 1.1:1    With an EROI of 1.1:1 at the production well, we can pump oil out and look at it… that's all; there is no spare energy to do anything else with it
  • 1.2:1    We can refine crude oil into diesel fuel… and that's all
  • 1.3:1    We can dispatch the diesel to a service station… and that's all
  • 3:1       We can run a truck with it, with enough spare energy to build and maintain the truck, roads, and bridges… and that's all
  • 5:1       We can put something in the truck and deliver it… and that's all
  • 8:1       We can provide a living to the oil field worker, the refinery worker, the truck driver, and the farmer… and that's all
  • 10:1     You may have minimal health care and some education… and that's all
  • 20:1     You may have the basic set of consumer items such as refrigerators, stoves, radios, TVs, and a small car… and that's all
  • 30:1     Or higher: you can have a prosperous lifestyle and the spare energy to deal with ecological issues and to invest in a secure energy future

This list is expanded from similar attempts by Jessica Lambert et al., to highlight what sliding down the thermodynamic cliff entails. Charles Hall has shown that a production EROI of 10:1 corresponds roughly to an end-user EROI of 3.3:1, and is the bare minimum for an industrial society to function.[15] In sociological terms, for 10:1 think of North Korea. As shown on Figure 7, I currently know of no alternative, whether unconventional fossil-based, nuclear, or "green" technology, with a production EROI (i.e. equivalent to the wellhead EROI for oil) above 20:1; most remain below 10:1. I do think it feasible to get back above 30:1, in a 100% sustainable fashion, but not along prevalent modes of technology development, social organization, and decision-making.
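As a side note on the arithmetic: the fraction of delivered energy that is surplus follows the standard relation net fraction = 1 - 1/EROI; the sketch below uses that relation, plus an assumed factor-of-three increase in invested energy along the delivery chain, chosen only to reproduce Hall's quoted 10:1 production to roughly 3.3:1 end-user correspondence:

    # Net (surplus) fraction of delivered energy as a function of EROI.
    # net_fraction = 1 - 1/EROI is the standard relation; the factor-of-three
    # increase in invested energy along the delivery chain is an assumption
    # chosen to match Hall's 10:1 (production) -> ~3.3:1 (end-user) figure.
    def net_fraction(eroi):
        return 1.0 - 1.0 / eroi

    wellhead_eroi = 10.0
    end_user_eroi = wellhead_eroi / 3.0      # refining, transport, delivery assumed to triple invested energy

    for label, eroi in [("wellhead", wellhead_eroi), ("end-user", end_user_eroi)]:
        print(label, round(eroi, 1), round(net_fraction(eroi), 2))
    # wellhead 10.0 0.9  -> 90% of the energy delivered at the wellhead is surplus
    # end-user 3.3 0.7   -> ~70% surplus once the whole chain to the end-user is counted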

We are in an unprecedented situation.  As stressed by Tainter, no previous civilization has ever managed to survive the kind of predicament we are in.  However, the people living in those civilizations were mostly rural and had a safety net, in that their energy source was 100% solar, photosynthesis for food, fiber and timber – they always could keep going even though it may have been under harsh conditions.  We no longer have such a safety net; our entire food systems are almost completely dependent on the net energy from oil that is in the process of dropping to the floor and our food supply systems cannot cope without it.

Arnoux responds to readers comments:

It is important not to confuse EROI (or EROEI) at the wellhead with that of the whole system up to the end-users. The Hill's Group people have shown that the EROEI as defined by them passed below the critical viability level of 10:1 around 2010 and that, along current dynamics, by circa 2030 it will be about 6.89:1, by which time no net energy per barrel will reach end-users (assuming there is still an oil industry at that point, which a number of us consider most unlikely, at least not the oil industry as we presently know it). Net energy here means what is available to end-users, typically to go from A to B, the energy lost as waste heat (2nd principle) and the energy used by the oil industry having been fully deducted – as such it cannot be directly linked in reverse to evaluate an EROI.

We are considering the whole system, from oil exploration to end-users. The point is that, relative to the early stages in the development of the oil industry, the total energy cost of producing the energy reaching end-users has been increasing steadily, barrel after barrel, and we are now getting close to a point where no significant energy will reach end-users. We expect that the industry will break down well before this critical point is reached.

The idea of collapse remains taboo in numerous circles and understandably is rather unpalatable. However, increasingly the awareness of the dangers appears to be progressing rapidly, all the way notably among very wealthy people who now constitute a booming market segment for underground luxury bunkers where, as the marketing goes, they could survive 5 years without going back to the surface in case of heavy turmoil…

In energy matters inequality is prevalent. Some regions are likely to retain access to residual net energy from oil longer than others, and to the detriment of others, and this isn't shaping up as a nice and smooth affair. Prof. Michael Klare has spoken of a global "30 Year War" (Klare, Michael, 2011, "The New Thirty Years War", in European Energy Review, 5 September). However, war requires a lot of oil-based energy, so war is likely to accelerate thermodynamic collapse dynamics. For example, in the Middle East a number of researchers have noted the contribution of years of drought, and of the displacement of about 1 million farmers to Syrian cities, to the present tragedy. However, few realize that another factor contributing to turmoil in the region is the competition between two sets of pipeline projects and related political and military interests, one focused on Iran and the other on KSA, to link those areas to the Mediterranean. It is not possible to read through a crystal ball at the regional level. It is likely that if mistakes can be made and atrocities committed, they will take place… All in all, however, I tend to agree with B.W. Hill that globally the tail end of the Oil Fizzle process is most unlikely to extend beyond 2030.

You ask "how are they to be convinced to abandon their investments prior to catastrophic collapse?" It's clear to me that they are not going to be convinced, that there is no point in trying, and above all that there is no time left to do so. I have come to think that those who cling to BAU for dear life do not have much prospect of lasting long, simply because they are no longer within a viable thermodynamic space. On the other hand, there are millions currently innovating and doing their utter best to stay or come back within such a space. They do so mostly flying blind, mostly without enquiring into the appropriateness of the questions they ask, which makes their lives a lot harder and riskier. As a result many will end up outside the viable space and vanish; however, given the numbers, I think that statistically quite a number will manage to live within that space and evolve new ways, probably enough for one or more new kind(s) of civilization(s).

For over a century the ratio of gold to oil has remained in a narrow range of 1g to 6g of gold per barrel of sweet crude. Gold being an age-old monetary means that goes by weight and is not subject to inflation and other vagaries, it can be used as a fixed metric not amenable to much manipulation (as fiat currencies and price indices are). This ratio is presently close to 1.04 g/bbl. However, as we have seen, the GIW does not "live" on crude but on net energy from crude, essentially in the form of transport fuels. Currently the net energy that reaches end-users is about 16% of the gross energy in an average barrel of sweet crude (it was about 70% in 1920). This gives a present shadow price of about US$277/bbl (or 6.5 g of gold/bbl), a highly unpalatable figure for the GIW's operations. Of course, as net energy keeps dropping, a time will come, very soon, when after a burst the shadow price also drops to the floor (a value of x times zero equals zero). Put in other words, gold and oil have begun to diverge since 2014. All currencies have been dropping against gold since 1971. The stable gold-oil relationship is breaking down because the fundamental was never the crude barrel itself but the amount of net energy able to "power growth"; since 2012 this has been fizzling out.

I am saying that when 1 barrel of sweet crude is traded at US$44 (actually, as I write, it's at about $43 and a bit), the GIW has access to only 16% of the energy it contains, so the net financial impact for the GIW as a whole is, yes, $277/bbl equivalent. The GIW can't make money with the full barrel, only the 16% residual, so it all happens as if it were attempting to "grow" at a basic cost of $277/bbl, which these days is quite a challenge. Even adjusting for inflation, at the time of the 1978-79 crisis (based on BP inflation-adjusted price data), with some 56% net energy available to end-users, the shadow price was around US$188/bbl equivalent, and back then the situation was dire. In New Zealand we had carless days… So now at $277/bbl? The main difference I see is that now the GIW lives fully on debt, with central banks "printing money" like there is no tomorrow, which is probably correct – there is no tomorrow for the GIW in this fashion. We are at the stage where thermodynamics comes home to roost.
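A minimal sketch of the shadow-price arithmetic used above (the 1978-79 inflation-adjusted price of roughly $105/bbl is an assumption back-solved from the $188 figure; the other inputs are quoted in the text):

    # "Shadow price": the effective cost per barrel once only the net-energy
    # fraction of the barrel can be used to generate economic activity.
    def shadow_price(market_price_per_bbl, net_energy_fraction):
        return market_price_per_bbl / net_energy_fraction

    print(round(shadow_price(44.0, 0.16)))    # ~275 $/bbl today, matching the text's ~$277 figure
    print(round(shadow_price(105.0, 0.56)))   # ~188 $/bbl equivalent for 1978-79 (assumed ~$105/bbl adjusted)
    print(round(1.04 / 0.16, 1))              # ~6.5 g of gold per barrel on the same basis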

In practice, no one but businesses from the oil industry buys oil. End-users buy transport fuels, plastics, etc. Now, in the main, transport fuels are used to generate economic activity. No one can generate as much economic activity per barrel now, with only 16% net energy available to do so, as in, say, 1920, when about 70% net energy was available. So after quite a bit of speculation up and down by traders, who by and large have not a clue about what is going on, the price of crude progressively adjusts in proportion to the economic activity that can be generated downstream. The globalised industrial world (GIW), taken as a whole, cannot afford to pay more for its fuel than the amount of economic "growth" it can generate with that fuel, not for long anyway. The consequence, however, is that the GIW decelerates in proportion, which is what we are observing.

 

References

[1] See for example, Stevens, Paul, 2016, International Oil Companies: The Death of the Old Business Model, Research Paper, Energy, Environment and Resources, Chatham House; England, John W., 2016, Short of capital? Risk of underinvestment in oil and gas is amplified by competing cash priorities, Deloitte Center for Energy Solutions, Deloitte LLP. The Bank of England recently commented: "The embattled crude oil and natural gas industry worldwide has slashed capital spending to a point below the minimum required levels to replace reserves — replacement of proved reserves in the past constituted about 80 percent of the industry's spending; however, the industry has slashed its capital spending by a total of about 50 percent in 2015 and 2016. According to Deloitte's new study [referred to above], this underinvestment will quickly deplete the future availability of reserves and production."

[2] This effect is also referred to as "cannibalizing". See for example, J. M. Pearce, 2009, Optimising Greenhouse Gas Mitigation Strategies to Suppress Energy Cannibalism, 2nd Climate Change Technology Conference, May 12-15, Hamilton, Ontario, Canada. However, in the oil industry, and more generally the mining industry, cannibalism usually refers to what companies do when they are reaching the end of exploitable reserves: they cut down on maintenance, sell assets at a discount, or acquire assets from companies gone bankrupt, in order to try to survive a bit longer. Presently there is much asset disposal going on in the shale oil and gas patches, and likewise among majors: Lukoil, BP, Shell, Chevron, etc. Between spending cuts and asset disposals, the amounts involved are in the $1 to $2 trillion range.

[3] This graph is based on THG’s net energy data, BP oil production data and UN demographic data.

[4] As THG have conclusively clarified, see http://www.thehillsgroup.org/depletion2_022.htm.

[5] The Meadows’ original work has been amply corroborated over the ensuing decades. See for example, Donella Meadows, Jorgen Randers, and Dennis Meadows, 2004, A Synopsis: Limits to Growth: The 30-Year Update, The Donella Meadows Institute; Turner, Graham, 2008, A Comparison of the Limits to Growth with Thirty Years of Reality, Socio-Economics and the Environment in Discussion, CSIRO Working Paper Series 2008-09; Hall, Charles A. S. and Day, John W, Jr, 2009, “Revisiting the Limits to Growth After Peak Oil” in American Scientist, May-June; Vuuren, D.P. van and Faber, Albert, 2009, Growing within Limits, A Report to the Global Assembly 2009 of the Club of Rome, Netherlands Environmental Assessment Agency; and Turner, Graham, M., 2014, Is Global Collapse Imminent? An Updated Comparison of The Limits to Growth with Historical Data, MSSI Research Paper No. 4, Melbourne Sustainable Society Institute, The University of Melbourne.

[6] Although there is a drive to use more and more liquefied natural gas for gas tankers and ordinary ship fuel bunkering.

[7] Dellingpole, James, 2013, “The dirty secret of Britain’s power madness: Polluting diesel generators built in secret by foreign companies to kick in when there’s no wind for turbines – and other insane but true eco-scandals”, in The Daily Mail, 13 July.

[8] As another example, Axel Kleidon has shown that extracting energy from wind (as well as from waves and ocean currents) on any large scale would have the effect of reducing overall free energy usable by humankind (free in the thermodynamic sense, due to the high entropy levels that these technologies do generate, and as opposed to the direct harvesting of solar energy through photosynthesis, photovoltaics and thermal solar, that instead do increase the total free energy available to humankind) – see Kleidon, Axel, 2012, How does the earth system generate and maintain thermodynamic disequilibrium and what does it imply for the future of the planet?, Max Planck Institute for Biogeochemistry, published in Philosophical Transaction of the Royal Society A,  370, doi: 10.1098/rsta.2011.0316.

[9] E.g. Murray and King, Nature, 2012.

[10] This label is a wink to the Sea People who got embroiled in the abrupt end of the Bronze Age some 3,200 years ago, in that same part of the world currently bitterly embroiled in atrocious fighting and terrorism, aka MENA.

[11] Tainter, Joseph, 1988, The Collapse of Complex Societies, Cambridge University Press; Tainter, Joseph A., 1996, “Complexity, Problem Solving, and Sustainable Societies”, in Getting Down to Earth: Practical Applications of Ecological Economics, Island Press, and Tainter, Joseph A. and Crumley, Carole, “Climate, Complexity and Problem Solving in the Roman Empire” (p. 63), in Costanza, Robert, Graumlich, Lisa J., and Steffen, Will, editors, 2007, Sustainability or Collapse, an Integrated History and Future of People on Earth, The MIT Press, Cambridge, Massachusetts and London, U.K., in cooperation with Dahlem University Press.

[12] See for example Armour, Kyle, 2016, “Climate sensitivity on the rise”, www.nature.com/natureclimatechange, 27 June.

[13] For a good overview, see Spratt, David, 2016, Climate Reality Check, March.

[14] For example, Jacobson, Mark M. and Delucchi, Mark A., 2009, “A path to Sustainability by 2030”, in Scientific American, November.

[15] Hall, Charles A. S. and Klitgaard, Kent A., 2012, Energy and the Wealth of Nations, Springer; Hall, Charles A. S., Balogh, Stephen, and Murphy, David J. R., 2009, “What is the Minimum EROI that a Sustainable Society Must Have?” in Energies, 2, 25-47; doi:10.3390/en20100025. See also Murphy, David J., 2014, “The implications of the declining energy return on investment of oil production” in Philosophical Transaction of the Royal Society A, 372: 20130126, http://dx.doi.org/10.1098/rsta.2013.0126.

[16] Joseph Tainter, 2011, “Energy, complexity, and sustainability: A historical perspective”, Environmental Innovation and Societal Transitions, Elsevier


Peak Uranium by Ugo Bardi from Extracted: How the Quest for Mineral Wealth Is Plundering the Planet

Figure 1. Cumulative uranium consumption by IPCC model, 2015-2100, versus measured and inferred uranium resources

[ Figure 1 shows that the next IPCC report counts very much on nuclear power to keep warming below 2.5°C. The black line represents how many million tonnes of reasonably assured and inferred resources under $260 per kg remain (2016 IAEA Red Book). Clearly most of the IPCC models are unrealistic. The IPCC greatly exaggerates the amount of oil and coal reserves as well. Source: David Hughes (private communication)

This is an extract of Ugo Bardi’s must read “Extracted” about the limits of production of uranium.

Many well-meaning citizens favor nuclear power because it doesn't emit greenhouse gases. The problem is that the Achilles heel of civilization is our dependency on trucks of all kinds, which run on diesel fuel, because diesel engines transformed our civilization with their ability to do heavy work better than steam, gasoline, or any other kind of engine. Trucks are required to keep the supply chains going that every person and business on earth depends on, from food to the materials and construction of the roads they run on, as well as mining, agriculture, construction, and logging.

Nuclear power plants are not a solution, since trucks can't run on electricity, so anything that generates electricity is not a solution; nor is it likely that the electric grid can ever be 100% renewable (read "When Trucks Stop Running"; this can't be explained in a sound-bite). And we certainly aren't going to be able to replace a billion trucks and pieces of equipment with diesel engines by the time the energy crunch hits; there is nothing else to replace them with.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report ]

Bardi, Ugo. 2014. Extracted: How the Quest for Mineral Wealth Is Plundering the Planet. Chelsea Green Publishing.

Although there is a rebirth of interest in nuclear energy, there is still a basic problem: uranium is a mineral resource that exists in finite amounts.

Even as early as the 1950s it was clear that the known uranium resources were not sufficient to fuel the “atomic age” for a period longer than a few decades.

That gave rise to the idea of “breeding” fissile plutonium fuel from the more abundant, non-fissile isotope 238 of uranium. It was a very ambitious idea: fuel the industrial system with an element that doesn’t exist in measurable amounts on Earth but would be created by humans expressly for their own purposes. The concept gave rise to dreams of a plutonium-based economy. This ambitious plan was never really put into practice, though, at least not in the form that was envisioned in the 1950s and ’60s. Several attempts were made to build breeder reactors in the 1970s, but the technology was found to be expensive, difficult to manage, and prone to failure. Besides, it posed unsolvable strategic problems in terms of the proliferation of fissile materials that could be used to build atomic weapons. The idea was thoroughly abandoned in the 1970s, when the US Senate enacted a law that forbade the reprocessing of spent nuclear fuel.

A similar fate was encountered by another idea that involved "breeding" a nuclear fuel from a naturally existing element—thorium. The concept involved transforming the 232 isotope of thorium into the fissile 233 isotope of uranium, which then could be used as fuel for a nuclear reactor (or for nuclear warheads). The idea was discussed at length during the heyday of the nuclear industry, and it is still discussed today; but so far nothing has come of it, and the nuclear industry is still based on mineral uranium as fuel.

Today, the production of uranium from mines is insufficient to fuel the existing nuclear reactors. The gap between supply and demand for mineral uranium was as large as almost 50% from 1995 to 2005, though it has gradually narrowed over the past few years.

The U.S. mined 370,000 metric tons over the past 50 years, with production peaking in 1981 at 17,000 tons/year. Europe peaked in the 1990s after extracting 460,000 tons. Today nearly all of the 21,000 tons/year needed to keep European nuclear plants operating is imported.

The European mining cycle allows us to determine how much of the originally estimated uranium reserves could be extracted versus what actually happened before it cost too much to continue. Remarkably in all countries where mining has stopped it did so at well below initial estimates (50 to 70%). Therefore it’s likely ultimate production in South Africa and the United States can be predicted as well.

Table 1. Originally estimated uranium reserves versus actual extraction before mining stopped, for completed European mining cycles.

The Soviet Union and Canada each mined 450,000 tons. By 2010, global cumulative production was 2.5 million tons. Of this, 2 million tons had been used, and the military held most of the remaining half a million tons.

The most recent data available show that mineral uranium accounts now for about 80% of the demand.  The gap is filled by uranium recovered from the stockpiles of the military industry and from the dismantling of old nuclear warheads.

This turning of swords into plows is surely a good idea, but old nuclear weapons and military stocks are a finite resource and cannot be seen as a definitive solution to the problem of insufficient supply. With the present stasis in uranium demand, it is possible that the production gap will be closed in a decade or so by increased mineral production. However, prospects are uncertain, as explained in “The End of Cheap Uranium.” In particular, if nuclear energy were to see a worldwide expansion, it is hard to see how mineral production could satisfy the increasing uranium demand, given the gigantic investments that would be needed, which are unlikely to be possible in the present economically challenging times.

At the same time, the effects of the 2011 incident at the Fukushima nuclear power plant are likely to negatively affect the prospects of growth for nuclear energy production, and with the concomitant reduced demand for uranium, the surviving reactors may have sufficient fuel to remain in operation for several decades.

It's true that there are large quantities of uranium in the Earth's crust, but there are a limited number of deposits concentrated enough to be profitably mined. If we tried to extract those less concentrated deposits, the mining process would require far more energy than the mined uranium could ultimately produce [negative EROI].

Modeling Future Uranium Supplies


Table 2. Uranium supply and demand to 2030

 

Michael Dittmar used historical data for countries and single mines to create a model that projects how much uranium will likely be extracted from existing reserves in the years to come. The model is purely empirical and is based on the assumption that mining companies, when planning the extraction profile of a deposit, plan their operations to coincide with the average lifetime of the expensive equipment and infrastructure it takes to mine uranium—about a decade.

Gradually the extraction becomes more expensive as some equipment has to be replaced and the least costly resources are mined. As a consequence, both extraction and profits decline. Eventually the company stops exploiting the deposit and the mine closes. The model depends on both geological and economic constraints, but the fact that it has turned out to be valid for so many past cases shows that it is a good approximation of reality.

This said, the model assumes the following points:

  • Mine operators plan to operate the mine at a nearly constant production level on the basis of detailed geological studies and to manage extraction so that the plateau can be sustained for approximately 10 years.
  • The total amount of extractable uranium is approximately the achieved (or planned) annual plateau value multiplied by 10.

Applying this model to well-documented mines in Canada and Australia, we arrive at amazingly correct results. For instance, in one case, the model predicted a total production of 319 ± 24 kilotons, which was very close to the 310 kilotons actually produced. So we can be reasonably confident that it can be applied to today’s larger currently operating and planned uranium mines. Considering that the achieved plateau production from past operations was usually smaller than the one planned, this model probably overestimates the future production.
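The model's core rule is simple enough to sketch in a few lines (the ten-year plateau assumption is the one stated in the bullets above; the example plateau value is illustrative, chosen only to echo the Canada/Australia back-test):

    # Dittmar-style estimate: total extractable uranium ≈ plateau production × ~10 years.
    PLATEAU_YEARS = 10                        # assumed productive plateau lifetime of a mine

    def estimated_total_extraction(plateau_ktons_per_year, plateau_years=PLATEAU_YEARS):
        """Total expected extraction (kilotons) from a mine's planned plateau output."""
        return plateau_ktons_per_year * plateau_years

    # Illustrative: a combined plateau of ~32 kt/year implies ~320 kt in total,
    # in the neighborhood of the 319 ± 24 kt predicted and the 310 kt actually produced.
    print(estimated_total_extraction(32.0))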

Table 2 summarizes the model’s predictions for future uranium production, comparing those findings against forecasts from other groups and against two different potential future nuclear scenarios.

As you can see, the forecasts obtained by this model indicate substantial supply constraints in the coming decades—a considerably different picture from that presented by the other models, which predict larger supplies.

The WNA’s 2009 forecast differs from our model mainly by assuming that existing and future mines will have a lifetime of at least 20 years. As a result, the WNA predicts a production peak of 85 kilotons/year around the year 2025, about 10 years later than in the present model, followed by a steep decline to about 70 kilotons/year in 2030. Despite being relatively optimistic, the forecast by the WNA shows that the uranium production in 2030 would not be higher than it is now. In any case, the long deposit lifetime in the WNA model is inconsistent with the data from past uranium mines. The 2006 estimate from the EWG was based on the Red Book 2005 RAR (reasonably assured resources) and IR (inferred resources) numbers. The EWG calculated an upper production limit based on the assumption that extraction can be increased according to demand until half of the RAR or at most half of the sum of the RAR and IR resources are used. That led the group to estimate a production peak around the year 2025.

Assuming all planned uranium mines are opened, annual mining will increase from 54 thousand tons/year to a maximum of 58 ± 4 thousand tons/year in 2015. [ Bardi wrote this before the 2013 and 2014 figures were known: 2013 production was 59,673 tons (the highest total) and 2014 was 56,252 tons. ]

Declining uranium production will make it impossible to obtain a significant increase in electrical power from nuclear plants in the coming decades.


Ward-Perkins “The Fall of Rome: And the End of Civilization”

[ This is a book review of Ward-Perkins “The Fall of Rome: And the End of Civilization“.

What sparked my interest in reading several books on the decline of Rome was when James Howard Kunstler (KunstlerCast 278) interviewed me about my book “When Trucks Stop Running” and asked whether I thought there’d be mass migrations at some point of energy decline as supply chains broke. This was certainly one of the many reasons civilizations fell in 1177 B.C., and our supply chains are far more complex, global, and fragile than they were back then. So what happened to people after the Roman Empire fell? The population of Rome dropped from 1 million people to just 10,000 – where did they go?

One of my favorite books in high school was Gibbon’s “Decline and Fall of the Roman Empire”, and it turns out there’s been a tremendous amount of scholarship since then. Peter Turchin finds patterns in the rise and fall of nations going back 5,000 years to Mesopotamia, including Rome. Montgomery’s book “Dirt: The Erosion of Civilizations” makes the case that loss of topsoil is the main reason, or one of the main reasons, civilizations have fallen, and Perlin’s “A Forest Journey” makes the case that civilizations fell due to deforestation. This doesn’t contradict Montgomery’s topsoil theory, since deforestation is a large factor in topsoil loss.

The Roman Empire lost topsoil and deforested its lands, but evaded this fate for a time by making Carthage and Egypt pay tribute with massive shipments of food. Carthage was the main source of food for Rome, so when it fell to the barbarians, this was the beginning of the end.

Although I’m still trying to find a book that explains exactly what happened to Rome’s citizens, I suspect famine and disease from broken supply chains played a larger role in its decline than mass migrations.

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report ]

Bryan Ward-Perkins. 2006. The Fall of Rome: And the End of Civilization. Oxford University Press.

Notes from this book follow:

The Germanic invaders of the western empire seized or extorted through the threat of force the vast majority of the territories in which they settled, without any formal agreement on how to share resources with their new Roman subjects. The impression given by some recent historians that most Roman territory was formally ceded to them as part of treaty arrangements is quite simply wrong. Evidence shows that conquest or surrender to the threat of force was definitely the norm, not peaceful settlement.

The city of Rome was repeatedly besieged by the Goths before being captured and sacked over a 3-day period in August 410. During one siege the inhabitants were forced to progressively reduce their rations, eating only half the previous daily allowance and later, as scarcity continued, only a third. When there was no means of relief and their food was exhausted, plague not unexpectedly succeeded famine. Corpses lay everywhere. The eventual fall of the city, according to another account, occurred because a rich lady ‘felt pity for the Romans who were being killed off by starvation and who were already turning to cannibalism’, and so opened the gates to the enemy.

Unsurprisingly, the defeats and disasters of the first half of the 5th century shocked the Roman world. This reaction can be charted most fully in the perplexed response of Christian writers to some obvious and awkward questions. Why had God, so soon after the suppression of the public pagan cults (in 391), unleashed the scourge of the barbarians on a Christian empire; and why did the horrors of invasion afflict the just as harshly as they did the unjust? The scale of the literary response to these difficult questions, the tragic realities that lay behind it, and the ingenious nature of some of the answers that were produced, are all worth examining in detail. They show very clearly that the fifth century was a time of real crisis, rather than one of accommodation and peaceful adjustment. It was an early drama in the West, the capture of the city of Rome itself in 410, that created the greatest shock waves within the Roman world. In military terms, and in terms of lost resources, this event was of very little consequence, and it certainly did not spell the immediate end of west Roman power.

The pagans now, not unreasonably, attributed Roman failure to the abandonment by the State of the empire’s traditional gods, who for centuries had provided so much security and success. The most sophisticated, radical, and influential answer to this problem was that offered by Augustine, who in 413 (initially in direct response to the sack of Rome) began his monumental City of God. Here he successfully sidestepped the entire problem of the failure of the Christian empire by arguing that all human affairs are flawed, and that a true Christian is really a citizen of Heaven. Abandoning centuries of Roman pride in their divinely ordained state (including Christian pride during the 4th century), Augustine argued that, in the grand perspective of Eternity, a minor event like the sack of Rome paled into insignificance.

Most resorted to what rapidly became Christian platitudes in the face of disaster.  In a similar vein and also in early 5th-century Gaul, Orientius of Auch confronted the difficult reality that good Christian men and women were suffering unmerited and violent deaths. Not unreasonably, he blamed mankind for turning God’s gifts, such as fire and iron, to warlike and destructive ends.

Roman military dominance over the Germanic peoples was considerable, but never absolute and unshakeable. The Romans had always enjoyed a number of important advantages: they had well-built and imposing fortifications; factory-made weapons that were both standardized and of a high quality; an impressive infrastructure of roads and harbors; the logistical organization necessary to supply their army, whether at base or on campaign; and a tradition of training that ensured disciplined and coordinated action in battle, even in the face of adversity. Furthermore, Roman mastery of the sea, at least in the Mediterranean, was unchallenged and a vital aspect of supply. It was these sophistications, rather than weight of numbers, that created and defended the empire.

These advantages were still considerable in the 4th century. In particular, the Germanic peoples remained innocents at sea (with the important exception of the Anglo-Saxons in the north), and notorious for their inability to mount successful siege warfare. Consequently, small bands of Romans were able to hold out behind fortifications, even against vastly superior numbers, and the empire could maintain its presence in an area even after the surrounding countryside had been completely overrun.

The Alamans were physically stronger and swifter; Roman soldiers, through long training, more ready to obey orders. The enemy were fierce and impetuous; Roman men quiet and cautious, putting trust in their minds while barbarians trusted in their huge bodies. At Strasbourg discipline, tactics, and equipment triumphed over mere brawn.

However, even at the best of times, the edge that the Romans enjoyed over their enemies, through their superior equipment and organization, was never remotely comparable to that of Europeans in the 19th century using rifles, Gatling and Maxim guns against peoples armed mainly with spears. Consequently, although normally the Romans defeated barbarians when they met them in battle, they could and did occasionally suffer disasters. Even at the height of the empire’s success, in AD 9, three whole legions under the command of Quinctilius Varus, along with a host of auxiliaries, were trapped and slaughtered by tribesmen in north Germany. Some 20,000 men died.

The West was lost mainly through failure to engage the invading forces successfully and to drive them back. This caution in the face of the enemy, and the ultimate failure to drive him out, are best explained by the severe problems that there were in putting together armies large enough to feel confident of victory. Avoiding battle led to a slow attrition of the Roman position, but engaging the enemy on a large scale would have risked immediate disaster on a single throw of the dice. Did the invaders push at the doors of a tottering edifice, or did they burst into a venerable but still solid structure? Because the rise and fall of great powers have always been of interest, this issue has been endlessly debated. Famously, Edward Gibbon, inspired by the secularist thinking of the Enlightenment, blamed Rome’s fall in part on the 4th-century triumph of Christianity and the spread of monasticism: “a large portion of public and private wealth was consecrated to the specious demands of charity and devotion; and the soldiers’ pay was lavished on the useless multitudes of both sexes, who could only plead the merits of abstinence and chastity.”

Gibbon’s ideas about the damaging effects of Christianity were fiercely contested at the time, then fell into abeyance. In the 19th and early 20th centuries, the fall of Rome tended to be explained in terms of the grand theories of racial degeneration or class conflict that were then current. But in 1964 the pernicious influence of the Church was given a new lease of life by the then doyen of late Roman studies, A. H. M. Jones. Under the wonderful heading ‘Idle Mouths’, Jones lambasted the economically unproductive citizens of the late empire (aristocrats, civil servants, and churchmen): “the Christian church imposed a new class of idle mouths on the resources of the empire … a large number lived on the alms of the peasantry, and as time went on more and more monasteries acquired landed endowments which enabled their inmates to devote themselves entirely to their spiritual duties.”

In my opinion, the key internal element in Rome’s success or failure was the economic well-being of its taxpayers. This was because the empire relied for its security on a professional army, which in turn relied on adequate funding. The 4th-century Roman army contained as many as 600,000 soldiers, all of whom had to be salaried, equipped, and supplied. The number of troops under arms, and the levels of military training and equipment that could be lavished on them, were all determined by the amount of cash that was available. As in a modern state, the contribution in tax of tens of millions of unarmed subjects financed an elite defense corps of full-time fighters. Consequently, again as in a modern state, the strength of the army was closely linked to the well-being of the underlying tax base. Indeed, in Roman times this relationship was a great deal closer than it is today. Military expenditure was by far the largest item in the imperial budget, and there were no other massive departments of state, such as ‘Health’ or ‘Education’, whose spending could be cut when necessary in order to protect ‘Defense’; nor did the credit mechanisms exist in Antiquity that would have allowed the empire to borrow substantial sums of money in an emergency. Military capability relied on immediate access to taxable wealth.

Invasions were not the only problem faced by the western empire; it was also badly affected during parts of the 5th century by civil war and social unrest.

We know that what the empire required during these years was a concerted and united effort against the Goths (then marching through much of Italy and southern Gaul, and sacking Rome itself in 410), and against the Vandals, Sueves, and Alans (who entered Gaul at the very end of 406 and Spain in 409). What it got instead were civil wars, which were often prioritized over the struggle with the barbarians.

As we have seen, the revolts by the Bacaudae in the West can partly be understood as an attempt by desperate provincials to defend themselves, after the central government had failed to protect them. Roman civilians had to relearn the arts of war in this period, and slowly did so. As early as 407-8 two wealthy landowners in Spain raised a force of slaves from their own estates, in support of their relative the emperor Honorius. But it would, of course, take time to convert a disarmed and demilitarized population into an effective fighting force.

Interestingly, the most successful resistance to Germanic invasion was in fact offered by the least Romanized areas of the empire: the Basque country; Brittany; and western Britain. Brittany and the Basque country were only ever half pacified by the invaders, while north Wales can lay claim to being the very last part of the Roman Empire to fall to the barbarians, when it fell to the English under Edward I in 1282. It seems that it was in these ‘backward’ parts of the empire that people found it easiest to re-establish tribal structures and effective military resistance.

Sophistication and specialization, characteristic of most of the Roman world, were fine, as long as they worked: Romans bought their pots from professional potters, and bought their defense from professional soldiers. From both they got a quality product–much better than if they had had to do their soldiering and potting themselves. However, when disaster struck and there were no more trained soldiers and no more expert potters around, the general population lacked the skills and structures needed to create alternative military and economic systems. In these circumstances, it was in fact better to be a little ‘backward’.

Unlike the Romans, who relied for their military strength on a professional army (and therefore on tax), freeborn Germanic males looked on fighting as a duty, a mark of status, and, perhaps, even a pleasure. As a result, large numbers of them were practiced in warfare, a very much higher proportion of the population than amongst the Romans. Within reach of the Rhine and Danube frontiers lived tens of thousands of men who had been brought up to think of war as a glorious and manly pursuit, and who had the physique and basic training to put these ideals into practice. Fortunately for the Romans, their innate bellicosity was, however, to a large extent counterbalanced by another, closely related, feature of tribal societies: disunity, caused by fierce feuds, both between tribes and within them.

Already, before the later fourth century, there had been a tendency for the small Germanic tribes of early imperial times to coalesce into larger political and military groupings. But events at the end of this century and the beginning of the next unquestionably accelerated and consolidated the trend. In 376 a disparate and very large number of Goths were forced by the Huns to seek refuge across the Danube and inside the empire. By 378 they had been compelled by Roman hostility to unite into the formidable army that defeated Valens at Adrianopolis. At the very end of 406 substantial numbers of Vandals, Alans, and Sueves crossed the Rhine into Gaul. All these groups entered a still functioning empire, and, therefore, a very hostile environment. In this world, survival depended on staying together in large numbers. Furthermore, invading armies were able to pick up and assimilate other adventurers, ready to seek a better life in the service of a successful war band. We have already met the soldiers of the dead Stilicho and the slaves of Rome, who joined the Goths in Italy in 408; but even as early as 376-8 discontents and fortune-seekers were swelling Gothic ranks, soon after they had crossed into the empire-the historian Ammianus Marcellinus tells us that their numbers were increased significantly, not only by fleeing Gothic slaves, but also by miners escaping the harsh conditions of the state’s gold mines and by people oppressed by the burden of imperial taxation.

The different groups of incomers were never united, and fought each other, sometimes bitterly, as often as they fought the ‘Romans’, just as the Roman side often gave civil strife priority over warfare against the invaders. When looked at in detail, the ‘Germanic invasions’ of the fifth century break down into a complex mosaic of different groups, some imperial, some local, and some Germanic, each jockeying for position against or in alliance with the others, with the Germanic groups eventually coming out on top.

The movements of the Goths through the Balkans, Italy, Gaul, and Spain between 376 and 419 were indeed quite unlike the systematic annexations of neighboring territory that we expect of a true invasion. These Goths, on entering the empire, left their homelands for good. They were, according to circumstance (and often concurrently), refugees, immigrants, allies, and conquerors, moving within the heart of an empire that in the early fifth century was still very powerful. Recent historians have been quite correct to emphasize the desire of these Goths to be settled officially and securely by the Roman authorities. What the Goths sought was not the destruction of the empire, but a share of its wealth and a safe home within it, and many of their violent acts began as efforts to persuade the imperial authorities to improve the terms of agreement between them.

The incoming peoples were not ideologically opposed to Rome: they wanted to enjoy a slice of the empire rather than to destroy the whole thing. Emperors and provincials could, and often did, come to agreements with the invaders. For instance, even the Vandals, the traditional ‘bad boys’ of this period, were very happy to negotiate treaty arrangements, once they were in a strong enough negotiating position. Indeed it is a striking but true fact that emperors found it easier to make treaties with invading Germanic armies who would be content with grants of money or land than with rivals in civil wars, who were normally after their heads.

Because the military position of the imperial government in the fifth century was weak, and because the Germanic invaders could be appeased, the Romans on occasion made treaties with particular groups, formally granting them territory on which to settle in return for their alliance.

Is it really likely that Roman provincials were cheered by the arrival on their doorsteps of large numbers of heavily armed barbarians under the command of their own king? To understand these treaties, we need to appreciate the circumstances of the time, and to distinguish between the needs and desires of the local provincials, who actually had to host the settlers, and those of a distant imperial government that made the arrangements. I doubt very much that the inhabitants of the Garonne valley in 419 were happy to have the Visigothic army settled amongst them; but the government in Italy, which was under considerable military and financial pressure, might well have agreed this settlement, as a temporary solution to a number of pressing problems. It bought an important alliance at a time when the imperial finances were in a parlous condition. At the same time it removed a roving and powerful army from the Mediterranean heartlands of the empire, converting it into a settled ally on the fringes of a reduced imperial core. Siting these allies in Aquitaine meant that they could be called upon to fight other invaders, in both Spain and Gaul. They could also help contain the revolt of the Bacaudae, which had recently erupted to the north, in the region of the Loire. It is even possible that the settlement of these Germanic troops was in part a punishment on the aristocracy of Aquitaine, for recent disloyalty to the emperor.

The interests of the center when settling Germanic peoples, and those of the locals who had to live with the arrangements, certainly did not always coincide. The granting to some Alans of lands in northern Gaul in about 442, on the orders of the Roman general Aetius, was resisted in vain by at least some of the local inhabitants. The Alans, to whom lands in northern Gaul had been assigned by the patrician Aetius to be divided with the inhabitants, subdued by force of arms those who resisted, and, ejecting the owners, forcibly took possession of the land. But, from the point of view of Aetius and the imperial government, the same settlement offered several potential advantages. It settled one dangerous group of invaders away from southern Gaul (where Roman power and resources were concentrated); it provided at least the prospect of an available ally; and it cowed the inhabitants of northern Gaul, many of whom had recently been in open revolt against the empire. All this, as our text makes very clear, cost the locals a very great deal. But the cost to the central government was negligible or non-existent, since it is unlikely that this area of Gaul was any longer providing significant tax revenues or military levies for the emperor. If things went well (which they did not), the settlement of these Alans might even have been a small step along the path of reasserting imperial control in northern Gaul.

The imperial government was entirely capable of selling its provincial subjects downriver, in the interests of short-term political and military gain.

At a number of points along the line, things might have gone differently, and the Roman position might have improved, rather than worsened. Bad luck, or bad judgment, played a very important part in what actually happened. For instance, had the emperor Valens won a stunning victory at Hadrianopolis in 378 (perhaps by waiting for the western reinforcements that were already on their way), the ‘Gothic problem’ might have been solved, and a firm example would have been set to other barbarians beyond the Danube and Rhine. Similarly, had Stilicho in 402 followed up victories in northern Italy over the Goths with their crushing defeat, rather than allowing them to retreat back into the Balkans, it is much less likely that another Germanic group in 405-6, and the Vandals, Alans, and Sueves in 406, would have taken their chances within the western empire.

How did the East Survive? The eastern half of the Roman empire survived the Germanic and Hunnic attacks of this period, to flourish in the 5th and early 6th centuries; indeed it was only a thousand years later, with the Turkish capture of Constantinople in 1453, that it came to an end. No account of the fall of the western empire can be fully satisfactory if it does not discuss how the East managed to resist very similar external pressure. Here, I believe, it was primarily good fortune, rather than innately greater strength, that was decisive.

The Cost of Peace. The new arrivals demanded and obtained a share of the empire’s capital wealth, which at this date meant primarily land. We know for certain that many of the great landowners of post-Roman times were of Germanic descent, even though we have very little information as to how exactly they had obtained their wealth at the expense of its previous owners.

The Germanic settlers rapidly used their power to acquire more wealth.

The Germanic peoples entered the empire with no ideology that they wished to impose, and found it most advantageous and profitable to work closely, within the well-established and sophisticated structures of Roman life. The Romans as a group unquestionably lost both wealth and power in order to meet the needs of a new, and dominant, Germanic aristocracy. But they did not lose everything, and many individual Romans were able to prosper under the new dispensation.

In the case of the Anglo-Saxons and others who bordered Roman territory by land or sea, the number of immigrants was probably substantially larger, since here the initial conquests could readily be followed up by secondary migration. However, except perhaps in regions that were right on the frontiers, it is unlikely that the numbers involved were so large as to dispossess many at the level of the peasantry. Many smallholders in the new kingdoms probably continued to hold their land much as before, except that much of the tax and rent that they paid will now have gone to enrich Germanic masters.

THE DISAPPEARANCE OF COMFORT

It is currently deeply unfashionable to state that anything like a ‘crisis’ or a ‘decline’ occurred at the end of the Roman empire, let alone that a ‘civilization’ collapsed and a ‘dark age’ ensued. The new orthodoxy is that the Roman world, in both East and West, was slowly, and essentially painlessly, ‘transformed’ into a medieval form. However, there is an insuperable problem with this new view: it does not fit the mass of archaeological evidence now available, which shows a startling decline in western standards of living during the 5th to 7th centuries. This was a change that affected everyone, from peasants to kings, even the bodies of saints resting in their churches. It was no mere transformation; it was decline on a scale that can reasonably be described as ‘the end of a civilization’.

The Fruits of the Roman Economy

The Romans produced goods, including mundane items, to a very high quality, and in huge quantities; and then spread them widely, through all levels of society. Because so little detailed written evidence survives for these humble aspects of daily life, it used to be assumed that few goods moved far from home, and that economic complexity in the Roman period was essentially there to satisfy the needs of the state and the whims of the elite, with little impact on the broad mass of society. However, painstaking work by archaeologists has slowly transformed this picture, through the excavation of hundreds of sites, and the systematic documentation and study of the artefacts found on them. This research has revealed a sophisticated world, in which a north-Italian peasant of the Roman period might eat off tableware from the area near Naples, store liquids in an amphora from North Africa, and sleep under a tiled roof. Almost all archaeologists, and most historians, now believe that the Roman economy was characterized, not only by an impressive luxury market, but also by a very substantial middle and lower market for high-quality functional products.

Evidence comes from the study of the different types of pottery found in such abundance on Roman sites: functional kitchen wares, used in the preparation of food; fine table wares, for its presentation and consumption; and amphorae, the large jars used throughout the Mediterranean for the transport and storage of liquids, such as wine and oil.

Pots, although not normally the heroes of history books, deserve our attention. Three features of Roman pottery are remarkable, and not to be found again for many centuries in the West: its excellent quality and considerable standardization; the massive quantities in which it was produced; and its widespread diffusion, not only geographically (sometimes being transported over many hundreds of miles), but also socially (so that it reached, not just the rich, but also the poor). In the areas of the Roman world that I know best, central and northern Italy, after the end of the Roman world, this level of sophistication is not seen again until perhaps the fourteenth century, some 800 years later.

What strikes the eye and the touch most immediately and most powerfully with Roman pottery is its consistently high quality. This is not just an aesthetic consideration, but also a practical one. These vessels are solid (brittle, but not friable), they are pleasant and easy to handle (being light and smooth), and, with their hard and sometimes glossy surfaces, they hold liquids well and are easy to wash. Furthermore, their regular and standardized shapes will have made them simple to stack and store. When people today are shown a very ordinary Roman pot, and, in particular, are allowed to handle it, they often comment on how ‘modern’ it looks and feels, and need to be convinced of its true age.

On the left bank of the Tiber in Rome, by one of the river ports of the ancient city, is a substantial hill some 50 meters high: Monte Testaccio, of which ‘Pottery Mountain’ is a reasonable translation into English. It is made up entirely of broken oil amphorae, mainly of the second and third centuries AD and primarily from the province of Baetica in south-western Spain. It has been estimated that Monte Testaccio contains the remains of some 53 million amphorae, in which around 6,000,000,000 liters of oil were imported into the city from overseas. Imports into imperial Rome were supported by the full might of the state and were therefore quite exceptional, but the size of operations at Monte Testaccio, and the productivity and complexity that lay behind them, none the less cannot fail to impress. This was a society with similarities to our own, moving goods on a gigantic scale, manufacturing high-quality containers to do so, and occasionally, as here, even discarding them on delivery. Like us, the Romans enjoy the dubious distinction of creating a mountain of good-quality rubbish.

In all but the remotest regions of the empire, Roman pottery of a high standard is common on the sites of humble villages and isolated farmsteads.

Pottery in most cultures is vital in relation to one of our primary needs, food. Ceramic vessels, of different shapes and sizes, play an essential part in the storage, preparation, cooking, and consumption of foodstuffs. They certainly did so in Roman times, even more than they do today, since their importance for storage and cooking has declined considerably in modern times, with the invention of cardboard and plastics, and with the spread of cheap metal ware and glass.

Amphorae, not barrels, were the normal containers for the transport and domestic storage of liquids. There is every reason to see pottery vessels as central to the daily life of Roman times.

I am also convinced that the broad picture that we can reconstruct from pottery can reasonably be applied to the wider economy. Pots are low-value, high-bulk items, with the additional disadvantage of being brittle. In other words, no one has ever made a large profit from making a single pot (except for quite exceptional art objects), and they are difficult and expensive to pack and transport, being heavy, bulky, and easy to break. If, despite these disadvantages, vessels (both fine table wares and more functional items) were being made to a high standard and in large quantities, and if they were travelling widely and percolating through even the lower levels of society, as they were in the Roman period, then it is much more likely than not that other goods, whose distribution we cannot document with the same confidence, were doing the same. If good-quality pottery was reaching even peasant households, then the same is almost certainly true of other goods, made of materials that rarely survive in the archaeological record, like cloth, wood, basketwork, leather, and metal. There is, for instance, no reason to suppose that the huge markets in clothing, footwear, and tools were less sophisticated than that in pottery.

Further confirmation for this view can be found in an even humbler item, which also survives well in the soil but has received less scholarly attention than pottery: the roof tile.

Even buildings intended only for storage or for animals may well often have been tiled.

Tiles can be made locally in much of the Roman world, but they still require a large kiln, a lot of clay, a great deal of fuel, and expertise. After they have been manufactured, carrying them, even over short distances, without the advantages of mechanized transport, is also no mean feat. On many of the sites where they have been found, they can only have arrived laboriously, a few at a time, loaded onto pack animals. The roofs we have been looking at may not seem very important, but they represented a substantial investment in the infrastructure of rural life. A tiled roof may appeal in part because it is thought to be smart and fashionable, but it also has considerable practical advantages over roofs in perishable materials, such as thatch or wooden shingles. Above all, it will last much longer, and, if made of standardized well-fired tiles, as Roman roofs were, will provide more consistent protection from the rain-with minor upkeep, a tiled roof can function well for centuries; whereas even today a professionally laid thatch roof, of straw grown specifically for its durability, will need to be entirely remade every thirty years or so. A tiled roof is also much less likely to catch fire, and to attract insects, than wooden shingles or thatch. In Roman Italy, indeed in parts of pre-Roman Italy, many peasants, and perhaps even some animals, lived under tiled roofs. After the Roman period, sophisticated conditions such as these did not return until quite recent times.

Even smaller industries will have required considerable skills and some specialization in order to flourish, including, for example: the selection and preparation of clays and decorative slips; the making and maintenance of tools and kilns; the primary shaping of the vessels on the wheel; their refinement when half-dry; their decoration; the collection and preparation of fuel; the stacking and firing of the kilns; and the packing of the finished goods for transport. From unworked clay to finished product, a pot will have passed through many different processes and several different hands, each with its own expert role to play.

To reach the consumer then required a network of merchants and traders, and a transport infrastructure of roads, wagons, and pack animals, or sometimes of boats, ships, river- and sea-ports.

How exactly all this worked we will never know, because we have so few written records from the Roman period to document it; but the archaeological testimony of goods spread widely around their region of production, and sometimes further afield, is testimony enough to the fact that complex mechanisms of distribution did exist to link a potter at his kiln with a farmer needing a new bowl to eat from.

Wrecks filled with amphorae are so common that two scholars have recently wondered whether the volume of Mediterranean trade in the second century AD was ever again matched before the nineteenth century.

I am keen to emphasize that in Roman times good-quality articles were available even to humble consumers, and that production and distribution were complex and sophisticated. In many ways, this is a world like our own; but it is also important to try and be a little more specific. Although this is inevitably a guess, I think we are looking at a world that is roughly comparable, in terms of the range and quality of goods available, to that of the thirteenth to fifteenth centuries, rather than at a mirror image of our own times. The Roman period was not characterized by the consumer frenzy and globalized production of the modern developed world, where mechanized production and transport, and access to cheap labor overseas, have produced mountains of relatively inexpensive goods, often manufactured thousands of miles away. In Roman times machines still played only a relatively small part in manufacture, restricting the quantity of goods that could be made; and everything was transported by humans and animals, or, at best, by the wind and the currents. Consequently, goods imported from a distance were inevitably more expensive and more prestigious than local products.

Although some goods traveled remarkable distances, the majority of consumption was certainly local and regional: Roman pottery, for instance, is always much commoner near its production site than in more distant areas.

Many people were able to buy at least a few of the more expensive products from afar.

However, even if many would now choose to prioritize the role of the merchant over that of the state, no one would want to deny that the impact of state distribution was also considerable. Monte Testaccio alone testifies to a massive state effort with a wide impact: on Spanish olive-growers; on amphora-manufacturers; on shippers; and, of course, on the consumers of Rome itself, who thereby had their supply of olive oil guaranteed. The needs of the imperial capitals, like Rome and Constantinople, and of an army of around half a million men, stationed mainly on the Rhine and Danube and on the frontier with Persia, were very considerable, and the impressive structures that the Roman state set up to supply them are at least partially known from written records.

The distributive activities of the state and of private commerce have sometimes been seen as in conflict with each other; but in at least some circumstances they almost certainly worked together to mutual advantage. For instance, the state coerced and encouraged shipping between Africa and Italy, and built and maintained the great harbor works at Carthage and Ostia, because it needed to feed the city of Rome with huge quantities of African grain. But these grain ships and facilities were also available for commercial and more general use.

The End of Complexity. In the post-Roman West, almost all this material sophistication disappeared. Specialized production and all but the most local distribution became rare, unless for luxury goods; and the impressive range and quantity of high-quality functional goods, which had characterized the Roman period, vanished, or, at the very least, were drastically reduced. The middle and lower markets, which under the Romans had absorbed huge quantities of basic, but good-quality, items, seem to have almost entirely disappeared. Pottery, again, provides us with the fullest picture. In some regions, like the whole of Britain and parts of coastal Spain, all sophistication in the production and trading of pottery seems to have disappeared altogether: only vessels shaped without the use of the wheel were available, without any functional or aesthetic refinement. In Britain, most pottery was not only very basic, but also lamentably friable and impractical. In other areas, such as the north of Italy, some solid wheel-turned pots continued to be made and some soapstone vessels imported, but decorated table wares entirely, or almost entirely, disappeared; and, even amongst kitchen wares, the range of vessels being manufactured was gradually reduced to only a very few basic shapes. By the seventh century, the standard vessel of northern Italy was the olla (a simple bulbous cooking pot), whereas in Roman times this was only one vessel type in an impressive batterie de cuisine (jugs, plates, bowls, serving dishes, mixing and grinding bowls, casseroles, lids, amphorae, and others).

The great tableware producers of Roman North Africa continued to make (and export) their wares throughout the fifth and sixth centuries, and indeed into the latter half of the seventh. But the number of pots exported and their distribution became gradually more and more restricted, both geographically (to sites on the coast, and eventually, even there, only to a very few privileged centers like Rome) and socially (so that African pottery, once ubiquitous, by the sixth century is found only in elite settlements).

It was not only quality and diversity that declined; the overall quantities of pottery in circulation also fell dramatically.

Rome continued to import amphorae and table wares from Africa even in the late seventh century, and it was here, in the eighth century, that one of the very first medieval glazed wares was developed. These features are impressive, suggesting the survival within the city of something close to a Roman-style ceramic economy. But, even in this exceptional case, a marked decline from earlier times is evident, if we look at overall quantities.

In the Mediterranean region, the decline in building techniques and quality was not quite so drastic: what we witness here, as with the history of pottery production, is a dramatic shrinkage, rather than a complete disappearance. Domestic housing in post-Roman Italy, whether in town or countryside, seems to have been almost exclusively of perishable materials. Houses, which in the Roman period had been primarily of stone and brick, disappeared, to be replaced by settlements constructed almost entirely of wood. Even the dwellings of the landed aristocracy became much more ephemeral, and far less comfortable: archaeologists, despite considerable efforts, have so far failed to find any continuity into the late-sixth and seventh centuries of the impressive rural and urban houses that had been a ubiquitous feature of the Roman period, with their solid walls, marble and mosaic floors, and refinements such as underfloor heating and piped water.

It may have been as much as a thousand years later, perhaps in the fourteenth or fifteenth centuries, that roof tiles again became as readily available and as widely diffused in Italy as they had been in Roman times. In the meantime, the vast majority of the population made do with roofing materials that were impermanent, inflammable, and insect-infested. Furthermore, this change in roofing was not an isolated phenomenon, but symptomatic of a much wider decline in domestic building standards-early medieval flooring, for instance, in all but palaces and churches, seems to have been generally of simple beaten earth.

Coinage is undoubtedly a great facilitator of commercial exchange, copper coins in particular for small transactions. Exchange in the absence of coinage (raw bullion for major purchases, and barter for minor ones) can admittedly be much more sophisticated than we might initially suppose. But barter requires two things that coinage can circumvent: the need for both sides to know, at the moment of agreement, exactly what they want from the other party; and, particularly in the case of an exchange that involves one party being ‘paid back’ in the future, a strong degree of trust between those who are doing the exchanging. If I want to exchange one of my cows for a regular supply of eggs over the next five years, I can do this, but only if I trust the chicken-farmer. Barter suits small face-to-face communities, in which trust either already exists between parties, or can be readily enforced through community pressure. But it does not encourage the development of complex economies, where goods and money need to circulate impersonally. In a monied economy, I can exchange my cow for coins, and only later, and perhaps in a distant place, decide when and how to spend them. I need only trust the coins that I receive.

A Return to Prehistory? The economic change that I have outlined was an extraordinary one. What we observe at the end of the Roman world is not a ‘recession’ with an essentially similar economy continuing to work at a reduced pace. Instead what we see is a remarkable qualitative change, with the disappearance of entire industries and commercial networks. The economy of the post-Roman West is not that of the fourth century reduced in scale, but a very different and far less sophisticated entity. This is at its starkest and most obvious in Britain. A number of basic skills, such as the technique of building in mortared stone or brick, disappeared entirely during the fifth century, to be reintroduced only centuries later.

All over Britain the art of making pottery on a wheel disappeared in the early fifth century, and was not reintroduced for almost 300 years.

What survived were rare elite items, made or imported for the highest levels of society. At this level, beautiful objects were still being made, and traded or gifted across long distances. What had totally disappeared, however, were the good-quality, low-value items, made in bulk, and available so widely in the Roman period.

The complex system of production and distribution, whose disappearance we have been considering, was an older and more deeply rooted phenomenon than an exclusively ‘Roman’ economy. Rather, it was an ‘ancient’ economy that in the eastern and southern Mediterranean was flourishing long before Rome became at all significant, and that even in the north-western Mediterranean was developing steadily before the centuries of Roman domination. Cities such as Alexandria, Antioch, Naples and Marseille were ancient long before they fell under Roman control.

What was destroyed in the post-Roman centuries, and then only very slowly re-created, was a sophisticated world with very deep roots indeed.

Patterns of Change. There was no single moment, nor even a single century of collapse. The ancient economy disappeared at different times and at varying speeds across the empire.

There is general agreement that Roman Britain’s sophisticated economy disappeared remarkably quickly and remarkably early. There may already have been considerable decline in the later fourth century, but, if so, this was a recession, rather than a complete collapse: new coins were still in widespread use and a number of sophisticated industries still active. In the early fifth century all this disappeared, and, as we have seen in the previous chapter, Britain reverted to a level of economic simplicity similar to that of the Bronze Age, with no coinage, and only hand-shaped pots and wooden buildings.

Further south, in the provinces of the western Mediterranean, the change was much slower and more gradual, and is consequently difficult to chart in detail. But it would be reasonable to summarize the change in both Italy and North Africa as a slow decline, starting in the fifth century (possibly earlier in Italy), and continuing on a steady downward path into the seventh. Whereas in Britain the low point had already been reached in the fifth century, in Italy and North Africa it probably did not occur until almost two centuries later, at the very end of the sixth century, or even, in the case of Africa, well into the seventh.

Turning to the eastern Mediterranean, we find a very different story. The best that can be said of any western province after the early fifth century is that some regions continued to exhibit a measure of economic complexity, although always within a broad context of decline. By contrast, throughout almost the whole of the eastern empire, from central Greece to Egypt, the fifth and early sixth centuries were a period of remarkable expansion. We know that settlement not only increased in this period, but was also prosperous, because it left behind a mass of newly built rural houses, often in stone, as well as a rash of churches and monasteries across the landscape (Fig. 6.2). New coins were abundant and widely diffused, and new potteries, supplying distant as well as local markets, developed on the west coast of modern Turkey, in Cyprus, and in Egypt. Furthermore, new types of amphora appeared, in which the wine and oil of the Levant and of the Aegean were transported both within the region and outside it, even as far as Britain and the upper Danube. If we measure ‘Golden Ages’ in terms of material remains, the fifth and sixth centuries were certainly golden for most of the eastern Mediterranean, in many areas leaving archaeological traces that are more numerous and more impressive than those of the earlier Roman empire.

In the Aegean, this prosperity came to a sudden and very dramatic end in the years around AD 600. Great cities such as Corinth, Athens, Ephesus, and Aphrodisias, which had dominated the region since long before the arrival of the Romans, shrank to a fraction of their former size; the recent excavations at Aphrodisias suggest that the greater part of the city became in the early seventh century an abandoned ghost town, peopled only by its marble statues. The tablewares and new coins, which had been such a prominent feature of the fifth and sixth centuries, disappeared with a suddenness similar to the experience of Britain some two centuries earlier.

My focus here, however, will be on what happened after the invasions began. The evidence available very strongly suggests that political and military difficulties destroyed regional economies, irrespective of whether they were flourishing or already in decline. The death of complexity in Britain in the early fifth century must certainly have been closely related to the withdrawal of Roman power from the province, since the two things happened at more or less the same time.

All regions, except Egypt and the Levant, suffered from the disintegration of the Roman empire, but distinctions between the precise histories of different areas show that the impact of change varied quite considerably. In Britain in the early fifth century, and in the Aegean world around AD 600, collapse seems to have happened suddenly and rapidly, as though caused by a series of devastating blows. But in Italy and Africa change was much more gradual, as if brought about by the slow decline and death of complex systems. These different trajectories make considerable sense. The Aegean was hit by repeated invasion and raiding at the very end of the sixth century, and throughout the seventh: first by Slavs and Avars (in Greece), then by Persians (in Asia Minor), and finally by Arabs (on both land and sea).

The effect of the disintegration of the Roman state cannot have been wholly dissimilar to that caused by the dismemberment of the Soviet command economy after 1989. The Soviet structure was, of course, a far larger, more complex, and all-inclusive machine than the Roman. But most of the former Communist bloc has faced the problems of adjustment to a new world in a context of peace, whereas, for the Romans of the West, the end of the state economy coincided with a prolonged period of invasion and civil war. The emperors also maintained, primarily for their own purposes, much of the infrastructure that facilitated trade: above all a single, abundant, and empire-wide currency; and an impressive network of harbours, bridges, and roads. The Roman state minted coins less for the good of its subjects than to facilitate the process of taxing them; and roads and bridges were repaired mainly in order to speed up the movement of troops and government envoys. But coins in fact passed through the hands of merchants, traders, and ordinary citizens far more often than those of the taxman; and carts and pack animals travelled the roads much more frequently than did the legions. With the end of the empire, investment in these facilities fell dramatically: in Roman times, for instance, there had been a continuous process of upgrading and repairing the road network, commemorated by the erection of dated milestones; there is no evidence that this continued in any systematic way beyond the early sixth century.

Security was undoubtedly the greatest boon provided by Rome.

It is a remarkable fact that few cities of the early empire were walled, a state of affairs not repeated in most of Europe and the Mediterranean until the late nineteenth century, and then only because high explosives had rendered walls ineffective as a form of defense. The security of Roman times provided the ideal conditions for economic growth.

There were also other problems that played a subsidiary role. In 541, for instance, bubonic plague reached the Mediterranean.

Economic sophistication has a negative side

Because the ancient economy was in fact a complicated and interlocked system, its very sophistication rendered it fragile and less adaptable to change. For bulk, high-quality production to flourish in the way that it did in Roman times, a very large number of people had to be involved, in more-or-less specialized capacities. First, there had to be the skilled manufacturers, able to make goods to a high standard, and in a sufficient quantity to ensure a low unit-cost. Secondly, a sophisticated network of transport and commerce had to exist, in order to distribute these goods efficiently and widely. Finally, a large (and therefore generally scattered) market of consumers was essential, with cash to spend and an inclination to spend it. Furthermore, all this complexity depended on the labour of the hundreds of other people who oiled the wheels of manufacture and commerce by maintaining an infrastructure of coins, roads, boats, wagons, wayside hostelries, and so on.

Economic complexity made mass-produced goods available, but it also made people dependent on specialists or semi-specialists, sometimes working hundreds of miles away, for many of their material needs. This worked very well in stable times, but it rendered consumers extremely vulnerable if for any reason the networks of production and distribution were disrupted, or if they themselves could no longer afford to purchase from a specialist. If specialized production failed, it was not possible to fall back immediately on effective self-help.

Comparison with the contemporary western world is obvious and important. Admittedly, the ancient economy was nowhere near as intricate as that of the developed world in the twenty-first century. We sit in tiny productive pigeon-holes, making our minute and highly specialized contributions to the global economy, and we are wholly dependent for our needs on thousands, indeed hundreds of thousands, of other people spread around the globe, each doing their own little thing. We would be quite incapable of meeting our needs locally, even in an emergency. The ancient world had not come as far down the road of specialization and helplessness as we have.

The enormity of the economic disintegration that occurred at the end of the empire was almost certainly a direct result of this specialization. The post-Roman world reverted to levels of economic simplicity, lower even than those of immediately pre-Roman times, with little movement of goods, poor housing, and only the most basic manufactured items.

The sophistication of the Roman period, by spreading high-quality goods widely in society, had destroyed the local skills and local networks that, in pre-Roman times, had provided lower-level economic complexity. It took centuries for people in the former empire to reacquire the skills and the regional networks that would take them back to these pre-Roman levels of sophistication.

Food production may also have slumped, causing a steep drop in the population. Almost without exception, archaeological surveys in the West have found far fewer rural sites of the fifth, sixth, and seventh centuries AD than of the early empire.  In many cases, the apparent decline is startling, from a Roman landscape that was densely settled and cultivated, to a post-Roman world that appears only very sparsely inhabited. Almost all the dots that represent Roman-period settlements disappear, leaving only large empty spaces. At roughly the same time, evidence for occupation in towns also decreases dramatically-the fall in the number of rural settlements was certainly not produced by a flight from the countryside into the cities.

Since economic complexity definitely increased the quality and quantity of manufactured goods, it is more likely than not that it also increased production of food, and therefore the number of people the land could feed. Archaeological evidence, from periods of prosperity, does indeed seem to show a correlation between increasing sophistication in production and marketing, and a rising population.

However sophisticated Roman agriculture was, harvests could still fail, and, when they did, transport was not cheap or rapid enough to bring in the large quantities of affordable grain that could have saved the poor from starvation. Edessa in Mesopotamia was one of the richest cities of the Roman East, surrounded by prosperous arable farming. But in AD 500 a swarm of locusts consumed the wheat harvest; a later harvest, of millet, also failed. For the poor, disaster followed. The price of bread shot up, and people were forced to sell their few possessions for a pittance in order to buy food. Many tried, in vain, to assuage their hunger with leaves and roots. Those who could, fled the region; but crowds of starving people flocked into Edessa and other cities, to sleep rough and to beg: ‘They slept in the colonnades and streets, howling night and day from the pangs of hunger.’ Here disease and the cold nights of winter killed large numbers of them; even collecting and burying the dead became a major problem.

If we ask ourselves how the ability to read and write came to be so widespread in the Roman world, the answer probably lies in a number of different developments, which all encouraged the use of writing. In particular, there is no doubt that the complex mechanism of the Roman state required literate officials at all levels of its operations. There was no other way that the state could raise taxes in coin or kind from its provincials, assemble the resulting profits, ship them across long distances, and consume or spend them where they were needed. A great many lists and tallies will have been needed to ensure that a gold solidus raised in one of the peaceful provinces of the empire, like Egypt or Africa, was then spent effectively to support a soldier on the distant frontiers of Mesopotamia, the Danube, or the Rhine.

In Italy, the primacy of ancient civilization is seldom doubted, and a traditional view of the end of the Roman world is very much alive. Most Italians are with me in remaining highly skeptical about a peaceful ‘accommodation’ of the barbarians, and the ‘transformation’ of the Roman world into something new and equally sophisticated. The idea that the Germanic incomers were peaceful immigrants, who did no harm, has not caught on.

[ My comment: Egads! American historians are so politically correct that they ignore the role of invading immigrants and material life ]

A recent Guide to Late Antiquity, published by Harvard University Press, asks us “to treat the period between around 250 and 800 as a distinctive and quite decisive period of history that stands on its own”, rather than as the story of the unraveling of a once glorious and “higher” state of civilization. This is a bold challenge to the conventional view of darkening skies and gathering gloom as the empire dissolved.

Words like ‘decline’ and ‘crisis’, which suggest problems at the end of the empire and which were quite usual into the 1970s, have largely disappeared from historians’ vocabularies, to be replaced by neutral terms, like ‘transition’, ‘change’, and ‘transformation’.

Here too old certainties are being challenged. According to the traditional account, the West was, quite simply, overrun by hostile ‘waves’ of Germanic peoples. The long-term effects of these invasions have, admittedly, been presented in very different ways, depending largely on the individual historian’s nationality and perspective. For some, particularly in the Latin countries of Europe, the invasions were entirely destructive. For others, however, they brought an infusion of new and freedom-loving Germanic blood into a decadent empire.

Unsurprisingly, an image of violent and destructive Germanic invasion was very much alive in continental Europe in the years that immediately followed the Second World War. But in the latter half of the twentieth century, as a new and peaceful Western Europe became established, views of the invaders gradually softened and became more positive.

More recently, however, some historians have gone very much further than this, notably the Canadian historian Walter Goffart, who in 1980 launched a challenge to the very idea of fifth-century ‘invasions’. He argued that the Germanic peoples were the beneficiaries of a change in Roman military policy. Instead of continuing the endless struggle to keep them out, the Romans decided to accommodate them into the empire by an ingenious and effective arrangement. The newcomers were granted a proportion of the tax revenues of the Roman state, and the right to settle within the imperial frontiers; in exchange, they ceased their attacks, and diverted their energies into upholding Roman power, of which they were now stakeholders. In effect, they became the Roman defense force.

Goffart was very well aware that sometimes Romans and Germanic newcomers were straightforwardly at war, but he argued that ‘the 5th century was less momentous for invasions than for the incorporation of barbarian protectors into the fabric of the West’. In a memorable sound bite, he summed up his argument: “what we call the Fall of the Western Roman empire was an imaginative experiment that got a little out of hand.” Rome did fall, but only because it had voluntarily delegated away its own power, not because it had been successfully invaded. Like the new and positive ‘Late Antiquity’, the idea that the Germanic invasions were in fact a peaceful accommodation has had a mixed reception. The world at large has seemingly remained content with a dramatic ‘Fall of the Roman empire’, played out as a violent and brutal struggle between invaders and invaded.

As someone who is convinced that the coming of the Germanic peoples was very unpleasant for the Roman population, and that the long-term effects of the dissolution of the empire were dramatic, I feel obliged to challenge such views.

The historians who have argued for a new and rosy Late Antiquity are primarily North Americans, or Europeans based in the USA, and they have shifted their focus right out of the western Roman empire. Much of the evidence that sustains the new and upbeat Late Antiquity is rooted firmly in the eastern Mediterranean, where, as we have seen, there is good evidence for prosperity through the fifth and sixth centuries, and indeed into the eighth in the Levant.

Until fairly recently it was institutional, military, and economic history that dominated historians’ views of the fourth to seventh centuries. Quite the reverse is now the case, at least in the USA. Of the 36 volumes so far published by the University of California Press in a series entitled ‘The Transformation of the Classical Heritage’, 30 discuss the world of the mind and spirit (primarily different aspects of Christian thought and practice); only five or six cover more secular topics (such as politics and administration); and none focuses on the details of material life.


How corporations used conservative religion to gain wealth and power and undo the New Deal

Source: Republican Jesus

[ This is a book review of One Nation Under God: How Corporate America Invented Christian America by Kevin Kruse (2016), followed by excerpts from the book.  Much of this introduction is my take on what this all means.

This book tells the history of how corporate America has tried to undo New Deal reforms since the 1940s by creating a new free-enterprise religion, and to erode the separation of church and state.

Corporate America’s creation of free-enterprise Jesus began in 1935 with the founding of an organization called Spiritual Mobilization. Some of the corporations that donated money to this and similar organizations include:

American Cyanamid and Chemical Corporation, Associated Refineries, AT&T, Bechtel Corporation, Caterpillar Tractor Company, Chevrolet, Chicago & Southern Airline, Chrysler Corporation, Colgate-Palmolive Company, Deering-Milliken, Detroit Edison, Disney, DuPont, Eastern Airlines, General Electric, General Foods, General Motors, Goodwill, Goodyear Tire & Rubber, IBM, J. C. Penney, J. Walter Thompson, Mark A. Hanna, Marriott, Marshall Field, Monsanto Chemical Company, National Association of Manufacturers, Pacific Mutual Life Insurance, Paramount Pictures, PepsiCo, Precision Valve Corp, Quaker Oats, Republic Steel Corp, Richfield Oil Co., San Diego Gas & Electric, Schick Safety Razor, Standard Oil Company, Sun Oil Company, Sun Shipbuilding Company, Union Carbide and Carbon Corporation, United Airlines, US Rubber Company, US Steel Corporation, Utah Power & Light, Warner Bros. Pictures, Weyerhaeuser.

In the 1930s, corporations were well known to have brought on the Great Depression with their tremendous greed and dishonesty. The New Deal reformed the financial system, distributed wealth more evenly, provided a social safety net, protected citizens by regulating businesses so they could not sell unsafe food and drugs or emit toxic pollution, aided farmers in slowing soil erosion to prevent more dust bowls, and built roads and other infrastructure and public services that benefited everyone, especially corporations.

The New Deal embodied the ideals of the Social Gospel, a movement dedicated to the public good and economic equality, and to remedying poverty, slums, child labor, a degraded environment, inadequate labor unions, poor schools, and war (Wikipedia: Social Gospel).

Corporate America fought against these reforms and has been trying to undo the New Deal ever since then.

One of their most successful tactics was getting religious leaders to spout a new version of Jesus – replacing the Social Gospel Jesus of the New Deal with a Republican free-enterprise, Ayn Rand, selfish Jesus.

At first everyone saw through this propaganda since it was obviously driven by craven self-interest.

So the propaganda was crafted more subtly, and sold to conservative religious leaders via what appeared to be a religious organization called “Spiritual Mobilization,” run by minister James Fifield. Congregations began to hear sermons about the free-enterprise Jesus with open hearts and minds, sermons they would have laughed at if the speaker had been from a corporation. The new religion taught them to detest unions and social welfare, and to fear and hate government.

Later on, capitalist Jesus expanded into teaching the evils of food stamps and Obamacare, and opposition to abortion and birth control (since the more people there are, the less they can be paid). This propaganda came not just from the pulpit, but also from conservative religious TV and radio stations.

Recent scholarship suggests that the real Jesus was closer to the Social Gospel Jesus of the Democrats. In “The Jesus Sayings: The Quest for His Authentic Message,” Rex Weyler also found that people have been twisting the real Jesus since St. Paul, so Republican manipulation isn’t anything new; it’s been going on for 2,000 years. Looking at the Dead Sea scrolls (found in the last century) and modern scholarship, Weyler found that the most likely Jesus was a man who spent his time helping the poor and encouraged people to turn their spiritual philosophy into action. Jesus was a wise and humble teacher, advocating self-awareness and social compassion. The core, genuine message of Jesus includes (Solomon 2008): 1) give to anyone who asks, since knowledge and righteousness are revealed in action; 2) seek spiritual resources within yourself rather than waiting for a deity to solve your problems; and 3) know yourself, the first step to offering comfort and compassion to the world.

This book shows how the Bible, America’s history, and Constitution were misquoted and misinterpreted to twist Jesus into Capitalist Jesus.

This is why you don’t have a chance of talking Uncle Bob out of voting for demagogues at the Thanksgiving table – you’re attacking his religion and core beliefs he’s been taught since his first sermon, and his brain shuts down in anger.  And he’ll never change because the main and just about only social organization that knits rural American communities together is Church.  If you read enough to doubt right-wing ideology and religious philosophy, you’re going to be a very lonely and perhaps even outcast person.

People like to say that capitalism is imperfect, but it’s the best system that exists. Well, I’ll agree that free enterprise is better than any other system at raping, pillaging, and poisoning land, water, and air quickly. Just look at how industrial farming is depleting aquifers and eroding and compacting topsoil to the point where it won’t produce much food after mere centuries, rather than the 1,500-year average of past civilizations (Montgomery 2007).

If capitalism is so great, why are Social Gospel nations like Denmark, Iceland, Norway, Sweden, the Netherlands, and Canada consistently ranked the happiest nations in the world, as well as high in per capita nominal GDP? Socialist Cuba did far better than other nations when its fossil fuels were suddenly cut off, with Russia coming in second.

More importantly for our own civilization, global conventional oil production, which supplies 90% of our oil, peaked in 2005 (Aleklett et al. 2012; Kerr 2011; Murray 2012; Newby 2011; IEA 2010; Zittel et al. 2013), and is declining at a rate of 6% per year, a rate expected to rise to 9% by 2030 (Hook 2009). If that doesn’t alarm you, read my posts on exponential growth here.

According to the Department of Energy, you’d want to prepare for peak oil at least 20 years ahead of time (Hirsch 2005), yet here we are 12 years after peak conventional oil, with both Democrats and Republicans assuming that endless growth on a finite planet is possible and will fix things. We don’t have endless energy. It turns out that Earth is not a giant gas tank. Even if it were, exponential growth would drain it in 342 years (Friedemann 2016).
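To see how this arithmetic works, here is a minimal Python sketch of the two calculations involved: how production shrinks when it declines by a fixed percentage every year, and how long a finite resource lasts when consumption grows exponentially. The reserve, consumption, and growth numbers below are made-up illustrations, not figures from Hook (2009) or Friedemann (2016).

    import math

    def production_after(initial, decline_rate, years):
        # Compound a fixed annual percentage decline, e.g. 6% per year.
        return initial * (1 - decline_rate) ** years

    def years_to_exhaustion(reserve, consumption, growth_rate):
        # Solve R = C * (e**(g*T) - 1) / g for T: the time until a finite
        # reserve R is used up when consumption C grows at rate g per year.
        return math.log(1 + growth_rate * reserve / consumption) / growth_rate

    # A field producing 100 units/year and declining 6%/year keeps only ~54% after a decade:
    print(production_after(100, 0.06, 10))                                   # ~53.9
    # A hypothetical reserve equal to 50 years of current use, with demand growing 2%/year:
    print(years_to_exhaustion(reserve=50, consumption=1, growth_rate=0.02))  # ~34.7 years

The point is the one made above: once growth or decline compounds, the comforting intuition of “plenty left” fails much sooner than people expect.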

There isn’t a single endeavor that doesn’t depend on energy, especially supply chains, mining, logging, construction, and road building, which are mainly done with heavy-duty trucks that can only accomplish their tasks with diesel engines (Smil 2010) that burn only diesel fuel (Friedemann 2015).

Since the social safety net is funded by an ever-expanding working population and economic growth, Social Security and Medicare are Ponzi schemes, and so is our financial system, which depends on growth to pay back debt.

That means the corporations are about to get the death of the New Deal they’ve so wanted via the decline of our fossil-fueled civilization.  But it will be a Pyrrhic victory, since they’ll go bankrupt too.

No political party can fix this; it is a biophysical reality, not politics. To cope, it’s long past time to strengthen your community by becoming more resilient, more self-sufficient, and able to supply food and other essential goods locally. It’s long past time to fix water and sewage infrastructure.

Energy decline will be less frightening if we all embrace the social gospel and help the less fortunate in our communities.

References

  • Aleklett, K., et al. 2012. Peeking at Peak Oil. Berlin: Springer.
  • Friedemann, A. 2015. When Trucks Stop Running: Energy and the Future of Transportation. Springer.
  • Friedemann, A. 2016. Limits to Growth? 2016 United Nations report provides best evidence yet. www.energyskeptic.com
  • Hirsch, R. L., et al. 2005. Peaking of World Oil Production: Impacts, Mitigation, and Risk Management. U.S. Department of Energy.
  • Hook, M., et al. 2009. Giant oil field decline rates and their influence on world oil production. Energy Policy 37(6): 2262–2272.
  • IEA. 2010. World Energy Outlook 2010. International Energy Agency, p. 116.
  • Kerr, R. 2011. Peak oil production may already be here. Science 331: 1510–1511.
  • Montgomery, D. R. 2007. Dirt: The Erosion of Civilizations. University of California Press.
  • Murray, J., et al. 2012. Oil’s tipping point has passed. Nature 481: 43–44.
  • Newby, J. 2011. Oil Crunch (Fatih Birol). Catalyst, ABC TV.
  • Smil, V. 2010. Prime Movers of Globalization: The History and Impact of Diesel Engines and Gas Turbines. MIT Press.
  • Solomon, L. 2008. Author Rex Weyler on sorting myth from history, and why we need both. TheTyee.ca, May 2, 2008.
  • Zittel, W., et al. 2013. Fossil and Nuclear Fuels. Energy Watch Group.

P.S. My favorite books about how corporations manipulate government are “Republic, Lost” and “Dark Money.”

Alice Friedemann  www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report ]

Excerpts from:  One Nation Under God: How Corporate America Invented Christian America by Kevin Kruse 2016

This book argues that the postwar revolution in America’s religious identity had its roots in the domestic politics of the 1930s and early 1940s.

Decades before Eisenhower’s inaugural prayers, corporate titans enlisted conservative clergymen in an effort to promote new political arguments embodied in the phrase “freedom under God.”

As the private correspondence and public claims of the men leading this charge make clear, this new ideology was designed to defeat the state power its architects feared most—not the Soviet regime in Moscow, but Franklin D. Roosevelt’s New Deal administration in Washington. With ample funding from major corporations, prominent industrialists, and business lobbies such as the National Association of Manufacturers and the US Chamber of Commerce in the 1930s and 1940s, these new evangelists for free enterprise promoted a vision best characterized as “Christian libertarianism.”

By the late 1940s and early 1950s, this ideology had won converts including religious leaders such as Billy Graham and Abraham Vereide and conservative icons ranging from former president Herbert Hoover to future president Ronald Reagan. The new conflation of faith, freedom, and free enterprise then moved to center stage in the 1950s under Eisenhower’s watch.

Though his administration gave religion an unprecedented role in the public sphere, it essentially echoed and amplified the work of countless private organizations and ordinary citizens who had already been active in the same cause.

Corporate leaders remained central. Leading industrialists and large business organizations bankrolled major efforts to promote the role of religion in public life. The top advertising agency of the age, the J. Walter Thompson Company, encouraged Americans to attend churches and synagogues through an unprecedented “Religion in American Life” ad campaign.

Inundated with urgent calls to embrace faith, Americans did just that. The percentage of Americans who claimed membership in a church had been fairly low across the 19th century, increasing from just 16% in 1850 to 36% in 1900. In the early decades of the 20th century the percentages plateaued, remaining at 43% in both 1910 and 1920, then moving up slightly to 47% in 1930 and 49% in 1940. In the decade and a half after the Second World War, however, the percentage of Americans who belonged to a church or synagogue suddenly soared, reaching 57% in 1950 and then peaking at 69% at the end of the decade, an all-time high.

While this religious revival was remarkable, the almost complete lack of opposition to it was even more so. A few clergymen complained that the new public forms of faith seemed a bit superficial, but they ultimately approved of anything that encouraged church attendance.

IN DECEMBER 1940, MORE THAN 5,000 industrialists from across America took part in their yearly pilgrimage to Park Avenue. For three days every winter, the posh Waldorf-Astoria Hotel welcomed them for the annual meeting of the National Association of Manufacturers (NAM). Tucked away near the end of the program was a name that few knew upon arrival but everyone would be talking about by the week’s end: Reverend James W. Fifield Jr.

Ordinarily, a Congregationalist minister might not have seemed well suited to address the corporate luminaries assembled at the Waldorf-Astoria. But his appearance had been years in the making. For much of the 1930s, organizations such as NAM had been searching in vain for ways to rehabilitate a public image that had been destroyed in the crash and defamed by the New Deal. In 1934, a new generation of conservative industrialists took over NAM with a promise to “serve the purposes of business salvation.” “The public does not understand industry,” one of them argued, “because industry itself has made no effort to tell its story; to show the people of this country that our high living standards have risen almost altogether from the civilization which industrial activity has set up.” Accordingly, NAM dedicated itself to spreading the gospel of free enterprise, through a wide array of films, radio programs, advertisements, direct mail, a speakers bureau, and a press service that provided ready-made editorials and news stories for 7,500 local newspapers. By 1937 the organization devoted $793,043 to the cause, more than half its total income that year. Seeking to repair the image of industrialists, NAM promoted the values of free enterprise.

Its efforts at self-promotion were seen as precisely that. As one observer noted, “Throughout the 30s, the corporate campaign was marred by such extremist, overt attacks on the unions and the New Deal that it was easy for critics to dismiss the entire effort as mere propaganda.”

While established business lobbies such as NAM had been unable to sell free enterprise effectively in the Depression, neither had the many new organizations created specifically for that purpose. The most prominent was the American Liberty League, formed in 1934 to “teach the necessity of respect for the rights of persons and property” and “the duty of government to encourage and protect individual and group initiative and enterprise.” It benefited from generous financial support from corporate titans, particularly at DuPont and General Motors. But their prominence inadvertently crippled its effectiveness, as the Liberty League was easily dismissed as a collection of tycoons looking out for their own self-interest. Jim Farley, chairman of the Democratic Party, joked that it really ought to be called the “American Cellophane League” because “first, it’s a DuPont product and second, you can see right through it.” Even the president took his shots. “It has been said that there are two great Commandments—one is to love God, and the other to love your neighbor,” Franklin D. Roosevelt noted soon after its creation. “The two particular tenets of this new organization say you shall love God and then forget your neighbor.” Off the record, he joked that the name of the god they worshiped seemed to be “Property”.

In introducing the New Deal, Roosevelt and his allies revived the old language of the so-called Social Gospel to justify the creation of the modern welfare state. The original proponents of the Social Gospel, back in the late 19th century, had significantly reframed Christianity as a faith concerned less with personal salvation and more with the public good. They rallied popular support for Progressive Era reforms in the early 20th century before fading from public view in the conservative 1920s. But the economic crash and the widespread suffering of the Great Depression brought them back into vogue. When Roosevelt launched the New Deal, an array of politically liberal clergymen championed his proposal for a vast welfare state as simply “the Christian thing to do.” His administration’s efforts to regulate the economy and address the excesses of corporate America were singled out for praise. Catholic and Protestant leaders hailed the “ethical and human significance” of New Deal measures, which they said merely “incorporated into law some of the social ideas and principles for which our religious organizations have stood for many years.” The head of the Federal Council of Churches, for instance, claimed the New Deal embodied basic Christian principles such as the “significance of daily bread, shelter, and security.”

Throughout the 1930s, the nation’s industrialists tried to counter the selflessness of the Social Gospel with direct appeals to Americans’ self-interest but had little success.

Accordingly, at the Waldorf-Astoria in December 1940, NAM president H. W. Prentis proposed that they try to beat Roosevelt at his own game. With wispy white hair and a weak chin, the 56-year-old head of the Armstrong Cork Company seemed an unlikely star. But 18 months earlier, the Pennsylvanian had electrified the business world with a speech to the US Chamber of Commerce that called for the recruitment of religion in the public relations war against the New Deal. “Economic facts are important, but they will never check the virus of collectivism,” Prentis warned; “the only antidote is a revival of American patriotism and religious faith.” The speech thrilled the Chamber and propelled Prentis to the top ranks of NAM. His presidential address at the Waldorf-Astoria was anticipated as a major national event, heavily promoted in advance by the Wall Street Journal and broadcast live over both ABC and CBS radio. Again, Prentis urged the assembled businessmen to emphasize faith in their public relations campaigns. “We must give attention to those things more cherished than material wealth and physical security,” he asserted. “We must give more attention to intellectual leadership and a strengthening of the spiritual concept that underlies our American way of life.”

Fifield delivered a passionate defense of the American system of free enterprise and a withering assault on its perceived enemies in government. Decrying the New Deal’s “encroachment upon our American freedoms,” the minister listed a litany of sins committed by the Roosevelt administration, ranging from its devaluation of currency to its disrespect for the Supreme Court. He denounced the “rising costs of government and the multitude of federal agencies attached to the executive branch” and warned ominously of “the menace of autocracy approaching through bureaucracy.” His audience of executives was stunned. Over the preceding decade, these titans of industry had been told, time and time again, that they were to blame for the nation’s downfall.

Fifield, in contrast, insisted that they were the source of its salvation.

Minister Fifield convinced the industrialists that clergymen could be the means of regaining the upper hand in their war with Roosevelt in the coming years. As men of God, they could give voice to the same conservative complaints as business leaders, but without any suspicion that they were motivated solely by self-interest. In doing so, they could push back against claims that business had somehow sinned and the welfare state was doing God’s work.

Conservative clergymen now used their ministerial authority to argue, quite explicitly, that New Dealers were the ones violating the Ten Commandments. In countless sermons, speeches, and articles issued in the months and years after Fifield’s address, these ministers claimed that the Democratic administration made a “false idol” of the federal government, leading Americans to worship it over the Almighty; that it caused Americans to covet what the wealthy possessed and seek to steal it from them; and that, ultimately, it bore false witness in making wild claims about what it could never truly accomplish.

Above all, they insisted that the welfare state was not a means to implement Christ’s teachings about caring for the poor and the needy, but rather a perversion of Christian doctrine. In a forceful rejection of the public service themes of the Social Gospel, they argued that the central tenet of Christianity remained the salvation of the individual. If any political and economic system fit with the religious teachings of Christ, it would have to be rooted in a similarly individualistic ethos. Nothing better exemplified such values, they insisted, than the capitalist system of free enterprise.

Fifield and his colleagues devoted themselves to fighting back against the government forces that they believed were threatening capitalism and, by extension, Christianity. In the early postwar era, their activities helped reshape the national debate about the proper functions of the federal government, the political influence of corporations, and the role of religion in national life.

Fifield had watched in alarm as Roosevelt convinced vast majorities of Americans that unfettered capitalism had crippled the nation and that the federal government now needed to play an important new role in regulating the free market’s risks and redistributing its rewards. For Fifield and his flock, Roosevelt’s actions violated not just the Constitution but the natural order of things.

The New Deal undermined the spirit of Christianity and demanded a response from Christ’s representatives on earth. “If, with Jesus, we believe in the sacredness of individual personalities, then our leadership responsibility is very plain.” This duty was “not an easy one,” he cautioned. “We may be called unpatriotic and accused of ‘selling out,’ but so was Jesus.” Finding the leaflet to his liking, Hoover sent Fifield a warm note of appreciation and urged him to press on.

Though they had hoped to destroy the Roosevelt administration themselves, its wounds were largely self-inflicted. In 1937, the president’s labor allies launched a series of sit-down strikes that secured union recognition at corporations such as General Motors and US Steel but also roused sympathy for seemingly beleaguered businessmen. At the same time, Roosevelt overreached with his proposal to “pack” the Supreme Court with new justices, a move that played into the hands of those who sought to portray him as dictatorial in intent. Most significant, though, was his ill-fated decision to rein in federal spending in an effort to balance the budget. The impressive economic recovery of Roosevelt’s first term suddenly stalled, and the country entered a short but sharp recession in the winter of 1937–1938.

As the New Deal faltered, Fifield began to look forward to the next presidential election—in “the critical year 1940”—when conservatives might finally rout the architects of the regulatory state. To his dismay, international tensions soon marginalized domestic politics and prompted the country to rally around Roosevelt again.

As the distraction of the foreign war drew to a close, Fifield looked forward to renewing the fight against the New Deal. The minister now counted on the support of not just Hoover but an impressive array of conservative figures in politics, business, and religion. The advisory committee for Spiritual Mobilization’s wartime pledge was, in the words of one observer, “a who’s who of the conservative establishment.” At mid-decade, its 24-man roster included three past or present presidents of the US Chamber of Commerce, a leading Wall Street analyst, a prominent economist at the American Banking Association, the founder of the National Small Businessmen’s Association,

Senator Albert Hawkes agreed. “After careful examination of the records during the past ten years, one can only conclude that there is the objective of the assumption of greater power and control by the government over individual life. If these policies continue,” he warned, “they will lead to state direction and control of all the lives of our citizens. That is the goal of Federal planners. That is NOT the desire of the American people!”

In February 1945, Haake explained to Pew why the NAM campaign to ministers and others like it had all failed. “Of the approximately 30 preachers to whom I have thus far talked, I have yet to find one who is unqualifiedly impressed,” Haake reported. “One of the men put it almost typically for the rest when he said: ‘The careful preparation and framework for the meetings to which we are brought is too apparent. We cannot help but see that it is expertly designed propaganda and that there must be big money behind it.’”

If industrialists wanted to convince clergymen to side with them, they would need a subtler approach. Rather than simply treating ministers as a passive audience to be persuaded, Haake argued, they should involve them actively in the cause as participants. The first step would be making ministers realize that they too had something to fear from the growth of government. “The religious leaders must be helped to discover that their callings are threatened,” Haake argued, by realizing that the “collectivism” of the New Deal, “with the glorification of the state, is really a denial of God.” Once they were thus alarmed, they would readily join Spiritual Mobilization as its representatives and could then be organized more effectively into a force for change both locally and nationally.

With the new financial support and sense of direction, Spiritual Mobilization underwent a massive overhaul. In February 1947, Fifield reported that he had already reached their goal for “the signing of 10,000 ministers as representatives.” This national network of clergymen would be the primary channel through which the work and writings of Spiritual Mobilization would flow. In a new monthly publication that bore the organization’s name, Fifield ran a column—with the businesslike heading “Director to Representatives”—devoted to marshaling these ministers to achieve their common goal of defeating the New Deal. Fifield repeatedly warned them that the growth of government had crippled not only individual initiative but personal morality as well. “It is time to exalt the dignity of individual man as a child of God, to exalt Jesus’ concept of man’s sacredness and to rebuild a moral fabric based on such irreducibles as the Ten Commandments,” he urged his minister-representatives.

Clergymen responded enthusiastically. Many ministers wrote the Los Angeles office to request copies of Friedrich Hayek’s libertarian treatise The Road to Serfdom and anti–New Deal tracts by Herbert Hoover and libertarian author Garet Garrett, all of which had been advertised in Spiritual Mobilization. Some sought reprints of the bulletin itself.

Fifield’s backers in the Businessmen’s Advisory Committee were so pleased with his progress that they nearly doubled the annual budget. To raise funds, its members secured sizable donations from their own companies and personal accounts and, more important, reached out to colleagues across the corporate world for their donations as well. Pew once again set the pace, soliciting donations from officials at 158 corporations. “A large percentage of ministers in this country are completely ignorant of economic matters and have used their pulpits for the purpose of disseminating socialistic and totalitarian doctrines,” he wrote in his appeal. “Much has already been accomplished in the education of these ministers, but a great deal more is left to be done.” Many of the corporations he contacted—including General Motors, Chrysler, Republic Steel, National Steel, International Harvester, Firestone Tire and Rubber, Sun Oil, Gulf Oil, Standard Oil of New Jersey, and Colgate-Palmolive-Peet—were already contributing the maximum allowable annual donation. Other leading businesses, from US Steel to the National Cash Register Company, had donated in the past, but Pew hoped they would commit to the limit as well. Recognizing that there were many conservative groups out there “fighting for our American way of life,” Pew assured a colleague in the oil industry that Spiritual Mobilization deserved to be “at the top of the list” when it came time to donate, “because recent polls indicated that of all the groups in America, the ministers had more to do with molding public opinion.”

“According to my book there are five principal issues before the country: The socialization of industry, the socialization of medicine, the socialization of education, the socialization of labor, and the socialization of security,” he noted. “Only through education and the pressure which the people exert on their politicians can we hope to prevent this country from becoming a totalitarian state.”

Fifield’s financial backers helped secure free airtime for these programs across the nation. “Republic Steel is taking steps to get them on radio stations in every town where they have a factory or office,” Fifield noted in March 1949. “We are expecting to be on 150 radio stations by June.” A year later, The Freedom Story was broadcast on a weekly network of over 500 stations; by late 1951, it aired on more than 800.

Fifield’s journal purposely presented itself as created by ministers for ministers. Spiritual Mobilization had long operated on the principle that clergymen could not be swayed through crude propaganda. “The articulation should be worked out before-hand, of course, and we should be ready to help the thinking of the ministers on it,” Haake noted in one of his early musings on Spiritual Mobilization, “but it should be so done as to enable them to discover it for themselves, as something which they really had believed but not realized fully until our questions brought it out so clearly. I am sure we may not TELL them: not as laymen, or even as fellow clergymen. We must help them to discover it themselves.”

Faith and Freedom thus presented itself as an open forum in which ministers could debate a wide variety of issues and disagree freely. But there was an important catch. “Clergymen may differ about politics, economics, sociology, and such,” Fifield stated, “but I would expect that in matters of morality all followers of Jesus speak in one voice.” Because Fifield and Johnson insisted that morality directly informed politics and economics, they were able to cast those who disagreed with them on those topics as essentially immoral.

Time and time again, he condemned a variety of “socialistic laws,” such as ones supporting minimum wages, price controls, Social Security pensions for the elderly, unemployment insurance, veterans’ benefits, and the like, as well as a wide range of federal taxation that he deemed to be “tyrannical” in nature.

As the Fourth of July drew near, the Committee to Proclaim Liberty focused its attention on encouraging Americans to mark the holiday with public readings of the preamble to the Declaration of Independence. The decision to focus solely on the preamble was in some ways a natural one, as its passages were certainly the most famous and lyrical in the document. But doing so also allowed organizers to reframe the Declaration as a purely libertarian manifesto, dedicated to the removal of an oppressive government. Those who read the entire document would have discovered, to the consternation of the committee, that the founding fathers followed the high-flown prose of the preamble with a long list of grievances about the absence of government and rule of law in the colonies. Among other things, they lambasted King George III for refusing “his Assent to Laws, the most wholesome and necessary for the public good,” for forbidding his governors from passing “Laws of immediate and pressing importance,” for dissolving the legislative bodies in the colonies, and for generally enabling a state of anarchy that exposed colonists to “all the dangers of invasion from without, and convulsions within.” In the end, the Declaration was not a rejection of government power in general but rather a condemnation of the British crown for depriving the colonists of the government they needed. In order to reframe the Declaration as something rather different, the Committee to Proclaim Liberty had to edit out much of the document they claimed to champion.

“. . . That to secure these rights, governments are instituted among men . . .” Here is the reason for and the purpose of government. Government is but a servant—not a master—not a giver of anything. “. . . deriving their just powers from the consent of the governed . . .” In America, the government may assume only the powers you allow it to have. It may assume no others. The ad urged readers to make their own declaration of independence in 1951. “Declare that government is responsible TO you—rather than FOR you,” it continued. “Declare that freedom is more important to you than ‘security’ or ‘survival.’”

“The effort to establish socialism in our country has probably progressed farther than most of us fully realize,” asserted a Lutheran minister in Kansas. “It would be well to remember that every act or law passed by which the government promises to ‘give’ us something is a step in the direction of socialism.” A clergyman from Brooklyn agreed. “Today our homes are built for us, financed for us, and the church is provided for us. Our many services are in danger of robbing us of that which is most important,” he warned, “the right to our own kingdom of self.”

Americans had learned that the Soviet Union now had the atomic bomb. The energetic young Graham seized on the headlines to make the Armageddon foretold in the New Testament seem imminent. “Communism,” he thundered, “has decided against God, against Christ, against the Bible, and against all religion. Communism is not only an economic interpretation of life—communism is a religion that is inspired, directed, and motivated by the Devil himself who has declared war against Almighty God.” He urged his audience to get religion not simply for their own salvation but for the salvation of their city and country. Without “an old-fashioned revival,” he warned, “we cannot last!”

Three important movements in the 1940s and early 1950s—the prayer breakfast meetings of Abraham Vereide, Graham’s evangelical revivals, and the presidential campaign of Dwight D. Eisenhower—encouraged the spread of public prayer as a political development whose means and motives were distinct from the drama of the Cold War. Working in lockstep to advance Christian libertarianism, these three movements effectively harnessed Cold War anxieties for an already established campaign against the New Deal.

Graham was the most prominent of the new Christian libertarians, a charismatic figure who spread the ideas of forerunners such as Fifield to even broader audiences. In 1954, Graham offered his thoughts on the relationship between Christianity and capitalism in Nation’s Business, the magazine of the US Chamber of Commerce. “We have the suggestion from Scripture itself that faith and business, properly blended, can be a happy, wholesome, and even profitable mixture,” he observed. “Wise men are finding out that the words of the Nazarene: ‘Seek ye first the kingdom of God and His righteousness, and all these things shall be added unto you’ were more than the mere rantings of a popular mystic; they embodied a practical, workable philosophy which actually pays off in happiness and peace of mind. . . . Thousands of businessmen have discovered the satisfaction of having God as a working partner.”

Graham’s warm embrace of business contrasted sharply with the cold shoulder he gave organized labor. The Garden of Eden, he told a rally in 1952, was a paradise with “no union dues, no labor leaders, no snakes, no disease.” The minister insisted that a truly Christian worker “would not stoop to take unfair advantage” of his employer by ganging up against him in a union. Strikes, in his mind, were inherently selfish and sinful.  If workers wanted salvation, they needed to put aside such thoughts and devote themselves to their employers.

On Labor Day that same year, he warned that “certain labor leaders would like to outlaw religion, disregard God, the church, and the Bible,” and he suggested that their rank and file were wholly composed of the unchurched.

His hostility to organized labor was matched by his dislike of government involvement in the economy, which he invariably condemned as “socialism.” Graham warned that “government restrictions” in the realm of free enterprise threatened “freedom of opportunity” in America.

Graham’s thoughts on the dangers of socialism became a bit of an international scandal after the Billy Graham Evangelistic Association sent followers a free calendar. A page on England noted that “when the war ended a sense of frustration and disillusionment gripped England and what Hitler’s bombs could not do, socialism with its accompanying evils shortly accomplished. England’s historic faith faltered. The churches still standing were gradually emptied.” Learning of the slight, a columnist for the London Daily Herald denounced Graham with a new nickname: “the Big Business evangelist.”

As preachers like Billy Graham helped to popularize public prayer, they thus managed to politicize it as well. They shared the Christian libertarian sensibilities of Spiritual Mobilization but were able to spread that gospel in much subtler—and much more effective—ways than that organization ever could. At the same time, their work helped to democratize the phenomenon of public prayer.

Congressional breakfast meetings quickly became a fixture on Capitol Hill. Each month, Vereide printed a program to guide the groups in their morning meditations, offering specific readings from Scripture and providing questions for discussion. The groups were officially nonpartisan, welcoming Republicans and Democrats alike, but that was not to say they were apolitical. Most of the Democratic members of the House breakfast group, for instance, were conservative southerners who held federal power and the activism of the New Deal state in as much contempt as the average Republican did. Political overtones were lightly drawn but present nonetheless. “The domestic and the world conflict is the physical expression of a perverted mental, moral and spiritual condition,” noted a program for a House session. “We need to repent from our unworkable way and pray.” The congressional prayer meetings gave Vereide immediate access to the nation’s political elite.

Having won over political leaders in Washington, D.C., Vereide used their influence to establish even more breakfast groups across the nation. The minister pressed ahead in his drive to give the organization an international presence, with quick success. Within a few years, Christian Leadership breakfast groups were meeting regularly in 31 foreign countries. England, France, West Germany, the Netherlands, and Finland represented the bulk of the initial growth of the group, but the ICCL made its presence felt in nations as varied as China, South Africa, and Canada, with isolated operations in localities such as Havana and Mexico City as well. Vereide recognized that the tensions of the Cold War could be exploited to win more converts to his cause.

The earthy Richardson had little use for Graham’s religion, but the two shared a common faith in free enterprise. “When Graham speaks of ‘the American way of life,’” an early biographer noted, “he has in mind the same combination of economic and political freedom that the National Association of Manufacturers, the United States Chamber of Commerce, and the Wall Street Journal do when they use the phrase.”

He chided Democrats for wasting money on the welfare state at home and the Marshall Plan abroad. “The whole Western world is begging for more dollars,” he noted that fall, but “the barrel is almost empty. When it is empty—what then?” He insisted that the poor in other nations, like those in his own, needed no government assistance. “Their greatest need is not more money, food, or even medicine; it is Christ.”

Graham led prayer meetings all over town, including daily sessions in the Pentagon auditorium. On Monday mornings, he held “Pastor’s Workshops” with local clergymen; on Tuesdays, there were luncheons at the Hotel Statler to discuss religion with “the men who have so much a part in shaping the destiny of the Capital of Western Civilization: the business men of Washington.”

EISENHOWER SEEMED AN UNLIKELY CANDIDATE to lead the nation to spiritual reawakening. For decades he had remained distant from religion and could not even claim a specific denominational affiliation. His grandfather had been a minister for the River Brethren, an offshoot of the Mennonites, and his father maintained that faith.

While he lacked ties to any specific denomination, Eisenhower remained firmly committed to the Bible itself. Like his parents, he considered it an unparalleled resource. One of his aides during the Second World War remembered that Eisenhower could “quote Scripture by the yard,” using it to punctuate points made at staff meetings.

Graham’s spiritual support was surely influential in the general’s decision, as was the financial support Richardson promised. Once Eisenhower announced his intentions, the oilman put his vast fortune to work for him. Richardson’s direct contribution to the campaign was reportedly $1 million, but he also paid for roughly $200,000 in expenses at the Commodore Hotel in New York, where the general had established offices after returning home, and then covered most of his expenditures during the Republican National Convention in Chicago as well.

Eisenhower condemned a set of “evils which can ultimately throttle free government,” which he identified as labor unrest, runaway inflation, “excessive taxation,” and the “ceaseless expansion” of the federal government. These were commonplace conservative positions, but Eisenhower presented them in religious language that elevated them for his audience.

Faith and Freedom followed the lead of Graham and Vereide, claiming it would never endorse one party or the other. But it offered a “political checklist for Christians” that nudged readers rather strongly toward the Republicans.

He took more than 55% of the popular vote, with even more impressive margins in the Electoral College, where he won 442 to 89. Stevenson only managed to win nine states.

Reflecting on the election returns, Eisenhower resolved to put that mandate in the service of a national religious revival. He asked Graham to meet with him in the suite Sid Richardson had provided at the Commodore Hotel in New York, to discuss plans for his inauguration and beyond. “I think one of the reasons I was elected was to help lead this country spiritually,” the president-elect confided.

“These days I seem to have no trouble filling my calendar,” the president-elect told them. “But this is one engagement that I requested. I wanted to come and do my best to tell those people who are my friends, who are supporters of the idea that is represented in the foundation, how deeply I believe that they are serving America.” The basic idea of the Freedoms Foundation was that those who promoted “a better understanding of the American way of life” should be singled out for awards and attention, especially those who celebrated the central role played by “the American free enterprise system” in making the nation great. Fittingly, for an organization devoted to the promotion of big business, its president was Don Belding, head of a national advertising agency whose clients included Walt Disney and Howard Hughes. The board of directors, meanwhile, included leaders at General Foods, Maytag, Republic Steel, Sherwin Williams, Union Carbide and Carbon, and US Rubber, as well as individuals such as Sid Richardson and Mrs. J. Howard Pew. The corporate presence was so pronounced that one honoree sent his award back, grumbling that the Freedoms Foundation was “just another group promoting the propaganda of the National Association of Manufacturers.”

More than any other individual, Senator Frank Carlson deserved credit for creating the National Prayer Breakfast. An outspoken opponent of the New Deal, he denounced Franklin Roosevelt as the “destroyer of human rights and freedom” for his administration’s interventions in the economy. He held Harry Truman in similar contempt. “Little Caesars walk the highways of our nation, trying to tell us what to wear, eat, plant, sow and reap,” Carlson complained in 1947.

As Eisenhower’s cabinet focused its attention on spiritual rewards yet to come, its members faced the danger that the press and the public might focus more on the earthly riches they had already amassed. Secretary of Defense Charles Wilson had been the country’s highest-paid executive as president of General Motors, the world’s largest private corporation. Wilson’s initial refusal to divest his holdings in the corporation, which had nearly $5 billion worth of contracts with the same federal department he would now lead, had delayed his confirmation and tarnished his image. When asked whether his GM holdings would tempt him to favor his corporation over his nation, Wilson famously answered that he always thought “what was good for our country was good for General Motors, and vice versa.” The auto tycoon eventually agreed to release his shares, but he was not the only top Defense Department official whose business associations gave the appearance of impropriety. Deputy Secretary Roger Kyes had been in charge of procurement for General Motors; Secretary of the Army Robert Ten Broeck Stevens’s family textile company made uniforms for that branch of the military; Secretary of the Air Force Harold Talbott had ties to both Chrysler and North American Aviation; and Secretary of the Navy Robert Anderson—put in the post at Sid Richardson’s recommendation—had previously managed a major facility for Associated Refineries.

Though he attracted a considerable deal of scrutiny, Wilson was by no means the only corporate titan in the Eisenhower cabinet. Treasury Secretary George Humphrey, for instance, had long served as president of the Mark A. Hanna Company of Cleveland, a sprawling conglomerate with interests in coal, oil, natural gas, iron, steel, copper, rayon, plastics, shipping, and banking. Commerce Secretary Sinclair Weeks, a New England financier and banker, was such a zealous advocate for business that Eisenhower privately worried that he “seems so completely conservative in his views that at times he seems to be illogical.” Postmaster General Arthur Summerfield ran one of the nation’s largest automobile agencies but also found success in real estate, oil, and insurance, while Hobby had made her fortune as a Texas newspaper publisher. Although not businessmen themselves, both Dulles and Brownell had close ties to the corporate world from their time at two of New York’s oldest law firms; Dulles had reportedly earned more in billings than any other corporate attorney in America.

Business leaders, of course, had long been working to “merchandise” themselves through the appropriation of religion. In organizations such as Spiritual Mobilization, the prayer breakfast groups, and the Freedoms Foundation, they had linked capitalism and Christianity and, at the same time, likened the welfare state to godless paganism.

After decades of work, these businessmen believed their efforts had finally paid off with the election of Dwight Eisenhower. Watching him enthusiastically embrace public faith, these supporters assumed that the national religious revival was largely a means to a more important end: the rollback of the New Deal state. But they soon realized that, for all his sympathies for and associations with business leaders, Eisenhower saw the religious revival itself as his essential domestic duty. To their amazement, once in office he gave relatively little thought to the political and economic causes that his backers had always seen as the real reason for that revival.

He refused to go further, especially when it came to the welfare state that his supporters had long worked to destroy. Despite his personal sympathies with their position, the president believed “the mass of the people” disagreed. “Should any political party attempt to abolish social security, unemployment insurance, and eliminate labor laws and farm programs, you would not hear of that party again in our political history,” he warned. “There is a tiny splinter group, of course, that believes you can do these things. Among them are H. L. Hunt . . . , a few other Texas oil millionaires, and an occasional politician or business man from other areas. Their number is negligible and they are stupid.”

Even though Eisenhower’s rise to power had depended on support from “Texas oil millionaires” such as Sid Richardson, he refused to roll back the welfare state they despised. In fundamental ways, he ensured the longevity of the New Deal, giving a bipartisan stamp of approval to its continuation and significantly expanding its reach. Notably, Eisenhower pushed Congress to extend Social Security coverage to another ten million Americans and increase benefits as well. In his first term, the president repeatedly resisted calls from conservatives to cut education spending; in his second, he secured an additional $1 billion for the cause. On a much larger scale, Eisenhower established the single largest public works project in American history with the interstate highway system, but did little to bring down tax rates for the wealthy; the top bracket barely dipped, declining from 94% to 92% over the course of his two terms in office.

For conservatives who had assumed that the success of “under-God consciousness” during the Eisenhower administration would naturally lead to tangible reductions in the welfare state, his time in office was a disappointment.

The National Association of Evangelicals (NAE) praised the president for the pious example he had set.

As with earlier drives to supplant the secular authority of the welfare state with the higher power of the Almighty, the Seven Divine Freedoms ultimately served an earthly purpose. Organizers made the political aims of the project explicit in their plans. “There is a growing realization that the enemies of freedom are not foreign powers,” observed R. L. Decker, the NAE’s executive director, “but that there are forces at work within the nation which are just as dangerous and more sinister than any foreign foe. These forces take advantage of the natural desires of the people for unity and security and material prosperity to propose panaceas for our social, economic, and political problems which, if accepted, would rob us of our freedom as effectively as defeat in warfare,” he continued.

The group would encourage public and private leaders to sign the “Statement of Seven Divine Freedoms” and thereby signify that the United States of America had been founded on the principles of the Holy Bible.  Eisenhower was the first to sign, in an Oval Office ceremony on July 2, 1953. “This is the kind of thing I like to do,” he said afterward. “This statement is simple and understandable, and sets forth the basic truth which is the foundation of our freedoms.” Nixon added his name next, as did members of the cabinet.

“By means of the radio, motion pictures, television, newspaper and periodical advertisements, signboards and posters, essay contests and amateur dramatics as well as community rallies, sermons and editorials,” Decker insisted, “this theme ‘Freedom is of God and we must have faith in him’ can constantly be dinned into the consciousness of America.”

The Judiciary Committee sat to consider a proposed amendment to the Constitution of the United States. If passed, it would have declared, “This Nation devoutly recognizes the authority and law of Jesus Christ, Saviour and Ruler of nations through whom are bestowed the blessings of Almighty God.” The campaign for this “Christian amendment” had been under way, in fits and starts, for nearly a century. Like most efforts to add religious elements to American political culture, the idea originated during the Civil War. In 1861, several northern ministers came to believe that the conflict was the result of the godlessness of the Constitution. “We are reaping the effects of its implied atheism,” they warned, and only a direct acknowledgment of Christ’s authority could correct such an “atheistic error in our prime conceptions of Government.”

These clergymen banded together to create the National Reform Association, an organization that was single-mindedly dedicated to promoting the Christian amendment. It won the support of prominent governors, senators, judges, theologians, college presidents, and professors.

Advocates of the Christian amendment still faced an inherently difficult challenge in the Senate. By its very nature, their proposal to change the Constitution forced them to acknowledge that the religious invocation was something new for the document. The founding fathers had felt no need to acknowledge “the law and authority of Jesus Christ,” and neither had subsequent generations of American legislators. Some of the more imaginative advocates of the Christian amendment at the Senate hearings simply waved away this history and argued that leaders such as Washington and Lincoln had supported the idea even if they never acted upon it. For evidence, they repeatedly made reference in their testimony to letters and meetings in which these presidents allegedly had lent support to their cause. At the hearings, the presiding senator kindly offered to have these documents inserted into the official transcript once they were found. But the published record provided a quiet rebuke to such claims, noting that inquiries to the Library of Congress and other authoritative sources showed that the alleged documents did not, in fact, exist.

Disneyland’s dedication testified to how deeply piety and patriotism were intertwined in its creator’s worldview. Disney, a Congregationalist, relied on Christianity as a constant guide. His faith in his country was equally strong, though his political beliefs changed considerably over the course of his life. During the 1930s, he had been a strong supporter of Franklin Roosevelt and the New Deal. His cartoons during the Depression helped establish the so-called “sentimental populism” of the era’s popular culture, always championing “little guys”—Mickey Mouse, the Three Little Pigs, the Seven Dwarves—in their struggles against stronger foes. But in the 1940s, Disney’s politics took a sharp turn to the right. In 1941, a bitter strike at his company led him to denounce “Communist agitation” in a full-page ad in Variety. The day after Pearl Harbor, Disney was stunned when the US Army abruptly commandeered his studio for seven months’ use as a supply base. During the war, the government never paid him for some propaganda shorts he made, and his overseas profits dwindled to a trickle. Disney emerged from the conflict a staunch conservative. He helped bring the House Un-American Activities Committee to Hollywood in October 1947 and, in his appearance as a friendly witness, condemned communist influence in labor unions, pointedly naming names. When fellow Congregationalist James Fifield organized the Committee to Proclaim Liberty a few years later, Disney readily signed on to support its “Freedom Under God” festivities.

In its conflation of piety and patriotism, Disneyland embodied larger currents in American popular culture during the postwar era. Political leaders and religious reformers led the way in fomenting the religious revival of the Eisenhower era, but their counterparts in Hollywood and on Madison Avenue proved to be indispensable allies. Prompted by both patriotism and an eye for profits, entertainers and advertisers did a great deal to promote public expressions of faith in the era. Prominent advertising agencies promoted religious observance as a vital part of American life and religion as an essential marker of the national character.

Like much of corporate America, the advertising industry discovered religion as a means of professional salvation in the aftermath of the Great Depression. The industry had fallen into turmoil when ad revenues plummeted along with corporate profits in the crash of the late 1920s and early 1930s. More ominously for advertising executives, the New Deal represented the first real efforts to regulate their work, as it empowered the Federal Trade Commission to fight false claims about food and drugs. As the nation prepared itself for the Second World War, further growth of the federal government seemed guaranteed. Thus, in November 1941, hundreds of ad executives gathered at a spa in Hot Springs, Virginia, to discuss the danger of “those who would do away with the American system of free enterprise” or who might “modify the economic system of which advertising is an integral part.

The Advertising Council classified its projects as acts of public service, but in truth they were acts of public relations, meant to sell the American people on the merits of free enterprise. In 1946, for instance, the council launched a campaign titled “Our American Heritage.” On the surface, it seemed wholly nonpartisan, simply intended to raise Americans’ awareness of their rights and responsibilities as citizens. Internally, though, organizers described it as a conservative-minded effort that would help Americans resist becoming “pawns of a master state.

The J. Walter Thompson Company (JWT), the largest advertising firm in the world, handled the practical work of the campaign. Its advertisements had a simple message for Americans: go to church. Copywriters drew on their conventional strategies, pitching religion as a path to personal improvement and self-satisfaction. “Find yourself through faith,” the campaign urged; “come to church this week.” Ads typically dramatized the concerns of a frantic father or an anxious housewife and then, in the same tones used to hawk antacid or mouthwash, promised that faith would cure their problems quickly.

Television and film followed the religious trend throughout the 1950s. Billy Graham’s Hour of Decision program was televised by three different networks, on some 850 stations, to an estimated audience of twenty million viewers.

The most lasting legacy of The Ten Commandments was its marketing campaign. As he prepared for the debut, DeMille worked with the Fraternal Order of Eagles on an ambitious plan to establish monuments of the Ten Commandments on public property across the nation. The organization had been distributing copies of the Ten Commandments for years, inspired by an incident in which Judge E. J. Ruegemer of St. Cloud, Minnesota, learned that a juvenile defendant in his courtroom had never heard of the laws and “sentenced” the boy to learn and obey them. Ruegemer, the head of the Eagles’ Youth Guidance Commission, persuaded the fraternal order to take up the cause. Members and their families volunteered to make reproductions of the Ten Commandments, initially manufacturing them as paper scrolls in St. Paul and framing them with hand-cut wood and glass. The nearly nine hundred thousand members of the organization popularized the venture, distributing scrolls far and wide. Recipients included city halls in small towns from Washington State to Pennsylvania, judges in Idaho and Massachusetts, and a police detective in Atlantic City, New Jersey.

When he learned of the Eagles’ campaign, DeMille immediately wanted to join in. A consummate showman, the director urged the Eagles to work on a grander scale. Instead of modest scrolls, he suggested the organization craft larger stone monuments that more closely resembled the tablets described in Exodus. Together, DeMille and the Eagles established Ten Commandments monuments across America.

Schwarz capitalized on his new influence in Congress to present himself as a leading authority on the problem of communism and the solution of Christianity. In 1957, he addressed a breakfast meeting of the Republican Club, where he so inspired attendees that they “immediately,” as one told Schwarz, took steps “to refer you to the House Un-American Activities Committee and to arrange a personal interview between you and an Assistant to the President of the United States.” He was soon summoned to testify before the committee’s staff on the topic “The Communist Mind.” In an interview that ran for an hour and twenty minutes, the doctor—who liked to compare himself to a pathologist in his new line of work—patiently led congressional aides through his diagnosis of the communist menace. Ultimately, he urged greater awareness of “the basic foundations of American civilization” as the only cure.

Improbably, Schwarz’s congressional testimony quickly became a cause célèbre. The first transcripts were rapidly distributed, forcing Congress to print another 50,000 copies the following year. Executives at the Allen-Bradley Company, an electronics corporation in Milwaukee, published large portions of the interview as a special double-page advertisement in the largest metropolitan newspapers. “WILL YOU BE FREE TO CELEBRATE CHRISTMAS IN THE FUTURE?” the headline blared. “NOT UNLESS: You and other free Americans begin to understand and appreciate the benefits provided by God under the American free enterprise system.” The ad urged Americans to read Schwarz’s words and share them with friends. Much like the other corporations that sponsored like-minded messages, the Allen-Bradley Company insisted it had nothing to gain. “With this advertisement,” the sponsor noted, “this company is trying to sell you nothing except the importance of holding fast to your American freedoms including the freedom to live, the freedom to worship your God, and the freedom to work as you choose.” Republican senator Barry Goldwater, meanwhile, wrote to Schwarz as well. Schwarz soon repackaged his testimony as a best-selling book, You Can Trust the Communists ( . . . To Do Exactly as They Say). Released in 1960, it quickly sold a million copies. While Schwarz successfully spread his message in print, his energies were more devoted to a whirlwind tour of personal appearances.

In 1958, the CACC launched its first School of Anti-Communism. For $5 a day—or $20 for the week—participants were treated to a slate of anticommunist films, lectures, and discussions in a packed schedule that ran from 8:30 a.m. to 9:45 p.m.  The first School of Anti-Communism was held in St. Louis, but they soon spread to cities around the nation including Los Angeles, New York, Chicago, Houston, Dallas, Miami, San Diego, San Francisco, Seattle, and Portland.

While the school made an impression on the public, it also impacted the finances of the Christian Anti-Communism Crusade. The accounting firm of Ernst & Ernst reported that the organization raked in $311,253 for the week, an impressive sum in light of the low admission fees. Even after expenses, the CACC still turned nearly $250,000 in profits. Schwarz promised the proceeds would be used to operate similar schools across the country. But in the short term, he decided to capitalize on the overwhelming local popularity of the Southern California school by staging a sequel two months later, billed as “Hollywood’s Answer to Communism.” Organizers worked diligently to surpass the success of the first event. Frawley again led the way, this time securing the landmark Hollywood Bowl for the rally. As master of ceremonies, he enlisted the former song-and-dance man and future US senator George Murphy. The actors made a curtain call as well, with Reagan, Wayne, Boone, Rogers, and Evans all on hand again. This time, though, they were joined by a cast of all-stars that included Jimmy Stewart, Rock Hudson, Robert Stack, Donna Reed, Ozzie and Harriet Nelson, Nat “King” Cole, Jane Russell, Edgar Bergen, Andy Devine, Walter Brennan, Tex Ritter, Irene Dunne, Vincent Price, Cesar Romero, and a host of others then starring on television and in film. Notable directors such as John Ford and studio executives such as Walt Disney and Jack Warner offered their support too.

“When I finally spoke,” Schwarz remembered, “only ten minutes remained, so I delivered an uncharacteristically brief message. It was sufficiently forceful to earn me a comparison to Adolf Hitler in the student newspaper of Stanford University.” The highlight of the Hollywood Bowl event, however, was a special appearance by C. D. Jackson, the publisher of Life magazine. After the Southern California school, his publication ran a two-paragraph item that dismissed the event as a gathering of wild-eyed extremists no different from the John Birch Society. Privately, Schwarz knew well that the two far-right groups often shared a common constituency. In a nine-page, single-spaced letter, Birch Society founder Robert Welch informed him in the fall of 1960 that “we have told our members to encourage, support, and work for your ‘schools’ wherever they were put on, so far as they had the opportunity and ability to do so; and to encourage the attendance of friends and acquaintances (as well as attending themselves).” In some instances, Birchers had taken an even more prominent role in the CACC schools. “I know,” Welch wrote, “that at your recent school in San Diego, some of the people who worked hardest to bring it off successfully were our members, for I saw right on the listing of committees and workers the names of some of our members who had specifically written to ask us whether or not they should participate, and whom we encouraged to do so.” Likewise, “quite a number of the leaders and hardest workers” in the Milwaukee and Chicago schools had been Birchers too.

Publicly Schwarz bristled at any suggestion that his organization had anything in common with the increasingly marginalized Birchers. In retaliation for the hit piece in Life, CACC’s sponsors lashed out. An FBI report noted that Frawley “at once cancelled $80,000 ‘Life’ advertising accounts for Schick Razor and Technicolor.” At the same time, “Richfield and other large national advertisers also withdrew substantial contracts calculated to total half million dollars.” (The sponsors went after less prominent critics with equal zeal. In September 1961, an executive with Richfield Oil sent the head of the Los Angeles FBI office the names and addresses of a dozen private citizens who had written the corporation to complain about its sponsorship of the school, suggesting that they needed to be formally investigated.) Meanwhile, conservative activists organized a grassroots campaign calling for individuals to cancel their subscriptions.

When he sat down to write the Engel decision 15 years later, Black was determined to defend the wall of separation between Church and State. Religious liberty was essential, he told his wife, because “when one religion gets predominance, they immediately try to suppress the others.” History was littered with evidence of the dangers that inevitably followed when church and state merged. “People had been tortured, their ears lopped off, and sometimes their tongues cut or their eyes gouged out,” Black continued, “all in the name of religion.” To illustrate that point, the justice crafted a rigorously researched opinion. He began with the Book of Common Prayer and then reread John Bunyan’s Pilgrim’s Progress, a classic Christian allegory written by a Baptist author who had been imprisoned for defying the Church of England. That was merely the beginning. “The Judge had religious references on his fingertips,” marveled one of his clerks, who ran back and forth to the library to collect them. As he wrote and rewrote the opinion, Black piled on more history each time. Lower courts had repeatedly made unsubstantiated claims about the nation’s “religious heritage” to support the defendants in Engel, but Black was determined to expose their errors with a meticulously researched rebuttal. By the sixth draft, the bulk of his opinion had become a lengthy narrative about the tangled history of church-state relations in the entire Anglo-American world from the 16th to 18th centuries. “It is a matter of history,” he insisted, “that this very practice of establishing governmentally composed prayers for religious services was one of the reasons which caused many of our early colonists to leave England and seek religious freedom in America.” Based on their “bitter personal experience,” Black wrote, the founders crafted the First Amendment to keep the state out of religion and religion out of the state.

In Black’s view, religion certainly deserved a place of prominence in American life, but the state could not dictate it. “It is no part of the business of government,” he read, “to compose official prayers for any group of the American people to recite as a part of a religious program carried on by the government.” “The prayer of each man from his soul must be his and his alone,” he said. “If there is anything clear in the First Amendment, it is that the right of the people to pray in their own way is not to be controlled by the election returns.”

The outraged reaction to the Engel decision was, in large part, driven by alarmist coverage in the press. The court’s majority had gone to great lengths to note that their ruling merely struck down the Regents’ Prayer and, moreover, did so only because of the unique role that New York State officials played in its composition and implementation, but newspapers lost the nuances. “God Banned from the State,” ran a typically hyperbolic headline. Hostile editorials only compounded the problem. The New York Daily News, for instance, lambasted the “atheistic, agnostic, or what-have-you Supreme Court majority,” while the Los Angeles Times complained they made “a burlesque show” of the First Amendment. Publisher William Randolph Hearst Jr. went so far as to call for a complete rewriting of the First Amendment in a signed editorial that ran in all his papers. The media’s misrepresentations were so widespread that the Columbia Journalism Review devoted its fall issue to figuring out just how and why it had all gone so spectacularly wrong.

For a year and a half, Kennedy managed to avoid issues of church and state. But now the Warren Court had forced his hand. In a press conference two days after the decision, Kennedy finally addressed it. In measured remarks, he cautioned Americans to approach the issue calmly. Noting that it was important to “support the Supreme Court’s decisions even when we may not agree with them,” the president reminded Americans that “we have in this case a very easy remedy, and that is to pray ourselves, and I would think that it would be a welcome reminder to every American family that we can pray a good deal more at home, we can attend our churches with a good deal more fidelity, and we can make the true meaning of prayer much more important in the lives of all our children.” As Kennedy called for calm, however, a few of his predecessors fueled the fires. Herbert Hoover denounced Engel as a “disintegration of a sacred American heritage,” while Eisenhower asserted that he “always thought this nation was an essentially religious one.” Truman pointed out that it was actually the Court’s duty to interpret the Constitution, but he was largely ignored.

Congressional leaders only ramped up their rhetoric. The ruling, Senator Herman Talmadge of Georgia thundered, was “an outrageous edict” and “a blow to all believers in a Supreme Being.” His colleagues in the Senate largely agreed. Barry Goldwater of Arizona denounced the decision as a “blow to our spiritual strength,” while James Eastland of Mississippi likewise called it a major step toward “the destruction of the religious and spiritual life of this country.”

Winegarner’s role in the debate was short-lived. In May 1964, columnists Roland Evans and Robert Novak revealed that the Citizens Congressional Committee was “operated, financed, and directed by Gerald L. K. Smith, notorious promoter of right-wing causes,” and that Winegarner was Smith’s nephew. A onetime ally of Senator Huey Long and an outspoken anti-Semite, Smith had made no secret of his involvement, bragging that the committee was “an auxiliary, financed and directed by The Cross and the Flag,” the far-right publication of his Christian Nationalist Crusade. In its pages, Smith attacked the “cabal of international Jews” in the Kennedy administration and the “nine-man oligarchy” they manipulated on the Supreme Court, before telling readers there was hope. With its “mammoth petition,” the Citizens Congressional Committee had demanded the restoration of “the right of Christian devotions in public schools.

While exposure of the committee’s extremist roots was embarrassing to the larger cause, it was not surprising. Indeed, the campaign for a constitutional amendment to restore prayer to public schools had quickly attracted activists on the far right. Billy James Hargis of the archconservative Christian Crusade devoted himself to circulating petitions across the West, while Carl McIntire, a fundamentalist broadcaster with an affinity for far-right politics, lobbied for it over his own network of 582 radio stations. The John Birch Society supported the amendment idea as part of its long-standing drive to impeach Earl Warren and generally discredit the Supreme Court. Similarly, segregationists who criticized the Court’s rulings on civil rights latched on to the school prayer issue as a more popular and palatable way to condemn it again.

The visibility of such supporters led some to dismiss the constitutional prayer amendment as a cause championed only by the far right or the Deep South, but in truth it had much broader backing. At the 1962 Governors’ Conference, the leaders of forty-nine states called for a prayer amendment that “will make clear and beyond challenge the acknowledgment of our nation and people of their faith in God”; a year later, they renewed their call unanimously. The governors weren’t alone. The Supreme Court’s rulings against school prayer and Bible reading were deeply unpopular across the nation, and a solid majority of Americans seized on the amendment idea as a solution. In August 1963, shortly after the Schempp decision, Gallup asked Americans if they wanted prayer and Bible reading in public schools; 70 percent said yes. They flooded their political representatives with mail, with one study estimating that 50 percent of all correspondence to Congress in the 1963–1964 term focused on the proposal for a school prayer amendment. These letters, postcards, and petitions overwhelmingly supported the idea, with officials citing a margin of nearly twenty to one in favor. Congress leapt into action. Between the summer of 1962 and spring of 1964, 113 representatives and 27 senators introduced 146 different amendments to restore prayer and Bible reading to public schools. With such overwhelming popular and political support, the “prayer amendment” seemed sure to sail through Congress and be ratified by the states with equal speed.

Though the two camps in this battle were far from homogeneous, each clustered around a set of convictions. To put it in broad strokes, proponents of the prayer amendment believed America was a Christian nation—or, in their more generous moments, a Judeo-Christian nation. They were deeply invested in promoting a prominent role for religion in public life, believing that formal recognition of God was not simply an affirmation of the nation’s religious roots but an essential measure for preserving the country’s character. In their eyes, liberty came directly from God. If Americans ever came to believe that their rights stemmed from the state instead, then those rights could just as easily be taken away by the state. Thus, the debate for the pro-amendment side was about much more than school prayer; it was about the survival of the nation.

For opponents of the amendment, the stakes were just as high. Legal and religious authorities who opposed the idea warned that a school prayer amendment would radically reshape the status quo, effectively weakening the First Amendment’s guarantee of religious freedom. Under a new “tyranny of the majority,” they believed, local religious minorities would be persecuted. But more than that, all faiths would be endangered. If the state intruded on churches’ and synagogues’ roles as religious educators, it would usurp not just their activities but also their authority. In their place, the state would foster a broader but blander public religion, one drained of the vital details that animated individual faiths. The prayer amendment, the heads of major denominations concluded, would ultimately hurt religion rather than help it.

While Celler’s delaying tactics enraged supporters of the Becker Amendment, they proved crucial in giving opponents time to mobilize. Most civil libertarians and religious organizations had assumed the campaign for a constitutional amendment would go nowhere, but as momentum shifted in Becker’s direction they realized, almost too late, what was happening. In March, ACLU headquarters sent its affiliates warnings that the discharge petition drive was “becoming alarming.” They scrambled to find allies. The Baptist Joint Committee on Public Affairs, the political voice of the eight largest Baptist bodies in the nation, soon announced its opposition, claiming the Becker Amendment threatened their religious liberty. A week later, the American Jewish Committee denounced it as “the most serious challenge to the integrity of the Bill of Rights in American history.” On St. Patrick’s Day, representatives of Protestant and Jewish organizations and civil liberties groups gathered at a hastily arranged meeting in New York. Sizing up the situation, they realized the Becker Amendment had “an excellent chance” of winning a majority of votes from the Judiciary Committee. If that happened, the full House and Senate would invariably vote for it.

Reverend Eugene Carson Blake of the United Presbyterian Church worried that “school prayer and Bible reading either become a ritual that is meaningless and has no effect on the children, or it is some kind of indoctrination.” Either way, it amounted to “state religion,” he warned. “If you get the idea that religion and Americanism are the same thing, all of us are scared to death, because we think religion transcends the State.”

“The politician who says he believes in reducing the scope of Government and then asks for a Government role in nurturing and guiding the inner man can expect scrutinizing conversations as these issues are pursued by our people in future debate.”

“It is so easy to think that one is voting for prayer and the Bible,” cautioned the Christian Science Monitor. “It comes as a shock that this is not the issue. The issue is that agencies of government cannot avoid favoring one denomination and hurting another by the practical decisions that have to be made by government authority on what version of the Bible shall be imposed and what prayer. The churches know this and that is why they are against the Becker Amendment.”

The prolonged fight over the amendment marked not the end of a struggle but the beginning. The House hearings revealed how fault lines across the country were shifting on the issue of separation of church and state. Clerical leaders had taken stands that were largely in line with their denominations’ traditional perspectives on the matter, but conservative laymen recoiled from their arguments. They felt bewildered—and, in many instances, betrayed—by their leaders’ objections to seemingly wholesome traditions such as school prayer and Bible reading. Their faiths’ traditional stances on issues of church-state separation had always seemed academic. In the wake of Becker’s failure, conservative laymen began to doubt the authority of their religious representatives and look for new leaders to replace them.

Dirksen refused to accept defeat. “This crusade will continue,” he announced. “The next time, we will be better organized throughout the country.” In a telephone call the night before the vote, he had been assured by Dr. Daniel Poling, the eighty-one-year-old fundamentalist and former editor of the Christian Herald, that a new grassroots organization would rise up to champion the cause of school prayer. Its leaders would be Poling, Billy Graham, and a “Catholic prelate” to be named later. That specific organization never came to pass, but the proposal was prescient. For too long, religious conservatives believed that their voice in political matters—especially when it came to the role of religion in public life—had been drowned out by the more liberal leaders of their denominations. If conservative Christians at the grassroots would simply organize themselves according to their politics rather than their particular denominations, they could end the reign of the religious establishment. If effective leaders could bridge the long-standing gaps between different faiths—and bring together, as Poling proposed, conservative Catholics with fundamentalist and evangelical Protestants—then laypeople would finally have their say.

When he tried to explain his razor-thin loss in the 1960 presidential race, Nixon often singled out a last-minute decision by Life publisher Henry Luce to scrap an article in which Graham had given him a strong endorsement. Both Nixon and Graham believed the article would have made the difference.

Eight years later, they were determined not to repeat that mistake. Echoing his earlier service to Eisenhower, Graham proved pivotal both in Nixon’s decision to run and in his performance on the campaign trail. “You are the best prepared man in the United States to be president,” Graham reportedly told him in January 1968. “I think it is your destiny to be president.” Unlike his coy approach in 1952, this time he made no secret of his support. At a Billy Graham crusade in Portland, Oregon, he introduced Nixon’s daughters to the crowd and announced that “there is no American I admire more than Richard Nixon.” At the Republican National Convention in Miami in August, Graham provided a prayer after Nixon’s acceptance speech and then participated in top-level discussions about potential running mates. In September, Nixon took a place of honor next to Graham on stage at another crusade in Pittsburgh, where the preacher told the worshipers and those watching at home that his long friendship with Nixon had been “one of the most moving religious experiences of my life.” Shortly before the election, Graham informed the press that he had already cast an absentee ballot for Nixon, a fact that was repeated in Republican television ads right up to election day.

Graham’s influence in the Nixon White House was profound. His words and deeds helped make piety and patriotism seem the sole property of the right.

“Every president in American history had invoked the name and blessings of God during his inauguration address, and many . . . had made some notable public display of their putative piety,” religious scholar William Martin observed, “but none ever made such a conscious, calculating use of religion as a political instrument as did Richard Nixon.” Not even Eisenhower came close. While his purposely bland public religion had helped unite Americans around a seemingly nonpartisan cause, the starkly conservative brand of faith and politics advanced by Nixon and Graham only drove them apart.

“Even in this period when religion is not supposed to be fashionable, when agnosticism and skepticism seem to be on the upturn,” he reflected, “most of the people seem to be saying ‘We are praying for you, Mr. President, and for the country.’” He appeared sincere, but later, when an aide praised his performance, Nixon laughed it off. He’d simply fed the crowd some “church stuff” to keep them happy.

Behind the scenes, however, the ulterior motives were clear. “Sure, we used the prayer breakfasts and church services and all that for political ends,” Nixon aide Charles Colson later admitted. “One of my jobs in the White House was to romance religious leaders. We would bring them into the White House and they would be dazzled by the aura of the Oval Office, and I found them to be about the most pliable of any of the special interest groups that we worked with.” The East Room church services were crucial to his work. “We turned those events into wonderful quasi-social, quasi-spiritual, quasi-political events, and brought in a whole host of religious leaders to [hold] worship services for the president and his family—and three hundred guests carefully selected by me for political purposes.

Well versed in the public relations value of public piety, Haldeman exploited the services to their full potential. At his suggestion, for instance, the supposedly private programs were broadcast over the radio, with print reporters, photographers, and TV cameramen on hand to record the spectacle for wider distribution.

Other officiants were even more direct in blessing the president. In June 1969, Rabbi Louis Finkelstein, chancellor of the Jewish Theological Seminary of America, concluded his sermon with a bold prophecy. “I hope it is not too presumptuous of me, in the presence of the President,” he noted, “to say that future historians, looking back at our generation, may say that in a period of great trials and tribulations the finger of God pointed to Richard Milhous Nixon, giving him the vision and the wisdom to save the world and civilization.

Such comments were no accident. The White House staff went to great lengths to ensure that clergymen invited to the East Room were conservatives connected to a major political constituency. In recommending Archbishop Joseph Bernardin of Cincinnati as officiant for a service before St. Patrick’s Day, a cover memo noted bluntly that “Bernardin was selected because he is the most prominent Catholic of Irish extraction and a strong supporter of the President. We have verified this.” Harry Dent, a former aide to Strom Thurmond who directed the administration’s “southern strategy,” likewise forwarded a list of “some good conservative Protestant Southern Baptists” who could be trusted to preach a message that pleased the president.

Political concerns also dictated who attended each service. Low-level members of the White House staff, such as switchboard operators or limousine drivers, were occasionally invited, to support the illusion that these were private affairs for the larger White House “family,” but internal policies instructed that no more than a quarter of the attendees should be “non-VIPs.” Instead, the congregation was composed of prominent members of the White House and its supporters, so much so that the New York Times joked: “The administration that prays together, stays together.” Invitations usually went to the administration’s allies in Congress, but occasionally they were used to lobby more independent members about particular bills.

With the bulk of the seats reserved for administration officials and congressmen they might sway, the remaining few were precious political commodities. Potential campaign donors were always given preference. An early “action memo” to Colson ordered him to follow up on the “President’s request that you develop a list of rich people with strong religious interest to be invited to the White House church services.” At this, Colson had quick success. The guests for an ensuing East Room service, for instance, included the heads of AT&T, Bechtel, Chrysler, Continental Can, General Electric, General Motors, Goodyear, PepsiCo, Republic Steel, and other leading corporations.

As the political purpose of the White House church services became obvious, criticism from the press increased. Originally, Nixon thought it would be “very useful” to win the media’s approval for the new tradition and decided to invite several prominent reporters, pundits, newspaper publishers, and network presidents to a service early in his administration. Guests included CBS anchorman Walter Cronkite and newspaper publisher Samuel I. Newhouse, as well as prominent reporters from major dailies. For his sermon to the press, Reverend Louis H. Evans Jr. dwelled on the dangers of passing judgment without having the full facts at hand. “Can we be accepted for what we truly are, can we accept others for what they are,” Evans asked, “or will they cling to stereotypes, to distorted a priori portraits?” Such blunt entreaties did not, of course, keep the press from passing judgment. In July 1969, the Washington Post challenged the sincerity of this “White House Religion.” “Unfortunately, the way religion is being conducted these days—amid hand-picked politicians, reporters, cameras, guest-lists, staff spokesmen—has not only stirred needless controversy, but invited, rightly or not, the suspicion that religion has somehow become entangled (again needlessly) with politics,” the editors chided. “Kings, monarchs, and anyone else brash enough to try this have always sought to cajole, seduce or invite the clergy to support official policy—not necessarily by having them personally bless that policy, but by having the clergy on hand in a smiling and prominent way.” In the end, the Post gently suggested it might be best “to avoid using the White House as a church.”

Religious leaders began to denounce the East Room church services as well. Reinhold Niebuhr, once an outspoken critic of Spiritual Mobilization, now targeted its apparent heirs. For an August 1969 issue of Christianity and Crisis, the seventy-seven-year-old theologian penned a scathing critique titled “The King’s Chapel and the King’s Court.” The founding fathers had expressly prohibited establishment of a national religion, he wrote, because they knew from experience that “a combination of religious sanctity and political power represents a heady mixture for status quo conservatism.” In creating a “kind of sanctuary” in the East Room, Nixon committed the very sin the founders had sought to avoid. “By a curious combination of innocence and guile, he has circumvented the Bill of Rights’ first article,” Niebuhr charged. “Thus he has established a conforming religion by semi-officially inviting representatives of all the disestablished religions, of whose moral criticism we were [once] naturally so proud.” The “Nixon-Graham doctrine of the relation of religion to public morality and policy” neutered the critical functions of independent religion, he warned. “It is wonderful what a simple White House invitation will do to dull the critical faculties, thereby confirming the fears of the Founding Fathers.

“I call upon Americans to bend low before God and go to their knees as Washington and Lincoln called us to our knees many years ago,” he implored. “I submit that we can best honor America by rededicating ourselves to God and the American dream.” A return to religion, Graham argued, would bind the wounds of the nation and “stop this polarization before it is too late.” As Graham looked out from the Lincoln Memorial, though, it seemed it might already be too late. The crowd before him welcomed his message, but they had become increasingly distracted by a smaller contingent of radicals arrayed behind them. Roughly a thousand sprawled in the shadows of the Washington Monument, smoking red-white-and-blue joints and waving Vietcong flags. Though Graham had hoped to win them over, they still viewed him and his supporters with suspicion. (Speaking with a reporter, a young man with long brown hair and a drooping mustache referred to Graham’s clean-cut crowd as “the Americans.”) As the service went on, a few hundred radicals, some completely nude, waded waist deep into the reflecting pool and launched into antiwar chants.

When mounted policemen finally intervened to keep the hecklers at bay, the conservative crowd cheered them on. “Push ’em back,” yelled a man in yellow Bermuda shorts. “They can use a bath!” “They ought to be clubbed,” said a bald man in a striped shirt. An angry housewife upped the ante: “I hope they break a few necks, that’s what I hope.”

As the speakers descended the steps, they joined the crowd in a procession down Constitution Avenue. US servicemen and Boy Scouts led the way with the American flag and the flags of states and territories. Hippies stood on the sidelines chanting “One, two, three, four! We don’t want your fucking war!”

Once in office, Reagan helped deepen the sacralization of the state. “I am told that tens of thousands of prayer meetings are being held on this day; for that I am deeply grateful,” he said in his first inaugural address. “We are a nation under God, and I believe God intended for us to be free.”

[ And that’s just some of what this book has to say – find out about Nixon, Reagan, and what’s happened since by reading the book ]

 


A book review of “Thundersticks: Firearms and the Violent Transformation of Native America” by David J. Silverman

[ This is a book review of “Thundersticks: Firearms and the Violent Transformation of Native America” by David J. Silverman, 2016.

I found this book hard to put down. It should be read because it tells the story of the role guns played in the decimation of Native Americans, how European colonization initially succeeded mainly by trading guns for furs (beaver, otter, buffalo, deer), how guns contributed to the deaths of hundreds of thousands of Native Americans over the next 200 years, and because it offers a new and darker view of the history of America. After reading this book, I wondered whether perhaps as many Indians died in gun battles between tribes and with colonists as died from smallpox and other diseases.

Native Americans were brave, strong, clever, and strategic in how they used guns to transform their culture. Perhaps if they’d had a greater population they could have fended off colonization, though disease, fighting among themselves, the immigration of millions, and the enormous birth rate of the colonists almost certainly doomed them.

It’s an enormous tragedy that Indians used guns to kill members of other tribes and to capture slaves to replace their own numbers lost to battles and disease, taking wives and children (the men were killed), and that they sold captured natives into the European slave trade in exchange for guns. The one time Native American leaders had the vision to try to unite the tribes against European colonization, the effort failed (Pontiac’s War, 1763).

The complete history of the role guns played in the tragedy of hundreds of thousands of Native American deaths in gun battles has never been told before as far as I can tell, though “The Earth Is Weeping: The Epic Story of the Indian Wars for the American West” covers the U.S. Army wars against Native American tribes after the Civil War.

As globalization ends due to supply chains breaking from oil shortages, peak everything, wars over remaining resources, and destruction of the ecology that sustains us, the ability of the State to control violence will erode and deaths will grow in number until the human population is back to carrying capacity.

This book reminded me that guns will play a huge role in the die-off, especially in America, which has by far the highest number of guns per capita (see the Wikipedia article “Estimated number of guns per capita by country” for details).

Guns are anathema to me and many others, but after you read this book you may see merits in arming yourself and finding alliances with other members of your community as mafias, gangs, and paramilitaries proliferate. And who knows, red states may end up taking over blue states, given that 55% of Republicans versus 32% of Democrats have a gun in their home (Gallup poll, 2006).

There are quite a few people capable of forming militias: 22 million Americans, or 7.3% of the population, are serving in the military now or have served at some point in their lives (“What Percentage of Americans Have Served in the Military?”, FiveThirtyEight, 2015). Add to that millions of police officers, security guards, and people in related professions.

I wouldn’t be surprised if bullets were worth more than gold for a while.

P.S. I’ve read several other books lately on the true history of America, not the patriotic pablum fed to us in school, that you may also be interested in:

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report ]

Excerpts from Thundersticks:

From the early days of Atlantic coast colonization in the 17th century, through the end of the Plains wars in the late 19th century, one group of Indians after another used firearms to revolutionize their lives.

The first groups to adopt these weapons sought a military advantage over their rivals.

Those who managed to seize temporary control of an emerging gun market transformed themselves into predatory gunmen, terrorizing entire regions to seize enemy Indian captives, plunder, land, and glory. In the face of such gun-toting expansionist powers, neighboring peoples had little choice but to respond in kind. They could plainly see that the groups most at risk of subjugation, forced adoption, enslavement, displacement, and death were the ones who failed to provide their warriors with guns and ammunition.

All of the tribes quickly learned that access to guns could lead to their rise or fall. The result was the eruption of regional arms races across the continent for over 200 years. This predatory raiding would not subside until a rough balance of power was achieved through a widespread distribution of guns.

For every force like the Five Nations that rose on the strength of its armament, there were numerous other groups on whose fall that rise was predicated. This added up to tens of thousands and perhaps even hundreds of thousands of eastern woodlands Indians killed, captured, tortured, forcibly adopted, and maimed over the course of the seventeenth century.

Innumerable others suffered the misery of losing loved ones over and over again and living in constant fear. Even the Iroquois eventually had their own tactics turned against them as their rivals acquired their own arms and fine-tuned their defenses. The region had degenerated into a running gun battle in which no one was safe.

For some Natives the gun became an important and even necessary tool for hunting. This was especially the case among deer-hunting peoples east of the Mississippi River and for caribou/moose hunters near Hudson Bay. It took only a generation or two before Indians claimed that their young people had become so accustomed to hunting with these weapons, and so out of practice at using and manufacturing bows and arrows, that they would starve without ammunition and gunsmithing services.

The centrality of guns to Native warfare and hunting made them symbols of Indian manhood, for these were the most basic male responsibilities. Men went to war for a variety of reasons:

  • to kill enemy warriors in order to expand or defend territory
  • to seize women and children for enslavement and adoption
  • to negotiate tributary relationships between communities
  • to avenge insults
  • to protect kin from outside aggressors
  • to plunder enemy wealth

The people’s destiny hinged on these goals, and therefore their cultural practices emphasized war as a foundation of male identity. Almost any man who aspired to social esteem, a favorable marriage, and political influence first had to prove himself as a warrior and hunter. As the weapons market spread, achieving this status required him to become a capable gunman as well. Firearms grew so essential to masculine achievements that, in many times and places, an Indian man was rarely, if ever, seen out on the hunt or on the warpath without a musket and an ammunition bag slung over his shoulder.

Among the Blackfeet of the northwestern Plains, capturing an enemy warrior’s gun became the greatest honor a man could accomplish in battle, which he then memorialized in ceremony and art.

It is equally telling of the role guns played in Indian constructions of gender that Native women rarely used firearms, even when their lives were in peril. The general rule was that women gave and sustained life but did not take it. This principle held firm even when the threat of enemy gunmen was imminent and the community at risk had enough resources to put muskets in the hands of adults of both sexes. It did not seem to matter that women faced special dangers from enemy raiders and armies, since Indian war parties usually killed their adult male opponents but marked able-bodied women for forcible adoption or slavery.

Women were prizes for gun-toting enemy warriors, restricted by their people’s gender conventions from wielding arms to defend themselves. Women who made it out of an enemy attack alive but captive would serve the captor’s people for a greater or lesser time as slaves before being adopted with the expectation of marrying and producing children—that is, if no one killed them beforehand. Child captives suffered similar ordeals. The misery of untold numbers of women and children fitting this description, and of thousands of men who also died along the way, was among the legacies of the arming of the Native Northeast, where in the end “there really were no winners … only survivors.”

As Indians’ need for munitions grew, they developed economies to secure supplies of arms and gunsmithing services and to restrict their rivals’ access to them.

Indigenous political economies of guns followed a common pattern across the continent over the course of 250 years. Repeatedly, Indian polities harvested resources sought by gun suppliers, and then cultivated trade with more than one weapons dealer to ensure dependable flows of munitions at low costs, even in the event of war with the societies of the arms merchants. Indians used their arsenals to cut off indigenous enemies from the arms trade and seize hunting grounds, slaves, and horses from them which could be converted into more guns. Sometimes the Indians’ gun dealers hailed from different nations, such as England, France, the Netherlands, or Spain, or different colonies of the same nation, in the case of the English provinces of the Atlantic seaboard. At other times (or simultaneously), munitions came from one or more Native groups playing the role of middlemen between colonial markets and Indians of the interior. The point of opening so many trade lines was to prevent foreigners from turning the people’s dependence on firearms into political and economic weakness.

Middlemen accumulated earnings and allies by trafficking guns to people isolated from the Euro-American arms market. Generally the middlemen came from small communities unable to compete independently with the most formidable tribes and confederacies. They made themselves valuable to these groups by delivering munitions and other goods to them from remote colonial sources.

On the return trip they carried indigenous commodities such as beaver pelts, otter pelts, slaves, horses, and bison robes for trade to Euro-American merchants, which began the cycle anew. Serving as the conduit between distant markets enabled the middlemen to build political and economic alliances with peoples at both ends of the transaction, thus giving them influence disproportionate to their numbers and military strength. This role also gave middlemen a cut of the profits, thereby enhancing their own ability to purchase foreign weaponry. Indian polities used commercial and military leverage to shape these relationships to their advantage. They threatened gun dealers that they would take their trade elsewhere unless they received gunsmithing, powder, and shot at reduced prices or even for free. They also required gunrunners who did business with them not to supply their rivals. Traders who bent to these demands often found themselves with customers so loyal that they could be trusted to repay large extensions of credit, even in the absence of a formal legal system to enforce these agreements. By contrast, traders who ignored the Indians’ conditions suffered a loss of business, at best, and sometimes the loss of their lives.

Indians never possessed the technological ability to manufacture guns and gunpowder, but attempts to prevent guns from being traded to them usually failed because the Indians made sure they had multiple sources of supply. At no point in time did any one colonial or imperial polity control enough of the continent or even one region to cut off Indians completely from guns, powder, and shot.

The widespread success of Indians at building and maintaining large arsenals of firearms reveals the extent of indigenous economic and political power, the limits of state authority, and the high degree of interdependence between Indians and Euro-Americans. This interdependence stemmed from Indians being the main suppliers to colonists of beaver pelts, otter furs, deerskins, and buffalo robes. The fur trade was central to the economy of nearly every colony in its opening decades.

Indians insisted on high-quality, low-cost firearms, gunpowder, shot, and gunsmithing services in exchange for their furs, though they also demanded other types of goods, especially woolen blankets, linen, shirts, metal tools, and liquor. But they could make do without cloth or tools if they had to, whereas guns and ammunition became a military necessity, a matter of life and death. The Indians’ Euro-American trade partners could either supply these wares or lose their Native customers and risk turning them into enemies.

The main concession of Euro-American governments and even major trade firms to Indian demands was to make gifts of guns, powder, shot, and gunsmithing a routine part of their diplomacy with Indians. Presents of these goods and services were so common that powerful Indian groups no longer had to pay for them to any significant degree. In the diplomatic gift economy, the quality, quantity, and timeliness of arms-related gifts became the symbols of the health of the relationship between giver and recipient. Price was taken out of the equation. The fact that Europeans delivered these presents in ritual settings structured by Indian customs of feasting, smoking, dancing, singing, and speeches, reflected the leverage Indians exercised over colonial states even as they needed European guns to defend themselves.

Their dependence on the technology of Europe did not translate into political subservience to particular empires, colonies, or nations. The lengthy condition of interdependence between Indians and Euro-Americans, and the Indians’ cultivation of multiple sources of supply beyond the control of any particular government, meant that indigenous peoples’ reliance on guns rarely made them captive to a single Euro-American state. Euro-American states were never able to exploit the Indians’ need for munitions to force them to cede their land or extradite their people to colonial jurisprudence. What those states could do with varying degrees of effectiveness was reduce, but rarely halt, the arms trade during periods of Indian-colonial warfare and thereby pressure enemy Indians to end their campaigns. Additionally, they could use their trading policies and gift diplomacy to influence Native people toward peace or war with other tribes or colonies and to deliver warriors to imperial military campaigns.

The Indians’ dependence on Euro-American weaponry did not make them tools of Euro-American governments. Euro-American polities, including the United States, always struggled to control the arms trade to Indians. In the founding years of colonies, when they were most vulnerable, and during periods of war with Indian peoples, Euro-American governments typically banned the sale of munitions to Indians, but usually to little effect. There were always traders who refused to honor such restrictions. Most alarming were examples of government officers and military men who turned to the black-market trade with Indians to line their own pockets. The arms trade to Indians was one of the prime examples of American “rogue colonialism,” in which colonists of all ranks pursued their own interests, often illegally, in opposition to the directives of central authorities and even against the interests of their neighbors.

The most common element in the sequential collapse of Indian military resistance to Euro-America was starvation and war-weariness stemming from the enemy’s scorched-earth tactics and killing of women, children, and the elderly. Another key factor was their harassment at the hands of other indigenous people who allied with Euro-Americans in the hopes of dealing a blow against their intertribal rivals and gaining supplies of munitions. More generally, Indians lost a numbers game, with their own ranks thinned by repeated bouts of epidemic disease and warfare, while Euro-Americans were strengthened by centuries of high birthrates and large-scale migrations to North America. To the extent that Indians held back this tide, it was in no small part because of, not in spite of, their adoption of firearms.

The Atlantic coast was the strongest base of the arms trade, and in broad strokes the gun frontier tended to move from east to west, but firearms arrived in Indian country from multiple directions along the twisting routes of rivers and ancient pathways. Throughout the 18th century, munitions flowed south from the Hudson’s Bay Company’s base in Canada into the northern Plains and Rocky Mountain regions. Weapons unloaded at French ports on the Gulf of Mexico circulated north, west, and sometimes east, often for hundreds of miles.

In a striking reversal of the east-to-west movement associated with the traditional American frontier, during the late 18th and early 19th centuries shipboard traders sold guns to indigenous people along the Pacific Northwest coast, who then carried these weapons eastward to Natives of the interior. Most Indians in the continental Southwest did not possess guns in significant numbers until the mid- to late 19th century, because Spanish policies and economic underdevelopment stifled the arms trade out of colonial New Mexico and Texas. Nevertheless, munitions reached the hands of the Comanches of the southern Plains through their eastern neighbors, the Wichitas of the Arkansas and Red River Valleys, who in turn had obtained them from French, British, and American sources based along the Mississippi.

This history of the movement of guns to Native Americans across the continent over the span of more than two centuries demonstrates how indigenous people used guns to reshape their world. This development was one of the essential features of their history with colonialism. Some Indians used guns to accumulate wealth, power, and honors to become ascendant.

Their stories offer an important counterpoint to the long-standing assumption that Indians generally plunged into a downward trajectory of death, land loss, and impoverishment at contact with Euro-Americans. They also challenge the notion that a disadvantage in arms somehow accounts for indigenous people’s ultimate subjugation to Euro-American authority. Native economic power, business sense, and political savvy ensured that was not the case.

However, it is equally critical to acknowledge that gun-toting Indian groups nearly always arose at the expense of other Natives, sometimes many others. Just as the story of the United States should not be told simply as the triumphant rise of a democratic nation-state of liberty-loving people, neither should the advantages Indians wrested from colonialism overshadow the costs.

Indians became so well armed that they were capable of inflicting incredible damage with surprise attacks on colonial settler societies. Most of them were so resourceful in preparing for war and cultivating multiple supply lines that colonial authorities could not disarm them simply by declaring bans on the weapons trade. Yet if arms embargos could not starve Indians of supplies, they could induce hunger for them. These boycotts gave colonial authorities, particularly the English, an influential, albeit not a decisive, weapon to use, but only if they could manage to control their own traders. The problem, of course, was that colony governments exercised weak authority over their own people and none at all over those of neighboring colonies. Given these conditions, the colonists’ most powerful weapon, aside from their artillery, was the lure of arms to recruit Indian warriors to fight for their side.

Overall, long-term Indian success in war against colonial states required stockpiles of arms, dependable avenues of supply, regional alliances of tribes to prevent the colonial strategy of divide and conquer, and forts at remote locations where colonial forces could not haul their artillery guns.

There is no way to calculate the exact number of Indians killed and captured during the gun violence of the late 17th and 18th centuries, but the figure certainly ran into the high tens of thousands, and perhaps hundreds of thousands, of people. To make matters worse, smallpox stalked the routes of slave raiding and gunrunning, preying on populations that were malnourished and traumatized by the predatory violence and clustered into defensive fortifications, which rendered them more vulnerable to communicable diseases. The overall effect was a population decline of some two-thirds between 1685 and 1730, from an estimated 199,000 people to some 67,000.

Thundersticks

Many Native North Americans believed that thunder was produced by the flapping wings of a giant bird streaking across the sky. That same Thunderbird shot lightning bolts from its eyes, which then crystalized on the ground into such forms as mica and ancient stone arrowheads. Calling guns Thundersticks or Metal-Lightning was a way of saying that they embodied the awesomeness of the Thunderbird. Clearly these peoples associated the noise, flash, smoke, and lethality of guns with some of the most fearsome natural elements and their spirits.

During thunder and lightning storms, southeastern Indians fired their guns toward the sky to show the Thunderbird “that they were warriors, and not afraid to die in any shape; much less afraid of that threatening noise.” They were also demonstrating that they wielded the power of the elements no less than the spirits of the upper world.

The role of Native Americans in the slave trade

The arming of the Indian Southeast took place through the trade in Indian slaves.

Indian captors and their colonial customers robbed as many as 50,000 people of their freedom during the heyday of this enterprise from 1660 to 1720, and killed many more along the way.

The scale of this commerce and the devastation it unleashed were far greater than in the trade of Native people for arms that was developing in the Great Lakes and upper Mississippi River Valley during the same period. Southeastern slave raids wiped out numerous communities and dislocated others from the Virginia-Carolina Piedmont, deep into Florida, and all the way west to the Mississippi River Valley.

South Carolina’s profits from the labor of Indian slaves and their resale to the West Indies produced much of the seed money for the development of the colony and emptied indigenous people from territory that would later host plantations run on the toil of African slaves.

The danger of slave raiders forced survivors to band together in defensive confederacies and take up slave raiding themselves, for one was either an aggressor or a victim in this terrifying new world. This was a new type of warfare, focused less on satisfying revenge or obtaining captives for adoption than on acquiring people to sell. The southeastern slave trade was fundamentally a trade of humans for munitions in which marauding Indian slavers grew ever more formidable by selling captives for arms, while previous victims became raiders themselves in order to obtain guns for protection and predation. The slave trade and the gun frontier marched hand in hand.

The kind of cascading gun violence that marred the Southeast during this period has obvious parallels to the Northeast and Great Lakes regions between the 1630s and early 1700s. Competition for captives (albeit largely for slaves instead of future adoptees) and control of European markets galvanized intertribal arms races in the Southeast as they had in the North. Rivalries between English Virginia, English South Carolina, Spanish Florida, and French Louisiana involved using trade and gifts of military hardware to bid for Indian trade partners and allies, as was the case among New France, New Netherland, and the various English colonies of the Northeast. Most of the Southeast colonies proved just as incapable of policing gunrunners as their northern counterparts had been, and few of them made much of an effort in the first place. The raids of indigenous groups boasting a temporary advantage in arms forced their enemies to seek political alliances and trading relationships to build up their own arsenals. As in the North, within a few decades guns were more or less evenly distributed throughout the region, which ended runs of dominance of predatory raiders in favor of a balance of power maintained by the fur/deerskin trade and diplomacy with rival European powers. In many critical respects, then, the gun frontier looked similar in the northern and southern woodlands.

Human actors connect the stories of these regions, too, demonstrating the long reach of colonial violence in Indian country. The Chichimecos, as some Indians and the Spanish called them, first appear in the records of Virginia during the 1650s under the name of Rickahockans. By the 1670s the English referred to them as Westos. These Rickahockans/Westos were none other than the Eries, who had retreated from the Great Lakes to the falls of the James River to escape Iroquois gunmen. They then continued their migration into what is now South Carolina. Within a few years their neighbors included a portion of the Savannahs (or Shawnees) from the Ohio Valley, who had left the region seeking European trade and escape from indigenous slave raids out of Virginia and then Five Nations attacks, before settling on the southern river to which they gave their name. Given that the Westos were an Iroquoian-speaking people (though not a member of the Iroquois League), in all likelihood they had their own “mourning war” tradition of adopting enemy captives into their population. But that was not the primary motive of their raiding in the Southeast. Their most compelling reason to relocate this far south was the opportunity to arm themselves by trading people, deerskins, and furs to the colonies of Virginia and then South Carolina. The Westos and Savannahs knew through hard experience that guns were the key to their defense as long as rival groups had access to the colonial weapons market. Seizing captives from southeastern tribes to exchange for arms and adopt into their ranks was their way of ensuring that no one would overawe their people ever again. In this they became the same kind of menacing force that Iroquois gunners had formerly been to them. And they too, like the Five Nations, forced one group after another to build up their munitions and turn these weapons against others, part of the thunderous storm rumbling across Indian country.

The Eries’ relationship with Virginia did not get off to a good start, but eventually the parties established a mutually beneficial exchange of slaves and deerskins for munitions. When Virginia learned in 1656 that 700 strangers called Rickahockans (or Richahecrians) had suddenly appeared at the falls of the James River, its initial response was to attack them. Little did Virginia know that this group was battle tested and, judging from the Rickahockans’ victory in the subsequent fight, perhaps better armed than one might have expected, possibly via the Susquehannocks and that tribe’s weapons trade with Maryland and New Sweden. Thinking better of entering yet another Indian war with such a formidable opponent, Virginia sued for peace and by 1658 had authorized an open trade in guns, powder, and shot with any “friendly Indians.” A year later there were reports out of St. Augustine of “northern” Indians wielding English muskets, sometimes accompanied by Englishmen, terrorizing the missions of Guale, the northernmost province of Spanish Florida on what is now the Georgia coast. These raiders were certainly the Eries, seeking captives to sell as slaves to Chesapeake tobacco planters and probably also to buttress their population, thinned by war with the Iroquois. Virginians even began referring to the Eries by the name “Westos.”

The Westos’ prey were the bow-and-arrow Indians of the Florida missions, Carolina coast, and adjacent Piedmont. Spain’s southeastern mission system was extensive, consisting of 35 stations along what today is the shoreline of Georgia and northeast Florida and across the Florida panhandle. Yet it was also vulnerable. One of the principles of the missions was that the Spanish would provide military support and European goods to uphold the authority of local chiefs. However, that protection and trade did not include firearms in any appreciable volume, not because the Spanish refused to trade guns to Indians, but because the Spanish crown invested few resources in the marginal Florida colony and tightly restricted its economy. Additionally, in the 1670s the number of Spanish soldiers stood at just a few hundred men out of a Spanish population of less than 1,000. Such a small, concentrated, poorly armed population was precisely what raiders wanted. Indians elsewhere in the region were also relatively easy targets. The South Carolina coast was inhabited by various Siouan-speaking communities of just a few hundred people each, weakened by successive outbreaks of epidemic disease and war. Even the large, town-dwelling Muskogean-speaking groups west and south of the Savannah River enticed Westo slavers because they were easily reached and defended only by bowmen. The Westos viewed them all as potential slaves.

The Westos’ advantage in arms enabled them to devastate these populations. In the fall of 1659, news arrived in St. Augustine from the interior province of Apalachee that villages 80 leagues to the north had suffered “much damage” by an army of up to 1,000 men consisting of “some striped [painted] Indians, and with them white people, and that they brought some firearms and among them two campaign pieces [or artillery guns].” Coastal Guale was next, suffering an invasion in June 1661 by “a great number of Indians” estimated at 2,000 men, “who said they were Chichimecos and among them some Englishmen with firearms.”

It was probably no coincidence that half a dozen or so villages of a previously unknown group called the Yamasees appeared just north of the Guale missions shortly after these reports. Though the origin of the Yamasees is cloudy, they seem to have been composed of various peoples displaced by Westo gunmen. Soon the Yamasees would move directly into the mission districts and begin contributing to Spanish labor drafts in the hope of receiving protection. Their retreat was part of a larger diaspora of peoples throughout the Piedmont and lower Southeast, including Tutelos, Saponis, Yuchis, and Coushattas, seeking refuge from the slavers.

Other Piedmont nations responded to the Westos by acquiring firearms from the Virginia trade forts and especially from pack trains that were probing deeper into Indian country.

By the 1660s the Occaneechis inhabiting the confluence of the Roanoke and Dan Rivers along the major Piedmont trade path had established themselves as middlemen between Virginia gunrunners and indigenous slave raiders and deerskin hunters, a position they jealously guarded. Their town became known as “the mart for all the Indians for at least 500 miles.” The Tuscaroras of the North Carolina coastal plain and Piedmont carved out a similar niche for themselves, transforming one of their towns into “a place of great Indian trade and commerce.” By 1670 firearms had become so common in the region that Monacans by the falls of the James and the Saponis of Otter Creek (by modern Lynchburg, Virginia) greeted English traders with celebratory “volleys of shot” and other signs that “guns, powder, and shot, etc., are commodities they will greedily barter for.” If the Westos wanted to maintain their superiority in arms over their neighbors, they had to find a new home. Consequently, less than a decade after arriving in Virginia, the Westos had relocated to the Savannah River, the modern border of South Carolina and Georgia, within easier striking distance of their intended victims.

Even as the South Carolina–Westo alliance was thriving, elements within the colony were working to undermine it. The eight lords proprietor, who technically governed Carolina from England, claimed a monopoly on the interior Indian trade, including that with the Westos, but colonists, even the proprietors’ own appointees, had little respect for their authority. A faction of councilors soon to be known as the “Goose Creek Men,” led by James Moore, Maurice Mathews, and Arthur Middleton, began encouraging the so-called Settlement Indians and Savannahs to raid the Westos for captives to sell into slavery. When the Westos retaliated, as predicted, the Goose Creek Men used it as an excuse to have the assembly declare war, despite the proprietors’ orders to stop. In this the Goose Creek Men simultaneously dealt a severe blow to proprietary authority and weakened the most threatening indigenous power in the region. Details of the war are murky, but by 1682 the Westos were said to be “ruined” and “not 50 left alive and those Divided.”

With the Westos shattered, the Goose Creek Men partnered with Indians near and far to expand the hunt for slaves. The Savannahs (or Shawnees) were the first group to step into the Westo vacuum, relocating to the Savannah River and then raiding interior peoples such as the Cherokees. The Yamasees followed, abandoning the Spanish missions for territory just east of the Savannahs, after concluding that it was better to go slaving for arms than to remain the prey of armed slavers. Even Indians who thought of themselves as allies of Carolina were vulnerable to slave raiding. Shortly after the Westo War, the colony used a trumped-up excuse to declare war on the Winyaw community of Settlement Indians, then successfully urged the Savannahs to conquer and enslave them.

The slave traders’ control of the assembly gave them leeway to pursue their criminal interests under the color of law. The proprietors accused Goose Creek Men in the legislature of banning the sale of arms to Indians only to apply the rule selectively against their commercial rivals while they “broke it themselves for their private advantage and escaped the penalty.” Worse yet, the slavers provoked wars with Indians not as a matter of public good but “as best suited their private advantage in trade.” Indians cooperated in these schemes, the proprietors charged, because “you induce them through [their] Covetousness of your guns, powder, and shot and other European commodities to make war upon their neighbors, to ravish the wife from the husband, kill the father to get the child and to burn and destroy the habitations of these poor people.” It could only have deepened the proprietors’ sense of scandal that much of their colony’s importation of guns and export of Indian slaves appears to have flowed through pirates. Coastal Carolinians, including authorities, thought of pirates less as terrors than as partners in their black-market trade. From 3,000 miles away the proprietors were toothless, given that the colony’s lawmakers and lawbreakers were one and the same.

In the late 17th and 18th centuries, slavers and gunrunners marched together deeper into the continent, their power channeled by the political reorganization of their home societies. Within South Carolina, the Goose Creek Men had effectually neutered the lords proprietor, taken over the Carolina government, and thrown open the Indian trade to anyone connected with their faction. Commerce in slaves and deerskins from the Indians, and munitions and other manufactured goods from Europe, became the key to riches at a time when the colony was still searching for a cash crop. For their part, Indians in an area extending for hundreds of miles were coming to the realization that unless they did business with South Carolina, they would lack the weapons to defend themselves from the growing ranks of marauders.

Autonomous communities, their populations thinned by epidemic disease and foreign attacks, began to confederate to protect themselves from the slavers and to field their own armies to go slaving. By the early eighteenth century, two of the most significant of these coalitions were known as the Catawbas and the Creeks. These groups incorporated people from diverse linguistic and cultural backgrounds. Their cohesion was still in its tentative early stages as the seventeenth century drew to a close, contributing to the sense of regional upheaval.

Competition between militant slavers meant that even groups raiding for Carolina might themselves become captives. The Westos had been merely the first group to fall victim to this trap. The Savannahs were next. In the early 1700s Catawbas attacked the Savannahs’ main town, killing a reported 450 people. The survivors retreated to the Susquehanna River Valley of Pennsylvania, then sent warriors on revenge raids against Carolina’s Indian protectorates. The colony encouraged the Catawbas to retaliate by giving them a gift of fifty guns, 1,000 flints, 200 pounds of gunpowder, and 400 pounds of bullets. Any Catawba who brought in a Savannah scalp or captive could keep the gun without charge, an arrangement premised on the assumption that repeat Indian customers would hold true to the bargain. Carolina was developing a pattern of turning on its friends as soon as it was profitable, but Indians facing the threat of enslavement could not resist the pull of its arms market.

The imperial politics of Europe also shaped these American dynamics. South Carolinians had always been driven by profits to sponsor slave raids, but they got an additional spur in 1688 with England’s Glorious Revolution and the ascension of William and Mary to the throne. By securing England’s Protestant succession, this event inaugurated more than a century of on-again, off-again warfare between Britain and the Catholic powers of France and Spain. In turn, Charles Town gained political cover to enslave the Indian allies of England’s imperial enemies. Queen Anne’s War (or the War of the Spanish Succession), stretching between 1702 and 1713, was especially critical in this respect. It legitimized and incentivized South Carolina’s long-running, de facto state of war with Florida. Furthermore, it permitted the slave traders to justify slave raids against Indians far to the west who had become associated with the young French colony of Louisiana.

The Choctaws’ partnership with the French gave Carolina slave merchants a convenient excuse to direct slave raids against them. When authorities in London demanded an explanation, the slavers easily maintained that they acted in the interest of the empire. After all, they expounded, South Carolina was “a frontier, both against the French and Spaniards,” and enslaving the Indian allies of those powers “serves to lessen their numbers before the French can arm them.”

Like a hurricane feeding off the warm waters and winds of the Caribbean, the slave raiders gathered political capital, manpower, and weapons, and then slammed into Florida with irresistible force, pummeling it mercilessly from the mid-1680s into the early 1700s until they had practically emptied the entire peninsula of indigenous people.

There was no mistaking that this slave trade was primarily an exchange of people for guns. A fresh opportunity to put those guns to use arose when the start of Queen Anne’s War coincided with the appointment of none other than the slave trader James Moore to the governorship of Carolina, after the sitting governor died. Moore saw his term as the Goose Creek Men’s chance to deal a fatal blow against the Spanish while accumulating a windfall in slaving profits.

For Carolina’s Indian trade partners, it was an opportunity to build up their musketry. Between 1703 and 1705, armies of up to 1,000 Yamasee, Creek, and Cherokee gunmen marched against the missions of Apalachee and Timucua, carrying away upward of 1,300 captives in just one expedition. The only way mission Indians escaped these attacks alive and unshackled was to “agree” to relocate to the Savannah River under the supervision of the Ochese Creeks. By the time this campaign was over, Spanish Florida and its once extensive mission system were reduced to the fort at St. Augustine, small indigenous villages within range of its guns, and the garrison of Pensacola. These losses, combined with deaths from a vicious smallpox epidemic beginning in 1696, which tore through the Southeast along the routes of slaving and the arms trade, meant that by 1711 slavers had to extend their raids all the way to the Florida Keys to find populations large enough to make the effort worthwhile.

The slavers’ superiority in arms was the critical factor in their conquest of the missions. Florida officials complained endlessly that enemy raiders were “being aided by the English with guns, ammunition, cutlasses, and pistols” and “have become so expert in the handling of arms that they use them as if they were born in this service.” Mission Indians were no match. To be sure, some military hardware reached the Apalachees through a black-market trade with Cuban fishermen working Florida’s Gulf Coast and sailors docked at St. Augustine, and a handful of warriors received Spanish weapons in recognition of exemplary military service. However, the overall number of guns among the mission Indians was small and their effectiveness was diminished by shortages of powder and shot.

Florida governor Joseph de Zúñiga’s report on the fall of Apalachee concluded that “for lack of munitions, my people were defeated.” Indians agreed. When a band of Apalachees fled to Louisiana in the wake of the 1704 attacks, they explained that the Spanish “did not give them any guns at all but that the French gave them to all their allies.” It had become a matter of life and death for Indians in the slaving zone to have a European partner willing and able to arm them.

The strikes against Florida’s interior missions began a phase of significant growth in the number of militant slavers and the geographic reach of their attacks. Indeed, these developments were reciprocal, for as more communities acquired guns for defense and slaving, slavers directed their attacks farther west and south against people with weak or nonexistent armaments and became even better armed in the process. Initially the Cherokees suffered slave raids by the Savannahs, Catawbas, and Esaws, but once the Cherokees began trading with Carolina in the late 1690s that became a more dangerous proposition. Muskogean-speaking communities on the Coosa and Tallapoosa Rivers, which would later become known as Upper Creeks, were also hosting Carolina traders by at least 1704. “The English were in those nations every day,” Louisiana officials brooded, “and they take pack horses burdened with clothing, guns, gunpowder, shot, and a variety of other goods … the greatest traffic between the English and the savages is the trade of slaves … each person being traded for a gun.” By 1715 most Tallapoosa and Alabama warriors wielded firearms and the Alabamas were said to have a warehouse containing 10,000 pounds of gunpowder. Slave raiders were wise to bypass communities with such weaponry in favor of more vulnerable targets deeper in the interior, far from the gun frontier.

With the destruction of the Florida missions by Yamasee, Creek, and English slavers, the gravitational center of slaving shifted west, driven by the fears and ambitions of the Chickasaws of what is now northern Mississippi. For years the Chickasaws had suffered intermittent attacks by gunmen from the Iroquois, Great Lakes tribes, and southeastern slavers without the ability to respond in kind because those same nations blocked their access to eastern arms markets. However, eventually the gunrunners found their way to the Chickasaws.

Carolina pack trains had reached the Chickasaws as early as 1686, and by the early to mid-1690s their visits were becoming routine, much to the chagrin of neighboring peoples.

By the early 1700s, the Chickasaws had killed more than 1,800 Choctaws and enslaved some 500 over the previous decade, and the problem was only growing worse. In 1706 a Chickasaw army said to have numbered as many as 4,000 men (almost certainly an exaggeration unless this force included many foreign allies) attacked the Choctaws and seized more than 300 women and children. Underlying the ferocity of these campaigns was the Chickasaw determination “never to return” to the days when they were defenseless against enemy gunmen.

The ringleader of the 1706 raids on the Choctaws recalled advancing the Chickasaws 300 muskets in exchange for the promise of just fifteen slaves. Almost overnight the Chickasaws became capable of marshaling an army of gunmen.

Louisiana’s founder, Pierre Le Moyne d’Iberville, estimated that 700 to 800 out of 2,000 Chickasaw fighting men possessed firearms and that they killed three Choctaws for every one they enslaved. Their raids, combined with those of the Creeks, threw the Gulf Coast and lower Mississippi River Valley into turmoil, leaving towns destroyed, hundreds of people killed and carried into captivity, and the survivors fleeing their home territories to congregate near the French. But nowhere was safe. By the early eighteenth century, slavers sometimes ranged as far as 150 miles west of the Mississippi River.

Louisiana’s relations with area Indians hinged on arming them against this threat. The most important group in this respect was the Choctaws of the Pearl, Leaf, Pascagoula, and Tombigbee River watersheds, just south of Chickasaw territory. Unlike small Gulf Coast nations, the Choctaws, with more than 1,000 households and an estimated 4,000 warriors, had more than enough population to contend with the Chickasaws, who were less than half their number. What they needed were muskets, powder, and shot, “the most precious merchandise that there is for them,” in the judgment of Diron d’Artaguette, Louisiana’s commissary general. Yet the French were incapable of outdealing English gunrunners. Louisiana’s supply lines from Europe and Canada were just too long, its support from the crown too scanty, and its economy and population too small, to compete on the basis of free trade.

Louisiana then offered payment of a gun for every enemy scalp and 400 livres in goods for enemy captives, an incentive program that had produced 400 scalps and a hundred slaves by 1723. The cumulative effect of these measures was to give Louisiana’s Indian allies a fighting chance against foreign raiders, a point of which the French never tired of reminding them.

The slaves-for-guns trade was inherently unstable amid its remarkable growth because the spread of firearms made raids ever more costly to the aggressors while continuing to increase indigenous demand for munitions. The trade in deerskins, which always operated alongside the slave trade, was an uncertain fallback because deerskins had far less purchasing power than slaves. As colonial traders pressured Indian customers to make good on their debts, sometimes even threatening them with enslavement, tensions mounted. At the same time, English and French settlements encroached on Indian communities already bitter over their losses to the slave trade and epidemic disease. The mix proved explosive, and between 1710 and 1730 Indians throughout the Southeast began rising up against the colonies.

The Tuscaroras of the Carolina coastal plain and Piedmont were the first to rise after years of serving as both perpetrators and victims of the slave trade. Though the immediate spark of this war was North Carolina’s founding of a Swiss-Palatine settlement on the lower Trent and Neuse Rivers, followed by land surveys auguring further expansion into Tuscarora territory, the Tuscaroras’ fear of land loss was indelibly tied to their fading economic power and the risk of enslavement. Tuscarora returns on the slave trade had been declining for years as the region’s other Indians grew better armed and Virginia began importing ever greater numbers of African slaves. As the Tuscaroras brought in fewer Indian captives, colonial traders began dealing ever more sharply to collect on debts the Tuscaroras had accumulated by buying European goods on credit. Tuscaroras knew, and traders probably threatened, that if these debts remained unpaid, colonists would not hesitate to enslave their people and seize their land. North Carolina’s encroachment on their territory suggested that the time was nigh. Unwilling to brook these conditions any longer, the southern Tuscaroras and neighboring Coree Indians began attacking colonial settlements along the Neuse on September 22, 1711, killing 130 people in a matter of days and sending the survivors in a panicked flight to the safety of New Bern.

The southern Tuscaroras had built up a substantial arsenal before their attacks and then resourcefully exploited every avenue of supply as the war continued. Virginia governor Alexander Spotswood understood that the Tuscaroras “were better provided with ammunition than we ourselves” when the conflict began.

Tuscarora prisoners of the English confessed that the Senecas had counseled their people not to worry about running out of ammunition because they “would come twice a year, and furnish them with it.” What the Tuscaroras could not obtain from such outlets, they robbed from Virginia pack trains heading out to western nations like the Cherokees. The Tuscaroras had the means to fight a long campaign. During the war the Tuscaroras constructed several impressive forts that maximized their firepower. On a high bluff above Catechna Creek was “Hancock’s Fort,” so-called after the Anglicized name of its teetha (or chief). Surrounded by a trench and an embankment lined with sharp river cane, the fort’s thick log palisade contained upper and lower firing ports and bastions at the corners mounted with “some great guns,” probably meaning swivel guns or light artillery pieces. Inside was “a great deal of powder, and 300 men.” Another nearby fort, Nooherooka, was even more formidable.

“The enemy says it was a runaway negro who taught them to fortify thus,” seethed John Barnwell, commander of the South Carolina expedition against the Tuscaroras. The Tuscaroras’ use of this slave, as in their employment of firearms, was another stinging example of them appropriating the colonists’ strengths to mount their own resistance to colonialism. Over the course of two years of fighting, both of these forts fell to large armies of South and North Carolina militia and hundreds of Indian allies, not because the Tuscaroras lacked munitions but because they were unaccustomed to European siege warfare.

Hundreds of Indians fought alongside the English in this war, less out of enmity for the Tuscaroras than with an eye toward obtaining slaves to pay off their debts.

Indeed, the roster of Indians in this force reads like a roll call of slaving nations, including Yamasees, Apalachees, Cherokees, and Catawbas. Though they returned home triumphantly with dozens, even hundreds, of Tuscarora captives, they could not escape the haunting realization that they shared many of the same problems that had driven their victims to war.

The Carolina traders’ rough treatment of Indian debtors, who they mistakenly believed had become their pawns, was the main grievance behind the subsequent Yamasee War. As Indians fell behind on their payments, traders began confiscating their property and even seizing members of their communities as slaves.

Such aggression, combined with mounting cases of traders perpetrating sexual assaults, drunken brawls, and property thefts, increasingly made traders intolerable to the people with whom they dealt. Amid this acrimony Carolina made an ill-timed decision to take a census of its Indian allies, which the Indians thought to be in preparation for their enslavement. It took only a matter of weeks for Indians who did business with Carolina—Yamasees, Lower Creeks, Cherokees, and Catawbas—to kill nearly all of the one hundred traders in their towns and begin attacking outlying English settlements.

The Cherokees were the first to break, less out of fear of attack by the English than out of a need for munitions to fend off raids by the Iroquois and other indigenous enemies. To that end, in December 1715 they negotiated a peace in which Carolina restored trade and they took up arms against the Creeks. This decision, followed by a Cherokee slaughter of a Creek political delegation, inaugurated 40 years of warfare between the nations and also opened an unprecedented flow of Carolinian arms into Cherokee country and a decades-long Cherokee-British alliance. Carolina promptly sent the Cherokees 200 muskets and ammunition to keep up the fight, followed in July 1716 by a present of 300 guns, 900 pounds of powder, and 750 pounds of shot. It also redressed long-standing Indian complaints about trader abuses and high prices by forming an oversight body called the Commissioners of the Indian Trade.

Each colonial power wooed the Creeks as if they carried a royal dowry. The Spanish and French in particular, knowing that they could never compete with English trade, bent over backward to conform to Indian protocol and showered the Creeks with gifts to the best of their ability. South Carolina countered with a pledge not to settle south of the Savannah River, though that promise was broken in spirit with the founding of Georgia in 1733. Yet even as the Creeks prohibited the English from building forts in their territory, they permitted the French to build Fort Toulouse on their western boundary at the headwaters of the Alabama River, and the Spanish to open Fort San Marcos on Apalachee Bay. Through this arrangement the Creeks were assured that no single European nation could dictate to them by threatening to sever the trade.

Even as far west as the Mississippi River Valley, Indians’ decisions about whether and how to resist colonial expansion had become deeply influenced by the strength of their military stockpiles and supply lines and those of their indigenous enemies. The Natchez of the lower Mississippi River Valley had endured a decade of French encroachment and violence when, on November 29, 1729, they launched a surprise attack on Fort Rosalie and its surrounding settlement, killing at least 238 French and capturing some 300 African slaves and 50 colonists. They were prepared for a drawn-out conflict, having amassed a “great deal” of powder and shot through their trade with the English via the Chickasaws, to which they added plunder from Fort Rosalie and a convoy of four French pirogues (supply boats) they had ambushed along the Mississippi River. The Chickasaw-English connection promised to keep the Natchez armed throughout this conflict. Additionally, the Natchez boasted two palisaded forts along St. Catherine Creek near their Grand Village, replete with bastions and loopholes. Atop they mounted cannons seized from Fort Rosalie, which might have been manned by captive African slaves who had joined their resistance. The Natchez armament and these structures were capable of meeting all the force the French and their Indian allies could muster.

After the carnage of the slave wars and the wars of resistance, most Native people in the Southeast tried to avoid conflict with colonial powers in favor of a play-off political system and the deerskin trade.

A thriving deerskin trade partially filled the gap caused by the decline in Indian slaving after the Yamasee War. These developments might very well have been connected, as the elimination of so many thousands of Indian people through slaving, warfare, and related diseases opened up new habitat for deer, which likely produced an explosion of the deer population.

The number of deer skins exported out of the southeastern English and French colonies climbed from 53,000 per year between 1698 and 1715, to 177,500 a year between 1758 and 1759, to 400,000 a year in 1764. These skins had less purchasing power than slaves, but they enabled hunters to make ends meet. Guns from English traders cost ten skins in 1735 and sixteen skins in 1767, and three-fourths of a pint of gunpowder cost one skin in 1767. By comparison, the price of French goods in 1721 was set at twenty deerskins for a gun and two-thirds of a pound of powder or forty bullets for one skin. An Indian hunter trading thirty to sixty skins a year (as appears to have been typical) had more than enough to cover the costs of his arms while leaving extra for other goods.
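
To see how those prices translated into a hunter’s budget, here is a rough back-of-the-envelope sketch using only the figures quoted above (the 1767 English rates and the typical annual take of thirty to sixty skins); the assumption that a hunter bought a new gun and four measures of powder in a single year is purely illustrative, not a figure from the source:

    # Rough purchasing-power check using the deerskin prices quoted in the text.
    gun_cost_skins = 16        # English trade gun, 1767 price in deerskins
    powder_cost_skins = 1      # three-fourths of a pint of powder, 1767
    powder_purchases = 4       # illustrative assumption, not from the source

    for annual_skins in (30, 60):   # typical range of a hunter's yearly take
        arms_bill = gun_cost_skins + powder_purchases * powder_cost_skins
        surplus = annual_skins - arms_bill
        print(f"{annual_skins} skins/year leaves {surplus} skins for other goods")

Even at the low end of that range, a hunter replacing his gun outright in a given year would still have had skins left over for cloth, tools, and liquor, which is the sense in which the deerskin trade let him make ends meet.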

Indians also addressed the decline in slaving by extracting gifts of munitions and gunsmithing from colonies courting their allegiance, in what amounted to the second phase of the gun frontier. South Carolina’s public expenditure on Indian gifts climbed from 4 percent of the colony budget in 1716 to 7 percent in 1732. Carolina also rewarded Indians with arms for capturing runaway slaves and servants, paying out a gun and three blankets for every fugitive in the 1770s.

The relative calm—relative, that is, to the maelstrom of the slave trade—after the Tuscarora, Yamasee, and Natchez Wars, should not be romanticized. In all likelihood the reason the Indians stopped going slaving for Carolina was not that they saw the inhumanity in it or that they feared the slave merchants would double-cross them like the Westos, Shawnees, or Yamasees. Instead, the spread of firearms throughout Indian country had made this enterprise too dangerous.

There was yet another factor in the decline of the Indian slave trade, reflecting the sinister forces of colonialism at work. Colonial buyers shifted their preference in slaves from Indians to Africans. In 1716 only 67 Africans entered South Carolina. Within a decade Carolina was importing 1,700 Africans a year and in 1736 that figure climbed to over 3,000. Efficiencies in the transatlantic African slave trade were making those unfortunate souls cheaper and more available than ever before in the North American market.  These captives also came without the risk that their people an ocean away would rise against the colonies in which they toiled. In western Africa, the havoc unleashed by this trade became almost a mirror image of what had been wrought in the Indian Southeast for two generations. By the late 17th and 18th centuries, the slave trade in western Africa often was an exchange of humans for guns in which some indigenous polities faced the choice of either slaving for the market or becoming slaves sold in the market. This devil’s bargain had become a basic feature of colonialism throughout the Atlantic World.

The Iroquois

[ Iroquois domination of neighboring tribes through greater gun ownership, and their downfall once enemy tribes gained guns of their own, is a pattern that would be repeated across the entire United States for more than two centuries. ]

Certainly the Iroquois were astonished by the pyrotechnics of gunfire, but they also had more practical matters on their minds. Ever since the Mohawks, Oneidas, Onondagas, Cayugas, and Senecas of what is now upstate New York had formed their League sometime between the 14th and late 16th centuries, they had been at war with indigenous neighbors near and far. For most of this time the main purpose of these campaigns had been to seize captives for adoption (the fate of most women and children) or death by torture (the fate of adult men) to sustain the Iroquois. Such wars were probably responsible for the disappearance of large indigenous communities at the sites of modern Quebec and Montreal that had been visited by French explorer Jacques Cartier during his explorations of the Saint Lawrence River in the 1530s and 1540s. Seventy years later, when the French returned to the area to found a permanent colony, there was no trace of them.

As European fishermen, explorers, and then fur traders began to appear along the lower Saint Lawrence with greater regularity after the mid-sixteenth century, this warfare also began to focus on controlling access to European goods. The Iroquois appear to have enjoyed the upper hand in these conflicts initially, but with the founding of French Quebec in 1608 the balance of power began to shift to the League’s enemies, the Algonquins, Montagnais, and Hurons, because of their trade and military alliance with the French.

When Dutch flintlock muskets became available in the 1630s, League nations began trading for munitions with a fury. By the mid-17th century, this armament had enabled the Iroquois to transform themselves into the preeminent military power of the Northeast and Great Lakes regions as far west as the Mississippi River. Bands of their gunmen fanned out over this range to capture foreign women and children for adoption, sometimes followed by armies of several hundred and even a thousand men to crush the enemy once and for all.

The story goes that Europeans blasted their way into the North American woods, overawing Indians with their technological prowess. The Natives, fearful of getting shot, then abandoned their customary open-field clashes in favor of ambushes, to make themselves more difficult targets. The ironic result of the colonists’ superiority in arms, then, was the Indians’ so-called skulking way of war, which plagued Euro-American society throughout the colonial era.

But this obscures the fact that it was the threat of Iroquois, not colonial, gunmen that galvanized an arms race throughout the Native Northeast, involving new technologies, stratagems, and politics. By the mid- to late 17th century, arms traders had reached the Five Nations’ rivals in the Chesapeake, New England, and the Great Lakes, enabling them to answer the Iroquois musket for musket. In turn, gun violence erupted across this vast geographic zone. Indigenous people facing enemy gunmen avoided open-field battles because of the risk of getting shot, and abandoned customary wooden armor because it reduced a warrior’s mobility without protecting him against bullets and metal-edged weapons.

Sieges of fortified villages were on the rise because an invading force with an advantage in firearms and steel-cutting tools possessed the means to breach its enemy’s defenses. Indigenous people answered this threat by replacing their circular palisades with straight-wall fortifications that gave defensive gunmen clearer shots at attackers. Sometimes they even mounted cannons atop their bastions. Politically, their decision making increasingly focused on securing their people’s access to arms and directing arms away from their rivals. To these ends they entered multilateral alliances with shifting lineups of indigenous and colonial polities and even relocated their people closer to gun entrepôts. These innovations constituted a new epoch in Indian life.

The results were terrible, with intertribal wars and related outbreaks of epidemic diseases dramatically reducing the population of nearly every Native group in the region. Some groups were completely wiped out. In the long term, however, the growing balance of power, and recognition of the high cost of gun warfare, produced something of a détente. By the end of the century, people who expected their young men to prove themselves as warriors would have to look outside the region for victims among the poorly armed tribes of the continental interior. As they did, the gun frontier spread with them, leaving a trail of devastation that was becoming a signature of colonialism in indigenous North America.

Politically the 1620s and early 1630s witnessed a renewal of Iroquois warfare against the so-called French Indians (the Algonquins and Montagnais) of the Saint Lawrence River and the Mohicans of the Hudson River Valley. The Five Nations found themselves in a biological war as well. Between 1633 and 1634, smallpox tore through Indian communities along the New England coast and Connecticut River Valley and then up into Iroquoia. The Mohawks alone might have lost two-thirds of their population, with their absolute numbers dropping from an estimated 7,700 to 2,800 people. As the death toll mounted, the cries of mourners built into an irresistible call for the people’s warriors to raid their enemies for scalps and captives. Only then would the ghosts of the dead and the hearts of their survivors find peace.

Fortunately for the Iroquois, their Dutch trading partners were able and willing to supply them with Europe’s best firearms technology. The Dutch were not only Europe’s greatest manufacturing and trading nation, boasting supply lines of raw materials from the Baltic, Mediterranean, and Asia, but also the continent’s main producer and exporter of weapons of every sort, including shoulder arms. The Netherlands’ long war for independence from Spain (1569–1648) had stimulated its gun industry, while the demand for military wares elsewhere in Europe during the Thirty Years’ War (1618–1648) and subsequent conflicts sustained it into the early eighteenth century. By the time of New Netherland’s founding, the Dutch Republic was manufacturing an estimated 14,000 muskets annually, most of them for export, a figure that grew larger by the year. No other European nation came close to this production level until decades later. Furthermore, Dutch gunsmiths were introducing technological innovations to their weapons that made them even more attractive to Indian customers, the Iroquois foremost among them. By the 1660s it appears that the Dutch were manufacturing guns specifically for the Indian market, especially the Iroquois. These Indian trade muskets were lighter (about 7.5 pounds) and shorter (50 to 67.5 inches) than most European guns (which often weighed as much as 16 pounds and extended more than five feet in length) in order to facilitate use in the bush and long-distance travel.

The primary reason for this demand was that the gun was remarkably effective in Iroquois warfare, particularly as a first-stage weapon in ambush. Small parties of warriors would station themselves at places where enemy travelers were most vulnerable, such as river narrows, portages, bends in the road, or places where cliffs, tree stands, and swamps provided cover for the attackers and blocked the retreat of their targets. The goal in these assaults was to unleash one or two volleys, raise a bloodcurdling war cry, and then rush on the enemy for hand-to-hand combat with tomahawks and clubs. Such ambushes must have been common before the advent of firearms, but the new weapons encouraged the tactic.

Unlike arrows, which needed a clear path to their target, bullets could pass through the camouflage of tall grasses and even thickets without being diverted. Whereas arrows shot from long distances could be dodged, musket balls could not. The damage inflicted by a bullet wound was far greater than that of an arrow. Killing an enemy with an arrow shot required hitting a vital organ. For the most part, minor arrow injuries would heal with proper treatment, at which Native medical practitioners were masters. By contrast, when a lead ball struck its victim, it carried roughly six times more kinetic energy than an arrow, expanded to the size of a large fist, and left behind a medical disaster of shattered bone, mangled soft tissue, and internal and external bleeding many times greater than an arrow could cause. Even when the victim managed to survive the initial impact, there was a high risk of death by infection. At especially close range, gunners could load their weapons with small shot (or grape shot) consisting of several small lead balls instead of a single bullet. What this approach sacrificed in terms of accuracy and kinetic energy, it compensated for in the large, cloud-shaped area covered by the blast, which could injure and even kill more than one person at a time.

The Iroquois further displayed their confidence in guns by using them to hunt deer through the same ambush technique of lying in wait and firing at close range.

Iroquois hunters appreciated that a musket ball would drop a deer in its tracks, whereas an arrow wound might require pursuing the wounded game for long distances. The slow rate of reloading and firing a gun was not an issue because a hunter was not going to get the opportunity to fire more than once at a deer before it bounded away.

Seventeenth-century guns were often undependable at distances of more than 50 yards because of a variety of issues; these included the condition of the barrel (such as whether it was bent or dented or clogged with powder residue), the fit of the musket ball to the barrel (sometimes shooters used bullets of smaller caliber, causing them to brush along the inside of the barrel before exiting and thus sending them off-target), and whether the shooter had properly loaded the weapon (particularly the main charge). Yet long-range accuracy was not much of an issue in ambushes in which the unsuspecting enemy was usually just a stone’s throw away. Another challenge was that firearms required routine cleaning to prevent them from getting clogged with black powder, which reduced bullet velocity and ran the risk of the barrel bursting, with attendant injuries to the shooter such as burns and mangled fingers and hands. Tending to the maintenance of guns on the trail was difficult unless there were Indian villages, colonial settlements, or trade posts along the way where the warriors were welcome. In the mid-seventeenth century, such issues were probably of minor concern to Iroquois war parties because the raiders usually returned straight home after one or two engagements to deposit their captives, scalps, and plunder, and tend to other responsibilities. By the early eighteenth century, when they were often away for several months at a time on raids against distant peoples, warriors learned to make their own minor fixes and negotiated with colonial authorities to receive blacksmithing services at forts and villages along their route of travel.

It took only a few short years before firearms became a part of Iroquois rituals. By at least 1642 it was Mohawk ceremony to fire salutes at the coming and going of foreign delegates, a courtesy that surrounding nations promptly adopted as well. Volleys in honor of “the Sun” also marked the celebration of military victories. A minority of male burials began to contain grave goods of firearms, powder, shot, and flints for the spirit to carry on the journey to the afterworld. Though this practice never became widespread because the living needed the weaponry, its symbolism was poignant. Firearms had become fundamental to the operation of Iroquois society.

Dutch authorities realized the danger inherent in their arms trade to Indians, but there was little they could do about it because the economy and security of New Netherland depended on the Mohawks in particular, and the Iroquois in general. During the 1630s the colony exported as many as 15,000 furs in a typical year (mostly beaver pelts), and almost 30,000 in 1633, a disproportionate amount of which came from the Iroquois. There were only about 300 people in the colony at the time. Another incentive for the Dutch was that the Iroquois “gave everything they had” for firearms, reportedly paying 20 beaver pelts for a single weapon in the early days of this commerce. To put these figures in perspective, whereas muskets cost the Dutch about 12 guilders each, twenty beaver pelts could be sold in Europe for as much as 120 guilders.
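
To put that markup in plain numbers, here is a minimal sketch of the margin implied by the figures just quoted (a 12-guilder musket, an early price of 20 pelts per musket, and up to 120 guilders for those pelts in Europe); it ignores shipping and handling costs, which the source does not quantify:

    # Gross return per musket implied by the Dutch gun-for-pelt figures above.
    musket_cost_guilders = 12     # what a musket cost the Dutch, per the text
    pelts_per_musket = 20         # early price the Iroquois reportedly paid
    pelt_value_guilders = 120     # what those 20 pelts could fetch in Europe

    gross_return = pelt_value_guilders - musket_cost_guilders
    multiple = pelt_value_guilders / musket_cost_guilders
    print(f"{pelts_per_musket} pelts worth {pelt_value_guilders} guilders against a "
          f"{musket_cost_guilders}-guilder musket: {gross_return} guilders gross, "
          f"{multiple:.0f}x the outlay")

On those terms a single musket could return roughly ten times its cost before expenses, which goes a long way toward explaining why Dutch traders ignored official misgivings about arming the Iroquois.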

Iroquois military might and commercial leverage meant that their customs shaped trade and diplomacy with the Dutch. The Iroquois expectation was for the Dutch to keep the price of trade goods low regardless of market conditions and for bartering to be preceded by a series of indigenous protocols. To the Iroquois, trade was not an impersonal business transaction in which one side tried to extract maximum profit from the other. They likened commerce to family members meeting each other’s needs out of affection and the pursuit of mutual well-being. It followed that political conferences began with an exchange of gifts between leaders, a historic recounting of the two peoples’ relationship, feasting and smoking together, and addressing each other as metaphorical kin, all in the Mohawk language. Hard-driving, time-conscious Dutch businessmen and officers would have preferred to skip such ceremony but realized they had little choice. Their concessions included accepting that politics with the Iroquois “must be carried on chiefly by means of gunpowder.” In 1655 Dutch officers presented the Mohawks with a gift of 25 pounds of powder, followed in 1659 by another gift of 75 pounds of powder and 100 pounds of lead. The latter came in response to Iroquois complaints that the Dutch practice of charging them for gun repairs and making them wait too long while the work was done was “unbrotherly.” By 1660 Iroquois spokesmen had raised their demands to include the Dutch outfitting League warriors with free powder and lead in times of war.

From the late 1630s well into the 1650s, the Iroquois put Dutch firearms to use in ambushes up and down the Saint Lawrence and Ottawa Rivers connecting New France and Huronia. The usual pattern was for Iroquois armies to break into bands of 10 to 50 men and take positions at various points of ambush along the rivers, sometimes on both sides. When enemy boats passed by, or canoeists unloaded at portages, hidden Iroquois gunners would open fire until they had driven their victims to shore, where they would set upon them with hatchets and clubs, killing some and capturing others.  In addition to captives, these raids netted the Iroquois plunder in furs that they could then trade to the Dutch for more firearms.

French Jesuits, from their close vantage living daily alongside the Hurons, were certain that Iroquois superiority in firearms was what made these assaults so lethal. Even when the Hurons, Algonquins, and Montagnais carried guns, Five Nations attackers seemed to possess twice as many.

By the early 1640s, Iroquois attacks had nearly choked off the French fur trade to the point that some Frenchmen began lobbying for an invasion of New Netherland to punish Dutch gun merchants.

Throughout the late 1630s and early 1640s, the Hurons redesigned their forts to give their meager force of gunmen a fighting chance against an Iroquois invasion that seemed to grow more imminent by the day. The village of Ossossane erected a squared palisade with bastions at opposite corners to permit clear shots along two entire lengths of the walls. Other communities followed with diamond-shaped fortifications. The reason, as Father Jean de Brébeuf put it, was that “we have told them … henceforth, they should make their forts square, and arrange their stakes in straight lines; and that, by means of four little towers at the four corners, four Frenchmen might easily with their arquebuses or muskets defend a whole village.”

Beginning in the summer of 1648 and lasting into late 1649, Iroquois armies of up to 1,000 men invaded Huronia repeatedly, overrunning the forts, torching the communities, and killing and capturing thousands of people. Battered, demoralized, and starving, the remaining Hurons scattered in all directions. Some retreated northwest to the Straits of Mackinac and Green Bay, and others eastward to the protection of French guns near Quebec or even to Iroquoia to join their captive relatives. Those who sought refuge among the Hurons’ close neighbors to the west, the Tionnontatés, absorbed yet another blow in December 1649, as 300 Iroquois warriors struck the village of Etharita or St. Jean, killing and capturing a large but indeterminate number of people. It had taken less than two years for the Iroquois to conquer one of the largest Indian confederacies in North America.

Muskets were critical to the Iroquois victory. First and foremost, years of ambushes by Iroquois gunmen had set the stage for the invasion by making the Hurons prisoners in their own towns, too afraid to venture beyond their fortified walls to patrol their country, raise food, or protect their confederates, at least not to any effective degree. By the time the invasion began, Iroquois bands moved almost freely throughout Huronia. As for the campaign itself, the thousand-man force that devastated Huron country in 1649 was reportedly “well furnished with weapons,—and mostly with firearms, which they obtain from the Dutch, their allies.” By contrast, the Hurons were poorly armed, to which at least one Jesuit directly attributed their defeat. With the Jesuits having watched their charges and colleagues die in heaps from gunshot wounds, it was difficult to conclude otherwise. The Hurons reached the same judgment, demanding that the Jesuits “speak to the Captain of France, and tell him that the Dutch of these coasts are causing our destruction, by furnishing firearms in abundance, and at low price, to the Iroquois, our enemies.”

During the early 1650s the Iroquois also rode their advantage in guns to a series of victories over the remaining tribes of the eastern Great Lakes, most of which harbored displaced Hurons. In quick succession the Iroquois shattered the Petuns in 1650 and the Neutrals in 1651, the latter with an invasion of 1,500 men. Their next target, beginning in 1653–1654, was the Eries (or Cats), a people some 2,000 strong. As they did with the Hurons and Neutrals, the Iroquois systematically broke down the Eries’ perimeter with gunfire ambushes to prepare for a large-scale invasion. The Eries, who had “no firearms,” nevertheless had a fearsome reputation because of their arsenal of poison arrows, which they could fire “eight or ten times before a musket can be loaded.” The Iroquois neutralized this weapon during their sieges of Erie forts with a combination of thick wooden shields (or mantlets), large portable wooden walls, and even canoes, which they carried over their heads to approach enemy fortifications, then used as ladders to scale the palisades. Collectively these campaigns had netted the Iroquois thousands of captives, produced the deaths of thousands of others, and effectively cleared the region of rival nations.

The Five Nations’ neighbors and rivals to the south and east, particularly the Susquehannocks of the Susquehanna River Valley, the Mohicans of the Hudson and Housatonic River Valleys, and the so-called River Tribes of the Connecticut Valley (Pocumtucks, Norridgewoks, and Squakheags), learned the lesson before it was too late and built up arsenals that gradually tipped the scales away from the Iroquois.

Even when colonial magistrates actually tried to police the flow of arms (or claimed to try), they confronted the limits of their authority on the Delaware. After the Dutch conquered New Sweden in 1655, Swedish gunrunners shifted from English to Dutch suppliers, then carried the weapons inland to Susquehannock country for sale, far from inspectors stationed along the river. The Delaware was an even greater river of rogues than was the Hudson. The real blame or credit for the renegade character of the Delaware Valley gun frontier belonged to the Indians themselves, who exploited the competition at every opportunity.

Maryland had concluded that it was more politic and profitable to seek alliance with the Susquehannocks through the arms trade than to continue trying to resist them. It was the Susquehannocks, not any colonial polity, who were the most formidable power in this region. Given the Susquehannocks’ many options when it came to obtaining European wares, weak colonies like Maryland had the choice to supply them with guns or face their guns. The Susquehannocks, for their part, placed newfound value on Maryland as a trade partner, having lost New Sweden to Dutch conquest in 1655. Peace with the Chesapeake colony was a means of keeping their trade options open.

The Susquehannocks were well prepared by the time the western Iroquois nations turned their raids back against them in the early 1660s. Though the Mohawks and Susquehannocks remained at peace during these years, perhaps because neither of them wanted to imperil their relations with the Dutch (who counted both groups as fur trading partners), the western Iroquois had no such scruples. Their populations (and the Susquehannocks’) had suffered enormously from a recent smallpox epidemic, and, following the destruction of the Lake Erie and Ontario peninsula tribes, there were no major Iroquoian-speaking peoples left to raid for replacements other than the Susquehannocks.

The Five Nations’ Algonquian-speaking rivals to the east, the Mohicans of the Hudson and Housatonic River Valleys and the Sokokis of the Connecticut River, also began to close the arms gap through multilateral trade. Dutch Fort Orange, with its brisk market in arms and ammunition, anchored this commerce in the western portion of the Algonquins’ territory. To the north were the French on the Saint Lawrence, who, after witnessing the Iroquois dispatch the Hurons in the 1640s, opened up the gun market to Christian and non-Christian Indians alike. The Abenakis of what is now Vermont exploited this policy to become middlemen between the French on the Saint Lawrence and the Connecticut River tribes.

This multifront gun frontier set the stage for a failed Iroquois attack on a Sokoki fort at Fort Hill at the site of modern Hinsdale, New Hampshire, in December 1663, just months after the Susquehannocks repulsed the Senecas. From the safety of their palisade, Sokoki gunmen warded off a daybreak assault by the Iroquois, even extinguishing a fire the attackers had set against the enclosure with a rudimentary bomb made from a lit bag of gunpowder. Ultimately the invaders decided to retreat after suffering a hundred or more casualties. It was their second major setback in just a matter of months at the hands of enemies who had caught up to them in the regional arms race.

Without the advantage in firearms, the Iroquois no longer enjoyed the lopsided victories they had come to expect and that were their measure of a successful campaign. There was little purpose in raiding foreigners for captives to buttress the League’s population if that meant losing large numbers of valuable fighting men along the way and inviting reprisals on the home front.

The Five Nations’ French and Indian enemies to the north and west were building up their arsenals, too, after decades of suffering the attacks of Iroquois gunmen. Not only had the French loosened their restrictions on the weapons trade, but they began to manufacture their own gun for the Indian market to answer the light, durable arms of the Dutch.

Despite suffering five epidemics between 1668 and 1682 and losing some 2,200 people, the Five Nations’ forcible adoption of captives permitted them to man a steady stream of war parties against the Susquehannocks, who appear to have suffered even worse from these diseases.

A few years later, mutual exhaustion and political pressure from New York led to peace between the Iroquois and the New England Algonquins. Five Nations warriors in search of captives, plunder, and glory now had to pursue their ambitions elsewhere. One of those directions was westward against the Algonquian-speaking Miamis and Shawnees of the Ohio River Valley and the Illinois of the upper Mississippi River Valley, to take advantage of those people’s weak armament.

In September 1680 the Iroquois successfully intimidated the Miamis into joining them against the Illinois, creating an army reportedly 900 men strong, “all Fusiliers [or gunmen]; these two nations being well provided with Guns and all sort of ammunitions of war.” This force inflicted steep losses on an Illinois army and overran the town of Tamaroa to seize an estimated 800 captives. Some Iroquois warriors remained in the area for several more months, raiding up and down the Mississippi and even west of the great river.

The Iroquois also redirected their attacks southward along the Great Warrior Path running along the east side of the Appalachian Mountains into the Virginia and Carolina Piedmont.

Eventually their circle of targets grew to include the Cherokees of southern Appalachia, the Catawbas of the Piedmont, and other groups ever farther afield. There were several reasons for this shift. Certainly the Iroquois wanted their raids for captives to avoid poisoning relations with New France and northeastern English colonies, as had so often been the case during the 1660s. Also they were pulled southward by their adoptees from the Susquehannocks, Shawnees, and various Maryland and Virginia tribes. Each time Iroquois warriors ventured south, they risked armed encounters with area tribes that could easily descend into a cycle of revenge warfare.

Yet too often one of the most important factors has been overlooked: the southern Indians’ weak armament, the same kind of consideration that influenced League attacks in Illinois country. Elsewhere the cost of victory had become too great.

By the turn of the century the Iroquois found themselves in the same predicament that had plagued them in the 1660s. Once again their enemies had caught up in the regional arms race, with the French outfitting nations in the western Great Lakes with 700 to 1,000 guns a year, and the southern nations accumulating munitions through the trade of indigenous slaves and deerskins to South Carolina and Virginia. Iroquois deaths mounted in turn. Renewed warfare against New France, extending largely from Iroquois attempts to keep French arms out of the Illinois country, proved even less successful than in the recent past. In 1687, 1693, and 1696 the French and their Indian allies subjected Iroquoia to scorched-earth campaigns, which, though claiming few lives directly, produced famine after famine. The people could not endure this pressure indefinitely. It was time to seek security through diplomacy, not war.

New France’s Indian allies enjoyed many benefits as the French expanded their fur trade and military posts into the western Great Lakes in the late 17th and early 18th century.  In 1716 New France’s governor-general, Philippe de Rigaud de Vaudreuil, recommended to Paris that “to maintain peace with the Indians and to prevent them trading with the English” the colony needed an annual distribution of Indian presents in the amount of 600 guns, 40,000 pounds of powder, and 60,000 pounds of lead. By the early to mid-18th century, these presents constituted 5 to 10% of imperial spending on New France.

Indians were less grateful for these gifts and subsidized trade than insistent on them as conditions of friendship. In 1693 Five Nations headmen turned down a large gift of muskets from New York, finding them too heavy, whereupon Governor Benjamin Fletcher immediately placed a rush order in London for 200 light guns. “They will not carry the heavy firelocks,” he explained, “being accustomed to light, small guns in their hunting.” A year earlier Iroquois delegates had asked Captain Richard Ingoldsby of Albany what he expected them to do with New York’s gift of powder and shot in lieu of guns. “Shall we throw them at the Enemy?”

Then they turned the screw: “It is no wonder the Governor of Canada gains upon us, for he supplies his Indians with guns as well as powder.” The practice of contrasting one imperial power’s stinginess with the other’s generosity, and emphasizing that the people’s friendship had to be earned, was the normal Indian response when gifts were scanty or of poor quality, when gunsmiths were in want, and trade goods were in short supply or too expensive. Sometimes these warnings also contained barely veiled threats of war, with headmen observing that a colonial power that failed to arm its Indian allies would be seen as conspiring to weaken and then destroy them. Almost invariably, presents of arms and ammunition followed.

European technologies and even European gunsmithing did not translate into domination of Indians by colonies or imperial governments. It took until the mid-18th century for the number of Anglo-Americans to eclipse that of Indians in the trans-Appalachian West, and even then whites did not begin wresting serious land cessions from the Iroquois until after the American Revolution. In the lightly populated French and Spanish colonies, that day never came for Native people. One reason is that Indians almost always possessed the weaponry to defend their claims.

In the early stages of the trade, a number of factors contributed to Indians’ maintaining a steady supply of arms and ammunition at reasonable rates. These included colonial-Indian interdependence in trade, politics, and war, as well as an expansive, multidirectional gun frontier permitting Indians to do business with traders from different polities. Indian political decisions had as much to do with these conditions as any directives from colonial and imperial powerbrokers. Another important influence was rogue colonialism, with colonial regimes exercising little control over their gunrunners. In the era of French-English warfare beginning in 1688, indigenous people added to this list an imperial play-off system in which colonial authorities competed for Indian favor with gifts of guns, powder, shot, and gunsmiths out of fear that failure to do so would tip Indian loyalties toward their imperial rival and, with this, shift the North American balance of power. The results for Indians were decidedly mixed.

Women and children suffered tremendously along the way. Men were the ones who wielded firearms, who cut the deals with colonial gunrunners and governors, who planned the invasions and ambushes, who took to arms to defend their people, and who garnered the honors when their side was victorious.

Certainly women were critical parts of political decision making in many communities, including whether to send young men on revenge raids, though there is little trace of this role in colonial documents; women also reaped the benefits of the plunder and captives their men brought home. They processed the beaver pelts that men traded not only for arms and ammunition but for clothing, pots, scissors, needles, beads, and innumerable other things that made women’s lives easier and more fulfilling.

One might conclude that guns had transformed Indians, but a more accurate way to explain this history is to say that Indians had used guns to transform their lives and those of their neighbors. Putting the matter this way highlights Native people making choices for their own futures instead of suffering as passive victims of colonial decisions, abstract economic forces, or foreign technology.

Yet the point can be pushed too far. The fact of the matter is that the rise of Native gunmen, beginning with the Iroquois, dramatically circumscribed the choices of other indigenous people. They could either obtain arms by engaging in trade and diplomacy with colonial states, or become easy targets of marauding indigenous gunmen. This was an Indian-directed transformation, to be sure, but that point probably would have come as cold comfort to many of the people caught up in it. For them the colonial era and the gun age were one and the same, a period of terror and high-stakes gains and losses.

How the Native Americans played the English, French, and Spanish off against each other to gain multiple sources of guns

The French used gifts of smithing, gunpowder, and shot, and a “judicious application” of other presents, to compensate for their inability to match the English supplies and low prices of military hardware. The French sent subsidized gunsmiths to live in key Creek and Choctaw communities, which, the English fumed, then led Indians to expect the same of them. Initially the French refused to repair English arms, much to the irritation of Choctaw leader Alibamon Mingo, “because almost all the warriors of his village are armed with these guns.”

Writing in 1755 about the imperial rivalry, Carolina trader Edmond Atkin stressed that free gunsmithing gave the French influence with the Indians well beyond the monetary value of the service. “We furnish the Indians with guns enough in exchange for their deer skins and furs,” he recognized, “but the French mend them and keep them in repair gratis.” Smithing was doubly important because when an Indian saw his damaged gun “suddenly restored to its former state, and as useful as before, it gladdens his heart more than a present of a new gun would,” probably because the fix doubled as a gesture of friendship. The French also cultivated Indian alliances through gifts of munitions, particularly gunpowder. French gunpowder set the European standard, and Indians were eager to obtain it, even when they acquired their muskets from the English. Moreover, French powder and shot were available in high volume because the French were able to ferry their goods to Louisiana and its inland posts by water, whereas English traders were reluctant to burden their pack trains with heavy ammunition on journeys that ran hundreds of miles.

Gulf Coast and Mississippi River Valley Indians extracted enormous amounts of free munitions from the French and Spanish by the mere possibility that they would throw in their lot with the British. In 1732 Mobile’s commander put in an order for Indian gifts in the amount of 80,000 pounds of gunpowder, 14,000 pounds of lead, 25,000 gunflints, and 600 trade guns with brass mountings. The post already owed 120 muskets to Indians who “ask for them daily.” The French showed even greater generosity in wartime, as in 1759 amid the Seven Years’ War when Louisiana earmarked 900 guns for presents and 600 guns for trade. Spanish Florida was unable to keep pace, but episodically it too provided Indians with munitions as presents, as in 1736 when it hosted over a hundred unidentified Indians in St. Augustine and gave each one a gun, powder, and shot. All this was enough to make South Carolina merchant Sam Everleigh fume that “the Indians have been so used of late years to receive presents that they now expect it as a right belonging to them, and the English, French, and Spanish are in some measure become tributary to them.”

The uninterrupted flow of arms even after the decline of the slave trade enabled Indians in the Southeast to develop a gun culture much like the one that had taken shape in the Northeast in previous decades. Southeastern Indians preferred the gun over the bow and arrow for hunting deer because they could drop their kill with one shot. It was the opinion of John Stewart, a Scottish trader from Charles Town, that Indian hunters with firearms could “get more hides and furs in one moon than formerly with bow and arrow in 12 moons.”

If the hunter intended to trade the skin from his hunt, he would have to aim his shot at the head so as not to damage the hide, which attests to both the accuracy of smoothbore muskets when fired at close range and the skill of Native gunmen. Lawson’s impression was that North Carolina Indians used the bow and arrow only for hunting small game like turkey and ducks, “thinking it not worth throwing powder and shot after them,” probably because a single arrow could easily bring them down.

The same deerskin trade and play-off politics that underwrote this gun culture carried the danger of civil strife as young men on the make circumvented established chiefs to open their own trade lines and drum up foreign recognition of their claims to leadership. There was a built-in tension in many Indian societies between established leaders and young aspirants. The former’s leadership rested on their age, maturity, elite lineages, and accomplishments. Such men tended to favor peace and stability. Young men pursuing their own leadership credentials often provoked conflict with foreign peoples in order to prove themselves as warriors. With the onset of European trade, they obtained an additional route to influence, for if a young man managed to bring outside trade into the community, or convince a colonial government that he was a person worthy of receiving chiefly honors, he might actually acquire that status. This dynamic might help explain why one Upper Creek chief in the mid-eighteenth century went by the name of Gun Merchant. The problem was that making a power play by becoming a gun merchant usually involved the young man promising his people’s allegiance to one colonial state exclusively, regardless of the will of the chiefs and the reactions of the other colonial powers.

The Choctaws suffered just this sort of strife after the Natchez War as a result of the ambitions of a warrior named Red Shoes and the draw of English trade. Red Shoes had developed a warrior following by virtue of his exploits against Chickasaws, but he aspired to even greater heights. Throughout the 1730s Red Shoes pursued English trade over the Franco-centric foreign policies of the established leadership, including his hometown’s Mingo Tchito, the so-called “French Great Chief.” One source of discontent for Red Shoes and his men appears to have been the lack of guns provided by the French and the chiefs’ control over this meager stockpile. Generally the chiefs kept firearms given to them as presents by Louisiana and then loaned them out to hunters and warriors, thus strengthening their influence. Red Shoes contended that this system not only put too much power in the chiefs’ hands, but gave the French too much leverage over the chiefs and the people. The chiefs’ response was that the English were so far away that if Red Shoes prevailed, the people “would see themselves forced to take up their old arms, the bow and arrow, again,” that is, “unless they wanted to load their guns with [English] limbourg [cloth].” Red Shoes would not be swayed, and thrice during the mid- to late 1730s he arranged for Carolina pack trains laden with trade guns to enter Choctaw country. In return Charles Town awarded him a medallion and proclamation naming him “King of the Choctaws.” Red Shoes also tried to broker peace with the Chickasaws, Carolina’s main indigenous trade partner in the region, first in 1739, then again in 1745.

It was time for the Francophile chiefs and Louisiana to intervene, for Red Shoes was on the verge of achieving a political and commercial realignment that would rob them of power and perhaps threaten the very existence of the French colony. The chiefs tried to limit the internecine violence by killing a visiting Chickasaw diplomat and his wife, but there seemed to be no other choice after Red Shoes retaliated by killing three Frenchmen. With Louisiana governor Pierre de Rigaud de Vaudreuil threatening to institute a trade embargo and throw French support to the Choctaws’ longtime enemy, the Alabamas, the Francophile chiefs assassinated Red Shoes.

It was the beginning of two years of bloody civil war in the nation. This dark chapter in Choctaw history came to an end only after the French-leaning eastern Choctaws, outfitted with French guns, powder, shot, and even cannons, managed to subdue the English-leaning western towns, which found their Carolina supply lines less reliable in wartime than they had hoped. Eight hundred of Red Shoes’s followers lost their lives in this struggle, their scalps sold to the French for bounties double that offered for Chickasaw trophies. The expense to the French was some 62,000 livres in presents per year to a roster that by 1763 counted over 600 men. Play-off politics, like the adoption of guns, was full of opportunities to accumulate wealth and power, but also loaded with danger.

To be sure, gun violence created even as it destroyed. Survivors formed new coalitions like the Yamasees, Creeks, and Catawbas, in part to protect themselves from slave raiders and organize their warriors into militant slavers. The Indians’ quest for firearms led to political relations with a host of new colonies and empires, and trade lines that connected them to a burgeoning global commerce. Consequently their material life was richer than ever before, marked not only by munitions but brightly colored cloth, tailored clothing, exotic pigments, metal tools, and much more. It is apt to call this change in Indian life a consumer revolution, but it was one in which there were far fewer people to enjoy the goods.

King Philip’s War

Wiki overview: King Philip’s War was an armed conflict in 1675–78 between Native American inhabitants of present-day New England and English colonists and their Native American allies. It arose from European settlers’ continued encroachment onto Wampanoag lands and the colonists’ demand that the Wampanoags sign a new peace agreement that included the surrender of Indian guns. When officials in Plymouth Colony hanged three Wampanoags in 1675 for the murder of a Christianized Indian, the Wampanoags and their allies launched a united assault on colonial towns throughout the region. By the end of the conflict, the Wampanoags and their Narragansett allies were almost completely destroyed. The war was the single greatest calamity to occur in 17th-century Puritan New England and is considered by many to be the deadliest war, in proportion to the population, in the history of European settlement in North America. In little more than a year, 12 of the region’s towns were destroyed and many more damaged, the colonies’ economy was all but ruined, and their population was decimated, losing 10% of all men available for military service. More than half of New England’s towns were attacked by Native American warriors.

For all the colonists’ anxieties about salvation and wolves preying on their sheep, they were also haunted by the fact of being surrounded by indigenous people with superior armaments. Equally unnerving was the danger of the Natives using these weapons to redress their grievances against the colonial order.

These fears materialized in King Philip’s War of 1675–1676. For 9 months, Indian gunmen lured colonial militia into devastating ambushes, sacked outlying English towns, and terrorized the roadways. It seemed within their grasp to push the line of English settlement back to the outskirts of Boston and even into the sea. What made the Natives’ guerilla strikes so effective was that the warriors seemed to blend into the thick New England woods until the very moment they opened fire.

Throughout King Philip’s War, the English possessed the advantage of being able to import large quantities of firearms, gunpowder, and lead from the mother country. Yet they were the ones who felt under siege by Native enemies.

Of all the morals King Philip’s War had to teach, among the most significant was this: It was dangerous, even suicidal, for Indians surrounded by the expanding English colonies and dependent on English munitions to go to war against them unless they had reliable trade alternatives among other European powers. Whereas interior groups like the Iroquois, Creeks, and Chickasaws were encircled by a gun frontier giving them relatively dependable access to multiple colonial markets, by the 1670s east-coast nations like the Wampanoags, Narragansetts, and Nipmucs had only tentative lines beyond the English.

During King Philip’s War the English closed ranks and showed unprecedented respect for laws banning the trade of guns and ammunition to Indians.

The warring Indians in King Philip’s War suffered the loss of thousands of their people to violent deaths and disease. The English captured hundreds and perhaps even thousands of others and sent them into the hell of Caribbean slavery. Most of those lucky enough to survive and escape captivity fled the region for good to take refuge in the Saint Lawrence or Hudson River Valley or places beyond. Even those who sided with the English wound up suffering, for after the war the colonies immediately seized hundreds of square miles of Indian land and began the long but indelible process of acquiring most of the rest, largely through underhanded means.

New England in the mid-17th century was as favorable an arms market as Indians could hope to find, because of numerous divisions within the colonial ranks. Though all of the English colonies in the region were established by reformed Protestants (or Puritans) opposed to Catholic elements in the Anglican Church, several rifts emerged when it came to building their own ecclesiastical order in America. The subsequent hiving off of dissidents and fortune seekers from Plymouth and Massachusetts produced the colonies of Rhode Island, Connecticut, and New Haven, the independent plantations of Martha’s Vineyard and Nantucket, and several semiautonomous English towns on eastern Long Island.

Competition among the English colonies and between the English and Dutch allowed Indians to choose among multiple traders from the two most commercially minded and important arms-producing nations of Europe.

Crazy Horse

On May 6, 1877, Crazy Horse, the great warrior chief of the Oglala Lakotas, finally surrendered to the United States, effectively symbolizing the end of his people’s quarter century of resistance to white American hegemony along the upper Missouri River and Great Plains. Though the Lakotas had welcomed the trade goods accompanying U.S. expansion, practically everything else about it constituted a disaster. Even before the invasion of white ranchers and farmers, the Lakotas had been plagued by an unending succession of American transients, some of them violent, nearly all of them wasteful. First there were the overland migrants, tracing rutted trails from Missouri to the golden fields of Oregon and the gold strikes of California and the Rocky Mountains. These travelers and their livestock stripped precious river bottoms and grasslands of materials the Lakotas needed to build and heat their homes, construct their tools, and feed their horses. Their long wagon trains disrupted the buffalo’s normal migrations, which sometimes forced the Lakotas to go hungry. Close behind them were white hunters, who slaughtered the buffalo wantonly, usually only for their robes, leaving their carcasses to rot on the Plains. It was as if they were eager to starve Indians who relied on these animals for practically everything. At least the overland migrants and hide hunters tended to only pass through Lakota territory. The railroad-building and mining industries delivered some of the roughest, most lawless, and environmentally destructive segments of American society directly into the Lakota heartland, including the sacred Black Hills. Whenever Lakota warriors drove them out, it seemed only to entice more of them to return, with blue-coated soldiers in tow for their protection.

Lakota warriors could handle U.S. cavalry in anything resembling a fair fight, but they could not cope with their relentless hounding of civilian camps, including the massacre of women, children, and the elderly, and the destruction of the people’s horses and food stores. This punishment came when the Lakotas were already suffering acute hunger because of the dwindling buffalo herds, and a population freefall as epidemic diseases accompanying the Americans tore through their tents season after season. By 1877 the people could take no more. One by one, desperate Lakota bands came to the wrenching conclusion to move onto the reservations that the federal government had assigned them, where, its agents promised, at least there would be something to eat and the soldiers would stop pursuing them. Probably no one felt more anguish over this decision than Crazy Horse, who as a mature man in his mid-thirties had spent his adult life battling to avoid just this moment.

In the long term, the U.S. government planned to force the Lakotas to adopt a sedentary, agricultural life, hemmed in by farm fences and the lines of the reservation. This prospect was especially bleak for the men. Lakota men had been hunters and warriors since time out of mind. That was how they defined themselves as individuals, as men, and as Lakotas. To them it was the sacred order of things. Fulfilling these roles also meant a life full of excitement and glory, played out across an expansive territory of beautiful, powerful places. All of this would change under American rule. A man’s life would be reduced to the monotonous routines of tilling the soil and tending to livestock, day in and day out on the same tract of land. Crazy Horse could see little that was good and meaningful in this future, so what could he say in yielding to it after years of fending off the blue coats? What words could possibly capture the worry, humiliation, and sadness of this event?

Hours later, after the people had erected their teepees and refreshed themselves, the men gathered in the center of camp to conclude their surrender. First Crazy Horse, then other chiefs such as Little Big Man, He Dog, and Little Hawk, and finally fifty more men of lesser rank, placed 147 guns in a pile, most of them “first-rate sporting rifles or else Springfield carbines, caliber .45, the same as now issued to United States troops.” Crazy Horse himself relinquished “three fine Winchester rifles,” a repeating gun that held between ten and fourteen rounds. Clearly, a lack of weapons had nothing to do with the Lakotas’ capitulation to the Americans. Clark, however, refused to believe that these were all the arms they had. Rejecting the offer, he calmly but directly explained that he would accept only their complete arsenal, “and to save trouble they had better go out and find those guns at once.”

To restore calm, Crazy Horse accompanied the reservation’s Indian guard as it went tent to tent gathering weapons, sometimes in exchange for horses in the case of an unwilling donor. An additional 50 rifles and muskets and 31 pistols surfaced, making 120 rifles and muskets and 75 pistols in all—probably still less than the absolute total, but enough to satisfy the lieutenant.

The ceremonialism of Crazy Horse and Sitting Bull at the time of their surrenders captured a lesson that has too often been lost and even denied in accounts of North American Indian history.

More on the slave trade

Sometime during the 1670s a young woman from the Yuchis of what is now the Tennessee/North Carolina/Virginia border region experienced the horror of being captured and sold into slavery by a band of gun-toting warriors from the Chichimecos, a group her people barely knew. Whether her ordeal began during an attack on her village or an ambush along the trail is unknown, but what came next probably followed what was becoming a well-worn pattern. The Chichimecos, after keeping her in a holding pen until they had accumulated enough captives for the colonial market, would have attached a leather collar around her neck connected to cords tying her wrists behind her back, and then tethered this restraint to a long leash guiding other similarly bound prisoners, most of them women and children. Marched in this constrained position throughout the day and staked to the ground at night, she eventually found herself some 300 miles to the east, on the coast. The destination was the young English colony of Carolina, anchored by the community of Charles Town along the Ashley and Cooper Rivers. Carolina was an offshoot of the Caribbean colony of Barbados, which already had developed an insatiable appetite for cheap bound labor to do the grueling work of growing, harvesting, and processing sugarcane to satisfy Europe’s sweet tooth and thirst for rum. Carolinians hoped one day to discover their own cash crop, but in the meantime they saw their most lucrative opportunity in the export of Indian slaves to Barbados and other island plantations. The Chichimecos were their first supplier, enticed by deals such as the one they got for the captive Yuchi woman: they “sold her for a shot gun.” This woman’s name remains a mystery. Nevertheless, some details of her story survive because she managed to escape the English and make it back to her people. Her chief then used her story to alert Spanish authorities to the Chichimeco threat. Her fellow captives were less fortunate.

[  Although this is a rather long extract, it is just a small part of the book and I hope you’ll buy it to understand the enormous role guns played in American history.]

 

 

 


EROI of Canadian Natural Gas. A peak was reached despite enormous investment

[ Although I’ve extracted much of this paper, it is not complete (there are missing equations, figures, tables, and text), so see the paper for details; it is available online. I’ve rearranged the order of the paper; the conclusion is just below the introduction. Some of the important points include:

  1. Natural gas production in Western Canada peaked in 2001 and remained nearly flat until 2006 despite more than quadrupling the drilling rate.
  2. Canada seems to be one of many counter examples to the idea that oil and gas production can rise with sufficient investment.
  3. The drilling intensity for natural gas was so high that net energy delivered to society peaked in 2000–2002, while production did not peak until 2006.
  4. The industry consumed all the extra energy it delivered to maintain the high drilling effort.
  5. The inability of a region to increase net energy may be the best definition of peak production. This increase in energy consumption reduces the total energy provided to society and acts as a contracting pressure on the overall economy as the industry consumes greater quantities of labor, steel, concrete and fuel.
  6. It is clear that state of the art conventional oil & natural gas extraction is unable to improve drilling efficiency as fast as depletion is reducing well quality.
  7. This pattern shows the falsehood of the idea that additional investment always results in increased production. During the initial rising EROI phase, flat or falling drilling rates can increase production, and during the falling EROI phase, production can fall despite dramatic increases in investment.
  8. There appears to be a maximum energy investment that can be sustained, which is about 15:1 to 22:1 EROI or 5% to 7% of gross energy. [If this is the case], then economic growth may not be possible if more energy is diverted into the energy producing sector. If this minimum exists, then it places a lower bound EROI on any energy source that is expected to become a major component of societies’ future energy mix.

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (Springer, 2015)]

Freise, J. November 3, 2011. The EROI of Conventional Canadian Natural Gas Production. Sustainability 2011, 3, 2080–2104.

Abstract: Canada was the world’s third largest natural gas producer in 2008, with 98% of its gas being produced by conventional, tight gas, and coal bed methane wells in Western Canada.

Natural gas production in Western Canada peaked in 2001 and remained nearly flat until 2006 despite more than quadrupling the drilling rate.

Canada seems to be one of many counter examples to the idea that oil and gas production can rise with sufficient investment.

This study calculated the Energy Return on Energy Invested and Net Energy of conventional natural gas and oil production in Western Canada by a variety of methods to explore the energy dynamics of the peaking process. All these methods show a downward trend in EROI during the last decade.

Natural gas EROI fell from 38:1 in 1993 to 15:1 at the peak of drilling in 2005.

The drilling intensity for natural gas was so high that net energy delivered to society peaked in 2000–2002, while production did not peak until 2006.

The industry consumed all the extra energy it delivered to maintain the high drilling effort. The inability of a region to increase net energy may be the best definition of peak production. This increase in energy consumption reduces the total energy provided to society and acts as a contracting pressure on the overall economy as the industry consumes greater quantities of labor, steel, concrete and fuel. It appears that energy production from conventional oil and gas in Western Canada has peaked and entered permanent decline.

Introduction

At the start of the 21st century we have a lot of pressing questions about our future energy supply: Can the world maintain its oil production plateau? Can natural gas production grow to replace coal and oil? Is it physically possible to grow the economy using renewable energy sources or even transition to renewable energy sources? What ties these questions together is a concept called net energy. It takes an investment of energy (in the form of fuel, steel, labor, and more) to produce energy. The net energy is the amount of surplus after this investment has been paid. This surplus is the energy available to operate the rest of the economy. All of these questions may be asked in a simpler form: Can we do X and still maintain or grow the net energy supply? Thus, insight gained from understanding the energy production of fossil fuels may transition to understanding of the growth (or decline) of renewable energy sources.

Canada’s oil and natural gas industry makes an interesting case study for net energy analysis. The country is a very large petroleum producer and was the world’s third largest natural gas producer in 2008 [1] and most of that production comes from the onshore Western Canadian Sedimentary Basin (WCSB). It went through a peak in oil production in the 1970s and, despite an increase in drilling, the country could not return to peak rates. Most recently, natural gas production fell from an eight-year plateau despite a 300% increase in the rate of drilling and an even greater increase in investment.

A net energy analysis of Canadian conventional oil and natural gas provides several things: First, it is a measurement of current conditions. How much net energy is being produced now and what is the trend? Second, it provides insight into the net energy dynamics of the production growth, peak/plateau, and decline for oil and natural gas production. Third, it gives some indication of what net energy levels are needed for an energy system to grow and below which levels cause a peak or decline in the energy system.

Net Energy and the Economy.  It takes energy to produce energy. For natural gas and oil production, energy is consumed as fuel to drive drilling rigs and other vehicles, energy to make the steel in drill and casing pipe, energy to heat the homes of the workers and provide them with food. These energy expenditures make up the cost of producing energy. Net energy is the surplus energy after these costs have been paid.


Figure 1. (a) Energy return on energy invested (EROI) 20:1 energy supply & surplus; (b) contraction caused by fall to 10:1 EROI; and (c) Surplus returned by higher end use efficiency.

As costs rise, the energy sector makes a huge increase in its demand for labor, steel, fuel, and other inputs from society at large, shown in Figure 1 by a large increase in the red area. But at the same time, the energy sector provides no additional energy with which to create that extra steel, supply the fuel, or support the labor. Society must then cannibalize other sectors to supply the demands of the energy sector, and the non-energy economy contracts. This non-energy contraction would in turn cause a collapse in demand for energy, returning society to somewhere between columns A and B.

To help formalize this example, assume Figure 1 shows a theoretical energy source supplying 1 Giga Joule (GJ) of energy. The three columns show three different net energy conditions. Column A shows an energy supply that requires 5% of the gross energy as input energy. It has an EROI of 20:1 and a net energy of 95%. Column B shows the same energy source, but where the cost of producing energy has doubled to consume 10% of the gross energy supply. It has an EROI of 10:1 and a net energy of 90%. The transport, refining, and end use efficiency remain the same and so the final surplus has contracted.

Column C represents a society that has adapted to the lower EROI energy source by improving efficiency of use and the surplus has returned. The more efficient a society, the lower the net energy supply it may subsist upon. This last point will be important when examining the difference between the peaks in oil and natural gas.
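To make the arithmetic behind Figure 1 explicit, here is a minimal sketch (my own illustration, not code from the paper) of the relationship between EROI and net energy used in the three columns:

```python
# Illustrative only; the numbers reproduce the Figure 1 example in the text.
def net_energy(gross_gj, eroi):
    """Net energy = gross output minus the energy invested to produce it (gross / EROI)."""
    return gross_gj - gross_gj / eroi

gross = 1.0  # 1 GJ of gross energy, as in Figure 1
print(net_energy(gross, 20))  # Column A: EROI 20:1 -> 0.95 GJ net (5% consumed)
print(net_energy(gross, 10))  # Column B: EROI 10:1 -> 0.90 GJ net (10% consumed)
# Column C: the same 10:1 source, but society uses the smaller net supply more
# efficiently, so the final useful surplus returns to roughly its earlier level.
```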

CONCLUSION: The Current State of Western Canadian Natural Gas and Oil Production.  All three methods show a downward trend in EROI during the last decade (Figure 10), and the combined oil and gas industry has fallen from a long-term high EROI of 79:1 (about 1% energy consumed) to a low of 15:1 (7% energy consumed).


Figure 10. EROI comparison according to technique.

Natural gas EROI reached an even deeper low of 14:1 (7%) or even 13:1 (8%) with the NEB EUR method.

 

It is clear that state-of-the-art conventional oil and natural gas extraction is unable to improve drilling efficiency as fast as depletion is reducing well quality. The fact that EROI does not rebound to match prior drilling rates, and that the EUR result shows no rebound, indicates that well quality continues to decline. The small rebound in EROI is a result of the rolling-average technique of methods one and two.

The conventional oil and gas in the WCSB has peaked. Falling well quality will likely continue to push cost up or production down.

This pattern shows the falsehood of the idea that additional investment always results in increased production. During the initial rising EROI phase, flat or falling drilling rates can increase production, and during the falling EROI phase, production can fall despite dramatic increases in investment.

There appears to be a maximum energy investment that can be sustained, which is about 15:1 to 22:1 EROI or 5% to 7% of gross energy. This might indicate a minimum EROI that can be supported while the economy grows. The minimum was higher for the oil peak than the natural gas peak and this might have been caused by inexpensive imported oil or because the economy had become more energy efficient (Figure 1 column C) allowing a lower minimum EROI.

The natural gas and oil peaks differed when analyzed using net energy. The oil peak had a peak in gross and net energy in the same year, suggesting that some outside factor was responsible for reducing investment. Natural gas showed a net energy peak before a gross production peak. This suggests that price was not the limiting factor in reducing drilling effort. Instead, from 1996 to 2005, the drilling rate for natural gas quadrupled and expenditures rose even faster, despite falling net energy, and this in turn suggests that falling net energy was the eventual cause of economic contraction and falling prices.

A peak in net energy may be the best definition of “peak” production. When net energy peaks before gross energy it indicates that price was not the limiting factor in the effort to liberate energy. This is a likely model of world net energy production where less expensive imported energy sources cannot replace existing but declining energy sources.

A rise in EROI appears to be possible only when a new resource or region is being exploited, such as the transition from oil to gas as the primary energy production in the WCSB during the late 1980s. This study has focused on conventional natural gas production and it is very uncertain how exploitation of shale gas reserves will change the energy return.

Wider Implications.  Some wider conclusions about renewable energy are suggested by this net energy study. If there is a maximum level of investment between 5% and 7% of gross energy, then economic growth may not be possible if more energy is diverted into the energy producing sector. If this minimum exists then it places a lower bound EROI on any energy source that is expected to become a major component of societies’ future energy mix. For instance, nuclear power with its low EROI is likely below this level [25,26].

Also, if the maximum level of investment is 7% of output energy consumed and a renewable energy source has an EROI of 20:1, or 5%, then the 2% remaining is the maximum that may be invested into growth of the energy source without causing the economy to decline. This radically reduces the rate at which society may change the energy mix that supports it [27].
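The growth-budget arithmetic in that paragraph can be made explicit. The sketch below is my own illustration, assuming the roughly 7% reinvestment ceiling suggested by the WCSB data:

```python
# Illustration of the reinvestment-ceiling argument (not code from the paper).
MAX_INVEST_FRACTION = 0.07  # assumed ceiling: ~7% of gross energy (roughly 15:1 EROI)

def growth_budget(eroi, max_invest=MAX_INVEST_FRACTION):
    """Fraction of gross output left for building new capacity after the
    energy cost of sustaining current production (1/EROI) is paid."""
    sustaining = 1.0 / eroi
    return max(0.0, max_invest - sustaining)

print(growth_budget(20))  # 0.07 - 0.05 = 0.02 -> only ~2% of output can fund growth
print(growth_budget(15))  # ~0.003 -> essentially no room left to grow
print(growth_budget(10))  # 0.0   -> sustaining production alone already exceeds the ceiling
```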

This study does not attempt to estimate the EROI or net energy of shale gas, but some caution is warranted by comparison between these results and some cursory findings for the cost of shale gas. The International Energy Agency’s World Energy Outlook 2009 contained a graph showing the cost of natural gas production in the Barnett Shale (Figure 11). The core (best) counties, Johnson and Tarrant, show the lowest cost while counties outside the core production region show higher costs.

A very rough comparison can be made to the costs in this report. If the royalty amounts are subtracted and inflation adjusted into $2002 values, the Johnson County cost would be $2.94 resulting in an EROI of roughly 15:1 (7% of output consumed). This is not much higher than the lowest EROI values found in the WCSB. All the remaining Barnett Shale costs are much higher. Hill and Hood would have an EROI of 8:1 and Jack and Erath would have an EROI of roughly 5:1 (22% of output energy consumed in extraction). Given the history of the WCSB production peaks, it is hard to see how shale gas production could be much increased with such low net energy values. Shale gas may have a very short lived EROI increase over conventional while the core counties are exploited and then suffer a production collapse as EROI falls rapidly. This would fit the pattern seen with oil and then with natural gas in the WCSB.
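As a rough cross-check of that cost-to-EROI conversion (my own arithmetic, not the paper's): if the $2.94 figure is taken to be per Mcf of gas, with roughly 1.055 GJ of heat per Mcf, and the study's 24 MJ per $2002 energy intensity factor (introduced in the methods section below) is applied, the result is about the 15:1 quoted above. The per-Mcf basis and heat content are my assumptions.

```python
# Hypothetical cross-check; the per-Mcf basis and heat content are assumed.
ENERGY_INTENSITY_MJ_PER_USD2002 = 24.0   # intensity factor used in this study
GJ_PER_MCF = 1.055                       # approximate heat content of 1 Mcf of gas

def eroi_from_cost(cost_usd2002_per_mcf):
    energy_out_mj = GJ_PER_MCF * 1000.0
    energy_in_mj = cost_usd2002_per_mcf * ENERGY_INTENSITY_MJ_PER_USD2002
    return energy_out_mj / energy_in_mj

print(eroi_from_cost(2.94))  # ~15 -> consistent with the "roughly 15:1" above
```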

The IEA WEO 2009 also contains Figure 12, an illustration of a world view that increasing cost will liberate more and more energy for use by society.


Figure 12. Modified from the IEA WEO 2009 [28] with dotted lines added to illustrate concept of net energy reducing the total volume of energy available as resource quality declines.

 

Conventional gas reservoirs, now peaked in production and shrinking in the WCSB, are seen as the small tip of a huge number of other resources that could be liberated with increasing investment. But falling net energy may prove this view false. If the energy return is too low, production growth may be limited or impossible from many of these energy sources. Much of the energy produced may need to be consumed during extraction. The proper shape of this diagram is likely to be a diamond with non-conventional resources forming a smaller part of the diamond underneath as denoted by the added dotted lines.

 

 

Background on the Western Canadian Sedimentary Basin.  Western Canada produced 98% of Canada’s natural gas in 2009 with the majority of that coming from the Western Canadian Sedimentary Basin (WCSB) that underlies most of Alberta, parts of British Columbia, Saskatchewan and the Northwest Territories [7].


Figure 3. Energy Content of Petroleum Production, by type, stacked.

This paper focuses on conventional natural gas, tight natural gas (gas in a low porosity geologic formation that must be liberated via artificial fracturing) and conventional oil production. Western Canadian natural gas production is still largely conventional and so makes a good area of study. In 2008, 55% of marketed natural gas was conventional gas from gas wells, 32% was tight gas, 8% was solution gas from oil wells, 5% coal bed methane (non-conventional), and less than 1% was shale gas [9,10].

The Canadian Gas Potential Committee in 2005 estimated that the WCSB contains 71% of the conventional gas endowment of Canada and that of an original 278 Tcf of marketable natural gas (technically and economically recoverable) 143 Tcf remain [11]. They note: “The majority of the large gas pools have been discovered and a significant portion of the discovered reserves has been produced” and further “62% of the undiscovered potential occurs in 21,100 pools larger than 1 Bcf OGIP. The remaining 38% of the undiscovered potential occurs in approximately 470,000 pools each containing less than 1 Bcf”. To put this in context, the petroleum industry has drilled fewer than 200,000 natural gas wells from 1947 to 2009 [7], and so will require at least a doubling of drilling effort to reach at least half of the marketable natural gas.

Results and Discussion.

Method One: EROI and Net Energy of Western Canadian Oil and Gas Production

The Canadian Association of Petroleum Producers (CAPP) maintains records of oil and gas production and expenditures going back to 1947. In theory it is simple to calculate net energy and EROI from this public data. Energy output equals the total production volume of each hydrocarbon produced in a given year (conventional oil, natural gas, natural gas liquids), converted to heat energy equivalents and measured in gigajoules. The energy input side is more difficult because the public expenditure data are recorded only in Canadian dollars per year, not in energy. An energy intensity factor is used to convert the dollar expenditures into energy; this factor is calculated from Energy Input Output-Life Cycle Analysis.

As the energy intensity factor includes wages paid to labor, but energy inputs are not quality corrected, the results are equivalent to EROI_society and not EROI_standard [12]. EROI_standard corrects the input energy for quality but excludes labor costs. The energy intensity factor was 24 MJ/$ (U.S. 2002), and all expenditures were inflation-corrected and converted to U.S. dollars. While the focus of this paper is on natural gas production, this result provides a historical timeline to compare with the more limited time series for natural gas only. The results are first plotted as gross energy and net energy alongside the meters drilled per year, as in Figure 4.
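A short sketch of Method One as described above (my paraphrase, not the authors' code): convert each year's production volumes to heat energy, convert inflation-adjusted expenditures to input energy with the 24 MJ/$2002 intensity factor, and take the ratio and the difference. The heat-content factors below are typical values I have assumed for illustration.

```python
# Sketch of Method One; conversion factors are assumed, illustrative values.
ENERGY_INTENSITY_MJ_PER_USD2002 = 24.0

HEAT_CONTENT_GJ = {          # assumed GJ per unit of production
    "oil_m3": 38.5,          # per cubic metre of conventional oil
    "gas_1000m3": 37.3,      # per thousand cubic metres of natural gas
    "ngl_m3": 25.5,          # per cubic metre of natural gas liquids
}

def eroi_and_net(production, expenditures_usd2002):
    """production: volumes keyed as in HEAT_CONTENT_GJ;
    expenditures_usd2002: total upstream spending, inflation-adjusted to $2002."""
    energy_out_gj = sum(HEAT_CONTENT_GJ[k] * v for k, v in production.items())
    energy_in_gj = expenditures_usd2002 * ENERGY_INTENSITY_MJ_PER_USD2002 / 1000.0
    eroi = energy_out_gj / energy_in_gj
    net_gj = energy_out_gj - energy_in_gj
    return eroi, net_gj
```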


Figure 4. Net Energy content of oil and gas produced after invested energy is subtracted, with total meters drilled.

The time period from 1947 to 1956 showed rising production along with a rising drilling rate. From 1956 to 1973 production rose despite no corresponding rise in drilling. From 1973 to 1985 production fell despite a rise in drilling effort. The increased drilling rates were unable to increase gross energy and actually drove down net energy during this period.

In the mid-1980s, energy production once again rose with a falling drilling rate. That trend later reversed to rising production with increased drilling. Then, in the year 2000, the petroleum industry showed an initial peak in gross and net energy (see Table 1). The increases in drilling effort after 2000 were unable to increase production and actually drove down net energy (falling EROI). When the drilling rate increased, it drove down net energy. When the drilling rate slowed (as it did after 2006), production dropped and net energy fell even faster.


Table 1. Annual gross and net energy production of oil, gas, and natural gas liquids.

 

Plotting the same data as EROI is quite illuminating. Figure 5 shows that the industry underwent a dramatic rise in energy efficiency from the early 1950s until 1973 when it reached a peak in EROI of 79:1. At this peak the industry consumed only the equivalent of 1% of the energy it produced. Then, the industry suffered a tremendous efficiency drop to a low EROI of 22:1 (about 5% of energy production consumed by investment) only 7 years later as the industry more than doubled its drilling rate in an effort to return to the oil production peak.

Another interesting inflection point was 1985, when the industry began a 7-year period in which a reduced drilling rate provided an increase in production. This corresponded to an increase in efficiency as the industry focused on growing natural gas production (see Figure 3). EROI rose to 46:1 (about 2% consumed by investment) by 1992. This fortunate trend was not long lived. Once the drilling rate started to rise again, EROI followed a volatile but downward trend to a new low of 15:1 in 2006, when the industry consumed the equivalent of 7% of all the energy it produced. Further, it took a dramatic reduction in drilling, and falling back on the production of older wells, to achieve the small uptick in EROI seen in 2009.


Figure 5. EROI of oil and gas from 1947 to 2009 with meters drilled.

Natural gas from conventional and tight natural gas wells is now the dominant energy source in the WCSB and has just recently peaked. By removing the oil from the net energy and EROI calculations we can gain an insight into the energy dynamics of peak natural gas production. The data necessary to separate oil and gas production and expenditures is limited to 1993 to 2009. The details of splitting out both gas expenditures and gas production from the oil data are explained in the methodology (Section 3). The basic method for finding the net energy from natural gas wells alone is very similar to that for oil and natural gas combined. On the energy output side, the difficulty is that oil wells also produce natural gas and NGL, and the split between oil and gas wells is not recorded in the CAPP statistics. An NEB report [13] did report the amount of oil-well-associated gas for a limited time series, and this relation was used to estimate the amount of associated gas for the remaining years. On the input side, the expenditures for oil and gas well drilling and production are also intermixed. As drilling is the largest expense, it was assumed that expenditures are directly proportional to the distance drilled. For example, if gas wells accounted for 75% of the meters drilled, then 75% of exploration and development costs were apportioned to natural gas production.

Figure 6 shows the resulting EROI for natural gas wells and displays a variable but downward trend in EROI over the whole data period except for a rebound during 2007 to 2009 when drilling rates fell back to 1998 levels. However, the EROI did not return to 1998 levels along with the drilling rate.


Figure 6. EROI of natural gas wells with meters drilled

Table 2 displays the net energy of natural gas well production. The peak for the estimated gross energy from natural gas wells occurred in 2006 at 6.9 e9 GJ, but the peak in net energy happened much sooner. In 2002, net energy peaked at 6.5 e9 GJ. The drilling industry doubled the meters drilled from 2002 to 2005, but could not deliver more net energy to society. The additional industry investment consumed all the extra energy produced, and more.

Table 2. Gross and net energy from natural gas wells.

The first two methods used to estimate EROI suffer an inherent inaccuracy: The output energy of a given year is mostly produced by wells drilled in past years. Figure 7 shows an example of how production from wells drilled each year stack on top of each other to yield the annual production rate. Each colored band represents the natural gas produced from a given year’s wells. The wells drilled from 2003 to 2004 produced the yellow band. It is easy to see from this chart how most of the natural gas produced in 2003 was actually from wells drilled in prior years.

Figure 7. Canadian National Energy Board (NEB) estimate of natural gas produced by wells drilled each year. From [8].

 

A well may produce oil or gas for 30 years, but all the expense is applied during the year it was drilled. This mismatch in time scales can cause EROI to spike and dip if the drilling rate moves up and down. A rapid increase in drilling can cause EROI to dip as the investment is booked all at once, but production will take years to arrive. A rapid decrease in drilling will cause investment to suddenly drop, while production from wells from previous years stays high and will result in an EROI spike. These spikes and dips are exactly how the economy experiences the change in energy flows, and so it is perfectly valid to use this technique, but the averaging effect hides how the newest wells are performing.

One method to reveal current well performance would be to attribute the expected full life production of the well, the Estimated Ultimate Recovery (EUR), against the investment amount the year the well was drilled. The Canadian National Energy Board does periodic studies of producing natural gas. They calculate the EUR for the wells drilled each year [8]. They examined the wells drilled each year, totaled the past production from those wells, and used decline curves to estimate the remaining production of each year’s wells.

In this third method, the NEB calculated EUR was used instead of the annual production statistics for that year. The goal was to try to estimate the EROI of the very latest natural gas wells drilled and thus learn if the natural gas EROI rebound seen with the rolling average method was an artifact of the drop in drilling rate or if the natural gas wells improved in quality. The results are shown in Tables 3 and 4 and Figure 8. Again, the EROI trend is clearly declining. A specific example is to compare 1997 to 2005. Both years have very similar estimated ultimate recovery (EUR), but 2005 had a capital expenditure that was 3 times higher. This strongly suggests that the well prospects worsened over a short time period.
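The difference between the two accounting choices can be seen in a toy example (the numbers below are invented, not the paper's data): the annual, rolling-average EROI divides one year's production from all producing wells by that year's spending, while the EUR-based EROI divides the expected lifetime output of that year's new wells by the same spending.

```python
# Toy illustration of why the annual ("rolling") EROI can rebound when drilling collapses,
# while the EUR-based EROI of the newest wells keeps falling.  All numbers are made up.
years         = [2005, 2006, 2007, 2008]
production    = [6.8e9, 6.9e9, 6.7e9, 6.4e9]   # GJ produced that year, mostly from older wells
investment    = [0.9e9, 1.0e9, 0.5e9, 0.4e9]   # GJ of input energy invested that year
eur_new_wells = [2.5e9, 2.4e9, 1.1e9, 0.8e9]   # expected lifetime GJ from that year's new wells

for y, p, inv, eur in zip(years, production, investment, eur_new_wells):
    annual_eroi = p / inv     # all wells' output vs. this year's spending (rebounds in 2007-2008)
    eur_eroi = eur / inv      # this year's wells' lifetime output vs. this year's spending (keeps falling)
    print(f"{y}: annual EROI {annual_eroi:4.1f}   EUR-based EROI {eur_eroi:4.1f}")
```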

Table 3. Estimated Ultimate Recovery (EUR) and cost per GJ for natural gas wells.

 

 

 

 

Table 4. Total cost per GJ, net EUR, and EROI for natural gas wells.

 

 

 

 

 

 

Figure 8. EROI using NEB estimates of ultimate recovery, with meters drilled.

 

The EROI curve in Figure 8 is slightly less volatile than the rolling average technique, but more strikingly, the years 2007 and 2008 do not show the rebound in EROI that the rolling average method displayed. Assuming the NEB estimates for EUR are correct, this result indicates that the rebound was an artifact of the rapidly falling drilling rate on the rolling average and that new wells are performing considerably worse than prior years’ wells.

EROI Boundary

There are many stages to petroleum production: exploration, drilling, gathering and separation, refining, transport of finished products, and the burning of the final fuel. EROI could be calculated at any of these points in the process, and some studies have looked at the EROI of these various stages [6]. This paper examines the EROI within a boundary that includes the exploration, drilling, gathering and separating stages, typically referred to as the upstream petroleum industry. This analysis does not include refining, the transport of finished products, or the final usage efficiency. This boundary does include labor costs. These results correspond to EROI society (lower case) as described in the EROI protocol [12].

These results are not quite EROI Standard, which would require quality-correcting the input energy values (not available from the EIO-LCA) and excluding labor costs (which are rolled into the industry statistics and cannot be removed). Care should be taken to match the boundary conditions before comparing these results to other studies.

Method One: EROI and Net Energy of Western Canadian Conventional Oil and Gas Production.  The Canadian Association of Petroleum Producers (CAPP) maintains statistics on oil and natural gas production and oil and gas expenditures going back to 1947 [22] but the expense data is intermingled. This forces us to estimate the EROI of oil and gas together, but doing so provides a historical perspective for the more limited natural gas EROI that will be calculated later. The net energy and EROI of the combined oil and natural gas industry is thus the first result calculated.

Energy Output: Oil and Gas Production Statistics. Records of petroleum production are also maintained by CAPP and published in the annual statistical handbook [22]. The values summed were Western Canadian conventional oil, marketed natural gas, condensates, ethane, butane, propane, and pentanes plus. This paper focuses on conventional production and excludes synthetic oil from tar sands and bitumen production. The provinces and territories included in Western Canada are Alberta, British Columbia, Manitoba, Saskatchewan, and the Northwest Territories. The resulting energy production values are displayed in Figure 3.

Energy Input: Oil and Gas Expenditure Statistics. CAPP also maintains expenditure statistics for the petroleum industry back to 1947 [22]. Statistics are organized by province and major category. Money paid for land acquisition and royalties was excluded, as these do not involve energy expenditure (money paid for land and royalties shifts who gets to spend the industry profits, not how much energy is expended in extracting the resources). Exploration expenses include: Geological and Geophysical, Drilling, and Other. Development expenses include: Drilling, Field Equipment, Enhanced Recovery (EOR), Gas Plants, and Other. Operating expenses include: Well and flow lines, Gas Plants, and Other. All expenditures from all categories and provinces were summed into one value for each year.

Inflation Adjustment & Exchange Rate. The Canadian dollar expenditure statistics are nominal and must be inflation corrected to the year 2002 to use the energy intensity factor calculated via EIO-LCA analysis. The inflation adjustment is intended to remove the effect of currency devaluation and was done using the Canadian CPI [23]. The adjusted results were converted into U.S. dollars using the Bank of Canada annual average exchange rate for 2002 of $1.00 (U.S.) to $1.57 (Canadian) [24], and then converted into joules of energy input using the expenditure energy intensity factor of 24 MJ/$(U.S. 2002).

Combined Oil and Gas Results and Example. The results are displayed in Table 1, located in Section 2.1. A worked example for the year 2002 gives an invested energy of 361 e6 GJ = $15 e9 × 24 MJ/($U.S. 2002). Net energy is 9.78 e9 GJ = 10.14 e9 GJ – 0.361 e9 GJ (note the scale change: 361 e6 GJ = 0.361 e9 GJ). EROI is 28 = 10.14 e9 GJ / 0.361 e9 GJ.
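As a quick check of the worked example (using the rounded figures quoted above; the quoted 361 e6 GJ implies spending closer to $15.04 e9):

```python
# Re-compute the 2002 worked example from the rounded numbers in the text.
expenditures_usd_2002 = 15e9                 # US$ (2002), inflation- and exchange-rate adjusted
gross_gj = 10.14e9                           # GJ of oil, gas, and NGL produced in 2002

invested_gj = expenditures_usd_2002 * 24 / 1_000   # 24 MJ/$ converted to GJ -> ~0.360e9 GJ
net_gj = gross_gj - invested_gj                    # ~9.78e9 GJ
eroi = gross_gj / invested_gj                      # ~28

print(f"invested {invested_gj:.3e} GJ, net {net_gj:.3e} GJ, EROI {eroi:.0f}")
```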

Method Two: Net Energy and EROI of Western Canadian Natural Gas Wells. The method of calculating the EROI and net energy of natural gas wells is very similar to that used for oil and gas combined. Production and expenditure data were taken from the CAPP statistics and converted to units of energy. Oil production and expenditures were removed (as detailed below). The same energy intensity factor, inflation correction, and exchange rate were used as during the petroleum EROI calculation. The same EROI boundary was used, which includes the gas plants, but not refining or transportation.

Natural Gas Production Statistics. The energy from oil production was excluded, but natural gas produced as a byproduct of oil production was included. Natural gas is trapped in solution in the liquid oil; the gas comes out of solution when the pressure drops as the oil is produced. Oil also contains some of the lighter fraction hydrocarbons, such as condensates, propane, etc. The CAPP statistical handbook does not distinguish between solution gas and non-associated gas. However, the Canadian National Energy Board provided solution gas data from private sources for the years 2000 to 2008 [13]. Solution gas accounts for about 10% of the total marketed natural gas, so it is important that it be removed. For 2000 to 2008 the NEB values were used directly. To extend the solution gas estimates over the whole period of 1993 to 2009, a regression was fit between conventional oil production and the amount of solution gas for the years with data. The linear correlation was high (R = 0.93), and the resulting regression was used to predict the amount of solution gas from conventional oil production for the remaining years. The energy in the lighter hydrocarbons (natural gas liquids) also needed to be apportioned between oil and gas wells; they are roughly equal to 16% of the energy in the produced natural gas (so about 1.6% of natural gas well gross energy). No public data could be found that suggested a proper ratio, so for this study it was assumed that the ratio of lighter hydrocarbons associated with oil would be the same as the ratio of natural gas associated with the oil. The solution gas ratio was used for each year and that portion of the total NGLs was removed from the gross energy produced.
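A minimal sketch of that regression step, with placeholder numbers standing in for the NEB solution-gas series and the CAPP oil series:

```python
# Fit a linear relation between conventional oil production and solution gas for the years
# with NEB data, then use it to predict solution gas for the years without data.
# The arrays below are placeholders, not the actual NEB/CAPP values.
import numpy as np

oil_production = np.array([9.0, 9.3, 9.1, 8.8, 8.6, 8.4, 8.1, 7.9, 7.7])  # arbitrary units, 2000-2008
solution_gas   = np.array([1.6, 1.7, 1.6, 1.5, 1.5, 1.4, 1.4, 1.3, 1.3])  # same units

slope, intercept = np.polyfit(oil_production, solution_gas, 1)
r = np.corrcoef(oil_production, solution_gas)[0, 1]   # the paper reports R = 0.93 for its data

def predict_solution_gas(oil):
    """Estimate solution gas for a year with no NEB data from that year's oil production."""
    return slope * oil + intercept

print(f"fit R = {r:.2f}, predicted solution gas at oil = 9.6: {predict_solution_gas(9.6):.2f}")
```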

Natural Gas Exploration and Development Expenditures. The CAPP expenditure statistics encompass both oil and gas expenditures, so a secondary statistic is needed to estimate how the combined expenditures should be apportioned. The statistics do separate the meters of exploration and development drilling that target oil vs. gas wells. For this study it was assumed that expenditure dollars would be apportioned in direct proportion to the meters drilled. This assumption holds only if oil and gas wells have similar costs; as most of the oil and gas is produced from the same basin, this was judged a reasonable apportionment (as opposed to a case where all the natural gas were onshore and the oil production were done much more expensively offshore). The online version of the CAPP statistical handbook contains only the drilling distance statistics for the current year; copies of data from past handbooks for the years 1993 to 2010 must be requested directly from CAPP [22]. Table 6 lists these hard-to-acquire numbers. As an example, in 2002 the total meters drilled for oil was 0.71 e6 + 4.65 e6 = 5.36 e6 meters and the total meters drilled for natural gas was 2.63 e6 + 6.02 e6 = 8.65 e6 meters. Natural gas was thus 61.7% of total drilling, and so 61.7% of exploration and development expenditures were apportioned to natural gas wells for 2002. As in the combined oil and gas method, royalties and land expenditures were removed.
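The 2002 apportionment example, expressed as a few lines of arithmetic (numbers from Table 6 as quoted above):

```python
# Split combined oil-and-gas exploration and development spending in proportion to meters drilled.
oil_meters = 0.71e6 + 4.65e6   # exploration + development meters targeting oil wells (2002)
gas_meters = 2.63e6 + 6.02e6   # exploration + development meters targeting gas wells (2002)

gas_share = gas_meters / (oil_meters + gas_meters)   # ~0.617
print(f"{gas_share:.1%} of exploration and development spending apportioned to natural gas wells")
```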

References and Notes

  1. International Energy Statistics: Natural Gas Production. Available online: http://www.eia.gov/cfapps/ipdbproject/IEDIndex3.cfm?tid=3&pid=3&aid=1
  2. Hall, C.A.S.; Powers, R.; Schoenberg, W. Peak Oil, EROI, investments and the economy in an uncertain future. In Biofuels, Solar and Wind as Renewable Energy Systems, 1st ed.; Pimentel, D., Ed.; Springer: Berlin, Germany, 2008; pp. 109-132.
  3. Downey, M. Oil 101, 1st ed.; Wooden Table Press: New York, NY, USA, 2009; p. 452.
  4. Hamilton, J.D. Historical oil shocks. Nat. Bur. Econ. Res. Work. Pap. Ser. 2011, 16790.
  5. Carruth, A.A.; Hooker, M.A.; Oswald, A.J. Unemployment equilibria and input prices: Theory and evidence from the United States. Rev. Econ. Stat. 1998, 80, 621-628.
  6. Hall, C.A.S.; Balogh, S.; Murphy, D.J.R. What is the minimum EROI that a sustainable society must have? Energies 2009, 2, 25-47.
  7. Canada’s Energy Future: Infrastructure changes and challenges to 2020—An Energy Market Assessment October 2009; Technical Report Number NE23-153/2009E-PDF; National Energy Board: Calgary, Alberta, Canada, 2010.
  8. Short-term Canadian Natural Gas Deliverability 2007-2009; 1/2007E; National Energy Board: Calgary, Alberta, Canada, 2007. Available online: http://www.neb-one.gc.ca/clf-nsi/rnrgynfmtn/nrgyrprt/ntrlgs/ntrlgsdlvrblty20072009/ntrlgsdlvrblty20072009-eng.html
  9. Short-term Canadian Natural Gas Deliverability 2007-2009 Appendices; NE2-1/2007-1E-PDF; National Energy Board: Calgary, Alberta, Canada, 2007. Available online: http://www.neb-one.gc.ca/clf-nsi/rnrgynfmtn/nrgyrprt/ntrlgs/ntrlgsdlvrblty20072009/ntrlgsdlvrblty20072009ppndceng.pdf
  10. Johnson, M. Energy Supply Team, National Energy Board, 444 Seventh Avenue SW, Calgary, Alberta, T2P 0X8, Canada; Personal communication, 2010.
  11. Natural Gas Potential in Canada – 2005 (CGPC – 2005). Executive Summary; Canadian Natural Gas Potential Committee: Calgary, Alberta, Canada, 2006. Available online: http://www.centreforenergy.com/documents/545.pdf (accessed on October 1, 2010)
  12. Murphy, D.J.; Hall, C.A.S. Order from chaos: A preliminary protocol for determining EROI of fuels. Sustainability 2011, 3, 1888-1907.
  13. 2009 Reference Case Scenario: Canadian Energy Demand and Supply to 2020—An Energy Market Assessment. Appendixes; National Energy Board: Calgary, Alberta, Canada, 2009. Available online: http://www.neb.gc.ca/clf-nsi/rnrgynfmtn/nrgyrprt/nrgyftr/2009/rfrnccsscnr2009ppndc-eng.zip (accessed on September 7, 2010)
  14. Hall, C.; Kaufman, E.; Walker, S.; Yen, D. Efficiency of energy delivery systems: II. Estimating energy costs of capital equipment. Environ. Manag. 1979, 3, 505-510.
  15. Bullard, C. The energy cost of goods and services. Energ. Pol. 1975, 3, 268-278.
  16. Cleveland, C. Net energy from the extraction of oil and gas in the United States. Energy 2005, 30, 769-782.
  17. Hendrickson, C.T.; Lave, L.B.; Matthews, H.S. Environmental Life Cycle Assessment of Goods and Services: An Input-Output Approach; RFF Press: Washington, DC, USA, 2006; p. 272.
  18. Carnegie Mellon University Green Design Institute Economic Input-Output Life Cycle Assessment (EIO-LCA), USA 1997 Industry Benchmark model. Available online: http://www.eiolca.net (accessed on October 1, 2010).
  19. Crude Petroleum and Natural Gas Extraction: 2002, 2002 Economic Census, Mining, Industry Series; EC02-21I-211111; U.S. Census Bureau: Washington, DC, USA, 2004.
  20. Natural Gas Liquid Extraction: 2002, 2002 Economic Census, Mining, Industry Series; 21I-211112; U.S. Census Bureau: Washington, DC, USA, Appendices.
  21. Gagnon, N.; Hall, C.A.S.; Brinker, L. A preliminary investigation of energy return on energy investment for global oil and gas production. Energies 2009, 2, 490-503.
  22. Canadian Petroleum Association. Statistical Handbook for Canada’s Upstream Petroleum Industry; Canadian Association of Petroleum Producers: Calgary, Canada, 2010.
  23. Statistics Canada Table 326-0021 Consumer Price Index (CPI), 2005 basket, annual (2002 = 100 unless otherwise noted). Available online: http://www.statcan.gc.ca/start-debut-eng.html (accessed on 20 September 2010).
  24. Annual Average of Exchange Rates 2002. Available online: http://www.cra-arc.gc.ca/tx/ndvdls/fq/xchng_rt-eng.html (accessed on October 23, 2010)
  25. Lenzen, M. Life cycle energy and greenhouse gas emissions of nuclear energy: A review. Energy Convers. Manag. 2008, 49, 2178-2199.
  26. Pearce, J.M. Thermodynamic limitations to nuclear energy deployment as a greenhouse gas mitigation technology. Int. J. Nucl. Govern. Econ. Ecol. 2008, 2, 113-130.
  27. Mathur, J.; Bansal, N.K.; Wagner, H.-J. Dynamic energy analysis to assess maximum growth rates in developing power generation capacity: Case study of India. Energ. Policy 2004, 32, 281-287.
  28. Gas Resources, Technology and Production Profiles, Chapter 11. World Energy Outlook 2009; International Energy Agency: Paris, France, 2009.

 


Drinking water and sewage treatment use a lot of energy

[ Water treatment (drinking and sewage) uses tremendous amounts of energy. Some of the statistics from this document, “Water & Wastewater Utility Energy Research Roadmap,” excerpted below, are:

  • In 2008 municipal wastewater treatment systems (WWTP) in the United States used approximately 30.2 billion kilowatt hours (kWh) per year, or about 0.8% of total electricity used in the United States.
  • These WWTPs are becoming large energy consumers and they can require approximately 23% of the public energy use of a municipality.
  • About 10-40% of the total energy consumed by wastewater treatment plants is consumed for sludge handling.
  • Desalination consumes 3% of annual electricity consumption in the United States. Future projections estimate this percentage to double to 6% due to higher water demand and more energy intensive treatment processes.
  • A significant percentage of energy input to a water distribution system is lost in pipes due to friction, pressure and flow control valves, and consumer taps.
  • AWWA estimates that about 20% of all potable water produced in the United States never reaches a customer water meter mostly due to loss in the distribution system. When water is lost through leakage, energy and water treatment chemicals are also lost.
  • In California, agricultural groundwater and surface water pumping is responsible for approximately 60% of the total peak day electrical demand related to water supply, particularly the energy consumed within Pacific Gas and Electric’s (PG&E) controlled area. Over 500 megawatts (MW) of electrical demand for water agencies in California is used for providing water and sewer services to customers. The water related electrical consumption for the State of California is approximately 52,000 gigawatt hours (GWh). Electricity use for pumping is approximately 20,278 GWh, which is 8% of the state’s total electricity use. The remainder is consumed at the customer end to heat, pressurize, move, and cool water.

This paper also looks at ways to save energy, and at extraction of nutrients such as phosphorus — a good idea, since phosphate production may peak as soon as 40 years from now.

As global oil production declines and there isn’t enough energy to run civilization as we know it now, hard choices will need to be made.  First in line is agriculture, which consumes about 15 to 20% of energy in the U.S. to plant, harvest, store, distribute, cook, and so on.

Clean water and sewage treatment are just as important as food.  But drought threatens to increase energy requirements.   “The energy intensity of desalination is at least 5 to 7 times the energy intensity of conventional treatment processes”, so even though only 3% of the population is served by desalination, 18% of electricity used in the municipal water industry is for desalination plants.

But making water systems more energy efficient is trivial compared to trying to maintain and replace our aging water infrastructure, which is falling apart.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report ]

CEC. 2016. Water and wastewater utility energy research road map. California Energy Commission.  135 pages.

Excerpts:

ABSTRACT.  Water and wastewater utilities are increasingly looking for innovative and cost effective energy management opportunities to reduce operating costs, mitigate contributions to climate change, and increase the resiliency of their operations. The Water Research Foundation, the California Energy Commission and the New York State Energy Research and Development Authority jointly funded this project to assess the current state-of-knowledge on energy management, concepts and practices at water and wastewater utilities; understand the issues, trends and challenges to implement  energy projects; identify new opportunities to set a direction for future research; and develop a roadmap for energy research that includes a list of prioritized research, development, and demonstration projects on energy management for water and wastewater utilities.

EXECUTIVE SUMMARY.

The water industry faces challenges associated with escalating energy costs due to increased energy consumption and higher energy unit prices. Increased energy consumption is affected by energy-intensive treatment technologies needed to meet more stringent water quality regulations, growing water demand, pumping over longer distances, and climate change (GWRC, 2008). Moreover, the need for desalinated water to augment water supply shortages and the growth of groundwater augmentation is also anticipated (House, 2007). The same study by the Energy Commission estimates the demand for electricity in the water industry to double in the next decade. The water sector has shown only a limited response in implementing improvements that effectively address sustainability issues due to insufficient modernization, the presence of numerous regulatory and economic hurdles, and poor integration of energy issues within the water policy decision-making process (Liner and Stacklin, 2013; Rothausen and Conway, 2011).

Energy Management Opportunities in Wastewater Treatment and Water Reuse. Currently, there are over 15,000 municipal wastewater treatment plants (WWTPs), including 6,000 publicly owned treatment works (POTWs) providing wastewater collection and treatment services to around 78% of the United States’ population (Mo and Zhang, 2013; Spellman, 2013). According to the report published by EPRI and the WRF (Arzbaecher et al., 2013) in 2008 municipal wastewater treatment systems in the United States used approximately 30.2 billion kilowatt hours (kWh) per year, or about 0.8% of total electricity used in the United States. These WWTPs are becoming large energy consumers and they can require approximately 23% of the public energy use of a municipality (Means, 2004). Typical wastewater treatment operations have a total average electrical use of 500 to 4,600 kWh per MG treated, which varies depending on the unit operations and their efficiency (Kang et al., 2010; WEF, 2009; GWRC, 2008; NYSERDA, 2008a). Treatment-process power requirements as high as 6,000 kilowatt hours per million gallons (kWh/MG) are required when membrane bioreactors are used in place of activated sludge or extended aeration (Crawford & Sandino, 2010).

Approximately 2,000 million kWh of electricity are consumed annually by wastewater treatment plants in California (Rajagopalan, 2014). Energy use by these utilities is affected by influent loadings and effluent quality goals, as well as process type, size and age (Spellman, 2013). The majority of energy use occurs in the treatment process, for aeration (44%) and pumping (7%) (WEF, 2009). In major Australian WWTPs, the pumping energy for wastewater facilities ranged from 16 to 62% of the energy used for treatment (Kenway et al., 2008). In New York, the wastewater sector uses approximately 25% more electricity on a per unit basis (1,480 kWh/MG) than the national average (1,200 kWh/MG) due to the widespread use of energy intensive activated sludge, as well as compliance with stringent New York State effluent limits, which often require tertiary or other advanced treatment. Additionally, the predominance of combined (storm water and wastewater) sewer systems at the largest facilities, coupled with significant inflow and infiltration, result in extremely large variations in influent flow rates and loading, making efficient operations difficult (Yonkin et al., 2008).

The greatest potential for net positive energy recovery occurs at larger facilities, which are only a small percentage of the treatment works nationwide, but treat a large percentage of the nation’s wastewater. By achieving energy neutrality and eventually energy positive operations at larger facilities, the energy resources in the majority of domestic wastewater can be captured. This principle guided WERF to prepare a program to conduct the research needed to assist treatment facilities over 10 million gallons per day (MGD) to become energy neutral (Cooper et al., 2011). Energy self-sufficiency has been attained at a wastewater plant in Strass, Austria, where the average power usage is approximately 1,000 kWh/million gallon (MG) treated, which is also the approximate electricity generation from the sludge (Kang et al., 2010). The design employs two stages of aerobic treatment, with innovative controls, where biosolids generated in the two stages are thickened and anaerobically digested, with gas recovery and power generation. The centrate from the dewatering operation is treated in a sequencing batch reactor using the DEamMONification (DEMON) process to reduce the recirculation of nutrients to the head of the plant.

The importance of the scale of a facility in understanding the different strategies that may be implementable for the technology or service options available is pointed out in a recent report (AWE and ACEEE, 2013). It is important that energy management best practices are defined with consideration of specific plant size or treatment process. The largest per unit users of energy are, in fact, small water and wastewater treatment plants that treat less than 1 MGD, as well as those that employ an activated sludge with or without tertiary treatment process.

Wastewater treatment facilities have significant electricity demand during periods of peak utility energy prices. An effective energy load management strategy can help wastewater utilities to significantly reduce their electricity bills. A number of electrical load management opportunities are available to wastewater utilities (Table 2.1), notably by flattening the energy demand curve, particularly during peak pricing periods and by shifting major electrical demand to lower cost tariff blocks (e.g., overnight), for intra–day operations, or from season to season where long- or short-term wastewater or sludge storage is practical (NYSERDA, 2010). Wastewater treatment facilities have the potential to benefit from electric utility demand response (DR) opportunities, programs and tariffs. Although the use of integrated energy load management systems for wastewater utilities is still in its infancy, some wastewater utilities have begun implementing strategies that provide a foundation for participation in demand response programs. Such implementations are thus far limited to control pumping in lift stations of wastewater collection systems in utilities equipped with sufficient storage (Thompson et al., 2008). Wastewater treatment processes may offer other opportunities for shifting wastewater treatment loads from peak electricity demand hours to off-peak hours, as part of Demand Management Programs (DMPs), by modulating aeration, backwash pumps, biosolids thickening, dewatering and anaerobic digestion for maximum operation during offpeak periods. Recently, wastewater utilities, such as the Camden County Municipal Utilities Authority, have developed a computerized process system that shaved the peaks by avoiding simultaneous use of energy-intensive process units, to the maximal extent possible, thereby minimizing the peak charge from the energy provider (Horne and Kricun, 2008). In addition, the East Bay Municipal Utilities District has implemented a load management strategy which stores anaerobic digester gas until it can be used for power generation during peak-demand periods. Another opportunity for shifting electrical loads from on-peak to off-peak hours is over-oxygenating stored wastewater prior to a demand response event, then turning off aerators during peak periods without compromising effluent quality (Thompson et al., 2008). For a wastewater facility to successfully implement demand response programs, advanced technologies that enhance efficiency and control equipment are needed, such as a comprehensive and real-time demand control from centralized computer control systems that can provide an automatic transfer switch to running onsite power generators during peak demand periods, in accordance with air quality requirements (Thompson et al., 2008).

An interesting opportunity for reducing energy use in municipal wastewater treatment is to improve storm water management (Lekov, 2010). The adoption of stormwater treatment only at CSO communities can reduce energy consumption for wastewater treatment systems due to reductions in volume at the treatment plant and reduction in volumes requiring pumping in the combined sewer collection system.

Wastewater utilities are actively working to reduce the energy use of their facilities by increasing efficiency. Energy efficiency is part of the process to reduce energy demand along the path to a net energy neutral wastewater treatment plant. Briefly, wastewater treatment plants can target energy efficiency by replacing or improving their core equipment, through use of variable frequency devices (VFDs), appropriately sized impellers and implementation of energy-saving automation schemes. Efficiency can also be improved at the process level, by implementing low energy treatment alternatives to an activated sludge process or improving process control.

Energy Efficient Equipment. There are numerous types of energy efficient equipment that a wastewater utility can utilize to reduce energy consumption. Common facility-wide plant improvements include upgrade of electric motors and the installation of VFDs in pumps. These modifications can result in substantial energy efficiency because at least 60% of the electrical power fed to a typical wastewater treatment plant is consumed by electric motors (Spellman, 2013). VFDs enable pumps to accommodate fluctuating demand and allow more precise control of processes. VFDs can reduce a pump’s energy use by up to 50% compared to a motor running at constant speed for the same period. Wastewater treatment facilities can also upgrade their heating, cooling, and ventilation systems (HVAC) to improve energy efficiency and reduce energy costs. The latest developments in HVAC equipment can substantially reduce cooling energy use by approximately 30 to 40% and achieve energy efficiency ratios as high as 11.5. The latest air-source heat pumps can reduce heating energy use by about 20 to 35%. Water-source heat pumps also have superior ratings, especially when outside air temperatures drop below 20 degrees Fahrenheit (°F) (15.2 energy efficiency ratio) and can use heat from treated effluent to supply space heating. The Sheboygan Wastewater Treatment Plant reduced its energy consumption by 20% from 2003 solely by implementing energy demand management strategies that targeted efficiency by equipment replacement (e.g., motors, VFDs, blowers, etc.) and scheduling of regular maintenance (Liner and Stacklin, 2013).

Wastewater treatment plants have also recently used advanced sensors and control devices to optimize energy use so that what is supplied meets but does not exceed the actual demand. For example, the adoption of lower dissolved oxygen set-points in the aeration basin can still maintain microbial growth and generate energy savings of 15-20% (Kang et al., 2010). The installation of energy submeters is another important plant improvement that, however, can require high capital investments for a utility. Recent advances in lamps, luminaires, controls, and lighting design provide numerous advantages over traditional lighting systems. Since lighting accounts for 35 to 45% of the energy use of an office building, the installation of high-efficiency alternatives for nearly every plant can dramatically reduce the operational energy bill for the utility. Incentives and rebates are commonly available from electric utilities and other agencies, such as NYSERDA, to support the installation of energy-efficient fixtures and equipment that reduce the financial impact of energy use.

Aeration is the largest energy user in a typical wastewater treatment plant, so the aeration process should be evaluated when implementing energy reduction programs. Installing automatic dissolved oxygen control enables continuous oxygen level monitoring in the wastewater so that aerators can be turned off when the oxygen demand is met. Based on the aeration capacity of the wastewater treatment system and the average wastewater oxygen requirement, automated dissolved oxygen control can be the most cost effective method to optimize aeration energy and can achieve energy savings of 25% to 40% compared to manually controlled systems. In addition to automated control systems, the installation of smaller modular and high efficiency blowers to replace centralized blowers, placing blowers close to the aeration basin to reduce energy losses from friction, and the installation of high efficiency pulsed air mixers are important efficiency measures to be considered.

About 10-40% of the total energy consumed by wastewater treatment plants is consumed for sludge handling. Most of the energy required is due to the shear force applied for dewatering, solids drying, and treatment of high-strength centrate. As an example, in California centrifuges and belt filter presses consume 30,000 kWh/year/MGD and 2,000-6,000 kWh/year/MGD, respectively (Rajagopalan, 2014). Many studies have been conducted on understanding sludge dewatering processes and improving their efficiency. Recent studies by the Energy Commission have focused on improving sludge dewatering to achieve lower energy consumption by using nanoparticulate additives. By implementing this solution at wastewater treatment plants in California, the state would be able to save an additional 10.5 million kWh per year, accounting for the cost of energy, polymer and nanoadditives for sludge dewatering, and sludge disposal.

Another innovation directed toward more energy efficient systems is the use of distributed systems in place of the centralized treatment systems historically favored for their economies of scale. Centralized plants are generally located down gradient in urban areas, permitting gravity wastewater flow to the treatment plant, while the demand for reclaimed wastewater generally lies up gradient. This means higher energy demands for pumping the reclaimed wastewater back to the areas in need. These energy costs can be reduced through the use of smaller distributed treatment plants located directly in water-limited areas.

Processes and technologies already in use at wastewater treatment plants include biogas-powered combined heat and power (CHP), thermal conversion of biosolids, renewable energy sources (e.g., solar arrays and wind turbines), and energy recovery at the head of the wastewater treatment plant and within the treatment process.

Energy recovery from anaerobic digestion with biogas utilization and biosolids incineration with electricity generation is widespread, but there is potential for further deployment. Of the approximately 837 biogas generating facilities in the United States, only 35% generate electricity from biogas and only 9% sell electricity back to the grid (Liner and Stacklin, 2013). The low application rate is partly due to the dominance of small wastewater systems in the United States (less than 5 MGD). It is estimated that anaerobic digestion could produce about 350 kWh of electricity for each million gallons of wastewater treated at the plant and save 628 to 4,940 million kWh annually in the United States (Stillwell et al., 2010). The electricity produced by CHP is reliable and consistent, but the installation requires relatively high one-time capital costs. Research shows that recovery of biogas becomes cost-effective for wastewater treatment plants with treatment capacities of at least 5 MGD (Mo and Zhang, 2013; Stillwell et al., 2010). Various wastewater treatment plants, such as the East Bay Municipal Utility District (Oakland, California) and the Strass WWTP (Austria), became net-positive, energy-generating wastewater plants by powering low-emission gas turbines with biogas from co-digestion processes.

Biosolids incineration with electricity generation is an effective energy recovery option that uses multiple hearth and fluidized bed furnaces.  Both incineration technologies require cleaning of exhaust gases to prevent emissions of odor, particulates, nitrogen oxides, acid gases, hydrocarbons, and heavy metals.

As for biogas-generating electricity, incineration can be used to power a steam cycle power plant, thus producing electricity in medium to large wastewater treatment plants where a high amount of solids is produced.

Disadvantages of incineration are high capital investments, high operating costs, difficult operations, and the need for air emissions control (Stillwell et al., 2010). Despite these disadvantages, biosolids incineration with electricity generation is an innovative approach to managing both water and energy. For example, the Hartford Water Pollution Control Facility in Hartford (Connecticut) is incorporating an energy recovery facility into their furnace upgrade project and they anticipate that biosolids incineration will generate 40% of the plant’s annual electricity consumption (Stillwell et al., 2010).

Wastewater utilities can now strategically replace incineration with advanced energy recovery technologies (MWH Global, 2014). Like incineration, gasification and pyrolysis offer the potential to minimize the waste mass left for ultimate disposal after processing sewage sludge, and they also offer the prospect of greater energy recovery and/or lower operating cost than incineration (MWH Global, 2014). The range of gasification technologies available is large, and at present it is believed that further synergies, such as recovering heat for digester and/or thermal hydrolysis process heating, can be derived from combining digestion or advanced digestion with gasification for advanced energy recovery. Pyrolysis offers further advantages over the gasification options because it produces a better syngas product, favoring more effective gas engine/CHP power generation.

Nutrient recovery from wastewater can offset the environmental loads associated with producing the equivalent amount of fertilizers from fossil fuels (Mo and Zhang, 2013). Various nutrient recovery methods have been applied in wastewater treatment processes and include biosolids land application, urine separation, controlled struvite crystallization and nutrient recovery through aqua-species. Biosolids land application involves spreading biosolids on the soil surface or incorporating or injecting biosolids into the soil. Urine separation involves separation of urine from other wastewater sources for recovery of nutrients. The process is promising in terms of maximizing nutrient recovery from wastewater, because around 70-80% of nitrogen and 50% of phosphorus in domestic wastewater is contained in urine (Maurer et al., 2003).

Although not widely applied, aqua-species such as macroalgae, microalgae, duckweed, crops, and wetland plants can, after utilizing nutrients in wastewater, be harvested and used as fertilizers or animal feeds.

While these individual resource recovery methods have been studied, there is a paucity of peer-reviewed articles focusing on the current status and sustainability of these individual methods as well as their integration at different scales.

Recently, a few research programs have started investigating the potential for nutrient recovery, including carbon, nitrogen and phosphorus from wastewater treatment process. A recent report from WERF with support from the Commonwealth Scientific and Industrial Research Organization (CSIRO), Resource Recovery from Wastewater: A Research Agenda, summarized and defined the future research needs for the resource recovery opportunities in the wastewater sector (Burn et al., 2014).

WERF is developing a tool for the implementation and acceptance of resource recovery technologies at WWTPs, with a major focus on extractive nutrient (phosphorus) recovery technologies that employ greater energy efficiency and offer monetary savings (Latimer, 2014). WERF has prioritized high profile research on P concentration and recovery opportunities during wastewater treatment processes. Polyphosphate-accumulating organisms (PAO) can be responsible for P concentration in cells and direct concentration and precipitation of struvite that can be recovered for niche agricultural markets (Burn et al., 2014). This report implies that nitrogen recovery seems to be a lower priority than carbon (through biogas) or phosphorus recovery, unless combined with other recovery opportunities. N recovery is possible through the use of adsorption/ion-exchange, precipitation and stripping processes.

A $26 million ion-exchange pilot facility in New York that concentrated ammonia from recycle streams (centrate) of anaerobically digested sludge showed that the above-mentioned methods are viable, though not yet as cost effective as the Haber-Bosch process (Burn et al., 2014).

Treated wastewater can be reused for various beneficial purposes to provide ecological benefits, reduce the demand for potable water, and augment water supplies (Mo and Zhang, 2013). Beneficial uses include agricultural and landscape irrigation, toilet flushing, groundwater replenishment, and industrial processes (EPA, 2004). Currently, around 1.7 billion gallons per day of wastewater is reused in the US, and this reuse rate is growing by 15% every year (Mo and Zhang, 2013); Florida and California are the pioneering states for water reuse. The level of wastewater treatment required varies depending on the regulatory standards, the technologies used, and the water quality characteristics. Some of the treatment processes or schemes utilized are able to save energy for the same amount of water delivered.

Although integrated resource recovery is in practice today, particularly at the community level, related studies are rare. In a WWTP in Florida, onsite energy generation, nutrient recycling, and water reuse are combined: CHP is used to generate electricity from the digester gases, biosolids are sold for land application, and part of the treated water is used for agricultural and landscape irrigation. In general, to date very few studies have reviewed integrated energy-nutrient-water recovery in WWTPs, particularly on a national scale (McCarty et al., 2011; Mo and Zhang, 2013; Verstraete et al., 2009), and there are no studies optimizing resource recovery via multiple approaches.

Energy Management Opportunities in Drinking Water and Desalination. Desalination consumes 3% of annual electricity consumption in the United States (Boulos and Bros, 2010; EPA, 2012b; Sanders and Webber, 2012; Arzbaecher et al., 2013). Future projections estimate this percentage to double to 6% due to higher water demand and more energy intensive treatment processes (Chaudhry and Shrier, 2010). Estimates indicate that approximately 90% of the electricity purchased by water utilities, or approximately $10 billion per year, is required for pumping water through the various stages of extraction, treatment, and final distribution to consumers (Bunn, 2011; Skeens et al., 2009). Despite recent energy efficiency progress in pumping systems, there has not been any notable impact on existing energy intensity values. Furthermore, the energy use of drinking water utilities, excluding energy used for water heating by residential and commercial users, contributes significantly to an increasing carbon footprint, with an estimated 45 million tons of greenhouse gases (GHG) emitted annually in the United States.

In California, agricultural groundwater and surface water pumping is responsible for approximately 60% of the total peak day electrical demand related to water supply, particularly the energy consumed within Pacific Gas and Electric’s (PG&E) controlled area. Over 500 megawatts (MW) of electrical demand for water agencies in California is used for providing water and sewer services to customers (House, 2007). The water related electrical consumption for the State of California is approximately 52,000 gigawatt hours (GWh) (House, 2007). Electricity use for pumping is approximately 20,278 GWh, which is 8% of the state’s total electricity use. The remainder is consumed at the customer end to heat, pressurize, move, and cool water.

To address the challenges associated with poorer quality sources and/or reduced supply, water utilities have been exploiting new water supply options such as seawater and saline groundwater, the use of which is growing about 10% each year. These new water sources require two to ten times more energy per unit of water treated than traditional water treatment technologies.

While previous studies have focused on energy requirements for water utilities, there is a lack of studies that estimate peak electric demand and peak use in the water sector (House, 2007). Understanding of peak electrical demand and use is further limited by the lack of water demand profiles that can be compared to electricity use profiles in the water sector. Developing water demand profiles is very difficult, and water use is not monitored as well as electricity use, because water is billed by volume and not by time-of-use (TOU) (House, 2007). Pricing water in a TOU structure is still a complicated task for water utilities, but it has the potential to offer large energy savings.

In many cases, successful water efficiency programs reduce total revenues for water agencies under typical rate structures.

Research is needed to investigate the potential for decoupling investments from revenues in water markets and other financial methods that would make conservation and efficiency programs more attractive and encourage alternative energy supplies. Better valuing of the different qualities and sources of water would also facilitate better choices of water resource applications that take the real cost/value of the supply and quality into consideration.

Energy Efficiency. Estimates indicate that between 10 and 30% cost savings are readily achievable by almost all utilities implementing energy efficiency programs or strategies (Leiby and Burke, 2011). In addition to cost savings, improving efficiency will result in a number of benefits, including the potential to reinvest in new infrastructure or programs and to reduce the pressure on the electrical grid.

Energy efficient processes and new technologies for the water treatment and desalination sector are still at the research stage or under development. For example, NeoTech Aqua Solutions, Inc. has developed a new ultraviolet (UV) disinfection technology (D438) that uses one-tenth of the energy of the lamps required in conventional UV systems of similar flow. The technology demands less electricity and results in a smaller electrical bill, less maintenance, and a smaller overall carbon footprint.

Estimates of energy efficiency in water supply and drinking water systems, associated economics and related guidelines are lacking.

Energy Efficient Operations and Processes

Energy efficiency can be targeted in water supply and distribution system operations as well as in water treatment. Efficient pump scheduling and network optimization are significant contributors to efficiency practices.

A significant percentage of energy input to a water distribution system is lost in pipes due to friction, pressure and flow control valves, and consumer taps (Innovyze, 2013).

The energy intensity (kWh per MG of water treated) of desalination is at least 5 to 7 times the energy intensity of conventional treatment processes, so even though the population served by desalination is only about 3%, we estimate that approximately 18% of the electricity used in the municipal water industry is for desalination plants. Due to their lower energy consumption, RO processes are preferred to thermal treatments for domestic water desalination in the United States.
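A back-of-envelope check of that 18% estimate (our own arithmetic, not the report's method, and it assumes the 3% of population served corresponds to roughly 3% of treated water volume):

```python
# If ~3% of water is produced by desalination at 5-7 times the energy intensity of
# conventional treatment, desalination's share of total treatment electricity is:
desal_fraction = 0.03
for intensity_ratio in (5, 6, 7):
    share = (desal_fraction * intensity_ratio) / (desal_fraction * intensity_ratio
                                                  + (1 - desal_fraction) * 1.0)
    print(f"{intensity_ratio}x intensity -> {share:.0%} of treatment electricity")
# Prints roughly 13%, 16%, and 18%, bracketing the report's ~18% estimate.
```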

In an RO process, costs associated with electricity are 30% of the total cost of desalinated water. Reducing energy consumption is critical for lowering the cost of desalination and addressing environmental concerns about GHG emissions from the continued use of conventional fossil fuels as the primary energy source for seawater desalination plants.

The feed water to the RO unit is pressurized by a high pressure feed pump to force water through the membrane, exceeding the osmotic pressure and overcoming differential pressure losses through the system.

Typically, an energy recovery device (ERD) in combination with a booster pump is used to recover the pressure from the concentrate and reduce the required size of the high pressure pump (Stover, 2007; Jacangelo et al., 2013). A theoretical minimum energy is required to exceed the osmotic pressure and produce desalinated water. As the salinity of the seawater or the feed water recovery increases, the minimum energy required for desalination also increases. For example, the theoretical minimum energy for seawater desalination with 35,000 milligrams per liter (mg/L) of salt and a feed water recovery of 50% is 1.06 kilowatt hours per cubic meter (kWh/m3) (Elimelech and Philip, 2011). The actual energy consumption is larger because real plants do not operate as a reversible thermodynamic process.

Typically, the total energy requirement for seawater desalination using RO (including pre- and post-treatment) is on the order of 3-6 kWh/m3 (Semiat, 2008; Subramani et al., 2011). More than 80% of the total power usage of desalination plants is attributed to the high pressure feed pumps.
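As a rough plausibility check of the 1.06 kWh/m3 figure (this is our own estimate using the ideal-solution van't Hoff approximation, not the calculation in the cited paper), the reversible minimum specific energy at recovery r can be written as pi_feed * ln(1/(1-r)) / r:

```python
# Rough check of the ~1 kWh/m3 thermodynamic minimum for seawater RO at 50% recovery.
# The van't Hoff (ideal solution) osmotic pressure slightly overstates real seawater,
# so this comes out a little above the 1.06 kWh/m3 cited from Elimelech and Philip (2011).
import math

R_GAS = 8.314            # J/(mol K)
T = 298.15               # K (25 degrees C)
salinity_g_per_L = 35.0  # feed salinity, modeled as pure NaCl
molar_mass_nacl = 58.44  # g/mol
ions_per_formula = 2     # Na+ and Cl-

c = salinity_g_per_L / molar_mass_nacl * 1000.0        # mol per m3 of feed
pi_feed = ions_per_formula * c * R_GAS * T             # osmotic pressure in Pa (~30 bar)

recovery = 0.5
sec_min = pi_feed * math.log(1.0 / (1.0 - recovery)) / recovery   # J per m3 of permeate
print(f"~{sec_min / 3.6e6:.2f} kWh per m3 of permeate")           # ~1.1 kWh/m3
```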

The energy consumption associated with filtration systems increases due to fouling by nanoparticles, as reported in a study from the Energy Commission (Rosso and Rajagopalan, 2013). For example, flux analysis of MF membranes with 200 nanometer (nm) pore size showed that particles between 2.5 and 100 nm contributed the most to membrane fouling, more than fouling due to cake formation. Further understanding of the mechanisms of membrane fouling and of pretreatment options with coagulants will offer energy savings opportunities for water and water reclamation utilities.

AWWA estimates that about 20% of all potable water produced in the United States never reaches a customer water meter mostly due to loss in the distribution system. When water is lost through leakage, energy and water treatment chemicals are also lost.

REFERENCES

  • ACEEE (American Council for an Energy Efficient Economy). 2005. A Roadmap to Energy in Water and Wastewater Industry. Report # IE054.
  • Adham, S. 2007. Dewatering Reverse Osmosis Concentrate from Water Reuse Applications Using Forward Osmosis. Water Reuse Research Foundation. WRRF # 05-009-1.
  • Arzbaecher, C., K. Parmenter, R. Ehrhard, and J. Murphy. 2013. Electricity Use and Management in the Municipal Water Supply and Wastewater Industries. Denver, Colo.: Water Research Foundation; Palo Alto, Calif.: Electric Power Research Institute.
  • AWE (Alliance for Water Efficiency) and ACEEE (American Council for an Energy Efficient Economy). 2013. Water Energy Nexus Research Recommendations for Future Opportunities. June, 2013.
  • Badruzzaman, M., C. Cherchi, J. Oppenheimer, C.M. Bros, S. Bunn, M. Gordon, V. Pencheva, C. Jay, I. Darcazallie, and J.G. Jacangelo. 2015. Optimization of Energy and Water Quality Management Systems for Drinking Water Utilities. Denver, Colo.: Water Research Foundation, forthcoming.
  • Bollaci, D. 2013. Challenge Projects on Low Energy Treatment Schemes for Water Reuse, Phase 1. Water Environment Research Foundation
  • Boulos, P.F. and C.M. Bros. 2010. Assessing the carbon footprint of water supply and distribution systems. Journal of American Water Works Association, 102 (11).
  • Brandt, M.J., R.A. Middleton, and S. Wang. 2010. Energy Efficiency in the Water Industry: A compendium of Best Practices and Case studies. UK Water Industry Research. WERF #OWSO9C09.
  • Bunn, S. 2011. Optimizing operations holistically for maximum savings, Proceedings of Annual Conference and Exposition 2010 of the American Water Works Association, June 20–24, 2010, Chicago, IL.
  • Burn, S., T. Muster, A. Kaksonen, and G. Tjandraatmadja. 2014. Resource Recovery from Wastewater: A Research Agenda. Water Environment Research Foundation. Report #NTRY2C13.
  • Cantwell, J.L. 2010a. Energy Efficiency in Value Engineering: Barriers and Pathways. Water Environment Research Foundation. WERF # OWSO6R07a.
  • Cantwell, J.L. 2010b. Overview of State Energy Reduction Programs and Guidelines for the Wastewater Sector. Water Environment Research Foundation. WERF # OWSO6R07b.
  • Carlson, S.W., and A. Walburger. 2007. Energy Index Development for Benchmarking Water and Wastewater Utilities. Denver, Colo.: Water Research Foundation.
  • Cath, T.Y., J.E. Drewes, and C.D. Lundin. 2009. A Novel Hybrid Forward Osmosis Process for Drinking Water Augmentation using Impaired Water and Saline Water Sources. Las Cruces, NM: WERC; Denver, Colo.: Water Research Foundation.
  • Chan, C. 2013. Personal communication. Interview on June 6th, 2013. East Bay Municipal Utility District, Oakland, CA.
  • Chandran, K. [N.d.] Development and Implementation of a Process Technology Toolbox for Sustainable Biological Nutrient Removal using Mainstream Deammonification. Water Environment Research Foundation. WERF # STAR_N2R14. Forthcoming.
  • Chang, Y., D.J. Reardon, P. Kwan, G. Boyd, J. Brandt, K.L. Rakness, and D. Furukawa. 2008. Evaluation of Dynamic Energy Consumption of Advanced Water and Wastewater Treatment Technologies. Denver, Colo.: Water Research Foundation.
  • Chaudhry, S., and C. Shrier. 2010. Energy sustainability in the water sector: Challenges and opportunities. Proceedings of Annual Conference and Exposition 2010 of the American Water Works Association, June 20–24, 2010, Chicago, IL.
  • Cherchi, C., M. Badruzzaman, J. Oppenheimer, C.M. Bros, and J.G. Jacangelo. 2015. Energy and water quality management systems for water utility’s operations: A review. Journal of environmental management, 153, 108-120.
  • Conrad, S. [N.d.]. Water and Electric Utility Integrated Planning. Denver, Colo.: Water Research Foundation. Forthcoming.
  • Conrad, S.A., J. Geisenhoff, T. Brueck, M. Volna, and P. Brink. 2011. Decision Support System for Sustainable Energy Management. Denver, Colo.: Water Research Foundation.
  • Cooley, H., and R. Wilkinson. 2012. Implications of Future Water Supply Sources for Energy Demands. Water Reuse Research Foundation. WRRF # 08-16.
  • Cooper, A., C. Coronella, R. Humphries, A. Kaldate, M. Keleman, S. Kelly, N. Nelson, K. O’Connor, S. Pekarek, J. Smith, and Y. Zuo. 2011. Energy Production and Efficiency Research – The Roadmap to Net-Zero Energy. Water Environment Research Foundation Fact Sheet, 2011.
  • Crawford, G.V. 2011a. Sustainable Energy Optimization Tool- Carbon Heat Energy Assessment Plant Evaluation Tool (CHEApet). Water Environment Research Foundation. Report # OWSO4R07c.
  • Crawford, G.V. 2011b. Demonstration of the Carbon Heat Energy Assessment Plant Evaluation Tool (CHEApet). Water Environment Research Foundation. Report # OWSO4R07g.
  • Crawford, G.V. 2010a. Best Practices for Sustainable Wastewater Treatment: Initial Case Study Incorporating European Experience and Evaluation Tool Concept. Water Environment Research Foundation. Report # OWSO4R07a.
  • Crawford, G.V. 2010b. Technology Roadmap for Sustainable Wastewater Treatment Plants in a Carbon-Constrained World. Water Environment Research Foundation. Report # OWSO4R07d.
  • Crawford, G., and J. Sandino. 2010. Energy Efficiency in Wastewater Treatment in North America. WERF, Alexandria, VA.
  • CRS (Congressional Research Service). 2014. Energy-Water Nexus: The Water Sector’s Energy Use. A Report.
  • Elimelech, M., and W.A. Phillip. 2011. The future of seawater desalination: Energy, technology, and the environment. Science, 333, 712 – 717.
  • El-Shafai, S.A., F.A. El-Gohary, F.A. Nasr, N. Peter van der Steen, and H.J. Gijzen. 2007. Nutrient recovery from domestic wastewater using a UASB-duckweed ponds system. Bioresource Technology, 98, 798-807.
  • Environmental KTN (Environmental Knowledge Transfer Network). 2008. Energy Efficient Water & Wastewater Treatment: Stimulating business innovation and environmental protection through the transfer of knowledge. January, 2008.
  • EPA (U.S. Environmental Protection Agency). 2004. Guidelines for Water Reuse. September, 2004.
  • EPA (U.S. Environmental Protection Agency). 2008. Ensuring a Sustainable Future: An Energy Management Guidebook for Wastewater and Water Utilities.
  • EPA (U.S. Environmental Protection Agency). 2012a. Centers for Water Research on National Priorities Related to a Systems View of Nutrient Management. STAR-H1.
  • EPA (U.S. Environmental Protection Agency). 2012b. National Water Program 2012 Strategy: Response to Climate Change. December 2012.
  • Forrest, A.L., K.P. Fattah, D.S. Mavinic, and F.A. Koch. 2008. Optimizing struvite production for phosphate recovery in WWTP. Journal of Environmental Engineering, 134(5), 395-402.
  • Ghiu, S. 2014. DORIS – Energy Consumption Calculator for Seawater Reverse Osmosis Systems. Denver, Colo.: Water Research Foundation.
  • Griffiths-Sattenspiel, B., and W. Wilson. 2009. The Carbon Footprint of Water. River Network, Portland.
  • GWRC (Global Water Research Coalition). 2008. Water and Energy: Report of the GWRC Research Strategy Workshop.
  • He, C., Z. Liu, and M. Hodgins. 2013. Using Life Cycle Assessment for Quantifying Embedded Water and Energy in a Water Treatment System. Water Research Foundation. WRF #4443.
  • Hernández, E., M.A. Pardo, E. Cabrera, and R. Cobacho. 2010. Energy assessment of water networks, a case study. Proceedings of the Water Distribution System Analysis 2010 Conference – WDSA2010, Tucson, AZ, USA, Sept. 12–15, 2010.
  • Hightower, M., D. Reible, and M. Webber. 2013. Workshop Report: Developing a Research Agenda for the Energy Water Nexus. National Science Foundation. Grant # CBET 1341032.
  • Holt, J.K., H.G. Park, Y.M. Wang, M. Stadermann, A.B. Artyukhin, C.P. Grigoropoulos, A. Noy, and O. Bakajin. 2006. Fast mass transport through sub-2-nanometer carbon nanotubes. Science, 312, 1034 – 1037.
  • Horne, J., and A. Kricun. 2008. Using management systems to reduce energy consumption and energy costs. Proceedings of the Water Environment Federation, 2008(10), 5826-5843.
  • Horne, J., J. Turgeon, and E. Byous. 2011. Energy Self-Assessment Tools and Energy Audits for Water and Wastewater Utilities. A presentation.
  • Horvath, A., and J. Stokes. 2013. Life-cycle energy assessment of alternative water supply systems in California. California Energy Commission. Report # CEC-500-2013-037.
  • House, L. 2007. Water supply-related electricity demand in California. California Energy Commission. Report # CEC-500-2007-114.
  • House, L. 2011. Time-of-use water meter effects on customer water use. California Energy Commission. Report # CEC-500-2011-023.
  • Huxley, D.E., W.D. Bellamy, P. Sathyanarayan, M. Ridens, and J. Mack. 2009. Greenhouse Gas Emission Inventory and Management Strategy Guidelines for Water Utilities. Denver, Colo.: Water Research Foundation.
  • Innovyze, Inc. 2013. Available online at: www.innovyze.com
  • Jacangelo, J., A. Subramani, J. Oppenheimer, M. Badruzzaman. 2013. Renewable Energy Technologies and Energy Efficiency Strategies (Guidebook for Desalination and Water Reuse). Water Reuse Research Foundation. WateReuse-08-13.
  • Jacobs, J., T.A. Kerestes, and W.F. Riddle. 2003. Best Practices for Energy Management. Denver, Colo.: AwwaRF.
  • Jentgen, L.A., S. Conrad, H. Kidder, M. Barnett, T. Lee, and J. Woolschlager. 2005. Optimizing Operations at JEA’s Water System. Denver, Colo.: AwwaRF.
  • Jentgen, L.A., S. Conrad, R. Riddle, E.V. Sacken, K. Stone, W. Grayman, and S. Ranade. 2003. Implementing a Prototype Energy and Water Quality Management System. Denver, Colo.: AwwaRF.
  • Jimenez, J. [N.d.] Advancing Anaerobic Wastewater and Solids Treatment Processes. Water Environment Research Foundation. WERF # ENER5R12. Forthcoming.
  • Johnson Foundation. 2013. Building Resilient Utilities: How Water and Electric Utilities Can Co-Create Their Futures. Report.
  • Jolly, M., and J. Gillard. 2009. The economics of advanced digestion. In Proceedings of the 14th European Biosolids and Organic Resources Conference and Exhibition—9th–11th November (Vol. 2009).
  • Kärkkäinen, S., and J. Ikäheimo. 2009. Integration of demand side management with variable output DG. Proceedings of the 10th IAEE European conference, September, 2009, Vienna, 7–10.
  • Kang, S.J., K.P. Olmstead, and T. Allbaugh. 2010. A Roadmap to Energy Self-Sufficiency for U.S. Wastewater Treatment Plants. In the Proceedings of the Water Environment Federation Technical Exhibition and Conference (WEFTEC), 2010.
  • Kenway, S.J., A. Priestley, S. Cook, S. Seo, M. Inman, A. Gregory, and M. Hall. 2008. Energy Use in the Provision and Consumption of Urban Water in Australia and New Zealand. Commonwealth Scientific and Industrial Research Organisation, Victoria, Australia; & Water Services Association of Australia, Melbourne, Australia. Available online at: www.clw.csiro.au/publications/waterforahealthycountry/2008/wfhc-urban-waterenergy.pdf.
  • Kilian, R.E. [N.d.]. Co-digestion of Organic Waste – Addressing Operational Side Effects. Water Environment Research Foundation. WERF # ENER9C13. Forthcoming.
  • Kim, Y.J., and J.H. Choi. 2010. Enhanced desalination efficiency in capacitive deionization with an ion-selective membrane. Separation and Purification Technology, 71, 70 – 75.
  • Knapp, J., and G. MacDonald. [N.d.] “Energy Recovery from Pressure Reducing Valve Stations Using In-Line Hydrokinetic Turbines.” Denver, Colo.: Water Research Foundation. Forthcoming.
  • Latimer, R. 2014. Towards a renewable future: assessing resource recovery as a viable treatment alternative. Water Environment Research Foundation. Report #NTRY1R12.
  • Lawson, R., R. Sandra, G. Shreeve, and A. Tucker. 2013. Links and Benefits of Water and Energy Efficiency Joint Learning. Denver, Colo.: Water Research Foundation.
  • Leiby, V., and M.E. Burke. 2011. Energy Efficiency in the North American Water Supply Industry: A Compendium of Best Practices and Case Studies. Denver, Colo.: Water Research Foundation.
  • Lekov, A. 2010. Opportunities for Energy Efficiency and Open Automated Demand Response in Wastewater Treatment Facilities in California – Phase I Report. A Report by the Lawrence Berkeley National Laboratory.
  • Li, B. 2011. Electricity Generation from Anaerobic Wastewater Treatment in Microbial Fuel Cell. Water Environment Research Foundation. WERF # OWSO8C09.
  • Liner, B., and C. Stacklin. 2013. Driving Water and Wastewater Utilities to More Sustainable Energy Management. ASME 2013 Power Conference. American Society of Mechanical Engineers, 2013.
  • Lisk, B., E. Greenberg, and F. Bloetscher. 2013. Case Studies: Implementing Renewable Energy at Water Utilities. Denver, Colo.: Water Research Foundation.
  • Lorand, R.T. 2013. Green Energy Life Cycle Assessment Tool Version 2. Denver, Colo.: Water Research Foundation.
  • Maurer, M., P. Schwegler, and T.A. Larsen. 2003. Nutrients in urine: energetic aspects of removal and recovery. Water Science & Technology, 48(1), 37-46.
  • McCarty, P.L., J. Bae, and J. Kim. 2011. Domestic wastewater treatment as a net energy producer–can this be achieved? Environmental science & technology, 45(17), 7100-7106.
  • McCutcheon, J., R.L. McGinnis, and M. Elimelech. 2005. A novel ammonia-carbon dioxide forward (direct) osmosis desalination process. Desalination, 174, 1 – 11.
  • McGuckin, R., J. Oppenheimer, M. Badruzzaman, A. Contreras, and J.G. Jacangelo. 2013. Toolbox for Water Utility Energy and Greenhouse Gas Emission Management: An International Review. Denver, Colo.: Water Research Foundation.
  • Means, E. 2004. Water and Wastewater Industry Energy Efficiency: A Research Roadmap. Denver, Colo.: AwwaRF.
  • Mo, W., and Q. Zhang. 2013. Energy–nutrients–water nexus: Integrated resource recovery in municipal wastewater treatment plants. Journal of Environmental Management, 127, 255-267.
  • Monteith, H.D. 2008. State-of-the-Science Energy and Resource Recovery from Sludge. Water Environment Research Foundation. WERF # OWSO3R07.
  • Monteith, H.D. 2011. Life Cycle Assessment Manager for Energy Recovery (LCAMER). Water Environment Research Foundation. Report # OWSO4R07h/f.
  • MWH Global. 2007. Assessment of Energy Recovery Devices for Seawater Desalination. West Basin Ocean Water Desalination Demonstration Facility, Redondo Beach, CA.
  • MWH Global. 2014. The burning question on energy recovery. February 2014. Available at: wwtonline.co.uk.
  • Nerenberg, R., J. Boltz, G. Pizzarro, M. Aybar, K. Martin, and L. Downing. 2013. Challenge Projects on Low Energy Treatment Schemes for Water Reuse, Phase 1. Water Reuse Research Foundation. WateReuse-10-06C, University of Notre Dame.
  • Nikkel, C., E. Marchand, A. Achilli, and A. Childress. 2013. Challenge Projects on Low Energy Treatment Schemes for Water Reuse, Phase 1. Water Reuse Research Foundation WateReuse-10-06B, University of Nevada, Reno.
  • NYSERDA (New York State Energy Research and Development Authority). 2004. Energy Efficiency in Municipal Wastewater Treatment Plants: Technology Assessment. Albany, N.Y.
  • NYSERDA (New York State Energy Research and Development Authority). 2008a. Statewide Assessment of Energy Use by the Municipal Water and Wastewater Sector. NYSERDA. Albany, N.Y.
  • NYSERDA (New York State Energy Research and Development Authority). 2008b. Energy and the Municipal Water and Wastewater Treatment Sector. A presentation for the Genesee/Finger Lakes Regional Planning Council. May 9, 2008.
  • NYSERDA (New York State Energy Research and Development Authority). 2010. Water & wastewater energy management best practices handbook. NYSERDA. Available online at: http://www.nyserda.ny.gov.
  • Papa, F., D. Radulj, B. Karney, and M. Robertson. 2013. Pump Energy Efficiency Field Testing & Benchmarking in Canada. International Conference on Asset Management for Enhancing Energy Efficiency in Water and Wastewater Systems, International Water Association, Marbella, Spain.
  • Parry, D.L. 2014. Co-digestion of Organic Waste Products with Wastewater Solids: Final Report with Economic Model. Water Environment Research Foundation. WERF # OWSO5R07.
  • PLMA (Peak Load Management Alliance). 2002. Demand Response: Principles for regulatory guidance. Jupiter, Fla.: Peak Load Management Alliance, Report.
  • Rajagopalan, G. 2014. The use of novel nanoscale materials for sludge dewatering. California Energy Commission. Report # CEC-500-2014-081.
  • Raucher, R.S., J.E. Cromwell, K. Cooney, P. Thompson, L. Sullivan, B. Carrico, and M. MacPhee. 2008. Risks and Benefits of Energy Management for Drinking Water Utilities. Denver, Colo.: AwwaRF.
  • Reardon, D. [N.d.] Striking the Balance between Nutrient removal in Wastewater Treatment and Sustainability. Water Environment Research Foundation. WERF # NUTR1R06n. Forthcoming.
  • Rosso, D. 2014. Framework for Energy Neutral Treatment for the 21st Century through Energy Efficient Aeration. Water Environment Research Foundation. WERF # INFR2R12.
  • Rosso, D., L.E. Larson, and M.K. Stenstrom. 2010a. Aeration of large-scale municipal wastewater treatment plants: state of the art. CEC-500-2009-076-APF.
  • Rosso, D., S.-Y. Leu, P. Jiang, L.E. Larson, R. Sung, and M.K. Stenstrom. 2010b. Aeration Efficiency Monitoring with Real-Time Off-Gas Analysis. CEC-500-2009-076-APF.
  • Rosso, D., and G. Rajagopalan. 2013. Energy reduction in membrane filtration process through optimization of nanosuspended particle removal. California Energy Commission. Report # CEC-500-2013-132.
  • Rothausen, S.G., and D. Conway. 2011. Greenhouse-gas emissions from energy use in the water sector. Nature Climate Change, 1(4), 210-219.
  • Salveson, A.T. 2013. Challenge Projects on Low Energy Treatment Schemes for Water Reuse, Phase 1. Water Reuse Research Foundation. WateReuse-10-06A, Carollo Engineers.
  • Salveson, A. [N.d.] Evaluation of Innovative Reflectance Based UV for Enhanced Disinfection and Advanced Oxidation. Denver, Colo.: Water Research Foundation. Forthcoming.
  • Sanders, K.T., and M.E. Webber. 2012. Evaluating the energy consumed for water use in the United States. Environmental Research Letters, 7(3), 034034.
  • Sandino, J. 2010. Evaluation of Processes to Reduce Activated Sludge Solids Generation and Disposal. Water Environment Research Foundation. WERF # 05-CTS-3.
  • Seacord, T., J. MacHarg, and S. Coker. (2006). Affordable Desalination Collaboration 2005 Results. Proceedings of the American Membrane Technology Association Conference in Stuart, FL, USA, July 2006.
  • Semiat, R. 2008. Energy issues in desalination processes. Environmental Science and Technology, 42, 8193 – 8201.
  • Senon, C., M. Badruzzaman, A. Contreras, J. Adidjaja, M.S. Allen, and J.G. Jacangelo. [N.d.] Drinking Water Pump Station Design and Operation for Energy Efficiency. Denver, Colo.: Water Research Foundation. Forthcoming.
  • Skeens, B., W. Wood, and N. Spivey. 2009. Water production and distribution real–time energy management. Proceedings of the Awwa DSS Conference, Reno, NV, USA, September.
  • Skerlos, S.J., L. Raskin, N.G. Love, A.L. Smith, L.B. Stadler, and L. Cao. 2013. Challenge Projects on Low Energy Treatment Schemes for Water Reuse, Phase 1. Water Reuse Research Foundation. WateReuse-10-06D, University of Michigan.
  • Spellman, F.R. 2013. Water & Wastewater Infrastructure: Energy Efficiency and Sustainability. CRC Press, Boca Raton, FL.
  • Stillwell, A.S., D.C. Hoppock, and M.E. Webber. 2010. Energy recovery from wastewater treatment plants in the United States: a case study of the energy-water nexus. Sustainability, 2(4), 945-962.
  • Stover, R. 2007. Seawater reverse osmosis with isobaric energy recovery devices. Desalination, 203, 168 – 175.
  • Stover, R., and N. Efraty 2011. Record low energy consumption with Closed Circuit Desalination. International Desalination Association (IDA) World Congress – Perth Convention and Exhibition Center (PCEC), Perth, Western Australia, September 4 – 9, 2011.
  • Subramani, A., M. Badruzzaman, J. Oppenheimer, and J.G. Jacangelo. 2011. Energy minimization strategies and renewable energy utilization for desalination: A review. Water Research, 45, 1907 – 1920.
  • Sui, H., B.G. Han, J.K. Lee, P. Walian, and B.K. Jap. 2001. Structural basis of water specific transport through the AQP1 water channel. Nature, 414, 872 – 878.
  • Tarallo, S. 2014. Utilities of the Future Energy Findings. Water Environment Research Foundation. WERF # ENER6C13.
  • Tarallo, S. [N.d.] Energy Balance and Reduction Opportunities, Case Studies of Energy-Neutral Wastewater Facilities and Triple Bottom Line (TBL) Research Planning Support. Water Environment Research Foundation. WERF # ENER1C12. Forthcoming.
  • Thompson, L., K. Song, A. Lekov, and A. McKane. 2008. Automated Demand Response Opportunities in Wastewater Treatment Facilities. Ernest Orlando Lawrence Berkeley National Laboratory. November 2008.
  • Toffey, B. 2010. Beyond Zero Net Energy: Case Studies of Wastewater Treatment for Energy and Resource Production. AWRA-PMAS Meeting. September 16, 2010.
  • Van Horne, M. [N.d.] Developing Solutions to Operational Side Effects Associated with Co-digestion of High Strength Organic Wastes. Water Environment Research Foundation. WERF # ENER8R13. Forthcoming.
  • Van Paassen, J., W. Van der Meer, and J. Post. 2005. Optiflux: from innovation to realization. Desalination 178, 325-331.
  • Veerapaneni, S., B. Jordan, G. Leitner, S. Freeman, and J. Madhavan. 2005. Optimization of RO desalination process energy consumption. International Desalination Association World Congress, Singapore.
  • Veerapaneni, S.V., B. Klayman, S. Wang, and R. Bond. 2011. Desalination Facility Design and Operation for Maximum Energy Efficiency. Denver, Colo.: Water Research Foundation.
  • Verstraete, W., P. Van de Caveye, and V. Diamantis. 2009. Maximum use of resources present in domestic “used water”. Bioresource technology, 100(23), 5537-5545.
  • Von Meier, A. 1999. Occupational cultures as a challenge to technological innovation. Engineering Management, IEEE Transactions on, 46(1), 101-114.
  • VWEA (Virginia Water Environment Association). 2013. WERF: Research on Sustainable and ReNEW-able Resources – Nutrients, Energy, Water. A Presentation for the VWEA Education Committee, 2013 Annual Seminar. Richmond, VA.
  • Wallis, M.J, M.R. Ambrose, C., and C. Chan. 2008. Climate change: Charting a water course in an uncertain future. Journal of American Water Works Association, 100 (6).
  • WEF (Water Environment Federation). 2009. Energy Conservation in Water and Wastewater Facilities – MOP 32 (WEF Manual of Practice). McGraw-Hill Professional, 2009.
  • WEF (Water Environment Federation). 2012. Energy Roadmap: Driving Water and Wastewater Utilities to More Sustainable Energy Management. October, 2012.
  • Welgemoed, T.J. 2005. Capacitive Deionization Technology: Development and evaluation of an industrial prototype system. University of Pretoria, 2005, Dissertation.
  • Wiesner, M. 2013. Direct Contact Membrane Distillation for Water Reuse Using Nanostructured Ceramic Membranes. Water Reuse Research Foundation. WRRF # 07-05-1.
  • Wilcoxson, D., and M. Badruzzaman. 2013. Optimization of wastewater lift stations for reduction of energy usage and greenhouse gas emissions. Water Environment Research Foundation. Report # INFR3R11.
  • Wilf, M., L. Awerbuch, and C. Bartels. 2007. The Guidebook to Membrane Desalination Technology: Reverse Osmosis, Nanofiltration and Hybrid Systems Process, Design, Applications and Economics. Balaban Desalination Publications.
  • Wilf, M., and C. Bartels. 2005. Optimization of seawater RO systems design. Desalination, 173, 1 – 12.
  • Wilf, M., and J. Hudkins. 2010. Energy Efficient Configuration of RO Desalination Units. Proceedings of the Water Environment Federation Membrane Applications Conference, Anaheim, California.
  • Willis, J.L. 2011. Combined Heat and Power System Evaluation Tool Instruction Manual. Water Environment Research Foundation. Report # U2R08b.
  • Willis, J., L. Stone, K. Durden, N. Beecher, C. Hemenway, and R. Greenwood. 2012. Barriers to Biogas Use for Renewable Energy. Water Environment Research Foundation. Report # OWSO11C10.
  • Willis, J.L. [N.d.] Identification of Barriers to Energy Efficiency and Resource Recovery at WRRF’s and Solutions to Promote These Practices. Water Environment Research Foundation. WERF # ENER7C13. Forthcoming.
  • Yonkin, M., K. Clubine, and K. O’Connor. 2008. Importance of Energy Efficiency to the Water and Wastewater Sector. Clearwaters.

 

 
