Blackouts, firestorms, and energy use

Preface. Blackouts are increasingly likely in the future, driven by wildfires, hurricanes, natural gas shortages, and other causes. Below is an account from a friend who had to evacuate due to a wildfire.

Blackouts in the news:

2024: Half a million Victorian customers without power as the Loy Yang A coal-fired station shuts down and storms damage infrastructure

2021: Texas Was Seconds Away From Going Dark for Months

Alice Friedemann, www.energyskeptic.com, Women in Ecology, author of 2021 “Life After Fossil Fuels: A Reality Check on Alternative Energy”; 2015 “When Trucks Stop Running: Energy and the Future of Transportation”; “Barriers to Making Algal Biofuels”; and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity

***

This is a letter from a friend about his experiences when PG&E cut power to him (and 2.5 million others).

Last Saturday around 2 pm we received notice that our area was under an evacuation warning owing to the huge Kincade fire that erupted on Wednesday evening (which we watched in terror and awe from our front porch). At 6:30 pm the order became mandatory. In the end, nearly 200,000 people, or about a third of the population of Sonoma County, were evacuated.

This was our first experience having to plan and prepare to leave on a moment’s notice. We found refuge with a friend in San Francisco, where we stayed until the order was downgraded to a warning on the following Tuesday. The experience highlighted a number of lessons for us.

First and foremost, do not ever evacuate without taking your dog’s favorite toy with you. This oversight necessitated a trip to a pet store to find the item in question. Having a dog certainly helped us keep focused and calmer, although I know she sensed that we were quite out of sorts for days.

Second, we discovered that fuel disappears quickly. We went out 15 minutes after the initial warning was issued, and the closest gasoline station already had 7 of 8 pumps taped closed. The second station had fuel, but long lines coming in from each direction. Of course, once the power went off, there was no fuel to be had at all.

Third, having PV was useless. Although Sonoma County is one of the most heavily PV’d counties in the state, nearly all is grid-tied and thus rendered inoperable in a blackout. And EV owners were out of luck and had to head to SF or the central valley to find electricity.

Fourth, it completely reinforced my understanding that “you can’t do just one thing”. Our power utility (PG&E) in October started implementing what they termed PSPS or “Public Safety Power Shutoffs,” or plainly, power blackouts, to avoid sparking additional fires if the high winds (gusts did reach up to 103 mph on Sunday) blew trees into energized lines. But after the power went off Saturday night to nearly 2.5 million people, it started a cascading series of failures of complex systems. The county’s largest cable and internet provider failed, and even the copper-wire landline went dead (we keep a landline because of frequent winter blackouts); neither was restored until a day after the power returned 5 days later. This led to a huge range of consequences, including the near-complete shutdown of commerce, and mundane problems such as repair shops unable to release vehicles to owners because state law requires an invoice, and the invoicing system is cloud-based. We also discovered that the battery backup on our garage door (now state-mandated for new houses), which we installed after 5 people died in the 2017 fires because they could not get out of their garages, had failed: it requires a trickle charge and goes offline after 2 days without power. And most critically, in my region of the county, nearly everyone relies on a well, so without power, there is no water. Fortunately, we didn’t lose any crops on our drip irrigation, though some were quite stressed from lack of water.

Fifth, evacuations and firefighting are very energy intensive. With 200,000 people leaving the county, that probably involved 75,000 or so cars, trucks, and RVs on the road, and people headed north to Eureka, inland to Sacramento, and south to San Jose. CalFire deployed 10 Super Huey helicopters, 445 fire engines, 41 dozers, and 64 water tenders in addition to the airtankers and the Global Supertanker, a modified 747 with retardant tanks. Air and ground assistance came from as far away as Montana. We saw fire trucks from Fullerton and Santa Barbara in southern California and some from Oregon, all of which drove to the fire zone. To then turn the power back on, PG&E had to deploy over 600 trucks and numerous helicopters to inspect every mile of every distribution line in the county for damage.

Without even speculating on what this means to the viability of living in California, it hardened my belief that folks are completely delusional in their efforts to design “resilient and sustainable cities” with programs that rely heavily on cloud-based sensors reporting traffic, home appliance usage, and requiring big-data crunching to work. I know I’m going to be even more of a gadfly at meetings where this comes up in the future. It just won’t work.

***

When PG&E told us the power would be out for two days, here are a few things we did. Freeze as many water bottles as possible; stores sell out of ice more than a day ahead of time. Put some frozen bottles into a cooler along with all of the ice in the ice maker, or it will melt and flood the floor. Add the refrigerator food for the next 2 days to the cooler so that you never have to open the refrigerator door. Battery lanterns are better than candles. Charge laptops, phones, Kindles, and a battery pack to recharge them. Be sure to have matches to light the natural gas burners on the stove. I bet those of you who get hurricanes could add a lot to this list of how to cope!


Book Review of Richard Heinberg’s 2011 “The End of Growth”

Preface. This is not a book review really, it’s more a few of my kindle notes. Heinberg writes so well, so clearly, that I am sure history will remember him as the most profound and wide-ranging expert on energy and ecological overshoot. Just a few of the topics in this book include:

  • The depletion of important resources including fossil fuels and minerals
  • The proliferation of environmental impacts arising from both the extraction and use of resources (including the burning of fossil fuels)—leading to snowballing costs from both these impacts themselves and from efforts to avert them and clean them up
  • Financial disruptions due to the inability of our existing monetary, banking, and investment systems to adjust to both resource scarcity and soaring environmental costs—and their inability (in the context of a shrinking economy) to service the enormous piles of government and private debt that have been generated over the past couple of decades.

As always, I noted only what interested me.  So much is left out, so do buy this book!  And not just for yourself — I write a lot about why the electric grid will eventually come down for good in both of my Springer Books, so buy it for your grandchildren to preserve knowledge and so that future generations will understand why collapse happened.


***

Richard Heinberg. 2011. The End of Growth: Adapting to Our New Economic Reality.  New Society Publishers.

The Deepwater Horizon incident also illustrates to some degree the knock-on effects of depletion and environmental damage upon financial institutions. Insurance companies have been forced to raise premiums on deepwater drilling operations, and impacts to regional fisheries have hit the Gulf Coast economy hard.

Compensation payments forced the company to reorganize and resulted in lower stock values and returns to investors. BP’s financial woes in turn impacted British pension funds that were invested in the company. This is just one event—admittedly a spectacular one. If it were an isolated problem, the economy could recover and move on. But we are, and will be, seeing a cavalcade of environmental and economic disasters, not obviously related to one another, that will stymie economic growth in more and more ways. These will include but are not limited to:

  • Climate change leading to regional droughts, floods, and even famines;
  • Shortages of water and energy; and
  • Waves of bank failures, company bankruptcies, and house foreclosures.

Each will be typically treated as a special case, a problem to be solved so that we can get “back to normal.” But in the final analysis, they are all related, in that they are consequences of growing human population striving for higher per-capita consumption of limited resources (including non-renewable, climate-altering fossil fuels), all on a finite and fragile planet.

The result: we are seeing a perfect storm of converging crises that together represent a watershed moment in the history of our species. We are witnesses to, and participants in, the transition from decades of economic growth to decades of economic contraction.

We are adding about 70 million new “consumers” each year. That makes further growth even more crucial: if the economy stagnates, there will be fewer goods and services per capita to go around.

We harnessed the energies of coal, oil, and natural gas to build and operate cars, trucks, highways, airports, airplanes, and electric grids—all the essential features of modern industrial society. Through the one-time-only process of extracting and burning hundreds of millions of years’ worth of chemically stored sunlight, we built what appeared (for a brief, shining moment) to be a perpetual-growth machine. We learned to take what was in fact an extraordinary situation for granted. It became normal.

But as the era of cheap, abundant fossil fuels comes to an end, our assumptions about continued expansion are being shaken to their core. The end of growth is a very big deal indeed. It means the end of an era, and of our current ways of organizing economies, politics, and daily life. Without growth, we will have to virtually reinvent human life on Earth.

World leaders, if they are deluded about our actual situation, are likely to delay putting in place the support services that can make life in a non-growing economy survivable, and they will almost certainly fail to make needed, fundamental changes to monetary, financial, food, and transport systems. As a result, what could have been a painful but endurable process of adaptation could become history’s greatest tragedy. We can survive the end of growth, but only if we recognize it for what it is and act accordingly.

As early as 1998, petroleum geologists Colin Campbell and Jean Laherrère were discussing a Peak Oil impact scenario that went like this. Sometime around the year 2010, they theorized, stagnant or falling oil supplies would lead to soaring and more volatile petroleum prices, which would precipitate a global economic crash. This rapid economic contraction would in turn lead to sharply curtailed energy demand, so oil prices would then fall; but as soon as the economy regained strength, demand for oil would recover, prices would again soar, and as a result of that the economy would relapse. This cycle would continue, with each recovery phase being shorter and weaker, and each crash deeper and harder, until the economy was in ruins. Financial systems based on the assumption of continued growth would implode, causing more social havoc than the oil price spikes would themselves generate.

Meanwhile, volatile oil prices would frustrate investments in energy alternatives: one year, oil would be so expensive that almost any other energy source would look cheap by comparison; the next year, the price of oil would have fallen far enough that energy users would be flocking back to it, with investments in other energy sources looking foolish. But low oil prices would discourage exploration for more petroleum, leading to even worse fuel shortages later on. Investment capital would be in short supply in any case because the banks would be insolvent due to the crash, and governments would be broke due to declining tax revenues. Meanwhile, international competition for dwindling oil supplies might lead to wars between petroleum importing nations, between importers and exporters, and between rival factions within exporting nations.

But what happened next riveted the world’s attention to such a degree that the oil price spike was all but forgotten: in September 2008, the global financial system nearly collapsed. The reasons for this sudden, gripping crisis apparently had to do with housing bubbles, lack of proper regulation of the banking industry, and the over-use of bizarre financial products that almost nobody understood. However, the oil price spike had played a critical (if largely overlooked) role in initiating the economic meltdown

In the immediate aftermath of that global financial near-death experience, both the Peak Oil impact scenario proposed a decade earlier and the Limits to Growth standard-run scenario of 1972 seemed to be confirmed with uncanny and frightening accuracy. Global trade was falling. The world’s largest auto companies were on life support. The U.S. airline industry had shrunk by almost a quarter. Food riots were erupting in poor nations around the world. Lingering wars in Iraq (the nation with the world’s second-largest crude oil reserves) and Afghanistan (the site of disputed oil and gas pipeline projects) continued to bleed the coffers of the world’s foremost oil-importing nation.

Meanwhile, the debate about what to do to rein in global climate change exemplified the political inertia that had kept the world on track for calamity since the early ’70s. It had by now become obvious to nearly every person of modest education and intellect that the world has two urgent, incontrovertible reasons to rapidly end its reliance on fossil fuels: the twin threats of climate catastrophe and impending constraints to fuel supplies. Yet at the Copenhagen climate conference in December, 2009, the priorities of the most fuel-dependent nations were clear: carbon emissions should be cut, and fossil fuel dependency reduced, but only if doing so does not threaten economic growth.

We must convince ourselves that life in a non-growing economy can be fulfilling, interesting, and secure. The absence of growth does not necessarily imply a lack of change or improvement. Within a non-growing or equilibrium economy there can still be continuous development of practical skills, artistic expression, and certain kinds of technology. In fact, some historians and social scientists argue that life in an equilibrium economy can be superior to life in a fast-growing economy: while growth creates opportunities for some, it also typically intensifies competition—there are big winners and big losers, and (as in most boom towns) the quality of relations within the community can suffer as a result. Within a non-growing economy it is possible to maximize benefits and reduce factors leading to decay, but doing so will require pursuing appropriate goals: instead of more, we must strive for better; rather than promoting increased economic activity for its own sake, we must emphasize whatever increases quality of life without stoking consumption. One way to do this is to reinvent and redefine growth itself.

 “Classical” economic philosophers such as Adam Smith (1723–1790), Thomas Robert Malthus (1766–1834), and David Ricardo (1772–1823) introduced basic concepts such as supply and demand, division of labor, and the balance of international trade.

These pioneers set out to discover natural laws in the day-to-day workings of economies. They were striving, that is, to make of economics a science. They admired the ability of physicists, biologists, and astronomers to demonstrate the fallacy of old church doctrines, and to establish new universal “laws” by means of inquiry and experiment.

Economic philosophers, for their part, could point to price as arbiter of supply and demand, acting everywhere to allocate resources far more effectively than any human manager or bureaucrat could ever possibly do—surely this was a principle as universal and impersonal as the force of gravitation!

The classical theorists gradually adopted the math and some of the terminology of science. Unfortunately, however, they were unable to incorporate into economics the basic self-correcting methodology that is science’s defining characteristic.

Economic theory required no falsifiable hypotheses and demanded no repeatable controlled experiments. Economists began to think of themselves as scientists, while in fact their discipline remained a branch of moral philosophy—as it largely does to this day.

Importantly, these early philosophers had some inkling of natural limits and anticipated an eventual end to economic growth. The essential ingredients of the economy were understood to consist of labor, land, and capital. There was on Earth only so much land (which in these theorists’ minds stood for all natural resources), so of course at some point the expansion of the economy would cease. Both Malthus and Smith explicitly held this view. A somewhat later economic philosopher, John Stuart Mill (1806-1873), put the matter as follows: “It must always have been seen, more or less distinctly, by political economists, that the increase in wealth is not boundless: that at the end of what they term the progressive state lies the stationary state…”

But starting with Adam Smith, the idea that continuous “improvement” in the human condition was possible came to be generally accepted.

A key to this transformation was the gradual deletion by economists of land from the theoretical primary ingredients of the economy (increasingly, only labor and capital really mattered—land having been demoted to a sub-category of capital). This was one of the refinements that turned classical economic theory into neoclassical economics; others included the theories of utility maximization and rational choice.

While this shift began in the 19th century, it reached its fruition in the 20th through the work of economists who explored models of imperfect competition, and theories of market forms and industrial organization, while emphasizing tools such as the marginal revenue curve (this is when economics came to be known as “the dismal science”—partly because its terminology was, perhaps intentionally, increasingly mind-numbing). Meanwhile, however, the most influential economist of the 19th century, a philosopher named Karl Marx, had thrown a metaphorical bomb through the window of the house that Adam Smith had built. In his most important book, Das Kapital, Marx proposed a name for the economic system that had evolved since the Middle Ages: capitalism. It was a system founded on capital. Many people assume that capital is simply another word for money, but that entirely misses the essential point: capital is wealth—money, land, buildings, or machinery—that has been set aside for production of more wealth. If you use your entire weekly paycheck for rent, groceries, and other necessities, you may have money but no capital. But even if you are deeply in debt, if you own stocks or bonds, or a computer that you use for a home-based business, you have capital. Capitalism, as Marx defined it, is a system in which productive wealth is privately owned. Communism (which Marx proposed as an alternative) is one in which productive wealth is owned by the community, or by the nation on behalf of the people. In any case, Marx said, capital tends to grow.  

Marx also wrote that capitalism is inherently unsustainable, in that when the workers become sufficiently impoverished by the capitalists, they will rise up and overthrow their bosses and establish a communist state (or, eventually, a stateless workers’ paradise). The ruthless capitalism of the 19th century resulted in booms and busts, and a great increase in inequality of wealth—and therefore an increase in social unrest. With the depression of 1893 and the crash of 1907, and finally the Great Depression of the 1930s, it appeared to many social commentators of the time that capitalism was indeed failing, and that Marx-inspired uprisings were inevitable; the Bolshevik revolt in 1917 served as a stark confirmation of those hopes or fears (depending on one’s point of view).

The next few decades saw a three-way contest between the Keynesian social liberals, the followers of Marx, and temporarily marginalized neoclassical or neoliberal economists who insisted that social reforms and Keynesian meddling by government with interest rates, spending, and borrowing merely impeded the ultimate efficiency of the free Market.

With the fall of the Soviet Union at the end of the 1980s, Marxism ceased to have much of a credible voice in economics. Its virtual disappearance from the discussion created space for the rapid rise of the neoliberals, who for some time had been drawing energy from widespread reactions against the repression and inefficiencies of state-run economies. Margaret Thatcher and Ronald Reagan both relied heavily on advice from neoliberal economists of the Chicago School.

One of the most influential libertarian, free-market economists of recent decades was Alan Greenspan (b. 1926), who, as U.S. Federal Reserve Chairman from 1987 to 2006, argued for privatization of state-owned enterprises and de-regulation of businesses—yet Greenspan nevertheless ran an activist Fed that expanded the nation’s money supply in ways and to degrees that neither Friedman nor Hayek would have approved of.

There is a saying now in Russia: Marx was wrong in everything he said about communism, but he was right in everything he wrote about capitalism. Since the 1980s, the nearly worldwide re-embrace of classical economic philosophy has predictably led to increasing inequalities of wealth within the U.S. and other nations, and to more frequent and severe economic bubbles and crashes. Which brings us to the global crisis that began in 2008. By this time all mainstream economists (Keynesians and neoliberals alike) had come to assume that perpetual growth is the rational and achievable goal of national economies. The discussion was only about how to maintain it—through government intervention or a laissez-faire approach that assumes the Market always knows best.

It is clearly a challenge to the neoliberals, whose deregulatory policies were largely responsible for creating the housing bubble whose implosion is generally credited with stoking the crisis. But it is a conundrum also for the Keynesians, whose stimulus packages have failed in their aim of increasing employment and general economic activity. What we have, then, is a crisis not just of the economy, but also of economic theory and philosophy.

The ideological clash between Keynesians and neoliberals (represented to a certain degree in the escalating all-out warfare between the U.S. Democratic and Republican political parties) will no doubt continue and even intensify. But the ensuing heat of battle will yield little light if both philosophies conceal the same fundamental errors. One such error is of course the belief that economies can and should perpetually grow. But that error rests on another that is deeper and subtler. The subsuming of land within the category of capital by nearly all post-classical economists had amounted to a declaration that Nature is merely a subset of the human economy—an endless pile of resources to be transformed into wealth. It also meant that natural resources could always be substituted with some other form of capital—money or technology. The reality, of course, is that the human economy exists within, and entirely depends upon Nature, and many natural resources have no realistic substitutes. This fundamental logical and philosophical mistake, embedded at the very heart of modern mainstream economic philosophies, set society directly upon a course toward the current era of climate change and resource depletion, and its persistence makes conventional economic theories—of both Keynesian and neoliberal varieties—utterly incapable of dealing with the economic and environmental survival threats to civilization in the 21st century.

For help, we can look to the ecological and biophysical economists, whose ideas have been thoroughly marginalized by the high priests and gatekeepers of mainstream economics. We must also understand the spectacular growth of debt—in obvious and subtle forms—that has occurred during the past few decades. That phenomenon in turn must be seen in light of the business cycles that characterize economic activity in modern industrial societies, and the central banks that have been set up to manage them.

We’ve already noted how nations learned to support the fossil fuel-stoked growth of their physical economies by increasing their money supply via fractional reserve banking. As money was gradually (and finally completely) de-linked from physical substance (i.e., precious metals), the creation of money became tied to the making of loans by commercial banks.

This meant that the supply of money was entirely elastic—as much could be created as was needed, and the amount in circulation could contract as well as expand. And the growth of money was tied to the growth of debt. The system is dynamic and unstable, and this instability manifests in the business cycle.

In the expansionary phase of the cycle, businesses see the future as rosy, and therefore take out loans to build more productive capacity and hire new workers. Because many businesses are doing this at the same time, the pool of available workers shrinks; so, to attract and keep the best workers, businesses have to raise wages. With wages rising, worker-consumers have more money in their pockets. Worker-consumers spend much of that money on products from the businesses that hire them, helping spread even more optimism about the future. Amid all this euphoria, worker-consumers go into debt based on the expectation that their wages will continue to grow, making it easy to repay loans. Businesses go into debt expanding their productive capacity. Real estate prices go up because of rising demand (former renters deciding they can now afford to buy), which means that houses are worth more as collateral if existing homeowners want to take out big loans to do some remodeling or to buy a new car. All of this borrowing and spending increases the money supply and the velocity of money.

At some point, however, the overall mood of the country changes. Businesses have invested in as much productive capacity as they are likely to need for a while. They feel they have taken on as much debt as they can handle, and don’t feel the need to hire more employees. Upward pressure on wages ceases, and that helps dampen the general sense of optimism about the economy. Workers likewise become shy about taking on more debt, as they are unsure whether they will be able to make payments. Instead, they concentrate on paying off existing debts.

With fewer loans being written, less new money is being created; meanwhile, as earlier loans are paid off, money effectively disappears from the system. The nation’s money supply contracts in a self-reinforcing spiral. But if people increase their savings during this downward segment of the cycle, they eventually will feel more secure and therefore more willing to begin spending again. Also, businesses will eventually have liquidated much of their surplus productive capacity and thereby reduced their debt burden. This sets the stage for the next expansion phase.
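The mechanism Heinberg describes here (new lending expands the money supply; net repayment shrinks it) can be sketched as a toy simulation. This is my own illustration, not a model from the book, and the lending and repayment rates are arbitrary values chosen only to show the shape of the cycle:

```python
def money_supply(initial=100.0, periods=12, expansion=6):
    """Toy illustration: new loans add deposit money to the supply,
    while loan repayments remove money from it."""
    supply = initial
    history = [supply]
    for t in range(periods):
        if t < expansion:
            # Expansion phase: optimistic firms and households borrow
            # faster than they repay, so the money supply grows.
            new_loans, repayments = 0.10 * supply, 0.04 * supply
        else:
            # Contraction phase: borrowing slows, existing debt is paid
            # down, and money effectively disappears from the system.
            new_loans, repayments = 0.02 * supply, 0.08 * supply
        supply += new_loans - repayments
        history.append(supply)
    return history

path = money_supply()
print(round(path[6], 1), round(path[-1], 1))  # peak, then contraction
```

The self-reinforcing quality of the downturn comes from the fact that repayments here are a fraction of the shrinking supply itself: each period of net repayment leaves less money circulating to service the remaining debt.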

A bubble consists of trade in high volumes at prices that are considerably at odds with intrinsic values, but the word can also be used more broadly to refer to any instance of rapid expansion of currency or credit that’s not sustainable over the long run. Bubbles always end with a crash—a rapid, sharp decline in asset values.

The upsides and downsides of the business cycle are reflected in higher or lower levels of inflation. Inflation is often defined in terms of higher wages and prices, but (as the Austrian economists have persuasively argued) wage and price inflation is actually just the symptom of an increase in the money supply relative to the amounts of goods and services being traded, which in turn is typically the result of exuberant borrowing and spending. The downside of the business cycle, in the worst instance, can produce the opposite of inflation, or deflation. Deflation manifests as declining wages and prices, consequent upon a declining money supply relative to goods and services traded, due to a contraction of borrowing and spending.

As we have seen, bubbles are a phenomenon generally tied to speculative investing. But in a larger sense our entire economy has assumed the characteristics of a bubble—even a Ponzi scheme. That is because it has come to depend upon staggering and continually expanding amounts of debt: government and private debt; debt in the trillions, and tens of trillions, and hundreds of trillions of dollars; debt that, in aggregate, has grown by 500 percent since 1980; debt that has grown faster than economic output (measured in GDP) in all but one of the past 50 years; debt that can never be repaid; debt that represents claims on quantities of labor and resources that simply do not exist.
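As a sanity check on the figure just quoted: a 500 percent increase means aggregate debt reached six times its 1980 level. Assuming the period runs roughly 30 years, from 1980 to around the book's 2011 publication (the period length is my assumption), that implies a compound annual growth rate of about 6 percent:

```python
# Back out the compound annual growth rate implied by a sixfold
# (+500%) increase in aggregate debt over roughly 30 years.
growth_multiple = 6.0   # +500% means 6x the starting level
years = 30
annual_rate = growth_multiple ** (1 / years) - 1
print(f"implied compound annual growth: {annual_rate:.1%}")
# → implied compound annual growth: 6.2%
```

That rate comfortably exceeds typical GDP growth over the same decades, which is the arithmetic behind the claim that debt "has grown faster than economic output."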

Looking at the problem close up, the globalization of the economy looms as a prominent factor. In the 1970s and ’80s, with stiffer environmental and labor standards to contend with domestically, corporations began eyeing the regulatory vacuum, cheap labor, and relatively untouched natural resource base of less-industrialized nations as a potential goldmine. International investment banks started loaning poor nations enormous sums to pay for ill-advised infrastructure projects (and, incidentally, to pay kickbacks to corrupt local politicians), later requiring these countries to liquidate their natural resources at fire-sale prices so as to come up with the cash required to make loan payments. Then, prodded by corporate interests, industrialized nations pressed for the liberalization of trade rules via the World Trade Organization (the new rules almost always subtly favored the wealthier trading partner).

All of this led predictably to a reduction of manufacturing and resource extraction in core industrial nations, especially the U.S. (many important resources were becoming depleted in the wealthy industrial nations anyway), and a steep increase in resource extraction and manufacturing in several “developing” nations, principally China. Reductions in domestic manufacturing and resource extraction in turn motivated investors within industrial nations to seek profits through purely financial means. As a result of these trends, there are now as many Americans employed in manufacturing as there were in 1940, when the nation’s population was roughly half what it is today—while the proportion of total U.S. economic activity deriving from financial services has tripled during the same period. And speculative investing has become an accepted practice that is taught in top universities and institutionalized in the world’s largest corporations.

The most important financial development during the 1970s was the growth of securitization—a financial practice of pooling various types of contractual debt (such as residential mortgages, commercial mortgages, auto loans, or credit card debt obligations) and selling it to investors in the form of bonds, pass-through securities, or collateralized mortgage obligations (CMOs). The principal and interest on the debts underlying the security are paid back to investors regularly. Securitization provided an avenue for more investors to fund more debt. In effect, securitization caused (or allowed) claims on wealth to increase far above previous levels.

In 1970 the top 100 CEOs earned about $45 for every dollar earned by the average worker; by 2008 the ratio was over 1,000 to one.

In the 1990s, as the surplus of financial capital continued to grow, investment banks began inventing a slew of new securities with high yields. In assessing these new products, rating agencies used mathematical models that, in retrospect, seriously underestimated their levels of risk. Until the early 1970s, bond credit rating agencies had been paid for their work by investors who wanted impartial information on the creditworthiness of securities issuers and their various offerings. Starting in the early 1970s, the "Big Three" ratings agencies (Standard & Poor's, Moody's, and Fitch) were instead paid by the securities issuers for whom they issued those ratings. This arrangement eventually led ratings agencies to actively encourage the issuance of collateralized debt obligations (CDOs).

The Clinton administration adopted "affordable housing" as one of its explicit goals (this didn't mean lowering house prices; it meant helping Americans get into debt), and over the decade the percentage of Americans owning their homes increased by 7.8 percent. This initiated a persistent upward trend in real estate prices.

In the late 1990s investors piled into Internet-related stocks, creating a speculative bubble. The dot-com bubble burst in 2000 (as with all bubbles, it was only a matter of "when," not "if"), and a year later the terrifying crimes of September 11, 2001 resulted in a four-day closure of U.S. stock exchanges and history's largest one-day decline in the Dow Jones Industrial Average. These events together triggered a significant recession. Seeking to counter a deflationary trend, the Federal Reserve lowered its federal funds rate target from 6.5 percent to 1.0 percent, making borrowing more affordable.

Downward pressure on interest rates was also coming from the nation's high and rising trade deficit. Every nation's balance of payments must sum to zero, so a nation running a current account deficit must balance that amount by earning from foreign investments, by running down reserves, or by obtaining loans from other countries. In other words, a country that imports more than it exports must borrow to pay for those imports. Hence American imports had to be offset by large and growing amounts of foreign investment capital flowing into the U.S. That incoming capital bid bond prices up, and since bond prices and interest rates move inversely, the trade deficit tended to force interest rates down.

Foreign investors had plenty of funds to lend, either because they had very high personal savings rates (in China, up to 40 percent of income saved), or because of high oil prices (think OPEC). A torrent of funds—it's been called a "Giant Pool of Money" that roughly doubled in size from 2000 to 2007, reaching $70 trillion—was flowing into the U.S. financial markets. While foreign governments were purchasing U.S. Treasury bonds, thus avoiding much of the impact of the eventual crash, other foreign investors, including pension funds, chased higher yields in riskier private securities and absorbed the full force of the crash when it came.
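The inverse relationship between bond prices and interest rates can be verified with a simple present-value calculation: a bond's price is what its fixed future payments are worth at the prevailing market rate. The bond terms below are illustrative, not drawn from the text:

```python
# Why bond prices and interest rates move inversely: a bond's price is
# the present value of its fixed payments, discounted at the prevailing
# market rate. All numbers here are illustrative.

def bond_price(face, coupon_rate, market_rate, years):
    """Present value of annual coupons plus face value at maturity."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + market_rate) ** years
    return pv_coupons + pv_face

# A 10-year, $1,000 bond paying a 5% annual coupon, priced at three
# different market rates: as rates fall, the price rises, and vice versa.
for market_rate in (0.04, 0.05, 0.06):
    price = bond_price(1000, 0.05, market_rate, 10)
    print(f"market rate {market_rate:.0%}: price ${price:,.2f}")
```

When the market rate equals the coupon rate the bond trades at par ($1,000); a lower market rate pushes the price above par, a higher one pushes it below. This is the mechanism by which inflowing foreign capital, bidding up bond prices, pressed U.S. interest rates down.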

By this time a largely unregulated “shadow banking system,” made up of hedge funds, money market funds, investment banks, pension funds, and other lightly-regulated entities, had become critical to the credit markets and was underpinning the financial system as a whole. But the shadow “banks” tended to borrow short-term in liquid markets to purchase long-term, illiquid, and risky assets, profiting on the difference between lower short-term rates and higher long-term rates. This meant that any disruption in credit markets would result in rapid deleveraging, forcing these entities to sell long-term assets (such as MBSs) at depressed prices.

Between 1997 and 2006, the price of the typical American house increased by 124%.

People bragged that their houses were earning more than they were, believing that the bloating of house values represented a flow of real money that could be tapped essentially forever. In a sense this money was being stolen from the next generation: younger first-time buyers had to burden themselves with unmanageable debt in order to enter the market, while older homeowners who bought before the bubble were able to sell, downsize, and live on the profit.

For a brief time between 2006 and mid-2008, investors fled toward futures contracts in oil, metals, and food, driving up commodities prices worldwide. Food riots erupted in many poor nations, where the cost of wheat and rice doubled or tripled. In part, the boom was based on a fundamental economic trend: demand for commodities was growing—due in part to the expansion of economies in China, India, and Brazil—while supply growth was lagging. But speculation forced prices higher and faster than physical shortage could account for. For Western economies, soaring oil prices had a sharp recessionary impact, with already cash-strapped new homeowners now having to spend eighty to a hundred dollars every time they filled the tank in their SUV. The auto, airline, shipping, and trucking industries were sent reeling.

The U.S. real estate bubble of the early 2000s was the largest (in terms of the amount of capital involved) in history. And its crash carried an eerie echo of the 1930s: Austrian and Post-Keynesian economists have argued that it wasn’t the stock market crash that drove the Great Depression so much as farm failures making it impossible for farmers to make mortgage payments—along with housing bubbles in Florida, New York, and Chicago.

Real estate bubbles are essentially credit bubbles, because property owners generally use borrowed money to purchase property (this is in contrast to currency bubbles, in which nations inflate their currency to pay off government debt). The amount of outstanding debt soars as buyers flood the market, bidding property prices up to unrealistic levels and taking out loans they cannot repay. Too many houses and offices are built, and materials and labor are wasted in building them. Real estate bubbles also lead to an excess of homebuilders, who must retrain and retool when the bubble bursts. These kinds of bubbles lead to systemic crises affecting the economic integrity of nations.

Indeed, the housing bubble of the early 2000s had become the oxygen of the U.S. economy—the source of jobs, the foundation for Wall Street's recovery from the dot-com bust, the attractant for foreign capital, the basis for household wealth accumulation and spending. Its bursting changed everything.

And there is reason to think it has not fully deflated: commercial real estate may be waiting to exhale next. Over the next five years, about $1.4 trillion in commercial real estate loans will reach the end of their terms and require new financing. Commercial property values have fallen more than 40 percent nationally since their 2007 peak, so nearly half of those loans are underwater. Vacancy rates are up and rents are down.

The impact of the real estate crisis on banks is profound, and goes far beyond defaults on outstanding mortgage contracts: systemic dependence on MBSs, CDOs, and derivatives means many of the banks, including the largest, are effectively insolvent and unable to take on more risk (we'll see why in more detail in the next section).

The demographics are not promising for a recovery of the housing market anytime soon: the oldest of the Baby Boomers are 65 and entering retirement. Few have substantial savings; many had hoped to fund their golden years with house equity—and to realize it, they must sell. This will add more houses to an already glutted market, driving prices down even further.

With regard to debt, what are those limits likely to be and how close are we to hitting them? There are practical limits to debt within such a system, and those limits are likely to show up in somewhat different ways for each of the four categories of debt indicated in the graph.

With government debt, problems arise when required interest payments become a substantial fraction of tax revenues. Currently for the U.S., the total Federal budget amounts to about $3.5 trillion, of which 12 percent (or $414 billion) goes toward interest payments. But in 2009, tax revenues amounted to only $2.1 trillion; thus interest payments currently consume almost 20 percent, or nearly one-fifth, of tax revenues.
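The paragraph's arithmetic can be checked directly, using the figures as given in the text (in trillions of dollars):

```python
# Interest as a share of the Federal budget vs. as a share of tax
# revenues, using the figures quoted in the text ($ trillions).
budget = 3.5          # total Federal budget
interest = 0.414      # annual interest payments
revenue = 2.1         # 2009 tax revenues

print(f"Interest / budget:  {interest / budget:.1%}")   # roughly 12%
print(f"Interest / revenue: {interest / revenue:.1%}")  # roughly 20%
```

The gap between the two ratios is the paragraph's point: measured against what the government actually collects rather than what it spends, the interest burden is nearly one-fifth, not one-eighth.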

By the time the debt reaches $20 trillion, roughly ten years from now, interest payments may constitute the largest Federal budget outlay category, eclipsing even military expenditures. If Federal tax revenues haven’t increased by that time, Federal government debt interest payments will be consuming 20 percent of them.

Once 100 percent of tax revenues have to go toward interest payments and all government operations have to be funded with more borrowing—on which still more interest will have to be paid—the system will have arrived at a kind of financial singularity: a black hole of debt, if you will. But in all likelihood we would not have to get to that ultimate impasse before serious problems appear.

Many economic commentators suggest that when government has to spend 30 percent of tax receipts on interest payments, the country is in a debt trap from which there is no easy escape. Given current trajectories of government borrowing and interest rates, that 30 percent mark could be hit in just a few years. Even before then, U.S. creditworthiness will take a beating and interest costs will climb.

However, some argue that limits to government debt (due to snowballing interest payments) need not be a hard constraint—especially for a large nation, like the U.S., that controls its own currency. The United States government is constitutionally empowered to create money, including creating money to pay the interest on its debts. Or the government could in effect loan the money to itself via its central bank, which would then rebate interest payments back to the Treasury (this is in fact what the Treasury and Fed are doing with Quantitative Easing 2, discussed below).

The most obvious complication that might arise is this: if at some point general confidence that external U.S. government debt (i.e., money owed to private investors or other nations) could be repaid with debt of equal "value" were deeply and widely shaken, potential buyers of that debt might decide to keep their money under the metaphorical mattress (using it to buy factories or oilfields instead), even if doing so posed its own set of problems. Then the Fed would become virtually the only available buyer of government debt, which might eventually undermine confidence in the currency, possibly igniting a rapid spiral of refusal that would end only when the currency failed. There are plenty of historic examples of currency failures, so this would not be a unique occurrence.

But as long as deficit spending doesn’t exceed certain bounds, and as long as the economy resumes growth in the not-too-distant future, then it can be sustained for quite some time. Ponzi schemes theoretically can continue forever—if the number of potential participants is infinite. The absolute size of government debt is not necessarily a critical factor, as long as future growth will be sufficient so that the proportion of debt relative to revenues remains the same. Even an increase in that proportion is not necessarily cause for alarm, as long as it is only temporary. This, at any rate, is the Keynesian argument. Keynesians would also point out that government debt is only one category of total debt, and that U.S. government debt hasn’t grown proportionally relative to other categories of debt to any alarming degree (until the current recession).

Baby Boomers (the most numerous demographic cohort in the nation’s history, encompassing 70 million Americans) are reaching retirement age, which means that their lifetime spending cycle has peaked. It’s not that Boomers won’t continue to buy things (everybody has to eat), but their aggregate spending is unlikely to increase, given that cohort members’ savings are, on average, inadequate for retirement (one-third of them have no savings whatever). Out of necessity, Boomers will be saving more from now on, and spending less. And that won’t help the economy grow.  

When demand for products declines, corporations aren't inclined to borrow to increase their productive capacity. Even corporate borrowing aimed at increasing financial leverage has limits. Too much corporate debt reduces resiliency during slow periods—and the future is looking slow for as far as the eye can see. Durable goods orders are down, housing starts and new home sales are down, savings are up. As a result, banks don't want to lend to companies, because the risk of default on such loans is now perceived as being higher than it was a few years ago; in addition, the banks are reluctant to take on more risk of any sort, given that many of the assets on their balance sheets consist of now-worthless derivatives and CDOs.

Meanwhile, ironically and perhaps surprisingly, U.S. corporations are sitting on over a trillion dollars because they cannot identify profitable investment opportunities and because they want to hang onto whatever cash they have in anticipation of continued hard times.

If only we could get to the next upside business cycle, then more corporate debt would be justified for both lenders and borrowers. But so far confidence in the future is still weak.

One of the main reforms enacted during the Great Depression, contained in the Glass-Steagall Act of 1933, was a requirement that commercial banks refrain from acting as investment banks. In other words, they were prohibited from dealing in stocks, bonds, and derivatives. This prohibition was based on an implicit understanding that there should be some sort of firewall within the financial system separating productive investment from pure speculation, or gambling. That firewall was eliminated by the passage of the Gramm–Leach–Bliley Act of 1999 (for which the financial services industry lobbied tirelessly). As a result, all large U.S. banks have for the past decade been deeply engaged in speculative investment, using both their own and their clients' money.

With derivatives, since there is no requirement to own the underlying asset, and since there is often no requirement of evidence of ability to cover the bet, there is no effective limit to the amount that can be wagered. It's true that many derivatives largely cancel each other out, and that their ostensible purpose is to reduce financial risk. Nevertheless, if a contract is settled, somebody has to pay—unless they can't.

In the heady years of the 2000s, even the largest and most prestigious banks engaged in what can only be termed criminally fraudulent behavior on a massive scale. As revealed in sworn Congressional testimony, firms including Goldman Sachs deliberately created flawed securities and sold tens of billions of dollars' worth of them to investors, then took out many more billions of dollars' worth of derivatives contracts essentially betting against the securities they themselves had designed and sold. They were quite simply defrauding their customers, which included foreign and domestic pension funds. To date, no senior executive with any bank or financial services firm has been prosecuted for running these scams. Instead, most of the key figures are continuing to amass immense personal fortunes, confident no doubt that what they were doing—and in many cases continue to do—is merely a natural extension of the inherent logic of their industry.

The degree and concentration of exposure on the part of the biggest banks with regard to derivatives was and is remarkable: as of 2005, JP Morgan Chase, Bank of America, Citibank, Wachovia, and HSBC together accounted for 96 percent of the $100 trillion of derivatives contracts held by 836 U.S. banks.

Even though many derivatives were insurance against default, or wagers that a particular company would fail, to a large degree they constituted a giant bet that the economy as a whole would continue to grow (and, more specifically, that the value of real estate would continue to climb). So when the economy stopped growing, and the real estate bubble began to deflate, this triggered a systemic unraveling that could be halted, and then only temporarily, by massive government intervention.

Suddenly "assets" in the form of derivative contracts that had a stated value on banks' ledgers were clearly worth much less. If these assets had to be sold, or if they were "marked to market" (valued on the books at the amount they could actually sell for), the banks would be shown to be insolvent. Government bailouts essentially enabled the banks to keep those assets hidden, so that they could appear solvent and carry on business.

Despite the proliferation of derivatives, the financial system still largely revolves around the timeworn practice of receiving deposits and making loans. Bank loans are the source of money in our modern economy. If the banks go away, so does the rest of the economy.

But as we have just seen, many banks are probably actually insolvent because of the many near-worthless derivative contracts and bad mortgage loans they count as assets on their balance sheets.

One might well ask: If commercial banks have the power to create money, why can't they just write off these bad assets and carry on? Ellen Brown explains the point succinctly in her useful book Web of Debt:

[U]nder the accountancy rules of commercial banks, all banks are obliged to balance their books, making their assets equal their liabilities. They can create all the money they can find borrowers for, but if the money isn't paid back, the banks have to record a loss; and when they cancel or write off debt, their assets fall. To balance their books . . . they have to take the money either from profits or from funds invested by the bank's owners [i.e., shareholders]; and if the loss is more than its owners can profitably sustain, the bank will have to close its doors.
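Brown's bookkeeping point can be illustrated with a toy balance sheet. All figures below are hypothetical; the only rule applied is the accounting identity that equity equals assets minus liabilities:

```python
# A toy balance sheet illustrating why loan write-offs come straight out
# of a bank's equity (the owners' funds). All numbers are hypothetical.

assets = {"loans": 900, "reserves": 100}      # $ millions
liabilities = {"deposits": 940}

equity = sum(assets.values()) - sum(liabilities.values())
print(f"Equity before write-off: {equity}")   # 60

bad_loans = 80                                # loans that will never be repaid
assets["loans"] -= bad_loans                  # writing them off shrinks assets...
equity = sum(assets.values()) - sum(liabilities.values())
print(f"Equity after write-off:  {equity}")   # -20

if equity < 0:
    # ...while deposits are still owed in full, so the loss lands on equity.
    print("Loss exceeds what the owners can sustain: the bank is insolvent.")
```

Deposits are still owed in full whatever happens to the loan book, so an $80 million write-off against $60 million of equity leaves the bank unable to balance its books, which is exactly the bind Brown describes.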

So, given their exposure via derivatives, bad real estate loans, and MBSs, the banks aren't making new loans because they can't take on more risk. The only way to reduce that risk is for government to guarantee the loans. Again, as long as the down-side of this business cycle is short, such a plan could work in principle.

But whether it actually will in the current situation is problematic. As noted above, Ponzi schemes can theoretically go on forever, as long as the number of new investors is infinite. Yet in the real world the number of potential investors is always finite. There are limits. And when those limits are hit, Ponzi schemes can unravel very quickly.

The shadow banks can still write more derivative contracts, but that doesn’t do anything to help the real economy and just spreads risk throughout the system. That leaves government, which (if it controls its own currency and can fend off attacks from speculators) can continue to run large deficits, and the central banks, which can enable those deficits by purchasing government debt outright—but unless such efforts succeed in jump-starting growth in the other sectors, that is just a temporary end-game strategy.

Remember: in a system in which money is created through bank loans, there is never enough money in existence to pay back all debts with interest. The system only continues to function as long as it is growing.

So, what happens to this mountain of debt in the absence of economic growth? Answer: some kind of debt crisis. And that is what we are seeing.

Debt crises have occurred frequently throughout the history of civilizations, beginning long before the invention of fractional reserve banking and credit cards. Many societies learned to solve the problem with a "debt jubilee": according to the Book of Leviticus in the Bible, every fiftieth year is a Jubilee Year, in which slaves and prisoners are to be freed and debts are to be forgiven.

For householders facing unaffordable mortgage payments or a punishing level of credit card debt, a jubilee may sound like a capital idea. But what would that actually mean today, if carried out on a massive scale—when debt has become the very fabric of the economy? Remember: we have created an economic machine that needs debt like a car needs gas.

Realistically, we are unlikely to see a general debt jubilee in coming years; what we will see instead are defaults and bankruptcies that accomplish essentially the same thing—the destruction of debt. Which, in an economy like ours, effectively means a destruction of wealth and of claims upon wealth. Debt will have to be written off in enormous amounts—by the trillions of dollars. Over the short term, government will attempt to stanch this flood of debt-shedding in the household, corporate, and financial sectors by taking on more debt of its own—but eventually it simply won't be able to keep up, given the inherent limits on government borrowing discussed above.

We began with the question, "How close are we to hitting the limits to debt?" The evident answer is: we have probably already hit realistic limits to household debt and corporate debt; the ratio of U.S. total debt to GDP is probably near or past the danger mark; and limits to government debt may be within sight, though that conclusion is more controversial and doubtful.

For the U.S., actions undertaken by the Federal government and the Federal Reserve bank system have so far resulted in totals of $3 trillion actually spent and $11 trillion committed as guarantees. Some of these actions are discussed below; for a complete tally of the expenditures and commitments, see the online CNN Bailout Tracker.

The New Deal had cost somewhere between $450 and $500 billion and had increased government’s share of the national economy from 4 percent to 10 percent. ARRA represented a much larger outlay that was spent over a much shorter period, and increased government’s share of the economy from 20 percent to 25 percent.

At the end of 2010, President Obama and congressional leaders negotiated a compromise package of extended and new tax cuts that, in total, would reduce potential government revenues by an estimated $858 billion. This was, in effect, a third stimulus package.

Critics of the stimulus packages argued that transitory benefits to the economy had been purchased by raising government debt to frightening levels. Proponents of the packages answered that, had government not acted so boldly, an economic crisis might have turned into complete and utter ruin.

While the U.S. government stimulus packages were enormous in scale, the actions of the Federal Reserve dwarfed them in terms of dollar amounts committed.

During the past three years, the Fed's balance sheet has swollen to more than $2 trillion through its buying of bank and government debt. Actual expenditures included $29 billion for the Bear Stearns bailout; $149.7 billion to buy debt from Fannie Mae and Freddie Mac; $775.6 billion to buy mortgage-backed securities, also from Fannie and Freddie; and $109.5 billion to buy hard-to-sell assets (including MBSs) from banks. However, the Fed committed itself to trillions more in insuring banks against losses, loaning to money market funds, and loaning to banks to purchase commercial paper. Altogether, these outlays and commitments totaled a minimum of $6.4 trillion.

Documents released by the Fed on December 1, 2010 showed that more than $9 trillion in total had been supplied to Wall Street firms, commercial banks, foreign banks, and corporations, with Citigroup, Morgan Stanley, and Merrill Lynch borrowing sums that cumulatively totaled over $6 trillion. The collateral for these loans was undisclosed but widely thought to be stocks, CDSs, CDOs, and other securities of dubious value.

In one of its most significant and controversial programs, known as "quantitative easing," the Fed twice expanded its balance sheet substantially, first by buying mortgage-backed securities from banks, then by purchasing outstanding Federal government debt (bonds and Treasury certificates) to support the Treasury debt market and help keep interest rates down on consumer loans. The Fed essentially creates money on the spot for this purpose (though no money is literally "printed"), thus monetizing U.S. government debt.

In November 2008 China announced a stimulus package totaling 4 trillion yuan ($586 billion) as an attempt to minimize the impact of the global financial crisis on its domestic economy. In proportion to the size of China’s economy, this was a much larger stimulus package than that of the U.S. Public infrastructure development made up the largest portion, nearly 38 percent, followed by earthquake reconstruction, funding for social welfare plans, rural development, and technology advancement programs.

What's the bottom line on all these stimulus and bailout efforts? In the U.S., $12 trillion of total household net worth disappeared in 2008, and there will likely be more losses ahead, largely as a result of a continued fall in real estate values, though increasingly as a result of job losses as well. The government's stimulus efforts, totaling less than $1 trillion, cannot hope to make up for this historic evaporation of wealth. While indirect subsidies may temporarily keep home prices from falling further, that just keeps houses less affordable to workers earning less income.

Meanwhile, the bailouts of banks and shadow banks have been characterized as government throwing money at financial problems it cannot solve, rewarding the very people who created them. Rather than being motivated by the suffering of American homeowners or of governments in over their heads, the bailouts of Fannie Mae and Freddie Mac in the U.S., and of Greece and Ireland in the E.U., were (according to critics) essentially geared toward securing the investments of the banks and the wealthy bondholders.

The stimulus-bailout efforts of 2008-2009—which in the U.S. cut interest rates from 5 percent to zero, pushed the budget deficit up to 10 percent of GDP, and guaranteed $6.4 trillion to shore up the financial system—arguably cannot be repeated. These constituted quite simply the largest commitments of funds in world history, dwarfing the total amounts spent in all the wars of the 20th century in inflation-adjusted terms (for the U.S., the cost of World War II amounted to $3.2 trillion). Not only the U.S., but Japan and the European nations as well have exhausted their arsenals.

But more will be needed as countries, states, counties, and cities near bankruptcy due to declining tax revenues. Meanwhile the U.S. has lost 8.4 million jobs—and if lost hours worked are counted, that adds the equivalent of another 3 million; the nation will need to generate an extra 450,000 jobs each month for three years to get back to pre-crisis levels of employment. The only way these problems can be allayed (not fixed) is through more central bank money creation and government spending.

Once a credit bubble has inflated, the eventual correction (which entails destruction of credit and assets) is of greater magnitude than government's ability to spend. The cycle must sooner or later play itself out.

There may be a few more arrows in the quiver of economic policy makers: central bankers could try to drive down the value of domestic currencies to stimulate exports, and the Fed could engage in more quantitative easing. But these measures will sooner or later merely undermine currencies.

Further, the way the Fed at first employed quantitative easing in 2009 was minimally productive.

QE1 amounted to adding about a trillion dollars to banks’ balance sheets, with the assumption that banks would then use this money as a basis for making loans.[2] The “multiplier effect” (in which banks make loans in amounts many times the size of deposits) should theoretically have resulted in the creation of roughly $9 trillion within the economy. However, this did not happen: because there was reduced demand for loans (companies didn’t want to expand in a recession and families didn’t want to take on more debt), the banks just sat on this extra capital. A better result could arguably have been obtained if the Fed were somehow to have distributed the same amount of money directly to debtors, rather than to banks, because then at least the money would either have circulated to pay for necessities, or helped to reduce the general debt overhang.
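The "multiplier effect" the paragraph refers to can be sketched as a geometric series: each deposit supports a loan, the loan is spent and redeposited, and so on. The 10 percent reserve ratio below is an illustrative assumption, chosen because it makes the closed-form multiplier come out to 10:

```python
# Textbook money-multiplier sketch: with a fractional reserve ratio,
# each deposit funds a loan that becomes a new (smaller) deposit.
# The 10% reserve ratio is an illustrative assumption.

def total_money_created(initial_deposit, reserve_ratio, rounds=1000):
    """Sum the geometric series of deposit -> loan -> redeposit cycles."""
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)   # portion lent out and redeposited
    return total

# $1 trillion in new bank reserves at a 10% reserve ratio:
created = total_money_created(1.0, 0.10)     # in $ trillions
print(f"Potential money in circulation: ~${created:.1f} trillion")
# Closed form: 1 / reserve_ratio = 10. The text's ~$9 trillion counts
# only the *additional* money created beyond the initial trillion.
```

The sketch also shows why the mechanism stalled: every step after the first depends on a willing borrower, so when loan demand collapsed, the series was simply never run and the reserves sat idle.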

QE2 was about funding Federal government debt interest-free. Because the Federal Reserve rebates its profits (after deducting expenses) to the Treasury, creating money to buy government debt obligations is an effective way of increasing that debt without increasing interest payments. Critics describe this as the government “printing money” and assert that it is highly inflationary; however, given the extremely deflationary context (trillions of dollars’ worth of write-downs in collateral and credit), the Fed would have to “print” far more than it is doing to result in real inflation. Nevertheless, as we will see in Chapter 5 in a discussion of “currency wars,” other nations view this strategy as a way to drive down the dollar so as to decrease the value of foreign-held dollar-denominated debt—in effect forcing them to pay for America’s financial folly.

Central banks and governments are barely keeping the wheels on society, but their actions come with severe long-term costs and risks. And what they can actually accomplish is most likely limited anyway.

Deflation represents a disappearance of credit and money, so that whatever money remains has increased purchasing power. Once the bubble began to burst back in 2007-2008, say the deflationists, a process of contraction began that inevitably must continue to the point where debt service is manageable and prices for assets such as homes and stocks are compelling based on long-term historical trends.   However, many deflationists tend to agree that the inflationists are probably right in the long run: at some point, perhaps several years from now, some future U.S. administration will resort to truly extraordinary means to avoid defaulting on interest payments on its ballooning debt, as well as to avert social disintegration and restart economic activity. There are several scenarios by which this might happen—including government simply printing money in enormous quantities and distributing it directly to banks or citizens. The net effect would be the same in all cases: a currency collapse.

In general, what we are actually seeing so far is neither dramatic deflation nor hyperinflation. Despite the evaporation of trillions of dollars in wealth during the past four years, and despite government and central bank interventions with a potential nameplate value also running in the trillions of dollars, prices (which most economists regard as the signal of inflation or deflation) have remained fairly stable. That is not to say that the economy is doing well: the ongoing problems of unemployment, declining tax revenues, and business and bank failures are obvious to everyone. Rather, what seems to be happening is that the efforts of the U.S. Federal government and the Federal Reserve have temporarily more or less succeeded in balancing out the otherwise massively deflationary impacts of defaults, bankruptcies, and falling property values. With its new functions, the Fed is acting as the commercial bank of last resort, transferring debt (mostly in the form of MBSs and Treasuries) from the private sector to the public sector.

The Fed's zero-interest-rate policy has given a huge hidden subsidy to banks by allowing them to borrow Fed money for nothing and then lend it to the government at a 3 percent interest rate. But this is still not inflationary, because the Federal Reserve is merely picking up the slack left by the collapse of credit in the private sector. In effect, the nation's government and its central bank are together becoming the lender of last resort and the borrower of last resort—and (via the military) increasingly also both the consumer of last resort and the employer of last resort.

While leaders will make every effort to portray this as a gradual return to growth, in fact the economy will be losing ground and will remain fragile, highly vulnerable to upsetting events that could take any of a hundred forms—including international conflict, terrorism, the bankruptcy of a large corporation or megabank, a sovereign debt event (such as a default by one of the European countries now lined up for bailouts), a food crisis, an energy shortage or temporary grid failure, an environmental disaster, a curtailment of government-Fed intervention based on a political shift in the makeup of Congress, or a currency war.

Extreme social unrest would be an inevitable result of the gross injustice of requiring a majority of the population to forego promised entitlements and economic relief following the bailout of a small super-wealthy minority on Wall Street. Political opportunists can be counted on to exacerbate that unrest and channel it in ways utterly at odds with society’s long-term best interests. This is a toxic brew.

Growth requires not just energy in the most general sense, but forms of energy with specific characteristics. After all, the Earth is constantly bathed in energy—indeed, the amount of solar energy that falls on Earth’s surface each hour is greater than the amount of fossil-fuel energy the world uses every year. But sunlight energy is diffuse and difficult to use directly. Economies need sources of energy that are concentrated and controllable, and that can be made to do useful work. From a short-term point of view, fossil fuels proved to be energy sources with highly desirable characteristics: they could be extracted from Earth’s crust quite cheaply (at least in the early days), they were portable, and they delivered a lot of energy per unit of weight and/or volume—in most instances, far more than the firewood that people had been accustomed to using.

In 2009, Post Carbon Institute and the International Forum on Globalization undertook a joint study (Searching for a Miracle: Net Energy Limits and the Fate of Industrial Societies) to analyze 18 energy sources (from oil to tidal power) using 10 criteria (scalability, renewability, energy density, energy returned on energy invested, and so on). Our conclusion was that there is no credible scenario in which alternative energy sources can entirely make up for fossil fuels as the latter deplete.

Given oil’s pivotal role in the economy, high prices did more than reduce demand: they helped undermine the economy as a whole, in the 1970s and again in 2008. Economist James Hamilton of the University of California, San Diego, has assembled a collection of studies showing a tight correlation between oil price spikes and recessions during the past 50 years. Seeing this correlation, every attentive economist should have forecast a steep recession beginning in 2008, as the oil price soared.

By mid-2009 the oil price had settled within the “Goldilocks” range—not too high (so as to kill the economy and, with it, fuel demand), and not too low (so as to scare away investment in future energy projects and thus reduce supply). That just-right price band appeared to be between $60 and $80 a barrel. How long prices can stay in or near the Goldilocks range is anyone’s guess, but as declines in production in the world’s old super-giant oilfields continue to accelerate and exploration costs continue to mount, the lower boundary of that just-right range will inevitably continue to migrate upward. And while the world economy remains frail, its vulnerability to high energy prices is more pronounced, so that even $80-85 oil could gradually weaken it further, choking off signs of recovery. In other words, oil prices have effectively put a cap on economic recovery. This problem would not exist if the petroleum industry could just get busy and make a lot more oil, so that each unit would be cheaper. But despite its habitual use of the terms “produce” and “production,” the industry doesn’t make oil, it merely extracts the stuff from finite stores in the Earth’s crust. As we have already seen, the cheap, easy oil is gone. Economic growth is hitting the Peak Oil ceiling.

As more and more resources acquire the Goldilocks syndrome, general commodity prices will likely spike and crash repeatedly, making a hash of efforts to stabilize the economy.

There are three main solutions to the problem of Peak Phosphate: composting of human wastes, including urine diversion; more efficient application of fertilizer; and farming in such a way as to make existing soil phosphorus more accessible to plants.

It’s worth noting that for the past few decades a vocal minority of farmers, agricultural scientists, and food system theorists including Wendell Berry, Wes Jackson, Vandana Shiva, Robert Rodale, and Michael Pollan has argued against centralization, industrialization, and globalization of agriculture, and for an ecological agriculture with minimal fossil fuel inputs. Where their ideas have taken root, the adaptation to Peak Oil and the end of growth will be easier. Unfortunately, their recommendations have not become mainstream, because industrialized, globalized agriculture has proved capable of producing larger short-term profits for banks and agribusiness cartels. Even more unfortunately, the available time for a large-scale, proactive food system transition before the impacts of Peak Oil and economic contraction arrive is gone. We’ve run out the clock.

In his book Dirt, David Montgomery makes a powerful case that soil erosion was a major cause of the Roman economy’s decline.

Data from the U.S. Geological Survey shows that within the U.S. many mineral resources are well past their peak rates of production.[4] These include bauxite (whose production peaked in 1943), copper (1998), iron ore (1951), magnesium (1966), phosphate rock (1980), potash (1967), rare earth metals (1984), tin (1945), titanium (1964), and zinc (1969).[5]

There are 17 rare earth elements (REEs) with names like lanthanum, neodymium, europium, and yttrium. They are critical to a variety of high-tech products including catalytic converters, color TV and flat panel displays, permanent magnets, batteries for hybrid and electric vehicles, and medical devices; to manufacturing processes like petroleum refining; and to various defense systems like missiles, jet engines, and satellite components. REEs are even used in making the giant electromagnets in modern wind turbines. But rare earth mines are failing to keep up with demand. China produces 97 percent of the world’s REEs, and has issued a series of contradictory public statements about whether, and in what amounts, it intends to continue exporting these elements.

Indium is used in indium tin oxide, which is a thin-film conductor in flat-panel television screens. Armin Reller, a materials chemist, and his colleagues at the University of Augsburg in Germany have been investigating the problem of indium depletion. Reller estimates that the world has, at best, 10 years before production begins to decline; known deposits will be exhausted by 2028, so new deposits will have to be found and developed. Some analysts are now suggesting that shortages of energy minerals including indium, REEs, and lithium for electric car batteries could trigger trade wars. 

Armin Reller and his colleagues have also looked into gallium supplies. Discovered in 1875, gallium is a blue-white metal with certain unusual properties, including a very low melting point and a reluctance to oxidize. These make it useful as a coating for optical mirrors, a liquid seal in strongly heated apparatus, and a substitute for mercury in ultraviolet lamps. Gallium is also essential to making liquid-crystal displays in cell phones, flat-screen televisions, and computer monitors. With the explosive profusion of LCD displays in the past decade, supplies of gallium have become critical; Reller projects that by about 2017 existing sources will be exhausted.

Palladium (along with platinum and rhodium) is a primary component in the autocatalysts used in automobiles to reduce exhaust emissions. Palladium is also employed in the production of multi-layer ceramic capacitors in cellular telephones, personal and notebook computers, fax machines, and auto and home electronics. Russian stockpiles have been a key component in world palladium supply for years, but those stockpiles are nearing exhaustion, and prices for the metal have soared as a result.

Uranium is the fuel for nuclear power plants and is also used in nuclear weapons manufacturing; small amounts are employed in the leather and wood industries for stains and dyes, and as mordants of silk or wool. Depleted uranium is used in kinetic energy penetrator weapons and armor plating. In 2006, the Energy Watch Group of Germany studied world uranium supplies and issued a report concluding that, in its most optimistic scenario, the peak of world uranium production will be achieved before 2040. If large numbers of new nuclear power plants are constructed to offset the use of coal as an electricity source, then supplies will peak much sooner. Tantalum for cell phones. Helium for blimps. The list could go on. Perhaps it is not too much of an exaggeration to say that humanity is in the process of achieving Peak Everything.

Accidents and natural disasters have long histories; therefore it may seem peculiar at first to think that these could now suddenly become significant factors in choking off economic growth. However, two things have changed.   First, growth in human population and proliferation of urban infrastructure are leading to ever more serious impacts from natural and human-caused disasters.

There are also limits to the environment’s ability to absorb the insults and waste products of civilization, and we are broaching those limits in ways that can produce impacts of a scale far beyond our ability to contain or mitigate. The billions of tons of carbon dioxide that our species has released into the atmosphere through the combustion of fossil fuels are not only changing the global climate but also causing the oceans to acidify. Indeed, the scale of our collective impact on the planet has grown to such an extent that many scientists contend that Earth has entered a new geologic era—the Anthropocene. Humanly generated threats to the environment’s ability to support civilization are now capable of overwhelming civilization’s ability to adapt and regroup.

GDP impacts from the 2010 disasters were substantial. BP’s losses from the Deepwater Horizon gusher (which included cleanup costs and compensation to commercial fishers) have so far amounted to about $40 billion. The Pakistan floods caused damage estimated at $43 billion, while the financial toll of the Russian wildfires has been pegged at $15 billion.[4] Add in other events listed above, plus more not mentioned, and the total easily tops $150 billion for GDP losses in 2010 resulting from natural disasters and industrial accidents.[5] This does not include costs from ongoing environmental degradation (erosion of topsoil, loss of forests and fish species). How does this figure compare with annual GDP growth? Assuming world annual GDP of $58 trillion and an annual growth rate of three percent, annual GDP growth would amount to $1.74 trillion. Therefore natural disasters and industrial accidents, conservatively estimated, are already costing the equivalent of 8.6 percent of annual GDP growth.

As resource extraction moves from higher-quality to lower-quality ores and deposits, we must expect worse environmental impacts and accidents along the way. There are several current or planned extraction projects in remote and/or environmentally sensitive regions that could each result in severe global impacts equaling or even surpassing the Deepwater Horizon blowout. These include oil drilling in the Beaufort and Chukchi Seas; oil drilling in the Arctic National Wildlife Refuge; coal mining in the Utukok River Upland, Arctic Alaska; tar sands production in Alberta; shale oil production in the Rocky Mountains; and mountaintop-removal coal mining in Appalachia.
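The back-of-envelope comparison above is easy to reproduce. A minimal sketch, using the text’s own dollar figures; the `other_events` value is an assumption on my part, a remainder that brings the listed losses up to the ~$150 billion total the text cites:

```python
# Sketch of the disaster-cost arithmetic. Figures are from the text,
# except `other_events`, an assumed remainder reaching the ~$150B total.
deepwater_horizon = 40e9   # BP losses from the 2010 Gulf blowout
pakistan_floods   = 43e9
russian_wildfires = 15e9
other_events      = 52e9   # assumption: other 2010 disasters, not itemized

total_losses = deepwater_horizon + pakistan_floods + russian_wildfires + other_events

world_gdp   = 58e12        # assumed world annual GDP, per the text
growth_rate = 0.03         # assumed 3% annual growth, per the text
gdp_growth  = world_gdp * growth_rate

print(f"Annual GDP growth: ${gdp_growth / 1e12:.2f} trillion")        # $1.74 trillion
print(f"Losses as share of growth: {total_losses / gdp_growth:.1%}")  # 8.6%
```

The point of the exercise is that a single year’s disaster losses already consume a meaningful slice of the growth the economy is counting on.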

Since climate is changing mostly because of the burning of fossil fuels, averting climate change is largely a matter of reducing fossil fuel consumption.[9] But as we have seen (and will confirm in more ways in the next chapter), economic growth depends on increasing energy consumption. Due to the inherent characteristics of alternative energy sources, it is extremely unlikely that society can increase its energy production while dramatically curtailing fossil fuel use.

Another environmental impact that is relatively slow and ongoing, and even more difficult to put a price tag on, is the decline in the number of other species inhabiting our planet. According to one recent study, one in five plant species faces extinction as a result of climate change, deforestation, and urban growth.

Non-human species perform ecosystem services that only indirectly benefit our kind, but in ways that turn out to be crucial. Phytoplankton, for example, are not a direct food source for people, but comprise the base of oceanic food chains—in addition to supplying half of the oxygen produced each year by nature. The abundance of plankton in the world’s oceans has declined 40 percent since 1950, according to a recent study, for reasons not entirely clear. This is one of the main explanations for a gradual decline in atmospheric oxygen levels recorded worldwide.

A 2010 study led by Pavan Sukhdev, a former banker, set out to put a price on the world’s environmental assets; it concluded that the annual destruction of rainforests entails an ultimate cost to society of $4.5 trillion—$650 for each person on the planet. But that cost is not paid all at once; in fact, over the short term, forest cutting looks like an economic benefit as a result of the freeing up of agricultural land and the production of timber. Like financial debt, environmental costs tend to accumulate until a crisis occurs and systems collapse.

Declining oxygen levels, acidifying oceans, disappearing species, threatened oceanic food chains, changing climate—when considering planetary changes of this magnitude, it may seem that the end of economic growth is hardly the worst of humanity’s current problems. However, it is important to remember that we are counting on growth to enable us to solve or respond to environmental crises. With economic growth, we have surplus money with which to protect rainforests, save endangered species, and clean up after industrial accidents. Without economic growth, we are increasingly defenseless against environmental disasters—many of which paradoxically result from growth itself.

Talk of limits typically elicits dismissive references to the failed warnings of Thomas Malthus—the 18th century economist who reasoned that population growth would inevitably (and soon) outpace food production, leading to a general famine. Malthus was obviously wrong, at least in the short run: food production expanded throughout the 19th and 20th centuries to feed a fast-growing population. He failed to foresee the introduction of new hybrid crop varieties, chemical fertilizers, and the development of industrial farm machinery. The implication, whenever Malthus’s ghost is summoned, is that all claims that environmental limits will overtake growth are likewise wrong, and for similar reasons. New inventions and greater efficiency will always trump looming limits.

The main advantages of electrics are that their energy is used more efficiently (electric motors translate nearly all their energy into motive force, while internal combustion engines are much less efficient), they need less drive-train maintenance, and they are more environmentally benign (even if they’re running on coal-derived electricity, they usually entail lower carbon emissions due to their much higher energy efficiency). The drawbacks of electric vehicles have to do with the limited ability of batteries to store energy, as compared to conventional liquid fuels. Gasoline carries 45 megajoules per kilogram, while lithium-ion batteries can store only 0.5 MJ/kg. Improvements are possible, but the theoretical limit of chemical energy storage is still only about 3 MJ/kg. This is why we’ll never see battery-powered airliners: the batteries would be way too heavy to allow planes to get off the ground.
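The storage-density gap described above can be made concrete with the chapter’s own numbers (circa 2011; today’s lithium-ion cells do somewhat better, but the qualitative point stands). A quick sketch:

```python
# Energy density comparison using the figures cited in the text (MJ/kg).
gasoline = 45.0   # gasoline, per the text
li_ion   = 0.5    # lithium-ion battery, per the text (circa 2011)
chem_max = 3.0    # stated theoretical limit for chemical energy storage

print(f"Gasoline vs. Li-ion: {gasoline / li_ion:.0f}x more energy per kg")  # 90x
print(f"Gasoline vs. theoretical limit: {gasoline / chem_max:.0f}x")        # 15x
```

Even at the theoretical chemical-storage limit, gasoline still carries 15 times more energy per kilogram—which is why weight-sensitive applications like aviation are so hard to electrify.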

The low energy density (by weight) of batteries tends to limit the range of electric cars. This problem can be solved with hybrid power trains—using a gasoline engine to charge the batteries, as in the Chevy Volt, or to push the car directly part of the time, as with the Toyota Prius—but that adds complexity and expense.  

[End of book review of Richard Heinberg’s 2011 The End of Growth]

U.S. Army new jobs: quell social unrest from climate change, help get arctic oil

Preface. Of all the branches of government, the military is the most on top of climate change, peak oil, pandemics, power grid failure, and other disasters. I guess that shouldn’t be surprising; it’s their job to defend the U.S. against threats.

What I found interesting was that, given the coming threats, the military is proposing new job opportunities for itself in addition to fighting wars abroad. They anticipate that disorder from pandemics, climate change, financial crashes, and more might require them to be here in the U.S. to maintain order. The Army also proposes to enable and defend access to Arctic hydrocarbon resources, which climate change may make more available.

This study examines the implications of climate change over the next 50 years for the United States Army, using IPCC scenario RCP 4.5 as the assumed future to predict expected outcomes.

Related: you might want to read Nafeez Ahmed’s take on this report here: U.S. Military Could Collapse Within 20 Years Due to Climate Change, Report Commissioned By Pentagon Says. The report says a combination of global starvation, war, disease, drought, and a fragile power grid could have cascading, devastating effects.


***

Brosig M, Frawley CP, Hill A, et al (2019) Implications of climate change for the U.S. army. U.S. Army War College.  52 pages

Sea level rise, changes in water and food security, and more frequent extreme weather events are likely to result in the migration of large segments of the population. Rising seas will displace tens (if not hundreds) of millions of people, creating massive, enduring instability. This migration will be most pronounced in those regions where climate vulnerability is exacerbated by weak institutions and governance and underdeveloped civil society. Recent history has shown that mass human migrations can result in increased propensity for conflict and turmoil as new populations intermingle with and compete against established populations. More frequent extreme weather events will also increase demand for military humanitarian assistance.

Salt water intrusion into coastal areas and changing weather patterns will also compromise or eliminate fresh water supplies in many parts of the world. Additionally, warmer weather increases hydration requirements. This means that in expeditionary warfare, the Army will need to supply itself with more water. This significant logistical burden will be exacerbated on a future battlefield that requires constant movement due to the ubiquity of adversarial sensors and their deep strike capabilities.

My caption: New jobs for the military

A warming trend will also increase the range of insects that are vectors of infectious tropical diseases. This, coupled with large scale human migration from tropical nations, will increase the spread of infectious disease. The Army has tremendous logistical capabilities, unique in the world, for working in austere or unsafe environments. In the event of a significant infectious disease outbreak (domestic or international), the Army is likely to be called upon to assist in the response and containment. The authors propose working closely with the CDC on response and relief plans.

As the electorate becomes more concerned about climate change, it follows that elected officials will, as well. This may result in significant restrictions on military activities (in peacetime) that produce carbon emissions. The Department of Defense (DoD) does not currently possess an environmentally conscious mindset. Political and social pressure will eventually force the military to mitigate its environmental impact in both training and wartime. Implementation of these changes will be costly in effort, time and money.

All of these plans require energy; here are the recommendations that are directly energy related:

In light of these findings, the military must consider changes in doctrine, organization, equipping, and training to anticipate changing environmental requirements. Lagging behind public and political demands for energy efficiency and minimal environmental footprint will significantly hamstring the Department’s efforts to face national security challenges. The Department will struggle to maintain its positive public image and that will impact the military’s ability to receive the required funding to face the growing number of security challenges.

[My comment: In a sly way, this study seems to acknowledge peak oil, though it’s stated as if the cause for lack of fuel will be the public’s awareness of climate change: “Problem: potential disruptions to readiness due to restrictions on fuel use”]

The decrease in Arctic sea ice and associated sea level rise will bring conflicting claims to newly-accessible natural resources. It will also introduce a new theater of direct military contact between an increasingly belligerent Russia and other Arctic nations, including the U.S. Yet the opening of the Arctic will also increase commercial opportunities. Whether due to increased commercial shipping traffic or expanded opportunities for hydrocarbon extraction, increased economic activity will drive a requirement for increased military expenditures specific to that region. The study recommends training and equipment to conduct future Arctic operations.

Power grid vulnerabilities: improve grid near military installations and fund internal power generation from solar/battery farms and small nuclear reactors.

The Arctic

According to the Intergovernmental Panel on Climate Change (IPCC), since satellite monitoring of the Arctic began in 1979, Arctic sea ice extent has decreased by 3.5 to 4.1% per decade (“Climate Change 2014 Synthesis Report.” Intergovernmental Panel on Climate Change. 2015. http://ipcc.ch/report/ar5/syr/).

According to a 2008 U.S. Geological Survey study, the Arctic likely holds approximately one quarter of the world’s undiscovered hydrocarbon reserves, with 20% of them potentially in U.S. territory.

Since territorial claims aren’t well defined, this is mainly a Navy and Air Force issue; however, the Army will be tasked with wide-area security and reconnaissance roles as part of any joint efforts to secure Arctic interests.

Russia has embarked on a rapid build-up in the Arctic, including expensive refurbishment of Soviet era Arctic bases. Russia’s current Arctic plans include the opening of ten search and rescue stations, 16 deep water ports, 13 airfields and ten air defense sites.  These developments create not only security outposts for Russia, but also threats to the U.S. mainland. Russia’s recent development of KH-101/102 air launched cruise missiles and SSC-8 ground launched cruise missiles potentially put much of the United States at risk from low altitude, radar evading, nuclear capable missiles.   

POWER GRID STRESS

The power grid that serves the United States is aging and continues to operate without coordinated, significant infrastructure investment. Vulnerabilities exist in electricity-generating power plants, electric transmission infrastructure, and distribution system components. Power transformers average over 40 years of age, and 70 percent of transmission lines are 25 years or older. The U.S. national power grid is susceptible to coordinated cyber or physical attacks; electromagnetic pulse (EMP) attacks; space weather; and other natural events, including the stressors of a changing climate (“Transmission & Distribution Infrastructure: A Harris Williams & Co. White Paper.” Harris Williams & Co. 2014).

If the power grid infrastructure collapsed:

  • Loss of perishable foods and medications
  • Loss of water and wastewater distribution systems
  • Loss of heating/air conditioning and electrical lighting systems
  • Loss of computer, telephone, and communications systems (including airline flights, satellite networks, and GPS services)
  • Loss of public transportation systems
  • Loss of fuel distribution systems and fuel pipelines
  • Loss of all electrical systems that do not have back-up power

There are 16 critical infrastructure sectors (here) that would be affected by a blackout: chemical, commercial facilities, communications, critical manufacturing, dams, defense industrial base, emergency services, energy, financial services, food and agriculture, government facilities, healthcare and public health, information technology, nuclear reactors / materials / waste, transportation systems, water and wastewater systems.

The Congressional Electro-Magnetic Pulse (EMP) Commission, in 2008, estimated it would cost $2 billion to harden just the grid’s critical nodes. The Task Force on National and Homeland Security calculates an additional $10 to $30 billion and many years necessary for a complete grid overhaul. The EMP Commission further cited that some of the very improvements of network interconnectedness created through the updated Supervisory Control and Data Acquisition (SCADA) network, which control power distribution around the country, introduced additional weaknesses to cyber-attack.

Department of Defense installations are 99 percent reliant on the U.S. power grid for electrical power generation, due to the decommissioning of autonomous power generation capability as a budgetary cost-saving measure over the last two decades.

Global reductions in demand for hydrocarbons mean that gasoline, diesel, and jet fuel should become less expensive. On the other hand, reduced demand tends to reduce incentives to explore potential oil fields or build new refining facilities. Much of the U.S.’s domestic oil extraction is unprofitable at oil prices below $30 a barrel. Technological advances tend to push this number lower, but exhaustion of oil fields tends to push the number higher. In all scenarios, global declines in oil consumption increase the sensitivity of oil markets to the choices of large consumers like the U.S. DoD.

The automated, A.I.-enhanced force of the Army’s future is one that runs on electricity, not jet fuel (JP-8). More efficient or resilient production of electricity through micro-nuclear power generation or improved solar arrays can fundamentally alter the mobility and the logistical challenges of a mechanized force. Light, quick-charging batteries (super-capacitors) have tremendous value in such a force; so does the wireless transmission of electrical current.

[many pages on climate change]

Then comes a request for $100 million for fighting in Middle Eastern deserts: “The U.S. Army is precipitously close to mission failure concerning hydration of the force in a contested arid environment. The experience and best practices of the last 17 years of conflict in Afghanistan, Iraq, Syria, and Africa rely heavily on logistics force structures to support the warfighter with water, mostly procured through contracted means of bottled water, local wells, and Reverse Osmosis Water Purification Units (ROWPU). The ability to supply this amount of water in the most demanding environment is costly in money, personnel, infrastructure, and force structure. The calculations for water (8.34 pounds per gallon) in an arid environment equate to 66 pounds of water per soldier. Water is 30-40% of the force sustainment requirement. The Army must develop advanced technologies to capture ambient humidity.”

Daily water planning figures per soldier: temperate 12.2 gallons, tropical 15.4, arid 15.8.
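Converting those daily planning figures to weight, at the report’s 8.34 pounds per gallon, shows the scale of the logistical burden. A minimal sketch; note that the report’s separately quoted “66 pounds of water per soldier” works out to only about 8 gallons, roughly half the arid-climate daily figure:

```python
# Per-soldier daily water weight, from the report's planning figures.
LB_PER_GALLON = 8.34  # weight of water, as cited in the report

daily_gallons = {"temperate": 12.2, "tropical": 15.4, "arid": 15.8}

for climate, gallons in daily_gallons.items():
    print(f"{climate:>9}: {gallons * LB_PER_GALLON:.0f} lb per soldier per day")

# The report's "66 pounds per soldier" figure implies:
print(f"66 lb is about {66 / LB_PER_GALLON:.1f} gallons")
```

At well over 100 pounds per soldier per day in hot climates, water alone dwarfs most other sustainment classes, which is why the report puts it at 30-40% of the force sustainment requirement.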

Current planning methodologies remain heavily invested in bottled water, meaning a larger force is needed to transport it.

In the 2000s in Iraq, over 864,000 bottles of water were consumed each month at one Forward Operating Base (FOB), with that number doubling during hotter months (Browne, Mathuel. “Marines Invest in New System to Purify Water on the Go.” Armed with Science: The Official US Defense Department Science Blog. 2017. http://science.dodlive.mil/2017/02/01/marines-invest-in-new-system-to-purify-water-on-the-go/).

ARCTIC OIL

Increased accessibility to the region for economic activity will consequently increase the security requirements and competition in the region. Currently Russia is rapidly expanding its Arctic military capabilities and capacity. The U.S. military must immediately begin expanding its capability to operate in the Arctic to defend economic interests and to partner with allies across the region.

As ice melts there will be increased shipping, population shifts to the region and increased competition to extract the vast hydrocarbon resources more readily available as the ice sheets contract. These changes will drive an expansion of security efforts from nations across the region as they vie to claim and protect the economic resources of the region.

The competition for resources in the Arctic will increase security requirements and the potential for conflict. The Army will not be excluded from those requirements or any conflict that develops. The Army will simply be unprepared for the mission and the environment in which it will occur. As Russian activity expands in the Arctic, both the Navy and the Air Force will compete for resources to meet the Russian threat. The Army must compete as well.

The Army needs to focus on the development of an infantry carrier vehicle with low surface pressure to maximize maneuverability in adverse terrain. An amphibious capable vehicle that has high weight distribution characteristics across the drive (either wheeled or tracked) contact patches will increase the speed of maneuver necessary for units to conduct wide area security across greater coverage areas.

PANDEMICS AND DISEASE (from climate change, yet more jobs for the army):  As the largest source of potential capacity and capability to respond to widespread disease outbreaks in the United States, the military should be prepared to execute defense support to civil authority (DSCA) missions of this type.

NUCLEAR POWER INDUSTRY

Currently, the Department of Energy conducts tritium production using 2 to 4 commercial nuclear pressurized water reactors (PWRs) run by the Tennessee Valley Authority (TVA). This commercial capability currently meets the U.S. stockpile tritium production requirement; however, due to the overall age of the U.S. nuclear power industry, future PWRs may not be available to continue tritium production. The loss of tritium production directly reduces the effectiveness of the U.S. nuclear stockpile by reducing or hindering the overall yield produced by the nuclear warheads. Without an effective U.S. nuclear stockpile, the U.S. cannot deter peer nuclear competitors and rogue nuclear states, raising the risk of all-out war against the United States.

Directly tied to tritium production is the future of the nuclear power industry. It is filled with an aging fleet of reactors built in the late 1960s and 1970s. Most receive a commercial license from the Nuclear Regulatory Commission (NRC) to operate for 30 years on average, but many have sought or are seeking extensions to 40 and 50 years. The age of the industry and the lack of new reactors coming on-line creates a significant risk to both the environment and the maintenance of the U.S. nuclear stockpile. “The highest priority of nuclear innovation policy should be to promote the availability of an advanced nuclear power system 15 to 20 years from now.”

Increasing nuclear power's share of U.S. baseload generation from a mere 20% (and declining) to more than 80% (covering the 60% currently supplied by coal) could significantly reduce greenhouse gases.172 The government will need to lead this expansion, which runs against fossil fuel business paradigms that have existed for more than 100 years. Any nuclear industry expansion must include a long-term review of tritium production requirements and analyze how the government will maintain its required tritium production capability.

[natters on and on about the need for nuclear and tritium for bombs, with no mention of how to dispose of nuclear waste, or of the lesson learned from Fukushima that spent nuclear fuel pools outside a containment vessel are the real hazard (see “A Nuclear spent fuel fire at Peach Bottom in Pennsylvania could force 8.8 million people to evacuate”)]

CONCLUSION

It is useful to remind ourselves regularly of the capacity of human beings to persist in stupid beliefs in the face of significant, contradictory evidence.  Mitigation of new large-scale stresses requires a commitment to learning, systematically, about what is happening.

Life is full of the unexpected, or the overlooked obvious. The term “black swan event” describes surprises of an especially momentous and nasty type. Popularized by Nassim Nicholas Taleb in his 2007 book of the same title, the idea is that black swan events have three characteristics: “rarity, extreme impact, and retrospective (though not prospective) predictability.”176 In recent years, the concept of black swan events has gained currency in political, military, and financial contexts.

The black swan has a venerable history as an illustration of the ancient epistemological problem of induction: simply stated, no number of observations of a given relationship is sufficient to prove that a different relationship cannot occur. No amount of white swan sightings can guarantee that a swan of a different color is not out there waiting to be seen.

Three maxims can help us avoid dangerous failures of recognition, and speed learning when unexpected things happen.

1. Everything we believe about the world is provisional – “serving for the time being.” Adding the words “so far” to assertions about reality reminds us of this.

2. Unjustified certainty is very costly. The greater your certainty that you are right when you are wrong, the longer it will take you to recognize and incorporate new data into your system of belief, and to change your mind. General Douglas MacArthur was a confident man, and this confidence usually served him well, such as when he undertook the risky landings at Incheon in the Korean War. Yet MacArthur’s confidence betrayed him when China entered the war. He was certain that this would not happen, and MacArthur’s certainty delayed his recognition of a key change, exposing forces under his command to terrible risk. Confidence in your beliefs is valuable only insofar as it results in different choices (e.g., I choose A or B). Beyond that point, confidence has increasing costs.

3. Pay special attention to data that is unlikely in light of your current beliefs; it has much more information per unit, all else equal. In this sense, information content is measured as the potential to change how you think about the world. Information that is probable in light of your beliefs will have minimal effects on your understanding. Improbable information, if incorporated, will change it.
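This maxim has a standard quantitative form in information theory: the information content (surprisal) of an observation is −log p of its probability, so improbable observations carry more information. The text names no formula, so this framing is my addition; a minimal sketch:

```python
import math

def surprisal_bits(p: float) -> float:
    """Information content, in bits, of observing an event of probability p."""
    return -math.log2(p)

# An expected observation (p = 0.99) carries almost no information...
print(round(surprisal_bits(0.99), 3))   # 0.014 bits
# ...while an improbable one (p = 0.01) carries far more.
print(round(surprisal_bits(0.01), 3))   # 6.644 bits
```

This matches the maxim: data you already expected barely moves your beliefs, while data that is unlikely under your current model carries the most potential to change them.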

Posted in Military | Comments Off on U.S. Army new jobs: quell social unrest from climate change, help get arctic oil

Reforestation for the return to biomass after fossil fuels

Preface. Below are excerpts from a New York Times article about forests.

My book “Life After Fossil Fuels: A Reality Check on Alternative Energy” explains why the myriad ways we use fossil fuels can’t be electrified (or hydrogenized or anything else). Not even the electric grid can be 100% renewable.

Only biomass can do it all: the 5,000 years of civilizations that preceded fossil fuels used biomass for energy as well as infrastructure. The least we could do for our descendants is to plant forests so they don't freeze in the dark, can build homes, carts, and more, and can rebuild anew (and bury nuclear wastes).


***

Jabr F (2020) The Social Life of Forests. Trees appear to communicate and cooperate through subterranean networks of fungi. What are they sharing with one another? New York Times.

When Europeans arrived on America’s shores in the 1600s, forests covered one billion acres of the future United States — close to half the total land area. Between 1850 and 1900, U.S. timber production surged to more than 35 billion board feet from five billion. By 1907, nearly a third of the original expanse of forest — more than 260 million acres — was gone. As of 2012, the United States had more than 760 million forested acres. The age, health and composition of America’s forests have changed significantly, however. Although forests now cover 80 percent of the Northeast, for example, less than 1 percent of its old-growth forest remains intact.

And though clearcutting is not as common as it once was, it is still practiced on about 40 percent of logged acres in the United States and 80 percent of them in Canada. In a thriving forest, a lush understory captures huge amounts of rainwater, and dense root networks enrich and stabilize the soil. Clearcutting removes these living sponges and disturbs the forest floor, increasing the chances of landslides and floods, stripping the soil of nutrients and potentially releasing stored carbon to the atmosphere. When sediment falls into nearby rivers and streams, it can kill fish and other aquatic creatures and pollute sources of drinking water. The abrupt felling of so many trees also harms and evicts countless species of birds, mammals, reptiles and insects.

Humans have relied on forests for food, medicine and building materials for many thousands of years. Forests have likewise provided sustenance and shelter for countless species over the eons. But they are important for more profound reasons too. Forests function as some of the planet’s vital organs. The colonization of land by plants between 425 and 600 million years ago, and the eventual spread of forests, helped create a breathable atmosphere with the high level of oxygen we continue to enjoy today. Forests suffuse the air with water vapor, fungal spores and chemical compounds that seed clouds, cooling Earth by reflecting sunlight and providing much-needed precipitation to inland areas that might otherwise dry out. Researchers estimate that, collectively, forests store somewhere between 400 and 1,200 gigatons of carbon, potentially exceeding the atmospheric pool.

Crucially, a majority of this carbon resides in forest soils, anchored by networks of symbiotic roots, fungi and microbes. Each year, the world’s forests capture more than 24 percent of global carbon emissions, but deforestation — by destroying and removing trees that would otherwise continue storing carbon — can substantially diminish that effect. When a mature forest is burned or clear-cut, the planet loses an invaluable ecosystem and one of its most effective systems of climate regulation. The razing of an old-growth forest is not just the destruction of magnificent individual trees — it’s the collapse of an ancient republic whose interspecies covenant of reciprocation and compromise is essential for the survival of Earth as we’ve known it.

By the time she was in grad school at Oregon State University, however, Simard, today 60 years old and a professor of ecology at the University of British Columbia, understood that commercial clearcutting had largely superseded the sustainable logging practices of the past. Loggers were replacing diverse forests with homogeneous plantations, evenly spaced in upturned soil stripped of most underbrush. Without any competitors, the thinking went, the newly planted trees would thrive. Instead, they were frequently more vulnerable to disease and climatic stress than trees in old-growth forests. In particular, Simard noticed that up to 10 percent of newly planted Douglas fir were likely to get sick and die whenever nearby aspen, paper birch and cottonwood were removed. The reasons were unclear. The planted saplings had plenty of space, and they received more light and water than trees in old, dense forests. So why were they so frail?

Simard suspected that the answer was buried in the soil. Underground, trees and fungi form partnerships known as mycorrhizas: Threadlike fungi envelop and fuse with tree roots, helping them extract water and nutrients like phosphorus and nitrogen in exchange for some of the carbon-rich sugars the trees make through photosynthesis. Research had demonstrated that mycorrhizas also connected plants to one another and that these associations might be ecologically important, but most scientists had studied them in greenhouses and laboratories, not in the wild. For her doctoral thesis, Simard decided to investigate fungal links between Douglas fir and paper birch in the forests of British Columbia. Apart from her supervisor, she didn’t receive much encouragement from her mostly male peers. “The old foresters were like, Why don’t you just study growth and yield?” Simard told me. “I was more interested in how these plants interact. They thought it was all very girlie.”

Simard has studied webs of root and fungi in the Arctic, temperate and coastal forests of North America for nearly three decades. Her initial inklings about the importance of mycorrhizal networks were prescient, inspiring whole new lines of research that ultimately overturned longstanding misconceptions about forest ecosystems. By analyzing the DNA in root tips and tracing the movement of molecules through underground conduits, Simard has discovered that fungal threads link nearly every tree in a forest — even trees of different species. Carbon, water, nutrients, alarm signals and hormones can pass from tree to tree through these subterranean circuits. Resources tend to flow from the oldest and biggest trees to the youngest and smallest. Chemical alarm signals generated by one tree prepare nearby trees for danger. Seedlings severed from the forest’s underground lifelines are much more likely to die than their networked counterparts. And if a tree is on the brink of death, it sometimes bequeaths a substantial share of its carbon to its neighbors.

Although Simard’s peers were skeptical and sometimes even disparaging of her early work, they now generally regard her as one of the most rigorous and innovative scientists studying plant communication and behavior. David Janos, co-editor of the scientific journal Mycorrhiza, characterized her published research as “sophisticated, imaginative, cutting-edge.” Jason Hoeksema, a University of Mississippi biology professor who has studied mycorrhizal networks, agreed: “I think she has really pushed the field forward.” Some of Simard’s studies now feature in textbooks and are widely taught in graduate-level classes on forestry and ecology. She was also a key inspiration for a central character in Richard Powers’s 2019 Pulitzer Prize-winning novel, “The Overstory”: the visionary botanist Patricia Westerford. In May, Knopf will publish Simard’s own book, “Finding the Mother Tree,” a vivid and compelling memoir of her lifelong quest to prove that “the forest was more than just a collection of trees.”

Since Darwin, biologists have emphasized the perspective of the individual. They have stressed the perpetual contest among discrete species, the struggle of each organism to survive and reproduce within a given population and, underlying it all, the single-minded ambitions of selfish genes. Now and then, however, some scientists have advocated, sometimes controversially, for a greater focus on cooperation over self-interest and on the emergent properties of living systems rather than their units.

Before Simard and other ecologists revealed the extent and significance of mycorrhizal networks, foresters typically regarded trees as solitary individuals that competed for space and resources and were otherwise indifferent to one another. Simard and her peers have demonstrated that this framework is far too simplistic. An old-growth forest is neither an assemblage of stoic organisms tolerating one another’s presence nor a merciless battle royale: It’s a vast, ancient and intricate society. There is conflict in a forest, but there is also negotiation, reciprocity and perhaps even selflessness. The trees, understory plants, fungi and microbes in a forest are so thoroughly connected, communicative and codependent that some scientists have described them as superorganisms. Recent research suggests that mycorrhizal networks also perfuse prairies, grasslands, chaparral and Arctic tundra — essentially everywhere there is life on land. Together, these symbiotic partners knit Earth’s soils into nearly contiguous living networks of unfathomable scale and complexity. “I was taught that you have a tree, and it’s out there to find its own way,” Simard told me. “It’s not how a forest works, though.”

In some of her earliest and most famous experiments, Simard planted mixed groups of young Douglas fir and paper birch trees in forest plots and covered the trees with individual plastic bags. In each plot, she injected the bags surrounding one tree species with radioactive carbon dioxide and the bags covering the other species with a stable carbon isotope — a variant of carbon with an unusual number of neutrons. The trees absorbed the unique forms of carbon through their leaves. Later, she pulverized the trees and analyzed their chemistry to see if any carbon had passed from species to species underground. It had. In the summer, when the smaller Douglas fir trees were generally shaded, carbon mostly flowed from birch to fir. In the fall, when evergreen Douglas fir was still growing and deciduous birch was losing its leaves, the net flow reversed. As her earlier observations of failing Douglas fir had suggested, the two species appeared to depend on each other. No one had ever traced such a dynamic exchange of resources through mycorrhizal networks in the wild. In 1997, part of Simard’s thesis was published in the prestigious scientific journal Nature — a rare feat for someone so green. Nature featured her research on its cover with the title “The Wood-Wide Web,” a moniker that eventually proliferated through the pages of published studies and popular science writing alike.

In 2002, Simard secured her current professorship at the University of British Columbia, where she continued to study interactions among trees, understory plants and fungi. In collaboration with students and colleagues around the world, she made a series of remarkable discoveries. Mycorrhizal networks were abundant in North America’s forests. Most trees were generalists, forming symbioses with dozens to hundreds of fungal species. In one study of six Douglas fir stands measuring about 10,000 square feet each, almost all the trees were connected underground by no more than three degrees of separation; one especially large and old tree was linked to 47 other trees and projected to be connected to at least 250 more; and seedlings that had full access to the fungal network were 26 percent more likely to survive than those that did not.
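The “degrees of separation” in that Douglas fir study is just graph distance: treat each tree as a node, each shared fungal connection as an edge, and the separation between two trees is the length of the shortest path between them, found with breadth-first search. A toy illustration (the five-tree network here is invented, not Simard's data):

```python
from collections import deque

def degrees_of_separation(network: dict, start: str, goal: str) -> int:
    """Shortest number of mycorrhizal links between two trees, via BFS.

    Returns -1 if the trees are not connected at all.
    """
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        tree, depth = queue.popleft()
        if tree == goal:
            return depth
        for neighbor in network.get(tree, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, depth + 1))
    return -1

# Hypothetical network: one old "hub" tree linked to several others.
toy = {
    "hub_fir": ["fir_a", "birch_b", "fir_c"],
    "fir_a": ["hub_fir"],
    "birch_b": ["hub_fir", "seedling_d"],
    "fir_c": ["hub_fir"],
    "seedling_d": ["birch_b"],
}
print(degrees_of_separation(toy, "fir_a", "seedling_d"))   # 3
```

In this toy graph every tree reaches every other within three hops through the hub, which is the shape of the finding above: a few large, old trees act as highly connected hubs that keep the whole network's diameter small.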

Depending on the species involved, mycorrhizas supplied trees and other plants with up to 40 percent of the nitrogen they received from the environment and as much as 50 percent of the water they needed to survive. Below ground, trees traded between 10 and 40 percent of the carbon stored in their roots. When Douglas fir seedlings were stripped of their leaves and thus likely to die, they transferred stress signals and a substantial sum of carbon to nearby ponderosa pine, which subsequently accelerated their production of defensive enzymes. Simard also found that denuding a harvested forest of all trees, ferns, herbs and shrubs — a common forestry practice — did not always improve the survival and growth of newly planted trees. In some cases, it was harmful.

At this point other researchers have replicated most of Simard’s major findings. It’s now well accepted that resources travel among trees and other plants connected by mycorrhizal networks. Most ecologists also agree that the amount of carbon exchanged among trees is sufficient to benefit seedlings, as well as older trees that are injured, entirely shaded or severely stressed, but researchers still debate whether shuttled carbon makes a meaningful difference to healthy adult trees. On a more fundamental level, it remains unclear exactly why resources are exchanged among trees in the first place, especially when those trees are not closely related.

“Darwin’s theory of evolution by natural selection is obviously 19th-century capitalism writ large,” wrote the evolutionary biologist Richard Lewontin.

As Darwin well knew, however, ruthless competition was not the only way that organisms interacted. Ants and bees died to protect their colonies. Vampire bats regurgitated blood to prevent one another from starving. Vervet monkeys and prairie dogs cried out to warn their peers of predators, even when doing so put them at risk. At one point Darwin worried that such selflessness would be “fatal” to his theory. In subsequent centuries, as evolutionary biology and genetics matured, scientists converged on a resolution to this paradox: Behavior that appeared to be altruistic was often just another manifestation of selfish genes — a phenomenon known as kin selection. Members of tight-knit social groups typically share large portions of their DNA, so when one individual sacrifices for another, it is still indirectly spreading its own genes.

Kin selection cannot account for the apparent interspecies selflessness of trees, however — a practice that verges on socialism. Some scientists have proposed a familiar alternative explanation: Perhaps what appears to be generosity among trees is actually selfish manipulation by fungi. Descriptions of Simard’s work sometimes give the impression that mycorrhizal networks are inert conduits that exist primarily for the mutual benefit of trees, but the thousands of species of fungi that link trees are living creatures with their own drives and needs. If a plant relinquishes carbon to fungi on its roots, why would those fungi passively transmit the carbon to another plant rather than using it for their own purposes? Maybe they don’t. Perhaps the fungi exert some control: What looks like one tree donating food to another may be a result of fungi redistributing accumulated resources to promote themselves and their favorite partners.

“Where some scientists see a big cooperative collective, I see reciprocal exploitation,” said Toby Kiers, a professor of evolutionary biology at Vrije Universiteit Amsterdam. “Both parties may benefit, but they also constantly struggle to maximize their individual payoff.” Kiers is one of several scientists whose recent studies have found that plants and symbiotic fungi reward and punish each other with what are essentially trade deals and embargoes, and that mycorrhizal networks can increase conflict among plants. In some experiments, fungi have withheld nutrients from stingy plants and strategically diverted phosphorous to resource-poor areas where they can demand high fees from desperate plants.

Several of the ecologists I interviewed agreed that regardless of why and how resources and chemical signals move among the various members of a forest’s symbiotic webs, the result is still the same: What one tree produces can feed, inform or rejuvenate another. Such reciprocity does not necessitate universal harmony, but it does undermine the dogma of individualism and temper the view of competition as the primary engine of evolution.

The most radical interpretation of Simard’s findings is that a forest behaves “as though it’s a single organism,” as she says in her TED Talk. Some researchers have proposed that cooperation within or among species can evolve if it helps one population outcompete another — an altruistic forest community outlasting a selfish one, for example. The theory remains unpopular with most biologists, who regard natural selection above the level of the individual to be evolutionarily unstable and exceedingly rare. Recently, however, inspired by research on microbiomes, some scientists have argued that the traditional concept of an individual organism needs rethinking and that multicellular creatures and their symbiotic microbes should be regarded as cohesive units of natural selection. Even if the same exact set of microbial associates is not passed vertically from generation to generation, the functional relationships between an animal or plant species and its entourage of microorganisms persist — much like the mycorrhizal networks in an old-growth forest. Humans are not the only species that inherits the infrastructure of past communities.

When a seed germinates in an old-growth forest, it immediately taps into an extensive underground community of interspecies partnerships. Uniform plantations of young trees planted after a clear-cut are bereft of ancient roots and their symbiotic fungi. The trees in these surrogate forests are much more vulnerable to disease and death because, despite one another’s company, they have been orphaned. Simard thinks that retaining some mother trees, which have the most robust and diverse mycorrhizal networks, will substantially improve the health and survival of future seedlings — both those planted by foresters and those that germinate on their own.

Since at least the late 1800s, North American foresters have devised and tested dozens of alternatives to standard clearcutting: strip cutting (removing only narrow bands of trees), shelterwood cutting (a multistage process that allows desirable seedlings to establish before most overstory trees are harvested) and the seed-tree method (leaving behind some adult trees to provide future seed), to name a few. These approaches are used throughout Canada and the United States for a variety of ecological reasons, often for the sake of wildlife, but mycorrhizal networks have rarely if ever factored into the reasoning.

Ryan told me about the 230,000-acre Menominee Forest in northeastern Wisconsin, which has been sustainably harvested for more than 150 years. Sustainability, the Menominee believe, means “thinking in terms of whole systems, with all their interconnections, consequences and feedback loops.” They maintain a large, old and diverse growing stock, prioritizing the removal of low-quality and ailing trees over more vigorous ones and allowing trees to age 200 years or more — so they become what Simard might call grandmothers. Ecology, not economics, guides the management of the Menominee Forest, but it is still highly profitable. Since 1854, more than 2.3 billion board feet have been harvested — nearly twice the volume of the entire forest — yet there is now more standing timber than when logging began. “To many, our forest may seem pristine and untouched,” the Menominee wrote in one report. “In reality, it is one of the most intensively managed tracts of forest in the Lake States.”

Diverse microbial communities inhabit our bodies, modulating our immune systems and helping us digest certain foods. The energy-producing organelles in our cells known as mitochondria were once free-swimming bacteria that were subsumed early in the evolution of multicellular life. Through a process called horizontal gene transfer, fungi, plants and animals — including humans — have continuously exchanged DNA with bacteria and viruses. From its skin, fur or bark right down to its genome, any multicellular creature is an amalgam of other life-forms. Wherever living things emerge, they find one another, mingle and meld.

Five hundred million years ago, as both plants and fungi continued oozing out of the sea and onto land, they encountered wide expanses of barren rock and impoverished soil. Plants could spin sunlight into sugar for energy, but they had trouble extracting mineral nutrients from the earth. Fungi were in the opposite predicament. Had they remained separate, their early attempts at colonization might have faltered or failed. Instead, these two castaways — members of entirely different kingdoms of life — formed an intimate partnership. Together they spread across the continents, transformed rock into rich soil and filled the atmosphere with oxygen.

Eventually, different types of plants and fungi evolved more specialized symbioses. Forests expanded and diversified, both above- and below ground. What one tree produced was no longer confined to itself and its symbiotic partners. Shuttled through buried networks of root and fungus, the water, food and information in a forest began traveling greater distances and in more complex patterns than ever before. Over the eons, through the compounded effects of symbiosis and coevolution, forests developed a kind of circulatory system. Trees and fungi were once small, unacquainted ocean expats, still slick with seawater, searching for new opportunities. Together, they became a collective life form of unprecedented might and magnanimity.

Posted in Deforestation | Comments Off on Reforestation for the return to biomass after fossil fuels

The History of Drunkenness

Preface. This is a book review of “A Short History of Drunkenness” by Mark Forsyth.

I expect alcohol to be a big part of life postcarbon, not only because most cultures have embraced it, but also to drown the sorrows and memories of the time when we lived like Gods & Goddesses during the brief oil age. Those of you who survive The Great Simplification may find brewing a good way to make a living.

Taxation of alcohol is also how governments pay for wars and how elites grow rich, and alcohol plays a large role in many religions:

There is, in the Western world, no tradition of religious drunkenness. But it is a practice found across history and across the globe. From Mexico to the Pacific islands to Ancient China there is or has been drunken mysticism, god found at the bottom of a bottle.

As William James wrote: “The sway of alcohol over mankind is unquestionably due to its power to stimulate the mystical faculties of human nature, usually crushed to earth by the cold facts and dry criticisms of the sober hour. Sobriety diminishes, discriminates, and says no; drunkenness expands, unites, and says yes. It is in fact the great exciter of the Yes function in man. It brings its votary from the chill periphery of things to the radiant core. It makes him for the moment one with truth. Not through mere perversity do men run after it. To the poor and the unlettered it stands in the place of symphony concerts and of literature. The drunken consciousness is one bit of the mystic consciousness.”


***

Mark Forsyth. 2018. A Short History of Drunkenness: How, Why, Where, and When Humankind Has Gotten Merry from the Stone Age to the Present.

Drunkenness

Drunkenness is near universal. Almost every culture in the world has booze. The only ones that weren’t too keen—North America and Australia—have been colonized by those who were. And at every time and in every place, drunkenness is a different thing. It’s a celebration, a ritual, an excuse to hit people, a way of making decisions or ratifying contracts, and a thousand other peculiar practices. When the Ancient Persians had a big political decision to make they would debate the matter twice: once drunk, and once sober. If they came to the same conclusion both times, they acted.

History books like to tell us that so-and-so was drunk, but they don’t explain the minutiae of drinking. Where was it done? With whom? At what time of day? Drinking has always been surrounded by rules, but they rarely get written down. In present-day Britain, for example, though there is no law in place, absolutely everybody knows that you must not drink before noon, except, for some reason, in airports and at cricket matches.

All we know for sure is that if a male fruit fly has his romantic advances spurned by a cruel and disdainful female fruit fly, he ups his alcohol consumption dramatically. Unfortunately for animals, alcohol doesn’t occur naturally in large enough quantities to allow for a proper party.  Though sometimes it does. There’s an island off Panama where the mantled howler monkey can feast happily on the fallen fruit of the astrocaryum palm (4.5 percent ABV). They get boisterous and noisy, and then they get sleepy and stumbly, and then sometimes they fall out of trees and injure themselves. If you adjust their alcohol intake for bodyweight, they can get through the equivalent of two bottles of wine in thirty minutes. But they are a rarity.

What happens if you give a whole colony of rats an open bar? Actually, they're rather civilized. They go a bit crazy for the first few days, but then most of them settle down to two drinks a day: one just before feeding (which the scientists refer to as the cocktail hour) and one just before bedtime (the nightcap). Every three or four days there's a spike in alcohol consumption as all the rats get together for little rat parties. Rat colonies usually have one dominant male, the King Rat, who is a teetotaler. Alcohol consumption is highest among the males with the lowest social status. They drink to calm their nerves, they drink to forget their worries, they drink, it seems, because they're failures.

Load a couple of barrels of beer onto the back of a pickup truck, drive to somewhere near the elephants, take the lids off and let them have a sip. There’s usually a bit of jostling and the big bull elephants take most of it. But you can then observe them stumbling around and falling asleep and it’s all rather amusing. Even this, though, can go wrong. One scientist who allowed a dominant bull to get a bit too pissed found himself having to break up a fight between a soused elephant and a rhino. Usually, elephants don’t attack rhinos, but the beer makes them quarrelsome.

On the following morning monkeys who drank were very cross and dismal; they held their aching heads with both hands and wore a most pitiable expression: when beer or wine was offered them, they turned away with disgust, but relished the juice of lemons. If, Darwin thought, man and monkey both react the same way to hangovers, they must be related. This wasn't his only proof, but it was a start in proving that bishops were primates.

From the New Yorker: In “Descent of Man,” Darwin states, “Many kinds of monkeys have a strong taste for . . . spirituous liquors.” And he cites the reported effects of the monkeys' being exposed to strong beer—“cross and dismal . . . aching heads . . . a most pitiable expression”—as suggestive evidence for the evolutionary affinity between humans and primates. “These trifling facts prove how similar the nerves of taste must be in monkeys and man, and how similarly their whole nervous system is affected”—by alcohol.

Humans are designed to drink. We’re really damned good at it. Better than any other mammal, except maybe the Malaysian tree shrew. Never get into a drinking contest with a Malaysian tree shrew; or, if you do, don’t let them insist that you adjust for bodyweight. They can take nine glasses of wine and be none the worse for it. That’s because they’ve evolved to survive on fermented palm nectar. For millions of years evolution has been naturally selecting the best shrew drinkers in Malaysia and now they’re champions. But we are the same. We evolved to drink. Ten million years ago our ancestors came down from the trees. Why they did this is not entirely clear, but it may well be that they were after the lovely overripe fruit that you find on the forest floor. That fruit has more sugar in it and more alcohol. So we developed noses that could smell the alcohol at a distance. The alcohol was a marker that could lead us to the sugar.

Alcohol has led us to our food, alcohol has made us want to eat our food, but now we need to process the alcohol; otherwise we’ll just become food for somebody else. It’s hard enough to fight off a prehistoric predator when you’re sober, but trying to punch a saber-toothed tiger when you’re five sheets to the wind is a nightmare.

So now that we’d acquired the taste, we needed—evolutionarily—to develop a coping mechanism. There is one quite precise genetic mutation that occurred ten million years ago that makes us process alcohol nearly as well as a Malaysian shrew: we started to produce a particular enzyme. Humans (or the ancestors of humans) were suddenly able to drink all the other apes under the table. For a modern human, 10% of the enzyme machinery in your liver is devoted to converting alcohol into energy.

From the internet: Once alcohol has entered your bloodstream it remains in your body until it is processed. About 90–98% of the alcohol that you drink is broken down in your liver; the other 2–10% is removed in your urine, breathed out through your lungs or excreted in your sweat. The average person will take about an hour to process 10 grams of alcohol, which is the amount of alcohol in a standard drink. So if you drink alcohol faster than your body can process it, your blood alcohol level will continue to rise.
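The arithmetic quoted above lends itself to a quick sketch. A minimal model in Python, assuming a constant elimination rate of 10 grams per hour and 10-gram standard drinks (the figures come from the quote; the function and the hour-by-hour bookkeeping are mine, purely illustrative, not physiology):

```python
# A rough hour-by-hour model of alcohol accumulation, using only the
# figures quoted above: the body clears roughly 10 g per hour, and a
# standard drink contains roughly 10 g. Both are illustrative assumptions.

def alcohol_remaining(drinks_per_hour, hours, grams_per_drink=10.0, cleared_per_hour=10.0):
    """Grams of unprocessed alcohol in the body after `hours` of steady drinking."""
    remaining = 0.0
    for _ in range(hours):
        remaining += drinks_per_hour * grams_per_drink      # intake this hour
        remaining = max(0.0, remaining - cleared_per_hour)  # the liver's hourly work
    return remaining

print(alcohol_remaining(1, 5))  # 0.0  -- one drink an hour: the liver keeps pace
print(alcohol_remaining(2, 5))  # 50.0 -- two an hour: a 10 g backlog every hour
```

The point the quote makes falls straight out of the model: at one standard drink per hour nothing accumulates, while anything faster builds a backlog the liver never catches up with.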

Benjamin Franklin, Founding Father of the United States, famously observed that the existence of wine was “proof that God loves us, and loves to see us happy.” He also made a significant observation about human anatomy: To confirm still more your piety and gratitude to Divine Providence, reflect upon the situation which it has given to the elbow. You see in animals who are intended to drink the waters that flow upon the earth, that if they have long legs, they have also a long neck, so that they can get at their drink without kneeling down. But man, who was destined to drink wine, is framed in a manner that he may raise the glass to his mouth. If the elbow had been placed nearer the hand, the part in advance would have been too short to bring the glass up to the mouth; and if it had been nearer the shoulder, that part would have been so long, that when it attempted to carry the wine to the mouth it would have overshot the mark, and gone beyond the head.

Most of the early drinks wouldn’t so much have been invented as discovered. A pleasant theory involves bees. Imagine a bees’ nest in the hollow of a tree. Then there’s a storm, the tree falls over and the nest is flooded with rainwater. So long as you have roughly one part honey to two parts rainwater, fermentation ought to kick in pretty soon.   More prosaically you simply need to be picking and storing fruit somewhere reasonably watertight. The juice at the bottom will start to bubble and pretty soon you’ll have a very primitive wine. For that you would probably need pottery. More importantly you need to remain in the same place for a while, and all of the evidence suggests that our ancestors were mostly on the move.

It looks like there was beer, and, importantly, it looks like there was beer before there were temples and before there was farming. This leads to the great theory of human history: that we didn’t start farming because we wanted food—there was loads of that around. We started farming because we wanted booze. This makes a lot more sense than you might think, for six reasons. 1) beer is easier to make than bread as no hot oven is required, 2) beer contains vitamin B, which humans require if they’re going to be healthy and strong. Hunters get their vitamin B by eating other animals. On a diet of bread and no beer, grain farmers will all turn into anemic weaklings and be killed by the big healthy hunters. But fermentation of wheat and barley produces vitamin B. 3) beer is simply a better food than bread. It’s more nutritious because the yeast has been doing some of the digesting for you.

From NPR: Charlie Bamforth is a professor of brewing sciences at the University of California, Davis. Though beer has been blamed for many a paunch, it’s more nutritious than most other alcoholic drinks, Bamforth says. “There’s a reason people call it liquid bread,” he says. Beer, he says, has more selenium, B vitamins, phosphorus, folate and niacin than wine. Beer also has significant protein and some fiber. And it is one of a few significant dietary sources of silicon, which research has shown can help thwart the effects of osteoporosis. There are about 150 calories in a typical 12-ounce serving of 5 percent-alcohol beer; a 12-ounce bottle at 9.6 percent has 300 calories, 200 of them from the alcohol.

4) beer can be stored and consumed later, 5) the alcohol in beer purifies the water that was used to make it, killing all the nasty microbes.  6) The biggest argument is that to really change behavior you need a cultural driver. If beer was worth traveling for (which Göbekli Tepe suggests it was) and if beer was a religious drink (which Göbekli Tepe suggests it was), then even the most ardent huntsman might be persuaded to settle down and grow some good barley to brew it with.

And so in about 9000 BC, we invented farming because we wanted to get drunk on a regular basis.

Cities are the result of farmers working too hard. In fact, history is the result of farmers working too hard. If you have a job that doesn’t involve food-production (and you’re alive), that means that somewhere there’s a farmer producing more food than he needs. The second that happens you get specialized jobs, because ultimately you’ve got to be providing something to the farmer in exchange for the food, whether it’s clothes or housing or protection or accountancy services.

The sure sign of agricultural surplus is that there are populated places that produce no food at all. Such places are called cities, inhabited by citizens. The Latin for citizen was civis, and from that we get the words civil and civilization. When we give the farmers something in return, it’s called trade, and trade causes disputes, and the people who solve these disputes are called the government. The government requires money to spend on important things like thrones, armies and fact-finding trips. And because it’s terribly hard to remember who’s paid their tax and who hasn’t, tax requires writing. Writing causes Prehistory to stop, and History to begin.

Everybody drank beer. Kings drank it on their thrones. Priests drank it in temples.

There was a myth that civilization had only come about through beer. The story went that Enki, the god of wisdom, had sat down with the goddess of hanky-panky, whose name was Inana. At the time, humans had no skills or knowledge. So it came about that Enki and Inana were drinking beer together in the abzu, and enjoying the taste of sweet wine. The bronze aga vessels were filled to the brim, and the two of them started a competition, drinking from the bronze vessels of Uraš. Long story short: Inana wins. While Enki is passed out drunk, she steals all the wisdom from heaven and takes it down to earth. When Enki wakes up, he notices that all the wisdom is missing and throws a fit, but by then it’s too late.

The most famous Sumerian myth of all, The Epic of Gilgamesh, starts with a wild man called Enkidu who lives among the animals like a Mesopotamian Mowgli, until a priestess of Inana turns up and tries to make him human. She does this by having sex with him, and then giving him a drink (not the usual order).

SUMERIA

So now we sit down at a table and the beer is brought to us in an amam jar, along with two straws. Beer has to be drunk through a straw. This is because Sumerian beer is not like our lovely modern clear amber nectar. It’s a sort of fizzing barley porridge with lots of solid stuff floating on the surface. A straw lets us go below the surface and suck out the sweet liquid. There are lots of representations of Sumerians doing this, and people still do it with palm wine in parts of central Africa.

RELIGION AND ALCOHOL

There is, in the Western world, no tradition of religious drunkenness. But it is a practice found across history and across the globe. From Mexico to the Pacific islands to Ancient China there is or has been drunken mysticism, god found at the bottom of a bottle.

As William James wrote: The sway of alcohol over mankind is unquestionably due to its power to stimulate the mystical faculties of human nature, usually crushed to earth by the cold facts and dry criticisms of the sober hour. Sobriety diminishes, discriminates, and says no; drunkenness expands, unites, and says yes. It is in fact the great exciter of the Yes function in man. It brings its votary from the chill periphery of things to the radiant core. It makes him for the moment one with truth. Not through mere perversity do men run after it. To the poor and the unlettered it stands in the place of symphony concerts and of literature.

The drunken consciousness is one bit of the mystic consciousness.

The Greeks didn’t drink beer, they drank wine; but they watered it down by a ratio of about two or three parts water to one part wine, which made it almost exactly the same strength as beer. The Persians drank beer; that made them barbarians. The Thracians drank undiluted wine; that made them barbarians. The Greeks were the only people who had it just right, according to the Greeks.
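The dilution arithmetic is easy to check. A quick sketch in Python, assuming undiluted ancient wine at roughly 15 percent alcohol (the text gives only the ratio; the 15 percent figure is my assumption):

```python
def diluted_abv(wine_abv, parts_water, parts_wine=1):
    """Percent alcohol by volume after watering the wine down."""
    return wine_abv * parts_wine / (parts_wine + parts_water)

# Two or three parts water to one part wine:
print(diluted_abv(15, 2))            # 5.0  -- roughly the strength of beer
print(round(diluted_abv(15, 3), 2))  # 3.75
```

Watered at those ratios, a strong wine lands in the 3–5 percent range, which is why the diluted Greek drink ends up at about beer strength.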

It’s rather intriguing that the Greek god of wine and the Egyptian goddess of beer were both said to arrive from the exotic south with a dancing menagerie of humans, animals and spirits, but it’s probably just a coincidence.

The myths about Dionysus mostly fall into two categories. (1) There are the stories of people who don’t recognize him, and don’t even realize that he is a god. Who these people are varies from pirates to princes, but their fate is usually the same. Dionysus punishes them by turning them into animals. The moral of the stories is reasonably clear. When you’re dealing with wine you need to remember that you are dealing with something powerful, something divine. This is no ordinary drink. It is holy. Moreover, alcohol, if you’re not careful, can bring out the beast in you.

The only fully human friends Dionysus had were the maenads. Maenads were women who worshipped Dionysus. They did this by going out into the mountains wearing next to nothing and getting very, very drunk. Then they would dance and let their hair down and rip animals to pieces in a sort of terrifying Arcadian hen party. Nobody is quite sure whether maenads ever actually existed, or whether they were just a sexual fantasy of Greek men, like the Amazons.  The maenads, though, were terribly important in the second type of Dionysus myth.  Dionysus didn’t like teetotalers. This is unsurprising for a god of wine, but Dionysus being Dionysus he tends to kill them cruelly. The most famous example is a play by Euripides where the King tries to outlaw maenadism so Dionysus makes his maenads believe that the King is a lion and they rip him limb from limb (the group is led by the King’s mother). There’s another story about Orpheus wandering the countryside. His wife has died and he wants to have a good cry. Unfortunately, he comes across a group of maenads who are all getting plastered and want him to join in. Orpheus politely declines and they rip him limb from limb as well.

There are a lot of stories like this and they all end the same way. The moral is pretty clear: you should recognize that drinking is dangerous and that it might turn you into a wild beast, but you should still drink. Never turn down an invitation to a party.

CHRISTIANITY. Paul notes that people were getting drunk at communion. He has to point out that communion is for drinking, not for getting drunk, which must have come as something of a shock to the Corinthians. Once you start to look for it, you find this problem a lot in early Christianity. The poor apostles were going out preaching the good news of a new religion that required you to drink wine. And people seem to have got the wrong impression. The Acts of the Apostles opens with Pentecost and the Holy Spirit descending upon the Christians, who proceed to speak in tongues. The people in the crowd that gathered: asked one another, “What does this mean?” Some, however, made fun of them and said, “They have had too much wine.” And poor St. Peter has to jump up and explain: Fellow Jews and all of you who live in Jerusalem, let me explain this to you; listen carefully to what I say. These people are not drunk, as you suppose. It’s only nine in the morning! When you think about it, the drink would have made a perfect stick with which to beat early Christianity. It would be so easy to caricature this strange new sect as a group of drunkards, a Jewish version of the cult of Dionysus, that it would be surprising if pagans didn’t do this.

GREEK DRINKING

Plato, quite specifically, says that getting drunk is like going to the gym: the first time you do it you’ll be really bad and end up in pain. But practice makes perfect. If you can drink a lot and still behave yourself, then you are an ideal man. If you can do this in company, then you can show the world that you are an ideal man, because you are displaying the great virtue of self-control even under the influence. Self-control, said Plato, was like bravery.

A chap who spends his days fighting battles can train himself to be brave. A man who spends his evenings getting drunk can train himself to ever higher levels of self-control.

Let us say that you were a lady in classical Athens and you wanted to get drunk. You couldn’t. Women weren’t allowed at symposiums. Or, to be more precise, women might be allowed but not ladies.

So it was the men who gathered, and they gathered at somebody’s private house. Not at a bar. For a typical symposium you might have a dozen chaps over. A really large one might be up to thirty fellows, but that was unusual. First, you had supper. This was a plain meal that was consumed pretty quickly and pretty silently. The food was not the thing—it was only really there to soak up the wine. Arranged in a circle around the room were couches with cushions on them. The men would lie down on the couches with a pillow under one arm. Young men, though, were not allowed to lie down.

It may then have been necessary to choose a symposiarch—the leader of the evening’s drinking. This would almost always be the host, whose first job was to choose the wine. Usually, this would be from his private estate, as most Athenian gentlemen owned a vineyard; indeed, the class system in Athens was built around how big your vineyard was. The lowest level was 7 acres or less; the highest had over 25. If it was summer, the wine would have been cooled by lowering it into a well, or burying it.

At a symposium you got deliberately, methodically and publicly drunk. Everybody was given a bowl of wine. Everybody had to drink their bowl of wine before there was a refill. Just as the guests at a symposium didn’t get to choose how much they drank, so they didn’t get to choose what they talked about, or indeed whether they talked at all. The symposiarch would name a subject and then each guest in turn would have to give their opinion on it. Each guest was meant to launch into a long and detailed answer.

There would be none of the free flow of conversation that we associate with a drinking session, and no opportunity simply to remain silent.

A game that Athenians played at symposiums was called kottabos. You took the last few drops of wine in your drinking bowl and tried to flick it at something. Sometimes a special bronze target would be brought in and everyone would flick their wine at it. Sometimes the target was a bowl floating in a pot of water and your aim was to sink it. Sometimes the target was a person. It all sounds rather messy, and old people used to complain about it and say that young men should be doing something constructive instead.

For sensible men I prepare only three kraters: one for health (which they drink first), the second for love and pleasure, and the third for sleep. After the third one is drained, wise men go home. The fourth krater is not mine any more—it belongs to bad behavior; the fifth is for shouting; the sixth is for rudeness and insults; the seventh is for fights; the eighth is for breaking the furniture; the ninth is for depression; the tenth is for madness and unconsciousness.

ROMAN EMPIRE

Early Rome was a very stern and sober place. In the days of the high republic (we’re talking about 200 BC–ish), they were all clean-shaven, short-haired militaristic types. Drunkenness was frowned upon. Sternly. It was associated with the long-haired, bearded, luxurious Greeks, whom the Romans were busy defining themselves against.

The Roman Empire was, in essence, a system whereby the entire wealth of the known world was funneled back to one city. This produced possibly the wealthiest city that the earth has ever known. Money corrupts and huge amounts of money are huge amounts of fun. The result, as every schoolboy learns, was decadence. Roman men started enjoying wine more than water. Then they even let their womenfolk try some. Then they finally read some Greek books and realized they were rather good. And then they thought they’d give homosexuality a go, and that was a big hit. By the time you got to the mid-first century AD those stern senators of 186 BC would have been turning in their graves.

So how did you get in on the fun? The problem with Roman money was that, though there was an awful lot of it, it arrived at the very top of society and flowed down. If you wanted a bit of wealth and wine, you had to find yourself a patron, somebody to sponge off. This sounds horribly parasitical, and in a sense it was, but it was all out in the open. There were patrons with money, and there were dependents with flattery. Everyone knew what was going on. So long as you were prepared to sell your dignity, you got paid in good food and wine. The central component of the system was a banquet called the convivium. Not everybody liked the system. The poet Juvenal asked: “Is a dinner worth all the insults with which you have to pay for it? Is your hunger so importunate, when it might, with greater dignity, be shivering where you are, and munching dirty scraps of dog’s bread?” And most people said yes.

The Roman convivium was not about being convivial. The Roman convivium was all about showing off, and about asserting who was on the top and who was right down at the bottom. You are not here to have fun. You’re here to learn your place, to applaud those above you, and to sneer at those below you. This was accomplished through seating, slaves, quality of wine, quantity of wine, food, what the wine was served in and where that was thrown.

The dining room contained one big table. One side was left empty as that was the side where the slaves, those endless crowds of slaves, served the brimming platters, and took away the empties. The other three sides had a couch each, and each couch held three people, lying down, because the Romans liked to drink horizontally. Looked at from the slaves’ point of view, the couch on the right was for inferior guests, with the least honored guest nearest to you. That corner of the table, diagonally opposite the host and his friend, could be covered with inferior food and inferior wine for the clearly inferior guest. If you were there, you weren’t really welcome, and you certainly weren’t honored. The host was telling you that he didn’t give a galley-slave’s cuss about you. And you still had to say thank you. That was the point of the convivium.

The whole house is crawling with crawling slaves. They had to crawl, or they got whipped. Hosts would whip their slaves in front of their guests as a demonstration of power.  

The monks of the Dark Ages, indeed the people of the Dark Ages, needed booze because the alternative was water. Water requires a well-maintained well, or preferably an aqueduct, and that requires effective organization and government and all the things that the Dark Ages are not best known for. In the absence of these, your best source of water is the nearest stream, and for most of us, those who don’t live high in the mountains, that is a murky prospect.

Water drawn from the nearest stream was barely transparent. It was liable to contain creeping things, whatever they were—worms or leeches. One Anglo-Saxon book recommends a cure for swallowing creeping things: immediately drink some hot sheep’s blood. This tells us two things: (a) water was disgusting; (b) people did nonetheless drink it sometimes. Sometimes you had to; you were thirsty and you could afford nothing better. The standard Anglo-Saxon attitude to the subject is summed up in Abbot Aelfric’s dictum: “Ale if I have it, water if I have no ale.”

Wine, continued Aelfric in a wistful tone, was way too dear for the average English monk. Instead, the standard ration was a mere gallon of ale a day (and more on feast days).

THE VIKINGS

Most polytheistic religions have one chief god, and then a god of drunkenness/wine/brewing, etc., somewhere on the side. Enlil was superior to Ninkasi; Amun to Hathor; Zeus to Dionysus. The drunken god turns up, causes some fun and chaos, but is always subject to the wiser ways and greater powers of the chief god, who usually has a beard. You don’t need to be the sharpest theologian to interpret this as drunkenness having to find its niche within society, its little spot where it can be tamed and controlled. But with the Vikings the chief god is the drunk god. The chief god is actually called “the drunk one.” There is no other Viking god of alcohol. It’s Odin. That’s because alcohol and drunkenness didn’t need to find their place within Viking society, they were Viking society. Alcohol was authority, alcohol was family, alcohol was wisdom, alcohol was poetry, alcohol was military service and alcohol was fate.

There were only three kinds of Viking booze. There was wine, which was immensely expensive; almost nobody could get hold of it. The next drink down the pecking order was mead, fermented honey, sweet and reasonably expensive. Almost everybody almost all the time just drank ale, which was much less expensive. Their ale was probably slightly stronger than ours, at about 8 percent ABV.

If you wanted to set yourself up as a lord, you needed to build a mead hall, even if all you ever served in it was ale. You still called it a mead hall for appearances’ sake. Your mead hall could even be quite small—some were only about 10 by 15 feet. Others were huge, a hundred yards in length. In Beowulf when Hrothgar wants to become a mighty king, he builds Heorot, the biggest mead hall that anyone has ever seen, filled with pillars and gold.

The mead hall makes you a lord because the very first duty of a lord is to provide booze to his warriors. This was the formal way in which you showed your lordship. And conversely, if you went to somebody’s mead hall and drank their mead, you were honor-bound to protect them militarily.

Alcohol was, literally, power. It was how you swore people to loyalty. A king without a mead hall would be like a banker with no money or a library with no books.

You also needed a queen, because, strange as it may seem, women were a rather important (if a trifle subjugated) part of the mead hall feast. Women—or peace-weavers as the Vikings called them—were the ones who kept the formal footing of the feast going, who lubricated the rowdy atmosphere and provided a healthy dose of womanly calm. They were in charge of the logistics of the sumbl, which was the Norse name for a drunken feast. They may even have enjoyed the beginning of the evening, the first three drinks which were to Odin (for victory), to Njord and Freya (for peace and good harvest), and then the minnis-öl, the “memory-ale” to spirits of ancestors and of dead friends.

There’s a funny kind of Viking frost-cup that archaeologists call a funnel glass. That’s because archaeologists aren’t poets. A funnel glass is about 5 inches tall and is shaped just as you might imagine it, which means that it can’t be put down on a table. It would just fall over. This is quite deliberate as the idea is to make you down your whole drink in one. This was immensely important to the Vikings as downing drinks made you a real man. This was also the purpose of the more traditional drinking horn: to test your virility by reference to your ability to swallow.

There’s a story about Thor (the god of warfare and hammers) and Loki (the god of mischief). Loki challenged Thor to drink a horn of ale. Thor, who could never resist a challenge, accepted and Loki had a horn brought to the table and told Thor that a real man could down it in one. Thor grabbed the horn, put it to his mouth, and drank, and drank, and drank, and, when he could drink no more, the horn was still almost full. Loki looked disappointed and said that a normal chap might need to do it in two. So Thor tried again, and again his godlike drinking had almost no effect. Loki murmured that a weakling could do it in three. Same thing happened. This left Thor feeling rather ashamed and effeminate, until Loki revealed that he had tricked him, and that the other end of the horn was connected to the sea. Thor had drunk so much that he had brought the whole level of the world’s oceans down, and that, according to the Vikings, was the origin of tides.

Along with the drinking competitions, Vikings did an awful lot of boasting. This was not seen as a bad thing. A Viking chap was meant to boast. He was meant to recount all of his great rapacious deeds. And then another Viking was meant to outdo him. These boasts were not quick one-liners either. They were long affairs that waxed poetic and lyrical. It was a big, formal occasion, much like a modern rap battle, or so I am informed. Moreover, your boasting was in deadly earnest. You were expected to stand by anything you said, whether it was a claim of something you had done in the past, or of something you were merely planning to do. There was no possibility of excusing yourself the next morning by saying, as we would, that that was just the drink talking.

It was a viciously violent society: a hall full of warriors forced to drink much too quickly, trading ceremonial brags and insults, all of them carrying swords. The result of all this can best be summed up in the Viking/Anglo-Saxon epic Beowulf, where the poet is trying to explain just what a wonderful man Beowulf was. He lavishes praise on him, and the highest praise of all is that Beowulf “never killed his friends when he was drunk”.

There’s a lovely mythical creature called the Heron of Oblivion (I’ve no idea why) that was said to come down and hover over the sumbl until everybody dozed off. Nobody went home. You stayed in your lord’s mead hall until you could stay awake no longer and then you lay down on a bench or a table or whatever you could find and you fell fast asleep.

SWEDEN

There was, apparently, an eighth-century Swedish king called Ingjald who invited all the neighboring kings to his coronation. When the bragarfull, the ritual boasting-cup, came round, he swore to enlarge his kingdom by half in every direction. Everyone drank. Everyone got drunk. The Heron of Oblivion did his restful work, and when everyone else was asleep, Ingjald went outside, locked the doors and burned down his own mead hall with all the other kings in it. I’d like to say that that was a one-off, but it wasn’t. There are a fair few accounts of mead halls being burned down with everyone in them. There’s even one of a queen doing it to her husband, which seems fair.

ENGLAND

Taverns sold wine. Wine, because it had to be imported, was very, very expensive. Taverns were for wealthy men who wanted to splash a bit of cash, which meant that they were almost all in London. It also meant that taverns could have a rather degenerate side. This is where you’d find prostitutes and gamblers because, by definition, if you could afford wine you could afford other sinful luxuries.

Shakespeare, I’m pretty sure, was a wine-drinker. His works have over a hundred references to wine and sack, and only sixteen to ale.

In England in the year 1200 there was no such thing as a pub. Villages simply did not have drinking establishments. This may seem strange. Imagining England without a village pub is like imagining Russia with no vodka (there was, at this time, no vodka in Russia; but we’ll come to that in another chapter).

There were no pubs, because there was no need for pubs. Everybody was drinking at work. Often it was part of the pay. A carter, for example, might expect to have 3 pints and some food thrown in with his wages. When a lord employed laborers to work his land, he had to give them some booze. Medieval Englishwomen and children also drank. Water was still pretty dangerous, and only for the very poor.

Not that people got drunk. A few pints spread out over the course of a hard day’s toil in the fields won’t do that. But it will nourish you. Ale is, after all, liquid bread. People drank in church as well. The medieval village church was not so much a place of worship as a community center (with some worship thrown in on Sundays).   Opportunities to cadge booze in church were neither few nor far between.

A husband would expect his wife to cook and clean and look after children, and brew, and spin. Spinning wool into thread and brewing ale had the added advantage that they could make you extra money. A wife would weave the cloth to clothe her husband, and, if there was any left over, she could sell it. This was almost the only way that the average medieval single woman could get an income. And it was so common that an unmarried woman is, to this day, called a spinster.

A woman who brewed would be called a brewster. A woman who brewed for profit could also be called an alewife. Medieval ale had a very short shelf life. It would go off after two or three days. So when an alewife had brewed more than her family needed, she would put up an ale stake above her front door. This was just a horizontal stick with a sprig of bush tied to the end. She would put the barrel outside her house, and sell to passersby who would turn up with a flagon and some pennies. They could then stroll off and drink it at work, at their own home or in church.

That’s how things were all the way up to the beginning of the 14th century. Then several things happened at once. First, people stopped drinking in churches. This was not because they didn’t like drinking in church, but because the church didn’t like people drinking in it.

Once upon a time, a nobleman employed people to till his fields. But in the 14th century noblemen decided that it was simpler just to rent plots of land out to the peasants and let them farm it for themselves. This meant that any peasant who didn’t have a good alewife now had to go and buy ale, which was good news for alewives. Thirsty laborers would show up after work, they wanted ale, but they also wanted somewhere to sit down and drink it. So alewives started to let people into their kitchens. Thus the pub was born.

Finally, beer was invented. Throughout this chapter I’ve been talking about ale, which was made with barley and water. It was not a very pleasant substance. Nutritious? Yes. Alcoholic? Yes. Tasty and pure and fizzy and refreshing? No. It was a sort of sludgy porridge with bits in it. The only way to make it taste nice was to flavor it with herbs and spices—horseradish was a favorite. But you were trying to disguise the taste. Trying to make something vile into something drinkable. Then hops arrived. When you add them to ale you get beer.

Most people much preferred the taste of hoppy beer. And beer had one other massive advantage over ale: it didn’t go off. You could keep beer for a year or so and, as long as the barrel was well sealed, it would still be good. Because of this, beer could be mass-produced. In every major town, breweries were set up which could produce lots of lovely beer that could then be sold to all the local alehouses (they continued to be called alehouses, long after the awful sludgy porridge had been forgotten).

The breweries could filter the beer and make a much better product.

Let us suppose that we are travelers sometime around the end of the 15th century. To find an alehouse we’d look for an ale stake. Pub signs (and by extension pub names) don’t come in until the 1590s. We might also spot the ale bench, which, as you may have guessed, was a bench just outside the door where, in fine weather, you could sit and drink in the sunshine. It’s also quite possible that we’ll spot some people playing games—bowls was a favorite—and betting on them. The door will be open. This was a legal requirement, except in the depths of winter. The idea was that any passing authority figure should be able to see inside an alehouse and thus check that nothing naughty was going on, while also not having to sully themselves by actually going in.

One of the great advantages of visiting an alehouse was that there was usually a fire blazing away. Many medieval peasants simply couldn’t afford such a luxury in their own homes. One of the first differences we’ll notice from a modern pub is that there is no bar. Countertop bars, the sort of thing we know and love, don’t actually come in until the 1820s. This place doesn’t look like a pub. It looks like somebody’s kitchen, which is basically what it is. There’s a barrel of beer somewhere in the room. And there are a few stools and benches, perhaps a trestle table or two. But the total value of the furniture isn’t more than a few shillings. We are in somebody’s house, but it’s public.

The person whose house we’re in is almost certainly a woman.  There’s also a good chance that she’s a widow. Running an alehouse was still one of the only ways that a woman could make money, and, in the days before pensions, alehouse licenses would be granted to widows out of pity. It was that or she would have to throw herself upon the parish, which the parish found inconvenient.

Women usually went to alehouses in groups. A woman on her own might be talked about. A group of respectable matrons, though, was in the clear. People also went on dates to alehouses. If a couple were known to be courting, then going out for a drink was considered perfectly normal and respectable.

Alehouses were only for the poorest in society. Even moderately well-off people like yeoman farmers were still drinking at home. The alehouse was a place of escape. Servants came here for the same reason as lovers; it was what anthropologists call the Third Place. It wasn’t work, where you have to obey your boss, and it wasn’t home, where you have to obey your parents or your spouse. That’s also why the place is full of teenagers. Medieval England was an edenic place where there were absolutely no laws about underage drinking.

Not that people will actually get that drunk, unless it’s a Sunday. Just as we think of Friday night as the standard time for drinking, the medievals liked to get sloshed on a Sunday morning. This makes a lot of sense, if you think about it, as you get to be buzzed all day. But it does mean that there is a permanent war between the alehouse and the church for attendance on a Sunday morning. A war that the alehouse tended to win.

The standard greeting for a stranger arriving in an alehouse was “What news?” In the days before newspapers and even television, travelers were the main way to find out what was going on in the world. Who was king? Were we at war? Had we been invaded? Alehouses actually developed a rather bad reputation for spreading absolute lies. In 1619 the whole of Kent was sent into a panic by the news that the Spanish had taken Dover Castle; and, very curiously, the alehouse drinkers of Leicester heard the news of Elizabeth I’s death forty-eight hours before it happened.

AZTECS

But if drinking was so very, very illegal, how did it have such a central place in Aztec culture? And it did. They had gods of drinking. Several of them. Mayahuel, who was the goddess of the agave plant, was said to have married Patecatl, who was the god of fermentation. Mayahuel had 400 breasts, which was probably fun for Patecatl, but was also useful because she gave birth to 400 divine little rabbits, the Centzon Totochtin. The reason that there were 400 of them is that the Aztecs counted in base twenty. Four hundred is twenty squared and so the number had much the same place in their culture that 100 (ten squared) does in ours.

So, to recap, booze is ferociously forbidden and punishable by death. Booze is ubiquitous. Booze is revered and central to the culture and religion. Booze is legal for the elderly. This combination has left historians somewhat confused, and indeed inclined toward a quick dose of teonanacatl, the Aztec hallucinogen of choice that was entirely legal. There is, though, a theory that makes sense of all this. Anthropologists who study drunkenness draw a distinction between what they call “wet cultures” and “dry cultures.” In wet cultures people are terribly relaxed about alcohol. They sip it all day and have a terribly pleasant time, and very rarely get properly, falling-over drunk. Dry cultures are the opposite. They aren’t dry in the sense of being alcohol-free; they’re called dry because people are very wary of alcohol and have strict rules about when you can’t drink it. Then, when it is permitted, they get trollied.

But on the day of a religious festival—for example, one devoted to the 400 drunken rabbits—they got absolutely hammered. They got apocalyptically and religiously drunk, and, like the Ancient Egyptians and the Ancient Chinese before them, they used alcohol to give them an experience of the divine. And then for the rest of the month they didn’t drink at all.

It was the relaxation of these rules, and the disorientation of society produced by Christianity, that pushed the conquered Aztecs into perpetual pulque drinking.

The people of Zumbagua in Ecuador drink in order to communicate with ancestral spirits, and, indeed, believe that when you drink so much that you throw up, the vomit becomes food for the ghosts of the dead. To this day there is a phrase in Mexico: “As drunk as 400 rabbits.”

DISTILLING

Ancient Greeks definitely knew about distilling over 2,000 years ago, but there’s no evidence that they distilled alcohol. Instead, they wasted their invention on producing drinkable water.

You start to get, in the 15th century, mentions of distilled alcohol being used as a medicine in very small doses.

James IV of Scotland bought several barrels of whisky, or aqua vitae as it was called, from a monastery in 1495.  A hundred years later, there was one bar in England—just outside London—that served aqua vitae. It was still a novelty drink that most people would never even have heard of. And then, in the second half of the 17th century, western Europe went crazy for spirits. The French suddenly got into brandy.

Come the Restoration, the English aristocracy stampeded back from France with a newfound taste for all sorts of funny foreign drinks: champagne, vermouth, and brandy. These became the drinks of the nobility.

Gin became popular in England for four reasons: monarchy, soldiers, religion and an end to world hunger. Some historians would add “hatred of the French,” which makes five. First, monarchy. King William III liked gin because he was Dutch and all Dutch people liked gin. Second, soldiers. Dutch soldiers liked gin for two reasons: because they were Dutch, and because gin infused Dutch soldiers with a peculiar form of bravery, which to this day we refer to as Dutch courage. Third, during this period European countries were constantly going to war with each other, usually on a Protestant vs. Catholic basis. England and Holland were both Protestant, so English soldiers fought alongside the Dutch, and drank alongside the Dutch, and came home with a hangover and a taste for gin. Gin was thus soldierly and Protestant. Fourth, an end to world hunger. From time immemorial, and probably before, every country in the world had had a problem with Bad Harvests. In a normal year farmers produced just enough grain to feed everybody. They didn’t produce any more than that, because they wouldn’t be able to sell it. Every so often, though, you got a year with a Bad Harvest. When this happened there wasn’t enough grain to go around, yet farmers were not in the slightest bit upset. A funny aspect of the economics of farming is that a Bad Harvest means less grain; less grain means higher grain prices; and these higher prices mean that farmers make just as much money from a Bad Harvest as from a good one, for less work.

William III thought he had this problem solved. Gin is made out of grain, and the quality of the grain doesn’t particularly matter. Once the stuff has been fermented and distilled, you can’t taste the difference. Therefore, if he could make gin popular in England he would produce a great big market for excess grain during normal years; and that meant that when a Bad Harvest came round there would be an excess to cover it. It might not be the highest-quality excess, but it would be edible. Thus he could end starvation forever.

But to do so he’d have to make gin really, really popular. To do that, you’d have to make gin more readily available than beer. You’d have to make it completely tax-free and unregulated and let anybody who wants to start distilling distill. Also, you’d have to ban the import of French brandy.

Where did a poor Londoner actually go to get gin? And when? And from whom? The answer is: absolutely everywhere. To set up shop you went to a distiller and got a gallon or so, distilled it a second time to make it even stronger, and added flavorings like juniper, turpentine, or sulfuric acid, whatever you liked. Many drank far too much and died.

Gin arrived in England in the 1690s and by the 1720s the streets of London were full of unconscious drunks who had sold their clothes for gin, so authorities tried to cut consumption by taxing it and requiring a license, which people ignored.

AUSTRALIA

Lord Sydney had a utopian idea of what Australia would be: hard work, fresh air, nature, and no alcohol or money. But the sailors refused to sail without booze, and home-brewing began on day one of the convict ships’ arrival; the main drink, though, was rum. The sailors sold rum to the convicts at a markup of 1,200 percent.

The economy was a barter economy, with work exchanged for food or other goods. Most of the population were convicts doing forced labor; to get them to do a speck more than they were expected to do, you had to offer them something. That something was rum, which the Governor used as a measure of social control. Rum was the one and only lever of power.

The British government was not at all OK with this, and sent the famous Captain Bligh, of Mutiny on the Bounty fame, as the next Governor, to dry out Australia and get rid of the militia who controlled the rum trade. He began by confiscating the stills of Captain John Macarthur, the richest man in the colony, and taking him to court as well. When Macarthur showed up, the jury cheered him, as did the hundreds of soldiers gathered outside the courthouse. Bligh was absolutely furious and ordered Major Johnston to get his men under control, but Johnston replied that he was sorry, he’d been so drunk the night before he’d crashed his carriage, so he couldn’t intervene. Later that day, Johnston arrested Bligh and took control of the colony. Effigies of Bligh were burned in the street, and the crowd celebrated with a roasted sheep and a rum BBQ.

So the government sent a new Governor called Macquarie, who took control by realizing everyone was a crook and out-crooking them all. He began by granting exclusive rights to import rum for three years in exchange for a new hospital, and so began Australia’s health care system.

AMERICA

By 1799 George Washington owned the largest distillery in the country, producing 11,000 gallons of whisky a year, and decades earlier he had won his first election after handing out free booze to voters. His military success came in part from doubling his men’s rum rations.

Although Hollywood usually has just one giant saloon in the center of town, which forces the hero and villain to confront each other, in real life there were many saloons in a town, so many that they might never bump into each other. The doors were solid, not swinging, and instead of a large room, bars were narrow, with the bar usually on the left, usually with a large mirror that lets those at the bar see anyone approaching them from behind. Although there are bottles of wine and crème de menthe, no one orders them. Everyone’s drinking whiskey and beer, though mainly whiskey. Another odd thing is that no one ever asks how much drinks cost or gets change, because everyone knows the charge. It’s one bit (12.5 cents) at the poor saloons, and two bits at a fancier one with floor shows and a chandelier.

It’s mainly white men. A black man might be tolerated; Native Americans were banned by law; and most unwelcome of all were the Chinese. Respectable women never went into a saloon. The women who worked there mostly weren’t for rent: why do that when you could earn $10 a week chatting with lonely men? At the back, the card game would be faro, not poker, a very simple game of pure chance and easy to cheat at.

Prohibition was meant to get rid of saloons, which were perceived, especially in the Midwest, as the root of many evils. Husbands drank their salaries, beat up their wives, died young. Saloons were places decent women didn’t go, though the gals who were there often weren’t prostitutes but were paid in whiskey (actually cold tea) to talk to men. Saloons always had a bar on the left with a mirror behind it, a brass rail at the bottom with a spittoon for every four people, and no swinging doors like in the old westerns. Horses were parked outside in huge piles of poop, since naturally they pooped while their owners drank. In a one-bit saloon you plopped down a bit (12.5 cents), though since no such coin existed you really put down a quarter and had two drinks. Or, most often, you bought someone else a drink, and the favor would be returned later by a newcomer.

Prohibition succeeded in getting rid of saloons. That was its purpose, not stopping all alcohol, and Germans and other ethnic groups that made beer and wine weren’t worried about it. But then the Volstead Act defined alcoholic beverages as anything over half a percent. So for 13 years the U.S. lost the skills to make wine, beer, or even whiskey well, and it took 50 years to recover. Speakeasies were quite unlike saloons: anything from someone’s living room where pasta might also be served to the glamorous New York City clubs of the movies. And, unlike saloons, women went too.

RUSSIA

Traditions there were good at getting everyone to drink: a toast was made and all were expected to participate. Ivan the Terrible began this in the 1500s, using drunkenness as a form of political control. Scribes attended his feasts, wrote down what everyone said while drunk, and read it to him in the morning, with punishments handed out. He started state-run taverns to get as much tax money as possible. While most countries try to limit the crimes, riots, broken homes and ruined health of drunkards, Russia was too keen on the revenue to discourage drinking in any way.

In 1914, Tsar Nicholas II outlawed vodka. In 1918 he and his family were executed. These two facts are not unrelated. The ban was poorly timed, too: WWI was beginning, and a quarter of all state revenue came from taxes on alcohol. And, being sober, the population could see what their government was doing to them. Today in Russia nearly a quarter of all deaths are related to alcohol.

Stalin ruled with terror and drunkenness. He’d invite his politburo to dinner and make them drink and drink and drink, which they couldn’t refuse to do. At one dinner there were 22 toasts before any food arrived. He would tap out his pipe on Khrushchev’s bald head and order him to do a Cossack dance. He loved to push one of the commissars into a pond. But Stalin was mainly drinking water himself. He did this to humiliate them, to set their tongues against each other, and to make it hard for them to plot against him. Even Peter the Great was known for forcing drinks on others: if he caught someone not drinking, they were forced to drink 1.5 liters of wine in one go. The head of Peter’s secret police had a tame bear who would offer guests a glass of vodka and attack if they refused.


Pentagon report: collapse within 20 years from climate change

Preface. The report that the article by Ahmed below is based on is: Brosig, M., et al. 2019. Implications of climate change for the U.S. Army. United States Army War College.

It was written in 2019, before covid-19, and so quite prescient: the two most prominent risks it identifies are a collapse of the power grid and the danger of disease epidemics.

It is basically a long argument to increase the military budget so it can help cope with epidemics, water and food shortages, electric grid outages, flooding, and protect the (oil and gas) resources in the arctic.

Since I see energy decline as a far more immediate threat than climate change, and the military knows this, it is odd that so little is written about energy in this report. But then I looked at the pages about the Arctic, and though the word oil doesn’t appear, you can see that the military is very aware of the resources (oil) there and the chance of war with Russia. Therefore they propose that the military patrol this vast area with ships, aircraft, and new vehicles that can traverse the bogs and marshes of melted permafrost. They propose sending more soldiers to the Arctic for training, launching satellites for navigation, developing new ways of fighting, enhancing batteries and other equipment to be able to function in the cold Arctic environment, and more.

Alice Friedemann  www.energyskeptic.com  Author of Life After Fossil Fuels: A Reality Check on Alternative Energy; When Trucks Stop Running: Energy and the Future of Transportation; Barriers to Making Algal Biofuels; & Crunch! Whole Grain Artisan Chips and Crackers.  Women in ecology.  Podcasts: WGBH, Financial Sense, Jore, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 & 278, Peak Prosperity.  Index of best energyskeptic posts

***

Ahmed, N. 2019. U.S. Military Could Collapse Within 20 Years Due to Climate Change, Report Commissioned By Pentagon Says. vice.com

According to a new U.S. Army report, Americans could face a horrifically grim future from climate change involving blackouts, disease, thirst, starvation and war. The study found that the US military itself might also collapse. This could all happen over the next two decades.

The senior US government officials who wrote the report are from several key agencies including the Army, Defense Intelligence Agency, and NASA. The study called on the Pentagon to urgently prepare for the possibility that domestic power, water, and food systems might collapse due to the impacts of climate change as we near mid-century.

The report was commissioned by General Mark Milley, Trump’s new chairman of the Joint Chiefs of Staff, making him the highest-ranking military officer in the country (the report also puts him at odds with Trump, who does not take climate change seriously).

The report, titled Implications of Climate Change for the U.S. Army, was launched by the U.S. Army War College in partnership with NASA in May at the Wilson Center in Washington DC. The report was commissioned by Gen. Milley during his previous role as the Army’s Chief of Staff. It was made publicly available in August via the Center for Climate and Security, but didn’t get a lot of attention at the time.

The two most prominent scenarios in the report focus on the risk of a collapse of the power grid within “the next 20 years,” and the danger of disease epidemics. Both could be triggered by climate change in the near-term, it notes.

The report also warns that the US military should prepare for new foreign interventions in Syria-style conflicts, triggered due to climate-related impacts. Bangladesh in particular is highlighted as the most vulnerable country to climate collapse in the world. “The permanent displacement of a large portion of the population of Bangladesh would be a regional catastrophe with the potential to increase global instability. This is a potential result of climate change complications in just one country. Globally, over 600 million people live at sea level.”

Without urgent reforms, the report warns that the US military itself could end up effectively collapsing as it tries to respond to climate collapse. It could lose capacity to contain threats in the US and could wilt into “mission failure” abroad due to inadequate water supplies.

The report paints a frightening portrait of a country falling apart over the next 20 years due to the impacts of climate change on “natural systems such as oceans, lakes, rivers, ground water, reefs, and forests.”

Current infrastructure in the US, the report says, is woefully under prepared: “Most of the critical infrastructures identified by the Department of Homeland Security are not built to withstand these altered conditions.”

Some 80 percent of US agricultural exports and 78 percent of imports are water-borne. This means that episodes of flooding due to climate change could leave lasting damage to shipping infrastructure, posing “a major threat to US lives and communities, the US economy and global food security,” the report notes.

At particular risk is the US national power grid, which could shut down due to “the stressors of a changing climate,” especially changing rainfall levels:

“The power grid that serves the United States is aging and continues to operate without a coordinated and significant infrastructure investment. Vulnerabilities exist to electricity-generating power plants, electric transmission infrastructure and distribution system components,” it states.

As a result, the “increased energy requirements” triggered by new weather patterns like extended periods of heat, drought, and cold could eventually overwhelm “an already fragile system.”

The report’s grim prediction has already started playing out, with utility PG&E cutting power to more than a million people across California to avoid power lines sparking another catastrophic wildfire. While climate change is intensifying the dry season and increasing fire risks, PG&E has come under fire for failing to fix its ailing infrastructure.

The US Army report shows that California’s power outage could be a taste of things to come, laying out a truly dystopian scenario of what would happen if the national power grid was brought down by climate change. One particularly harrowing paragraph lists off the consequences bluntly:

“If the power grid infrastructure were to collapse, the United States would experience significant:

  • Loss of perishable foods and medications
  • Loss of water and wastewater distribution systems
  • Loss of heating/air conditioning and electrical lighting systems
  • Loss of computer, telephone, and communications systems (including airline flights, satellite networks and GPS services)
  • Loss of public transportation systems
  • Loss of fuel distribution systems and fuel pipelines
  • Loss of all electrical systems that do not have back-up power”

Also at “high risk of temporary or permanent closure due to climate threats” are US nuclear power facilities.

There are currently 99 nuclear reactors operating in the US, supplying nearly 20% of the country’s utility-scale energy. But the majority of these, some 60%, are located in vulnerable regions which face “major risks” including sea level rise, severe storms, and water shortages.

“Climate change is introducing an increased risk of infectious disease to the US population. It is increasingly not a matter of ‘if’ but of when there will be a large outbreak.”

Water currently accounts for 30-40% of the costs required to sustain a US military force operating abroad, according to the new Army report. A huge infrastructure is needed to transport bottled water to Army units. So the report recommends major new investments in technology to collect water from the atmosphere locally, without which US military operations abroad could become impossible. The biggest obstacle is that this is way outside the Pentagon’s current funding priorities.

Bizarrely for a report styling itself around the promotion of environmental stewardship in the Army, it identifies the Arctic as a critical strategic location for future US military involvement: to maximize fossil fuel exploitation.

Noting that the Arctic is believed to hold about a quarter of the world’s undiscovered hydrocarbon reserves, the authors estimate that some 20% of these reserves could be within US territory, noting a “greater potential for conflict” over these resources, particularly with Russia.

The melting of Arctic sea ice is depicted as a foregone conclusion over the next few decades, implying that major new economic opportunities will open up to exploit the region’s oil and gas resources as well as to establish new shipping routes: “The US military must immediately begin expanding its capability to operate in the Arctic to defend economic interests and to partner with allies across the region.”

Senior US defense officials in Washington clearly anticipate a prolonged role for the US military, both abroad and in the homeland, as climate change wreaks havoc on critical food, water and power systems. Apart from causing fundamental damage to our already strained democratic systems, the bigger problem is that the US military, as the world’s single biggest institutional consumer of fossil fuels, is itself a foremost driver of climate change.

The prospect of an ever expanding permanent role for the Army on US soil to address growing climate change impacts is a surprisingly extreme scenario which goes against the grain of the traditional separation of the US military from domestic affairs.

In putting this forward, the report inadvertently illustrates what happens when climate is seen through a narrow ‘national security’ lens. Instead of encouraging governments to address root causes through “unprecedented changes in all aspects of society” (in the words of the UN’s IPCC report this time last year), the Army report demands more money and power for military agencies while allowing the causes of climate crisis to accelerate. It’s perhaps no surprise that such dire scenarios are predicted, when the solutions that might avert those scenarios aren’t seriously explored.

Rather than waiting for the US military to step in after climate collapse—at which point the military itself could be at risk of collapsing—we would be better off dealing with the root cause of the issue skirted over by this report: America’s chronic dependence on the oil and gas driving the destabilization of the planet’s ecosystems.


Updates to Life After Fossil Fuels: A Reality Check on Alternative Energy


Last updated 28 April 2024. Other posts related to this book here.

My book is about our many dependencies on fossil fuels, quickly depicted in these very short videos:  Life without Petroleum  A Day Without Oil  Can You Go a Day Without Fossil Fuels?


***

Chapter 2 We Are Running Out of Time

Norway-based energy consultancy Rystad Energy has warned that Big Oil could see its proven reserves run out in less than 15 years unless it quickly makes more commercial discoveries, because produced volumes are not being fully replaced with new discoveries (Kimani A (2021) Big Oil Is In Desperate Need Of New Discoveries. Oilprice.com).

“Global oil and gas discoveries have been on a constant shrinking trend prior to and over the last decade, with oil discoveries reaching a low of 3.8 BBO (billion barrels of oil) in 2016; in 2020 it was 4.3 BBO. During the decade, 89 BBO were discovered while 289 BBO of reserves were produced, a ratio of over 3 to 1, which is unsustainable.” Rafael Sandrea, Energy Policy Research Foundation.
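
The quoted figures can be checked with one line of arithmetic; a minimal sketch in Python:

```python
# Reserve-replacement arithmetic from the Sandrea quote above.
discovered_bbo = 89    # billion barrels of oil discovered over the decade
produced_bbo = 289     # billion barrels of reserves produced over the same decade

ratio = produced_bbo / discovered_bbo
print(f"Produced vs. discovered: {ratio:.2f} to 1")  # ~3.25, i.e. "over 3 to 1"
```

At that rate, every barrel found replaces less than a third of a barrel produced, which is why the quote calls the trend unsustainable.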

However alarming Figure 3 in Chapter 2 (IEA 2018) may be, reality is even more worrisome, because this chart doesn’t depict Business As Usual. Rather, it is an optimistic forecast, the IEA Sustainable Development scenario, shown in Figure 1 as requiring far less oil supply through 2040 than the other projections above it. The IEA Sustainable Development scenario assumes that by 2030: global primary energy use declines 7% from 2019 (compared to a 20% increase over the prior 11 years); solar generation grows by a factor of 5.6; wind generation grows by a factor of 2.4; nuclear generation increases by 23% (with no decommissioning); coal use for power and heat declines by 51%; and electric vehicle sales reach 40% of the market, up from today’s 4.5% (Cembalist 2021).

Figure 1. Oil “future production wedge”: demand vs. existing field supply (million barrels per day).

Chapter 4 We Are Alive Thanks to Fossil-Fueled Fertilizer

This chapter is about how we hit the wall at 1.6 billion in population in the early 1900s. Then natural gas based fertilizer was invented, which is responsible for allowing at least 4 billion more people to exist.  USDOE (2020) points out the myriad other ways natural gas aids agriculture: “Natural gas also is used to dry these crops. Further, plastics made from hydrocarbons provide bags for hay and silage, greenhouse covers, bale wrapping material, mulch film to prevent weed growth, and plant nursery containers.”

This chapter also explains the many ways natural gas fertilizers damage soil and water, and emit greenhouse gases. The damage is made worse by plastic-coated fertilizers, which create greenhouse gases as the coating decays, along with microplastics that kill soil organisms. The United Nations’ Food and Agriculture Organization estimates that 100,000 tonnes of plastic-encapsulated fertilizer are dumped into the environment per year, and now companies intend to add encapsulated chemicals as well (Nargi 2022).

Chapter 6  What Fuels Could Replace Diesel?

Peak diesel is the main civilization crusher, since heavy-duty transportation depends on it. Prices in March 2022 hit an all-time high, higher than in 2008. As the fuel of transportation, the price rally affects everything and everyone, adding to inflationary pressures that are already running at a multi-decade high. This is partly because natural gas prices skyrocketed; natural gas plays a key role in making diesel at refineries, where it is used to produce the hydrogen that removes sulfur from diesel. The spike in gas prices in late 2021 made that process prohibitively expensive, cutting diesel output. Low-sulfur crude is also in short supply: countries that pump that kind of oil, such as Nigeria and Angola, are unable to increase output. Any additional production has to come from Saudi Arabia and the United Arab Emirates, but both largely produce crude with high sulfur content. In the U.S., diesel stocks recently fell to their lowest seasonal level in 16 years (Blas J (2022) The Oil Price Rally Is Bad. The Diesel Crisis Is Far Worse. Bloomberg).

Non-renewable, non-commercial, exploding hydrogen. Most updates are in the post “Hydrogen: The dumbest & most impossible renewable“ and in Energy/Hydrogen.

Chapter 7 Why Not Electrify Commercial Transportation with Batteries?

The U.S. would have to double today’s electric grid if 66% of all cars are EVs by 2050 (Groom 2021; NREL 2021). Yet the electric grid is falling apart, will be increasingly affected by climate change, and since wind and solar construction depends on fossil fuels at every step of the life cycle, their construction will be constrained by energy shortages.

Energyskeptic battery posts:

Chapter 9 Manufacturing Uses Over Half of All Fossil Energy

Energyskeptic manufacturing posts:

Geothermal power: Can Geothermal power replace declining fossil fuels?

2021-10-22 Hydrogen steel: From a scientist at LBNL on steel made from hydrogen: while hydrogen direct reduction can take pre-heated iron ore and convert it into direct reduced iron (H2 DRI), also known as sponge iron, even when processed in an electric arc furnace (EAF) plant, it still needs carbon from coal or biomass charcoal to create steel. China produced 1.05 BILLION tonnes of steel last year, of which only 14% was produced through the electric arc furnace process with scrap steel. Half of China’s iron & steel plants have been built since 2010. Are all of these going to be retired and replaced by H2 DRI, with EAF capacity expanded and sufficient non-fossil electricity provided to support the required H2 production, in any time-frame that is relevant to the atmosphere? Even if China cut steel production in half by 2050, and DRI/EAF increased from 14% to 60% penetration, the electricity to produce just the H2 alone would take over 200 TWh. That is 150 GW of solar capacity dedicated solely to creating hydrogen, nearly as much as the total installed solar capacity of Europe today.
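The 200 TWh and 150 GW figures are mutually consistent under a typical utility-solar capacity factor. A minimal sketch, assuming a ~15% capacity factor (my assumption; the LBNL note does not state one):

```python
# Check: how much solar nameplate capacity yields 200 TWh/yr?
# Assumes a ~15% capacity factor (an assumption; typical for utility solar).
hours_per_year = 8760
capacity_factor = 0.15
energy_needed_twh = 200

# TWh/yr -> average GW, then divide by capacity factor for nameplate GW
nameplate_gw = energy_needed_twh * 1000 / hours_per_year / capacity_factor
print(f"~{nameplate_gw:.0f} GW solar nameplate needed")  # ~152 GW
```

At a ~15% capacity factor, 150 GW of solar does indeed produce roughly 200 TWh per year, matching the quoted figures.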

2021-10-24 Biomass charcoal steel (private communication from Thomas Troszak): The problem is that a charcoal smelter is tiny because charcoal is fragile. All of the charcoal smelters in Brazil only supply enough pig iron to meet 20% of the melting capacity of a single electric furnace in a plant that refines pig iron to grey iron (wiki definition) and casts grey ingots for auto manufacturers to remelt and cast into engine blocks. And they burn up thousands of hectares of eucalyptus in the process of supplying the charcoal pigs for that one furnace. As far as I know they aren’t even making steel from the charcoal pig iron; that would represent another whole level of unsustainability. With charcoal alone, you’re looking at the technology available prior to the 1850s, at extravagant cost in land area for forests. Abraham Darby mentioned that when he first put coke in his furnace, his burden capacity increased by 30 times, and that was in the early 1700s. Charcoal smelters like that could produce something like 200 tons of iron per year. By the late 1800s I think there was a mega charcoal furnace in the US that could smelt up to 200 tons per day, but that was unusual. A modern coke smelter can produce 12,000 tons of pig iron per day, and a modern foundry can cast 250 tons of steel in a single pour. So charcoal furnaces can’t support the kind of furnace burden necessary for billeting chunks big enough for the components of a modern bridge, or a submarine or nuclear reactor or whatever.

There is a growing awareness that there are no “renewable” ways to replace fossil fuels for essential products like cement and steel. This article is of interest because it explains why this is so challenging, and proposes ideas that, as readers of my book will see, are unlikely to work, and probably too late to commercialize if peak oil was in 2018 (true so far in 2022), as shown in chapter 2 (and also see “Peak Oil is Here!“).

Fennell P et al (2022) Cement and steel — nine steps to net zero. Nature.

Chapter 10 What Alternatives Can Replace Fossil Fueled Electricity Generation?

Fusion. Updates are in Why fusion power is forever away and Energy/Fusion.

Nuclear Power. Updates are in Nuclear Power problems, Nuclear waste, and other Nuclear Power posts.

Chapter 12 Half a Million Products Are Made Out of Fossil Fuels

This chapter lists a few of the 500,000 products, such as plastic, made from petroleum. USDOE (2020) lists additional natural gas and NG liquids products: “Homebuilders use many natural gas-based materials to build affordable and safe homes, including plastic foam insulation and sheathing materials, vinyl siding, weatherproof window frames, high performance caulks and paints, asphalt roofing materials, polyvinyl chloride (PVC) pipe, and chemically treated lumber. Within our homes, plastic foam insulation helps refrigerators, freezers, dishwashers, and heating and air conditioning systems operate quietly and efficiently.” Healthcare: surgical gloves, antiseptics, medications, anesthetics, heart valves, surgical devices, prosthetics, eyeglasses, pacemakers, stents, joint replacements… Automakers have met increased fuel efficiency standards by replacing heavy metal parts with lightweight plastics, now 50% of a car by volume and just 10% by weight, dramatically improving gas mileage, plus safety features like seat belts, air bags, interior cushioning, and crumple zones.

Paul Martin wrote: “A mass shift from fossil petroleum to biomass sources for chemicals and materials is extremely unlikely in my view. Why is that? Simple. Biomass has an average general chemical formula of C6H10O5. There are exceptions, food oils being one example, but the greatest mass of biomass is cellulose and lignin, not vegetable oil. It is hydrogen deficient, and worse still, there’s nearly one oxygen atom for every carbon atom. To make most useful chemicals, those oxygens need to be removed by reacting them with hydrogen to produce water, or burned off to produce CO2. Both represent a huge loss of energy and mass” (Martin 2024).

Chapter 15 Grow More Biomass: Where Is the Land?

Under the topic of “Genetically Engineer Plants to Grow Faster, Get Larger” I wrote: “Photosynthesis evolved about three billion years ago, and to this day, only converts a tiny fraction of sunlight into biomass. So seriously—we are going to enhance photosynthesis when Mother Nature did not figure that out over three billion years of random mutations? It is possible improved photosynthesis would make a plant less disease-resistant, or put more growth into leaves and stalks rather than edible fruit or grain, or require yet more water and soil nutrition. There are probably good reasons and limitations keeping nature from improving photosynthesis.”

Here’s another reason why we probably can’t improve photosynthesis: 14% of a plant’s energy goes into lifting water from the soil to its leaves, since photosynthesis requires water as well as light and CO2. Quetin GR et al (2022) Quantifying the Global Power Needed for Sap Ascent in Plants. Journal of Geophysical Research: Biogeosciences. DOI: 10.1029/2022JG006922

Chapter 16 The Ground is Disappearing Beneath Our Feet

More than one-third of the Corn Belt in the Midwest has completely lost its carbon-rich topsoil, which is critical for plant growth because of its water and nutrient retention properties. Thaler et al (2021) estimate the loss at about 100 million acres, or 156,250 square miles, the size of Illinois, Iowa, and Wisconsin combined. Degradation of soil quality by erosion reduces crop yields; this research estimated it has cut corn and soybean yields by about 6%, almost $3 billion in annual economic losses for farmers across the Midwest.
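The acreage-to-area conversion can be checked directly, at 640 acres per square mile:

```python
# Convert the estimated topsoil loss from acres to square miles
acres_lost = 100_000_000
acres_per_sq_mile = 640

sq_miles_lost = acres_lost / acres_per_sq_mile
print(f"{sq_miles_lost:,.0f} square miles")  # 156,250
```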

Briggs H (2022) Farm machinery exacting heavy toll on soil – study. BBC
The weight of modern combine harvesters, tractors and other farm machinery risks compacting the soil, leading to flooding and poor harvests, according to researchers in Sweden. The researchers calculated that combine harvesters, when fully loaded, have ballooned in size from about 4,000 kg (8,800 pounds) in 1958 to around 36,000kg (80,000 pounds) in 2020. This makes it difficult for plants to put down roots and draw up nutrients, and the land is prone to flooding. The researchers think the growing weight of farm machinery poses a threat to agricultural productivity. Their analysis, published in the Proceedings of the National Academy of Sciences, suggests combine harvesters could be damaging up to a fifth of the global land used to grow crops. Thomas Keller, professor of soil management at the Swedish University of Agricultural Sciences in Uppsala, Sweden, says machinery should be designed not to exceed a certain load. “Compaction can happen within a few seconds when we drive on the soil, but it can take decades for that soil to recover,” he said.
Scientific paper: Keller T, Or D (May 16, 2022) Farm vehicles approaching weights of sauropods exceed safe mechanical limits for soil functioning. PNAS. https://doi.org/10.1073/pnas.2117699119

Lambert (2020): Just as replacing grasslands with crops caused the 1930s Dust Bowl, so too will the replacement of grasslands with corn crops bring on Dust Bowl 2.0 and potentially desertification. From 2006 to 2011 the land growing corn for ethanol increased 10%, covering an additional 2,046 square miles. Before that, grasslands protected the soil by holding it tightly in place. Dust storms remove nutrients from the soil, making it harder for crops to grow and allowing even more wind erosion to occur. This destructive cycle, now aggravated by drought, can eventually lead to desertification, and is also a health hazard: the ultrafine dust particles can penetrate cells in the lungs and cause lung and heart disease. Dust storms increased by 5% a year, for a whopping 100% increase over the 20 years of the study (1998-2018). Even the Midwest is seeing dust storms grow after the planting and harvesting of soybeans in June and October, in an area also threatened by drought from climate change. The lead author of the findings in Geophysical Research Letters, Andrew Lambert, points out that “It’s particularly ironic that the biofuel commitments were meant to help the environment.” Lambert A et al (2020) Dust Impacts of Rapid Agricultural Expansion on the Great Plains. Geophysical Research Letters. https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2020GL090347

Chapter 19 Grow More Biomass: Dwindling Groundwater

Billions more people could have difficulty accessing water if the world opts for a massive expansion in growing energy crops to fight climate change.  The idea of growing crops and trees to absorb CO2 and capturing the carbon released when they are burned for energy is a central plank to most of the Intergovernmental Panel on Climate Change’s scenarios for the negative emissions approaches needed to avoid the catastrophic impacts of more than 1.5°C of global warming.

But the technology, known as bioenergy with carbon capture and storage (BECCS), could prove a cure worse than the disease, at least when it comes to water stress. The water needed to irrigate enough energy crops to stay under the 1.5°C limit would leave 4.58 billion people experiencing high water stress by 2100, up from 2.28 billion today, especially in South America and southern Africa (Vaughan 2021).

Chapter 21 Grow More Biomass: Pesticides

I’m adding updates to energyskeptic.com in the post below as well as others in category decline/pollution/pesticides here.

Chemical industrial farming is unsustainable. Why poison ourselves when pesticides don’t save more of our crops than in the past?

Chapter 24 Corn Ethanol. Why?

Renshaw J et al (2021) U.S. bread, doughnut makers urge Biden to roll back biofuel requirements. Reuters.  A trade group representing some of America’s biggest baked goods companies is urging the Biden administration to ratchet back its biofuel ambitions, arguing that using fuel made from crops could raise the cost of donuts, bread and other foods. They met with the Environmental Protection Agency (EPA) last week to urge reduced blending mandates, particularly for biodiesel since supplies of soy and canola oil are running low (40% of soybeans go to biodiesel fuels).

From Chapter 24: Ethanol Raises Food Prices and Harms People and Businesses

The Renewable Fuel Standard (RFS) mandating ethanol has led to a shortage of corn for food and animal feed. From 2007 to 2012, prices were driven up so much that farmers planted 17 million new acres of corn rather than soybeans, wheat, hay, cotton, and other crops, driving their prices up to all-time records as well. Cattle feed prices were so high that herds were culled to levels not seen in 60 years, causing beef prices to rise an incredible 60% from 2007 to 2012.

Restaurants were also affected because corn, meat, and other crops rose in price. It appears the interests of Archer Daniels Midland (ADM), Cargill, and 3.2 million farmers were favored over those of us who eat food. That includes the 15.6 million Americans who work in the restaurant industry—about one in ten US workers.

Fraud in the RFS program will likely increase (Cohn 2022): The Inflation Reduction Act signed into law by President Joe Biden in August includes historic investments to combat climate change. It may also open new avenues for fraud by expanding a program that has given federal authorities fits for years, since it does not include any new provisions to prevent fraud.

Peter Whitfield, a partner at law firm Sidley Austin in Washington, D.C., said that in [the Renewable Fuel Standard] program there is little oversight, so there is a way to generate a massive amount of money fraudulently with little effort. Possibilities for fraud will still exist, and he is skeptical that investigators will catch frauds as the programs expand. One issue is that biofuel feedstock is in short supply yet biofuel incentives are being increased, which will tempt some to cheat to get the lucrative biofuel credits.

One example of fraud happened in 2019 when members of a polygamous, Utah-based religious sect known as “The Order” pleaded guilty to conspiring with a Los Angeles businessman who called himself “The Lion” to bilk the federal government out of some $1 billion in a scheme involving Renewable Fuel Standard credits and related IRS tax credits. Using a series of shell companies and sham transactions, the team made it look like they were producing massive amounts of biofuel at a plant in northern Utah and shipping it far and wide. That allowed them to rake in millions of dollars in incentives, even though they were producing very little fuel. The extent of the scam came to light only after a member of the sect who happened to work in the accounting department broke away from the group — she said she was about to be forced to marry her cousin — and told authorities what she knew.

Chapter 26 Fill ‘er up with seaweed (see energyskeptic post here).

Bever (2021) Fighting climate change by farming kelp. NPR: An absurd project to cash in on carbon sequestration funds by hauling kelp out to sea until it’s so heavy the buoy sinks and the kelp’s CO2 is sequestered on the ocean floor. What could go wrong: whales entangled, ship propellers snarled, beaches fouled? And at what price and energy cost? And why? As Life After Fossil Fuels explains, peak oil occurred in 2018, and the resulting decline of emissions, now at 4% a year and increasing exponentially, dwarfs all sequestration and renewable contraption dreams and schemes.

Chapter 27 “The Problems with Cellulosic Ethanol Could Drive You to Drink”

Ethanol is pointless: trucks, locomotives, and ships don’t run on ethanol or diesohol. Only diesel matters; peak diesel is more apt than peak oil.

The main reason there is yet to be a commercial cellulosic ethanol production plant is that “Except for fruits and protected seeds, the rest of a plant evolved over hundreds of millions of years to not be eaten by herbivores or microbes, with barriers of toxins, spines, and thick bark. The most formidable defense is a rigid structure of indigestible cellulose, hemicellulose, and lignin, which even after death can take a year or more for microbes and fungi to consume and break down into new soil. Scientists try to speed up the process with brute force. Bioreactors create high pressures and temperatures; other machines mill, radiate, steam explode, accelerate electrons, hydrolyze with acids, freeze, drench in harsh chemicals, expand fibers with ammonia or ozone, and inflict other torments to get the sugars out. Nothing much works. They have hit a cellulosic wall.”

Or as Service (2022) writes: “…and the spearlike corn stalks and other woody biomass often jam machines designed to grind it up. The chemical industry is built on handling liquids and gases; it’s much harder with solids. This extra handling and processing means jet fuel from biofuels will never be as cheap as fuel made from petroleum.”

Service RF (2022) Can biofuels really fly? Science.

In this chapter I discussed why attempts to use termites to make ethanol haven’t worked out: “…Scientists have been trying for many years to replicate a termite’s ability to break down plants. Termites digest wood by outsourcing the work to the protists in their gut. Protists, in turn, outsource the work to many bacteria that use enzymes to break wood down further. Just like at a factory, each microbe performs one task, and excretes a different substance than it consumed. In a termite gut factory, one microbe’s poop is ambrosia for another. This intricate chain reaction has proven difficult to synthesize. Too much of anything along the chain of reactions can kill the process. For example, in ethanol production, when yeast has raised the concentration of excreted ethanol to 12–18%, the yeast dies. So far scientists haven’t been able to get termite or ruminant gut organisms to expand from their tiny world into the expansive gut of a 2,000-gallon stainless-steel tank.”

Altamia’s 2020 paper discusses the bacteria of shipworms, which have been destroying wooden ships and docks for thousands of years. There is hope their enzymes can be used to break down wood to make biofuels, but they sound a lot like underwater termites to me. Shipworms are long, thin mollusks famed and feared for their ability to eat wood. But they can’t do it alone: they rely on bacterial partners that reside not in the gut but inside the cells of their gills. Perhaps their enzymes can be used to break down lignocellulose into sugars, and then into ethanol.

Chapter 28 Biodiesel to keep trucks running

Last month, several airline CEOs met with Biden administration officials to discuss emissions and options for government incentives for aviation biofuels as a way of reducing those emissions. But increasing biofuel production to 20 million barrels of oil equivalent a day could make cooking oil unaffordable for millions of people and result in large-scale deforestation as 100 million more acres of land are cleared to grow biofuel crops. Reforesting those 100 million acres would offset 8 times more CO2 emissions. The Center for Biological Diversity objects as well, since far more CO2 emissions reduction could come from phasing out dirty, aging aircraft and maximizing operational efficiencies (Slav 2021).

Chapter 29 Can We Eat Enough French Fries?

In this chapter I reported that a sewer in London was clogged with a record-breaking fatberg of 140 tons. Breaking news: that record has been broken by a 330-ton fatberg (Picheta 2021). So that’s good news: more fat to propel our four-ton autos. Or maybe not, there’s a new competitor: insulation for homes made of cooking oil, wool, and sulfur (Najmah 2021).

No worries about finding enough human fat from liposuction. There are 390 million tonnes of humans, but just 22 million tonnes of wild mammals. Lots of fat is available from us and our domesticated animals: 630 million tonnes of sheep, rodents, dogs, pigs, cattle, and more (Greenspoon 2023).

Chapter 30 Combustion: Burn Baby Burn

The Ryegate Power Station biomass plant in Vermont may shut down sooner than expected: the contract that expires in 2022 is only being renewed for 2 years rather than the expected 10, due to the much higher cost of its electricity, which Vermonters subsidize with $5 million a year. It’s pricey because the plant is only 23% efficient, so for every four trees burned, roughly one tree’s worth of energy is converted to electricity. Biomass plants like Ryegate have been closing throughout the region, with plants in New Hampshire and Maine not being relicensed (Gockee 2021).
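The one-tree-in-four figure follows directly from the 23% conversion efficiency; a quick check:

```python
# At 23% efficiency, how many trees must be burned per tree-equivalent
# of electricity delivered?
efficiency = 0.23
trees_per_tree_delivered = 1 / efficiency
print(f"~{trees_per_tree_delivered:.1f} trees burned per tree of electricity")  # ~4.3
```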

Chapter 33 Conclusion: Do You Want to Eat, Drink, or Drive?

I wrote: “Declining oil means you can stop worrying about robots taking over. What energy could they be built with and run on after fossils? Not that a robot overthrow was ever an issue. The human cortex is 600 billion times more complicated than any artificial network. The code to simulate the human brain would require hundreds of trillions of lines of code, inevitably riddled with trillions of errors.”

Nor do you need to fear artificial intelligence (AI), which many otherwise intelligent people think is an existential threat. It isn’t. Nail (2021) describes how AI treats the brain like a computer with a very narrow range of tasks in a closed system where all possibilities are known, and breaks down when confronted with novel situations. But brains are nothing like computers, whose fixed logic gates are binary, 0 or 1. Brain neurons are analog, changing their firing thresholds, with chemicals that further alter activity, efficiency, and connectivity. And then there’s the role of dreaming, and much more that makes our brains neuroplastic in ways a computer AI never will be; see the article for details.

The European Union has initiated an ambitious plan called Farm to Fork (EU 2021) that hopes to cut pesticide and excess nutrient use by 50% and convert 25% of farms to organic agriculture by 2030 (Rosmino 2021).

Do you want to eat or drive? Many energy companies plan to increase their biofuel capacity by 2030, mainly with corn and soybean oil. This is driving price inflation for vegetable oils, including palm, canola and soybean oil, doubling corn futures and tripling lumber costs. The accelerating demand for renewable biodiesel fuels is directly responsible for this price inflation, which has pushed food costs to their highest in seven years (Kimani 2021).

And there may be a lot less oil than the EIA, IEA, BP Statistical Review, and other estimates of world reserves suggest. Laherrère et al (2022) explain the various methods used to calculate world fossil reserves and why their method is probably the most accurate; this is what Laherrère has written about for the past 60 years, so I find this paper very plausible. Many geologists who’ve modeled likely fossil fuel decline within the IPCC climate model predicted that the most likely outcomes were RCP 2.6 to 4.5 (see the last chapter in “Life After Fossil Fuels”), though their papers came out before it became likely that 2018 was the world peak oil production year, so I expect the lower RCP 2.6 is most likely. This paper estimates RCP 3.0, since global CO2 emissions for 2020–2100 are approximately 1000 GtCO2 for coal, 750 for oil, and 650 for natural gas, a total of 2400 GtCO2, with a further ~850 GtCO2 emitted beyond 2100. Clearly such emissions are incompatible with the 580 GtCO2 limit on emissions to 2100 assumed by Welsby et al (2021) to meet the 1.5°C goal in the 2022 IPCC report. If the 1750 GtCO2 emitted so far has led to a 1.1°C increase, 3250 GtCO2 more would add another 2°C, for a total of about 3°C above pre-industrial levels.
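The temperature arithmetic above can be reproduced by assuming warming scales linearly with cumulative emissions (the TCRE-style approximation the paragraph uses); a minimal sketch with the quoted figures:

```python
# Linear scaling of warming with cumulative CO2 emissions (TCRE-style),
# using the figures quoted from Laherrere et al (2022).
warming_so_far_c = 1.1                   # from ~1750 GtCO2 emitted to date
emitted_gtco2 = 1750
future_gtco2 = 1000 + 750 + 650 + 850    # coal + oil + gas to 2100, plus post-2100

degrees_per_gtco2 = warming_so_far_c / emitted_gtco2
additional_c = future_gtco2 * degrees_per_gtco2
total_c = warming_so_far_c + additional_c
print(f"Additional warming ~{additional_c:.1f} C, total ~{total_c:.1f} C")  # ~2.0 C more, ~3.1 C total
```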

But oil makes all other resources possible, including coal and natural gas. Its decline is likely to lead to social unrest, depressions, wars and civil wars, and supply chain failures, while natural disasters such as hurricanes taking out offshore oil platforms and floods and earthquakes affecting refineries will further disrupt oil production, so much so that even Laherrère et al’s (2022) much lower estimates of oil production and CO2 emissions may be too high. Plus the FLOW RATES will be lower. Nor are unconventional tar sands (Canada) or heavy oil (Venezuela) likely to produce much oil, since their energy return on investment is very low. That leaves their estimate of remaining conventional oil of 1100 Gb (Table 1), equivalent to ~470 GtCO2, well under the 580 GtCO2 emissions limit. To the extent that oil production lasts despite wars and other disruptions, coal and natural gas emissions may push past the 580 GtCO2 limit. But again, if whatever is produced takes a very long time compared to today, the ocean and land sinks will absorb some of the CO2, lowering the ultimate temperature rise. Perhaps.

Book Reviews:

Ennos R (2021) The Age of Wood: Our Most Useful Material and the Construction of Civilization.

References

Altamia MA et al (2020) Teredinibacter waterburyi sp. nov., a marine, cellulolytic endosymbiotic bacterium isolated from the gills of the wood-boring mollusc Bankia setacea…. International Journal of Systematic and Evolutionary Microbiology.

Cembalest M (2021) 2021 Annual Energy Paper. JP Morgan Asset & Wealth Management.

Cohn S (2022) Inflation Reduction Act’s expanded biofuel incentives raise concerns about fraud. CNBC.

EU (2021) Farm to Fork Strategy. European Commission.

Gockee A (2021) Is time ticking on the Ryegate Power Station biomass plant? vtdigger.org

Greenspoon L et al (2023) The global biomass of wild mammals. PNAS https://doi.org/10.1073/pnas.2204892120

Groom N et al (2021) EV rollout will require huge investments in strained U.S. power grids. Reuters.

IEA (2018) International Energy Agency World Energy Outlook 2018, figures 1.19 and 3.13. International Energy Agency.

Laherrère J, Hall CAS, Bentley R (2022) How much oil remains for the world to produce? Comparing assessment methods, and separating fact from fiction. Current Research in Environmental Sustainability.

Kimani A (2021) Global Food Prices Soaring As Demand For Biofuels Continues To Climb. oilprice.com

Martin P (2024) The Refinery of the Future- a thought experiment. https://www.linkedin.com/pulse/refinery-future-thought-experiment-paul-martin-4pfoc?utm_source=share&utm_medium=member_ios&utm_campaign=share_via

Nail T (2021) Artificial intelligence research may have hit a dead end. “Misfired” neurons might be a brain feature, not a bug — and that’s something AI research can’t take into account. Salon.

Najmah IB et al (2021) Insulating Composites Made from Sulfur, Canola Oil, and Wool. ChemSusChem, Wiley.

Nargi L (2022) Plastic-coated agricultural chemicals are destroying human and planetary health. Salon.com

NREL (2021) Electrification Futures Study. National Renewable Energy Laboratory.

Picheta R (2021) A 330-ton fatberg is clogging an English city’s sewer, and it won’t move for weeks. CNN.

Rosmino C (2021) Meet the EU farmers using fewer pesticides to make agriculture greener. Euronews.com.

Slav I (2021) The Biofuel Boom Could Threaten Food Security. Oilprice.com

Thaler EA et al (2021) The extent of soil loss across the US Corn Belt. PNAS.

USDOE (2020) U.S. Oil and Natural Gas: Providing Energy Security and Supporting Our Quality of Life. U.S. Department of Energy, Office of Oil & Natural Gas.

Vaughan A (2021) Carbon-negative crops may mean water shortages for 4.5 billion people. NewScientist. Scientific article: Nature Communications. DOI: 10.1038/s41467-021-21640-3



Compressed air energy storage (CAES)

Figure 1. Potential salt dome locations for CAES facilities are mainly along the Gulf coast

Preface. Besides pumped hydro storage (PHS), which provides 99% of energy storage today, CAES is the only other commercially proven energy storage technology that can provide large-scale (over 100 MW) energy storage. But there are just two CAES plants in the world because there are so few places to put them, as you can see in Figure 1 and Figure i.

CAES is the most sustainable energy storage, without the environmental issues PHS poses, such as the flooding of land and the damming of rivers. And Barnhart (2013) rates the ESOI (energy stored on energy invested) of CAES the best of all; batteries need up to 100 times more energy to create than the energy they can store.

A more detailed and technical article on CAES with wonderful pictures can be found here: Kris De Decker. History and Future of the Compressed Air Economy.

Alice Friedemann   www.energyskeptic.com  author of  “Life After Fossil Fuels – Back to Wood World”, 2021, Springer, “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

How it works: Using off-peak electricity, compressed air is pumped into very large underground cavities at a depth of 1650–4250 feet (Hovorka 2009), and then drawn out to spin turbines at peak demand periods.

Uh-oh: it still needs fossil fuels. A big drawback of CAES is that it still needs fossil fuels, since electric generators use natural gas to supplement the energy from the stored compressed air. Natural gas provides the power to compress and pump the air underground, and when the compressed air is withdrawn, natural gas is used a second time to heat it and force it through expanders to power a generator. Current CAES facilities are essentially gas turbines that consume 40–60% less gas than conventional turbines (SBC 2013).

Few locations: Domal salt formations are rare (orange in figure i below)

Locations are scarce because they must be airtight. There are only two CAES plants in the world: one in Alabama (110 MW) built in 1991, and one in Germany built in 1979, both in domal salt formations.

There are only two because domal salt formations are so rare, existing in only a few U.S. states, as shown in figure i. These formations have one or more deep chambers within the salt dome that are airtight, so they can handle frequent charging and discharging, with pure, thick salt walls that self-heal with air moisture, preventing leaks. Bedded salt is not as ideal because it takes a huge amount of energy and water to carve salt chambers out of it, and domal salt is purer and thicker than bedded salt (Hovorka 2009).

Areas with class 4+ wind and possible CAES locations. Succar. 2008. Compressed Air Energy Storage: Theory, Resources, And Applications For Wind Power. Princeton University.


Ideally a CAES facility would store renewable wind power, but the best wind locations are seldom near domal salt areas. There is one wind/CAES project being planned, however: an $8 billion project in Utah. It would use the only known salt dome outside of Texas, Louisiana, or Alabama for a $1.5 billion CAES plant to store electricity from a $4 billion wind farm in Wyoming, delivering power to Los Angeles over $2.6 billion of new transmission lines running 535 miles ($4.86 million per mile) (DATC 2014; Gruver 2014).
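The project's component costs and the per-mile transmission figure check out against the quoted totals:

```python
# Component costs of the proposed Utah wind/CAES project (figures from the text)
caes_cost = 1.5e9          # CAES plant
wind_farm_cost = 4.0e9     # Wyoming wind farm
transmission_cost = 2.6e9  # new lines to Los Angeles
transmission_miles = 535

cost_per_mile = transmission_cost / transmission_miles
total_cost = caes_cost + wind_farm_cost + transmission_cost
print(f"Transmission: ${cost_per_mile/1e6:.2f} million/mile")  # ~$4.86 million/mile
print(f"Components total: ${total_cost/1e9:.1f} billion")      # ~$8.1 billion
```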

This is not exactly run-of-the-mill geology. CAES has yet to be deployed in bedded salt, aquifers, or abandoned rock mines because these formations are less likely to be airtight, and hence less able to charge and discharge frequently and maintain constant pressure. Underground areas once used to store natural gas or oil would have to be free of blockages that could gum up the works. Water is another limiting factor: high volumes are needed to cool the compressed air before storing it.

CAES systems generally have twice as much up-ramping capability as down-ramping. Translation: They can produce electricity faster than they can store it (IEA 2011a).

They are inefficient

The CAES plants in operation in Germany and the U.S. have electric-to-electric efficiencies of only about 40% and 54%, respectively (Luo 2015). A conversion efficiency this low would require roughly a doubling of wind and solar generation to make up for the loss.
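As a rough sketch of what that loss implies (the two efficiency figures come from the paragraph above; the rest is simple arithmetic):

```python
# Every kWh delivered from storage must first be generated and then
# survive the round trip, so generation needed per delivered kWh is
# 1 / round-trip efficiency.
def overbuild_factor(round_trip_efficiency: float) -> float:
    """kWh that must be generated per kWh delivered from storage."""
    return 1.0 / round_trip_efficiency

for eta in (0.40, 0.54):
    print(f"{eta:.0%} efficient: generate {overbuild_factor(eta):.2f} kWh per kWh delivered")
```

At 40% efficiency the factor is 2.5, i.e. more than a doubling for any energy routed through storage; energy used directly, without storage, needs no such overbuild.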

The Pacific Northwest National Laboratory calculated the cost of energy storage devices for balancing the grid if wind power reached 20% of electric generation across the United States. CAES was the most expensive option at $170.62 billion. Storage would fill spans ranging from milliseconds up to an hour. Not 2 hours, not a day, and not a week: that would cost extra. In billions of dollars, the options examined were $54.03 NaS battery, $63.85 flywheel, $81.62 Li-ion battery, $116.61 redox flow battery, $125.06 demand response (car PHEV batteries), $130.24 pumped hydro storage (PHS), $135.48 combustion turbine (CT), and $170.62 compressed air energy storage (PNNL 2013).

Based on nine vendor estimates, to build CAES units able to store one day of U.S. electricity would cost from $912 billion to $1.48 trillion. That’s below ground. Above ground CAES would cost $3.8 trillion (DOE/EPRI 2013).
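For scale, assuming (my assumption, not from the source) U.S. generation of roughly 4,000 TWh per year, one average day is about 11 TWh, and the vendor totals imply a capital cost per kWh of storage capacity along these lines:

```python
# Back-of-envelope $/kWh implied by the DOE/EPRI totals above.
US_TWH_PER_YEAR = 4000.0                    # assumed, not from the source
kwh_per_day = US_TWH_PER_YEAR / 365 * 1e9   # one average day, in kWh (~1.1e10)

estimates = {
    "below ground, low":  912e9,
    "below ground, high": 1.48e12,
    "above ground":       3.8e12,
}
for label, dollars in estimates.items():
    print(f"{label}: ~${dollars / kwh_per_day:.0f}/kWh")
```

Roughly $83 to $135 per kWh below ground and about $347 per kWh above ground, under that assumed generation figure.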

Locations must be near the electric grid: it's far too expensive to add transmission from remote locations, and transmission lines are already too expensive to build.

According to Alfred Cavallo, “The immense magnitude of stored energy required to transform the intermittent wind resource to a constantly available power supply is not widely appreciated. For example, a 200 MW wind/CAES plant would need a minimum storage capacity of 10,000 MWh, or 50 hours of full plant output (this assumes that the wind power density is constant throughout the year). If the wind was not constant, but seasonal, say mainly in the winter or spring, the energy storage for seasonal output would require a minimum of 40,000 MWh (200 hours of full power plant output). Clearly, only the most inexpensive of storage media, like air or water, could be used in such an application” (Cavallo 2007).
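Cavallo's sizing rule is simply plant rating multiplied by the hours of full output that storage must cover; restated as a sketch:

```python
# Storage requirement (MWh) = plant rating (MW) x hours of full output covered.
def storage_required_mwh(plant_mw: float, hours: float) -> float:
    return plant_mw * hours

print(storage_required_mwh(200, 50))    # steady wind case: 10,000 MWh
print(storage_required_mwh(200, 200))   # seasonal wind case: 40,000 MWh
```

The seasonal case quadruples the requirement, which is why only the cheapest storage media, air or water, are candidates at this scale.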

Since the wind is a seasonal resource, it would be ideal to be able to store weeks of wind energy, but that is impossibly expensive (Cavallo 1995).

CAES in aquifers has never been accomplished; an attempt to do so in Iowa was abandoned after $8 million was spent, because testing found the site would leak (see Haugen below). Aquifers are far more expensive than salt caverns, partly due to the high cost of conducting tests: seismic surveys, drilling test wells, modeling the reservoir, and so on (Swensen; Hydrodynamics Group; Marchese). Aquifers may not be suitable for CAES at all: they must have the right amount of porosity and permeability beneath an impermeable caprock with the right geometry (Succar), and this makes it very expensive to find out.

Hard rock caverns, such as abandoned mines, are the least likely place to put CAES. It has never been attempted, leakage is too likely, and the need to find a mine at exactly the right depth narrows the choices further.

Air poses underground storage problems that natural gas does not. Storage that once held natural gas may not work, because “a CAES system used for arbitrage or backing wind power will likely switch between compression and generation at least once a day and perhaps several times a day. In contrast, most natural gas storage facilities are often only cycled once over the course of the year to meet the seasonal demand fluctuations for natural gas. Third, several oxidation processes might take place in the presence of oxygen from the air depending on the mineralogy of the formation. Also, introduction of air into the formation might promote propagation of aerobic bacteria that might pose a significant corrosion risk. Finally, additional corrosion mechanisms might be promoted due to the introduction of oxygen into the formation” (Succar).

Haugen, D. 2012. Scrapped Iowa project leaves energy storage lessons. Midwest Energy News.

After spending $8 million on a CAES aquifer in Iowa, the project was halted when it was concluded that air didn’t flow fast enough through the aquifer for it to be effective as a compressed-air energy storage site.

Hydrodynamics Group. CAES in aquifers is problematic: geological data are poor and reservoir properties uncertain.

Hydrodynamics has found that CAES in aquifer storage medium is problematic. We found that geological data for aquifer structures is typically very limited, resulting in costly exploration, field testing, and analysis development programs. Other challenges include constraint of air storage pressure around the hydrostatic pressure of the aquifer, limitations on well productivity, the potential for oxygen depletion, and the potential of water production with the air. We have found that the mitigation of the challenges of CAES development is dependent on the selection of an anticline structure at the proper depth, and the choice of highly permeable porous medium.

REFERENCES

Barnhart, CJ, et al. 2013. On the importance of reducing the energetic and material demands of electrical energy storage. Energy & Environmental Science. 

Cavallo, A.  et al. 1995. Cost effective seasonal storage of wind energy. Houston, TX, USA,  pp. 119-125.

Cavallo, A. 2007. Controllable and affordable utility-scale electricity from intermittent wind resources and compressed air energy storage (CAES). Energy 32: 120-127.

DATC. 2014. $8-billion green energy initiative proposed for Los Angeles. Los Angeles: Duke American Transmission Co.

Denholm. September 23, 2013. Energy Storage in the U.S. National Renewable Energy Laboratory. Slide 15.

DOE/EPRI. 2013. Electricity storage handbook in collaboration with NRECA. USA: Sandia National Laboratories and Electric Power Research Institute.

Gruver, M. 2014. Renewable energy plan hinges on huge Utah caverns. New York: Associated Press.

Hovorka, S. 2009. Characterization of Bedded Salt for Storage Caverns: Case Study from the Midland Basin. Texas Bureau of Economic Geology.

Hydrodynamics Group. 2009. Norton compressed air energy storage. http://hydrodynamics-group.com/mbo/content/view/16/40

IEA. 2011. IEA harnessing variable renewables: a guide to the balancing challenge. Paris: International Energy Agency.

Luo X, et al. 2015. Overview of current development in electrical energy storage technologies and the application potential in power system operation. Applied Energy 137: 511-536.

Marchese, D. 2009. Transmission system benefits of CAES assets in a growing renewable generation market. Energy Storage Association Annual Meeting.

NREL. 2014. Renewable Electricity Futures Study. National Renewable Energy Laboratory.

PNNL. 2013. National assessment of energy storage for grid balancing and arbitrage: phase II, vol 2: cost and performance characterization. Washington, DC: Pacific Northwest National Laboratory.

SBC. 2013. Electricity storage. SBC Energy Institute.

Succar, S. et al. 2008. Compressed Air Energy Storage: Theory, Resources, and Applications for Wind Power. Princeton Environmental Institute.

Swensen, E. et al. 1994. Evaluation of Benefits and Identification of Sites for a CAES Plant in New York State. Energy Storage and Power Consultants. EPRI Report TR-104268.


Heinberg on what to do at home to conserve energy

Preface. A quick summary. Best investment: insulating exterior walls, ceiling, and floors for energy savings. Other good changes were planting a garden and fruit-and-nut orchard, and buying a solar hot water heater, solar food dryer, solar cookers, chickens, and energy-efficient appliances.

Lessons learned: These are expensive, especially energy storage. Solar cookers work mainly in the summer.

In the future there will be more bikes and ebikes than cars. There needs to be much more local production of food and other goods to shorten supply chains.

Bottom line: there's very little we can do as individuals. We can't mine the minerals we need, few of us can grow all of our own food, and despite all these investments Heinberg still depends heavily on the greater world for food, electricity, and clothes; cars and most other objects in our lives can't be home-made. What is required to make a transition is much bigger than most people imagine.

Alice Friedemann   www.energyskeptic.com  author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, 2021, Springer; “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Richard Heinberg. 2020. If My House Were the World: The Renewable Energy Transition Via Chickens and Solar Cookers. Resilience.org

For the past two decades, my wife Janet and I have been trying to transition our home to a post-fossil-fuel future. I say “trying,” because the experiment is incomplete and only somewhat successful. It doesn’t offer an exact model for how the rest of the world might make the shift to renewable energy; nevertheless, there’s quite a bit that we’ve learned that could be illuminating for others as they contemplate what it will take to minimize climate change by replacing coal, oil, and gas with cleaner energy sources.

We started with a rather trashy 1950s suburban house on a quarter-acre lot. We didn’t design a solar-optimal house from scratch the way Amory Lovins did (we thought about it, but we just didn’t have the time or money). We did what we could afford to do, when we could afford to do it.

Our first step was to insulate our exterior walls, ceiling, and floors. That was probably our best investment overall: it saved energy, and it made the house quieter and more pleasant to live in. Then we installed a small (1.2 kw) photovoltaic system, and planted a garden and fruit-and-nut orchard. Gradually, over the years, we added battery backup for our PV system, a solar hot water heater, a solar food dryer, chickens, solar cookers, energy-efficient appliances (including a mini-split electric HVAC system), and an electric car.

Here are ten things we learned along the way.

  1. It’s expensive. Altogether, we’ve spent tens of thousands of dollars on our quest for personal sustainability. And we’re definitely not big spenders. We economized at every stage, and occasionally benefitted from free labor and materials (our solar hot water panels, for example, were donated, and we built our food dryer from scrap). Still, once every few years we made a significant outlay for some new piece of electricity-generating or energy-saving technology. True, solar panels have gotten cheaper in the intervening years. On the other hand, there are things we still haven’t gotten to: we continue to rely on an old natural gas-fired kitchen cooking stove, which really should be replaced with an induction range if we hope to be all-solar-electric.
  2. Some things didn’t work. Early on, we planned and built a glassed-in extension on the south side of our house. Our idea was that it would capture sunlight in the winter and reduce our heating bills. As it turned out, we didn’t get the window and roof angles right, and so we receive relatively little heating benefit from this add-on. Instead we use it as a garden room for starting seedlings in the early spring. I suspect the global renewable energy transition will similarly see a lot of good ideas go awry, and false starts repurposed.
  3. Some things worked well. Twenty years after purchase, we have an antique PV system, with museum-quality Siemens panels still spitting out electrons. We made a big investment up-front, and got free electricity for two decades. This is a very different economic bargain from the familiar one with fossil fuels, which is pay-as-you-go. Similarly, making a rapid global energy transition, though offering some economic benefits in the long run, will require an enormous up-front expenditure. We learned that solar cookers are extremely cheap and pleasing to work with—in the summer months. Finally, we learned that keeping chickens is an economical source of eggs, though hens are less cost-effective from a food-production standpoint if you choose to treat them well (and continue caring for them after their egg laying subsides), as we did. There can be valuable side benefits: one hen, who’s been with us for nearly 10 years, has become an emotional support animal who supplants our need for more costly sources of psychological aid. I could say much more about her—but that’s for another occasion. Our chickens also provide manure and eggshells that enrich our soil. We compost some of our greenwaste and keep a worm bin, thus reducing energy usage by diverting some of our waste that would otherwise go to a landfill; we seasonally dry some produce in our solar dehydrator; and we can some of our fruit. These activities require little financial investment, but need a noticeable ongoing investment of effort.
  4. Energy storage is especially expensive. Our solar panels have lasted a long time, but our battery backup system didn’t. It now provides only about 20 minutes of power. True, our battery system is far from being state-of-the-art (it consists of five high-capacity lead-acid cells). Nevertheless, this proved to be the least-durable, least cost-effective aspect of our whole effort. The truth is, on both a diurnal and a seasonal basis, we rely almost entirely on the grid for energy storage and for matching electricity supply with demand. The lesson for our global energy transition: even though batteries are getting cheaper, energy storage will still be a costly engineering challenge.
  5. Reduce energy usage before you transition. Because renewable energy generation requires a lot of up-front investment, and because energy storage is also costly, it makes sense to minimize energy demand. For a household, that’s not problematic: we were quite happy shrinking our energy usage to roughly a quarter of the California average. But for society as a whole, this has huge implications. It’s possible to reduce demand somewhat through energy-efficiency measures, but serious reduction will have economic repercussions. We have built our national and global economic systems on the expectation of always using more. A successful energy transition will necessarily entail moving away from a growth-based consumer economy to an entirely different way of organizing investment, production, consumption, and employment.
  6. Our house is not an industrial manufacturing site. We don’t make our own cement or glass. If we had tried, it would have been a more interesting experiment, but much harder. We were undertaking the easy aspects of energy transition. The really difficult bits include things like aviation and high-heat industrial processes.
  7. Adding personal transportation to our renewable energy regime shifted us into energy deficit mode. We like our electric car, but charging it takes a lot of electricity (the energy needed to manufacture the car is another story altogether). Once we bought the car, we realized we need a larger PV system (that’s on our to-do list). For society as a whole, this suggests that transitioning the transportation sector will require sacrifice (see number 5, above). A renewable future will likely be less mobile and more local, and will feature more bikes and ebikes than cars. We should start shortening supply chains immediately.
  8. True sustainability and self-sufficiency would have required a lot more money, a lot more work, adaptation to a lot less consumption—or all three. Our experiment was informal; we didn’t keep track of every way in which we were using energy directly or indirectly (for example, via the embodied energy in the products we purchased). We continue to depend on flows of energy and money, and stocks of resources, in the world at large. We don’t generate the energy needed to mine minerals, or to manufacture cars, solar panels, or other stuff we have bought, such as clothes, a TV, computers, and books. The same holds for food self-sufficiency: we get a lot of fruit, nuts, eggs, and veggies from our backyard with minimal fossil energy inputs, but we buy the rest of what we eat from a local organic market. The world as a whole doesn’t have the luxury of going elsewhere to get what it needs; the transition will have to be comprehensive.
  9. You can’t expect someone else to do it all for you. Many people assume that the cost of the energy transition will somehow be paid by society as a whole—primarily, by big utility companies acting under government regulations and incentives. But households like yours and mine will have to bear a lot of the expense, and businesses will have to do even more of the heavy lifting. If households can’t afford to buy new equipment, or businesses can’t do so profitably, that will make the transition that much harder and slower. If we make the transition more through energy demand reduction rather than new technology, that will require massive shifts in people’s (read: your and my) expectations and behavior.
  10. We’re glad we did what we did. Our experiment has been instructive and rewarding. As a result of it, we have a much better appreciation for where our energy and manufactured products come from, and how much they impact the environment. We are more keenly aware of what we formerly took for granted and how cluelessly privileged our nation has been in its reliance on cheap fossil fuels. Our quality of life has improved as our consumption declined.

We would do most of it all over again (though I’d put more effort into designing the solarium that now serves as our garden room). I would have thought, at the outset, that after 20 years we’d be more sustainable and self-sufficient than we actually are. My take-away: the energy transition is an enormous job, and people who look at it just in terms of politics and policy have little understanding of what is actually required.


Life After Fossil Fuels: manufacturing will be less precise

Preface. This is a book review with excerpts of Winchester's “The Perfectionists: How Precision Engineers Created the Modern World”. The book describes how the industrial revolution was made possible by ever more precision. First came the steam engine, practical only once a way was invented to machine its parts to a tenth of an inch of precision so that the steam didn't escape. By World War II parts could be made precise to within a millionth of an inch, and today to 35 decimal places of precision (0.00000000000000000000000000000000001), which is required for microchips, jet engines, and other high tech.

This amazing precision is achieved with machine tools, which make precise parts by shaping metal, glass, plastic, ceramics, and other rigid materials: cutting, boring, grinding, shearing, squeezing, rolling, stamping, and riveting them. Most precision machine tools are powered by electricity today; in the past they were powered by steam engines.

Machine tools also revolutionized our ability to kill each other.  Winchester writes: “When any part of a gun failed, another part had to be handmade by an army blacksmith, a process that, with an inevitable backlog caused by other failures, could take days. As a soldier, you then went into battle without an effective gun, or waited for someone to die and took his, or did your impotent best with your bayonet, or else you ran. Once a gun had been physically damaged in some way, the entire weapon had to be returned to its maker or to a competent gunsmith to be remade or else replaced. It was not possible, incredible though this might seem, simply to identify the broken part and replace it with another. No one had ever thought to make a gun from component parts that were each so precisely constructed that they were identical one with another.”

Machine tools cannot be used for wood because it is flexible: it swells and contracts in unpredictable ways and can never hold a fixed dimension, whether planed or jointed, lapped or milled, or varnished to a brilliant luster. Wood is fundamentally and inherently imprecise.

Since both my books, “When Trucks Stop Running” and “Life After Fossil Fuels”, make the case that we are returning to a world where the electric grid is down for good and wood is the main energy source and infrastructure material after fossil fuels become scarce, the level of civilization we can achieve will depend greatly on how precisely we can make objects in the future. Because wood charcoal makes inferior, weaker iron, steel, and other metals than coal does, today's precision will no longer be possible. Microchips, jet engines, and much more will be lost forever. Eventual deforestation will also mean orders of magnitude less metal, brick, ceramics, glass, and other products for lack of wood charcoal. And since peak coal is here, and the remaining U.S. reserves are mostly lignite, poorly suited to the high heat needed in manufacturing, civilization as we know it has a limited time-span.

“The Great Simplification” will reduce precision. The good news is that hand-crafting of beautiful objects will return, a far more rewarding way of life than production lines at factories today.


***

Winchester, S. 2018. The Perfectionists: How Precision Engineers Created the Modern World. HarperCollins.

Two particular aspects of precision need to be addressed. First, its ubiquity in the contemporary conversation—the fact that precision is an integral, unchallenged, and seemingly essential component of our modern social, mercantile, scientific, mechanical, and intellectual landscapes. It pervades our lives entirely, comprehensively, wholly.

Because an ever-increasing desire for ever-higher precision seems to be a leitmotif of modern society, I have arranged the chapters that follow in ascending order of tolerance, with low tolerances of 0.1 and 0.01 starting the story and the absurdly, near-impossibly high tolerances to which some scientists work today—claims of measurements of differences of as little as 0.000 000 000 000 000 000 000 000 000 01 grams, 10 to the -28th grams, have recently been made, for example—toward the end.

Any piece of manufactured metal (or glass or ceramic) must have chemical and physical properties: it must have mass, density, a coefficient of expansion, a degree of hardness, specific heat, and so on. It must also have dimensions: length, height, and width. It must possess geometric characteristics: it must have measurable degrees of straightness, of flatness, of circularity, cylindricity, perpendicularity, symmetry, parallelism, and position—among a mesmerizing host of other qualities even more arcane and obscure.

The piece of machined metal must have a degree of what has come to be known as tolerance. It has to have a tolerance of some degree if it is to fit in some way in a machine, whether that machine is a clock, a ballpoint pen, a jet engine, a telescope, or a guidance system for a torpedo.

To fit with another equally finely machined piece of metal, the piece in question must have an agreed or stated amount of permissible variation in its dimensions or geometry that will allow it to fit. That allowable variation is the tolerance, and the more precise the manufactured piece, the greater the tolerance that will be needed and specified.

The tolerances of the machines at the LIGO site are almost unimaginably huge, and the consequent precision of its components is of a level and nature neither known nor achieved anywhere else on Earth. LIGO is an observatory, the Laser Interferometer Gravitational-Wave Observatory.  The LIGO machines had to be constructed to standards of mechanical perfection that only a few years before were well-nigh inconceivable and that, before then, were neither imaginable nor even achievable.

Precision's birth derives from the then-imagined possibility of holding, managing, and directing this steam, this invisible gaseous form of boiling water, so as to create power from it.

The father of true precision was an eighteenth-century Englishman named John Wilkinson, who was denounced sardonically as lovably mad, and especially so because of his passion for and obsession with metallic iron. He made an iron boat, worked at an iron desk, built an iron pulpit, ordered that he be buried in an iron coffin, which he kept in his workshop (and out of which he would jump to amuse his comely female visitors), and is memorialized by an iron pillar he had erected in advance of his passing in a remote village in south Lancashire.

Though the eventual function of the mechanical clock, brought into being by a variety of claimants during the fourteenth century, was to display the hours and minutes of the passing days, it remains one of the eccentricities of the period (from our current viewpoint) that time itself first played in these mechanisms a subordinate role. In their earliest medieval incarnations, clockwork clocks, through their employment of complex Antikythera-style gear trains and florid and beautifully crafted decorations and dials, displayed astronomical information at least as an equal to the presentation of time.

The behavior of the heavenly bodies was ordained by gods, and therefore was a matter of spiritual significance. As such, it was far worthier of human consideration than our numerical constructions of hours and minutes, and was thus more amply deserving of flamboyant mechanical display.

John Harrison, the man who most famously gave mariners a sure means of determining a vessel’s longitude. This he did by painstakingly constructing a family of extraordinarily precise clocks and watches, each accurate to just a few seconds in years, no matter how sea-punished its travels in the wheelhouse of a ship.

An official Board of Longitude was set up in London in 1714, and a prize of 20,000 pounds offered to anyone who could determine longitude with an accuracy of 30 miles. John Harrison eventually, and after a lifetime of heroic work on five timekeeper designs, would claim the bulk of the prize.
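The 30-mile requirement translates directly into clock accuracy. As a rough equatorial calculation (the circumference figure is a standard approximation, not from the book):

```python
# The Earth turns 360 degrees in 24 hours, so each minute of clock error
# shifts an equatorial longitude fix by circumference / 1440 miles.
EQUATOR_MILES = 24901.0          # approximate equatorial circumference
MINUTES_PER_DAY = 24 * 60

miles_per_clock_minute = EQUATOR_MILES / MINUTES_PER_DAY  # ~17.3 miles
allowed_error_minutes = 30 / miles_per_clock_minute       # ~1.7 minutes
print(f"A 30-mile fix allows ~{allowed_error_minutes:.1f} minutes of clock error")
```

In other words, over an entire ocean voyage the whole error budget was under two minutes of time, which is why Harrison's accuracy of a few seconds per year won the prize.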

The fact that the Harrison clocks were British-invented and their successor clocks firstly British-made allowed Britain in the heyday of her empire to become for more than a century the undisputed ruler of all the world’s oceans and seas. Precise-running clockwork made for precise navigation; precise navigation made for maritime knowledge, control, and power.

In place of the oscillating beam balances that made the magic of his large clocks so spectacular to see, he substituted a temperature-controlled spiral mainspring, together with a fast-beating balance wheel that spun back and forth at the hitherto unprecedented rate of some 18,000 times an hour. He also had an automatic remontoir, which rewound the mainspring eight times a minute, keeping the tension constant, the beats unvarying. There was a downside, though: this watch needed oiling, and so, in an effort to reduce friction and keep the needed application of oil to a minimum, Harrison introduced, where possible, bearings made of diamond, one of the early instances of a jeweled escapement.

It remains a mystery just how, without the use of precision machine tools—the development of which will be central to the story that follows—Harrison was able to accomplish all this. Certainly, all those who have made watches since then have had to use machine tools to fashion the more delicate parts of the watches: the notion that such work could possibly be done by the hand of a 66-year-old John Harrison still beggars belief. But John Harrison’s clockworks enjoyed perhaps only three centuries’ worth of practical usefulness.

For precision to be a phenomenon that would entirely alter human society, it has to be expressed in a form that is duplicable; it has to be possible for the same precise artifact to be made again and again with comparative ease and at a reasonable frequency and cost.

It was only when precision was created for the many that precision as a concept began to have the profound impact on society as a whole that it does today. And the man who accomplished that single feat, creating something with great exactitude and making it not by hand but with a machine (moreover, with a machine specifically created to create it: a machine that makes machines, known today as a “machine tool,” which was, is, and will long remain an essential part of the precision story), was the 18th-century Englishman denounced for his supposed lunacy because of his passion for iron, the then-uniquely suitable metal from which all his remarkable new devices could be made.

Wilkinson is today rather little remembered. He is overshadowed quite comprehensively by his much-better-known colleague and customer, the Scotsman James Watt, whose early steam engines came into being, essentially, by way of John Wilkinson’s exceptional technical skills.

On January 27, 1774, John Wilkinson, whose local furnaces, all fired by coal, were producing a healthy twenty tons of good-quality iron a week, invented a technique for the manufacture of guns. The technique had an immediate cascade effect far more profound than any he ever imagined, and of greater long-term importance. Up until then, naval cannons were cast hollow, with the interior tube through which the powder and projectile were pushed and fired formed in the casting and then finished with a cutting tool.

The problem with this technique was that the cutting tool would naturally follow the passage of the tube, which may well not have been cast perfectly straight in the first place. This would then cause the finished and polished tube to have eccentricities, and for the inner wall of the cannon to have thin spots where the tool wandered off track.  And thin spots were dangerous—they meant explosions and bursting tubes and destroyed cannon and injuries to the sailors who manned the notoriously dangerous gun decks.

Then came John Wilkinson and his new idea. He decided that he would cast the iron cannon not hollow but solid. This, for a start, had the effect of guaranteeing the integrity of the iron itself—there were fewer parts that cooled early and came out with bubbles and  spongy sections (“honeycomb problems,” as they were called) for which hollow-cast cannon were then notorious.

The secret was in the boring of the cannon hole. Both ends of the operation, the part that did the boring and the part to be bored, had to be held in place, rigid and immovable, because to cut or polish something into dimensions that are fully precise, both tool and workpiece have to be clasped and clamped as tightly as possible to secure immobility.

Cannon after cannon tumbled from the mill, each accurate to the measurements the navy demanded, each one, once unbolted from the mill, identical to its predecessor, each one certain to be the same as the successor that would next be bolted onto it. The new system worked impeccably from the very start.

Yet what elevated Wilkinson's new method to the status of a world-changing invention came the following year, 1775, when he started to do serious business with James Watt.

The principle of a steam engine is familiar, and is based on the simple physical fact that when liquid water is heated to its boiling point it becomes a gas. Because the gas occupies some 1,700 times greater volume than the original water, it can be made to perform work.
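That 1,700-fold figure can be sanity-checked with the ideal-gas law. A minimal sketch, assuming steam behaves as an ideal gas and taking liquid water at roughly 1,000 kg/m³ (both approximations):

```python
# Sanity check on the ~1,700x expansion of liquid water into steam,
# treating the vapour as an ideal gas (an approximation).
R = 8.314      # gas constant, J/(mol*K)
T = 373.15     # boiling point of water at sea level, K
P = 101_325    # one atmosphere, Pa
M = 0.018015   # molar mass of water, kg/mol
RHO = 1000.0   # density of liquid water, kg/m^3 (room-temperature value)

molar_volume_liquid = M / RHO   # volume of one mole of liquid water
molar_volume_gas = R * T / P    # volume of one mole of vapour at 100 C
expansion = molar_volume_gas / molar_volume_liquid
print(round(expansion))         # 1700, matching the book's figure
```

The agreement with the quoted figure is close; using the slightly lower density of water at 100 °C, or real (non-ideal) steam tables, shifts the ratio by a few percent either way.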

Thomas Newcomen realized he could extract work from this by injecting cold water into the steam-filled cylinder, condensing the steam and bringing it back to 1/1,700 of its volume—creating, in essence, a vacuum, which enabled the pressure of the atmosphere to force the piston back down again. This downstroke could then lift the far end of the rocker beam and, in doing so, perform real work: the beam could lift floodwater, say, out of a waterlogged tin mine. Thus was born a very rudimentary kind of steam engine, almost useless for any application beyond pumping water. The Newcomen engine and its like remained in production for more than 70 years, its popularity beginning to lessen only in the mid-1760s, when James Watt showed that it could be markedly improved.

Watt realized that the central inefficiency of the engine he was examining was that the cooling water injected into the cylinder to condense the steam and produce the vacuum also managed to cool the cylinder itself. To keep the engine running efficiently, the cylinder needed to be kept as hot as possible at all times, so the cooling water should perhaps condense the steam not in the cylinder but in a separate vessel, keeping the vacuum in the main cylinder, which would thus retain the cylinder’s heat and allow it to take on steam once more. To make matters even more efficient, the fresh steam could be introduced at the top of the piston rather than the bottom, with stuffing of some sort placed and packed into the cylinder around the piston rod to prevent any steam from leaking out in the process.

These two improvements (the inclusion of a separate steam condenser and the changing of the inlet pipes to allow for the injection of new steam into the upper rather than the lower part of the main cylinder) changed Newcomen’s so-called fire-engine into a fully functioning steam-powered machine.

Once perfected, it was to be the central power source for almost all factories and foundries and transportation systems in Britain and around the world for the next century and more.

Yet billowing clouds of steam perpetually enveloped his engine in a damp, hot, opaque gray fog, and this incensed James Watt. Try as he might, do as he could, steam always seemed to be leaking in prodigious gushes from the engine’s enormous main cylinder. He tried blocking the leak with all kinds of devices and substances. The gap between the piston’s outer surface and the cylinder’s inner wall should, in theory, have been minimal, and more or less the same wherever it was measured. But because the cylinders were made of iron sheets hammered and forged into a circle, and their edges then sealed together, the gap actually varied enormously from place to place. In some places, piston and cylinder touched, causing friction and wear. In other places, as much as half an inch separated them, and each injection of steam was followed by an immediate eruption from the gap.

Watt tried tucking in pieces of linseed oil–soaked leather; stuffing the gap with a paste made from soaked paper and flour; hammering in corkboard shims, pieces of rubber, even dollops of half-dried horse dung.

By the purest accident, John Wilkinson asked for an engine to be built for him, to act as a bellows for one of his iron forges—and in an instant, he saw and recognized Watt’s steam-leaking problem, and in an equal instant, he knew he had the solution: he would apply his cannon-boring technique to the making of cylinders for steam engines.  Watt beamed with delight. Wilkinson had solved his problem, and the Industrial Revolution—we can say now what those two never imagined—could now formally begin.

And so came the number, the crucial number, the figure that is central to this story, that which appears at the head of this chapter and which will be refined in its exactitude in all the remaining parts of this story. This is the figure of 0.1—one-tenth of an inch. This was the tolerance to which John Wilkinson had ground out his first cylinder.  All of a sudden, there was an interest in tolerance, in the clearance by which one part was made to fit with or into another. This was something quite new, and it begins, essentially, with the delivery of that first machine on May 4, 1776.

The central functioning part of the steam engine was possessed of a mechanical tolerance never before either imagined or achieved, a tolerance of 0.1 inches.

Locks were a British obsession at the time. The social and legislative changes that were sweeping the country in the late eighteenth century were having the undesirable effect of dividing society quite brutally: while the landed aristocracy had for centuries protected itself in grand houses behind walls and parks and ha-has, and with resident staff to keep mischief at bay, the enriched beneficiaries of the new business climate were much more accessible to the persistent poor.

Envy was abroad. Robbery was frequent. Fear was in the air. Doors and windows needed to be bolted. Locks had to be made, and made well. A lock such as Mr. Marshall’s, pickable in 15 minutes by a skilled man, and by a desperate and hungry man maybe in 10, was clearly not good enough. Joseph Bramah decided he would design and make a better one. He did so in 1784, less than a year after picking the Marshall lock. His patented design made it almost impossible for a burglar to divine what lay beyond the keyhole, inside the workings, even with a wax-covered key blank, the tool criminals most favored for working out the position of the various levers and tumblers inside a lock.

Maudslay solved Bramah’s supply problems in short order by creating machines to make the locks. He built a whole family of machine tools, in fact, that would each make, or help to make, the various parts of the fantastically complicated locks Joseph Bramah had designed. They could make the parts fast and well and cheaply, without the errors that handcrafting and hand tools inevitably cause. The machines that Maudslay made would, in other words, make the necessary parts with precision.

Metal pieces can be machined into a range of shapes and sizes and configurations, and provided that the settings of the leadscrew and the slide rest are the same for every procedure, and the lathe operator can record these positions and make certain they are the same, time after time, then every machined piece will be the same—will look the same, measure the same, weigh the same (if of the same density of metal) as every other. The pieces are all replicable. They are, crucially, interchangeable. If the machined pieces are to be the parts of a further machine—if they are gearwheels, say, or triggers, or handgrips, or barrels—then they will be interchangeable parts, the ultimate cornerstone components of modern manufacturing. Of equally fundamental importance, a lathe so abundantly equipped as Maudslay’s was also able to make that most essential component of the industrialized world, the screw.

Screws were made to a standard tolerance of one ten-thousandth of an inch.

A slide rest allowed for the making of myriad items, from door hinges to jet engines to cylinder blocks, pistons, and the deadly plutonium cores of atomic bombs.

Maudslay next created, in truly massive numbers, a vital component for British sailing ships. He built the wondrously complicated machines that would, for the next 150 years, make ships’ pulley blocks, the essential parts of a sailing ship’s rigging that helped give the Royal Navy its ability to travel, police, and, for a while, rule the world’s oceans.  At the time, sails were large pieces of canvas suspended, supported, and controlled by way of endless miles of rigging, of stays and yards and shrouds and footropes, most of which had to pass through systems of tough wooden pulleys that were known to navy men simply as blocks—pulley blocks, and beyond the maritime world as block and tackle.

A large ship might have as many as 1400 pulley blocks of varying types and sizes depending on the task required. The lifting of a very heavy object such as an anchor might need an arrangement of six blocks, each with three sheaves, or pulleys, and with a rope passing through all six such that a single sailor might exert a pull of only a few easy pounds in order to lift an anchor weighing half a ton.
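For an ideal, frictionless tackle, the pull required is simply the load divided by the number of rope parts supporting it. A small sketch of that arithmetic (the half-ton load and the six-blocks-of-three-sheaves arrangement are from the text; frictionless sheaves and 18 supporting rope parts are simplifying assumptions):

```python
def effort_lb(load_lb: float, supporting_parts: int) -> float:
    """Ideal block and tackle: pull = load / number of supporting rope parts."""
    return load_lb / supporting_parts

# Six blocks of three sheaves each can give up to 18 supporting rope parts.
parts = 6 * 3
pull = effort_lb(1120, parts)  # half a (long) ton = 1,120 lb
print(round(pull))             # about 62 lb of pull, before friction losses
```

Real wooden sheaves lose a good deal to friction, so the actual pull would be higher—but still a small fraction of the half-ton load.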

Blocks for use on a ship are traditionally exceptionally strong, having to endure years of pounding water, freezing winds, tropical humidity, searing doldrums heat, salt spray, heavy duties, and careless handling by brutish seamen. Back in sailing ship days, they were made principally of elm, with iron plates bolted onto their sides, iron hooks securely attached to their upper and lower ends, and with their sheaves, or pulleys, sandwiched between their cheeks, and around which ropes would be threaded. The sheaves themselves were often made of Lignum vitae, a dense hardwood from trees native to South America.

What principally concerned the admirals was not so much the building of enough ships but the supply of the vital blocks that would allow the sailing ships to sail. The Admiralty needed 130,000 of them every year. The complexity of their construction meant that they could be fashioned only by hand, by scores of artisanal woodworkers in and around southern England who were notoriously unreliable.

The Block Mills still stand as testament to many things, most famously to the sheer perfection of each and every one of the hand-built iron machines housed inside. So well were they made—they were masterpieces, most modern engineers agree—that most were still working a century and a half later; the Royal Navy made its last pulley blocks in 1965.

The Block Mills were the first factory to be run entirely by steam power.  The next invention that mattered depended on flatness: a surface without curvature, indentation, or protuberance. It involved the creation of a base from which all precise measurement and manufacture could be originated. For, as Maudslay realized, a machine tool can make an accurate machine only if the surface on which the tool is mounted is perfectly flat, perfectly plane, exactly level, its geometry entirely exact.

A bench micrometer would be able to measure the actual dimension of a physical object to make sure that the components of the machines they were constructing would all fit together, with exact tolerances, and be precise for each machine and accurate to the design standard.

The micrometer that performed all these measurements turned out to be extremely accurate and consistent: this invention of his could measure down to one one-thousandth of an inch and, according to some, maybe even one ten-thousandth of an inch: to a tolerance of 0.0001 inch.

To any schoolchild today, Eli Whitney means just one thing: the cotton gin. To any informed engineer, he signifies something very different: confidence man, trickster, fraud, charlatan, a reputation that stems almost entirely from his association with the gun trade, with precision manufacturing, and with the promise of being able to deliver weapons assembled from interchangeable parts.  When Whitney won the commission and signed a government contract to do so in 1798, he knew nothing about muskets and even less about their components: he won the order largely because of his Yale connections and the old alumni network that, even then, flourished in the corridors of power in Washington, DC.

It was John Hall who succeeded in making precision guns. At every stage of the work, from the forging of the barrel to the cutting of the rifling, his 63 gauges (more than any engineer before him had used) were set to work to ensure, as best he could, that every part of every gun was exactly the same as every other—and that all were made to far stricter tolerances than hitherto: for a lock merely to work required a tolerance of maybe a fifth of a millimeter; to ensure that it not only worked but was infinitely interchangeable, he needed the pieces machined to a fiftieth of a millimeter.

Precise shoes became possible when a lathe could turn a shapeless block of wood into a foot-shaped last of specific dimensions, repeated time and time again. These shoemakers’ lasts were of exact sizes: seven inches long, nine, and so on. Before precise shoes were made, they were offered up in barrels, and customers pulled them out at random, trying to find a shoe that more or less fit.

Oliver Evans was making flour-milling machinery; Isaac Singer introduced precision into the manufacturing of sewing machines; Cyrus McCormick was creating reapers, mowers, and, later, combine harvesters; and Albert Pope was making bicycles for the masses.

Joseph Whitworth was an absolute champion of accuracy, an uncompromising devotee of precision, and the creator of a device, unprecedented at the time, that could truly measure to an unimaginable one-millionth of an inch.  Using his superb mechanical skills, in 1859 he created a micrometer that allowed for one complete turn of the micrometer wheel to advance the screw not by 1/20 of an inch, but by 1/4,000 of an inch, a truly tiny amount.

Whitworth then incised 250 divisions on the turning wheel’s circumference, which meant that the operator, by turning the wheel by just one division, could advance or retard the screw by 1/1,000,000 of an inch. Provided the ends of the item being measured were as plane as the plates on the micrometer, opening the gap by that millionth of an inch would make the difference between the item being held firmly and falling under the influence of gravity.

Now metal pieces could be made and measured to a tolerance of one-millionth of an inch.
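The arithmetic behind that millionth of an inch is simply the 1/4,000-inch screw advance divided across the 250 divisions; a short check using exact rational arithmetic:

```python
from fractions import Fraction

inch_per_turn = Fraction(1, 4000)  # screw advance per full turn of the wheel
divisions = 250                    # marks incised on the wheel's circumference
inch_per_division = inch_per_turn / divisions
print(inch_per_division)           # 1/1000000
```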

Until Whitworth, each screw and nut and bolt was unique to itself, and the chance that any one-tenth-inch screw, say, might fit any randomly chosen one-tenth-inch nut was slender at best.

With the Model T, Henry Ford changed everything. From the start, he was insistent that no metal filing ever be done in his motor-making factories, because all the parts, components, and pieces he used for the machine would come to him already precisely finished, and to tolerances of cruelly exacting standards such that each would fit exactly without the need for even the most delicate of further adjustment. Once that aspect of his manufacturing system was firmly established, he created a whole new means of assembling the bits and pieces into cars.  He demanded a standard of precision for his components that had seldom been either known or achieved before, and he now married this standard to a new system of manufacture seldom tried before.

The Model T had fewer than 100 parts. A modern car has more than 30,000.

Within Rolls-Royce, it may seem as though the worship of the precise was entirely central to the making of these enormously comfortable, stylish, swift, and comprehensively memorable cars. In fact, it was far more crucial to the making of the less costly, less complex, less remembered machines that poured from the Ford plants around the world. And for a simple reason: the production lines required a limitless supply of parts that were exactly interchangeable.

If one happened not to be so exact, and if an assembly-line worker tried to fit this inexact and imprecise component into a passing workpiece and it refused to fit and the worker tried to make it fit, and wrestled with it—then, just like Charlie Chaplin’s assembly-line worker in Modern Times or, less amusingly, one in Fritz Lang’s Metropolis, the line would slow and falter and eventually stop, and workers for yards around would find their work disrupted, and parts being fed into the system would create unwieldy piles, and the supply chain would clog, and the entire production would slow and falter and maybe even grind, quite literally, to a painful halt. Precision, in other words, is an absolute essential for keeping the unforgiving tyranny of a production line going.

Henry Ford had been helped in his aim of making it so by using one component (and then buying the firm that made it), a component whose creation, by a Swedish man of great modesty, turned out to be of profoundly lasting importance to the world of precision. The Swede was Carl Edvard Johansson, popularly and proudly known by every knowledgeable Swede today as the world’s Master of Measurement. He was the inventor of the set of precise pieces of perfectly flat, hardened steel known to this day as gauge blocks, slip gauges, or, to his honor and in his memory, as Johansson gauges, or quite simply, Jo blocks.

His idea was to create a set of gauge blocks that, held together in combination, could in theory measure any needed dimension. He calculated that the minimum number of blocks needed was 103, made in certain carefully specified sizes. Arranged in three series, they made it possible to take some 20,000 measurements in increments of one one-thousandth of a millimeter by laying two or more blocks together. His 103-piece combination gauge block set has since, directly and indirectly, taught engineers, foremen, and mechanics to treat tools with care, and at the same time given them familiarity with dimensions of thousandths and ten-thousandths of a millimeter.
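The combinatorics of Johansson’s idea can be illustrated in a few lines. The set composition below is the commonly cited modern 103-piece metric layout (an assumption for illustration, not taken from the text; Johansson’s own series differed in detail), and the hypothetical `stack()` helper mimics the machinist’s habit of clearing the finest digit first:

```python
from fractions import Fraction

# Commonly cited composition of a modern 103-piece metric set (an assumption):
#    1 block of 1.005 mm
#   49 blocks: 1.01 .. 1.49 mm in 0.01 mm steps
#   49 blocks: 0.5 .. 24.5 mm in 0.5 mm steps
#    4 blocks: 25, 50, 75, 100 mm
BLOCKS = ({Fraction("1.005")}
          | {Fraction(100 + i, 100) for i in range(1, 50)}
          | {Fraction(i, 2) for i in range(1, 50)}
          | {Fraction(25 * i) for i in range(1, 5)})

def stack(target_mm: str) -> list:
    """Pick blocks summing exactly to the target, clearing the finest digit
    first, the way a machinist wrings a stack together. A sketch only: it
    assumes a modest target (a few millimetres up to ~250 mm), each size
    used at most once."""
    r = Fraction(target_mm)
    picked = []
    if (r * 1000) % 10 == 5:      # a thousandths digit needs the 1.005 block
        picked.append(Fraction("1.005")); r -= picked[-1]
    hundredths = (r * 100) % 50   # cleared by one block from the 1.01-1.49 run
    if hundredths:
        picked.append(1 + hundredths / 100); r -= picked[-1]
    half_mm = r % 25              # cleared by one block from the 0.5-24.5 run
    if half_mm:
        picked.append(half_mm); r -= half_mm
    while r:                      # finish with the 25/50/75/100 blocks
        big = min(r, Fraction(100))
        picked.append(big); r -= big
    assert all(b in BLOCKS for b in picked) and sum(picked) == Fraction(target_mm)
    return picked

print([float(b) for b in stack("36.715")])  # [1.005, 1.21, 9.5, 25.0]
```

For a target of 36.715 mm the sketch picks 1.005 + 1.21 + 9.5 + 25, four blocks wrung together, and the exact rational arithmetic confirms the sum.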

Gauge blocks first came to the United States in 1908.  Ford’s cars were precise only relative to themselves: every manufactured piece fit impeccably because it was interchangeable within Ford’s own system. But once an absolutely impeccably manufactured, gauge-block-confirmed piece from another company (a ball bearing from SKF, say) was introduced into the Ford system, its perfection could trump Ford’s, and Ford would be wrong—ever so slightly maybe, but wrong nonetheless.

Gauge blocks made after the Great War achieved accuracies of up to one-millionth of an inch.

Piston engines have hundreds of parts jerking to and fro, and they cannot be made much more powerful without becoming impossibly complicated.  Modern jet engines, by contrast, can produce more than 100,000 horsepower, yet essentially they have only a single moving part: a spindle, or rotor, which is induced to spin and, in doing so, causes many pieces of high-precision metal to spin with it.

All that ensures they work as well as they do are the rare and costly materials from which they are made, the protection of the integrity of the pieces machined from these materials, and the superfine tolerances to which every part of them is manufactured.  Since any increase in engine power, and thus aircraft speed, would lead to heavier piston engines, perhaps too heavy for an aircraft to carry, a new kind of engine was invented: the gas turbine.  A crucial element in any combustion engine is air—air is drawn into the engine, mixed with fuel, and then burned or exploded. The thermal energy from that event is turned into kinetic energy, and the engine’s moving parts are powered. But the amount of air sucked into a piston engine is limited by the size of its cylinders. In a gas turbine, there is almost no limit: a gigantic fan at the opening of such an engine can swallow vastly more air than can be taken into a piston engine.

Gas turbines were already beginning to power ships, to generate electricity, to run factories. The simplicity of the basic idea was immensely attractive. Air was drawn in through a cavernous doorway at the front of the engine and immediately compressed, and made hot in the process, and was then mixed with fuel, and ignited. It was the resulting ferociously hot, tightly compressed, and controlled explosion that then drove the turbine, which spun its blades and then performed two functions. It used some of its power to drive the aforementioned compressor, which sucked in and squeezed the air, but it then had a very considerable fraction of its power left, and so was available to do other things, such as turn the propeller of a ship, or turn a generator of electricity, or turn the driving wheels of a railway locomotive (an application that never caught on; the problems were too many), or provide the power for a thousand machines in a factory and keep them running, tirelessly.

Britain’s first jet aircraft flew in 1941, and only in 1944 did the public learn about it.  Inside a jet engine, everything is a diabolic labyrinth, a maze of fans and pipes and rotors and discs and tubes and sensors and a Turk’s head of wires of such confusion that it doesn’t seem possible that any metal thing inside it could possibly even move without striking and cutting and dismembering all the other metal things that are crammed together in such dangerously interfering proximity. Yet work and move a jet engine most certainly does, with every bit of it impressively engineered to do so, time and again, and under the harshest and fiercest of working conditions.

There are scores of blades of various sizes in a modern jet engine, whirling this way and that and performing various tasks that help push the hundreds of tons of airplane up and through the sky. But the blades of the high-pressure turbines represent the singularly truest marvel of engineering achievement—and this is primarily because the blades themselves, rotating at incredible speeds and each one of them generating during its maximum operation as much power as a Formula One racing car, operate in a stream of gases that are far hotter than the melting point of the metal from which the blades were made. What stopped these blades from melting?

It turns out to be possible to cool the blades by drilling hundreds of tiny holes in each blade, and by forming inside each one a network of tiny cooling tunnels, all of them manufactured at a size, and to such minuscule tolerances, as were quite unthinkable only a few years ago.

The first blades that Whittle made were of steel, which somewhat limited the performance of his early prototypes, since steel loses its structural integrity at temperatures higher than about 500 degrees Celsius. But alloys were soon found that made matters much easier, after which blades were constructed from these new metal compounds. They did not run the risk of melting, because the temperatures at which they operated were on the order of a thousand degrees, and the special nickel-and-chromium alloy from which they were made, known as Nimonic, remained solid and secure and stiff up to 1,400 degrees Celsius (2,550 degrees Fahrenheit).

The next generation of engines required that the gas mixture roaring out from the combustion chamber be heated to around 1,600 degrees Celsius, and even the finest of the alloys then used melted at around 1,455 degrees Celsius. The metals tended to lose their strength and become soft and vulnerable to all kinds of shape changes and expansions at even lower temperatures. In fact, extended thermal pummeling of the blades at anything above 1,300 degrees Celsius was regarded by early researchers as just too difficult and risky.

Most of that air bypasses the engine (for reasons that are beyond the scope of this chapter), but a substantial portion of it is sent through a witheringly complex maze of blades, some whirling, some bolted and static, that make up the front and relatively cool end of a jet engine and that compress the air, by as much as 50 times. The one ton of air taken each second by the fan, and which would in normal circumstances entirely fill the space equivalent of a squash court, is squeezed to a point where it could fit into a decent-size suitcase. It is dense, and it is hot, and it is ready for high drama. For very nearly all this compressed air is directed straight into the combustion chamber, where it mixes with sprayed kerosene, is ignited by an array of electronic matches, as it were, and explodes directly into the whirling wheel of turbine blades. These blades (more than ninety of them in a modern jet engine, and attached to the outer edge of a disc rotating at great speed) are the first port of call for the air before it passes through the rest of the turbine and, joining the bypassed cool air from the fan, gushes wildly out of the rear of the engine and pushes the plane forward. “Nearly all” is the key. Some of this cool air, the Rolls-Royce engineers realized, could actually be diverted before it reached the combustion chamber, and could be fed into tubes in the disc onto which the blades were bolted. From there it could be directed into a branching network of channels or tunnels that had been machined into the interior of the blade itself. And now that the blade was filled with cool air—cool only by comparison; the simple act of compressing it made it quite hot, about 650 degrees Celsius, but still cooler by a thousand degrees than the post–combustion chamber fuel-air mixture. 
To make use of this cool air, scores of unimaginably tiny holes were then drilled into the blade surface, drilled with great precision and delicacy and in configurations that had been dictated by the computers, and drilled down through the blade alloy until each one of them reached just into the cool-air-filled tunnels—thus immediately allowing the cool air within to escape or seep or flow or thrust outward, and onto the gleaming hot surface of the blade.

It is here that the awesome computational power available since the late 1960s comes into its own, becomes so crucially useful. Aside from the complex geometry of the hundreds of tiny pinholes, there is the fact that the blades are grown from, incredibly, a single crystal of metallic nickel alloy. This makes them extremely strong—which they need to be, as in their high-temperature whirlings they are subjected to centrifugal forces equivalent to the weight of a double-decker London bus. Very basically, the molten metal (an alloy of nickel, aluminum, chromium, tantalum, titanium, and five other rare-earth elements that Rolls-Royce coyly refuses to discuss) is poured into a mold that has at its base a little, curiously twisted three-turn tube resembling nothing so much as a pig’s tail; by the time the cooling metal has passed through it, all its molecules are lined up evenly.

It has become a single crystal of metal, and thus, its eventual resistance to all the physical problems that normally plague metal pieces like this is mightily enhanced. It is very much stronger—which it needs to be, considering the enormous centrifugal forces.

Electrical discharge machining, or EDM, as it is more generally known, employs just a wire and a spark, both of them tiny, the whole process directed by computer and inspected by humans, using powerful microscopes, as it is happening.  The more complex the engines, the more holes need to be drilled into the various surfaces of a single blade: in a Trent XWB engine, there are some 600, arranged in bewildering geometries to ensure that the blade remains stiff, solid, and as cool as possible. Their integrity owes much to the geometry of the cooling holes that are being drilled, which is measured and computed and checked by skilled human beings. No tolerance whatsoever can be accorded to any errors that might creep into the manufacturing process, for a failure in this part of a jet engine can turn into a swiftly accelerating disaster.

As the tolerances shrink still further, and limits are reached that even the most well-honed human skills cannot match, automation has to take over. The Advanced Blade Casting Facility can perform all these tasks (from the injection of the losable wax to the growing of the single-crystal alloys to the drilling of the cooling holes) with no more than a handful of skilled men and women. It can turn out 100,000 blades a year, all free of errors.

But failure was still possible. The fate of passengers depended on the performance of one tiny metal pipe no more than five centimeters long and three-quarters of a centimeter in diameter, into which someone at a factory in the northern English Midlands had bored a hole that was fractionally out of true. The engine part in question is called an oil feed stub pipe, and though there are many small steel tubes wandering snakelike through any engine, this particular one, a slightly wider stub at the end of a longer but narrower pipe, was positioned in the red-hot air chamber between the high- and intermediate-pressure turbine discs. It was designed to send oil down to the bearings on the rotor that carried the fast-spinning disc. Because the drill bit that did the work was misaligned, the tube was machined improperly: along one small portion of its circumference, it was about half a millimeter too thin.

Metal fatigue is what caused the engine to fail. The aircraft had spent 8,500 hours aloft, and had performed 1,800 takeoff and landing cycles. It is these last that punish the mechanical parts of a plane: the landing gear, the flaps, the brakes, and the internal components of the jet engines. For, every time there is a truly fast or steep takeoff, or every time there is a hard landing, these parts are put under stress that is momentarily greater than the running stresses of temperature and pressure for which the innards of a jet engine are notorious.

Heisenberg, in helping in the 1920s to father the concepts of quantum mechanics, made discoveries and presented calculations that first suggested this might be true: that in dealing with the tiniest of particles, the tiniest of tolerances, the normal rules of precise measurement simply cease to apply. At near-and subatomic levels, solidity becomes merely a chimera; matter comes packaged as either waves or particles that are by themselves both indistinguishable and immeasurable and, even to the greatest talents, only vaguely comprehensible.

In the making of the smallest parts for today’s great jet engines, we are reaching down nowhere near the limits that so exercise the minds of quantum mechanicians. Yet we have reached a point in the story where we begin to notice our own possible limitations and, by extension and extrapolation, also the possible end point of our search for perfection.

An overlooked measurement error on the mirror amounting to one-fiftieth the thickness of a human hair managed to render most of the images beamed down from Hubble fuzzy and almost wholly useless.

Chapter 9 (TOLERANCE: 0.000 000 000 000 000 000 000 000 000 000 000 01, i.e., 35 decimal places)

Here we come to the culmination of precision’s quarter-millennium evolutionary journey. Up until this moment, almost all the devices and creations that required a degree of precision in their making had been made of metal, and performed their various functions through physical movements of one kind or another. Pistons rose and fell; locks opened and closed; rifles fired; sewing machines secured pieces of fabric and created hems and selvedges; bicycles wobbled along lanes; cars ran along highways; ball bearings spun and whirled; trains snorted out of tunnels; aircraft flew through the skies; telescopes deployed; clocks ticked or hummed, and their hands moved ever forward, never back, one precise second at a time. Then came the computer, ushering in an immobile and silent universe, one where electrons and protons and neutrons have replaced iron and oil and bearings and lubricants and trunnions and the paradigm-altering idea of interchangeable parts.

Precision had by now reached a degree of exactitude that would be of relevance and use only at the near-atomic level.

Fab 42 is where Intel makes electronic microprocessor chips, the operating brains of almost all the world’s computers. Its enormous ASML devices allow the firm to manufacture these chips, and to place transistors on them in huge numbers and to the almost unreal level of precision and minuteness of scale that today’s computer industry, pressing for ever-speedier and more powerful computers, endlessly demands.

Gordon Moore, one of the founders of Intel, is most probably the man to blame for this trend toward ultraprecision in the electronics world. He made an immense fortune by devising the means to make ever-smaller transistors and to cram millions, then billions of them onto a single microprocessing chip. There are now more transistors at work on this planet (some 15 quintillion, or 15,000,000,000,000,000,000) than there are leaves on all the trees in the world. In 2015, the four major chip-making firms were making 14 trillion transistors every single second. And the sizes of the individual transistors are now pushing down toward the atomic scale.

When the Broadwell family of chips was created in 2016, the node size was down to a previously inconceivable fourteen-billionths of a meter, or 14 nanometers (the size of the smallest viruses), and each wafer contained no fewer than seven billion transistors. The Skylake chips Intel was making at the time of this writing have transistors some sixty times smaller than the wavelength of visible light, and so are literally invisible.

It takes three months to complete a microprocessing chip, starting with the growing of a 400-pound, very fragile, cylindrical boule of pure smelted silicon, which fine-wire saws will cut into dinner plate–size wafers, each an exact two-thirds of a millimeter thick. Chemicals and polishing machines will then smooth the upper surface of each wafer to a mirror finish, after which the polished discs are loaded into ASML machines for the long and tedious process toward becoming operational computer chips. Each wafer will eventually be cut along the lines of a grid that will extract a thousand chip dice from it—and each single die, an exactly cut fragment of the wafer, will eventually hold the billions of transistors that form the non-beating heart of every computer, cellphone, video game, navigation system, and calculator on modern Earth, and every satellite and space vehicle above and beyond it.

What happens to the wafers before the chips are cut out of them demands an almost unimaginable degree of miniaturization. Patterns of newly designed transistor arrays are drawn with immense care onto transparent fused silica masks, and then lasers are fired through these masks and the beams directed through arrays of lenses or bounced off long reaches of mirrors, eventually to imprint a highly shrunken version of the patterns onto an exact spot on the gridded wafer, so that the pattern is reproduced, in tiny exactitude, time and time again. After the first pass by the laser light, the wafer is removed, is carefully washed and dried, and then is brought back to the machine, whence the process of having another submicroscopic pattern imprinted on it by a laser is repeated, and then again and again, until thirty, forty, as many as sixty infinitesimally thin layers of patterns (each layer and each tiny piece of each layer a complex array of electronic circuitry) are engraved, one on top of the other.
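The "thousand chip dice" figure implies a die size that simple geometry can check. The 300 mm wafer diameter is an assumption (the text says only "dinner plate–size"), and edge losses are ignored:

```python
import math

# Rough check of the "thousand dice per wafer" figure above.
wafer_diameter_mm = 300                             # assumed "dinner plate-size" wafer
wafer_area = math.pi * (wafer_diameter_mm / 2)**2   # ~70,700 mm^2

dice_per_wafer = 1000
implied_die_area = wafer_area / dice_per_wafer      # area each die would occupy

print(f"{implied_die_area:.0f} mm^2 per die")       # ~71 mm^2, about 8.4 mm on a side
```

A die of roughly 70 square millimeters, a bit over 8 mm on a side, is a believable size for a microprocessor, so the round numbers in the text hang together.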

Rooms within the ASML facility in Holland are clean to the brutally restrictive demands of ISO class 1, which permits only 10 particles of just one-tenth of a micron per cubic meter, and no particles of any size larger than that. A human being existing in a normal environment swims in a miasma of air and vapor that is five million times less clean.
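The "five million times" comparison can be checked against the ISO 14644-1 class limits. Treating ordinary room air as roughly ISO class 9 is the assumption here, and the two limits use different particle-size cutoffs, so this is an order-of-magnitude check only:

```python
# Order-of-magnitude check of the cleanliness comparison above.
iso_class_1 = 10            # particles >= 0.1 micron per m^3 (ISO 14644-1 class 1)
iso_class_9 = 35_200_000    # particles >= 0.5 micron per m^3 (roughly ordinary room air)

ratio = iso_class_9 / iso_class_1
print(f"room air is ~{ratio:.1e} times dirtier")   # ~3.5e6, the text's "five million"
```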

The test masses on the LIGO devices in Washington State and Louisiana are so exact in their making that the light reflected by them can be measured to one ten-thousandth of the diameter of a proton.

Consider Alpha Centauri A, which lies 4.3 light-years away. In miles, 4.3 light-years is some 26 trillion miles, or, in full, 26,000,000,000,000 miles. The cylindrical test masses on LIGO make it possible to measure that vast distance to within the width of a single human hair.
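The hair-across-4.3-light-years claim is a way of expressing LIGO's fractional (strain) sensitivity, and the arithmetic holds up; the hair width and unit conversions are assumed round values:

```python
# Express "a hair's width over the distance to Alpha Centauri A" as a strain.
hair_width_m = 1e-4                  # ~100 microns, a typical human hair
meters_per_ly = 9.461e15
miles_per_ly = 5.879e12

distance_m = 4.3 * meters_per_ly     # ~4.1e16 m
distance_miles = 4.3 * miles_per_ly  # ~2.5e13, the text's "26 trillion miles," roughly

strain = hair_width_m / distance_m
print(f"{strain:.1e}")               # ~2.5e-21
```

That ratio, a few parts in 10^21, is the same order as the strain LIGO actually detects, which is what the hair-and-starlight comparison is getting at.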

 

Posted in Infrastructure & Collapse, Jobs and Skills, Life After Fossil Fuels, Manufacturing & Industrial Heat | Comments Off on Life After Fossil Fuels: manufacturing will be less precise