U.S. Army new jobs: quell social unrest from climate change, help get arctic oil

Of all the branches of government, the military is the most on top of climate change, peak oil, pandemics, power grid failure, and other disasters. I suppose that shouldn't be surprising: it's their job to defend the U.S. against threats.

What I found interesting was that, given the coming threats, the military is proposing new job opportunities for itself in addition to fighting wars abroad. They anticipate that disorder from pandemics, climate change, financial crashes, and more may increasingly require them to be here in the U.S. helping. The Army also proposes to enable and defend Arctic hydrocarbon resources, which climate change may make more accessible.

This study examines the implications of climate change over the next 50 years for the United States Army, assuming that IPCC RCP 4.5 is our most likely future.

Related: you might want to read Nafeez Ahmed’s take on this report here: U.S. Military Could Collapse Within 20 Years Due to Climate Change, Report Commissioned By Pentagon Says. The report says a combination of global starvation, war, disease, drought, and a fragile power grid could have cascading, devastating effects.

Alice Friedemann www.energyskeptic.com, author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, 2021, Springer; “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer; “Barriers to Making Algal Biofuels”; and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

***

Brosig M, Frawley CP, Hill A, et al (2019) Implications of climate change for the U.S. army. U.S. Army War College.  52 pages

Sea level rise, changes in water and food security, and more frequent extreme weather events are likely to result in the migration of large segments of the population. Rising seas will displace tens (if not hundreds) of millions of people, creating massive, enduring instability. This migration will be most pronounced in those regions where climate vulnerability is exacerbated by weak institutions and governance and underdeveloped civil society. Recent history has shown that mass human migrations can result in increased propensity for conflict and turmoil as new populations intermingle with and compete against established populations. More frequent extreme weather events will also increase demand for military humanitarian assistance.

Salt water intrusion into coastal areas and changing weather patterns will also compromise or eliminate fresh water supplies in many parts of the world. Additionally, warmer weather increases hydration requirements. This means that in expeditionary warfare, the Army will need to supply itself with more water. This significant logistical burden will be exacerbated on a future battlefield that requires constant movement due to the ubiquity of adversarial sensors and their deep strike capabilities.

My caption: New jobs for the military

A warming trend will also increase the range of insects that are vectors of infectious tropical diseases. This, coupled with large-scale human migration from tropical nations, will increase the spread of infectious disease. The Army has tremendous logistical capabilities, unique in the world, for working in austere or unsafe environments. In the event of a significant infectious disease outbreak (domestic or international), the Army is likely to be called upon to assist in the response and containment. The report proposes that the Army work closely with the CDC on relief plans.

As the electorate becomes more concerned about climate change, it follows that elected officials will, as well. This may result in significant restrictions on military activities (in peacetime) that produce carbon emissions. The Department of Defense (DoD) does not currently possess an environmentally conscious mindset. Political and social pressure will eventually force the military to mitigate its environmental impact in both training and wartime. Implementation of these changes will be costly in effort, time and money.

All of these plans require energy; here are the ones that are directly energy-related:

In light of these findings, the military must consider changes in doctrine, organization, equipping, and training to anticipate changing environmental requirements. Lagging behind public and political demands for energy efficiency and minimal environmental footprint will significantly hamstring the Department’s efforts to face national security challenges. The Department will struggle to maintain its positive public image and that will impact the military’s ability to receive the required funding to face the growing number of security challenges.

[My comment: In a sly way, this study seems to acknowledge peak oil, though it’s stated as if the cause for lack of fuel will be the public’s awareness of climate change: “Problem: potential disruptions to readiness due to restrictions on fuel use”]

The decrease in Arctic sea ice and associated sea level rise will bring conflicting claims to newly accessible natural resources. It will also introduce a new theater of direct military contact between an increasingly belligerent Russia and other Arctic nations, including the U.S. Yet the opening of the Arctic will also increase commercial opportunities. Whether due to increased commercial shipping traffic or expanded opportunities for hydrocarbon extraction, increased economic activity will drive a requirement for increased military expenditures specific to that region. The study recommends training and equipment to conduct future Arctic operations.

Power grid vulnerabilities: improve grid near military installations and fund internal power generation from solar/battery farms and small nuclear reactors.

The Arctic

According to the Intergovernmental Panel on Climate Change (IPCC), since satellite monitoring of the Arctic began in 1979, the Arctic ice extent has decreased at a rate of 3.5–4.1% per decade (“Climate Change 2014 Synthesis Report.” Intergovernmental Panel on Climate Change. 2015. http://ipcc.ch/report/ar5/syr/)

According to a 2008 U.S. Geological Survey, the Arctic likely holds approximately one quarter of the world’s undiscovered hydrocarbon reserves, with 20% of them potentially in U.S. territory.

Since territories aren’t well defined, this is mainly a Navy and Air Force issue; however, the Army will be tasked with wide-area security and reconnaissance roles as part of any joint efforts to secure Arctic interests.

Russia has embarked on a rapid build-up in the Arctic, including expensive refurbishment of Soviet-era Arctic bases. Russia’s current Arctic plans include the opening of ten search and rescue stations, 16 deep-water ports, 13 airfields, and ten air defense sites. These developments create not only security outposts for Russia but also threats to the U.S. mainland. Russia’s recent development of KH-101/102 air-launched cruise missiles and SSC-8 ground-launched cruise missiles potentially puts much of the United States at risk from low-altitude, radar-evading, nuclear-capable missiles.

POWER GRID STRESS

The power grid that serves the United States is aging and continues to operate without a coordinated and significant infrastructure investment. Vulnerabilities exist in electricity-generating power plants, electric transmission infrastructure, and distribution system components. Power transformers average over 40 years of age and 70 percent of transmission lines are 25 years or older. The U.S. national power grid is susceptible to coordinated cyber or physical attacks; electromagnetic pulse (EMP) attacks; space weather; and other natural events, to include the stressors of a changing climate (“Transmission & Distribution Infrastructure: A Harris Williams & Co. White Paper.” Harris Williams & Co. 2014.)

If the power grid infrastructure collapsed:

  • Loss of perishable foods and medications
  • Loss of water and wastewater distribution systems
  • Loss of heating/air conditioning and electrical lighting systems
  • Loss of computer, telephone, and communications systems (including airline flights, satellite networks and GPS services)
  • Loss of public transportation systems
  • Loss of fuel distribution systems and fuel pipelines
  • Loss of all electrical systems that do not have back-up power

There are 16 critical infrastructure sectors that would be affected by a blackout: chemical, commercial facilities, communications, critical manufacturing, dams, defense industrial base, emergency services, energy, financial services, food and agriculture, government facilities, healthcare and public health, information technology, nuclear reactors / materials / waste, transportation systems, water and wastewater systems.

The Congressional Electro-Magnetic Pulse (EMP) Commission, in 2008, estimated it would cost $2 billion to harden just the grid’s critical nodes. The Task Force on National and Homeland Security calculates an additional $10 to $30 billion and many years necessary for a complete grid overhaul. The EMP Commission further cited that some of the very improvements of network interconnectedness created through the updated Supervisory Control and Data Acquisition (SCADA) network, which control power distribution around the country, introduced additional weaknesses to cyber-attack.

Department of Defense installations are 99 percent reliant on the U.S. power grid for electrical power generation due to the decommissioning of autonomous power generation capability as a budgetary cost-saving measure over the last two decades.

Global reductions in demand for hydrocarbons means that gasoline, diesel, and jet fuel should become less expensive. On the other hand, reduced demand tends to reduce incentives to explore potential oil fields or build new refining facilities. Much of the U.S.’s domestic oil extraction is unprofitable at oil prices below $30 a barrel. Technological advances tend to push this number lower, but exhaustion of oil fields tends to push the number higher. In all scenarios, global declines in oil consumption increase the sensitivity of oil markets to the choices of large consumers like the U.S. DoD.

The automated, A.I.-enhanced force of the Army’s future is one that runs on electricity, not jet fuel (JP-8). More efficient or resilient production of electricity through micro-nuclear power generation or improved solar arrays can fundamentally alter the mobility and the logistical challenges of a mechanized force. Light, quick-charging batteries (super-capacitors) have tremendous value in such a force; so does the wireless transmission of electrical current.

[many pages on climate change]

Then a request for $100 million for fighting in Middle Eastern deserts: “The U.S. Army is precipitously close to mission failure concerning hydration of the force in a contested arid environment. The experience and best practices of the last 17 years of conflict in Afghanistan, Iraq, Syria, and Africa rely heavily on logistics force structures to support the warfighter with water, mostly procured through contracted means of bottled water, local wells, and Reverse Osmosis Water Purification Units (ROWPU). The ability to supply this amount of water in the most demanding environment is costly in money, personnel, infrastructure, and force structure. The calculations for water (8.34 pounds per gallon) in an arid environment equate to 66 pounds of water per soldier. Water is 30-40% of the force sustainment requirement. The Army must develop advanced technologies to capture ambient humidity.”

Daily water requirement per soldier: temperate 12.2 gallons, tropical 15.4, arid 15.8
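To get a feel for the logistics burden those figures imply, here is a minimal sketch (my arithmetic, not the report's) converting daily gallons per soldier into pounds and into tons per unit, using the report's 8.34 lb/gallon. The 150-soldier company size is an assumption for illustration; the report's own 66-pound figure presumably reflects a different planning factor than the full daily requirement.

```python
# Weight of one day's drinking/sustainment water per soldier, per the
# gallons-per-day figures quoted above and 8.34 lb per gallon.
GALLONS_PER_DAY = {"temperate": 12.2, "tropical": 15.4, "arid": 15.8}
LB_PER_GALLON = 8.34

def daily_water_weight_lb(environment: str) -> float:
    """Pounds of water one soldier requires per day in the given environment."""
    return GALLONS_PER_DAY[environment] * LB_PER_GALLON

def company_daily_tons(environment: str, soldiers: int = 150) -> float:
    """Short tons of water per day for a notional company (150 soldiers assumed)."""
    return daily_water_weight_lb(environment) * soldiers / 2000

for env in GALLONS_PER_DAY:
    print(f"{env}: {daily_water_weight_lb(env):.0f} lb/soldier/day, "
          f"{company_daily_tons(env):.1f} tons/day for 150 soldiers")
```

In an arid environment this works out to roughly 132 pounds of water per soldier per day, i.e. nearly ten tons daily for a single 150-soldier company, which makes clear why bottled-water resupply dominates the sustainment requirement.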

Current planning methodologies remain heavily invested in bottled water, meaning a more considerable force is needed to transport it.

In the 2000s in Iraq, over 864,000 bottles of water were consumed each month at one Forward Operating Base (FOB), with that number doubling during hotter months. Browne, Mathuel. “Marines Invest in New System to Purify Water on the Go.” Armed with Science: The Official US Defense Department Science Blog. 2017. http://science.dodlive.mil/2017/02/01/marines-invest-in-new-system-to-purify-water-on-the-go/.

ARCTIC OIL

Increased accessibility to the region for economic activity will consequently increase the security requirements and competition in the region. Currently Russia is rapidly expanding its Arctic military capabilities and capacity. The U.S. military must immediately begin expanding its capability to operate in the Arctic to defend economic interests and to partner with allies across the region.

As ice melts there will be increased shipping, population shifts to the region and increased competition to extract the vast hydrocarbon resources more readily available as the ice sheets contract. These changes will drive an expansion of security efforts from nations across the region as they vie to claim and protect the economic resources of the region.

The competition for resources in the Arctic will increase security requirements and the potential for conflict. The Army will not be excluded from those requirements or any conflict that develops. The Army will simply be unprepared for the mission and the environment in which it will occur. As Russian activity expands in the Arctic, both the Navy and the Air Force will compete for resources to meet the Russian threat. The Army must compete as well.

The Army needs to focus on the development of an infantry carrier vehicle with low surface pressure to maximize maneuverability in adverse terrain. An amphibious capable vehicle that has high weight distribution characteristics across the drive (either wheeled or tracked) contact patches will increase the speed of maneuver necessary for units to conduct wide area security across greater coverage areas.

PANDEMICS AND DISEASE (from climate change, yet more jobs for the army):  As the largest source of potential capacity and capability to respond to widespread disease outbreaks in the United States, the military should be prepared to execute defense support to civil authority (DSCA) missions of this type.

NUCLEAR POWER INDUSTRY

Currently, the Department of Energy conducts tritium production using 2 to 4 commercial nuclear pressurized water reactors (PWRs) run by the Tennessee Valley Authority (TVA). This commercial capability currently meets the U.S. stockpile tritium production requirement; however, due to the overall age of the U.S. nuclear power industry, future PWRs may not be available to continue tritium production. The loss of tritium production directly reduces the effectiveness of the U.S. nuclear stockpile by reducing or hindering the overall yield produced by the nuclear warheads. Without an effective U.S. nuclear stockpile, the U.S. cannot deter peer nuclear competitors and rogue nuclear states, increasing the risk of all-out war against the United States.

Directly tied to tritium production is the future of the nuclear power industry. It is filled with an aging fleet of reactors built in the late 1960s and 1970s. Most receive a commercial license from the Nuclear Regulatory Commission (NRC) to operate on average 30 years, but many have received or are seeking extensions to extend operations to 40 and 50 years. The age of the industry and the lack of new reactors coming online creates a significant risk to both the environment and the maintenance of the U.S. nuclear stockpile. “The highest priority of nuclear innovation policy should be to promote the availability of an advanced nuclear power system 15 to 20 years from now”.

Increasing the underlying U.S. baseline nuclear power generation capability from a mere 20% (and declining) to more than 80% (to cover the 60% coal production capability that currently exists) can significantly reduce greenhouse gases. The government will need to lead this expansion, which goes against the fossil fuel business paradigms that have existed for more than 100 years. Any nuclear industry expansion must include a long-term review of tritium production requirements and analyze how the government will maintain its required tritium production capability.

[natters on and on about need for nuclear, tritium for bombs ]

CONCLUSION

It is useful to remind ourselves regularly of the capacity of human beings to persist in stupid beliefs in the face of significant, contradictory evidence.  Mitigation of new large-scale stresses requires a commitment to learning, systematically, about what is happening.

Life is full of the unexpected, or the overlooked obvious. The term “black swan event” describes surprises of an especially momentous and nasty type. Popularized by Nassim Nicholas Taleb in his 2007 book of the same title, the concept rests on Taleb’s argument that black swan events have three characteristics: “rarity, extreme impact, and retrospective (though not prospective) predictability.” In recent years, the concept of black swan events has gained currency in political, military, and financial contexts.

The black swan has a venerable history as an illustration of the ancient epistemological problem of induction: simply stated, no number of observations of a given relationship is sufficient to prove that a different relationship cannot occur. No amount of white swan sightings can guarantee that a different color of swan is not out there waiting to be seen.

Three maxims can help us avoid dangerous failures of recognition, and speed learning when unexpected things happen.

1. Everything we believe about the world is provisional – “serving for the time being.” Adding the words “so far” to assertions about reality reminds us of this.

2. Unjustified certainty is very costly. The greater your certainty that you are right when you are wrong, the longer it will take you to recognize and incorporate new data into your system of belief, and to change your mind. General Douglas MacArthur was a confident man, and this confidence usually served him well, such as when he undertook the risky landings at Incheon in the Korean War. Yet MacArthur’s confidence betrayed him when China entered the war. He was certain that this would not happen, and MacArthur’s certainty delayed his recognition of a key change, exposing forces under his command to terrible risk. Confidence in your beliefs is valuable only insofar as it results in different choices (e.g., I choose A or B). Beyond that point, confidence has increasing costs.

3. Pay special attention to data that is unlikely in light of your current beliefs; it has much more information per unit, all else equal. In this sense, information content is measured as the potential to change how you think about the world. Information that is probable in light of your beliefs will have minimal effects on your understanding. Improbable information, if incorporated, will change it.


Reforestation for the return to biomass after fossil fuels

Preface. Below are excerpts from a New York Times article about forests. My book “Life After Fossil Fuels” explains why the myriad ways we use fossil fuels, from heating, cooking, and generating electricity (62% in the U.S. from coal & natural gas, 19% nuclear, just 9% wind and solar) to manufacturing millions of products, transportation, and fertilizer, can’t be done with electricity, hydrogen, and other alternative energies. Nor do we have time to scale them up even if they did work.

Only biomass can do it all, obviously, since the 5,000 years of civilizations that preceded fossil fuels used biomass for everything.

Of course, a Wood World can support perhaps only half a billion people sustainably. Even so, in the past deforestation was the main, or one of the main, factors in collapse (along with soil erosion, caused partly by cutting down forests to grow food and harvest timber for buildings, wagons, and more). This will no doubt be the case for the next civilization that arises out of the postcarbon ashes. But this article shows that forests can be cut sustainably, so perhaps we can stop the boom-bust cycles of history.

Since we’ve messed up the planet to extremes made possible by fossil fuels, the least we could do for our descendants is to plant forests so they don’t freeze in the dark, can build homes, carts, and more and rebuild anew.

Another reason this is a great article is the Wonderment, something to hold onto no matter how hard times get — amazement at the complexity and cooperation in nature.

Alice Friedemann www.energyskeptic.com, author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, 2021, Springer; “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer; “Barriers to Making Algal Biofuels”; and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, Peak Prosperity, XX2 report

***

Jabr F (2020) The Social Life of Forests. Trees appear to communicate and cooperate through subterranean networks of fungi. What are they sharing with one another? New York Times.

When Europeans arrived on America’s shores in the 1600s, forests covered one billion acres of the future United States — close to half the total land area. Between 1850 and 1900, U.S. timber production surged to more than 35 billion board feet from five billion. By 1907, nearly a third of the original expanse of forest — more than 260 million acres — was gone. As of 2012, the United States had more than 760 million forested acres. The age, health and composition of America’s forests have changed significantly, however. Although forests now cover 80 percent of the Northeast, for example, less than 1 percent of its old-growth forest remains intact.

And though clearcutting is not as common as it once was, it is still practiced on about 40 percent of logged acres in the United States and 80 percent of them in Canada. In a thriving forest, a lush understory captures huge amounts of rainwater, and dense root networks enrich and stabilize the soil. Clearcutting removes these living sponges and disturbs the forest floor, increasing the chances of landslides and floods, stripping the soil of nutrients and potentially releasing stored carbon to the atmosphere. When sediment falls into nearby rivers and streams, it can kill fish and other aquatic creatures and pollute sources of drinking water. The abrupt felling of so many trees also harms and evicts countless species of birds, mammals, reptiles and insects.

Humans have relied on forests for food, medicine and building materials for many thousands of years. Forests have likewise provided sustenance and shelter for countless species over the eons. But they are important for more profound reasons too. Forests function as some of the planet’s vital organs. The colonization of land by plants between 425 and 600 million years ago, and the eventual spread of forests, helped create a breathable atmosphere with the high level of oxygen we continue to enjoy today. Forests suffuse the air with water vapor, fungal spores and chemical compounds that seed clouds, cooling Earth by reflecting sunlight and providing much-needed precipitation to inland areas that might otherwise dry out. Researchers estimate that, collectively, forests store somewhere between 400 and 1,200 gigatons of carbon, potentially exceeding the atmospheric pool.

Crucially, a majority of this carbon resides in forest soils, anchored by networks of symbiotic roots, fungi and microbes. Each year, the world’s forests capture more than 24 percent of global carbon emissions, but deforestation — by destroying and removing trees that would otherwise continue storing carbon — can substantially diminish that effect. When a mature forest is burned or clear-cut, the planet loses an invaluable ecosystem and one of its most effective systems of climate regulation. The razing of an old-growth forest is not just the destruction of magnificent individual trees — it’s the collapse of an ancient republic whose interspecies covenant of reciprocation and compromise is essential for the survival of Earth as we’ve known it.

By the time she was in grad school at Oregon State University, however, Simard, today 60 years old and a professor of ecology at the University of British Columbia, understood that commercial clearcutting had largely superseded the sustainable logging practices of the past. Loggers were replacing diverse forests with homogeneous plantations, evenly spaced in upturned soil stripped of most underbrush. Without any competitors, the thinking went, the newly planted trees would thrive. Instead, they were frequently more vulnerable to disease and climatic stress than trees in old-growth forests. In particular, Simard noticed that up to 10 percent of newly planted Douglas fir were likely to get sick and die whenever nearby aspen, paper birch and cottonwood were removed. The reasons were unclear. The planted saplings had plenty of space, and they received more light and water than trees in old, dense forests. So why were they so frail?

Simard suspected that the answer was buried in the soil. Underground, trees and fungi form partnerships known as mycorrhizas: Threadlike fungi envelop and fuse with tree roots, helping them extract water and nutrients like phosphorus and nitrogen in exchange for some of the carbon-rich sugars the trees make through photosynthesis. Research had demonstrated that mycorrhizas also connected plants to one another and that these associations might be ecologically important, but most scientists had studied them in greenhouses and laboratories, not in the wild. For her doctoral thesis, Simard decided to investigate fungal links between Douglas fir and paper birch in the forests of British Columbia. Apart from her supervisor, she didn’t receive much encouragement from her mostly male peers. “The old foresters were like, Why don’t you just study growth and yield?” Simard told me. “I was more interested in how these plants interact. They thought it was all very girlie.”

Simard has studied webs of roots and fungi in the Arctic, temperate and coastal forests of North America for nearly three decades. Her initial inklings about the importance of mycorrhizal networks were prescient, inspiring whole new lines of research that ultimately overturned longstanding misconceptions about forest ecosystems. By analyzing the DNA in root tips and tracing the movement of molecules through underground conduits, Simard has discovered that fungal threads link nearly every tree in a forest — even trees of different species. Carbon, water, nutrients, alarm signals and hormones can pass from tree to tree through these subterranean circuits. Resources tend to flow from the oldest and biggest trees to the youngest and smallest. Chemical alarm signals generated by one tree prepare nearby trees for danger. Seedlings severed from the forest’s underground lifelines are much more likely to die than their networked counterparts. And if a tree is on the brink of death, it sometimes bequeaths a substantial share of its carbon to its neighbors.

Although Simard’s peers were skeptical and sometimes even disparaging of her early work, they now generally regard her as one of the most rigorous and innovative scientists studying plant communication and behavior. David Janos, co-editor of the scientific journal Mycorrhiza, characterized her published research as “sophisticated, imaginative, cutting-edge.” Jason Hoeksema, a University of Mississippi biology professor who has studied mycorrhizal networks, agreed: “I think she has really pushed the field forward.” Some of Simard’s studies now feature in textbooks and are widely taught in graduate-level classes on forestry and ecology. She was also a key inspiration for a central character in Richard Powers’s 2019 Pulitzer Prize-winning novel, “The Overstory”: the visionary botanist Patricia Westerford. In May, Knopf will publish Simard’s own book, “Finding the Mother Tree,” a vivid and compelling memoir of her lifelong quest to prove that “the forest was more than just a collection of trees.”

Since Darwin, biologists have emphasized the perspective of the individual. They have stressed the perpetual contest among discrete species, the struggle of each organism to survive and reproduce within a given population and, underlying it all, the single-minded ambitions of selfish genes. Now and then, however, some scientists have advocated, sometimes controversially, for a greater focus on cooperation over self-interest and on the emergent properties of living systems rather than their units.

Before Simard and other ecologists revealed the extent and significance of mycorrhizal networks, foresters typically regarded trees as solitary individuals that competed for space and resources and were otherwise indifferent to one another. Simard and her peers have demonstrated that this framework is far too simplistic. An old-growth forest is neither an assemblage of stoic organisms tolerating one another’s presence nor a merciless battle royale: It’s a vast, ancient and intricate society. There is conflict in a forest, but there is also negotiation, reciprocity and perhaps even selflessness. The trees, understory plants, fungi and microbes in a forest are so thoroughly connected, communicative and codependent that some scientists have described them as superorganisms. Recent research suggests that mycorrhizal networks also perfuse prairies, grasslands, chaparral and Arctic tundra — essentially everywhere there is life on land. Together, these symbiotic partners knit Earth’s soils into nearly contiguous living networks of unfathomable scale and complexity. “I was taught that you have a tree, and it’s out there to find its own way,” Simard told me. “It’s not how a forest works, though.”

In some of her earliest and most famous experiments, Simard planted mixed groups of young Douglas fir and paper birch trees in forest plots and covered the trees with individual plastic bags. In each plot, she injected the bags surrounding one tree species with radioactive carbon dioxide and the bags covering the other species with a stable carbon isotope — a variant of carbon with an unusual number of neutrons. The trees absorbed the unique forms of carbon through their leaves. Later, she pulverized the trees and analyzed their chemistry to see if any carbon had passed from species to species underground. It had. In the summer, when the smaller Douglas fir trees were generally shaded, carbon mostly flowed from birch to fir. In the fall, when evergreen Douglas fir was still growing and deciduous birch was losing its leaves, the net flow reversed. As her earlier observations of failing Douglas fir had suggested, the two species appeared to depend on each other. No one had ever traced such a dynamic exchange of resources through mycorrhizal networks in the wild. In 1997, part of Simard’s thesis was published in the prestigious scientific journal Nature — a rare feat for someone so green. Nature featured her research on its cover with the title “The Wood-Wide Web,” a moniker that eventually proliferated through the pages of published studies and popular science writing alike.

In 2002, Simard secured her current professorship at the University of British Columbia, where she continued to study interactions among trees, understory plants and fungi. In collaboration with students and colleagues around the world, she made a series of remarkable discoveries. Mycorrhizal networks were abundant in North America’s forests. Most trees were generalists, forming symbioses with dozens to hundreds of fungal species. In one study of six Douglas fir stands measuring about 10,000 square feet each, almost all the trees were connected underground by no more than three degrees of separation; one especially large and old tree was linked to 47 other trees and projected to be connected to at least 250 more; and seedlings that had full access to the fungal network were 26 percent more likely to survive than those that did not.

Depending on the species involved, mycorrhizas supplied trees and other plants with up to 40 percent of the nitrogen they received from the environment and as much as 50 percent of the water they needed to survive. Below ground, trees traded between 10 and 40 percent of the carbon stored in their roots. When Douglas fir seedlings were stripped of their leaves and thus likely to die, they transferred stress signals and a substantial sum of carbon to nearby ponderosa pine, which subsequently accelerated their production of defensive enzymes. Simard also found that denuding a harvested forest of all trees, ferns, herbs and shrubs — a common forestry practice — did not always improve the survival and growth of newly planted trees. In some cases, it was harmful.

At this point other researchers have replicated most of Simard’s major findings. It’s now well accepted that resources travel among trees and other plants connected by mycorrhizal networks. Most ecologists also agree that the amount of carbon exchanged among trees is sufficient to benefit seedlings, as well as older trees that are injured, entirely shaded or severely stressed, but researchers still debate whether shuttled carbon makes a meaningful difference to healthy adult trees. On a more fundamental level, it remains unclear exactly why resources are exchanged among trees in the first place, especially when those trees are not closely related.

“Darwin’s theory of evolution by natural selection is obviously 19th-century capitalism writ large,” wrote the evolutionary biologist Richard Lewontin.

As Darwin well knew, however, ruthless competition was not the only way that organisms interacted. Ants and bees died to protect their colonies. Vampire bats regurgitated blood to prevent one another from starving. Vervet monkeys and prairie dogs cried out to warn their peers of predators, even when doing so put them at risk. At one point Darwin worried that such selflessness would be “fatal” to his theory. In subsequent centuries, as evolutionary biology and genetics matured, scientists converged on a resolution to this paradox: Behavior that appeared to be altruistic was often just another manifestation of selfish genes — a phenomenon known as kin selection. Members of tight-knit social groups typically share large portions of their DNA, so when one individual sacrifices for another, it is still indirectly spreading its own genes.

Kin selection cannot account for the apparent interspecies selflessness of trees, however — a practice that verges on socialism. Some scientists have proposed a familiar alternative explanation: Perhaps what appears to be generosity among trees is actually selfish manipulation by fungi. Descriptions of Simard’s work sometimes give the impression that mycorrhizal networks are inert conduits that exist primarily for the mutual benefit of trees, but the thousands of species of fungi that link trees are living creatures with their own drives and needs. If a plant relinquishes carbon to fungi on its roots, why would those fungi passively transmit the carbon to another plant rather than using it for their own purposes? Maybe they don’t. Perhaps the fungi exert some control: What looks like one tree donating food to another may be a result of fungi redistributing accumulated resources to promote themselves and their favorite partners.

“Where some scientists see a big cooperative collective, I see reciprocal exploitation,” said Toby Kiers, a professor of evolutionary biology at Vrije Universiteit Amsterdam. “Both parties may benefit, but they also constantly struggle to maximize their individual payoff.” Kiers is one of several scientists whose recent studies have found that plants and symbiotic fungi reward and punish each other with what are essentially trade deals and embargoes, and that mycorrhizal networks can increase conflict among plants. In some experiments, fungi have withheld nutrients from stingy plants and strategically diverted phosphorous to resource-poor areas where they can demand high fees from desperate plants.

Several of the ecologists I interviewed agreed that regardless of why and how resources and chemical signals move among the various members of a forest’s symbiotic webs, the result is still the same: What one tree produces can feed, inform or rejuvenate another. Such reciprocity does not necessitate universal harmony, but it does undermine the dogma of individualism and temper the view of competition as the primary engine of evolution.

The most radical interpretation of Simard’s findings is that a forest behaves “as though it’s a single organism,” as she says in her TED Talk. Some researchers have proposed that cooperation within or among species can evolve if it helps one population outcompete another — an altruistic forest community outlasting a selfish one, for example. The theory remains unpopular with most biologists, who regard natural selection above the level of the individual to be evolutionarily unstable and exceedingly rare. Recently, however, inspired by research on microbiomes, some scientists have argued that the traditional concept of an individual organism needs rethinking and that multicellular creatures and their symbiotic microbes should be regarded as cohesive units of natural selection. Even if the same exact set of microbial associates is not passed vertically from generation to generation, the functional relationships between an animal or plant species and its entourage of microorganisms persist — much like the mycorrhizal networks in an old-growth forest. Humans are not the only species that inherits the infrastructure of past communities.

When a seed germinates in an old-growth forest, it immediately taps into an extensive underground community of interspecies partnerships. Uniform plantations of young trees planted after a clear-cut are bereft of ancient roots and their symbiotic fungi. The trees in these surrogate forests are much more vulnerable to disease and death because, despite one another’s company, they have been orphaned. Simard thinks that retaining some mother trees, which have the most robust and diverse mycorrhizal networks, will substantially improve the health and survival of future seedlings — both those planted by foresters and those that germinate on their own.

Since at least the late 1800s, North American foresters have devised and tested dozens of alternatives to standard clearcutting: strip cutting (removing only narrow bands of trees), shelterwood cutting (a multistage process that allows desirable seedlings to establish before most overstory trees are harvested) and the seed-tree method (leaving behind some adult trees to provide future seed), to name a few. These approaches are used throughout Canada and the United States for a variety of ecological reasons, often for the sake of wildlife, but mycorrhizal networks have rarely if ever factored into the reasoning.

Ryan told me about the 230,000-acre Menominee Forest in northeastern Wisconsin, which has been sustainably harvested for more than 150 years. Sustainability, the Menominee believe, means “thinking in terms of whole systems, with all their interconnections, consequences and feedback loops.” They maintain a large, old and diverse growing stock, prioritizing the removal of low-quality and ailing trees over more vigorous ones and allowing trees to age 200 years or more — so they become what Simard might call grandmothers. Ecology, not economics, guides the management of the Menominee Forest, but it is still highly profitable. Since 1854, more than 2.3 billion board feet have been harvested — nearly twice the volume of the entire forest — yet there is now more standing timber than when logging began. “To many, our forest may seem pristine and untouched,” the Menominee wrote in one report. “In reality, it is one of the most intensively managed tracts of forest in the Lake States.”

Diverse microbial communities inhabit our bodies, modulating our immune systems and helping us digest certain foods. The energy-producing organelles in our cells known as mitochondria were once free-swimming bacteria that were subsumed early in the evolution of multicellular life. Through a process called horizontal gene transfer, fungi, plants and animals — including humans — have continuously exchanged DNA with bacteria and viruses. From its skin, fur or bark right down to its genome, any multicellular creature is an amalgam of other life-forms. Wherever living things emerge, they find one another, mingle and meld.

Five hundred million years ago, as both plants and fungi continued oozing out of the sea and onto land, they encountered wide expanses of barren rock and impoverished soil. Plants could spin sunlight into sugar for energy, but they had trouble extracting mineral nutrients from the earth. Fungi were in the opposite predicament. Had they remained separate, their early attempts at colonization might have faltered or failed. Instead, these two castaways — members of entirely different kingdoms of life — formed an intimate partnership. Together they spread across the continents, transformed rock into rich soil and filled the atmosphere with oxygen.

Eventually, different types of plants and fungi evolved more specialized symbioses. Forests expanded and diversified, both above- and below ground. What one tree produced was no longer confined to itself and its symbiotic partners. Shuttled through buried networks of root and fungus, the water, food and information in a forest began traveling greater distances and in more complex patterns than ever before. Over the eons, through the compounded effects of symbiosis and coevolution, forests developed a kind of circulatory system. Trees and fungi were once small, unacquainted ocean expats, still slick with seawater, searching for new opportunities. Together, they became a collective life form of unprecedented might and magnanimity.

Posted in Deforestation | Comments Off on Reforestation for the return to biomass after fossil fuels

The History of Drunkenness

Preface. This is a book review of “A Short History of Drunkenness” by Mark Forsyth, consisting mainly of my Kindle notes.

I expect alcohol to be a big part of postcarbon life, not only because most cultures have embraced it, but because it will help drown the sorrows and memories of the time when we lived like gods and goddesses during the brief oil age. Plus, for those of you around after fossil fuels, brewing is yet another way to make a living if being a farmer isn’t appealing.

Taxation of alcohol is also how governments pay for wars and how elites grow rich, and alcohol plays a large role in many religions (see the William James passage quoted under RELIGION AND ALCOHOL below).

Alice Friedemann www.energyskeptic.com author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

***

Mark Forsyth. 2018. A Short History of Drunkenness: How, Why, Where, and When Humankind Has Gotten Merry from the Stone Age to the Present.

Drunkenness

Drunkenness is near universal. Almost every culture in the world has booze. The only ones that weren’t too keen—North America and Australia—have been colonized by those who were. And at every time and in every place, drunkenness is a different thing. It’s a celebration, a ritual, an excuse to hit people, a way of making decisions or ratifying contracts, and a thousand other peculiar practices. When the Ancient Persians had a big political decision to make they would debate the matter twice: once drunk, and once sober. If they came to the same conclusion both times, they acted.

History books like to tell us that so-and-so was drunk, but they don’t explain the minutiae of drinking. Where was it done? With whom? At what time of day? Drinking has always been surrounded by rules, but they rarely get written down. In present-day Britain, for example, though there is no law in place, absolutely everybody knows that you must not drink before noon, except, for some reason, in airports and at cricket matches.

All we know for sure is that if a male fruit fly has his romantic advances spurned by a cruel and disdainful female fruit fly, he ups his alcohol consumption dramatically. Unfortunately for animals, alcohol doesn’t occur naturally in large enough quantities to allow for a proper party.  Though sometimes it does. There’s an island off Panama where the mantled howler monkey can feast happily on the fallen fruit of the astrocaryum palm (4.5 percent ABV). They get boisterous and noisy, and then they get sleepy and stumbly, and then sometimes they fall out of trees and injure themselves. If you adjust their alcohol intake for bodyweight, they can get through the equivalent of two bottles of wine in thirty minutes. But they are a rarity.

What happens if you give a whole colony of rats an open bar? Actually, they’re rather civilized. Though not for the first few days, when they go a bit crazy, but then most of them settle down to two drinks a day: one just before feeding (which the scientists refer to as the cocktail hour) and one just before bedtime (the nightcap). Every three or four days there’s a spike in alcohol consumption as all the rats get together for little rat parties.  Rat colonies usually have one dominant male, the King Rat. The King Rat is a teetotaler. Alcohol consumption is highest among the males with the lowest social status. They drink to calm their nerves, they drink to forget their worries, they drink, it seems, because they’re failures.

Getting elephants drunk is straightforward: load a couple of barrels of beer onto the back of a pickup truck, drive to somewhere near the elephants, take the lids off and let them have a sip. There’s usually a bit of jostling and the big bull elephants take most of it. But you can then observe them stumbling around and falling asleep and it’s all rather amusing. Even this, though, can go wrong. One scientist who allowed a dominant bull to get a bit too pissed found himself having to break up a fight between a soused elephant and a rhino. Usually, elephants don’t attack rhinos, but the beer makes them quarrelsome.

Darwin recorded that on the following morning the monkeys who drank were very cross and dismal; they held their aching heads with both hands and wore a most pitiable expression: when beer or wine was offered them, they turned away with disgust, but relished the juice of lemons. If, Darwin thought, man and monkey both react the same way to hangovers, they must be related. This wasn’t his only proof, but it was a start in proving that bishops were primates.

From the New Yorker: In “Descent of Man,” Darwin states, “Many kinds of monkeys have a strong taste for . . . spirituous liquors.” And he cites the reported effects of the monkeys’ being exposed to strong beer—“cross and dismal . . . aching heads . . . a most pitiable expression”—as suggestive evidence for the evolutionary affinity between humans and primates. “These trifling facts prove how similar the nerves of taste must be in monkeys and man, and how similarly their whole nervous system is affected”—by alcohol.

Humans are designed to drink. We’re really damned good at it. Better than any other mammal, except maybe the Malaysian tree shrew. Never get into a drinking contest with a Malaysian tree shrew; or, if you do, don’t let them insist that you adjust for bodyweight. They can take nine glasses of wine and be none the worse for it. That’s because they’ve evolved to survive on fermented palm nectar. For millions of years evolution has been naturally selecting the best shrew drinkers in Malaysia and now they’re champions. But we are the same. We evolved to drink. Ten million years ago our ancestors came down from the trees. Why they did this is not entirely clear, but it may well be that they were after the lovely overripe fruit that you find on the forest floor. That fruit has more sugar in it and more alcohol. So we developed noses that could smell the alcohol at a distance. The alcohol was a marker that could lead us to the sugar.

Alcohol has led us to our food, alcohol has made us want to eat our food, but now we need to process the alcohol; otherwise we’ll just become food for somebody else. It’s hard enough to fight off a prehistoric predator when you’re sober, but trying to punch a saber-toothed tiger when you’re five sheets to the wind is a nightmare.

So now that we’d acquired the taste, we needed—evolutionarily—to develop a coping mechanism. One quite precise genetic mutation occurred ten million years ago that makes us process alcohol nearly as well as a Malaysian tree shrew: we began producing a particular enzyme. Humans (or the ancestors of humans) were suddenly able to drink all the other apes under the table. For a modern human, 10% of the enzyme machinery in your liver is devoted to converting alcohol into energy.

From the internet: Once alcohol has entered your bloodstream it remains in your body until it is processed. About 90–98% of the alcohol that you drink is broken down in your liver; the other 2–10% is removed in your urine, breathed out through your lungs or excreted in your sweat. The average person takes about an hour to process 10 grams of alcohol, the amount in a standard drink. So if you drink alcohol faster than your body can process it, your blood alcohol level will continue to rise.
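The arithmetic above (roughly an hour to clear the 10 grams in a standard drink) can be sketched as a toy model. The clearance rate and drink size are the figures quoted above; the hour-by-hour loop is just an illustration of why drinking faster than you process leads to a rising backlog, not a medical calculator.

```python
# Toy model of alcohol accumulation using the figures quoted above:
# the body clears roughly 10 g of alcohol per hour, and a standard
# drink contains roughly 10 g. Purely illustrative.

CLEARANCE_G_PER_HOUR = 10.0  # ~10 g of alcohol processed per hour
STANDARD_DRINK_G = 10.0      # ~10 g of alcohol in a standard drink

def alcohol_on_board(drinks_per_hour, hours):
    """Grams of unprocessed alcohol after drinking at a steady rate."""
    remaining = 0.0
    for _ in range(hours):
        remaining += drinks_per_hour * STANDARD_DRINK_G
        remaining = max(0.0, remaining - CLEARANCE_G_PER_HOUR)
    return remaining

# One drink an hour: the liver keeps pace, nothing accumulates.
print(alcohol_on_board(1, 4))   # 0.0
# Two drinks an hour: a 10 g backlog builds every hour.
print(alcohol_on_board(2, 4))   # 40.0
```

At one drink an hour the backlog stays at zero; double the pace and the unprocessed alcohol (and so blood alcohol) climbs steadily, which is the point the passage is making.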

Benjamin Franklin, Founding Father of the United States, famously observed that the existence of wine was “proof that God loves us, and loves to see us happy.” He also made a significant observation about human anatomy: “To confirm still more your piety and gratitude to Divine Providence, reflect upon the situation which it has given to the elbow. You see in animals who are intended to drink the waters that flow upon the earth, that if they have long legs, they have also a long neck, so that they can get at their drink without kneeling down. But man, who was destined to drink wine, is framed in a manner that he may raise the glass to his mouth. If the elbow had been placed nearer the hand, the part in advance would have been too short to bring the glass up to the mouth; and if it had been nearer the shoulder, that part would have been so long that when it attempted to carry the wine to the mouth it would have overshot the mark, and gone beyond the head.”

Most of the early drinks wouldn’t so much have been invented as discovered. A pleasant theory involves bees. Imagine a bees’ nest in the hollow of a tree. Then there’s a storm, the tree falls over and the nest is flooded with rainwater. So long as you have roughly one part honey to two parts rainwater, fermentation ought to kick in pretty soon.   More prosaically you simply need to be picking and storing fruit somewhere reasonably watertight. The juice at the bottom will start to bubble and pretty soon you’ll have a very primitive wine. For that you would probably need pottery. More importantly you need to remain in the same place for a while, and all of the evidence suggests that our ancestors were mostly on the move.

It looks like there was beer, and, importantly, it looks like there was beer before there were temples and before there was farming. This leads to the great theory of human history: that we didn’t start farming because we wanted food—there was loads of that around. We started farming because we wanted booze. This makes a lot more sense than you might think, for six reasons. 1) beer is easier to make than bread as no hot oven is required, 2) beer contains vitamin B, which humans require if they’re going to be healthy and strong. Hunters get their vitamin B by eating other animals. On a diet of bread and no beer, grain farmers will all turn into anemic weaklings and be killed by the big healthy hunters. But fermentation of wheat and barley produces vitamin B. 3) beer is simply a better food than bread. It’s more nutritious because the yeast has been doing some of the digesting for you.

From NPR: Charlie Bamforth, a professor of brewing sciences at the University of California, Davis, says that though beer has been blamed for many a paunch, it is more nutritious than most other alcoholic drinks. “There’s a reason people call it liquid bread,” he says. Beer has more selenium, B vitamins, phosphorus, folate and niacin than wine, as well as significant protein and some fiber. It is also one of the few significant dietary sources of silicon, which research has shown can help thwart the effects of osteoporosis. There are about 150 calories in a typical 12-ounce serving of 5 percent-alcohol beer; a 12-ounce bottle at 9.6 percent has 300 calories, 200 of them from the alcohol.
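The calorie figures quoted from NPR are easy to sanity-check: ethanol supplies about 7 kcal per gram, and the grams of alcohol in a serving follow from volume times ABV times ethanol’s density (about 0.789 g/ml). These constants are standard reference values, not from the article, so this is a rough cross-check rather than the article’s own math:

```python
# Rough sanity check of the beer calorie figures quoted above.
# Ethanol: ~0.789 g/ml density, ~7 kcal per gram (standard values).
ETHANOL_DENSITY_G_PER_ML = 0.789
KCAL_PER_G_ETHANOL = 7.0
ML_PER_US_OZ = 29.57

def alcohol_kcal(ounces, abv):
    """Calories contributed by the alcohol alone in a serving."""
    ethanol_ml = ounces * ML_PER_US_OZ * abv
    return ethanol_ml * ETHANOL_DENSITY_G_PER_ML * KCAL_PER_G_ETHANOL

print(round(alcohol_kcal(12, 0.096)))  # ~188 kcal, close to the 200 quoted
```

The estimate lands near the quoted 200 calories from alcohol; the remaining ~100 calories in a 300-calorie bottle come from residual carbohydrates and protein.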

4) beer can be stored and consumed later, 5) the alcohol in beer purifies the water that was used to make it, killing all the nasty microbes.  6) The biggest argument is that to really change behavior you need a cultural driver. If beer was worth traveling for (which Göbekli Tepe suggests it was) and if beer was a religious drink (which Göbekli Tepe suggests it was), then even the most ardent huntsman might be persuaded to settle down and grow some good barley to brew it with.

And so in about 9000 BC, we invented farming because we wanted to get drunk on a regular basis.

Cities are the result of farmers working too hard. In fact, history is the result of farmers working too hard. If you have a job that doesn’t involve food-production (and you’re alive), that means that somewhere there’s a farmer producing more food than he needs. The second that happens you get specialized jobs, because ultimately you’ve got to be providing something to the farmer in exchange for the food, whether it’s clothes or housing or protection or accountancy services.

The sure sign of agricultural surplus is that there are populated places that produce no food at all. Such places are called cities, inhabited by citizens. The Latin for citizen was civis, and from that we get the words civil and civilization. When we give the farmers something in return, it’s called trade, and trade causes disputes, and the people who solve these disputes are called the government. The government requires money to spend on important things like thrones, armies and fact-finding trips. And because it’s terribly hard to remember who’s paid their tax and who hasn’t, tax requires writing. Writing causes Prehistory to stop, and History to begin.

Everybody drank beer. Kings drank it on their thrones. Priests drank it in temples.

There was a myth that civilization had only come about through beer. The story went that Enki, the god of wisdom, had sat down with the goddess of hanky-panky, whose name was Inana. At the time, humans had no skills or knowledge. So it came about that Enki and Inana were drinking beer together in the abzu, and enjoying the taste of sweet wine. The bronze aga vessels were filled to the brim, and the two of them started a competition, drinking from the bronze vessels of Uraš. Long story short: Inana wins. While Enki is passed out drunk, she steals all the wisdom from heaven and takes it down to earth. When Enki wakes up, he notices that all the wisdom is missing and throws a fit, but by then it’s too late.

The most famous Sumerian myth of all, The Epic of Gilgamesh, starts with a wild man called Enkidu who lives among the animals like a Mesopotamian Mowgli, until a priestess of Inana turns up and tries to make him human. She does this by having sex with him, and then giving him a drink (not the usual order).

SUMERIA: So now we sit down at a table and the beer is brought to us in an amam jar, along with two straws. Beer has to be drunk through a straw. This is because Sumerian beer is not like our lovely modern clear amber nectar. It’s a sort of fizzing barley porridge with lots of solid stuff floating on the surface. A straw lets us go below the surface and suck out the sweet liquid. There are lots of representations of Sumerians doing this, and people still do it with palm wine in parts of central Africa.

RELIGION AND ALCOHOL

There is, in the Western world, no tradition of religious drunkenness. But it is a practice found across history and across the globe. From Mexico to the Pacific islands to Ancient China there is or has been drunken mysticism, god found at the bottom of a bottle

The sway of alcohol over mankind is unquestionably due to its power to stimulate the mystical faculties of human nature, usually crushed to earth by the cold facts and dry criticisms of the sober hour. Sobriety diminishes, discriminates, and says no; drunkenness expands, unites, and says yes. It is in fact the great exciter of the Yes function in man. It brings its votary from the chill periphery of things to the radiant core. It makes him for the moment one with truth. Not through mere perversity do men run after it. To the poor and the unlettered it stands in the place of symphony concerts and of literature. The drunken consciousness is one bit of the mystic consciousness. (William James, The Varieties of Religious Experience)

The Greeks didn’t drink beer, they drank wine; but they watered it down by a ratio of about two or three parts water to one part wine, which made it almost exactly the same strength.  The Persians drank beer; that made them barbarians. The Thracians drank undiluted wine; that made them barbarians. The Greeks were the only people who had it just right, according to the Greeks.

It’s rather intriguing that the Greek god of wine and the Egyptian goddess of beer were both said to arrive from the exotic south with a dancing menagerie of humans, animals and spirits, but it’s probably just a coincidence.

The myths about Dionysus mostly fall into two categories. (1) There are the stories of people who don’t recognize him, and don’t even realize that he is a god. Who these people are varies from pirates to princes, but their fate is usually the same. Dionysus punishes them by turning them into animals. The moral of the stories is reasonably clear. When you’re dealing with wine you need to remember that you are dealing with something powerful, something divine. This is no ordinary drink. It is holy. Moreover, alcohol, if you’re not careful, can bring out the beast in you.

The only fully human friends Dionysus had were the maenads. Maenads were women who worshipped Dionysus. They did this by going out into the mountains wearing next to nothing and getting very, very drunk. Then they would dance and let their hair down and rip animals to pieces in a sort of terrifying Arcadian hen party. Nobody is quite sure whether maenads ever actually existed, or whether they were just a sexual fantasy of Greek men, like the Amazons.  The maenads, though, were terribly important in the second type of Dionysus myth.  Dionysus didn’t like teetotalers. This is unsurprising for a god of wine, but Dionysus being Dionysus he tends to kill them cruelly. The most famous example is a play by Euripides where the King tries to outlaw maenadism so Dionysus makes his maenads believe that the King is a lion and they rip him limb from limb (the group is led by the King’s mother). There’s another story about Orpheus wandering the countryside. His wife has died and he wants to have a good cry. Unfortunately, he comes across a group of maenads who are all getting plastered and want him to join in. Orpheus politely declines and they rip him limb from limb as well.

There are a lot of stories like this and they all end the same way. The moral is pretty clear: you should recognize that drinking is dangerous and that it might turn you into a wild beast, but you should still drink. Never turn down an invitation to a party.

CHRISTIANITY. Paul notes that people were getting drunk at communion. He has to point out that communion is for drinking, not for getting drunk, which must have come as something of a shock to the Corinthians. Once you start to look for it, you find this problem a lot in early Christianity. The poor apostles were going out preaching the good news of a new religion that required you to drink wine. And people seem to have got the wrong impression. The Acts of the Apostles opens with Pentecost and the Holy Spirit descending upon the Christians, who proceed to speak in tongues. The people in the crowd that gathered: asked one another, “What does this mean?” Some, however, made fun of them and said, “They have had too much wine.” And poor St. Peter has to jump up and explain: Fellow Jews and all of you who live in Jerusalem, let me explain this to you; listen carefully to what I say. These people are not drunk, as you suppose. It’s only nine in the morning! When you think about it, the drink would have made a perfect stick with which to beat early Christianity. It would be so easy to caricature this strange new sect as a group of drunkards, a Jewish version of the cult of Dionysus, that it would be surprising if pagans didn’t do this.

GREEK DRINKING

Plato, quite specifically, says that getting drunk is like going to the gym: the first time you do it you’ll be really bad and end up in pain. But practice makes perfect. If you can drink a lot and still behave yourself, then you are an ideal man. If you can do this in company, then you can show the world that you are an ideal man, because you are displaying the great virtue of self-control even under the influence. Self-control, said Plato, was like bravery.

A chap who spends his days fighting battles can train himself to be brave. A man who spends his evenings getting drunk can train himself to ever higher levels of self-control.

Let us say that you were a lady in classical Athens and you wanted to get drunk. You couldn’t. Women weren’t allowed at symposiums. Or, to be more precise, women might be allowed but not ladies.

So it was the men who gathered, and they gathered at somebody’s private house. Not at a bar. For a typical symposium you might have a dozen chaps over. A really large one might be up to thirty fellows, but that was unusual. First, you had supper. This was a plain meal that was consumed pretty quickly and pretty silently. The food was not the thing—it was only really there to soak up the wine. Arranged in a circle around the room were couches with cushions on them. The men would lie down on the couches with a pillow under one arm. Young men, though, were not allowed to lie down.

It may then have been necessary to choose a symposiarch—the leader of the evening’s drinking. This would almost always be the host, whose first job was to choose the wine. Usually, this would be from his private estate, as most Athenian gentlemen owned a vineyard; indeed, the class system in Athens was built around how big your vineyard was. The lowest level was 7 acres or less; the highest had over 25. If it was summer, the wine would have been cooled by lowering it into a well, or burying it.

At a symposium you got deliberately, methodically and publicly drunk. Everybody was given a bowl of wine, and everybody had to drink their bowl before there was a refill. Just as the guests at a symposium didn’t get to choose how much they drank, they didn’t get to choose what they talked about, or indeed whether they talked at all. The symposiarch would name a subject, and then each guest in turn had to give their opinion on it, launching into a long and detailed answer.

There would be none of the free flow of conversation that we associate with a drinking session, and no opportunity simply to remain silent.

A game that Athenians played at symposiums was called kottabos. You took the last few drops of wine in your drinking bowl and tried to flick it at something. Sometimes a special bronze target would be brought in and everyone would flick their wine at it. Sometimes the target was a bowl floating in a pot of water and your aim was to sink it. Sometimes the target was a person. It all sounds rather messy, and old people used to complain about it and say that young men should be doing something constructive instead.

The Greek comic poet Eubulus has the god Dionysus lay out the progression: “For sensible men I prepare only three kraters: one for health (which they drink first), the second for love and pleasure, and the third for sleep. After the third one is drained, wise men go home. The fourth krater is not mine any more—it belongs to bad behavior; the fifth is for shouting; the sixth is for rudeness and insults; the seventh is for fights; the eighth is for breaking the furniture; the ninth is for depression; the tenth is for madness and unconsciousness.”

ROMAN EMPIRE

Early Rome was a very stern and sober place. In the days of the high republic (we’re talking about 200 BC–ish), they were all clean-shaven, short-haired militaristic types. Drunkenness was frowned upon. Sternly. It was associated with the long-haired, bearded, luxurious Greeks, whom the Romans were busy defining themselves against.

The Roman Empire was, in essence, a system whereby the entire wealth of the known world was funneled back to one city. This produced possibly the wealthiest city that the earth has ever known. Money corrupts and huge amounts of money are huge amounts of fun. The result, as every schoolboy learns, was decadence. Roman men started enjoying wine more than water. Then they even let their womenfolk try some. Then they finally read some Greek books and realized they were rather good. And then they thought they’d give homosexuality a go, and that was a big hit. By the time you got to the mid-first century AD those stern senators of 186 BC would have been turning in their graves.

So how did you get in on the fun? The problem with Roman money was that, though there was an awful lot of it, it arrived at the very top of society and flowed down. If you wanted a bit of wealth and wine, you had to find yourself a patron, somebody to sponge off. This sounds horribly parasitical, and in a sense it was, but it was all out in the open. There were patrons with money, and there were dependents with flattery. Everyone knew what was going on. So long as you were prepared to sell your dignity, you got paid in good food and wine. The central component of the system was a banquet called the convivium. Not everybody liked the system. The poet Juvenal asked: “Is a dinner worth all the insults with which you have to pay for it? Is your hunger so importunate, when it might, with greater dignity, be shivering where you are, and munching dirty scraps of dog’s bread?” And most people said yes.

The Roman convivium was not about being convivial. The Roman convivium was all about showing off, and about asserting who was on the top and who was right down at the bottom. You are not here to have fun. You’re here to learn your place, to applaud those above you, and to sneer at those below you. This was accomplished through seating, slaves, quality of wine, quantity of wine, food, what the wine was served in and where that was thrown.

The dining room contained one big table. One side was left empty, as that was the side where the slaves, those endless crowds of slaves, served the brimming platters and took away the empties. The other three sides had a couch each, and each couch held three people, lying down, because the Romans liked to drink horizontally. Looked at from the slaves’ point of view, the couch on the right was for inferior guests, with the least honored guest nearest to you. That corner of the table, diagonally opposite the host and his friend, could be covered with inferior food and inferior wine for the clearly inferior guest. If you were there, you weren’t really welcome, and you certainly weren’t honored. The host was telling you that he didn’t give a galley-slave’s cuss about you. And you still had to say thank you. That was the point of the convivium.

The whole house is crawling with crawling slaves. They had to crawl, or they got whipped. Hosts would whip their slaves in front of their guests as a demonstration of power.  

THE DARK AGES

The monks of the Dark Ages, indeed the people of the Dark Ages, needed booze because the alternative was water. Water requires a well-maintained well, or preferably an aqueduct, and that requires effective organization and government and all the things that the Dark Ages are not best known for. In the absence of these, your best source of water is the nearest stream, and for most of us, those who don’t live high in the mountains, that is a murky prospect.

Water drawn from the nearest stream was barely transparent. It was liable to contain creeping things, whatever they were—worms or leeches. One Anglo-Saxon book recommends a cure for swallowing creeping things: immediately drink some hot sheep’s blood. This tells us two things: (a) water was disgusting; (b) people did nonetheless drink it sometimes. Sometimes you had to: you were thirsty and you could afford nothing better. The standard Anglo-Saxon attitude to the subject is summed up in Abbot Aelfric’s dictum: “Ale if I have it, water if I have no ale.”

Wine, continued Aelfric in a wistful tone, was way too dear for the average English monk. Instead, the standard ration was a mere gallon of ale a day (and more on feast days).

THE VIKINGS

Most polytheistic religions have one chief god, and then a god of drunkenness/wine/brewing somewhere on the side. Enlil was superior to Ninkasi; Amun to Hathor; Zeus to Dionysus. The drunken god turns up, causes some fun and chaos, but is always subject to the wiser ways and greater powers of the chief god, who usually has a beard. You don’t need to be the sharpest theologian to interpret this as drunkenness having to find its niche within society, its little spot where it can be tamed and controlled. But with the Vikings the chief god is the drunk god: Odin is actually called “the drunk one,” and there is no other Viking god of alcohol. That’s because alcohol and drunkenness didn’t need to find their place within Viking society; they were Viking society. Alcohol was authority, alcohol was family, alcohol was wisdom, alcohol was poetry, alcohol was military service and alcohol was fate.

There were only three kinds of Viking booze. There was wine, which was immensely expensive; almost nobody could get hold of it. Next down the pecking order was mead, fermented honey, sweet and reasonably expensive. Almost everybody, almost all the time, just drank ale, which was much less expensive. Their ale was probably slightly stronger than ours, at about 8 percent ABV.

If you wanted to set yourself up as a lord, you needed to build a mead hall, even if all you ever served in it was ale. You still called it a mead hall for appearances’ sake. Your mead hall could even be quite small—some were only about 10 by 15 feet. Others were huge, a hundred yards in length. In Beowulf when Hrothgar wants to become a mighty king, he builds Heorot, the biggest mead hall that anyone has ever seen, filled with pillars and gold.

The mead hall makes you a lord because the very first duty of a lord is to provide booze to his warriors. This was the formal way in which you showed your lordship. And conversely, if you went to somebody’s mead hall and drank their mead, you were honor-bound to protect them militarily.

Alcohol was, literally, power. It was how you swore people to loyalty. A king without a mead hall would be like a banker with no money or a library with no books.

You also needed a queen, because, strange as it may seem, women were a rather important (if a trifle subjugated) part of the mead hall feast. Women—or peace-weavers as the Vikings called them—were the ones who kept the formal footing of the feast going, who lubricated the rowdy atmosphere and provided a healthy dose of womanly calm. They were in charge of the logistics of the sumbl, which was the Norse name for a drunken feast. They may even have enjoyed the beginning of the evening, the first three drinks which were to Odin (for victory), to Njord and Freya (for peace and good harvest), and then the minnis-öl, the “memory-ale” to spirits of ancestors and of dead friends.

There’s a funny kind of Viking frost-cup that archaeologists call a funnel glass. That’s because archaeologists aren’t poets. A funnel glass is about 5 inches tall and is shaped just as you might imagine it, which means that it can’t be put down on a table. It would just fall over. This is quite deliberate as the idea is to make you down your whole drink in one. This was immensely important to the Vikings as downing drinks made you a real man. This was also the purpose of the more traditional drinking horn: to test your virility by reference to your ability to swallow.

There’s a story about Thor (the god of warfare and hammers) and Loki (the god of mischief). Loki challenged Thor to drink a horn of ale. Thor, who could never resist a challenge, accepted and Loki had a horn brought to the table and told Thor that a real man could down it in one. Thor grabbed the horn, put it to his mouth, and drank, and drank, and drank, and, when he could drink no more, the horn was still almost full. Loki looked disappointed and said that a normal chap might need to do it in two. So Thor tried again, and again his godlike drinking had almost no effect. Loki murmured that a weakling could do it in three. Same thing happened. This left Thor feeling rather ashamed and effeminate, until Loki revealed that he had tricked him, and that the other end of the horn was connected to the sea. Thor had drunk so much that he had brought the whole level of the world’s oceans down, and that, according to the Vikings, was the origin of tides.

Along with the drinking competitions, Vikings did an awful lot of boasting. This was not seen as a bad thing. A Viking chap was meant to boast. He was meant to recount all of his great rapacious deeds. And then another Viking was meant to outdo him. These boasts were not quick one-liners either. They were long affairs that waxed poetic and lyrical. It was a big, formal occasion, much like a modern rap battle, or so I am informed. Moreover, your boasting was in deadly earnest. You were expected to stand by anything you said, whether it was a claim of something you had done in the past, or of something that you were merely planning on. There was no possibility of excusing yourself the next morning by saying, as we would, that that was just the drink talking.

It was a viciously violent society: a hall full of warriors, all carrying swords, being forced to drink much too quickly while engaging in ceremonial bragging and insulting. The result of all this can best be summed up in the Viking/Anglo-Saxon epic Beowulf, where the poet is trying to explain just what a wonderful man Beowulf was. He lavishes praise on him, and the highest praise of all is that Beowulf “never killed his friends when he was drunk.”

There’s a lovely mythical creature called the Heron of Oblivion (I’ve no idea why) that was said to come down and hover over the sumbl until everybody dozed off. Nobody went home. You stayed in your lord’s mead hall until you could stay awake no longer and then you lay down on a bench or a table or whatever you could find and you fell fast asleep.

SWEDEN

There was, apparently, an eighth-century Swedish king called Ingjald who invited all the neighboring kings to his coronation. When the bragarfull (the ceremonial pledging cup) came round, he swore to enlarge his kingdom by half in every direction. Everyone drank. Everyone got drunk. The Heron of Oblivion did his restful work, and when everyone else was asleep, Ingjald went outside, locked the doors and burned down his own mead hall with all the other kings in it. I’d like to say that that was a one-off, but it wasn’t. There are a fair few accounts of burning down mead halls with everyone in them. There’s even one of a queen doing it to her husband, which seems fair.

ENGLAND

Taverns sold wine. Wine, because it had to be imported, was very, very expensive. Taverns were for wealthy men who wanted to splash a bit of cash, which meant that they were almost all in London. It also meant that taverns could have a rather degenerate side. This is where you’d find prostitutes and gamblers because, by definition, if you could afford wine you could afford other sinful luxuries.

Shakespeare, I’m pretty sure, was a wine-drinker. His works have over a hundred references to wine and sack, and only sixteen to ale.

In England in the year 1200 there was no such thing as a pub. Villages simply did not have drinking establishments. This may seem strange. Imagining England without a village pub is like imagining Russia with no vodka (there was, at this time, no vodka in Russia; but we’ll come to that in another chapter).

There were no pubs, because there was no need for pubs. Everybody was drinking at work. Often it was part of the pay. A carter, for example, might expect to have 3 pints and some food thrown in with his wages. When a lord employed laborers to work his land, he had to give them some booze. Medieval Englishwomen and children also drank. Water was still pretty dangerous, and only for the very poor.

Not that people got drunk. A few pints spread out over the course of a hard day’s toil in the fields won’t do that. But it will nourish you. Ale is, after all, liquid bread. People drank in church as well. The medieval village church was not so much a place of worship as a community center (with some worship thrown in on Sundays).   Opportunities to cadge booze in church were neither few nor far between.

A husband would expect his wife to cook and clean and look after children, and brew, and spin. Spinning wool into cloth and brewing ale had the added advantage that they could make you extra money. A wife would weave the cloth to clothe her husband, and, if there was any left over, she could sell it. This was almost the only way that the average medieval single woman could get an income. And it was so common that an unmarried woman is, to this day, called a spinster. 

A woman who brewed was called a brewster. A woman who brewed for profit could also be called an alewife. Medieval ale had a very short shelf life; it would go off after two or three days. So when an alewife had brewed more than her family needed, she would put up an ale stake above her front door. This was just a horizontal stick with a sprig of bush tied to the end. She would put the barrel outside her house, and sell to passersby who would turn up with a flagon and some pennies. They could then stroll off and drink it at work, at their own home or in church.

That’s how things were all the way up to the beginning of the 14th century. Then several things happened at once. First, people stopped drinking in churches. This was not because they didn’t like drinking in church, but because the church didn’t like people drinking in it.

Once upon a time, a nobleman employed people to till his fields. But in the 14th century noblemen decided that it was simpler just to rent plots of land out to the peasants and let them farm it for themselves. This meant that any peasant who didn’t have a good alewife now had to go and buy ale, which was good news for alewives. Thirsty laborers would show up after work, they wanted ale, but they also wanted somewhere to sit down and drink it. So alewives started to let people into their kitchens. Thus the pub was born.

Finally, beer was invented. Throughout this chapter I’ve been talking about ale, which was made with barley and water. It was not a very pleasant substance. Nutritious? Yes. Alcoholic? Yes. Tasty and pure and fizzy and refreshing? No. It was a sort of sludgy porridge with bits in it. The only way to make it taste nice was to flavor it with herbs and spices—horseradish was a favorite. But you were trying to disguise the taste. Trying to make something vile into something drinkable. Then hops arrived. When you add them to ale you get beer.

Most people much preferred the taste of hoppy beer. And beer had one other massive advantage over ale: it didn’t go off. You could keep beer for a year or so and, as long as the barrel was well sealed, it would still be good. Because of this, beer could be mass-produced. In every major town, breweries were set up which could produce lots of lovely beer that could then be sold to all the local alehouses (they continued to be called alehouses, long after the awful sludgy porridge had been forgotten).

The breweries could filter the beer and make a much better product.

Let us suppose that we are travelers sometime around the end of the 15th century. To find an alehouse we’d look for an ale stake, since pub signs (and by extension pub names) don’t come in until the 1590s. We might also spot an ale bench, which, as you may have guessed, was a bench just outside the door where, in fine weather, you could sit and drink in the sunshine. It’s also quite possible that we’ll spot some people playing games (bowls was a favorite) and betting on them. The door will be open. This was a legal requirement, except in the depths of winter. The idea was that any passing authority figure should be able to see inside an alehouse and thus check that nothing naughty was going on, while also not having to sully themselves by actually going in.

One of the great advantages of visiting an alehouse was that there was usually a fire blazing away. Many medieval peasants simply couldn’t afford such a luxury in their own homes. One of the first differences we’ll notice from a modern pub is that there is no bar. Countertop bars, the sort of thing we know and love, don’t actually come in until the 1820s. This place doesn’t look like a pub. It looks like somebody’s kitchen, which is basically what it is. There’s a barrel of beer somewhere in the room. And there are a few stools and benches, perhaps a trestle table or two. But the total value of the furniture isn’t more than a few shillings. We are in somebody’s house, but it’s public.

The person whose house we’re in is almost certainly a woman.  There’s also a good chance that she’s a widow. Running an alehouse was still one of the only ways that a woman could make money, and, in the days before pensions, alehouse licenses would be granted to widows out of pity. It was that or she would have to throw herself upon the parish, which the parish found inconvenient.

Women usually went to alehouses in groups. A woman on her own might be talked about. A group of respectable matrons, though, was in the clear. People also went on dates to alehouses. If a couple were known to be courting, then going out for a drink was considered perfectly normal and respectable.

Alehouses were only for the poorest in society. Even moderately well-off people like yeoman farmers were still drinking at home. The alehouse was a place of escape. Servants came here for the same reason as lovers; it was what anthropologists call the Third Place. It wasn’t work, where you have to obey your boss, and it wasn’t home, where you have to obey your parents or your spouse. That’s also why the place is full of teenagers. Medieval England was an edenic place where there were absolutely no laws about underage drinking.

Not that people will actually get that drunk, unless it’s a Sunday. Just as we think of Friday night as the standard time for drinking, the medievals liked to get sloshed on a Sunday morning. This makes a lot of sense, if you think about it, as you get to be buzzed all day. But it does mean that there is a permanent war between the alehouse and the church for attendance on a Sunday morning. A war that the alehouse tended to win.

The standard greeting for a stranger arriving in an alehouse was “What news?” In the days before newspapers and even television, travelers were the main way to find out what was going on in the world. Who was king? Were we at war? Had we been invaded? Alehouses actually developed a rather bad reputation for spreading absolute lies. In 1619 the whole of Kent was sent into a panic by the news that the Spanish had taken Dover Castle; and, very curiously, the alehouse drinkers of Leicester heard the news of Elizabeth I’s death forty-eight hours before it happened.

AZTECS

Aztec drinking was, for most people, ferociously illegal. But if drinking was so very, very illegal, how did it have such a central place in Aztec culture? And it did. They had gods of drinking. Several of them. Mayahuel, who was the goddess of the agave plant, was said to have married Patecatl, who was the god of fermentation. Mayahuel had 400 breasts, which was probably fun for Patecatl, but was also useful because she gave birth to 400 divine little rabbits, the Centzon Totochtin. The reason that there were 400 of them is that the Aztecs counted in base twenty. Four hundred is twenty squared and so the number had much the same place in their culture that 100 (ten squared) does in ours.

So, to recap, booze is ferociously forbidden and punishable by death. Booze is ubiquitous. Booze is revered and central to the culture and religion. Booze is legal for the elderly. This combination has left historians somewhat confused, and indeed inclined toward a quick dose of teonanacatl, the Aztec hallucinogen of choice that was entirely legal. There is, though, a theory that makes sense of all this. Anthropologists who study drunkenness draw a distinction between what they call “wet cultures” and “dry cultures.” In wet cultures people are terribly relaxed about alcohol. They sip it all day and have a terribly pleasant time, and very rarely get properly, falling-over drunk. Dry cultures are the opposite. They aren’t dry in the sense of being alcohol-free; they’re called dry because people are very wary of alcohol and have strict rules about when you can’t drink it. Then, when it is permitted, they get trollied.

But on the day of a religious festival—for example, one devoted to the 400 drunken rabbits—they got absolutely hammered. They got apocalyptically and religiously drunk, and, like the Ancient Egyptians and the Ancient Chinese before them, they used alcohol to give them an experience of the divine. And then for the rest of the month they didn’t drink at all.

It was the relaxation of the rules and the disorientation of society produced by Christianity that pushed the conquered to perpetual pulque, the fermented agave drink.

The people of Zumbagua in Ecuador drink in order to communicate with ancestral spirits, and, indeed, believe that when you drink so much that you throw up, the vomit becomes food for the ghosts of the dead. To this day there is a phrase in Mexico: “As drunk as 400 rabbits.”

DISTILLING

Ancient Greeks definitely knew about distilling over 2,000 years ago, but there’s no evidence that they distilled alcohol. Instead, they wasted their invention on producing drinkable water.

You start to get, in the 15th century, mentions of distilled alcohol being used as a medicine in very small doses.

James IV of Scotland bought several barrels of whisky, or aqua vitae as it was called, from a monastery in 1495.  A hundred years later, there was one bar in England—just outside London—that served aqua vitae. It was still a novelty drink that most people would never even have heard of. And then, in the second half of the 17th century, western Europe went crazy for spirits. The French suddenly got into brandy.

Come the Restoration, the English aristocracy stampeded back from France with a newfound taste for all sorts of fashionable new drinks: champagne, vermouth, and brandy. These became the drinks of the nobility.

Gin became popular in England for four reasons: monarchy, soldiers, religion and an end to world hunger. Some historians would add “hatred of the French,” which makes five. First, monarchy. King William III liked gin because he was Dutch and all Dutch people liked gin. Second, soldiers. Dutch soldiers liked gin for two reasons: because they were Dutch, and because gin infused them with a peculiar form of bravery, which to this day we refer to as Dutch courage. Third, religion. During this period European countries were constantly going to war with each other, usually on a Protestant vs. Catholic basis. England and Holland were both Protestant, so English soldiers fought alongside the Dutch, and drank alongside the Dutch, and came home with a hangover and a taste for gin. Gin was thus soldierly and Protestant. Fourth, an end to world hunger. From time immemorial, and probably before, every country in the world had had a problem with Bad Harvests. In a normal year farmers produced just enough grain to feed everybody. They didn’t produce any more than that, because they wouldn’t be able to sell it. Every so often, though, you got a year with a Bad Harvest. When this happened there wasn’t enough grain to go around, and farmers were not in the slightest bit upset. A funny aspect of the economics of farming is that a Bad Harvest means less grain; less grain means higher grain prices; and these higher prices meant that farmers made just as much money from a Bad Harvest as from a good one, for less work.
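That bit of harvest economics is just price inelasticity at work: when demand for grain is nearly fixed, the price rises roughly in inverse proportion to the size of the harvest, so the farmer’s revenue barely moves. A minimal sketch of the arithmetic, with purely illustrative numbers (nothing here is historical data):

```python
def revenue(harvest, normal_harvest=100, normal_price=10):
    """Farm revenue when demand is perfectly inelastic:
    price scales inversely with the quantity harvested."""
    price = normal_price * normal_harvest / harvest
    return harvest * price

good_year = revenue(100)  # 100 units sold at the normal price of 10
bad_year = revenue(75)    # 25% less grain, but the price rises to ~13.33

# Under perfectly inelastic demand, revenue is identical either way.
assert good_year == bad_year == 1000
```

Real demand was not perfectly inelastic, of course, but for a staple food it was close enough that a Bad Harvest cost the farmer little, which is exactly why William III wanted a gin market to absorb surplus grain in the good years.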

William III thought he had this problem solved. Gin is made out of grain, and the quality of the grain doesn’t particularly matter. Once the stuff has been fermented and distilled, you can’t taste the difference. Therefore, if he could make gin popular in England he would produce a great big market for excess grain during normal years; and that meant that when a Bad Harvest came round there would be an excess to cover it. It might not be the highest-quality excess, but it would be edible. Thus he could end starvation forever.

But to do so he’d have to make gin really, really popular. To do that, you’d have to make gin more readily available than beer. You’d have to make it completely tax-free and unregulated and let anybody who wants to start distilling distill. Also, you’d have to ban the import of French brandy.

Where did a poor Londoner actually go to get gin? And when? And from whom? The answer is: absolutely everywhere. To set up shop you went to a distiller and got a gallon or so, distilled it a second time to make it even stronger, and added flavorings: juniper, turpentine, sulfuric acid, whatever you liked. Many people drank far too much and died.

Gin arrived in England in the 1690s, and by the 1720s the streets of London were full of unconscious drunks who had sold their clothes for gin. The authorities tried to cut consumption by taxing it and requiring a license, which people simply ignored.

AUSTRALIA

Lord Sydney had a utopian idea of what Australia would be: hard work, fresh air, nature, and no alcohol or money. But the sailors refused to sail without booze, and home brewing began on day one of the convict ships’ arrival; the drink that mattered, though, was rum. The sailors sold it to the convicts at a markup of 1,200 percent.

The economy was a barter one, with work exchanged for food or other goods. Most of the population were convicts doing forced labor; to get them to do a speck more than they were strictly required to, you had to offer them something, and that something was rum. Rum was the one and only lever of power, and whoever controlled it controlled the colony.

The British government was not at all OK with this, and sent the famous Captain Bligh (of Mutiny on the Bounty fame) out as the next Governor, with orders to dry Australia out and break the militia that controlled the rum trade. He began by confiscating the stills of John Macarthur, the richest man in the colony, and taking him to court as well. When Macarthur showed up, the jury cheered him, as did the hundreds of soldiers gathered outside the courthouse. Bligh was absolutely furious and ordered Major Johnston to get his men under control, but Johnston replied that he was sorry, he’d been so drunk the night before that he’d crashed his carriage, and he couldn’t intervene. Later that day, Johnston arrested Bligh and took control of the colony. Effigies of Bligh were burned in the street, and the rebels celebrated with roasted sheep and rum.

So the government sent a new Governor called Macquarie, who took control by realizing that everyone was a crook and out-crooking them all. He began by trading exclusive rights to import rum for three years in exchange for the building of a new hospital, and so began Australia’s health care system.

AMERICA

By 1799 George Washington owned the largest distillery in the country, producing 11,000 gallons of whiskey a year. He had won his first election after handing out free booze to voters, and his military success owed something to doubling his men’s rum rations.

Although Hollywood usually shows just one giant saloon in the center of town, which forces the hero and villain to confront each other, in real life a town had many saloons, so many that the two might never bump into each other. The doors were solid, not swinging, and instead of one large room, bars were narrow, with the bar usually on the left and a large mirror behind it that let those at the bar see anyone approaching from behind. Although there were bottles of wine and crème de menthe, no one ordered them. Everyone drank whiskey and beer, though mainly whiskey. Another odd thing is that no one ever asked how much drinks cost or got change, because everyone knew the charge: one bit (12.5 cents) at the poor saloons, two bits at a fancier one with floor shows and a chandelier.

The customers were mainly white men. A black man might be tolerated, Native Americans were banned by law, and most unwelcome of all were the Chinese. Respectable women never went into a saloon, and the women who worked there mostly weren’t prostitutes: why do that when you could earn $10 a week chatting with lonely men? At the back, the card game would be faro, not poker, a very simple game of pure chance and easy to cheat at.

Prohibition was meant to get rid of saloons, which were perceived, especially in the Midwest, as the root of many evils. Husbands drank their salaries, beat their wives, and died young. Saloons were places decent women didn’t go, though the gals who were there often weren’t prostitutes but were paid in whiskey (actually cold tea) to talk to men. Along the bottom of the bar ran a brass rail, with a spittoon for every four people. Horses were parked outside in huge piles of poop, since naturally, while their owners drank, they pooped. In a one-bit saloon a drink cost 12.5 cents, so you plopped down a quarter and had two drinks. Or, most often, you bought someone else a drink, and the favor would be returned later by a newcomer.

Prohibition succeeded in getting rid of saloons. That was its purpose, not stopping all alcohol, and the Germans and other ethnic groups that made beer and wine weren’t worried about it. But then the Volstead Act defined alcoholic beverages as anything over half a percent alcohol. So for 13 years the U.S. lost the skills to make wine and beer, or even whiskey, well, and it took 50 years to recover. Speakeasies were quite unlike saloons: they ranged from someone’s living room where pasta might also be served to the glamorous movie versions of New York City. And, unlike saloons, women went too.

RUSSIA

Russian traditions were good at getting everyone to drink: a toast was made, and all were expected to participate. Ivan the Terrible began this in the 1500s, using drunkenness as a form of political control. Scribes attended his feasts, wrote down what everyone said while drunk, and read it to him in the morning, with punishments handed out. He also started state-run drinking houses to collect as much tax money as possible. While most countries try to limit the crimes, riots, broken homes and ruined health that drunkenness brings, Russia was too keen on the revenue to discourage drinking in any way.

In 1914, Tsar Nicholas II outlawed vodka. In 1918 he and his family were executed. These two facts are not unrelated. The ban was poorly timed too: WWI was beginning, and a quarter of all state revenue came from taxes on alcohol. And being sober, the population could see what their government was doing to them. Today in Russia nearly a quarter of all deaths are related to alcohol.

Stalin ruled with terror and drunkenness. He’d invite his politburo to dinner and make them drink and drink and drink, which they couldn’t refuse to do. At one dinner there were 22 toasts before any food arrived. He would tap out his pipe on Khrushchev’s bald head and order him to do a Cossack dance. He loved to push one of the commissars into a pond. But Stalin was mainly drinking water himself. He did this to humiliate them, to set their tongues against each other, and to make it hard to plot against him. Even Peter the Great was known for forcing drinks on others: if he caught someone not drinking, they had to down 1.5 liters of wine in one go. The head of Peter’s secret police had a tame bear who would offer guests a glass of vodka and attack if they refused.

Posted in Advice, Agriculture, Human Nature | Comments Off on The History of Drunkenness

Pentagon report: collapse within 20 years from climate change

Preface. The report that the article by Ahmed below is based on is: Brosig, M., et al. 2019. Implications of climate change for the U.S. Army. United States Army War College.

It was written in 2019, before COVID-19, and so was quite prescient: the two most prominent risks it identifies are a collapse of the power grid and the danger of disease epidemics.

It is basically a long argument for increasing the military so it can help cope with epidemics, water and food shortages, electric grid outages, and flooding, and protect the (oil and gas) resources in the Arctic.

Since I see energy decline as a far more immediate threat than climate change, and the military knows this, it is odd that so little is written about energy in this report. But then I looked at the pages about the Arctic, and though the word oil doesn’t appear, you can see that the military is very aware of the resources (oil) there and the chance of war with Russia. Therefore they propose that the military patrol this vast area with ships, aircraft, and new vehicles that can traverse the bogs and marshes of melted permafrost. They propose sending more soldiers to the Arctic for training, launching satellites for navigation, developing new ways of fighting, enhancing batteries and other equipment to function in the cold Arctic environment, and more.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Ahmed, N. 2019. U.S. Military Could Collapse Within 20 Years Due to Climate Change, Report Commissioned By Pentagon Says. vice.com

According to a new U.S. Army report, Americans could face a horrifically grim future from climate change involving blackouts, disease, thirst, starvation and war. The study found that the US military itself might also collapse. This could all happen over the next two decades.

The senior US government officials who wrote the report are from several key agencies including the Army, Defense Intelligence Agency, and NASA. The study called on the Pentagon to urgently prepare for the possibility that domestic power, water, and food systems might collapse due to the impacts of climate change as we near mid-century.

The report was commissioned by General Mark Milley, Trump’s new chairman of the Joint Chiefs of Staff, making him the highest-ranking military officer in the country (the report also puts him at odds with Trump, who does not take climate change seriously).

The report, titled Implications of Climate Change for the U.S. Army, was launched by the U.S. Army War College in partnership with NASA in May at the Wilson Center in Washington DC. The report was commissioned by Gen. Milley during his previous role as the Army’s Chief of Staff. It was made publicly available in August via the Center for Climate and Security, but didn’t get a lot of attention at the time.

The two most prominent scenarios in the report focus on the risk of a collapse of the power grid within “the next 20 years,” and the danger of disease epidemics. Both could be triggered by climate change in the near-term, it notes.

The report also warns that the US military should prepare for new foreign interventions in Syria-style conflicts, triggered due to climate-related impacts. Bangladesh in particular is highlighted as the most vulnerable country to climate collapse in the world. “The permanent displacement of a large portion of the population of Bangladesh would be a regional catastrophe with the potential to increase global instability. This is a potential result of climate change complications in just one country. Globally, over 600 million people live at sea level.”

Without urgent reforms, the report warns that the US military itself could end up effectively collapsing as it tries to respond to climate collapse. It could lose capacity to contain threats in the US and could wilt into “mission failure” abroad due to inadequate water supplies.

The report paints a frightening portrait of a country falling apart over the next 20 years due to the impacts of climate change on “natural systems such as oceans, lakes, rivers, ground water, reefs, and forests.”

Current infrastructure in the US, the report says, is woefully underprepared: “Most of the critical infrastructures identified by the Department of Homeland Security are not built to withstand these altered conditions.”

Some 80 percent of US agricultural exports and 78 percent of imports are water-borne. This means that episodes of flooding due to climate change could leave lasting damage to shipping infrastructure, posing “a major threat to US lives and communities, the US economy and global food security,” the report notes.

At particular risk is the US national power grid, which could shut down due to “the stressors of a changing climate,” especially changing rainfall levels:

“The power grid that serves the United States is aging and continues to operate without a coordinated and significant infrastructure investment. Vulnerabilities exist to electricity-generating power plants, electric transmission infrastructure and distribution system components,” it states.

As a result, the “increased energy requirements” triggered by new weather patterns like extended periods of heat, drought, and cold could eventually overwhelm “an already fragile system.”

The report’s grim prediction has already started playing out, with utility PG&E cutting power to more than a million people across California to avoid power lines sparking another catastrophic wildfire. While climate change is intensifying the dry season and increasing fire risks, PG&E has come under fire for failing to fix the state’s ailing power grid.

The US Army report shows that California’s power outage could be a taste of things to come, laying out a truly dystopian scenario of what would happen if the national power grid was brought down by climate change. One particularly harrowing paragraph lists off the consequences bluntly:

“If the power grid infrastructure were to collapse, the United States would experience significant:

  • Loss of perishable foods and medications
  • Loss of water and wastewater distribution systems
  • Loss of heating/air conditioning and electrical lighting systems
  • Loss of computer, telephone, and communications systems (including airline flights, satellite networks and GPS services)
  • Loss of public transportation systems
  • Loss of fuel distribution systems and fuel pipelines
  • Loss of all electrical systems that do not have back-up power”

Also at “high risk of temporary or permanent closure due to climate threats” are US nuclear power facilities.

There are currently 99 nuclear reactors operating in the US, supplying nearly 20 percent of the country’s utility-scale energy. But the majority of these, some 60 percent, are located in vulnerable regions which face “major risks” including sea level rise, severe storms, and water shortages.

“Climate change is introducing an increased risk of infectious disease to the US population. It is increasingly not a matter of ‘if’ but of when there will be a large outbreak.”

Water currently accounts for 30-40 percent of the costs required to sustain a US military force operating abroad, according to the new Army report. A huge infrastructure is needed to transport bottled water to Army units. So the report recommends major new investments in technology to collect water from the atmosphere locally, without which US military operations abroad could become impossible. The biggest obstacle is that this lies well outside the Pentagon’s current funding priorities.

Bizarrely for a report styling itself around the promotion of environmental stewardship in the Army, the report identifies the Arctic as a critical strategic location for future US military involvement: to maximize fossil fuel consumption.

Noting that the Arctic is believed to hold about a quarter of the world’s undiscovered hydrocarbon reserves, the authors estimate that some 20 percent of these reserves could be within US territory, noting a “greater potential for conflict” over these resources, particularly with Russia.

The melting of Arctic sea ice is depicted as a foregone conclusion over the next few decades, implying that major new economic opportunities will open up to exploit the region’s oil and gas resources as well as to establish new shipping routes: “The US military must immediately begin expanding its capability to operate in the Arctic to defend economic interests and to partner with allies across the region.”

Senior US defense officials in Washington clearly anticipate a prolonged role for the US military, both abroad and in the homeland, as climate change wreaks havoc on critical food, water and power systems. Apart from causing fundamental damage to our already strained democratic systems, the bigger problem is that the US military is itself a foremost driver of climate change, being the world’s single biggest institutional consumer of fossil fuels.

The prospect of an ever expanding permanent role for the Army on US soil to address growing climate change impacts is a surprisingly extreme scenario which goes against the grain of the traditional separation of the US military from domestic affairs.

In putting this forward, the report inadvertently illustrates what happens when climate is seen through a narrow ‘national security’ lens. Instead of encouraging governments to address root causes through “unprecedented changes in all aspects of society” (in the words of the UN’s IPCC report this time last year), the Army report demands more money and power for military agencies while allowing the causes of climate crisis to accelerate. It’s perhaps no surprise that such dire scenarios are predicted, when the solutions that might avert those scenarios aren’t seriously explored.

Rather than waiting for the US military to step in after climate collapse—at which point the military itself could be at risk of collapsing—we would be better off dealing with the root cause of the issue skirted over by this report: America’s chronic dependence on the oil and gas driving the destabilization of the planet’s ecosystems.

Posted in Arctic, Climate Change, Military, Over Oil | 2 Comments

A Relentless Growth of Disparity in Wealth

Preface. I write a lot about why electric vehicles won’t be widely adopted. One reason is that the bottom 95% can’t afford them. This tremendous unfairness will likely make the decline after peak oil more violent and chaotic than it would otherwise have been.


***

Huddleston, C. 2019. Survey: 69% of Americans Have Less Than $1,000 in Savings

  • Almost half of respondents — 45% — said they have $0 in a savings account. Another 24% said they have less than $1,000 in savings.
  • The top reason respondents gave for not saving more was living paycheck to paycheck. Nearly 33% said this obstacle was keeping them from saving, and about 20% said a high cost of living prevented them from saving more.
  • The No. 1 thing respondents said they needed in order to save more money was a higher salary. About 38% said having a bigger paycheck would help them save more, while 18% said lowering their debt would make it easier to set aside cash.
  • The most common place where those with savings put their cash is in a savings account. Although 33% of respondents said they take advantage of a savings account to store their cash, 29% said they don’t have any savings.

Atkins, D. 2014. How the Rich Stole Our Money – and Made Us Think They Were Doing Us a Favor. Salon.

You’ve doubtless seen the charts and figures showing the decline of the American middle class and the explosion of wealth for the super-rich. Wages have stagnated over the last 40 years even as productivity has increased — Americans are working harder but getting paid less. Unemployment remains stubbornly high even though corporate profits and the stock market are near record highs. Passive assets in the form of stocks and real estate are doing very well. Wages for working people are not. Unfortunately for the middle class, the top 1 percent of incomes own almost 50 percent of asset wealth, and the top 10 percent own over 85 percent of it. When assets do well but wages don’t, the middle class suffers. This ominous trend is particularly prominent in the United States. That shouldn’t surprise us: study after study shows that American policymakers operate almost purely on behalf of wealthy interests. Recent polling also proves that the American rich want policies that encourage the growth of asset values while lowering their own tax rates, and are especially keen on outcomes that favor themselves at the expense of the poor and middle class.

So why isn’t the 99 percent in open revolt?

The Super Rich Are Richer Than We Thought, Hiding Huge Sums, New Reports Find

4/12/2014. Professors Emmanuel Saez (UC Berkeley) and Gabriel Zucman (LSE and UC Berkeley)

The Shocking Rise of Wealth Inequality: Is it Worse Than We Thought? 

April 2, 2014. Jordan Weissmann.  Slate.com

A Relentless Widening of Disparity in Wealth

Eduardo Porter. 11 March 2014. New York Times.


What if inequality were to continue growing years or decades into the future? Say the richest 1% captured a quarter of the nation’s income, up from about a fifth today. What about half? Thomas Piketty of the Paris School of Economics believes this future is not just possible. It is likely.

In “Capital in the Twenty-First Century,”  Professor Piketty provides a fresh and sweeping analysis of the world’s economic history that puts into question many of our core beliefs about the organization of market economies.

His most startling news is that the belief that inequality will eventually stabilize and subside on its own, a long-held tenet of free market capitalism, is wrong. Rather, the economic forces concentrating more and more wealth into the hands of the fortunate few are almost sure to prevail for a very long time.

History does not offer much hope that political action will turn the tide: “Universal suffrage and democratic institutions have not been enough to make the system react.”

Professor Piketty’s description of inexorably rising inequality probably fits many Americans’ intuitive understanding of how the world works today. But it cuts hard against the grain of the economic orthodoxy that prevailed throughout the second half of the 20th century and still holds sway today, shaped during the Cold War by the economist Simon Kuznets. After assembling tax return data, Kuznets estimated that between 1913 and 1948 the slice of the nation’s income absorbed by the richest 10% of Americans declined from 50% to 33%.

Mr. Kuznets’s conclusion provided a huge moral lift to capitalism as the United States faced off with the Soviet Union. It suggested that the market economy could distribute its fruits equitably, without any heavy-handed intervention of the state.

This isn’t true anymore: Wages have been depressed for years. Profits account for the largest share of national income since the 1930s. The richest 10% of Americans take a larger slice of the economic pie than they did in 1913, at the peak of the Gilded Age.

Like Kuznets’s analysis, Mr. Piketty’s is based on data. He just has much more: centuries’ worth, from dozens of countries.

Kuznets’s misleading curve is easy to understand in this light. He used data from one exceptional period in history, when a depression, two world wars and high inflation destroyed a large chunk of the world’s capital stock. Combined with fast growth after World War II and high taxes on the rich, this flattened the distribution of income until the 1970s.

But this exceptional period long ago ran its course.

Americans will argue that this description does not fit the United States. Wealth here is largely earned, not inherited, we say. The American rich are “creators,” like Bill Gates of Microsoft or Lloyd Blankfein of Goldman Sachs, rewarded for their economic contributions to society.

Mr. Piketty doubts that the enormous remuneration of top executives and financiers in the United States — enhanced by the decline of top income tax rates since the 1980s — really reflects their contributions. What’s more, he points out, inherited inequality has been lower in the United States mainly because its population has grown so fast — from three million at the time of independence to 300 million today — driving a vast economic expansion.

But this population boom will not repeat itself. The share of national income absorbed by corporate profits, a major component of capital’s share, is already rising sharply.

If anything, this means future inequality in the United States will be driven by two forces. A growing share of national income will go to the owners of capital. Of the remaining labor income, a growing share will also go to the top executives and highly compensated stars at the pinnacle of the earnings scale.

Is there a politically feasible antidote? Professor Piketty notes that the standard recipe — education for all — is no match against the powerful forces driving inherited wealth ever higher.

Taxes are, of course, the most feasible counterweight. Progressive wealth taxes could reduce the after-tax return to capital so that it equaled the rate of economic growth.

But politically, “the fiscal institutions to redistribute incomes in a balanced and equitable way have been badly damaged,” Professor Piketty told me.

The holders of wealth, hardly a powerless bunch, will oppose any such move, even if that’s what is needed to preserve capitalism against the populist impulses of those left behind.

Professor Piketty offers early-20th-century France as an example. “France was a democracy and yet the system did not respond to an incredible concentration of wealth and an incredible level of inequality,” he said. “The elites just refused to see it. They kept claiming that the free market was going to solve everything.”

It didn’t.

Posted in Distribution of Wealth | 3 Comments

Updates to Life After Fossil Fuels: A Reality Check on Alternative Energy


Last updated 26 May 2021. Other posts related to this book here.

Alice Friedemann   www.energyskeptic.com  author of “Life After Fossil Fuels: A Reality Check on Alternative Energy” (2021) best price here; “When Trucks Stop Running: Energy and the Future of Transportation” (2015); Barriers to Making Algal Biofuels; and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity, XX2 report

***

Chapter 2 We Are Running Out of Time

Norway-based energy consultancy Rystad Energy has warned that Big Oil could see its proven reserves run out in less than 15 years unless it quickly makes more commercial discoveries, because produced volumes are not being fully replaced with new discoveries (Kimani A (2021) Big Oil Is In Desperate Need Of New Discoveries. Oilprice.com).

“Global oil and gas discoveries have been on a constant shrinking trend prior to and over the last decade, with oil discoveries reaching a low of 3.8 BBO (billion barrels of oil) in 2016; in 2020 it was 4.3 BBO. During the decade, 89 BBO were discovered while 289 BBO of reserves were produced, a ratio of over 3 to 1, which is unsustainable.” Rafael Sandrea, Energy Policy Research Foundation.

However alarming Figure 3 in Chapter 2 (IEA 2018) may be, reality is even more worrisome, because that chart doesn’t depict Business As Usual. Rather, it is an optimistic forecast, the IEA Sustainable Development Scenario, shown in Figure 1 as requiring far less oil supply through 2040 than the other projections above it. The Sustainable Development Scenario assumes that by 2030: global primary energy use declines 7% from 2019 (compared to a 20% increase over the prior 11 years); solar generation grows by a factor of 5.6 and wind generation by a factor of 2.4; nuclear generation increases by 23% (with no decommissioning); coal use for power and heat declines by 51%; and electric vehicle sales reach 40%, up from today’s 4.5% (Cembalist 2021).

Figure 1. Oil “future production wedge”: demand vs existing field supply Million barrels per day
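Those multipliers compress a lot. As a rough sketch of the compound annual growth they imply over the 11 years from 2019 to 2030 (my own back-of-envelope arithmetic, not figures from the IEA or Cembalist):

```python
# Back-of-envelope: annual growth rate implied by an 11-year
# multiplier (2019 -> 2030): CAGR = multiplier ** (1/years) - 1
def implied_cagr(multiplier: float, years: int = 11) -> float:
    return multiplier ** (1 / years) - 1

solar_cagr = implied_cagr(5.6)  # ~17% growth per year, sustained for a decade
wind_cagr = implied_cagr(2.4)   # ~8% per year
print(f"solar: {solar_cagr:.1%}, wind: {wind_cagr:.1%}")
```

Sustaining roughly 17% annual growth in solar generation for eleven consecutive years gives a feel for how demanding the scenario’s assumptions are.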

Chapter 6  What Fuels Could Replace Diesel?

Non-renewable, non-commercial, exploding hydrogen. Most updates are in the post “Hydrogen: The dumbest & most impossible renewable” and in Energy/Hydrogen.

Chapter 7 Why Not Electrify Commercial Transportation with Batteries?

The U.S. would have to double today’s electric grid if 66% of all cars are EVs by 2050 (Groom 2021, NREL 2021). Yet the electric grid is falling apart and will be increasingly affected by climate change, and since wind and solar depend on fossil fuels for every step of their life cycle, their construction will be constrained by the energy shortages that follow the 2018 peak in oil production (chapter 2 in Life After Fossil Fuels).

Energyskeptic battery posts:

Chapter 9 Manufacturing Uses Over Half of All Fossil Energy

Energyskeptic manufacturing posts:

Geothermal power: Can Geothermal power replace declining fossil fuels?

Chapter 10 What Alternatives Can Replace Fossil-Fueled Electricity Generation?

Fusion. Updates are in Why fusion power is forever away and Energy/Fusion.

Nuclear Power. Updates are in Nuclear Power problems, Nuclear waste, and other Nuclear Power posts.

Chapter 16 The Ground is Disappearing Beneath Our Feet

More than one-third of the Corn Belt in the Midwest has completely lost its carbon-rich topsoil, which is critical for plant growth because of its water and nutrient retention properties. Thaler et al (2021) estimate the loss at about 100 million acres, or 156,251 square miles — the size of Illinois, Iowa, and Wisconsin combined. Erosion-driven degradation of soil quality reduces crop yields; this research estimates it has cut corn and soybean yields by about 6%, almost $3 billion in annual economic losses for farmers across the Midwest.

Chapter 19 Grow More Biomass: Dwindling Groundwater

Billions more people could have difficulty accessing water if the world opts for a massive expansion in growing energy crops to fight climate change. The idea of growing crops and trees to absorb CO2 and capturing the carbon released when they are burned for energy is a central plank of most of the Intergovernmental Panel on Climate Change’s scenarios for the negative emissions needed to avoid the catastrophic impacts of more than 1.5°C of global warming.

But the technology, known as bioenergy with carbon capture and storage (BECCS), could prove a cure worse than the disease, at least when it comes to water stress. The water needed to irrigate enough energy crops to stay under the 1.5°C limit would leave 4.58 billion people experiencing high water stress by 2100 – up from 2.28 billion today – especially in South America and southern Africa (Vaughan 2021).

Chapter 21 Grow More Biomass: Pesticides

I’m adding updates to energyskeptic.com in the post below as well as others in category decline/pollution/pesticides here.

Chemical industrial farming is unsustainable. Why poison ourselves when pesticides don’t save more of our crops than in the past?

Chapter 26 Fill ‘er up with seaweed (see energyskeptic post here).

Bever (2021) Fighting climate change by farming kelp. NPR. An absurd project to cash in on carbon sequestration funds: grow kelp on buoys towed out to sea until the kelp is so heavy the buoy sinks and its CO2 is sequestered on the ocean floor. What could go wrong: whales entangled, ship propellers snarled, beaches fouled? And the price and energy to do this? And why? As Life After Fossil Fuels explains, peak oil occurred in 2018, and the roughly 4% a year decline that follows, a rate that will only increase, dwarfs all sequestration and renewable contraption dreams and schemes.

Chapter 27 “The Problems with Cellulosic Ethanol Could Drive You to Drink”

In this chapter I discussed why attempts to use termites to make ethanol haven’t worked out: “…Scientists have been trying for many years to replicate a termite’s ability to break down plants. Termites digest wood by outsourcing the work to the protists in their gut. Protists, in turn, outsource the work to many bacteria that use enzymes to break wood down further. Just like at a factory, each microbe performs one task, and excretes a different substance than it consumed. In a termite gut factory, one microbe’s poop is ambrosia for another. This intricate chain reaction has proven difficult to synthesize. Too much of anything along the chain of reactions can kill the process. For example, in ethanol production, when yeast has raised the concentration of excreted ethanol to 12 to 18%, the yeast dies. So far scientists haven’t been able to get termite or ruminant gut organisms to expand from their tiny world into the expansive gut of a 2,000-gallon stainless-steel tank.”

This paper discusses the bacteria of shipworms, which have been destroying wooden ships and docks for thousands of years. There’s hope their enzymes can be used to break down wood to make biofuels, but they sound a lot like underwater termites to me. Shipworms are long, thin mollusks famed and feared for their ability to eat wood. But they can’t do it alone: they rely on bacterial partners that reside not in the gut but inside the cells of their gills. Perhaps their enzymes can be used to break down lignocellulose into sugars, and then into ethanol. Though that would be pointless: trucks, locomotives, and ships don’t run on ethanol or diesohol (Altamia 2020).

Chapter 29 Can We Eat Enough French Fries?

In this chapter I reported that a sewer in London was clogged with a record-breaking fatberg of 140 tons. Breaking news: that record has been broken by a 330-ton London fatberg (Picheta 2021). So that’s good news: more fat to propel our four-ton autos. Or maybe not; there’s a new competitor: insulation for homes made of cooking oil, wool, and sulfur (Najmah 2021).

Chapter 30 Combustion: Burn Baby Burn

The Ryegate Power Station’s biomass plant in Vermont may shut down sooner than expected: the contract that expires in 2022 is being renewed for only 2 years rather than the expected 10, due to the much higher cost of its electricity, which Vermonters subsidize with $5 million a year. It’s pricey because the plant is only 23% efficient — so for every four trees burned, only one tree is converted to electricity. Biomass plants like Ryegate have been closing throughout the region, with plants in New Hampshire and Maine not being relicensed (Gockee 2021).
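The four-to-one figure is just the reciprocal of the plant’s efficiency; a quick sanity check (my own arithmetic):

```python
# At 23% conversion efficiency, the wood burned per unit of
# electricity delivered is the reciprocal of the efficiency.
efficiency = 0.23
trees_per_tree_of_electricity = 1 / efficiency
print(round(trees_per_tree_of_electricity, 1))  # ~4.3 trees burned per tree-equivalent delivered
```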

Chapter 33 Conclusion: Do You Want to Eat, Drink, or Drive?

I wrote: “Declining oil means you can stop worrying about robots taking over. What energy could they be built with and run on after fossils? Not that a robot overthrow was ever an issue. The human cortex is 600 billion times more complicated than any artificial network. The code to simulate the human brain would require hundreds of trillions of lines of code inevitably riddled with trillions of errors.

Nor do you need to fear artificial intelligence (AI), which many otherwise intelligent people think is an existential threat. It isn’t. Nail (2021) describes how AI treats the brain like a computer with a very narrow range of tasks in a closed system where all possibilities are known, and breaks down when confronted with novel situations. But brains are nothing like computers, whose fixed logic gates are binary, 0 or 1. Brain neurons are analog, changing their firing thresholds, with chemicals that further alter activity, efficiency, and connectivity. And then there’s the role of dreaming, and much more that makes our brains neuroplastic in ways a computer AI never will be; see the article for details.”

The European Union has initiated an ambitious plan called Farm to Fork (EU 2021) that hopes to cut pesticide and excess nutrient use by 50% and to convert 25% of farms to organic agriculture by 2030 (Rosmino 2021).

Do you want to eat or drive? Many energy companies plan to increase their biofuel capacity by 2030, mainly with corn and soybean oil. This accelerating demand for renewable biodiesel fuels is directly driving price inflation for vegetable oils, including palm, canola, and soybean oil, doubling corn futures and tripling lumber costs. Food costs have been pushed to their highest in seven years (Kimani 2021).

Sri Lanka will be the first country in the world to eliminate the use of chemical fertilizers and pesticides (2021).

 

Book Reviews:

Ennos R (2021) The Age of Wood: Our Most Useful Material and the Construction of Civilization.

References

Altamia MA et al (2020) Teredinibacter waterburyi sp. nov., a marine, cellulolytic endosymbiotic bacterium isolated from the gills of the wood-boring mollusc Bankia setacea…. International Journal of Systematic and Evolutionary Microbiology.

Cembalist M (2021) 2021 Annual Energy Paper. JP Morgan asset & wealth management.

EU (2021) Farm to Fork Strategy. European Commission.

Gockee A (2021) Is time ticking on the Ryegate Power Station biomass plant? vtdigger.org

Groom N et al (2021) EV rollout will require huge investments in strained U.S. power grids. Reuters.

IEA (2018) World Energy Outlook 2018, figures 1.19 and 3.13. International Energy Agency.

Kimani A (2021) Global Food Prices Soaring As Demand For Biofuels Continues To Climb. oilprice.com

Nail T (2021) Artificial intelligence research may have hit a dead end. “Misfired” neurons might be a brain feature, not a bug — and that’s something AI research can’t take into account. Salon.

Najmah IB et al (2021) Insulating Composites Made from Sulfur, Canola Oil, and Wool. ChemSusChem, Wiley.

NREL (2021) Electrification Futures Study. National Renewable Energy Laboratory.

Picheta R (2021) A 330-ton fatberg is clogging an English city’s sewer, and it won’t move for weeks. CNN.

Rosmino C (2021) Meet the EU farmers using fewer pesticides to make agriculture greener. Euronews.com.

Thaler EA et al (2021) The extent of soil loss across the US Corn Belt. PNAS.

Vaughan A (2021) Carbon-negative crops may mean water shortages for 4.5 billion people. NewScientist. Scientific article: Nature Communications, DOI: 10.1038/s41467-021-21640-3


Why fusion power is forever away

Preface. When my husband Jeffery Kahn was a science writer at Lawrence Berkeley National Laboratory, we became friends with several astrophysicists who used to joke about how fusion was 30 years away and always would be.

I would argue now that it will never be possible, because like all contraptions that generate electricity, it is reliant on fossil fuels for every step of its life cycle — oil for transportation (mining, parts, workers) and coal to make cement, steel, ceramics, microchips, and so on.

Conventional oil peaked in 2008, and conventional + deep sea + shale “fracked” oil peaked in 2018. From now on oil will be declining rapidly (citations in Chapter 2 of Life After Fossil Fuels).  If the Department of Energy’s 1980 rationing plan is adopted, agriculture will get whatever oil it needs to plant, harvest, and distribute food, then other essential agencies get access, and whatever remains is rationed to everyone else.  There will soon be no surplus energy to throw at non-renewable non-solutions.

“When Trucks Stop Running” explains why civilization ends within a week if trucks stop running, and why trucks and other heavy-duty transportation can’t be electrified.   “Life After Fossil Fuels” explains why manufacturing can’t run on electricity either. Fusion is not renewable if it can’t make more fusion plants, and its energy return on investment is deeply negative.

After the overview below, there are over half a dozen more articles about fusion. Many issues with fusion are not included in this post; see the others in the Energy/Fusion category here.

Alice Friedemann   www.energyskeptic.com  author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, 2021, Springer; “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Fusion is not likely to work out, yet it is the only possible energy source that could replace fossil fuels (No single or combination of alternative energy resources can replace fossil fuels).

Ugo Bardi (2014), in his book “Extracted”, points out that even the minerals needed for nuclear fusion are finite, and that the “infinitely abundant energy” thought possible at the beginning of the atomic age isn’t possible. Here’s why:

“In practice, past attempts to obtain controlled nuclear fusion as a source of energy had hinged on the possibility of fusing a heavier isotope of hydrogen, deuterium. But not even the controlled deuterium-deuterium reaction is considered feasible, and the current effort focuses on the reaction of a still heavier hydrogen isotope, tritium, with deuterium. Tritium is not a mineral resource, as it is so unstable that it doesn’t exist on Earth. But it can be created by bombarding a lithium isotope, Li-6, with neutrons that in turn can be created by the deuterium-tritium fusion reaction. (In this sense a fusion reactor is another kind of “breeder” reactor, as it produces its own fuel.) However, since the mineral resources of lithium are limited, and since the Li-6 isotope forms only 7.5 percent of the total, the problem of mineral depletion exists.”

The immense gravity of the sun creates fusion by pushing atoms together.  We can’t do that on earth, where the two choices (and the main projects pursuing them) are:

1) ITER: use magnetic fields to contain plasma until atoms collide and fuse. This has been compared to holding jello together with rubber bands.

But there is nothing to say about fusion from the International Thermonuclear Experimental Reactor because it’s still being built:

  • The cost so far is $22.3 billion
  • The original deadline was 2016, the latest 2027 date is highly unlikely.
  • Their goal of a ‘burning plasma’ that produces more energy than the machine itself consumes is at least 20 years away
  • It’s so poorly run that a recent assessment found serious problems with the project’s leadership, management, and governance. The report was so damning the project’s governing body only allowed senior management to see it because they feared “the project could be interpreted as a major failure”.
  • An April 10, 2014 report found that the U.S. contribution to ITER will cost a total of $3.9 billion, four times the original estimate
  • Even if ITER does reach break-even someday, it will have produced just heat, not the ultimate aim, electricity. More work will be needed to hook it up to a generator. For ITER and tokamaks in general, commercialization remains several decades away.

2) The National Ignition Facility (NIF) at Lawrence Livermore National Laboratory is trying to use lasers to fuse hydrogen atoms together.

Despite the publicity from a recent test, this project is at least as far from attempting fusion as ITER is:

  • The cost so far is $5.3 billion dollars
  • The original deadline was 2009. Denise Hinkel, a physicist working on the project, said of the 2014 test that “we’re so far away from fusion it may not be a useful way to talk about what’s happening here at Livermore”.

The goal of the NIF is to achieve “ignition”. That means that the fused hydrogen atoms need to generate as much energy as was used to run the lasers that bombarded them with heat and pressure.

According to Mark Herrmann, at Sandia National Laboratory, the pressures achieved in the recent test were “1,000 times lower” than needed to meet the criteria for ignition.

Well, actually, according to the June 2014 issue of Scientific American, it was a hell of a lot less than that (Biello):

  • 17,000 joules of energy were yielded by the fuel pellet
  • 500,000,000,000,000 joules (500 trillion joules) were required just to feed the lasers alone
  • the pellet needs to yield 29.4 billion times more energy to reach ignition, not 1,000 times
  • Or if you look at it another way, the pellet returned just .0000000034% of the laser input (17,000 / 500,000,000,000,000 = .000000000034)
  • Biello concludes “A source of nearly unlimited, clean energy is still decades away”.
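Taking the quoted figures at face value, a few lines of Python reproduce the ratios in the bullets above:

```python
# Sanity check on the NIF energy figures quoted above (Biello, Scientific American).
pellet_yield_j = 17_000                # energy released by the fuel pellet, joules
laser_input_j = 500_000_000_000_000    # energy fed to the lasers, joules (500 trillion)

gain = pellet_yield_j / laser_input_j        # fraction of input energy returned
shortfall = laser_input_j / pellet_yield_j   # factor of improvement still needed

print(f"gain = {gain:.2e}")            # → 3.40e-11, i.e. .0000000034%
print(f"shortfall = {shortfall:.2e}")  # → 2.94e+10, i.e. ~29.4 billion times
```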

When you consider what it would take to reach ignition, you will understand why many physicists don’t think NIF will ever work and is a total waste of money:

To reach ignition, 192 lasers in an area the size of 3 football fields will need to heat a tiny ball of hydrogen gas the size of a peppercorn to 50 million degrees Centigrade at 150 billion times the pressure of Earth’s atmosphere. Each of the 192 lasers must bombard the peppercorn at exactly the same time with perfect symmetry on all sides.  If there is any lack of symmetry, the peppercorn will be squeezed like a balloon, which creates escape holes for the hydrogen and no fusion.

To get to ignition, scientists would need to create a source of energy greater than all the energy pumped into the system by the facility’s 192 high-powered lasers – a goal some scientists say may be unachievable.

And if somehow NIF succeeded, practical fusion would still likely be decades away. NIF, at its quickest, fires once every few hours. The targets take weeks to build with artisan precision. A commercial laser fusion power plant would probably have to vaporize fuel pellets at a rate of 10 per second (Chang).

“You want to look at the big lie in each program,” says Edward C. Morse, a professor of nuclear engineering at the University of California, Berkeley. “The big lie in [laser-based] fusion is that we can make these target capsules for a nickel a piece.” The target capsules, the peppercorn-size balls of deuterium-tritium fuel, have to be exquisitely machined and precisely round to ensure that they compress evenly from all sides. Any bump on the pellet and the target won’t blow, which makes current iterations of the pellets prohibitively expensive. Although Livermore (LLNL), which plans to make its pellets on site, does not release anticipated costs, the Laboratory for Laser Energetics at the University of Rochester also makes similar deuterium-tritium balls. “The reality now is that the annual budget to make targets that are used at Rochester is several million dollars, and they make about six capsules a year,” Morse says. “So you might say those are $1 million a piece.” LLNL can only blast one pellet every few hours, but in the future, targets will need to cycle through the chamber with the speed of a Gatling gun consuming almost 90,000 targets a day (Moyer).
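Morse’s numbers imply a staggering cost gap. Here is a sketch, assuming “several million dollars” means roughly $6 million a year (that figure is my assumption, not his):

```python
# The target-capsule cost gap Morse describes above, taken at face value.
target_goal_cost = 0.05        # "a nickel a piece" goal, dollars
annual_budget = 6e6            # "several million dollars" a year (assumed $6M)
capsules_per_year = 6          # "about six capsules a year" at Rochester

actual_cost = annual_budget / capsules_per_year   # ≈ $1 million apiece
overshoot = actual_cost / target_goal_cost

print(f"actual cost per capsule: ${actual_cost:,.0f}")    # → $1,000,000
print(f"overshoot vs the nickel goal: {overshoot:,.0f}x") # → 20,000,000x
```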

Hirsch RL, Bezdek RH (2021) Fusion: Ten times more expensive than nuclear power. RealClearEnergy.org.

Hirsch & Bezdek wrote the 2005 Department of Energy Peak Oil report.

The U.S. and world fusion energy research programs are developing something that no one will want or can afford. Ever so slowly the promise of commercially viable fusion power from tokamaks has ebbed away.  Some recognized the worsening commercial outlook, but most researchers simply continued to study and increase the size of their tokamak devices — and to increase the size of their budgets.

Today, the ITER plant, which was initially expected to cost $5 billion, will now cost somewhere between $22 billion and $65 billion.  Even at $22 billion, the cost is ten times that of a nuclear fission power plant, and 30 times at $65 billion.  And nuclear fission power plants are considered to be too expensive for further adoption in the U.S.

The largest source of tritium in the world is heavy water nuclear reactors in Canada. The combination of very limited world production of tritium and its loss by radioactive decay means that world supplies of tritium are inherently limited.  It has recently become clear that world supplies of tritium for larger fusion experiments are limited to the point that world supplies are inadequate for future fusion pilot plants, let alone commercial fusion reactors based on the deuterium-tritium fuel cycle.  In other words, fusion researchers are developing a fusion concept for which there will not be enough fuel in the world to operate!

So fusion researchers are developing a fusion concept that stands no hope of being economically acceptable, running on a fuel that does not exist in adequate quantities.

To stop wasting funding on these pointless fusion projects, we recently suggested to the Secretary of Energy that she appoint a panel of non-fusion engineers and environmentalists to conduct the objective, independent evaluation we believe is necessary.

Moyer (2010) Fusion’s False Dawn. Scientific American.

Scientists have long dreamed of harnessing nuclear fusion—the power plant of the stars—for a safe, clean and virtually unlimited energy supply. Even as a historic milestone nears, skeptics question whether a working reactor will ever be possible

Deuterium-tritium fusion only kicks in at temperatures above 150 million degrees Celsius — 25,000 times hotter than the surface of the sun.

Yet the flash of ignition may be the easy part. The challenges of constructing and operating a fusion-based power plant could be more severe than the physics challenge of generating the fireballs in the first place.  A working reactor would have to be made of materials that can withstand temperatures of millions of degrees for years on end. It would be constantly bombarded by high-energy nuclear particles–conditions that turn ordinary materials brittle and radioactive. It has to make its own nuclear fuel in a complex breeding process. And to be a useful energy-producing member of the electricity grid, it has to do these things pretty much constantly–with no outages, interruptions or mishaps–for decades.

Fusion plasmas are hard to control. Imagine holding a large, squishy balloon. Now squeeze it down to as small as it will go. No matter how evenly you apply pressure, the balloon will always squirt out through a space between your fingers. The same problem applies to plasmas. Anytime scientists tried to clench them down into a tight enough ball to induce fusion, the plasma would find a way to squirt out the sides. It is a paradox germane to all types of fusion reactors–the hotter you make the plasma and the tighter you squeeze it, the more it fights your efforts to contain it.  So scientists have built ever larger magnetic bottles, but every time they did so, new problems emerged.

No matter how you make fusion happen–whether you use megajoule lasers (like at Lawrence Livermore National Laboratory) or the crunch of magnetic fields–energy payout will come in the currency of neutrons. Because these particles are neutral, they are not affected by electric or magnetic fields. Moreover, they pass straight through most solid materials as well.

The only way to make a neutron stop is to have it directly strike an atomic nucleus. Such collisions are often ruinous. The neutrons coming out of a deuterium-tritium fusion reaction are so energetic that they can knock out of position an atom in what would ordinarily be a strong metal–steel for instance. Over time these whacks weaken a reactor, turning structural components brittle.

Other times the neutrons will turn benign material radioactive. When a neutron hits an atomic nucleus, the nucleus can absorb the neutron and become unstable. A steady stream of neutrons—even if they come from a “clean” reaction such as fusion—would make any ordinary container dangerously radioactive, Baker says. “If someone wants to sell you any kind of nuclear system and says there is no radioactivity, hang onto your wallet.”

A fusion-based power plant must also convert energy from the neutrons into heat that drives a turbine. Future reactor designs make the conversion in a region surrounding the fusion core called the blanket. Although the chance is small that a given neutron will hit any single atomic nucleus in a blanket, a blanket thick enough and made from the right material—a few meters’ worth of steel, perhaps—will capture nearly all the neutrons passing through. These collisions heat the blanket, and a liquid coolant such as molten salt draws that heat out of the reactor. The hot salt is then used to boil water, and as in any other generator, this steam spins a turbine to generate electricity.

Except it is not so simple. The blanket has another job, one just as critical to the ultimate success of the reactor as extracting energy. The blanket has to make the fuel that will eventually go back into the reactor.

Although deuterium is cheap and abundant, tritium is exceptionally rare and must be harvested from nuclear reactions. An ordinary nuclear power plant can make between two to three kilograms of it in a year, at an estimated cost of between $80 million and $120 million a kilogram. Unfortunately, a magnetic fusion plant will consume about a kilogram of tritium a week. “The fusion needs are way, way beyond what fission can supply,” says Mohamed Abdou, director of the Fusion Science and Technology Center at the University of California, Los Angeles.
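To see the scale of the gap Abdou describes, here is a back-of-the-envelope sketch taking the midpoints of the ranges quoted above (the midpoints are my assumption, not figures from the article):

```python
# Rough scale of the tritium supply gap described above (figures from Moyer).
fission_output_kg_per_year = 2.5   # one nuclear plant makes 2-3 kg/yr (midpoint)
cost_per_kg = 100e6                # $80M-$120M per kilogram (midpoint)
fusion_demand_kg_per_year = 52     # a fusion plant burns ~1 kg per week

plants_needed = fusion_demand_kg_per_year / fission_output_kg_per_year
fuel_bill = fusion_demand_kg_per_year * cost_per_kg

print(f"fission plants needed to fuel ONE fusion plant: ~{plants_needed:.0f}")  # ~21
print(f"annual tritium bill at fission prices: ${fuel_bill/1e9:.1f} billion")   # $5.2 billion
```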

For a fusion plant to generate its own tritium, it has to borrow some of the neutrons that would otherwise be used for energy. Inside the blanket channels of lithium, a soft, highly reactive metal, would capture energetic neutrons to make helium and tritium. The tritium would escape out through the channels, get captured by the reactor and be reinjected into the plasma.

When you get to the fine print, though, the accounting becomes precarious. Every fusion reaction devours exactly one tritium ion and produces exactly one neutron. So every neutron coming out of the reactor must make at least one tritium ion, or else the reactor will soon run a tritium deficit—consuming more than it creates. Avoiding this obstacle is possible only if scientists manage to induce a complicated cascade of reactions. First, a neutron hits a lithium 7 isotope, which, although it consumes energy, produces both a tritium ion and a neutron. Then this second neutron goes on to hit a lithium 6 isotope and produce a second tritium ion.
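The breeding cascade described above can be written out explicitly; the reaction energies below are standard nuclear-data values, not figures from Moyer’s article:

```latex
% D-T fusion: consumes one triton, emits one fast neutron
\mathrm{D} + \mathrm{T} \;\rightarrow\; {}^{4}\mathrm{He}\,(3.5\,\mathrm{MeV}) + n\,(14.1\,\mathrm{MeV})

% Step 1: lithium-7 breeds a triton AND regenerates a neutron (endothermic)
n + {}^{7}\mathrm{Li} \;\rightarrow\; \mathrm{T} + {}^{4}\mathrm{He} + n' \quad (-2.5\,\mathrm{MeV})

% Step 2: the regenerated neutron breeds a second triton from lithium-6
n' + {}^{6}\mathrm{Li} \;\rightarrow\; \mathrm{T} + {}^{4}\mathrm{He} \quad (+4.8\,\mathrm{MeV})
```

Since each fusion burns exactly one triton but emits only one neutron, only the extra neutron from the lithium-7 step allows a breeding ratio above one, which is why losing even a small fraction of neutrons breaks the fuel cycle.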

Moreover, all this tritium has to be collected and reintroduced to the plasma with near 100 percent efficiency. “In this chain reaction you cannot lose a single neutron, otherwise the reaction stops,” says Michael Dittmar, a particle physicist at the Swiss Federal Institute for Technology in Zurich. “The first thing one should do [before building a reactor] is to show that the tritium production can function. It is pretty obvious that this is completely out of the question.”

“This is a very fancy gadget, this fusion blanket,” Hazeltine says. “It is accepting a lot of heat and taking care of that heat without overheating itself. It is accepting neutrons, and it is made out of very sophisticated materials so it doesn’t have a short lifetime in the face of those neutrons. And it is taking those neutrons and using them to turn lithium into tritium.”

ITER, unfortunately, will not test blanket designs. That is why many scientists—especially those in the U.S., which is not playing a large role in the design, construction or operation of ITER—argue that a separate facility is needed to design and build a blanket. “You must show that you can do this in a practical system,” Abdou says, “and we have never built or tested a blanket. Never.” If such a test facility received funding tomorrow, Abdou estimates that it would take between 30 and 75 years to understand the issues sufficiently well to begin construction on an operational power plant. “I believe it’s doable,” he says, “but it’s a lot of work.”

The Big Lie

Let’s say it happens. The year is 2050. Both the NIF and ITER were unqualified successes, hitting their targets for energy gain on time and under budget. Mother Nature held no surprises as physicists ramped up the energy in each system; the ever unruly plasmas behaved as expected. A separate materials facility demonstrated how to build a blanket that could generate tritium and convert neutrons to electricity, as well as stand up to the subatomic stresses of daily use in a fusion plant. And let’s assume that the estimated cost for a working fusion plant is only $10 billion. Will it be a useful option?

Even for those who have spent their lives pursuing the dream of fusion energy, the question is a difficult one to answer. The problem is that fusion-based power plants—like ordinary fission plants—would be used to generate baseload power. That is, to recoup their high initial costs, they would need to always be on. “Whenever you have any system that is capital-intensive, you want to run it around the clock because you are not paying for the fuel,” Baker says.

Unfortunately, it is extremely difficult to keep a plasma going for any appreciable length of time. So far reactors have been able to maintain a fusing plasma for less than a second. The goal of ITER is to maintain a burning plasma for tens of seconds. Going from that duration to around-the-clock operation is yet another huge leap. “Fusion will need to hit 90 percent availability,” says Baker, a figure that includes the downtime required for regular maintenance. “This is by far the greatest uncertainty in projecting the economic reliability of fusion systems.”

It used to be that fusion was [seen as] fundamentally different from dirty fossil fuels or dangerous uranium. It was beautiful and pure—a permanent fix, an end to our thirst for energy. It was as close to the perfection of the cosmos as humans were ever likely to get. Now those visions are receding. Fusion is just one more option and one that will take decades of work to bear fruit…the age of unlimited energy is not [in sight].

Clery D (2013) The Most Expensive Science Experiment Ever. Popular Science.

Some people have spent their whole working lives researching fusion and then retired feeling bitter at what they see as a wasted career. But that hasn’t stopped new recruits joining the effort every year…, perhaps motivated by … the need for fusion has never been greater, considering the twin threats of dwindling oil supplies and climate change.  ITER won’t generate any electricity, but designers hope to go beyond break-even and spark enough fusion reactions to produce 10 times as much heat as that pumped in to make it work.

To get there requires a reactor of epic proportions:

  • The building containing the reactor will be nearly 200 feet tall and extend 43 feet underground.
  • The reactor inside will weigh 23,000 tons.
  • The metal niobium will be combined with tin to make superconducting wires for the reactor’s magnets. When finished, workers will have made 50,000 miles of wire, enough to wrap around the equator twice.
  • There will be 18 magnets, each 46 feet tall and weighing 360 tons (as much as a fully-laden jumbo jet), with giant D-shaped coils of wire forming the electromagnets used to contain the plasma.
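The wire-length comparison above checks out against Earth’s equatorial circumference of about 24,901 miles:

```python
# Checking the comparison above: is 50,000 miles of wire about twice the equator?
wire_miles = 50_000
equator_miles = 24_901      # Earth's equatorial circumference

wraps = wire_miles / equator_miles
print(f"wraps around the equator: {wraps:.2f}")   # → 2.01
```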

That huge sum of money is, for the nations involved, a gamble against a future in which access to energy will become an issue of national security. Most agree that oil production is going to decline sharply during this century.  That doesn’t leave many options for the world’s future energy supplies. Conventional nuclear power makes people uneasy for many reasons, including safety, the problems of disposing of waste, nuclear proliferation and terrorism.

Alternative energy sources such as wind, wave and solar power will undoubtedly be a part of our energy future. It would be very hard, however, for our modern energy-hungry society to function on alternative energy alone because it is naturally intermittent–sometimes the sun doesn’t shine and the wind doesn’t blow–and also diffuse–alternative technologies take up a lot of space to produce not very much power.

Difficult choices lie ahead over energy and, some fear, wars will be fought in coming decades over access to energy resources, especially as the vast populations of countries such as China and India increase in prosperity and demand more energy. Anywhere that oil is produced or transported–the Strait of Hormuz, the South China Sea, the Caspian Sea, the Arctic–could be a flashpoint. Supporting fusion is like backing a long shot: it may not come through, but if it does it will pay back handsomely. No one is promising that fusion energy will be cheap; reactors are expensive things to build and operate. But in a fusion-powered world geopolitics would no longer be dominated by the oil industry, so no more oil embargoes, no wild swings in the price of crude and no more worrying that Russia will turn off the tap on its gas pipelines.

Hambling D (2011) Star power: Small fusion start-ups aim for break-even. NewScientist.

Reaching the temperatures above 150 million degrees Celsius at which deuterium-tritium fusion kicks in requires a lot of energy, and no known material can withstand such temperatures once they have been achieved. The ultra-hot, ultra-dense plasma at the heart of a fusion reactor must instead be kept well away from the walls of its container using magnetic fields. Following a trick devised in the Soviet Union in the 1950s, the plasma is generated inside a doughnut or torus-shaped vessel, where encircling magnetic fields keep the plasma spiraling clear of the walls – a configuration known as a tokamak. This confinement is not perfect: the plasma has a tendency to expand, cool and leak out, limiting the time during which fusion can occur. The bigger the tokamak, the better the chance of extracting a meaningful amount of energy, since larger magnetic fields hold the plasma at a greater distance, meaning a longer confinement time.

Break-even is the dream ITER was conceived to realize.

With a huge confinement volume, it should contain a plasma for several minutes, ultimately producing 10 times as much power as is put in.  But this long confinement time brings its own challenges. An elaborate system of gutters is needed to extract from the plasma the helium produced in the reaction, along with other impurities. The neutrons emitted, which are chargeless and so not contained by magnetic fields, bombard the inside wall of the torus, making it radioactive and meaning it must be regularly replaced. These neutrons are also needed to breed the tritium that sustains the reaction, so the walls must be designed in such a way that the neutrons can be captured on lithium to make tritium. The details of how to do this are still being worked out.

The success of the project is by no means guaranteed 

“We know we can produce plasmas with all the right elements, but when you are operating on this scale there are uncertainties,” says David Campbell, a senior ITER scientist. Extrapolations from the performance of predecessors suggest a range of possible outcomes, he says. The most likely is that ITER will work as planned, delivering 10 times break-even energy. Yet there is a chance it might work better – or produce too little energy to be useful for commercial fusion.

Richard Wolfson, in “Nuclear Choices: A Citizen’s Guide to Nuclear Technology”:

“In the long run, fusion itself could bring on the ultimate climatic crisis. The energy released in fusion would not otherwise be available on Earth; it would represent a new input to the global energy flow. Like all the rest of the global energy, fusion energy would ultimately become heat that Earth would have to radiate into space. As long as humanity kept its energy consumption a tiny fraction of the global energy flow, there would be no major problem. But history shows that human energy consumption grows rapidly when it is not limited by shortages of fuel. Fusion fuel would be unlimited, so our species might expand its energy consumption to the point where the output of our fusion reactors became significant relative to the global input of solar energy. At that point Earth’s temperature would inevitably rise. This long-term criticism of fusion holds for any energy source that could add to Earth’s energy flow even a few percent of what the Sun provides. Only solar energy itself escapes this criticism” (page 274).

Robert L. Hirsch, author of the Department of Energy 2005 Peak Oil study, in his book “The Impending World Energy Mess”:

“Fusion has been in the research stage since the 1950s….Fusion happens when fuels are heated to hundreds of millions of degrees long enough for more energy to be released than was used to create the heat. Containment of fusion fuels on the sun is by gravity. Since gravity is not usable for fusion on earth, researchers have used magnetic fields, electrostatic fields, and inertia to provide containment. Thus far, no magnetic or electrostatic fusion concept has demonstrated success.”  Hirsch thinks this will never work out and it’s been a waste of tens of billions of dollars.

William Parkins, formerly the chief scientist at Rockwell International, asks in the 10 March 2006 edition of Science, “Fusion Power: Will It Ever Come?”

When I read Parkins’s article and translated some of the measurements into units more familiar to me, it was obvious that fusion would never see the light of day:

  • Fusion requires heating D-T (deuterium-tritium) to a temperature of 180 million degrees Fahrenheit — 6.5 times hotter than the core of the sun.
  • So much heat is generated that the reactor vacuum vessel has to be at least 65 feet long, and no matter what the material, will need to be replaced periodically because the heat will make the reactor increasingly brittle as it undergoes radiation damage.  The vessel must retain vacuum integrity, requiring many connections for heat transfer and other systems.  Vacuum leaks are inevitable and could only be solved with remotely controlled equipment.
  • A major part of the cost of a fusion plant is the blanket-shield component. Its area equals that of the reactor vacuum vessel, about 4,500 square yards in a 1000 MWe plant.  The surrounding blanket-shield, made of expensive materials, would need to be at least 5.5 feet thick and weigh 10,000 metric tons, conservatively costing $1.8 billion.

Here are some of the other difficulties Parkins points out in this article:

The blanket-shield component “amounts to $1,800/kWe of rated capacity—more than nuclear fission reactor plants cost today. This does not include the vacuum vessel, magnetic field windings with their associated cryogenic system, and other systems for vacuum pumping, plasma heating, fueling, “ash” removal, and hydrogen isotope separation. Helium compressors, primary heat exchangers, and power conversion components would have to be housed outside of the steel containment building—required to prevent escape of radioactive tritium in the event of an accident. It will be at least twice the diameter of those common in nuclear plants because of the size of the fusion reactor.

Scaling of the construction costs from the Bechtel estimates suggests a total plant cost on the order of $15 billion, or $15,000/kWe of plant rating. At a plant factor of 0.8 and total annual charges of 17% against the capital investment, these capital charges alone would contribute 36 cents to the cost of generating each kilowatt hour. This is far outside the competitive price range.
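Parkins’s 36-cent figure is easy to reproduce from the numbers he gives:

```python
# Reproducing Parkins's capital-charge estimate quoted above.
plant_cost_per_kwe = 15_000   # $ per kW of rated capacity
annual_charge_rate = 0.17     # 17% of capital per year
plant_factor = 0.8            # fraction of the year actually generating

annual_charge = plant_cost_per_kwe * annual_charge_rate   # $ per kWe-year
kwh_per_kwe = 8760 * plant_factor                         # kWh generated per kWe-year
cost_per_kwh = annual_charge / kwh_per_kwe

print(f"capital charge: {cost_per_kwh*100:.0f} cents/kWh")  # → 36 cents/kWh
```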

The history of this dream is as expensive as it is discouraging. Over the past half-century, fusion appropriations in the U.S. federal budget alone have run at about a quarter-billion dollars a year. Lobbying by some members of the physics community has resulted in a concentration of work at a few major projects—the Tokamak Fusion Test Reactor at Princeton, the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory, and the International Thermonuclear Experimental Reactor (ITER), the multinational facility now scheduled to be constructed in France after prolonged negotiation. NIF is years behind schedule and greatly over budget; it has poor political prospects, and the requirement for waiting between laser shots makes it a doubtful source for reliable power.

Even if a practical means of generating a sustained, net power-producing fusion reaction were found, prospects of excessive plant cost per unit of electric output, requirement for reactor vessel replacement, and need for remote maintenance for ensuring vessel vacuum integrity lie ahead. What executive would invest in a fusion power plant if faced with any one of these obstacles? It’s time to sell fusion for physics, not power”.

Former House of Representatives Congressman Roscoe Bartlett (R-MD), head of the “Peak Oil Caucus”:

“…hoping to solve our energy problems with fusion is a bit like you or me hoping to solve our personal financial problems by winning the lottery. That would be real nice. I think the odds are somewhere near the same. I am about as likely to win the lottery as we are to come to economically feasible fusion.”

Bartlett’s full speech to congress: http://www.energybulletin.net/4733.html

National Academy of Sciences. 2013. An Assessment of the Prospects for Inertial Fusion Energy

The 3 principal research efforts in the USA are all trying to implode fusion fuel pellets by: (1) lasers, including solid state lasers at the Lawrence Livermore National Laboratory’s (LLNL’s) NIF and the University of Rochester’s Laboratory for Laser Energetics (LLE), as well as the krypton fluoride gas lasers at the Naval Research Laboratory; (2) particle beams, being explored by a consortium of laboratories led by the Lawrence Berkeley National Laboratory (LBNL); and (3) pulsed magnetic fields, being explored on the Z machine at Sandia National Laboratories. The minimum technical accomplishment that would give confidence that commercial fusion may be feasible—the ignition of a fuel pellet in the laboratory—has not been achieved.

This 247-page report is chock-full of the problems that fusion must overcome, not just technical ones but funding: billions of dollars would be needed in the unlikely event that any of the various flavors of fusion makes enough progress to scale up. If you ever wanted to know the minutiae of why fusion will never work, this is a great document to read, if you can understand it. I spent about ten minutes grabbing just a few of the hundreds of “challenges” that need to be overcome:

  • Making a reliable, long-lived chamber is challenging since the charged particles, target debris, and X-rays will erode the wall surface and the neutrons will embrittle and weaken the solid materials.
  • Unless the initial layer surfaces are very smooth (i.e., perturbations are smaller than about 20 nm), short-wavelength (wavelength comparable to shell thickness) perturbations can grow rapidly and destroy the compressing shell. Similarly, near the end of the implosion, such instabilities can mix colder material into the spot that must be heated to ignition. If too much cold material is injected into the hot spot, ignition will not occur. Most of the fuel must be compressed to high density, approximately 1,000 to 4,000 times solid density.
  • To initiate fusion, the deuterium and tritium fuel must be heated to over 50 million degrees and held together long enough for the reactions to take place. Drivers must deliver very uniform ablation; otherwise the target is compressed asymmetrically. If the compression of the target is insufficient, the fusion reaction rate is too slow and the target disassembles before the reactions take place. Asymmetric compression excites strong Rayleigh-Taylor instabilities that spoil compression and mix dense cold plasma with the less dense hot spot. Preheating of the target can also spoil compression. For example, mistimed driver pulses can shock heat the target before compression. Also, interaction of the driver with the surrounding plasma can create fast electrons that penetrate and preheat the target.
  • The technology for the reactor chambers, including heat exhaust and management of tritium, involves difficult and complicated issues with multiple, frequently competing goals and requirements.  Understanding the performance at the level of subsystems such as a breeding blanket and tritium management, and integrating these complex subsystems into a robust and self-consistent design will be very challenging.
  • Avoiding frequent replacement of components that are difficult to access and replace will be important to achieving high availability. Such components will need to achieve a very high level of operational reliability.
  • Experimental investigations of the fast-ignition concept are challenging and involve extremely high-energy-density physics: ultraintense lasers (>10^19 W/cm^2); pressures in excess of 1 Gbar; magnetic fields in excess of 100 MG; and electric fields in excess of 10^12 V/m. Addressing the sheer complexity and scale of the problem inherently requires high-energy and high-power laser facilities.
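
The heating-and-confinement requirement in the list above is often summarized by the Lawson triple product. A minimal sketch, assuming the commonly quoted D-T ignition threshold of about 3×10^21 keV·s/m^3 (the sample plasma values are illustrative, not taken from the NAS report):

```python
# Lawson triple-product check for D-T fusion ignition (illustrative).
# Ignition roughly requires n * T * tau >= ~3e21 keV·s/m^3 for D-T fuel.

IGNITION_TRIPLE_PRODUCT = 3e21  # keV·s/m^3, commonly quoted D-T threshold

def triple_product(density_m3: float, temp_kev: float, confinement_s: float) -> float:
    """Density (m^-3) x temperature (keV) x energy confinement time (s)."""
    return density_m3 * temp_kev * confinement_s

def ignites(density_m3: float, temp_kev: float, confinement_s: float) -> bool:
    """True if the plasma meets the assumed ignition threshold."""
    return triple_product(density_m3, temp_kev, confinement_s) >= IGNITION_TRIPLE_PRODUCT

# A tokamak-like plasma: n ~ 1e20 m^-3, T ~ 15 keV, tau ~ 1 s
print(ignites(1e20, 15.0, 1.0))  # falls short of the threshold
```

By this crude measure a plasma twice as dense at the same temperature and confinement time would cross the threshold; reaching all three parameters at once is exactly what has not been achieved.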



Compressed air energy storage (CAES)

Figure 1. Potential salt dome locations for CAES facilities are mainly along the Gulf coast

Preface. Besides pumped hydro storage (PHS), which provides 99% of energy storage today, CAES is the only other commercially proven energy storage technology that can provide large-scale (over 100 MW) energy storage. But there are just two CAES plants in the world because there are so few places to put them, as you can see in Figure 1 and Figure i.

CAES is the most sustainable form of energy storage, with none of the environmental issues that PHS poses, such as flooding land and damming rivers. Barnhart (2013) rates the ESOI, or energy stored on energy invested, of CAES the best of all storage technologies; batteries, by contrast, can need up to 100 times more energy to build than they can store.

A more detailed and technical article on CAES with wonderful pictures can be found here: Kris De Decker. History and Future of the Compressed Air Economy.

Alice Friedemann   www.energyskeptic.com  author of  “Life After Fossil Fuels – Back to Wood World”, 2021, Springer, “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

How it works: Using off-peak electricity, compressed air is pumped into very large underground cavities at a depth of 1650–4250 feet (Hovorka 2009), then drawn out to spin turbines during peak demand periods.

Uh-oh: it still needs fossil fuels. A big drawback of CAES is that electric generators burn natural gas to supplement the energy from the stored compressed air. Natural gas also powers the compression and pumping of the air underground, and when the compressed air is withdrawn, natural gas is burned a second time to heat it and force it through expanders to power a generator. Current CAES facilities are essentially gas turbines that consume 40–60% less gas than conventional turbines (SBC 2013).

Few locations: Domal salt formations are rare (orange in figure i below)

Locations are scarce because they must be airtight. There are only two CAES plants in the world: one in Alabama (110 MW), built in 1991, and one in Germany, built in 1979, both in domal salt formations.

There are only two because domal salt formations are so rare, existing in only a few U.S. states as shown in figure i. These formations have one or more deep chambers within the salt dome that are airtight, so they can handle frequent charging and discharging; their pure, thick salt walls self-heal with air moisture, preventing leaks. Bedded salt is less ideal because carving out salt chambers takes a huge amount of energy and water, and because domal salt is purer and thicker than bedded salt (Hovorka).

Areas with class 4+ wind and possible CAES locations. Succar. 2008. Compressed Air Energy Storage: Theory, Resources, And Applications For Wind Power. Princeton University.

Ideally a CAES facility would store renewable wind power, but the best wind locations are seldom near domal salt areas. One wind/CAES project is being planned, however: an $8 billion project in Utah. It would use the only known salt dome outside of Texas, Louisiana, or Alabama for a $1.5 billion CAES facility storing electricity from a $4 billion wind farm in Wyoming, delivering power to Los Angeles over $2.6 billion of new transmission lines running 535 miles ($4.86 million/mile) (DATC 2014; Gruver 2014).

This is not exactly run-of-the-mill geology. CAES has yet to be deployed in bedded salt, aquifers, or abandoned rock mines, because these formations are less likely to be airtight and hence able to charge and discharge frequently while maintaining constant pressure. Underground formations once used to store natural gas or oil would have to be free of blockages that could gum up the works. Water is another limiting factor: high volumes are needed to cool the compressed air before storing it.

CAES systems generally have twice as much up-ramping capability as down-ramping. Translation: They can produce electricity faster than they can store it (IEA 2011a).

They are inefficient

The CAES plants in operation in Germany and the US have electric-to-electric efficiencies of only 40% and 54%, respectively (Luo 2015). A conversion efficiency this low means roughly twice as much wind and solar power must be generated to make up for the loss.
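
The claim that such low conversion efficiency forces extra generation is simple arithmetic. A minimal sketch using the efficiency figures quoted above (the 100 MWh delivery target is just an illustration):

```python
# Extra generation needed upstream of storage, given round-trip efficiency.

def generation_needed(delivered_mwh: float, round_trip_efficiency: float) -> float:
    """MWh that must be generated so that delivered_mwh comes out of storage."""
    return delivered_mwh / round_trip_efficiency

# At the two existing plants' efficiencies (Luo 2015):
print(round(generation_needed(100, 0.40), 1))  # 40% efficient: 250.0 MWh in per 100 MWh out
print(round(generation_needed(100, 0.54), 1))  # 54% efficient: 185.2 MWh in per 100 MWh out
```

At 40–54% efficiency the multiplier is actually 1.9–2.5, so "doubling" is, if anything, slightly optimistic at the low end.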

The Pacific Northwest National Laboratory calculated the cost of energy storage devices for balancing the grid if wind power reached 20% of electric generation across the United States. CAES was the most expensive option at $170.6 billion. Storage would fill spans ranging from milliseconds up to an hour; not 2 hours, not a day, and not a week, which would cost extra. In billions of dollars, the options examined were: $54.03 NaS battery, $63.85 flywheel, $81.62 Li-ion battery, $116.61 redox flow battery, $125.06 demand response (PHEV car batteries), $130.24 pumped hydro storage (PHS), $135.48 combustion turbine (CT), and $170.62 compressed air energy storage (PNNL 2013).

Based on nine vendor estimates, to build CAES units able to store one day of U.S. electricity would cost from $912 billion to $1.48 trillion. That’s below ground. Above ground CAES would cost $3.8 trillion (DOE/EPRI 2013).
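
Those vendor totals imply a capital cost per kWh of storage capacity. A rough check, assuming U.S. annual generation of about 4,000 TWh (my round figure, not a number from the DOE/EPRI handbook):

```python
# Implied capital cost per kWh of CAES capacity, from the DOE/EPRI totals.

US_ANNUAL_GENERATION_TWH = 4000  # assumed round figure for U.S. generation
ONE_DAY_KWH = US_ANNUAL_GENERATION_TWH / 365 * 1e9  # one day's TWh converted to kWh

def implied_cost_per_kwh(total_cost_dollars: float) -> float:
    """Dollars per kWh if total_cost_dollars buys one day of U.S. storage."""
    return total_cost_dollars / ONE_DAY_KWH

print(round(implied_cost_per_kwh(912e9)))    # low underground estimate: ~$83/kWh
print(round(implied_cost_per_kwh(1.48e12)))  # high underground estimate: ~$135/kWh
print(round(implied_cost_per_kwh(3.8e12)))   # above-ground estimate: ~$347/kWh
```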

Locations must be near the electric grid: it is far too expensive to add transmission from remote locations, when the plants themselves are already so expensive to build.

According to Alfred Cavallo, “The immense magnitude of stored energy required to transform the intermittent wind resource to a constantly available power supply is not widely appreciated. For example, a 200 MW wind/CAES plant would need a minimum storage capacity of 10,000 MWh, or 50 hours of full plant output (this assumes that the wind power density is constant throughout the year). If the wind was not constant, but seasonal, say mainly in the winter or spring, the energy storage for seasonal output would require a minimum of 40,000 MWh (200 hours of full power plant output). Clearly, only the most inexpensive of storage media, like air or water, could be used in such an application” (Cavallo 2007).
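
Cavallo’s storage figures are just plant capacity multiplied by the hours of full output that must be covered. A minimal sketch reproducing them:

```python
# Minimum storage for a wind/CAES plant, per Cavallo (2007):
# storage (MWh) = plant capacity (MW) x hours of full output to cover.

def min_storage_mwh(plant_mw: float, hours_full_output: float) -> float:
    return plant_mw * hours_full_output

print(min_storage_mwh(200, 50))   # steady wind: 10000 MWh (50 h of full output)
print(min_storage_mwh(200, 200))  # seasonal wind: 40000 MWh (200 h of full output)
```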

Since the wind is a seasonal resource, it would be ideal to be able to store weeks of wind energy, but that is impossibly expensive (Cavallo 1995).

CAES in aquifers has never been accomplished; an attempt to do so in Iowa was abandoned after $8 million was spent, because testing found the site would leak (see Haugen below). Aquifers are far more expensive than salt caverns, partly due to the high cost of testing: seismic surveys, drilling test wells, modeling the reservoir, and so on (Swensen; Hydrodynamics Group; Marchese). Aquifers may not be suitable for CAES at all: they must have the right amount of porosity and permeability beneath an impermeable caprock with the right geometry (Succar), which makes it very expensive just to find out.

Hard-rock caverns, such as abandoned mines, are the least likely place to put CAES. This has never been attempted: leakage is too likely, and the need to find a mine at exactly the right depth narrows the choices further.

Storing air poses problems that storing natural gas does not. Using underground storage that once held natural gas may not work, because “a CAES system used for arbitrage or backing wind power will likely switch between compression and generation at least once a day and perhaps several times a day. In contrast, most natural gas storage facilities are often only cycled once over the course of the year to meet the seasonal demand fluctuations for natural gas. Third, several oxidation processes might take place in the presence of oxygen from the air depending on the mineralogy of the formation. Also, introduction of air into the formation might promote propagation of aerobic bacteria that might pose a significant corrosion risk. Finally, additional corrosion mechanisms might be promoted due to the introduction of oxygen into the formation” (Succar).

Haugen, D. 2012. Scrapped Iowa project leaves energy storage lessons. Midwest Energy News.

After spending $8 million on a CAES aquifer in Iowa, the project was halted when it was concluded that air didn’t flow fast enough through the aquifer for it to be effective as a compressed-air energy storage site.

Hydrodynamics Group. CAES in aquifers is problematic: geological data are poor and reservoir properties uncertain.

Hydrodynamics has found that CAES in an aquifer storage medium is problematic. We found that geological data for aquifer structures is typically very limited, resulting in costly exploration, field testing, and analysis development programs. Other challenges include constraint of air storage pressure around the hydrostatic pressure of the aquifer, limitations on well productivity, the potential for oxygen depletion, and the potential of water production with the air. We have found that the mitigation of the challenges of CAES development is dependent on the selection of an anticline structure at the proper depth, and the choice of highly permeable porous medium.

REFERENCES

Barnhart, CJ, et al. 2013. On the importance of reducing the energetic and material demands of electrical energy storage. Energy & Environmental Science. 

Cavallo, A.  et al. 1995. Cost effective seasonal storage of wind energy. Houston, TX, USA,  pp. 119-125.

Cavallo, A. 2007. Controllable and affordable utility-scale electricity from intermittent wind resources and compressed air energy storage (CAES). Energy 32: 120-127.

DATC. 2014. $8-billion green energy initiative proposed for Los Angeles. Los Angeles: Duke American Transmission Co.

Denholm. September 23, 2013. Energy Storage in the U.S. National Renewable Energy Laboratory. Slide 15.

DOE/EPRI. 2013. Electricity storage handbook in collaboration with NRECA. USA: Sandia National Laboratories and Electric Power Research Institute.

Gruver, M. 2014. Renewable energy plan hinges on huge Utah caverns. New York: Associated Press.

Hovorka, S. 2009. Characterization of Bedded Salt for Storage Caverns: Case Study from the Midland Basin. Texas Bureau of Economic Geology.

Hydrodynamics Group. 2009. Norton compressed air energy storage. http://hydrodynamics-group.com/mbo/content/view/16/40

IEA. 2011. IEA harnessing variable renewables: a guide to the balancing challenge. Paris: International Energy Agency.

Luo X, et al. 2015. Overview of current development in electrical energy storage technologies and the application potential in power system operation. Applied Energy 137: 511-536.

Marchese, D. 2009. Transmission system benefits of CAES assets in a growing renewable generation market. Energy Storage Association Annual Meeting.

NREL. 2014. Renewable Electricity Futures Study. National Renewable Energy Laboratory.

PNNL. 2013. National assessment of energy storage for grid balancing and arbitrage: phase II, vol 2: cost and performance characterization. Washington, DC: Pacific Northwest National Laboratory.

SBC. 2013. Electricity storage. SBC Energy Institute.

Succar, S. et al. 2008. Compressed Air Energy Storage: Theory, Resources, and Applications for Wind Power. Princeton Environmental Institute.

Swensen, E. et al. 1994. Evaluation of Benefits and Identification of Sites for a CAES Plant in New York State. Energy Storage and Power Consultants. EPRI Report TR-104268.


Heinberg on what to do at home to conserve energy

Preface. A quick summary: the best investment was insulating exterior walls, ceiling, and floors. Other good changes were planting a garden and fruit-and-nut orchard, and buying a solar hot water heater, solar food dryer, solar cooker, chickens, and energy-efficient appliances.

Lessons learned: these measures are expensive, especially energy storage, and solar cookers work mainly in the summer.

In the future there will be more bikes and ebikes than cars. There needs to be much more local production of food and other goods to shorten supply chains.

Bottom line: there is very little we can do as individuals. We can’t mine the minerals we need, and few of us can grow all of our food. Despite all these investments, Heinberg still depends heavily on the greater world for food, electricity, and clothes; cars and most other objects in our lives can’t be home-made. What is required to make a transition is much bigger than most people imagine.

Alice Friedemann   www.energyskeptic.com  author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, 2021, Springer; “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Richard Heinberg. 2020. If My House Were the World: The Renewable Energy Transition Via Chickens and Solar Cookers. Resilience.org

For the past two decades, my wife Janet and I have been trying to transition our home to a post-fossil-fuel future. I say “trying,” because the experiment is incomplete and only somewhat successful. It doesn’t offer an exact model for how the rest of the world might make the shift to renewable energy; nevertheless, there’s quite a bit that we’ve learned that could be illuminating for others as they contemplate what it will take to minimize climate change by replacing coal, oil, and gas with cleaner energy sources.

We started with a rather trashy 1950s suburban house on a quarter-acre lot. We didn’t design a solar-optimal house from scratch the way Amory Lovins did (we thought about it, but we just didn’t have the time or money). We did what we could afford to do, when we could afford to do it.

Our first step was to insulate our exterior walls, ceiling, and floors. That was probably our best investment overall: it saved energy, and it made the house quieter and more pleasant to live in. Then we installed a small (1.2 kW) photovoltaic system, and planted a garden and fruit-and-nut orchard. Gradually, over the years, we added battery backup for our PV system, a solar hot water heater, a solar food dryer, chickens, solar cookers, energy-efficient appliances (including a mini-split electric HVAC system), and an electric car.

Here are ten things we learned along the way.

  1. It’s expensive. Altogether, we’ve spent tens of thousands of dollars on our quest for personal sustainability. And we’re definitely not big spenders. We economized at every stage, and occasionally benefitted from free labor and materials (our solar hot water panels, for example, were donated, and we built our food dryer from scrap). Still, once every few years we made a significant outlay for some new piece of electricity-generating or energy-saving technology. True, solar panels have gotten cheaper in the intervening years. On the other hand, there are things we still haven’t gotten to: we continue to rely on an old natural gas-fired kitchen cooking stove, which really should be replaced with an induction range if we hope to be all-solar-electric.
  2. Some things didn’t work. Early on, we planned and built a glassed-in extension on the south side of our house. Our idea was that it would capture sunlight in the winter and reduce our heating bills. As it turned out, we didn’t get the window and roof angles right, and so we receive relatively little heating benefit from this add-on. Instead we use it as a garden room for starting seedlings in the early spring. I suspect the global renewable energy transition will similarly see a lot of good ideas go awry, and false starts repurposed.
  3. Some things worked well. Twenty years after purchase, we have an antique PV system, with museum-quality Siemens panels still spitting out electrons. We made a big investment up-front, and got free electricity for two decades. This is a very different economic bargain from the familiar one with fossil fuels, which is pay-as-you-go. Similarly, making a rapid global energy transition, though offering some economic benefits in the long run, will require an enormous up-front expenditure. We learned that solar cookers are extremely cheap and pleasing to work with—in the summer months. Finally, we learned that keeping chickens is an economical source of eggs, though hens are less cost-effective from a food-production standpoint if you choose to treat them well (and continue caring for them after their egg laying subsides), as we did. There can be valuable side benefits: one hen, who’s been with us for nearly 10 years, has become an emotional support animal who supplants our need for more costly sources of psychological aid. I could say much more about her—but that’s for another occasion. Our chickens also provide manure and eggshells that enrich our soil. We compost some of our greenwaste and keep a worm bin, thus reducing energy usage by diverting some of our waste that would otherwise go to a landfill; we seasonally dry some produce in our solar dehydrator; and we can some of our fruit. These activities require little financial investment, but need a noticeable ongoing investment of effort.
  4. Energy storage is especially expensive. Our solar panels have lasted a long time, but our battery backup system didn’t. It now provides only about 20 minutes of power. True, our battery system is far from being state-of-the-art (it consists of five high-capacity lead-acid cells). Nevertheless, this proved to be the least-durable, least cost-effective aspect of our whole effort. The truth is, on both a diurnal and a seasonal basis, we rely almost entirely on the grid for energy storage and for matching electricity supply with demand. The lesson for our global energy transition: even though batteries are getting cheaper, energy storage will still be a costly engineering challenge.
  5. Reduce energy usage before you transition. Because renewable energy generation requires a lot of up-front investment, and because energy storage is also costly, it makes sense to minimize energy demand. For a household, that’s not problematic: we were quite happy shrinking our energy usage to roughly a quarter of the California average. But for society as a whole, this has huge implications. It’s possible to reduce demand somewhat through energy-efficiency measures, but serious reduction will have economic repercussions. We have built our national and global economic systems on the expectation of always using more. A successful energy transition will necessarily entail moving away from a growth-based consumer economy to an entirely different way of organizing investment, production, consumption, and employment.
  6. Our house is not an industrial manufacturing site. We don’t make our own cement or glass. If we had tried, it would have been a more interesting experiment, but much harder. We were undertaking the easy aspects of energy transition. The really difficult bits include things like aviation and high-heat industrial processes.
  7. Adding personal transportation to our renewable energy regime shifted us into energy deficit mode. We like our electric car, but charging it takes a lot of electricity (the energy needed to manufacture the car is another story altogether). Once we bought the car, we realized we need a larger PV system (that’s on our to-do list). For society as a whole, this suggests that transitioning the transportation sector will require sacrifice (see number 5, above). A renewable future will likely be less mobile and more local, and will feature more bikes and ebikes than cars. We should start shortening supply chains immediately.
  8. True sustainability and self-sufficiency would have required a lot more money, a lot more work, adaptation to a lot less consumption—or all three. Our experiment was informal; we didn’t keep track of every way in which we were using energy directly or indirectly (for example, via the embodied energy in the products we purchased). We continue to depend on flows of energy and money, and stocks of resources, in the world at large. We don’t generate the energy needed to mine minerals, or to manufacture cars, solar panels, or other stuff we have bought, such as clothes, a TV, computers, and books. The same holds for food self-sufficiency: we get a lot of fruit, nuts, eggs, and veggies from our backyard with minimal fossil energy inputs, but we buy the rest of what we eat from a local organic market. The world as a whole doesn’t have the luxury of going elsewhere to get what it needs; the transition will have to be comprehensive.
  9. You can’t expect someone else to do it all for you. Many people assume that the cost of the energy transition will somehow be paid by society as a whole—primarily, by big utility companies acting under government regulations and incentives. But households like yours and mine will have to bear a lot of the expense, and businesses will have to do even more of the heavy lifting. If households can’t afford to buy new equipment, or businesses can’t do so profitably, that will make the transition that much harder and slower. If we make the transition more through energy demand reduction rather than new technology, that will require massive shifts in people’s (read: your and my) expectations and behavior.
  10. We’re glad we did what we did. Our experiment has been instructive and rewarding. As a result of it, we have a much better appreciation for where our energy and manufactured products come from, and how much they impact the environment. We are more keenly aware of what we formerly took for granted and how cluelessly privileged our nation has been in its reliance on cheap fossil fuels. Our quality of life has improved as our consumption declined.
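
Point 7 above, the electric car pushing the household into energy deficit, can be made concrete with a back-of-envelope calculation. A sketch; the mileage, consumption, and solar-yield figures are my assumptions, not Heinberg’s:

```python
# Back-of-envelope: extra PV capacity to cover an EV's charging load.
# All three constants are assumptions, not Heinberg's numbers.

MILES_PER_YEAR = 10_000
EV_KWH_PER_MILE = 0.30           # typical EV consumption
PV_KWH_PER_KW_PER_YEAR = 1_400   # rough annual yield per kW of panels in California

ev_demand_kwh = MILES_PER_YEAR * EV_KWH_PER_MILE
extra_pv_kw = ev_demand_kwh / PV_KWH_PER_KW_PER_YEAR

print(round(ev_demand_kwh))      # ~3000 kWh/year of charging
print(round(extra_pv_kw, 1))     # ~2.1 kW of added panels, vs. a 1.2 kW array
```

Under these assumptions the car alone needs nearly twice the capacity of their original 1.2 kW array, which is consistent with their move into deficit.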

We would do most of it all over again (though I’d put more effort into designing the solarium that now serves as our garden room). I would have thought, at the outset, that after 20 years we’d be more sustainable and self-sufficient than we actually are. My take-away: the energy transition is an enormous job, and people who look at it just in terms of politics and policy have little understanding of what is actually required.


Life After Fossil Fuels: manufacturing will be less precise

Preface. This is a book review and excerpts of Winchester’s “The Perfectionists: How Precision Engineers Created the Modern World”. The book describes how the industrial revolution was made possible by ever greater precision. First came the steam engine, which became practical only when a way was invented to machine cylinders to a tenth of an inch of precision so the steam didn’t escape. By World War II parts could be made precise to within a millionth of an inch, and today to 35 decimal places of precision (0.00000000000000000000000000000000001), which is required for microchips, jet engines, and other high tech.

This amazing precision is achieved with machine tools, which make precise parts by shaping metal, glass, plastic, ceramics, and other rigid materials: cutting, boring, grinding, shearing, squeezing, rolling, stamping, and riveting. Most precision machine tools are powered by electricity today; in the past it was steam engines.

Machine tools also revolutionized our ability to kill each other. Winchester writes: “When any part of a gun failed, another part had to be handmade by an army blacksmith, a process that, with an inevitable backlog caused by other failures, could take days. As a soldier, you then went into battle without an effective gun, or waited for someone to die and took his, or did your impotent best with your bayonet, or else you ran. Once a gun had been physically damaged in some way, the entire weapon had to be returned to its maker or to a competent gunsmith to be remade or else replaced. It was not possible, incredible though this might seem, simply to identify the broken part and replace it with another. No one had ever thought to make a gun from component parts that were each so precisely constructed that they were identical one with another.”

Machine tools cannot be used on wood because it is flexible: it swells and contracts in unpredictable ways and can never hold a fixed dimension, whether planed or jointed, lapped or milled, or varnished to a brilliant luster. Wood is fundamentally and inherently imprecise.

Both of my books, “When Trucks Stop Running” and “Life After Fossil Fuels”, make the case that we are returning to a world where the electric grid is down for good and wood is the main energy source and infrastructure material once fossil fuels become scarce. The level of civilization we can achieve will therefore depend greatly on how precisely we can make objects. Because wood charcoal makes inferior, weaker iron, steel, and other metals than coal does, today’s precision will no longer be possible; microchips, jet engines, and much more will be lost forever. Wood, because of eventual deforestation, will yield orders of magnitude less metal, brick, ceramics, glass, and other products for lack of charcoal. And since peak coal is here, and the remaining U.S. reserves are mostly lignite, poorly suited to the high heat needed in manufacturing, civilization as we know it has a limited lifespan.

“The Great Simplification” will reduce precision. The good news is that hand-crafting of beautiful objects will return, a far more rewarding way of life than production lines at factories today.

Alice Friedemann   www.energyskeptic.com  author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, 2021, Springer; “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Winchester, S. 2018. The Perfectionists: How Precision Engineers created the modern world. HarperCollins.

Two particular aspects of precision need to be addressed. First, its ubiquity in the contemporary conversation—the fact that precision is an integral, unchallenged, and seemingly essential component of our modern social, mercantile, scientific, mechanical, and intellectual landscapes. It pervades our lives entirely, comprehensively, wholly.

Because an ever-increasing desire for ever-higher precision seems to be a leitmotif of modern society, I have arranged the chapters that follow in ascending order of tolerance, with low tolerances of 0.1 and 0.01 starting the story and the absurdly, near-impossibly high tolerances to which some scientists work today (claims of measurements of differences as small as 0.000 000 000 000 000 000 000 000 000 1 grams, or 10^-28 grams, have recently been made, for example) toward the end.

Any piece of manufactured metal (or glass or ceramic) must have chemical and physical properties: it must have mass, density, a coefficient of expansion, a degree of hardness, specific heat, and so on. It must also have dimensions: length, height, and width. It must possess geometric characteristics: it must have measurable degrees of straightness, of flatness, of circularity, cylindricity, perpendicularity, symmetry, parallelism, and position—among a mesmerizing host of other qualities even more arcane and obscure.

The piece of machined metal must have a degree of what has come to be known as tolerance. It has to have a tolerance of some degree if it is to fit in some way in a machine, whether that machine is a clock, a ballpoint pen, a jet engine, a telescope, or a guidance system for a torpedo.

To fit with another equally finely machined piece of metal, the piece in question must have an agreed or stated amount of permissible variation in its dimensions or geometry that will allow it to fit. That allowable variation is the tolerance, and the more precise the manufactured piece, the tighter (smaller) the tolerance that will be needed and specified.
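
The definition above can be made concrete in a few lines of code; the 25.00 mm nominal size and the ±0.05 mm band are illustrative values, not figures from the book.

```python
# Tolerance in its simplest form: a nominal dimension plus a permitted
# band of variation either way. Values here are illustrative only.
nominal_mm = 25.00
tolerance_mm = 0.05   # permissible variation in either direction

def in_tolerance(measured_mm):
    """True if a measured part is within the stated allowable variation."""
    return abs(measured_mm - nominal_mm) <= tolerance_mm

print(in_tolerance(25.04), in_tolerance(25.06))   # True False
```

Tightening `tolerance_mm` is, in this vocabulary, exactly what "more precise" means.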

The tolerances of the machines at the LIGO site are almost unimaginably tight, and the consequent precision of its components is of a level and nature neither known nor achieved anywhere else on Earth. LIGO is an observatory, the Laser Interferometer Gravitational-Wave Observatory. Its machines had to be constructed to standards of mechanical perfection that only a few years before were well-nigh inconceivable.

Precision’s birth derives from the then-imagined possibility of holding, managing, and directing steam, the invisible gaseous form of boiling water, so as to create power from it.

The father of true precision was an eighteenth-century Englishman named John Wilkinson, who was denounced sardonically as lovably mad, and especially so because of his passion for and obsession with metallic iron. He made an iron boat, worked at an iron desk, built an iron pulpit, ordered that he be buried in an iron coffin, which he kept in his workshop (and out of which he would jump to amuse his comely female visitors), and is memorialized by an iron pillar he had erected in advance of his passing in a remote village in south Lancashire.

Though the eventual function of the mechanical clock, brought into being by a variety of claimants during the fourteenth century, was to display the hours and minutes of the passing days, it remains one of the eccentricities of the period (from our current viewpoint) that time itself first played in these mechanisms a subordinate role. In their earliest medieval incarnations, clockwork clocks, through their employment of complex Antikythera-style gear trains and florid and beautifully crafted decorations and dials, displayed astronomical information at least as an equal to the presentation of time.

The behavior of the heavenly bodies was ordained by gods, and therefore was a matter of spiritual significance. As such, it was far worthier of human consideration than our numerical constructions of hours and minutes, and was thus more amply deserving of flamboyant mechanical display.

John Harrison was the man who most famously gave mariners a sure means of determining a vessel’s longitude. This he did by painstakingly constructing a family of extraordinarily precise clocks and watches, each accurate to just a few seconds over years, no matter how sea-punished its travels in the wheelhouse of a ship.

An official Board of Longitude was set up in London in 1714, and a prize of 20,000 pounds was offered to anyone who could determine longitude to an accuracy of 30 miles. John Harrison eventually, after a lifetime of heroic work on five timekeeper designs, would claim the bulk of the prize.

The fact that the Harrison clocks were British-invented and their successor clocks at first British-made allowed Britain, in the heyday of her empire, to become for more than a century the undisputed ruler of all the world’s oceans and seas. Precise-running clockwork made for precise navigation; precise navigation made for maritime knowledge, control, and power.

In place of the oscillating beam balances that made the magic of his large clocks so spectacular to see, he substituted a temperature-controlled spiral mainspring, together with a fast-beating balance wheel that spun back and forth at the hitherto unprecedented rate of some 18,000 times an hour. He also had an automatic remontoir, which rewound the mainspring eight times a minute, keeping the tension constant, the beats unvarying. There was a downside, though: this watch needed oiling, and so, in an effort to reduce friction and keep the needed application of oil to a minimum, Harrison introduced, where possible, bearings made of diamond, one of the early instances of a jeweled escapement.

It remains a mystery just how, without the use of precision machine tools—the development of which will be central to the story that follows—Harrison was able to accomplish all this. Certainly, all those who have made watches since then have had to use machine tools to fashion the more delicate parts of the watches: the notion that such work could possibly be done by the hand of a 66-year-old John Harrison still beggars belief. But John Harrison’s clockworks enjoyed perhaps only three centuries’ worth of practical usefulness.

For precision to be a phenomenon that would entirely alter human society, it has to be expressed in a form that is duplicable; it has to be possible for the same precise artifact to be made again and again with comparative ease and at a reasonable frequency and cost.

It was only when precision was created for the many that precision as a concept began to have the profound impact on society as a whole that it does today. The man who accomplished that single feat of creating something with great exactitude, making it not by hand but with a machine, and, moreover, with a machine specifically created to create it (a machine that makes machines, known today as a "machine tool," was, is, and will long remain an essential part of the precision story), was the 18th-century Englishman denounced for his supposed lunacy because of his passion for iron, the then-uniquely suitable metal from which all his remarkable new devices could be made.

Wilkinson is today rather little remembered. He is overshadowed quite comprehensively by his much-better-known colleague and customer, the Scotsman James Watt, whose early steam engines came into being, essentially, by way of John Wilkinson’s exceptional technical skills.

On January 27, 1774, John Wilkinson, whose local furnaces, all fired by coal, were producing a healthy twenty tons of good-quality iron a week, invented a technique for the manufacture of guns. The technique had an immediate cascade effect very much more profound, and of greater long-term importance, than anything he ever imagined. Up until then, naval cannons were cast hollow, with the interior tube through which the powder and projectile were pushed and fired then smoothed and widened by a rotating cutting tool.

The problem with this technique was that the cutting tool would naturally follow the passage of the tube, which may well not have been cast perfectly straight in the first place. This would then cause the finished and polished tube to have eccentricities, and for the inner wall of the cannon to have thin spots where the tool wandered off track.  And thin spots were dangerous—they meant explosions and bursting tubes and destroyed cannon and injuries to the sailors who manned the notoriously dangerous gun decks.

Then came John Wilkinson and his new idea. He decided that he would cast the iron cannon not hollow but solid. This, for a start, had the effect of guaranteeing the integrity of the iron itself—there were fewer parts that cooled early and came out with bubbles and  spongy sections (“honeycomb problems,” as they were called) for which hollow-cast cannon were then notorious.

The secret was in the boring of the cannon hole. Both ends of the operation, the part that did the boring and the part to be bored, had to be held in place, rigid and immovable, because to cut or polish something into dimensions that are fully precise, both tool and workpiece have to be clasped and clamped as tightly as possible to secure immobility.

Cannon after cannon tumbled from the mill, each accurate to the measurements the navy demanded, each one, once unbolted from the mill, identical to its predecessor and certain to be the same as the successor bolted on after it. The new system worked impeccably from the very start.

Yet what elevated Wilkinson’s new method to the status of a world-changing invention came the following year, 1775, when he started to do serious business with James Watt.

The principle of a steam engine is familiar, and is based on the simple physical fact that when liquid water is heated to its boiling point it becomes a gas. Because the gas occupies some 1,700 times greater volume than the original water, it can be made to perform work.

Thomas Newcomen realized he could increase the work done by injecting cold water into the steam-filled cylinder, condensing the steam and returning it to 1/1,700 of its volume, creating, in essence, a vacuum, which enabled the pressure of the atmosphere to force the piston back down again. This downstroke could then lift the far end of the rocker beam and, in doing so, perform real work: the beam could lift floodwater, say, out of a waterlogged tin mine. Thus was born a very rudimentary kind of steam engine, almost useless for any application beyond pumping water. The Newcomen engine and its like remained in production for more than 70 years, its popularity beginning to lessen only in the mid-1760s, when James Watt showed that it could be markedly improved.
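
The arithmetic behind that "real work" is easy to sketch. Assuming a roughly Newcomen-scale cylinder (the 21-inch bore is an illustrative figure, not from the text), condensing the steam into a near-vacuum lets the ~14.7 psi atmosphere drive the piston down:

```python
import math

atmos_psi = 14.7            # atmospheric pressure at sea level, psi
bore_in = 21.0              # assumed cylinder bore, inches (illustrative)
piston_area = math.pi * (bore_in / 2) ** 2   # piston face, square inches
downstroke_force_lb = atmos_psi * piston_area
print(round(downstroke_force_lb))   # roughly 5,000 lb pushing the piston down
```

A force of that order, applied once per stroke to a rocker beam, is what made pumping flooded mines practical.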

Watt realized that the central inefficiency of the engine he was examining was that the cooling water injected into the cylinder to condense the steam and produce the vacuum also managed to cool the cylinder itself. To keep the engine running efficiently, the cylinder needed to be kept as hot as possible at all times, so the cooling water should perhaps condense the steam not in the cylinder but in a separate vessel, keeping the vacuum in the main cylinder, which would thus retain the cylinder’s heat and allow it to take on steam once more. To make matters even more efficient, the fresh steam could be introduced at the top of the piston rather than the bottom, with stuffing of some sort placed and packed into the cylinder around the piston rod to prevent any steam from leaking out in the process.

These two improvements (the inclusion of a separate steam condenser and the changing of the inlet pipes to allow for the injection of new steam into the upper rather than the lower part of the main cylinder) changed Newcomen’s so-called fire-engine into a fully functioning steam-powered machine.

Once perfected, it was to be the central power source for almost all factories and foundries and transportation systems in Britain and around the world for the next century and more.

Yet billowing clouds of steam perpetually enveloped his engine in a damp, hot, opaque gray fog, which incensed James Watt. Try as he might, do as he could, steam always seemed to be leaking in prodigious gushes from the engine’s enormous main cylinder. He tried blocking the leak with all kinds of devices and substances. The gap between the piston’s outer surface and the cylinder’s inner wall should, in theory, have been minimal, and more or less the same wherever it was measured. But because the cylinders were made of iron sheets hammered and forged into a circle, and their edges then sealed together, the gap actually varied enormously from place to place. In some places, piston and cylinder touched, causing friction and wear. In other places, as much as half an inch separated them, and each injection of steam was followed by an immediate eruption from the gap.

Watt tried tucking in pieces of linseed oil–soaked leather; stuffing the gap with a paste made from soaked paper and flour; hammering in corkboard shims, pieces of rubber, even dollops of half-dried horse dung.

By the purest accident, John Wilkinson asked for an engine to be built for him, to act as a bellows for one of his iron forges—and in an instant, he saw and recognized Watt’s steam-leaking problem, and in an equal instant, he knew he had the solution: he would apply his cannon-boring technique to the making of cylinders for steam engines.  Watt beamed with delight. Wilkinson had solved his problem, and the Industrial Revolution—we can say now what those two never imagined—could now formally begin.

And so came the number, the crucial number, the figure that is central to this story, that which appears at the head of this chapter and which will be refined in its exactitude in all the remaining parts of this story. This is the figure of 0.1—one-tenth of an inch. This was the tolerance to which John Wilkinson had ground out his first cylinder.  All of a sudden, there was an interest in tolerance, in the clearance by which one part was made to fit with or into another. This was something quite new, and it begins, essentially, with the delivery of that first machine on May 4, 1776.

The central functioning part of the steam engine was possessed of a mechanical tolerance never before either imagined or achieved, a tolerance of 0.1 inches.

Locks were a British obsession at the time. The social and legislative changes that were sweeping the country in the late eighteenth century were having the undesirable effect of dividing society quite brutally: while the landed aristocracy had for centuries protected itself in grand houses behind walls and parks and ha-has, and with resident staff to keep mischief at bay, the enriched beneficiaries of the new business climate were much more accessible to the persistent poor.

Envy was abroad. Robbery was frequent. Fear was in the air. Doors and windows needed to be bolted. Locks had to be made, and made well. A lock such as Mr. Marshall’s, pickable in 15 minutes by a skilled man, and by a desperate and hungry man maybe in 10, was clearly not good enough. Joseph Bramah decided he would design and make a better one. He did so in 1784, less than a year after picking the Marshall lock. His patented design made it almost impossible for a burglar armed with a wax-covered key blank (the tool most favored by criminals, who used it to work out the positions of the various levers and tumblers inside a lock) to divine what lay beyond the keyhole, inside the workings.

Maudslay solved Bramah’s supply problems in a trice, by creating machines to make the locks. He built a whole family of machine tools, in fact, that would each make, or help to make, the various parts of the fantastically complicated locks Joseph Bramah had designed. They could make the parts fast and well and cheaply, without the errors that handcrafting and hand tools inevitably cause. The machines that Maudslay made would, in other words, make the necessary parts with precision.

Metal pieces can be machined into a range of shapes and sizes and configurations, and provided that the settings of the leadscrew and the slide rest are the same for every procedure, and the lathe operator can record these positions and make certain they are the same, time after time, then every machined piece will be the same—will look the same, measure the same, weigh the same (if of the same density of metal) as every other. The pieces are all replicable. They are, crucially, interchangeable. If the machined pieces are to be the parts of a further machine—if they are gearwheels, say, or triggers, or handgrips, or barrels—then they will be interchangeable parts, the ultimate cornerstone components of modern manufacturing. Of equally fundamental importance, a lathe so abundantly equipped as Maudslay’s was also able to make that most essential component of the industrialized world, the screw.

Screws were made to a tolerance of one ten-thousandth of an inch.

A slide rest allowed for the making of myriad items, from door hinges to jet engines to cylinder blocks, pistons, and the deadly plutonium cores of atomic bombs.

Maudslay next created, in truly massive numbers, a vital component for British sailing ships. He built the wondrously complicated machines that would, for the next 150 years, make ships’ pulley blocks, the essential parts of a sailing ship’s rigging that helped give the Royal Navy its ability to travel, police, and, for a while, rule the world’s oceans. At the time, sails were large pieces of canvas suspended, supported, and controlled by way of endless miles of rigging, of stays and yards and shrouds and footropes, most of which had to pass through systems of tough wooden pulleys known simply to navy men as blocks; beyond the maritime world, as block and tackle.

A large ship might have as many as 1,400 pulley blocks of varying types and sizes, depending on the task required. The lifting of a very heavy object such as an anchor might need an arrangement of six blocks, each with three sheaves, or pulleys, with a rope passing through all six such that a single sailor might exert a pull of only a few easy pounds in order to lift an anchor weighing half a ton.
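
Under the usual idealization (a frictionless tackle, whose mechanical advantage equals the number of rope parts supporting the load, with compound tackles multiplying), the half-ton figure works out roughly as follows. The 7-part rigging is an assumed example for illustration, not the navy's documented arrangement:

```python
def pull_required(load_lb, rope_parts):
    """Hauling force needed on an ideal (frictionless) tackle."""
    return load_lb / rope_parts

anchor_lb = 1120                     # half a long ton (2,240 lb)
single_tackle = pull_required(anchor_lb, 6)    # one tackle, 6 supporting parts
compound = pull_required(anchor_lb, 7 * 7)     # two 7-part tackles in series
print(round(single_tackle), round(compound, 1))
```

Real blocks lose a noticeable fraction of this advantage to sheave friction, so the "few easy pounds" is best read as tens of pounds shared among the hauling crew.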

Blocks for use on a ship are traditionally exceptionally strong, having to endure years of pounding water, freezing winds, tropical humidity, searing doldrums heat, salt spray, heavy duties, and careless handling by brutish seamen. Back in sailing-ship days, they were made principally of elm, with iron plates bolted onto their sides, iron hooks securely attached to their upper and lower ends, and with their sheaves, or pulleys, sandwiched between their cheeks, around which ropes would be threaded. The sheaves themselves were often made of lignum vitae, an extremely hard, dense wood from South American trees.

What principally concerned the admirals was not so much the building of enough ships as the supply of the vital blocks that would allow the sailing ships to sail. The Admiralty needed 130,000 of them every year. The complexity of their construction meant that they could be fashioned only by hand, by scores of artisanal woodworkers in and around southern England who were notoriously unreliable.

The Block Mills still stand as testament to many things, most famously to the sheer perfection of each and every one of the hand-built iron machines housed inside. So well were they made—they were masterpieces, most modern engineers agree—that most were still working a century and a half later; the Royal Navy made its last pulley blocks in 1965.

The Block Mills were the first factory to be run entirely by steam power. The next invention that mattered depended on flatness: a surface without curvature, indentation, or protuberance. It involved the creation of a base from which all precise measurement and manufacture could originate. For, as Maudslay realized, a machine tool can make an accurate machine only if the surface on which the tool is mounted is perfectly flat, perfectly plane, exactly level, its geometry entirely exact.

A bench micrometer could measure the actual dimensions of a physical object, making sure that the components of the machines being constructed would all fit together, within exact tolerances, and be precise for each machine and accurate to the design standard.

The micrometer that performed all these measurements turned out to be extremely accurate and consistent: this invention of his could measure down to one one-thousandth of an inch and, according to some, maybe even one ten-thousandth of an inch: to a tolerance of 0.0001.

To any schoolchild today, Eli Whitney means just one thing: the cotton gin. To any informed engineer, he signifies something very different: confidence man, trickster, fraud, charlatan, a reputation deriving almost entirely from his association with the gun trade, with precision manufacturing, and with the promise of being able to deliver weapons assembled from interchangeable parts. When Whitney won the commission and signed a government contract to do so in 1798, he knew nothing about muskets and even less about their components: he won the order largely because of his Yale connections and the old alumni network that, even then, flourished in the corridors of power in Washington, DC.

It was John Hall who succeeded in making precision guns. At every stage of the work, from the forging of the barrel to the cutting of the rifling, his 63 gauges were set to work, more than any engineer before him had used, to ensure as best he could that every part of every gun was exactly the same as every other, and that all were made to far stricter tolerances than hitherto: for a lock merely to work required a tolerance of maybe a fifth of a millimeter; to ensure that it not only worked but was infinitely interchangeable, he needed the pieces machined to a fiftieth of a millimeter.

Precision came to shoes by turning a shapeless block of wood into a foot-shaped form of specific dimensions, repeatable time and time again. These shoemakers’ lasts came in exact sizes: seven inches long, nine, and so on. Before shoes were made precisely, they were offered up in barrels, and customers pulled them out at random, trying to find a pair that more or less fit.

Oliver Evans was making flour-milling machinery; Isaac Singer introduced precision into the manufacturing of sewing machines; Cyrus McCormick was creating reapers, mowers, and, later, combine harvesters; and Albert Pope was making bicycles for the masses.

Joseph Whitworth was an absolute champion of accuracy, an uncompromising devotee of precision, and the creator of a device, unprecedented at the time, that could truly measure to an unimaginable one-millionth of an inch. Using his superb mechanical skills, in 1859 he created a micrometer that allowed one complete turn of the micrometer wheel to advance the screw not by 1/20 of an inch but by 1/4,000 of an inch, a truly tiny amount.

Whitworth then incised 250 divisions on the turning wheel’s circumference, which meant that the operator of the machine, by turning the wheel through just one division, could advance or retard the screw by a further one-millionth of an inch. Provided the ends of the item being measured were as plane as the plates on the micrometer, opening the gap by that 1/1,000,000 of an inch would make the difference between the item being held firmly and its falling under the influence of gravity.

Now metal pieces could be made and measured to a tolerance of one-millionth of an inch.
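
Whitworth's millionth reduces to the two numbers given in the passage above, as a quick check with exact rational arithmetic shows:

```python
from fractions import Fraction

advance_per_turn = Fraction(1, 4000)   # inches the screw travels per full turn
divisions = 250                        # graduations incised on the wheel
per_division = advance_per_turn / divisions
print(per_division)                    # one-millionth of an inch per division
```

Using `Fraction` rather than floats keeps the result exact, which is in the spirit of the machine being described.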

Until Whitworth, each screw and nut and bolt was unique to itself, and the chance that any one-tenth-inch screw, say, might fit any randomly chosen one-tenth-inch nut was slender at best.

With the Model T, Henry Ford changed everything. From the start, he was insistent that no metal filing ever be done in his motor-making factories, because all the parts, components, and pieces he used for the machine would come to him already precisely finished, and to tolerances of cruelly exacting standards such that each would fit exactly without the need for even the most delicate of further adjustment. Once that aspect of his manufacturing system was firmly established, he created a whole new means of assembling the bits and pieces into cars.  He demanded a standard of precision for his components that had seldom been either known or achieved before, and he now married this standard to a new system of manufacture seldom tried before.

The Model T was built from a few thousand parts. A modern car has more than 30,000.

At Rolls-Royce, it may seem as though the worship of the precise was entirely central to the making of these enormously comfortable, stylish, swift, and comprehensively memorable cars. In fact, precision was far more crucial to the making of the less costly, less complex, less remembered machines that poured from the Ford plants around the world, and for a simple reason: the production lines required a limitless supply of parts that were exactly interchangeable.

If one happened not to be so exact, and if an assembly-line worker tried to fit this inexact and imprecise component into a passing workpiece and it refused to fit and the worker tried to make it fit, and wrestled with it—then, just like Charlie Chaplin’s assembly-line worker in Modern Times or, less amusingly, one in Fritz Lang’s Metropolis, the line would slow and falter and eventually stop, and workers for yards around would find their work disrupted, and parts being fed into the system would create unwieldy piles, and the supply chain would clog, and the entire production would slow and falter and maybe even grind, quite literally, to a painful halt. Precision, in other words, is an absolute essential for keeping the unforgiving tyranny of a production line going.

Henry Ford had been helped in his aim of making it so by using one component (and then buying the firm that made it), a component whose creation, by a Swedish man of great modesty, turned out to be of profoundly lasting importance to the world of precision. The Swede was Carl Edvard Johansson, popularly and proudly known by every knowledgeable Swede today as the world’s Master of Measurement. He was the inventor of the set of precise pieces of perfectly flat, hardened steel known to this day as gauge blocks, slip gauges, or, to his honor and in his memory, as Johansson gauges, or quite simply, Jo blocks.

His idea was to create a set of gauge blocks that, held together in combination, could in theory measure any needed dimension. He calculated that the minimum number needed was 103 blocks of certain carefully specified sizes. Arranged in three series, they made it possible to take some 20,000 measurements in increments of one one-thousandth of a millimeter by laying two or more blocks together. His 103-piece combination gauge block set has since directly and indirectly taught engineers, foremen, and mechanics to treat tools with care, and at the same time given them familiarity with dimensions of thousandths and ten-thousandths of a millimeter.
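
How 103 fixed sizes reach tens of thousands of dimensions can be sketched with the classic "digit elimination" stacking rule. The set below follows the common 103-piece metric layout, which differs in detail from Johansson's original three series, so treat the sizes as an assumption:

```python
# All arithmetic in micrometres (thousandths of a millimetre) to avoid
# floating-point error. Block sizes follow the common 103-piece metric
# layout, an assumption standing in for Johansson's original series.
SET_UM = (
    [1000 + i for i in range(1, 10)]          # 1.001-1.009 mm, 0.001 mm steps
    + [1000 + 10 * i for i in range(1, 50)]   # 1.01-1.49 mm, 0.01 mm steps
    + [500 * i for i in range(1, 50)]         # 0.5-24.5 mm, 0.5 mm steps
    + [25000, 50000, 75000, 100000]           # 25, 50, 75, 100 mm
)

def build_stack(target_um):
    """Pick blocks by clearing the finest decimal places first."""
    blocks, rem = [], target_um
    if rem % 10:                              # clear the 0.001 mm digit
        b = 1000 + rem % 10
        blocks.append(b); rem -= b
    h = rem % 1000
    if h:                                     # clear the 0.01 mm digits
        b = 1000 + (h if h < 500 else h - 500)
        blocks.append(b); rem -= b
    for b in (100000, 75000, 50000, 25000):   # then the large blocks
        while rem >= b and (rem - b == 0 or rem - b >= 500):
            blocks.append(b); rem -= b
    if rem:                                   # one 0.5 mm-series block remains
        blocks.append(rem); rem = 0
    assert all(blk in SET_UM for blk in blocks)
    return blocks

print(build_stack(57123))   # 57.123 mm from just four wrung-together blocks
```

The stack for 57.123 mm comes out as 1.003 + 1.12 + 50 + 5 mm: a handful of blocks, wrung together, standing in for a dimension no single block provides.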

Gauge blocks first came to the United States in 1908. Ford’s cars were precise only to themselves: every manufactured piece fit impeccably because it was interchangeable within Ford’s own system. But once an absolutely impeccably manufactured, gauge-block-confirmed piece from another company (a ball bearing from SKF, say) was introduced into the Ford system, its absolute perfection might trump Ford’s, and Ford would be wrong: ever so slightly, maybe, but wrong nonetheless.

After the Great War, gauge blocks achieved accuracies of up to one-millionth of an inch.

Piston engines have hundreds of parts jerking to and fro, and cannot be made more powerful without becoming too complicated. A modern jet engine can produce more than 100,000 horsepower, yet, essentially, it has only a single moving part: a spindle, a rotor, which is induced to spin and, in doing so, causes many pieces of high-precision metal to spin with it.

All that ensures they work as well as they do are the rare and costly materials from which they are made, the protection of the integrity of the pieces machined from these materials, and the superfine tolerances to which every part is manufactured. Since any increase in engine power, and thus aircraft speed, would demand heavier piston engines, perhaps too heavy for an aircraft to carry, a new kind of engine was invented: the gas turbine. A crucial element in any combustion engine is air: air is drawn into the engine, mixed with fuel, and then burns or explodes. The thermal energy from that event is turned into kinetic energy, and the engine’s moving parts are powered. But the amount of air sucked into a piston engine is limited by the size of its cylinders. In a gas turbine, there is almost no limit: a gigantic fan at the opening of such an engine can swallow vastly more air than can be taken into a piston engine.

Gas turbines were already beginning to power ships, to generate electricity, to run factories. The simplicity of the basic idea was immensely attractive. Air was drawn in through a cavernous doorway at the front of the engine and immediately compressed, and made hot in the process, and was then mixed with fuel, and ignited. It was the resulting ferociously hot, tightly compressed, and controlled explosion that then drove the turbine, which spun its blades and then performed two functions. It used some of its power to drive the aforementioned compressor, which sucked in and squeezed the air, but it then had a very considerable fraction of its power left, and so was available to do other things, such as turn the propeller of a ship, or turn a generator of electricity, or turn the driving wheels of a railway locomotive (didn’t happen, too many problems), or provide the power for a thousand machines in a factory and keep them running, tirelessly.

Britain’s first jet plane flew in 1941, and the public learned about it only in 1944. Inside a jet engine, everything is a diabolic labyrinth, a maze of fans and pipes and rotors and discs and tubes and sensors and a Turk’s head of wires of such confusion that it doesn’t seem possible that any metal thing inside it could possibly even move without striking and cutting and dismembering all the other metal things that are crammed together in such dangerously interfering proximity. Yet work and move a jet engine most certainly does, with every bit of it impressively engineered to do so, time and again, and under the harshest and fiercest of working conditions.

There are scores of blades of various sizes in a modern jet engine, whirling this way and that and performing the various tasks that help push hundreds of tons of airplane up and through the sky. But the blades of the high-pressure turbine represent the truest marvel of engineering achievement, primarily because the blades themselves, rotating at incredible speeds and each one generating at maximum operation as much power as a Formula One racing car, operate in a stream of gases far hotter than the melting point of the metal from which they were made. What stops these blades from melting?

It turns out to be possible to cool the blades by drilling hundreds of tiny holes in each one and by creating inside each blade a network of tiny cooling tunnels, all manufactured at sizes and to tolerances that were quite unthinkable only a few years ago.

The first blades that Whittle made were of steel, which somewhat limited the performance of his early prototypes, since steel loses its structural integrity at temperatures higher than about 500 degrees Celsius. But alloys were soon found that made matters much easier, and blades were constructed from these new metal compounds. They did not run the risk of melting, because the temperatures at which they operated were on the order of a thousand degrees, and the special nickel-and-chromium alloy from which they were made, known as Nimonic, remained solid and secure and stiff up to 1,400 degrees Celsius (2,550 degrees Fahrenheit).

The next generation of engines required that the gas mixture roaring out from the combustion chamber be heated to around 1,600 degrees Celsius, yet even the finest of the alloys then in use melted at around 1,455 degrees Celsius. The metals tended to lose their strength and become soft and vulnerable to all kinds of shape changes and expansions at even lower temperatures. In fact, extended thermal pummeling of the blades at anything above 1,300 degrees Celsius was regarded by early researchers as just too difficult and risky.

Most of that air bypasses the engine (for reasons that are beyond the scope of this chapter), but a substantial portion of it is sent through a witheringly complex maze of blades, some whirling, some bolted and static, that make up the front and relatively cool end of a jet engine and that compress the air by as much as 50 times. The one ton of air taken in each second by the fan, which would in normal circumstances entirely fill the space of a squash court, is squeezed to a point where it could fit into a decent-size suitcase. It is dense, and it is hot, and it is ready for high drama. Very nearly all this compressed air is directed straight into the combustion chamber, where it mixes with sprayed kerosene, is ignited by an array of electronic matches, as it were, and explodes directly into the whirling wheel of turbine blades. These blades (more than ninety of them in a modern jet engine, attached to the outer edge of a disc rotating at great speed) are the first port of call for the air before it passes through the rest of the turbine and, joining the bypassed cool air from the fan, gushes wildly out of the rear of the engine and pushes the plane forward. "Nearly all" is the key. Some of this cool air, the Rolls-Royce engineers realized, could be diverted before it reached the combustion chamber and fed into tubes in the disc onto which the blades were bolted. From there, it could be directed into a branching network of channels or tunnels machined into the interior of the blade itself. The blade was now filled with cool air (cool only by comparison: the simple act of compressing it made it quite hot, about 650 degrees Celsius, but still cooler by a thousand degrees than the post-combustion-chamber fuel-air mixture).
To make use of this cool air, scores of unimaginably tiny holes were then drilled into the blade surface, drilled with great precision and delicacy and in configurations that had been dictated by the computers, and drilled down through the blade alloy until each one of them reached just into the cool-air-filled tunnels—thus immediately allowing the cool air within to escape or seep or flow or thrust outward, and onto the gleaming hot surface of the blade.
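The temperature figures in the passage can be tied together with the textbook film-cooling effectiveness formula. This is only a back-of-envelope sketch: the three temperatures are the ones quoted above (gas at 1,600 °C, compressed bleed air at 650 °C, a blade surface held to roughly 1,300 °C), and the formula is the standard definition, not anything from Rolls-Royce.

```python
# Film-cooling effectiveness: eta = (T_gas - T_surface) / (T_gas - T_coolant).
# This gives the cooling effectiveness required to hold the blade surface
# at a tolerable temperature, using only the values quoted in the passage.
t_gas = 1600.0      # post-combustion gas temperature, deg C
t_coolant = 650.0   # cooling air bled from the compressor, deg C
t_surface = 1300.0  # maximum tolerable blade surface temperature, deg C

eta = (t_gas - t_surface) / (t_gas - t_coolant)
print(f"required film-cooling effectiveness: {eta:.2f}")
```

An effectiveness of roughly a third is well within what modern film cooling achieves, which is why the thousand-degree-cooler bleed air is enough to let the blade survive in gas hotter than its own melting point.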

It is here that the awesome computational power that has been available since the late 1960s comes into its own, becomes so crucially useful. Aside from the complex geometry of the hundreds of tiny pinholes, there is the fact that the blades are grown, incredibly, from a single crystal of metallic nickel alloy. This makes them extremely strong, which they need to be: in their high-temperature whirlings they are subjected to centrifugal forces equivalent to the weight of a double-decker London bus. Very basically, the molten metal (an alloy of nickel, aluminum, chromium, tantalum, titanium, and five other rare-earth elements that Rolls-Royce coyly refuses to discuss) is poured into a mold that has at its base a curious little thrice-turned twisted tube, which resembles nothing so much as the tail of a pig. As the cooling metal is drawn down through this pigtail, the spiral's turns weed out every crystal orientation but one, so the solidifying blade ends up with all its molecules lined up evenly.

It has become a single crystal of metal, and thus its eventual resistance to all the physical problems that normally plague metal pieces like this is mightily enhanced. It is very much stronger, which it needs to be, considering the enormous centrifugal forces.

Electrical discharge machining, or EDM, as it is more generally known, employs just a wire and a spark, both of them tiny, the whole process directed by computer and inspected by humans, using powerful microscopes, as it happens. The more complex the engines, the more holes need to be drilled into the various surfaces of a single blade: in a Trent XWB engine there are some 600, arranged in bewildering geometries to ensure that the blade remains stiff, solid, and as cool as possible. The blades' integrity owes much to the geometry of the cooling holes being drilled, which is measured and computed and checked by skilled human beings. No tolerance whatsoever can be accorded to any errors that might creep into the manufacturing process, for a failure in this part of a jet engine can turn into a swiftly accelerating disaster.

As the tolerances shrink still further and limits are reached that even the most well-honed human skills cannot match, automation has to take over. The Advanced Blade Casting Facility can perform all these tasks (from the injection of the losable wax to the growing of single-crystal alloys to the drilling of the cooling holes) with the employment of no more than a handful of skilled men and women. It can turn out 100,000 blades a year, all free of errors.

But failure was still possible. The fate of the passengers depended on the performance of one tiny metal pipe, no more than five centimeters long and three-quarters of a centimeter in diameter, into which someone at a factory in the northern English Midlands had bored a tiny hole, but had mistakenly bored it fractionally out of true. The engine part in question is called an oil feed stub pipe, and though there are many small steel tubes wandering snakelike through any engine, this particular one, a slightly wider stub at the end of a longer but narrower snakelike pipe, was positioned in the red-hot air chamber between the high- and intermediate-pressure turbine discs. It was designed to send oil down to the bearings on the rotor that carried the fast-spinning disc. It was machined improperly because the drill bit that did the work was misaligned, with the result that along one small portion of its circumference the tube was about half a millimeter too thin.

Metal fatigue is what caused the engine to fail. The aircraft had spent 8,500 hours aloft and had performed 1,800 takeoff and landing cycles. It is these last that punish the mechanical parts of a plane: the landing gear, the flaps, the brakes, and the internal components of the jet engines. Every time there is a truly fast or steep takeoff, or a hard landing, these parts are put under stress momentarily greater than the running stresses of temperature and pressure for which the innards of a jet engine are notorious.
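As a quick sanity check on those service-life figures, simple arithmetic on the two numbers quoted above shows why the cycle count, not the hours, is the relevant measure of wear:

```python
# Average flight length implied by the engine's history
# (both values taken directly from the passage).
flight_hours = 8_500
takeoff_landing_cycles = 1_800

avg_flight_hours = flight_hours / takeoff_landing_cycles
print(f"~{avg_flight_hours:.1f} hours aloft per takeoff-landing cycle")
```

Each of those roughly 1,800 cycles, not the cruising hours in between, delivers the stress spikes that drive fatigue cracks through a marginally thin pipe wall.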

Heisenberg, in helping in the 1920s to father the concepts of quantum mechanics, made discoveries and presented calculations that first suggested this might be true: that in dealing with the tiniest of particles, the tiniest of tolerances, the normal rules of precise measurement simply cease to apply. At near-and subatomic levels, solidity becomes merely a chimera; matter comes packaged as either waves or particles that are by themselves both indistinguishable and immeasurable and, even to the greatest talents, only vaguely comprehensible.

In making the smallest parts for today's great jet engines, we are reaching nowhere near the limits that so exercise the minds of quantum mechanicians. Yet we have reached a point in the story where we begin to notice our own possible limitations and, by extension and extrapolation, also the possible end point of our search for perfection.

An overlooked measurement error on the mirror amounting to one-fiftieth the thickness of a human hair managed to render most of the images beamed down from Hubble fuzzy and almost wholly useless.

Chapter 9 (TOLERANCE: 0.000 000 000 000 000 000 000 000 000 000 000 01, i.e., 35 decimal places)

Here we come to the culmination of precision’s quarter-millennium evolutionary journey. Up until this moment, almost all the devices and creations that required a degree of precision in their making had been made of metal, and performed their various functions through physical movements of one kind or another. Pistons rose and fell; locks opened and closed; rifles fired; sewing machines secured pieces of fabric and created hems and selvedges; bicycles wobbled along lanes; cars ran along highways; ball bearings spun and whirled; trains snorted out of tunnels; aircraft flew through the skies; telescopes deployed; clocks ticked or hummed, and their hands moved ever forward, never back, one precise second at a time. Then came the computer, into an immobile and silent universe, one where electrons and protons and neutrons have replaced iron and oil and bearings and lubricants and trunnions and the paradigm-altering idea of interchangeable parts.

Precision had by now reached a degree of exactitude that would be of relevance and use only at the near-atomic level.

Intel's Fab 42 is a maker of electronic microprocessor chips, the operating brains of almost all the world's computers. The enormous ASML devices allow the firm to manufacture these chips, and to place transistors on them in huge numbers and at the almost unreal level of precision and minuteness of scale that today's computer industry, pressing for ever-speedier and more powerful computers, endlessly demands.

Gordon Moore, one of the founders of Intel, is most probably the man to blame for this trend toward ultraprecision in the electronics world. He made an immense fortune by devising the means to make ever-smaller transistors and to cram millions, then billions, of them onto a single microprocessing chip. There are now more transistors at work on this planet (some 15 quintillion, or 15,000,000,000,000,000,000) than there are leaves on all the trees in the world. In 2015, the four major chip-making firms were making 14 trillion transistors every single second, and the sizes of the individual transistors are well down toward the atomic scale.
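Moore's observation, that transistor counts double roughly every two years, can be sanity-checked with simple arithmetic. In the sketch below, the 1971 baseline of 2,300 transistors on Intel's first microprocessor, the 4004, and the two-year doubling period are standard textbook figures assumed here, not taken from this passage:

```python
# Moore's law as arithmetic: transistor counts roughly double every two years.
transistors_1971 = 2_300       # Intel 4004, the first microprocessor (1971)
doubling_period_years = 2

def projected_count(year):
    """Project a chip's transistor count forward from the 1971 baseline."""
    doublings = (year - 1971) / doubling_period_years
    return transistors_1971 * 2 ** doublings

# By the mid-2010s the projection lands in the billions of transistors
# per chip, consistent with the chips described in this chapter.
print(f"{projected_count(2015):.2e}")
```

Forty-four years of doublings turns a few thousand transistors into a few billion, which is why precision at near-atomic scale became an economic necessity rather than a curiosity.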

When the Broadwell family of chips was created in 2016, node size was down to a previously inconceivable fourteen-billionths of a meter (the size of the smallest of viruses), and each chip contained no fewer than seven billion transistors. The Skylake chips made by Intel at the time of this writing have transistors that are sixty times smaller than the wavelength of the light used by human eyes, and so are literally invisible.

It takes three months to complete a microprocessing chip, starting with the growing of a 400-pound, very fragile, cylindrical boule of pure smelted silicon, which fine-wire saws will cut into dinner plate–size wafers, each an exact two-thirds of a millimeter thick. Chemicals and polishing machines will then smooth the upper surface of each wafer to a mirror finish, after which the polished discs are loaded into ASML machines for the long and tedious process toward becoming operational computer chips. Each wafer will eventually be cut along the lines of a grid that will extract a thousand chip dice from it—and each single die, an exactly cut fragment of the wafer, will eventually hold the billions of transistors that form the non-beating heart of every computer, cellphone, video game, navigation system, and calculator on modern Earth, and every satellite and space vehicle above and beyond it.

What happens to the wafers before the chips are cut out of them demands an almost unimaginable degree of miniaturization. Patterns of newly designed transistor arrays are drawn with immense care onto transparent fused silica masks, and then lasers are fired through these masks and the beams directed through arrays of lenses or bounced off long reaches of mirrors, eventually to imprint a highly shrunken version of the patterns onto an exact spot on the gridded wafer, so that the pattern is reproduced, in tiny exactitude, time and time again. After the first pass by the laser light, the wafer is removed, is carefully washed and dried, and then is brought back to the machine, whence the process of having another submicroscopic pattern imprinted on it by a laser is repeated, and then again and again, until thirty, forty, as many as sixty infinitesimally thin layers of patterns (each layer and each tiny piece of each layer a complex array of electronic circuitry) are engraved, one on top of the other.
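The die-size arithmetic implied by that description can be sketched quickly. The 300 mm wafer diameter below is the industry-standard assumption (the passage says only "dinner plate–size"), while the thousand-dice figure comes from the passage itself:

```python
import math

# Rough die-size arithmetic for the wafer described above.
wafer_diameter_mm = 300.0   # assumed: the standard modern wafer diameter
dice_per_wafer = 1000       # from the passage: ~1,000 dice cut per wafer

wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2   # ~70,700 mm^2
die_area = wafer_area / dice_per_wafer                # ~70 mm^2 per die
die_side = math.sqrt(die_area)                        # ~8.4 mm on a side

print(f"die area ~{die_area:.0f} mm^2, roughly {die_side:.1f} mm square")
```

A fingernail-size square of under a centimeter on a side is indeed what ends up holding billions of transistors, which gives a sense of the packing density the lithography must achieve.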

Rooms within the ASML facility in Holland are very much cleaner than an ordinary clean room. They are clean to the far more brutally restrictive demands of ISO number 1, which permits only 10 particles of just one-tenth of a micron per cubic meter, and no particles of any size larger than that. A human being existing in a normal environment swims in a miasma of air and vapor that is five million times less clean.
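Those two figures combine into a single number for ordinary air; the arithmetic below uses only the values quoted above:

```python
# Particle counts implied by the passage: ISO class 1 permits at most
# 10 particles of 0.1 micron per cubic meter, and ordinary air is
# described as five million times less clean.
iso1_limit = 10                          # particles per cubic meter
ordinary_air = iso1_limit * 5_000_000    # implied count for everyday air

print(f"{ordinary_air:,} particles per cubic meter of ordinary air")
```

Tens of millions of particles per cubic meter of everyday air is why a single unsuited human walking through the fab would be, by ISO 1 standards, a rolling contamination event.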

The test masses on the LIGO devices in Washington State and Louisiana are so exact in their making that the light reflected by them can be measured to one ten-thousandth of the diameter of a proton.

Alpha Centauri A lies 4.3 light-years away, a distance of some 26 trillion miles (in full, 26,000,000,000,000 miles). It is now known with absolute certainty that the cylindrical test masses on LIGO can help to measure that vast distance to within the width of a single human hair.
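That claim can be restated as a dimensionless precision. In the sketch below, the ~100 micron hair width and the miles- and meters-per-light-year conversion factors are standard values assumed here, not taken from the passage:

```python
# Restating the LIGO precision claim as a single ratio.
miles_per_light_year = 5.879e12    # standard conversion, assumed
meters_per_light_year = 9.461e15   # standard conversion, assumed
hair_width_m = 100e-6              # ~100 microns, a typical human hair

distance_ly = 4.3                  # Alpha Centauri A, from the passage
distance_miles = distance_ly * miles_per_light_year  # close to the quoted 26 trillion
distance_m = distance_ly * meters_per_light_year

# One hair width over 4.3 light-years: a fractional precision of ~2.5e-21.
strain = hair_width_m / distance_m
print(f"{distance_miles:.2e} miles, fractional precision {strain:.1e}")
```

A fractional precision of a few parts in 10^21 is exactly the regime of LIGO's strain measurements, which is what makes the hair-across-the-light-years image more than just a metaphor.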

 

Posted in Infrastructure, Jobs and Skills, Life After Fossil Fuels, Manufacturing & Industrial Heat | Comments Off on Life After Fossil Fuels: manufacturing will be less precise