Was the fall of the Roman Empire due to plagues & climate change?

Preface. Harper (2017) shows the brutal effects of plagues and climate change on the Roman Empire. McConnell (2020) proposes that a huge volcanic eruption in Alaska was a factor in bringing the Roman Republic and Cleopatra's Egypt down.

In addition, there are other ecological reasons for collapse not mentioned in this book, such as deforestation (A Forest Journey: The Story of Wood and Civilization by John Perlin), topsoil erosion (Dirt: The Erosion of Civilizations by David Montgomery), and barbarian invasions (The Fall of Rome: And the End of Civilization and Empires and Barbarians: The Fall of Rome and the Birth of Europe).

Alice Friedemann, www.energyskeptic.com. Author of Life After Fossil Fuels: A Reality Check on Alternative Energy; When Trucks Stop Running: Energy and the Future of Transportation; Barriers to Making Algal Biofuels; and Crunch! Whole Grain Artisan Chips and Crackers. Women in ecology. Podcasts: WGBH, Jore, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 & 278, Peak Prosperity. Index of best energyskeptic posts.

***

McConnell JR et al (2020) Extreme climate after massive eruption of Alaska’s Okmok volcano in 43 BCE and effects on the late Roman Republic and Ptolemaic Kingdom.  Proceedings of the National Academy of Sciences.

Caesar's assassination happened at a time of unusually cold, wet weather, as much as 13.3°F (7.4°C) cooler than today, with up to 400% more rain drenching farmland and causing crop failures that led to food shortages and disease. In Egypt the annual Nile flood that agriculture depended on failed. Although an eruption of Mount Etna in Sicily in 44 BC has been blamed, this paper found evidence that it may have been the eruption of the Okmok volcano in Alaska that altered the climate enough to weaken the Roman and Egyptian states. It was one of the largest eruptions of the past few thousand years (Kornei K (2020) Ancient Rome Was Teetering. Then a Volcano Erupted 6,000 Miles Away. Scientists have linked historical political instability to a number of volcanic events. New York Times).

A more nuanced and critical look at this scientific paper can be found here: Middleton G (2020) Did a volcanic eruption in Alaska help end the Roman republic? The Conversation.

Kyle Harper. 2017. The Fate of Rome: Climate, Disease, and the End of an Empire. Princeton University Press.

How the Antonine plague from 165 to 180 AD affected the Pagan cults

To the ancient mind, plague was an instrument of divine anger. The Antonine Plague provoked spectacular acts of religious supplication at the civic level, fired by the great oracular temples of the god Apollo. The emperors started minting a new image on the currency, invoking “Apollo the Healer.” Religious solutions were desperately sought in Rome.

Pagan philosopher Porphyry blamed the insolence of the Christians for this health catastrophe. “And they marvel that the sickness has befallen the city for so many years, while Asclepius and the other gods are no longer dwellers among us. For no one has seen any succor for the people while Jesus is being honored.” Valerian implemented measures that were unequivocally aimed at hunting out Christians.

The rise of Christianity from the Cyprian plague of AD 249-262 (possibly Ebola or smallpox)

This plague was instrumental in making Christianity popular, because the pagan religions did nothing for pandemic victims. Because Christianity forged kinship-like networks among perfect strangers based on an ethic of sacrificial love, Christian ethics turned the chaos of pestilence into a mission of aid. The vivid promise of the resurrection helped convince the faithful not to fear death. Priests pleaded with them to show love to the enemy, so they helped everyone, pagans and Christians alike. The compassion was conspicuous and consequential. Basic nursing of the sick can have massive effects on case fatality rates; with Ebola, for instance, the provision of water and food may drastically reduce the incidence of death. The Christian ethic was a blaring advertisement for the faith. The church was a safe harbor in the storm. The traditional civic cults lost favor.

So much death and the alternative of religious life made it hard to find soldiers

The empire’s fortunes reached a low tide in the AD 260s. The cities were never quite the same; even the healthiest late antique cities were smaller than they had formerly been, and in aggregate, even after the recovery, there were simply fewer major towns. The old days when army recruitment could be handled with a light touch were forever gone.

The fourth-century state had to contend with at least one truly novel alternative to military service: the allure of the religious life for men who might have heeded the call to arms. “The huge army of clergy and monks were for the most part idle mouths.” By the end of the fourth century, their total number was perhaps half the size of the actual army, a not inconsiderable drain on the manpower reserves of the empire. The civil service was also an attractive, and safe, career. The vexing issue of military recruitment in the fourth century was not directly a demographic problem.

Supply chains played a role in spreading disease

Supply chains and manufacturing were extensive. For example, consider the accoutrements of soldiers. The Roman soldier carried arms manufactured in over three dozen specialized imperial factories spaced across three continents. Officers wore bronze armor, embellished with silver and gold, made at five different plants. Roman archers would have used bows made in Pavia and arrows made in Mâcon. The foot soldier was dressed in a uniform (shirt, tunic, and cloak) made at imperial textile mills and finished at separate dye-works. He wore boots made at a specialized manufactory. When a Roman cavalryman of the later fourth century rode into battle, he was mounted on a mare or gelding that had been bred on imperial stud farms in Cappadocia, Thrace, or Spain. The troops were fed by a lumbering convoy system that carried provisions across continents in mind-boggling bulk. The emperor Constantius II ordered 3 million bushels of wheat to be stored in the depots of the Gallic frontier and another 3 million bushels in the Alps, before moving his field army to the west.

These extensive supply chains helped to spread the Antonine and Cyprian pandemics, and then, in AD 541-542, one of the worst pandemics of all: bubonic plague. The fusion of global trade and rodents led to the greatest disease event human civilization had ever experienced. The plague is an exceptional and promiscuous killer. Compared to smallpox, influenza, or a filovirus, Y. pestis is a huge microbe, lumbering along with an array of weapons. But it is in constant need of a ride.

The plague moved at two speeds: swiftly by sea and slowly by land. The mere sight of ships stirred terror.  Once infected rats made landfall, the diffusion of the disease was accelerated by Roman transportation networks. Carts and wagons carried rodent stowaways along Roman roads. It could spread anywhere that rats could travel.

Climate change and the Huns

The 4th century was a time of megadrought. The two decades from ca. AD 350 to 370 were the worst multi-decadal drought event of the last two millennia. The nomads who called central Asia home suddenly faced a crisis as dramatic as the Dust Bowl. The Huns became armed climate refugees on horseback. Their mode of life enabled them to search out new pastures with amazing speed. In the middle of the fourth century, the center of gravity on the steppe shifted from the Altai region (on the borders of what is today Kazakhstan and Mongolia) to the west. By AD 370, Huns had started to cross the Volga River. The advent of these people on the western steppe was momentous, terrorizing the tribes north of Italy, who fled to the Roman Empire in great numbers to escape them (for a longer explanation of the effect of the Huns, see my book review of "The Fall of Rome: And the End of Civilization" and "Empires and Barbarians: the fall of Rome and the birth of Europe").

They brought new cavalry tactics that terrorized the inhabitants of the trans-Danubian plains. Their horses were ferociously effective. In the words of a Roman veterinary text, “For war, the horses of the Huns are by far the most useful, by reason of their endurance of hard work, cold and hunger.” What made the Huns overwhelming was their basic weapon, the composite reflex bow.

The Justinian Plague (541 to 749 AD)

Justinian reigned as emperor from AD 527 to 565. Less than a decade into his reign, he had already accomplished more than most who had ever held the title. The first part of his reign was a flurry of action virtually unparalleled in Roman history. Between his accession in AD 527 and the advent of plague in AD 541, Justinian made peace with Persia, reattached vast stretches of the western territories to Roman rule, codified the entire body of Roman law, overhauled the fiscal administration, and executed the grandest building spree in the annals of Roman history. He survived a perilous urban revolt and tried to forge orthodox unity in a fractious church, through his own theological labors.

In the spring of AD 542 the plague (Yersinia pestis) appeared for the first time in the capital, Constantinople. For the next 23 years it became difficult to find and field armies. Taxes rose to unseen heights. There have been two major plague pandemics since then: the second, which began with the Black Death of AD 1346-53 and recurred for nearly 500 years, and the third, which began in 1855 in Yunnan, China, and spread globally.

The dependence of the imperial system on the transport and storage of grain made the Roman Empire a haven for the black rat.

It required one last twist of fate for the bacterium to make its grand entrance into the Roman world. The Asian uplands had prepared a monster in the germ Y. pestis. The ecology of the empire had built an infrastructure awaiting a pandemic. The silk trade was ready to ferry the deadly package. But the final conjunction, what finally let the spark jump, was abrupt climate change. The year AD 536 is known as a “Year without Summer.” It was the terrifying first spasm in what is now known to be a cluster of volcanic explosions unmatched in the last three thousand years. Again in AD 540–41 there was a gripping volcanic winter. As we will see in the next chapter, the AD 530s and 540s were not just frosty. They were the coldest decades in the late Holocene. The reign of Justinian was beset by an epic, once-in-a-few-millennia cold snap, global in scale.

One thing is certain: the relation between climate and plague is not neat and linear. As with so many biological systems, it is marked by wild swings, narrow thresholds, and frenzied opportunism. Rainy years foster vegetation growth, which in turn sparks a trophic cascade in rodent populations. In excess, water can also flood the burrows of underground rodents and send them scurrying for new ground. Population explosions stir the emigration of rodents in search of new habitats.

Given that there is a strong correlation between volcanism and El Niño, the volcanic eruptions of the AD 530s may have stirred the Chinese marmots or gerbils carrying Y. pestis out of their familiar subterranean colonies, triggering an epizootic that reached the rodents of the seaborne trade routes heading west.

The first victims were the homeless. The toll started to rise. "…the mortality rose higher until the toll in deaths reached 5,000 a day, then 10,000, and then even more." John of Ephesus's daily counts are similar: he estimated the toll rising from 5,000 to 7,000, then 12,000, and then 16,000 dead per day. At first, there remained a semblance of public order. "Men were standing by the harbors, at the crossroads and at the gates counting the dead." According to John, the grisly tally continued until 230,000 had been numbered. "From then on the corpses were brought out without being counted." John reckoned that over 300,000 were laid low. A tally of ca. 250,000-300,000 dead within a population of probably 500,000 would fall squarely within the most carefully derived estimates of death rates in places hit by the Black Death, 50-60%.

Ancient societies were always tilted toward the countryside. By now some 85–90% of the population lived outside of cities. What set the plague apart from earlier pandemics was its ability to infiltrate rural areas.

Plague had another, even more insidious stratagem in the long run. An obligate human parasite like smallpox lacked an animal reservoir where it could hide between outbreaks. Plague was more patient. As the wave of the first visitation pulled back from a ravaged landscape, small tidal pools were left behind. The plague lurked in any number of rodent species. These biological weapons of the plague—the fact that it does not confer strong immunity and that it has animal reservoirs—allowed the first pandemic to stretch across two centuries and cause repeated mass mortality events.

The social order wobbled and then collapsed. Work of all kinds stopped. The retail markets were shuttered, and a strange food shortage followed. The harvest rotted in the fields. Food was scarce.

The Late Antique Little Ice Age (536 to 660 AD) climate change effects.

AD 536 was the coldest year of the last two millennia. Average summer temperatures in Europe fell instantly by up to 2.5°C, a truly staggering drop. In the aftermath of the eruption in AD 539-40, temperatures plunged worldwide. In Europe, average summer temperatures fell again by up to 2.7°C.

The decade of 536–545 was the coldest during this time.

Late in AD 589, torrential rains inundated Italy. The Adige flooded. The Tiber spilled its banks and crept higher than Rome’s walls. Whole regions of the city were under water. Churches collapsed, and the papal grain stores were ruined. No one remembered a flood so overwhelming. Then followed the plague again, in early AD 590.

The combination of plague and climate change sapped the strength of the empire.

The Justinian Plague effects on religion

For the first time in history, an apocalyptic mood came to permeate a large, complex society. Gregory’s sense of the approaching end was hardly his alone. The apocalyptic key transcended traditions, languages, and political boundaries in late antiquity. The plague was a last chance to turn from sin. And no sin weighed more heavily on the late antique heart than greed. Anxieties about wealth generated a perpetual moral crisis in late ancient Christianity. Earthly possessions were a trial of faith. Here the plague struck a tender nerve. The most memorable vignettes in John of Ephesus’ history of the plague linger over individuals singled out for punishment because of their greed. From one angle, the plague was God’s final, ghastly effort to pry loose our tight-gripped hold on material things.

Materially and imaginatively, the ascent of Islam would have been inconceivable without the upheavals of nature. The imminent judgment was a call to repentance.

Monotheism and eschatological warning were central to the prophet Muhammad’s religious message. “The coming judgment is in fact the second most common theme of the Quran, preceded only by the call to monotheism.” The Quran proclaims itself to be “a warning like those warnings of old: that Last Hour which is so near draws ever nearer.” “God’s is the knowledge of the hidden reality of the heavens and the earth. And so, the advent of the Last Hour will but manifest itself like the twinkling of an eye, or closer still.” The origins of Islam lie in an urgent eschatological movement, willing to spread its revelation by the sword, proclaiming the Hour to be at hand. Here, the eschatological energy of the seventh century found its most unrestrained development. It was electrifying. The message was the last element in the perfect storm. The southeastern frontier of the empire was erased almost overnight. Political lines of a thousand years were instantaneously and permanently redrawn.

Egypt and the Justinian Plague effects

The Nile valley was the most heavily engineered ecological district in the ancient world. Every year, at the inundation, its divine waters were diverted through an immense network of canals to irrigate the land. The intricate machinery of dikes, canals, pumps, and wheels was a huge symphony of human ingenuity and hard labor. The sudden disappearance of manpower in lands upriver threw the network of water control into disrepair. The controlled flow of water in the valley had been interrupted, and the downstream inhabitants in the fertile delta were overwhelmed. Remarkably, these events were replayed almost exactly in the aftermath of the medieval Black Death.

Famine effects

The twittering climate regime of late antiquity also had an intimate relationship with the pulses of epidemic mortality. Food shortage was a corollary of disease outbreak. Anomalous weather events might trigger explosive breeding of disease vectors. A devastating famine in Italy in AD 450–51 was coincident with a wave of malaria, for instance. Food crisis fanned desperate migrants in search of survival, overwhelming the normal environmental controls embedded in urban order. Food shortages forced the hungry to resort to consuming inedible or even poisonous food, all while depleting the power of their immune systems to resist infection.

A famine and pestilence swept Edessa and its hinterland. In March of AD 500, a plague of locusts destroyed the crops in the field. By April, the price of grain had skyrocketed to about eight times the normal price. An alarmed populace quickly sowed a crop of millet, an insurance crop. It too faltered. People began to sell their possessions, but the bottom fell out of the market. Starving migrants poured into the city. Pestilence – very probably smallpox – followed. Imperial relief came too late. The poor "wandered through the streets, colonnades, and squares begging for a scrap of bread, but no one had any spare bread in his house." In desperation, the poor started to boil and eat the remnants of flesh from dead carcasses. They turned to vetches and droppings from vines. "They slept in the colonnades and streets, howling night and day from the pangs of hunger." When the December frosts arrived, the "sleep of death" laid low those exposed to the elements.

The migrants were worst affected, but by spring no one was spared. “Many of the rich died, who had not suffered from hunger.” The loss of environmental control collapsed even the buffers that subtly insulated the wealthy from the worst hazards of contagion.

During a famine that swept Syria in AD 384-85, Antioch found its streets filled with hungry refugees, who had been unable to find even grass to eat and suddenly massed in town to scavenge.

Rise of Slavery

After the dislocations of the third century, the slave system experienced a brutal resurgence.

Melania the Younger, from one of the most blue-blooded lines in Rome, owned over 8,000 slaves.

Slave-ownership on Melania’s scale was rare. More consequential were the elites, late antiquity’s 1 percent, who owned “multitudes,” “herds,” “swarms,” “armies,” or simply “innumerable” slaves, both in their households and in the fields. To own a slave was a standard of minimum respectability. In the fourth century, priests, doctors, painters, prostitutes, petty military officers, actors, inn-keepers, and fig-sellers are found owning slaves. Many slaves owned slaves. All over the empire we find working peasants with households that included slaves.


Biogas from cow manure is not a solution for the energy crisis

Preface. Smil's article about biogas sums up why it won't do much to offset energy shortages as fossil fuels decline: biogas doesn't scale and is easy to muck up. Hayes (2015) also makes this case, pointing out that even if every ounce of manure were used, it would generate only 3% of U.S. electricity, and electricity provides only 20% of the energy we use, while 64% of electricity is still generated with fossil fuels. Biogas is not renewable either, and it pollutes the air and groundwater.

Biogas also has an extremely low energy return on investment (EROI) of 1.75 to 2.1 (Yazan 2017) or 1.12-1.57 (Wang 2021). Some scientists estimate an EROI of 10:1 or more  is needed to keep modern society functioning (Hall and Cleveland 1981, Mearns 2008, Lambert et al. 2014, Murphy 2014, Fizaine and Court 2016).
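To put these numbers together, here is a minimal back-of-envelope sketch in Python. The 3%, 20%, and EROI figures come from the preface above; the net-energy fraction 1 - 1/EROI is a standard simplification, not a calculation from the cited papers.

```python
# Back-of-envelope check of the preface's figures (values from the text above).

manure_share_of_electricity = 0.03   # Hayes (2015): all U.S. manure -> less than 3% of electricity
electricity_share_of_energy = 0.20   # electricity is ~20% of total U.S. energy use

# Biogas-from-manure as a share of total U.S. energy use
biogas_share = manure_share_of_electricity * electricity_share_of_energy
print(f"Manure biogas could supply at most ~{biogas_share:.1%} of total U.S. energy")  # ~0.6%

# Net energy delivered per unit of gross output, using the simplification 1 - 1/EROI
for label, eroi in [("Wang 2021 low", 1.12), ("Wang 2021 high", 1.57),
                    ("Yazan 2017 low", 1.75), ("Yazan 2017 high", 2.10),
                    ("often-cited threshold", 10.0)]:
    print(f"EROI {eroi:>5.2f} ({label}): net energy fraction = {1 - 1/eroi:.0%}")
# At EROI 1.1-2.1 only ~11-52% of gross output is true surplus, versus ~90% at 10:1.
```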

I summarize four articles below.


***

Smil, Vaclav. 2010. Energy Myths and Realities: Bringing Science to the Energy Policy Debate. AEI Press.

Before modernization, China’s biogas digesters were unable to produce enough fuel to cook rice three times a day, still less every day for four seasons. The reasons are obvious to anyone familiar with the complexities of bacterial processes. Biogas generation, simple in principle, is a fairly demanding process to manage in practice.  Here are some of the pitfalls:

  1. The slightest leakage will destroy the anaerobic condition required by methanogenic bacteria
  2. Low temperatures (below 20°C)
  3. Improper feedstock addition
  4. Poor mixing practices
  5. Shortages of appropriate substrates, resulting in low (or no) fermentation rates
  6. Undesirable carbon-to-nitrogen ratios and pH
  7. Formation of heavy scum

Unless assiduously managed, a biogas digester can rapidly turn into an expensive waste pit, which—unless emptied and properly restarted—will have to be abandoned, as millions were in China. Even widespread fermentation would have provided no more than 10% of rural household energy use during the early 1980s, and once the privatization of farming got underway, most of the small family digesters were abandoned.
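Smil's pitfalls are essentially violations of a narrow operating window. Purely as an illustration, here is a minimal sketch of the kind of sanity check this implies; the numeric thresholds (roughly 20°C minimum temperature, a carbon-to-nitrogen ratio near 20-30:1, near-neutral pH) are common textbook ranges for mesophilic digestion, not values from Smil, and the function itself is hypothetical.

```python
# Illustrative only: a crude operating-window check for a small biogas digester.
# Thresholds are typical textbook ranges for mesophilic digestion, not values
# from Smil; real designs need site-specific engineering.

def digester_warnings(temp_c, c_to_n_ratio, ph, airtight=True):
    warnings = []
    if not airtight:
        warnings.append("leak: oxygen ingress destroys the anaerobic conditions methanogens need")
    if temp_c < 20:
        warnings.append("too cold: fermentation slows or stops below roughly 20 C")
    if not 20 <= c_to_n_ratio <= 30:
        warnings.append("C:N ratio outside ~20-30:1: poor gas yield or ammonia inhibition")
    if not 6.5 <= ph <= 7.5:
        warnings.append("pH outside ~6.5-7.5: acidification or inhibition likely")
    return warnings or ["no obvious operating-window problems"]

# Example: a leaky household digester in winter, overloaded with fresh manure
for warning in digester_warnings(temp_c=12, c_to_n_ratio=12, ph=6.0, airtight=False):
    print(warning)
```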

More than half of humanity is now living in cities, and an increasing share inhabits megacities from São Paulo to Bangkok, from Cairo to Chongqing, and megalopolises, or conglomerates of megacities. How can these combinations of high population, transportation, and industrial density be powered by small-scale, decentralized, soft-energy conversions? How can the fuel for vehicles moving along eight- or twelve-lane highways be derived from crops grown locally?

How can the massive factories producing microchips or electronic gadgets for the entire planet be energized by attached biogas digesters or by tree-derived methanol? And while some small-scale renewable conversions can be truly helpful to a poor rural household or to a small village, they cannot support such basic, modern, energy-efficient industries as iron and steel making, nitrogen fertilizer synthesis by the Haber-Bosch process, and cement production.

Hayes, Denis and Gail. 2015. Cowed: The Hidden Impact of 93 Million Cows on America’s Health, Economy, Politics, Culture, and Environment.    W.W. Norton & Company.

Digesters are more about controlling pollution than generating electricity. If every ounce of manure from 93 million cows were converted to biogas and used to generate electricity, it would produce less than 3% of the electricity Americans currently use  (Cuellar, A.D., et al. 2008. Cow Power: the energy and emissions benefits of converting manure to biogas. Environmental Research Letters 3).

Bergamin A (2021) Turning Cow Poop Into Energy Sounds Like a Good Idea — But Not Everyone Is on Board. Discover.

Methane is a potent heat-trapping gas that is prone to leaking from gas drilling sites and pipelines in addition to cow feedlots. Because the dairy industry accounts for more than half of California’s methane emissions, the state has allocated more than $180 million to digester projects as part of its California Climate Investments program. Another $26.5 million has come from SoCalGas as part of a settlement for a natural gas leak in Aliso Canyon that dumped more than 100,000 tons of methane into the atmosphere.

While biogas, as it’s known, sounds promising, its potential is limited. Fossil gas alternatives could only supply about 13 percent of current gas demand in buildings — a limitation acknowledged by insiders from both the dairy and natural gas industries, whose research provided the data for this figure.

“So-called efforts to ‘decarbonize’ the pipeline with [dairy biogas] are a pipe dream only a gas utility executive could love,” Michael Boccadoro, executive director of Dairy Cares, an advocacy group for the dairy industry, says. “It just doesn’t make good policy sense.”

Biogas also produces the same contaminants as fossil gas when it’s burned, says Julia Jordan, a policy coordinator at Leadership Counsel for Justice & Accountability, which advocates for California’s low-income and rural communities. For that reason, biogas will do little to address the health issues that stem from using gas stoves, which have been shown to generate dangerous levels of indoor pollution.

The biggest beneficiaries of biogas, advocates say, are gas utilities and dairy operations. As California cities look to replace gas heaters, stoves and ovens with electric alternatives, SoCalGas can tout biogas as a green alternative to electrification. Meanwhile, the dairy industry will profit from the CAFO system while Central Valley communities bear the burden of air and water pollution.

“We’re relying on a flawed system that makes manure a money-making scheme for not just the dairies but the natural gas industry,” Jordan says. “And this industrial, animal-feedlot style of agriculture is not working for the people in the Valley.”

Beyond methane, industrial dairies also emit huge amounts of ammonia, which combines with pollution from cars and trucks to form tiny particles of ammonium nitrate that irritate the lungs. The Central Valley has some of the highest rates of asthma in the state, particularly among children. While digesters curb methane and ammonia emissions, they don't eliminate pollution from feedlots entirely.

Feedlots also contaminate water supplies. A 2019 nitrate monitoring report found elevated nitrate concentrations in groundwater at 250 well sites across dairies in the Central Valley. The report said that nitrates seeping from liquid manure lagoons play a role. Young children exposed to nitrates can develop blue baby syndrome, which starves the body of oxygen and can prove fatal. Some studies have also linked nitrates to cancer and thyroid disease.

Tulare County residents are worried that the use of biogas will encourage the growth of industrial dairies, worsening groundwater pollution, says Blanca Escobedo, a Fresno-based policy advocate with Leadership Counsel for Justice & Accountability. Escobedo’s father worked for a Tulare County dairy.

Digesters are most profitable when fed by larger herds. At least 3,000 cows are needed to make an anaerobic digester financially viable, according to a 2018 study. Dairies that have received state digester funding have an average herd size of 7,500 cattle.

“Because of the tremendous concentration of pollutants in one area, [biogas] isn’t a renewable resource when you’re using it on this scale,” says Jonathan Evans, a senior attorney and the Environmental Health Legal Director at Center for Biological Diversity. “Especially in terms of California’s water supply and the impact on adjacent communities who have to suffer the brunt of increasingly poor air quality.”

Weißbach, D., et al. April 2013. Energy intensities, EROIs, and energy payback times of electricity generating power plants. Energy 52: 210–221

Producing natural gas from maize, so-called biogas, is energetically expensive due to the large electricity needs of the fermentation plants, followed by agriculture's energy demand for fertilizers and machines.

Biogas-fired plants, even though they need no buffering, have the problem of enormous fuel provisioning effort which brings them clearly below the economic limit with no potential of improvements in reach.

“The Maas brothers decided to set up their Farm Power plant right between the dairies, so the manure wouldn’t need to be trucked long distances to the digester, and the finished product could be piped at reasonable cost to nearby fields. With the farmers lined up, all Farm Power had to do was find $3 million to build a million-gallon tank in which to digest manure, a generator, and tanks to hold the stuff coming in and going out of the digester, which included up to 30% pre-consumer food waste—things like cow blood, dead chickens, and fish waste. Food that has not already been digested by animals contains more energy, allowing the anaerobic bacteria in the digester to pump out more methane. The facility can process 40,000 to 50,000 gallons of manure daily.

This generator and another, which Farm Power operates at Lynden, Washington, generate enough electricity to power 1,000 homes. The liquid material coming out of the digester is a better fertilizer than raw manure because it contains far fewer pathogens and weed seeds and doesn’t stink as much. It first flows into a pit; from there, as a more stable manure slurry, it’s piped to nearby fields where it can be pumped through an irrigation nozzle or injected into the soil. The dry residue is turned into sanitary, comfy cow bedding. After the dry matter is squeezed through a screen, it’s loaded into trucks and hauled back to the farms. In the future, Farm Power plans to pasteurize the bedding product. Kevin scooped up some finished product stored at one of the nearby dairies. He held it out, inviting Denis to examine it. The bedding was still hot, and smelled like soil and hay.

Digesters don’t solve every environmental problem. Certain antibiotics in cow manure can kill off the fermenting and methanogenic bacteria that make the process possible. The heat in digesters probably doesn’t destroy most antibiotics. New research suggests some pathogenic and antibiotic-resistant bacteria survive anaerobic digestion. Installing a scrubber to remove sulfur dioxide from the digester gas wasn’t economically feasible for the Maas brothers, so they got a permit to emit some pollution. More nitrogen, phosphorus, and potassium remain in the final product than is ideal. Carbon dioxide is also put in the air, and the trucks hauling waste and bedding burn fuel.”
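For a sense of scale, here is a minimal arithmetic sketch of the digester described above. The 1,000,000-gallon tank and the 40,000-50,000 gallons per day of feed are from the passage; the 15-30 day "typical" retention range is a general rule of thumb for mesophilic digestion, not a figure from Hayes.

```python
# Hydraulic retention time (HRT) = working volume / daily feed rate
tank_gallons = 1_000_000                       # digester tank size from the passage
feed_gallons_per_day = (40_000 + 50_000) / 2   # midpoint of the stated 40,000-50,000 gal/day

hrt_days = tank_gallons / feed_gallons_per_day
print(f"Hydraulic retention time ~ {hrt_days:.0f} days")
# ~22 days, inside the ~15-30 day range commonly used for mesophilic manure digestion
```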

References

Fizaine F, Court V (2016) Energy expenditure, economic growth, and the minimum EROI of society. Energy Policy 95: 172-186.

Hall CAS, Cleveland CJ (1981) Petroleum drilling and production in the United States: Yield per effort and net energy analysis. Science 211: 576-579.

Lambert JG, Hall CAS, Balogh S, et al (2014) Energy, EROI and quality of life. Energy Policy 64:153–167.

Mearns E (2008) The global energy crisis and its role in the pending collapse of the global economy. Presentation to the Royal Society of Chemists, Aberdeen, Scotland. http://www.theoildrum.com/node/4712

Murphy DJ (2014) The implications of the declining energy return on investment of oil production. Philosophical Transactions of the Royal Society A. https://doi.org/10.1098/rsta.2013.0126

Wang C et al (2021) Energy return on investment (EROI) of biomass conversion systems in China: Meta-analysis focused on system boundary unification. Renewable and Sustainable Energy Reviews 137.

Yazan DM et al (2017) Cooperation in manure-based biogas production networks: An agent-based modeling approach. Applied Energy 212: 820-833. See Table 8: https://www.researchgate.net/figure/Energy-return-on-investment-EROI_tbl3_322251797

 


The Next Big Thing: Distributed Generation & Microgrids

 

Preface. Last updated 2022-9-5. The first article below explains what microgrids will look like in the future. But first, a brief look at what a microgrid is, as Angwin explains in her book Shorting the Grid: The Hidden Fragility of Our Electric Grid.

Today the grid is mostly a one-way street, with huge power plants pushing power to customers. A microgrid will have to be "smart" so that people can both buy and sell electricity, pushing it in two directions. So how will you sell power to your neighbors? Probably not with a wind turbine: even in the unlikely event you have enough wind to justify one, they're expensive, noisy, and break down a lot. Burn wood? No, you would have to build a wood-fired boiler, raise steam, spin a turbine, attach a generator, and connect the whole thing to the grid. But if you're a dairy farmer, you can buy a methane digester and a small generator attached to it, using manure as fuel. In reality, if the power goes down a lot, the wealthy in suburbia might buy solar panels and batteries for their own homes. In India, where Greenpeace tried to supply electricity via solar power and batteries, the batteries were quickly drained; the same is true for most home batteries offered today. The only practical way to produce electricity for sale to neighbors is a noisy, polluting diesel generator, connected by jury-rigged and dangerous wires.

This is happening in Beirut. Robert Bryce, who runs the "Power Hungry" podcast, went to Beirut to ask the locals how this worked. They referred to the electricity "brokers" as the "electricity mafia." They paid two electricity bills each month: about $35 to the state-owned power company for the little power it provided, about 6 hours a day, and around $100 a month to their local "mafia" generator. Bryce asked one man why he didn't just buy his own generator, since he was paying his neighbor a significant amount of money. The answer was that if he broke away from the local "mafia" generator, he might be killed. At the very least, the wire to his generator would be cut. Bryce reports how a clash between two generator owners left two people dead and required the Lebanese army to end the violence.

Pedro Prieto's work has taken him all over the world, and he has seen "Beiruts" in many places, such as Brazil, the Democratic Republic of Congo, and Cuba, to name a few. Tad Patzek wrote that at 45-50°C people have hours to live, and that in the future giant air-conditioned centers will be essential for people to retreat to (if they can get there).

The first article below, from Wired magazine, describes Beirut's diesel-generator microgrid in greater detail. It is coming to you some day, as power outages increase when fracked natural gas and imported LNG can no longer keep natural gas plants running to balance wind and solar when they happen to be up, and to back them up 100% the rest of the time.

The second article below explains why renewables are destabilizing the electric grid. Basically, electricity distribution is designed to flow one way from a centralized system to customers. But Distributed Generation (DG) from solar  PV and wind violates this.

Impacts caused by high penetration levels of intermittent renewable DG can be complex and severe and may include voltage increase, voltage fluctuation, interaction with voltage regulation and control equipment, reverse power flows, temporary over-voltage, power quality and protection concerns, and current and voltage unbalance, to name a few.

There are solutions, but they’re expensive, complicated, and add to the already insane challenges of thousands of utilities, power generators, independent system operators, and other entities trying to coordinate the largest machine in the world when cooperation isn’t always in their best interest.

Lebanon in the news:

Bradstock F (2022) Can Lebanon repair its failing energy sector? oilprice.com. Lebanon has been grappling with severe energy shortages over the past year. On average, the Lebanese population gets just one to two hours of electricity per day. Growing political instability threatens to worsen the country’s ongoing energy crisis. Lebanon has continued to face severe energy shortages due to years of poor investment in infrastructure that has led many to rely on polluting diesel generators for their power. Lebanon has faced rolling blackouts for years due to poor infrastructure spending. Now, with such high debt (495% of GDP), the government can no longer afford to run national power plants. Few external powers have been willing to step in to help. In addition, the rise of the militia group Hezbollah is driving others away.

Jo L (2022) Lebanon’s poorest scavenge through trash to survive. AP. In the dark streets of a Beirut now often without electricity, sometimes the only light that shines is from headlamps worn by scavengers, searching through garbage for scrap to sell. Even trash has become a commodity fought over in Lebanon, mired in one of the world’s worst financial crises in modern history. With the ranks of scavengers growing among the desperately poor, some tag trash cans with graffiti to mark their territory and beat those who encroach on it. Meanwhile, even better-off families sell their own recyclables because it can get them U.S. dollars rather than the country’s collapsing currency. The fight for garbage shows the rapid descent of life in Beirut, once known for its entrepreneurial spirit, free-wheeling banking sector and vibrant nightlife. Instead of civil war causing the chaos, the disaster over the past two years was caused by the corruption and mismanagement of the calcified elite that has ruled Lebanon since the end of its 1975-90 conflict. Thugs roaming the streets on motorcycles sometimes target scavengers at the end of day to steal the recyclables they collected.  “They are ready to kill a person for a plastic bag,” Mohammed said. More than half the population has been plunged into poverty. Banks have drastically limited withdrawals and transfers. Hyperinflation has made daily goods either unaffordable or unavailable.

2022 Professor Jan Blomgren: How are we in time? https://youtu.be/0Oh_w5KrEVc. There are five kinds of large electric power generators: natural gas, coal, oil, nuclear, and hydropower. These provide a stabilizing effect; you can't keep the grid up with 1,000+ smaller generators. Solar cells have no generators at all and do not provide stability to the grid the way the larger generators do. Big generators also control and regulate electricity so it gets to the right place, which can't be done with small generators; they just don't have the adjustability. They could have more if we built them that way, but even so, they could never be as effective. This means that if we shut down large generators and replace them with many smaller ones, the electricity system will be more unstable and inefficient.
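Blomgren's point about large generators is largely about rotating inertia. As an illustration only (not from the lecture), here is a minimal sketch using the standard swing-equation approximation for how fast frequency falls after a sudden loss of generation; all the numbers are assumed round values.

```python
# Rough rate-of-change-of-frequency (ROCOF) after a sudden generation loss,
# from the classic swing-equation approximation:
#   df/dt ~= f0 * delta_P / (2 * H * S)
# where H is the aggregate inertia constant (seconds) on system base S (MVA).
# Illustrative round numbers only, not from Blomgren's lecture.

def rocof_hz_per_s(f0_hz, delta_p_mw, inertia_h_s, system_mva):
    return f0_hz * delta_p_mw / (2 * inertia_h_s * system_mva)

f0 = 50.0            # nominal frequency (Hz), as in Europe
loss_mw = 1_000      # sudden loss of a 1,000 MW plant
system_mva = 50_000  # total online generating capacity (assumed)

for label, h in [("mostly large synchronous plants", 5.0),
                 ("high share of inverter-based solar/wind", 2.0)]:
    print(f"H = {h} s ({label}): ROCOF ~ {rocof_hz_per_s(f0, loss_mw, h, system_mva):.2f} Hz/s")
# Lower aggregate inertia -> faster frequency decline -> less time for reserves to respond.
```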


* * *

Rosen KR (2018) Inside the Haywire World of Beirut's Electricity Brokers. When the grid goes out, gray-market generators power up to keep the Wi-Fi running and laptops charged. Wired.

https://www.wired.com/story/beruit-electricity-brokers/

Sometimes the lights do not stay on, even for the power company in Beirut.  Electrical power here does not come without concerted exertion or personal sacrifice. Gas-powered generators and their operators fill the void created by a strained electric grid. Most people in Lebanon, in turn, are often stuck with two bills, and sometimes get creative to keep their personal devices—laptops, cell phones, tablets, smart watches—from going dead. Meanwhile, as citizens scramble to keep their inanimate objects alive, the local authorities are complicit in this patchwork arrangement, taking payments from the gray-market generator operators and perpetuating a nation’s struggle to stay wired.

Lebanon has been a glimmering country ever since the 15-year civil war began in 1975, and the reverberations from that conflict persist. These days there is only one city, Zahle, with electricity 24/7. Computer banks in schools and large air conditioners pumping out chills strain the grid, and daily state-mandated power cuts run from at least three hours to 12 hours or more. Families endure power outages mid-cooking, mid-washing, mid-Netflix binging. Residents rely on mobile phone apps to track the time of day the power will be cut, as it shifts between three-hour windows in the morning and afternoon, rotating throughout the week.

Beirut’s supplementary power needs are effectively under the control of what is known here as the generator mafia: a loose conglomerate of generator owners and landlords who supply a great deal of the country’s power. This group is indirectly responsible for the Wi-Fi, which makes possible any number of WhatsApp conversations—an indispensable lifeline for the country’s refugees, foreign aid workers, and journalists and locals alike.

Electricité du Liban, the Lebanese electricity company, has a meager budget and relies on a patchwork approach—including buying power from neighboring countries and leasing diesel generator barges—to produce power; meanwhile, corruption in local and state politics means that government-allocated funds often do not reach the people or places for which they are intended. The community—or mafia—of generator owners is thus a solution to a widespread problem, and it has grown into a cottage industry, both intractable and necessary.

Sam says he doesn’t buy backup juice for his apartment, which he rented last spring. Somewhere along the electrical wires cast like nets across the city, a bootlegged electrical line running from a generator was spliced in his favor: A single “magic” outlet powers his wireless router during outages. It’s one thing to be kept from doing your laundry, and another thing entirely to be kept from your friends or family. Besides, tracking down the generator owner responsible for this one outlet would be a journey of more than 1,001 nights. In the city of Beirut alone, there are roughly 12,000 generators and their owners. Though it is technically illegal, regulators have a hard time squashing the network, which has grown to cover most of the country. Officials aren’t so much paid off to look the other way; they’re paid because, it is said, they own some of the generators.

In the Mreijeh neighborhood, one of the electricians is known to locals as "the real energy minister." His wires, strung between generators and the buildings to which they pump power, are so thick that they blot out the sun. In the Bourj al-Barajneh neighborhood, some residents share their power "subscription," perhaps with magic outlets of their own; the subscription operator and generator owner turns a blind eye. In the district known as Shiah, the "Dons" do not allow any such manipulation—they do, however, have a weakness for European soccer matches and boost power on game nights. And in al-Fanar, it is important that the distributors of this power pay close attention to usage and monitor peak hours, doing their best to keep service operating when the state fails.

“We cover where there is no state,” says Abdel al-Raham, an owner and operator of generators in East Beirut. He began with a small generator, which he used to power his house, around the start of the civil war in 1975. But the generator was loud and noxious, so over time, as a gesture of good faith, he would give his neighbors a lamp connected to his generator. “Just enough for them to light their house and to make up for all the annoying noise,” he says.

But because of his generosity, his wife soon became unable to run the washing machine. He went out and bought a new, bigger generator. Then shop owners nearby needed more power, and his brother came to him and proposed they split profits on the power they could sell to the neighborhood. Self-sufficiency turned into entrepreneurship.

Raham, like other operators, complains about repair costs; under-the-table operating fees—essentially, bribes—to the local municipalities in which they operate; the unpaid bills by some of the country’s Syrian and Egyptian refugees who are using an estimated additional 486 megawatts; and the increasing cost of diesel fuel to run the generators.

But Raham felt a responsibility to his community in which three-quarters of the homes rely on his generators for some portion of their power. In some of those homes, he says, elderly people rely on medical devices 24 hours a day. A lack of electricity would be a threat to their health.

Residents of Lebanon have three basic options: buy a generator subscription, own your own generator, or splurge for what’s known as an uninterruptible power supply.

When you move into an apartment, you will most often connect with the local generator owner who will set up a subscription for 5 amps, 10 amps, 15 amps, or more, depending on your budget and consumption during the scheduled power outages. Residents will also do this with their water providers—one bill and service provider for filtered water, and another bill and service provider for gray water. (Water utilities are likewise a … gray area.) Internet is handled by another ad hoc collection of quasi-legal independent operators, as is trash, which the city is supposed to take care of but often fails to collect. These entities are more than private providers or secret crusaders. They are a necessary convenience to which one is connected through inconvenient terms.

Though they claim they make little money on their ventures, generator owners can net tens of thousands of dollars in monthly revenue. They also undercut one another, vying for customers in any given neighborhood.

Many developing countries suffer from electricity problems, but a World Bank report from 2015 suggested that Lebanon’s problems go beyond technical issues. It would cost the government $5 billion to $6 billion to bring 24-hour power to the country, according to one estimate, and yet the government spends roughly $1.4 billion a year just to cover the cost of fuel.  The report also noted that, on average, Lebanese households spent more than $1,300 a year on electricity in 2016 at a time when gross national income per capita was roughly $9,800.

Haddad pays for 10 amps a month (roughly 2,200 watts, or enough to power an electric kettle and desktop computer concurrently) and also receives a separate bill for the building elevator and hallway lights. It used to be that residents paid $90 for 10 amps, which cost $14 to generate, but Haddad says that today he pays $267 for 5 amps every month—about four times the amount he pays concurrently to Electricité du Liban. Municipalities now regulate the maximum cost the generator owners can charge their clients, though their control over the generator owners is hardly comprehensive. It is a tractor-pull relationship between local officials and generator owners. “The policy by which the municipalities and generator owners are connected is neither legal nor organized,” Antoine K. Gebara, the mayor of an eastern Beirut suburb, told me. “There is no system. … It should not be like this.”
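The "amps" in these subscriptions convert to watts via P = V × I; Lebanon's service voltage is roughly 220 V, which is presumably how the article gets "10 amps ≈ 2,200 watts." A minimal sketch using the figures quoted above (the 220 V assumption is mine):

```python
# Generator subscriptions are sold in amps; power in watts is P = V * I.
VOLTAGE = 220  # assumed nominal service voltage in Lebanon (~220 V)

def amps_to_watts(amps, volts=VOLTAGE):
    return amps * volts

print(amps_to_watts(10))   # 2200 W, matching the article's "roughly 2,200 watts"
print(amps_to_watts(5))    # 1100 W

# Price per amp-month, using the figures Haddad quotes above
old_price_per_amp = 90 / 10    # $90 for 10 amps  -> $9 per amp
new_price_per_amp = 267 / 5    # $267 for 5 amps  -> ~$53 per amp
print(f"Price per amp is roughly {new_price_per_amp / old_price_per_amp:.0f}x higher than it was")
```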

The generator owners stepped in when the government could not provide services, but had to be controlled and regulated (as best as one might regulate a network of entrepreneurial privateers) by the same municipalities that couldn’t effectively supply power. Now, the generator owners turn around and pay the municipalities for the pleasure of dominating a market in which other generator owners might come to set up shop.

“They call us criminals, electricity thieves, robbers with generators. How are we the criminals?” Antanios asks me, his voice a rasp. “Yes, it’s extremely expensive. But that’s the government’s fault.”   He reaches into a drawer and pulls out another sheaf of receipts from the municipality and one signed by a local politician, each one totaling around $1,300. He had paid his commission to the politician—a headache Antanios wishes he could avoid (though perhaps it is better than being under the thumb of Hezbollah factions, who at times questioned me while working on this story as I sought answers about generators and their owners)—along with his taxes to the city government. Such a monthly burden meant his business had to generate substantial cash. He tells me he can sometimes get $32,000 a month in revenue. But he is quick to point out that he works hard for the money. For example, Antanios says, the night before he and his electricians spent six hours trying to identify the cause of a shortage throughout the neighborhood.

Just then the room darkens. A loud popping rips through the room, as though someone were stepping on a floor made of light bulbs. From across the street, emerging from a shantytown, from under an umbrella of corrugated metal, several of Antanios’ workers race to the office. The power from Electricité du Liban had cut in his sector, and now the breakers and generators were turning on, feeding into the lines that were cast out from his office and the nearby generators. But the switchover happens smoothly. An oscillating fan in the office hadn’t come to a stop before the power kicked back on, less than 30 seconds later.

Last year, researchers visited the Hamra neighborhood, a popular tourism and shopping district in Beirut, to study the health effects of generator usage. Fifty-three percent of the 588 buildings there had diesel generators. The study, by the American University of Beirut’s Collaborative for the Study of Inhaled Atmospheric Aerosols, found that throughout the city, the 747 tons of fuel consumed during a typical daily three-hour outage resulted in the production of 11,000 tons of nitrogen oxide annually. The territory of Delhi, India, relies heavily on diesel generators too, but Beirut emissions are more than five times worse per capita than those in the Indian capital.

***

IEEE. September 5, 2014. IEEE Report to DOE Quadrennial Energy Review on Priority Issues.

On the distribution system, high penetration levels of intermittent renewable Distributed Generation (DG) creates a different set of challenges than at transmission system level, given that distribution is generally designed to be operated in a radial fashion with one way flow of power to customers, and DG (including PV and wind technologies) interconnection violates this fundamental assumption. Impacts caused by high penetration levels of intermittent renewable DG can be complex and severe and may include voltage increase, voltage fluctuation, interaction with voltage regulation and control equipment, reverse power flows, temporary overvoltage, power quality and protection concerns, and current and voltage unbalance, among others.

Common impacts of DG in distribution grids are described below; this list is not exhaustive and includes operational and planning aspects.

  • Voltage increase can lead to customer complaints and potentially to customer and utility equipment damage, and service disruption.
  • Voltage fluctuation may lead to flicker issues, customer complaints, and undesired interactions with voltage regulation and control equipment.
  • Reverse power flow may cause undesirable interactions with voltage control and regulation equipment, and protection system misoperations.
  • Line and equipment loading increases may cause damage to equipment, and service disruption may occur.
  • Increased losses (under high penetration levels) can reduce system efficiency.
  • A power factor decrease below the minimum limits set by some utilities in their contractual agreements with transmission organizations would create economic penalties and losses for utilities.
  • Current unbalance and voltage unbalance may lead to system efficiency and protection issues, customer complaints, and potentially to equipment damage.
  • Interaction with load tap changers (LTCs), line voltage regulators (VRs), and switched capacitor banks due to voltage fluctuations can cause undesired and frequent voltage changes and customer complaints, reduce equipment life, and increase the need for maintenance.
  • Temporary Overvoltage (TOV): if accidental islanding occurs and no effective reference to ground is provided then voltages in the island may increase significantly and exceed allowable operating limits. This can damage utility and customer equipment, e.g., arresters may fail, and cause service disruptions.
  • Harmonic distortion caused by proliferation of power electronic equipment such as PV inverters.

The aggregate effect from hundreds or thousands of inverters may cause service disruptions, complaints or customer economic losses, particularly for those relying on the utilization of sensitive equipment for critical production processes.

  • Voltage sags and swells caused by sudden connection and disconnection of large DG units may cause the tripping of sensitive equipment of end users and service disruptions.
  • Interaction with protection systems, including increases in fault currents, reach modification, sympathetic tripping, miscoordination, etc.
  • Voltage and transient stability: voltage and transient stability are well-known phenomena at transmission and sub-transmission system level but until very recently were not a subject of interest for distribution systems. As DG proliferates, such concerns are becoming more common.

The severity of these impacts is a function of multiple variables, particularly of the DG penetration level and real-time monitoring, control and automation of the distribution system. However, generally speaking, it is difficult to define guidelines to determine maximum penetration limits of DG or maximum hosting capacities of distribution grids without conducting detailed studies.
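One way to see why reverse power flow pushes voltage up: for a radial feeder, the voltage change across a segment is approximately ΔV ≈ (P·R + Q·X)/V, so when PV exports exceed local load the net P reverses sign and the customer end rises above the source voltage. A minimal sketch with assumed feeder parameters (none of these numbers are from the IEEE report):

```python
# Approximate voltage change along a radial feeder segment:
#   delta_V ~= (P*R + Q*X) / V
# where P, Q are the real and reactive power flowing toward the load end.
# When rooftop PV exports exceed local load, net P reverses and the usual
# voltage drop becomes a voltage rise at the customer end.
# All feeder parameters below are assumed, not taken from the IEEE report.

V_SOURCE = 7_200.0   # single-phase primary voltage (V) on a 12.47 kV feeder
R, X = 2.0, 4.0      # assumed total segment resistance and reactance (ohms)

def feeder_end_voltage(load_w, pv_w, q_var=0.0, v_source=V_SOURCE):
    p_net = load_w - pv_w                      # net real power toward the feeder end
    drop = (p_net * R + q_var * X) / v_source  # positive = drop, negative = rise
    return v_source - drop

print(feeder_end_voltage(load_w=500_000, pv_w=0))        # heavy load, no PV: ~7061 V (sag)
print(feeder_end_voltage(load_w=100_000, pv_w=900_000))  # light load, big PV export: ~7422 V (rise)
```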

From the utility perspective, high PV penetration and non-utility microgrid implementations shift the legacy, centralized, unidirectional power system to a more  complex, bidirectional power system with new supply and load variables at the grid’s edge. This shift introduces operational issues such as the nature, cost, and impact of interconnections, voltage stability, frequency regulation, and personnel safety, which in turn impact resource planning and investment decisions.

NREL. 2014. Volume 4: Bulk Electric Power Systems: Operations and Transmission Planning. National Renewable Energy Laboratory.

Initial experience with PV indicates that output can vary more rapidly than wind unless aggregated over a large footprint. Further, PV installed at the distribution level (e.g., residential and commercial rooftop systems) can create challenges in management of distribution voltage.

Meier, A. May 2014. Challenges to the integration of renewable resources at high system penetration. California Energy Commission.

3.2 Distribution Level: Local Issues

A significant class of challenges to the integration of renewable resources is associated primarily with distributed siting, and only secondarily with intermittence of output. These site‐specific issues apply equally to renewable and non‐renewable resources, collectively termed distributed generation (DG). However, DG and renewable generation categories overlap to a large extent due to

  • technical and environmental feasibility of siting renewables close to loads
  • high public interest in owning renewable generation, especially photovoltaics (PV)
  • distributed siting as an avenue to meet renewable portfolio standards (RPS), augmenting the contribution from large-scale installations

Motivation exists, therefore, to facilitate the integration of distributed generation, possibly at substantial cost and effort, if this generation is based on renewable resources.

Distributed generation may therefore be clustered, with much higher penetration on individual distribution feeders than the system‐wide average, for any number of reasons outside the utility’s control, including local government initiatives, socio‐economic factors, or neighborhood social dynamics.

The actual effects of distributed generation at high penetration levels are still unknown but are likely to be very location specific, depending on the particular characteristics of individual distribution feeders.

Technical issues associated with high local penetration of distributed generation include

  • Clustering: The local effects of distributed generation depend on local, not system‐wide, penetration (percent contribution). Distributed generation may be clustered on individual feeders for reasons outside the utility’s control, such as local government initiatives or socio‐economic factors, including neighborhood social dynamics. Clustering density is relative to the distribution system’s functional connectivity, not just geographic proximity, and may therefore not be obvious to outside observers.
  • Transformer capacity: Locally, the impact of DG is measured relative to load – specifically, current. Equipment, especially distribution transformers, may have insufficient capacity to accommodate the amounts of distributed generation desired by customers. Financial responsibility for capacity upgrades may need to be negotiated politically.
  • Modeling: From the grid perspective, DG is observed in terms of net load. Neither the amount of actual generation nor the unmasked load may be known to the utility or system operator. Without this information, however, it is impossible to construct an accurate model of local load for purposes of forecasting future load (including ramp rates) and ascertaining system reliability and security in case DG fails. Models of load with high local DG penetration will have to account for both generation and load explicitly in order to predict their combined behavior (see the sketch following this list).
  • Voltage regulation: Areas of concern, explained in more detail in the Background section below, include maintaining voltage in the permissible range, wear on existing voltage regulation equipment, and reactive power (VAR) support from DG.
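The masking effect described in the Modeling bullet can be shown with a toy calculation. The sketch below is not from the report; the feeder numbers are hypothetical and chosen only to show that two feeders with the same net load can behave very differently if their DG trips offline.

```python
# Minimal sketch (not from the report): why net load masks DG output.
# Two hypothetical feeders with very different gross load and PV output
# present the identical net load to the utility's metering.

def net_load(gross_load_kw: float, pv_output_kw: float) -> float:
    """Net load seen at the substation: gross load minus behind-the-meter generation."""
    return gross_load_kw - pv_output_kw

feeder_a = {"gross_load_kw": 400.0, "pv_output_kw": 100.0}   # modest load, modest PV
feeder_b = {"gross_load_kw": 700.0, "pv_output_kw": 400.0}   # heavy load, heavy PV

for name, f in [("A", feeder_a), ("B", feeder_b)]:
    print(f"Feeder {name}: net load = {net_load(**f):.0f} kW")

# Both print 300 kW.  If the PV on feeder B trips offline (e.g., during a
# voltage disturbance), its net load jumps to 700 kW; feeder A's only jumps
# to 400 kW.  A model built from net load alone cannot anticipate the difference.
```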

Areas of concern and strategic interest, explained in more detail in the Background section below, include preventing unintentional islanding, application of the microgrid concept, and variable power quality and reliability.

Overall, the effect of distributed generation on distribution systems can vary widely between positive and negative, depending on specific circumstances that include

  • the layout of distribution circuits
  • existing voltage regulation and protection equipment
  • the precise location of DG on the circuit

3.2.2 Background: Voltage Regulation

Utilities are required to provide voltage at every customer service entrance within a permissible range, generally ±5 percent of nominal. For example, a nominal residential service voltage of 120 V means that the actual voltage at the service entrance may vary between 114 and 126 V. Due to the relative paucity of instrumentation in the legacy grid, the precise voltage at different points in the distribution system is often unknown, but is estimated by engineers as a function of system characteristics and varying load conditions.
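As a quick illustration of the ±5 percent rule (my own sketch, not part of the CEC report), the permissible band around any nominal voltage can be computed and checked directly:

```python
# Minimal sketch, assuming the ±5% service-voltage band stated in the text.

NOMINAL_V = 120.0      # nominal residential service voltage
TOLERANCE = 0.05       # ±5 percent of nominal

low, high = NOMINAL_V * (1 - TOLERANCE), NOMINAL_V * (1 + TOLERANCE)
print(f"Permissible range: {low:.0f} V to {high:.0f} V")   # 114 V to 126 V

def in_range(measured_v: float) -> bool:
    """True if a measured service-entrance voltage lies inside the permissible band."""
    return low <= measured_v <= high

# Example: heavy reverse power flow from rooftop PV at midday can push the
# service voltage up; 127 V would violate the upper limit.
print(in_range(122.0))   # True
print(in_range(127.0))   # False
```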

Different settings of load tap changer (LTC) or other voltage regulation equipment may be required to maintain voltage in permissible range as DG turns on and off. Potential problems include the following:

  • DG drives voltage out of the range of existing equipment’s ability to control
  • Due to varying output, DG provokes frequent operation of voltage regulation equipment, causing excessive wear
  • DG creates conditions where the voltage profile status is not transparent to operators

Fundamentally, voltage regulation is a solvable problem, regardless of the level of DG penetration. However, it may not be possible to regulate voltage properly on a given distribution feeder with existing voltage regulation equipment if substantial DG is added. Thus a high level of DG may necessitate upgrading voltage regulation capabilities, possibly at significant cost. Research is needed to determine the best and most cost‐effective ways to provide voltage regulation, where utility distribution system equipment and DG complement each other.

Legacy power distribution systems generally have a radial design, meaning power flows in only one direction: outward from substations toward customers. The “outward” or “downstream” direction of power flow is intuitive on a diagram; on location, it can be defined in terms of the voltage drop (i.e., power flows from higher to lower voltage).

If distributed generation exceeds load in its vicinity at any one moment, power may flow in the opposite direction, or “upstream” on the distribution circuit. To date, interconnection standards are written with the intention to prevent such “upstream” power flow.
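A rough way to see when a feeder segment backfeeds (my own illustration with made-up numbers, not the report’s method): power flows upstream whenever aggregate DG output on the segment exceeds the aggregate load downstream of it.

```python
# Minimal sketch: net power at a feeder segment.  Positive = normal
# "downstream" flow from the substation; negative = reverse ("upstream") flow.
# The load and PV profiles below are hypothetical.

hours   = [6,    9,    12,   15,   18]
load_kw = [250,  300,  280,  320,  450]   # aggregate downstream load
pv_kw   = [10,   200,  420,  350,  40]    # aggregate downstream PV output

for h, load, pv in zip(hours, load_kw, pv_kw):
    net = load - pv
    direction = "downstream" if net >= 0 else "UPSTREAM (reverse flow)"
    print(f"{h:02d}:00  net = {net:+5.0f} kW  -> {direction}")

# At midday the PV output exceeds local load, so the segment backfeeds --
# exactly the condition today's interconnection rules are written to prevent.
```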

The function of circuit protection is to interrupt power flow in case of a fault, i.e. a dangerous electrical contact between wires, ground, trees or animals that results in an abnormal current (fault current). Protective devices include fuses (which simply melt under excessive current), circuit breakers (which are opened by a relay) and reclosers (which are designed to re‐establish contact if the fault has gone away).

The exception is a networked system, where redundant supply is always present. Networks are more complicated to protect and require special circuit breakers called “network protectors” to prevent circulating or reverse power flow. If connected within such a networked system, DG is automatically prevented from backfeeding into the grid. Due to their considerable cost, networked distribution systems are common only in dense urban areas with a high concentration of critical loads, such as downtown Sacramento or San Francisco, and account for a small percentage of distribution feeders in California.

3.2.5 Research Needs Related to Circuit Protection

The presence of distributed generation complicates protection coordination in several ways:

  • The fault must now be isolated not only from the substation (“upstream”) power source, but also from DG
  • Until the fault is isolated, DG contributes a fault current that must be modeled and safely managed

Shifting fault current contributions can compromise the safe functioning of other protective devices: it may delay or prevent their actuation (relay desensitization), and it may increase the energy (I²t) that needs to be dissipated by each device. Interconnection standards limit permissible fault current contributions (specifically, no more than 10 percent of total for all DG collectively on a given feeder). The complexity of protection coordination and modeling increases dramatically with the number of connected DG units, and innovative protection strategies are likely required to enable higher penetration of DG.
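The 10 percent aggregate limit mentioned above lends itself to a simple screening check. The sketch below is my own illustration with hypothetical numbers, not a utility tool; it just totals the fault current contributions of the DG units on a feeder and compares them to the feeder’s total available fault current.

```python
# Minimal sketch: screen aggregate DG fault-current contribution against a
# 10%-of-total limit (the figure cited in the text).  All values hypothetical.

TOTAL_FAULT_CURRENT_A = 8_000.0   # available fault current at the point of interest
LIMIT_FRACTION = 0.10             # all DG collectively may supply at most 10%

dg_units_fault_a = {              # per-unit fault-current contribution, amps
    "pv_inverter_1": 150.0,
    "pv_inverter_2": 150.0,
    "small_synchronous_gen": 600.0,
}

aggregate = sum(dg_units_fault_a.values())
limit = LIMIT_FRACTION * TOTAL_FAULT_CURRENT_A

print(f"Aggregate DG contribution: {aggregate:.0f} A (limit {limit:.0f} A)")
print("Within limit" if aggregate <= limit else "Exceeds limit - detailed protection study needed")
```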

Standard utility operating procedures in the United States do not ordinarily permit power islands. The main exception is the restoration of service after an outage, during which islanded portions of the grid are re‐connected in a systematic, sequential process; in this case, each island is controlled by one or more large, utility‐operated generators. Interconnection rules for distributed generation aim to prevent unintentional islanding. To this end, they require that DG shall disconnect in response to disturbances, such as voltage or frequency excursions, that might be precursors to an event that will isolate the distribution circuit with DG from its substation source.

Disconnecting the DG is intended to assure that if the distribution circuit becomes isolated, it will not be energized. This policy is based on several risks entailed by power islands:

  • Safety of utility crews: If lines are unexpectedly energized by DG, they may pose an electrocution hazard, especially to line workers sent to repair the cause of the interruption. Even though a small DG facility such as a rooftop solar array has limited capacity to provide power, it would still energize the primary distribution line with high voltage through its transformer connection, and is therefore just as potentially lethal as any larger power source.
  • Power quality: DG may be unable to maintain local voltage and frequency within desired or legally mandated parameters for other customers on its power island, especially without provisions for matching generation to local load. Voltage and frequency departures may cause property damage for which the utility could be held liable, although it would have no control over DG and power quality on the island.
  • Re‐synchronization: When energized power islands are connected to each other, the frequency and phase of the a.c. cycle must match precisely (i.e., be synchronized), or else generators could be severely damaged. DG may lack the capability to synchronize its output with the grid upon re‐connection of an island.

3.2.7 Research Needs Related to Islanding

In view of the above risks, most experts agree that specifications for the behavior of DG should be sufficiently restrictive to prevent unintentional islanding. Interconnection rules aim to do this by requiring DG to disconnect within a particular time frame in response to a voltage or frequency deviation of particular magnitude, disconnecting more quickly (down to 0.16 seconds, or 10 cycles) in response to a larger deviation. At the same time, however, specifications should not be so conservative that they prevent DG from supporting power quality and reliability when it is most needed.
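To make the trade-off concrete, here is a toy version of such a tiered trip rule (my own sketch, not the actual interconnection rules: only the 0.16-second fastest trip time comes from the text, and the voltage bands are hypothetical placeholders).

```python
# Minimal sketch of a tiered anti-islanding trip rule: the larger the voltage
# deviation from nominal, the faster the DG must disconnect.  Only the 0.16 s
# fastest trip time comes from the text; the deviation bands are hypothetical.
from typing import Optional

NOMINAL_V = 120.0

def required_trip_time_s(measured_v: float) -> Optional[float]:
    """Maximum time (seconds) the DG may remain connected, or None if no trip is required."""
    deviation = abs(measured_v - NOMINAL_V) / NOMINAL_V
    if deviation >= 0.30:      # severe excursion (hypothetical band)
        return 0.16            # ~10 cycles at 60 Hz, the fastest trip cited in the text
    if deviation >= 0.12:      # large excursion (hypothetical band)
        return 1.0
    if deviation >= 0.08:      # moderate excursion (hypothetical band)
        return 2.0
    return None                # inside normal range: ride through, keep supporting the grid

for v in (121.0, 110.0, 100.0, 70.0):
    t = required_trip_time_s(v)
    print(f"{v:5.1f} V -> " + ("no trip required" if t is None else f"trip within {t} s"))
```

Settings that are "too conservative" in the sense of the paragraph above would be ones that trip on small, momentary deviations, removing DG support exactly when the grid is stressed.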

There is no broad consensus among experts at this time about how best to reconcile the competing goals of minimizing the probability of unintentional islanding, while also maximizing the beneficial contribution from DG to distribution circuits.

As for the possibility of permitting DG to intentionally support power islands on portions of the utility distribution system, there is a lack of knowledge and empirical data concerning how power quality might be safely and effectively controlled by different types of DG, and what requirements and procedures would have to be in place to assure the safe creation and re‐connection of islands. Because of these uncertainties, the subject of islanding seems likely to remain somewhat controversial for some time.

Needed:

  • Modeling of DG behavior at high local penetrations, including
      o prevention of unintentional islanding
      o DG control capabilities during intentional islanding
  • Collaboration across utility and DG industries to facilitate DG performance standardization, reliability and trust. This means that utilities can depend on DG equipment to perform according to expectations during critical times and abnormal conditions on the distribution system, the handling of which is ultimately the utility’s responsibility.

In the long run, intentional islanding capabilities – with appropriate safety and power quality control – may be strategically desirable for reliability goals, security and optimal resource utilization. Such hypothetical power islands are related to but distinct from the concept of microgrids, in that they would be scaled up to the primary distribution system rather than limited to a single customer’s premises. A microgrid is a power island on customer premises, intermittently connected to the distribution system behind a point of common coupling (PCC), and may comprise a diversity of DG resources, energy storage, loads, and control infrastructure. Key features of a microgrid include

  • Design around total system energy requirements: Depending on their importance, time preference or sensitivity to power quality, different loads may be assigned to different primary and/or back‐up generation sources, storage, or uninterruptible power supplies (UPS). A crucial concept is that the expense of providing highly reliable, high‐quality power (i.e., very tightly controlled voltage and frequency) can be focused on those loads where it really matters to the end user (or the life of the appliance), at considerable overall economic savings. However, the provision of heterogeneous power quality and reliability (PQR) requires a strategic decision about what service level is desired for each load, as well as the technical capability to discriminate among connected loads and perform appropriate switching operations.
  • Presentation to the macrogrid as a single controlled entity: At the point of common coupling, the microgrid appears to the utility distribution system simply as a time‐varying load. The complexity and information management involved in coordinating generation, storage and loads is thus contained within the local boundaries of the microgrid.

Note that the concepts of microgrids and power islands differ profoundly in terms of

  • ownership
  • legal responsibility (i.e., for safety and power quality)
  • legality of power transfers (i.e., selling power to loads behind other meters)
  • regulatory jurisdiction
  • investment incentives

Nevertheless, microgrids and hypothetical power islands on distribution systems involve many of the same fundamental technical issues. In the long run, the increased application of the microgrid concept, possibly at a higher level in distribution systems, may offer a means for integrating renewable DG at high penetration levels, while managing coordination issues and optimizing resource utilization locally.

Research Needs:

  • Empirical performance validation of microgrids
  • Study of the implications of applying microgrid concepts to higher levels of distribution circuits, including
      o time‐varying connectivity
      o heterogeneous power quality and reliability
      o local coordination of resources and end‐uses to strategically optimize local benefits of distributed renewable generation
  • Study of interactions among multiple microgrids


Interdependencies & supply chain failures in the News

Preface. Joseph Tainter explains in his famous book “The Collapse of Complex Societies” how complexity causes civilizations to collapse. Fossil fuels have created the most complex society that has ever existed, or ever will, using fossil energy that can’t be replaced (as I explain in “Life After Fossil Fuels”). This is starting to happen. The most complex products we make are microchips, and I predict they will be the first to fail. Their supply chains are so long that just one missing component or one natural disaster in one country can stop production (see posts in microchips and computers, critical elements, and rare earth elements). With precision engineering down to nearly the atomic scale, microchips will be the first to fail as energy declines and supply chains break for many reasons. Microchips, sometimes dozens or more, are in every car, computer, phone, laptop, toaster, TV, and other electronic devices.

Plastics are also used across many industries, are made mainly from oil, and were invented only recently. Thwaites tried to build a toaster from scratch for a Master’s degree, and plastics were beyond him for many reasons.

Plastics, refineries, and many other chemicals have to be produced around the clock or the pipes clog up. On a Power Hungry podcast, Oxer explained that if the power goes down while making styrene (a precursor used in many plastics), there is a 6-hour window to get the plastic out of the pipes; if that isn’t done in time, it takes 6 weeks. Because of this, many factories have their own power plants. But in the Texas power outage, natural gas stopped flowing to factories and power plants because some of the compressors that keep gas moving in the pipes were powered by electricity (to lower emissions) rather than by the gas itself, which is the usual way to do it. Doh!

Oxer further explained that 85 power plants came close to damaging the entire transmission system, interconnect transformers, substations, and power plants; if the grid had crashed, it could have taken 3 months, until May, to get it running again. Texans can expect this to happen again: many storms in the past caused similar problems, such as the Panhandle blizzard of 1957, the Houston snowstorm of 1960, the San Antonio snowstorm of 1985, Winter Storm Goliath in 2015, and the North American ice storm of 2017, along with the below-freezing spells of 1899 and 1933. But ERCOT was created to avoid FERC regulation, and so reliability is a low priority.


***

2022 Lessons From Henry Ford About Today’s Supply Chain Mess. NYT.

Ford’s Rouge auto-making plant is bedeviled by a shortage of a crucial component that would have horrified Mr. Ford, who vertically integrated his company to control all manufacturing supplies and prevent shortages from disrupting car production. Today, Ford cannot buy enough semiconductors, the computer chips that are the brains of the modern car. Ford is heavily dependent on a single supplier of chips located more than 7,000 miles away, in Taiwan. With chips scarce throughout the global economy, Ford and other automakers have been forced to intermittently halt production. Yet given the high cost of a chip fabrication plant and the expertise required, Ford and other companies are not likely to ever try to make chips themselves.

The F-150 pickup produced at the Rouge uses more than 800 types of chips, requiring dependence on specialists. And chips have limited shelf lives, making them difficult to stockpile. And chip companies have catered heavily to their investors by limiting their capacity — a strategy to maintain high prices.

2021 Texas Freeze Creates Global Plastics Shortage

First the demand for electronics caused a shortage of microchips, which hit the automotive industry particularly hard. Now, the Texas Freeze has caused a global shortage of plastics. The Wall Street Journal reported this week that the cold spell that shut down oil fields and refineries in Texas is still affecting operations, with several petrochemical plants on the Gulf Coast remaining closed a month after the end of the crisis. This creates a shortage of essential raw materials for a range of industries, from carmaking to medical consumables and even house building.

The WSJ report mentions carmakers Honda and Toyota as two companies that would need to start cutting output because of the plastics shortage, which came on top of an already pressing shortage of microchips. Ford, meanwhile, is cutting shifts because of the chip shortage and building some models only partially. 

Another victim is the construction industry. Builders are bracing for shortages of everything from siding to insulation.

More than 60 percent of polyvinyl chloride (PVC) production capacity in the United States is still out of operation a month after the Texas Freeze, affecting businesses that use piping, roofing, flooring, cable insulation, siding, car windshields, car seat foam, car interiors, adhesives, bread bags, dry cleaner bags, paper towel wrapping, shipping sacks, plastic wrap, pouches, toys, covers, lids, pipes, buckets, containers, cables, geomembranes, and flexible tubing, as well as the lumber and steel industries. Hospitals are experiencing shortages of plastic medical equipment, such as disposable containers for needles and other sharp items (“Going To Get Ugly” – Global Plastic Shortage Triggered By Texas Deep Freeze).

One thing this has made clear is how complex and vulnerable global supply chains are; another is how dependent we are on plastics. Various kinds of plastic are used in every single industry and there is no way we can wean ourselves off them.

Seeing the energy transition ahead, Big Oil has shifted big time into plastics, even though “Big Plastic” plans could lead to $400 billion in stranded assets because oil companies overestimated plastic demand growth. Yet the current shortage seems to prove the bet on petrochemicals is safe. No economically viable alternative to plastic cable insulation—or a car interior, or a smartphone casing, or a laptop, or a thousand other things from everyday life—has yet to make an appearance.

2021 Microchip shortages

A chip shortage that started with a surge in demand for personal computers and other electronics for working and schooling from home during the covid-19 pandemic now threatens to snarl car production around the world. Semiconductors are in short supply because of big demand for electronics, shifting business models that include outsourced production, and the effects of former President Donald Trump’s trade war with China. Chips are likely to remain in short supply in the coming months.

Car production is going down at GM, Ford, Honda, Toyota, Subaru, Volkswagen, Audi, and Fiat Chrysler due to the lack of microchips.

Cars can have thousands of tiny semiconductors, many of which perform functions like power management. Cars also use a lot of microcontrollers, which handle traditional automotive tasks like power steering or serve as the brain at the heart of an infotainment system. They’re used for in-car dials and automatic braking as well. Car makers also usually use “just-in-time” production, which means they avoid keeping extra parts in storage. The problem is that even if a single 10-cent chip is missing, you can’t sell your $30,000 car.

 

 


Jason Bradford on reforming the current food system

Preface. Jason Bradford is amazing: He taught ecology for a few years at Washington University in St. Louis, worked for the Center for Conservation and Sustainable Development at the Missouri Botanical Garden, and co-founded the Andes Biodiversity and Ecosystem Research Group (ABERG). After joining the Post Carbon Institute in 2004 he shifted from academia to sustainable agriculture, trained for six months with Ecology Action (aka GrowBiointensive) in Willits, California, started Willits Economic LocaLization, and hosted The Reality Report radio show on KZYX in Mendocino County. In 2009 he moved to Corvallis, Oregon, as one of the founders of Farmland LP, a farmland management fund implementing organic, mixed crop and livestock systems. He now lives with his family outside of Corvallis on an organic farm.

Below is the Introduction of his book “The Future is Rural” followed by an older piece he wrote back in 2009.

The book “The Future is Rural” is available for “free” as a PDF if you join the mailing list at the Post Carbon Institute here: https://www.postcarbon.org/publications/the-future-is-rural/

You can listen to Bradford at my favorite podcast “Crazy Town”, subscribe or listen here: https://www.postcarbon.org/crazytown/

Organic Agriculture in the news:

2021 Rodale Enlists Cargill in Unlikely Alliance to Increase Organic Farmland. Rodale will help Cargill convert 50,000 acres of corn and soy to organic production.

Eshel G (2021) Small-scale integrated farming systems can abate continental-scale nutrient leakage. PLOS Biology. Eshel calculated how adopting nitrogen-sparing agriculture in the USA could feed the country nutritiously and reduce nitrogen leakage into water supplies. He proposes a shift to small, mixed farms, each with a core 1.43-hectare intensive cattle facility whose manure supports crops for humans as well as livestock fodder.


***

Bradford J (2019) The Future is Rural. Food System Adaptations to the Great Simplification. Post Carbon Institute.

Introduction

Today’s economic globalization is the most extreme case of complex social organization in history—and the energetic and material basis for this complexity is waning. Not only are concentrated raw resources becoming rarer, but previous investments in infrastructure (for example, ports) are in the process of decay and facing accelerating threats from climate change and social disruptions.2 The collapse of complex societies is a historically common occurrence,3 but what we are facing now is at an unprecedented scale. Contrary to the forecasts of most demographers, urbanization will reverse course as globalization unwinds during the 21st century. The eventual decline in fossil hydrocarbon flows, and the inability of renewables to fully substitute, will create a deficiency of energy to power bloated urban agglomerations and require a shift of human populations back to the countryside.4 In short, the future is rural.

Given the drastic changes that are unfolding, this report has four main aims:

  • Understand how we got to a highly urbanized, globalized society and why a more rural, relocalized society is inevitable.
  • Provide a framework (sustainability and resilience science) for how to think about our predicament and the changes that will need to occur.
  • Review the most salient aspects of agronomy, soil science, and local food systems, including some of the schools of thought that are adapted to what’s in store.
  • Offer a strategy and tactics to foster the transformation to a local, sustainable, resilient food system.

This report reviews society’s energy situation; explores the consequences for producing, transporting, storing, and consuming food; and provides essential information and potentially helpful advice to those working on reform and adaptation. It presents a difficult message. Our food system is at great risk from a problem most are not yet aware of, i.e., energy decline. Because the problem is energy, we can’t rely on just-in-time innovative technology, brilliant experts, and faceless farmers in some distant lands to deal with it. Instead, we must face the prospect that many of us will need to be more responsible for food security. People in highly urbanized and globally integrated countries like the U.S. will need to reruralize and relocalize human settlement and subsistence patterns over the coming decades to adapt to both the end of cheaply available fossil fuels and climate change.

These trends will require people to change the way they go about their lives, and the way their communities go about business. There is no more business as usual. The point is not to give you some sort of simple list of “50 things you should do to save the planet” or “the top 10 ways to grow food locally.” Instead, this report provides the broad context, key concepts, useful information, and ways of thinking that will help you and those around you understand and adapt to the coming changes.

To help digest the diverse material, the report is divided into five sections plus a set of concluding thoughts:

  • Part One sets the broad context of how fossil hydrocarbons—coal, oil and natural gas—transformed civilization, how their overuse has us in a bind, and why renewable energy systems will fall short of most expectations.
  • Part Two presents ways to think about how the world works from disciplines such as ecology, and highlights the difference between more prevalent, but outdated, mental models.
  • Part Three reviews basic science on soils and agronomy, and introduces historical ways people have fed themselves.
  • Part Four outlines some modern schools of thought on agrarian ways of living without fossil fuels.
  • Part Five brings the knowledge contained in the report to bear on strategies and tactics to navigate the future.

Although the report is written for a U.S. audience, much of the content is more widely applicable.

During the process of writing this report, thought leaders and practitioners were interviewed to capture their perspectives on some of the key questions that arise from considering the decline of fossil fuels, consequences for the food system, and how people can adapt. Excerpts from those interviews are given in the Appendix section “Other Voices,” and several of their quotes are inserted throughout the main text.

Globalization has become a culture, and the prospect of losing this culture is unsettling. Much good has arisen from the integration and movements of people and materials that have occurred in the era of globalization. But we will soon be forced to face the consequences of unsustainable levels of consumption and severe disruption of the biosphere. For the relatively wealthy, these consequences have been hidden by tools of finance and resources flows to power centers, while people with fewer means have been trampled in the process of assimilation. In the U.S., our food system is culturally bankrupt, mirroring and contributing to crises of health and the environment. We can rebuild the food system in ways that reflect energy, soil, and climate realities, seeking opportunities to recover elements of past cultures that inhabited the Earth with grace. Something new will arise, and in the evolution of what comes next, many may find what is often lacking in life today—the excitement of a profound challenge, meaning beyond the self, a deep sense of purpose, and commitment to place.

Bradford J (2009) Ecological Economics and the Food System. The oil drum: Campfire.

To get by on ambient energy as much as possible, we have sought alternatives to fossil fuels in every aspect of the food system we participate in. Table 1 considers each type of work done on the farm, to the fork, and back again and contrasts how fossil fuels are commonly used with the technologies we have applied.

For each type of work, the common fossil-fuel inputs and the alternatives implemented are:

  • Soil cultivation. Common fossil-fuel inputs: gasoline or diesel powered rototiller or small tractor. Alternatives implemented: low-wheel cultivator, broadfork, adze or grub hoe, rake and human labor.
  • Soil fertility. Common fossil-fuel inputs: inorganic or imported organic fertilizer. Alternatives implemented: growing of a highly productive nitrogen and biomass crop (banner fava beans), making aerobic compost piles sufficient to build soil carbon and nitrogen fertility, re-introducing micro-nutrients by importing locally generated food waste and processing it in a worm bin, and application of compost teas for microbiology enhancement.
  • Pest and weed management. Common fossil-fuel inputs: herbicide and pesticide applications, flame weeder, tractor cultivation. Alternatives implemented: companion planting, crop rotation, crop diversity and spatial heterogeneity, beneficial predator attraction through landscape plantings, emphasis on soil and plant health, and manual removal with efficient human-scaled tools.
  • Seed sourcing. Common fossil-fuel inputs: bulk ordering of a few varieties through centralized seed development and distribution outlets. Alternatives implemented: sourcing seeds from a local supplier, developing a seed-saving and local production and distribution plan using open-pollinated varieties.
  • Food distribution. Common fossil-fuel inputs: produce trucks, refrigeration, long-distance transport, eating out of season. Alternatives implemented: produce only sold locally, direct from the farm or hauled to local restaurants or grocers using bicycles or electric vehicles; produce grown with year-round consumption in mind, with the farm delivering large quantities of food in winter months.
  • Storage and processing at the production end. Common fossil-fuel inputs: preparation of food for long-distance transport, storage and retailing requiring energy-intensive cooling, drying, food-grade wax and packaging. Alternatives implemented: passive evaporative cooling, solar dehydrating, root cellaring and re-usable storage baskets and bags.
  • Home and institutional storage and cooking. Common fossil-fuel inputs: natural gas, propane or electric fired stoves and ovens, electric freezers and refrigerators. Alternatives implemented: solar ovens, promotion of eating fresh and seasonal foods, home-scale evaporative cooling for summer preservation, and “root cellaring” techniques for winter storage.

Table 1. Feeding people requires many kinds of work and all work entails energy. In most farm operations the main energy sources are fossil fuels. By contrast, Brookside Farm uses and develops renewable energy based alternatives.

Our use of food scraps to replace exported fertility also reduces energy by diverting mass from the municipal waste stream. Solid Waste of Willits has a transfer station in town but no local disposal site. Our garbage is trucked to Sonoma County about 100 miles to the south. From there it may be sent to a rail yard and taken several hundred miles away to an out of state land fill. We are also installing a rainwater catchment and storage system that will supply about half the annual water needs to offset use of treated municipal water. The associated irrigation system will be driven by a photovoltaic system instead of the usual diesel-driven pumps on many farms.

Let me put the area of lawn from this study into a food perspective. The 128,000 square kilometers of lawns is the same as 32 million acres. A generous portion of fruits and vegetables for a person per year is 700 lbs, or about half the total weight of food consumed in a year.[xviii] Modest yields in small farms and gardens would be in the range of about 20,000 lbs per acre.[xix] Even with half the area set aside to grow compost crops each year, simple math reveals that the entire U.S. population could be fed plenty of vegetables and fruits using two thirds of the area currently in lawns.

  • Number of people in the U.S.: 300,000,000
  • Pounds of fruits and vegetables per person per year: 700
  • Yield per acre in pounds: 20,000
  • People fed per acre in production: 29
  • Fraction of area set aside for compost crops: 0.5
  • Compost-adjusted people fed per acre: 14
  • Number of acres to feed the population: 21,000,000
  • Acres in lawn: 32,000,000
  • Percent of lawn area needed: 66%
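To make the chain of arithmetic explicit, here is a small sketch reproducing the numbers above (my own restatement of the calculation, not code from the article):

```python
# Minimal sketch reproducing the lawn-to-food arithmetic above.

population            = 300_000_000
lbs_per_person_year   = 700        # generous annual fruit & vegetable portion
yield_lbs_per_acre    = 20_000     # modest small-farm/garden yield
compost_area_fraction = 0.5        # half the area grows compost crops each year
lawn_acres            = 32_000_000 # ~128,000 km2 of U.S. lawn

people_per_acre          = yield_lbs_per_acre / lbs_per_person_year       # ~29
people_per_acre_adjusted = people_per_acre * (1 - compost_area_fraction)  # ~14
acres_needed             = population / people_per_acre_adjusted          # ~21 million
share_of_lawn            = acres_needed / lawn_acres                      # ~0.66

print(f"People fed per acre in production: {people_per_acre:.0f}")
print(f"Compost-adjusted people per acre:  {people_per_acre_adjusted:.0f}")
print(f"Acres needed: {acres_needed/1e6:.0f} million")
print(f"Share of existing lawn area: {share_of_lawn:.0%}")
```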

Labor Compared to Hours of T.V.

For its members, Brookside Farm’s role is to provide a substantial proportion of their yearly vegetable and fruit needs. Using our farming techniques, we estimate that one person working full time could grow enough produce for ten to twenty people. By contrast, an individual could grow their personal vegetable and fruit needs on a very part-time basis, probably half an hour per day on average, working an area the size of a small home (700 sq ft in veggies and fruits plus 700 sq ft in cover crops). Americans complain that they feel cramped for time and overworked. But is this really true, or just a function of addiction to a fast-paced media culture? According to Nielsen Media Research:[xx]


There are over 300,000 contaminated groundwater sites in the U.S.

Preface. If peak oil did indeed happen in 2018, as EIA world production data shows, then let’s use the oil we still have, before it is rationed, to clean up the 126,000+ sites that threaten to pollute groundwater for thousands of years, as this report from the National Research Council explains. And while we’re at it, let’s clean up nuclear waste, which will pollute for hundreds of thousands of years.

Pollution in the news:

Westenhaus B (2022) The environmental consequence of burning rubber. oilprice.com. Have you ever wondered what happens to the rubber tread that wears off a vehicle’s tires? On a planet with hundreds of millions of vehicles there has to be quite a lot somewhere. New modeling at the University of British Columbia Okanagan (UBCO) campus suggests an increasing amount of microplastics, the fragments from tires and roadways, are ending up in lakes and streams. The researchers found that more than 50 metric tons of tire and road wear particles are released into waterways annually in an area like the Okanagan valley of British Columbia. With 1.5 billion tires produced every year globally, that’s six million tonnes of tire and road wear particles plus chemical additives, contaminating fresh water.


***

NRC. 2013. Alternatives for Managing the Nation’s Complex Contaminated Groundwater Sites. National Research Council, National Academies Press.

TABLE 2-6 (not reproduced here): Rough estimate of the total number of currently known facilities or contaminated sites and estimated costs to complete.

CONCLUSIONS AND RECOMMENDATIONS

At least 126,000 sites across the country have been documented that have residual contamination at levels preventing them from reaching closure.

This number is likely to be an underestimate of the extent of contamination in the United States for a number of reasons. For some programs data are available only for contaminated facilities rather than individual sites, and the total does not include sites that likely exist but have not yet been identified, such as dry cleaners or small chemical-intensive businesses (e.g., electroplating, furniture refinishing). Information on cleanup costs incurred to date and estimates of future costs, as shown in Table 2-6, are highly uncertain. Despite this uncertainty, the estimated “cost to complete” of $110–$127 billion is likely an underestimate of future liabilities. Remaining sites include some of the most difficult to remediate sites, for which the effectiveness of planned remediation remains uncertain given their complex site conditions. Furthermore, many of the estimated costs do not fully consider the cost of long-term management of sites that will have contamination remaining in place at high levels for the foreseeable future.

Despite nearly 40 years of intensive efforts in the United States as well as in other industrialized countries worldwide, restoration of groundwater contaminated by releases of anthropogenic chemicals to a condition allowing for unlimited use and unrestricted exposure remains a significant technical and institutional challenge.

Recent estimates by the U.S. Environmental Protection Agency (EPA) indicate that expenditures for soil and groundwater cleanup at over 300,000 sites through 2033 may exceed $200 billion (not adjusted for inflation), and many of these sites have experienced groundwater impacts.

One dominant attribute of the nation’s efforts on subsurface remediation efforts has been lengthy delays between discovery of the problem and its resolution. Reasons for these extended timeframes are now well known: ineffective subsurface investigations, difficulties in characterizing the nature and extent of the problem in highly heterogeneous subsurface environments, remedial technologies that have not been capable of achieving restoration in many of these geologic settings, continued improvements in analytical detection limits leading to discovery of additional chemicals of concern, evolution of more stringent drinking water standards, and the realization that other exposure pathways, such as vapor intrusion, pose unacceptable health risks. A variety of administrative and policy factors also result in extensive delays, including, but not limited to, high regulatory personnel turnover, the difficulty in determining cost-effective remedies to meet cleanup goals, and allocation of responsibility at multiparty sites.

There is general agreement among practicing remediation professionals, however, that there is a substantial population of sites where, due to inherent geologic complexities, restoration within the next 50 to 100 years is likely not achievable. Reaching agreement on which sites should be included in this category, and what should be done with such sites, has proven to be difficult. A key decision in that Road Map is determining whether or not restoration of groundwater is “likely.”

Summary

The nomenclature for the phases of site cleanup and cleanup progress are inconsistent between federal agencies, between the states and federal government, and in the private sector. Partly because of these inconsistencies, members of the public and other stakeholders can and have confused the concept of “site closure” with achieving unlimited use and unrestricted exposure goals for the site, such that no further monitoring or oversight is needed. In fact, many sites thought of as “closed” and considered as “successes” will require oversight and funding for decades and in some cases hundreds of years in order to be protective.

At hundreds of thousands of hazardous waste sites across the country, groundwater contamination remains in place at levels above cleanup goals. The most problematic sites are those with potentially persistent contaminants including chlorinated solvents recalcitrant to biodegradation, and with hydrogeologic conditions characterized by large spatial heterogeneity or the presence of fractures. While there have been success stories over the past 30 years, the majority of hazardous waste sites that have been closed were relatively simple compared to the remaining caseload.

At least 126,000 sites across the country have been documented that have residual contamination at levels preventing them from reaching closure. This number is likely to be an underestimate of the extent of contamination in the United States

Significant limitations with currently available remedial technologies persist that make achievement of Maximum Contaminant Levels (MCL) throughout the aquifer unlikely at most complex groundwater sites in a time frame of 50-100 years. Furthermore, future improvements in these technologies are likely to be incremental, such that long-term monitoring and stewardship at sites with groundwater contamination should be expected.

IMPLICATIONS OF CONTAMINATION REMAINING IN PLACE

Chapter 5 discusses the potential technical, legal, economic, and other practical implications of the finding that groundwater at complex sites is unlikely to attain unlimited use and unrestricted exposure levels for many decades.  First, the failure of hydraulic or physical containment systems, as well as the failure of institutional controls, could create new exposures. Second, toxicity information is regularly updated, which can alter drinking water standards, and contaminants that were previously unregulated may become so. In addition, pathways of exposure that were not previously considered can be found to be important, such as the vapor intrusion pathway. Third, treating contaminated groundwater for drinking water purposes is costly and, for some contaminants, technically challenging. Finally, leaving contamination in the subsurface may expose the landowner, property manager, or original disposer to complications that would not exist in the absence of the contamination, such as natural resource damages, trespass, and changes in land values. Thus, the risks and the technical, economic, and legal complications associated with residual contamination need to be compared to the time, cost, and feasibility involved in removing contamination outright.

New toxicological understanding and revisions to dose-response relationships will continue to be developed for existing chemicals, such as trichloroethene and tetrachloroethene, and for new chemicals of concern, such as perchlorate and perfluorinated chemicals. The implications of such evolving understanding include identification of new or revised ARARs (either more or less restrictive than existing ones), potentially leading to a determination that the existing remedy at some hazardous waste sites is no longer protective of human health and the environment.

Introduction

Since the 1970s, hundreds of billions of dollars have been invested by federal, state, and local government agencies as well as responsible parties to mitigate the human health and ecological risks posed by chemicals released to the subsurface environment. Many of the contaminants common to these hazardous waste sites, such as metals and volatile organic compounds, are known or suspected to cause cancer or adverse neurological, reproductive, or developmental conditions.

Over the past 30 years, some progress in meeting mitigation and remediation goals at hazardous waste sites has been achieved. For example, of the 1,723 sites ever listed on the National Priorities List (NPL), which are considered by the U.S. Environmental Protection Agency (EPA) to present the most significant risks, 360 have been permanently removed from the list because EPA deemed that no further response was needed to protect human health or the environment (EPA, 2012).

Seventy percent of the 3,747 hazardous waste sites regulated under the Resource Conservation and Recovery Act (RCRA) corrective action program have achieved “control of human exposure to contamination,” and 686 have been designated as “corrective action completed.” The Underground Storage Tank (UST) program also reports successes, including closure of over 1.7 million USTs since the program was initiated in 1984. The cumulative cost associated with these national efforts underscores the importance of pollution prevention and serves as a powerful incentive to reduce the discharge or release of hazardous substances to the environment, particularly when a groundwater resource is threatened. Although some of the success stories described above were challenging in terms of contaminants present and underlying hydrogeology, the majority of sites that have been closed were relatively simple (e.g., shallow, localized petroleum contamination from USTs) compared to the remaining caseload.

Indeed, hundreds of thousands of sites across both state and federal programs are thought to still have contamination remaining in place at levels above those allowing for unlimited land and groundwater use and unrestricted exposure (see Chapter 2). According to its most recent assessment, EPA estimates that more than $209 billion (in constant 2004 dollars) will be needed over the next 30 years to mitigate hazards at between 235,000 and 355,000 sites (EPA, 2004). This cost estimate, however, does not include continued expenditures at sites where remediation is already in progress, or where remediation has transitioned to long-term management.

It is widely agreed that long-term management will be needed at many sites for the foreseeable future, particularly for the more complex sites that have recalcitrant contaminants, large amounts of contamination, and/or subsurface conditions known to be difficult to remediate (e.g., low-permeability strata, fractured media, deep contamination).

According to the most recent annual report to Congress, the Department of Defense (DoD) currently has almost 26,000 active sites under its Installation Restoration Program where soil and groundwater remediation is either planned or under way. Of these, approximately 13,000 sites are the responsibility of the Army, the sponsor of this report. The estimated cost to complete cleanup at all DoD sites is approximately $12.8 billion. (Note that these estimates do not include sites containing unexploded ordnance.)

Complex Contaminated Sites

Although progress has been made in remediating many hazardous waste sites, there remains a sizeable population of complex sites where restoration is likely not achievable in the next 50-100 years. Although there is no formal definition of complexity, most remediation professionals agree that attributes include areally extensive groundwater contamination, heterogeneous geology, large releases and/or source zones, multiple and/or recalcitrant contaminants, heterogeneous contaminant distribution in the subsurface, and long time frames since releases occurred.

Complexity is also directly tied to the contaminants present at hazardous waste sites, which can vary widely and include organics, metals, explosives, and radionuclides. Some of the most challenging to remediate are dense nonaqueous phase liquids (DNAPLs), including chlorinated solvents.

Each of the NRC studies has, in one form or another, recognized that in almost all cases, complete restoration of contaminated groundwater is difficult, and in a substantial fraction of contaminated sites, not likely to be achieved in less than 100 years.

Trichloroethene (TCE) and tetrachloroethene are particularly challenging to restore because of their complex contaminant distribution in the subsurface.

Three classes of contaminants that have proven very difficult to treat once released to the subsurface: metals, radionuclides, and DNAPLs, such as chlorinated solvents. The report concluded that “removing all sources of groundwater contamination, particularly DNAPLs, will be technically impracticable at many Department of Energy sites, and long-term containment systems will be necessary for these sites.”

An example of the array of challenges faced by the DoD is provided by the Anniston Army Depot, where groundwater is contaminated with chlorinated solvents (as much as 27 million pounds of TCE) and inorganic compounds. TCE and other contaminants are thought to be migrating vertically and horizontally from the source areas, affecting groundwater downgradient of the base including the potable water supply to the City of Anniston, Alabama. The interim Record of Decision called for a groundwater extraction and treatment system, which has resulted in the removal of TCE in extracted water to levels below drinking water standards. Because the treatment system is not significantly reducing the extent or mobility of the groundwater contaminants in the subsurface, the current interim remedy is considered “not protective.” Therefore, additional efforts have been made to remove greater quantities of TCE from the subsurface, and no end is in sight. Modeling studies suggest that the time to reach the TCE MCL in the groundwater beneath the source areas ranges from 1,200 to 10,000 years, and that partial source removal will shorten those times to 830–7,900 years.

The Department of Defense

The DoD environmental remediation program, measured by the number of facilities, is the largest such program in the United States, and perhaps the world.

The Installation Restoration Program (IRP), which addresses toxic and radioactive wastes as well as building demolition and debris removal, is responsible for 3,486 installations containing over 29,000 contaminated sites

The Military Munitions Response Program, which focuses on unexploded ordnance and discarded military munitions, is beyond the scope of this report and is not discussed further here, although its future expenses are greater than those anticipated for the IRP.

The CERCLA program was established to address hazardous substances at abandoned or uncontrolled hazardous waste sites. Through the CERCLA program, the EPA has developed the National Priorities List (NPL).  There are 1,723 facilities that have been on the NPL.

As of June 2012, 359 of the 1,723 facilities have been “deleted” from the NPL, which means the EPA has determined that no further response is required to protect human health or the environment; 1,364 remain on the NPL.

Statistics from EPA (2004) illustrate the typical complexity of hazardous waste sites at facilities on the NPL. Volatile organic compounds (VOCs) are present at 78 percent of NPL facilities, metals at 77 percent, and semivolatile organic compounds (SVOCs) at 71 percent. All three contaminant groups are found at 52 percent of NPL facilities, and two of the groups at 76 percent of facilities

RCRA Corrective Action Program

Among other objectives, the Resource Conservation and Recovery Act (RCRA) governs the management of hazardous wastes at operating facilities that handle or handled hazardous waste.

Although tens of thousands of waste handlers are potentially subject to RCRA, currently EPA has authority to impose corrective action on 3,747 RCRA hazardous waste facilities in the United States

Underground Storage Tank Program

In 1984, Congress recognized the unique and widespread problem posed by leaking underground storage tanks by adding Subtitle I to RCRA.

UST contaminants are typically light nonaqueous phase liquids (LNAPLs) such as petroleum hydrocarbons and fuel additives.

Responsibility for the UST program has been delegated to the states (or even local oversight agencies such as a county or a water utility with basin management programs), which set specific cleanup standards and approve specific corrective action plans and the application of particular technologies at sites. This is true even for petroleum-only USTs on military bases, a few of which have hundreds of such tanks.

At the end of 2011, there were 590,104 active tanks in the UST program

Currently, there are 87,983 leaking tanks that have contaminated surrounding soil and groundwater, the so-called “backlog.” The backlog number represents the cumulative number of confirmed releases (501,723) minus the cumulative number of completed cleanups (413,740).
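A small sketch of this backlog arithmetic (my own restatement, not from the NRC report): the per-site cost used below is the EPA (2004) average of roughly $125,000 per release site cited later in this report, applied here only for illustration.

```python
# Minimal sketch of the UST "backlog" arithmetic described above.
# Per-site cost: the EPA (2004) average cited later in this report (~$125,000).

confirmed_releases = 501_723
completed_cleanups = 413_740
avg_cost_per_site  = 125_000          # dollars, EPA (2004) estimate

backlog = confirmed_releases - completed_cleanups
implied_liability = backlog * avg_cost_per_site

print(f"Backlog of leaking tanks: {backlog:,}")                          # 87,983
print(f"Implied remaining cost:  ${implied_liability/1e9:.1f} billion")  # ~$11 billion
```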

Department of Energy

The DOE faces the task of cleaning up the legacy of environmental contamination from activities to develop nuclear weapons during World War II and the Cold War. Contaminants include short-lived and long-lived radioactive wastes, toxic substances such as chlorinated solvents, “mixed wastes” that include both toxic substances and radionuclides, and, at a handful of facilities, unexploded ordnance. Much like the military, a given DOE facility or installation will tend to have multiple sites where contaminants may have been spilled, disposed of, or abandoned that can be variously regulated by CERCLA, RCRA, or the UST program.

The DOE Environmental Management program, established in 1989 to address several decades of nuclear weapons production, “is the largest in the world, originally involving two million acres at 107 sites in 35 states and some of the most dangerous materials known to man”.

Given that major DOE sites tend to be more challenging than typical DoD sites, it is not surprising that the scope of future remediation is substantial. Furthermore, because many DOE sites date back 50 years, contaminants have diffused into the subsurface matrix, considerably complicating remediation.

More recent reports suggest that about 7,000 individual release sites out of 10,645 historical release sites have been “completed,” which means at least that a remedy is in place, leaving approximately 3,650 sites remaining. In 2004, DOE estimated that almost all installations would require long-term stewardship

As of April 1995, over 3,000 contaminated sites on 700 facilities, distributed among 17 non-DoD and non-DOE federal agencies, were potentially in need of remediation. The Department of Interior (DOI), Department of Agriculture (USDA), and National Aeronautics and Space Administration (NASA) together account for about 70 percent of the civilian federal facilities reported to EPA as potentially needing remediation (EPA, 2004). EPA estimates that many more sites have not yet been reported, including an estimated 8,000 to 31,000 abandoned mine sites, most of which are on federal lands, although the fraction of these that are impacting groundwater quality is not reported. The Government Accountability Office (GAO) (2008) determined that there were at least 33,000 abandoned hardrock mine sites in the 12 western states and Alaska that had degraded the environment by contaminating surface water and groundwater or leaving arsenic-contaminated tailings piles.

State Sites

A broad spectrum of sites is managed by states, local jurisdictions, and private parties, and thus are not part of the CERCLA, RCRA, or UST programs. These types of sites can vary in size and complexity, ranging from sites similar to those at facilities listed on the NPL to small sites with low levels of contamination.

States typically define Brownfields sites as industrial or commercial facilities that are abandoned or underutilized due to environmental contamination or fear of contamination. EPA (2004) postulated that only 10 to 15 percent of the estimated one-half to one million Brownfields sites have been identified.

As of 2000, 23,000 state sites had been identified as needing further attention that had not yet been targeted for remediation (EPA, 2004). The same study estimated that 127,000 additional sites would be identified by 2030.

Dry Cleaner Sites

Active and particularly former dry cleaner sites present a unique problem in hazardous waste management because of their ubiquitous nature in urban settings, the carcinogenic contaminants used in the dry cleaning process (primarily the chlorinated solvent PCE, although other solvents have been used), and the potential for the contamination to reach receptors via the drinking water and indoor air (vapor intrusion) exposure pathways. Depending on the size and extent of contamination, dry cleaner sites may be remediated under one or more state or federal programs such as RCRA, CERCLA, or the state-mandated or voluntary programs discussed previously, and thus total estimates of dry cleaner sites are not listed separately.

In 2004, there were an estimated 30,000 commercial, 325 industrial, and 100 coin-operated active dry cleaners in the United States (EPA, 2004). Despite their smaller numbers, industrial dry cleaners produce the majority of the estimated gallons of hazardous waste from these facilities (EPA, 2004). As of 2010, the number of dry cleaners had grown, with an estimated 36,000 active dry cleaner facilities in the United States—of which about 75 percent (27,000 dry cleaners) have soil and groundwater contamination (SCRD, 2010b). In addition to active sites, dry cleaners that have moved or gone out of business—i.e., inactive sites—also have the potential for contamination. Unfortunately, significant uncertainty surrounds estimates of the number of inactive dry cleaner sites and the extent of contamination at these sites. Complicating factors include the fact that (1) older dry cleaners used solvents less efficiently than newer dry cleaners, thus increasing the amount of potential contamination, and (2) dry cleaners that moved or were in business for long periods tended to employ different cleaning methods over their lifetimes. EPA (2004) documented at least 9,000 inactive dry cleaner sites, although this number does not include data on dry cleaners that closed prior to 1960. There are no data on how many of these documented inactive dry cleaner sites may have been remediated over the years. EPA estimated that there could be as many as 90,000 inactive dry cleaner sites in the United States.

Department of Defense

The Installation Restoration Program reports that it has spent approximately $31 billion through FY 2010, and estimates for “cost to complete” exceed $12 billion.

Implementation costs for the CERCLA program are difficult to obtain because most remedies are implemented by private, nongovernmental PRPs and generally there is no requirement for these PRPs to report actual implementation costs.

EPA (2004) estimated that the cost for addressing the 456 facilities that have not begun remedial action is $16-$23 billion.

A more recent report from the GAO (2009) suggests that individual site remediation costs have increased over time (in constant dollars) because a higher percentage of the remaining NPL facilities are larger and more complex (i.e., “megasites”) than those addressed in the past. Additionally, GAO (2009) found that the percentage of NPL facilities without responsible parties to fund cleanups may be increasing. When no PRP can be identified, the cost for Superfund remediation is shared by the states and the Superfund Trust Fund. The Superfund Trust Fund has enjoyed a relatively stable budget—e.g., $1.25 billion, $1.27 billion, and $1.27 billion for FY 2009, 2010, and 2011, respectively—although recent budget proposals seek to reduce these levels. States contribute as much as 50 percent of the construction and operation costs for certain CERCLA actions in their state. After ten years of remedial actions at such NPL facilities, states become fully responsible for continuing long-term remedial actions.

In 2004, EPA estimated that remediation of the remaining RCRA sites would cost between $31 billion and $58 billion, or an average of $11.4 million per facility.

Underground Storage Tank Program

There is limited information available to determine costs already incurred in the UST program. EPA (2004) estimated that the cost to close all leaking UST (LUST) sites could reach $12–$19 billion, or an average of $125,000 to remediate each release site (this includes site investigations, feasibility studies, and treatment/disposal of soil and groundwater). Based on this estimate of $125,000 per site, the Committee calculated that remediating the 87,983 backlogged releases would require roughly $11 billion. The presence of the recalcitrant former fuel additive methyl tert-butyl ether (MTBE) and its daughter product and co-additive tert-butyl alcohol could increase the cost per site. Most UST cleanup costs are paid by property owners, state and local governments, and special trust funds based on dedicated taxes, such as fuel taxes.
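To make that backlog arithmetic explicit, here is a minimal back-of-the-envelope sketch in Python. It simply assumes the EPA (2004) average of $125,000 per release applies uniformly across the 87,983 backlogged releases, so treat the result as an order-of-magnitude figure rather than a cost estimate:

```python
# Back-of-the-envelope check of the UST backlog estimate cited above.
# Assumes the EPA (2004) average cost of $125,000 applies to every backlogged release.
backlogged_releases = 87_983
avg_cost_per_release = 125_000  # dollars per site (investigation through cleanup)

total_cost = backlogged_releases * avg_cost_per_release
print(f"Estimated cost to clear the backlog: ${total_cost / 1e9:.1f} billion")
# -> Estimated cost to clear the backlog: $11.0 billion
```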

Department of Energy

The Department’s FY 2011 report to Congress shows that DOE’s anticipated cost to complete remediation of soil and groundwater contamination ranges from $17.3 to $20.9 billion. The program is dominated by a small number of mega-facilities, including Hanford (WA), Idaho National Laboratory, Savannah River (SC), Los Alamos National Laboratory (NM), and the Nevada Test Site. Given that the cost to complete soil and groundwater remediation at these five facilities alone ranges from $16.4 to $19.9 billion (DOE, 2011), the Committee believes that DOE’s anticipated cost-to-complete figure is likely an underestimate of the agency’s financial burden; the number does not include newly discovered releases or the cost of long-term management at all sites where waste remains in the subsurface. Data on long-term stewardship costs, including the expense of operating and maintaining engineering controls, enforcing institutional controls, and monitoring, are not consolidated but are likely to be substantial and ongoing.

Stewardship costs for just the five facilities managed by the National Nuclear Security Administration (Lawrence Livermore National Laboratory, CA; Livermore’s Site 300; Pantex, TX; Sandia National Laboratories, NM; and the Kansas City Plant, MO) total about $45 million per year (DOE, 2012c).

Other Federal Sites

EPA (2004) reports an estimated cost of $15–$22 billion to address at least 3,000 contaminated areas on 700 civilian federal facilities, based on estimates from various reports from DOI, USDA, and NASA.

States

EPA (2004) estimated that states and private parties together have spent about $1 billion per year on remediation, addressing about 5,000 sites annually under mandatory and voluntary state programs. If remediation continued at this rate, 150,000 sites would be completed over 30 years, at a cost of approximately $30 billion (roughly $200,000 per site).
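The state-program projection can be checked the same way. The sketch below assumes, as the EPA (2004) figures imply, that spending (about $1 billion per year) and the completion rate (about 5,000 sites per year) stay constant for 30 years:

```python
# Rough projection of state/private remediation implied by the EPA (2004) figures.
# Assumes constant annual spending and a constant completion rate for 30 years.
annual_spending = 1.0e9   # dollars per year (states and private parties combined)
sites_per_year = 5_000
years = 30

total_sites = sites_per_year * years        # 150,000 sites
total_cost = annual_spending * years        # $30 billion
cost_per_site = total_cost / total_sites    # about $200,000 per site on average
print(f"{total_sites:,} sites, ${total_cost / 1e9:.0f} billion total, "
      f"${cost_per_site:,.0f} per site")
# -> 150,000 sites, $30 billion total, $200,000 per site
```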

IMPACTS TO DRINKING WATER SUPPLIES

The Committee sought information on the number of hazardous waste sites that impact a drinking water aquifer—that is, that pose a substantial near-term risk to public water supply systems using groundwater as a source. Unfortunately, program-specific information on water supply impacts was generally not available. Therefore, the Committee also sought other evidence of the effects of hazardous waste disposal on the nation’s drinking water aquifers.

Despite the existence of several NPL and DoD facilities that are known sources of contamination to public or domestic wells (e.g., the San Fernando and San Gabriel basins in Los Angeles County), there is little aggregated information about the number of CERCLA, RCRA, DoD, DOE, UST, or other sites that directly impact drinking water supply systems. None of the programs reviewed in this chapter specifically compiles information on the number of sites currently adversely affecting a drinking water aquifer. However, the Committee was able to obtain information relevant to groundwater impacts from some programs, for example the DoD. The Army informed the Committee that public water supplies are threatened at 18 Army installations.

Also, private drinking water wells are known to be affected at 23 installations. A preliminary assessment in 1997 showed that 29 Army installations may possibly overlie one or more sole source aquifers. Some of the best known are Camp Lejeune Marine Corps Base (NC), Otis Air National Guard Base (MA), and the Bethpage Naval Weapons Industrial Reserve Plant (NY).

CERCLA. Each individual remedial investigation/feasibility study (RI/FS) and Record of Decision (ROD) should state whether a drinking water aquifer is affected, although this information has not been compiled. Canter and Sabatini (1994) reviewed the RODs for 450 facilities on the NPL. Their investigation revealed that 49 of the RODs (11 percent) indicated that contamination of public water supply systems had occurred. “A significant number” of RODs also noted potential threats to public supply wells. Additionally, the authors note that undeveloped aquifers have also been contaminated, which prevents or limits the unrestricted use (i.e., without treatment) of these resources as a future water supply.

The EPA also compiles information about remedies implemented within Superfund. EPA (2007) reported that out of 1,072 facilities that have a groundwater remedy, 106 specifically have a water supply remedy, by which we inferred direct treatment of the water to allow potable use or switching to an alternative water supply. This suggests that 10 percent of NPL facilities adversely affect or significantly threaten drinking water supply systems.

RCRA. Of the 1,968 highest priority RCRA Corrective Action facilities, EPA (2008) reported that there is “unacceptable migration of contaminated groundwater” at 77 facilities. Also, 17,042 drinking water aquifers have a RCRA facility within five miles, but without additional information, it is impossible to know if these facilities are actually affecting the water sources.

UST. In 2000, 35 states reported USTs as the number one threat to groundwater quality (and thus indirectly to drinking water). However, more specific information on the number of leaking USTs currently impacting a drinking water aquifer is not available.

Other Evidence That Hazardous Waste Sites Affect Water Supplies

The U.S. Geological Survey (USGS) has compiled large data sets over the past 20 years regarding the prevalence of VOCs in waters derived from domestic (private) and public wells. VOCs include solvents, trihalomethanes (some of which are solvents [e.g., chloroform] but may also arise from chlorination of drinking water), refrigerants, organic synthesis compounds (e.g., vinyl chloride), gasoline hydrocarbons, fumigants, and gasoline oxygenates. Because many (but not all) of these compounds may arise from hazardous waste sites, the USGS studies provide further insight into the extent to which anthropogenic activities contaminate groundwater supplies.

Zogorski et al. (2006) summarized the presence of VOCs in groundwater, private domestic wells, and public supply wells from sampling sites throughout the United States. Using a threshold level of 0.2 µg/L—much lower than current EPA drinking water standards for individual VOCs (see Table 3-1)—14 percent of domestic wells and 26 percent of public wells had one or more VOCs present. The detection frequencies of individual VOCs in domestic wells were two to ten times higher when a threshold of 0.02 µg/L was used (see Figures 2-2 and 2-3). In public supply wells, PCE was detected above the 0.2 µg/L threshold in 5.3 percent of the samples and TCE in 4.3 percent of the samples. The total percentage of public supply wells with either PCE or TCE (or both) above the 0.2 µg/L threshold is 7.3 percent.
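The overlap between PCE and TCE detections is not stated directly, but it can be backed out of the quoted percentages by inclusion-exclusion; a minimal sketch, assuming the three figures refer to the same set of sampled public supply wells:

```python
# Inclusion-exclusion on the public-supply-well detection frequencies quoted above
# (percent of wells with detections above the 0.2 µg/L assessment level).
pce = 5.3        # wells with PCE detected
tce = 4.3        # wells with TCE detected
either = 7.3     # wells with PCE or TCE (or both) detected

both = pce + tce - either   # implied overlap: wells with both compounds
print(f"Implied share of public wells with both PCE and TCE: {both:.1f}%")
# -> Implied share of public wells with both PCE and TCE: 2.3%
```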

FIGURE 2-2 Detection frequencies in domestic well samples for the 15 most frequently detected VOCs at assessment levels of 0.2 and 0.02 µg/L. SOURCE: Zogorski et al. (2006), with illustration provided by the USGS National Water Quality Assessment program.

FIGURE 2-3 The 15 most frequently detected VOCs in public supply wells. SOURCE: Zogorski et al. (2006), with illustration provided by the USGS National Water Quality Assessment program.

Further analysis of domestic wells by DeSimone et al. (2009) showed that organic contaminants were detected in 60 percent of 2,100 sampled wells. Wells were sampled in 48 states in parts of 30 regionally extensive aquifers used for water supply. Aquifers were randomly selected for sampling and there was no prior knowledge of contamination.


Toccalino and Hopple (2010) and Toccalino et al. (2010) focused on 932 public supply wells across the United States. The public wells sampled in this study represent less than 1 percent of all groundwater that feeds the nation’s public water systems. The samples, however, were widely distributed nationally and were randomly selected to represent typical aquifer conditions. Overall, 60 percent of public wells contained one or more VOCs at a concentration of ≥ 0.02 µg/L, and 35 percent of public wells contained one or more VOCs at a concentration of ≥ 0.2 µg/L.

 

Overall detection frequencies for individual compounds included 23 percent for PCE, 15 percent for TCE, 14 percent for MTBE, and 12 percent for 1,1,1-TCA (see Figure 2-5). PCE and TCE exceeded the MCL in approximately 1 percent of the public wells sampled.

 

FIGURE 2-4 VOCs (in black) and pesticides (in white) detected in more than 1 percent of domestic wells at an assessment level of 0.02 µg/L.

 

FIGURE 2-5 VOCs and pesticides with detection frequencies of 1 percent or greater at assessment levels of 0.02 µg/L in public wells in samples collected from 1993–2007. SOURCE: Toccalino and Hopple (2010) and Toccalino et al. (2010).

 

Overall, the USGS studies show that there is widespread, very low-level contamination of private and public wells by VOCs, with a reasonable estimate being 60 to 65% of public wells having detectable VOCs. According to the data sets of Toccalino and Hopple (2010) and Toccalino et al. (2010), approximately 1% of sampled public wells have levels of VOCs above MCLs. Water from these wells requires additional treatment to remove the contaminants before it is provided as drinking water to the public.

EPA (2009b) compiled over 309,000 groundwater measurements of PCE and TCE from raw water samples at over 46,000 groundwater-derived public water supplies in 45 states. Compared to the USGS data, this report gives a lower percentage of contaminated water supplies: TCE exceeded its MCL in 0.34 percent of the raw water samples from groundwater-derived drinking water supply systems.

There are other potential sources of VOCs in groundwater beyond hazardous waste sites. For example, chloroform is a solvent but also a disinfection byproduct, so groundwater sources impacted by chlorinated water (e.g., via aquifer storage/recharge or leaking sewer pipes) would be expected to show chloroform detections. Another pattern in the USGS data is that domestic and public wells in urban areas are more likely to have VOC detections than those in rural areas. This finding is not unexpected given the much higher concentration of industrial activities in urban areas that can release these chemicals to the subsurface.

Another way to estimate the number of public water supplies affected by contaminated groundwater is to consider the number of water supply systems that specifically seek to remove organic contaminants. The EPA Community Water System Survey (EPA, 2002) reports that 2.3 to 2.6 percent of systems relying solely on groundwater have “organic contaminant removal” as a treatment goal. For systems that use both surface water and groundwater, 10.3 to 10.5 percent have this as a treatment goal.

 

In summary, the following conclusions about the contamination of private and public groundwater systems can be drawn: (1) there is VOC contamination of many private and public wells (upwards of 65%) in the U.S., but generally at levels well below MCLs; the origin of this contamination is uncertain and the proportion caused by releases from hazardous waste sites is unknown; (2) approximately one in ten NPL facilities is impacting or significantly threatening a drinking water supply system that relies on groundwater, requiring wellhead treatment or the use of alternative water sources; and (3) public wells are more susceptible to contamination than private wells, due to their higher likelihood of being in urban areas and their larger pumping rates and hydraulic capture zones.

 

All of these issues suggest that there can be no generalizations about the condition of sites referred to as “closed,” particularly assumptions that they are “clean,” meaning available for unlimited use and unrestricted exposure. Indeed, the experience of the Committee in researching “closed sites” suggests that many of them contain contaminant levels above those allowing for unlimited use and unrestricted exposure, even in those situations where there is “no further action” required.

 

Furthermore, it is clear that states are not tracking their caseload at the level of detail needed to ensure that risks are being controlled subsequent to “site closure.” Thus, reports of cleanup success should be viewed with caution.

 

CONCLUSIONS AND RECOMMENDATIONS

 

The Committee’s rough estimate of the number of sites remaining to be addressed and their associated future costs is presented in Table 2-6, which lists the latest available information on the number of facilities (for CERCLA and RCRA) and contaminated sites (for the other programs) that have not yet reached closure, and the estimated costs to remediate the remaining sites.

 

 

TABLE 2-6 Rough estimate of the total number of currently known facilities or contaminated sites that have not reached closure, and estimated costs to complete.

At least 126,000 sites across the country have been documented as having residual contamination at levels that prevent them from reaching closure. This number is likely an underestimate of the extent of contamination in the United States for several reasons. First, for some programs data are available only for contaminated facilities rather than individual sites; RCRA officials, for example, declined to provide an average number of solid waste management units per facility, noting that it ranged from 1 to “scores,” and CERCLA facilities frequently contain more than one individual release site. Second, the total does not include DoD sites that have reached remedy in place or response complete, although some such sites may indeed contain residual contamination. Finally, the total does not include sites that likely exist but have not yet been identified, such as dry cleaners or small chemical-intensive businesses (e.g., electroplating, furniture refinishing).

 

 

Information on cleanup costs incurred to date and estimates of future costs, as shown in Table 2-6, are highly uncertain. Despite this uncertainty, the estimated “cost to complete” of $110–$127 billion is likely an underestimate of future liabilities. The remaining sites include some of the most difficult-to-remediate sites, for which the effectiveness of planned remediation remains uncertain given their complex site conditions. Furthermore, many of the estimated costs (e.g., the CERCLA figure) do not fully consider the cost of long-term management of sites that will have contamination remaining in place at high levels for the foreseeable future.

 

Remedial Objectives, Remedy Selection, and Site Closure

The issue of setting remedial objectives touches every aspect and phase of soil and groundwater cleanup, perhaps none more important than defining the conditions for “site closure.” Whether a site can be “closed” depends largely on whether remediation has met its stated objectives, usually framed as “remedial action objectives.” Such determinations can be very difficult to make when objectives are stated in ill-defined terms such as removal of mass “to the maximum extent practicable.” More importantly, there are debates at hazardous waste sites across the country about whether to alter long-standing cleanup objectives when they cannot be attained in a reasonable time frame. For example, the state of California is closing a large number of petroleum underground storage tank sites that are deemed to present a low threat to the public, even though the affected groundwater does not meet cleanup goals. In other words, some residual contamination remains in the subsurface, but this residual contamination is deemed not to pose unacceptable future risks to human health and the environment. Other states have pursued similar pragmatic approaches to low-risk sites where the residual contaminants are known to biodegrade over time, as is the case for most petroleum-based chemicals of concern (e.g., benzene, naphthalene). Many of these efforts appear to be a response to the slow pace of cleanup of contaminated groundwater; the inability of many technologies to meet drinking water-based cleanup goals in a reasonable period of time, particularly at sites with dense nonaqueous phase liquids (DNAPLs) and complicated hydrogeology such as fractured rock; and the limited resources available to fund site remediation.

There is considerable variability in how EPA and the states consider groundwater as a potential source of drinking water. EPA has defined groundwater as not capable of being used as a source of drinking water if (1) the available quantity is too low (e.g., less than 150 gallons per day can be extracted), (2) the groundwater quality is unacceptable (e.g., greater than 10,000 ppm total dissolved solids, TDS), (3) background levels of metals or radioactivity are too high, or (4) the groundwater is already contaminated by manmade chemicals (EPA, 1986, cited in EPA, 2009a). California, on the other hand, establishes the TDS criteria at less than 3,000 ppm to define a “potential” source of drinking water. And in Florida, cleanup target levels for groundwater of low yield and/or poor quality can be ten times higher than the drinking water standard (see Florida Administrative Code Chapter 62-520 Ground Water Classes, Standards, and Exemptions). Some states designate all groundwater as a current or future source of drinking water (GAO, 2011).
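To make those criteria concrete, here is an illustrative sketch (not an official rule set) of the decision logic described above; the function name and structure are hypothetical, and the thresholds are simply the ones quoted in the text (EPA, 1986, cited in EPA, 2009a, and California's stricter TDS cutoff):

```python
# Illustrative sketch of the "potential drinking water source" criteria described above.
# The helper name and structure are hypothetical; thresholds are those quoted in the text.
def is_potential_drinking_water_source(yield_gpd, tds_ppm, high_background,
                                       already_contaminated, state="EPA"):
    tds_limit = 3_000 if state == "California" else 10_000  # ppm total dissolved solids
    if yield_gpd < 150:          # (1) too little water can be extracted
        return False
    if tds_ppm > tds_limit:      # (2) water quality unacceptable
        return False
    if high_background:          # (3) background metals or radioactivity too high
        return False
    if already_contaminated:     # (4) already contaminated by manmade chemicals
        return False
    return True

# The same aquifer can be classified differently under the two sets of criteria:
print(is_potential_drinking_water_source(500, 5_000, False, False))                      # True (EPA)
print(is_potential_drinking_water_source(500, 5_000, False, False, state="California"))  # False (TDS > 3,000 ppm)
```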

The Limits of Aquifer Restoration

As shown in many previous reports (EPA, 2003; NRC, 1994, 1997, 2003, 2005), at complex groundwater contamination sites (particularly those with low solubility or strongly adsorbed contaminants), conventional and alternative remediation technologies have not been capable of reducing contaminant concentrations (particularly in the source area) to drinking water standards quickly.

 

 


Book review of Fruits of Eden: David Fairchild & America’s Plant Hunters

Preface. Botanist David Fairchild is one of the reasons the average grocery store has 39,500 items. Before he came along, most people ate just a few kinds of food day in day out (though that was partly due to a lack of refrigeration).

Ever since I read this book I have longed to eat a mangosteen, Fairchild’s favorite fruit (mango was a close second). But no luck so far.

What wonderful and often adventurous work Fairchild and other botanists had traveling all over the world in search of new crops American farmers could grow. Grains that could grow in colder climates were sought out.

Since 80 to 90% of people in future generations will be farmers once fossil fuels are gone, and they will be growing food organically because fertilizer and pesticides are made from natural gas and oil, it would be wise for them to plant as many varieties of crops as possible, not only for gourmet meals but for biodiversity, pest control, and a higher quality of life.

As usual, what follows are Kindle notes, this isn’t a proper book review.

Alice Friedemann   www.energyskeptic.com  author of  “Life After Fossil Fuels: A Reality Check on Alternative Energy”, 2021, Springer; “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer; Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Amanda Harris. 2015. Fruits of Eden: David Fairchild and America’s Plant Hunters. University Press of Florida.

At the end of the 19th century, most food in America was bland and brown. The typical family ate pretty much the same dishes every day. Their standard fare included beefsteaks smothered in onions, ham with rank-smelling cabbage, or maybe mushy macaroni coated in cheese. Since refrigeration didn’t exist, ingredients were limited to crops raised in the backyard or on a nearby farm. Corn and wheat, cows and pigs dominated American agriculture and American kitchens.

Fairchild transformed American meals by introducing foods from other countries. His campaign began as a New Year’s Resolution for 1897 and continued for more than 30 years, despite difficult periods of xenophobia at home and international warfare abroad. After he persuaded the United States Department of Agriculture to sponsor his project, he sent other smart, curious botanists to Asia, Africa, South America, and Europe to find new foods and plants. They explored remote jungles, desert oases, and mountain valleys and shipped their discoveries to government gardeners for testing across America. Collectively, the plant explorers introduced more than 58,000 items.

Many of their discoveries have been used as breeding material to improve existing plants, and others have become staples of the American table like mangoes, avocados, soybeans, figs, dates, and Meyer lemons.

Fairchild arrived in the nation’s capital on July 25, 1889, four months after the inauguration of Benjamin Harrison, a Republican from Indiana. The United States totaled 38 states, although four new ones—North Dakota, Washington, South Dakota, and Montana—would be added in November 1889. The country’s population was a little more than 50 million. Farming was an enormously important segment of the economy: the market value of agricultural products was more than $500 million (more than $12.5 billion in current dollars). Young scientists working to improve agriculture were as valuable to the nation as rocket scientists would be 75 years later.

Despite the national importance of farming, the U.S. Department of Agriculture had become a cabinet-level agency—one of seven—only a few months earlier. For decades, presidents had considered creating a separate office to help farmers, but many legislators, especially southerners, vehemently opposed granting the federal government any official role in the family farm, a fiercely independent American institution.  Congress had finally established the office in 1862 only because the southern states had seceded, leaving northern senators and representatives free to approve the legislation without opposition.

After the Civil War ended, Barbour Lathrop’s uncle Thomas Barbour Bryan built Graceland Cemetery, a significant urban development that was Chicago’s first landscaped burial ground. He hired his nephew, Bryan Lathrop, to manage the cemetery, a job he apparently did well. Creating Graceland would probably have remained the family’s biggest accomplishment if not for the Great Chicago Fire of Sunday, October 8, 1871, a day that created one of the biggest real estate investment opportunities in American history. The fire triggered a chain of events that transformed urban architecture and, in the process, produced the personal fortune that bankrolled America’s first plant expeditions.

After Fairchild arrived in Naples he immediately recognized how unexciting American meals had been. “No sooner had I landed in Italy that I began to get a perspective on the limited number of foods which the fare in my home and in American boarding houses had brought to my palate,” he wrote later. His education began in a small restaurant where he usually ate lunch. There he sampled his first foreign food: a dried fig, a wickedly sweet morsel for a young man raised on boiled vegetables. He tried vermicelli with a sauce of tomatoes, a fruit whose possibly poisonous qualities were still being debated in America. He enjoyed Italian pasta so much—it was chewy and flavorful, not the mushy kind made with soft American wheat—that he collected 52 shapes and mailed them to friends in Washington.

As he rushed away from Corsica Fairchild stole a few cuttings from citron trees along a road and hid them under his coat. Unequipped with material to protect the branches from drying out on the long voyage between Italy and America, he jammed the sticks into raw potatoes, packaged the lot and mailed them. The potatoes provided enough moisture to nourish the cuttings, which survived the trip to Washington. Officials sent the twigs to California, where they launched a profitable business.

At the end of 1895, Fairchild went to Java. The ship landed on the west coast of Sumatra at the village of Padang, a collection of low buildings strung along the waterfront and backed by thick jungle. Fairchild was finally in the South Seas, on the verge of seeing the world he had dreamed about in Kansas. He never forgot the thrill of his first visit. “The memory of that first tropical night on shore and of the noise of the myriads of insects and the smell of the vegetation and the sensation of being close to wild jungles and wild people sometimes comes back to me even though millions of later experiences have left their traces on my brain.”  

The Visitors’ Laboratory at the botanical garden in Buitenzorg, a city now called Bogor, was, like the Zoological Station in Naples, an unusual spot where botanists from around the world worked together. This spirit of shared scientific inquiry among researchers of all nationalities and all specialties stayed with Fairchild for the rest of his life.

“The institution was to discover and bring to light a knowledge of the plant life of the tropical world,” Fairchild wrote later. “Not for the uses of Holland and Netherlands India alone, but for the whole world of plants—a world which knows no national boundaries, a world which constitutes a vast, magnificent realm of living stuff destined to be of interest to the human race for all time.”  

Most remarkable were the unfamiliar, even bizarre tropical fruits. It was in Java, in the summer of 1896, that David Fairchild began his lifelong love affair with one food: the mangosteen. Four years later he launched a lifelong but ultimately unsuccessful push to cultivate them in America. His enthusiasm mirrored the fascination of Queen Victoria, who in 1855 allegedly promised to pay 100 pounds to the first person to bring her a single mangosteen.

After this Fairchild went to Sumatra, and after landing toured the public market in a settlement called Padang. It was a noisy, crowded place that offered a cornucopia of strange cultivated fruits and vegetables. Fairchild was immediately intrigued. The visit “showed me how many new and interesting food plants there were if only we had an established place where they could be sent,” he wrote.

Fairchild’s wealthy supporter, Lathrop, proposed that these strange, foreign plants be sent to America to see which ones take root, produce fruit, and make money for farmers and merchants. At the time, only about 2% of the world’s edible plants were cultivated in America, and the typical farmer grew only about twenty of them. Lathrop wanted Americans to open their mouths to new foods.

“He began to lay before me his idea of what a botanist could do if he were given the opportunity to travel and collect the native vegetables, fruits, drug plants, grains and all the other types of useful plants as yet unknown in America,” Fairchild wrote later. It was a long evening of lively debate, and in the end, Lathrop won. Fairchild agreed to join his project. He would abandon his cloistered studies in Java and take up the mission of foreign plant introduction. As the clock approached midnight, David Fairchild promised Barbour Lathrop that he would spend his life searching the globe for new foods. “Without Barbour Lathrop to goad him into an entirely different life work,” Douglas wrote later, “to pay his salary and his expenses on their long wanderings, David Fairchild might have become a quiet, little-known if distinguished plant pathologist and entomologist, a scientist-scholar whose life might have been lived almost entirely within the walls of some laboratory.”

“The greatest service which can be rendered any country is to add a useful plant to its culture,” Jefferson wrote in 1800, a remark that later American plant explorers frequently quoted with pride. Jefferson had followed his own advice: he once smuggled grains of rice from Italy to Virginia in his coat pocket even though Italian officials could have executed him if he had been caught.

When Fairchild and Lathrop began the adventures that would change America’s eating habits, they looked like improbable companions. Lathrop was tall, slim, and always well dressed; in bearing he resembled the military men he admired. He carried a cane and wore a hat wherever he went. Fairchild, in contrast, was gawky and uncertain and rarely wore clothes appropriate to the occasion, whatever it was. Lathrop was demanding and critical; Fairchild was constantly frazzled. In the beginning Lathrop, who had flashing dark blue eyes and expressive bushy eyebrows, called Fairchild “my investment,” with a little bit of a sneer. Fairchild, fully aware of the contrast, felt inadequate. “Somehow I could not do anything quite to suit him,” he admitted. Fairchild was so socially awkward that he agreed to one condition of working with Lathrop: he promised not to get married while he was exploring for plants.

Their expedition began immediately with a leisurely cruise to Singapore and Siam. A few days later he and Lathrop attended a young couple’s wedding dinner. It was a special occasion because the Crown Prince of Siam also attended the feast. Fairchild found the food unfamiliar and the formal etiquette bizarre. “During the 13-course dinner, every dish was strange to us except the rice,” he wrote later. “Each course was noiselessly placed on the table by a servant deferentially crawling on his knees. Not a person stood or walked erect while the prince and his guests were at the table. At the close of the long meal, the wives appeared and even those of royal birth all hitched themselves across the floor like a child who has not yet learned to creep.” As witnesses to the wedding ceremony, Fairchild and Lathrop were obliged by local custom to trickle perfumed water down the bride and groom’s necks as the couple knelt together with their foreheads touching. “If the others poured as much water from the jeweled conch shell as I did,” he wrote later, “the poor bride and groom must have been well soaked.”

The two had a clear plan. First of all, they were only interested in new foods and other useful plants, nothing ornamental or impractical. Also, they needed trained botanists to do the hunting so the government wouldn’t be inundated with worthless material. Next, they wanted experiment gardens prepared to test the foreign plants. Finally, Swingle and Fairchild proposed, the whole operation could be funded by quietly diverting $20,000 (equal to about $500,000 today) from another line in the agriculture department’s budget. It was an audacious scheme from two junior botanists. But by then Fairchild had grown more confident.

Fairchild and Swingle were apprehensive when they entered their new boss’s office at the end of August 1897 even though they had arranged for a senior department employee to go with them to give their idea more credibility. “Secretary Wilson was a tall, gaunt man with a gray beard and deep-set eyes,” Fairchild remembered. “He sat listening to us with his eyes half closed and, at intervals, made use of the nearby spittoon. … I waited breathlessly for his verdict.”

Wilson named it “the section of foreign seed and plant introduction”. No modern government had employed its own team of full-time plant explorers. In England and France, large private companies had sponsored many foreign plant expeditions to increase their profits by selling rare plants, usually showy ornamentals. These private firms were fiercely competitive and proprietary about their discoveries, but the U.S. government would be eager to share its findings with the public and let farmers make money.

Lathrop suddenly arrived in person as Fairchild was engaged in his valuable but sedentary work. Wasting no time, Lathrop tempted him with the offer of another exciting trip to faraway lands, one that would be longer and more interesting than their six-month cruise through the South Seas. When Fairchild protested that he had just started his new job, Lathrop argued that he was too inexperienced to supervise international plant collectors. If the government’s scheme were to succeed, Lathrop insisted, Fairchild couldn’t depend on strangers to send the material he wanted. He needed to visit the places himself and make important contacts with botanists, gardeners, and government officials.

The two-year trip Lathrop had promised turned into a five-year odyssey. It was a remarkable adventure of luxury travel experiences, punctuated by meetings with prominent horticulturalists—few were lowly enough to be called gardeners—and casual, dreamlike botanizing sessions on remote islands.

His visit to Maine in the summer of 1898 was brief. Because Lathrop was paying the bills, traveling was always conducted on his terms: expensive, comfortable, quick, and not always in a straight line. The zigzagging began immediately after the two men left Maine for California where Fairchild met Luther Burbank, America’s first celebrity nurseryman. Burbank had caused great excitement in horticultural circles by inventing startling new varieties of fruits, vegetables, and flowers in these years before scientists understood the science of plant breeding.

Trinidad, Jamaica, and Barbados received a little more attention. In Kingston, Fairchild first tasted chayote, a mild-flavored squash that he later tried hard to persuade Americans to appreciate. Fairchild collected 16 varieties of yams and four kinds of sweet potatoes, nutritious staples in the Caribbean diet.

Throughout South America, Fairchild hunted for plants the easiest way possible: he bought them in local markets and took cuttings from plants in botanical gardens. At this point in his travels everything was so new and Fairchild’s interests were so broad that he randomly collected samples of almost everything that was unfamiliar.

He shipped large batches to Washington, often without providing information or advice for the people who were supposed to test the plants. By July 1899 the department had received more than 200 samples of Latin American beans, peppers, squashes, melons, peas, apples, and other fruits and vegetables. Fairchild’s most successful discovery during the first part of the expedition was an alfalfa from Lima, Peru, that eventually flourished as a forage plant in Arizona known as the “Hairy Peruvian”.

In Chile he bought a bushel of avocado seeds that wound up in California; they produced one of the earliest varieties grown there. Many foods Fairchild collected failed; he admitted that a large percentage of the plants he shipped were lost before they got a chance to grow in America.

The men were constantly exposed to illness. When they arrived in Panama in February 1899, a few years after yellow fever had forced French engineers to abort construction of the canal there, Panama was considered the most dangerous place in South America. Death was so common that all hospital patients were fitted for coffins when they were admitted for treatment.

These secret shipments included broccoli, then virtually unknown in America. In Venice Fairchild also discovered zucchini—identified as “vegetable marrow”—for sale in a market.

Before he arrived in Egypt he said he knew the word sesame only as Ali Baba’s famous password; afterward he understood it to be a source of valuable cooking oil. He also collected chickpeas, okra, strawberry spinach, and more hot peppers.

Lathrop encouraged Fairchild to buy as much cotton as possible. He shipped six bushels of seeds of three varieties, material that eventually boosted the lucrative cotton industries in Arizona and California.

Banda was an important source of nutmeg, an especially handsome plant. “There are few fruit trees more beautiful than nutmeg trees with their glossy leaves and pear-shaped, straw-colored fruits,” he recalled. “As the fruits ripen, they crack open and show the brilliant crimson mace which covers the seed or nutmeg with a thin, waxy covering. The vivid color of the fruit and the deep green foliage make the trees among the most dramatic and colorful of the tropical plant world.” Fairchild, who rarely passed up an opportunity to stroll alone among trees, spent hours wandering through nutmeg groves.

In May 1900, Fairchild visited Scandinavia to collect examples of tough-weather fruits and fodder plants.

The Chinese treated Fairchild well, and he had time to introduce himself to John M. Swan, a doctor at a missionary hospital in Canton who helped him collect dozens of peaches, plums, persimmons, and other fruits. Swan also told him how to find the seeds that produce tung oil, the glossy material used to waterproof the exterior of Chinese junks.

Fairchild was able to visit rural areas outside Canton and wander among the small vegetable plots there. “These truck gardens of a city of 2,000,000 people did not contain a single vegetable with which we are familiar in America.”

He watched Chinese farmers control pests the old-fashioned way: they picked off each insect on every plant by hand.

By the time Fairchild finished this two-month detour to the Persian Gulf he had collected 224 date palm offshoots or suckers, each weighing about thirty pounds.

After he arranged to send almost four tons of trees to Washington, Fairchild retraced his route and joined Lathrop in Japan in the summer of 1902. They lived comfortably at the Imperial Hotel in Tokyo where Lathrop relaxed and Fairchild searched for plants. He bought fruits and vegetables at public markets and discovered zoysia, a plant that eventually became a popular ground cover in America. At Lathrop’s insistence he also bought bamboo plants, a purchase that triggered Fairchild’s long love affair with this huge grass.

Japanese flowering cherry trees remained one of Fairchild’s passions.

During his travels with Lathrop, Fairchild constantly hunted for varieties of one particular food, the mango. It was his second favorite fruit after the mangosteen, which, despite its name, is not related.

It was Elbridge Gale’s determination and defiance of conventional, wrong-headed wisdom that inspired Fairchild to search for mangos all over the world. During the four years he spent traveling alone and with Lathrop, Fairchild sent 24 varieties from six countries, each supposedly tastier or hardier than the other.

Hansen, who emigrated from Denmark when he was seven years old, was a young plant breeder who worked in the northern plains, the region that Wilson was trying hardest to help. Hansen had done some traveling before Wilson hired him in spring 1897, having visited Russia and seven other countries for four months in 1894 while he was a student at Iowa State College and Wilson ran the plant experiment station there. Hansen also had another, more important qualification for the job. Unlike many other horticulturalists at the time, he was a plant breeder who understood that it was botanically impossible to acclimate plants to tolerate severe conditions; only cross breeding with proven hardy varieties could produce tough plants. Because Hansen possessed this scientific sophistication, Wilson trusted him to know what to look for in the field.

Hansen was thirty-one in 1897 when Wilson convinced him that the future of American agriculture depended on his returning to Russia to find material that could be introduced in the Dakotas, then a dry, unproductive region where few crops grew. The mission was haphazard and dangerous. Wilson paid him $3,000, a generous salary equal to about $78,000 in current dollars.  Shortly after Hansen arrived in Uzbek province in Turkistan in November 1897, a field of alfalfa with small blue flowers attracted his attention. He believed the plant would survive in South Dakota, where temperatures range from 50 degrees below zero to 114 degrees above, to provide year-round feed for livestock, as well as produce nitrogen to enrich the soil. Before he could recommend the plant to Secretary Wilson, however, he needed to figure out how far north the blue alfalfa grew.

On Christmas 1897 he reached Kopal in southwestern Siberia, a town on the same latitude as South Dakota, where the blue Turkistan alfalfa was still growing. Confident it could thrive on the northern plains, he sent thousands of seeds to Washington. (Years later he returned and discovered a hardier type, an alfalfa with tiny yellow flowers, and brought that one to America, too. As a lasting tribute to Hansen’s work, South Dakota State University selected blue and yellow as its school colors.)

At first the parcels trickled in from Russia; soon, however, hundreds of packages arrived in a deluge. One day in February 1898, twelve tons of seeds of a fodder plant called smooth brome grass from the Volga River district turned up. Fairchild struggled to keep the shipments straight and check for dangerous insects or diseases that might have accompanied the material. The department had organized a system of public and private experiment gardens to test the material, so Fairchild arranged the seeds into 5,000 small packages and shipped them around the country. The enormous workload made him miserable. Fairchild, who hated clerical tasks, soon decided that he would rather be exploring himself. Again he was unhappy. “Hansen felt that he had been sent out to collect, and he collected everything and collected it in quantity,” Fairchild recalled. Later in an unpublished essay his criticism was harsher: “Hansen’s collections took on the character of a nightmare.” Nonetheless, Hansen had Secretary Wilson’s support, and Wilson sent him on two more trips to Russia. Fairchild, who may have been jealous of Hansen’s close relationship with his boss, accused Plant Explorer Number One of keeping bad records, overspending, and—perhaps an explorer’s biggest sin—passing off plants he bought in a market as material he found in the wild.

The department’s second staff explorer, who was hired in July 1898, earned Fairchild’s great respect. He was Mark Alfred Carleton, Fairchild’s classmate at Kansas State Agricultural College, who had become a cereal specialist for the department after graduation. Carleton’s great passion was to improve the grains cultivated in America’s wheat belt. Born in Ohio and raised on a farm in Kansas, Carleton spent his childhood and youth watching his neighbors labor constantly to harvest good wheat. Most wheat cultivated in America at this time was a red or white winter variety with soft kernels high in starch and low in protein. America’s earliest settlers had planted it east of the Mississippi River and ground it into flour to make bread and pastry.

As pioneers moved west early in the nineteenth century, they brought seeds of these soft wheats with them, unaware that the varieties couldn’t handle the different growing conditions west of the Mississippi. Midwestern winters are too cold and summers are too hot and dry for most soft wheats. In the prairie fields of Kansas, Carleton learned, they were especially vulnerable to rust, a fungus that shrivels the grain and rots the straw.

Carleton had also learned, however, that not all farmers in Kansas had this problem. The exceptions were Mennonites who had arrived from Russia in 1873. America was the most recent home for these Protestants, who had wandered through Europe for generations. The sect had originally lived in West Prussia, but many members moved to southern Russia about 1770 when Catherine the Great convinced them to settle remote sections of her country in exchange for one hundred years of special privileges, including exemption from military service. The Mennonites were skilled farmers who thrived in the Crimea by developing through trial and error hard wheat varieties that could handle the tough climate there.

In the mid-1800s, as Catherine’s century of protection drew to an end, the Russian government warned the Mennonites that they would soon face conscription despite their pacifist convictions. Many in the community fled Russia and sought religious freedom in the New World.

After exploring for six months, Carleton returned to Washington with several types of wheat, including the hardest of all—durum, often known as macaroni wheat.

While midwestern farmers were pleased with Carleton’s seeds, midwestern millers were not. They didn’t want the trouble and expense of updating their machinery to process harder grains. “Durum, the hardest of hard wheats, met at once with the most violent opposition, chiefly from millers, but also from all grain men,” Carleton wrote later. “Various epithets, such as ‘bastard’ and ‘goose,’ were applied to the wheat without restriction.”

Carleton’s promotional campaign worked. Within a few years, large grain processors relented and modified their mills to grind hard wheat into flour. Carleton’s trip cost the U.S. government about $10,000 (about $250,000 today); by 1905 the new crop was worth $10 million a year (more than $250 million today)—a thousandfold return. America had so much durum wheat that the country exported 6 million bushels a year. By 2011 production rose to about 50 million bushels a year. Because of Mark Carleton, American farmers had more than enough wheat, freeing experts at the end of the nineteenth century to worry about something other than widespread famine.

Americans consumed rice primarily as a pudding, not—like most people in the world—as part of a meal’s main course. Americans demanded kernels with a clean, smooth texture. Farmers in Louisiana and Texas grew mostly long-grain varieties originally imported from Honduras, but the kernel’s length made the rice fragile. When the outer coating was polished to whiten the grains, the only kind most Americans would eat, the rice often shattered. To make the product pretty and smooth enough to attract shoppers, processors coated it with paraffin wax. Of course, this beauty came with a price; buffing removed rice’s nutrients and wax removed its taste.

America’s rice-eating habits appalled Fairchild. “Rice is the greatest food staple in the world, more people living on it than on any other, and yet Americans know so little about it that they are actually throwing away the best part of the grains of rice and are eating only the tasteless, starchy, proteinless remainder,” he wrote in a magazine article. He mocked Americans for demanding rice as shiny as “glass beads.” “A pudding of stewed, sweetened rice, dusted with cinnamon is about as unappetizing to a fastidious Japanese as a sugar-coated beefsteak filled with raisins would be to an American,” Fairchild wrote.

Those glass beads were unhealthy as well. In 1908, a decade after Knapp’s trip, scientists determined that a diet of polished white rice could cause beriberi, a discovery that forced rice growers to enrich the grains with the nutrients removed by milling.

Fairchild had taken hundreds during his travels, and as he chatted with Grosvenor, he described one unforgettable scene he had captured. In May 1901 he had gone to North Africa to find date palms. When he landed in Tunis, he noticed an astonishing spectacle: strolling through town were young women wearing yards of brilliantly colored silk and tall pointed hats. Each woman weighed about 300 pounds. “I simply could not turn my eyes away from them,” Fairchild wrote later, “and frequently turned my Kodak toward them too, although they did not like it.”  

Davidia involucrata is the most interesting and most beautiful of all trees which grow in the north temperate regions.

That spring Meyer set off for Manchuria, his first long trip inside Asia. It was a remote but promising destination because Manchuria’s growing conditions were similar to those of the northern United States, the section of the country that Secretary Wilson wanted most to help. Problems plagued the trip from the beginning, however. Officials wouldn’t let Meyer travel freely because Russian and Japanese soldiers were still skirmishing in the region, a bitter after-effect of the Russo-Japanese War that had ended only seven months earlier. Notorious outlaws called the Hun-hutzes (Red Beards) also menaced the area. Despite these obstacles, Meyer, confident he would be safe, was determined to make the trip. He knew he could be physically intimidating, especially when he wore a heavy sheepskin coat, big boots, and a bearskin hat to survive temperatures that dropped to 30 degrees below zero Fahrenheit. With a revolver and a Bowie knife in his belt, Meyer was prepared to defend himself. He relished the adventure.

He spent only three months in Manchuria, including side trips to northern Korea and Siberia. It was still a rough expedition: he covered 1,800 miles from Liaoyang to Vladivostok almost entirely on foot, averaging twenty miles a day for ninety days. He wore out three pairs of boots in three months. On the way he saw beautiful peonies growing wild and collected many specimens of useful plants, including one that eventually became enormously important to America: the soybean.

Meyer, recognizing that it was a mainstay of the Chinese diet, sent samples to Fairchild: he collected seeds, whole plants, even beans prepared as tofu, which he called cheese. During his travels Meyer shipped more than one hundred varieties—including ones that launched America’s vast soybean oil industry.

Meyer told de Vries that he had wanted to walk across Manchuria to Harbin, but the trip would have been too dangerous, so he took a train. Tigers, panthers, bears, and wolves lurked nearby, but Meyer said he was more afraid of humans than wild animals.

On March 31, 1908, as he was heading to Peking toward the end of his first expedition, Meyer stopped briefly in the small village of Fengtai. In a doorway he noticed something new. It was a small tree bearing about a dozen unusual fruits that looked like a cross between a lemon and an orange. Villagers told him that the strange plant was valuable; rich Chinese paid as much as ten dollars for each tree because it produced fruit all year. “The idea is to have as many fruits as possible on the smallest possible plant,” Meyer explained later. He sliced a thin branch off the tree with his Bowie knife and packed it carefully in damp moss. Meyer delivered it two months later to Fairchild. He gave the cutting an unexciting label—“Plant Introduction No. 23028”—and sent it to the department’s garden in Chico, California, to see if it would grow and, what was more important, produce fruit in America. The experiment lasted seven years, but eventually Fairchild was able to report that the cutting was a success. “Meyer’s dwarf lemon from Peking was producing a high yield,” he said. “It had begun to attract attention as a possible commercial lemon, even though its fruit flesh had an orange tint.”

Six weeks after he spotted the lemon, Meyer boarded a ship in Shanghai for San Francisco. He carried twenty tons of trees, cuttings, seeds, and dried herbarium material as well as, almost as an afterthought, two rare monkeys for the National Zoo. “They cause me as much trouble as babies,” Meyer complained when he arrived in California in June 1908.

Roosevelt, who was battling with Congress over the need for tough conservation laws, wanted a firsthand account of the devastation of Wutaishan. The burly plant explorer, seated in a leather armchair in a large room decorated with moose heads and bearskin rugs, described deforestation in China to the president of the United States. “The Chinese peasants have no regard for the wild vegetation and they cut down and grub out every wild wood plant in their perpetual search for fuel,” Meyer explained.

Four months later, in the leaflet he sent to Congress as his annual State of the Union message, Roosevelt quoted Meyer by name and included his photographs to illustrate the price America could pay if the nation didn’t protect its trees. Meyer’s pictures, Roosevelt told lawmakers, “show in vivid fashion the appalling desolation, taking the shape of barren mountains and gravel and sand-covered plains, which immediately follows and depends upon the deforestation of the mountains.”

While Wilson was sidelined in the hospital, Paul and Homer Brett, a U.S. consul in Muscat, set off into the interior of the Arabian Peninsula to buy date palms. They traveled sixty miles through the desert under the sultan of Oman’s protection in a caravan of eleven of the sultan’s best camels, Wilson told his father. They were ambushed twice, yet they escaped unharmed each time. The assignment was not easy. The Popenoe brothers, who both had fair skin, light hair, and bright blue eyes, must have stood out dramatically in the Mideast. “As we passed through the bazaars [in Basra], merchants would spit on the ground and significantly draw their fingers across their throats,” Paul wrote later. “In Baghdad we were chased for a mile by a crowd throwing stones, and in one of the seaports of Persia a native suddenly took a shot at us with his rifle, which fortunately missed.” Despite the risks, they did not disappoint their father. The brothers bought 9,000 date palm offshoots in Baghdad and Basra and another 6,000 in Algeria and arranged to have the huge lot—each healthy offshoot stood about three feet tall and weighed thirty pounds—shipped across the Atlantic Ocean. The trees survived the voyage because Wilson Popenoe gave his portable typewriter to the ship’s captain in exchange for enough fresh water to keep the palms alive. During the last leg of the journey from Galveston, Texas, to California, the offshoots filled seventeen refrigerated railroad cars, a load so remarkable that newspapers reported the shipment in detail. Paul Popenoe’s separate journey home took long enough for him to write three hundred pages about date palms.

The trip’s primary purpose was to finish an assignment that Frank Meyer had started before he died: save the American chestnut tree. At the beginning of the 20th century, America’s native chestnuts thrived along the Eastern Seaboard. An estimated four billion trees—many as tall as 100 feet—covered about a quarter of the region’s forests. Chestnut wood was hard and straight and vital to serve the nation’s growing needs for railroad ties and telephone poles. But in 1904 a scientist at the Bronx Zoo in New York City noticed a canker or fungus spreading on the trees’ bark. Three years later the same disease was evident on chestnut trees growing across the street in the New York Botanical Garden. It was the beginning of the most significant invasion of a foreign plant disease in American history.

Fairchild’s last official day on the agriculture department’s staff was June 30, 1935. As of that date the office he established had introduced 111,857 varieties of seeds and plants to America.

 “Many of the immigrants have their little day or hour and are never again heard from,” he wrote in the 1928 Yearbook of Agriculture. “Others sink out of sight for a time and later achieve great prominence.” He could have added that a few were out-and-out flops and others were impractical curiosities that Fairchild showed off to his friends and relatives. Yet many of David Fairchild’s plant immigrants were great successes of incalculable value. Mark Carleton’s durum wheat and Frank Meyer’s soybeans completely transformed American agriculture in the twentieth century. And by the beginning of the twenty-first century, Walter Swingle’s dates and figs and Wilson Popenoe’s avocados had become staples of the American diet. Meyer’s lemon was a food lovers’ delight. Many other introductions served the important but less visible role of providing essential breeding material to make existing plants hardier or more productive.

When David Fairchild left Washington in 1924, after giving up a job that had kept him at a desk for the better part of 20 years, his weariness suddenly vanished. Overnight, it seemed, he acquired enormous energy and enthusiasm that propelled him into a constant series of adventures that filled the rest of his life. “As the fieldmen used to say, DF had it made,” Ryerson said later. His first project took him back to the tropics. While he waited for Allison Armour to outfit his ship for the scientific expedition, Fairchild helped his friends William Morton Wheeler and Thomas Barbour, an entomologist and a zoologist associated with Harvard University, set up a new scientific research center on an island in Panama’s rain forest. Initially called the Barro Colorado Island Biological Laboratory (and now known as the Smithsonian Tropical Research Institute), the facility was modeled after the botanical institutes Fairchild loved in Naples and Java.

In September 1924, David and Marian Fairchild—and sometimes their children and friends—began exploring for plants, often under Allison Armour’s sponsorship. They drove an old American car through Algeria and Morocco, visiting gardens, ancient cities, and souks. They especially enjoyed Mogador, then a drowsy little town on the sea that was home of the rare argan nut trees. Marian Fairchild showed off her firm feminist convictions by driving their Dodge sedan through Fez. “Marian takes every opportunity to run the car around through the narrow streets just to show that she is not in any way under her husband’s thumb,” Fairchild told Grosvenor on April 4, 1925.

Sumatra and nearby islands were full of fascinating, mysterious plants. In April 1926, Fairchild finally took Marian to Java, fulfilling a promise he had made when they married more than twenty years earlier. Soon after they arrived, they visited a penal colony off the coast of Java where they encountered an imprisoned headhunter. He “had failed to get as many heads as his sweetheart demanded before she would marry him,” Fairchild explained, “because the government stopped him and sent him here after his last murder.” He had only five; she wanted six.

The kepel, whose proper name is Stelechocarpus burahol, is related to the cherimoya and the pawpaw, both fruits Fairchild had promoted in America. Local guides told the Fairchilds that sultans had planted the trees and ordered their lovers to eat kepel fruit because it made their bodily fluids smell like violets. They also warned outsiders that stealing the fruit would bring bad luck. Fairchild immediately went to the open market in Djokjakarta to buy some for America. (Kepel was the 67,491st seed or plant to arrive in Washington from the ends of the earth. In 2012 the plant was growing at The Kampong in Coconut Grove.) At the age of 57, in a beautiful, rundown spot far away from home, Fairchild had discovered one of the world’s most romantic fruits.

Between trips he joined Marjory Douglas on Ernest F. Coe’s early campaign to save the Florida Everglades by becoming the first president of the Tropical Everglades Park Association and writing articles about the natural glories of the swamp. “The Everglades of South Florida have a strange and to me appealing beauty,” he said during a speech on February 28, 1929. “Their charm partakes of the charm of the Pacific Islands.” With the authority of a global traveler, he insisted that the Everglades’ natural beauty was unmatched anywhere in the world.

Fairchild’s many books and articles brought attention to his accomplishments and led to the establishment of the Fairchild Tropical Botanic Garden in Coral Gables by Colonel Robert H. Montgomery, yet another wealthy philanthropist who loved nature—he collected trees, large ones—and was charmed by David Fairchild.

The project began by accident. One day in 1936 Montgomery, an accountant and business executive with a home in Florida, was playing bridge with Stanton Griffis, a New York investor and businessman. Griffis said he wanted some land near Miami, so Montgomery obligingly bought twenty-five acres for him. But Griffis backed out of the deal, leaving Montgomery with land he didn’t need. The situation gave Montgomery the opportunity to create a garden of palms. This palmetum soon expanded into the 83-acre site that is now the Fairchild Tropical Garden. The garden officially opened on March 23, 1938. Griffis became one of its first lifetime members. Montgomery and Fairchild’s love of palm trees led to Fairchild’s last big seagoing adventure.

Fairchild bought hundreds of mangosteens in the market at Penang and sent the seeds to Wilson Popenoe, who was setting up the Lancetilla Agricultural Experiment Station in Tela, Honduras.

Popenoe planted the seeds and waited. Mangosteens are difficult plants to grow, for they need the right soil and climate and, most significantly, more time than commercial growers want to give them, especially in America. However, by 1944 the orchard had produced thirty tons of David Fairchild’s favorite fruit.

By the middle of 1954, Fairchild’s own health had deteriorated. He died at home in Coconut Grove on the afternoon of August 6, 1954. He was 85.  


A Century from Now Concrete Will be Nothing But Rubble

Photo: road abandoned since 1984 in the Florida Keys

Preface. Much of the material that follows is based on Robert Courland’s 2011 book Concrete Planet, which explains why concrete is an essential part of our infrastructure. And it’s all falling apart.

After water, concrete is the most widely used substance in the world. Producing cement, a key component of concrete, is responsible for about 8% of global carbon dioxide (CO2) emissions because it depends on the high heat generated by burning coal. There are no electric, hydrogen, or other viable substitutes for fossil fuels in cement making, as explained in Chapter 9 of Life After Fossil Fuels: A Reality Check on Alternative Energy.

Related posts:

Expressways & Interstates are only designed to last for 20 years


***

Courland writes that some of our infrastructure may last even less than a century.  For example, in the ocean, concrete shows signs of decay within 50 years according to Marie Jackson at Lawrence Berkeley National Laboratory (Yang 2013).

The problem is the iron and steel rebar reinforcement inside the concrete. Cracks in the cement can be patched, but when air, moisture, and chemicals seep into reinforced concrete, the rebar rusts, expanding in diameter up to seven-fold and destroying the surrounding concrete.

Roads are by far the most vulnerable: cracks develop from freeze/thaw cycles, vibration, heavy trucks, and the salt used to melt snow, letting water in to rust and expand the rebar. Also vulnerable are bridges, airport runways, canals, parking lots and garages, and sidewalks. Many roads have a life expectancy of 20 years or less.

Buildings are vulnerable too (Boydell 2021, Lacasse 2020)

Buildings in coastal areas are especially susceptible as the chloride in salt water accelerates rusting. Rising sea levels will raise the water table and make it saltier, affecting building foundations, while salt-spray will spread further on stronger winds.

At the same time, the concrete is affected by carbonation, a process in which carbon dioxide from the air reacts with the cement to form a different compound, calcium carbonate. This lowers the pH of the concrete, making the steel even more prone to corrosion. Since the 1950s, global CO₂ levels have increased from about 300 parts per million in the atmosphere to well over 400. More CO₂ means more carbonation.
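
For reference, the underlying reaction is standard cement chemistry rather than anything specific to this article: CO₂ + Ca(OH)₂ → CaCO₃ + H₂O. The carbon dioxide consumes the alkaline calcium hydroxide in the cured cement, and it is the loss of that alkalinity that lets the rebar start to rust.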

The tragic recent collapse of an apartment building in Miami in the US may be an early warning of this process gaining speed.

Buildings have always been less vulnerable because of their protective external cladding. But they were designed to operate within a certain climate, and global warming is likely to increase the range of hot and cold temperatures, rain, snow, wind, groundwater levels, floods, chemical deposition on metals from the atmosphere, and solar and UV radiation. These stresses will cause the building envelope to deteriorate more rapidly, and warmer temperatures allow wood-eating termite populations to explode. And, as with roads, they will let water seep into the concrete and expand the rebar inside, greatly shortening buildings’ expected lifespans.

Heat expands metals, so buildings with steel frames will suffer, and soil subsidence is expected to increase with warmer temperatures and greater rainfall.

Uh-oh, nuclear reactors are vulnerable too. Corrosion will eventually destroy nuclear reactors, spent nuclear fuel pools, and nearby waste containers. (As of 2009 the only contender for a nuclear waste disposal site, after 40 years and $10 billion of studies, was Yucca Mountain, but Energy Secretary Steven Chu put it off limits to help get Harry Reid elected.)

All rocks weather, and natural disasters can crack concrete and expose the steel rebar to corrosion, shortening its lifespan. In the end, concrete structures are temporary: coal and natural gas power plants, buildings, homes, and skyscrapers; dams, levees, water mains, barges, sewage and water treatment plants and pipes, schools, subways, corn and grain silos, shipping wharves and piers, tunnels, shopping malls, swimming pools, and so on will all waste away.

What to do: Replacing these structures as energy declines will be far more difficult than maintaining them properly, so hopefully this will be a top priority when our throwaway society is no longer possible.  In a world that’s shrinking from declining energy resources, topsoil, aquifers, and minerals, it’s time to construct buildings that last and maintain the ones we have.

Fixing instead of rebuilding will also reduce CO2, since cement takes a lot of energy to produce (around 450 grams of coal per 900 grams of cement) and accounts for up to 8% of global carbon dioxide emissions per year.

Courland says that engineers and architects have known about concrete’s short lifespan for a long time, yet either refuse to admit it or don’t think it matters.  The main theme of Courland’s book is that it does matter:

1)  The lifespan of concrete is not only shorter than masonry, it “is probably less than that of wood…We have built a disposable world using a short-lived material, the manufacture of which generates millions of tons of greenhouse gases.”

2)  “Even more troubling is that all this steel-reinforced concrete that we use for building our roads, buildings, bridges, sewer pipes, and sidewalks is ultimately expendable, so we will have to keep rebuilding them every couple of generations, adding more pollution and expense for our descendants to bear.  Most of the concrete structures built at the beginning of the 20th century have begun falling apart, and most will be, or already have been, demolished”.

3) The world we have built over the last century is decaying at an alarming rate. Our infrastructure is especially terrible:

  • One in four bridges is either structurally deficient or functionally obsolete.
  • The service life of most reinforced concrete highway bridges is 50 years, and their average age is 42 years.
  • Besides our crumbling highway system, the reinforced concrete used for our water conduits, sewer pipes, water-treatment plants, and pumping stations is also disintegrating. The chemicals and bacteria in sewage make it almost as corrosive as seawater, reducing the life span of the reinforced concrete used in these systems to 50 years or less.

Perhaps the American Society of Civil Engineers (ASCE) would agree. Below is their 2017 report card for America’s infrastructure, all of which involves a lot of concrete: Aviation (D), Bridges (C+), Dams (D), Drinking Water (D), Energy (D+), Hazardous Waste (D+), Inland Waterways (D), Levees (D), Ports (C+), Public Parks (D+), Roads (D), Schools (D+), Wastewater (D+). It will cost over $3 trillion to fix all of this.

Alan Weisman, in his book “The World Without Us”, writes of places abandoned by people, such as Chernobyl. It doesn’t take long for vegetation to crack and take over buildings, roads, and other concrete structures. For example, consider what knotweed can do:

Knotweed can pierce tarmac and crack concrete foundations, causing serious damage to infrastructure, and grow up to a meter per month. In winter the underground rhizome survives and can grow as much as 14 meters long and 3 meters deep. The rhizome can even survive burial by volcanic lava and send up rock-piercing shoots when the surface cools. “A plant like that will laugh at concrete foundations,” says Mike Clough of Japanese Knotweed Solutions in Manchester, UK (Pain).

Improving Concrete

The National Institute of Standards & Technology Engineering Laboratory has a program to make better concrete; one research effort, called REACT (Reducing Early-Age Cracking Today), studies how to prevent concrete from cracking. In 2007, the National Infrastructure Improvement Act, which would have established a National Commission on the Infrastructure of the United States, passed in the Senate but failed in the House.

Researchers are now experimenting with root vegetables and recycled plastic in concrete to see whether these can make it stronger and more sustainable. Cement needs to be combined with water so that it adheres to sand and crushed rock and binds them together. However, not all cement particles become hydrated during the process; most of them essentially sit there doing nothing, which is a waste. If this hydration mechanism could be amplified, the concrete’s strength would increase significantly and less cement could be used. At last, the carrots: incorporating nanosheets made from vegetable waste improved cement hydration by acting as reservoirs that let water reach more cement particles, improving its binding ability. After hydration ends, some of these carrot nanosheets remain in the cement and strengthen its structure. But don’t hold your breath: this was done in the laboratory, and a postcarbon society will be so simplified that nanosheets won’t be possible. Perhaps research on plastic will be more successful, and there is certainly plenty of that around (Ceurstemont 2021).

Engineers are working on making better concrete. The fixes below would extend a structure’s lifespan, but only once:

  1. Using bacteria that excrete limestone to self-heal concrete, by mixing tiny capsules of these bacteria into the concrete so that they multiply when a crack breaks a capsule open. The bacteria also use up oxygen that would otherwise have corroded the steel bars. Whether this will work is unclear, since concrete is a very hostile place for bacteria due to its high alkalinity, and as the concrete cures it is likely to crush many of the microcapsules.
  2. Filling the concrete with polymer microcapsules that break open when a crack exposes them to sunlight and turn into a water-resistant solid, filling in the crack.
  3. Adding spores of bacteria that can survive for 50 years, along with food for them, so that when the concrete cracks they form a glue to fix it. This is a one-time-only fix, though.
  4. Coating rebar to protect it from rust. This special rebar takes 20 years longer to rust.

It is hard to make concrete last

Concrete tends to be made from local gravel, stone, and sand, since these are very heavy and expensive to move any distance, so the best recipe will likely vary a bit from place to place. Steel also varies in what alloys were used and how strong and corrodible it is, and asphaltic concrete varies with the crude oil source of the bitumen. It is often said that Roman concrete lasted because of the use of volcanic ash; perhaps the Romans just lucked out with good local materials. And Rome didn’t have to deal with the freeze-thaw cycle, rust from steel rebar, heavy trucks, and other modern insults. Dealing with all these local materials makes it hard to come up with a one-formula-fits-all solution to long-lasting concrete.

According to David Fridley at Lawrence Berkeley National Laboratory: Even though Roman concrete was superior to what we have now, we use concrete for far more applications than the Romans did, many of which require rebar. “Concrete has very high compressive strength, so it is the best material for foundations, arches, domes, etc. for which weight is the major concern. However, concrete has very poor tensile strength, so applications that require resistance to bending (such as a beam) require the addition of rebar, as the tensile strength of steel is quite high (but compressive strength low). The Romans didn’t use their concrete for such applications. Rebar inevitably corrodes, leading to expansion (tensile stressing), cracks, spalling, and ultimately, failure. According to an article in Nature Geoscience last fall (http://www.nature.com/ngeo/journal/v9/n12/full/ngeo2840.html), carbonation of cement is substantial, with the impact of increasing the acidity of the concrete, and thus the susceptibility of the rebar to corrosion. There’s not a rebarred concrete structure today that could last a millennium.”

Peak Energy and Concrete

This reminds me of the verses from the Talking Heads’ “Nothing But Flowers” that I can’t get out of my head:

There was a factory
Now there are mountains and rivers
There was a shopping mall
Now it’s all covered with flowers
The highways and cars
Were sacrificed for agriculture
Once there were parking lots
Now it’s a peaceful oasis
This was a Pizza Hut
Now it’s all covered with daisies
And as things fell apart
Nobody paid much attention

Why waste our remaining energy  to make concrete? At this point it seems crazy to build projects with short-term concrete we KNOW will only last for decades.  Once we stop repairing our concrete (and cement) structures, they will quickly fall apart.

Why try to rebuild our infrastructure and create vastly more greenhouse gases?

Our descendants won’t be driving much. They’ll probably wish we had converted most of the roads to farmland, though even after the cement is gone it will take centuries for the soil to recover — why not start now? Stop maintaining roads in the national forests, rural areas, and wherever else it makes sense: let them return to gravel, and jackhammer and remove the rubble while we still have the energy to do so.

De-paving and de-damming would also restore streams, fisheries, wetlands, and ecosystems for future generations.

Future generations eventually won’t have the energy to maintain, repair, or rebuild very many concrete structures in a wood-fueled civilization. Courland says it takes one cord (a 4 x 4 x 8 foot stack) of wood to make 1 cubic yard of lime.
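
A quick back-of-the-envelope comparison of those two volumes (just arithmetic on the figures quoted above) shows how lopsided the exchange is:

```python
# Volume of wood burned vs. volume of lime produced, per Courland's figure.
cord_ft3 = 4 * 4 * 8        # a cord is a 4 x 4 x 8 foot stack = 128 cubic feet
cubic_yard_ft3 = 3 ** 3     # one cubic yard = 27 cubic feet

print(f"{cord_ft3 / cubic_yard_ft3:.1f} cubic feet of wood per cubic foot of lime")
# -> about 4.7
```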

Those of you downstream from large dams might be interested to know that Courland says they are still “undergoing the curing process, thus forestalling corrosion. It will be interesting for our descendants to discover whether the tremendous weight of these dams will continue to put off the rebar’s corrosion expansion”.

Failing dams are a double tragedy, since electricity from hydro-power will be especially valuable as one of the few (reliable) energy sources in the future.

Peter Taylor, in “Long-life Concrete: how long will my concrete last?”, closes his paper with: “The need for long-lasting pavement systems is growing as budgets decrease, traffic increases, and sustainability becomes more important. Increasing complexity of concrete mixtures and the demands being placed on them means that business as usual is no longer acceptable.”

After oil decline, there will be absurd amounts of concrete rubble — what the hell are people in the future going to do with 300 billion tons of concrete? Build sheep fences? Since peak oil occurred in November of 2018, I suggest we have contests to figure out what to do with all the rubble, especially since the energy to make new concrete won’t exist.

References

Boydell R (2021) Most buildings were designed for an earlier climate – here’s what will happen as global warming accelerates. The Conversation.

Ceurstemont S (2021) Carrot cement: How root vegetables and ash could make concrete more sustainable. The EU Research & Innovation Magazine

Lacasse MA et al (2020) Durability and Climate Change—Implications for Service Life Prediction and the Maintainability of Buildings. Buildings.

Pain S (2014) How to kill knotweed: Let slip the bugs of war. NewScientist.

Yang S (2013) To improve today’s concrete, do as the Romans did. U.C. Berkeley


Microchip fabrication plants need electricity 24 x 7 for four months

Preface. I explain in both of my books, When Trucks Stop Running and Life After Fossil Fuels, why heavy-duty transportation and manufacturing can’t be electrified, as well as why the electric grid can’t stay up without natural gas to balance intermittency and provide baseload as well as long-term power for the weeks when neither solar nor wind is available. Utility-scale energy storage batteries require more elements than can be mined on planet Earth. Nor will Concentrated Solar Power, pumped hydro energy storage, or compressed air energy storage work: they don’t scale up and have too few possible locations.

Computer chip fabrication plants need to run continuously for months to accomplish the thousands of steps needed to make microchips. A half-hour power outage at Samsung’s Pyeongtaek chip plant caused losses of over $43 million (Reuters 2019). Intermittent power will kill microprocessor production when there’s no natural gas or other fossil fuels, which today function as energy storage.

Here are just a few devices that have microprocessors: televisions, VCRs, DVD players, microwaves, toasters, ovens, stoves, clothes washers, stereo systems, computers, hand-held game devices, thermostats, video game systems, alarm clocks, bread machines, dishwashers, central heating systems, washing machines, burglar alarm system, remote control TV, electric kettles, home lighting systems, refrigerators with digital temperature control, cars, boats, planes, trucks, heavy machinery, gasoline pumps, credit card processing units, traffic control devices, elevators, computer servers, most high tech medical devices, digital kiosks, security systems, surveillance systems, doors with automatic entry, thermal imaging equipment.

This is unfortunate for the Preservation of Knowledge, since so many books and journals are online only.


***

The US Energy Department recently reported that “the nation’s aging electric grid cannot keep pace with innovations in the digital information and telecommunications network … Power outages and power quality disturbances cost the economy billions of dollars annually” (DOE).  Val Jensen, a vice president at ComEd, says the current grid is “relatively dumb…the power put into the grid at the plant flows according to the law of physics through all of the wires.”

But wait — that may be a good thing.  The less dependent the electric power system is on computers, micro-controllers and processors, and SCADA, the more resilient, easy to repair, and less vulnerable to cyber attacks the power system will be.  The electric grid is already complicated enough, with 9,200 generation plants, 300,000 miles of transmission lines, and dozens of squabbling entities running it.

The Smart Grid will dramatically increase the dependency of the electric grid on microprocessors, and turn the electric system into a giant computer that will monitor itself, optimize power delivery, remotely control and automate processes, and increase communications between control centers, transformers, switches, substations, homes, and businesses.

Smart Grid devices have the potential of making the electric grid less stable: “Many of these devices must function in harsh electromagnetic environments typical of utility, industrial, and commercial locations. Due to an increasing density of electromagnetic emitters (radiated and conducted, intentional and unintentional), the new equipment must have adequate immunity to function consistently and reliably, be resilient to major disturbances, and coexist with other equipment.” (NIST)

The electric grid is  vulnerable to disruptions from drought (especially hydroelectricity), hurricanes, floods, cyberattack, terrorism, and soon rising sea level and oil shocks (oil-fueled trains and barges deliver most coal to power plants). Making the electric grid even more dependent on microprocessors than it already is will make the grid more difficult and expensive to fix, and overly-dependent on microprocessor production — the most vulnerable industry of all.

Chip fabrication can stop for weeks after a short electric power disturbance or outage, potentially ruining an entire 30-hour batch of microprocessors and manufacturing equipment. High quality electricity must be available 24 hours a day, 7 days a week. Semiconductor chips are vulnerable to even tiny power disruptions because a single mistake anywhere in the dozens to hundreds of steps renders the product useless.

Chip fabrication plants can not handle rolling blackouts 

Electric service interruption is one of the major causes of semiconductor fab losses (Global). It can take a week or more for a fabrication plant to start up again (EPRI 2003).  There can be losses of millions of dollars an hour when a chip fabrication plant shuts down (Sheppard).

Chip fabrication & Financial system Interdependency

“The semiconductor industry is widely recognized as a key driver for economic growth in its role as a multiple lever and technology enabler for the whole electronics value chain. In other words, from a worldwide base semiconductor market of $213 billion in 2004, the industry enables the generation of some $1,200 billion in electronic systems business and $5 trillion in services, representing close to 10% of the world’s GDP” (wiki semiconductor industry).

Chip fabrication & Electric Grid Interdependency

Without microprocessors or electricity, infrastructure fails and civilization collapses. Just about everything that matters — financial systems, transportation, drinking water, sewage treatment, etc — is interdependent with both electricity and  microprocessors, which are found in just about every electronic device from toasters to computers.

Low Quality Electricity

The electric power system was designed to serve analog electric loads—those without microprocessors—and is largely unable to consistently provide the level of digital quality power required by digital manufacturing assembly lines and information systems, and, soon, even our home appliances. Achieving higher power quality places an additional burden on the power system.

Electricity disturbance causes:

  • Voltage sags can result from utility transmission line faults, or at a given business from motor start-ups, defective wiring, and short circuits, which reduce voltage until a protective device kicks in.
  • Transients happen due to utility capacitor bank switching or grounding problems at the energy user.
  • Harmonics and spikes often originate at end-user sites, from non-linear loads such as variable speed motor drives, arc furnaces, and fluorescent ballasts.

Any device with a microprocessor is vulnerable to the slightest disruption of electricity. Billions of microprocessors have been incorporated into industrial sensors, home appliances, and other devices. These digital devices are highly sensitive to even the slightest disruption (an outage of a small fraction of a single cycle can disrupt performance), as well as to variations in power quality due to transients, harmonics, and voltage surges and sags.

Voltage and frequency must be maintained within narrow limits

The generation and demand for electricity must be balanced over large regions to ensure that voltage and frequency are maintained within narrow limits (usually 59.98 to 60.02 Hz). If not enough generation is available, the frequency will decrease to a value less than 60 Hz; when there is too much generation, the frequency will increase to above 60 Hz. If voltage or frequency strays too far from this range, the resulting stress can damage power systems and users’ equipment, and may cause larger system outages.
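
Here is a toy sketch of that relationship, with an entirely made-up sensitivity constant and made-up megawatt numbers, just to illustrate how a sustained generation shortfall walks the frequency out of the 59.98-60.02 Hz band:

```python
F_NOMINAL = 60.0             # Hz
BAND = (59.98, 60.02)        # normal operating band cited above

def step(freq_hz, generation_mw, load_mw, k=1e-5):
    """Nudge frequency up when generation exceeds load, down when it lags.
    k is an arbitrary illustrative constant, not a real grid parameter."""
    return freq_hz + k * (generation_mw - load_mw)

freq = F_NOMINAL
for minute in range(12):
    freq = step(freq, generation_mw=49_800, load_mw=50_000)   # 200 MW short
    print(f"minute {minute + 1}: {freq:.3f} Hz, in band: {BAND[0] <= freq <= BAND[1]}")
```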

Chip Fabrication plant shutdowns and consequences

Concern over the impact of utility power disturbances is probably the greatest in the semiconductor wafer fabrication industry. Producing complex computer chips is an extremely delicate process that blends microelectronics with chemical and mechanical systems, requiring tolerances in microns. The process can take 30 to 50 days to complete and can be totally ruined in a blink of an eye (Energy User News)

Power outages frequently cause damage to chips, which are fabricated on silicon wafers about the size of dinner plates that may take eight to 12 weeks to process. Wafers that are inside processing machines at the time of an outage are often ruined. In some cases, a shutdown of the air-purifying and conditioning system that keeps air in a chip factory free of dust also could contaminate chips.

Here are a few examples:

2007. Samsung, the world’s biggest maker of memory chips, shut down 6 of its chip production lines after a power cut at its Kiheung plant, near Seoul, costing the company $43.4 million. A problem at a switchboard at a transformer substation caused the power outage. Some analysts had said the outage could wipe out as much as a month’s worth of Samsung’s total production of NAND flash memory chips, which are widely used for data storage in portable electronics. Chips that were already in the fabrication process when the outage hit were discarded, and ramping back up to the previous production level could take some time (So-eui).

2010. A drop in voltage caused a .07-second power disruption at a Toshiba NAND memory chip plant in Japan which could raise prices on many devices, such as smartphones, tablet PCs and digital music players. NAND flash chips are fabricated on silicon wafers about the size of dinner plates and can take between 8 to 12 weeks to process. If the power goes out at any point in that time frame, the entire batch can be destroyed (Clark).

2011. The earthquake and tsunami in Japan took out nearly 70% of global production of semiconductor silicon wafers, the platform computer chips are built on (Dobosz). Production of the microchips that control car electronic operations was stopped at 10 Renesas factories, where about 40% of these microprocessors are made, mainly due to power outages rather than physical damage. Renesas didn’t expect to get back to pre-quake production levels for 4 months (SupplyChain Digital).

2011. The massive monsoon flooding of Thailand took out 25% of the world’s hard disk drive production (Thailand is the world’s #2 producer). One company, Western Digital, was out for 6 weeks and lost about $250 million.

2011. Due to the Fukushima nuclear power plant disaster, Japan had to institute rolling outages, which shut down chip manufacturing.  Even a 3-hour outage can result in a stopped production line that can’t be restarted for a week or so. Analysts estimated this could cost $3.7 billion in losses (SIRIJ).

2013. DRAM supplies from Hynix’s fabrication plant in Wuxi, China, aren’t expected to return to normal until next year after a fire severely damaged that facility, according to a new report. In the meantime, DRAM prices are up 35% since the fire, as looming supply constraints prevail and there appears to be no rush by DRAM makers to sign new contracts, according to the report from analysts at investment bank Piper Jaffray. The fire, which blazed for almost two hours on September 4th, damaged equipment used for making PC DRAM, which sent memory prices skyrocketing. Hynix said it would make every effort to ramp up its Wuxi-based fab operations to return to normal DRAM production by this November, a prediction Piper Jaffray contested (Mearian).

Emergency and Backup Power

A supply of fluctuation-free electricity is critical. Chip fabrication plants and server farms must balance the expense of building independent electricity resources with the cost of equipment failures and network crashes caused by unreliable power. Hewlett-Packard has estimated that a 15-minute outage at a chip fabrication plant cost the company $30 million, about half the plant’s power budget for a year. Backup systems are so expensive that a survey of 48 companies found only a few with backup power sources: three used generators and one used solar (Hordeski).

It’s too expensive to operate a separate power plant just to generate your own electricity. Fab plants use up to 60 megawatts, so putting a natural gas or coal power plant onsite would cost somewhere between $100 and $400 million.
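
As a rough cross-check of that range, assume overnight construction costs of very roughly $2,000 to $6,500 per kilowatt for a gas or coal plant (my assumed ballpark, not a figure from the sources cited here):

```python
fab_demand_kw = 60_000                               # "up to 60 megawatts"
cost_per_kw_low, cost_per_kw_high = 2_000, 6_500     # assumed $/kW overnight cost

print(f"${fab_demand_kw * cost_per_kw_low / 1e6:.0f}M "
      f"to ${fab_demand_kw * cost_per_kw_high / 1e6:.0f}M")
# -> roughly $120M to $390M, in line with the $100-400 million quoted above
```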

Microprocessors and electricity are coupled

Microprocessors can’t be made if the electric grid is down. The electric grid can’t function without microprocessors — about 10% of total electrical demand in America is controlled by microprocessors, and by 2020 this level was expected to reach 30% or more (EPRI).


References

Clark, D. Dec 10, 2010. Power Blip Jolts Supply of Gadget Chips. Wall Street Journal.

Dobosz, J. 15 March 2011. Japan Outages Serve Up Semiconductor Bargains On A Platter. Forbes.

DOE. July 2003. Grid 2030. A National Vision for Electricity’s second 100 years. United States Department of Energy.

EPRI (Electric Power Research Institute). 2003. Electricity Technology Roadmap: Meeting the Critical Challenges of the 21st Century: Summary and Synthesis. Palo Alto, Calif.: EPRI.

Energy User News Vol 26 #1. Jan 2001. Semiconductor Wafer Fab Plant Gets Premium Utility Power

Global, FM. 31 Oct 2003. Safeguarding the Semiconductor Fabrication Facility. Controlled Environments.

Hordeski, Michael F. 2005. Emergency and Backup Power Sources: Preparing for Blackouts and Brownouts. CRC press.

Mearian, L. 30 Sep 2013. DRAM prices up 35% since China fab plant fire. Computerworld.com

(NIST) National Institute of Standards and Technology.  24 Jan 2014. Electromagnetic Compatibility of Smart Grid Devices and Systems. U.S. Department of Commerce.

Reuters (2019) Samsung electronics chip output at South Korea plant partly halted due to short blackout. https://www.reuters.com/article/us-samsung-elec-plant/samsung-electronics-chip-output-at-south-korea-plant-partly-halted-due-to-short-blackout-idUSKBN1Z01K3. Accessed 2 Nov 2020

Sheppard, J. Oct 14, 2003. Reducing Risk with Enterprise Energy Management: Observations After the Biggest Blackout in US History. IntelligentUtility.com

SIRIJ. April 6, 2011. Rolling power outages make chip fabrication impossible. Semiconportal.com

So-eui, R. Aug 4, 2007. Samsung chip lines fully working. Reuters.

SupplyChain Digital. 11 May 2011. Renesas to renew operations June 1. supplychaindigital.com

Related articles

Electric Grid


The orbiting solar power fantasy

Preface. This 2020 article “Solar Power Beamed Down To Earth From Space Moves Forward” will leave you all warm, fuzzy, and unworried about the future. The Scientists Will Come Up With Something. 

But that’s because you know little to nothing about orbital solar power. It’s hard to be a bullshit detector without knowing something about a topic, but you can still notice missing information. How much will all this stuff weigh? How much will it cost to launch into space? How often will maintenance flights need to be made? And if the Air Force and Northrop Grumman are building this solar contraption, it might occur to you that this is more likely to be a weapon than an orbital solar power solution.

A genuine orbiting solar power generator meant to provide electricity could turn into a weapon if the computer hiccuped and allowed the down link beam to drift off target by a few degrees, slewing the beam across the countryside and barbecuing whatever was in its path with a few gigawatts of microwave radiation.  


***

In theory, orbiting solar arrays could make electricity, convert it to microwaves and then beam that energy to a ground antenna where it would be converted back to electricity. But to make 10 trillion watts of power would require about 660 space solar power arrays, each about the size of Manhattan, in orbit about 22,000 miles above the Earth (Hoffert et al 2002).

So how are you going to get these gigantic solar power satellites into space? Normile (2001) estimates that it would take 1,000 space shuttle payloads to deliver the necessary material, an order of magnitude more than the number of missions needed to construct the international space station. The average space shuttle mission cost $450 million (NASA 2017). Without breakthroughs in launching technology, space solar power “would be impractical and uneconomical for the generation of terrestrial base load power due to the high cost and mass of the components and construction.”
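
Multiplying out the figures quoted in the last two paragraphs gives a feel for the scale:

```python
total_power_w = 10e12        # 10 trillion watts (Hoffert et al 2002)
arrays = 660
print(f"{total_power_w / arrays / 1e9:.0f} GW per Manhattan-sized array")   # ~15 GW each

shuttle_flights = 1_000      # Normile's estimate of payloads needed
cost_per_flight = 450e6      # average shuttle mission cost
print(f"${shuttle_flights * cost_per_flight / 1e9:.0f} billion just in launch costs")  # $450 billion
```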

Nor can we be sure that there will be breakthrough advances in a number of technologies according to Richard Schwartz, an electrical engineer and dean of engineering at Purdue University in West Lafayette, Indiana (NRC 2001).

And we can’t run wires from Earth’s surface to an orbiting satellite, so the solar energy would have to be converted into electric energy on board to power a microwave transmitter or laser emitter and focus its beam toward a collector on Earth.

And can you imagine how often astronauts would have to go into space to fix and maintain hundreds of these objects, how much fossil energy that would take at a time when fossil energy is declining?

And astronauts will have to go up there to replace the solar panels, because space is hostile and the panels will suffer about eight times as much damage and degradation as they do on Earth.

These truly gigantic orbital arrays could be hit by space junk and create even more space junk, taking out other satellites or orbital solar stations and their microwave emissions would probably interfere with the functioning of other satellites.

Meanwhile, shell out even more money for the enormous receiving stations on the ground.

Power beaming from geostationary orbit by microwaves  requires very large ‘optical aperture’ sizes, including a 1-km diameter transmitting antenna in outer space, and a 10 km diameter receiving rectenna on earth, for a microwave beam at 2.45 GHz. These sizes can be somewhat decreased by using shorter wavelengths, although they have increased atmospheric absorption and even potential beam blockage by rain or water droplets. Because of the thinned array curse, it is not possible to make a narrower beam by combining the beams of several smaller satellites. The large size of the transmitting and receiving antennas means that the minimum practical power level for an SPS will necessarily be high; small SPS systems will be possible, but uneconomic (Wiki 2020).
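
The antenna sizes quoted above follow from ordinary diffraction. A rough sketch using the standard far-field beam-spread estimate (spot diameter of roughly 2.44 x wavelength x distance / transmitter aperture, the usual first-null figure for a circular aperture):

```python
c = 3e8                      # speed of light, m/s
f = 2.45e9                   # Hz, the microwave frequency cited above
wavelength = c / f           # ~0.12 m
distance = 35_786e3          # m, geostationary altitude
tx_aperture = 1_000          # m, the 1 km transmitting antenna

spot = 2.44 * wavelength * distance / tx_aperture
print(f"ground spot about {spot / 1e3:.0f} km across")   # ~11 km, hence the ~10 km rectenna
```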

To give an idea of the scale of the problem, assuming a solar panel mass of 20 kg per kilowatt (without considering the mass of the supporting structure, antenna, or any significant mass reduction of any focusing mirrors) a 4 GW power station would weigh about 80,000 metric tons, all of which would, in current circumstances, be launched from the Earth. Very lightweight designs could likely achieve 1 kg/kW, meaning 4,000 metric tons for the solar panels for the same 4 GW capacity station. This would be the equivalent of between 40 and 150 heavy-lift launch vehicle (HLLV) launches to send the material to low earth orbit, where it would likely be converted into subassembly solar arrays, which then could use high-efficiency ion-engine style rockets to (slowly) reach GEO (Geostationary orbit).
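
The same arithmetic, spelled out, with heavy-lift payload capacities to low earth orbit assumed at roughly 27 to 100 tonnes per launch (my assumption, chosen to show where the 40-150 launch range comes from):

```python
station_kw = 4e6                      # a 4 GW station
heavy_t = station_kw * 20 / 1_000     # 20 kg/kW -> 80,000 tonnes
light_t = station_kw * 1 / 1_000      #  1 kg/kW ->  4,000 tonnes
print(heavy_t, light_t)

for payload_t in (100, 27):           # assumed tonnes to LEO per heavy-lift launch
    print(f"{payload_t} t/launch: {light_t / payload_t:.0f} launches for the lightweight design")
# -> about 40 to 150 launches, matching the range quoted above
```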

The cost to build orbital solar is, well, out of this world.

With an estimated serial launch cost for shuttle-based HLLVs of $500 million to $800 million, and launch costs for alternative HLLVs at $78 million, total launch costs would range between $11 billion (low cost HLLV, low weight panels) and $320 billion (‘expensive’ HLLV, heavier panels). To these costs must be added the environmental impact of heavy space launch emissions, if such costs are to be used in comparison to earth-based energy production. For comparison, the direct cost of a new coal or nuclear power plant ranges from $3 billion to $6 billion per GW (not including the full cost to the environment from CO2 emissions or storage of spent nuclear fuel, respectively). Another example: the Apollo missions to the Moon cost a grand total of $24 billion in 1970s dollars, which, taking inflation into account, would be about $140 billion today, more expensive than the construction of the International Space Station.

SBSP costs might be reduced if a means of putting the materials into orbit were developed that did not rely on rockets. Some possible technologies include ground launch systems such as Star Tram, mass drivers, or launch loops, which would launch using electrical power, or the geosynchronous orbit space elevator. However, these require technology that is yet to be developed. Project Orion (nuclear propulsion) is a low-cost launch option that could be implemented without major technological advances, but it would result in the release of nuclear fallout.

Patterson (2003) wrote: “It’s hard to calculate the cost per pound of delivery to geosynchronous orbit (GSO), but Futron Corporation is paid by the companies that actually launch satellites to make estimates (www.futron.com). In 2003, Futron estimated GSO launch vehicle cost per pound at $17,000 (Western) and $7,000 (non-Western). In 2000, the costs were around $12,000 per pound. Low Earth Orbit (LEO) is much cheaper. At $7,000 per pound, it would cost $42 billion to launch a 3,055-ton satellite into geosynchronous orbit, and another $4.2 billion for every refueling run. These costs are for UNMANNED objects.”
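
Checking Patterson’s arithmetic, and assuming he meant 2,000 lb short tons:

```python
satellite_tons = 3_055
pounds = satellite_tons * 2_000       # assuming short tons
cost = pounds * 7_000                 # $7,000/lb to GSO on non-Western launchers (2003)
print(f"${cost / 1e9:.1f} billion")   # ~$42.8 billion, matching the quoted $42 billion
```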

References

Hoffert MI, Caldeira K, Benford G, et al (2002) Advanced Technology Paths to Global Climate Stability: Energy for a Greenhouse Planet. Science 298: 981-987.

NASA (2017) Space Shuttle and International Space Station. National Aeronautics and Space Administration.

Normile D (2001) SPACE SOLAR POWER. Japan Looks for Bright Answers to Energy Needs. Science 294: 1273

NRC (2001) Laying the Foundation for Space Solar Power: An Assessment of NASA’s Space Solar Power Investment Strategy. Washington, DC:
The National Academies Press. https://doi.org/10.17226/10202.

Patterson R (14 Jan 2003) Energyresources message 28631

Wiki (2020) Space-based solar power. https://en.wikipedia.org/wiki/Space-based_solar_power
