Book review of Underbug: an obsessive tale of termites and technology

Preface. I read this book mainly to find out where “grassoline” stood. Ten years ago scientists thought we could recreate the termites’ gut-biota system for digesting biomass and use it to make biofuels. But that now appears to be far in the future, if it is possible at all: the microbial system in termite guts is simply too difficult, if not impossible, to scale up in a giant vat.

An unexpected pleasure was how very funny Margonelli is.  This is a delightful book, highly recommended.  As usual my notes below from the Kindle are what interested me, rather than the best parts of the book.  So read it!

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

Lisa Margonelli. 2018. Underbug: An Obsessive Tale of Termites and Technology. Scientific American.

Meta levels of understanding of the termite superorganism

When early European naturalists looked into beehives and termite mounds, they saw the monarchies they came from—with workers, soldiers, and kings and queens. It was misleading, he said, and kept us from really understanding what was going on with termites at all. When I got home, I looked this up. Eugene was correct. Peering into beehives in the 1500s, naturalists literally saw Europe and its political structures in miniature. For two hundred years, they generally didn’t describe queen bees as “queens”—that is, females, because they believed only a male king could be head of such a magnificent insect state. It wasn’t until the 1670s that it became known that queens were female.

Consider the report that Henry Smeathman gave to the Royal Society in 1781 about the glories of termite civilization: The mound is England in miniature, with “laborers,” “soldiers,” and “the nobility or gentry.” He noted that bug nobility were worthless: they couldn’t feed themselves, work, or fight, but had to be supported by the others. He saw this as a justification for aristocracy—in insects as in humans—“and nature has so ordered it.”

 

The great danger of seeing social insects anthropomorphically is that it obscures their true bugginess. In the 1970s and ’80s, when the ant scientist Deborah Gordon began studying massive ant colonies in the American Southwest, scientists described the ant colony as “a factory with assembly-line workers, each performing a single task over and over.” Gordon felt the factory model clouded what she actually saw in her colonies—a tremendous variation in the tasks that ants were doing. Rather than having intrinsic task assignments, she saw that ants changed their behavior based on clues they got from the environment and one another. Gordon suggested that we should stop thinking of ants as factory workers and instead think of them as “the firing patterns of neurons in the brain,” where simple environmental information gives cues that make the individuals work for the whole, without central regulation.

The role of joy in social organisms is not something we have a metric for, so it’s not anything that modern biology entertains seriously. Robots and virtual termites have rules, but the rules of socialness—these urges and possibly even intentions—are unknowable to us. Watching this party, we find it hard to separate the building imperative (that possible stigmergy) from the termites’ strange sticky social nature. Maybe they build the mound because it’s fun to do it together.  Maybe they transfer water because they’re thirsty and moving the stuff around feels fun and necessary. And on this feeling of fun, perhaps, entire ecosystems are organized.

The field of complex systems is still in the stage of gathering insights into biology while waiting for someone to appear with a unifying theory. Come up with a viable theory for the way termites build and it could change the way computer networks run, how wars are fought, and how disasters get responded to. The emergent equivalent of thermodynamics could upend the world.

Should we worry that we’re just modeling our own assumptions? Are the termites random, noisy, or something else? The very concept of the black box might be a kind of cognitive trap that was preventing the scientists from seeing what the termites were, at some level, doing.

If termites were actually factory workers, most of them would be fired. During one experiment, it was clear that only 5 of 25 termites were building. In another dish two termites did the building while four helped a little and the remaining 19 just ran around. Kirstin said that when she started tracking what each termite was doing—not just where it was going—she discovered that even though some ran around a lot, only a few made progress on the actual building. Termites seemed to do whatever they felt like: dig, take up soil and clean the dish, sit around.

Kirstin’s data revealed a world that was more intuitive—more gooey, more individual, and less robotic—than the more mechanistic views of termites that humans had been able to imagine. It was as if scientists had forced themselves to obey a set of rules about how to think about what termites do—their own internal algorithm of possibility—and that led them astray.

In one study, scientists expected termites to drop their dirt balls on old mound soil, but they also seemed to pick up balls from that soil. For Paul, this was a eureka moment. If the old mound soil contained a cement pheromone, then it should work like a key fitting into a lock, releasing exactly one behavior. But once you could see individual termites in the video, you could see that they did all sorts of things when they encountered the mound soil containing its possible pheromones. In fact, whatever they were doing, they changed it. If they were carrying, they dropped. If they were empty, they picked up. “It causes everything!” Paul explained. Technically, it appeared that the mound soil contained an arrestant that signaled the termites to finish up whatever they were doing. Paul called it a “Shalom” chemical, appropriate for any and all occasions, its meaning dependent on the context.

The cue for building—like the sound of running water for beavers—was digging itself. The concept of stigmergy, in other words, might be upside down: instead of being driven by dirt balls that inspired further dirt balls, it was driven by digging. When a few termite individuals started digging, others would join them, shoving in—as we’d seen—like pigs at a trough.

Paul figured out the termites’ rules for tunneling. If one termite was in a tunnel, it went straight. If so many termites were in the tunnel that they piled up, some would start digging a branch off to the side. So the pressure of termites in the tunnel influenced how much it branched.
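
Paul’s two rules are simple enough to sketch in code. The toy simulation below is my own illustration, not anything from the book or the researchers: the crowding threshold and the way a crowded tunnel sheds diggers into a new branch are made-up parameters, chosen only to show how local crowding by itself can control how much a tunnel network branches.

```python
# Toy sketch of the two tunneling rules described above (illustrative only):
#   1. A termite with room to work keeps digging its tunnel straight ahead.
#   2. When termites pile up in a tunnel, some of them start a side branch.
CROWD_THRESHOLD = 3  # assumed number of termites that counts as "piled up"

def simulate_tunnels(num_termites=12, steps=40):
    # Each tunnel tracks its length and how many termites are working in it.
    tunnels = [{"length": 1, "occupants": num_termites}]
    for _ in range(steps):
        for tunnel in list(tunnels):           # iterate over a snapshot
            if tunnel["occupants"] == 0:
                continue
            if tunnel["occupants"] < CROWD_THRESHOLD:
                tunnel["length"] += 1          # rule 1: keep digging straight
            else:
                movers = tunnel["occupants"] // 2
                tunnel["occupants"] -= movers  # rule 2: crowding spawns a branch
                tunnels.append({"length": 1, "occupants": movers})
    return tunnels

if __name__ == "__main__":
    tunnels = simulate_tunnels()
    print(len(tunnels), "tunnels; lengths:", [t["length"] for t in tunnels])
```

Run it with more termites and the network branches more; run it with only one or two and you get a single straight tunnel, which is the relationship the passage describes.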

Scott had come to think that the mounds themselves were a physical memory, with their mixture of shapes and smells and templates of gases, that allowed one generation of termites to pass their gains on to the next the way we hand down machines and books. This concept made them, in a sense, the architects of their own codes—in the balls of mud and spit of the mound—rather than robots who merely enacted the code written in their genes.

The symbiotic relationship between Macrotermes and the fungus is tight. Prejudiced by our human sense of a hierarchy of the animate termites over inanimate mushrooms, we’d be inclined to believe that the termites control the fungus. But the fungus is physically much larger than the termites in size and energy production: Scott estimates that its metabolism is about eight times bigger than that of the termites in the mound. “I like to tell people that this is not a termite-built structure; it’s a fungus-built structure,” he says, chuckling. It is possible that the fungus has kidnapped the termites. It’s even possible that the fungus has put out a template of chemical smells that stimulates the termites to build the mound itself.

Even though we assume the termite is in charge of the guts, it’s completely possible that the guts are in charge of the termite.  Perhaps, he added, the termite is just a delivery vehicle for the contents of the guts!  Maybe our gut microbes are in charge of us—demanding caffeine, say, or salt—fooling us into thinking we have free will and would like a cup of coffee.

Without the need to reproduce, or to venture far aboveground, both worker and soldier termites lost things they didn’t need: eyes, wings, and big, tough exoskeletons.  Most of the termites are eyeless and wingless, but the fertile termites who leave the mound on this night have eyes.

Called “alates,” these male and female termites capable of reproduction are like fragile balsa wood glider planes: just sturdy enough to cruise briefly before crash-landing their payloads of genes. Alates are scrumptiously fatty, and reportedly have a nutty flavor, so what starts as a confetti shower of gametes turns into a scrum of birds, lizards, aardwolves, and sometimes humans trying to gobble them up, with the result that hardly any survive this nuptial flight. It’s possible that catching and eating these termites gave our australopithecine ancestors a booster shot of fat, proteins, and micronutrients that helped to feed their growing brains, leading eventually to our current human situation. This strange fact—that termites themselves may be partly responsible for the brains with which we try to study them—is typical of the weird dual vision of studying termites.

Termites suck water into their own bodies, sometimes taking up a quarter or even half of their body weight in water. They also grab soupy mud balls and move them to drier parts of the mound. For every pound of dirt the termites move, they also carry nine pounds of water; since the termites in a single mound move roughly 364 pounds of dirt a year, that works out to about thirty-three hundred pounds of water moved per mound per year.

GRASSOHOL

Because termites are famously good at eating wood, the genes in their guts were attractive to government labs trying to turn wood and grass into fuel: “grassoline.”

Termite guts are a molecular treasure chest: 90% of the organisms in them are found nowhere else on Earth.

The geneticists didn’t just want the microbes’ DNA, they also wanted the molecules of RNA, which could tell them which parts of the genetic code were in use at the precise moment the termites took their tumble into the thermos. Perhaps by seeing exactly how termites break down wood, we’d be able to do it, too.

The problem was that these evolving wood-eating cockroaches (the termites’ ancestors) regularly molted their intestines, which cleaned the microbes right out. So they started to exchange what entomologists politely call “woodshake”—a slurry of feces, microbes, and wood chips—among themselves, mouth to mouth and mouth to butt. After they pooled their digestion, it was a quick trip to constant communal living.

The termite itself is another shell company for a consortium of five hundred species of symbiotic microbes, all cooperating to digest wood for the mutual benefit of the Many.

Even better, some of these microbes are themselves conglomerations of several creatures acting as one.

Phil suspected the spirochetes in a termite’s guts had some kind of special enzyme capable of cutting the wall. If the lab could find these cutting enzymes and identify their genes, they might be helpful for the greater project of making grassoline.

When Phil and thirty-eight other researchers first did genetic analysis of the Costa Rican termites’ guts in 2007, they found 71 million base pairs, or twinned molecules of DNA, which they sorted into approximately 80,000 genes, and among those—using computers—they identified 1,267 enzymes that might work to digest wood.

Press releases suggested that once the termite’s gut was decoded, we’d soon be inserting these codes into tame laboratory bacteria to produce enzymes and start digesting wood on a grand scale.

But the termite, it turned out, was a hard bug to crack…much more than an exceptionally elegant machine, a natural blueprint for a factory, or a source of code to “boot up” a bioreactor.

The details of how the termite’s crazy consortium of microbes accomplished wood eating are a mystery, difficult to re-create in the lab. “The joke is that by the time you’re done you’ll have a termite, and you might as well go and hook your car to a bunch of termites.”

Here’s what will happen when termites finally get around to eating this book: one will use the clippers on the end of its mandibles to grab a mouthful about the size of a period. It’ll push that into its mouth, which resembles a grinder, with its hand-like palps. From there the shredded paper will make its way into the gut, which is about an eighth of an inch long and the width of a hair. The first stop in the gut is a gizzard, where the bite will be vigorously mashed with saliva containing enzymes to grab any free sugars, which are quickly absorbed by our termite. Next, this paper bite will journey through an alkaline tenderization chamber for a nice soak in the termite’s version of drain cleaner. After that, depending upon which kind of termite it is, the bit of papier-mâché will proceed through an elaborate enteric valve—a gorgeous gatekeeper made of many little fingers brushing the particle into the cavernous nightclub of the hindgut, named P3.

Microbially speaking, termites are a freak show: their guts hold as many as 1,400 different species of bacteria.

These microbes release enzymes that can unzip the cellulose and hemicellulose in our paper particle, producing sugars.

All around are masses of other microbes waiting to grab the sugars and process them into hydrogen and methane. Along the way they may synthesize some nitrogen compounds, too.

Microbes arrange themselves in neighborhoods where sympathetic creatures can eat one another’s garbage. Those who are the most friendly with oxygen sit on the edges of the gut, while those who can tolerate none hang out in the middle. All termites have bacteria; but some so-called higher termites, like the fungus-growing Macrotermes of Namibia, have only bacteria. By contrast, the guts of so-called lower termites host bacteria as well as exotic creatures called “protists”—single-celled organisms that are neither animal nor plant nor fungus. Protists are relatively huge and quite weird.

If you were a piece of paper the size of a bacterium, say, and just entering the termite’s third gut, you would be greeted by a giant swirling thing, 300 times your size, approaching like a cruise ship coming in to a dock, so big you wouldn’t have any idea how big it really was. That would be Trichonympha, the most common of the termite protists. It has a smooth, round cap, like the tip of a badminton birdie, and an enormous whirling hairball, made of thousands of flagella over its barrel-shaped body. Opposite the tip, buried under all the waving flagella, is a mouth, or maybe more accurately a portal, where Trichonympha draws in wood chips for digestion. That mouth, much like yours, is covered by little jujube-shaped bacteria—a nano-environment within a microenvironment. But you would have no time to think of these wondrous worlds within worlds because the Trichonympha’s great swirls would swirl you in, ever closer to that portal, where you would finally be ripped molecule from molecule in this gut within the gut.

Some of the “fringe” surrounding the protist is actually made of other symbiotic creatures.

For most of the history of microbiology, the vast majority of microbes have been untested and unknown because fewer than 1 % of them can be grown alone in a petri dish.

Ninety percent of the microbes were found nowhere else on Earth. Half of the genes in the gut were unknown.   “Any single one of those forty thousand unknown genes could be a whole PhD for someone.”

“It’s a neat little system,” he enthused. “You’ve got all of these symbiotic microbes evolving with the termite hosts. It’s a simple enough system, but there’s an amazing complexity of hosts and dietary habits.”

Did the termites get these microbes from eating dinosaur poo and coevolve with their passengers over the epochs? Or did they pick new microbes up whenever they ate a new food?

The termite’s gut is a black box for which we increasingly know the parts, and the results, but we don’t know exactly how they work. Freezing them fast preserves not only DNA—the stable strings of genetic material—but also the unstable RNA, which can reveal what genes were actually in play at the moment of death. Perhaps if we knew what termites were actually doing in their guts, rather than what they were capable of, we could understand the black box.

All termites use symbiotic collectives of bacteria and other microbes to digest cellulose for them, but Macrotermes outsource the major work to a fungus. In some senses the fungus functions as a stomach. Under the mound and around the nest sit hundreds of little rooms, each containing fungus comb. This comb is made of millions of mouthfuls of chewed dry grass, excreted as pseudofeces and carefully assembled into a maze.

Workers scour the landscape for dry grass, quickly run it through their guts, then place and inoculate each ball to suit the fungus’s picky temperament, tend the comb, and snarfle the fungus and its sugars before distributing the goodies to the rest of the family. Then the workers run off to gather more grass for the fungus.

It was clear that the termite was no longer in the running to provide genes for grassoline—the bug was just too complex—but it had become a sort of mascot, biological proof that those cellulose sugar chains could, in fact, be cracked.

For the biofuel project, the lab had turned its attention to wood-eating microbes in compost and in shipworms. But the termite remained a big shining example, an inspiration, and so Phil’s team continued to comb termite guts in search of ideas, microbial strategies, and systems.

In 2005 researchers at the Department of Energy had estimated that if the United States went totally termite we could harvest trees, crop residue (such as cornstalks), and high-energy grasses, and engineer microbes to turn them into sugars. Then those sugars could be fermented to make nearly 60 billion gallons of ethanol—a potential gasoline substitute—a year by 2030. In 2016 that estimate was updated to 100 billion gallons. Theoretically—and all of this was very theoretical—that would equal most of the petroleum we used for driving in 2015, while reducing greenhouse gas emissions from driving by as much as 86 %.

JBEI’s explicit goal was to brew biofuel at a price that could eventually compete with gasoline. To accomplish that, the lab needed to engineer biological processes so that they are predictable and can scale from the small flasks in lab experiments to vast industrial tank farms. Teams of researchers focused on understanding and manipulating the plants themselves, understanding and increasing the processes that can break down cellulose, and designing microbes that can synthesize fuels from the sugars.

When it was finally extracted, the protein—it was just a squidge of stuff now, barely visible—was sent off to the crew who worked with mass spectrometry. They would hit the proteins with an electron beam to determine the identity of the amino acids and then use that to make educated guesses about the likely shape and identity of the protein. The thought of this made John philosophical. “We really don’t understand how proteins work. We know that they’re made of amino acids but we don’t understand how they fold. They have a pocket here and a pocket there.” A protein may behave one way in acid and another in water.

The metagenomic view shows that termites have guts that do certain jobs—think of it as a spec sheet for eating wood: soften the cellulose, chop up the sugar chains, ferment the sugars, and so on. All of the microbe species who’ve evolved for the party in the termite’s gut end up playing along with this essential script. And in doing that, they lose genes that they’d have needed to survive independently outside the gut and gain genes that allow them to be more helpful inside the gut. Finally, they are capable only of living in this one termite gut environment.   [my comment: Huge problem to scale up ]

Phil got the group to flip between databases to get the genomic data from a single spirochete, which strangely lacked its usual kit of genes for mobility and tracking toward chemicals. “What’s going on? This is totally atypical for a spirochete!” said Phil. Moving and sniffing for chemicals are defining characteristics of spirochetes. What is a spirochete that can’t move or smell? It’s an absurdity, and yet it is right there, in the data. Shaomei wondered if the spirochete’s genome got smaller and lost its genes for defense and mobility as the spirochete spent more evolutionary time in the termite’s gut. Phil hunched inward in front of his computer and then looked up to announce that this particular spirochete is living inside a protist—like Trichonympha—which lived inside the termite.* Protected inside two different organisms, apparently it no longer needs to move or defend, and so has lost those genes. Once you go symbiotic, you can never go back. It’s here, in this stuffy room, that I can see for the first time what it means that the termite’s gut is another composite animal made of millions of bacteria, who, like their termite hosts, have traded away eyes and wings for the advantages of living in numbers.

While competition has been part of the evolutionary process, at the microbial level it increasingly appears that cells compete to cooperate in communities—fitting in and helping out is essential to their survival.

Contrary to the orthodox evolutionary view that altruism is exceptional and requires special explanation … the norm among organisms is a disposition to act for the benefit of other organisms or cells. To get ahead they’ve got to get along. Codependent forevermore. Our old friend the superorganism has shown up here too, though sometimes it’s called a meta-organism.

Termites’ guts generally contain lots of bacterial genes for fixing nitrogen. The biggest difference between the wood-eating Nasutitermes from Rudi’s shower stall and the Amitermes who lived in an Arizona cow pie was that the wood eaters have tons of genes for fixing nitrogen while the cow-pie eaters don’t. This isn’t surprising: wood is a nitrogen-poor food, so the wood eaters would need ways to fix it for themselves. Cow dung, on the other hand, is rich in the stuff (because the cow’s stomach microbes have already gone to the trouble of fixing the nitrogen). So somehow, termites’ food sources may influence the capabilities of their guts. But how?

If we only looked at genomes, he said, we wouldn’t know that crows can use tools. We might not even realize they can fly! But with microbes, genomes are especially misleading because they don’t reveal two important things: behavior and structure. Trichomonas termopsidis, for example, processes wood in termites’ guts, but in a vagina its close relative Trichomonas vaginalis is an STD, eating vaginal secretions. The genomes of the two are similar enough that it would be difficult for scientists to understand how differently they act in the world.

Termite gut microbes coevolved with their termite carriers over time, swapping functions among the different organisms. The termites didn’t pick up new organisms; the termite and the gut microbes changed together. When their diets changed, it appeared that the termites could rebalance their gut portfolios without changing the list of inhabitants, only their relative numbers.

So the answer to the Rosetta stone question was that termites and microbes lived in deep symbiosis over millions of years, becoming inseparable. The amazingly wide array of genes doing similar things in the gut seemed to allow the partners to adjust to whatever the world threw at them.

While it was interesting to know how the termites and their bugs evolved, it was still an open question whether a system so tightly bound together, so self-regulating, could be disassembled to reliably produce products such as biofuels. The ability to swap genes and change behaviors has been key to the survival of the termites and their symbiotic fellow travelers, but they remain more like superorganisms (with all their cultish connotations) than gene-based computers.

The idea of the termite as a model for biofuels was pretty much dead, at least at this lab. Still, I wondered how scientists working on biofuels imagined we’d get the capabilities of termites—not to mention unlimited growth and solutions—from clots of microbes in stainless steel tanks.

As fire is a violent chemical process, metabolism is life’s very low flame. “We’re all basically burning very slowly.” When I asked to see what he meant, he showed me a flowchart of how the termite’s gut breaks down wood that looked like a map of the Tokyo subway system. Near the center was a loop with hundreds of subsidiary reactions hanging off the sides like intersecting train lines on the Yamanote Line. Among those interconnecting lines were the two different nitrogen cycles Phil and his crew came across during their jazz sessions, but they were just two tiny nodes in a vast network.

When I asked him what he thought about termites, he said it would take 20 years to understand them, and for now he needed to work on just a single organism—a nice tame E. coli, say, or a yeast.

The second thing that struck me was something that seemed ironic at first: we once worked mightily to figure out how to use natural gas to make fertilizer to grow crops, and now we’re laboring to do the opposite—turn plants into replacements for fossil fuels.

Nested inside the Mastotermes gut, though, is another amazing thing—a legendary protist named Mixotricha paradoxa: “the paradoxical being with mixed-up hairs.” Under a low-power microscope, M. paradoxa looks like a grenade with a bad case of shag carpet, and it was discovered and named by a Jean L. Sutherland in 1933. Under interrogation, however, M. paradoxa turns out to be five entirely different creatures, with five separate genomes, collaborating as one, like a bunch of kids crowded into a donkey suit.

She’d already found 32 new protist chimeras—each with multiple genomes—in Australian termite guts. Like Trichonympha, some of these protists were 100 times bigger than the bacteria in the termite’s guts.

The peculiar environmental conditions of the termite gut supported the evolution of their structure, behavior, and symbiotic relationships, many times over, in both similar and strange ways. How did the little flagellate make itself a hundred times bigger, enabling it to eat really big wood chips? The answer seems to be that it repeated its structural elements along a line of symmetry, as if bolting one IKEA bookshelf to the next until it had something the size of a library.

These odd marriages of protist and bacteria, then, are probably not snapshots from a former time when symbiogenesis was common, but very peculiar products of the futuristic junkyard of Australian termites’ guts.

In 2050, as the population of the planet peaks, we’ll need 60 percent more food than we currently grow to feed increasingly affluent people. And if synthetic biologists do manage to make grassoline, we’ll need to increase the amount of green stuff we grow per acre between two and three times.

 

One such MFB was limonene—a lemon-scented solvent that is normally made by squeezing the skins left over from orange juice processing. It could be used as a fuel or an industrial ingredient. Pinene can be combined with another molecule to create JP-10, an advanced rocket fuel that goes for $25 a gallon. Producing very high-priced chemicals for the military was one way to keep the lab alive long enough to find other biofuels.

Genomatica makes 1,4-butanediol (BDO), used in making Spandex and plastics; it apparently moonlights as a psychedelic drug. The field’s legitimate blockbuster was DuPont’s 1,3-propanediol, used in creating polyester, paints, and glues. Produced by a genetically altered E. coli that lives on corn syrup, by 2021 it’s expected to have sales of more than half a billion dollars a year. Both appear to be significantly better for the environment than the petrochemicals they replace. And it’s a neat trick to turn corn syrup—often blamed for making us fat—into Spandex.

Why was progress so slow? When I first started reporting on JBEI, in 2008, scientists talked regularly about booting up yeast and bacteria with new DNA as if they were computers.

The complexity in the labs’ test tubes suggested that the cells themselves had an agenda. As Héctor put it, “What we’re doing is taking a bug [like E. coli] with no interest in producing biofuels and forcing it to produce them by inserting a pathway in there.” The bug’s “interest”—whatever it was—resisted manipulation. Eventually JBEI scientists learned to disrupt the cell’s internal communications, or at least jam them, to keep the cell off-kilter.

The multiple ways that biology resisted engineering reminded Héctor of Carl Woese, his biologist/physicist inspiration, who had observed that, unlike an electron, a cell has a history. The engineering teams recognized that cell metabolism has memories that do not reside in DNA, but in some other network or way of storing information within the cell. Their whimsical resistance to producing grassoline resembled—in a remote way—the quirky, idiosyncratic responses of the termites in the roboticists’ petri dishes.

By 2016, the team’s work increased the output of fatty acids that could be used as fuels from that strain of E. coli by 40 % using a systematic approach that could be applied to other problems. And the metabolic map tool combined with protein databases had increased production of pinene by 200% and limonene by 40%. They weren’t anywhere near Craig Venter’s dream of a million percent, but they were ramping up.

Yet the big question of how the termite’s gut was different from a 500-gallon steel tank was still out there, and it was standing in the way of getting the biofuel the scientists needed. Once the lab got one of their “bugs” producing a chemical, scaling up 1000-fold—from a flask the size of an orange juice glass to one the size of a kitchen garbage can—production would crater. How did the “bugs” change their behavior? And why? If there is a meaning in the scale and relationship of one organism to the whole—as Corina’s work showed in the fields—it wasn’t yet known in the bioreactor.

Fail to mix a bioreactor evenly and they’d end up with uneven streaks of oxygen and glucose that could create 400-fold changes in production—making it a black box within a black box.

DROUGHT, nutrients, robustness

Macrotermes in that part of Kenya build most of their mound underground, so they look less like the fingers I saw in Namibia than like land with a case of chicken pox, with each bump of a mound situated 20 to 40 yards from other bumps on all sides. The closer he was to the center of the mound, the more geckos Rob found. So then he looked at the bunchgrass and the acacia trees. A similar pattern. It was as though the termites had organized the entire landscape from below into a large checkerboard of fertility.

Some part of termites’ influence had to do with nutrients: a team of scientists found that the soils in the mounds were much richer in nitrogen and phosphorus than those off the mounds, and as a result the trees and grasses were not only more abundant there, but also had more nitrogen in their leaves, making them more nutritious—and possibly even more delicious—to everyone eating them. The termites also moved sand particles, so water behaved differently on the mounds.

Corina discovered that when grass was associated with a termite mound, it could survive on very little water, much less than expected. In the simplest terms, termite mounds made the landscape much more drought resistant.

Theoretical models from the mid-2000s predicted that when these dry land systems crashed, they wouldn’t gradually dry up but would instead progress from a labyrinth pattern of grass to spots, and then basically fall off a cliff (called a “critical transition”) to become desert.  But when Corina adjusted the rainfall in the model to produce the labyrinth of plants that might precede a crash, she found that when a landscape had termite mounds, the crash occurred very slowly—it was not a cliff but a staircase. What this meant was that places with termite mounds were much less likely to become desert, and if they did, they were likely to recover when rains reappeared.

Termites, then, appeared to increase the robustness of the whole place, in addition to providing homes for the geckos and food for the elephants. And with dry lands making up about 40 % of the world, and climate change redistributing rainfall, termites might actually be saving the planet. For real.

The idea that termites could be competing so strongly that they create patterns while making the ecosystem less likely to collapse? It’s a hard hump to get over.

Australian Aboriginal view of the world

Paperbark can be boiled and used for colds, she said. I prepared myself for a mini lecture on ethnobotany but we were quickly into some kind of cosmology, with a cascade of identifications, each leading to some new point in time and space. There was the yellow acacia flower, and when it’s out the oysters in the bay are fat. The pandanus grass can be used to make a basket. And here, under the leaf litter, is a grass with bright red roots that can be used for dyeing pandanus for baskets. When a shrub with red waxy flowers blooms, the sharks are fat and ready to eat in a nearby bay. And when the stringybark eucalyptus flowers, the honey will be ready inside the trees.

Everything here is relational to everything else and then interconnected, until the forest is a giant Internet leading to stories, lore, law, medicine, and fat delicious sharks.

There were other associations: the honey is related in some ways to the sea in the songlines and to the character Wuyal the “honeybag man,” but she thinks I might be interested in it because the termites hollow out the trees where the honey is found.

The songlines, he said, start from the horizon of the ocean, with the clouds breaking and the sun rising and setting. They talk about individual trees and plants and animals both at sea and on land. They talk about the stringybark trees. “We see what’s been sung in the sea and on land and that becomes how we manage the land,” he said. “But these feral [invasive] weeds are not in the songlines. The crazy ants are not, nor are the buffalo, the pigs, or the coastal gnats.”

Some termite facts

  • The word superorganism is used 39 times in this book.
  • They’re related to roaches.
  • With the shipment of goods and munitions around the world after World War II, the Formosan subterranean termite was transplanted from Asia to Louisiana and other southern U.S. states and began to spread in massive supercolonies.
  • 11 pounds of termites can move about 364 pounds of dirt in a year.
  • Namibian farmers estimate that every Macrotermes mound—which contains just 11 pounds of termites—eats as much dead grass as a 900-pound cow.
  • Only 28 out of 2800 termite species are invasive pests.
  • Darwin, Australia: By 2070, more than 300 days a year are expected to be over 95 degrees, up from eleven days now. Eighty percent of the eucalyptus trees in this northern region were hollow, eaten out by termites. Once hollowed, the trees burn differently: the tops fall off, flames shoot out the top, and the trees produce different gases.
  • One possible way to use nanobots in war is giving them orders to execute combatants based on whether they have certain DNA.
  • In southern Florida the human process of urbanization has led to the spread of two invasive termites (Coptotermes formosanus and C. gestroi). But climate change has made the timing of the two species’ nuptial flights sync up. Recently, males of one species started preferring females of the other species to those of their own. Now the two species have begun to hybridize, forming colonies that grow at twice the speed of either of the originals, with individuals that researchers describe as potential “super-termites.”
  • Twelve of the thirteen most invasive termite species are likely to spread, meaning you’ll soon have new neighbors, too.
  • Termite mounds only need to stay whole 51% of the time to survive.

 


Booklist: Natural history & Science, Evolution, Critical thinking, Health, Resource allocation, Climate change, Fire

Preface. My goal since college has been to read as much as I could across as many fields as possible to obtain a Big Picture View and understand the world as it really is rather than how I’d like it to be.  At first it was a bit like constantly learning that Santa didn’t exist, but then I got used to the world not being how I wanted it to be, and became amazed and interested rather than upset when new information came along.  All this reading has made my life quite joyous and interesting, and my wonder at the complexity of nature and the universe continues to grow.

I worked full time as a systems analyst at Electronic Data Systems, Bank of America, and American President Lines.  So how did I read so many books?  Instead of driving to work, I read books as I walked 8 to 10 miles (round-trip), and I still do today.

More booklists

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

***

Natural History & Science 

  • R. Conniff.  The Natural History of the Rich: A Field Guide
  • L. Margonelli. Underbug: an obsessive tale of termites and technology
  • P. Ward. Rare Earth: Why Complex Life Is Uncommon in the Universe
  • Sy Montgomery. The soul of an octopus: A surprising exploration into the wonder of consciousness
  • L. Cooke. The truth about animals: stoned sloths, lovelorn hippos, and other tales from the wild side
  • B. Bryson. A short history of nearly everything
  • E. O. Wilson. Consilience. The unity of knowledge. 1998
  • J. Sterba. Nature Wars: The Incredible Story of How Wildlife Comebacks Turned Backyards into Battlegrounds
  • E. O. Wilson. The Social Conquest of Earth
  • S. McCarthy. Becoming a Tiger: How baby animals learn to live in the wild
  • J. Burger. The Parrot Who Owns Me: The Story of a Relationship
  • J. Tresl. Who Ever Heard Of A Horse In The House
  • B. Krause. Great Animal Orchestra: Finding the origins of music in the world’s wild places
  • Charles Foster. Being a beast. Adventures across the species divide.
  • Carl Safina. Beyond words. What animals think and feel.
  • D. G. Haskell. The forest unseen. A year’s watch in nature
  • J. McPhee. The Control of Nature.
  • C. Safina. Eye of the Albatross. Views of the Endangered Sea
  • A. Weisman. Countdown: Our Last, Best Hope for a Future on Earth?
  • C. Slobodchikoff. Chasing Doctor Dolittle: Learning the Language of Animals
  • D. Bodanis. The Secret House.
  • C. Combes. The Art of Being a Parasite
  • J. Vaillant. The golden spruce: A true story of myth, madness, and greed
  • B. Kilham.  In the company of bears: what black bears have taught me about intelligence and intuition
  • A. Wulf. The invention of nature: Alexander von Humboldt’s New World
  • E. O. Wilson. The meaning of human existence
  • J. Hemming. Naturalists in Paradise: Wallace, Bates and Spruce in the Amazon
  • M. Roach. Stiff. The Curious Lives of Human Cadavers.
  • M. Roach. Gulp. Adventures on the Alimentary Canal.
  • M. Roach.  Packing for Mars. The Curious Science of Life in the Void.
  • N. Jablonski. Skin, A Natural History
  • D. Wolfe. Tales from the Underground: A Natural History of Subterranean
  • C. Tudge. The Bird: A natural history of who birds are, where they came from, & how they live
  • M. Derr.  A Dog’s History of America
  • S. Ellis. The Man who lives with wolves
  • C. Zimmer. Parasite Rex. Inside the Bizarre World of Nature’s Most Dangerous Creatures
  • J. Smith. Nature Noir A Park Ranger’s Patrol in the Sierra
  • B. Heinrich. Mind of the Raven. Investigations & adventures with Wolf-birds.
  • C. Mooney. The Republican Brain. The Science of why they Deny Science–and Reality
  • J. Gould. Animal Architects: Building and the Evolution of Intelligence
  • S. Hawking. A brief history of time.
  • M. Novacek. The biodiversity crisis: Losing what counts.
  • Peter Ward. A new history of life: the radical new discoveries about the origins and evolution of life
  • Yuval Noah Harari. Sapiens: a brief history of humankind
  • Cat Urbigkit. Shepherds of Coyote rocks: public lands, private herds & the natural world
  • E. Bailey. The sound of a wild snail eating
  • R. Conniff. The species seekers: heroes, fools, & the mad pursuit of life on earth
  • J. Vaillant. The tiger: a true story of vengeance and survival
  • M. Adams. Tip of the Iceberg: my 3,000 mile journey around wild Alaska, the last great American frontier
  • T. Flannery. 2002. The Future Eaters: An Ecological History of the Australian Lands and People          
  • M. Williams. 2002. Deforesting the Earth: From Prehistory to Global Crisis   
  • T. Flannery. 2001. The Eternal Frontier: An Ecological History of North America and Its Peoples. 
  • J. F. Mount. 1995. California Rivers & Streams. The Conflict between Fluvial Process & Land Use.  

Evolution

Critical Thinking

  • K. Andersen. Fantasyland. How America went haywire. A 500-year history
  • N. Oreskes. Merchants of doubt. How a handful of scientists obscured the truth
  • C. Sagan. The Demon-Haunted World:  Science as a Candle in the Dark
  • S. Singh. Trick or treatment.  The undeniable facts about alternative medicine.
  • C. Mooney. The Republican Brain: The Science of Why They Deny Science- and Reality
  • J. Garvey. The Persuaders: the hidden industry that wants to change your mind
  • C. Mooney. Unscientific America: How scientific illiteracy threatens our future
  • N. Postman. Amusing Ourselves to Death
  • R. Moynihan. Selling Sickness.
  • S. Salerno. Sham: How the Self-Help Movement Made America Helpless
  • Dietrich Dorner. The Logic of Failure
  • D. Levitan. Not a scientist: how politicians mistake, misrepresent, and utterly mangle science
  • N. Capaldi. The Art of Deception: An Introduction to Critical Thinking.
  • R. Cialdini. Influence: The Art of Persuasion
  • M. Shermer. Why People Believe Weird Things. Pseudoscience, superstition
  • M. Shermer. The Science of Good & Evil. Why People Cheat, Gossip, Care, Share, and follow the golden rule
  • T. Nichols. The death of expertise. The campaign against established knowledge and why it matters
  • J.J. Romm. Language intelligence: lessons on persuasion from Jesus, Shakespeare, Lincoln, and Lady Gaga
  • D. Kahneman. Thinking, Fast and Slow
  • A. Friedemann. Book Review of Grain Brain: Extraordinary claim not backed up by evidence

Health

  • M. Moss.  Salt, sugar, fat. How the food giants hooked us.
  • D. Kessler. The end of overeating: Taking control of the insatiable American appetite
  • Merrill Goozner. The $800 Million Pill. The Truth Behind the Cost of New Drugs
  • S. Glantz. Tobacco War.
  • J. Bennett. Unhealthy Charities: Hazardous to Your Health and Wealth
  • M. Nestle.  How the Food Industry Influences Nutrition and Health
  • L. Garrett. Betrayal of Trust. The collapse of global health
  • E. Whitney, et al.  Nutrition for Health and Health Care
  • G. Reynolds. The first 20 minutes. Surprising science reveals how we can exercise better, train smarter, live longer
  • B. Ehrenreich. Natural causes: An epidemic of wellness, the certainty of dying, and killing ourselves to live
  • R. George. Nine pints: A journey through the money, medicine, and mysteries of blood

Resource Allocation     

  • D. Landes. 1998. The Wealth and Poverty of Nations: Why Some Are So Rich and Some So Poor          
  • Jared Diamond. 2017. Guns, Germs, and Steel: The Fates of Human Societies
  • B. Ehrenreich. 2010. Nickel and Dimed: On (Not) Getting By in America
  • Susan George. 1994. Faith and Credit: The World Bank’s secular empire. 
  • M. Naim. 2016. Illicit.  How smugglers, traffickers, and copycats are hijacking the global economy

Climate Change    

  • S. R. Weart. 2004. The Discovery of Global Warming           
  • J. D. Cox. 2005. Climate Crash: Abrupt Climate Change And What It Means For Our Future      
  • Brian Fagan. 2000. The little ice age: how climate made history 1300 – 1850          
  • Brian Fagan. 2004. The long summer. How climate changed civilization     
  • J. Friedrichs. The future is not what it used to be. Climate change & energy scarcity
  • National Research council. 2002. Abrupt Climate Change: Inevitable Surprises  

Fire

  • S. J. Pyne. 1997. Fire in America: A Cultural History of Wildland and Rural Fire      
  • S. J. Pyne. 1991. Burning Bush, A Fire History of Australia              
  • M. Taylor. 2001. Jumping Fire.  A Smoke Jumper’s memoir of fighting wildfire
  • G. L. Simon. Flame and fortune in the American west. Urban development, environmental change, and the great Oakland hills fire              

Can Zinc batteries save us?

Preface: The New York Times had two articles about zinc air batteries in September 2018.  Right now, finite natural gas is the dominant way of balancing unreliable, outright missing, or intermittent power from wind and solar.  So other energy storage solutions simply have to be invented to replace natural gas, and zinc air is one way to do this.  Zinc air batteries are only proposed for energy storage, not electric vehicles.

Penn (2018) states that there are only 25 years of zinc reserves left.  As if that weren’t alarming and astounding enough, the article goes on to say that lithium reserves are even smaller – just 5% of zinc reserves.

Yet I suspect the average reader will come away from reading this article with optimism that progress is being made and not have alarm bells triggered by the 25 years of reserves.

The articles neglect to say that there are other problems with zinc batteries.

As the chart below shows, the amount of energy batteries can store per unit of weight pales in comparison to natural gas.

Specific energy is the amount of stored energy per unit of mass (per kilogram); the corresponding per-volume measure (per liter) is energy density. Source: Kurt Zenz House. 2009. The limits of energy storage technology.

It also takes far more energy to build batteries, and they store energy for a much shorter operating life before they need to be replaced. If you compare the energy stored over the lifetime of a storage device with the energy used to build it, compressed air energy storage and pumped hydro storage are orders of magnitude cheaper and more effective than batteries, with zinc-bromide near the bottom:

This graph shows the ratio of electrical energy stored over the lifetime of a technology to the energy needed to build it. Stored energy over the lifetime depends significantly on the cycling life, the efficiency, and the depth of discharge. Source: Charles J. Barnhart (2013) On the importance of reducing the energetic and material demands of electrical energy storage.
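
This lifetime-stored-energy to build-energy ratio is sometimes written as ESOI (energy stored on invested). The sketch below shows how such a ratio can be computed from the factors the caption names (cycle life, depth of discharge, efficiency, and the energy embodied in the device); all of the numbers are illustrative placeholders I chose for the example, not Barnhart’s data.

```python
# Back-of-the-envelope ESOI: lifetime electrical energy delivered per unit of
# storage capacity, divided by the energy needed to build that capacity.
# All parameter values below are illustrative assumptions, not sourced figures.

def esoi(cycle_life, depth_of_discharge, round_trip_efficiency,
         embodied_energy_per_kwh_capacity):
    lifetime_energy_out = cycle_life * depth_of_discharge * round_trip_efficiency
    return lifetime_energy_out / embodied_energy_per_kwh_capacity

# Hypothetical comparison: a long-lived pumped-hydro plant versus a battery
# that wears out after a few thousand cycles.
print("pumped hydro (illustrative):",
      round(esoi(cycle_life=25_000, depth_of_discharge=1.0,
                 round_trip_efficiency=0.80,
                 embodied_energy_per_kwh_capacity=100)))
print("battery (illustrative):",
      round(esoi(cycle_life=3_000, depth_of_discharge=0.8,
                 round_trip_efficiency=0.90,
                 embodied_energy_per_kwh_capacity=400), 1))
```

Even with made-up numbers, the structure of the formula shows why a device that cycles tens of thousands of times beats one that wears out after a few thousand cycles, regardless of how cheap either is per unit of capacity.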

The deployment challenges of zinc-air batteries also include poor reversibility and resultant cycling problems due to metal plating, as well as evaporation of the aqueous electrolyte when used in an open system (Parfomak 2012).

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

***

Zinc-air batteries have been lauded for their potential to store energy at a much lower cost than lithium-ion batteries, though experts cautioned that the actual cost varies a great deal depending on the application, which makes comparisons with lithium batteries difficult.

Zinc is also much less toxic than lithium, and it is not the fire hazard that lithium is. It is not completely safe, though: zinc ore is zinc sulfide, mined and processed along with lead, cadmium, and nickel, and the processing can release harmful sulfur dioxide and cadmium vapors.

More importantly, Dr. Narayan, professor of chemistry at the University of Southern California, said reserves of lithium, a primary element in lithium-ion batteries, were only 5% of the reserves of zinc.  But, he noted “At the present rate of production of zinc, zinc reserves will last about 25 years. So it is not clear from the reserves available if we will have enough zinc to support the enormous need that will result from the demand for grid-scale batteries.” (Penn 2018).

Mr. Cooper, senior research fellow for economic analysis at the Institute for Energy and the Environment at the Vermont Law School, pointed out that fracked gas has taken attention away from the need for alternative ways to store energy, since natural gas is the main way we now cope with wind and solar power that is intermittent, unreliable, or absent altogether. Cooper noted that capitalism doesn’t deal with problems until there is scarcity, so money isn’t flowing into battery energy storage research and development (Penn 2018b).

Parfomak, P. W. 2012. Energy Storage for Power Grids and Electric Transportation: A Technology Assessment. Congressional Research Service.

Penn, I. 2018. How zinc batteries could change energy storage. New York Times.

Penn, I. 2018b. Cheaper battery is unveiled as a step to a carbon-free grid. New York Times.


Earthquakes in California could cost over $200 billion

[Figure: USGS 2014 map of earthquake hazard in the U.S. over the next 50 years]

Preface. The figures below don’t do justice to the harm an earthquake would do.  There is $1.9 trillion of property at risk from earthquakes in the San Francisco Bay Area, where a catastrophic earthquake on the Hayward Fault would almost certainly have ripple effects throughout California, the U.S., and the world, since this area has one of the highest concentrations of people, wealth, and innovation in the U.S. (Grossi).

There are two government documents below: first, excerpts from the National Research Council’s 2011 National Earthquake Resilience: Research, Implementation, and Outreach, and second, a 2011 House of Representatives hearing called “Are we prepared? Assessing earthquake risk reduction in the U.S.”

These are just a few of the earthquake faults and their estimated costs in California:

Earthquake (Cost / Where):

  • $  69 billion / Southern California Puente Hills fault
  • $  54 billion / Northern California San Andreas Fault
  • $ 213 billion / Southern California San Andreas Fault (Ii 2016, USGS 2008)
  • $  49 billion / Southern California Newport-Inglewood fault
  • $ 190-235 billion / Northern California Hayward Fault (Lesle 2014, Grossi 2013)
  • $  30 billion / Southern California Palos Verdes fault
  • $  29 billion / Southern California Whittier fault
  • $  24 billion / Southern California Verdugo fault

A more detailed estimation (NRC 2011):

TABLE 3.2 (not reproduced here): HAZUS-MH Annualized Earthquake Loss (AEL) and Annualized Earthquake Loss Ratios (AELR) for 43 high-risk (AEL greater than $10 million) metropolitan areas

Possible cascading effects of a large earthquake would be:

  • Destruction of the delta levee system, resulting in $40 billion losses and no drinking water for 23 million people
  • Crashing the U.S. financial system, perhaps also the global financial system
  • Los Angeles is the #1 port in the USA and Oakland #7 in the value of imported and exported goods
  • Food security: California supplies a third of food in the United States, and exports a great deal of food as well
  • Bankruptcy of most insurance and re-insurance companies, delaying and preventing recovery
  • Earthquakes sometimes result in compound disasters, in which the major event triggers a secondary event, natural or from the failure of a man-made system. In urban areas, fires may originate in gas lines and spread to storage facilities for petroleum products, gases, and chemicals. These fires often are a much more destructive agent than the tremors themselves because water mains and fire-fighting equipment are rendered useless. More than 80 percent of the total damage in the 1906 San Francisco quake was due to fire (OTA).

California Bay Area Hayward or San Andreas earthquake

  • According to reports by the Association of Bay Area Governments, more than 100,000 dwellings would be uninhabitable and as many as 400,000 could sustain some damage. In a region where rents and home prices are at a premium and vacancies are extremely low, damage to one third of the housing stock in the counties closest to the fault rupture (combined with the business disruption and the inability to travel around the region) would create a social and financial disaster.
  • The potential for massive disruption is a function of the physical conditions in the region. The building stock and the infrastructure are old. The geography of the region has concentrated urban development between the hills and the bay, forcing limited transit corridors with little redundancy and creating significant distances between the urban core immediately surrounding the bay and outlying communities.

On July 17, 2014, the United States Geological Survey (USGS) announced updated U.S. National Seismic Hazard Maps, reflecting the latest scientific views on where future earthquakes will occur, how often they will occur, and how strongly the ground is likely to shake.  Some of the details have changed since the maps were last released in 2008 (National Seismic Hazard Project).

Lack of Insurance in the San Francisco Bay Area

Over half of the loss after Hurricane Katrina (53%) was covered by insurance.  But only 6% to 10% of the total residential losses and 15% to 20% of the commercial losses of a major Hayward Fault earthquake are expected to be reimbursed by insurance. And those lucky enough to have earthquake insurance will not be completely reimbursed; overall, insurance payments will cover between 10% and 15% of the total loss, somewhere between $11 and $26 billion (Grossi).

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

NRC. 2011. National Earthquake Resilience: Research, Implementation, and Outreach. National Research Council

Earthquakes threaten much of the United States—damaging earthquakes struck Alaska in 1964 and 2002, California in 1857 and 1906, and the central Mississippi River Valley in 1811 and 1812. Moderate earthquakes causing substantial damage have repeatedly struck most of the western states as well as several mid-western and eastern states, e.g., South Carolina in 1886 and Massachusetts in 1755. The recent, disastrous, magnitude-9 earthquake that struck northern Japan demonstrates the threat that earthquakes pose, and the tragic impacts are especially striking because Japan is an acknowledged leader in implementing earthquake resilient measures. Moreover, the cascading nature of impacts—the earthquake causing a tsunami, cutting electrical power supplies, and stopping the pumps needed to cool nuclear reactors—demonstrates the potential complexity of an earthquake disaster. Such compound disasters can strike any earthquake-prone populated area.

Summary

The United States will certainly be subject to damaging earthquakes in the future, and some of those earthquakes will occur in highly populated and vulnerable areas. Just as Hurricane Katrina tragically demonstrated for hurricane events, coping with moderate earthquakes is not a reliable indicator of preparedness for a major earthquake in a populated area.

The United States has not experienced a great earthquake since 1964, when Alaska was struck by a magnitude-9.2 event, and the damage in Alaska was relatively light because of the sparse population. The 1906 San Francisco earthquake was the most recent truly devastating U.S. shock, because recent destructive earthquakes have been only moderate to strong in size. Consequently, a sense has developed that the country can cope effectively with the earthquake threat and is, in fact, “resilient.” However, coping with moderate events may not be a true indicator of preparedness for a great one. One means to understand the potential effects from major earthquakes is to use scenarios, where communities simulate the effects and responses to a specified earthquake.

Analysis of the 2008 ShakeOut scenario in California (Jones et al., 2008), which involved more than 5,000 emergency responders and the participation of more than 5.5 million citizens, indicated that the magnitude-7.8 scenario earthquake would have resulted in an estimated 1,800 fatalities, $113 billion in damages to buildings and lifelines, and nearly $70 billion in business interruption. Such an earthquake would clearly have a major effect on the nation as a whole.

Introduction

When a strong earthquake hits an urban area, structures collapse, people are injured or killed, infrastructure is disrupted, and business interruption begins. The immediate impacts caused by an earthquake can be devastating to a community, challenging it to launch rescue efforts, restore essential services, and initiate the process of recovery. The ability of a community to recover from such a disaster reflects its resilience.

The three most recent earthquake disasters in the United States all occurred in California—in 1994 near Los Angeles at Northridge, in 1989 near San Francisco centered on Loma Prieta, and in 1971 near Los Angeles at San Fernando. In each earthquake, large buildings and major highways were heavily damaged or collapsed and the economic activity in the afflicted area was severely disrupted. Remarkably, despite the severity of damage, deaths numbered fewer than a hundred for each event. Moreover, in a matter of days or weeks, these communities had restored many essential services or worked around major problems, completed rescue efforts, and economic activity—although impaired—had begun to recover. It could be argued that these communities were, in fact, quite resilient. But it should be emphasized that each of these earthquakes was only moderate to strong in size, less than magnitude-7, and that the impacted areas were limited in size. How well would these communities cope with a magnitude-8 earthquake?

Would an earthquake on the scale of the 1906 event in northern California or the 1857 event in southern California lead to a similar catastrophe? It is likely that an earthquake on the scale of these events in California would indeed lead to a catastrophe similar to Hurricane Katrina, but of a significantly different nature. Flooding, of course, would not be the main hazard, but substantial casualties, collapse of structures, fires, and economic disruption could be of great consequence. Similarly, what would happen if there were to be a repeat of the New Madrid earthquakes of 1811-1812, in view of the vulnerability of the many bridges and chemical facilities in the region and the substantial barge traffic on the Mississippi River? Or, consider the impact if an earthquake like the 1886 Charleston tremor struck in other areas in the central or eastern United States, where earthquake-prone, unreinforced masonry structures abound and earthquake preparedness is not a prime concern?

EARTHQUAKE RISK AND HAZARD

Earthquakes proceed as cascades, in which the primary effects of faulting and ground shaking induce secondary effects such as landslides, liquefaction, and tsunami, which in turn set off destructive processes within the built environment such as fires and dam failures.

The socioeconomic effects of large earthquakes can reverberate for decades.

Moreover, the scenario is essentially a compound event like Hurricane Katrina, with urban fires caused by gas main breaks and other induced accidents projected to cause $40 billion of the property damage and more than $22 billion of the business interruption. Devastating fires occurred in the wake of the 1906 San Francisco, 1923 Tokyo, and 1995 Kobe earthquakes. Loss estimates have been published for a range of earthquake scenarios based on historic events (e.g., the 1906 San Francisco earthquake, the 1811-1812 New Madrid earthquakes, and the magnitude-9 Cascadia subduction earthquake of 1700) or inferred from geologic data that show the magnitudes and locations of prehistoric fault ruptures (e.g., the Puente Hills blind thrust that runs beneath central Los Angeles). In all cases, the results from such estimates are staggering, with economic losses that run into the hundreds of billions of dollars.

Hazard insurance issues. NEHRP-sponsored social research has documented many difficulties in developing and maintaining an actuarially sound insurance program for earthquakes and floods: those who are most likely to purchase earthquake and flood insurance are, in fact, those who are most likely to file claims. This adverse-selection problem makes it virtually impossible to sustain a private-sector insurance market for these hazards. Economists and psychologists have documented in laboratory studies a number of logical deficiencies in the way people process risk information when making insurance decisions. Market failure in earthquake and flood insurance remains an important social science research and public policy issue.

Post-disaster responses by the public and private sectors. Research before and since the establishment of NEHRP in 1977 has contradicted misconceptions that during disasters, panic will be widespread, that large percentages of those who are expected to respond will simply abandon disaster relief roles, that local institutions will break down, that crime and other forms of anti-social behavior will be rampant, and that the mental impairment of victims and first responders will be a major problem.

An analysis of the impacts of a magnitude-7.7 earthquake on all three New Madrid faults was performed by the Mid-America Earthquake Center under the FEMA New Madrid Catastrophic Planning Initiative (Elnashai et al., 2009). Results indicated that this event would have widespread, catastrophic consequences (Figure 2.1), including:

  • Nearly 715,000 buildings damaged in eight states.
  • Substantial damage to critical infrastructure (essential facilities, transportation, and utility lifelines) in 140 counties: 2.6 million households without electric power; 425,000 breaks and leaks to both local and interstate pipelines; and 3,500 damaged bridges, with 15 major bridges unusable.
  • 86,000 casualties for a 2:00 am scenario, with 3,500 fatalities.
  • 7.2 million people displaced, with 2 million seeking temporary shelter.
  • 130 hospitals damaged.
  • $300 billion in direct economic losses, including buildings, transportation, and utility lifelines, but excluding business interruption costs.

Moreover, infrastructure damage would have a major impact on interstate transport crossing the Central United States.

The report, When the Big One Strikes Again (Kircher et al., 2006), estimated that many of Northern California’s nearly 10 million residents would be affected. It would cost $90-$120 billion to repair or replace the more than 90,000 damaged buildings and their contents, and as many as 10,000 commercial buildings would sustain major structural damage. Between 160,000 and 250,000 households would be displaced from damaged residences. Depending upon whether the earthquake occurs during the day or night, building collapses would cause 800 to 3,400 deaths, and a conflagration similar in scale to the 1906 fire is possible and could cause an immense loss. Damage to utilities and transportation systems would increase losses by an additional 5% to 15%, and economic disruption from prolonged lifeline outages and loss of functional workspace would cost several times this amount. Considering all loss components, the total price tag for a repeat of the 1906 earthquake is likely to exceed $150 billion. In such a scenario, the city of San Francisco might not be able to recover from the cascading consequences and might lose its central place in the region.

Both the Bay Area and southern California scenarios impact some of the largest population centers in the United States, with damage estimates ranging between $100 and $200 billion and with thousands of fatalities and tens of thousands of injuries. Similarly, scenarios indicate that earthquake-induced levee failures in the Sacramento-San Joaquin River delta would disrupt drinking water supplies to more than 22 million Californians as well as irrigation water to delta and state agricultural lands.

One Cascadia earthquake scenario estimates more than $11 billion in building damages for the mid- and southern Willamette Valley (Burns).

In the eastern United States, an earthquake loss estimation for the metropolitan New York–New Jersey–Connecticut area showed that even a moderate earthquake would significantly impact the region’s large population (18.5 million) and predominantly unreinforced masonry building stock (Tantala et al., 2003). South Carolina recently completed a comprehensive risk assessment for a repeat of the 1886 magnitude-7.3 Charleston earthquake, producing an estimate of $20 billion in direct losses (URS et al., 2001).

As seen in Table 3.2, 43 metropolitan areas—led by Los Angeles and San Francisco—account for the majority (82%) of the earthquake risk in the United States. Outside of California, at-risk communities such as Seattle, WA; Portland, OR; Salt Lake City, UT; and Memphis, TN, show that earthquakes are not just a California problem.

Scenario studies of a repeat of the magnitude-6.8 to 7.0 Hayward, CA, earthquake indicate that only 6 to 10% of total residential losses and 15 to 20% of commercial losses would be covered by insurance. In contrast, approximately 53% of the economic losses to homes and businesses following Hurricane Katrina were covered by insurance, including payouts from the National Flood Insurance Program.

Confidentiality Issues

Many stakeholders, especially those in areas of critical infrastructure, are reluctant or, because of provisions in the Homeland Security Act of 2002, are unable to release inventory information beyond their organizations. These restrictions impact the ability of communities to recognize and plan for service disruptions during disasters.

Research over decades has contradicted misconceptions that during a disaster panic will be widespread, those expected to respond will abandon their roles, social institutions will break down, and anti-social behaviors will become rampant.

The poor, minorities, the aged, and the infirm are more vulnerable, and even the middle class and those well off can be rendered indigent as a result of a disaster.

Construction prices are likely to rise following a major earthquake. Although this is often attributed to increased demand for repair and reconstruction, it also stems from the fact that construction equipment has been damaged, as have inventories of construction materials. Moreover, production of additional materials may be limited because of damage to their manufacturers. This condition can raise the cost of recovery significantly, and it involves an important tradeoff: rebuild quickly at a high price to minimize business interruption losses, or accept business interruption losses and wait until prices settle down in order to reduce recovery costs.

We acknowledge that this is a challenging subject largely because of the complex network characteristics of electricity, gas, water, transportation, and communication lifelines.

A dramatic “wake-up call” concerning the vulnerability of electric systems and the resultant regional and national consequences occurred as a result of the August 2003 Northeast Blackout. This blackout affected 5 states and 50 million people and caused an estimated $4-10 billion in business interruption losses in the central and eastern United States. Moreover, the power outage caused “cascading” failures to water systems, transportation, hospitals, and numerous other critical infrastructures; such infrastructure failure interdependencies are common across many types of disasters. The 2003 Northeast Blackout demonstrated that while initiating events can vary (e.g., a falling tree, an earthquake, or an act of terrorism), the consequences can be similar.

House 112-13. April 7, 2011. Are we prepared? Assessing earthquake risk reduction in the U.S.  House hearing. 82 pages.

Excerpts:

The hearing will examine various elements of the Nation’s level of earthquake preparedness and resiliency including the U.S. capability to detect earthquakes and issue notifications and warnings, coordination between federal, state and local stakeholders for earthquake emergency preparation, and research and development measures supported by the federal government designed to improve the scientific understanding of earthquakes. Portions of all 50 states are vulnerable to earthquake hazards, although risks vary across the country and within individual states. Twenty-six urban areas in 14 U.S. states face significant seismic risk. Earthquake hazards are greatest in the western United States, particularly in California, Oregon, Washington, Alaska, and Hawaii. Though infrequent, earthquakes are unique among natural hazards in that they strike without warning. Earthquakes proceed as cascades, in which the primary effects of faulting and ground shaking induce secondary effects such as landslides, liquefaction, and tsunami, which in turn set off destructive processes within the built environment; structures collapse, people are injured or killed, infrastructure is disrupted, and business interruption begins. The socioeconomic effects of large earthquakes can reverberate for decades. The recent earthquake that struck off the coast of northern Japan on March 11, 2011, illustrates that the effects of an earthquake can be catastrophic. The earthquake, recorded as a 9.0 on the Richter scale, is the most powerful quake to hit the country, and it triggered a devastating tsunami that swept over cities and farmland in the northern part of the country. As Japan struggles with rescue efforts, it also faces a nuclear emergency due to damage to the nuclear reactors at the Fukushima Daiichi Nuclear Power Station. As of March 31, the official death toll from the earthquake and resulting tsunami includes more than 11,600, and more than 16,000 people were listed as missing. The final toll is expected to reach nearly 20,000. More than 190,000 people remained housed in temporary shelters; tens of thousands of others evacuated their homes due to the nuclear crisis and related fear.

In Japan, the aftereffects of the quakes have reduced supplies of water and electricity, hampering the country’s ability to export many manufactured products and forcing some businesses to slow or stop operations altogether. Supply chains for important technology products here in the States have also been interrupted, directly impacting our productivity.

Clearly the consequences of a major earthquake are felt on a global scale. These hazards represent a serious threat to both national security and global commerce. Given our current economic situation, it would be even more painful for the United States to endure a disastrous earthquake, the socioeconomic effects of which would reverberate for decades.

CHRIS POLAND, CHAIRMAN AND CHIEF EXECUTIVE OFFICER, DEGENKOLB ENGINEERS AND CHAIRMAN, NEHRP ADVISORY COMMITTEE

I am testifying on behalf of the 140,000 members of the American Society of Civil Engineers (ASCE). At ASCE, I am Chairman of the Infrastructure and Research Policy Committee. Additionally, I serve as Chairman of Degenkolb Engineers and as Chairman of the National Earthquake Hazards Reduction Program (NEHRP) Advisory Committee. I am a registered civil and structural engineer and have worked for more than 35 years as an advisor on government programs for earthquake hazard mitigation and in related professional activities.

It also must be recognized that resilience is not just about the built environment. It starts with individuals, families, and communities, and includes their organizations, businesses, and local governments. In addition to an appropriately constructed built environment, resilience includes plans for post-event governance, reconstruction standards that assure better performance in the next event, and a financial roadmap for funding the recovery.

While the nation can promote resilience through improved design codes and mitigation strategies, implementation and response occur at the local level. Making such a shift to updated codes and generating community support for new policies are not possible without solid, unified support from all levels of government.

The federal government needs to set performance standards that can be embedded in the national design codes, be adamant that states adopt contemporary building codes including provisions for rigorous enforcement, provide financial incentives to stimulate mitigation that benefits the nation, and continue to support research that delivers new technologies that minimize the cost of mitigation, response, and recovery. Regions need to identify the vulnerability of their lifeline systems and set programs for their mitigation to the minimum level of need. Localities need to develop mandatory programs that mitigate their built environment as needed to assure recovery.

[In response to a question about how prepared we are on a scale of 1 to 100 for resiliency, preparation, and recovery]:  Are we prepared? No. I would say maybe 10. In areas of very high seismicity in California, Oregon, and Washington, there have been building codes in place for 20 years that are going to help people be safe. In other parts of the country that we talk about, those things are not in place. On a scale of safety, I believe that California would maybe be a 50 or 60. On a scale of resilience, to be able to recover quickly and not have a significant impact on the national economy, we are still down in the 10–20 range.

The vast majority of our building stock and utility systems in place today were not designed for earthquake effects, let alone given the ability to recover quickly from strong shaking and land movement. Earthquake engineering is a new and emerging field, and only since the mid-1980s has sufficient information been available to assure safe designs. Design procedures that will assure resilience are just now being developed. Strong, community-destroying earthquakes are expected to occur throughout the United States. In most regions outside of California, little is being done about it. While modern building codes and design standards are available, they are not routinely implemented on new construction or during major rehabilitation efforts because of the complexity and cost. Many communities do not believe they are vulnerable and, if they do accept the vulnerability, find the demands of seismic mitigation unreachable.

The problem of implementation and acceptance does not just lie with the public, but also with the earthquake professionals. Because this is an emerging area of understanding, conservatism is added whenever there is significant uncertainty. Earth science research has made great strides in identifying areas that will be affected by strong shaking. Unfortunately, each earthquake brings different styles of shaking and building performance. This leaves many structural engineers generally uncertain about what causes buildings to collapse, and unwilling to predict the extent of damage that will occur, let alone whether a building will be usable during repairs or whether lifeline systems can be restored quickly enough. Resilience demands transparent performance, and significant earthquake science and earthquake engineering research and guideline development are needed to bring that ability to communities.

Comprehensive worldwide monitoring and data gathering related to earthquake intensity and impact. Extensive instrumentation is needed to adequately record the size and characteristics of the energy released and the variation in intensity of strong shaking that affect the built environment. We are lucky if we obtain a handful of records for entire cities, but in reality thousands are needed to record the dramatic differences that occur and to understand the damage that results. In addition, the geologic changes that occur due to faulting, landslides, and liquefaction need to be surveyed, recorded, and used to understand the future vulnerability of the built environment to land movement. A network of observation centers is needed to record, catalogue, and maintain information related to the impacts on society and the factors influencing communities’ disaster risk and resilience. At present, earthquake engineering is based more on anecdotal observations of damage that are translated into conservative design procedures without the benefit of accurate data about what actually happened. In my mind, expanded monitoring is the single most important area for reducing the cost of seismic design and mitigation and for allowing us to achieve greater resilience.

An Overarching Framework that Defines Resilience in Terms of Performance Goals

Resiliency is all about how a community of individuals and their built environment weather the damage, respond, and recover. It is more about improvisation and redundancy than about how any single element or system performs. Buildings and systems are designed one structure at a time for the worst conditions they are expected to experience. This approach worked well when life safety was the goal and there was no need to consider the overall performance of the built environment. Resiliency, however, demands that performance goals and their interdependencies be set at the community level for the classes of structures and systems that communities depend on during the recovery process. Facilities providing essential services during post-earthquake response and recovery must function without interruption. Electric power is needed before any other system can be fully restored. Emergency generators can only last a few days without additional deliveries of fuel. Power restoration, however, depends on access for emergency repair crews and their supplies. Community-level recovery depends on neighborhoods being restored within a few weeks so the needed workforce is available to restart the local economy. People must be able to shelter in place in their homes, even without utilities, but cannot be expected to stay and work after a few days without basic utility services. To ensure that past and future advances in building, lifelines, urban design, technology, and socioeconomic research result in improved community resilience, a framework for measuring, monitoring, and evaluating community resilience is needed. This framework must consider performance at various scales (e.g., building, lifeline, and community) and build on the experience and lessons of past events. Only the Federal government can break the stalemate over setting performance goals, a stalemate that, if left unresolved, will eventually cripple the nation.

Representative David Wu, Oregon. As an Oregonian, I am particularly concerned with the prospect of a similar disaster occurring in the Pacific Northwest. Off the coast of Oregon, Washington, and northern California, we have the Cascadia subduction zone. This fault is currently locked in place, but research over the last 30 years indicates that the stress now accumulating has been released as a large earthquake about once every 300 years, dating back to the last ice age about 12,000 years ago. The last Cascadia earthquake occurred 309 or 310 years ago. It was a magnitude 9.0 earthquake, the same destructive magnitude as the one that struck Japan. All indications show that we Oregonians can expect another quake at any time. It is a matter of when, not a matter of if.

When the next earthquake occurs on our fault, there will be prolonged shaking, perhaps for as long as five minutes, with the potential to collapse buildings, create landslides, and destroy water, power, and other crucial infrastructure and lifelines. Such an earthquake will also likely trigger a devastating tsunami that could overwhelm the Oregon coast in less than 15 minutes, resulting in potentially thousands of fatalities and billions of dollars in damage. Unfortunately, this type of disaster scenario is not limited to the Western United States. In fact, more than 75 million Americans across 39 states face significant risk from earthquakes.

JACK HAYES, DIRECTOR, NATIONAL EARTHQUAKE HAZARDS REDUCTION PROGRAM, NIST

Since the beginning of 2010, we have witnessed horrific losses of life in Haiti (over 230,000) and Japan (toll still unknown but numbering in the tens of thousands) due to the combined earthquake and tsunami impacts, and lesser, but nevertheless significant, losses of life in Chile and New Zealand. The toll in terms of human life is overwhelming, and we all offer our heartfelt sympathy to those nations and their citizens.

The Haiti and Chile earthquakes provided a stark contrast in the effectiveness of modern building codes and sound construction practices. In Haiti, where such standards were minimal or non-existent, many thousands were killed in the collapses of homes and other buildings. In Chile, with much more modern building codes and engineering practices, the loss of life, while still tragic, was far smaller, about 500, despite the fact that the Chile earthquake had a significantly higher magnitude of 8.8 (M8.8) than the Haiti earthquake (M7.0). The fault rupture that caused the Chile earthquake released approximately 500 times the energy released in the Haiti earthquake. The Chilean building code provisions had been based in large part on U.S. model building codes that have been developed by researchers and practitioners who have been associated with and supported by NEHRP.

Scientists and engineers have not yet had enough time since the 2011 earthquakes in New Zealand (M6.3) and Japan (M9.0) to draw detailed conclusions. We do know that Japan and New Zealand are international leaders in seismology and earthquake engineering; we in the U.S. partner with our counterparts in both countries because we have much to learn from one another. Despite their technical prowess, leaders in both countries have been taken aback by the amount of damage that has occurred. One lesson we take from this, before we even begin detailed studies, is that we still have much to learn about the earthquake hazards we face and the engineering measures needed to minimize the risks from those hazards. Assuming that we already know everything we need to know is the surest strategy for catastrophe.

The other broad lesson that has already become clear from both of these events is that local, and indeed national, resilience (the ability to recover in a timely manner from the occurrence of an earthquake or other hazard event) is vital, going far beyond the essential, but narrowly focused, issue of ensuring life safety in buildings and other locations when an earthquake occurs. In Christchurch, NZ, the central business district has been largely closed since the February 21 earthquake, severely impacting the local economy. Some reports indicate as many as 50,000 people are out of work as a result of this closure. In Japan, the impact of the March 11 earthquake and resulting tsunami has been far worse on the national economy, with energy, agriculture, and commercial disruptions of monumental proportions. Some estimates already put the economic losses over $300 billion, and economic disruption is certain to continue for years and extend far beyond Japan’s shores.
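
As a quick check on that energy comparison (my own arithmetic; the testimony does not show the calculation), the standard magnitude-energy scaling says each full unit of magnitude corresponds to roughly a 32-fold increase in radiated energy, so an M8.8 rupture releases about 500 times the energy of an M7.0 rupture:

# Minimal sketch of the standard Gutenberg-Richter energy scaling,
# log10(E) ~ 1.5*M + const (E in joules); the constant cancels in a ratio.
def energy_ratio(m_big, m_small):
    return 10 ** (1.5 * (m_big - m_small))

print(round(energy_ratio(8.8, 7.0)))   # ~500: Chile (M8.8) vs. Haiti (M7.0)
print(round(energy_ratio(9.0, 6.3)))   # ~11,000: Japan (M9.0) vs. Christchurch (M6.3)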

The 2010 and 2011 events followed decades or even centuries of quiescence on the faults where they struck and are sobering reminders of the unexpected tragedies that can occur. The USGS has recently issued updated assessments of earthquake hazards in the U.S. that provide appropriate perspectives for us. For example, in 2008, the USGS, the Southern California Earthquake Center (SCEC), and the California Geological Survey (CGS), with support from the California Earthquake Authority (CEA), jointly forecast a greater than 99% certainty of California’s experiencing an M6.7 or greater earthquake within the next 30 years.
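
To put a greater-than-99% chance over 30 years in perspective, here is a minimal sketch that inverts the probability into an implied statewide event rate, assuming for illustration that M6.7+ earthquakes arrive as a simple Poisson process (the agencies’ actual forecast is built fault by fault and is far more sophisticated than this):

import math

# Invert P(at least one M6.7+ event in 30 years) = 0.99 to an implied
# statewide rate, under a simple Poisson (memoryless) assumption.
p_30yr = 0.99                          # stated probability over 30 years
years = 30.0
rate = -math.log(1 - p_30yr) / years   # implied events per year (~0.15)
print(f"Implied rate: {rate:.3f} per year")
print(f"Mean recurrence: about {1/rate:.1f} years between M6.7+ events")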

The recent New Zealand earthquake, at M6.3, is slightly less severe than that which is postulated for California. The recent Chile and Japan earthquakes, at M8.8–M9.0, occurred in tectonic plate collision zones where one plate overrides another; that characteristic is closely comparable to the zones that generated the 1964 Alaska earthquake and more ancient earthquakes off the coasts of Oregon and Washington, in the Cascadia Subduction Zone. Seismologists thus believe that what we have recently observed in Chile and Japan should serve as a clear indication of what may someday occur again off the Alaska, Oregon, and Washington coasts.

While concern for future earthquake activity is always great along our West Coast, the National Research Council has noted in its publications that 39 states in the U.S. have some degree of earthquake risk, with 18 of those having high or very high seismicity. In 2011 and 2012, earthquake practitioners and state and local leaders in Memphis, St. Louis, and other Midwestern locales will participate in events that will commemorate the bicentennial anniversary of the New Madrid sequence of earthquakes, which included at least four earthquakes with magnitudes estimated at 7.0 or greater.

If a southern California earthquake severely damaged the ports of Los Angeles and Long Beach, as happened to the port of Kobe, Japan, in 1995, there would be national economic implications. Similarly, if a major earthquake occurred in the Central U.S., one or more Mississippi River transcontinental rail or highway crossings in the Saint Louis to Memphis region, as well as oil and natural gas transmission lines, could be severely disrupted.

In 2008, the USGS, California Geological Survey, and Southern California Earthquake Center produced a plausible scenario of a rupture of the southern end of the San Andreas fault that could result in about 1,800 deaths, 50,000 injuries, and economic losses exceeding $200 billion in the greater Los Angeles area. This scenario formed the basis for the 2008 Great Southern California Shakeout earthquake preparedness and response exercise.

JIM MULLEN, DIRECTOR, WASHINGTON STATE EMERGENCY MANAGEMENT DIVISION AND PRESIDENT, NATIONAL EMERGENCY MANAGEMENT ASSOCIATION

Response & Recovery. A major event involving multiple disciplines is complex and difficult to manage. While firefighters, law enforcement officials, and emergency medical personnel often constitute the traditional first responders, emergency managers provide the all important coordination function. This coordination far exceeds the initial response as emergency managers also maintain responsibility for the transition from the lights and sirens of response into the complex and often long-term efforts of recovery. Once an event occurs, the response is a three-tiered process of escalation where the level of support is directly related to the need of the impacted jurisdiction. The initial response is at the local level where first responders and local emergency managers provide assistance. Should the incident exceed the capacity of those local responders, the state may offer assistance in myriad ways including personnel, response resources, financial support, and mutual aid. On rare occasions, an event will even overwhelm the state’s ability to mount an effective response. This is usually the only time in which the Federal Emergency Management Agency (FEMA) is called upon to offer assistance. FEMA assistance is triggered by a direct request from the Governor to the President. Should the President deem the event worthy of federal assets, a Presidential Disaster Declaration is declared and FEMA can provide assistance such as assets from the Department of Defense, financial aid, and expertise. Disaster assistance from FEMA traditionally comes in one of three forms. The first is the Public Assistance (PA) Program which provides supplemental financial assistance to state and local governments as well as certain private non-profit organizations for response and recovery activities required as a result of a disaster. The PA Program provides assistance for debris removal, emergency protective measures, and permanent restoration of infrastructure. Federal share of these expenses are typically not less than 75 percent of eligible costs. The PA Program encourages protection from future damages by providing assistance for Hazard Mitigation

VICKI MCCONNELL, DIRECTOR, OREGON DEPARTMENT OF GEOLOGY AND MINERAL INDUSTRIES

Oregon’s Department of Transportation published in 2009 the Seismic Vulnerability of Oregon State Highway Bridges: Mitigation Strategies to Reduce Major Mobility Risks. This study incorporates FEMA HAZUS risk assessment modeling funded by NEHRP, as well as NEHRP soil conditions data, to determine peak ground acceleration (PGA). The findings indicate that 38% of state-owned bridges in western Oregon would fail or be too heavily damaged to be serviceable after a magnitude 9.0 earthquake, and that repair or replacement would take 3–5 years, essentially cutting Oregon’s coastal communities off from the rest of the state.

Chairman QUAYLE. Mr. Poland, in your testimony you compared the different results of the earthquakes that occurred in Haiti and Japan, and even what happened in the Northridge quake and the quake that occurred in San Francisco. You mentioned that it would be cost-prohibitive to retrofit buildings across the United States. What is your suggestion to minimize the repercussions of an earthquake? Do you mostly look at where different communities lie along faults? For example, if a city is close to the San Andreas fault, you obviously take different things into account than for cities in middle America located away from the New Madrid fault line.

Mr. POLAND. The biggest problem we have is that the built environment that we have right now in the country has not been designed for earthquake effects, both in terms of public safety and in terms of being able to recover and resiliency. And so the biggest problem we have is, what do we do with 85 or 90 percent of our buildings and systems that are not adequate for the kind of performance that we want. When I spoke about it being cost-prohibitive, I was speaking about retrofitting those buildings and those systems so that they can perform properly, and that is what costs so much money.

Mr. WU. My second question is that we do have a number of nuclear reactors that are sitting on active seismic zones, and I believe one of them is on the West Coast. Can you all comment on what can be done to build resiliency and recovery into these nuclear facilities? You know, what we found in Japan is that it wasn’t the earthquake, it was the tsunami and the loss of electricity and it affected both the reactor itself and the fuel that was stored in pools on top of the reactor facility. Can you all comment on how we can do a better job with our own nuclear facilities?

Dr. HAYES.  NEHRP itself does not address the nuclear facilities in the United States. That is the responsibility of the Nuclear Regulatory Commission and the Department of Energy.

Mr. POLAND. I would just like to add that the design process that has been done for nuclear power plants since their inception has been extraordinarily rigorous and much more detailed and much more carefully done than for any other kind of construction by many orders of magnitude. Our facilities, our nuclear facilities from a standpoint of strong shaking are the safest buildings that we have in the Nation. The problem in Japan, as you mentioned, had to do with the tsunami, and it wasn’t that they didn’t think they were going to have a tsunami. They had a wall. The wall wasn’t tall enough. The backup systems didn’t work as well as they thought that they would.

Mr. SARBANES. Okay. Humans are notoriously shortsighted about everything, and even with the earthquake activity of recent days, we will get back to being shortsighted even on this question, and I wonder if you could speak to—I mean, I would imagine if you went to any budget hearing at a local level, at a city or municipality level or at the state level, if earthquake preparation and resiliency was even on the budget document, it would be on the last page on the last line, because there are so many other things obviously that are pulling on our resources and our attention. So it makes me wonder how much—and I think you have spoken to this a little bit—but the opportunity to piggyback the kinds of things you want to see done onto other kinds of initiatives that are out there that have greater priority in the minds of planners and budgeters and all the rest of it, so that you can come along with a little bit of leverage and not so much add a cost: say, well, as long as you are doing X, Y and Z, why not add this into the mix? And that can go to codes and building standards and so forth. But it also could go particularly well with community resiliency planning, and I wonder if you could speak to that and maybe throw in whether green building codes and sustainable building codes are ones where there can be some added elements with respect to resiliency and so forth.

Mr. MULLEN. I will tell you that on the West Coast, there are significant discussions taking place in local communities about earthquakes and tsunami threats and measures that should be taken. One of the things we haven’t really talked about is the importance of the general public understanding not only the risk they face but the measures they can take to protect themselves. I am very enthusiastic about getting a warning about something that might be coming; the tsunami warning we got a few weeks ago really helped us. But for the type of events, the no-notice events, that we would deal with in the central Puget Sound or in Oregon or on the coast, people are not going to get a lot of warning for an earthquake. One of the things that we need to do is make sure people are prepared to take the protective steps that they need immediately. They need to be able to drop, cover, and hold. They need to know that they need to have some resources for themselves. And on the coast, we have been working hard with the communities about their evacuation programs, knowing what it means to move quickly. The ground motion in an earthquake that is right off our coast is your signal. We also have an elaborate system of warning systems that we can activate to tell people to move to high ground. The difficulty we have, the challenge that communities have as they prepare with us, is that there is not a vertical evacuation site necessarily readily available to every community, and so we have been trying to plan for the type of vertical evacuation structure that would be necessary on the coast in the Port of Los Angeles or Long Beach or Ilwaco, where those folks can get to a place of safety which may not be the warmest, driest place but will at least be above any kind of potential wave. That is an important step. There is no such structure right now, but the communities are planning for it. I think the key to this whole thing that you are getting at, in terms of where people are, and I would not hazard a guess about the scale because I would just be making something up. I will tell you, if you educate people about the risks that they face and you level with people about what they can do to protect themselves and their families, whether it is the average citizen, someone running a business, the emergency management community, or the local elected officials, you begin to generate the kind of interest that will get people looking at this as another issue that they have to deal with and move it up on that committee agenda. The national-level exercise I spoke of in my testimony is an attempt in the Midwest, in eight Midwestern states, to begin to educate people at the same time that we are determining whether our doctrines and plans are going to work for us or not. That will be an extremely challenging exercise. We expect failure to occur because we want to find out what our condition is. So we are very eager to find out where we are weak, where we have got strengths, and make sure we capitalize on the strengths and shore up the weaknesses.

References

Comerio, M. C. 2000. Paying for the Next Big One: Our system for financing recovery from natural disasters is in shambles. Issues in Science & Technology. National Academy of Sciences.

Rowshandel, B., et al. 2003. Estimation of Future Earthquake Losses in California. California Geological Survey.

EERI (Earthquake Engineering Research Institute). 1996. Scenario for a Magnitude 7.0 Earthquake on the Hayward Fault. Oakland, Calif.: EERI.

Grossi, P., et al. 2013. 1868 Hayward Earthquake: 145-year retrospective. Risk Management Solutions.

Lin, Rong-Gong II. May 5, 2016. San Andreas Fault ‘locked, loaded and ready to roll’ with big quake, expert says. Los Angeles Times.

Lesle, T. 2014. Doomsday 4: A Massive Quake Could Be Only the Beginning of the Bay Area’s Woes. Cal Alumni Association, UC Berkeley.

NRC. 2011. National Earthquake Resilience: Research, Implementation, and Outreach. National Research Council.

OTA (Office of Technology Assessment). 1990. Physical Vulnerability of Electric System to Natural Disasters and Sabotage. OTA-E-453. Washington, D.C.: U.S. Government Printing Office.

May, P., and W. Williams. 1986. Disaster Policy Implementation: Managing Programs Under Shared Governance. New York: Plenum Press.

Palm, R., and M. Hodgson. 1992. After a California Earthquake: Attitude and Behavior Change. Chicago, Ill.: University of Chicago Press.

Perkins, J., et al. 1999. Preventing the Nightmare. Oakland, Calif.: Association of Bay Area Governments.

Perkins, J., et al. 1996. Shaken Awake. Oakland, Calif.: Association of Bay Area Governments.

Platt, R. H. 1999. Disasters and Democracy: The Politics of Extreme Natural Events. Washington, D.C.: Island Press.

USGS. 2008. The ShakeOut Scenario. United States Geological Survey. Report 2008-1150.


The U.S. Military on Peak Oil and Climate Change

Preface. I find that of all the parts of the federal government, the military is the most realistic about the implications of Peak Oil and Climate Change. The Department of Defense is also the largest consumer of energy in the federal government, spending about $20 billion on energy in 2011; within the military, the Air Force consumes the most energy, $10 billion (84% liquid fuel, 12% electricity). DuPont consumes as much energy as the Department of Defense, so the military is not the only mega-consumer of petroleum (NRC 2013).

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

CNA. May 2009. Powering America’s Defense: Energy and the Risks to National Security. Center for Naval Analyses. 74 pages.

[ Excerpts from this document follow ]

The destabilizing nature of increasingly scarce energy resources, the impacts of rising energy demand, and the impacts of climate change all are likely to increasingly drive military missions in this century.

GENERAL CHARLES F. “CHUCK” WALD, USAF (RET.) Former Deputy Commander, Headquarters U.S. European Command (USEUCOM); Chairman, CNA MAB

Retired Air Force General Chuck Wald wants to see major changes in how America produces and uses energy. He wants carbon emissions reduced to help stave off the destabilizing effects of climate change.

“We’ve always had to deal with unpredictable and diverse threats,” Gen. Wald said. “They’ve always been hard to judge, hard to gauge. Things that may seem innocuous become important. Things that seem small become big. Things that are far away can be felt close to home. Take the pirates off the African coast. To me, it’s surprising that pirates, today, would cause so much havoc. It’s a threat that comes out of nowhere, and it becomes a dangerous situation.

“I think climate change will give us more of these threats that come out of nowhere. It will be harder to predict them. A stable global climate is what shaped our civilizations. An unstable climate, which is what we’re creating now with global warming, will make for unstable civilizations. It will involve more surprises. It will involve more people needing to move or make huge changes in their lives. It pushes us into a period of nonlinear change. That is hugely destabilizing.

“Our hands are tied in many cases because we need something that others have. We need their oil.”

He gives another reason for major changes in our energy policy: He wants to reduce the pressure on our military.

“My perception is that the world, in a general sense, has assumed the U.S. would ensure the flow of oil around the world,” Gen. Wald said. “It goes back to the Carter Doctrine. I remember seeing the picture of the five presidents in the Oval Office. [He referred to a January photo, taken just before President Obama assumed office.] Most people would not guess it was Jimmy Carter who said the U.S. would protect the flow of Persian Gulf oil by any means necessary. But he did. He recognized it as a vital strategic resource.

“And since that time, as global demand has grown, we see oil used more and more often as a tool by foreign leaders. And that shapes where we send our military. You look at the amount of time we spend engaged, in one way or another, with oil producing countries, and it’s staggering. Hugo Chavez in Venezuela gets a lot of our attention because he has a lot of oil. We spend a lot of money and a lot of time focused on him, and on others like him.”

Gen. Wald cautions against simplistic responses to the challenge of energy dependency.

“The problem is dependence, and by that I mean our hands are tied in many cases because we need something that others have. We need their oil. But the solution isn’t really independence. We’re not going to become truly independent of anything. None of this is that simple. Reaching for independence can lead us to unilateralism or isolationism, and neither of those would be good for the U.S. The answer involves a sort of interdependence. We need a diversity of supply, for us and for everybody. We need clean fuels that are affordable and readily available, to us and to everybody. That’s not independence. It might even be considered a form of dependency, but we’d be dependent on each other, not on fossil fuels.”

Many of our overseas deployments were defined… by the strategic decision to ensure the free flow of oil to the U.S. and our allies.

VICE ADMIRAL RICHARD H. TRULY, USN (RET.) Former NASA Administrator, Shuttle Astronaut and the first Commander of the Naval Space Command

On DoD’s Efficiency Needs

Having served as commander of the space shuttle, retired Vice Admiral Richard Truly has traveled great distances on a single tank of fuel. His views on energy, however, are shaped by his time as Director of the National Renewable Energy Laboratory, and by a clear sense of how America’s energy choices affect troops on the ground. He believes the fastest gains for the U.S. military will come from a focus on energy efficiency.

This issue “is well recognized by a lot of the troops. They’ve seen friends getting hurt because of poor energy choices we’ve made in the past.”

“Efficiency is the cheapest way to make traction,” Adm. Truly said. “There’s a thousand different ways for the military to take positive action. And these are things that can help them from a war-fighter’s point of view and also make things cheaper in the long run.

“You can see the need by what we’ve done in Iraq and Afghanistan on logistics,” he said. “We’ve put inefficient systems very deep into these regions. And as a result, we end up with long lines of fuel trucks driving in. And we have to protect those fuel trucks with soldiers and with other vehicles.”

Truly sees key obstacles in the way of change. “The Defense Department is the single largest fuel user in the country, but if you compare it to the fuel used by the American public, it’s a piker,” Adm. Truly said. “When you think of the companies that make heavy vehicles, DoD is an interesting customer to them, but it’s not how they make their money. These companies are in the business of selling large numbers of commercial vehicles. So even if our military wants a new semi with a heavy-duty fuel-efficient diesel engine, it’s not likely to happen unless there is enough interest from other sectors to justify mass production. The real demand, if it exists, comes from the other 99 percent of users. That’s the rest of us. The real big market is the American people, and it’s their attitude that needs to change.”

GENERAL PAUL J. KERN, USA (RET.) Former Commanding General, U.S. Army Materiel Command

On the Vulnerability of Energy Inefficiency

In 1991, General Paul Kern commanded the Second Brigade of the 24th Infantry Division in its advance toward Baghdad—a sweeping left hook around Kuwait and up the Euphrates River Valley. It involved moving 5,000 people, plus materiel support, across 150 kilometers of desert. The route covered more ground than the Red Ball Express, which moved materiel across the Western European front in World War II.

“As we considered the route and began planning, our biggest concern was not our ability to fight the Iraqis; it was keeping ourselves from running out of fuel,” Gen. Kern said. “We also made a decision to never let our tanks get below half full, because we didn’t want to refuel in the middle of a fight.”

Meeting this commitment, given the fuel inefficiency of the Abrams tank, required stopping every two and a half hours. Fueling was done with 2,250-gallon HEMTT fuel tankers, which in turn were refueled by 5,000-gallon line-haul tankers (similar to those seen on U.S. highways).

“We set up and moved out in a tactical configuration, and were ready to fight whenever necessary,” Gen. Kern said. “To refuel, we would stop by battalions and companies. As we advanced, we laid out a system with roughly 15 stations for refueling. This was occurring almost continuously. We did it at night in a blinding sandstorm— having rehearsed it was key.”

The vulnerability of these slow-moving, fuel-intense supply lines has made Gen. Kern a strong advocate for increasing fuel efficiency in military operations. “The point of all this is that the logistics demands for fuel are so significant. They drive tactical planning. They determine how you fight. More efficiency can give you more options. That’s what you want as a commander.”

Gen. Kern used a different example—the 2003 northeast power outage, when 50 million people lost electric power—to highlight another energy impact on military operations. “I was running the Army Materiel Command,” Gen. Kern said. “We had a forward operation in Afghanistan, which would forward all the requisitions back here. They had a generator and a satellite radio to talk, but when the outage hit here in the U.S., they had no one to talk to. We quickly came up with back-up plans, but it showed me the vulnerability of the infrastructure here to support deployments.

“In some cases, the need to communicate with supply depots is day-to-day. The Afghan operation then was very fragile. Access was very important. Everything was getting flown in, and because you couldn’t get a lot in with each trip, we wanted a continuous flow. That’s a factor in agility—if you have less materiel on the ground, you can be more agile. But with the limited supplies, you do want to be in constant contact. You want that continuous flow. When the power goes out here, or if we have a lengthy collapse of the grid, that flow of materiel affects our troops in important ways.”

Gen. Kern said agility (and continuous communications) will be increasingly important.

“If you think of humanitarian relief, you don’t know what the community needs. You can’t know that in advance, so you have to be agile. The same is true with asymmetrical threats—you don’t know what you’ll face. You build strong communications networks to help you respond quickly—that’s the planning you can do in advance. But these networks depend, for the most part, on our power grids. That’s a vulnerability we need to address.”

GENERAL GORDON R. SULLIVAN, USA (RET.) Former Chief of Staff, U.S. Army; Former Chairman of the CNA MAB

On the Connections Between Energy, Climate, and Security

Former U.S. Army Chief of Staff General Gordon R. Sullivan served as chairman of the Military Advisory Board that released National Security and the Threat of Climate Change. He started that process with little connection to the issue of climate change, but the briefings have stayed with him. He keeps reaching out for new information on the topic.

“What we have learned from the most recent reports is that climate change is occurring at a much faster pace than the scientists previously thought it could,” Gen. Sullivan said. “The Arctic is a case-in-point. Two years ago, scientists were reporting that the Arctic could be ice-free by 2040. Now, the scientists are telling us that it could happen within just a few years. The acceleration of the changes in the Arctic is stunning. “The climate trends continue to suggest the globe is changing in profound ways,” Gen. Sullivan said. He noted that these lead indicators should be enough to prompt national and global responses to climate change, and referenced military training to explain why. “Military professionals are accustomed to making decisions during times of uncertainty. We were trained to make decisions in situations defined by ambiguous information and little concrete knowledge of the enemy intent. We based our decisions on trends, experience, and judgment. Even if you don’t have complete information, you still need to take action. Waiting for 100 percent certainty during a crisis can be disastrous.” Gen. Sullivan said the current economic crisis is not a reason to postpone climate solutions.

“There is a relationship between the major challenges we’re facing,” Gen. Sullivan said. “Energy, security, economics, climate change—these things are connected. And the extent to which these things really do affect one another is becoming more apparent. It’s a system of systems. It’s very complex, and we need to think of it that way.

“And the solutions will need to be connected. It will take the industrialized nations of the world to band together to demonstrate leadership and a willingness to change—not only to solve the economic problems we’re having, but to address the issues related to global climate change. We need to look for solutions to one problem that can be helpful in solving other problems. And here, I’d say the U.S. has a responsibility to lead. If we don’t make changes, then others won’t.”

Gen. Sullivan tends to keep his discussions of climate change focused on the national security aspects. But he occasionally talks about it from a different perspective, and describes some of the projected changes expected to hit his native New England if aggressive measures are not embraced.

“I have images of New England that stick with me,” Gen. Sullivan said. “Tapping sugar maples in winter. Fishing off the Cape. These were images I held close when I was stationed overseas. They were important to me then. And they are important to me now when I think of how we’ll respond to climate change. Those treasures are at risk. There’s a lot at stake.”

GENERAL CHARLES G. BOYD, USAF (RET.) Former Deputy Commander-in-Chief, Headquarters U.S. European Command (USEUCOM)

On Climate Change and Human Migrations

Retired Air Force General Chuck Boyd, former Deputy Commander-in-Chief of U.S. Forces in Europe, sees the effects of climate change in a particular context, one he came to understand while serving as executive director of the U.S. Commission on National Security/ 21st Century (commonly known as the Hart-Rudman Commission). The Commission’s reports, issued in advance of the 9/11 terrorist attacks, predicted a direct attack on the homeland, noted that the risks of such an attack included responses that could undermine U.S. global leadership, and outlined preventative and responsive measures. He explains this context by telling the story of a dinner at the home of the Japanese ambassador to the United Nations.

“When I was at EUCOM, I formed a friendship with the UN High Commissioner for Refugees, Madame Sadako Ogata,” Gen. Boyd said. “I was seated next to her at this dinner. When I told her about the project, she said you cannot talk about security without talking about the movement of people. She said we had to come to Geneva to talk with her about it. “She’s this little bitty person with a moral presence that’s overwhelming,” said Gen. Boyd, after a pause. “She’s a bit like Mother Teresa in that way. So we went—we went to Geneva.

“We spent the day with her and a few members of her staff poring over a map of the world,” he said. “We looked at the causes of dislocations—ethnic, national and religious fragmentation mostly. And we looked at the consequences. It was very clear that vast numbers of conflicts were being caused by these dislocations. She was very strategic in her thinking. And she made the point that this phenomenon—the movements of people—would be the single biggest cause of conflicts in the 21st century.”

For Gen. Boyd, climate change is an overlay to the map of dislocations and conflicts provided by Madame Ogata. “When you add in some of the effects of climate change —the disruption of agricultural production patterns, the disruption of water availability—it’s a formula for aggravating, in a dramatic way, the problem and consequences of large scale dislocation. The more I think about it, the more I believe it’s one of the major threats of climate change. And it’s not well understood.

“As water availability changes, people who need water will fight with people who have water and don’t want to share it. It’s the same with agriculture. When people move away from areas that can’t sustain life anymore, or that can’t sustain their standard of living, they move to areas where they are not welcome. People will fight these incursions. Their interaction with different cultures causes tension. It’s very much like the tension we see with religious fragmentation. It’s the same pattern of consequences Madame Ogata was describing, only on a larger scale. This is about instability. It is a destabilizing activity, with murderous consequences.”

VICE ADMIRAL DENNIS V. MCGINN, USN (RET.) Former Deputy Chief of Naval Operations for Warfare Requirements and Programs

On Supporting Our Troops

Resource scarcity is a key source of conflict, especially in developing regions of the world. Without substantial change in global energy choices, Vice Admiral Dennis McGinn sees a future of potential widespread conflict.

“Increasing demand for, and dwindling supplies of, fossil fuels will lead to conflict. In addition, the effects of global climate change will pose serious threats to water supplies and agricultural production, leading to intense competition for essentials,” said the former commander of the U.S. Third Fleet, and deputy chief of naval operations, warfare requirements and programs. “The U.S. cannot assume that we will be untouched by these conflicts. We have to understand how these conflicts could play out, and prepare for them.” With an issue as big as climate change, Adm. McGinn said, “You’re either part of the solution or part of the problem. And in this case, the U.S. has to be more than just part of the solution; we need to be a big part of it. We need to be a leader. If we are not, our credibility and our moral authority are diminished. Our political and military relationships are undermined by not walking the walk.”

He believes these issues of credibility have a direct impact on our military. It’s one of many reasons why he sees climate change and energy security as inextricably linked national security threats. “We have less than ten years to change our fossil fuel dependency course in significant ways. Our nation’s security depends on the swift, serious and thoughtful response to the inter-linked challenges of energy security and climate change. Our elected leaders and, most importantly, the American people should realize this set of challenges isn’t going away. We cannot continue business as usual. Embedded in these challenges are great opportunities to change the way we use energy and the places from which we get our energy. And the good news is that we can meet these challenges in ways that grow our economy and increase our quality of life.”

Adm. McGinn is clear about the important role to be played by the American public. “Our national security as a democracy is directly affected by our energy choices as individual citizens,” Adm. McGinn said. “The choices we make, however small they seem, can help reduce our dependence on oil and have a beneficial effect on our global climate.” Individually, it may be hard to see, but collectively we can all make a tangible contribution to our national security. One way of thinking about this is that our wise energy choices can provide genuine support for our troops. “A yellow ribbon on a car or truck is a wonderful message of symbolic support for our troops,” said Adm. McGinn. “I’d like to see the American people take it several steps further. If you say a yellow ribbon is the ‘talk,’ then being energy efficient is the ‘walk’. A yellow ribbon on a big, gas-guzzling SUV is a mixed message. We need to make better energy choices in our homes, businesses and transportation, as well as to support our leaders in making policies that change the way we develop and use energy. If we Americans truly embrace this idea, it is a triple win: it reduces our dependence on foreign oil, it reduces our impact on the climate and it makes our nation much more secure.”

Executive Summary

Our dependence on foreign oil

  • reduces our international leverage
  • places our troops in dangerous global regions
  • funds nations and individuals who wish us harm, and who have attacked our troops and cost lives
  • weakens our economy, which is critical to national security

The market for fossil fuels will be shaped by finite supplies and increasing demand. Continuing our heavy reliance on these fuels is a security risk.

The Electric Grid

Our domestic electrical system is also a significant national security risk: many of our large military installations rely on power from a fragile electrical grid that is vulnerable to malicious attacks and to interruptions caused by natural disasters, leaving those installations and their critical infrastructure unnecessarily exposed to incident, whether deliberate or accidental.

Climate change

Destabilization driven by ongoing climate change has the potential to add significantly to the mission burden of the U.S. military in fragile regions of the world.

The effects of global warming will require adaptive planning by our military. The effects of climate policies will require new fuels and energy systems.

A business as usual approach to energy security poses an unacceptably high threat level from a series of converging risks. Due to the destabilizing nature of increasingly scarce resources, the impacts of energy demand and climate change could increasingly drive military missions in this century.

Economy

Diversifying energy sources and moving away from fossil fuels where possible is critical to future energy security. While the current financial crisis provides enormous pressure to delay addressing these critical energy challenges, the MAB warns against delay. The economic risks of this energy posture are also security risks.

The U.S. consumes 25% of world oil production, yet controls less than 3%

And the supply is getting increasingly tight. Oil is traded on a world market, and the lack of excess global production makes that market volatile and vulnerable to manipulation by those who control the largest shares. Reliance on fossil fuels, and the impact it has on other economic instruments, affects our national security, largely because nations with strong economies tend to have the upper hand in foreign policy and global leadership. As economic cycles ebb and flow, the volatile cycle of fuel prices will become sharper and shorter.

What the military wants

  • First crack at trying out new technologies and vehicles, because the Department of Defense (DoD) is the nation’s single largest consumer of energy. DoD should also try to use less energy via distributed and renewable energy, and use low-carbon liquid fuels.
  • The military would like to see personal transport electrified to make more liquid fuels available for aircraft and the armed services.
  • Americans should be called upon again to use less fuel (to free up fuel for us, the military) like they did in WW II, when they also grew food locally in Victory Gardens, and contributed in other ways to the war effort.

These steps could be described as sacrifices, frugality, lifestyle changes—the wording depends on the era and one’s perspective. Whatever the terminology, these actions made the totality of America’s war effort more successful. They shortened the war and saved lives.

Energy for America’s transport sector depends almost wholly on the refined products of a single material: crude oil. Energy for homes, businesses, and civic institutions relies heavily on an antiquated and fragile transmission grid to deliver electricity. Both systems—transport and electricity—are inefficient. This assessment applies to our military’s use of energy as well.

Our defense systems, including our domestic military installations, are dangerously oil dependent, wasteful, and weakened by a fragile electrical grid.

In our view, America’s energy posture constitutes a serious and urgent threat to national security—militarily, diplomatically, and economically. This vulnerability is exploitable by those who wish to do us harm. America’s current energy posture has resulted in the following national security risks:

  • U.S. dependence on oil weakens international leverage, undermines foreign policy objectives, and entangles America with unstable or hostile regimes.
  • Inefficient use and over-reliance on oil burdens the military, undermines combat effectiveness, and exacts a huge price tag—in dollars and lives.
  • U.S. dependence on fossil fuels undermines economic stability, which is critical to national security.
  • A fragile domestic electricity grid makes our domestic military installations, and their critical infrastructure, unnecessarily vulnerable to incident, whether deliberate or accidental.

Dependence on oil constitutes a threat to U.S. national security. The United States consumes 25% of the world’s oil production, yet controls less than 3% of an increasingly tight supply. 16 of the top 25 oil-producing companies are either majority or wholly state-controlled. These oil reserves can give extraordinary leverage to countries that may otherwise have little; some are using that power to harm Western governments and their values and policies.

Another troubling aspect of our oil addiction is the resulting transfer of wealth. American and overall world demand for oil puts large sums in the hands of a small group of nations; those sums, in the hands of certain governments or individuals, can be used to great harm. Iran’s oil exports, which reached an estimated $77 billion in 2008, provide 40 percent of the funding for a government that the U.S. State Department says is the world’s “most active state sponsor of terrorism”. Iran provides materiel to Hezbollah, supports insurgents in Iraq, and is pursuing a nuclear weapons program.

Saudi Arabian private individuals and organizations, enriched by the country’s estimated $301 billion in 2008 oil revenues, reportedly fund organizations that promote violent extremism [18]. The sad irony is that this indirectly funds our adversaries. As former CIA Director James Woolsey said, “This is the first time since the Civil War that we’ve financed both sides of a conflict”.

America’s strategic leadership, and the actions of our allies, can be greatly compromised by a need (or perceived need) to avoid antagonizing some critical oil suppliers. This has become increasingly obvious since the early 1970s, when the first OPEC embargo quadrupled oil prices, contributed to an inflationary spiral, and generated tensions across the Atlantic as European nations sought to distance themselves from U.S. policies not favored by oil-exporting nations.

Oil has been the central factor in the mutually supportive relationship between the U.S. and Saudi Arabia. While the Saudis have been key allies in the region since World War II and serve as one of the nation’s most critical oil suppliers, Saudi Arabia is also one of the most repressive governments in the world.

Sudan provides another example: in an effort to pressure the Sudanese government to stop the genocide occurring in Darfur, the U.S. and most of Europe have limited or halted investment in Sudan. However, China and Malaysia have continued to make investments worth billions of dollars (mainly in the oil industry) while actively campaigning against international sanctions against the country. Sudan, which depends upon oil for 96% of its export revenues, exports the vast majority of its oil to China and provides China with nearly 8% of its oil imports.

While oil can enable some nations to flex their muscles, it can also have a destabilizing effect on their economic, social, and political infrastructure.

When the natural resource that caused the Dutch disease goes from boom to bust (as has been the case with oil), the economy and social fabric of the afflicted nation can be left in tatters.

Nigeria, which accounts for nearly 9 percent of U.S. oil imports, has experienced a particularly high level of economic and civil unrest related to its oil.

In addition to Dutch disease, Nigeria also shows another corrosive impact of oil. The large oil trade (and unequal distribution of its profits) has fueled the Movement for the Emancipation of the Niger Delta (MEND), an armed group that stages attacks against the foreign multinational oil companies and the Nigerian government. In one of its most serious actions, in September 2008, the MEND retaliated against a strike by the Nigerian military by attacking pipelines, flow stations, and oil facilities; it also took 27 oil workers hostage and killed 29 Nigerian soldiers. The result was a decrease in oil production of 115,000 barrels per day over the week of attacks. In the years preceding this attack, instability caused by the MEND decreased oil production in the Niger Delta by 20%.

The MEND is but one example of a group operating in an unstable region that targets oil and its infrastructure for its strategic, political, military, and economic consequences. In Iraq, by 2007, the effects of the war and constant harassment of the oil infrastructure by insurgent groups and criminal smuggling elements had reduced oil production capacity in the northern fields by an estimated 700,000 barrels per day compared to pre-2003 levels.

In 2006, al Qaeda in the Arabian Peninsula carried out a suicide bombing against the Abqaiq oil production facility in Saudi Arabia, which handles about two-thirds of the country’s oil production. Fortunately, due largely to the intense focus of the Saudis on hardening their processing facilities (to which they devote billions of dollars each year), the attack was suppressed before the bombers could penetrate the second level of security gates. However, both the Saudi level of protection and al Qaeda’s selection of the oil infrastructure as a target signify the strategic and economic value of such facilities.

These attacks have demonstrated the vulnerability of oil infrastructure to attack; a series of well-coordinated attacks on oil production and distribution facilities could have serious negative consequences on the global economy. Even these small-scale and mostly unsuccessful attacks have sent price surges through the world oil market. In the U.S., dependence on foreign oil has had a marked impact on national security policies.

Much of America’s foreign and defense policies have been defined, for nearly three decades, by what came to be known as the Carter Doctrine. In his State of the Union address in January 1980, not long after the Soviet Union invaded Afghanistan, President Jimmy Carter made it clear that the Soviets had strayed into a region that held “great strategic importance”. He said the Soviet Union’s attempt to consolidate a position so close to the Straits of Hormuz posed “a grave threat to the free movement of Middle East oil.” He then made a declaration that went beyond a condemnation of the Soviet invasion: “An attempt by any outside force to gain control of the Persian Gulf region will be regarded as an assault on the vital interests of the United States of America, and such an assault will be repelled by any means necessary, including military force.” When President Carter made his declaration, the U.S. imported roughly 40 percent of its oil.

That percentage has since doubled. In fact, due to the increase in U.S. demand, the total annual volume of oil imported into the U.S. has tripled since the early 1980s. As a result, the stakes are higher, and the U.S. has accordingly dedicated an enormous military presence to ensure the unimpeded flow of oil, in the Persian Gulf and all across the globe. Our Commanders-in-Chief chose this mission not because they want America to be the world’s oil police; they did so because America’s thirst for oil leaves little choice.

Supply lines delivering fuel and other supplies to forward operating bases can stretch over great distances, often requiring permission for overland transport through one or more neighboring countries. As these lines grow longer, and as convoys traverse hotly contested territory, they become attractive targets to enemy forces. A Defense Science Board (DSB) task force identified this movement of fuel from the point of commercial procurement to the point of use by operational systems and forces as a grave energy risk for DoD. Ensuring convoy safety and fuel delivery requires a tremendous show of force. Today, armored vehicles, helicopters, and fixed-wing fighter aircraft protect the movement of fuel and other supplies. This is an extraordinary commitment of combat resources, and it offers an instructive glimpse of the true costs of energy inefficiency and reliance on oil.

Let us be clear here: logistics operations and their associated vulnerabilities are nothing new to militaries; they have always been a military challenge. Even if the military did not need fuel for its operations, some amount of logistics supply lines would still be required to ensure our forces have the supplies they need to complete their missions. However, the fuel intensity of today’s combat missions adds to the costs and risks. As in-theater demand increases, more combat troops and assets must divert to protect fuel convoys rather than directly engage enemy combatants. This reduces our combat effectiveness, but there is no viable alternative: our troops need fuel to fight.

The broad battle space in their wake required heavy security: the supply convoys bringing new supplies of fuel were constantly under threat of attack. The security measures necessary to defend this vast space slowed American movements and reduced the options available to Army and Marine field commanders. It prompted a clear challenge from Marine Lieutenant General James Mattis: “Unleash us from the tether of fuel” [36]. Fuel convoys remain exposed as they crawl along dangerous mountainous routes.

Combat. Forward operating bases, the staging grounds for direct military engagement, contain communications infrastructure, living quarters, administrative areas, eating facilities and industrial activities necessary to maintain combat systems. All of these require electricity. The electricity used to power these facilities is provided by towed-in generators fueled by JP-8, the same fuel used by combat systems. The fuel used by these generators comes from the same vulnerable supply chain that provides liquid fuel for motorized vehicles.

A study of the 2003 I Marine Expeditionary Force (I MEF) in Iraq found that only 10 percent of its ground fuel use was for the heavy vehicles that deliver lethal force, including M1A1 tanks, armored vehicles, and assault amphibious vehicles; the other 90 percent was consumed by vehicles, including Humvees, 7-ton trucks, and logistics vehicles, that deliver and protect the fuel and forces. It is the antithesis of efficiency: only a fraction of the fuel is used to deliver lethal force. A different study showed that, of the U.S. Army’s top ten battlefield fuel users, only two (numbers five and ten on the list) are combat platforms; four out of the top ten are trucks, many of them used to transport liquid fuel and electric generating equipment [39]. The use of electric power extends beyond the battlefield bases: an infantry soldier on a 72-hour mission in Afghanistan today carries more than 26 pounds of batteries, charged by these generators. The weight of the packs carried by these troops (of which 20 to 25% can be batteries) hinders their operational capability by limiting their maneuverability and causing musculoskeletal injuries. Soldiers and marines may not be tethered directly to fuel lines, but they are weighed down by electrical and battery systems that are dangerously inefficient.
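To put those figures into perspective, here is a minimal back-of-envelope sketch using only the numbers quoted above; the arithmetic is mine, not a calculation that appears in the report:

    # Back-of-envelope arithmetic from the fuel and battery figures quoted above.
    fuel_share_lethal = 0.10       # I MEF ground fuel burned by tanks, armor, and amphibious vehicles
    fuel_share_support = 0.90      # fuel burned by the vehicles that deliver and protect fuel and forces

    battery_weight_lb = 26                 # batteries carried on a 72-hour mission
    battery_pack_fraction = (0.20, 0.25)   # batteries as a share of total pack weight

    # Gallons of "support" fuel burned per gallon of fuel delivered to a lethal platform
    support_per_lethal = fuel_share_support / fuel_share_lethal

    # Implied total pack weight if 26 lb of batteries is 20-25% of the load
    pack_low = battery_weight_lb / battery_pack_fraction[1]    # ~104 lb
    pack_high = battery_weight_lb / battery_pack_fraction[0]   # ~130 lb

    print(f"Support fuel per gallon of 'lethal' fuel: {support_per_lethal:.0f} gallons")
    print(f"Implied total pack weight: {pack_low:.0f} to {pack_high:.0f} pounds")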

The military uses fuel for more than mobility. In fact, one of the most significant consumers of fuel at forward operating bases in operations in Afghanistan and Iraq is not trucks or combat systems; it is electric generators.

In 2006, while commanding troops in Iraq’s Al Anbar province, Marine Corps Major General Richard Zilmer submitted an urgent request for alternative energy systems because American supply lines were vulnerable to insurgent attack by ambush or roadside bombs. “Reducing the military’s dependence on fuel for power generation could reduce the number of road-bound convoys,” he said, adding that the absence of alternative energy systems means “personnel loss rates are likely to continue at their current rate.”

In addition to burdening our military forces, over-reliance on oil exacts a huge monetary cost, both for our economy and our military. The fluctuating and volatile cost of oil greatly complicates the budgeting process within the Department: just a $10 change in the per-barrel cost of oil translates to a $1.3 billion change to the Pentagon’s energy costs. Over-allocating funds to cover energy costs comes with a high opportunity cost as other important functions are under-funded; an unexpected increase results in funds being transferred from other areas within the Department, causing significant disruptions to training, procurement and other essential functions. In addition to buying the fuel, the U.S. devotes enormous resources to ensure the military receives the fuel it needs to operate. A large component of the logistics planning and resources are devoted to buying, operating, training, and maintaining logistics assets for delivering fuel to the battlefield, and these delivery costs exceed the cost of buying the commodity. For example, each gallon of fuel delivered to an aircraft in-flight costs the Air Force roughly $42; for ground forces, the true cost of delivering fuel to the battlefield, while very scenario dependent, ranges from $15 per gallon to hundreds of dollars per gallon. A more realistic assessment of what is called the “fully burdened price of fuel” would consider the costs attributable to oil in protecting sea lanes, operating certain military bases and maintaining high levels of forward presence. Buying oil is expensive, but the cost of using it in the battlespace is far higher.
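A quick sanity check on that budget-sensitivity figure (again my own arithmetic, not the report’s; I am assuming the $1.3 billion swing refers to an annual cost):

    # Implied DoD petroleum appetite from the $10/barrel -> $1.3 billion sensitivity above.
    price_swing_per_bbl = 10        # dollars per barrel
    budget_swing = 1.3e9            # dollars (assumed to be per year)
    gallons_per_bbl = 42

    implied_barrels = budget_swing / price_swing_per_bbl       # ~130 million barrels
    implied_gallons = implied_barrels * gallons_per_bbl        # ~5.5 billion gallons

    print(f"Implied DoD petroleum use: {implied_barrels / 1e6:.0f} million barrels per year")
    print(f"                         ~{implied_gallons / 1e9:.1f} billion gallons per year")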

The volatile fossil fuel markets have a major impact on our national economy, which in turn affects national security. Upward spikes in energy prices, tied to the wild swings now common in the world’s fossil fuel markets, constrict the economy in the short term and undermine strategic planning in the long term. Volatility is not limited to the oil market: the nation’s economy is also wrenched by the increasingly sharp swings in the price of natural gas and coal. This volatility wreaks havoc with government revenue projections, making the task of addressing strategic and systemic national security problems much more challenging. It also makes it more difficult for companies to commit to the long-term investments needed to develop and deploy new energy technologies and upgrade major infrastructure.

A significant and long-lasting trade deficit can put us at a disadvantage in global economic competition. In 2008, our economy paid an average of $28.5 billion each month to buy foreign oil. This amount is expected to grow: while oil prices wax and wane periodically, in the long term, oil prices are trending upward. This transfer of wealth means America borrows heavily from the rest of the world, making the U.S. economically dependent.

We are also dependent economically on a global energy supply market increasingly susceptible to manipulation. In recent years, even the smallest incident overseas, such as a mere warning of a pipeline attack by the MEND in Nigeria, has caused stock markets to roil and oil prices to jump. Perhaps most worrisome in regard to the manipulation of the global oil trade are the critical chokepoints in the delivery system: 40 percent of the global seaborne oil trade moves through the Strait of Hormuz, 36 percent through the Strait of Malacca, and 10 percent through the Suez Canal. The economic leverage provided by the Strait of Hormuz has not been lost on Iran, which has employed the threat of closing down the shipping lane to prevent an attack on its nuclear program.

For the U.S., our economic might and easy access to natural resources have been important components of national strength, particularly over the last century. They have also allowed us to use economic aid and soft power mechanisms to retain order in fragile regions, thereby avoiding the need to use military power. When economies are troubled, domestic strife increases, prospects of instability increase, and international leverage diminishes. This is why the discussions of energy and economy have been joined, and why both are matters of national security.

At military installations across the country, a myriad of critical systems must be operational 24 hours a day, 365 days a year. They receive and analyze data to keep us safe from threats, they provide direction and support to combat troops, and stay ready to provide relief and recovery services when natural disasters strike or when someone attempts to attack our homeland. These installations are almost completely dependent on commercial electrical power delivered through the national electrical grid. When the DSB studied the 2003 blackout and the condition of the grid, they concluded it is “fragile and vulnerable… placing critical military and homeland defense missions at unacceptable risk of extended outage”.

As the resiliency of the grid continues to decline, the potential increases for an expanded and/or longer-duration outage from natural events as well as deliberate attack. The DSB noted that the military’s backup power is inadequately sized for its missions and military bases cannot easily store sufficient fuel supplies to cope with a lengthy or widespread outage. An extended outage could jeopardize ongoing missions in far-flung battle spaces for a variety of reasons:

  • The American military’s logistics chains operate a just-in-time delivery system familiar to many global businesses. If an aircraft breaks down in Iraq, parts may be immediately shipped from a supply depot in the U.S. If the depot loses power, personnel there may not fill the order for days, increasing the risk to the troops in harm’s way.
  • Data collected in combat zones are often analyzed at data centers in the U.S. In many cases, the information helps battlefield commanders plan their next moves. If the data centers lose power, the next military move can be delayed, or taken without essential information.
  • The loss of electrical power affects refineries, ports, repair depots, and other commercial or military centers that help assure the readiness of American armed forces.

When power is lost for lengthy periods, vulnerability to attack increases.

Destabilization driven by ongoing climate change has the potential to add significantly to the mission burden of the U.S. military in fragile regions of the world. In our view, confronting these converging risks is critical to ensuring America’s energy- secure future.

The demand for oil is expected to increase even as the supply becomes constrained. A 2007 Government Accountability Office (GAO) report on peak oil, which considered a wide range of studies on the topic, concluded that the peak in production is likely to occur sometime before 2040. While that 30-year time-frame may seem long to some, it is familiar to military planners, who routinely consider the 30- to 40-year life span of major weapon systems. According to the International Energy Agency (IEA), most countries outside of the Middle East have already reached, or will soon reach, the peak of their oil production. This includes the U.S., where oil production peaked in 1970.

Our 2007 report identified the national security risks associated with climate change. Chief among the report’s findings:

  • The NIA (National Intelligence Assessment) finds that climate change impacts—including food and water shortages, the spread of infectious disease, mass migrations, property damage and loss, and an increase in the intensity of extreme weather events—will increase the potential for conflict.
  • The impacts may threaten the domestic stability of nations in multiple regions, particularly as factions seek access to increasingly scarce water resources.
  • Projected impacts of climate change pose a serious threat to America’s national security.
  • Climate change acts as a threat multiplier for instability in some of the most volatile regions of the world.
  • Projected impacts of climate change will add to tensions even in stable regions of the world. Climate change, national security, and energy dependence are a related set of global challenges.

The NIA describes potential impacts on global regions. In describing the projected impacts in Africa, for example, it suggests that some rainfall-dependent crops may see yields reduced by up to 50 percent by 2020. In testimony before the U.S. Congress, Dr. Fingar said the newly established Africa Command “is likely to face extensive and novel operational requirements. Sub-Saharan African countries, if they are hard hit by climate impacts, will be more susceptible to worsening disease exposure. Food insecurity, for reasons both of shortages and affordability, will be a growing concern in Africa as well as other parts of the world. Without food aid, the region will likely face higher levels of instability, particularly violent ethnic clashes over land ownership.” This proliferation of conflicts could affect what Dr. Fingar described as the “smooth-functioning international system ensuring the flow of trade and market access to critical raw materials” that is a key component of security strategies for the U.S. and our allies. A growing number of humanitarian emergencies will strain the international community’s response capacity, and increase the pressure for greater involvement by the U.S. Dr. Fingar stated that “the demands of these potential humanitarian responses may significantly tax U.S. military transportation and support force structures, resulting in a strained readiness posture and decreased strategic depth for combat operations.” In addition, the NIA cites threats to homeland security, including severe storms originating in the Gulf of Mexico and disruptions to domestic infrastructure.

Admiral Blair, in his February 2009 testimony, referenced the NIA and described some of the potential impacts of energy dependency and climate change: “Rising energy prices increase the cost for consumers and the environment of industrial-scale agriculture and application of petrochemical fertilizers. A switch from use of arable land for food to fuel crops provides a limited solution and could exacerbate both the energy and food situations. Climatically, rainfall anomalies and constricted seasonal flows of snow and glacial melts are aggravating water scarcities, harming agriculture in many parts of the globe. Energy and climate dynamics also combine to amplify a number of other ills such as health problems, agricultural losses to pests, and storm damage. The greatest danger may arise from the convergence and interaction of many stresses simultaneously. Such a complex and unprecedented syndrome of problems could cause outright state failure, or weaken important pivotal states counted on to act as anchors of regional stability.”

Some of the many ways climate change will adversely affect our military’s ability to carry out its already challenging missions:

A changing Arctic forces a change in strategy. As the Arctic Ocean has become progressively more accessible, several nations are responding by posturing for resource claims, increasing military activity, expanding commercial ventures, and elevating the volume of international dialogue. Due to the melting ice, the U.S. is already reconsidering its Arctic strategy. The change in strategy will lead to a change in military intelligence, planning, and operations. The Arctic stakes are high: 22% of the world’s undiscovered energy reserves are projected to be in the region (including 13% of the world’s petroleum and 30% of natural gas).

Damage to and loss of strategic bases and critical infrastructure

As sea level rises, storm waves and storm surges become much more problematic. Riding in at a higher base level, they are much more likely to overflow coastal barriers and cause severe damage. Recent studies project that, by the end of the century, sea levels could rise by nearly 1 meter. A 1-meter rise in sea level would have dramatic consequences for U.S. installations across the globe.

Storm intensity affects readiness and capabilities. The projected increase in storm intensity can affect our ability to quickly deploy troops and materiel to distant theaters.

Increased conflict stretches the American military. In other sections, we have noted the likelihood of increased global conflicts, which in turn increases the likelihood that American military forces will be engaging in multiple theaters simultaneously. In addition, at the very same time, there may be increased demands for American-led humanitarian engagements in response to natural disasters exacerbated or caused by climate change.

The destabilizing nature of increasingly scarce energy resources, the impacts of rising energy demand, and the impacts of climate change all are likely to increasingly drive military missions in this century.

Many Americans recall World War II references to the Pacific Theater and European Theater. Climate change introduces the notion of a global theater; its impacts cannot be contained or managed regionally. It changes planning in fundamental ways. It forces us to make changes in this new, broader context.

Given the risks outlined earlier, diversifying our energy sources and moving away from fossil fuels where possible is critical to our future energy security.

Some energy choices could contradict future national climate goals and policies, which should lead us to avoid such energy options. Developing coal-to-liquid (CTL) fuels for the U.S. Air Force is a useful example.

Because of America’s extensive coal resources, turning coal into liquid aviation fuel is, on the surface, an attractive option to make the nation more energy independent.

However, unless cost-effective and technologically sound means of sequestering the resulting carbon emissions are developed, producing liquid fuel from coal would emit nearly twice as much carbon as the equivalent amount of conventional liquid fuel.

What does a new energy future look like? It will have a number of features, including:

Diversity. Sources like wind, solar, and geothermal power would produce substantially more of our nation’s electricity than they do today. Solar thermal facilities (which not only generate electricity during sunlight hours but also heat liquids that can power steam generators at night) offer a current example of how the intermittency of some renewable sources can be overcome.

Additional low carbon solutions, such as nuclear energy, will also be part of a diversified energy portfolio.

Stability. Because the sources of these renewable energy technologies are free and abundant—in the U.S. and in many regions around the world—they would bring stability to our economy. This is quite the opposite of the current crude oil, coal, and natural gas markets, which are highly unstable.

Smarter use of energy resources. The wide-scale adoption of “smart grid” technologies (such as advanced electricity meters that can indicate which household appliances are on and communicate that information back to the grid) would allow power to be used with maximum efficiency, help the grid heal itself after natural disasters and cyber attacks, and allow all sources of electricity to feed power into the grid.

Electrification of ground transport. Relying on transport vehicles powered largely with electricity derived from this low carbon sector, such as plug-in hybrids, would reduce America’s need for imported oil for use in transportation.

Bio-based mobility fuels. For mobility applications that are likely to require liquid fuels into the foreseeable future—including aviation and military operations—non-food-based biofuels would be employed that are made with materials and processes that do not tax productive farmlands. To ensure that domestically produced fuel does not need to be transported to theaters of military operations, these bio-based fuels would be designed to match the specifications of military fuels (such as JP-8). In the interim, significant gains in mobility efficiency could make liquid petroleum fuels more available and affordable to the military when or if it is needed.

A U.S. Department of Energy study indicated that 20% of America’s electrical supplies could come from wind power by 2030. Similar, but less aggressive, growth curves can be projected for utility-scale solar power generation. Google, which has experience in scaling new technologies, reports that the U.S. could generate nearly all of its electrical power from non-carbon sources by 2030. While renewable energy generating plants currently cost more than their fossil counterparts, renewable energy production is expected to become competitive with traditional electricity generation.

“Islanding some major bases is a great idea,” Magnus said. “You want to make sure that, in a natural or manmade disaster, the basic functions of an electrical grid can be conducted from a military installation. That’s a great idea. And a great challenge. And you can not only island, but be in a position where you can take energy from the grid when needed, and deliver energy back to the grid when you have a surplus. There will be tremendous resistance from the public utilities, so we need to find a way for everyone to benefit.

“It’s going to change the shorelines. It’s going to change the amount of snowmelt from mountains and glaciers. Some areas will experience increased rainfall, and some will experience increased drought. These are destabilizing events, even if they happen slowly. People in marginal economic areas will be hardest hit—and guess where we send our military? “The more instability increases, the more pressure there will be to use our military,” he said. “That’s the issue with climate change.

The U.S. is all about preventing big wars by managing instability. But as populations get more desperate, the likelihood of military conflicts will go up. We’ll have to cope with the ill effects of climate change.”

We can invest more heavily in technologies that may require more patience and risk than most traditional investors can tolerate. The Department can provide essential aid in moving important new energy systems through what venture capitalists call “the valley of death”: the period after prototyping and before fully developing the product to scale. DoD also excels at the combination of speed and scale, building a huge or complex system in a short period of time. This challenge to hit speed and scale is the same challenge facing developers of new energy technologies.

The Task Force has been pursuing a number of projects, including testing exterior spray foam to insulate temporary structures such as tents and containerized living units. Based on an estimated energy savings of 40 to 75%, Multi-National Force Iraq awarded a $95 million contract to insulate nine million square feet of temporary structures. The use of spray foam is estimated to have taken about 12 fuel transport trucks off the road every day in Iraq.
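A rough back-of-envelope on that contract, using only the numbers quoted above (my arithmetic, not the report’s):

    # Quick arithmetic on the spray-foam insulation contract quoted above.
    contract_cost = 95e6             # dollars
    insulated_area_sq_ft = 9e6       # square feet of temporary structures
    trucks_off_road_per_day = 12     # estimated fuel transport trucks avoided daily

    cost_per_sq_ft = contract_cost / insulated_area_sq_ft          # ~$10.56 per square foot
    truck_trips_avoided_per_year = trucks_off_road_per_day * 365   # ~4,380 truck trips

    print(f"Insulation cost: ${cost_per_sq_ft:.2f} per square foot")
    print(f"Fuel truck trips avoided: roughly {truck_trips_avoided_per_year:,} per year")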

Tinker and Robins Air Force Bases have worked with their neighboring utilities to install 50 to 80 MW combustion gas turbines with dual fuel capability that allow the bases to disconnect from the grid (that is, “island” from the grid) in the event of an emergency.

The Army is playing a role in providing an early market for the nascent electric vehicle industry. In January 2009, the Army announced the single largest acquisition of neighborhood electric vehicles (NEVs) [102]. By 2011, the Army will have acquired 4,000 NEVs, which cost nearly 60 percent less to operate than the gasoline-powered vehicles they will replace.

The U.S. Air Force has demonstrated national leadership in adopting renewable energy at their installations.

“Aircraft carriers or nuclear subs at a port like Norfolk are a real challenge to the electrical system,” Adm. Nathman said. “When those ships shut down and start pulling from the grid, it’s an enormous demand signal. And you can’t have interruptions in that power, because that power supports nuclear reactor operations.”

The U.S. military will be able to procure the petroleum fuels it requires to operate in the near-and mid-term time horizons. However, as carbon regulations are implemented and the global supplies of fossil fuels begin to plateau and diminish in the long-term, identifying an alternative to liquid fossil fuels is an important strategic choice for the Department.

Recognizing this circumstance, DARPA has signaled that it will invest $100 million in research and development funding to derive JP-8 from a source other than petroleum. In early 2009, DARPA awarded more than half of that funding to three firms in an effort to develop price-competitive JP-8 from non-food crops such as algae and other plant-based sources.

The ongoing research efforts and progress to date by DoD in finding alternative liquid fuels, however, should not be interpreted to mean that this will be an easy task to accomplish. The equipment and weapons platforms of the Services are complex in both their variety and their operational requirements. For example, when considering the U.S. Navy, the fleet uses 187 types of diesel engines, 30 variations of gas/steam turbine engines, 7,125 different motors (not to mention the various types of nuclear reactors for aircraft carriers and submarines). The Navy also procures liquid fuels for its carrier- and land-based aircraft, which feature a mix of turbojet, turboprop, turboshaft, and turbofan engines. Finding a fuel that contains the appropriate combination of energy content (per unit mass and volume) is a challenging area of research.
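That last point, about energy content per unit mass and volume, is the crux of the drop-in fuel problem. The sketch below compares rough, illustrative energy densities; the values are my own approximations for context, not figures from the report:

    # Approximate energy densities (my rough figures, not from the report) to illustrate
    # why finding a drop-in substitute for JP-8 is such a hard research problem.
    fuels = {
        # name: (MJ per kg, MJ per liter), approximate values
        "JP-8 / jet kerosene": (43.0, 35.0),
        "Ethanol": (27.0, 21.0),
        "Lithium-ion battery": (0.7, 1.5),
    }

    jp8_per_kg, jp8_per_l = fuels["JP-8 / jet kerosene"]
    for name, (per_kg, per_l) in fuels.items():
        mass_penalty = jp8_per_kg / per_kg    # how much heavier for the same energy
        volume_penalty = jp8_per_l / per_l    # how much bulkier for the same energy
        print(f"{name:22s} {per_kg:5.1f} MJ/kg  {per_l:5.1f} MJ/L  "
              f"({mass_penalty:4.1f}x heavier, {volume_penalty:4.1f}x bulkier than JP-8 per unit of energy)")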

How America responds to the challenges of energy dependence and climate change will shape the security context for the remainder of this century; it will also shape the context for U.S. diplomatic and military priorities.

FINDINGS

Finding 1: The nation’s current energy posture is a serious and urgent threat to national security. The U.S.’s energy choices shape the global balance of power, influence where and how troops are deployed, define many of our alliances, and affect infrastructure critical to national security. Some of these risks are obvious to outside observers; some are not. Because of the breadth of this finding, we spell out two major groupings of risk.

Finding 1A: Dependence on oil undermines America’s national security on multiple fronts. America’s heavy dependency on oil—in virtually all sectors of society—stresses the economy, international relationships, and military operations—the most potent instruments of national power. Over dependence on imported oil—by the U.S. and other nations—tethers America to unstable and hostile regimes, subverts foreign policy goals, and requires the U.S. to stretch its military presence across the globe; such force projection comes at great cost and with great risks. Within the military sector, energy inefficient systems burden the nation’s troops, tax their support systems, and impair operational effectiveness. The security threats, strategic and tactical, associated with energy use were decades in the making; meeting these challenges will require persistence. Both the defense and civilian systems have been based on dangerous assumptions about the availability, price, and security of oil and other fossil fuel supplies. It is time to abandon those assumptions.

Finding 1B: The U.S.’s outdated, fragile, and overtaxed national electrical grid is a dangerously weak link in the national security infrastructure. The risks associated with critical homeland and national defense missions are heightened due to DoD’s reliance on an electric grid that is outdated and vulnerable to intentional or natural disruptions. On the home front, border security, emergency response systems, telecommunications systems, and energy and water supplies are at risk because of the grid’s condition. For military personnel deployed overseas, missions can be impaired when logistics support and data analysis systems are affected by grid interruptions. An upgrade and expansion of the grid and an overhaul of the regulations governing its construction and operations are necessary enablers to growth of renewable energy production—which is also a key element of a sound energy and climate strategy. Others have made compelling arguments for this investment, citing the jobs growth and environmental benefits. We add our voices, but do so from a different perspective: Improving the grid is an investment in national security.

Finding 2: A business as usual approach to energy security poses an unacceptably high threat level from a series of converging risks. The future market for fossil fuels will be marked by increasing demand, dwindling supplies, volatile prices, and hostility by a number of key exporting nations. Impending regulatory frameworks will penalize carbon-intensive energy sources. Climate change poses severe security threats to the U.S. and will add to the mission burden of the military. If not dealt with through a systems-based approach, these factors will challenge the U.S. economically, diplomatically, and militarily. The convergence of these factors provides a clear and compelling impetus to change the national and military approach to energy.

Finding 3: Achieving energy security in a carbon-constrained world is possible, but will require concerted leadership and continuous focus. The value of achieving an energy security posture in a future shaped by the risks and regulatory framework of climate change is immense. The security and economic stability of the U.S. could be improved greatly through large-scale adoption of a diverse set of reliable, stable, low-carbon, electric energy sources coupled with the aggressive pursuit of energy efficiency. The electrification of the transportation sector would alleviate the negative foreign policy, economic, and military consequences of the nation’s current oil dependency. While this future is achievable, this transformation process will take decades; it will require patience, stamina, and the kind of vision that bridges generations. Ensuring consistency of the nation’s energy security strategy with emerging climate policies can also serve to broaden the base of support for sensible new energy development and help to unify a wide range of domestic policies.

Finding 4: The national security planning processes have not been sufficiently responsive to the security impacts of America’s current energy posture. For much of the post-World War II period, America’s foreign and defense policies were aimed at protecting stability where it existed, and promoting it where it did not. Our national security planning process has continuously evolved to mitigate and adapt to threats as they arose. From the perspective of energy security, this process has left the nation in a position where our energy needs undermine: our national ideals, our ability to project influence, our security at home, our economic stability, and the effectiveness of our military. America’s current energy and climate policies make the goal of stability much more difficult to achieve. While some progress has been made to recognize the risks of our energy posture (including within the U.S. military), the strategic direction of the nation has yet to change course sufficiently to avoid the serious threats that will arise as these risks continue to converge.

Finding 5: In the course of addressing its most serious energy challenges, the Department of Defense can contribute to national solutions as a technological innovator, early adopter, and test-bed. The scale of the energy security problems of the nation demands the focus of the Defense Department’s strong capabilities to research, develop, test, and evaluate new technologies. Historically, DoD has been a driving force behind delivering disruptive technologies that have maintained our military superiority since World War II. Many of these technical breakthroughs have had important applications in the civilian sector that have strengthened the nation economically by making it more competitive in the global marketplace. The same can be true with energy. By pursuing new energy innovations to solve its own energy security challenges, DoD can catalyze some solutions to our national energy challenges as well. By addressing its own energy security needs, DoD can stimulate the market for new energy technologies and vehicle efficiency tools offered by innovators. As a strategic buyer of nascent technologies, DoD can provide an impetus for small companies to obtain capital for expansion, enable them to forward-price their proven products, and provide evidence that their products enjoy the confidence of a sophisticated buyer with stringent standards. A key need in bringing new energy systems to market is to achieve speed and scale: these are hallmarks of American military performance.

Priority 1: Energy security and climate change goals should be clearly integrated into national security and military planning processes. The nation’s approach to energy and climate change will, to a large extent, shape the security context for the remainder of this century. It will shape the context for diplomatic and military engagements, and will affect how others view our diplomatic initiatives, long before the worst effects of climate change are visible to others. National-level strategy documents, including the National Military Strategy and the Quadrennial Defense Review, should more realistically describe the nature and severity of the threat.

Priority 2: DoD should design and deploy systems to reduce the burden that inefficient energy use places on our troops as they engage overseas. Because the burdens of energy use at forward operating bases present the most significant energy-related vulnerabilities to deployed forces, reducing the energy consumed in these locations should be pursued as the highest priority. In the operational theater, inefficient use of energy can create serious vulnerabilities for our forces at multiple levels. The combat systems, combat support systems, and electrical generators at forward operating bases are energy intensive and require regular deliveries of fuel; the convoys that provide this fuel and other necessary supplies are long and vulnerable, sometimes requiring protection by combat systems such as fixed-wing aircraft and attack helicopters. Individual troops operating in remote regions are subject to injury and reduced mobility due to the extreme weight of their equipment (which can include up to 26 pounds of batteries).

We encourage readers to view our earlier report, “National Security and the Threat of Climate Change.”

NRC. 2013. Energy Reduction at U.S. Air Force Facilities Using Industrial Processes: A Workshop. National Research Council

Military Advisory Board (MAB) Members

CHAIRMAN: General Charles F. “Chuck” Wald, USAF (Ret.) Former Deputy Commander, Headquarters U.S. European Command (USEUCOM)

General Charles G. Boyd, USAF (Ret.) Former Deputy Commander-in-Chief, Headquarters U.S. European Command (USEUCOM)

Lieutenant General Lawrence P. Farrell, Jr., USAF (Ret.) Former Deputy Chief of Staff for Plans and Programs, Headquarters U.S. Air Force

General Paul J. Kern, USA (Ret.) Former Commanding General, U.S. Army Materiel Command

General Ronald E. Keys, USAF (Ret.) Former Commander, Air Combat Command

Admiral T. Joseph Lopez, USN (Ret.) Former Commander-in-Chief, U.S. Naval Forces Europe and of Allied Forces, Southern Europe

General Robert Magnus, USMC (Ret.) Former Assistant Commandant of the U.S. Marine Corps

Vice Admiral Dennis V. McGinn, USN (Ret.) Former Deputy Chief of Naval Operations for Warfare Requirements and Programs

Admiral John B. Nathman, USN (Ret.) Former Vice Chief of Naval Operations and Commander of U.S. Fleet Forces

Rear Admiral David R. Oliver, Jr., USN (Ret.) Former Principal Deputy to the Navy Acquisition Executive

General Gordon R. Sullivan, USA (Ret.) Former Chief of Staff, U.S. Army, and Former Chairman of the CNA MAB

Vice Admiral Richard H. Truly, USN (Ret.) Former NASA Administrator, Shuttle Astronaut and the first Commander of the Naval Space Command

MAB Executive Director: Ms. Sherri Goodman, General Counsel, CNA. Former Deputy Under Secretary of Defense for Environmental Security

We would also like to thank the following persons for briefing the Military Advisory Board (in order of appearance):

Dr. Martha Krebs, Deputy Director for Research and Development, California Energy Commission and former Director, Office of Science, U.S. Department of Energy; Mr. Dan Reicher, Director, Climate Change and Energy Initiatives, Google.org, and former Assistant Secretary of Energy for Energy Efficiency and Renewable Energy; Dr. Kathleen Hogan, Director, Climate Protection Partnerships Division, U.S. Environmental Protection Agency; The Honorable Kenneth Krieg, Distinguished Fellow, CNA, and former Under Secretary of Defense for Acquisition, Technology, and Logistics; Dr. Joseph Romm, Senior Fellow, Center for American Progress, and former Acting Assistant Secretary of Energy, Office of Energy Efficiency and Renewable Energy; Mr. Ray Anderson, Founder and Chairman, Interface, Inc.; Mr. Jeffrey Harris, Vice President for Programs, Alliance to Save Energy; Dr. Vaclav Smil, Distinguished Professor, Faculty of Environment, University of Manitoba; Mr. Kenneth J. Tierney, Corporate Senior Director of Environmental Health, Safety and Energy Conservation, Raytheon; Dr. Ben Schwegler, Vice President and Chief Scientist, Walt Disney Imagineering Research and Development; Mr. Fred Kneip, Associate Principal, McKinsey; The Honorable John Deutch, Institute Professor, MIT, former Director of Central Intelligence, Central Intelligence Agency, and former Deputy Secretary of Defense; Mr. David Hawkins, Director, Climate Programs, Natural Resources Defense Council; Dr. Jeffrey Marqusee, Executive Director of the Strategic Environmental Research and Development Program (SERDP) and the Director of the Environmental Security Technology Certification Program (ESTCP); Mr. Michael A. Aimone, Assistant Deputy Chief of Staff for Logistics, Installations and Mission Support, Headquarters U.S. Air Force; Mr. Alan R. Shaffer, Principal Deputy Director, Defense Research and Engineering, Office of Director of Defense Research and Engineering, U.S. Department of Defense; Mr. Christopher DiPetto, Deputy Director, Developmental Test & Evaluation, Systems and Software Engineering, U.S. Department of Defense; and, The researchers from the National Renewable Energy Laboratory: Ms. Bobi Garret, Mr. Dale Garder, Dr. Rob Farrington, Dr. Mike Cleary, Mr. Tony Markel, Dr. Mike Robinson, Dr. Dave Mooney, Dr. Kevin Harrison, Mr. Brent Nelson, Mr. Bob Westby


From Horsepower to Horse Power: When Trucks Stop, Horses Start.

Preface. Before the industrial revolution there were only four sources of mechanical power of any economic significance. They were human labor, animal labor, water power (near flowing streams) and wind power.   Work done by animals, especially on farms, was still important at the beginning of the 20th century and remained significant until mid-century, when trucks and tractors displaced horses and mules (Ayres 2003).

Just as horses were indispensable for the past millennia, so have the cars and trucks of the 20th century become essential to our way of life.  If one horsepower roughly equals the power one horse can generate, then the 268.8 million cars and trucks in the United States, at an average of 120 HP each, are the equivalent of nearly 32.3 billion horses.  If each horse needs an acre of pasture, that’s over 50 million square miles of land, yet the U.S. is only about 3.5 million square miles.  Clearly we can’t go back to horses – except we have to at some point because oil is finite (I’m assuming you’ve read my book When Trucks Stop Running: Energy and the Future of Transportation to understand why biofuels, CTL, batteries, overhead wires, natural gas, and hydrogen can’t replace petroleum-powered internal combustion engines).
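As a rough sanity check of that arithmetic, here is a minimal Python sketch; the fleet size, the 120 HP average, and the one-acre-per-horse pasture figure are simply the assumptions stated above, not measured data.

```python
# Back-of-the-envelope check of the cars-to-horses comparison above.
# All inputs are the preface's own assumptions.
vehicles = 268.8e6            # US cars and trucks
avg_hp = 120                  # assumed average horsepower per vehicle
acres_per_horse = 1.0         # assumed pasture requirement per horse
ACRES_PER_SQ_MILE = 640

equivalent_horses = vehicles * avg_hp
pasture_sq_miles = equivalent_horses * acres_per_horse / ACRES_PER_SQ_MILE

print(f"Equivalent horses: {equivalent_horses / 1e9:.1f} billion")              # ~32.3
print(f"Pasture needed:    {pasture_sq_miles / 1e6:.1f} million square miles")  # ~50
print("US land area:      ~3.5 million square miles")
```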

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

Eric Morris. April 1, 2007. “From Horse Power to Horsepower”. Access Magazine, University of California.

The horse was the dominant mode of transportation for thousands of years. Horses were absolutely essential for the functioning of the 19th-century city—for personal transportation, freight haulage, and even mechanical power. Without horses, cities would quite literally starve.

From 1800 to 1900, US per capita GDP rose from $1,148 to $4,676 (in 2000 dollars). This meant greater trade, and virtually all goods were, at some point in their journey, transported by horse. In ten major US cities, the number of teamsters rose 328 percent between 1870 and 1900, while the population as a whole rose only 105 percent. At first glance, it might seem as if the railroad would have offered relief from the horse pollution problem. But in fact it exacerbated it. Railroads were as much a complement for horses as a substitute for them. Nearly every item shipped by rail needed to be collected and distributed by horses at both ends of the journey. So as rail shipments boomed, so did shipments by horse. Ironically, railroads tended to own the largest fleets of horses in nineteenth-century cities.

This situation was made even worse by the introduction of the horse into an area from which it had been conspicuously absent: personal intra-urban transportation. Prior to the nineteenth century, cities were traversed almost exclusively on foot. Mounted riders in US cities were uncommon, and due to their expense, slow speeds, and jarring rides, private carriages were rare; in 1761, only eighteen families in the colony of Pennsylvania (population 250,000) owned one. The hackney cab, ancestor of the modern taxi, was priced far beyond the means of the ordinary citizen.

This changed with the introduction of the omnibus in the 1820s. Essentially large stagecoaches traveling fixed routes, these vehicles were priced low enough to cater to a much larger swathe of the urban population. By 1853 New York omnibuses carried 120,000 passengers per day. Needless to say, this required a tremendous number of horses, given that a typical omnibus line used eleven horses per vehicle per day. And the need for horses was to spiral even further when omnibuses were placed on tracks, increasing their speeds by fifty percent and doubling the load a horse could pull. Fares dropped again, and passengers clamored for the new service. By 1890 New Yorkers took 297 horsecar rides per capita per year.

Horses need to eat. According to one estimate, each urban horse probably consumed on the order of 1.4 tons of oats and 2.4 tons of hay per year [3.8 tons = 7,600 lbs/year, or about 21 lbs a day]. One contemporary British farmer calculated that each horse consumed the product of five acres of land, a footprint which could have produced enough to feed six to eight people. Probably fifteen million acres were needed to feed the urban horse population at its zenith, an area about the size of West Virginia. Directly or indirectly, feeding the horse meant placing new land under cultivation, clearing it of its natural animal life and vegetation, and sometimes diverting water to irrigate it, with considerable negative effects on the natural ecosystem.

And what goes in must come out. Experts of the day estimated that each horse produced between fifteen and thirty pounds of manure per day. For New York and Brooklyn, which had a combined horse population of between 150,000 and 175,000 in 1880 (long before the horse population reached its peak), this meant that between three and four million pounds of manure were deposited on city streets and in city stables every day. Each horse also produced about a quart of urine daily, which added up to around 40,000 gallons per day for New York and Brooklyn.
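A small Python sketch of the feed and waste arithmetic in the two paragraphs above; the feed tonnages, the 15–30 lb/day manure range (I take roughly 22 lb/day as a midpoint), the quart of urine, and the 1880 horse counts are the estimates quoted in the text.

```python
# Urban-horse feed and waste, using the estimates quoted above.
LBS_PER_TON = 2000

# Feed: ~1.4 tons of oats plus ~2.4 tons of hay per horse per year
feed_lbs_per_year = (1.4 + 2.4) * LBS_PER_TON
print(f"Feed per horse: {feed_lbs_per_year:,.0f} lbs/year "
      f"(~{feed_lbs_per_year / 365:.0f} lbs/day)")        # ~7,600 and ~21

# Manure and urine for New York and Brooklyn in 1880
manure_lbs_per_day = 22        # rough midpoint of the 15-30 lb/day estimate
urine_quarts_per_day = 1
for horses in (150_000, 175_000):
    manure_millions = horses * manure_lbs_per_day / 1e6
    urine_gallons = horses * urine_quarts_per_day / 4
    print(f"{horses:,} horses: ~{manure_millions:.1f} million lbs manure, "
          f"~{urine_gallons:,.0f} gallons urine per day")   # ~3-4 million lbs, ~40,000 gal
```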

Horse manure is the favored breeding ground for the house fly, and clouds of flies hatched in it (one estimate is that three billion flies hatched in horse manure per day in US cities in the year 1900).

Flies are also potent disease vectors. Flies pick up bacteria and other pathogens on their feet, hairy appendages, and proboscides, then transmit them as they fly between filth and humans and their food. They also deposit germs through their feces and vomit. Flies transmit dozens of diseases, and studies have found that nineteenth century outbreaks of deadly infectious maladies like typhoid and infant diarrheal diseases can be traced to spikes in the fly population.

Horses killed in other, more direct ways as well. As difficult as it may be to believe given their low speeds, horse-drawn vehicles were far deadlier than their modern counterparts. In New York in 1900, 200 persons were killed by horses and horse-drawn vehicles. This contrasts with 344 auto-related fatalities in New York in 2003; given the modern city’s greater population, this means the fatality rate per capita in the horse era was roughly 75 percent higher than today. Data from Chicago show that in 1916 there were 16.9 horse-related fatalities for each 10,000 horse-drawn vehicles; this is nearly seven times the city’s fatality rate per auto in 1997.

The reason is that horse-drawn vehicles have an engine with a mind of its own. The skittishness of horses added a dangerous level of unpredictability to nineteenth-century transportation. This was particularly true in a bustling urban environment, full of surprises that could shock and spook the animals. Horses often stampeded, but a more common danger came from horses kicking, biting, or trampling bystanders. Children were particularly at risk.

In addition, the vehicles themselves (especially the omnibus) presented safety hazards. They were difficult to brake, and the need to minimize friction meant that they required large wheels. These made for top-heavy, ungainly carriages prone to capsizing, a problem exacerbated by winding street layouts. Moreover, drivers had a reputation for recklessness.

The clatter of horseshoes and wagon wheels on cobblestone pavement jangled nineteenth-century nerves.

Congestion was another problem. Traffic counts indicate that traffic across the nation more than doubled between 1885 and 1905. Not only was the number of vehicles rising rapidly, but the nature of the vehicles themselves caused tremendous problems. A horse and wagon occupied more street space than a modern truck. Obviously, horse-drawn vehicles traveled at very slow speeds, and horses, especially those pulling heavy loads or hitched in teams, started forward very slowly, a great difficulty in stop-and-go conditions. Streets of the era were not adequate to handle the traffic, and hills caused problems.

In addition, horses often fell, on average once every hundred miles of travel. When this took place, the horse (weighing on average 1,300 pounds) would have to be helped to its feet, which was no mean feat. If injured badly, a fallen horse would be shot on the spot or simply abandoned to die, creating an obstruction that clogged streets and brought traffic to a halt. Dead horses were extremely unwieldy, and although special horse removal vehicles were employed, the technology of the era could not easily move such a burden. As a result, street cleaners often waited for the corpses to putrefy so they could more easily be sawed into pieces and carted off. Thus the corpses rotted in the streets, sometimes for days, with less than appealing consequences for traffic circulation, aesthetics, and public health.

Falls were not the only reason horses expired in the streets. One might think it would be in the interest of horse owners to keep their animals in good condition; a horse was a fairly large capital investment. But unfortunately, economics caused owners to reach quite the opposite conclusion. Due to the costs of feeding the animals and stabling them on expensive urban land, it made financial sense to rapidly work a small number of horses to death rather than care for a larger group and work them more humanely. As a result, horses were rapidly driven to death; the average streetcar horse had a life expectancy of barely two years. In 1880, New York carted away nearly 15,000 dead equines from its streets, a rate of 41 per day.

In addition to frequent whippings and beatings from drivers, urban horses faced another peril: the condition of the street surfaces. Paved streets were far more slippery than the dirt roads they replaced. They were especially slick when wet or frozen. Horses, shod in iron shoes providing poor traction, frequently lost their step and tumbled, often to their deaths.

Stables were generally dark and lacked ventilation; some were rarely cleaned and reeked of excrement. Due to the expense of urban land, horses were crowded into them. This was not just uncomfortable; it was deadly as well, as it left horses open to the ravages of infectious disease. The Great Epizootic Epidemic of 1872 killed approximately five percent of the urban horses in the Northeast and debilitated many others. Transportation halted, food prices soared, goods piled up at the docks. Fire ravaged downtown Boston because there were not enough healthy horses to pull the fire trucks.

References

Ayres, R.U., et al., “Exergy, Power and Work in the US Economy, 1900–1998,” Energy, vol. 28, no. 3, March 2003, pp. 219–273.

Clay McShane. Down the Asphalt Path: The Automobile and the American City. (New York: Columbia University Press, 1994).

Lawrence H. Larsen, “Nineteenth-Century Street Sanitation: A Study of Filth and Frustration,” Wisconsin Magazine of History, vol. 52, no. 3, Spring 1969.

Clay McShane and Joel A. Tarr. “The Centrality of the Horse in the Nineteenth Century American City,” in The Making of Urban America, ed. Raymond A. Mohl (Wilmington DE: Scholarly Resources, 1997).

Nigel Morgan, “Infant Mortality, Flies and Horses in Later-Nineteenth-Century Towns: A Case Study of Preston,” Continuity and Change, vol. 17, no. 1, 2002.

Joel A. Tarr, “The Horse: Polluter of the City,” in The Search for the Ultimate Sink: Urban Pollution in Historical Perspective, ed. Joel A. Tarr (Akron, Ohio: University of Akron Press, 1996).

Francis Michael Longstreth Thompson, ed. Horses in European Economic History: A Preliminary Canter (Great Britain: British Agricultural History Society, 1983).

 

 


Muscle Power

Preface. Before fossil fuels, the energy to do work came from muscle power and the heat from burning biomass, mainly wood.  When I visited the Deutsches museum in Munich in 2017, I saw two animal treadmills:

The first picture shows a dog treadwheel in a nailmaker’s workshop in 1850. The dog ran inside the “drum,” with the treadwheel operating the bellows to fan the smith’s fire. When more fire was needed, the nailer shouted at the dog to make it run more quickly and fan the fire more strongly. Dog treadwheels were used by nailmakers until about 1930.

The picture below is a treadmill used by either horses or oxen in 1900 to drive farm machinery. The treadmill comprises an endless belt of wooden planks. Oxen or horses are roped to it in such a way that they are strangled if they do not continually walk on the belt. The rotary movement is taken up by a pulley. With a small working load a centrifugal brake ensures that the belt does not run too quickly. Treadmills were also built for two animals working side by side and for small animals, such as dogs or goats.

As fossil energy declines, muscle power will increasingly have to replace it.

Here’s what Vaclav Smil has to say about muscle power in his book “Energy and Civilization”:

“The simplest way of transporting loads is to carry them. Where roads were absent people could often do better than animals: their weaker performance was often more than compensated for by flexibility in loading, unloading, moving on narrow paths, and scrambling uphill. Similarly, donkeys and mules with panniers were often preferred to horses: steadier on narrow paths, with harder hooves and lower water needs they were more resilient. The most efficient method of carrying is to place the load’s center of gravity above the carrier’s own center of gravity-but balancing a load is not always practical. In relative terms, people were better carriers than animals. Typical loads were only about 30% of an animal’s weight (that is, mostly just 50-120 kg) on the level and 25% in the hills. Men aided by a wheel could move loads far surpassing their body weight. Recorded peaks are more than 150 kg in Chinese barrows where the load was centered right above the wheel’s axle.”

And look at what it took to open the road to Russia for Napoleon: Philip Paul, comte de Ségur (1780–1873), one of Napoleon’s young generals and perhaps the most famous chronicler of the disastrous Russian invasion, described the Prussian contribution. By this treaty, Prussia agreed to furnish many goods: 22,046 tons of rye, 264 tons of rice, two million bottles of beer, 44,092 tons of wheat, 71,650 tons of straw, 38,581 tons of hay, six million bushels of oats, 44,000 oxen, 15,000 horses, 300,600 wagons with harness and drivers, each carrying a load of 1700 pounds; and finally, hospitals provided with everything necessary for 20,000 sick.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

Steven Vogel. 2003.  Prime Mover: A Natural History of Muscle. W. W. Norton

Reviewed in June 2004. Natural History Magazine

Today we have machines for making people move in place: run, walk uphill, push pedals back and forth or up and down, row, ski, or even climb a never-ending staircase. Machines that are designed to waste energy and that usually rely on still more energy, in the form of electricity, to run: who would have anticipated their popularity? Thanks to the ingenuity of these contraptions’ designers and purveyors (people who, one might say, live off the fat of the land), the toils of Sisyphus have been transformed into a healthful pastime.

Such machines mirror the ancient technology of animal-powered engines. Millennia ago, humans discovered that wheels could do something other than turn tables for shaping pottery or underpin a vehicle and that a domesticated animal could do something other than carry a load or pull a wagon. We figured out that animals could do lots of useful tasks by turning wheels that were fixed in place. In an era when the non-muscular power sources included only a few types of sails and waterwheels, such a realization was no small matter. The diverse devices that followed had in common a fixed position, rotational motion, and whole-animal muscle power. Few looked much like today’s fancy exercise machines. Some were designed to be powered by humans, others by (usually) domesticated animals, and a few by either (depending on availability) showing, perhaps, how little the inventors distinguished a serf, a slave, or a convict from other sources of involuntary labor.

We can recognize three major designs. In the oldest, a horizontal bar jutted from a vertical shaft. An animal attached to the bar turned the shaft by circling. Because the animal itself rotated, the machine needed no true crank, such as the kind we use to pedal bicycles. Sometimes the vertical shaft was itself part of the business end of the machine, as in systems that ground grain between stones. Sometimes a pair of gears connected the vertical shaft to a horizontal one, often to run a chain of buckets descending into a well.

In a second type, a human or a domestic animal worked within a huge hollow wheel, like a hamster in an exercise wheel. The motor thus climbed rather than pulled. A greater load on the wheel meant a greater resistance to its being turned, and this meant that the living motor had to climb farther up the inside of the wheel to keep it going. The increasing slope gave the wheel a neatly self-regulating character: the motor’s output was automatically matched with the load. Bipeds such as humans made particularly good motors, but these cage wheels didn’t require domesticated creatures: even bears, contend historians, could be pressed into service.

The third design also made an animal climb, but it used a moving sloped platform instead. This ancestor of our fitness treadmills presented severe mechanical challenges: its sliding platform had to support both an animal’s weight and the impact of its feet, as well as be durable and sufficiently flexible to go around revolving drums fore and aft. But the challenges were worth overcoming: its slope could be adjusted to match the motor with the load, and the variable slope made the machine less finicky about what sort of animal powered it. However, the engineering feats required to build such true treadmills meant that they were uncommon before the 19th century.

Not that the three basic designs exhausted the possibilities for muscle-powered engines: A person could sit on a plank and rotate an inclined disk by pushing the edge with his feet, or an animal could walk in place uphill on top of the disk. A person could pull downward on crossbars on the outside of a revolving vertical wheel (essentially replacing water on a waterwheel) or pull on a slack chain hung over such a wheel. Indeed, the makers of modern exercise equipment might profitably peruse the works of great Renaissance engineers such as Georgius Agricola, Agostino Ramelli, and Fausto Veranzio when deciding what new kinds of devices to unleash on the public.

Although muscle-powered engines are ancient, they originated long after chariots, carts, and potters’ wheels, probably about 4,000 years ago in the Middle East or India. Their main use in that part of the world was (and remains) lifting water with a bucket chain, a task that demanded a reliable way to turn horizontal into vertical motion. The problem’s solution, putting a pair of gears at right angles to each other, was far from obvious, and this may have delayed its invention or adoption. Rotary grain mills needed no such gears, but these mills came into use later, perhaps initially invented by the Greeks about 400 B.C. By the first century A.D., the Romans used slaves and donkeys to power a highly effective version, their mola asinaria (asinine, or donkey, mill).

Revolving cage wheels may have been a Roman innovation. Unambiguous images of them occur on bas-reliefs, and remnants of the actual wheels have been found at Pompeii. As with rotary grain mills, humans and donkeys provided the energy. Such wheels powered cranes that lifted blocks into place during the building of tall structures. Centuries later, similar cranes helped a more technologically savvy culture erect the great medieval cathedrals.

Because wells and irrigation systems were widespread and the hulls of wooden ships usually leaked, pumping water was a common use of stationary muscle-driven engines. Sometimes these engines were used to lift water to heroic heights. In one Roman mine, a cascade of eight pairs of scoop wheels raised water almost a hundred feet. Grinding grain and lifting stonework were not these engines’ only other tasks; in medieval Europe they also ran sawmills, pile drivers, dough-kneading machines, dockside cranes, bellows, and even one wheel (powered by a dog) that turned a roasting spit.

Muscle-powered mechanical engines also provided an alternative to sails and oars for propelling boats. On China’s rivers, human-powered cage wheels drove paddle wheelers as long ago as the eighth century A.D., though this mating of two relatively efficient devices never caught on in the West. Here shipbuilders stuck with oars, but building large oar-powered ships proved difficult: no matter how many were added to power a bigger ship, the oars couldn’t keep pace with the increased drag of the vessel. Also, larger oars were heavy and clumsy to maneuver and required multiple oarsmen. By contrast, paddle wheelers lose nothing by being big, and it’s easy to link the paddle wheel to the cage wheel amidships.

One form of animal-powered boat did appear in the West, mainly in eastern North America, during the first half of the nineteenth century. Shortly after steamboats came into use, horse-powered “teamboats” began serving as ferries across waterways such as New York’s Hudson River. They were less expensive than steam, more reliable than sail, and required none of the human labor–more scarce in the New World than in the Old–demanded by oars. In one design, two or more horses walked in a circle on deck, turning a capstan amidships that was geared to a paddle wheel set between a pair of catamaran-like hulls. Another version had a turntable below deck, with horses in fixed stalls and two paddle wheels, one on each side of a single hull.

Usually, however, the nineteenth century’s many muscle-powered engines operated away from water: in prisons and on the prairie. Penal versions abounded. In its entry for “treadmill,” the great Oxford English Dictionary, compiled at the century’s end, recognized only a penal application for the machines. Punishment and useful work: what a nice combination. British parliamentary commissions repeatedly examined and endorsed the treadmill’s safety and beneficial effects on the health of inmates.

Great Britain was not alone in using these mills as a correctional device. New York City began using one in Bellevue Penitentiary in about 1820. Eight prisoners climbed the wide rows of steps that formed a revolving drum, while four sat in reserve. Each team member (some were women) therefore worked two-thirds of the time. Still, treadmills remained relatively uncommon in America; the shortage of labor meant that humans were rarely employed as motors when other animals would do.

Used judiciously, however, a muscle-powered engine provided aerobic exercise to incarcerated people who otherwise would have lacked it. Furthermore, the engines supplied necessities, such as ground grain for prison bakeries. An 1824 report (“The History of the Treadmill,” by James Hardie) makes much of the safety of the New York wheel, and its assertions about the health of inmates using it don’t seem unreasonable. Inmates apparently received sufficient food for the work. But its punitive character is evident: the jailers claimed rapid attitude adjustments in formerly obstreperous prisoners. As Thomas Henry Huxley, famous defender of Charles Darwin, put it when complaining about an adversary, “I would willingly agree to any law which would send him to the treadmill.”

For size and technological audacity, nothing has ever come close to the agricultural machines used during the nineteenth century on the North American prairie. After about 1880, large “combines,” pulled by up to forty horses, reaped and threshed wheat as they moved through the fields. But in the preceding decades, the threshing of grain depended on stationary machines called sweeps, powered by as many as twenty horses pulling radial bars as they circled. The machine’s parts were brought to a threshing location the evening before the scheduled work was begun, and the thresher was assembled before dawn; after being run all day, it was dismantled and moved (by its own horses) to the next farm. While these big and otherwise sophisticated machines incorporated a novel level of portability, they also relied on the oldest of designs for muscle-powered engines.

Unlike the combines, most nineteenth-century American devices were relatively small, general-purpose machines powered by one to four horses or oxen. These often used the third basic design, a suspended treadmill of two rollers and an inclined “belt.” The belt consisted of wooden boards, laid perpendicularly to the animals’ path and hinged side-to-side, forming an endless moving platform that slid on greased tracks or small rollers positioned between the big front and rear rollers. A one- or two-animal model looked like an open horse trailer. This machine worked much like a modern tractor motor; it could be connected to various machines and could be pulled out of the barn and positioned wherever it was needed for chores such as sawing logs. These multipurpose units appeared in farm catalogs as late as 1890.

Smaller versions of the giant threshing sweeps also appeared as late as 1890. For some tasks, either a treadmill or a sweep was suitable. Sweeps, which were simpler and lighter relative to their potential power, demanded more goading of the driving animals, and many designs required that the animals step over a horizontal drive-shaft once during every revolution. Treadmills, which were more compact and easier to move and put to work than the sweeps, were harder on the animals’ hooves. But treadmills were advertised as being twice as effective, which was not an unreasonable claim: on an inclined belt, an animal works by lifting itself; attached to a sweep, a harnessed animal pulls, a less natural activity, severely constrained by the effectiveness of the harness.

Like other muscle-powered machinery, treadmill devices were not designed only for horses or oxen. Household (or dooryard) models even included a butter churn; the manufacturer asserted that dogs, goats, sheep, or children could power it.

Such devices demonstrate that mechanization needn’t involve muscle-displacing motorization. Nonetheless, by the end of the century, steam power had largely replaced muscle power. The small steam engines of the nineteenth century may have been less fuel- and weight-efficient than gasoline-powered engines, but in low-population areas they were a good solution. Engines in which steam pushed pistons operated at relatively low pressures and temperatures, so they could be built simply, of common metals, and were easy to repair. Furthermore, they could burn anything combustible. Farm museums across North America still display such steam engines: self-propelled tractors that could also work as stationary power sources, powering all the same accessories as the treadmills that preceded them.

How good were these ingenious animal-powered machines at making muscle do useful work?

Where humans worked against gravity, as they did inside cage wheels and upon treadmills, we can calculate the power outputs. As a benchmark, we might use data, first obtained in the eighteenth century by British scientists Jean Desaguliers and John Smeaton, of the power a human laborer could produce if working steadily all day: 90 to 100 watts. In contemporary terms, this means that if you attach a generator to an exercise machine, you can watch TV as long as you climb, pedal, or row.

By the same token, a Roman cage wheel sixteen feet in diameter and eight feet across accommodated six to eight men, who could, forty times per hour, jointly lift one ton a distance of twenty-seven feet, which equals a power output of 600 foot-pounds per second. Dividing that among eight workers, we calculate a power output per person of just over 100 watts. That figure attests both to respectable efficiency for the machine and to considerable effort for the workers (who may have worked in relays).
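A quick Python check of that cage-wheel figure, using only the numbers given in the paragraph above (one ton taken here as 2,000 lb):

```python
# Roman cage-wheel estimate: six to eight men jointly lift one ton (2,000 lb)
# a height of 27 feet, 40 times an hour.
FT_LB_PER_S_TO_WATTS = 1.3558

power_ft_lb_s = 2000 * 27 * 40 / 3600          # 600 ft-lb/s, as stated in the text
power_watts = power_ft_lb_s * FT_LB_PER_S_TO_WATTS

print(f"Total output: {power_ft_lb_s:.0f} ft-lb/s (~{power_watts:.0f} W)")
print(f"Per worker, with 8 men: ~{power_watts / 8:.0f} W")   # just over 100 W
```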

On the Bellevue Penitentiary treadmill, prisoners climbed on treads protruding from a wheel that was slightly over five feet in diameter and turned three times each minute. If one assumes that a typical prisoner weighed 132 pounds, then the prisoner must have worked at a power of almost 140 watts. Since the normal duty cycle allowed each prisoner to rest one-third of the time, the sustained output would have been a little over 90 watts-sustained, according to the report, for up to ten hours a day. That figure of 90 watts confirms the reported unpleasantness of the task. A similar output was demanded of nineteenth-century Australian convicts, who worked up to twelve hours per day; some said they’d rather hang than work their mill.

We can view that 90 watts in yet another context. At best, only about one-fourth of the energy in food emerges as useful mechanical work. Thus, laboring on the treadmill, sustaining 90 watts for ten hours, itself requires more than 3,000 Calories. So Bellevue’s inmates worked hard enough and long enough to require double the food intake of a normally active adult male.
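Here is a minimal Python sketch of the Bellevue numbers in the two paragraphs above. The exact wheel diameter is not given, so I assume 5.0 feet (“slightly over five feet” in the text); the result lands close to the quoted figure of nearly 140 W while climbing and a little over 90 W sustained.

```python
import math

# Bellevue treadmill estimate: a wheel of roughly five feet in diameter turning
# three times a minute, climbed by a 132-lb prisoner who rests one-third of the time.
FT_LB_PER_S_TO_WATTS = 1.3558

diameter_ft = 5.0                                   # assumed; "slightly over five feet"
climb_speed_ft_s = math.pi * diameter_ft * 3 / 60   # rim speed at 3 rpm
peak_watts = 132 * climb_speed_ft_s * FT_LB_PER_S_TO_WATTS
sustained_watts = peak_watts * 2 / 3                # two-thirds duty cycle

print(f"While climbing: ~{peak_watts:.0f} W; sustained: ~{sustained_watts:.0f} W")

# Food needed for 10 hours of sustained output at ~25% muscle efficiency
# (1 Calorie = 4,184 J)
work_calories = sustained_watts * 10 * 3600 / 4184
print(f"Food required: ~{work_calories / 0.25:,.0f} Calories")   # more than 3,000
```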

Although nonhuman animals don’t complain and can be fed cheaper food, attaching them to machines greatly lessens their efficiency. Which animals give the best service? Biology tells us that bigger is better. The strength and power of muscle tissue vary little from animal to animal, and mammals all have about the same amount of muscle: about 40 percent of body weight. But larger creatures spend relatively less energy on basic body functions, and this increases the fraction of their food that can be appropriated for labor-so forget battalions of wheel-turning rodents. Insects put out lots of power for long periods while spending less on personal maintenance than even big mammals do; in particular, they don’t insist on staying warm when idle. Under laboratory conditions, flying insects such as fruit flies and migratory locusts have powered stationary engines with their beating wings. But even large insects remain impractically small for our purposes.

Nor can practicality be ignored at the other end of the scale. Elephants, whatever their potential efficiency, are awkwardly large. Oxen have given excellent service for millennia, and horses for a little more than a thousand years (since the invention of the horse collar, which enabled them to pull effectively), but only on sweeps and treadmills. Few if any tractable animals come close to the mechanical versatility of agile humans.

Today, only a few types of muscle-powered stationary engines remain in use. A “treadle pump” (first disseminated in Bangladesh during the 1980s and now used by many farmers in Asia and sub-Saharan Africa) pumps water for irrigation; it’s run by a person who climbs what looks like a Stair-Master stair climber. More sophisticated and convenient means of generating energy have largely taken the place of muscle-powered machinery. Engines powered by fossil fuels require far less infrastructure than do working animals and come in a much wider range of sizes and models. Still, why not take instruction from history and hook a generator to your exercise bike or rowing machine? That power source could run some entertainment device that performs only when you do likewise.

You Expect Me to Do Watt?

A worker doing hard physical labor all day long, or a felon turning a treadmill, can put out about 100 watts of power. That’s the output, in the form of a little light and a lot of heat, of the familiar light bulb. It’s a little more than the rate at which an inactive human heats a room. But to understand 100 watts of muscle power, one needs to turn to a quantifiable everyday task that humans do with reasonable efficiency. Climbing stairs fits that bill.

When you climb a flight of stairs, what’s your power output? Just multiply your weight in pounds by the height of each step in inches and by your climbing rate in steps per second, and then divide by 9. The last number takes care of gravity and converts the figure into watts. I weigh 140 pounds. When climbing seven-inch steps at two per second, I put out about 220 watts, a rate that I, an age-challenged man, can sustain only briefly. Ascending a down escalator, I work at 140 watts.

But what climbing rate corresponds to an output of 100 watts? Divide 900 by your weight in pounds and by the step height in inches; the resulting figure is how many steps per second you would have to climb. On seven-inch stairs, I’d have to ascend them at a little less than one step per second, trivial for the first couple of flights but a tiring regimen to keep up for even an hour.

What about fuel? We’re at best only about 25 percent efficient, so an output of 100 watts requires a minimum input of 400 watts, which translates (when we multiply by 0.86) into about 350 Calories per hour. Burning a tenth of a pound of good fuel, fat, yields 350 Calories, so working at 100 watts for eight hours costs less than a pound of body fat, still nearly double a human male’s normal energy use.
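The rules of thumb above translate directly into code; here is a minimal Python sketch using the same illustrative inputs as the text (140 lb, seven-inch steps).

```python
# The "divide by 9" and "divide 900" rules of thumb above, as code.
def stair_watts(weight_lb, step_height_in, steps_per_sec):
    """Power output while climbing stairs, in watts."""
    return weight_lb * step_height_in * steps_per_sec / 9

def steps_per_sec_for_100_watts(weight_lb, step_height_in):
    """Climbing rate needed to put out 100 watts."""
    return 900 / (weight_lb * step_height_in)

print(f"{stair_watts(140, 7, 2):.0f} W")                      # ~218 W, the "about 220" above
print(f"{steps_per_sec_for_100_watts(140, 7):.2f} steps/s")   # a bit under one step per second

# Fuel: at ~25% efficiency, 100 W of output needs ~400 W of food energy,
# and 1 W sustained for an hour is about 0.86 Calories; fat holds ~3,500 Cal/lb.
calories_per_hour = 100 / 0.25 * 0.86
pounds_of_fat_per_8_hours = calories_per_hour * 8 / 3500
print(f"~{calories_per_hour:.0f} Calories/hour, "
      f"~{pounds_of_fat_per_8_hours:.2f} lb of fat in 8 hours")   # ~344 and ~0.79
```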

By any technological yardstick, we animals, whether horses or humans, are strange engines. For a difficult task of only a few seconds’ duration, a person can put out thousands of watts, many times the 746 watts in one official horsepower. For tasks lasting a few minutes, a fit human can generate perhaps 1,000 watts. For an activity that must be sustained for an hour, output drops to around 300; for an activity kept up all day, 150 watts is about the maximum. But by physiological standards, horses and humans (and dogs but not cats) sustain especially high power outputs. Fleet-footed ancestors bequeathed us the lungs and hearts that let us work long and hard.


Challenges to the Integration of Renewable Resources at High System Penetration

Preface.  This overview of the challenges facing wind and solar, written in 2010, is still true today. We are far from being able to reach even a 50% renewable grid (excluding hydropower from the total): storage is lacking, the best wind and solar resources are far from towns and cities – often too far to justify extending transmission lines – and we lack a “smart grid” because of the many challenges of processing huge amounts of data, among other problems.

California gets up to 29% of its power from renewables, but that output is highly seasonal and not dependable for more than half of the year, when the majority of power must come from fossil fuels, mainly natural gas.

I liked this paper because it is less technical than most papers on this topic, probably because it was written for policymakers.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Meier, Alexandra von. 2010. Challenges to the Integration of Renewable Resources at High System Penetration. California Energy Commission, California Institute for Energy and Environment. Publication number: CEC-500-2014-042.

Renewable and distributed resources introduce space (spatial) and time (temporal) constraints on resource availability and are not always available where or when they are wanted. Although every energy resource has limitations, the constraints associated with renewables may be more stringent and different from those constraints that baseload power systems were designed and built around.

These unique constraints must be addressed to mitigate problems and overcome difficulties while maximizing the benefits of renewable resources. New efforts are required to coordinate time and space within the electric grid at greater resolution or with a higher degree of refinement than in the past.

This requires measuring and actively controlling diverse components of the power system on smaller time scales while working toward long-term goals. These smaller time scales may be hourly or by the minute, but could also be in the milli- or even microsecond range. It is also important to plan and design around the diverse details of local distribution circuits while considering systemic interactions throughout the Western interconnect.

 

Temporal coordination specifically addresses renewable resources’ time-varying behavior and how this intermittency interacts with other components on the grid, where not only quantities of power but also rates of change and response times are crucially important.

Research needs for temporal coordination relate to: resource intermittence, forecasting and modeling on finer time scales; electric storage and implementation on different time scales; demand response and its implementation as a firm resource; and dynamic behavior of the alternating current grid, including stability and low-frequency oscillations, and the related behavior of switch-controlled generation. Different technologies, management strategies and incentive mechanisms are necessary to address coordination on different time scales.

A challenge to “smart grid” coordination is managing unprecedented amounts of data associated with an unprecedented number of decisions and control actions at various levels throughout the grid.

More work is required to move from the status quo to a system with 33 percent intermittent renewables. The complexity of the grid and the need to refine temporal and spatial coordination represent a profound departure from the capabilities of the legacy baseload system. Any “smart grid” development will require time for learning, especially by drawing on empirical performance data as they become available. Researchers concluded that time is of the essence in answering the many foundational questions about how to design and evaluate new system capabilities, how to rewrite standards and procedures accordingly, how to create incentives that elicit the most constructive behavior from market participants, and how to support operators in their efforts to keep the grid working reliably during these transitions. Addressing these questions early may help prevent costly mistakes and delays later on.

 

Renewable and distributed resources introduce space or location (spatial) and time (temporal) constraints on resource availability. It is not always possible to have the resources available where and when they are required.

New efforts will be required to coordinate these resources in space and time within the electric grid.

A combination of economic and technical pressures has made grid operators pay more attention to the grid’s dynamic behaviors, some of which occur within a fraction of an alternating current cycle (one-sixtieth of a second). The entire range of relevant time increments in electric grid operation and planning spans fifteen orders of magnitude: from the microsecond interval on which a solid-state switching device operates, to the tens of years (roughly a billion seconds) it may take to bring a new fleet of generation and transmission resources online (Figure 1).

Figure 2: Distance Scales for Power System Planning and Operation

Because of their unique properties, any effort to integrate renewable resources to a high penetration level will push outward the time and distance scales on which the grid is operated. For example, it will force distant resource locations to be considered, as well as unprecedented levels of distributed generation on customer rooftops. The physical characteristics of these new generators will have important implications for system dynamic behavior. In extending the time and distance scales for grid operations and planning, integrating renewable resources adds to, and possibly compounds, other pre-existing technical and economic pressures.

 

This white paper explains some of the crucial technical challenges, organized as temporal and spatial refinement of energy and information management. It identifies areas that are poorly or insufficiently understood, and where a clear need exists for new or continuing research.

Work must proceed simultaneously on multiple fronts.

The fact that solar and wind power are intermittent and non-dispatchable is widely recognized. More specifically, the problematic aspects of intermittence include the following:

High variability of wind power. Not only can wind speeds change rapidly, but because the mechanical power contained in the wind is proportional to wind speed cubed, a small change in wind speed causes a large change in power output from a wind rotor (see the short numerical sketch after this list).

High correlation of hourly average wind speed among prime California wind areas.  With many wind farms on the grid, the variability of wind power is somewhat mitigated by randomness: especially the most rapid variations tend to be statistically smoothed out once the output from many wind areas is summed up. However, while brief gusts of wind do not tend to occur simultaneously everywhere, the overall daily and even hourly patterns for the best California wind sites tend to be quite similar, because they are driven by the same overall weather patterns across the state.

Time lag between solar generation peak and late afternoon demand peak.  The availability of solar power generally has an excellent coincidence with summer-peaking demand. However, while the highest load days are reliably sunny, the peak air-conditioning loads occur later in the afternoon due to the thermal inertia of buildings, typically lagging peak insolation by several hours.

Rapid solar output variation due to passing clouds. Passing cloud events tend to be randomized over larger areas, but can cause very rapid output variations locally. This effect is therefore more important for large, contiguous photovoltaic arrays (that can be affected by a cloud all at once) than for the sum of many smaller, distributed PV arrays. Passing clouds are also less important for solar thermal generation than for PV because the ramp rate is mitigated by thermal inertia (and because concentrating solar plants tend to be built in relatively cloudless climates, since they can only use direct, not diffuse sunlight).

Limited forecasting abilities. Rapid change of power output is especially problematic when it comes without warning.
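As a small illustration of the cube-law sensitivity noted under “High variability of wind power” above, here is a minimal Python sketch; the wind speeds are arbitrary examples, and it ignores the rated-power limits and cut-out behavior of real turbines.

```python
# Power in the wind scales with the cube of wind speed, so modest speed changes
# produce large swings in available power. Speeds are illustrative only, and
# turbine rating / cut-out limits are ignored.
def relative_power(v_mps, v_ref_mps=10.0):
    """Wind power available at speed v, relative to the reference speed."""
    return (v_mps / v_ref_mps) ** 3

for v in (8.0, 9.0, 10.0, 11.0, 12.0):
    print(f"{v:4.1f} m/s -> {relative_power(v) * 100:5.1f}% of the 10 m/s power")
# 8 m/s gives ~51% of the reference power; 12 m/s gives ~173%.
```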

In principle, intermittence can be addressed by firming resources, including:

  • reserve generation capacity
  • dispatchable generation with high ramp rates
  • generation with regulation capability
  • dispatchable electric storage
  • electric demand response

These resources can be used in various combinations to offset the variability of renewable generation output. Vital characteristics of these firming resources include not only the capacity they can provide, but their response times and ramp rates.

Figure 3: Load Duration Curve Filled with Renewables

 

Figure 3 suggests that while the integration of renewable resources at very high system penetration may present some serious problems, matching generation with load on an hourly basis, at least from the theoretical standpoint of resource availability, is probably not one of them. Rather, the more critical technical issues seem to appear at finer time resolution, as illustrated in Figure 4.

One problematic aspect is resource forecasting on a short time scale. Solar and wind power forecasting obviously hinges on the ability to predict temperature, sunshine and wind conditions. While weather services can offer reasonably good forecasts for larger areas within a resolution of hours to days, ranges of uncertainty increase significantly for very local forecasts.

Figure 1: Resource Modeling and Forecasting Time Scales

Needed:

  • Real-time forecasting tools for wind speed, temperature, total insolation (for PV) and direct normal insolation (for concentrating solar), down to the time scale of minutes
  • Tools for operators that translate weather forecast into renewable output forecast and action items to compensate for variations.

The most responsive resources would include hydroelectric generators and gas turbines.

The more difficult question is how much of each might be needed. Electric storage includes a range of standard and emerging technologies:

  • pumped hydro
  • stationary battery banks
  • thermal storage at solar plants
  • electric vehicles
  • compressed air (CAES)
  • supercapacitors
  • flywheels
  • superconducting magnetic (SMES)
  • hydrogen from electrolysis or thermal decomposition of H2O

The spectrum of time scales for different storage applications is illustrated in Figure 5.

  • months: seasonal energy storage
  • 4-8 hours: demand shifting
  • 2 hours: supplemental energy dispatch
  • 15-30 minutes: up- and down-regulation
  • seconds to minutes: solar & wind output smoothing
  • sub-milliseconds: power quality adjustment; flexible AC transmission system (FACTS) devices that shift power within a single cycle

3.1 Transmission Level: Long-distance Issues

The need for transmission capacity to remote areas with prime solar and wind resources is widely recognized.

 

We know where the most attractive resources are – and they are not where most people live.

On the technical side:  

  • Long-distance a.c. power transfers are constrained by stability limits (phase angle separation) regardless of thermal transmission capacity
  • Increased long-distance a.c. power transfers may exacerbate low-frequency oscillations (phase angle and voltage), potentially compromising system stability and security

Simply adding more, bigger wires will not always provide increased transmission capacity for the grid. Instead, it appears that legacy a.c. systems are reaching or have reached a maximum of geographic expansion and interconnectivity that still leaves them operable in terms of the system’s dynamic behavior.

Further expansion of long-distance power transfers, whether from renewable or other sources, will very likely require the increased use of newer technologies in transmission systems to overcome the dynamic constraints.

Needed:

  • Dynamic system modeling on large geographic scale (WECC) providing analysis of likely stability problems to be encountered in transmission expansion scenarios
  • Benefit potential of various d.c. link options
  • Continuing R&D on new infrastructure materials, devices and techniques that enable transmission capacity increases, including:
      o dynamic thermal rating
      o power flow control, e.g. FACTS devices
      o fault current controllers
      o intelligent protection systems, e.g. adaptive relaying
      o stochastic planning and modeling tools
      o new conductor materials and engineered line and system configurations

Brown, Merwin, et al., Transmission Technology Research for Renewable Integration, California Institute for Energy and Environment, University of California, 2008, provides a detailed discussion of these research needs.

With all the research needs detailed in this white paper, the hope is that questions addressed early may help prevent costly mistakes and delays later on. The more aggressively these research efforts are pursued, the more likely California will be able to meet its 2020 goals for renewable resource integration.


Vaclav Smil. Making the Modern World: Materials and Dematerialization

Preface.  I can’t believe I read this book: it is just a long litany of the gigantic amounts of materials we exploit, with no analysis of the implications or of what this will mean for the planet.

I certainly don’t expect anyone to read even this shortened version of his book, but it might be worthwhile to skim for an idea of how much material we’re consuming.

As I point out in my review of the United Nations 2016 report “Global material flows and resources productivity” here, accommodating an additional 2 billion people by 2050 will require material consumption to nearly triple, to 180 billion tonnes a year. If that 180 billion tonnes then grows at a 5% compound rate, in 497 years the entire earth will be consumed, all 5.972 × 10^21 tonnes of it, and we’ll be floating in outer space.
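A quick Python check of that 497-year figure, taking it as the year in which the annual consumption rate alone, growing 5% per year from 180 billion tonnes, would equal the Earth’s mass (cumulative consumption would obviously get there somewhat sooner):

```python
import math

# Years of 5% compound growth until annual material consumption, starting at
# 180 billion tonnes, equals the mass of the Earth (~5.972e21 tonnes).
start_tonnes = 180e9
growth_rate = 1.05
earth_mass_tonnes = 5.972e21

years = math.log(earth_mass_tonnes / start_tonnes) / math.log(growth_rate)
print(f"~{years:.0f} years")   # ~497
```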

After reading this book, it’s hard to believe there’s anything left to exploit, though here it is 5 years later and the earth is still being pillaged.  But given Smil’s gargantuan numbers and the exponential exploitation of just about everything, clearly this will end badly.  The issue of peak sand has been in the news more frequently lately; sand is essential to civilization for making concrete, computer chips, solar PV, and for fracking.

Smil covers a wide range of materials that are essential to civilization that you may not have thought much about, and all the myriad uses of silicon, plastics, nitrogen, aluminum, steel, hydrogen, ammonia, cement, and more.  All of them made possible by oil.  All of them essential for civilization, so if one fails….(Liebig’s law of the minimum).

Nor can we avoid our predicament by recycling. Smil states that “while some metals can be reused indefinitely (albeit with some mass losses), recycling of most materials often entails considerable loss of quality and functionality.”

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Vaclav Smil. 2013. Making the Modern World: Materials and Dematerialization.  Wiley.

An overwhelming majority of people lived in pre-modern societies with only limited quantities of simple possessions that they made themselves or that were produced by artisanal labor as unique pieces or in small batches – while the products made in larger quantities, be they metal objects, fired bricks and tiles, or drinking glasses, were too expensive to be widely owned. The principal reason for this limited mastery of materials was the energy constraint: for millennia our abilities to extract, process, and transport biomaterials and minerals were limited by the capacities of animate prime movers (human and animal muscles) aided by simple mechanical devices and by only slowly improving capabilities of the three ancient mechanical prime movers: sails, water wheels, and wind mills.

An updated inventory, with data for aggregate categories extending until 2006, was published in 2009 (Matos 2009) and data on individual elements, compounds, and materials are updated annually (USGS 2013).

The series does not include materials contained in traded finished goods: given their mass and variety, their tracking would be very difficult.

The ships that made the first Atlantic crossings were remarkably light: a Viking ship (based on a well-preserved Gokstad vessel built around 890 CE) required the wood of 74 oaks (including 16 pairs of oars).

The Egyptian pyramids at Giza are unique: Khufu’s pyramid not only remains the largest stone structure ever built (195 m high, it required 2.5 million stones whose average weight was 2.5 t) but this mass of more than 6 Mt of stone

Romans are credited with the invention of concrete, but this is an inaccurate attribution. Concrete is a mixture of cement, aggregates (sand, pebbles), and water and cement is a finely ground mixture of lime, clay, and metallic oxides fired in kilns at a high temperature. There was no cement in Roman opus cementitium and hence this sturdy mixture, strong enough to build large vaults and domed structures, was not the material now known as concrete. Opus cementitium contained aggregates (sand, gravel, stones, broken bricks, or tiles) and water but its bonding agent was lime mortar (Adam, 1994). The combination of slaked lime and volcanic sand from the vicinity of Puteoli near Mount Vesuvius (pulvere puteolano, later known as pozzolana), produced a superior mixture that could harden even under water and that could be used to build not only massive and durable walls but also spectacular vaults.

The most consequential material development in antiquity was the ability to smelt and to shape a growing array of metals. All of this devastated local and regional wood resources, and copper smelting was a leading cause of Mediterranean deforestation, particularly in Spain and Cyprus.

[We still live in] the Iron Age, with the total consumption of other metals adding up to a small fraction of iron use.

Global population increased by less than 60% during the 500 years between 1000 and 1500 but then more than doubled (from about 460 million to nearly a billion) by 1800 – but remained overwhelmingly rural, with cities accounting for less than 5% of all humanity

Fuel-wasting fireplaces and braziers resulted in a huge demand for fuelwood and charcoal to heat the expanding cities of the pre-coal era. In Paris, the demand rose from more than 400,000 loads of wood in 1735 to more than 750,000 loads in 1789 (about 1.6 Mm3) and the same amount of charcoal, prorating to more than a ton of fuel per capita (Roche, 2000).

Wood remained indispensable not only for building houses and transportation equipment (carts, wagons, coaches, boats, ships) but also—as iron smelting rose in parts of Europe—for charcoal production for blast furnaces (substitution by coke began only during the latter half of the eighteenth century and was limited to the UK). And as Europe’s maritime powers (Spain, Portugal, England, France, and Holland) competed in building large ocean-going vessels—both commercial and naval—the increasing number of such ships and their larger sizes brought unprecedented demand for the high-quality timber needed to build hulls, decks, and masts.

With wooden hulls, masts, and spars being as much as 70% of the total mass (the remainder was divided among ballast, supplies, sails, armaments, and crew) these pioneering vessels contained 60–75 t of sawn timber (Fernández-González, 2006).

Iron production in small blast furnaces required enormous quantities of charcoal and combined with inefficient wood-to-charcoal conversion this led to widespread deforestation in iron-smelting regions: by 1700 a typical English furnace consumed 12 000 t of wood a year (Hyde, 1977).

It was only during the mid-1950s that Alastair Pilkington introduced the molten tin bath that allowed production of very large pieces of flat glass with near-perfect uniformity.

By 1900 the railroads on five continents added up to 775,000 km, with about 250,000 km in Europe, more than 190,000 km in the USA, 53,000 km in Russia, and 30,000 km in the UK (Williams, 2006). Given the wide range of terrains covered by rail tracks it is impossible to estimate a typical volume of bulk construction materials – earth displaced and replaced to create cuts or embankments, stone cut to create tunnels or incisions in mountainsides, and stone quarried to produce gravel for access roads and rail beds – that had to be handled for an average kilometer of new track. Even a highly conservative assumption of 3000 m³/km would result in nearly 2.5 Gm³ (billion cubic meters) of bulk materials associated with the global railway construction of the second half of the nineteenth century. A similarly conservative assumption of at least 2000 t of ballast (crushed stones packed underneath and around ties) per kilometer would translate to at least 1.5, and more likely to 2 Gt, of coarse gravel applied to hold in place the tracks built between 1830 and 1900. Mineral aggregates were also needed in unprecedented volumes for the building of new factories, for the expansion of ports, and for the construction of hard-top roads.

All ties (sleepers) installed during the nineteenth century were wooden; concrete sleepers were introduced only around 1900 but remained uncommon until after World War II. Standard construction practice requires the placement of about 1900 sleepers per km of railroad track, and with a single tie weighing between roughly 70 kg (pine) and 100 kg (oak) every kilometer needed approximately 130–190 t of sawn (and preferably creosote-treated) wood. My calculations show that the rail tracks laid worldwide during the nineteenth century required at least 100 Mt of sawn wood for original construction and at least 60 Mt of additional timber for track repairs and replacements (Smil, 2013).

Rails used during the nineteenth century weighed between 20 and 30 kg/m and, assuming an average of 25 kg/m, the railway construction between 1850 and 1900 would have required about 20 Mt of steel, while replacement would have more than doubled that total. Steel became the favorite material for railway bridges:

Because of their renewability, annually harvested crop residues used to be indispensable materials in all traditional agricultural societies. In many deforested regions they were the only source of household fuel, straw–clay mixtures were made into bricks and straw bundles were used for roof thatching, in some countries peasants wore straw sandals and coats, and cereal straws were used as both feed and bedding for domesticated ruminants

There are no reliable data about the final fate of crop residues: in many agroecosystems they should be directly recycled to maintain soil organic matter and to prevent erosion, but often their mass is judged to be excessive and they are simply burned in fields. This undesirable practice is particularly common in rice-growing regions of Asia. Straw continues to be burned even in some affluent countries, most notably in Denmark where about 1.4 Mt of wheat straw (nearly a quarter of the total harvest) is used for house heating or even in centralized district heating and electricity generation (Stenkjaer, 2009).

A global aggregate of around 40 EJ in 2000 is thus a good consensus value and implies a nearly 70% increase in biomass fuel demand between 1950 and 2000 and a doubling of wood and crop residue harvests during the twentieth century. But the intervening high population growth greatly reduced the average per capita consumption and the huge expansion of fossil fuel extraction cut the biofuel share from 50% in 1900, to less than 10% of global primary energy supply in the year 2000, and (because of inferior efficiencies of wood and straw combustion) to less than 5% in terms of useful final energy supply. Among the major economies, wood has the highest national share of primary energy supplies in Brazil, at about 10%, while its share in affluent nations ranges from negligible values (just 1% in the UK and Spain) to about 20% in Sweden and Finland, with the US share falling from about 4.5% in 1950 to just 2% in 2010

Wooden railway ties, that quintessential nineteenth-century innovation, maintained their high share of the global market throughout the twentieth century. During the 1990s, 94% of America’s ties were wooden.

 

Better treatment of ties prolonged their average lifespan from about 35 years in 1940 to 40–50 years by the year 2000 (James, 2001). European and North American tie markets have been basically limited to replacements, mostly reinforced concrete

But most reinforced concrete has not gone into iconic structures but into ever-increasing numbers of nondescript or outright ugly (or brutal looking) apartment buildings, high rises, factories, garages, roads, overpass bridges, and parking lots.

Much more steel (in the form of sheets and rods) has gone into cars and trucks and new transportation infrastructures on land (ranging from multi-lane highways and bridges to new airports) and into the construction of large oil tankers, bulk carriers (transporting anything from grain to ores), and, starting in the 1960s, container ships and ports. Steel allows particularly captivating design of long suspension bridges with woven cables supporting lengthy road spans:

The transportation sector also became the leading user of aluminum: the combination of light weight and durability made the metal, and its alloys, an ideal choice for applications ranging from cooking pots to rapid train cars,

The fourth most important metal has been zinc, with a consumption of 12.6 Mt in 2010; but the steadily rising demand for lead has brought this formerly more distant number five close to the zinc total: in 2011 the global refined lead supply surpassed 10 Mt for the first time, reaching 10.6 Mt, with about 45% being primary metal and the rest coming from recycled material

With a total of just over 1 billion cars and light and heavy trucks, and with an average mass of 10 kg Pb in automobiles and 13 kg Pb in truck batteries, there was nearly 11 Mt of lead on the world’s roads in 2010.

Silicon makes up nearly 28% of the Earth’s crust, and while it is abundantly present as SiO2 (silica) in sand, sandstone, and quartz and in many silicates ranging from hard feldspars (rock-forming minerals) to soft kaolinite (a layered clay mineral), it is never found in pure, unbound elementary form. But the purest crystalline silicon is the material foundation of modern electronics: intricate webs of semiconductors

Global production of all plastics: 265 Mt in 2010.

we could not have supported the twentieth century global increment of 4.5 billion people consuming increasingly better diets without a huge increase in nitrogen applications.

Global output of synthetic fertilizers (in terms of pure nitrogen) rose from just 150,000 tonnes in 1920 to 3.7 million tonnes in 1950 and 85.13 million tonnes in 2000, an increase of two orders of magnitude (roughly 570 times) in 80 years.

Remarkably, that was not even an exceptionally large gain, as the global production of other new materials saw even greater increases over the course of the 20th century:

  • three orders of magnitude for aluminum (roughly 3600 times, from just 6800 t in 1900 to 24.3 Mt in 2000)
  • four orders of magnitude for plastics (from about 20,000 t in 1925 to 150 Mt in 2000).
  • about 30 times more for paper and for steel (steel rising from 28.3 to 850 Mt)
  • 27 times more copper (from 495 000 t to 13.2 Mt)

In comparison, the global population increased 3.8 times between 1900 and 2000, and the gross world product (in constant monies) rose about 20-fold,
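The growth multiples are easy to verify from the start and end outputs quoted above (my own sketch, not from the book; the tonnages are the ones given in the text):

    series = {
        "N fertilizer": (150_000, 85_130_000),      # t, 1920 -> 2000
        "aluminum":     (6_800, 24_300_000),        # t, 1900 -> 2000
        "plastics":     (20_000, 150_000_000),      # t, 1925 -> 2000
        "steel":        (28_300_000, 850_000_000),  # t, 1900 -> 2000
        "copper":       (495_000, 13_200_000),      # t, 1900 -> 2000
    }
    for name, (start, end) in series.items():
        print(f"{name}: {end / start:,.0f}-fold")
    # N fertilizer ~568-fold, aluminum ~3,574-fold, plastics 7,500-fold,
    # steel ~30-fold, copper ~27-fold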

The annual output of bovine (cattle and water buffalo) hides surpasses 6 Mt, that of sheep and lambskins over 400,000 t, and some 300,000 t of goat and kidskins are turned into leather products annually (FAO, 2011). Production of wool, the most important animal fiber, rose from about 960 000 t in 1950 to 2.9 Mt in 1970, fluctuated afterwards (peaking at 3.3 Mt in 1990), and declined to just below 2 Mt in 2011 (FAO, 2013). In contrast, production of silkworm cocoons has more than doubled during the past 50 years, to about 500 000 t in 2010.

30% of humanity continues to live in structures whose material, locally available clay, has not undergone any elaborate processing and that can be made without any modern energy inputs.

Production of all durable soil- or earth-based materials requires firing in kilns, with temperatures ranging from less than 500 °C for low-quality bricks to as much as 1100 °C for ceramic tiles, 1300 °C for vitrified bricks, and 1400 °C for glass, while the pyro-processing of Portland cement requires 1400–1450 °C (Berge, 2009).

Sequential washing, screening, crushing, and dewatering eliminate any organic matter and clay and produce a specific coarseness of material with low moisture. The best available estimates indicate that, in the USA, 41% of construction sand and gravel ends up as concrete aggregates, a quarter of the total is destined for road building, 13% for construction fill, and 12% for asphaltic concrete and similar mixtures (USGS, 2012). The small remainder is used for filtration, snow and ice control on roads (some municipalities also use salt), railroad ballast and golf courses, as well as for replenishment of eroding beaches

The construction of the US Interstate Highway System was a major component of this rising demand (USGS, 2006). About 60% of these multi-lane highways are paved in concrete whose standard thickness is 28 cm, and hence 1 km of a four-lane highway (each lane is 3.7 m wide) requires about 4150 m3 of concrete. This adds up to roughly 10,000 t of concrete for every kilometer, and the entire system of 73,000 km embodies about 730 Mt of concrete in driving lanes, with more emplaced in shoulders, medians, approaches, and overpasses.
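The per-kilometer figure follows directly from the lane geometry (my own sketch; the 2.4 t/m3 density of paving concrete is my assumption, not a number from the book):

    lanes, lane_width_m, thickness_m = 4, 3.7, 0.28
    volume_m3_per_km = lanes * lane_width_m * thickness_m * 1000
    print(round(volume_m3_per_km))               # 4144 m3, "about 4150 m3"

    density_t_per_m3 = 2.4                       # assumed
    mass_t_per_km = volume_m3_per_km * density_t_per_m3
    print(round(mass_t_per_km))                  # ~9,950 t, "roughly 10,000 t"
    print(round(73_000 * mass_t_per_km / 1e6))   # ~726 Mt, "about 730 Mt"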

Global compilations of CO2 emissions from the cement industry show that its contribution reached almost 5% in 2010 (CDIAC, 2013).

Concrete (particularly its reinforced form) is now by far the most important manmade material both in terms of global annual production and cumulatively emplaced mass.

While this material provides shelter and enables transportation and energy and industrial production, its accumulation also presents considerable risks and immense future burdens. These problems arise from the material’s vulnerability to premature deterioration that results in unsightly appearance, loss of strength, and unsafe conditions that sometimes lead to catastrophic failures, and whose prevention requires expensive periodic renovations and eventually costly dismantling. Concrete, both exposed and buried, is not a highly durable material and it deteriorates for many reasons (AWWS, 2004; Cwalina, 2008; Stuart, 2012). Exposed surfaces are attacked by moisture and freezing in cold climates, bacterial and algal growth in warm humid regions (biofouling recognizable by blackened surfaces), acid deposition in polluted (that is, now most) urban areas, and vibration. Buried concrete structures (water and sewage pipes, storage tanks, missile silos) are subjected to gradual or instant overloading that creates cracks, and to reactions with carbonates, chlorides, and sulfates filtering from above. Poor-quality concrete can show excessive wear and develop visible cracks and surficial staining due to efflorescence in a matter of months. Alternating freezing and thawing damages both the horizontal surfaces (roads, parking lots) that collect standing water and the vertical layers that collect water in pores and cracks. While concrete’s high alkalinity (pH of about 12.5) limits the corrosion of the reinforcing steel embedded in the material, as soon as that cover is compromised (due to cracks or spalling of external layers) the expansive corrosion process begins and tends to accelerate. Chloride attack (on structures submerged in seawater, from deicing of roads, and in coastal areas from NaCl present in the air in much higher concentrations than inland) and damage by acid deposition (sulfate attack in polluted regions) are other common causes of deterioration, while some concretes exhibit alkali-silica and alkali-carbonate reactions that lead to cracking. Unsightly concrete blackened by algae growing in the material’s pores is a common sight in all humid (especially when also warm) environments. Given the unprecedented rate of post-1990 global concretization, it is inevitable that the post-2030 world will face an unprecedented burden of concrete deterioration.

 

This challenge will be particularly daunting in China, the country with by far the highest rate of new concrete emplacement, where the combination of poor concrete quality, damaging natural environment, intensive industrial pollutants, and heavy use of concrete structures will lead to premature deterioration of tens of billions of tons of the material that has been poured into buildings, roads, bridges, dams, ports, and other structures during the past generation. Because maintenance and repair of deteriorating concrete have been inadequate, the future replacement costs of the material will run into trillions of dollars. To this should be added the disposal costs of the removed concrete: some concrete structures have been recycled but the separation of the concrete and reinforcing metal is expensive. The latest report card on the quality of American infrastructure gives poor to very poor grades to all sectors where concrete is the dominant structural material:

with an estimated investment of at least $3.6 trillion needed by 2020 in order to prevent further deterioration (ASCE, 2013).

Transposed to post-2030 China, this reality implies the need for an unprecedented rehabilitation and replacement of nearly 100 Gt of concrete emplaced during the first decade of the twenty-first century, at a cost of many tens of trillions of dollars.

The world’s impervious surface area (built-up, paved) is estimated at about 580 000 km2: that is less than 0.5% of the ice-free surface, but an area equal to Kenya. In per capita terms, high-income countries in northern latitudes had the largest areas of impervious surfaces (Canada 350 m2, USA 300 m2, Sweden 220 m2)

Of course, not all impervious surfaces are concrete but the material accounts for their largest share.

 

In 2010, humanity put in place close to 40 Gt of construction materials (dominated by 33 Gt of concrete and 4.5 Gt of bricks), an equivalent of at least 17 km3. For comparison, the volume of one of the world’s best known mountains, Japan’s Fuji, is about 400 km3.
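Converting those masses into volume requires density assumptions that are mine, not the book’s (about 2.4 t/m3 for concrete and 1.9 t/m3 for bricks):

    concrete_gt, brick_gt = 33.0, 4.5
    rho_concrete, rho_brick = 2.4, 1.9      # t/m3, assumed typical values
    km3 = concrete_gt / rho_concrete + brick_gt / rho_brick   # Gt / (t/m3) = km3
    print(round(km3, 1))      # ~16.1 km3 for the concrete and bricks alone
    print(round(400 / km3))   # Fuji's ~400 km3 is still ~25 times larger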

By the year 2000 the global output of iron ore, pig iron, and steel had reached new global records: at 1 Gt/year iron ore extraction was surpassed only by the output of fossil fuels and bulk construction materials, pig (cast) iron production rose to nearly 600 Mt, and at roughly 850 Mt/year steel output was about 30 times higher than in 1900. That total was also almost 20 times larger than the aggregate smelting of aluminum, copper, zinc, lead, and tin, and in per capita terms it rose from less than 20 to about 140 kg/year. Demand for copper increased by a similar rate (27-fold, to 13.2 Mt) and zinc production rose almost 20-fold, from about 480 000 t to 8.77 Mt (Kelly and Matos, 2013).

Gold output rose nearly 7-fold, but in absolute terms it amounted only to about 2600 t in the year 2000, compared to 18,100 t for silver,

Polyethylene (PE) is by far the most important thermoplastic (it accounted for 29% of the world’s aggregate plastic output, or roughly 77 Mt, in 2010), polypropylene (PP) comes next (with about 19% or 50 Mt in 2010), followed by polyvinyl chloride (PVC, about 12% or 32 Mt in 2010).

In 2010, packaging consumed almost 40% of the total (mostly as various kinds of PE and PP), construction about 20% (mostly for plastic sheets used as vapor barriers in wall and ceiling insulation), the auto industry claimed nearly 8% (interior trim, exterior parts), and the electrical and electronic industry took about 6% (mostly for insulation of wires and cables).

All of these products begin as ethylene. In North America and the Middle East the ethane feedstock for ethylene is separated from natural gas, and low gas prices and abundant supply led to surplus production for export and favored further construction of new capacities: in 2012 Qatar launched the world’s largest LDPE plant and, largely as a result of shale gas extraction, new ethylene capacities are planned in the USA (Stephan, 2012). The dominant feedstock for ethylene in Europe, where prices of imported natural gas are high, is naphtha derived by the distillation of crude oil.

Transparent or opaque bags (sandwich, grocery, or garbage), sheets (for covering crops and temporary greenhouses), wraps (Saran, Cling), and squeeze bottles (for honey), HDPE garbage cans, containers (for milk, detergents, motor oil), and toys (including Lego bricks). Among a myriad of hidden PE applications are HDPE for house wraps (Tyvek) and water pipes; PEX for water pipes and as insulation for electrical cables; and UHMWPE for knee and hip replacements.

Other plastic uses:

  • massive LDPE water tanks
  • indoor–outdoor carpeting and lightweight fabrics woven from PP yarn, used particularly for outdoor apparel
  • insulated wires, water and sewage pipes, food wraps, and car interior trim and body undercoating
  • disposable and surgical gloves, flexible tubing for feeding, breathing and pressure monitoring, catheters, blood bags, IV containers, sterile packaging, trays, basins, bed pans and rails, thermal blankets, lab ware (Smil, 2006, p. 131)
  • construction (house sidings, window frames), outdoor furniture, water hoses, office gadgets, toys

Plastics have a limited lifespan in terms of functional integrity: even materials that are not in contact with earth or water do not remain in excellent shape for decades. Service spans are no more than 2–15 years for PE, 3–8 years for PP, and 7–10 years for polyurethane; among the common plastics only PVC can last two or three decades and thick PVC cold water pipes can last even longer (Berge, 2009).

[In conclusion, then, it is clear] plastics, [and the fossil fuels they are derived from], are indispensable for the functioning of modern civilization.

Industrial Gases

The three most important elements – oxygen, hydrogen, and nitrogen – deserve such ranking because without them we could not produce steel in the most efficient way, and could not have our modern petrochemical and nitrogen fertilizer industries. Other elements and compounds classified as industrial gases include acetylene, argon, carbon dioxide, helium, neon, and nitrous oxide.

Without the synthesis of ammonia (predicated on large-scale supply of pure nitrogen) we would not be able to feed billions of people, and without oxygen we could not produce most of the world’s most important alloy. Ammonia synthesis is the world’s largest consumer of nitrogen: in 2010 it required 130 Mt of the gas (about 112 Gm3 of N2). Nitrogen’s other key uses as a feedstock include ammonia for the synthesis of nitric acid, hydrazines, and amines.

 

Nitrogen cooling of metal parts enables tight assembly fits and, in reverse, it allows the taking apart of closely-fitted parts. With the expansion of modern electronics, nitrogen found a new market in those instances (particularly during soldering) when it is necessary to reduce the presence of oxygen and to maintain a clean atmosphere (by 1985 this use claimed 15% of US consumption).

Ferrous metallurgy is by far the largest user of oxygen: the gas is blown into blast furnaces, EAFs, and BOFs

Chemical syntheses (above all ethylene oxidation) are the second largest market, and oxygen is also used in smelting nonferrous metals (lead, copper, and zinc furnaces), and in the construction material industries (producing a more intense flame and reduced fuel use in the firing of glass, mineral wool, lime, and cement),

Argon, the cheapest truly inert gas, goes into incandescent and fluorescent lights

Hydrotreating, hydrodesulfurization, and hydrocracking of the roughly 3.7 Gt of oil processed in 2010 claimed about 20 Mt of hydrogen (assuming that H2 demand averaged 0.5% of the total crude input, or roughly 60 m3/t).
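The ~20 Mt figure follows from those assumptions (my own sketch; the ~0.09 kg/m3 density of hydrogen is a standard value I am adding, not something stated in the text):

    crude_gt = 3.7
    h2_share = 0.005                      # H2 demand ~0.5% of crude input by mass
    print(crude_gt * 1000 * h2_share)     # 18.5 Mt, i.e. "about 20 Mt"

    # cross-check of the volumetric rate: 5 kg H2 per tonne of crude at ~0.09 kg/m3
    print(round(5 / 0.09))                # ~56 m3/t, i.e. "roughly 60 m3/t"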

Hydrogen

Industrial gases are used in sectors that account for more than half of the world’s economic output and the value of their production has been growing faster than the growth rate of the global economy: in 2000 their global market was worth about $34 billion, a decade later it had nearly doubled as it exceeded $60 billion, and it is heading to about $80 billion by 2015

Liquid hydrocarbons (principally naphtha) are the feedstock for hydrogen production in crude oil refineries where the gas is needed for the catalytic conversion of heavier fractions to lighter fuels, and also in order to comply with ever stricter environmental regulation and to desulfurize the refined products.

Synthesis of ammonia remains the leading user of hydrogen, followed by refinery needs

Post-1950 expansion was rapid, with global ammonia synthesis rising from less than 6 Mt in 1950 to about 120 Mt in 1989 and 164 Mt in 2011 (USGS, 2013).

Two-thirds (65–67%) of all synthesized NH3 has been recently used as fertilizer, with the total global usage more than tripling since 1970, from 33 to about 106 Mt N in 2010. Because ammonia is a gas under ambient pressure, it can be applied to crops only by using special equipment (hollow steel knives), a practice that has been limited to North America. The compound has been traditionally converted into a variety of fertilizers (nitrate, sulfate) but urea (containing 45% N) has emerged as the leading choice, especially in rice-growing Asia, now the world’s largest consumer of nitrogenous fertilizers; ammonium nitrate (35% N) comes second.

Compared to traditional harvests, the best national yields of these three most important grain crops have risen to about 10 t/ha for US corn (from 2 t/ha before World War II), 8–10 t/ha for European wheat (from about 2 t/ha during the 1930s), and 6 t/ha for East Asian rice (from around 2 t/ha).

 

High-yielding US corn now receives, on average, about 160 kg N/ha, European winter wheat more than 200 kg N/ha, and China’s rice gets 260 kg N/ha, which means that in double-cropping regions annual applications are about 500 kg N/ha. According to my calculations, in the year 2000 about 40% of nitrogen present in the world’s food proteins came from fertilizers that originated from the Haber–Bosch synthesis of ammonia (Smil, 2001).

The rising use of nitrogen had to be accompanied by a rising use of the other two essential macronutrients

Agricultural phosphate consumption: 20.3 Mt P in 2010.

Potassium is obtained mostly by underground mining of sylvinite, a mixture of about a third KCl and two-thirds NaCl; Saskatchewan has the largest reserves of the rock and is the leading global producer. Worldwide extraction (expressed in terms of K2O equivalent) rose to nearly 34 Mt by 2010, with Canada (nearly 10 Mt) and Russia (more than 6 Mt) being the largest producers and worldwide exporters. About 85% of all KCl ends up as fertilizer.

Silicon

The raw material for producing silicon is abundant, but an energy-intensive high-temperature deoxidization with carbon – SiO2 + 2C → Si + 2CO (using graphite electrodes in electric furnaces) – is required to yield an element that is 99% pure. But even 99% purity is quite unacceptable for the solar and electronic industries, and hence the metallurgical-grade Si has to undergo elaborate and costly processing that makes it many orders of magnitude purer in order to meet the specifications for producing semiconductors, solar cells, and optical fibers (Föll, 2000).

In 1965, when the number of transistors on a microchip had doubled to 64 from 32 in 1964, Gordon Moore predicted that this rate of doubling would continue,

By 2012 the count reached 5 billion in the Xeon Phi coprocessor (Intel, 2012). Mass deployment of these increasingly powerful microprocessors in conjunction with increasingly capacious memory devices has transformed every sector of modern economies thanks to unprecedented capacities for communication, control, storage, and retrieval of information.
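The two data points quoted here imply the familiar doubling pace (my own arithmetic, not the book’s):

    from math import log2
    doublings = log2(5e9 / 64)     # from 64 transistors (1965) to ~5 billion (2012)
    print(round(doublings, 1))                   # ~26.2 doublings
    print(round((2012 - 1965) / doublings, 1))   # a doubling roughly every 1.8 years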

Global semiconductor sales rose from just $4 billion in 1977 to $292 billion in 2012 (SIA, 2013).

During the first decade of the twenty-first century, electronics ceased to be the major consumer of high-grade silicon as most of that material now ends up in PV cells.

There are hundreds of PV-powered satellites used for weather and Earth monitoring, telecommunication, and spying;

The best commercially available PV modules are rated at conversion efficiencies of 19–22% (NREL, 2013; Solarplaza, 2013). For decades, PV cells were made with off-grade polycrystalline material that was not good enough for electronic applications, but as the heavily subsidized market for PV installation rose from less than 100 MW/year in 1995 to more than 10 GW/year in 2009, it was necessary to divert increasing amounts of purified polycrystalline metal into the solar cell industry. In 1997 the industry used only 800 t of such metal, by 2009 it required 69,100 t, three times as much as consumed by electronics, to produce about 44,500 t of solar cells, mostly by the casting of polycrystalline metal (Takiguchi and Morita, 2011).

While some metals can be reused indefinitely (albeit with some mass losses) recycling of most materials often entails considerable loss of quality and functionality.

Increasing burdens of environmental pollution prompted a critique of economic thinking that tended to ignore such matters. Ayres et al. (1969, pp. 283–84), describing the reality in clear physical terms, noted that such omissions “may result in viewing the production and consumption processes in a manner that is somewhat at variance with the fundamental law of the conservation of mass,” and pointed out the obvious consequences for the environment, namely that in the absence of trade and net stock accumulation “the amount of residuals inserted into the natural environment must be approximately equal to the weight of basic fuels, food, and raw materials entering the processing and production system, plus oxygen from the atmosphere.” But it took nearly two decades before this admonition was transformed into the first fairly comprehensive studies of material requirements on a national level, as it was only during the late 1990s that several research teams began to reconstruct direct material inputs (DMIs) as well as outflows, and total material requirements (TMRs) of the world’s leading affluent economies. Fischer-Kowalski et al. (2011)

There are other approaches to the investigation of material flows; one attempts to trace the life-cycles of individual commodities on a national, regional, or global level; another looks at the energy costs of commodities and products; and yet another traces the environmental impacts of their production, use, and abandonment (or recycling). Life-cycle assessments (or analyses, in either case the acronym is LCA) have been performed at different scales for many elements and compounds – for example, chlorine by Ayres (2008) and polyvinyl chloride (PVC) by the European Commission (EU, 2004) – and for products ranging from aluminum cans (Mitsubishi, 2005) to steel truck wheels (Alcoa, 2012).

Limiting the account to DMI will greatly underestimate the overall resource demand in all modern economies engaged in intensive international trade, and particularly in such major powers as the USA, Germany, or Japan that rely on imports for large shares of many materials. Correcting this by the inclusion of net imports of all raw materials is only a partial (and increasingly deficient) solution, because many metals and other minerals are not imported in the form of ores or concentrates or bulk shipments but are instead embodied in finished products. Identifying the specific material content of these products (even their limited inventory would run to many hundreds of individual machines, tools, components, and consumer items) presents a major challenge – but the adjustment should not end there, as many items imported from a particular country contain components made of materials in a number of other countries that, in turn, imported parts or raw materials from yet another country or, more likely, a set of countries.

The global on-line database and most of the global and national studies of material flows have been produced by a small group of researchers from Austria and Germany, and most of them have been published in just two sources, the Journal of Industrial Ecology and Ecological Economics.

I question the utility of constructing these all-encompassing national or global flow accounts because I am not sure what revealing conclusions can be derived from these summations of disparate input and output categories besides the obvious confirmations of substantial differences in national aggregates and in the rates of long-term growth. Of course, the maximalist aggregates of the all-encompassing variety also have an undoubted heuristic and curiosity value, and they do convey the truly massive scale of the global mobilization of raw materials.

Half a dozen studies of global material extraction at the beginning of the twenty-first century, that include all harvested biomass, all fossil fuels, ores and industrial minerals, and all bulk construction materials (but exclude hidden flows, water, or oxygen), cluster fairly tightly around 50 Gt/year. This is hardly surprising given the fact that these studies derive the flows from the same sets of data:

with roughly 18 Gt coming from biomass, 10 Gt from fossil fuels, nearly 5 Gt from ores and other minerals, and more than 17 Gt from bulk construction materials.

given the uncertainties in estimating the mass of bulk construction minerals (above all for the extraction of sand and gravel) that account for at least two-thirds of all material flows, the mass of 0.5 Gt is well within the minimal range of estimation error

and the total of roughly 25 Gt thus remains my preferred aggregate of directly used global materials in the year 2000. That total prorates to just over 4 t of materials per person (the global population was 6.08 billion in the year 2000), with at least 2.5 t (and perhaps as much as 3 t) accounted for by bulk construction materials and only about 0.8 t attributed to all metals and nonmetallic minerals. These rates compare to nearly 1 t of food and feed crops (fresh weight), close to 0.5 t of wood (excluding fuelwood), and about 1.7 t of fossil fuels (roughly 0.8 t of coal, 0.6 t of crude oil, and 0.3 t of natural gas) extracted for every inhabitant of the world in the year 2000.
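Prorating the preferred aggregate (my own quick check of the arithmetic, not from the book):

    total_gt, population = 25.0, 6.08e9
    print(round(total_gt * 1e9 / population, 1))   # ~4.1 t of materials per person in 2000

    # the fossil fuel figure is consistent with its listed components
    print(round(0.8 + 0.6 + 0.3, 1))               # 1.7 t/capita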

 

pre-1950 global totals are nothing but questionable estimates, and even the recent aggregates depend critically on what is included. For example, Krausmann et al. (2009) put the worldwide biomass extraction (crops, their residues, roughages, and wood) at 19.061 Gt in 2005, while in my detailed account of phytomass harvest (Smil, 2013) I showed that in the year 2000 the total for woody phytomass alone could be anywhere between 2 and 13.4 Gt depending on the boundaries chosen for the analysis.

Consequently, there can be no single accurate total, as the search for global totals will be always determined by assumptions, and even if everybody agrees on common boundaries the basic results will be largely predictable. Physical realities dictate that the mass of sand and gravel used to emplace and maintain modern concrete-based infrastructures must be substantially greater than the mass of metallic ores; and that the mass of iron, a metal of outstanding properties produced from abundant ores with a moderate energy intensity, must be orders of magnitude higher than the mass of titanium, an even more remarkable metal but one derived from relatively rare ores with a great energy expense. At the same time, it must be kept in mind that data for inexpensive, readily available bulk construction materials (particularly for sand and gravel) that are usually sold not far from their points of extraction are generally much less reliable than the statistics for metal ores and industrial nonmetallic minerals that are globally traded.

the world now consumes in one year nearly as much steel as it did during the first post-World War II decade, and (even more incredibly) more cement than it consumed during the first half of the twentieth century.

little guidance for future decision-making (beyond the obvious point that the recent high growth rates cannot continue for many decades).

Useful insights can be gained from two kinds of finer focus: through closer examination of material flows on the national level, and by putting more restrictive analytical boundaries on the set of examined materials and tracing the flows of individual commodities with some clear goals in mind. This can be done by detailing their uses, dispersal, and persistence in a society, and by attempting life-cycle analyses of those materials that circulate on a human timescale, that is by quantifying their direct and indirect requirements for energy or by identifying and assessing the environmental impacts of their production and use.

During the twentieth century, natural growth potentiated by immigration increased the US population nearly 4-fold (3.7), and the country’s GDP (expressed in constant monies) was 26.5 times higher in 2000 than in 1900: not surprisingly, the combination of these two key factors drove absolute consumption increases in all material categories, with the multipliers ranging from 1.7 for materials produced by agriculture to more than 90 for nonrenewable organics (and 8 for primary metals, 34 for industrial minerals, and 47 for construction materials). The importance of renewable materials (wood, fibers, leather) fell from about 46% of the total mass (when bulk construction materials are included) or 74% (with stone, sand, and gravel excluded) in 1900 to just 5% (or 22%) for the analogous rates in the year 2000, a trend that was expected given the increasing reliance on light metals and plastics. Aggregate wood demand rose less than 1.4-fold during the twentieth century, but consumption of primary paper and paperboard multiplied about 19 times and was supplemented by rising quantities of recycled paper: when the data collection in the latter category began in 1960, recycled paper accounted for about 24% of all paper and paperboard use, but by the year 2000 its share was up to 46% even as large quantities of waste paper are exported (in 2000 this amounted to about 22% of all domestic collections), primarily to China (FAO, 2013).

The fact that bulk minerals used in construction (crushed stone, sand, and gravel) have increasingly dominated America’s annual flows during the twentieth century – in 1900 they accounted for 38% of all materials, by 2006 their share reached 77% – is not surprising given the enormous expansion of material-intensive transportation infrastructures after World War II. Construction of the Interstate system began in 1956 and required the building of many new bridges (USDOT, 2012), while the introduction of commercial jetliners led to the rapid expansion of airports, a process recently repeated in China. Large demands for bulk construction materials also came from the building of new container ports, stream regulation (above all in the Mississippi basin), electricity generation (hydroelectric dams, nuclear power plants), new factories, commercial real estate (warehouses, shopping centers), and housing. The mass of construction materials used in the USA rose about 7-fold between 1900 and 1940 and then doubled between 1945 and 1951, doubled again by 1959 to 1.1 Gt, but the next doubling, to 2.26 Gt, took until 1997.

End-use data indicate that the largest identifiable category of sand and gravel consumption (about a fifth of the total surpassing 1 Gt/year) is as aggregate added to cement in the production of concrete, followed by road base and coverings, fill, and as aggregate added to asphaltic and bituminous mixtures; but unspecified uses make up the largest category, accounting for about a quarter of the total. Differently-sized aggregates used in the production of concrete are also the leading final uses for crushed stone, and railroad ballast is another indispensable application. With a ballast minimum depth of 15 cm and up to 50 cm for high-speed lines, and an overall width of roughly 4.5 m, this amounts commonly to more than 1000 m3/km or (with density of 2.6 t/m3) to around 3000 t of crushed stone per kilometer.
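The ballast figures follow from the cross-section given above (my own sketch; the 0.25 m depth is an assumed mid-range value within the stated 0.15–0.5 m limits):

    width_m, depth_m = 4.5, 0.25          # depth assumed mid-range of 0.15-0.5 m
    volume_m3_per_km = width_m * depth_m * 1000
    print(volume_m3_per_km)               # 1125 m3, "more than 1000 m3/km"
    print(round(volume_m3_per_km * 2.6))  # ~2925 t at 2.6 t/m3, "around 3000 t" per km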

In comparison to construction sand, the total use of industrial sand is minuscule but qualitatively very important. Annual consumption has recently fluctuated around 25 Mt/year: about 40% of this total is pure silica used in glassmaking, and a fifth goes to foundries to make moldings and refractories as well as silicon carbide for flux and metal smelting. Smaller but functionally irreplaceable uses include abrasives used in blasting and sanding, sands for water filtration, and sands for creating artificial beaches and sporting areas. A new, and rapidly rising, market is for the special kinds of sands used in hydraulic fracturing of gas- and oil-bearing shales, well-packing, and cementing.

 

US consumption of industrial minerals reached 390 Mt in 2010. Its largest constituents include salt (about 55 Mt in 2010), phosphate rock (about 30 Mt), nitrogen (about 14 Mt), and sulfur (about 11 Mt). America’s salt consumption is remarkably high (20% of the world total in 2010); the two dominant uses (each about 18 Mt in 2010) are the production of alkaline compounds and chlorine, and road deicing; amounts an order of magnitude smaller (both about 1.8 Mt) are used in food production and in animal feed, and more than 1 Mt/year is used in water treatment (in water-softening to remove mineral ions).

Consumption of 10.3 Mt of primary metals in 1900 … by the year 2000, with total metal consumption at nearly 144 Mt, they supplied 44%.

I must reiterate that actual domestic US consumption of virtually all metals is, often significantly, higher than shown by the USGS balances because substantial amounts of various metals reach the country embedded in products, and that presence is excluded from nationwide aggregates of apparent domestic consumption. Major components of these unaccounted flows include not only such leading metals as steel and aluminum in cars, airplanes, machinery, and appliances, but also such toxic heavy metals as lead in automotive lead-acid (PbSO4-H2SO4) batteries and cadmium in rechargeable Ni-Cd batteries.

In 1900, the total consumption of nonrenewable organics (mostly paving materials and lubricating oils) was less than 2 Mt, but subsequent extension of paved highways, mass ownership of cars, the rise of the trucking industry, and, above all, rapid expansion of crude oil- and natural gas-based synthetic materials made this the fastest growing material category in the USA: by 1950 the flow surpassed 30 Mt and by 1999 it had reached 150 Mt, with nearly two thirds being hydrocarbon feedstocks (naphtha and natural gas) used to make ammonia, the starting compound for all synthetic nitrogen fertilizers. The second largest input by mass is asphalt and road oil; consumption of these paving and surfacing materials rose from less than 10 000 t in 1900 (when few paved roads existed outside cities) to 100 times that mass in less than two decades, it reached more than 10 Mt by 1950 and, until the 2008 recession, it was on the order of 30 Mt/year.

In aggregate terms, the USGS accounts translate to a domestic consumption of about 1.9 t/capita in 1900, 5.6 t in 1950, and 12 t/capita in the year 2000; after leaving out bulk construction materials these rates are reduced, respectively, to 1.2, 2.3, and 3 t/capita, which means that the use of construction materials rose from about 0.7 t/capita in 1900 to 3.3 t in 1950, and 9 t in the year 2000. Wood is the only material category showing a century-long decline of per capita consumption, from about 800 kg in 1900 to about 400 kg by 1950 and about 300 kg/capita in 2000. Materials produced by agriculture rose slightly from 40 to 47 kg/capita during the first half of the twentieth century, but afterwards they declined to just 18 kg

Consumption of all metals has shown a similar pattern, rising from 135 kg in 1900 to 515 kg/capita in 1950, but by the year 2000 it was essentially the same, at 510 kg/capita (once more a somewhat misleading rate given the country’s large post-1970 net imports of cars, airplanes, and machinery).

In comparison with the USA, the EU-27 has similar metal consumption (0.4 vs. 0.5 t/capita) but a much lower demand for construction minerals (4.6 vs. nearly 10 t/capita), a difference that is due mostly to the continent’s much higher population density and more compact transportation infrastructure.

 

According to official statistics, between 1980 and 2010 China’s annual rate of economic growth was below 5% only three times (1981, 1989, and 1990) and was above 10% 16 times, while the average for the three decades was 9.6% (IMF, 2013). This implies a doubling every 7.3 years, resulting in a 2010 GDP (in constant prices) 17.8 times higher than in 1980. In per capita terms, the multiple was still roughly 13-fold (NBSC, 2013).
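The compound-growth arithmetic is easy to reproduce (my own sketch; the stated 7.3-year doubling and 17.8-fold multiple correspond to an average rate slightly above the 9.6% headline figure):

    from math import log
    rate = 0.096
    print(round(log(2) / log(1 + rate), 1))   # ~7.6-year doubling time at 9.6%/year
    print(round((1 + rate) ** 30, 1))         # ~15.6-fold over 1980-2010
    print(round(1.10 ** 30, 1))               # ~17.4-fold if the average were 10%/year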

The pace of China’s frenzied concretization and its overall scale has been stunning. In 1980 the country produced just short of 80 Mt of cement, a decade later it had more than doubled the total to about 210 Mt, by the year 2000 it rose to 595 Mt and by 2010 that total had tripled and reached 1.88 Gt (nearly 24 times the 1980 total and 57% of the global production for less than 20% of the world’s population), and it rose further to 2 Gt in 2011 (NBSC, 2013).

such a pace of construction guarantees that a substantial share of newly poured concrete will be of substandard quality, a conclusion confirmed by the obvious dilapidation of China’s concrete structures built during the late 1980s and the early 1990s,

The quality of concrete used to construct many of China’s new dams (by 2010 the total stood at more than 87,000 structures of all sizes including the world’s largest dam, Sanxia) is of particular concern, even more so as thousands of them are located in areas of repeated, vigorous seismic activity.

raw steel output rose 17.2 times between 1980 and 2010, from 37.1 Mt in 1980 to 637.4 Mt in 2010, when it accounted for nearly 45% of the global output (WSA, 2013). But as the extraction of iron ores increased about 14 times (from about 75 Mt to 1.07 Gt), an increasing share of this output has come from imported materials. In 2010, China imported 618 Mt of iron ore, more than a third of the total input into its blast furnaces, and it has been by far the largest iron ore importer (nearly 60% of the global total and close to 70% of the domestic demand), with Australia and Brazil being the major suppliers. And while the country has been the near-monopolistic exporter of rare earths and a major exporter of molybdenum and magnesium (also of graphite), it has also been the world’s largest importer of bauxite (44 Mt in 2010) and, at nearly 1.2 Mt in 2010, of copper ores and concentrates.

the material category that has seen the greatest production increase has been the synthesis of plastics, with a nearly 70-fold rise between 1980 and 2010. Of course, that large multiple is due to a rapid development from a very low base (less than 900 000 t in 1980) but the absolute output of 62 Mt in 2010 was larger than the production of about 57 Mt in EU27 (Europe Plastics, 2011).

The need to secure more food and better nutrition for a still-growing population has led to substantial gains in the production (and imports) of fertilizers. New Haber–Bosch plants were added to raise the output of nitrogenous fertilizers from 10.3 Mt N in 1980 to 45.2 Mt in 2010, but in 2009 the record output was 48.6 Mt N, a nearly 5-fold increase in three decades, while production of phosphate fertilizers posted a roughly 8-fold increase to 19 Mt. Disparity between N and P growth rates is explained by China’s attempt to move away from excessive nitrogen uses toward more balanced fertilization with N:P:K ratios improving the efficiency of applications. As a result, China has been buying record amounts of potash from Canada. China has also become a prominent importer of materials for recycling, and the USA has been their greatest supplier. In 2010, Chinese imports of waste paper were nearly 25 Mt/year, with the USA as the leading exporter (Magnaghi, 2011). Similarly, in 2010 China bought almost 6 Mt of scrap steel – becoming the world’s third largest importer of the material after Turkey and South Korea (WSA, 2013) – with the USA again as the leading supplier. This trade is certainly one of the most remarkable indicators of changing national fortunes, as the world’s largest affluent economy has become the primary supplier of waste materials to the second largest economy experiencing a rapid rate of growth. In 2011 the USA exported more than $11 billion of waste and scrap (materials belonging to the 910 category of the North American Industry Classification System) to China. This was less than the exports of transportation equipment or agricultural products – but more than the exports of all nonelectric machinery and more than five times as much as the shipments of all electrical equipment and appliances (Smil, 2013). China is also the world’s largest importer of plastic and electronic waste.

Before any materials can start flowing through economies, energies must flow to power their extraction from natural deposits or their production by industrial processes ranging from simple mechanical procedures to complex chemical reactions. These energies belong to two distinct streams: direct flows of fuels and electricity used to energize the production processes (producing mechanical energy, heat or pressure and lighting, and electronically controlling a process) and indirect flows (embedded energies) needed to produce the requisite materials, machines, equipment, and infrastructures.

the energy needed to smelt a ton of iron from its ore in a blast furnace (as coke and supplementary coal, gas, or oil) will be vastly greater than the energy embedded in the furnace’s steel, lining, and charging apparatus and prorated per unit of output. Modern blast furnaces can operate without relining for two decades, and during that time can produce tens of millions of tons of hot metal. Similarly, the energy needed to create the combination of high temperature and pressure that is required by many chemical syntheses will be far greater than any prorated energy embedded in the initial construction of reaction vessels, pipes, boilers, compressors, and computerized controls. This explains why the second category of flows is almost always neglected

Most appraisals of energy costs have followed one of two distinct approaches: either a quantification based on input–output tables of economic activities, or a process analysis that traces all important energy flows needed to produce a specific commodity or manufactured item. In the first instance, relevant prices are used to convert values of energy flows in a matrix of economic inputs and outputs (for major industrial sectors or, where available, disaggregated to the level of product groups or individual major products) to energy equivalents in order to assemble direct and indirect energy requirements. In contrast to this aggregate approach, process analysis can focus on a particular product in specific circumstances as it identifies all direct energy inputs and as many relevant indirect needs as possible, a process that is in itself quite valuable as a management tool. As with all appraisals that deal with complex inputs and encompass sequential processes, the setting of analytical boundaries will affect the outcome of process analysis. In most cases, the truncation error inherent in counting only direct energy inputs (purchased fuels and electricity) will be small, but in some instances it could be surprisingly large. For example, Lenzen and Dey (2000) found the energy costs of Australian steel to be 19 GJ/t with process analysis, but 40.1 GJ/t with input–output analysis.

Another complication is introduced by the increasing shares of globally traded commodities and products: in some cases the additional energies required for the import of raw materials and the export of finished products will be a negligible share of the process energy cost, in other cases their omission will cause a serious undercount. For example, two identical-looking steel beams used at two construction sites in New York may have two very different histories: the first being a domestic product made by the scrap-EAF (electric arc furnace)-continuous casting route in an integrated operation in Pennsylvania, the other coming from China, where Australian iron ore and coke made from Indonesian coal were smelted in a blast furnace in one province and the beams were made from ingots in another one, before being loaded for a trans-Pacific shipment and then transported by railroads across the continent. Approximate energy costs of long-distance transportation can be easily calculated by assuming the following averages (all per ton-kilometer for easy comparability, ranked from the highest rates to the lowest): air transport 30 MJ, diesel-powered trucks (depending on their size) mostly between 1 and 2.5 MJ, diesel-powered trains 600–900 kJ, electricity-powered trains 200–400 kJ, smaller cargo ships 100–150 kJ, and large tankers and bulk cargo carriers just 50 kJ/tkm (Smil, 2010). Obviously, energy-intensive air shipments will be restricted to high value-added products, while bringing iron ore by a bulk carrier from a mine 3000 km from a Chinese blast furnace would entail an energy expenditure equal to less than 10% of the overall requirements for steel production – while the energy cost of shipping construction stone from Europe or Asia to the USA may be equal to 25–50% of the energy used to cut and polish it. These realities should be kept in mind when examining and comparing the values reviewed in this section. Energy costs – presented here in a uniform way as gigajoules per ton (GJ/t) of raw material or product – cannot, like any other analytical tool, be used alone to guide our choices of material use without concurrent consideration of affordability, quality, durability, or esthetic preference; if the latter were ignored, concrete, a material of low energy intensity, would rule the modern world even more than it actually does.
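Those per-tkm rates make the iron-ore example easy to reproduce (my own sketch; the mid-range values are simply taken from the list above):

    rates_mj_per_tkm = {
        "air freight": 30,
        "diesel truck": 1.75,        # mid-range of 1-2.5 MJ/tkm
        "diesel train": 0.75,        # 600-900 kJ/tkm
        "electric train": 0.3,       # 200-400 kJ/tkm
        "small cargo ship": 0.125,   # 100-150 kJ/tkm
        "large bulk carrier": 0.05,  # 50 kJ/tkm
    }
    for mode, rate in rates_mj_per_tkm.items():
        print(f"{mode}: {rate * 3000 / 1000:.2f} GJ/t over 3000 km")
    # the bulk-carrier case is 0.15 GJ/t, a trivial fraction of the ~20 GJ/t
    # needed to make the steel itself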

The energy cost of market-ready lumber (timber) is low, comparable to the energy cost of many bulk mineral and basic construction materials produced by their processing. Tree felling, removal of boles from the forest, and their squaring and air drying add up to no more than about 500 MJ/t, and even with relatively energy-intensive kiln-drying (this operation may account for 80–90% of all thermal energy) the total ranges from as little as 1.5 GJ/t to more than 3.5 GJ/t (including cutting and planing) for such common dimensional construction cuts as the 2 × 4 studs used for framing North American houses.

The low energy cost of wood is also illustrated by the fact that, in Canada, the energy cost of wood products represents less than 5% of the cost of the goods sold (Meil et al., 2009). Energy costs on the order of 1–3 GJ/t are, of course, only small fractions of wood’s energy content that ranges from 15 to 17 GJ/t for air-dry material. Obviously, the energy cost of wood products rises with the degree of processing (FAO, 1990). Particle board (with a density between 0.66 and 0.70 g/cm3) may need as little as 3 GJ/t and no more than 7 GJ/t, with some 60% of all energy needed for particle drying and 20% for hot pressing.

The energy cost of papermaking varies with the final product and, given the size and production scale of modern papermaking machines (typically 150 m long, running speeds up to 1800 m/min., and annual output of 300 000 t of paper), is not amenable to drastic changes (Austin, 2010). Unbleached packaging paper made from thermo-mechanical pulp is the least energy-expensive kind (as little as 23 GJ/t); fine bleached uncoated paper made from kraft pulp consumes at least 27 GJ/t and commonly just over 30 GJ/t (Worrell et al., 2008). Most people find it surprising that this is as much as for high-quality steel.

Recycled and de-inked newsprint or tissue can be made with less than 18 GJ/t, but the material is often down-cycled into lower quality packaging materials.

Construction aggregates whose production requires only extraction and some physical treatment (sorting, sizing, crushing, milling, drying) generally have very low energy costs; higher fuel and electricity use comes only with the pyro-processing required to make bricks, tiles, glass, and, above all, cement. The energy cost of natural stone products is low, usually just around 500 MJ/t for quarried blocks, somewhat less for crushed stone, but twice as much for roughly cut or split stones,

The energy costs of sand extraction and processing can easily vary by a factor of 2, but even the higher costs leave them in the category of the least energy-intensive materials when compared in mass terms. The simplest mining and preparation sequence to produce fairly clean, uniformly sized sand may require no more than 100 MJ/t, and even more costly gravel sorting (or crushing as needed) should have an energy cost well below 500 MJ/t. The highest energy input is required for the preparation of the industrial sand that is used in glassmaking, ceramics and refractory materials, metal smelting and casting, paints, and now also increasingly in hydraulic fracturing of gas- and oil-bearing shales: its moisture must be reduced (in heavy-duty rotary or fluidized-bed dryers) to less than 0.5%, and this may consume close to 1 GJ/t. Bricks fired in inefficient rural furnaces in Asia may require as much as 2 GJ/t, while just 1.1–1.2 GJ/t is typical for Chinese enterprises (Global Environmental Facility, 2012; Li, 2012) and US production of high-quality bricks requires 2.3 GJ/t (USEPA, 2003). Cement production is fairly energy-intensive because of the high temperatures required for the thermo-chemical processing of the mineral charge. Limestone supplies Ca and other oxides, and clay, shale, or waste materials provide silicon, aluminum, and iron; in order to produce a ton of cement about 1.8 t of raw minerals are ground and their mixture is heated to at least 1450 °C.

This sintering process combines the constituent compounds, and the resulting clinker is then ground, with the addition of other materials, to produce fine Portland cement. Fly ash (captured in coal-fired power plants) or blast furnace slag can be used to lower the amount of clinker. Additional energy is needed to rotate the large kilns. These inclined (3.5–4°) metal cylinders are commonly around 100 m, and up to 230 m, long, with diameters of 6–8 m, and they turn typically at 1–3 rpm, with the charged raw material moving down the tube against the rising hot gases (Peray, 1986; FLSmidth, 2011). Disaggregation of all energy inputs shows that the extraction of the minerals (limestone, clay, shale) and their delivery to cement kilns is a minimal burden. Kiln feed preparation is electricity-intensive, as crushing and grinding of the charge consume about 25–35 kWh/t and the grinding and transportation of the finished product (clinker) claim at least 32–37 kWh/t (Worrell and Galitsky, 2008). This leaves the bulk of energy consumption for the pyro-processing, a sequence of water evaporation, decomposition of clays to yield SiO2, decomposition of limestone or dolomite (calcination, which releases CO2), formation of belite (Ca2SiO4, making up about 15% of clinker by mass), and finally sintering, the production of alite (Ca3SiO5, which makes up some 65% of the clinker mass) (Winter, 2012). Total energy use in cement production varies with the principal fuel used, the origin of the electric supply, and the method of production. Average specific energy consumption in the cement industry has declined as the more efficient dry process replaced the old wet method. The highest electricity consumption in the dry process is for the grinding of raw materials and clinker and for the kiln and the cooler, in aggregate more than 80% of the total that averages mostly between 90 and 120 kWh/t of cement (Madlool et al., 2011). Heating of dry kilns (mostly with coal, petroleum coke, and waste materials in the USA, and with coal in China) consumes mostly between 3 and 4 GJ/t; the range is 3.0–3.5 GJ/t for kilns with four or five stages of preheating, while a six-stage process could work with as little as 2.9–3 GJ/t (IEA, 2007; Worrell and Galitsky, 2008).
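Summing the dry-process inputs gives a feel for where the roughly 3–4.5 GJ/t totals cited in the next paragraph come from (my own sketch; the 40% conversion efficiency used to express the electricity as primary energy is my assumption, not the book’s):

    kiln_heat_gj = (3.0, 4.0)            # fuel for pyro-processing, GJ/t of cement
    electricity_kwh = (90, 120)          # grinding, kiln drive, cooler, kWh/t

    elec_final = [k * 3.6e-3 for k in electricity_kwh]     # ~0.32-0.43 GJ/t as final energy
    elec_primary = [e / 0.4 for e in elec_final]           # ~0.8-1.1 GJ/t as primary energy
    print([round(x, 2) for x in elec_final])
    print([round(kiln_heat_gj[i] + elec_primary[i], 1) for i in range(2)])
    # roughly 3.8-5.1 GJ/t, the same order as the best-practice and
    # low-income-country totals quoted below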

World best practice can now produce Portland cement with total primary energy inputs of 3.3–3.5 GJ/t, while the rates for fly-ash cement and blast furnace slag cement can be as low as, respectively, 2.4 and 2.1 GJ/t (Worrell et al., 2008). In contrast, many plants in low-income countries still need around 4.5 GJ/t for Portland cement.

Energy requirements for glass production range mostly between 4 and 10 GJ/t, with about 7 GJ/t being a typical value

the energy cost of ceramic products rises with the degree of pyro-processing and the quality of items: unglazed tiles need only 6 GJ/t, glazed tiles up to 10 GJ/t, fine ceramics as much as 70 GJ/t,

The usual approach for quantifying the energy costs of the iron and steel industry is to include the energy costs of coke, pelletizing and sintering of ore, iron and steel making, cold and hot rolling, and galvanizing and coating; this leaves out the energy costs of coal and ore mining and transportation, of such energy-intensive inputs as electrodes and refractories, as well as the embodied energy cost of scrap metal. Analyses performed (more or less) within these boundaries show that average energy consumption in the global steel industry was about 20 GJ/t by the year 2000 (Yellishetty et al., 2010).

A review of best industry practices for the entire iron–steel sequence ended up with 16.3–18.2 GJ/t for the blast furnace-BOF-continuous (basic oxygen furnace) casting route, 18.6 GJ/t for direct iron reduction followed by EAF steelmaking and thin-slab casting, and 6 GJ/t for melting scrap metal in EAF and thin-slab casting (Worrell et al., 2008).

A comparative analysis of the energy costs of the iron and steel industry in the USA and China illustrates this reality: it shows that the aggregate input in 2006 was, respectively, 14.9 GJ/t and 23.11 GJ/t of crude steel (Hasanbeigi et al., 2012).

Electricity's share is 20% of the total primary energy used in US steelmaking, but only 10% in China's industry. Taking 25 GJ/t as a mean would suggest that in the year 2010 the global iron and steel industry needed roughly 36 EJ of energy, or about 6% of the worldwide consumption of primary commercial energy. For comparison, Allwood and Cullen (2012) put the global energy use in steelmaking at 38 EJ.

Aluminum production is much more energy intensive than making steel. Fuel and electricity consumption in the Bayer process, between 10 and 13 GJ/t of alumina, is a small share of an overall cost dominated by electrolysis, which is preferably done with the cheapest kind of electricity, produced in large hydro stations (hydropower supplies about 60% of the industry's needs worldwide).

The IEA put the weighted energy cost of the entire sequence at 175 GJ/t in 2004, and a review of best industry practices came up with a nearly identical rate of 174 GJ/t of metal (Worrell et al., 2008). That is nearly twice the energy intensity of copper and almost 10 times as much as the least energy-intensive production of steel via the blast furnace–BOF–continuous casting route. Global 2010 production of 40.8 Mt of Al would have thus required about 7.1 EJ, less than 1.5% of the world's total primary commercial energy supply. The metal's high electricity requirements steer primary aluminum production to countries with abundant hydro resources, and the four largest such producers (China, Russia, Canada, and the USA) account for half of the world output. Secondary aluminum requires only 7.6 GJ/t (for remelting only). As already noted, titanium has the highest energy cost among the other relatively commonly used metals (400 GJ/t), followed by nickel at about 160 GJ/t and copper (global average of 93 GJ/t), while chromium, manganese, tin, and zinc have very similar energy costs of about 50 GJ/t (IEA, 2007). Not surprisingly, the very low metal concentrations of even the best exploited deposits raise the energy intensities of silver and gold orders of magnitude above those of common metals: the average for silver is about 2.9 TJ/t (some 30 times higher than for copper) and for gold it is 53 TJ/t, roughly 300 times the energy cost of aluminum.
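The aluminum and precious-metal comparisons above are straightforward arithmetic; a minimal sketch using only the intensities and outputs quoted in these notes:

```python
# Reproducing the aluminum and precious-metal comparisons quoted above.
GJ_PER_EJ = 1e9   # 1 EJ = 1e18 J = 1e9 GJ

al_intensity_gj_per_t = 175   # weighted energy cost of primary aluminum
al_output_t = 40.8e6          # global primary aluminum output, 2010

al_total_ej = al_intensity_gj_per_t * al_output_t / GJ_PER_EJ
print(f"aluminum: {al_total_ej:.1f} EJ")   # ~7.1 EJ, <1.5% of a ~500 EJ TPES

# Energy-intensity ratios for silver and gold (TJ/t converted to GJ/t)
copper, silver, gold = 93, 2.9e3, 53e3     # GJ/t
print(f"silver vs copper: {silver / copper:.0f}x")               # ~30x
print(f"gold vs aluminum: {gold / al_intensity_gj_per_t:.0f}x")  # ~300x
```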

A World Bank review of polyethylene (PE) energy costs found ranges of 87.4–107.8 GJ/t for high-density polyethylene (HDPE) and 74.4–116.3 GJ/t for low-density polyethylene (LDPE), with the processing energy being as low as 25–28 GJ/t and as high as 45 GJ/t (Vlachopoulos, 2009).

The heavy dependence of modern production of plastics on hydrocarbon feedstocks has not been, so far, a major burden, as the industry still claims less than 5% of the world’s natural gas and crude oil output. Rising demand for plastic materials and higher costs, particularly of crude oil, will change this, and in the long run plant-based bioplastics appear to be the only practical answer both to the eventually less abundant petrochemical feedstocks and to the presence of nonbiodegradable materials in the environment.

 

the lignin-based carbon fiber used to reduce the weight of passenger cars and other vehicles costs 670 GJ/t, and carbon-fiber reinforced polymer (polyacrylonitrile) fiber requires just over 700 GJ/t (Das, 2011), more than three times that of aluminum.

synthesis of ammonia from its elements, the Haber–Bosch process

as little as 27 GJ/t NH3 in the year 2000. When Worrell et al. (2008) reviewed the best commercial practices, they rated natural gas-based synthesis at 28 GJ/t (roughly a third higher than the stoichiometric minimum) and the coal-based process at 34.8 GJ/t. Naturally, typical performances are higher: around 30 GJ/t NH3 for gas-based plants, 36 GJ/t for heavy fuel oil feedstock, and more than 45 GJ/t NH3 for coal-based synthesis (Rafiqul et al., 2005). The IEA (2007) used regional means ranging from 48.4 GJ/t in China to 35 GJ/t in Western Europe, resulting in a global weighted mean of 41.6 GJ/t for the year 2005.

treating the insoluble rocks with sulfuric and nitric acids in order to produce water-soluble phosphorus compounds is much more energy intensive. Overall energy costs range from 18 to 20 GJ/t for superphosphates (single superphosphate with just 8.8% P, triple superphosphate with 20% P) to 28–33 GJ/t for diammonium phosphate containing 20% of soluble P (Smil, 2008). The energy cost of potash (sylvinite) extraction is low: in Saskatchewan, conventional underground mining followed by milling needs only 1–1.5 GJ/t, and surface mining and milling averages only about 300 MJ/t (NRC, 2009).

the entire production chain – starting with Si made from quartz and carbon through trichlorosilane, polysilicon, single crystal ingot, Si wafers, and actual fabrication and assembly of a microchip – consumes about 41 MJ for a 2-g chip. This implies a total electricity cost for shipped wafers of at least 2100 kWh/kg. Even if using only hydroelectricity this would prorate to about 7.6 GJ/kg, and energizing the entire process by electricity generated from fossil fuels would push the total primary energy to more than 20 GJ/kg for finished Si wafers, 2 orders of magnitude more than aluminum made from bauxite, and 3 orders of magnitude more than steel made from iron ore.
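The per-kilogram conversions above are easy to reproduce; a minimal sketch (the ~35% fossil-fuel generation efficiency used to illustrate the "more than 20 GJ/kg" figure is my assumption):

```python
# Converting the microchip figures quoted above into per-kilogram terms.
chip_energy_mj = 41        # whole production chain, per 2-g chip
chip_mass_kg = 0.002

per_kg_gj = chip_energy_mj / 1000 / chip_mass_kg
print(f"embodied energy: {per_kg_gj:.1f} GJ/kg of chip")   # ~20 GJ/kg

electricity_kwh_per_kg = 2100        # electricity for shipped wafers
as_hydro_gj = electricity_kwh_per_kg * 3.6 / 1000
as_fossil_gj = as_hydro_gj / 0.35    # assumed ~35% thermal generation efficiency
print(f"hydro basis: {as_hydro_gj:.1f} GJ/kg, fossil basis: {as_fossil_gj:.0f} GJ/kg")
# ~7.6 GJ/kg and >20 GJ/kg, i.e. two orders of magnitude above aluminum
# (0.175 GJ/kg) and three orders above steel (~0.02-0.025 GJ/kg).
```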

The typical rates presented in this section can be used (after rounding, to avoid impressions of unwarranted accuracy) to assess the global energy needs of major material sectors and to calculate their fractions of TPES (whose total was just over 500 EJ) in 2010 (BP, 2013). Not surprisingly, steel's relatively high energy intensity (25 GJ/t) and its massive output (1.43 Gt in 2010, 1.5 Gt in 2011) make it the material with the highest total energy demand, dominating the total of about 50 EJ (or 10% of TPES) required to produce all metals in 2010. Plastics are next (assuming 80 GJ/t and an output of 265 Mt in 2010) with roughly 20 EJ (4% of TPES), well ahead of construction materials (cement, bricks, glass) with about 15 EJ, or 3% of TPES. Paper production required about 10 EJ and fertilizers added less than 8 EJ, for a grand total of just over 100 EJ, or 20% of the world's TPES in 2010. For comparison, the IEA (2007) estimated the energy input for the entire global industrial sector at almost 88 EJ for the year 2005.

Paper (and paperboard) and aluminum each end up with very similar totals of close to 10 EJ (2% of TPES), as a result of aluminum's much higher energy intensity (175 vs. 25 GJ/t) but much lower total output (53 vs. 400 Mt in 2010). Perhaps the most interesting result concerns the energy cost of inorganic fertilizers: given their truly existential importance, it is reassuring to realize that the energy needed to produce them adds up to a surprisingly small share of global supply. Assuming averages of 55, 20, and 10 GJ/t for, respectively, N, P, and K (all including the cost of final formulation, packaging, and distribution) would result in a total demand of a bit more than 5 EJ in the year 2010 (with nitrogenous fertilizers accounting for about 90% of the total), or only about 1% of the TPES.
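A minimal sketch that simply re-adds the sectoral totals quoted above (all inputs are the figures given in this passage):

```python
# Reconstructing the sectoral totals quoted above (intensities in GJ/t, outputs in t).
TPES_EJ = 500   # world total primary energy supply, 2010 (BP, 2013)

sectors_ej = {
    "all metals (steel-dominated)": 50,     # steel alone: 25 GJ/t x 1.43 Gt = ~36 EJ
    "plastics": 80 * 265e6 / 1e9,           # ~21 EJ
    "construction materials": 15,
    "paper and paperboard": 10,
    "fertilizers": 8,
}

total = sum(sectors_ej.values())
for name, ej in sectors_ej.items():
    print(f"{name:30s} {ej:5.1f} EJ  ({ej / TPES_EJ:.0%} of TPES)")
print(f"{'grand total':30s} {total:5.1f} EJ  ({total / TPES_EJ:.0%} of TPES)")
# -> just over 100 EJ, or roughly 20% of TPES.
```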

 

the steadily increasing crowding of transistors has limited the annual mass of wafers needed to produce all of the world’s microchips to only about 7500 t in 2009 and to an aggregate energy expenditure of just 150 PJ, or about 0.03% of TPES.

These calculations also make it clear that modern civilization can afford all this steel and fertilizers and microchips because scientific discoveries and technical advances have greatly reduced their energy intensities.

LCA (life-cycle assessment) is now a mature analytical discipline with its own periodical, the International Journal of Life Cycle Assessment.

Varieties of LCA include complete cradle-to-grave sequences

LCAs of housing in cold climates show embodied energies to be a small fraction of the life-time total.

For a 200 square meter Canadian house that cost 1.5 TJ to build, heating and lighting (averaging about 25 W/m2) will claim about 9.5 TJ over 60 years, reducing the construction share to just 14% of the overall energy cost. By coincidence, that share is nearly identical to the construction share of a medium-sized American car: it takes about 100 GJ to produce and (at about 8 l/100 km and 20 000 km/year) it will need about 550 GJ of fuel and oil over 10 years, so the initial construction will claim only about 15% of the overall cost, and even less once repairs and garaging are included (Smil, 2008). Embodied energies make up even lower shares in the life-cycles of machines that are in nearly constant operation: only 6–7% for jetliners, freight trains, and cargo ships (Allwood and Cullen, 2012).
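The house and car shares can be reproduced with a few lines of arithmetic; in this sketch the ~34 MJ per liter of gasoline is my assumption, not a figure from the text:

```python
# Reproducing the house and car life-cycle shares quoted above.
SECONDS_PER_YEAR = 3.156e7

# House: 200 m2, 25 W/m2 average heating and lighting load, 60-year life.
house_embodied_tj = 1.5
house_operation_tj = 200 * 25 * 60 * SECONDS_PER_YEAR / 1e12
house_share = house_embodied_tj / (house_embodied_tj + house_operation_tj)
print(f"house operation: {house_operation_tj:.1f} TJ, construction share: {house_share:.1%}")

# Car: 100 GJ to build; 8 l/100 km, 20 000 km/yr for 10 years.
# Assumption: ~34 MJ per liter of gasoline (a typical energy density, not given in the text).
car_embodied_gj = 100
fuel_litres = 8 / 100 * 20_000 * 10
car_fuel_gj = fuel_litres * 34 / 1000
car_share = car_embodied_gj / (car_embodied_gj + car_fuel_gj)
print(f"car fuel: {car_fuel_gj:.0f} GJ, construction share: {car_share:.1%}")
# ~9.5 TJ and ~14% for the house; ~540-550 GJ and ~15% for the car.
```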

Williams (2004) ended up with the reverse ratio. According to his analysis, the energy used to make a desktop with a Pentium III processor, 30 GB hard drive, and 42.5 cm monitor added up to 6.4 GJ, while during its relatively short life of three years the desktop would consume about 420 kWh of electricity, or roughly 1.5 GJ, yielding a manufacturing:usage energy split of 81:19.

For a Swiss desktop computer, the split was much closer at 46:54 (Ecoinvent, 2013).
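A short check of the Williams split quoted above (treating the 420 kWh as delivered electricity, as the 81:19 ratio implies):

```python
# Manufacturing vs. use-phase split for the desktop figures quoted above.
manufacture_gj = 6.4            # Williams (2004): Pentium III desktop plus monitor
use_kwh = 420                   # electricity over a three-year service life
use_gj = use_kwh * 3.6 / 1000   # ~1.5 GJ, counted as delivered electricity

split = manufacture_gj / (manufacture_gj + use_gj)
print(f"manufacturing share: {split:.0%}")   # ~81%, versus 46% in the Swiss desktop LCA
```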

LCAs also make it clear that over their life-time many infrastructures will cost nearly as much, or more, to maintain as to build. A Canadian LCA for a high-volume two-lane concrete highway shows an initial construction cost of 6.7 TJ/km and a rehabilitation cost of 4.1 TJ/km, or 38% of the total 50-year cost of 10.8 TJ/km; the burdens are actually reversed for a roadway made of flexible asphalt concrete, which needs 15 TJ/km to build and 16% more (17.4 TJ/km) to rehabilitate over a 50-year life-cycle (Cement Association of Canada, 2006).
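The construction versus rehabilitation shares follow directly from the quoted costs; a minimal sketch:

```python
# Construction vs. rehabilitation shares for the two road types quoted above
# (TJ/km over a 50-year life-cycle).
roads = {
    "concrete": (6.7, 4.1),
    "asphalt concrete": (15.0, 17.4),
}
for name, (build, rehab) in roads.items():
    total = build + rehab
    print(f"{name:17s} total {total:.1f} TJ/km, rehabilitation share {rehab / total:.0%}")
# concrete: 10.8 TJ/km with 38% going to rehabilitation;
# asphalt: rehabilitation exceeds the initial construction cost.
```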

The LCA of repeatedly washed garments is yet another excellent illustration of the boundary problem noted at the beginning of the energy costs section. Products made from different materials have different durabilities and maintenance requirements, and a complete account of these realities may shift the overall advantage from a material that requires less energy to produce to one that is more energy-intensive to make but whose life-long energy cost may be lower.

Polyester production required twice as much energy as producing cotton lint (clear advantage cotton), but because of the higher energy cost of manufacturing cotton cloth, the total energy cost of a cotton shirt was about 20% higher than that of its pure polyester counterpart (slight advantage polyester), and after including the energy costs of maintenance (washing, drying, ironing) the cotton shirt was about 3.6 times more energy intensive (clear advantage polyester).

Cotton appears even more disadvantaged once nonenergy impacts are compared: the water requirements of the cotton–polyester blend are less than a third of those for pure cotton, and its global warming and acidification potentials are 38% lower than those for producing and laundering two pure cotton sheets. Going even further, we can consider the long-term cost of excessive soil erosion in cotton fields, soil quality decline due to salinization in irrigated cotton fields in arid regions, and the presence of pesticide residues in soil and water (Smil, 2008). All of these are avoided by using a synthetic fiber, but its production depends on a nonrenewable feedstock. Then again, so does the cultivation of cotton: PE synthesis consumes about 1.5 kg of hydrocarbons per kilogram of fiber, but growing a kilogram of cotton requires nearly 500 g of fertilizers and 15 g of pesticides made from hydrocarbon feedstocks, as well as liquid fossil fuels for farm machinery.

But what LCAs have done is to allow comparisons within the same category of environmental consequences (when a choice of materials is possible, which one will have the lowest effect on water use or water pollution?) as well as more comprehensive rankings of materials according to several categories of environmental impact. This is important because material use is one of the three dominant ways in which humans have been changing the biosphere; food production and energy supply (dominated by the extraction and combustion of fossil fuels) are the other two great interventions. And when considered in its entirety, the intricate system of extraction, processing, transportation, use, reuse, and disposal of materials encompasses every major environmental interference, from land use changes (ranging from deforestation due to lumber and pulp production to the destruction of plant cover and the disruption of water cycles by massive surface ore mines) to atmospheric emissions (ranging from acidifying gases to greenhouse gases that make material production a major contributor to anthropogenic warming).

 

there can be no ranking of anthropogenic environmental impacts. There is no unifying metric that would allow us to conclude that soil erosion should be a greater concern than photochemical smog, or that tropical deforestation is more worrisome than the enormous water demand of modern irrigated agriculture.

With CO2 usually being the leading contributor, this indicator (global warming potential) will have a very high correlation with the mass of fossil fuel used in the initial production. For example, production of a kilogram of cement will release about 0.5 kg of CO2, but making a kilogram of hot-rolled sheet steel will release about 2.25 kg of CO2 liberated from fossil fuels (NREL, 2013b). The other two commonly assessed variables are acidification and eutrophication impacts. Emissions of sulfur and nitrogen oxides from the combustion of fossil fuels, smelting of ores, and other industrial processes are the precursors of atmospheric sulfates and nitrates whose wet and dry deposition acidifies waters and soils. In LCAs this acidifying potential of products or processes is calculated in terms of grams of hydrogen ions per square meter (g H+/m2) or per volume of water (g H+/l). Eutrophication is the process of nutrient enrichment of fresh and coastal waters (most often by releases and leaching of nitrates and phosphates) that leads to excessive growth of algae whose decay deprives waters of dissolved oxygen and creates anoxic zones that either kill or impoverish heterotrophic life in the affected waters.

As just explained in the previous section, the production of materials claims roughly a quarter of the world's primary energy supply. Because the combustion of fossil fuels provides most of this energy (about 87% globally in 2010, with the rest coming from primary electricity, mostly hydro and nuclear; BP, 2013), the production of materials is a leading source of emissions of particulate matter (including black carbon), SOx and NOx (whose conversion to sulfates and nitrates is the primary cause of acidifying precipitation), and GHGs. The water needed for processing, reaction, and cooling ends up contaminated or is released at elevated temperatures by many industries, and most of them also share the necessity of disposing of relatively large volumes of solid waste and small but potentially worrisome volumes of hazardous waste.

 

Production of a kilogram of typical construction steel sections generates about 1.5 kg of CO2 equivalent, 50 g of SO2 equivalent of acidification potential, a negligible amount of eutrophication potential (0.36 g), and just 0.8 g of smog-inducing ethene equivalent (WSA, 2011b). Production of a kilogram of PVC (consuming around 60 MJ) requires about 10 kg of water (excluding cooling demand) and produces 1.9–2.5 kg of CO2 equivalent, 5–7 g of acidification potential (as SO2), 0.6–0.9 g of eutrophication potential (as PO4), nearly 0.5 g of ethene equivalent (the measure of photochemical ozone creation), 0.4–0.8 g of total particulate waste, and 5–8 g of hazardous waste (Sevenster, 2008).

Natural materials can be a better choice, and often resoundingly so in the case of biomaterials. For example, a comprehensive LCA by Bolin and Smith (2011) showed that a wood/plastic composite decking results in 14 times higher fossil fuel use, 3 times more GHG, almost 3 times more water use, 4 times higher acidification potential, and about 2 times more smog potential and ecotoxicity than a wooden deck built with lumber treated with alkaline copper.

Wooden floors are much less energy intensive than the common alternatives: the total energy per square meter of flooring per year of service was put at 1.6 MJ for wood (usually oak or maple) compared to 2.3 MJ for linoleum and 2.8 MJ for vinyl.

Remarkably, many life-cycle assessments take a cavalier approach to life-spans and simply assume what seems to be a reasonable length of use (often unexplained)

There are no LCAs taking into account the great longevity of some plastics (on the order of hundreds of years) and their now ubiquitous, and clearly highly disruptive, distribution in aquatic environments. Their buoyancy, their breakdown into progressively smaller particles, and their eventual sinking through the water column to the sea bottom combine to make them a truly global and omnipresent environmental risk to marine biota: they are now found on the remotest islands as well as in the abyss, but their highest concentrations are in surface water and on beaches (Moore, 2008; Barnes et al., 2009).

Great Pacific Garbage Patch (Moore and Phillips, 2011). Later studies estimated that at least 6.4 Mt of plastic litter enters the oceans every year; that some 8 million pieces are discarded every day; that floating plastic debris averages more than 13 000 pieces per km2 of ocean surface; and that some 60% of all marine litter stems from shoreline activities (UNEP, 2009). Additionally, the latest summaries show that, despite many efforts to limit this now planet-wide degradation, the accumulation is still increasing (STAP, 2011). Dangers to ocean life are posed by discarded plastics of every size: among the largest items are abandoned or damaged fishing nets that can ensnare fishes, dolphins, and even whales, while aquatic birds often mistake small pieces of plastic for small fish or invertebrates and regurgitate them to their fledglings; the stomachs of many species show a distressing collection of such objects. And microplastics, the smallest pieces (less than 5, 2, or 1 mm across), which are manufactured for cosmetics, drugs, and industrial uses or arise from the abrasion and photodegradation of larger pieces, can be ingested by marine biota and can have serious metabolic and toxic effects (Cole et al., 2011). Inevitably, masses of plastic microparticles have also been accumulating on shorelines, where they endanger more organisms (Browne et al., 2011). This important example shows how incomplete and uncertain even our best analytical procedures are when tracing the requirements and consequences of material production, use, and abandonment; it also provides a strong argument for much better management of materials, and recycling should obviously be a key component of these efforts.

 

with most materials, recycling is more accurately described as down-cycling: high-quality paper becomes packaging stock or cardboard, and expensive plastics are turned into cheap items.

Production of recycled steel will require roughly 75% less energy and the LCA of common steel products (sections, hot-rolled coil, and hot-dip galvanized steel) showed recycling benefits of up to about 50% for both GWP and acidification potential (WSA, 2011b).

For recycled aluminum containers the real savings are lower than the figure for remelting alone: delacquering (removal of any coatings) needs about 7 GJ/t, melting consumes 7 GJ/t, addition of pure aluminum to adjust the alloy composition takes about 8 GJ/t, and the production of containers (casting, rolling, blanking, forming) adds 30 GJ/t, for a total of about 52 GJ/t, a saving of 74% rather than 96% (Luo and Soria, 2008).
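Summing the quoted steps reproduces the 74% figure; in this sketch the ~200 GJ/t assumed for the equivalent primary-aluminum route is my back-calculation, not a number from the text:

```python
# Energy budget for recycled aluminum can stock, from the figures quoted above (GJ/t).
steps_gj_per_t = {
    "delacquering": 7,
    "melting": 7,
    "pure Al addition (alloy adjustment)": 8,
    "container production (casting, rolling, blanking, forming)": 30,
}
total = sum(steps_gj_per_t.values())

# Assumption for illustration: ~200 GJ/t for the equivalent primary-aluminum route,
# roughly what a 74% saving implies; this value is not given in the text.
primary_route_gj_per_t = 200

print(f"recycled route total: {total} GJ/t")                            # ~52 GJ/t
print(f"saving vs. primary route: {1 - total / primary_route_gj_per_t:.0%}")  # ~74%, not 96%
```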

Benefits go beyond energy savings, as recycling lowers GWP, claims less water, and generates less water and air pollution: Grimes et al. (2008) provide many comparisons for steel, aluminum, and paper production. Recycled paper can be made with 40% less energy while generating 45% less waste water and 50% less solid waste (EPN, 2007). Energy savings for recycled high-density polyethylene are 45–50% and 40–45% for PVC.

recycling is also a quest that is often very difficult to pursue (be it because of logistic challenges, excessive costs, or negligible energy savings) and one that, unlike the flows of carbon or nitrogen atoms in grand biogeochemical cycles, often amounts to a fairly rapid down-cycling as the reused materials appear in less valuable guises.

The greatest challenge is to recycle the increasing amount of electronic waste that is, in mass terms, dominated by a few plastics, a few metals, and screen glass, but that also contains small, but in aggregate quite substantial, amounts of more than a dozen elements.

Of the 60 metals and metalloids examined by a UN report, more than half (including all rare earths, as well as germanium, selenium, indium, and tellurium) have recovery rates of less than 1%, for only five elements is the rate between 1 and 25%, and 18 common (or relatively common) metals have end-of-life recycling rates above 50%, but rarely above 60% (Graedel et al., 2011).

Unfortunately, some toxic heavy metals, whose release into the environment is particularly undesirable, have very low recycling rates. The only relatively common way of recycling cadmium is by returned Ni-Cd batteries, but their collection rate remains low. The recycling rate of fluorescent lights containing mercury is also too low.

Correct separation is difficult but imperative: a single PVC bottle in a load of 10 000 PET bottles can ruin the entire melt (ImpEE Project, 2013). Sorting is followed by cleaning (but removing print and labels cannot be 100% successful) and reduction to uniform pellet sizes ready for reuse, still mostly only in lower-grade applications such as cheap carpets, garbage cans, or park benches.

Collection of household waste paper is expensive, and thorough processing of the material is needed to produce clean fibers for reuse. This includes defibering of the paper, cleaning and removal of all nonfiber ingredients (most often adhesive tapes, plastics, and staples), and, if the fibers are to be reprocessed into white paper, de-inking. Reprocessing shortens the cellulose fibers, which means that paper can be recycled no more than four to seven times.

Recycling of e-waste is particularly challenging because the devices commingle many compounds and elements that must be separated by a sequence of mechanical and chemical operations. Silicon is the core of modern computers, but transistors and microchips could not function without a multitude of other materials, including many heavy metals. Elements used to dope silicon include arsenic, phosphorus, boron, and gallium. Printed wire boards, disk drives, expansion cards, power supplies, and connections add up to a hoard of materials, small (and for many elements even minute) per unit but highly consequential in the aggregate. Most of the mass in electronic devices is steel, glass, plastics, copper, and aluminum, but more than a dozen other metals are also present in tiny amounts, eight of them classified as hazardous (As, Cd, Cr, Co, Hg, Pb, Sb, and Se).

Lead is of the greatest concern: leachate from computer wire boards contains 30 to 100 times the concentration of lead (5 mg/l) that classifies a waste as hazardous.

In 1997 about 100 million cellphones were sold a year; a billion were sold in 2009; and at the end of 2012 more than 6.5 billion devices were in use, plus tablets, notebooks, and netbooks. These typically last only 1.5 to 2 years, so up to 1.2 billion cellphones a year are discarded. In the USA the recycling rate for mobile devices was a dismal 8% in 2009 (compared with 17% for TVs and 38% for computers). The roughly 130 million devices discarded in the USA collectively contain about 2,000 tons of copper, 45 tons of silver, 4 tons of gold, 9,000 tons of plastics, and 2,500 tons of ceramics and glass; multiply by 6 or more for global amounts.
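A small sketch that scales the US material-content figures by the text's own "multiply by 6" rule of thumb (the per-device gold figure is my derived illustration):

```python
# Scaling the US discarded-device material content quoted above to a rough global estimate.
us_devices = 130e6
us_content_t = {"copper": 2000, "silver": 45, "gold": 4, "plastics": 9000, "ceramics/glass": 2500}
global_factor = 6   # the text's own rough multiplier for global amounts

for material, tonnes in us_content_t.items():
    print(f"{material:14s} US: {tonnes:6,.0f} t   global (approx.): {tonnes * global_factor:7,.0f} t")

# Derived illustration: implied gold content per discarded device.
print(f"gold per device (US figures): {4 / us_devices * 1e6:.3f} g")   # ~0.03 g
```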

 

The USA buries nearly 80,000 tons of plastic in its landfills every day, with only 8% of discarded plastics recovered (23% for PET bottles but less than 1% for polypropylene waste).

ENERGY EFFICIENCIES

Late 19th- to early 20th-century hand-stoked coal stoves converted no more than 20–25% of the fuel's chemical energy to useful heat, though that was good compared with the less than 10% efficiency of the wood-burning fireplaces that preceded them. Oil-fired furnace efficiency can be up to 50%, and natural gas home furnaces reach 70–75%.

Steam engines with a top efficiency of 15% used to power small freight ships (less than 5,000 dwt). Now bulk carriers (100,000+ dwt) are powered by diesel engines whose best efficiencies are close to, or slightly above, 50%.

The world’s copper resources are about 1.6 Gt.  To provide all 10 billion people in the future with the mean per capita copper stock now found in developed nations (170 kg) would require 1.7 Gt, more than the estimated resource in the crust (Gordon et al., 2006).

Gordon, R.B., et al. 2006. Metal stocks and sustainability. Proceedings of the National Academy of Sciences (USA) 103: 1209–1214.
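The copper arithmetic behind this comparison is a one-liner; a minimal sketch using the quoted resource estimate and per capita stock:

```python
# The copper arithmetic behind the Gordon et al. (2006) comparison quoted above.
resource_gt = 1.6            # estimated world copper resources
population = 10e9            # assumed future population
per_capita_stock_kg = 170    # in-use copper stock per person

required_gt = population * per_capita_stock_kg / 1e12
print(f"copper required: {required_gt:.1f} Gt vs. resource of {resource_gt:.1f} Gt")
# 1.7 Gt needed, exceeding the ~1.6 Gt estimated to exist in exploitable deposits.
```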


Steam engines: exergy, power, and work in the US

Preface.  At some point in the decline of fossil fuels, future generations will be tempted to build steam engines again, and perhaps, just as in early America, they will fuel them with wood, since coal too will eventually be scarce (steamships didn't burn coal until around 1850, when iron ships first appeared).  It is a good thing coal came along: burning wood in steam engines for locomotives, steamships, factories, tractors, and other uses decimated America's forests.

Steam engines are a great deal less efficient than internal combustion engines, making a recovery to today’s level of civilization unlikely.


***

Ayres, R.U., et al. 2003. Exergy, power and work in the US economy, 1900–1998. Energy 28(3): 219–273.

During the first half of the century (1900-1950) steam locomotives for railroads were the major users, with stationary steam engines in mines and factories also significant contributors.

Steam turbine design improvements and scaling up to larger sizes accounted for most of the early improvements. The use of pulverized coal, beginning in 1920, accounted for major gains in the 1920s and 30s. Better designs and metallurgical advances permitting higher temperatures and pressures accounted for further improvements in the 1950s. Since 1960, however, efficiency improvements have been very slow, largely because existing turbine steel alloys are close to their maximum temperature limits.

The conversion efficiency of steam–electric power plants has increased by nearly a factor of ten, from 3.6% in 1900 or so to nearly 34% on average (including distribution losses) and 48% for the most advanced units. The consumption of electricity in the US has increased since 1900 by a factor of 1200, and continued to increase rapidly even after 1960.

In the case of large stationary or marine steam engines operating under optimal conditions at constant loads, the thermal efficiency exceeded 15% in the best cases. However, locomotive steam engines were not nearly so efficient — between 4% and 8% on average — and the best locomotive engine in 1900 achieved around 11%, increasing to perhaps 13% by 1910.

Factory engines were generally older and even less efficient and transmission losses in factories (where a central engine was connected to a number of machines by a series of leather belts) were enormous. For instance, if a stationary steam engine for a factory with machines operating off belt drives circa 1900 had a thermal efficiency of 6%, with 50% frictional losses, the net exergy efficiency was 3%. The Dewhurst estimate, which took into account these transmission losses, set the average efficiency of conversion of coal energy into mechanical work at the point of use at 3% in 1900 (when most factories still used steam power) increasing to 4.4% in 1910 and 7% in 1920, when the substitution of electric motors for steam power in factories was approaching completion. The use of steam power in railroads was peaking during the same period.
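The 3% net exergy figure is simply the product of the two quoted factors; a minimal sketch:

```python
# The factory steam-power example quoted above: thermal efficiency times transmission losses.
thermal_efficiency = 0.06    # stationary factory steam engine, circa 1900
belt_drive_retained = 0.50   # 50% frictional losses in the belt-drive transmission

net_exergy_efficiency = thermal_efficiency * belt_drive_retained
print(f"net exergy efficiency: {net_exergy_efficiency:.0%}")
# ~3%, matching the Dewhurst estimate for 1900 cited in the text.
```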

In the case of railroad steam locomotives, average thermal efficiency circa 1920 according to another estimate was about 10%, whereas a diesel electric locomotive half a century later (circa 1970) achieved 35%. Internal friction and transmission losses and variable load penalty are apparently not reflected in either figure, but they would have been similar (in percentage terms) in the two cases. If these losses amounted to 30%, the two estimates are consistent for 1920. Coal-burning steam locomotives circa 1950 still only achieved 7.5% thermal efficiency; however, oil-burning steam engines at that time obtained 10% efficiency and coal-fired gas turbines got 17%. But the corresponding efficiency of diesel electric locomotives c. 1950 was 28%, taking internal losses into account. The substitution of diesel–electric for steam locomotives began in the 1930s and accelerated in the 1950s.

The work done by internal combustion engines in automobiles, trucks and buses (road transport) must be estimated in a different way. In the case of heavy diesel-powered trucks with a compression ratio in the range of 15–18, operating over long distances at highway speeds, the analysis is comparable to that for railways. The engine power can be optimized for this mode of operation and the parasitic losses for a heavy truck (lights, heating, engine cooling, air-conditioning, power-assisted steering, etc.) are minor. Internal friction and drive-train losses and losses due to variable load operation can conceivably be as low as 20%, though 25% is probably more realistic.

In the case of railroads the traditional performance measure is tonne–km. From 1920 to 1950 the improvement by this measure was threefold, most of which was due to the replacement of coal-fired steam locomotives by diesel–electric or electric locomotives. This substitution began in the 1930s but accelerated after the Second World War because diesel engines were far more fuel-efficient — probably by a factor of five.

According to a study published in 1952, diesel engines can perform ten times as much work as steam engines in switching operations, five times as much in freight service, and three times as much in passenger service. The overall gain might have been a factor of about five, and diesels also required significantly less maintenance. But from 1950 to 1960 the service output (measured in vehicle–km traveled) per unit exergy input quadrupled, and from 1960 to 1987 there was a further gain of over 50%. The overall performance increase from 1920 to 1987 by this measure (tonne–km per unit of fuel input) was around 20-fold. In 1920 US railways consumed 122 million tonnes of coal, which was 16% of the nation's energy supply. By 1967 the railways' share of national energy consumption had fallen to 1% and continued to decline thereafter.

It is obvious that much of the improvement has occurred at the system level. One of the major factors was that trucks took over most of the short-haul freight carriage while cars and buses took most of the passengers, leaving the railroads to carry bulk cargoes over long distances at (comparatively) high and constant speeds and with much less switching — which is very exergy intensive. Under these conditions the work required to move a freight train is reduced because rolling friction and air resistance are minimized, while the work required for repeated accelerations and decelerations is sharply reduced or eliminated.

Another factor behind the gains was that the work required to overcome air and rolling resistance had been reduced significantly by straightening some of the rights-of-way, improving couplings and suspensions, and introducing aerodynamic shapes. A third source of gain was increasing power-to-weight ratios for locomotives; locomotives in 1900 averaged 133 kg/kW. By 1950 this had fallen to about 33 kg/kW and by 1980 to around 24 kg/kW. The lighter the engine, the less power is needed to move it. (This is an instance of dematerialization contributing to reduced exergy consumption.) If the railways in 1987 were achieving 30% thermal efficiency, and if the coal-fired steam locomotives of 1920 were averaging 7% (for an overall factor of four and a fraction), then an additional factor of five or so was achieved by increasing system efficiency in other ways. In effect, the work required to haul rail cargoes has declined dramatically since 1960, but the exergy input required per unit of mechanical work done has hardly changed since then.
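The decomposition of the 20-fold gain into an engine factor and a system factor can be reproduced directly from the quoted numbers; a minimal sketch:

```python
# Decomposing the ~20-fold 1920-1987 railway gain quoted above into engine and system factors.
steam_1920 = 0.07    # average thermal efficiency of coal-fired steam locomotives, 1920
diesel_1987 = 0.30   # thermal efficiency achieved by railways in 1987
overall_gain = 20    # tonne-km per unit of fuel input, 1920 -> 1987

engine_factor = diesel_1987 / steam_1920
system_factor = overall_gain / engine_factor
print(f"engine factor: {engine_factor:.1f}x, implied system factor: {system_factor:.1f}x")
# ~4.3x from engine efficiency, leaving roughly 5x from system-level changes
# (longer constant-speed hauls, less switching, lighter locomotives, better rights-of-way).
```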

[Figure: Substitution of diesel for steam locomotives in the USA, 1935–1957]
