Millions in danger of floods on Mississippi and Missouri

Preface. Here’s something for you young folks considering “where to be” after energy collapse. Flooding is a huge consideration. My great-grandfather was a doctor in Oklahoma who saw many people lose their homes and farms to floods and die from waterborne diseases afterward. If there was one lesson he wanted to pass on to us, it was “don’t live in a flood plain!”

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer), “Barriers to Making Algal Biofuels,” and “Crunch! Whole Grain Artisan Chips and Crackers.” Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report.

***

Cusick D. 2020. Portions of Mississippi and Missouri Rivers Are Most Endangered in U.S. Scientific American.

Climate change and poor floodplain management have imperiled nearby communities, a nonprofit report says

Two of the nation’s essential commercial waterways, the Mississippi and Missouri, face extraordinary risk from climate change and associated flooding, a new report from the nonprofit American Rivers says.

In its updated list of “America’s Most Endangered Rivers,” the group says marked increases in precipitation across the Upper Mississippi and Lower Missouri rivers, combined with poor floodplain management, have placed millions of people and a multibillion-dollar economy in peril within the two basins.

The Upper Mississippi and Lower Missouri are Nos. 1 and 2, respectively, on the group’s 2020 list of most imperiled waterways. They also sustained some of the greatest property damage and crop losses from last year’s record river floods.

“Mixing poor river management with climate change has created a recipe for disaster,” Bob Irvin, the group’s president and chief executive officer, said in a statement. “Lives, businesses and property are at risk. It’s time for our leaders to prioritize solutions that protect rivers and strengthen communities. Our health and safety depend on it.”

The report attributes much of the responsibility for the rivers’ condition to the Army Corps of Engineers and state and local agencies tasked with managing the floodplain.

Along both the Upper Mississippi and Missouri rivers, local levee boards wield substantial authority over agricultural levees and other river control measures. While the Army Corps is tasked with permitting such work, critics have long argued the agency has done a poor job of regulating such levees.

With respect to the Lower Missouri, American Rivers said the Missouri, which joins the Upper Mississippi north of St. Louis, “is one of the most controlled waterways in our nation” and that “artificial channels, levees and dams vainly attempt to control flood damages.”

“Right now, we’re on a collision course with climate change and poor river management. Unless we embrace better solutions like giving the river room to flood safely, we’re going to see increasingly severe disasters,” Eileen Shader, director of river restoration for American Rivers, said in a statement.

Allen Marshall, an Army Corps spokesman in the agency’s Rock Island District office, said he could not comment on the issue of climate change or criticisms that the agency has failed to properly regulate levees.

American Rivers also called for the completion of a federal-state comprehensive study of the Upper Mississippi called the “Keys to the River.” Officials said the “keys” study will offer a more holistic approach and management strategy for the river and draw input from a broader group of stakeholders—including municipalities, navigation interests, water and wastewater utilities, farmers, sportsmen, and other recreational users.

All those sectors suffered major losses from the 2019 floods, a $20 billion catastrophe, according to a recent analysis based on NOAA and reinsurance industry data and released by the Mississippi River Cities and Towns Initiative.

“I think one of the big wake-up calls with last year’s flood event was the duration of it and the realization that this is our future,” said Olivia Dorothy, Upper Mississippi River Basin director for American Rivers. “We really need to think about floods of this magnitude becoming a permanent fixture rather than a temporary situation.”

Kirsten Wallace, executive director of the Upper Mississippi River Basin Association, which represents the governors of Illinois, Iowa, Minnesota, Missouri and Wisconsin, agreed the basin faces multiple problems, including from climate change.

“I think the overall call for urgency is important, so we’re glad that American Rivers is making that,” Wallace said. “We also recognize that the Upper Mississippi needs more money and more resources” to address major issues.

But, Wallace said, it remains unclear how climate change compares with other problems such as floodplain conversion for commercial and residential development, and the use of tile drainage systems that shed water off farm fields into local streams.

“I think the constant flooding is compelling people to talk to each other and to think of our system plan as bigger than any one stakeholder,” she said.

Wallace said progress in solving common problems is also hindered by a piecemeal management approach among multiple levels of government. Such approaches often pit stakeholders against one another, resulting in stasis and finger-pointing.

“You first need to get all these factions to agree to a fast-forward,” she said. “They need to put down their guard and trust in someone or something. … It’s a really fine balance that we have to strike going forward.”

American Rivers said stakeholders on the Missouri and Mississippi rivers could learn from California’s Central Valley, where a nature-based approach to flood control is delivering multiple benefits, from improved water quality to restoration of habitat and parks. Examples include setting levees back from the river to allow floodwaters to safely spread out and breaching levees in strategic areas to reconnect the river with its floodplain.

Other waterways on the 2020 endangered list include the Big Sunflower River in Mississippi, where the Army Corps is considering a massive water diversion project known as the Yazoo Pumps, and a half-dozen rivers threatened by mining and dam projects. Four of the 10 listed rivers are threatened by mining, American Rivers said, including the South Fork Salmon River in Idaho and the Okefenokee Swamp in Georgia and Florida.

Reprinted from Climatewire


Where do we come from, who are we, and where are we going?

Preface. This is a book review of The Social Conquest of Earth, in which E. O. Wilson answers these questions. Although tribes have invented thousands of creation myths since Paleolithic times, Wilson has finally written a book explaining our true creation myth.

We are shaped by both individual and group selection, which forever traps us in the conflict between the poorer and better angels of our nature. Individual selection is responsible for much of what we call sin, while group selection is responsible for the greater part of virtue.

It is fortunate that we are intrinsically imperfectible, because in a constantly changing world, we need the flexibility that only imperfection provides.

Below are some of my kindle notes. This is one of the most profound books you could ever read, and I leave so much out that I hope you’ll simply have to buy the book, and perhaps pass it on to increase the Enlightenment and diminish superstition.

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer), “Barriers to Making Algal Biofuels,” and “Crunch! Whole Grain Artisan Chips and Crackers.” Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report.

***

Wilson, E. O. 2012. The Social Conquest of Earth. Liveright.

There is no grail more elusive or precious in the life of the mind than the key to understanding the human condition. It has always been the custom of those who seek it to explore the labyrinth of myth: for religion, the myths of creation and the dreams of prophets; for philosophers, the insights of introspection and reasoning based upon them; for the creative arts, statements based upon a play of the senses.

Humanity today is like a waking dreamer, caught between the fantasies of sleep and the chaos of the real world.

We have created a Star Wars civilization, with Stone Age emotions, medieval institutions, and godlike technology. We thrash about. We are terribly confused by the mere fact of our existence, and a danger to ourselves and to the rest of life. Religion will never solve this great riddle.

Since Paleolithic times each tribe—of which there have been countless thousands—invented its own creation myth. During this long dreamtime of our ancestors, supernatural beings spoke to shamans and prophets. They identified themselves to the mortals variously as God, a tribe of Gods, a divine family, the Great Spirit, the Sun, ghosts of the forebears, supreme serpents, hybrids of sundry animals, chimeras of men and beasts, omnipotent sky spiders—anything, everything that could be conjured by the dreams, hallucinogens, and fertile imaginations of the spiritual leaders.

They were shaped in part by the environments of those who invented them. In Polynesia, gods pried the sky apart from the ground and sea, and the creation of life and humanity followed. In the desert-dwelling patriarchies of Judaism, Christianity, and Islam, prophets conceived, not surprisingly, a divine, all-powerful patriarch who speaks to his people through sacred scripture.

The creation stories gave the members of each tribe an explanation for their existence. It made them feel loved and protected above all other tribes. In return, their gods demanded absolute belief and obedience. And rightly so. The creation myth was the essential bond that held the tribe together. It provided its believers with a unique identity, commanded their fidelity, strengthened order, vouchsafed law, encouraged valor and sacrifice, and offered meaning to the cycles of life and death. No tribe could long survive without the meaning of its existence defined by a creation story. The option was to weaken, dissolve, and die. In the early history of each tribe, the myth therefore had to be set in stone.

The discovery of the origin and meaning of humanity might explain the origin and meaning of myths, hence the core of organized religion. Can these two worldviews ever be reconciled? The answer, to put the matter honestly and simply, is no. They cannot be reconciled. Their opposition defines the difference between science and religion, between trust in empiricism and belief in the supernatural.

Thinking about thinking is the core process of the creative arts, but it tells us very little about how we think the way we do, and nothing of why the creative arts originated in the first place. Consciousness, having evolved over millions of years of life-and-death struggle, and moreover because of that struggle, was not designed for self-examination. It was designed for survival and reproduction. Conscious thought is driven by emotion; to the purpose of survival and reproduction, it is ultimately and wholly committed. The intricate distortions of the mind may be transmitted by the creative arts in fine detail, but they are constructed as though human nature never had an evolutionary history. Their powerful metaphors have brought us no closer to solving the riddle than did the dramas and literature of ancient Greece.

What science promises, and has already supplied in part, is the following. There is a real creation story of humanity, and one only, and it is not a myth.  It answers the questions of where we came from and what we are.

The first question is why advanced social life exists at all, and has occurred so rarely in the history of life. The second is the identity of the driving forces that brought it into existence.

These problems can be solved by bringing together information from multiple disciplines, ranging from molecular genetics, neuroscience, and evolutionary biology to archaeology, ecology, social psychology, and history. To test any such theory of complex process, it is useful to bring into the light those other social conquerors of Earth, the highly social ants, bees, wasps, and termites, and I will do so. They are needed for perspective in developing the theory of social evolution. I realize I can be easily misinterpreted by putting insects next to people. Apes are bad enough, you might say, but insects?

Human beings create cultures by means of malleable languages. We invent symbols that are intended to be understood among ourselves, and we thereby generate networks of communication many orders of magnitude greater than that of any animal. We have conquered the biosphere and laid waste to it like no other species in the history of life.

We are an evolutionary chimera, living on intelligence steered by the demands of animal instinct. This is the reason we are mindlessly dismantling the biosphere and, with it, our own prospects for permanent existence.

A vast array of plant and animal species formed intimate symbioses with the social insects, accepting them as partners. A large percentage came to depend on them entirely for survival, variously as prey, symbionts, scavengers, pollinators, or turners of the soil. Overall, the pace of evolution of ants and termites was slow enough to be balanced by counter-evolution in the rest of life. As a result, these insects were not able to tear down the rest of the terrestrial biosphere by force of numbers, but became vital elements of it. The ecosystems they dominate today are not only sustainable but dependent on them.

In sharp contrast, human beings of the single species Homo sapiens emerged in the last several hundred thousand years and spread around the world only during the last sixty thousand years. There was not time for us to coevolve with the rest of the biosphere. Other species were not prepared for the onslaught. This shortfall soon had dire consequences for the rest of life.

Wherever humans saturated wildlands, biodiversity was returned to the paucity of its earliest period half a billion years previously. The rest of the living world could not coevolve fast enough to accommodate the onslaught of a spectacular conqueror that seemed to come from nowhere, and it began to crumble from the pressure.

Even by strictly technical definition as applied to animals, Homo sapiens is what biologists call “eusocial,” meaning group members containing multiple generations and prone to perform altruistic acts as part of their division of labor.

The necessity for fine-graded evaluation by alliance members meant that the pre-human ancestors had to achieve eusociality in a radically different way from the instinct-driven insects. The pathway to eusociality was charted by a contest between selection based on the relative success of individuals within groups versus relative success among groups. The strategies of this game were written as a complicated mix of closely calibrated altruism, cooperation, competition, domination, reciprocity, defection, and deceit. To play the game the human way, it was necessary for the evolving populations to acquire an ever higher degree of intelligence. They had to feel empathy for others, to measure the emotions of friend and enemy alike, to judge the intentions of all of them, and to plan a strategy for personal social interactions. As a result, the human brain became simultaneously highly intelligent and intensely social. It had to build mental scenarios of personal relationships rapidly, both short-term and long-term. Its memories had to travel far into the past to summon old scenarios and far into the future to imagine the consequences of every relationship.  

Thus was born the human condition, selfish at one time, selfless at another, the two impulses often conflicted. How did Homo sapiens reach this unique place in its journey through the great maze of evolution? The answer is that our destiny was foreordained by two biological properties of our distant ancestors: large size and limited mobility.

While eusocial species can dominate the insect world in terms of numbers of individuals, they had to rely on small brains and pure instinct for their conquest. Furthermore, and fundamentally, they were too small to ignite and control fire.

Mammals, especially carnivores, have much larger territories to defend when they settle down to build a nest. Wherever they travel, they are likely to encounter rivals. Females cannot store sperm in their bodies. They must find a male and mate for each parturition. Should the opportunities and pressures of the environment make social grouping profitable, it must be done with personal bonds and alliances based on intelligence and memory. To summarize to this point on the two social conquerors of Earth, the physiology and life cycle in the ancestors of the social insects and those of humans differed fundamentally in the evolutionary pathways followed to the formation of advanced societies.

In every game of evolutionary chance, played from one generation to the next, a very large number of individuals must live and die. The number, however, is not countless. A rough estimate can be made of it, providing at least a plausible order-of-magnitude guess. For the entire course of evolution leading from our primitive mammalian forebears of a hundred million years ago to the single lineage that threaded its way to become the first Homo sapiens, the total number of individuals it required might have been one hundred billion.
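A quick back-of-envelope check of that hundred-billion figure, using round numbers I am assuming rather than taking from Wilson (an average breeding population on the order of ten thousand individuals along the lineage, and an average generation time of roughly ten years across those hundred million years), lands on the same order of magnitude:

# Rough sanity check of the "one hundred billion individuals" estimate.
# The inputs are my assumptions, not Wilson's; they only need to be
# right to within an order of magnitude.
years = 100_000_000            # ~100 million years from early mammals to Homo sapiens
generation_years = 10          # assumed average generation time along the lineage
breeding_population = 10_000   # assumed average population of the single surviving lineage

generations = years / generation_years              # ~10 million generations
total_individuals = generations * breeding_population
print(f"{total_individuals:.0e} individuals")       # ~1e+11, i.e. about 100 billion

Nudging either assumption up or down by a factor of a few keeps the total between tens of billions and a few hundred billion, which is all an order-of-magnitude guess like Wilson's claims.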

The first preadaptation was the aforementioned large size and relative immobility that predetermined the trajectory of mammalian evolution, as distinct from that of the social insects. The second preadaptation in the human-bound timeline was the specialization of the early primates, 70 to 80 million years ago, to life in the trees. The most important feature evolved in this change was hands and feet built for grasping. Moreover, their shape and muscles were better suited for swinging from branches, rather than merely grasping them for support. Their efficiency was increased by the simultaneous appearance of opposable thumbs and great toes. It was increased further by modification of the finger and toe tips into flat nails, as opposed to sharp down curving claws of the kind possessed by most other kinds of arboreal mammals. In addition, the palms and soles were covered by cutaneous ridges that aided in grasping; and they were supplied with pressure receptors that enhanced the sense of touch. Thus equipped, the early primate could use its hand to pick and tease apart pieces of fruit while pulling out individual seeds. The fingernail edges could both cut and scrape objects grasped by the hands. Such an animal, using its hind legs for locomotion, would be able to carry food for considerable distances.

The early prehuman primates evolved a larger brain. For the same reason, they came to depend more on vision and less on smell than did most other mammals. They acquired large eyes with color vision, which were placed forward on the head to give binocular vision and a better sense of depth. When walking, the pre-human primate did not move its hind legs well apart in parallel; instead, it alternated its legs almost in a single line, one foot placed in front of the other. The offspring, moreover, were fewer in number and required more time to develop.

When one line of these strange arboreal creatures evolved to live on the ground, as it happened in Africa, the next preadaptation was taken—one more fortunate turn in the evolutionary maze. Bipedalism was adopted, freeing the hands for other purposes.

The pre-humans, now distinguishable as a group of species called the australopithecines, took the trend to bipedal walking much farther. Their body as a whole was accordingly refashioned. The legs were lengthened and straightened, and the feet were elongated to create a rocking movement during locomotion. The pelvis was reformed into a shallow bowl to support the viscera, which now pressed toward the legs instead of being slung, ape-like, beneath the horizontal body.

The bipedal revolution was very likely responsible for the overall success of the australopithecine pre-humans—at least as measured by the diversity they achieved in body form, jaw musculature, and dentition.

Walking with arms swinging at the side in the new, australopith manner conferred speed at minimal energy cost, even as it inflicted back and knee problems in addition to the greater risk imposed by balancing the newly heavy globular head on a delicate vertical neck. For primates whose bodies had originally been crafted for life in the trees, the bipeds could run swiftly, but they could not match the four-legged animals they hunted as prey.

If the early humans, however, could not outsprint such animal Olympians, they could at least outlast them in a marathon. At some point, humans became long-distance runners. They needed only to commence a chase and track the prey for mile after mile until it was exhausted and could be overtaken. The pre-human body, thrusting itself off the ball of the foot with each step and holding a steady pace, evolved a high aerobic capacity. In time the body also shed all of its hair, except on the head and pubis and in the pheromone-producing armpits. It added sweat glands everywhere, allowing increased rapid cooling of the naked body surface.

Meanwhile, the forelimbs of the pre-human ancestors were redesigned for flexibility in the manipulation of objects. The arm, especially that of males, became efficient at throwing objects, including stones, and later spears as well, and so for the first time the pre-humans could kill at a distance. The advantage this ability gave them during conflict with other, less well-equipped groups must have been enormous.

The next step taken on the road to eusociality was the control of fire.  The roving pre-humans could not have failed to discover the importance of wildfires as a source of food. Moreover, they found some of the felled animals already cooked, with flesh easy to tear off and eat.

The use of fire was on the other hand forever denied to insects and other terrestrial invertebrates. They were physically too small to ignite tinder or carry a flaming object without becoming part of the fuel.

It was, of course, also denied to aquatic animals.

A Homo sapiens level of intelligence can arise only on land, whether here on Earth or on any other conceivable planet.

Why is a protected nest so important to eusociality?

The next step, and the decisive one for the origin of human eusociality, was the gathering of small groups at campsites. There is an a priori reason for believing campsites were the crucial adaptation on the path to eusociality: campsites are in essence nests made by human beings. All animal species that have achieved eusociality, without exception, at first built nests that they defended from enemies. They, as did their known antecedents, raised young in the nest, foraged away from it for food, and brought the bounty back to share with others.

Why is a protected nest so important? Because members of the group are forced to come together there. Required to explore and forage away from the nest, they must also return. Chimpanzees and bonobos occupy and defend territories, but wander through them while searching for food. Chimps and bonobos alternately break into subgroups and re-aggregate. They advertise the discovery of fruit-laden trees by calling back and forth but do not share the fruit they pick. They occasionally hunt in small packs. Successful members of the pack share the meat among their fellow hunters, but charity mostly comes to an end there. Of greatest importance, the apes have no campfire around which to gather.

Carnivores at campsites are forced to behave in ways not needed by wanderers in the field. They must divide labor: some forage and hunt, others guard the campsite and young. They must share food, both vegetable and animal, in ways that are acceptable to all. Otherwise, the bonds that bind them will weaken. Further, the group members inevitably compete with one another, for status, for a larger share of food, for access to an available mate, and for a comfortable sleeping place. All of these pressures confer an advantage on those able to read the intention of others, grow in the ability to gain trust and alliance, and manage rivals. Social intelligence was therefore always at a high premium. A sharp sense of empathy can make a huge difference, and with it an ability to manipulate, to gain cooperation, and to deceive. To put the matter as simply as possible, it pays to be socially smart. Without doubt, a group of smart pre-humans could defeat and displace a group of dumb, ignorant pre-humans, as true then as it is today for armies, corporations, and football teams.

Altricial bird species—those that rear helpless young—have a similar preadaptation. In a few species young adults remain with the parents for a while to help care for their siblings. But no bird species has gone on to evolve full-blown eusocial societies. Possessing only a beak and claws, they have never been equipped to handle tools with any degree of sophistication, or fire at all. Wolves and African wild dogs hunt in coordinated packs in the same manner as chimpanzees and bonobos, and African wild dogs also dig out dens, where one or two females have a large litter.

These remarkable canids, although having adopted the rarest and most difficult preadaptation, have not reached full eusociality, with a worker caste or even ape-level intelligence. They cannot make tools. They lack grasping hands and soft-tipped fingers. They remain four-legged, dependent on their carnassial teeth and fur-sheathed claws.

These hominid primates of two million years ago were diverse, yet no more so than the antelopes and cercopithecoid monkeys teeming around them. They were rich in potential—as our own presence bears witness. Nevertheless, from one generation to the next their continued existence was precarious. Their populations were sparse in comparison with the large herbivores, and they were less abundant than some of the human-sized carnivores that hunted them.

Smaller mammals on average were able to buffer themselves better than large mammals, including humans, against extreme environmental changes. Their methods included burrowing, hibernation, and prolonged torpor, adaptations not available to large mammals. Paleontologists have determined that the turnover in species is still higher in mammals that form social groups. They have pointed out that social groups tend to stay apart from each other during breeding, thus creating smaller populations, making them subject to both quicker genetic divergence and higher extinction rates.

As continental glaciers advanced south across Eurasia, Africa suffered a period of prolonged drought and cooling. Much of the continent was covered by arid grassland and desert. In these times of stress the death of a few thousand individuals, possibly even just a few hundred, could have snapped the line to Homo sapiens altogether.

What drove the hominins on through to larger brains, higher intelligence, and thence language-based culture? That, of course, is the question of questions.

One of the australopith species shifted to the consumption of meat. More precisely, it became omnivorous by adding meat to an already existing vegetable and fruit diet.

Homo habilis became smarter than the other hominins around them.

Perhaps, the traditional argument goes, the challenges of new environments gave an advantage to genetic types able to discover and use novel resources to avoid enemies, as well as the capacity to defeat competitors for food and space. Those genetic types were able to innovate and learn from their competitors. They were the survivors of hard times. The flexible species evolved larger brains. How well does this familiar innovation-adaptiveness argument describe other animal species? One analysis made of 600 bird species introduced by humans into parts of the world outside their native ranges, and hence into alien environments, seems to support the idea. Those species with larger brains relative to their body size were on average better able to establish themselves in the new environments. Further, there is evidence that it was done by greater intelligence and inventiveness.

Every other kind of animal known that evolved eusociality started with a protected nest from which forays can be made to collect food. Other species of relatively large animals that have advanced almost as far as ants into eusociality are the naked mole rats (Heterocephalus glaber) of East Africa. They, too, obey the protected-nest principle.

It seems now possible to draw a reasonably good explanation of why the human condition is a singularity, why the likes of it has occurred only once and took so long in coming. The reason is simply the extreme improbability of the pre-adaptations necessary for it to occur at all. Each of these evolutionary steps has been a full-blown adaptation in its own right. Each has required a particular sequence of one or more pre-adaptations that occurred previously. Homo sapiens is the only species of large mammal—thus large enough to evolve a human-sized brain—to have made every one of the required lucky turns in the evolutionary maze. The first preadaptation was existence on the land. Progress in technology beyond knapped stones and wooden shafts requires fire.

The second preadaptation was a large body size, of a magnitude attained in Earth’s history only by a minute percentage of land-dwelling animal species. If an animal at maturity is less than a kilogram in weight, its brain size would be too severely limited for advanced reasoning and culture. Even on land, its body would be unable to make and control fire. That is one reason why leafcutter ants, although their societies are the most complex of any species other than humans, and even though they practice agriculture in air-conditioned cities of their own instinctual devising, have made no significant further advance during the twenty million years of their existence. Next in line of pre-adaptations was the origin of grasping hands tipped with soft spatulate fingers that were evolved to hold and manipulate detached objects. This is the trait of primates that distinguishes them from all other land-dwelling mammals.

To use such hands and fingers effectively, candidate species on the path to eusociality had to free them from locomotion in order to manipulate objects easily and skillfully. That was accomplished early by the first prehominids who, as far back as our presumed ancient forebear Ardipithecus, climbed out of the trees, stood up, and began walking entirely on hind legs.

Claws and fangs, the ordinary armamentaria of the species, are ill suited for the development of technology.  

The subsequent step—the next correct turn in the evolutionary maze—was a shift in diet to include a substantial amount of meat. The advantages of cooperation in the harvesting of meat led to the formation of highly organized groups.

About a million years ago the controlled use of fire followed, a unique hominid achievement. Meat, fire, and cooking, together with campsites lasting more than a few days at a time and thus persistent enough to be guarded as a refuge, marked the next vital step. Such a nest, as it can also be called, has been the precursor to the attainment of eusociality by all other known animals. With fireside campsites came a division of labor.

By the time of Homo erectus, all of the steps that led this species to eusociality, save the use of controlled fire, had also been followed by modern chimpanzees and bonobos. Thanks to our unique pre-adaptations, we were ready to leave these distant cousins far behind.

Even though tiny in biomass—all of its more than seven billion members could be log-stacked into a cube two kilometers on each edge—the new species had become a geophysical force. It had harnessed the energies of the sun and fossil fuels, diverted a large part of the fresh water for its own use, acidified the ocean, and changed the atmosphere to a potentially lethal state.
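Wilson’s two-kilometer cube is easy to check with rough numbers of my own choosing (the average body volume is my assumption, not his): even packed solid, seven billion bodies occupy roughly half a cubic kilometer, comfortably inside the 8 cubic kilometers of a 2 km cube.

# Rough check of the "cube two kilometers on each edge" claim.
# Assumed average body volume ~0.07 m^3 (about 70 kg at roughly the density of water).
population = 7_000_000_000
body_volume_m3 = 0.07

total_m3 = population * body_volume_m3   # ~4.9e8 m^3
total_km3 = total_m3 / 1e9               # ~0.49 km^3
cube_km3 = 2 ** 3                        # a 2 km cube holds 8 km^3
print(round(total_km3, 2), "km^3 of people vs", cube_km3, "km^3 of cube")

Loose “log-stacking” would take more room than solid packing, but even several times the solid volume still fits easily, so the claim holds.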

The origin of modern humanity was a stroke of luck—good for our species for a while, bad for most of the rest of life forever.

Kin selection says parents, offspring, and their cousins and other collateral relatives are bound by the coordination and unity of purpose made possible by selfless acts toward one another. Altruism actually benefits each group member on average because each altruist shares genes by common descent with most other members of its group. Due to the sharing with relatives, its sacrifice increases the relative abundance of these genes in the next generation. If the increase is greater than the average number lost by reducing the number of genes passed on through personal offspring, then the altruism is favored and a society can evolve. Individuals divide themselves into reproductive and nonreproductive castes as a manifestation in part of self-sacrificing behavior on behalf of kin.

The foundations of the general theory of inclusive fitness based on the assumptions of kin selection have crumbled, while evidence for it has grown equivocal at best. The beautiful theory never worked well anyway, and now it has collapsed. A new theory of eusocial evolution provides separate accounts for the origin of eusocial insects on the one hand and the origin of human societies on the other. In the case of ants and other eusocial invertebrates, the process is perceived as neither kin selection nor group selection, but individual-level selection, from queen (in the case of ants and other hymenopteran insects) to queen, with the worker caste being an extension of the queen phenotype. Evolution can proceed in this manner because in the early stages of colonial evolution the queen travels far away from her natal colony and creates the members of the colony on her own.

The creation of new groups by humans, at the present time and all the way back into prehistory, has been fundamentally different. Their evolutionary dynamics is driven by both individual and group selection.

The multilevel process was first anticipated by Darwin in The Descent of Man: if one man in a tribe, more sagacious than the others, invented a new snare or weapon, or other means of attack or defense, the plainest self-interest, without the assistance of much reasoning power, would prompt the other members to imitate him; and all would thus profit. The habitual practice of each new art must likewise in some slight degree strengthen the intellect. If the new invention were an important one, the tribe would increase in number, spread, and supplant other tribes. In a tribe thus rendered more numerous there would always be a rather better chance of the birth of other superior and inventive members. If such men left children to inherit their mental superiority, the chance of the birth of still more ingenious members would be somewhat better, and in a very small tribe decidedly better. Even if they left no children, the tribe would still include their blood-relations; and it has been ascertained by agriculturists that by preserving and breeding from the family of an animal, which when slaughtered was found to be valuable, the desired character has been obtained. Multilevel selection consists of the interaction between forces of selection that target traits of individual members and other forces of selection that target traits of the group as a whole. The new theory is meant to replace the traditional theory based on pedigree kinship or some comparable measure of genetic relatedness.

The precursors of Homo sapiens formed well-organized groups that competed with one another for territory and other scarce resources. In general, it is to be expected that between-group competition affects the genetic fitness of each member (that is, the proportion of personal offspring it contributes to the group’s future membership), whether up or down. A person can die or be disabled, and lose his individual genetic fitness as a result of increased group fitness during, for example, a war or under the rule of an aggressive dictatorship. If we assume that groups are approximately equal to one another in weaponry and other technology, which has been the case for most of the time among primitive societies over hundreds of thousands of years, we can expect that the outcome of between-group competition is determined largely by the details of social behavior within each group in turn. These traits are the size and tightness of the group, and the quality of communication and division of labor among its members. Such traits are heritable to some degree; in other words, variation in them is due in part to differences in genes among the members of the group, hence also among the groups themselves. The genetic fitness of each member, the number of reproducing descendants it leaves, is determined by the cost exacted and benefit gained from its membership in the group. These include the favor or disfavor it earns from other group members on the basis of its behavior. The currency of favor is paid by direct reciprocity and indirect reciprocity, the latter in the form of reputation and trust. How well a group performs depends on how well its members work together, regardless of the degree by which each is individually favored or disfavored within the group.

The genetic fitness of a human being must therefore be a consequence of both individual selection and group selection. But this is true only with reference to the targets of selection. Whether the targets are traits of the individual working in its own interest, or interactive traits among group members in the interest of the group, the ultimate unit affected is the entire genetic code of the individual. If the benefit from group membership falls below that from solitary life, evolution will favor departure or cheating by the individual. Taken far enough, the society will dissolve. If personal benefit from group memberships rises high enough or, alternatively, if selfish leaders can bend the colony to serve their personal interests, the members will be prone to altruism and conformity. Because all normal members have at least the capacity to reproduce, there is an inherent and irremediable conflict in human societies between natural selection at the individual level and natural selection at the group level.

Alleles (the various forms of each gene) that favor survival and reproduction of individual group members at the expense of others are always in conflict with alleles of the same and alleles of other genes favoring altruism and cohesion in determining the survival and reproduction of individuals. Selfishness, cowardice, and unethical competition further the interest of individually selected alleles, while diminishing the proportion of altruistic, group-selected alleles. These destructive propensities are opposed by alleles predisposing individuals toward heroic and altruistic behavior on behalf of members of the same group. Group-selected traits typically take the fiercest degree of resolve during conflicts between rival groups. It was therefore inevitable that the genetic code prescribing social behavior of modern humans is a chimera. One part prescribes traits that favor success of individuals within the group. The other part prescribes the traits that favor group success in competition with other groups.

Natural selection at the individual level, with strategies evolving that contribute maximum number of mature offspring, has prevailed throughout the history of life. It typically shapes the physiology and behavior of organisms to suit a solitary existence, or at most to membership in loosely organized groups. The origin of eusociality, in which organisms behave in the opposite manner, has been rare in the history of life because group selection must be exceptionally powerful to relax the grip of individual selection. Only then can it modify the conservative effect of individual selection and introduce highly cooperative behavior into the physiology and behavior of the group members. The ancestors of ants and other hymenopterous eusocial insects (ants, bees, wasps) faced the same problem as those of humans. They finessed it by evolving extreme plasticity of certain genes, programmed so that the altruistic workers have the same genes for physiology and behavior as the mother queen, even though they differ drastically from the queen and among one another in these traits. Selection has remained at the individual level, queen to queen. Yet selection in the insect societies continues at the group level, with colony pitted against colony. This seeming paradox is easily resolved. As far as natural selection in most forms of social behavior is concerned, the colony is operationally only the queen and her phenotypic extension in the form of robot-like assistants. At the same time, group selection promotes genetic diversity among the workers in other parts of the genome to help protect the colony from disease. This diversity is provided by the male with whom each queen mates. In this sense, the genotype of an individual is a genetic chimera. It contains genes that do not vary among colony members, with castes being plastic forms created from the same genes, and genes that do vary among colony members as a shield against disease.

In mammals such a finesse was not possible, because their life cycle is fundamentally different from that of insects. In the key reproductive step of the mammal life cycle, the female is rooted to the territory of her origin. She cannot separate herself from the group in which she was born, unless she crosses over directly to a neighboring group—a common but tightly controlled event in both animals and humans. In contrast, the insect female can be mated, then carry the sperm like a portable male in her spermatheca long distances. She is able to start new colonies all by herself far from the nest of her birth. The overpowering of individual selection by group selection has not only been rare in mammals and other vertebrates; it has never been and will likely never be complete. The fundamentals of the mammalian life cycle and population structure prevent it. No insect-like social system can be created in the theater of mammalian social evolution.

The expected consequences of this evolutionary process in humans are the following:

• Intense competition occurs between groups, in many circumstances including territorial aggression.

• Group composition is unstable, because of the advantage of increasing group size accruing from immigration, ideological proselytization, and conquest, pitted against the opportunities to gain advantage by usurpation within the group and fission to create new groups.

• An unavoidable and perpetual war exists between honor, virtue, and duty, the products of group selection, on one side, and selfishness, cowardice, and hypocrisy, the products of individual selection, on the other side.

• The perfecting of quick and expert reading of intention in others has been paramount in the evolution of human social behavior.

• Much of culture, including especially the content of the creative arts, has arisen from the inevitable clash of individual selection and group selection.

In summary, the human condition is an endemic turmoil rooted in the evolutionary processes that created us. The worst in our nature coexists with the best, and so it will ever be. To scrub it out, if such were possible, would make us less than human.

To form groups, drawing visceral comfort and pride from familiar fellowship, and to defend the group enthusiastically against rival groups—these are among the absolute universals of human nature and hence of culture. Once a group has been established with a defined purpose, however, its boundaries are malleable. Families are usually included as subgroups, although they are frequently split by loyalties to other groups. The same is true of allies, recruits, converts, honorary inductees, and traitors from rival groups who have crossed over. Identity and some degree of entitlement are given each member of a group. Conversely, any prestige and wealth he may acquire lends identity and power to his fellow members.  

People must have a tribe. It gives them a name in addition to their own and social meaning in a chaotic world. It makes the environment less disorienting and dangerous. The social world of each modern human is not a single tribe, but rather a system of interlocking tribes, among which it is often difficult to find a single compass. People savor the company of like-minded friends, and they yearn to be in one of the best—a combat marine regiment, perhaps, an elite college, the executive committee of a company, a religious sect, a fraternity, a garden club—any collectivity that can be compared favorably with other, competing groups of the same category.

People around the world today, growing cautious of war and fearful of its consequences, have turned increasingly to its moral equivalent in team sports. Their thirst for group membership and superiority of their group can be satisfied with victory by their warriors in clashes on ritualized battlefields.  The fans are lifted by seeing the uniforms and symbols and battle gear of the team, the championship cups and banners on display, the dancing seminude maidens appropriately called cheerleaders. Some of the fans wear bizarre costumes and face makeup in homage to their team. They attend triumphant galas after victories. Many, especially of warrior and maiden age, shed all restraint to join in the spirit of the battle and the joyous mayhem afterward.

“Celts Supreme!” The social psychologist Roger Brown, who witnessed the aftermath, commented, “It was not just the players who felt supreme but all their fans. There was ecstasy in the North End. The fans burst out of the Garden and nearby bars, practically break dancing in the air, stogies lit, arms uplifted, voices screaming. The hood of a car was flattened, about thirty people jubilantly piled aboard, and the driver—a fan—smiled happily. An improvised slow parade of honking cars circled through the neighborhood. It did not seem to me that those fans were just sympathizing or empathizing with their team. They personally were flying high. On that night each fan’s self-esteem felt supreme; a social identity did a lot for many personal identities.” Brown then added an important point: “Identification with a sports team has in it something of the arbitrariness of the minimal groups. To be a Celtic fan you need not be born in Boston or even live there, and the same is true of membership on the team. As individuals, or with other group memberships salient, both fans and team members might be very hostile. So long as the Celtic membership was salient, however, all rode the waves together.”

Experiments conducted over many years by social psychologists have revealed how swiftly and decisively people divide into groups, and then discriminate in favor of the one to which they belong. Even when the experimenters created the groups arbitrarily, then labeled them so the members could identify themselves, and even when the interactions prescribed were trivial, prejudice quickly established itself. Whether groups played for pennies or identified themselves groupishly as preferring some abstract painter to another, the participants always ranked the out-group below the in-group. They judged their “opponents” to be less likable, less fair, less trustworthy, less competent. The prejudices asserted themselves even when the subjects were told the in-groups and out-groups had been chosen arbitrarily.

In its power and universality, the tendency to form groups and then favor in-group members has the earmarks of instinct. It could be argued that in-group bias is conditioned by early training to affiliate with family members and by encouragement to play with neighboring children. But even if such experience does play a role, it would be an example of what psychologists call prepared learning, the inborn propensity to learn something swiftly and decisively. If the propensity toward in-group bias has all these criteria, it is likely to be inherited and, if so, can be reasonably supposed to have arisen through evolution by natural selection. Other cogent examples of prepared learning in the human repertoire include language, incest avoidance, and the acquisition of phobias. If groupist behavior is truly an instinct expressed by inherited prepared learning, we might expect to find signs of it even in very young children. And exactly this phenomenon has been discovered by cognitive psychologists.

The elementary drive to form and take deep pleasure from in-group membership easily translates at a higher level into tribalism. People are prone to ethnocentrism. It is an uncomfortable fact that even when given a guilt-free choice, individuals prefer the company of others of the same race, nation, clan, and religion. They trust them more, relax with them better in business and social events, and prefer them more often than not as marriage partners. They are quicker to anger at evidence that an out-group is behaving unfairly or receiving undeserved rewards. And they grow hostile to any out-group encroaching upon the territory or resources of their in-group.

Literature and history are strewn with accounts of what happens at the extreme, as in the following from Judges 12: 5–6 in the Old Testament: The Gileadites captured the fords of the Jordan leading to Ephraim, and whenever a survivor of Ephraim said, “Let me go over,” the men of Gilead asked him, “Are you an Ephraimite?” If he replied, “No,” they said, “All right, say ‘Shibboleth.’ ” If he said, “Sibboleth,” because he could not pronounce the word correctly, they seized him and killed him at the fords of the Jordan. Forty-two thousand Ephraimites were killed at that time.

When in experiments black and white Americans were flashed pictures of the other race, their amygdalas, the brain’s center of fear and anger, were activated so quickly and subtly that the conscious centers of the brain were unaware of the response. The subject, in effect, could not help himself. When, on the other hand, appropriate contexts were added—say, the approaching black was a doctor and the white his patient—two other sites of the brain integrated with the higher learning centers, the cingulate cortex and the dorsolateral prefrontal cortex, lit up, silencing input through the amygdala. Thus different parts of the brain have evolved by group selection to create groupishness.

Our bloody nature, it can now be argued in the context of modern biology, is ingrained because group-versus-group was a principal driving force that made us what we are. In prehistory, group selection lifted the hominids that became territorial carnivores to heights of solidarity, to genius, to enterprise. And to fear. Each tribe knew with justification that if it was not armed and ready, its very existence was imperiled. Throughout history, the escalation of a large part of technology has had combat as its central purpose. Today, the calendars of nations are punctuated by holidays to celebrate wars won and to perform memorial services for those who died waging them. Public support is best fired up by appeal to the emotions of deadly combat.

Any excuse for a real war will do, so long as it is seen as necessary to protect the tribe. Hence the war against terrorism and the “axis of evil.” Remembrance of past horrors has no effect.

From April to June in 1994, killers from the Hutu majority in Rwanda set out to exterminate the Tutsi minority, which at that time ruled the country. In a hundred days of unrestrained slaughter by knife and gun, 800,000 people died, mostly Tutsi. The total Rwandan population was reduced by 10%. When a halt was finally called, two million Hutu fled the country, fearing retribution.

The immediate causes for the bloodbath were political and social grievances, but they all stemmed from one root cause: Rwanda was the most overcrowded country in Africa. For a relentlessly growing population, the per capita arable land was shrinking toward its limit. The deadly argument was over which tribe would own and control the whole of it.  Many of those who attacked their neighbors were promised the land of the Tutsi they killed.

Once a group has been split off and sufficiently dehumanized, any brutality can be justified, at any level, and at any size of the victimized group up to and including race and nation. Under Stalin, the deliberate starvation of more than three million Soviet Ukrainians during the winter of 1932–33 was followed by the Great Terror of 1937 and 1938, in which 681,692 executions were carried out for alleged “political crimes,” more than 90% of the victims being peasants considered resistant to collectivization. The U.S.S.R. as a whole soon suffered equally from the brutal Nazi invasion, the stated purpose of which was to subdue the “inferior” Slavs and make room for expansion of the racially “pure” Aryan peoples.

If no other reason is convenient for waging a war of territorial expansion, there has always been God. It was the will of God that brought the Crusaders to the Levant.

It should not be thought that war, often accompanied by genocide, is a cultural artifact of a few societies. Nor has it been an aberration of history, a result of the growing pains of our species’ maturation. Wars and genocide have been universal and eternal, respecting no particular time or culture. Since the end of the Second World War, violent conflict between states has declined drastically, owing in part to the nuclear standoff of the major powers (two scorpions in a bottle writ large). But civil wars, insurgencies, and state-sponsored terrorism continue unabated. Overall, big wars have been replaced around the world by small wars of the kind and magnitude more typical of hunter-gatherer and primitively agricultural societies. Civilized societies have tried to eliminate torture, execution, and the murder of civilians, but those fighting little wars do not comply.

If cooperative groups were more likely to prevail in conflicts with other groups, has the level of intergroup violence been sufficient to influence the evolution of human social behavior? The estimates of adult mortality in hunter-gatherer groups from the beginning of Neolithic times to the present, shown in the accompanying table, support that proposition.  Nonlethal violence is far higher in the chimps, occurring between a hundred and possibly a thousand times more often than in humans.

Males are more gregarious than females. They are also intensely status conscious, frequently engaging in displays that lead to fighting. They form coalitions with others and use a wide array of maneuvers and deceptions to exploit or altogether evade the dominance order. The patterns of collective violence in which young chimp males engage are remarkably similar to those of young human males. Aside from constantly vying for status, both for themselves and for their gangs, they tend to avoid open mass confrontations with rival troops, instead relying on surprise attacks. The purpose of raids made by the male gangs on neighboring communities is evidently to kill or drive out its members and acquire new territory.

In Uganda’s Kibale National Park, a chimpanzee war, conducted over ten years, was eerily human-like. Every 10 to 14 days, patrols of up to 20 males penetrated enemy territory, moving quietly in single file, scanning the terrain from ground to the treetops, and halting cautiously at every surrounding noise. If a force larger than their own was encountered, the invaders broke rank and ran back to their own territory. When they encountered a lone male, however, they piled on him in a crowd and pummeled and bit him to death. When a female was encountered, they usually let her go. This latter tolerance was not a display of gallantry. If she carried an infant, they took it from her and killed and ate it.

There is no certain way to decide on the basis of existing knowledge whether chimpanzee and humans inherited their pattern of territorial aggression from a common ancestor or whether they evolved it independently in response to parallel pressures of natural selection and opportunities encountered in the African homeland. Humans and chimpanzees are intensely territorial. That is the apparent population control hardwired into their social systems.

I believe, however, that the evidence best fits the following sequence. The original limiting factor, which intensified with the introduction of group hunting for animal protein, was food. Territorial behavior evolved as a device to sequester the food supply. Expansive wars and annexation resulted in enlarged territories and favored genes that prescribe group cohesion, networking, and the formation of alliances. For hundreds of millennia, the territorial imperative gave stability to the small, scattered communities of Homo sapiens, just as it does today in the small, scattered populations of surviving hunter-gatherers. During this long period, randomly spaced extremes in the environment alternately increased and decreased the population size that could be contained within territories. These "demographic shocks" led to forced emigration or aggressive expansion of territory size by conquest, or both together. They also raised the value of forming alliances outside of kin-based networks in order to subdue other neighboring groups.

Ten thousand years ago, the Neolithic revolution began to yield vastly larger amounts of food from cultivated crops and livestock, allowing rapid growth in human populations. But that advance did not change human nature. People simply increased their numbers as fast as the rich new resources allowed. As food again inevitably became the limiting factor, they obeyed the territorial imperative. Their descendants have never changed. At the present time, we are still fundamentally the same as our hunter-gatherer ancestors, but with more food and larger territories. Region by region, recent studies show, the populations have approached a limit set by the supply of food and water. And so it has always been for every tribe, except for the brief periods after new lands were discovered and their indigenous inhabitants displaced or killed. The struggle to control vital resources continues globally, and it is growing worse. The problem arose because humanity failed to seize the great opportunity given it at the dawn of the Neolithic.

Homo erectus, with a culture advanced well beyond that of its apish ancestors, and more adaptable to new and difficult environments, expanded its range to become the first cosmopolitan primate. It failed to reach only the isolated continents of Australia and the New World and the far-flung archipelagoes of the Pacific Ocean. Its great range buffered the species against early extinction. One of its genetic lines acquired potential immortality by evolving into Homo sapiens. The ancestral Homo erectus still lives. It is us.

In combination, some of our traits are unique among all animals:

  • A productive language based on infinite permutations of arbitrarily invented words and symbols.
  • Music, comprising a wide array of sounds, also in infinite permutations and played in individually chosen mood-creating patterns; but, most definitively, with a beat.
  • Prolonged childhood, allowing extended learning periods under the guidance of adults.
  • Anatomical concealment of female genitalia and the abandonment of advertisement of ovulation, both combined with continuous sexual activity. The latter promotes female-male bonding and biparental care, which are needed through the long period of helplessness in early childhood.
  • Uniquely fast and substantial growth in the brain size during early development, increasing 3.3 times from birth to maturity.
  • Relatively slender body form, small teeth, and weakened jaw muscles, indicative of an omnivorous diet.
  • A digestive system specialized to eat foods that have been tenderized by cooking.

Perhaps the time has come, in light of this and other advances in human genetics, to adopt a new ethic of racial and hereditary variation, one that places value on the whole of diversity rather than on the differences composing the diversity. It would give proper measure to our species’ genetic variation as an asset, prized for the adaptability it provides all of us during an increasingly uncertain future. Humanity is strengthened by a broad portfolio of genes that can generate new talents, additional resistance to diseases, and perhaps even new ways of seeing reality. For scientific as well as for moral reasons, we should learn to promote human biological diversity for its own sake instead of using it to justify prejudice and conflict.

This scenario of slow initial advance by a very few followed by local population growth is supported by two lines of evidence assembled by independent groups of researchers during the past ten years. First is the great genetic diversity of present-day southern Africans, suggesting that only a small part of the whole African population participated in the breakout.

To envision more precisely how the out-of-Africa pattern began, consider that between 135,000 and 90,000 years ago a period of aridity gripped tropical Africa far more extreme than any that had been experienced for tens of millennia previously. The result was the forced retreat of early humanity to a much smaller range and its fall to a perilously low level in population. Death by starvation and tribal conflict, both of which were to become routine in later historical times, must have been widespread in prehistory. The size of the total Homo sapiens population on the African continent descended into the thousands, and for a long while the future conqueror species risked complete extinction.

Then, finally, the great drought eased, and from 90,000 to 70,000 years ago tropical forests and savanna slowly expanded back to their previous ranges. Human populations grew and spread with them. At the same time, other parts of the continent became more arid, and the Middle East as well. With intermediate levels of rainfall prevailing throughout most of Africa, an especially favorable window of opportunity opened for the demographic expansion of pioneer populations out of the continent altogether. In particular, the interval was long enough to maintain a corridor of continuous habitable terrain up the Nile to Sinai and beyond, bisecting the arid land and allowing a northward sweep of colonizing humans. A second possible route was eastward, across the Bab el Mandeb Strait onto the southern Arabian Peninsula. There followed the penetration of Homo sapiens into Europe by no later than 42,000 years before the present. Anatomically modern humans spread up the Danube River.

The question of exactly when anatomically modern Homo sapiens arrived in the New World, with its catastrophic impact on the virgin fauna and flora, has gripped the attention of anthropologists for many years.  From genetic and archaeological studies across Siberia and the Americas, it now appears that a single Siberian population reached the Bering land bridge no sooner than 30,000 years ago, and possibly as recently as 22,000 years. Around 16,500 years before the present, the retreat of the ice sheets cleared the way south, and a full-scale invasion through Alaska began. By 15,000 years before the present, as revealed by archaeological discoveries in both North and South America, the colonization of the Americas was well under way. It appears likely that the first populations dispersed along the recently deglaciated Pacific coastline, along land still exposed by the incomplete withdrawal of the ice sheets but nowadays mostly underwater.

A more realistic view is that the creative explosion was not a single genetic event but the culmination of a gradual process that began in an archaic form of Homo sapiens as far back as 160,000 years ago. This view has been supported by recent discoveries of the use of pigment that old, as well as personal ornaments and abstract designs scratched on bone and ocher dating from between 100,000 and 70,000 years ago.

For the immediate future, however, emigration and ethnic intermarriage have taken over as the overwhelmingly dominant forces of microevolution, by homogenizing the global distribution of genes. The impact on humanity as a whole, even while still in this current early stage, is an unprecedented dramatic increase in the genetic variation within local populations around the world. The increase is matched by a reduction in differences between populations. Theoretically, if the flow continues long enough, the population of Stockholm could come to be the same genetically as that in Chicago or Lagos. Overall, more kinds of genotypes are being produced everywhere. This change, unique in human evolutionary history, offers a prospect of an immense increase in different kinds of people worldwide, and thereby newly created physical beauty and artistic and intellectual genius.

With all its quirks, irrationality, and risky productions, and all its conflict and inefficiency, the biological mind is the essence and the very meaning of the human condition.

Chiefs or "big men" rule by prestige, largesse, the support of elite members below them—and retribution against those who oppose them. They live on the surplus accumulated by the tribe, employing it to tighten control upon the tribe, to regulate trade, and to wage war with neighbors. Chiefs exercise authority only over the people immediately around them or in nearby villages, with whom they interact as needed on a daily basis. In practice this means subjects who can be reached within half a day traveling by foot. The reach is thus a maximum of 25 to 30 miles. It is to the advantage of chiefs to micromanage the affairs of their domain, delegating as little authority as possible in order to reduce the chance of insurrection or fission. Common tactics include the suppression of underlings and the fomenting of fear of rival chiefdoms.

States, the final step up in the cultural evolution of societies, have a centralized authority. Rulers exercise their authority in and around the capital, but also over villages, provinces, and other subordinate domains beyond the distance of a day's walk, hence beyond immediate communication with the rulers. The domain is too far-flung, the social order and communication system holding it together too complex, for any one person to monitor and control. Local power is therefore delegated to viceroys, princes, governors, and other chief-like rulers of the second rank. The state is also bureaucratic. Responsibility is divided among specialists, including soldiers, builders, clerks, and priests. With enough population and wealth, the public services of art, sciences, and education can be added—first for the benefit of the elite and then, trickling down, for the general public. The heads of state sit upon a throne, real or virtual. They ally themselves with the high priests, and clothe their authority with rituals of allegiance to the gods.

There are five basic human personality traits: extroversion versus introversion, antagonism versus agreeableness, conscientiousness, neuroticism, and openness to experience. Within populations each of these domains shows substantial heritability, mostly falling between one-third and two-thirds. This means that the fraction of the total variation in scores within each domain that is due to differences in genes among individuals falls somewhere between one-third and two-thirds. So from inheritance alone we would expect to find substantial variation in a population such as that in the Burkina Faso village. Added to differences in experience from one person to the next, especially during the formative periods of childhood, we should expect to find even greater variation, but more or less consistently from village to village, and from country to country. Does such substantial variation exist universally, and is it the same from one population to the next, or different? The variation turns out to be consistently great and universally to the same degree across populations. Such was the result of an extraordinary study conducted by a team of 87 researchers and published in 2005. The degree of variation in personality scores was similar across all 49 cultures measured. The central tendencies of the five domains of personality differed only slightly from one culture to the next, in a way that was not consistent with prevailing stereotypes held by those outside the cultures.
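For reference, heritability here carries its standard quantitative-genetics meaning (a textbook definition, not something spelled out in the book): the share of the total observed variation in a trait that is attributable to genetic differences among individuals,

```latex
h^2 = \frac{V_G}{V_P} = \frac{V_G}{V_G + V_E}
```

where V_G is the genetic variance, V_E the environmental variance, and V_P the total phenotypic variance (in the simplest model, their sum). A heritability of one-third to two-thirds thus means genes account for roughly 33% to 67% of the spread in scores, with environment and individual experience accounting for the rest.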

It is highly unlikely that primary states emerged around the world as the result of convergent genetic evolution. It is all but certain that they appeared autonomously as elaborations of already existing genetic predispositions shared by human populations through common ancestry and dating back to the breakout period some 60,000 years ago.

Two facts stand out about animal life on land: it is dominated by species with the most complex social systems, and yet such species have arisen only rarely in the course of evolution.

The most complex systems are those possessing eusociality—literally “true social condition.” Members of a eusocial animal group, such as a colony of ants, belong to multiple generations. They divide labor in what outwardly at least appears to be an altruistic manner. Some take labor roles that shorten their life spans or reduce the number of their personal offspring, or both. Their sacrifice allows others who fill reproductive roles to live longer and produce proportionately more offspring. The sacrifices within the advanced societies go far beyond those between parents and their offspring. They extend to collateral relatives, including siblings, nieces, and nephews, and cousins at various degrees of remove. Sometimes they are bestowed on genetically unrelated individuals. A eusocial colony has marked advantages over solitary individuals competing for the same niche. Some of the colony members can search for food while others protect the nest from enemies. A solitary competitor belonging to another species can either hunt for food or defend its nest, but not do both at the same time. The colony can send out multiple foragers and stay home all at the same time, forming a webwork of surveillance both within and around the nest. When food is found by one colony member, it can inform the others, who then converge on the site like a closing net. When assembled, the nestmates have the ability to fight as a group against rivals and enemies. They can transport large quantities of food more rapidly to the nest, before competitors arrive. With multiple individuals serving as construction workers, the nest can quickly be made larger, its structure architecturally more efficient, and its entrances more easily defended. The nest can also be climate-controlled to some extent.

Large colonies of some species can also apply military-like formations and mass attacks to overcome prey that are invulnerable to solitary individuals.

The 20,000 known species of eusocial insects, mostly ants, bees, wasps, and termites, account for only 2% of the approximately one million known species of insects. Yet this tiny minority of species dominates the rest of the insects in numbers, weight, and impact on the environment.

I have very crudely estimated the number of ants living today to be, at the nearest power of ten, 10^16, or ten thousand trillion. If each ant on average weighs one-millionth as much as each human on average, then, because there are a million times more ants than humans (at 10^10), all the ants living on Earth weigh roughly as much as all the humans. This figure is not so impressive as it may sound. Consider: if every living person could be collected and log-stacked, we would make a cube less than one mile on each side. So if all the ants could be similarly collected and log-stacked, they would make a cube of similar size.
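Wilson's arithmetic can be checked with a quick back-of-the-envelope sketch (the 50 kg average human mass and the water-like stacking density are my own assumptions; the ant and human counts are his order-of-magnitude estimates):

```python
# Rough check of the ant-versus-human biomass comparison (illustrative only).
ants = 10**16            # estimated number of living ants
humans = 10**10          # human population, to the nearest power of ten
ant_to_human_mass = 1e-6 # an ant weighs roughly one-millionth of a human

human_mass_kg = 50                      # assumed average human mass, kg
total_human_mass = humans * human_mass_kg
total_ant_mass = ants * ant_to_human_mass * human_mass_kg

print(total_ant_mass / total_human_mass)  # -> 1.0: the two biomasses come out equal

# Volume of log-stacked humanity, assuming a density near that of water (~1000 kg/m^3)
volume_m3 = total_human_mass / 1000
side_km = volume_m3 ** (1 / 3) / 1000
print(round(side_km, 2), "km per side")   # ~0.79 km, i.e. a cube well under a mile on a side
```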

Eusociality, the condition of multiple generations organized into groups by means of an altruistic division of labor, was one of the major innovations in the history of life. It created superorganisms, the next level of biological complexity above that of organisms. It is comparable in impact to the conquest of land by aquatic air-breathing animals. It is equivalent in importance to the invention of powered flight by insects and vertebrates.

But the achievement has presented a puzzle not yet solved in evolutionary biology: the rarity of its occurrence.

In the last part of the Jurassic period, some 175 million years ago, the first termites, primitively cockroach-like in anatomy, appeared, followed about 25 million years later by ants. Even then, and continuing to the present time, the origin of other eusocial insects, or eusocial animals of any kind, has been rare. Today there are approximately 2,600 recognized taxonomic families of insects and other arthropods, such as the common fruit flies of the family Drosophilidae, orb-weaving spiders of the family Argiopidae, and land crabs of the family Grapsidae. Only 15 of the 2,600 families are known to contain eusocial species. Six of the families are termites, all of which appear to have been descended from a single eusocial ancestor. Eusociality arose in ants once, three times independently in wasps, and at least four times—probably more, but it is hard to tell—in bees.

A single case of eusociality is known in ambrosia beetles, and others have been discovered in aphids and thrips. Amazingly, eusocial behavior has originated three times in shrimps of the genus Synalpheus of the family Alpheidae, which build nests in marine sponges. Such rare or relatively unstable originations could easily have gone undetected in the fossil record. Also, the multiplicity of eusocial origins in the Synalpheus shrimps has been discovered only recently.

Still rarer than in the invertebrates has been the appearance of eusociality in the vertebrates. It has occurred twice in the subterranean naked mole rats of Africa. It has occurred once in the line leading to modern humans, and in comparison with the invertebrate origins, only very recently in geological times—as recently as 3 million years ago. It is approached in helper-at-the-nest birds, in which the young remain with the parents for a time, but then either inherit the nest or leave to build one on their own. Eusociality is closely approached by African wild dogs, when an alpha female stays at the den to breed while the pack hunts for prey.

During Mesozoic times many evolving lines of dinosaurs attained at least some of the necessary prerequisites: human-sized bodies, fast movement, carnivorous pack hunting, bipedal gait, and free hands. None took the final step to reach even primitive eusociality.

The sequence had two steps. First, in all of the animal species that have attained eusociality—all of them, without known exception—altruistic cooperation protects a persistent, defensible nest from enemies, whether predators, parasites, or competitors. Second, this step having been attained, the stage was set for the origin of eusociality, in which members of groups belong to more than one generation and divide labor in a way that sacrifices at least some of their personal interests to that of the group.

In the old, conventional image, that of kin selection and the "selfish gene," the group is an alliance of related individuals that cooperate with one another because they are related. Although potentially in conflict, they nonetheless accede altruistically to the needs of the colony. Workers are willing to surrender some or all of their personal reproductive potential this way because they are kin and share genes by common descent with the queen and their fellow colony members. Thus each favors its own "selfish" genes by promoting identical genes that also occur in its fellow group members. Even if it gives its life for the benefit of a mother or sister, such an insect will increase the frequency of genes it shares with those relatives. The genes increased will include those that produced the altruistic behavior. If other colony members behave in similar manner, the colony as a whole can defeat groups composed of exclusively selfish individuals.

Among its basic flaws is that it treats the division of labor between the mother queen and her offspring as “cooperation,” and their dispersal from the mother nest as “defection.” But, as we pointed out, the fidelity to the group and the division of labor are not an evolutionary game. The workers are not players. When eusociality is firmly established, they are extensions of the queen’s phenotype, in other words alternative expressions of her personal genes and those of the male with whom she mated. In effect, the workers are robots she has created in her image that allow her to generate more queens and males than would be possible if she were solitary.

The origin and evolution of eusocial insects can be viewed as processes driven by individual-level natural selection. It is best tracked from queen to queen from one generation to the next, with the workers of each colony produced as phenotypic extensions of the mother queen. The queen and her offspring are often called superorganisms, but they may equally be called organisms. The worker of a wasp colony or ant colony that attacks you when you disturb its nest is a product of the mother queen's genome. The defending worker is part of the queen's phenotype, as teeth and fingers are part of your own phenotype.

There may immediately seem to be a flaw in this comparison. The eusocial worker, of course, has a father as well as a mother, and therefore partly a different genotype from that of the mother queen. Each colony comprises an array of genomes, while the cells of a conventional organism, being clones, compose only the one genome of the organism's zygote. Yet the process of natural selection and the single level of biological organization on which its operations occur are essentially the same. Each of us is an organism made up of well-integrated diploid cells. So is a eusocial colony. As your tissues proliferated, the molecular machinery of each cell was either turned on or silenced to create, say, a finger or a tooth. In the same way, the eusocial workers, developing into adults under the influence of pheromones from fellow colony members and other environmental cues, are directed to become one particular caste. It will perform one or a sequence of tasks out of a repertory of potential performances hardwired in the collective brains of the workers. For a period of time, rarely throughout its life, it is a soldier, a nest builder, a nurse, or an all-purpose laborer.

Of course, it is a fact that genetic diversity of traits among the workers of eusocial colonies not only exists but functions on behalf of the colony—as documented for disease resistance and climate control of the nest. Would this make the colony a group of individuals, each of whom (in the perspective of kin selection theory) seeks to maximize the fitness of its own genes? That such need not be the case becomes apparent if one views the queen's genome as consisting of parts relatively low in the variety of its alleles (different forms of each gene) whenever the traits they prescribe need to be inflexible, and yet in the same genome other parts are high in the variety of its alleles whenever those traits need to be flexible. Genetic inflexibility is a necessity of worker caste systems and the means by which they are organized and their personal labor distributed. In contrast, genetic flexibility in worker response is favored in disease resistance by the colony and in climate control inside the nest. The more genetic types that exist in a colony, the more likely that at least a few will survive if a disease sweeps through the nest. And the greater the breadth of sensitivity in detecting deviations from the desired temperature, humidity, and atmosphere, the closer these components of the nest environment can be held to their optimum for life of the colony.

There is no important genetic difference between the queen and her daughters in the potential caste they can become. Each fertilized egg, from the moment the queen and male genomes unite, can become either a queen or a worker. Its fate depends on the particularities of the environment experienced by each colony member during its development, including the season in which it is born, the food it eats, and the pheromones it detects. In this sense the workers are robots, produced by the mother queen as ambulatory parts of her phenotype.

A state of conflict often results when workers try to reproduce on their own. The other workers typically thwart the usurpers, thus protecting the queen's primacy. They may simply drive the would-be usurper away from the brood chamber whenever she tries to lay eggs. They may pile on the offender to punish her, perhaps severely enough to cripple or kill her. If she manages to sneak her eggs into the brood chamber, her co-workers recognize their different odor and remove and eat them. [and more about insect colonies not included, as well as why kin selection isn't true and other detailed complexities of his theory]

Cheaters may win out within a group, gaining a larger share of resources, avoiding dangerous tasks, and breaking rules; but colonies of cheaters lose to colonies of cooperators.

Individual-versus-group selection results in a mix of altruism and selfishness, of virtue and sin, among the members of a society. If one member devotes its life to service over marriage, that individual benefits society despite leaving no offspring. A soldier going into battle will benefit his country, but runs a higher risk of death than one who doesn't. A cheater, by contrast, saves his own energy, reduces bodily risk, and passes the social cost to others.

Wilson sees controlled fire, bipedal locomotion, hunting and so on as important innovations in human evolution, but not prime movers.

What is human nature?

The very existence of human nature was denied during the last century by most social scientists.  They clung to the dogma, in spite of mounting evidence, that all social behavior is learned and all culture is the product of history passed from one generation to the next.  Leaders of conservative religions, in contrast, have been prone to believe that human nature is a fixed property vouchsafed by God—to be explained to the masses by those privileged to understand his wishes.

Human nature is not the genes underlying it. They prescribe the developmental rules of the brain, sensory system, and behavior that produce human nature. Nor is human nature the universals of culture found across all societies: age-grading, athletic sports, bodily adornment, calendar, cleanliness training, community organization, cooking, cooperative labor, cosmology, courtship, dancing, decorative art, divination, division of labor, dream interpretation, education, eschatology, ethics, ethnobotany, etiquette, faith healing, family feasting, fire making, folklore, food taboos, funeral rites, games, gestures, gift giving, government, greetings, hair styles, hospitality, housing, hygiene, incest taboos, inheritance rules, joking, kin groups, kinship nomenclature, language, law, luck superstitions, magic, marriage, mealtimes, medicine, obstetrics, penal sanctions, personal names, population policy, postnatal care, pregnancy usages, property rights, propitiation of supernatural beings, puberty customs, religious ritual, residence rules, sexual restrictions, soul concepts, status differentiation, surgery, tool making, trade, visiting, weaving, and weather control.

Human nature is the epigenetic rules, the inherited regularities of mental development. These rules are the genetic biases in the way our senses perceive the world, the symbolic coding by which we represent the world, the options we open to ourselves, and the responses we find easiest and most rewarding psychologically to make. In ways that are beginning to come into focus at the physiological and, in a few cases, the genetic level, the epigenetic rules alter the way we see and linguistically classify color, for example. They determine the individuals we as a rule find sexually most attractive. They cause us to evaluate the aesthetics of artistic design according to degree of complexity. They lead us differentially to acquire fears and phobias concerning dangers in the environment (as from snakes and heights); they induce us to communicate with certain facial expressions and forms of body language, to bond with infants, and so on across a wide range of categories in behavior and thought. Most, like incest avoidance, are evidently very ancient, dating back millions of years in mammalian ancestry. Others, such as the stages of linguistic development, are uniquely human and probably only hundreds of thousands of years old.

The behaviors created by epigenetic rules are not hardwired like reflexes. It is the epigenetic rules instead that are hardwired, and hence compose the true core of human nature.  These behaviors are learned, but the process is what psychologists call ‘prepared.’ In prepared learning, we are innately predisposed to learn and thereby reinforce one option over another. We are “counter prepared” to make alternative choices, or even actively to avoid them.  For example, we are prepared to learn a fear of snakes very quickly yet not prepared by instinct to treat other reptiles like turtles and lizards with such a degree of revulsion.

The elaboration of culture depends upon long-term memory, and in this capacity humans rank far above all animals. The vast quantity stored in our immensely enlarged forebrains makes us consummate storytellers. We summon dreams and recollections of experience from across a lifetime and use them to create scenarios, past and future. We live in our conscious mind with the consequence of our actions, whether real or imagined. Played out in alternative versions, our inner stories allow us to override immediate desires in favor of delayed pleasure. By long-range planning we defeat, for a while at least, the urging of our emotions. This inner life is why each person is unique and precious. When one dies, an entire library of both experience and imaginings is extinguished.

The crucial difference between human cognition and that of other animal species, including our closest genetic relatives, the chimpanzees, is the ability to collaborate for the purpose of achieving shared goals and intentions.

Homo erectus advanced to sociality — a level of cooperation among groups. Small groups had begun to establish campsites. They selected defensible sites and fortified them, with some members of the group staying for extended periods to protect the young while others hunted.

The human specialty is intentionality, fashioned from an extremely large working memory. We have become the experts at mind reading, and the world champions at inventing culture. We not only interact intensely with one another, as do other animals with advanced social organizations, but to a unique degree we have added the urge to collaborate. We express our intentions as appropriate to the moment and read those of others brilliantly, cooperating closely and competently to build tools and shelters, to train the young, to plan foraging expeditions, to play on teams, to accomplish almost all we need to do to survive as human beings.

Humans are successful not because of an elevated general intelligence that addresses all challenges, but because they are born to be specialists in social skills. By cooperating through the communication and the reading of intention, groups accomplish far more than the effort of any one solitary person.

The highest level of social intelligence was acquired when our ancestors came to combine three particular attributes. They developed shared attention – the tendency to pay attention to the same object as others. They acquired a high level of the awareness they needed to act together to achieve a common goal or thwart others. And they acquired a "theory of mind," the recognition that their own mental states would be shared by others.

After that, languages comparable to those of today were invented, at least 60,000 years ago. Language was the grail of human social evolution. It bestowed almost magical powers on the human species, using arbitrary symbols and words to convey meaning in an infinite number of messages. It can express, to at least a crude degree, everything we can perceive, every dream or experience we can imagine, and every mathematical statement our analyses can construct.

Wilson then goes on to explain why the bee waggle dance and other communications of other animals are not language. Among the reasons human language qualifies: we can refer to objects and events that are not in the vicinity or that do not even exist; we stress particular words to convey emphasis and mood; and we can be indirect and insinuate rather than say something baldly, leaving open plausible deniability.

Turn-taking during conversations turns out to be similar no matter what the culture – speakers tend to minimize both overlap and the gaps between turns.

Are people innately good, but corruptible by the forces of evil? Or, are they instead innately wicked, and redeemable only by the forces of good? People are both. And so it will forever be unless we change our genes, because the human dilemma was foreordained in the way our species evolved, and therefore an unchangeable part of human nature.  Human beings and their social orders are intrinsically imperfectible and fortunately so. In a constantly changing world, we need the flexibility that only imperfection provides.

The dilemma of good and evil was created by multilevel selection, in which individual selection and group selection act together on the same individual but largely in opposition to each other. Individual selection is the result of competition for survival and reproduction among members of the same group. It shapes instincts in each member that are fundamentally selfish with reference to other members.  In contrast, group selection consists of competition between societies, through both direct conflict and differential competence at exploiting the environment.  Group selection shapes instincts that tend to make individuals altruistic toward one another but not toward members of other groups. 

Individual selection is responsible for much of what we call sin, while group selection is responsible for the greater part of virtue.  Together they have created the conflict between the poorer and the better angels of our nature.

Individual selection, defined precisely, is the differential longevity and fertility of individuals in competition with other members of the group.  Group selection is differential longevity and lifetime fertility of those genes that prescribe traits of interaction among members of the group, having arisen during competition with other groups.

How to think out and deal with the eternal ferment generated by multilevel selection is the role of the social sciences and humanities. How to explain it is the role of the natural sciences, which, if successful, should make the pathways to harmony among the three great branches of learning easier to create. The social sciences and humanities are devoted to the proximate, outwardly expressed phenomena of human sensations and thought. In the same way that descriptive natural history is related to biology, the social sciences and humanities are related to human self-understanding. They describe how individuals feel and act, and with history and drama they tell a representative fraction of the infinite stories that human relationships can generate. All of this, however, exists within a box. It is confined there because sensations and thought are ruled by human nature, and human nature is also in a box. It is only one of a vast number of possible natures that could have evolved. The one we have is the result of the improbable pathway followed across millions of years by our genetic ancestors that finally produced us. To see human nature as the product of this evolutionary trajectory is to unlock the ultimate causes of our sensations and thought. To put together both proximate and ultimate causes is the key to self-understanding, the means to see ourselves as we truly are and then to explore outside the box.

An iron rule exists in social evolution. It is that selfish individuals beat altruistic individuals, while groups of altruists beat groups of selfish individuals. The victory can never be complete; the balance of selection pressures cannot move to either extreme. If individual selection were to dominate, societies would dissolve.  If group selection were to dominate, human groups would come to resemble ant colonies. 
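A toy numerical sketch can make the iron rule concrete. The example below is not Wilson's model; the payoff numbers (a base fitness of 1.0, a 0.3 cost borne by each altruist, and a group benefit of 2.0 scaled by the group's share of altruists) are arbitrary assumptions chosen only to exhibit the pattern: the selfish do better inside every group, while the group with more altruists out-produces the group with fewer.

```python
# Toy illustration of multilevel selection: within-group vs. between-group outcomes.
BASE, COST, BENEFIT = 1.0, 0.3, 2.0   # altruism costs the actor, benefits groupmates

def fitness(is_altruist, frac_altruists):
    return BASE - (COST if is_altruist else 0.0) + BENEFIT * frac_altruists

for label, frac in [("mostly altruistic group", 0.8), ("mostly selfish group", 0.2)]:
    w_alt, w_self = fitness(True, frac), fitness(False, frac)
    group_mean = frac * w_alt + (1 - frac) * w_self
    print(f"{label}: altruist fitness {w_alt:.2f} < selfish fitness {w_self:.2f}; "
          f"group mean output {group_mean:.2f}")

# Output: selfish beat altruists inside both groups (individual selection), yet the
# altruist-rich group out-produces the selfish-rich one (group selection).
```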

Each individual is linked to a network of other group members. Its own survival and reproductive capacity are dependent in part on its interaction with others in the network.  What counts is the propensity to form the myriad alliances, favors, exchanges of information, and betrayals that make up daily life in the network.

When villages and chiefdoms emerged around 10,000 years ago, the nature of networks changed, growing in size dramatically.  Groups became overlapping, hierarchical, and porous. Social existence became far less stable than when we were hunter gatherers. In industrialized nations, networks grow to a complexity that is bewildering to the Paleolithic mind we inherited. Our instincts still desire the tiny, united band-networks, unprepared for civilization.

This trend has thrown confusion into the joining of groups, one of the most powerful human impulses. Every person is a compulsive group-seeker, hence an intensely tribal animal. This need is satisfied by an extended family, organized religion, ideology, ethnic group, or sports club, or some combination of these.

To be human is also to level others, especially those who appear to receive more than they have earned. To steer past jealous rivals, people adopt modesty of demeanor as a stratagem, and they enhance their reputations through reciprocity, so that altruism and cooperativeness are recognized, as the expression "do good and talk about it" suggests. Since everyone knows the game, people are willing to counter it if they can, acutely sensitive to hypocrisy and ready to level those with less than impeccable credentials. Levelers have a formidable armament of roasts, jokes, parodies, and mocking laughter to weaken the haughty and overly ambitious. Some studies suggest that leveling is beneficial: societies that do best for their citizens in quality of life, from education to medical care to crime control, also have the lowest income differential between the wealthy and the poor.

People also enjoy seeing punishment of those who don't cooperate, whether freeloaders, criminals, or the idle rich. They are also willing to administer justice themselves, scolding motorists who run a red light, blowing the whistle on their employer, and so on.

Societies are mistaken to disapprove of homosexuality because gays have different sexual preferences and reproduce less. Their presence should be valued instead for what they contribute constructively to human diversity. A society that condemns homosexuality harms itself.

SCIENCE & RELIGION

The conflict between science and religion began in earnest during the late 19th century, when scientists came to see humans as a product of evolution by natural selection. By 1998, members of the U.S. National Academy of Sciences, an elite elected group, were approaching complete atheism. Only 10% testified to a belief in either God or immortality, with just 2% of them biologists.

But in the late 1990s, over 95% of Americans believed in God or some kind of universal life force, and 45% attended church more than once a week. Europeans are puzzled by this widespread biblical literalism and by the denial of biological evolution by half the U.S. population.

The evidence in great abundance points to organized religion as an expression of tribalism. Every religion teaches its adherents that they are a special fellowship and that their creation story, moral precepts, and privilege from divine power are superior to those claimed in other religions. Their charity and other acts of altruism are concentrated on their coreligionists; when extended to outsiders, it is usually to proselytize and strengthen the size of the tribe and its allies. No religious leader ever urges people to consider rival religions and choose the one they find best for their person and society. The conflict among religions is often instead an accelerant, if not a direct cause, of war. Devout believers value their faith above all else and are quick to anger if it is challenged. The power of organized religions is based upon their contribution to social order and personal security, not the search for truth. Acceptance of bizarre creation myths binds the members together.

[and a great deal more about religion, the arts, music]

Where are we going?

By any conceivable standard, humanity is far and away life's greatest achievement. We are the mind of the biosphere, the solar system, and who can say – perhaps the galaxy. Our ancestors were one of the few lines ever to evolve eusociality, with group members across two or more generations staying together, cooperating, caring for the young, and dividing labor in ways that favor some members over others. We hit upon symbol-based language, literacy, and science-based technology that gave us an edge over the rest of life. We are godlike.

How did we get here? Apparently through multilevel natural selection, with group and individual selection acting in combination. This is why we are conflicted, feeling the pull of conscience, of heroism against cowardice, of truth against deception, of commitment against withdrawal. It is our fate to be tormented with large and small dilemmas as we daily wind our way through the risky, fractious world that gave us birth. We have mixed feelings. We are not sure of this or that course of action. We know too well that no one is immune from making a catastrophic mistake, nor any organization free of corruption.

We are pleased to endlessly watch and analyze our relatives, friends, and enemies. Gossip has always been the favorite occupation in every society. To weigh as accurately as possible the intentions and trustworthiness of those who affect us is very human and highly adaptive, as is judging the impact of others on the welfare of the group as a whole. We are geniuses at reading the intentions of others as they struggle with their own angels and demons. Civil law is how we moderate the damage of inevitable failures.

Confusion is compounded by humanity living in a largely mythic, spirit-haunted world, which we owe to our early history. When our ancestors realized their mortality, about 100,000 years ago, they sought an explanation of who they were and the meaning of the world. They must have asked where the dead go, and most decided they went to the spirit world. The dead could be seen again in dreams, or through drugs or self-inflicted privation.

The best, the only way our forebears could explain existence was a creation myth, which, without exception, affirmed the superiority of the tribe that invented it over all other tribes. Every religious believer saw himself as a chosen person. To question the sacred myths is to question the identity and worth of those who believe them. That is why skeptics, even those committed to equally absurd myths, are disliked and can risk imprisonment or death.

Organized religions preside over the rites of passage, from birth to maturity, from marriage to death. They provide the best a tribe has to offer: a committed community that gives heartfelt emotional support, welcomes, and forgives. Beliefs in immortality and divine justice give comfort and steel resolution and bravery. Religions have been the source of much of the best of the creative arts.

Why then is it wise to openly question the myths and gods of organized religions? Because they are stultifying and divisive. Because each is just one version of a competing multitude of scenarios that might possibly be true. Because they encourage ignorance, distract people from recognizing problems of the real world, and often lead them in wrong directions into disastrous actions. True to their biological origins, they passionately encourage altruism within the membership, but rarely beyond it.

A good first step toward the liberation of humanity from the oppressive forms of tribalism would be to repudiate the claims of those in power who say they speak for God, are a special representative of God, or have exclusive knowledge of God's divine will. Among these purveyors of theological narcissism are would-be prophets, the founders of religious cults, impassioned evangelical ministers, ayatollahs, imams of the grand mosques, chief rabbis, rosh yeshivas, the Dalai Lama, and the pope. The same is true for dogmatic political ideologies based on unchallengeable precepts, left or right, especially when justified with the dogmas of organized religions.

Another argument for a new Enlightenment is that we are alone on this planet with whatever reason and understanding we can muster, and hence solely responsible for our actions as a species. The planet we have conquered is not just a stop along the way to a better world out there in some other dimension. Surely one moral precept we can agree on is to stop destroying our birthplace, the only home humanity will ever have. The evidence for climate warming, with industrial pollution as the principal cause, is now overwhelming. Also evident is the rapid disappearance of tropical forests, grasslands, and other habitats where most of the diversity of life exists. Half of living species could be extinct by the end of the century.

Science is not just another enterprise like medicine or engineering or theology. It is the wellspring of all the knowledge we have of the real world that can be tested and fitted to preexisting knowledge. It is the arsenal of technologies and inferential mathematics needed to distinguish the true from the false. It formulates the principles and formulas that tie all this knowledge together. Science belongs to everybody. Its constituent parts can be challenged by anybody in the world who has sufficient information to do so. It is not just another way of knowing, coequal with religious faith. The conflict between scientific knowledge and the teachings of organized religions is irreconcilable. The chasm will continue to widen and cause no end of trouble as long as religious leaders go on making unsupportable claims about supernatural causes of reality.


Offshore wind turbines: Expensive, risky, and last just 15 years

Preface: The Department of Energy's high wind-penetration plans require a great deal of offshore wind. But is it possible, affordable, or wise to do this? Corrosion leads to a short lifespan of just 15 years. To reduce maintenance, offshore turbines use limited rare earth metals. A 500 MW offshore wind farm could cost $3.04 billion (Table A-1). The materials (i.e., steel and concrete) needed for 730,099 2 MW windmills in America are staggering. A 6 MW offshore turbine weighs 757 tons, nearly 2.5 times as much as a land-based turbine (Table 4-3), as much as 505 cars of 3,000 pounds each (see the arithmetic sketch below). The latest turbines proposed off New Jersey will be 800 feet tall (Grandoni 2020). Most components are made in China and Europe, so supply chain disruptions would delay repairs or repowering.
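A quick sanity check of the weight comparison above, sketched in Python (the component weights come from Table 4-3 later in the post; treating "tons" as U.S. short tons of 2,000 pounds is my assumption):

```python
# Rough check of the turbine-weight comparisons in the preface.
offshore_total_tons = 757   # 6 MW offshore turbine assembly (Table 4-3)
onshore_total_tons = 306    # land-based turbine assembly (Table 4-3)
car_lb = 3000               # weight of one car used in the comparison

ratio = offshore_total_tons / onshore_total_tons
cars = offshore_total_tons * 2000 / car_lb

print(round(ratio, 2))   # ~2.47, i.e. nearly 2.5x the land-based assembly weight
print(round(cars))       # ~505 three-thousand-pound cars per 6 MW offshore turbine
```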

Wind turbines can be battered, rusted, corroded, or destroyed by tides, storms, hurricanes, lightning, icebergs, floes, large waves, and marine growth, shortening their lives and increasing maintenance and operation costs.

Offshore wind turbines are often stood upright on ships and moved to their location, which can't be done if there are any bridges in the way. As of 2021, the U.S. had only five utility-scale offshore wind turbines, all off Rhode Island.

Why build risky, expensive, short-lived offshore wind farms if a renewable electric grid may not be possible, given the lack of a national grid, the lack of commercial utility-scale energy storage, and the insurmountable issue of seasonal wind and solar? Peak oil occurred in 2018, so sometime within the next 10 years oil shocks will hit. Oil will be too precious for building such contraptions and will eventually be rationed mainly to agriculture, as the 1980 DOE standby rationing plan called for, and after that the center will not hold. Rather than wind turbines, remaining energy should be used to go back to organic (regenerative) agriculture, to bury nuclear wastes, and for myriad other efforts toward returning to Wood World for energy and infrastructure, like all civilizations before fossil fuels.

Floating offshore wind turbines

Fixed-bottom offshore wind turbines are limited to water at most about 165 feet deep. Some 60% of the available offshore wind resource in the U.S. is beyond the reach of fixed-bottom foundation turbines, including most of the West Coast.

The only commercial floating wind farm today is Hywind Scotland. Each Siemens SWT-6.0-154 turbine has a towerhead mass of around 350 tons and sits on a foundation with roughly 6,060 tons of solid ballast and a displacement of some 13,230 tons. The turbine is attached to a spar buoy that extends 260 feet below the surface and is tethered up to 390 feet below (Deign 2020).


***

Navigant. 2013. U.S. Offshore Wind Manufacturing and Supply Chain Development. U.S. Department of Energy.

https://www1.eere.energy.gov/wind/pdfs/us_offshore_wind_supply_chain_and_manufacturing_development.pdf

Can we afford offshore wind turbines? A 500 MW farm costs about $3 billion and will last only 15 years


Table 4-3. Comparison of major land-based and offshore turbine component weights (tons)

Component         Land-based (tons)   Offshore (tons)
Rotor             62                  156
Nacelle           82                  316
Tower             162                 285
Total assembly    306                 757

Developing offshore wind in the U.S. introduces issues such as:

  1. Hurricane risk along the southern portions of the Eastern seaboard and Gulf Coast regions, especially extreme wind gusts. Extreme loads might also result from hurricane-generated waves, sustained high winds, increased wave frequency, rapid directional wind changes, and other forces.
  2. Surface and blade icing in the freshwater Great Lakes and other northern latitudes. Icing risks primarily manifest in the form of surface ice, which can place significant additional loads on turbine foundations and towers.
  3. Freshwater surface-ice floes will be a major design driver for any offshore turbines.
  4. Potential for earthquakes and higher average sea-states may increase fatigue concerns on offshore wind turbine structures.
  5. Bearings are subject to a high level of stress over the lifetime of the turbine, and therefore represent significant risk in the case of unexpected failure. This is particularly the case for offshore wind turbines, where downtime and difficult access for maintenance can have expensive repercussions.
  6. Towers are too heavy to move from inland locations. Single-car weight limits for U.S. railroads are approximately 140 tons (BNSF 2012, Union Pacific 2012), while towers for 5- to 6-MW offshore turbines range from 280 to 325 tons.
  7. Wider bases for offshore wind towers may exceed underpass clearances for either rail or road transportation. Towers for today's 80-meter land-based turbines (with diameters of 4.5 meters) already encounter difficulties when it comes to planning trucking routes, and next-generation land-based towers (105 meters tall and 5.4 meters in diameter) are likely to face even more restrictions (AWEA 2012). Offshore wind turbine towers are likely to range from 5 to 6.75 meters in diameter.
  8. In addition to traffic congestion, overland transport of wind turbines can cause road damage, due to the repeated passage of heavy-load convoys. Moving wind turbine components also requires increasingly complex coordination. Scheduling between trucking companies, railroad and port operators, and city and state authorities can be very onerous. Moreover, as components grow larger, transportation costs will increase.
  9. Reliability is more critical offshore than onshore due to multiple factors. The offshore environment can be much harsher, with high winds, significant wave loads, and corrosion-producing salt water. Offshore turbines must be manufactured to be able to withstand this environment.
  10. Harsh weather offshore not only threatens the performance of wind turbines, it can also inhibit access to turbines by maintenance staff. The inability to reach and repair sub-performing or inoperable turbines can cause significant lost power sales. Severe weather also increases safety concerns for maintenance crews.
  11. Compared to the land-based wind market, the offshore wind market entails many more risks that increase the level of quality needed. The marine environment can be much harsher than a land-based site. Corrosion can damage external as well as internal areas of the turbine. Moreover, access constraints to an offshore site, often caused by poor weather or lack of availability of appropriate vessels, can increase O&M costs as well as reduce revenue due to power losses.

Other capital costs

  • Specialized vessels to install turbines offshore can cost $100 to $250 million each. Three primary types of vessels are used in the installation of offshore wind turbines and foundations – heavy-lift vessels combined with working barges; jack-up barges without propulsion; and self-propelled jack-up vessels. In addition, subsea cable installation requires a specialized type of vessel. The continuing trend toward larger offshore turbines adds uncertainty (e.g., what size or type of vessel to construct) for investors and companies looking to build new ships to serve the offshore market.
  • Land cost. Fabrication and construction need to be near or along the shore because offshore turbines are too large and heavy to move overland, and these facilities require a large amount of land.
  • $1.3 billion port facility. Includes blade, nacelle, tower, and foundation manufacturing facilities; new cranes, heavy-capacity terminals, staging areas, warehouse space, and infrastructure to connect to land-based transportation. $150 million for dredging and wharf reinforcements, $500 million for new infrastructure, including berthing space, storage area, and staging area; and nearly $50 million to improve lift capacity. An additional $700 million investment is assumed to go toward industrial manufacturing facilities. In addition to the blade and nacelle facilities included in the mid scenario, the high scenario includes a $240 million tower manufacturing plant and a $190 million foundation manufacturing facility.

Table D-7. Per-MW Turbine Component Costs for Hypothetical U.S. Offshore Wind Project. A 6 MW turbine would cost $11 million; this doesn't include the substructure/foundation, electric cables, substation, etc.

Offshore wind limited by supplies of rare earth metals

This study reveals the need for permanent magnet generators (PMGs), and a concern that this could exhaust limited supplies of the rare earth metals neodymium (Nd), dysprosium (Dy), and praseodymium (Pr). PMGs allow simpler, more robust drivetrains able to withstand extreme marine conditions, torques, forces, operating speeds, and temperatures. Global demand for rare earth metals for PMGs may reach 12,200 tons per year by the end of 2016, yet only 7,840 tons of neodymium and 112 tons of dysprosium are currently produced per year.
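Putting those supply and demand figures side by side (a rough sketch; praseodymium production is not quoted in the text, so the supply total below covers only the two metals given):

```python
# Compare projected PMG rare-earth demand with the production figures quoted above.
projected_demand_tpy = 12_200          # tons/year of rare earths for PMGs by end of 2016
current_production_tpy = {
    "neodymium": 7_840,
    "dysprosium": 112,
}

supply = sum(current_production_tpy.values())
shortfall = projected_demand_tpy - supply
print(supply, shortfall)  # 7952 t/yr of quoted supply vs. a ~4,248 t/yr gap
```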

The enormous size of offshore wind turbines and their components is anticipated to make it increasingly difficult, if not impossible, to move turbine components over land.

Because of this, coastal fabrication of blades, nacelles, towers, foundations, and substructures may be an industry requirement in the near future. This will require very large swaths of coastal land—Vestas's recently abandoned proposed facility at Sheerness, U.K., was planned to be on the order of 70 hectares (173 acres).

Widespread deployment off the Pacific Coast of the U.S. and the Gulf of Maine, which are characterized by deeper near-shore water depths, will likely require floating foundations. Despite the theoretical benefits of floating platforms, it is not yet clear whether floating platforms are technically or economically viable over the long term.

A lack of current U.S. offshore demand means no domestic manufacturing facilities are currently serving the offshore wind market.

Three major barriers combine to have a dampening effect on the development of the U.S. offshore wind supply chain:

  1. the high cost of offshore wind energy;
  2. infrastructure challenges such as transmission and purpose-built ports and vessels;
  3. regulatory challenges such as new and uncertain leasing and permitting processes.

The result is that European and Asian suppliers who are currently supplying offshore wind turbines and components have a competitive advantage over their U.S. counterparts. The U.S. offshore wind industry faces a “chicken-and-egg” problem where plants will not be built unless the cost is reduced, and local factories (which will help bring down the cost) will not be built until there is a proven domestic market.

Figure 1. Offshore Wind Plant Capital Cost Breakdown

  • Turbine: 33%
  • Support structure/foundation: 22%
  • Logistics and installation: 19%
  • Electrical infrastructure: 12%
  • Construction financing: 12%
  • Development/services: 2%
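For a sense of scale, here is a minimal Python sketch (my own arithmetic, not the report's) that applies these Figure 1 shares to the $2.86 billion, 500 MW installed-cost figure quoted later in this post. The two sources are not necessarily consistent, so treat the dollar splits as rough orders of magnitude.

# Illustrative only: applies the Figure 1 cost shares to the $2.86 billion,
# 500 MW installed-cost figure quoted later in this post.

total_capital_cost_usd = 2.86e9   # 500 MW at roughly $5,700/kW
cost_shares = {
    "Turbine": 0.33,
    "Support structure / foundation": 0.22,
    "Logistics and installation": 0.19,
    "Electrical infrastructure": 0.12,
    "Construction financing": 0.12,
    "Development / services": 0.02,
}

assert abs(sum(cost_shares.values()) - 1.0) < 1e-9  # shares sum to 100%

for item, share in cost_shares.items():
    print(f"{item:32s} ~${share * total_capital_cost_usd / 1e6:6.0f} million")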

Ability to open new markets: Innovations including floating substructures, hurricane tolerance, sea- and surface-ice tolerance, and transitional water-depth foundations are anticipated to have the greatest ability to open new markets to offshore wind technology.

Installation Vessels

Historically, vessels converted from other analogous industries (e.g., oil and gas) have served the majority of the marine construction and transport needs of the industry. The lower cost of entry to convert an existing vessel, and the versatility of these machines has been attractive. However, offshore wind differs from offshore oil and gas; there are far more units (foundations, pilings, and turbines) to be installed and significantly more movement from one turbine site to the next. Dedicated offshore wind installation vessels have been constructed and are playing an increasing role in the European offshore wind market. As the industry grows and matures, the development of assembly-line style vessel coordination—where one vessel installs the foundation and is followed by a series of vessels installing the tower, nacelle, and blades in order—is a possibility. Depending on specific design specifications, floating foundations will likely require simpler vessel designs; turbines could be fully assembled on land and then simply towed with traditional or modestly modified tugs.

Despite the advantages of specialized vessels and integrated logistics solutions, realizing these opportunities requires significant infrastructure development and investment. Existing German ports have invested $100 million to $250 million in upgrades and infrastructure to support offshore wind. More fully integrated conceptual designs in Hull and Sheerness in the U.K. or Edinburgh, Scotland, could result in new infrastructure investment on the order of $500 million (NLC 2010). Dedicated installation vessels are estimated to be on the order of $100 million and higher (Musial and Ram 2010), with some recent estimates exceeding $250 million per vessel. Generating the demand volume to drive the level of investment that will be needed to realize the cost-reduction potential of more sophisticated and integrated manufacturing and vessel fleets will be a challenge moving forward.

Offshore wind is a capital-intensive industry, and significant investments will be required to realize the efficiencies offered by opportunities such as integrated manufacturing and port facilities or assembly-line vessels. Stability in both demand and the overall technology platform will likely be needed for such sizable investments to occur.

Investors will be hesitant to invest heavily in new technology platforms until a proven track record is achieved.

The future U.S. offshore wind market would have to compete with the European and Asian offshore markets as well as emerging land-based markets for manufacturer investment dollars.

While some U.S. manufacturing that supplies the land-based wind market is running at part load, manufacturing larger components for the offshore market may require significant investments in re-tooling or an altogether new facility located near the coasts where offshore projects are being developed.

Many of the large European turbine suppliers are increasingly outsourcing components and materials to Asia, particularly to China, which has the world’s largest wind power equipment manufacturing base. Although some OEMs hesitate to move away from established suppliers due to concerns over quality, economic pressures from declining turbine prices are driving manufacturers to accept higher risks to remain competitive (BTM 2011).

In the offshore market, the recent introduction of multi-MW turbines (mostly 5-6 MW) by turbine manufacturers in both Europe and China increases such supply concerns over these strategic components (e.g., bearings and other forgings) for these larger turbines. This is partly because it takes time for the supply chain to prepare for mass production of such large parts that can meet OEMs’ increased quality requirements for offshore turbines. Moreover, in some cases these components are larger than have ever been produced for any industry.

Gearboxes and Generators

The wind turbine gearbox serves the purpose of converting the high torque from the main shaft into the lower-torqued, high-speed shaft that drives the generator. Long-bladed wind turbine rotors produce substantial torque while turning the main drivetrain shaft at relatively low rotational rates. As such, the gearbox is one of the most mechanically advanced components of a wind turbine, consisting of precision gears, bearings, shafts, and other parts that experience extreme forces, operating speeds, and temperatures. Reliability is paramount in offshore applications due to the logistical challenges of maintenance and repair.

Offshore gearbox design, therefore, must be robust enough to withstand the torques experienced by large, multi-megawatt machines in marine conditions. Recently, wind turbine design optimization philosophy has been shifting from a predominant reliance on gearboxes towards an increased use of direct-drive technology with large permanent magnet generators (PMGs). Especially in the growing offshore market, this trend has been motivated by customer demand for increased reliability (following experiences with bearing-related gearbox failures) and the higher power-to-weight ratios attainable in turbines with direct drives. Across the entire wind power market (both land-based and offshore), direct drives (annular and PMGs) represented 17.6% of market share in 2010 and 21.2% in 2011. Based on manufacturer announcements, this trend will likely continue toward a 25% market share.

PMGs. Permanent magnets are used to varying degrees in both direct-drive turbines (DD-PMG) and fast- or medium-speed, geared turbines (FSG/MSG) fitted with a PMG. Twenty different European and North American firms are capable of supplying PMGs to the wind industry, with three (ABB, The Switch, and Converteam) currently supplying the European offshore market.

Market actors have indicated an industry shift toward PMGs, whether direct drive or FSG/MSG (BTM 2011). As stated above, this shift, together with the number of global suppliers manufacturing doubly-fed induction generators (DFIGs), makes it unlikely that the market will face a shortage of such generators in the near term. Based on the early stage of direct drives' application to offshore turbines and current manufacturing capacities, the offshore market does not currently face a shortage of capacity for manufacturing PMGs either.

Based on the likelihood that units must be designed and manufactured specifically for offshore applications, gearbox and generator manufacturing lines are likely to have limited transferability to land-based turbines. However, an individual facility could reasonably operate multiple lines intended to supply both land-based and offshore wind turbines, particularly if located on the coast. Size restrictions, however, may prevent larger offshore gearboxes and generators from shipping by rail.

Rare Earth Mineral Supply

Currently, the majority (97%) of rare earth elements come from mines in China, and recent material shortages and price increases (largely driven by Chinese export quotas) have drawn attention to the cost risks associated with PMGs’ reliance on these materials. For example, the cost index for neodymium has fluctuated by up to 600% over the past few years (BTM 2012). Assuming that PMG demand increases from its current 10% share of the overall wind turbine market, global demand may reach 12,200 tons per year of rare earth metals for PMGs by the end of 2016. Currently, the leading global supplier of permanent magnets (PMs) to the wind industry is the Chinese company JLMAG Rare-Earth Co. Ltd, which has a worldwide market share of approximately 60% (BTM 2011).

Although manufacturers expect a tight market for rare earth elements over the next 2-3 years, current trends suggest both potential increases in supply from mines outside of China as well as adaptive strategies to ease demand among turbine suppliers. Proven reserves of rare earth elements exist in the U.S., Canada, Australia, Malaysia, South Africa, and Brazil, and investors are moving to develop new mines or re-establish prior operations in these locations. In the U.S., this includes the Mountain Pass (California) and Bear Lodge (Wyoming) mines, with investment and development activity from RCF, Goldman Sachs, Traxys, and Rare Element Resources. However, industry consensus suggests that it will take 3-4 years before these new mines are producing significant capacities and 6-10 years to reach maximum capacity. In terms of adaptive strategies, both turbine suppliers and generator manufacturers are exploring opportunities such as hedging, long-term contracts, strategic joint ventures, acquisitions of rare earth mining companies or permanent magnet suppliers, and research into diversification away from rare earth elements (BTM 2011).

Turbine Electronics: Power Converters and Power Transformers

The use of power converters in variable-speed wind turbines enables the variable generator frequency and voltage of the turbine to be efficiently converted to the fixed frequency/voltage of the grid. The presence of converters inside modern wind turbines improves their performance and offers enlarged grid-friendly control capabilities. This is a rapidly developing technology whose price/power ratio is still falling. Similarly, in an effort to improve long-term reliability and lower costs, many OEMs are investing in higher-performance power transformer systems designed specifically for the environmental and operational challenges of offshore turbines.

Bearings play an important role in several key wind turbine systems, including several locations in the drive train (e.g., main shaft, gearbox and generator) and in pitch and yaw systems, which allow for directional control of the blades and the nacelle, respectively. Bearings are subject to a high level of stress over the lifetime of the turbine, and therefore represent significant risk in the case of unexpected failure. This is particularly the case for offshore wind turbines, where downtime and difficult access for maintenance can have expensive repercussions.

While supply capabilities for standard-sized bearings have increased sufficiently over the past several years to meet market demand, fewer manufacturers have been willing to pursue the market for larger bearings for several reasons. First, only a limited number of suppliers in the U.S. and Europe can provide steel at the quality levels preferred by bearing manufacturers. Similarly, quality manufacturing and reliable products supersede cost concerns for the offshore bearing market, where a failure can result in a significant hit to a project’s levelized cost.

Manufacture of bearings for larger offshore turbines requires dedicated investment in new machinery (with long lead times). The offshore market represents the primary source of demand for larger bearings, creating a risk of inconsistent demand. Limited transferability and large upfront investments for manufacturing larger bearings creates risk.

The technical machinery and equipment used to produce and test extra-large bearings requires a significant investment, which poses a potential risk when demand relies almost entirely on the offshore wind market. Current policy uncertainty in the U.S. may discourage the level of investment that would be required to build such a facility in the near term, particularly when local content provisions in India and Brazil are attracting interest from the bearings industry. In addition, supply constraints for specialty steels and large castings and forgings could add greater uncertainty to the mix.

Pitch and Yaw Systems

Pitch systems control the blades on a wind turbine to help maximize energy production under various wind speeds or to turn the blades out of the wind (feather the blades) to avoid damage during adverse conditions. Yaw systems orient the entire nacelle in the direction of the wind and work in concert with the yaw bearing between the tower and the turbine's nacelle. Both systems use either an electric or hydraulic system based primarily on turbine OEMs' historical preferences. For offshore turbine pitch systems, the current market share is 86% hydraulic (primarily Vestas and Siemens) and 14% electric, though electric systems' share of the total is expected to increase slowly based on recent trends (BTM 2012). Electric pitch and yaw systems' main subcomponents comprise electric motors, gears, sensor equipment, and control arms, while hydraulic systems consist primarily of hydraulic cylinders, rods, pumps, filters, and sensor equipment. While each turbine's pitch system includes three sets of primary components (i.e., motors or cylinders), yaw systems for multi-MW offshore turbines may require up to eight individual motors per turbine.

Castings and Forgings

The main cast iron components in a wind turbine comprise the nacelle main frame and the rotor hub, followed by housings for the gearbox and bearings. The main forged item in a wind turbine is the main shaft; however, several other forged items contribute to various sub-assemblies, including gear wheels and rims in the gearbox; outer and inner rings for large bearings; tower flanges; and other smaller components. In both cases, OEMs have high quality demands for the materials used, as the costs of downtime and maintenance for offshore turbines represent a significant risk.

Turbine blades constitute a key component of wind development and the supply chain due to their sheer size and technological attributes. They dictate the energy capture of the turbine and can define the logistical size constraints for transportation. With blade lengths for next-generation offshore turbines anticipated to exceed 60 or even 80 meters, transportation logistics will likely necessitate that those blades be manufactured in coastal locations near the point of final installation.

Nearer-term opportunities may exist for facilities already located on coasts; however, U.S. offshore wind potential tends to be located far from inland land-based project sites. Length limits for ground transportation fall between 60 and 75 meters.
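A simple screening check makes the transport constraint concrete. The 60-75 m limits come from the paragraph above; the example blade lengths are drawn from elsewhere in this post (80 m Vestas V164 blades, 83.5 m Samsung blades) and the function name is just for illustration.

# Screening check based on the ground-transport length limits quoted above
# (roughly 60-75 m, depending on route). Not a transport feasibility study.

GROUND_LIMIT_LOW_M = 60
GROUND_LIMIT_HIGH_M = 75

def transport_outlook(blade_length_m: float) -> str:
    if blade_length_m <= GROUND_LIMIT_LOW_M:
        return "likely movable over land"
    if blade_length_m <= GROUND_LIMIT_HIGH_M:
        return "marginal; depends on route and permits"
    return "effectively requires coastal manufacturing"

for length in (49, 62, 75, 80, 83.5):
    print(f"{length:5.1f} m blade: {transport_outlook(length)}")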

No blades for offshore turbines are currently manufactured in the U.S., though many of the companies producing blades for land-based applications have the experience and intellectual know-how to expand into the U.S. offshore market. Presumably, some of these suppliers may shift production facilities from central locations currently serving the U.S. land-based demand to coastal locations that can accommodate the logistics of larger blade sizes for offshore machines. Once relocated, however, this manufacturing capacity will be less likely to continue serving the land-based market due to the added overland distance to those projects.

Key Blade Materials: Resin and Reinforcement Fibers Epoxy resins are the basic material for most wind turbine blades globally. Some blade manufacturers, including leading supplier LM Wind Power, use unsaturated polyester resins (UPR), a less expensive alternative to epoxy. In addition to epoxy resin or UPR, wind turbine blades require significant quantities of reinforcement fibers to provide the strength necessary to withstand heavy wind loads. While glass fiber remains the dominant source of reinforcement fiber in the blade market, carbon fiber will likely play an increasing role in longer (>60 meter) offshore turbine blades as manufacturers seek to increase stiffness-to-weight ratios. However, based on the slow growth of the offshore market and pressures to keep capital costs low (carbon fiber is more expensive), glass fibers will likely continue to dominate the market for several years.

While land-based wind turbine towers are relatively low-tech components, towers for offshore wind turbines generally come with additional quality requirements and risk potential. For example, offshore towers must have an effective anti-corrosion coating to protect the tower against extreme weather conditions and an effective repair system in case of damage during transport. Turbine OEMs, therefore, are more selective in the qualification and selection of firms to supply their projects.

As with the larger blades expected for next-generation turbines, the logistics for offshore towers are more critical in terms of location, often requiring the manufacturing facility to be in a coastal area close to the project.

Tables 2-35, 2-37, and 2-41 show the material requirements for 2,125 offshore wind turbines: up to 494,700 tons of primary steel, up to 12,118 tons of secondary steel, up to 509,500 tons of concrete, 377 miles of inter-array cable, 143 miles of export cable, and 8 substations.
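Dividing those totals by the number of turbines gives a feel for the per-turbine material burden. This is my own simple averaging of the upper-bound figures above, not an engineering take-off.

# Per-turbine averages implied by the "up to" totals quoted above
# for 2,125 offshore wind turbines.

n_turbines = 2_125
totals = {
    "primary steel (tons)":        494_700,
    "secondary steel (tons)":       12_118,
    "concrete (tons)":             509_500,
    "inter-array cable (miles)":       377,
}

for material, total in totals.items():
    print(f"{material:28s} ~{total / n_turbines:8.2f} per turbine")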

Offshore Subsea Cables. Offshore wind plants use two kinds of cables: inter-array cables and export cables. Inter-array cables (rated up to 35 kilovolts [kV]) link individual turbines and connect the turbines to the plant's substation. Export cables (rated up to 600 kV) connect the substation to the land-based grid and are much longer and heavier than inter-array cables. Thus far, most offshore wind projects have relied on alternating current (AC) cables; however, as projects move further from shore, increased distances and potential line losses are encouraging the use of high-voltage direct current (HVDC) technology. In general, if an offshore wind farm is more than 80 to 100 km (43-54 nautical miles) from its point of interconnection, HVDC cables are preferred.
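The rule of thumb above can be written as a tiny decision sketch. The 90 km cutoff used here is simply the midpoint of the 80-100 km range quoted above, chosen for illustration rather than any engineering criterion.

# Minimal decision sketch for the AC-vs-HVDC rule of thumb quoted above.

def export_cable_choice(distance_km: float, threshold_km: float = 90.0) -> str:
    return "HVDC preferred" if distance_km > threshold_km else "AC typically adequate"

for d in (20, 60, 90, 120, 160):
    print(f"{d:4d} km to interconnection: {export_cable_choice(d)}")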

Offshore Substations. The substation collects the power generated by a plant's turbines and, through its power transformer, converts it for export over subsea cables to a land-based transformer and the electric grid.

As of 2011, 53 vessels were available globally to carry out offshore wind installation, with 42 based in European countries and the remaining 11 based in China (BTM 2011). While many of these vessels can serve multiple purposes, several companies have invested in vessels customized for wind installation.

Another potential bottleneck in the vessel supply chain lies with the availability of cable installation vessels. Currently, only a few fully equipped and highly specialized cable installation vessels exist that can lay offshore wind power cables. Some investors remain hesitant to build additional vessels without strong policy support and commitment from the relevant governments. Cable laying represents one of the highest-risk aspects of offshore wind project construction, comprising approximately 80% of project insurance claims stemming from damage during or after installation (BTM 2011). Subsea cables are manufactured and loaded directly onto cable installation ships adjacent to foreign coastal manufacturing facilities. These cable ships can then transport and lay the cable off the U.S. shore without first entering a U.S. port, thus avoiding the Jones Act constraint.

There are a few minimum requirements likely to be associated with any port that might serve the offshore wind industry. These minimum requirements are largely determined by components' current or anticipated future size, which generally precludes overland transport (particularly for fully or partially assembled pieces of equipment) and necessitates access for large vessels. Such requirements are also a function of land available for both staging and storage of components such as nacelles, rotors, and foundations. Table 2-44 lists minimum port requirements.

Vestas and EWEA also specify the need for 11,000 to 16,000 ft2 of available warehousing space, and Tetra Tech highlights substantial air draft or vertical clearance and horizontal clearance in excess of 130 feet as minimum requirements (Tetra Tech 2010). All three groups listed transportation connectivity via rail and a nearby highway as important for smaller inputs. EWEA additionally lists a heliport as desirable.

Existing ports do not commonly have all of these features. Some, but not all, of these requirements are also necessary for receiving container ships, the predominant method of shipping cargo. Given differences in available space and existing infrastructure among ports, different ports could conceivably host the manufacture and staging of different components based on their individual characteristics. Table 2-45 summarizes such varying port requirements by component type.

Table 2-45. Selected Port Requirements by Component

Component | Wharf Length | Load Capacity (Wharf, Transition Area) | Storage Area (Per Unit) | Mobile Crane Load Outs
Blades | 600 ft | 200 lbs/ft2 | 6.2 acres | 100 tons
Nacelle | 600 ft | 2,000 lbs/ft2 | 1.2 acres | 400 tons
Tower | 600 ft | 200 lbs/ft2 | 12.4 acres | 550 tons
Monopile | 600 ft | 4,000 lbs/ft2 | 0.2 acres | 1,100 tons
Jacket | 300 ft | 2,000 lbs/ft2 | 0.3 acres | 800 tons
Source: Blatiak, Garrett, & O'Neill (2012)
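One way to use Table 2-45 is as a screening checklist against an individual port. In the Python sketch below, only the per-component requirements come from the table; the example port characteristics are invented purely for illustration.

# Hypothetical screening of a port against the Table 2-45 requirements.

REQUIREMENTS = {
    # component: (wharf length ft, load capacity lbs/ft2, storage acres/unit, crane load-out tons)
    "Blades":   (600,   200,  6.2,   100),
    "Nacelle":  (600, 2_000,  1.2,   400),
    "Tower":    (600,   200, 12.4,   550),
    "Monopile": (600, 4_000,  0.2, 1_100),
    "Jacket":   (300, 2_000,  0.3,   800),
}

# Example port (assumed values, not from the report).
port = {"wharf_ft": 600, "load_lbs_ft2": 2_500, "storage_acres": 10, "crane_tons": 900}

for component, (wharf, load, storage, crane) in REQUIREMENTS.items():
    ok = (port["wharf_ft"] >= wharf and port["load_lbs_ft2"] >= load
          and port["storage_acres"] >= storage and port["crane_tons"] >= crane)
    print(f"{component:9s} {'supported' if ok else 'not supported'}")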

Port development decisions will more likely be a function of the perceived opportunity cost for a given port or port authority; proximity to anticipated projects; and the ability to assemble the collective public and private investment necessary to advance port development. It is plausible that a high-volume container port may see more value in continuing to maximize container volume rather than diversifying into offshore wind.

In August 2011, a wind turbine blade suffered $275,000 in damage when the semi-trailer truck transporting it crashed into another vehicle at a busy intersection in Dubuque, Iowa.

Figure 4-2. Blade Damage During Land Transport

http://www.thonline.com/news/breaking/article_fe7deb6c-c8d5-11e0-9ab6-001a4bcf6878.html

Rare Earth Materials

Within two decades, China has become the world's largest rare earth element market, home to approximately 97% of the world's resource. Rare earth metals are in high demand because they are seen as seminal to the development of advanced high-end clean technologies, as well as the defense and refinery industries. In 2010, about 4,100 MW of wind turbine capacity required permanent magnets, and the need for rare earth metals is anticipated to grow significantly with the expansion of the direct-drive PMG market. In early 2012, Molycorp, the owner of the largest rare earth deposit in the U.S., reopened the Mountain Pass mine in California and started production of rare earths in February 2012. In March 2012, Molycorp made a downstream vertical integration play by acquiring processing company Neo Material, which has plants in Asia that serve Chinese and Japanese markets. The company expects to have the capacity to produce 40,000 metric tons of rare earth oxide (REO) equivalent annually from its Mountain Pass mine by mid-2013. Roughly 15-20% of this total, or 6,000-8,000 metric tons, is expected to be neodymium.

Equipment size. Offshore turbines are typically larger than land-based turbines and are growing even larger. Suppliers must have manufacturing equipment large enough to produce these large components. This can often prove difficult as some castings and forgings can weigh over 10 tons.

Table 4-3. Comparison of Major Land-based and Offshore Turbine Component Weights (tons)

Component | Land-based (Siemens 2.3-101) | Offshore (RePower 6M: 6.15 MW)
Rotor | 62 | 156
Nacelle | 82 | 316
Tower | 162 | 285
Total Assembly | 306 | 757
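The weight ratios implied by Table 4-3 are worth seeing explicitly, since they are what drive vessel, crane, and port requirements. This is my own arithmetic on the table values only.

# Ratio of offshore to land-based component weights from Table 4-3.

weights_tons = {
    #                 land-based  offshore
    "Rotor":              (62,      156),
    "Nacelle":            (82,      316),
    "Tower":             (162,      285),
    "Total assembly":    (306,      757),
}

for component, (land, offshore) in weights_tons.items():
    print(f"{component:15s} {offshore / land:4.1f}x heavier offshore")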

Logistics challenges

As mentioned before, growth of offshore wind turbines and their components is anticipated to make it increasingly difficult, if not impossible, to move turbine components over land. Coastal manufacturing for blades and nacelle assembly as well as tower, foundation, and substructure fabrication may be an effective industry requirement in the future. Under ideal circumstances, component storage and staging activities would occur alongside manufacturing and fabrication at an integrated manufacturing and port facility. However, this will require very large swaths of coastal land.

Siemens. (2012). “Record-Size Rotor Blades Transported to Destination” Press Release. http://www.siemens.com/innovation/en/news/2012/e_inno_1226_2.htm. Accessed October 17, 2012.

Statoil. (2011). "Hywind – The World's First Full-Scale Floating Wind Turbine." http://www.statoil.com/en/TechnologyInnovation/NewEnergy/RenewablePowerProduction/Offshore/Hywind/Pages/HywindPuttingWindPowerToTheTest.aspx. Accessed March 1,

Principle Power (2011). “First WindFloat Successfully Deployed Offshore.” Press Release. http://www.principlepowerinc.com/news/press_PPI_WF_deployment.html.

D-1. Technology Profile Key for Deployment Scenarios

Metric | Today's Standard Technology | Next-Generation Technology
Nameplate Capacity (MW) | 3-6 | 5-7
Hub Height (meters) | 70-90 | >90
Rotor Diameter (meters) | 90-130 | 120-170
Water Depth (meters) | 10-40 | 10-50
Monopile Foundations | yes | no
Jacket Foundations | yes | yes
Tripod Foundations | yes | yes
Gravity Base Foundations | yes | yes
Proximity to Staging Area** | < 100 miles | > 100 miles
Proximity to Interconnection** | < 50 miles | > 50 miles
Proximity to Service Port** | < 30 miles | > 30 miles
Project Size (MW) | 200-300 | 500-1,000
Max Nacelle Weight*** | 215 metric tons (5 MW) | 410 metric tons (7 MW)
Max Nacelle Footprint | |

*Proof of commercial viability (one step from prototype testing)

D-6. Detailed Estimate of Turbine Capital Costs by Component for the U.K. Market (2010 £)

Table D-7. Per-MW Turbine Component Costs for Hypothetical U.S. Offshore Wind Project

A more comprehensive description of technical port requirements can be found in the DOE companion report developed by Blatiak, Garrett, & O’Neill (2012).

Table 2-44. Suggested Minimum Port Requirements for Serving the Offshore Wind Industry

Association | Draft or Harbor Depth | Wharf/Quay Length | Staging and Storage | Load Capacity
EWEA | 20 feet (draft) | 500 feet | 15 acres | 600 lbs/ft2
Vestas | 20 feet (draft) | 650 feet | 9 acres | 5,000 lbs/ft2
Tetra Tech | 24 feet (depth) | 450 feet | 10 acres | 2,000 lbs/ft2
Sources: EWEA (2009), Vestas Offshore A/S (2010 and 2011), Tetra Tech (2010)

Table D-8. Cost Assumptions Used to Estimate Component and Material Market Values

Navigant. 2014. Offshore Wind Market & Economic Analysis Underlying Data. U.S. Department of Energy.

The U.S. offshore wind industry is transitioning from early development to demonstration of commercial viability. While there are no commercial-scale projects in operation, there are 14 U.S. projects in advanced development, defined as having either been awarded a lease, conducted baseline or geophysical studies, or obtained a power purchase agreement (PPA).

Globally, offshore wind projects continue to trend farther from shore into increasingly deeper waters; parallel increases in turbine sizes and hub heights are contributing to higher reported capacity factors. While the trend toward greater distances helps reduce visual impacts and public opposition to offshore wind, it also requires advancements in foundation technologies and affects the logistics and costs of installation and maintenance.

The average turbine size for advanced-stage projects in the United States is expected to range between 5.0 and 5.3 MW.

While much of the focus in recent years has been on alternatives to the conventional monopile approach (due to various limitations), the advent of the extra-large (XL) monopile (suitable to a 45 m water depth) may have somewhat lessened the impetus for significant change.

U.S. offshore wind development faces significant challenges: (1) the cost competitiveness of offshore wind energy; (2) a lack of infrastructure such as offshore transmission and purpose-built ports and vessels; and (3) uncertain and lengthy regulatory processes.

Key federal policies expired for projects that did not start construction by year-end 2013: the Renewable Electricity Production Tax Credit (PTC), the Business Energy Investment Tax Credit (ITC), and the 50 percent first-year bonus depreciation allowance.

Expected installed costs for a 500 MW farm are $2.86 Billion or $5,700/kW.

In terms of coal, Navigant analysis reveals executed and planned coal plant retirements through 2020 of nearly 40 GW.

These deeper waters and longer distances present new challenges and opportunities for foundations, drivetrains, installation logistics, and operations and maintenance (O&M). Time will tell how well initial U.S. projects align with those global trends in light of region-specific wind resource and seabed conditions.

There are approximately 7 gigawatts (GW) of offshore wind installations worldwide.

Of the new capacity installed in 2013, most is attributable to four countries – Belgium (192 MW of new capacity), Denmark (400 MW), Germany (230 MW), and the United Kingdom (812 MW) – with the U.K. comprising 47 percent of 2013 additions globally.

Uncertain political support for offshore wind in European nations and the challenges of bringing down costs mean that the pace of capacity growth may level off in the next two years (Global Wind Energy Council [GWEC] 2014).

Table 1-1. Summary of Cumulative Installed Global Offshore Capacity through

WindFloat Pacific (WFP)

Seattle, Washington-based Principle Power has proposed to install five semi-submersible, floating foundations outfitted with Siemens 6 MW, direct-drive offshore wind turbines. The project will be sited 15 miles from Coos Bay, Oregon in approximately 350 meters of water. Principle Power maintains that the WindFloat design will be more cost-effective than traditional offshore wind foundations because the entire turbine and floating foundation will be built on shore and installed with conventional tug vessels. The innovations associated with the WindFloat design include the following:

  • Static and dynamic stability provide pitch performance low enough to use conventional (i.e., fixed-foundation), commercial offshore turbines
  • The design and size allow for onshore assembly and commissioning
  • The shallow draft of the semi-submersible foundation allows the assemblies to be sited, transported (via wet tow), and deployed in a wide range of water depths

WindFloat's semi-submersible foundation includes patented water entrapment (heave) plates at the base of each of three vertical columns. A closed-loop, active water ballast system moves water between the columns in the semi-submersible foundation in response to changes in wind force and direction. This allows the mast to remain vertical, thereby optimizing electricity production.

The long-term capital cost increase has been a function of several trends: a movement toward deeper-water sites located farther offshore; increased siting complexity; and higher contingency reserves that result from more limited operational reserves and greater uncertainty when working in the offshore environment (Chapman et al. 2012).

Depth and Distance from Shore

The global trend toward deeper water sites and greater distances from shore continued in 2013, both for completed projects and those newly under construction. With this trend comes increased costs tied to more complex installation in deeper waters, longer export cables (and subsequent line losses), and greater distances for installation and ongoing O&M vessels to travel.

For commercial-scale projects with capacity additions in 2013, the average water depth was about 15 meters, and the average distance from shore was 13 miles.

The Vestas V164 8 MW prototype turbine, installed in early 2014, has a rotor diameter of 164 meters, greater than any other turbine currently slated for construction through 2015. Its 80-meter blades use a design that abandons the company's conventional central spar approach (wherein a central "backbone" runs the length of the blade and absorbs most of the structural loads); instead, the blades incorporate a "structural spar" design.

Samsung S7.0-171 prototype turbine was commissioned in June 2014 at the Fife Energy Park in Scotland (PE 2013). The blade, developed by SSP Technology, also uses carbon and holds the current record for the longest blade ever produced at 83.5 meters (171-meter rotor diameter). The blade is part of Samsung’s 7-MW turbine, which is expected to be deployed in 2015 in South Korea’s first offshore wind plant (CompositeWorld 2013).

Monopiles have historically dominated the offshore wind market. In the U.S., the Cape Wind project has committed to using monopiles to support its 3.6 MW turbines. Despite their popularity and familiarity, these large steel pipes (with diameters between 3 and 7 meters) have recently been challenged as increasing water depths and larger turbine sizes pose challenges related to installation logistics, turbine design, and material costs.

Despite the potential benefits of these XL monopiles, there may still be challenges to overcome. For example, as they continue to increase in size, these larger foundations may run into limits on the vessels that can handle their greater size, weight, and diameter, which already exceed the capabilities of available piling hammers (IHC Merwede 2012, A2SEA 2014).

Multi-piles Lead to Broader Diversity in Design Approaches

For sites in deeper water (from 25 to 60 meters), or with 5 MW and larger turbines, developers have historically shown a preference for multi-pile designs (e.g., jackets and tripods). Jacket structures derive from the common fixed-bottom offshore oil rig design, relying on a three- or four-sided lattice-framed structure that is “pinned” to the seabed using four smaller pilings, with one in each corner of the structure (EWEA 2011; Chapman et al. 2012). The tripod structure utilizes a three-legged structure assembled from steel tubing with a central shaft that consists of the transition piece and the turbine tower (EWEA 2011). Like jackets, the tripod is also pinned to the seabed with smaller pilings.

It is likely that multi-pile substructures will continue to gain market acceptance, especially in water depths greater than 30 meters and at sites with challenging subsea soil conditions.
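Pulling the depth statements in this post together (conventional monopiles in shallower water, XL monopiles rated to roughly 45 m, jackets and tripods preferred from about 25 to 60 m, and floating platforms for deeper sites such as the 350 m WindFloat location), here is a rough rules-of-thumb sketch. Real siting decisions also depend on soil conditions, turbine size, and cost, so the cutoffs below are illustrative only.

# Rough foundation-selection rules of thumb gathered from statements in this
# post; depth cutoffs are approximate and for illustration only.

def candidate_foundations(depth_m: float) -> list[str]:
    candidates = []
    if depth_m <= 30:
        candidates.append("conventional monopile")
    if depth_m <= 45:
        candidates.append("XL monopile")
    if 25 <= depth_m <= 60:
        candidates += ["jacket", "tripod"]
    if depth_m > 60:
        candidates.append("floating platform")
    return candidates

for depth in (15, 35, 50, 120, 350):
    print(f"{depth:4d} m water depth: {', '.join(candidate_foundations(depth))}")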

Recent experience suggests that conventional gravity-base designs may encounter difficulties in water depths greater than 15 meters due to several key challenges:

  1. long fabrication durations to allow for curing of concrete;
  2. high dredging requirements to achieve precise seabed preparation;
  3. reliance on expensive heavy-lift vessels; and
  4. the installation schedules’ high sensitivity to weather conditions.

Shift to HVDC Transmission Lines

As projects have moved farther from shore, industry interest in HVDC export cables has increased because they incur lower line losses than conventional HVAC lines. Various complications, however, have slowed the anticipated shift to HVDC over the past few years. For example, Siemens has suffered significant write-offs (totaling €1.1 billion since 2011) on over-budget HVDC transmission projects intended to link offshore wind farms in the North Sea to the land-based grid (Webb 2014).

Notably, the AC-to-DC converter stations for these projects are enormous, expensive, and present new logistical challenges for their construction and installation. In June 2014, for example, Drydock World announced the completion of the DolWin beta HVDC converter platform, one of two major components of TenneT's 900-MW DC offshore grid connection in the North Sea. The structure, an adaptation of semi-submersible offshore oil and gas rigs, weighs approximately 23,000 metric tonnes. The top-side equipment alone weighed 10,000 tonnes, and its installation onto the substructure set a new record for heavy lifts. From its construction port in Dubai, the converter station will be loaded onto a heavy-lift vessel for transportation to its commissioning port in Norway, after which it will be towed to the project site (Marine Log 2014). In response to these recent cost overruns and the logistical challenges presented by conversion to HVDC, some developers are opting to reduce risk by instead running increasingly long distances with AC export cables (Simon 2014).

In the U.S., the two most advanced projects, which are relatively near shore compared to the larger European projects, will rely on conventional AC transmission. Deepwater's Block Island project will use a 34.5-kV AC export cable, while Cape Wind plans to use a 115-kV AC export cable (Tetra Tech 2012; DOE 2012a).

Developers and contractors have been working to create solutions to the limited availability of vessels, which could limit the growth rate of the U.S. offshore wind market. The offshore wind project life cycle includes four general phases: pre-construction, construction, project O&M, and decommissioning. Each of these phases comprises various types of services, each typically requiring one or more unique types of vessel. Recent developments in North America have focused primarily on vessels used during construction and O&M. As global demand for vessels to serve the offshore wind market has increased, vessel suppliers and construction teams have sought to reduce the time required for installation and for transferring foundations, towers, turbines, and blades to sites farther from shore. In particular, newer jack-up vessels are demonstrating several key trends, including the following:

  • Increasing deck space to facilitate storage of more and larger turbine components per trip
  • Greater crane capacities (i.e., lifting capacity typically greater than 1,000 metric tonnes and hook heights in excess of 105 meters) to lift increasingly large turbine and substructure components
  • Increasingly advanced dynamic positioning (DP2 and DP3) systems to increase operational efficiency and safety
  • Longer jack-up legs to enable lifting operations in deeper waters
  • Greater ability to continue operations in increasingly severe sea states (i.e., wave height limit of at least two meters) to minimize construction downtime

While crane lifting capacity continues to increase, the maximum lifting height appears to be a new key limitation in selecting the construction vessel, as the trend toward larger rotors and taller towers also continues (Hashem 2014). In addition, the impact of moving to XL monopiles is not yet fully understood by the vessel industry; however, there are a few existing vessels capable of lifting these extra-large monopiles’ extreme weights.

The full spectrum of vessels that may be needed at various points in the offshore wind life cycle is discussed in the previous iteration of this annual market assessment, published in October 2013.

Note: "Platform jacket" is defined as "a single physical component and includes any type of offshore exploration, development, or production structure or component thereof, including platform jackets, tension leg or SPAR platform superstructures (including the deck, drilling rig and support utilities, and supporting structure), hull (including vertical legs and connecting pontoons or vertical cylinder), tower and base sections of a platform jacket, jacket structures, and deck modules (known as 'topsides')."

Other strategies being pursued include bottom-fixed foundations that are floating or semi-floating during transit to the installation site. For example, Freshwater Wind's Shallow Water Wind Optimization for the proposed Great Lakes project relies on semi-floating, gravity-based foundation technology to eliminate the need for installation vessels during foundation installation. Note, however, that these projects would still require "traditional" jack-up vessels to install the turbines.

A thriving U.S. offshore wind market will likely require the development of a more robust domestic fleet.

However, a general lack of O&M data for the still relatively young offshore wind industry (most turbines are still under warranty) makes it difficult to draw broad conclusions about the expected long-term costs and trends of O&M for offshore wind farms.

Table 3-2. 2014 Detailed Cost Breakdown (2011$)

Equipment Costs
  Turbine Costs: $917,500,000
  Foundation & Substructure: $206,545,000
  Collection System: $78,490,000
  HV Cable, Converter, & Substations: $349,109,000
Labor Costs
  Foundation & Substructure Installation Labor: $309,828,000
  Project Management (developer/owner's management costs): $8,500,00
Development Costs
  Insurance During Construction: $67,000,000
  Development Services (Engineering, Legal, PR, Permitting): $28,900,000
  Ports & Staging: $45,000,000
  Erection/Installation (equipment services only): $301,337,000
  Air & Marine Transportation: $79,890,000
Other Costs
  Decommissioning Bonding: $100,000,000
  Interest During Construction: $165,843,000
  Due Diligence, Reserve Accounts, Bank Fees: $163,331,000
  Miscellaneous: $17,394,000
Total Construction Cost: $2,860,701,000
Source: Navigant analysis
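As a cross-check between two figures in this post, the turbine line item above can be compared with the per-MW estimate in Table D-7. The sketch assumes the 500 MW project size quoted earlier applies to this cost breakdown.

# Cross-check of the Table 3-2 turbine line item ($917.5 million, assumed
# 500 MW project) against the Table D-7 figure of ~$11 million per 6 MW turbine.

project_capacity_kw = 500_000
turbine_line_item_usd = 917_500_000

per_kw_from_table_3_2 = turbine_line_item_usd / project_capacity_kw
per_kw_from_table_d_7 = 11_000_000 / (6 * 1_000)

print(f"Table 3-2 implies ~${per_kw_from_table_3_2:,.0f}/kW for the turbine")
print(f"Table D-7 implies ~${per_kw_from_table_d_7:,.0f}/kW for the turbine")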

An increased pro-nuclear attitude in the United States, potentially as a way to meet CO2-reduction targets, could reduce offshore wind activity in the United States if the levelized cost of new nuclear plants were to be more attractive than that of offshore wind. In early 2012, the United States Nuclear Regulatory Commission approved construction licenses for four new nuclear reactors, two in South Carolina and two in Georgia. A fifth reactor is under construction in Tennessee. These would be the first nuclear reactors built from scratch in the last 30 years. If these reactors are successfully completed and become operational, their impact on the future of offshore wind in the United States is unclear. There is also uncertainty around the expected LCOE from these new nuclear plants, as the nuclear industry has not had a strong track record of meeting projected costs and schedules.

Figure 4-2. U.S. Power Generation Capacity Additions by Fuel Type

In recent years, some electric utilities in the U.S. have announced plans to retire coal-fired power plants or to convert them to natural gas. There are multiple factors involved in these retirement decisions. Many of the U.S.'s coal-fired power plants are over 50 years old and expensive to continue to operate and maintain. Complying with environmental requirements, such as the U.S. Environmental Protection Agency's (EPA's) mercury and air toxics standards, can also be costly. Additionally, the rule proposed by the EPA in June 2014 to require a 30% reduction in CO2 emissions from existing power plants from 2005 emission levels by 2030 will likely affect retirement plans for existing coal generators. Navigant analysis reveals actual and announced retirements of nearly 40 GW through 2020. There is significant uncertainty in the projection of planned retirements before 2030 due to the Section 111 regulations proposed under the Obama Administration's Climate Action Plan. While the reduction in generation capacity created through coal plant retirements will certainly not be filled entirely by a variable-output resource such as wind, continued coal plant retirements could play a role in increasing the demand for offshore wind plants in the U.S.

U.S. solar installations reached record levels in 2013, accounting for nearly 30 percent of all new electricity generating capacity installations (SEIA 2014). U.S. onshore wind installations fell during 2013 due to the uncertainties in federal tax incentives at the end of 2012.

Wind turbine towers require significant quantities of steel, while foundations may require concrete and/or steel. Since towers represent about 7-8 percent of the cost of an offshore wind farm and the foundations and substructures represent about 22-25 percent (Navigant 2012), the level of construction activity in the United States outside of the offshore wind sector could affect the price of offshore wind power. Figure 4-4 shows the evolution of commodity prices since 2002, a trend of generally increasing (and, in the case of steel, volatile) prices.

Manufacturing

Change in Manufacturing of Products That Utilize Similar Types of Raw Materials as Offshore Wind

The manufacturing sector uses many of the same raw materials as offshore wind. The manufacture of automobiles, heavy equipment, and appliances, for example, requires significant amounts of steel, a material used in wind turbine towers and offshore foundations. Manufacturing sectors such as aerospace, automotive, and marine vessels use composite materials similar to those used in wind turbine blades. Finally, rare earth materials such as neodymium are used in applications such as the permanent magnets found in certain types of electric motors and electrical generators, including those in many direct-drive wind turbine generators. The DOE (DOE 2010) estimates that the supply situation for rare earth oxides of neodymium and dysprosium will be "critical" not only over the short term (2010-2015) but also over the medium term (2015-2025). The supply risk for praseodymium was characterized as "not critical." Criticality matrices from this report are shown in Figure 4-5.

A 2012 report from the Massachusetts Institute of Technology’s Materials Systems Laboratory agrees that neodymium and dysprosium will face supply challenges in the coming years (Alonso et al. 2012). If the supply situation for rare earth metals remains tight and prices rise, so could the cost of offshore wind production.

Change in Demand for Subsea Cable-Laying Vessels

The specialized vessels that are appropriate for subsea cable-laying are relatively few in supply and high in demand (Navigant Research 2012). Not only are these vessels in high demand in Europe for offshore wind projects; many of them are also used to lay subsea cable for the telecommunications industry. An increase in deployment of subsea cables by global telecommunications companies could increase the development costs of offshore wind farms.

Cost-Competitiveness of Offshore Wind Energy

Capital costs for the first generation of U.S. offshore wind projects are expected to be approximately $6,000 per installed kW, compared with approximately $1,940 per installed kW for U.S. land-based wind projects in 2012 (Wiser and Bollinger 2013). Offshore projects have higher capital costs for a number of reasons, including turbine upgrades required for operation at sea, turbine foundations, balance-of-system (BOS) infrastructure, the high cost of building at sea, and O&M warranty risk adjustments. These costs remain high because the offshore wind industry is immature and learning-curve effects have not yet been fully realized. There are also a number of one-time costs incurred with the development of an offshore wind project, such as vessels for turbine installation, port and harbor upgrades, manufacturing facilities, and workforce training. Offshore wind energy also has a higher LCOE than comparable technologies. In addition to higher capital costs, offshore wind has higher O&M costs as a result of its location at sea. Higher permitting, transmission, and grid integration costs also contribute to this higher cost of energy.
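To make the gap concrete, here is a one-step comparison using only the two per-kW figures quoted above; it is my arithmetic, not the report's.

# Quick comparison of first-generation offshore capital cost (~$6,000/kW)
# with the 2012 U.S. land-based figure (~$1,940/kW, Wiser and Bollinger 2013).

offshore_usd_per_kw = 6_000
land_based_usd_per_kw = 1_940

ratio = offshore_usd_per_kw / land_based_usd_per_kw
print(f"Offshore capital cost is roughly {ratio:.1f}x land-based "
      f"(${offshore_usd_per_kw - land_based_usd_per_kw:,}/kW higher)")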

Infrastructure Challenges

Offshore wind turbines are currently not manufactured in the United States. Domestic manufacturing needs to be in place in the United States in order for the industry to fully develop. The absence of a mature industry results in a lack of experienced labor for manufacturing, construction, and operations. Workforce training must therefore be part of the upfront costs for U.S. projects. The infrastructure required to install offshore wind farms, such as purpose-built ports and vessels, does not currently exist in the United States. There is also insufficient capability for domestic operation and maintenance.

The absence of strong demand for offshore wind in the United States makes it difficult to overcome these technical and infrastructure challenges. In order to develop the required infrastructure and technical expertise, there must first be sufficient demand for offshore wind, and that is not expected in the near term due to the high cost of offshore wind and the low cost of competing power generation resources, such as natural gas.

Regulatory Challenges

Permitting

Offshore wind projects in the United States are facing new permitting processes. After issuing the Final Rule governing offshore wind leasing on the Outer Continental Shelf (OCS) in 2009, the Minerals Management Service (MMS)—now the Bureau of Ocean Energy Management (BOEM)—staff estimated that the lease process might require three environmental impact statements (EISs) and may extend seven to nine years.

Construction and operations plans proposing the installation of renewable energy generation facilities would be subject to additional project specific environmental reviews.

A number of state and federal entities have authority over the siting, permitting, and installation of offshore wind facilities. Cognizant federal agencies include BOEM, the U.S. Army Corps of Engineers (USACE), the EPA, the U.S. Fish and Wildlife Service (FWS), the National Oceanic and Atmospheric Administration (NOAA) National Marine Fisheries Service, and others.

REFERENCES

A2SEA. (2014). Big foundations, big challenges. A2SEA News. November 11, 2013. http://a2seanews.editionmanager.com/2013/11/11/big-foundations-big-challenges/

Andresen, T. and Nicola, S. (2012). RWE Sees German Climate Goals Threatened by Grid Delays: Energy. Bloomberg Online. July 2012. http://www.bloomberg.com/news/2012-07-03/rwe-sees-germanclimate-goals-threatened-by-grid-delays-energy.html

Backwell, Ben. (2013). MHI to supply 700 7MW turbines to UK R3 projects – official. Recharge News. February 28, 2013.

Campbell, Shaun. (2013). "Setback for Deepwater Wind's Block Island project." WindPower Offshore. August 6, 2013. Accessed September 16, 2013. Available at: http://www.windpoweroffshore.com/article/1194228/setback-deepwater-winds-block-island-project.

Composite World. (2013). DIAB core going into 83.5m long wind turbine blade. June 9, 2013.

Deign J (2020) So, What Exactly Is Floating Offshore Wind? Greentechmedia.com

EWEA. (2011). UpWind – Design limits and solutions for very large wind turbines. March 2011. Accessed September 16, 2013. Available at: http://www.ewea.org/fileadmin/ewea_documents/documents/upwind/21895_UpWind_Report_low_web.pdf

Federal Maritime and Hydrographic Agency (BSH). (2013). Standard: Investigation of the Impacts of Offshore Wind Turbines on the Marine Environment (StUK4). BSH-Nr. 7003. http://www.bsh.de/en/Products/Books/Standard/7003eng.pdf

Foster, Martin. (2014). “Fukushima 7MW platform complete.” Wind Power Offshore. http://www.windpoweroffshore.com/article/1298310/fukushima-7mw-platform-complete

Grandoni D (2020) New Jersey aims to lead nation in offshore wind. So it’s building the biggest turbine port in the country. Washington Post.

Hamilton, B. (2011). "Offshore Wind O&M Costs, Trends, and Strategies." EWEA Offshore 2011. Amsterdam, The Netherlands, 7 pp.

Hashem,

LORC. (2011). The Gravity Based Structure – Weight Matters. Lindoe Offshore Renewables Center. Munkebo, DK. http://www.lorc.dk/offshore-wind/foundations/gravity-based.

OceanWind. (2010). Offshore Wind Experiences: A Bottom-Up Review of 16 Projects. 08 April 2010.

"Giant 7MW Fife offshore turbine completed." Institution of Mechanical Engineers. http://www.imeche.org/news/engineering/giant-7mw-fife-offshore-turbine-completed

Peire, K.; Nonnneman, H.; Bosschem

SSP Technology. (2013). World's longest rotor blade for wind turbine – core materials. http://www.ssptech.dk/nyheder.aspx?Action=1&NewsId=116&PID=357&World's+longest+rotor+blade+for+wind+turbine+-+core+materials

U.S. Department of Energy (DOE). (2010). Critical Materials Strategy. Washington, DC: U.S. Department of Energy.

http://energy.gov/sites/prod/files/edg/news/documents/criticalmaterialsstrategy.pdf

Webb, Alex. (2014). Siemens Inks $2.1 Billion Deal for 600-MW Dutch Offshore Wind Project. Renewable Energy World. May 15, 2014. http://www.renewableenergyworld.com/rea/news/article/2014/05/siemens-inks-2-1-billion-deal-for600-mw-dutch-offshore-wind-project

Wiersma, F., Grassin, J., Crockford, A., et al. (2011). "State of the Offshore Wind Industry in Northern Europe: Lessons Learnt in the First Decade." Ecofys Netherlands BV. http://www.northsearegion.eu/files/repository/20120320110429_PCStateoftheOffshoreWindIndustryinNorthernEurope-Lessonslearntinthefirstdecade.pdf


Book Review: The Fall of the Roman Empire: A new history of Rome and the barbarians

Preface. Most historians see the fall of the Roman Empire as due to the invasion of barbarians from the North, partly pushed toward Italy by the brutal Huns. These lands had never been conquered by Roman armies because they were too poor, too forested, and produced too little food or other goods; they were more costly to invade and occupy than any tribute or taxes that could be paid. While the Romans were preoccupied with Persia as a threat, the barbarians to the north in Germania and Gaul were progressing rapidly in iron making and agriculture, and their population was exploding.

The Army: With no courts of human rights to worry about, instructors were at liberty to beat the disobedient – to death if necessary. And if a whole cohort disobeyed orders, the punishment was decimation: every tenth man flogged to death in front of his comrades.

Because I worked for a shipping / rail / trucking company for so long, I’m fascinated by logistics, especially since we are headed back to civilizations of old when wood was the main energy and infrastructure source, as in the Roman Empire:  Logistics made it likely that Rome’s European frontiers would end up on river lines somewhere. Rivers made supplying the many troops stationed on the frontier far easier. An early imperial Roman legion of about 5,000 men required about 7,500 kilos of grain and 450 kilos of fodder per day, or 225 and 13.5 tonnes, respectively, per month.  

In light of the deplorables and xenophobia today: Killing barbarians still went down extremely well with the average Roman audience. Roman amphitheaters saw many different acts of violence, of course, from gladiatorial combat to highly inventive forms of judicial execution. A staggering 200,000 people, it has been calculated, met a violent death in the Colosseum alone, and there were similar, smaller, arenas in every major city of the Empire. Watching barbarians die was a standard part of the fun. In 306, to celebrate his pacification of the Rhine frontier, the emperor Constantine had two captured Germanic Frankish kings, Ascaricus and Merogaisus, fed to wild beasts in the arena at Trier.

Barbarians thus provided the crucial ‘other’ in the Roman self-image: the inferior society whose failings underlined and legitimized the superiorities of the dominant imperial power. Indeed, the Roman state saw itself not as just marginally better than those beyond its frontiers – but massively and absolutely superior, because its social order was divinely ordained. This ideology not only made upper-class Romans feel good about themselves, but was part and parcel of the functioning of Empire. In the fourth century, regular references to the barbarian menace made its population broadly willing to pay their taxes, despite the particular increases necessitated by the third-century crisis.

Heather maintains that corruption was not a reason for the Collapse: Ever since Gibbon, the corruption of public life has been part of the story of Roman imperial collapse.  But whether any of this played a substantial role in the collapse of the western Empire is much more doubtful.

Uncomfortable as the idea might be, power has, throughout history, had a long and distinguished association with money making: in states both big and small, both seemingly healthy and on their last legs. In most past societies and many present ones, the link between power and profit was not even remotely problematic, profit for oneself and one’s friends being seen as the whole, and perfectly legitimate, point of making the effort to get power in the first place.  The whole system of appointments to bureaucratic office within the Empire worked on personal recommendation. Since there were no competitive examinations, patronage and connection played a critical role. Nepotism was systemic, office was generally accepted as an opportunity for feathering one’s nest, and a moderate degree of peculation more or less expected.

And this was nothing new. The early Roman Empire, even during its vigorous conquest period, was as much marked as were later eras by officials (friends of higher officials) misusing – or perhaps one should just say ‘using’ – power to profit themselves and their associates.  Great magnates of public life had always been preoccupied with self-advancement, and the early Empire had been no different. Much of what we might term ‘corruption’ in the Roman system merely reflects the normal relationship between power and profit.

It is important to be realistic about the way human beings use political power, and not to attach too much importance to particular instances of corruption. Since the power-profit factor had not impeded the rise of the Empire in the first place, there is no reason to suppose that it contributed fundamentally to its collapse.

LOCALIZATION. The electric grid will go down someday, since it depends on fossil fuels (especially natural gas, which acts as the grid's storage), so it is interesting to see how that will affect communications in a post-carbon U.S. that has reverted to far fewer government services:

The problem was twofold: not only the slowness of ancient communications, but also the minimal number of lines of contact. We know that in emergencies, galloping messengers, with many changes of horse, might manage as much as 155 miles (250 km) a day. But Theophanes’ average on that journey of 3.5 weeks was the norm: in other words, about 25 miles (40 km), the speed of the oxcart. This was true of military as well as civilian operations, since all the army’s heavy equipment and baggage moved by this means too.

Running the Roman Empire with the communications then available was akin to running, in the modern day, an entity somewhere between five and ten times the size of the European Union. With places this far apart, and this far away from his capital, it is hardly surprising that an emperor would have few lines of contact with most of the localities that made up his Empire.

Primitive communication links combined with an absence of sophisticated means of processing information explain the bureaucratic limitations within which Roman emperors of all eras had to make and enforce executive decisions.

The main consequence of all this was that the state was unable to interfere systematically in the day-to-day running of its constituent communities. Not surprisingly, the range of business handled by Roman government was only a fraction of that of a modern state. Even if there had been ideologies to encourage it, Roman government lacked the bureaucratic capacity to handle broad-reaching social agendas, such as a health service or a social security budget. Proactive governmental involvement was necessarily restricted to a much narrower range of operation: maintaining an effective army, and running the tax system. And, even in the matter of taxation, the state bureaucracy’s role was limited to allocating overall sums to the cities of the Empire and monitoring the transfer of monies. The difficult work – the allocation of individual tax bills and the actual collection of money – was handled at the local level. Even here, so long as the agreed tax-take flowed out of the cities and into the central coffers, local communities were left largely autonomous and self-governing. Keep Roman central government happy, and life could often be lived as the locals wanted.

Landowning was always the way to be wealthy in the past, and it will soon be again. 

About half of what follows has to do with the eventual invasion of the barbarians and the fall of the Roman Empire, a great story that is only partly shown below; get the book for the greater amount of history left out.

Heather, Peter. 2005. The Fall of the Roman Empire: A New History of Rome and the Barbarians. Oxford University Press.

The Fall

In 357, 12,000 of the emperor Julian’s Romans routed an army of 30,000 Alamanni at the battle of Strasbourg. But within a generation, the Roman order was shaken to its core and Roman armies, as one contemporary put it, ‘vanished like shadows’.

In 376, a large band of Gothic refugees arrived at the Empire’s Danube frontier, asking for asylum. In a complete break with established Roman policy, they were allowed in, unsubdued. They revolted, and within two years had defeated and killed the emperor Valens – the one who had received them – along with two-thirds of his army, at the battle of Hadrianople.

On 4 September 476, 100 years after the Goths crossed the Danube, the last Roman emperor in the west, Romulus Augustulus, was deposed, and it was the descendants of those Gothic refugees who provided the military core of one of the main successor states to the Empire: the Visigothic kingdom. This kingdom of south-western France and Spain was only one of several, all based on the military power of immigrant outsiders, that emerged from the ruins of Roman Europe.

The fall of Rome, and with it the western half of the Empire, constitutes one of the formative revolutions of European history.

The Roman Army

If the roots of Roman imperial power lay firmly in the military might of its legions, the cornerstone of their astonishing fighting spirit can be attributed to their training. As with all elite military formations – ancient and modern – discipline was ferocious. With no courts of human rights to worry about, instructors were at liberty to beat the disobedient – to death if necessary. And if a whole cohort disobeyed orders, the punishment was decimation: every tenth man flogged to death in front of his comrades.

But you can never base morale on fear exclusively, and group cohesion was also generated by more positive methods. Recruits trained together, fought together and played together in groups of eight and shared a tent. And they were taken young: all armies prefer young men with plenty of testosterone. Legionaries were also denied regular sexual contact: wives and children might make them think twice about the risks of battle. Basic training was grueling. You had to learn to march 36 kilometers in five hours, weighed down with 25 kilos or more of armor and equipment. All the time you were being told how special you were, how special your friends were, what an elite force you belonged to.

The result of all this was groups of super-fit young men, partly brutalized and therefore brutal themselves, closely bonded with one another though denied other strong emotional ties, and taking a triumphant pride in the unit to which they belonged. This was symbolized in the religious oaths sworn to the unit standards, the legendary eagles. On successful graduation, the legionary vowed on his life and honor to follow the eagles and never desert them, even in death. Such was the determination not to let the standards fall into enemy hands that one of Cotta’s standard bearers, Lucius Petrosidius, hurled his eagle over the rampart at Tongres as he himself was struck down, rather than let it be captured. The honor of the unit, and the bond with fellow soldiers, became the most important element in a legionary’s life, sustaining a fighting spirit and willingness to obey orders which few opponents could match. To this psychological and physical conditioning, Roman training added first-rate practical skills. Roman legionaries were well armed by the standards of the day, but possessed no secret weapons. Much of their equipment was copied from their neighbors: the legionary’s distinctive and massive shield – the scutum – for instance, from the Celts. But they were carefully trained to make the best use of it. Individually, they were taught to despise wild swinging blows with the sword.

These were to be parried with the shield, and the legionary’s characteristic short sword – the gladius – brought up in a short stabbing motion into the side of an opponent exposed by his own swing. Legionaries were also equipped with defensive armor, and this, plus the weapons training, gave them a huge advantage in hand-to-hand combat. Throughout Caesar’s wars in Gaul, therefore, his troops were able to defeat much larger opposition forces.

A Roman legion also had other skills. Learning to build, and build quickly, was a standard element of training: roads, fortified camps and siege engines were but a few of the tasks undertaken. On one occasion, Caesar put a pontoon bridge across the Rhine in just ten days, and quite small contingents of Roman troops regularly controlled large territories from their own defensive ramparts.

Conquests

Caesar’s campaigns in Gaul belong to a relatively late phase in Rome’s rise to imperial domination. It had started life as one city-state among many, struggling first for survival and then for local hegemony in central and southern Italy. The city’s origins are shrouded in myth, as are the details of many of its early local wars. Something is known of these struggles from the late sixth century BC, however, and they continued periodically down to the early third century, when Rome’s dominance over its home sphere was established by the capitulation of the Etruscans in 283, and the defeat of the Greek city-states of southern Italy in 275. As winner of its local qualifiers, Rome graduated to regional matches against Carthage, the other major power of the western Mediterranean. The first of the so-called Punic wars lasted from 264 to 241 BC, and ended with the Romans turning Sicily into their first province. It took two further wars, spanning 218–202 and 149–146, for Carthaginian power finally to be crushed, but victory left Rome unchallenged in the western Mediterranean, and added North Africa and Spain to its existing power-base. At the same time, Roman power also began to spread more widely. Macedonia was conquered in 167 BC and direct rule over Greece was established from the 140s. This presaged the assertion of Roman hegemony over all the rich hinterlands of the eastern Mediterranean.

By about 100 BC, Cilicia, Phrygia, Lydia, Caria and many of the other provinces of Asia Minor were in Roman hands. Others quickly followed. The circle of Mediterranean domination was completed by Pompey’s annexation of Seleucid Syria in 64 BC, and Octavian’s of Egypt in 30 BC. The Mediterranean and its coastlands were always the main focus of Rome’s imperial ambitions, but to secure them, it soon proved necessary to move the legions north of the Alps into non-Mediterranean Europe.

The assertion of Roman dominion over the Celts of northern Italy was followed in short order by the creation in the 120s BC of the province of Gallia Narbonensis, essentially Mediterranean France. This new territory was required to defend northern Italy, since mountain ranges – even high ones – do not by themselves a frontier make, as Hannibal had proved. In the late republican and early imperial periods, roughly the 50 years either side of the birth of Christ, the Empire also continued to grow because of the desire of individual leaders for self-glorification. By this date, conquest overseas had become a recognized route to power in Rome, so that conquests continued into areas that were neither so profitable, nor strategically vital.

Thanks to Julius Caesar, all of Gaul fell under Roman sway between 58 and 50 BC. Further conquests followed under his nephew and adopted successor Octavian, better known as Augustus, the first of the Roman emperors. By 15 BC, the legionaries’ hob-nailed sandals were moving into the Upper and Middle Danube regions – roughly modern Bavaria, Austria and Hungary. Some of these lands had long belonged to Roman client kings, but now they were turned into provinces and brought under direct control.

By 9 BC all the territory as far as the River Danube had been annexed, and an arc of territory around the Alpine passes into Italy added to the Empire. For the next thirty years or so, its north European boundary moved back and forth towards the River Elbe, before the difficulty of conquering the forests of Germany led to the abandonment of ambitions east of the Rhine. In AD 43, under Claudius, the conquest of Britain was begun, and the old Thracian kingdom (the territory of modern Bulgaria and beyond) was formally incorporated into the Empire as a province some three years later. The northern frontiers finally came to rest on the lines of two great rivers – the Rhine and the Danube – and there they broadly remained for the rest of the Empire’s history.

Rome ran this territory in pretty much its entirety for a staggering 450 years, from the age of Augustus to the fifth century AD.  It is the sheer extent of this success that has always made the study of its collapse so compelling.

Why the Roman Empire collapsed

For Edward Gibbon, famously, the Christianization of the Empire was a crucial moment, its pacifist ideologies sapping the fighting spirit of the Roman army and its theology spreading a superstition which undermined the rationality of classical culture.

In the 20th century, there was a stronger tendency to concentrate on economic factors: A. H. M. Jones argued that the burden of taxation became so heavy in the fourth-century Empire that peasants were left with too little of their produce to ensure their families’ survival.

Feeding Rome

Rome numbered perhaps a million in the fourth century, whereas no more than a handful of other cities had more than 100,000 inhabitants, and most had under 10,000. Feeding this population was a constant headache, especially as large numbers still qualified for free daily donations of bread, olive oil and wine assigned to the city as the perquisites of conquest. The most striking reflection of the resulting supply problem is the still stunning remains of Rome’s two port cities: Ostia and Portus. One lot of docks was not enough to generate a sufficient through-put of food, so they built a second. The huge UNESCO-sponsored excavations at Carthage, capital of Roman North Africa, have illuminated the problem from the other end, unearthing the massive harbor installations constructed there for loading the ships with the grain destined to supply the heart of the Empire.

The increasing power of Roman Emperors

The fourth-century Senate numbered few, if any, direct descendants of the old republican senatorial families. There was a simple reason for this. Monogamous marriage tends to produce a male heir for no more than three generations at a time. In natural circumstances, about 20% of monogamous relationships will produce no children at all, and another 20% all girls.
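
A quick back-of-the-envelope check of that claim: if roughly 20% of marriages are childless and another 20% produce only daughters, each generation has only about a 60% chance of yielding a son, so an unbroken male line rarely outlasts three generations. A minimal sketch, assuming those rates and treating generations as independent:

```python
# Rough illustration only: assumes ~20% of marriages childless, ~20% produce
# only daughters (figures from the text), and independent generations.
p_son = 1.0 - 0.20 - 0.20   # ~60% chance of at least one male heir per generation

for n in range(1, 5):
    print(f"{n} generation(s): {p_son**n:.0%} chance the male line is unbroken")

# Prints roughly 60%, 36%, 22%, 13% -- which is why few senatorial families
# kept a direct male line going for more than about three generations.
```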

The imperial title was a novelty when it was claimed and defined by Caesar’s nephew Octavian under the name Augustus. Since then the office had been transformed out of all recognition. For one thing, all pretense of republicanism had vanished. Augustus had worked hard at pretending that the power structures he had created around himself did not represent the overthrow of the old Republic, and that, in a mixed constitution, the Senate continued to have important functions. But even in his lifetime the veneer had looked pretty thin, and by the fourth century no one thought of the emperor as anything other than an autocratic monarch.

Ideologies argued that legitimate rulers were divinely inspired and divinely chosen. The first among equals became a sacred ruler, communing with the Divinity, and ordinary human beings had to act with due deference. By the fourth century, standard protocols included proskynesis – throwing yourself down on the ground when introduced into the sacred imperial presence – and, for the privileged few, being allowed to kiss the hem of the emperor’s robe. And emperors, of course, were expected to play their part in the drama.  For example, Emperor Constantius II in 357:  ‘As if his neck were in a vice, he kept the gaze of his eyes straight ahead, and turned his face neither to the right nor left, nor . . . did he nod when the wheel jolted, nor was he ever seen to spit, or to wipe or rub his face or nose, or move his hands about.’ Thus, when the occasion demanded it – and on the big days, as was only fitting in a divinely chosen ruler – Constantius could behave in superhuman fashion, showing no signs whatsoever of normal human frailty.

Nor did fourth-century emperors merely look more powerful than their first-century counterparts. From Augustus onwards, emperors had enjoyed enormous authority, but the job description widened still further over the centuries.

Law-making. Up to the middle of the third century, the Roman legal system developed via a variety of channels. The Senate could make laws, and so could the emperor. However, the group primarily responsible for legal innovation had been specialist academic lawyers called jurisconsults. These were licensed by the emperor to deal with questions of interpretation, and with new issues to which they applied established legal principles. From the first to the mid-third century Roman law had developed primarily on the back of their learned opinions. By the fourth, though, the jurisconsults had been eclipsed by the emperor; doubtful legal matters were now referred to him. As a result, the emperor completely dominated the process of law-making. A similar story could be told in a number of other areas, not least in the fiscal structure, where by the fourth century the emperor’s officials played a much more direct role in the taxing of the Empire than they had in the first. Emperors had always had the potential authority to expand their range of function. By the fourth century much of that potential had become reality, in both ceremonial presentation and function.

Equally fundamental, it was now well-established custom for the office to be divided – for more than one emperor to rule at the same time. In the fourth century, this never quite formalized into a system of distinct eastern and western halves of Empire, each with its own ruler, and there were times when one man did try to rule the entire Empire on his own. The emperor Constantius II (337–61) ruled alone for part of his reign, his immediate successors Julian and Jovian did so again during 361–4, and Theodosius I once more in the early 390s. But none of these experiments in sole rule lasted very long, and for most of the fourth century the job of governing the Empire was split. Power-sharing was organized in a variety of ways. Some emperors used younger relations – sons if they had them, nephews if they didn’t – as junior but nonetheless imperial colleagues with their own courts.  For much of the fourth century there were two emperors, one usually based in the west and the other in the east, and by the fifth this had crystallized more or less into a formal system.

By the fourth century, emperors hardly visited Rome at all. While the city remained the Empire’s symbolic capital, and still received a disproportionate percentage of imperial revenues in the form of free food and other subsidies, it was no longer a political or administrative center of importance.

Within Italy, Milan, several days’ journey north of Rome, had emerged as the main seat of active imperial government. Elsewhere, at different times, Trier on the Moselle, Sirmium by the confluence of the Save and the Danube, Nicomedia in Asia Minor, and Antioch close to the Persian front, had all become important, particularly under Diocletian’s Tetrarchy when the four active emperors had had separate geographical spheres. In the fourth century, things stabilized a little: Milan and Trier in the west, together with Antioch and a new capital, Constantinople, in the east, emerged as the dominant administrative and political centers of the Empire.

One reason for emperors abandoning their original home was administrative necessity. The pressing external threats that commanded their attention were to be found east of the River Rhine, north of the River Danube, and on the Persian front between the Tigris and the Euphrates. This meant that the strategic axis of the Empire ran on a rough diagonal from the North Sea along the Rhine and Danube as far as the Iron Gates where the Danube is crossed by the Carpathian Mountains, then overland across the Balkans and Asia Minor to the city of Antioch, from which point the eastern front could be supervised. All the fourth-century capitals were situated on or close to this line of power. Rome was just too far away from it to function effectively; information flowed in too slowly, and commands sent out took too long to take effect.

In Caesar’s time, all of this wealth had been redistributed within the confines of the city of Rome in order to win friends and influence people in that crucial arena. But to follow such a strategy in the fourth century would have been political suicide. Four hundred years on from the Ides of March, patronage had to be distributed much more widely.  Rather than in the Roman Senate, the critical political audience of the fourth-century Empire was to be found in two other quarters. One of these was a long-standing player of the game of imperial politics: the army, or, rather, its officer corps.

Distributing the wealth of Empire to the Military and Bureaucracy

By the fourth century, the key figures in the military hierarchy were the senior general officers and staffs of mobile regional field armies. Broadly speaking, there was always one important mobile force covering each of the three key frontiers: one in the west (grouped on the Rhine frontier and – often – in northern Italy as well), another in the Balkans covering the Danube, and a third in northern Mesopotamia covering the east.

The other key political force in the late Empire was the imperial bureaucracy (often called palatini: from palatium, Latin for ‘palace’). Although bureaucrats did not possess the military clout available to a senior general, they controlled both finance and the processes of law-making and enforcement, and no imperial regime could function without their active participation.

What was new in the late Empire was the size of the central bureaucratic machine. As late as AD 249 there were still only 250 senior bureaucratic functionaries in the entire Empire. By the year 400, just 150 years later, there were 6,000. Most operated at the major imperial headquarters from which the key frontiers were supervised: not in Rome, therefore, but, depending on the emperor, at Trier and/or Milan for the Rhine, Sirmium or increasingly Constantinople for the Danube, and Antioch for the east. It was no longer the Senate of Rome, but the comitatensian commanders, concentrated on key frontiers, and the senior bureaucrats, gathered in the capitals from which these frontiers were administered, who settled the political fate of the Empire.
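
Those two numbers imply a remarkable rate of expansion. The post counts (250 in AD 249, 6,000 by AD 400) are from the text; the compound growth rate below is my own back-of-the-envelope calculation:

```python
# Growth of the senior imperial bureaucracy (post counts from the text).
posts_249, posts_400 = 250, 6_000
years = 400 - 249   # about 150 years

cagr = (posts_400 / posts_249) ** (1 / years) - 1
print(f"{posts_400 // posts_249}-fold increase, roughly {cagr:.1%} per year for {years} years")
# -> 24-fold increase, roughly 2.1% per year sustained for a century and a half
```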

A potent combination of logistics and politics had thus worked a fundamental change in the geography of power. Because of this, armies, emperors and bureaucrats had all emigrated out of Italy. This process also explains why, more than ever before, more than one emperor was needed. Administratively, Antioch or Constantinople was too far from the Rhine, and Trier or Milan too far from the east, for one emperor to exercise effective control over all three key frontiers. Politically, too, one center of patronage distribution was not sufficient to keep all the senior army officers and bureaucrats happy enough to prevent usurpations. Each of the three major army groups required a fair share of the spoils, paid to them in gold in relatively small annual amounts, and much larger ones on major imperial anniversaries.

The imperial bureaucracy had emerged as the new Roman aristocracy, replacing the demilitarized and marginalized Senate of Rome.

The Roman world in Caesar’s day had been physically just as large, but there had been no need for two emperors or for such a wide distribution of patronage to prevent usurpation and revolt. What, then, had changed between 50 BC and AD 369?

The transformation of life in the conquered provinces thus led provincials everywhere to remake their lives after Roman patterns and value systems. Within a century or two of conquest, the whole of the Empire had become properly Roman.

By the late Empire, the Romans of Roman Britain were not immigrants from Italy but locals who had adopted the Roman lifestyle and everything that came with it. A bunch of legionaries departing the island would not bring Roman life to an end. Britain, as everywhere else between Hadrian’s Wall and the Euphrates, was no longer Roman merely by ‘occupation’.

These astonishing developments changed what it meant to be Roman. Once the same political culture, lifestyle and value system had established themselves more or less evenly from Hadrian’s Wall to the Euphrates, then all inhabitants of this huge area were legitimately Roman. ‘Roman’, no longer a geographic epithet, was now an entirely cultural identity accessible, potentially, to all. From this followed the most significant consequence of imperial success: having acquired Romanness, the new Romans were bound to assert their right to participate in the political process, to some share in the power and benefits that a stake in such a vast state brought with it. As early as AD 69 there was a major revolt in Gaul, partly motivated by this rising sense of a new identity. The revolt was defeated, but by the fourth century the balance of power had changed. Symmachus, in Trier, was shown in no uncertain terms that ‘the better part of humankind’ comprised not just the Senate of Rome, but civilized Romans throughout the Roman world.

Germania east of the Rhine was never swallowed up by Rome’s legions in the conquest period; its inhabitants fought tooth and nail against them, and eventually had their full revenge more than four centuries later in the destruction of the Empire.

Barbarians in Germany

Germanic-speaking groups dominated most of central and northern Europe beyond Rome’s riverine frontiers in the first century AD.

Trying to reconstruct the way of life and social institutions, not to mention the political and ideological structures of this vast territory, is a hugely difficult task. The main problem is that the societies of Germanic Europe, in the Roman period, were essentially illiterate. There is a fair amount of information of various kinds to be gleaned from Greek and Latin authors, but this has two major drawbacks. First, Roman writers were chiefly interested in Germanic societies for the threat – potential or actual – that they might pose to frontier security. What you find for the most part, therefore, are isolated pieces of narrative information concerning relations between the Empire and one or more of its immediate Germanic neighbors. Groups living away from the frontier hardly ever figure, and the inner workings of Germanic society are never explored. Second, what information exists is deeply colored by the fact that, to Roman eyes, all Germani were barbarians. Barbarians were expected to behave in certain ways and embody a particular range of negative characteristics, and Roman commentators went out of their way to prove that this was so. Little survives from inside the Germanic world to correct the misapprehensions, omissions and slanted perspectives of our Roman authors.

If you had asked any fourth-century Roman where the main threat to imperial security lay, he would undoubtedly have said with Persia in the east. This was only sensible, because in about AD 300 Persia posed an incomparably greater threat to Roman order than did Germania, and no other frontier offered any real threat whatsoever.  To some extent, the lack of first-hand contemporary Germanic sources has been filled by archaeological investigation.

It could hardly be clearer that 19th-century visions of an ancient German nation were way off-target. Temporary alliances and unusually powerful kings might for a time knit together a couple or more of its many small tribes, but the inhabitants of first-century Germania had no capacity to formulate and put into practice sustained and unifying political agendas. Why did Roman expansion fail to swallow this highly fragmented world whole, as it had done Celtic Europe?

River transport of supplies to armies determined the Empire's borders

Logistics made it likely enough that Rome’s European frontiers would end up on river lines somewhere. Rivers made supplying the many troops stationed on the frontier a much more practical proposition. An early imperial Roman legion of about 5,000 men required about 7,500 kilos of grain and 450 kilos of fodder per day, or 225 and 13.5 tonnes, respectively, per month. Most Roman troops at this date were placed on or close to the frontier, and conditions in most border regions, before economic development had set in, meant that it was impossible to satisfy their needs from purely local sources. Halting the western frontier at the Rhine, rather than on any of the other north-south rivers of western or central Europe – of which there are many, notably the Elbe – had another advantage too. Using the Rhône and (via a brief portage) the Moselle, supplies could be moved by water directly from the Mediterranean to the Rhine without having to brave wilder waters.
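
The supply arithmetic is worth spelling out, since it explains why river lines mattered so much. A minimal sketch using the figures in the text (the 30-day month is my assumption):

```python
# Monthly supply needs of an early imperial legion (daily figures from the text;
# a 30-day month is assumed for the conversion).
men = 5_000
grain_kg_per_day = 7_500    # works out to ~1.5 kg of grain per man per day
fodder_kg_per_day = 450
days = 30

print(f"Grain per man per day: {grain_kg_per_day / men:.1f} kg")
print(f"Grain per month: {grain_kg_per_day * days / 1_000:.0f} tonnes")    # 225 t
print(f"Fodder per month: {fodder_kg_per_day * days / 1_000:.1f} tonnes")  # 13.5 t
```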

The real reason why the Rhine eventually emerged as the frontier lay in the interaction of the motives underlying Roman expansion and comparative levels of social and economic development within pre-Roman Europe. Roman expansion was driven by the internal power struggles of republican oligarchs such as Julius Caesar and by early emperors’ desires for glory.

Barbarian territory not worth fighting for, too little wealth

Expansion as the route to political power at Rome had built up momentum at a point when there were still numerous unconquered wealthy communities around the Mediterranean waiting to be picked off. Once annexed, they became a new source of tribute flowing into Rome, as well as making the name of the general who had organized their conquest. Over time, however, the richest prizes were scooped up until, in the early imperial era, expansion was sucking in territories that did not really produce sufficient income to justify the costs of conquest. Britain in particular, the ancient sources stress, was taken only because the emperor Claudius wanted the glory. With this in mind, the limits of Rome’s northern expansion take on a particular significance when charted against levels of economic development in non-Roman Europe.

Expansion eventually ground to a halt in an intermediate zone between two major material cultures: the so-called La Tène and Jastorf cultures.  As well as villages, La Tène Europe had also generated, before the Roman conquest, much larger settlements, sometimes identified as towns (in Latin, oppida – hence its other common name, ‘the Oppida culture’). In some La Tène areas coins were in use, and some of its populations were literate. Caesar’s Gallic War describes the complex political and religious institutions that prevailed among at least some of the La Tène groups he conquered, particularly the Aedui of south-western Gaul. All of this rested upon an economy that could produce sufficient food surpluses to support warrior, priestly and artisan classes not engaged in primary agricultural production.

Jastorf Europe, by contrast, operated at a much starker level of subsistence, with a greater emphasis on pastoral agriculture and much less of a food surplus. Its population had no coinage or literacy, and, by the birth of Christ, had produced no substantial settlements – not even villages. Also, its remains have produced almost no evidence for any kind of specialized economic activity.  The Roman advance ground to a halt not on an ethnic divide, therefore, but around a major fault-line in European socio-economic organization. What happened was that most of more advanced La Tène Europe was taken into the Empire, while most of Jastorf Europe was excluded.

This fits a much broader pattern. As has also been observed in the case of China, there is a general tendency for the frontiers of an empire based on arable agriculture to stabilize in an intermediate, part-arable part-pastoral zone, where the productive capacity of the local economy is not by itself sufficient to support the empire’s armies.

Expansionary ideologies and individual rulers’ desires for glory will carry those armies some way beyond the gain line; but, eventually, the difficulties involved in incorporating the next patch of territory, combined with the relative lack of wealth that can be extracted from it, make further conquest unattractive. A two-speed Europe is not a new phenomenon, and the Romans drew the logical conclusion. Augustus’ successor Tiberius saw that Germania just wasn’t worth conquering. The more widely dispersed populations of these still heavily forested corners of Europe could be defeated in individual engagements, but the Jastorf regions proved much more difficult to dominate strategically than the concentrated and ordered populations occupying the La Tène towns. It was the logistic convenience of the Rhine-Moselle axis and cost-benefit calculations concerning the limited economy of Jastorf Europe that combined to stop the legions in their tracks. Germania as a whole was also far too disunited politically to pose a major threat to the richer lands already conquered. It was not the military prowess of the Germani that kept them outside the Empire, but their poverty.

As the Res Gestae Divi Saporis makes clear, the rise to prominence of the Sasanian dynasty was not just a major episode in the internal history of modern Iraq and Iran. Defeat at the hands of a succession of Roman emperors in the second century was a fundamental reason behind the collapse of Arsacid hegemony, and the Sasanians were able rapidly and effectively to reverse the prevailing balance of power. Ardashir I began the process. Invading Roman Mesopotamia for the first time between 237 and 240, he captured the major cities of Carrhae, Nisibis and Hatra. Rome responded to the challenge by launching three major counterattacks during the first twenty years of the reign of Ardashir’s son Shapur I (reigned 240–72). The results were as Shapur’s inscription records. Three huge defeats were inflicted on the Romans, two emperors were dead, and a third, Valerian, captured.

The rise of a Persian superpower next door

The rise of the Sasanians destroyed what was by then more or less a century of Roman hegemony in the east. Rome’s overall strategic situation had suddenly and decisively deteriorated, for the Sasanian superpower, this new Persian dynasty, despite Rome’s best efforts in the middle of the third century, would not quickly disappear. The Sasanians marshalled the resources of Mesopotamia and the Iranian plateau much more efficiently than their Arsacid predecessors had done. Outlying principalities were welded more fully into a single political structure, while the labor of Roman prisoners was used for massive irrigation projects that would eventually generate a 50% rise in the settlement and cultivation of the lands between the Tigris and Euphrates.

The rise of a rival superpower was a huge strategic shock. It had reverberations not just for the eastern frontier regions but for the Empire as a whole. Not only did a much more powerful enemy have to be confronted on its eastern frontier, but the defense of all the other frontiers still had to be maintained. For this to be possible, a major increase in the power of the Roman military was necessary. By the fourth century, this had produced both larger and substantially reorganized armed forces.

The late third and early fourth centuries saw a major financial restructuring of the Empire. The largest item of expenditure had always been the army: even an increase in size of one-third, a conservative estimate, represented a huge increase in the total amount of revenue that needed to be raised by the Roman state.  Revenues weren’t enough to cover the entire cost of the new army, and in the late third century emperors also pursued two further strategies. First, they debased the coinage, reducing the silver content of the denarii with which the army was customarily paid.

Debasement and price-fixing were no long-term solution, since merchants just took their goods off the shelves and operated a black market instead.

In the longer term, the only remedy was to extract a greater proportion of the Empire’s wealth – its Gross Imperial Product – via taxation. This too was instigated in the depths of the third-century crisis, when, at particular moments of stress, emperors would raise extraordinary taxes, in the form of foodstuffs. This bypassed problems with the coinage, but, by the nature of its unpredictability, was very unpopular.  

The sudden appearance of a Persian superpower in the east in the third century thus generated a massive restructuring of the Roman Empire. The effects of the measures it took to counter the threat were not instant, but the restructuring eventually achieved the desired outcome. By the end of the third century, Rome had the strategic situation broadly under control: enough extra troops had been paid for to stabilize the eastern front.  

Confiscating city revenues and reforming general taxation were not easy matters. It took over 50 years from the first appearance of the aggressive Sasanian dynasty for Rome to put its financial house in order, and all this required a massive expansion of the central government machine to supervise the process. From AD 250 onwards there was a substantial increase in the number of higher-level imperial bureaucratic posts. Military and financial restructuring, therefore, had profound political consequences. The geographical shift of power away from Rome and Italy, already apparent in embryo in the second century, was greatly accelerated by the Empire’s response to the rise of Persia. And while multiple emperors co-reigning had not been unknown in the second century, in the third it was the political as well as administrative need for more than one emperor that cemented the phenomenon as a general feature of late Roman public life.

As a string of emperors was forced eastwards from the 230s onwards to deal with the Persians, the west, and particularly the Rhine frontier region, was left denuded of an official imperial presence. As a result, too many soldiers and officials dropped out of the loop of patronage distribution, generating severe and long-lasting political turmoil at the top. In what has sometimes been called the ‘military anarchy’, the fifty years following the murder of emperor Alexander Severus in AD 235 saw the reins of Roman power pass through the hands of no fewer than 20 legitimate emperors and a host of usurpers, each averaging no more than 2.5 years in office. Such a flurry of emperors is a telling indicator of an underlying structural problem. Whenever emperors concentrated, at this time, on just one part of the Empire, it generated enough disgruntled army commanders and bureaucrats elsewhere to inspire thoughts of usurpation.

At least 200,000 people violently killed in the Colosseum alone

Killing barbarians still went down extremely well with the average Roman audience. Roman amphitheaters saw many different acts of violence, of course, from gladiatorial combat to highly inventive forms of judicial execution. A staggering 200,000 people, it has been calculated, met a violent death in the Colosseum alone, and there were similar, smaller, arenas in every major city of the Empire. Watching barbarians die was a standard part of the fun. In 306, to celebrate his pacification of the Rhine frontier, the emperor Constantine had two captured Germanic Frankish kings, Ascaricus and Merogaisus, fed to wild beasts in the arena at Trier.

Barbarians thus provided the crucial ‘other’ in the Roman self-image: the inferior society whose failings underlined and legitimized the superiorities of the dominant imperial power. Indeed, the Roman state saw itself not as just marginally better than those beyond its frontiers – but massively and absolutely superior, because its social order was divinely ordained. This ideology not only made upper-class Romans feel good about themselves, but was part and parcel of the functioning of Empire. In the fourth century, regular references to the barbarian menace made its population broadly willing to pay their taxes, despite the particular increases necessitated by the third-century crisis.

Overall, then, Rome’s relations with its fourth-century European frontier clients didn’t fit entirely comfortably within the ideological boundaries set by the traditional image of the barbarian. The two parties now enjoyed reciprocal, if unequal, relations on every level. The client kingdoms traded with the Empire, provided manpower for its armies, and were regularly subject to both its diplomatic interference and its cultural influence. In return, each year they generally received aid; and, sometimes at least, were awarded a degree of respect. One striking feature is that treaties were regularly formalized according to norms of the client kingdom as well as those of the Roman state. The Germani had come a long way from the ‘other’ of Roman imaginations, even if the Empire’s political elite had to pretend to Roman taxpayers that they hadn’t.

Germanic Agriculture led to wealth and social revolution

In the last few centuries BC, an extensive (rather than intensive) type of arable agriculture had prevailed across Germanic Europe. It alternated short periods of cultivation with long periods of fallow, and required a relatively large area of land to support a given population. These early Iron Age peoples lacked techniques for maintaining the fertility of their arable fields for prolonged production, and could use them for only a few years before moving on. Ploughing generally took the form of narrow, criss-crossed scrapings, rather than the turning-over of a proper furrow so that weeds rot their nutrients back into the soil. Ash was the main fertilizer.  Then, early in the Roman period, western Germani developed entirely new techniques, using the manure from their animals together, probably, with a more sophisticated kind of two-crop rotation scheme, both to increase yields and to keep the soil producing beyond the short term.

For the first time in northern Europe, it thus became possible for human beings to live together in more or less permanent, clustered (or ‘nucleated’) settlements.   In what is now Poland, the territories of the Wielbark and Przeworsk cultures, Germanic settlements remained small, short-lived and highly dispersed in the first two centuries AD. By the fourth, however, the new techniques had taken firm hold. Settlements north of the Black Sea, in areas dominated by the Goths, could be very substantial; the largest, Budesty, covered an area of 35 hectares. Enough pieces of ploughing equipment have been found to show that populations under Gothic control were now using iron coulters and ploughshares to turn the earth properly, if not to a great depth. Recent work has shown that villages had emerged in Scandinavia too. More intensive arable agriculture was on the march, and pollen diagrams confirm that between the birth of Christ and the 5th century, cereal pollens, at the expense of grass and tree pollens, reached an unprecedented high across wide areas of what is now Poland, the Czech Republic and Germany. Large tracts of new land were being brought into cultivation and worked with greater intensity.

The main outcome was that the population of Germanic-dominated Europe increased massively over these Roman centuries. The basic constraint upon the size of any population is the availability of food. The Germanic agricultural revolution massively increased the amount available.  Iron production in Germania increased massively as well.

Economic expansion was accompanied by social revolution.  Only in the third century AD did richer burials (the grandest among them often referred to by their German term, Fürstengräber, ‘princely graves’) begin to appear. Clearly, therefore, the new wealth generated by the Germanic economic revolution did not end up evenly distributed, but was dominated by particular groups. Any new flow of wealth – such as that generated by the Industrial Revolution, in more modern times, or globalization – will always spark off intense competition for its control; and, if the amount of new wealth is large enough, those who control it will erect entirely new authority structures. In Western Europe, for instance, the Industrial Revolution eventually destroyed the social and political dominance of the landowning class who had run things since the Middle Ages, because the size of the new industrial fortunes made the amount of money you could make from farming even large areas look silly. It is hardly surprising, therefore, that Germania’s economic revolution triggered a sociopolitical one, and other archaeological finds have illuminated some of the processes involved.

The most astonishing set of finds of the third century, made at Ejsbøl Mose in southern Jutland, gives us the profile of the force to which the weapons originally belonged. In this excavation archaeologists found the weapons of a small army of two hundred men armed with spears, lances and shields (at least sixty also carried swords and knives); an unknown number of archers (675 arrowheads were excavated) and twelve to fifteen men, nine of them mounted, with more exclusive equipment. This was a highly organized force, with a clear hierarchy and a considerable degree of military specialization: a leader and his retinue, not a bunch of peasant soldiers.  If one generation of a family could use its new wealth to recruit an organized military force of the kind found at Ejsbøl Mose, and then pass on both wealth and retainers, its chances of replicating power over several generations were considerably increased.

From the early medieval texts we learn that generous entertaining was the main virtue required of Germanic leaders in return for loyal service, and there is no reason to suppose this a new phenomenon. It required not only large halls, but also a regular flow of foodstuffs and the means to purchase items such as Roman wine, not produced by the local economy. As the existence of specialist craftworkers also emphasizes, Germania’s economy had developed sufficiently beyond its old Jastorf norms to support a far larger number of non-agricultural producers.

Military retinues were not only the result of sociopolitical revolution, but also the vehicle by which it was generated, and large-scale internal violence was probably a feature of the Germanic world from the second to fourth centuries. The hereditary dynasts who dominated the new Alamannic, Frankish and Saxon confederations probably established their power through aggressive competition. In both east and west, the growing wealth of the region generated a fierce struggle for control, and allowed the emergence of specialist military forces as the means to win it. The outcome of these processes was the larger political confederation characteristic of Germania in the fourth century.

The changes that remade the Germanic world between the first and fourth centuries clearly show why Roman attention remained so firmly fixed on Persia in the late imperial period. The rise of that state to superpower status had caused the massive third-century crisis, and Persia remained the much more obvious threat, even after the eastern front had stabilized. Germania, by contrast, even in the fourth century, had come nowhere close to generating a common identity amongst its peoples, or unifying its political structures. Highly contingent alliances had given way to stronger groupings, or confederations, the latter representing a major shift from the kaleidoscopic first-century world of changing loyalties. Although royal status could now be inherited, not even the most successful fourth-century Germanic leaders had begun to echo the success of Ardashir in uniting the Near East against Roman power. To judge by the weapons deposits and our written sources, fourth-century Germani remained just as likely to fight each other as the Roman state.

That said, the massive population increase, economic development and political restructuring of the first three centuries AD could not fail to make fourth-century Germania much more of a potential threat to Roman strategic dominance in Europe than its first-century counterpart. It is important to remember, too, that Germanic society had not yet found its equilibrium. The belt of Germanic client kingdoms extended only about 100 kilometers beyond the Rhine and Danube frontier lines: this left a lot of Germania excluded from the regular campaigning that kept frontier regions reasonably in line. The balance of power on the frontier was, therefore, vulnerable to something much more dangerous than the periodic over-ambition of client kings. One powerful exogenous shock had been delivered by Sasanian Persia in the previous century – did the Germanic world beyond the belt of closely controlled client kingdoms pose a similar threat?

Throughout the Roman imperial period, established Germanic client states periodically found themselves the targets of the predatory groups settled further away from the frontier. The explanation for this is straightforward. While the whole of Germania was undergoing economic revolution, frontier regions were disproportionately affected, their economies stimulated not least by the presence nearby of thousands of Roman soldiers with money to spend. The client states thus tended to become richer than outer Germania, and a target for aggression.

The first known case occurred in the mid-first century AD, when a mixed force from the north invaded the client kingdom of one Vannius of the Marcomanni, to seize the vast wealth he had accumulated during his thirty-year reign.

And it was peripheral northern groups in search of client-state wealth who also started the second-century convulsion generally known as the Marcomannic War. The same motivation underlay the arrival of the Goths beside the Black Sea. Before the mid-third century, these lands were dominated by Iranian-speaking Sarmatian groups who profited hugely from the close relations they enjoyed with the Roman state (their wealth manifest in a series of magnificently furnished burials dating from the first to third centuries). The Goths and other Germanic groups moved into the region to seize a share of this wealth.

The danger posed by the developing Germanic world, however, was still only latent, because of its lack of overall unity. In practice, the string of larger Germanic kingdoms and confederations – now stretching all the way from the mouth of the Rhine to the north Black Sea coast – provided a range of junior partners within a dominant late Roman system, rather than a real threat to Rome’s imperial power. The Empire did not always get what it wanted in this relationship, and maintaining the system provoked a major confrontation between senior and junior partners about once every generation. Nonetheless, for the most part, the barbarians knew their place:

The later Roman Empire was doing a pretty good job of keeping the barbarians in check. It had had to dig deep to respond to the Persian challenge, but it was still substantially in control of its European frontiers. It has long been traditional to argue, however, that extracting the extra resources needed to maintain this control placed too many strains on the system; that the effort involved was unsustainable. Stability did return to Rome’s eastern and European frontiers in the fourth century, but at too high a price, with the result that the Empire was destined to fall – or so the argument goes.

The argument against corruption as a main factor of collapse

Ever since Gibbon, the corruption of public life has been part of the story of Roman imperial collapse.

But whether any of this played a substantial role in the collapse of the western Empire is much more doubtful.

Uncomfortable as the idea might be, power has, throughout history, had a long and distinguished association with money making: in states both big and small, both seemingly healthy and on their last legs. In most past societies and many present ones, the link between power and profit was not even remotely problematic, profit for oneself and one’s friends being seen as the whole, and perfectly legitimate, point of making the effort to get power in the first place.  The whole system of appointments to bureaucratic office within the Empire worked on personal recommendation. Since there were no competitive examinations, patronage and connection played a critical role. Nepotism was systemic, office was generally accepted as an opportunity for feathering one’s nest, and a moderate degree of peculation more or less expected.

And this was nothing new. The early Roman Empire, even during its vigorous conquest period, was as much marked as were later eras by officials (friends of higher officials) misusing – or perhaps one should just say ‘using’ – power to profit themselves and their associates. Great magnates of public life had always been preoccupied with self-advancement, and the early Empire had been no different. Much of what we might term ‘corruption’ in the Roman system merely reflects the normal relationship between power and profit.

It is important to be realistic about the way human beings use political power, and not to attach too much importance to particular instances of corruption. Since the power-profit factor had not impeded the rise of the Empire in the first place, there is no reason to suppose that it contributed fundamentally to its collapse.

Communication issues

A leap of imagination is required to grasp the difficulty of gathering accurate information in the Roman world. As ruler of just half of it, Valentinian was controlling an area significantly larger than the current European Union. Effective central action is difficult enough today on such a geographical scale, but the communication problems that Valentinian faced made it almost inconceivably harder for him than for his counterparts in modern Brussels. The problem was twofold: not only the slowness of ancient communications, but also the minimal number of lines of contact.

We know that in emergencies, galloping messengers, with many changes of horse, might manage as much as 155 miles (250 km) a day. But Theophanes’ average on that journey of 3.5 weeks was the norm: in other words, about 25 miles (40 km), the speed of the oxcart. This was true of military as well as civilian operations, since all the army’s heavy equipment and baggage moved by this means too.
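
Using only the two speeds just quoted, the scale of the problem is easy to see; a rough sketch (the journey length is inferred from the quoted pace and duration, not stated in the text):

```python
# Illustration using the speeds quoted in the text.
oxcart_km_per_day = 40       # Theophanes' normal pace
courier_km_per_day = 250     # emergency relays with many changes of horse

journey_days = 3.5 * 7       # Theophanes' 3.5-week journey
distance_km = journey_days * oxcart_km_per_day

print(f"Implied journey length: about {distance_km:.0f} km")
print(f"Same distance by emergency courier: about {distance_km / courier_km_per_day:.0f} days")
# -> roughly 980 km, or about 4 days by relay; but relays were reserved for
#    emergencies, and the army's baggage could only ever move at oxcart speed.
```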

Packing lists also make highly illuminating reading. Theophanes obviously needed a variety of attires: lighter and heavier clothing for variations in weather and conditions, his official uniform for the office, and a robe for the baths. The traveler brought along his own bedding – not just sheets, but even a mattress – and a complete kitchen to see to the food situation. This suggests Theophanes did not travel alone. We don’t know how many went with him, but he was clearly accompanied by a party of slaves who dealt with all the household tasks. He generally spent on their daily sustenance just under half of what he spent on his own.

Running the Roman Empire with the communications then available was akin to running, in the modern day, an entity somewhere between five and ten times the size of the European Union. With places this far apart, and this far away from his capital, it is hardly surprising that an emperor would have few lines of contact with most of the localities that made up his Empire.

Moreover, even if his agents had somehow maintained a continuous flow of intelligence from every town of the Empire into the imperial center, there is little that he could have done with it anyway. All this putative information would have had to remain on bits of papyrus, and headquarters would soon have been buried under a mountain of paperwork. Finding any particular piece of information when required would have been virtually impossible, especially since Roman archivists seem to have filed only by year.

Primitive communication links combined with an absence of sophisticated means of processing information explain the bureaucratic limitations within which Roman emperors of all eras had to make and enforce executive decisions.

The main consequence of all this was that the state was unable to interfere systematically in the day-to-day running of its constituent communities. Not surprisingly, the range of business handled by Roman government was only a fraction of that of a modern state. Even if there had been ideologies to encourage it, Roman government lacked the bureaucratic capacity to handle broad-reaching social agendas, such as a health service or a social security budget. Proactive governmental involvement was necessarily restricted to a much narrower range of operation: maintaining an effective army, and running the tax system. And, even in the matter of taxation, the state bureaucracy’s role was limited to allocating overall sums to the cities of the Empire and monitoring the transfer of monies. The difficult work – the allocation of individual tax bills and the actual collection of money – was handled at the local level. Even here, so long as the agreed tax-take flowed out of the cities and into the central coffers, local communities were left largely autonomous and self-governing. Keep Roman central government happy, and life could often be lived as the locals wanted.

Until very recently, scholars have been confident that the higher tax-take of the late Roman state aggravated these conditions to the extent that it became impossible for the Empire’s peasant population to maintain itself even at existing low levels. The evidence comes mostly from written sources. To start with, the annual volume of inscriptions known from Roman antiquity declined suddenly in the mid-third century to something like 20% of previous levels. Since chances of survival remained pretty constant, this massive fall-off was naturally taken as an indicator that landowners, the social group generally responsible for commissioning these largely private inscriptions, had suddenly found themselves short of funds. The chronology also suggested that the heavier tax burden imposed by the late Roman state was the primary cause, since the decline coincided with the tax hikes that were necessary to fight off the increased Persian threat.

Such views were reinforced by other sources documenting another well-known fourth-century phenomenon, commonly known as the ‘flight of the curials’ – landowners of sufficient wealth to get a seat on their town councils. They were the descendants of the men who had built the Roman towns, bought into classical ideologies of self-government, learned Latin, and generally benefited from Latin rights and Roman citizenship in the early imperial period. In the fourth century, these descendants became increasingly unwilling to serve on the town councils their ancestors had established. Some of the sources preserve complaints about the costs involved in being a councilor, others about the administrative burden imposed upon the curials by the Roman state. It has long been part of the orthodoxy of Roman collapse, therefore, that the old landowning classes of the Empire were overburdened into oblivion.

Other fourth-century legal texts refer to a previously unknown phenomenon, the deserted lands. Most of these texts are very general, giving no indication of the amounts of land that might be involved, but one law of AD 422, referring to North Africa, indicates that a staggering 3,000 square miles fell into this category in that region alone. A further run of late Roman legislation also attempted to tie certain categories of tenant farmers to their existing estates, to prevent them moving. It was easy, in fact irresistible, to weave these separate phenomena into a narrative of cause and effect, whereby the late Empire’s punitive tax regime made it uneconomic to farm all the land that had previously been under cultivation. Large-scale abandonment followed, the argument ran, which the state then tried to check with legislation tying tenants to their lands. Stripped of a larger portion of their production, the peasantry could not maintain their numbers over the generations, which further lowered output.

Into this happy consensus a large bomb was lobbed, towards the end of the 1950s, by a French archaeologist named Georges Tchalenko, who discovered that prosperity first came to the Syrian villages he was surveying in the later third and early fourth centuries, then continued into the fifth, sixth and seventh with no sign of decline. At the very moment when the generally accepted model suggested that the late Roman state was taxing the lifeblood out of its farmers, here was hard evidence of a farming region prospering.

Further archaeological work, using field surveys, has made it possible to test levels of rural settlement and agricultural activity across a wide geographical spread and at different points in the Roman period. Broadly speaking, these surveys have confirmed that Tchalenko’s Syrian villages were a far from unique example of late Roman rural prosperity. The central provinces of Roman North Africa (in particular Numidia, Byzacena and Proconsularis) saw a similar intensification of rural settlement and production at this time. This has been illuminated by separate surveys in Tunisia and southern Libya, where prosperity did not even begin to fall away until the fifth century. Surveys in Greece have produced a comparable picture. And elsewhere in the Near East, the fourth and fifth centuries have emerged as a period of maximum rural development – not minimum, as the orthodoxy would have led us to expect. Investigations in the Negev Desert region of modern Israel have shown that farming also flourished in this deeply marginal environment under the fourth-century Empire. The pattern is broadly similar in Spain and southern Gaul, while recent re-evaluations of rural settlement in Roman Britain have suggested that its fourth-century population reached levels that would only be seen again in the fourteenth. Argument continues as to what figure to put on this maximum, but that late Roman Britain was remarkably densely populated by ancient and medieval standards is now a given. The only areas, in fact, where, in the fourth century, prosperity was not at or close to its maximum for the entire Roman period were Italy and some of the northern European provinces, particularly Gallia Belgica and Germania Inferior on the Rhine frontier. Even here, though, estimates of settlement density have been revised substantially upwards in recent years.

The case of Italy is rather different. As befitted the heartland of a conquest state, Italy was thriving in the early imperial period. Not only did the spoils of conquest flood its territories, but its manufacturers of pottery, wine and other goods sold them throughout the western provinces and dominated the market. Also, Italian agricultural production was untaxed. As the economies of the conquered provinces developed, however, this early domination was curtailed by the development of rival enterprises closer to the centers of consumption and with much lower transport costs. By the fourth century, the process had pretty much run its course; and from Diocletian onwards, Italian agriculture had to pay the same taxes as the rest of the Empire. So the peninsula’s economy was bound to have suffered relative decline in the fourth century, and it is not surprising to find more marginal lands there being taken out of production. But as we have seen, the relative decline of Italy and perhaps also of north-eastern Gaul was more than compensated for by economic success elsewhere. Despite the heavier tax burden, the late Roman countryside was generally booming. The revolutionary nature of these findings cannot be overstated.

The laws forcing labor to stay in one place, for example, would only have been enforceable where rural population levels were relatively high. Otherwise, the general demand for labor would have seen landowners competing with one another for peasants, and being willing to take in each other’s runaways and protect them from the law.

Tenant subsistence farmers tend to produce only what they need: enough to provide for themselves and their dependents and to pay any essential additional dues such as rent. Within this context there will often be a certain amount of economic ‘slack’, consisting of extra foodstuffs they could produce but which they choose not to because they can neither store them, nor, thanks to high transport costs, sell them. In this kind of world, taxation – if not imposed at too high a level – can actually increase production: the tax imposed by the state is another due that has to be satisfied, and farmers do sufficient extra work to produce the additional output. Only if taxes are set so high that peasants starve, or the long-term fertility of their lands is impaired, will such dues have a damaging economic effect.

None of this means that it was fun to be a late Roman peasant. The state imposed heavier demands on him than it had on his ancestors, and he was prevented by law from moving around in search of the best tenancy terms. But there is nothing in the archaeological or written evidence to gainsay the general picture of a late Roman countryside at or near maximum levels of population, production and output.

Written sources and archaeological excavation both confirm that the late Roman landowning elite, like their forebears, would alternate between their urban houses and their country estates.  

There is also more than enough here to prompt a rethink about claims that, from the mid-third century, the army was so short of Roman manpower that it jeopardized its efficiency by drawing ever more heavily on ‘barbarians’. There is no doubt that the restructured Roman army did recruit such men in two main ways. First, self-contained contingents were recruited on a short-term basis for particular campaigns, returning home once they were over. Second, many individuals from across the frontier entered the Roman army and took up soldiering as a career, serving for a working lifetime in regular Roman units. Neither phenomenon was new. The auxiliary forces, both cavalry and infantry, of the early imperial army had always been composed of non-citizens, and amounted to something like 50% of the military.

Nothing about the officer corps of the late Empire suggests that barbarian numbers had increased across the army as a whole. The main difference between early and late armies lay not in their numbers, but in the fact that barbarian recruits now sometimes served in the same units as citizens, rather than being segregated into auxiliary forces. Training in the fourth century remained pretty much as fierce as ever, producing bonded groups ready to obey orders. From Ammianus Marcellinus’ picture of the army in action we find no evidence that its standards of discipline had fallen in any substantial way, or that the barbarians in its ranks were less inclined to obey orders or any more likely to make common cause with the enemy.

It is entirely possible that the extra costs incurred in the running of the fourth-century Empire could have alienated the loyalty of the provincial populations that had bought into the values of Romanness with such vigor under the early Empire.

Different emperors sold their frontier policies in different ways, but there was no disagreement on this basic purpose of taxation. The population was daily reminded of the point on its coinage: one of the most common designs featured an enemy groveling at the emperor’s feet.  Fourth-century emperors did manage to sell to their population the idea that taxation was essential to civilized life, and generally collected the funds without ripping their society apart.

On the religious front, Constantine’s conversion to Christianity certainly unleashed a cultural revolution. Physically, town landscapes were transformed as the practice of keeping the dead separate from the living, traditional in Graeco-Roman paganism, came to an end, and cemeteries sprang up within town walls. Churches replaced temples; as a consequence, from the 390s onwards there was so much cheap second-hand marble available that the trade in newly quarried marble all but collapsed.

Christianity was in some senses a democratizing and equalizing force. It insisted that everybody, no matter what his economic or social status, had a soul and an equal stake in the cosmic drama of salvation, and some Gospel stories even suggested that worldly wealth was a barrier to salvation. All this ran contrary to the aristocratic values of Graeco-Roman culture, with its claim that true civilization could only be attained by the man with enough wealth and leisure to afford many years of private education and active participation in municipal affairs.

While the rise of Christianity was certainly a cultural revolution, Gibbon and others are much less convincing in claiming that the new religion had a seriously deleterious effect upon the functioning of the Empire. Christian institutions did, as Gibbon asserts, acquire large financial endowments. On the other hand, the non-Christian religious institutions that they replaced had also been wealthy, and their wealth was being progressively confiscated at the same time as Christianity waxed strong. It is unclear whether endowing Christianity involved an overall transfer of assets from secular to religious coffers. Likewise, while some manpower was certainly lost to the cloister, this was no more than a few thousand individuals at most, hardly a significant figure in a world that was maintaining, even increasing, population levels. Similarly, the number of upper-class individuals who renounced their wealth and lifestyles for a life of Christian devotion pales into insignificance beside the 6,000 or so who by AD 400 were actively participating in the state as top bureaucrats. In legislation passed in the 390s, all of these people were required to be Christian. For every Paulinus of Pella, there were many more newly Christianized Roman landowners happy to hold major state office, and no sign of any crisis of conscience among them.

Many Christian bishops, as well as secular commentators, were happy to restate the old claim of Roman imperialism in its new clothing. Bishop Eusebius of Caesarea was already arguing, as early as the reign of Constantine, that it was no accident that Christ had been incarnated during the lifetime of Augustus, the first Roman emperor. Despite the earlier history of persecutions, went his argument, this showed that Christianity and the Empire were destined for each other, with God making Rome all-powerful so that, through it, all mankind might eventually be saved.

Taxes were paid, elites participated in public life, and the new religion was effectively enough subsumed into the structures of the late Empire. Far from being the harbingers of disaster, both Christianization and bureaucratic expansion show the imperial center still able to exert a powerful pull on the allegiances and habits of the provinces. That pull had to be persuasive rather than coercive, but so it had always been. Renegotiated, the same kinds of bonds continued to hold center and locality together.

A fundamental problem in the structure of power within the late Empire

The imperial office had to be divided. Harmony between co-rulers was possible if one was so predominant as to be unchallengeable. The relationship between Theodosius and Valentinian worked happily enough on this basis, as had that between Constantine and various of his sons between the 310s and the 330s. But to function properly, the Empire required more or less equal helmsmen. A sustained inferiority was likely to be based on an unequal distribution of the key assets – financial and military – and if one was too obviously subordinate, the politically important factions in his realm were likely to encourage him to redress the balance – or, worse, encourage a usurper. This pattern had, for example, marred Constantius II’s attempts to share power with Gallus and Julian in the 350s.

Equal emperors functioning together harmoniously was extremely difficult to achieve, and happened only rarely. For a decade after 364, the brothers Valentinian I and Valens managed it, and so did Diocletian, first with one other emperor from 286, then with three from 293 to 305 (Diocletian’s so-called Tetrarchy). But none of these partnerships produced lasting stability, and even power-sharing between brothers was no guarantee of success. When they succeeded to the throne, the sons of Constantine I proceeded to compete among themselves, to the point that Constantine II died invading the territory of his younger brother Constans. Diocletian’s Tetrarchy, likewise, worked well enough during his political lifetime, but broke down after his abdication in 305 into nearly twenty years of dispute and civil war, which was ended only by Constantine’s defeat of Licinius in 324.

The organization of central power posed an insoluble dilemma in the late Roman period. It was an administrative and political necessity to divide that power: if you didn’t, usurpation, and often civil war, followed. Dividing it in such a way as not to generate war between rivals was, however, extremely difficult. And even if you solved the problem for one generation, it was pretty much impossible to pass on that harmony to your heirs, who would lack the habits of trust and respect that infused the original arrangement. Consequently, in each generation the division of power was improvised, even where the throne was passed on by dynastic succession. There was no ‘system’, and whether power was divided or not, periodic civil war was inescapable. This wasn’t just a product of the personal failings of individual emperors – although the paranoia of Constantius II, for example, certainly contributed to the excitement. Essentially, it reflected the fact that there were so many political concerns to be accommodated, such a large spread of interested landowners within the much more inclusive late Empire, that stability was much harder to achieve than in the old Roman conquest state, when it had been only the Senate of Rome playing imperial politics.

This is much better viewed, though, as a limitation than as a basic flaw: the Empire was not fundamentally undermined by it.  The civil wars of the fourth century did not make the Empire vulnerable, for instance, to Persian conquest.

The spread of Roman culture and the adoption of Roman citizenship in its conquered lands resulted from the fact that the Empire was the only avenue open to individuals of ambition. You had to play by its rules, and acquire its citizenship, if you were to get anywhere.

The one-party state analogy points us to two further drawbacks of the system. First, active political participation was very narrowly based. To participate in the workings of the Roman Empire, you had to belong to the wealthier landholding classes.  The politically active landowning class probably amounted, therefore, to less than 5% of the population. To this we might add another percentage or so for a semi-educated professional class, found particularly in the towns.

The vast majority of the population – whether free, tied or slave – worked the land and were more or less excluded from political participation. For these groups, the state existed largely in the form of tax-collectors making unwelcome demands upon their limited resources. Again, it is impossible to estimate precisely, but the peasantry cannot have been less than 85% of the population. So we have to reckon with a world in which over 80% had little or no stake in the political systems that governed them. Indifference may well have been the peasants’ overriding attitude towards the imperial establishment. Across most of the Empire, habitation and population levels increased in the course of its history, as we have noted, and it is hard not to see this as an effect of the Pax Romana – the conditions of greater peace and stability that the Empire generated.

The Empire had always been run for the benefit of an elite. And while this made for an exploited peasantry and a certain level of largely unfocused opposition, there is no sign in the fourth century that the situation had worsened.

The second, rather less obvious, drawback was potentially more significant, given the peasantry’s underlying inability to organize itself for sustained resistance. To understand it, we need to consider for a moment the lifestyles of the Roman rich.

There were other forms of wealth in the Roman world apart from landowning; money could be made from trade and manufacture, the law, influence-peddling and so on. But landowning was the supreme expression of wealth, and, as in pre-industrial England, those who made money elsewhere were quick to invest it in estates, because, above all, land was the only honorable form of wealth for a gentleman. Land was an extremely secure investment, and in return for the original outlay estates offered a steady income in the form of annual agricultural production. In the absence of stock markets, and given the limited and more precarious investment opportunities offered by trade and manufacture, land was the gilt-edged stock of the ancient world (and indeed of all worlds, pretty much, prior to the Industrial Revolution). First and foremost, landowners needed to keep the output of their estates up to scratch. A piece of land was in itself only a potential source of revenue; it needed to be worked, and worked efficiently, to produce a good annual income. The right crops had to be grown, for a start. Then, investment of time, effort and capital always offered the possibility of what in pre-industrial England was termed ‘improvement’: a dramatic increase in production.

Roman landowners spent much of their lives checking on the running of their estates, either directly or through agents. The lifestyle of Symmachus and his friends provides a blueprint for that of the European gentry and nobility over much of the next 1600 years. Leisured, cultured and landed: some extremely rich, some with just enough to get by in the expected manner, and everyone perfectly well aware of who was who.  And all engaged in an intricate, elegant dance around the hope and expectation of the great wealth that marriage settlement and inheritance would bring.

A further limitation imposed by the Roman imperial system stems from this elegant, leisured and highly privileged lifestyle. It rested upon the massively unequal distribution of landed property: as noted earlier, less than 5% of the population owned over 80% of the land, and perhaps substantially more. And at the heart of this inequality was the Roman state itself, in that its laws both defined and protected the ownership rights of the property-owning class.

A huge amount of Roman law dealt with property: basic ownership, modes of exploiting it (selling, leasing for longer or shorter terms, simple renting and sharecropping), and its transfer between generations through marriage settlements, inheritance and special bequests. The ferocity of Roman criminal law, likewise, protected ownership: death was the main punishment for theft for anything beyond petty pilfering. Again, we can see a resemblance here to later ‘genteel’ societies based on similarly unequal distributions of landed wealth in an overwhelmingly agricultural economy. When Jane Austen was writing her elegant tales of love, marriage and property transfer, you could be whipped (for theft valued at up to 10d), branded (for theft up to 4s 10d) or hanged (theft over 5 shillings).

The Roman state had to advance and protect the interests of these landowning classes because they were, in large measure, the same people who participated in its political structures. The state relied on the administrative input of its provincial landowning classes at all levels of the governmental machine, and in particular to collect its taxes – the efficient collection of which hung on the willingness of these same landed classes to pay up. This delicate balance manifested itself in two ways. First, and most obviously, taxes on agriculture could not rise so high that landowners would opt out of the state system. Second, the landowners’ elite status and lifestyles depended upon a property distribution so unequal that the have-nots had a massive numerical advantage – which should surely have led to a redistribution of wealth unless some other body prevented it. In the fourth century, this other body was, as it had been for centuries, the Roman state.

We might understand the participation of the landowners in the Roman system, therefore, as a cost-benefit equation. What it cost them was the money they paid annually into the state coffers. What they got in return was protection for the wealth on which their status was based. In the fourth century, benefit hugely outweighed cost.

But should the taxman become too demanding, or the state incapable of providing protection, then the loyalty of the landowning class could be up for renegotiation.

There is no sign in the fourth century that the Empire was about to collapse. The late Empire was essentially a success story. The rural economy was mostly flourishing, and unprecedented numbers of landowners were keen to fill the offices of state. As the response to the Persians showed, the Roman imperial structure was inherently rigid, with only a limited and slow-moving bureaucratic, economic and political capacity to mobilize resources in the face of a new threat. But the Persian challenge had been successfully seen off, and the overwhelming impression the Roman state gave was one of continuing unmatchable power. It was not, however, destined to be left to its own devices. While fourth-century Romans continued to look on Persia as the traditional enemy, a second major strategic revolution was about to unfold to the north.

 ‘The seed-bed and origin of all this destruction and of the various calamities inflicted by the wrath of Mars, which raged everywhere with extraordinary fury, I find to be this: the people of the Huns.’ Ammianus was writing nearly twenty years later, by which time the Romans had a better understanding of what had brought the Goths to the Danube. Even in the 390s, though, the full effects of the arrival of the Huns were far from apparent. The appearance of the Goths beside the river in the summer of 376 was the first link in a chain of events that would lead directly from the rise of Hunnic power on the fringes of Europe to the deposition of the last western emperor, Romulus Augustulus, almost exactly one hundred years later. None of this was even remotely conceivable in 376, and there would be many twists and turns on the way. The arrival of Goths on the Danube marked the start of a reshuffling of Europe-wide balances of power, and it is to this story that the rest of the book is devoted. We must begin, like Ammianus, with the Huns.

The origins of the Huns are mysterious and controversial. The one thing we know for certain is that they were nomads from the Great Eurasian Steppe. The Eurasian Steppe is a huge expanse, stretching about 3,400 miles (5,500 km) from the fringes of Europe to western China, with another 1,865 miles (3,000 km) to its north and east. The north-south depth of the steppe ranges from only about 300 miles (500 km) in the west to nearly 1,900 miles (3,000 km) in the wide-open plains of Mongolia. Geography and climate dictate the nomadic lifestyle. Natural steppe grasslands are the product of poor soils and limited rainfall, which make it impossible, in general terms, for trees and more luxuriant vegetation to grow. The lack of rainfall also rules out arable farming of any sustained kind, so that the nomad makes a substantial part of his living from pastoral agriculture, herding a range of animals suited to the available grazing. Cattle can survive on worse pasture than horses, sheep on worse pasture than cattle, and goats on worse than sheep. Camels will eat anything left over. Nomadism is essentially a means of assembling distinct blocks of pasture, which between them add up to a year-round grazing strategy. Typically, modern nomads will move between upland summer pasture (where there is no grass in the winter because of snow and cold) and lowland winter pasture (where the lack of rain in summer means, again, no grass). In this world, grazing rights are as important in terms of economic capital as the herds, and as jealously guarded.

The distance between summer and winter pasture needs to be minimal, since all movement is hard both on the animals and on the weaker members of the human population. Before Stalin sedentarized them, the nomads of Kazakhstan tended to move about 50 miles (75 km) each way between their pastures. Nomadic societies also form close economic ties with settled arable farmers in the region, from whom they obtain much of the grain they need, though some they produce themselves. While part of the population cycles the herds around the summer pastures, the rest engage in other types of food production. But all the historically observed nomad populations have needed to supplement their grain production by exchanging with arable populations the surplus generated from their herds (hides, cheese and yoghurt, actual animals and so on). Often, this exchange has been one-sided, with the arable population getting in return no more than exemption from being raided, but sometimes the exchange has been properly reciprocal.

We can only guess at the motives behind the Huns’ decision to shift their center of operations westwards. The idea that it was the wealth of the northern shores of the Black Sea that attracted Hunnic attentions is perfectly plausible.  In the case of the Huns, we have no firm indication that a negative as well as a positive motivation was at work, but we can’t rule it out [my comment: another book suggests drought].

The Avars, who would have much the same kind of impact on Europe as the Huns, but two centuries later, were looking for a safe haven beyond the reach of the western Turks, when they appeared north of the Black Sea. At the end of the 9th century, likewise, the nomadic Magyars would move into Hungary because another nomad group, the Pechenegs, was making life intolerable for them further east.

Mysterious as the Huns’ origins and animating forces may remain, there is no doubt at all that they were behind the strategic revolution that brought the Goths to the Danube in the summer of 376.

In 375/6, there was no massive horde of Huns hotly pursuing the fleeing Goths: rather, independent Hunnic warbands were pursuing a variety of strategies against a variety of opponents. What was happening, then, was not that a force of Huns conquered the Goths in the sense we normally understand the word, but that some Goths decided to evacuate a world that was becoming ever more insecure. As late as 395, some 20 years later, the mass of Huns remained further east – much closer, in fact, to the northern exit of the Caucasus than to the mouth of the Danube.  And it was other Gothic groups, in fact, not the Tervingi or Greuthungi, who continued to provide Rome with its main opposition on the Lower Danube frontier for a decade or more after 376.  

What was it about the Huns, then, that allowed them in the later fourth century to redress the military balance in favor of the nomadic world?

The Huns were totally incapable and ignorant of conducting a battle on foot, but by wheeling, charging, retreating in good time and shooting from their horses, they wrought immense slaughter. The Huns were cavalry, and above all horse archers, who were able to engage at a safe distance until their opponents lost formation and cohesion. At this point, the Huns would move in for the kill with either bow or sabre. The essential ingredients in all this were skilled archery and horsemanship, the capacity to work together in small groups, and ferocious courage.

The key to Hunnic success seems to lie in one particular detail whose significance has not been fully recognized. Both the Huns and the Scythians used the composite bow, but whereas Scythian bows measured about 31 inches (80 centimeters) in length, the few Hunnic bows found in graves are much larger, measuring between 51 and 63 inches (130-160 centimeters). The point here, of course, is that size generates power. However, the maximum size of bow that a cavalryman can comfortably use is only about 40 inches (100 centimeters). The bow was held out, upright, directly in front of the rider, so that a longer bow would bang into the horse’s neck or get caught up in the reins. But Hunnic bows were asymmetric. The half below the handle was shorter than the half above, and it is this that allowed the longer bow to be used from horseback. It involved a trade-off, of course. The longer bow was clumsier and its asymmetry called for adjustments in aim on the part of the archer. But the Huns’ asymmetric 130-centimeter bow generated considerably more hitting power than the Scythians’ symmetrical 80-centimeter counterpart: unlike the Scythians’, it could penetrate Sarmatian armor while keeping the archer at a safe distance and not impeding his horsemanship.

Hunnic horse archers would probably have been effective against unarmored opponents such as the Goths from distances of 500 to 650 feet (150 to 200 meters), and against protected Alans from 250 to 325 feet (75 to 100 meters).

The bow wasn’t the Huns’ only weapon. Having destroyed the cohesion of an enemy’s formation from a distance, their cavalry would then close in to engage with their swords, and they often used lassos, too, to disable individual opponents. There is also some evidence that high-status Huns wore coats of mail. But the reflex bow was their pièce de résistance.

With a rare unanimity, the vast majority of our sources report that this sudden surge of would-be Gothic immigrants wasn’t seen as a problem at all. On the contrary, Valens happily admitted them because he saw in this flood of displaced humanity a great opportunity: ‘The affair caused more joy than fear and educated flatterers immoderately praised the good fortune of the prince, which unexpectedly brought him so many young recruits from the ends of the earth, that by the union of his own and foreign forces he would have an invincible army. In addition, instead of the levy of soldiers, which was contributed annually by each province, there would accrue to the treasury a vast amount of gold.’

Most of the sources also give a broadly similar account of what went wrong after the Goths crossed the river.  The blame for what happened next is placed mostly on the dishonesty of the Roman officials on the spot. For once the immigrants started to run short of supplies, these officials exploited their increasing desperation to run a highly profitable black market, taking slaves from them in return for food. Unsurprisingly, this generated huge resentment, which the Roman military, especially one Lupicinus, commander of the field forces in Thrace (comes Thraciae), only exacerbated. Having first profited from the black market, then having made the Goths move on to a second camp outside his regional headquarters, he made a botched attack on their leadership, at a banquet supposedly given in their honor. This pushed the Goths from resentment to revolt.

Normal Roman policy towards asylum seekers

Immigrants, willing or otherwise, in 376 were a far from new phenomenon for the Roman Empire. Throughout its history, it had taken in outsiders: a constant stream of individuals looking to make their fortune (not least, as we have seen, in the Roman army), supplemented by occasional large-scale migrations. There was even a technical term for the latter: receptio. An inscription from the first century AD records that Nero’s governor transported 100,000 people from across [north of] the Danube into Thrace. As recently as AD 300, the tetrarchic emperors had resettled tens of thousands of Dacian Carpi inside the Empire, dispersing them in communities the length of the Danube, from Hungary to the Black Sea.

There was no single blueprint for how immigrants were to be treated, but clear patterns emerge. If relations between the Empire and the would-be asylum seekers were good, and the immigration was happening by mutual consent, then some of the young adult males would be drafted into the Roman army, sometimes forming a single new unit, and the rest distributed fairly widely across the Empire as free peasant cultivators who would henceforth pay taxes. This was the kind of arrangement agreed between the emperor Constantius II and some Sarmatian Limigantes, for instance, in 359. If relations between the Empire and migrants were not so good, and, in particular, if they’d been captured during military operations, treatment was much harsher. Some might still be drafted into the army, though often with greater safeguards imposed.

An imperial edict dealing with a force of Sciri captured by the Romans in 409, for instance, records that 25 years – that is, a generation – should pass before any of them could be recruited. The rest, again, became peasant cultivators, but on less favorable terms. Many of the Sciri of 409 were sold into slavery, and the rest distributed as unfree peasants, with the stipulation that they had to be moved to points outside the Balkans, where they had been captured. All immigrants became soldiers or peasants, then, but there were more and less pleasant ways of effecting it.

There is, however, another common denominator to all documented cases of licensed immigration into the Empire. Emperors never admitted immigrants on trust. They always made sure that they were militarily in control of proceedings, either through having defeated the would-be immigrants first, or by having sufficient force on hand to deal with any trouble.

In 376 the Roman army was demonstrably not in charge of the situation, and when things started to go wrong after the crossing, order could not be restored. Lupicinus, whatever his personal culpability for the Goths’ revolt, simply didn’t have enough troops on hand. After the banquet, he immediately rushed his available forces into battle against the rebellious Goths and was soundly defeated. In the absence of total military superiority, which was central to normal Roman receptiones, it is just not credible that Valens was anything like as happy about the arrival of the Goths on the Danube as the sources claim. The reason is straightforward: when the Goths arrived on the Danube, Valens was already fully committed to an aggressive policy in the east against Persia, and it was bound to take him at least a year to extract his forces diplomatically, or even just to turn them around logistically.

Of the two Gothic groups who arrived at the Danube, only the Tervingi were admitted. The Greuthungi were refused permission to enter the Empire, and such troops and naval craft as were available in the Balkans were placed opposite them to keep them north of the river. Valens did not, then, rush to accept every Goth he could find so as to build up his army and fill the treasury’s coffers at one and the same time.

The main cause of the Tervingi’s revolt was food shortages and black-marketeering beside the Danube. The Goths, it seems, spent autumn and part of winter 376/7 beside the river, and only moved on to Marcianople sometime in late winter or early spring. Even when the revolt got under way, they still had difficulty in finding food, because all the necessities of life had been taken to the strong cities, none of which the enemy even then attempted to besiege.

The Goths, of course, had no means of growing their own food at this point, since the agreement hadn’t yet got as far as land allocations. Once their stocks had been consumed, securing all other food supplies gave Valens a lever of control over them.

The botched attack was the result of misunderstanding and panic, but banquet hijacks were a standard tool of Roman frontier management. Removing dangerous or potentially dangerous leaders was an excellent means of spreading confusion amongst opponents.  Ammianus describes four other occasions over a span of just 24 years when Roman commanders made dinner invitations an opportunity for kidnap.

The arrival of a huge number of unsubdued Goths in Roman territory at a point when the main Roman army was mobilized elsewhere was much too potentially dangerous not to have been thought through.

Lupicinus had been told that if things looked as if they might be getting out of hand, then he should do what he could to disrupt the Goths – and hijacking enemy leaders, as already mentioned, was a standard Roman reflex. But it was Lupicinus’ call. In the event, he went for that worst of all possible worlds: first one thing, then the other, with neither stratagem whole-heartedly pursued. Instead of a continued if uneasy peace or a leaderless opposition, he found himself facing an organized revolt under an established leader.

Both the Goths and the Romans had been thrown by the Huns into a new and more intense relationship. Neither side trusted the other, and neither was totally committed to the agreement negotiated – when both were under duress – in 376. That this initial agreement failed to hold cannot really have surprised anyone. The way was now clear for a test of military strength, upon whose outcome would hang the nature of a more durable settlement between the immigrant Goths and the Roman state.

Valens’ jealousy of Gratian, and his impatience, had undone the Empire.

Victory left the Goths masters not only of the battlefield, but of the entire Balkans. Roman military invincibility had been overturned in a single afternoon, and Gratian could only look on helplessly from the other side of the Succi Pass, about 185 miles (300 km) distant, as the triumphant Goths rampaged through the southern Balkans. Against all the odds, and despite their opponents’ advantages in equipment and training, the Goths had triumphed and the path to Constantinople lay open. As Ammianus reports: ‘From Hadrianople they hastened in rapid march to Constantinople, greedy for its vast heaps of treasure, marching in square formations.’ Victory over Valens at Hadrianople in 378 was just enough to give the Goths a glimpse of the prize that was Constantinople; but that in turn was enough to convince them that they hadn’t the slightest chance of capturing it.

The Gothic force at large south of the Danube between 377 and 382 wasn’t just an army, but an entire population group: men, women and children, dragging themselves and their possessions around in a huge wagon train. With no secure lands available to them for food production, and unable to break into fortified storehouses, the Goths were forced to pillage in order to eat, and, because so much food was required, it was extremely difficult for them to stay in the one place. Already in autumn 377, there was nothing left north of the Haemus Mountains, and the pattern of the subsequent war years, in so far as we can reconstruct it, saw them moving from one part of the Balkans to another. Sometimes it was the Roman army that forced them on, but this restlessness was largely attributable to their lack of secure food supplies.

We are still a long way from imperial collapse. The war on the Danube had affected only the Empire’s Balkan provinces, a relatively poor and isolated frontier zone, and even here some kind of Romanness survived. The late fourth- and early fifth-century layers of the recently excavated Roman city of Nicopolis ad Istrum are striking for the number of rich houses – 45% of the urban area – that suddenly appeared inside the city walls. It looks as though, since their country villas were now too vulnerable, the rich were running their estates from the safety of the town. At the end of the war, moreover, both eastern and western emperors remained in secure occupation of their thrones, with their great revenue-producing centers such as Asia Minor, Syria, Egypt and North Africa entirely untouched.

On a hot August day in 410, the unthinkable happened. A large force of Goths entered Rome by the Salarian Gate and for three days helped themselves to the city’s wealth. The sources, without being specific, speak clearly of rape and pillage. There was, of course, much loot to be had, and the Goths had a field day. By the time they left, they had cleaned out many of the rich senatorial houses as well as all the temples, and had taken ancient Jewish treasures that had resided in Rome since the destruction of the Temple in Jerusalem over 300 years before.

The crisis of 405–8 must be seen as a rerun of 376, with the further movements of nomadic Huns as the trigger. Huns in large numbers had not themselves been directly involved in the action of 376. As late as 395, 20 years after the Goths crossed the Danube, most of the Huns were still well to the east.

Tens of thousands of warriors, which means well over 100,000 people all told – just possibly a few hundred thousand – were on the move.

The first step on the road to the sack of Rome in 410 was taken far off on the northern shores of the Black Sea. The further advances of the Huns threw Germania west of the Carpathians into crisis, and the major knock-on effect observed by the Romans was large-scale armed immigration into their Empire. For the eastern Empire, the new proximity of the Huns generated a heightened anxiety which betrayed itself in new and far-reaching defensive measures. But it was the western Empire that bore the brunt of the fall-out both immediately and in the longer term. The collision of the invaders with the central Roman authorities and local Roman elites would have momentous repercussions.

The immediate effects of these population displacements were exactly what you would expect. None of the refugees entered the Empire by agreement; all behaved as enemies and were treated as such. The Goths of Radagaisus at first met little opposition, but when they reached Florence, matters came to a head. They had blockaded the city and reduced it virtually to the point of capitulation, when a huge Roman relief force, commanded by Stilicho, generalissimo of the western Empire, arrived just in the nick of time.

Fire, rape and pillage in Gaul were thus followed by the forced annexation of Spain, but this was only the beginning of the catalogue of disasters that followed the breakdown of frontier security in the western Empire. The Goths were now back in the political wilderness, worse off in some ways than before 406. At least then they had had a well-established base. Now, they were in unfamiliar territory and lacking ties to any local food-producing population. But in one important respect, Alaric’s Goths were soon to be better off. Shortly after Stilicho’s execution, the native Roman element in the army of Italy launched a series of pogroms against the families and property of the barbarian troops that he had recruited. These families, who had been quartered in various Italian cities, were massacred wholesale. Outraged, the menfolk threw in their lot with Alaric, increasing his fighting force to perhaps around 30,000. Nor was this first reinforcement the end of the story. Later, when the Goths were encamped outside Rome in 409, they were joined by enough slaves to take Alaric’s force to a total of 40,000 warriors.

In the autumn of 408, now in command of a Gothic supergroup larger than any yet seen, Alaric made a bold play. Gathering all his men, he marched across the Alps and into Italy, sowing destruction far and wide as he made a beeline for Rome. He arrived outside the city in November and quickly laid siege to it, thus preventing all food supplies from entering. It soon emerged, however, that Alaric had not the slightest intention of capturing the city. What he wanted – and what he got by the end of the year – was, most obviously, booty. The Roman Senate agreed to pay him a ransom of 5,000 pounds of gold and 30,000 of silver, together with huge quantities of silks, skins and spices.

The Sack of Rome

There followed one of the most civilized sacks of a city ever witnessed. Alaric’s Goths were Christian, and treated many of Rome’s holiest places with great respect. The two main basilicas of St Peter and St Paul were nominated places of sanctuary. Those who fled there were left in peace, and refugees to Africa later reported with astonishment how the Goths had even escorted certain holy ladies to sanctuary there.

All in all, even after three days of Gothic attentions, the vast majority of the city’s monuments and buildings remained intact, even if stripped of their movable valuables.

The contrast with the last time the city had been sacked, by Celtic tribes in 390 BC, could not have been more marked. Celtic warbands were able to walk straight into Rome. The few men of fighting age left there defended the Capitol with the help of some geese, which provided early warning of surprise attacks, but they abandoned the rest of the town. Older patricians refused to leave, but sat outside their houses in full ceremonial robes.

At first the Celts approached reverentially beings who . . . seemed in their majesty of countenance and in the gravity of their expression most like to gods. Then a Celt stroked the beard of one of them, Marcus Papirius, which he wore long as they all did then, at which point the Roman struck him over the head with his ivory mace, and, provoking his anger, was the first to be slain. After that, the rest were massacred where they sat and . . . there was no mercy then shown to anyone. The houses were ransacked, and, after being emptied, were set aflame. In 390 BC, only the fortress on the Capitol survived the burning of the city; in AD 410 only the Senate house was set on fire.

The extent of Saxon inroads made between 410 and 420 is another hotly contested issue. Everything suggests that the real cataclysm came a bit later, but, for present purposes, the date doesn’t really matter. Whether at the hands of Saxons or of local self-defense forces, Britain dropped out of the Roman radar from about 410, and was no longer supplying revenues to Ravenna.

In addition to the territories lost outright to the Roman system, tax revenue was substantially down in those much larger parts of the west that had been affected by warfare or looting over the past decade. Much of Italy had been pillaged by the Goths, Spain by the survivors of the Rhine invasion, and Gaul by both. How much of these territories had been damaged is difficult to say, and agriculture could of course recover, but there is good evidence that warfare had caused serious medium-term damage.

The willingness of the landowning elite to do deals with barbarians was a very different phenomenon – and much more dangerous for the Empire – but it too had its origins in the nature of the system. Given its vast size and limited bureaucratic technology, the Roman Empire could not but be a world of self-governing localities held together by a mixture of force and the political bargain that paying tax to the center would bring protection to local landowning elites. The appearance of armed outside forces in the heart of the Roman world put that bargain under great strain. The speed with which some landowners rushed to support barbarian-sponsored regimes is not, as has sometimes been argued, a sign of lack of moral fiber among late Romans, so much as an indicator of the peculiar character of wealth when it comes in the form of land. In historical analysis, not to mention old wills, landed wealth is usually categorized in opposition to moveable goods, and that captures the essence of the problem. You cannot simply pick it up and move, as you would a sack of gold or diamonds, should conditions in your area change. If you do move on, you leave the source of your wealth, and all of your elite status, behind. Landowners have little choice, therefore, but to try to come to terms with changing conditions, and this is what was beginning to happen around Rome in 408/10 and in southern Gaul in 414/15. In fact, it didn’t get far, because Constantius reasserted central authority pretty quickly. He also seems to have been aware of the political problem, and acted swiftly to contain it.

Much of the real business of political negotiation and policy-making took place at a further remove from the public gaze, at council sessions with a few trusted officials present or in private rooms out of sight of pretty much everybody. The decision to admit the Goths into the Empire in 376 emerged only after heated debate amongst Valens and his closest advisers, but the public face put on the decision when announced in the consistory was cheerful consensus. Likewise, Priscus tells us that when wanting to suborn a Hunnic ambassador to murder his ruler, an east Roman official invited him back to his private apartments after the formal ceremonies in the consistory were done. The imperial court had to show complete unanimity in public, but knives were kept sharpened privately, and a constant flurry of rumours spread to advance friends and to destroy foes. Winning and exercising influence backstairs was how the political game was played by everyone.

The rewards of success were enormous: staggering personal wealth and a luxurious lifestyle, together with both social and political power, as you helped shape the affairs of the day and those below you courted your favors. But the price of failure was correspondingly high; Roman politics was a zero-sum game. A top-level political career generated far too many enemies for the individual to be able to take his finger off the pulse for a moment. You don’t hear of many retirements from the uppermost tiers of late Roman politics. The only exit for Stilicho, as we’ve seen, was in a marble sarcophagus, and the same was true for many other leading figures. Regime change, especially the death of an emperor, was the classic moment for the knives to come out. If you were lucky it was just you who snuffed it, but sometimes entire families were wiped out and their wealth confiscated.

Their power came from being seen to do a myriad of small favors, from people knowing that so much influence was within their gift. Patrons were constantly harassed by petitioners, therefore, who would go elsewhere if the particular favor was not forthcoming. Once you stepped on the up escalator, it was hard to get off.

Twelve years of political conflict, involving two major wars and a minor one, had finally produced a winner. By a combination of assassination, fair battle and good fortune, Aetius had emerged by the end of 433 as the de facto ruler of the western Empire. This kind of court drama was nothing new. It was, as we have seen, a structural limitation of the Roman world that every time a strong man bit the dust, be he emperor or power behind the throne, there was always a protracted struggle to determine his successor. Sometimes, the fall-out was far worse than that witnessed between 421 and 433. Diocletian’s power-sharing Tetrarchy had brought internal peace to the Empire during 285–305, but the price was horrific: multiple, large-scale civil wars over the next 19 years, until Constantine finally eclipsed the last of his rivals. This was a much longer and bloodier bout of mayhem than what took place in and around Italy between the death of Constantius and the rise of Aetius.

There was nothing that unusual, then, in the jostling for power that took place during the 420s; but there was something deeply abnormal about its knock-on effects. While a new order was painfully emerging at the center, the rest of the Roman world would usually just get on with being Roman. The landed elites carried on administering their estates and writing letters and poetry to one another, their children busied themselves with mastering the subjunctive, and the peasantry got on with tilling and harvesting. But by the second and third decades of the fifth century there were untamed alien forces at large on Roman soil, and during the 12 years after the death of Constantius they were occupied with more than the business of being Roman. As a result, if the events of 421–33 were in themselves merely a retelling of an age-old Roman story, the same was not true of their consequences. Political paralysis at Ravenna gave the outside forces free rein to pursue their own agendas largely unhindered, and the overall effect was hugely detrimental to the Roman state. For one thing, the Visigothic supergroup settled so recently in Aquitaine got uppity again, aspiring to a more grandiose role in the running of the Empire than the peace of 418 had allowed them. There was also disquiet among some of the usual suspects on the Rhine frontier, particularly the Alamanni and the Franks.

Above all, the Rhine invaders of 406, the Vandals, Alans and Suevi, were on the move once again. They were in origin rather a mixed bunch.

The Alans, Iranian-speaking nomads, as recently as AD 370 had been roaming the steppe east of the River Don and north of the Caspian Sea. Only under the impact of Hunnic attack had some of them started to move west, in a number of separate groups, while others were conquered. The two groups of Vandals, the Hasdings and the Silings – each under their own leaderships like the Gothic Tervingi and Greuthungi of 376 – were Germanic-speaking agriculturalists living, in the fourth century, in central-southern Poland and the northern fringes of the Carpathians.

The Suevi consisted of several small groups from the upland fringes of the Great Hungarian Plain. This odd assortment of peoples may have made common cause in 406, but they were far from natural allies. First, the Hasdings and Silings and Suevi could certainly have understood each other, even if speaking slightly different Germanic dialects, but the Alans spoke another language entirely. Second, both Vandal groups and the Suevi are likely to have shared the tripartite oligarchic structure common to fourth-century Germanic Europe: a dominant, if quite numerous, minority free class, holding sway over freedmen and slaves.

Tied to a nomadic pastoral economy, however, the Alans’ social structure was completely different. Slavery was unknown amongst them, and everyone shared the same noble status. A more egalitarian social structure is natural to nomadic economies, where wealth, measured in the ownership of animals, has a less stable basis than the ownership of land.

Although they were rather odd bedfellows, the press of events prompted these groups to learn to work together, and this happened progressively over time. Even before crossing the Rhine, Gregory of Tours tells us in his Histories, the Alans under King Respendial rescued the Hasdings from a mauling at the hands of the Franks. We have no idea how closely the groups cooperated in Gaul immediately after the crossing, but in 409, in the face of the counterattacks organized by Constantine III, they again moved en bloc into Spain. By 411, when the threat of any effective Roman counteraction had disappeared, the groups went their own separate ways once more, dividing up the Spanish provinces between them. As we saw, the Hasdings and Suevi shared Gallaecia, the Alans took Lusitania and Carthaginensis, and the Siling Vandals Baetica. The fact that they took two provinces indicates that the Alans were, at this point, the dominant force in the coalition.

Between 416 and 418, the Silings in Baetica (part of modern Andalucia) were destroyed as an independent force, their king Fredibald ending up at Ravenna; and the Alans suffered such heavy casualties, Hydatius reports, that: ‘After the death of their king, Addax, the few survivors, with no thought for their own kingdom, placed themselves under the protection of Gunderic, the king of the [Hasding] Vandals.’ These counterattacks not only returned three Hispanic provinces – Lusitania, Carthaginensis and Baetica – to central Roman control, but also reversed the balance of power within the Vandal-Alan-Suevi coalition. The previously dominant Alans suffered severely enough to be demoted to junior partners, and for three of the four groups a much tighter political relationship came into force. Hasding Vandals, surviving Siling Vandals and Alans were all now operating under the umbrella of the Hasding monarchy.

In the face of both the greater danger and the greater opportunity that being on Roman territory brought with it, much in the manner of Alaric’s Gothic supergroup, by 418 the loose alliance of 406 had evolved into full political union.

A second barbarian supergroup had been born. These people knew perfectly well that, when a new supremo eventually emerged at court, they would be public enemy number one. They were in Spain by force, and had never negotiated any treaty with the central imperial authorities. So, while presumably making the most of the extended interregnum, they also knew that they needed to be making longer-term plans for their future. In 428, on Gunderic’s death, leadership of the Vandals and Alans passed to his half-brother Geiseric, who now had his sights set on Africa. The move was a logical solution to the Vandals’ and Alans’ problems. What they needed was a strategically safe area; in particular, somewhere as far away as possible from any more Roman-Goth campaigns. Africa fitted the bill perfectly – it was only a short hop from southern Spain, and much safer.

It is normally reckoned that, for a successful landing, a seaborne force needs five or six times more troops than the land-based defenders. The explanation for Geiseric’s chosen route is twofold. First, on simple logistic grounds, it is nigh inconceivable that he could have got together enough shipping to move his followers en masse across the sea. Roman ships were not that large. We know, for example, that in a later invasion of North Africa an east Roman expeditionary force averaged about 70 men (plus horses and supplies) per ship.

If Geiseric’s total strength was anywhere near 80,000, he would have needed over 1,000 ships to transport his people in one lift. But in the 460s the whole of the western Empire could raise no more than 300, and it took the combined resources of both Empires to assemble 1,000. In 429, Geiseric had nothing like this catchment area at his disposal, controlling only the coastal province of Baetica. It is overwhelmingly likely, therefore, that he would not have had enough ships to move all his followers in one go. To move a hostile force piecemeal into the heart of defended Roman North Africa would have been suicidal, offering the Romans the first contingent on a plate, while the ships went back for the second. So rather than trying to move his force a long distance by sea, Geiseric simply made the shortest hop across the Mediterranean, from modern-day Tarifa across the Straits of Gibraltar to Tangier (map 10): a distance of only 39 miles (62 km) – even a Roman ship could normally make it there and back again inside 24 hours. For the next month or so, from May 429 onwards, the Straits of Gibraltar must have seen a motley assortment of vessels shunting Vandal-Alans across the Mediterranean.
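A rough check, using only the figures quoted here (a total strength of around 80,000 people and roughly 70 passengers per ship), shows why a single lift was out of the question:

80,000 people ÷ 70 per ship ≈ 1,140 ships – nearly four times the 300 or so vessels the whole western Empire could muster in the 460s.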

The itinerary is confirmed by the chronology of the subsequent campaign. It was not until June 430, a good 12 months later, that the Vandals and Alans finally appeared outside the walls of Augustine’s town Hippo Regius, about 2,000 kilometers from Tangier, having travelled there by the main Roman roads.

Once disembarked, the coalition headed slowly east.

Finding a province which was at peace and enjoying quiet, the whole land beautiful and flowering on all sides, they set to work on it with their wicked forces, laying it waste by devastation and bringing everything to ruin with fire and murders. They did not even spare the fruit-bearing orchards, in case people who had hidden in the caves of the mountains . . . would be able to eat the foods produced by them after they had passed. So it was that no place remained safe from being contaminated by them, as they raged with great cruelty, unchanging and relentless.

Finally, on the borders of Numidia, the advancing horde was met by Boniface and his army. Boniface was defeated, and retreated to the city of Hippo Regius, where in June 430 a siege began that would last for 14 months. While Geiseric’s main army got on with the business of besieging, some of his outlying troops, lacking credible opposition, spread out across the landscape. Leaving devastation in their wake, looting the houses of the rich and torturing the odd Catholic bishop, they moved further east towards Carthage and the surrounding province of Proconsularis.

Boniface’s failure to hold the line was the result of the same financial stringencies that had hampered Constantius’ reconstruction of Empire everywhere outside Italy. In the fourth century there had been no field army in North Africa, only garrison troops. Of the 31 regiments now at Boniface’s disposal, only four – maybe 2,000 men – were top-grade imperial field army units.

Boniface did what he could, but the Vandal-Alan coalition was much more awesome than the Berber nomads that most of his troops had been trained to deal with. The key North African provinces were now under direct threat, and the future of the western Empire lay in the balance.

Numidia and its two eastern neighbors, Proconsularis and Byzacena, clustered around their administrative capital Carthage, were a different matter. These provinces played such a critical role in the Empire’s political economy that it is no exaggeration to state that, once the siege of Hippo had begun, Geiseric’s forces were looming directly over the jugular vein of the western Empire.

By the fourth century, Carthage was the port from which North African grain tributes flooded into Ostia, to be offloaded on to carts and smaller boats for the shorter trip inland and upstream to Rome. Carthage and its agricultural hinterland were responsible for feeding the bloated capital of Empire. But keeping the capital fed was no more than a specific application of a much more general point. By the fourth century AD, North Africa had become the economic powerhouse of the Roman west.

Where average annual rainfall is 16 inches (400 millimeters) or more, wheat can be grown. The broad river valleys of Tunisia and the great northern plains of Algeria, together with parts of Morocco in the west, fall into this category. Where precipitation is between 8 and 16 inches (200–400 millimeters), irrigation is required, but Mediterranean dry farming can still be practiced. Where rainfall is between 4 and 8 inches (100–200 millimeters), olive trees will still grow – olives requiring less water, even, than palm trees.

Roman authorities grasped the potential of the well-watered coastal lands to provide grain for Rome. Caesar’s expanded province of Africa was already shipping to the capital 50,000 tons of grain a year. One hundred years later, after the expansion of direct rule, the figure was 500,000 tons, and North Africa had replaced Egypt as the city’s granary, supplying two-thirds of her needs. A substantial process of development was required to guarantee and facilitate this flow of grain.

The governmental capacity of the Roman state was at all periods limited by its primitive bureaucratic technologies. It tended to contract out, recruiting private parties to fulfil vital functions on its behalf. The African grain tax was a classic case in point. Rather than finding and monitoring the thousands of laborers that would be required to operate the huge public estates that had come into its hands in North Africa, it leased land out to private individuals in return for a portion of the produce.

By the fourth century, olive groves could be found 150 kilometers inland from the coast of Tripolitania, where there are none today.

It wasn’t just settlers from Italy who flourished. The kinds of irrigation regimes being used in the late Roman period in North Africa were actually ancient and indigenous: everything from terraced hillsides – to catch water and prevent soil erosion – to cisterns, wells and dams, to full-blown and carefully negotiated water-sharing schemes such as that commemorated on an inscription from Lamasba (Ain Merwana). These traditional means of conserving water were simply being applied more vigorously.

Nor were nomads excluded from the action: not only did they provide crucial extra labor at harvest time, working the farms in travelling gangs, but their goods attracted preferential tax treatment.

Fourth-century Carthage, then, was a cultural and, above all, an economic pillar of the western Empire. Huge and bustling, it was a city where the cramped houses of tens of thousands of ordinary citizens offered a sharp contrast to the lofty public buildings and the mansions of the rich. Above all: high on productivity and low on maintenance, North Africa was a massive net contributor to western imperial coffers.

The revenue surplus from North Africa was essential for balancing the imperial books. Without it, the west could never have afforded armed forces large enough to defend its other, more exposed territories. Not only in Africa, but everywhere in the Roman west, predatory immigrants had been left to pursue their own agendas largely unhindered since the death of Constantius in 421. Along the Rhine frontier Franks, Burgundians and Alamanni, particularly the Iuthungi in the Alpine foothills to the south, had conducted raids over the frontier and were threatening further trouble. In southern France, the Visigoths had revolted and were making menacing noises in the direction of the main administrative capital of the region, Arles. In Spain, the Suevi were loose in the north-west and rampaging throughout the peninsula. With the arrival of the Vandal Geiseric on the fringes of Numidia in the year 430, the sword of Damocles was hanging over the entire western Empire.

Into the breach stepped the last great Roman hero of the fifth-century west, Flavius Aetius.  When Aetius finally took control of the western Empire in 433, the consequences of nearly ten years of paralysis at the center could be seen right across its territories. Each of the unsubdued immigrant groups within the frontiers of the west had taken the opportunity to improve its position, as had outsiders beyond. Also, as had happened in the aftermath of the Rhine crossing, the trouble generated by immigrants triggered the usurpation of imperial power by locals. In northern Gaul, in particular Brittany, disruption had been caused by so-called Bagaudae. Zosimus mentions other groups labelled Bagaudae in the foothills of the western Alps in 407/8, and Hydatius tells us in his Chronicle that they had appeared in Spain by the early 440s. Who these people were has long been a hot topic among historians. The term originated in the third century, when they were characterized as ‘country folk and bandits’. For historians of a Marxist inclination it has been impossible not to view them as social revolutionaries who from time to time generated a groundswell of protest against the inequalities of the Roman world, and who appeared whenever central control faltered. Certainly, Bagaudae do consistently make an appearance where central control was disrupted by the hostile activities of barbarians, but the glimpses we get of their social composition don’t always suggest revolutionaries. The smart money is on the term having become a catch-all for the perpetrators of any kind of dissident activity. Sometimes those labelled Bagaudae were bandits. Those of the Alps in 407/8, for instance, demanded money with menaces from a Roman general on the run. But self-help groups seeking to preserve the social order in their own localities when the long arm of the state no longer reached there, also seem to have been referred to as ‘Bagaudae’. In the 410s Armorica had already asserted independence in an attempt to quell disorder; later, something similar was happening in Spain.

Either way, Bagaudae plus barbarians spelt trouble. By the summer of 432, the threat was widespread and imminent: in north-west Gaul there were the Bagaudae; in south-west Gaul, Visigoths; on the Rhine frontier and in the Alpine foothills, Franks, Burgundians and Alamanni; in north-west Spain, Suevi; and in North Africa, the Vandals and Alans. In fact, much of Spain had not seen proper central control since the 410s. Given, too, that Britain had already dropped out of the western orbit, the only places in decent shape from an imperial viewpoint were Italy, Sicily and south-east Gaul.

Aetius’ achievement during the 430s was prodigious. Franks and Alamanni had been pushed back into their cantons beyond the Rhine, the Burgundians and Bagaudae had been thoroughly subdued, the Visigoths’ pretensions had been reined in, and much of Spain returned to imperial control. Not for nothing did Constantinopolitan opinion consider Aetius the last true Roman of the west.

Just as Merobaudes was putting the last full stop to his latest opus in Aetius’ praise, and Aetius was contemplating sending his trusty breastplate to the cleaners, a new storm burst on the horizon. In October 439, after four and a half years of peace, Geiseric’s forces broke out of their Mauretanian reservation and came thundering into the richer provinces of North Africa.

The logistic limitations characteristic of the Roman Empire ruled out all thoughts of an instant counterstrike, and for now the advantage lay with Geiseric. A series of laws issued in the name of Valentinian III in spring 440 testify to the impending sense of crisis. On 3 March, special license was granted to eastern traders in order to guarantee food supplies for the city of Rome: the cutting off of the African bread dole to the capital was not the least of Aetius’ worries. The same law also put in place measures to rectify holes in Rome’s defenses, and to ensure that everyone knew what their duty was with regard to garrisoning the city. On 20 March, another law summoned recruits to the colors, at the same time threatening anyone who harbored deserters with the direst of punishments. A third law, of 24 June, authorized people to carry arms again ‘because it is not sufficiently certain, under summertime opportunities for navigation, to what shore the ships of the enemy can come.’

Late in 440, after the onset of bad weather had forced the Vandals back to Carthage, a joint imperial army began to assemble in Sicily: 1,100 ships to carry men, horses and supplies. Aetius’ ‘large force’ crossed to the island, and was joined there by a substantial expeditionary force from the east. No source puts a figure to the Roman forces gathered there, but the shipping was enough to carry several tens of thousands of men.

Why did the joint expeditionary force never sail?

A new threat, well beyond anything the Vandals might pose, had arisen, and Aetius was needed back in harness to save the Roman world yet again.  It was this threat that compelled the troops gathered in Sicily to return to their bases, thus leaving Carthage in the hands of the Vandals. And the western Empire would have to cope as best it could with the consequences of Geiseric’s success.

Thus, in 442 a second treaty was made with the Vandals, this one licensing Geiseric’s control of Proconsularis and Byzacena, together, it seems, with part of Numidia.

In return for peace, now that he had got what he wanted, Geiseric was willing to be generous. A grain tribute of some kind, although presumably rather diminished, continued to arrive in Rome from the Vandal provinces, and his eldest son Huneric was sent to the imperial court as a hostage. In a massive break with tradition the ‘hostage’ Huneric was betrothed to Eudocia, the daughter of the emperor Valentinian III.  For the first time, a legitimate marriage was being contemplated between barbarian royalty and the imperial family. The continuation of food supplies to the city of Rome probably seemed worth the humiliation.

Just like Jovian’s surrender of provinces and cities to the Persians in 363, so the loss of Carthage to the Vandals and Alans in 442 was presented as a Roman victory, and for the same reasons. A God-protected Empire simply could not admit to defeat: the image of control had to be maintained, come what may.

None of this meant, of course, that the consequences of the new peace treaty weren’t disastrous. In Africa, Geiseric proceeded with the kind of pay-out that his followers were expecting, and that was essential to his own political survival. To provide the necessary wherewithal, he confiscated senatorial estates in Proconsularis such as those belonging to Symmachus’ descendants, and reallocated them to his followers.  The same old peasantry continued to farm the same old bits of land. The difference was that the rent was now paid to new landowners.

The loss of its best North African provinces, combined with a massive seven-eighths reduction in revenue from the rest, was a fiscal disaster for the west Roman state. A series of regulations from the 440s show unmistakable signs of the financial difficulties that now followed. In 440 and 441, initial efforts had been made to maximize revenues from its surviving sources of cash. A law of 24 January 440 withdrew all existing special imperial grants of tax exemption or reduction. In similar vein, a law of 4 June that year attempted to cut back on the practice of imperial officials – palatines – taking an extra percentage for themselves when out collecting taxes. On 14 March 441, the screw was tightened further: lands that had been rented annually from the imperial fisc, with tax privileges attached, were now to be assessed at the normal rate, as was all Church land. In addition, the law cast its glance towards a whole range of smaller burdens from which the lands of higher dignitaries had previously been immune: ‘the building and repair of military roads, the manufacture of arms, the restoration of walls, the provision of the annona, and the rest of the public works through which we achieve the splendour of public defence’. Now, for the first time, no one was to be exempt.

Roman historians tend to consider that the late Empire spent about two-thirds of its revenues on the army, and this figure can’t be far wrong. The army was bound to be the main loser, therefore, when imperial revenues declined drastically. There were no other major areas of spending to cut. And the piecemeal measures of 440–1 were insufficient to compensate for the overall loss in African revenue.

The total tax lost from the remaining African provinces of Numidia and Mauretania in 445, because of the new remissions, amounted to 106,200 solidi per annum. A regular infantryman cost approximately six solidi per annum, and a cavalry trooper 10.5. This means that the reduced tax from Numidia and Mauretania alone implied a reduction in army size of about 18,000 infantrymen, or about 10,000 cavalry. This, of course, takes no account of the complete loss of revenue from the much richer provinces of Proconsularis and Byzacena, so that the total of lost revenues from all of North Africa must have implied a decline in military numbers of getting on for 40,000 infantry, or in excess of 20,000 cavalry. And these losses, of course, came on top of the earlier ones dating from the post-405 period.
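Taking the figures quoted above at face value, the arithmetic behind these estimates is straightforward:

106,200 solidi ÷ 6 solidi per infantryman ≈ 17,700 infantry (the ‘about 18,000’ above)
106,200 solidi ÷ 10.5 solidi per cavalry trooper ≈ 10,100 cavalry (the ‘about 10,000’ above)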

Only a massive new threat, therefore, could have made Aetius call off the joint east-west expedition and accept these disastrous consequences. Where had this threat come from?

Arrow-firing hordes from Scythia? In the middle of the fifth century, that could mean only one thing: Huns. And the Huns were, indeed, the new problem, the reason why the North African expedition never set sail from Sicily. Just as it was making final preparations to depart, the Huns launched an attack over the River Danube into the territory of the east Roman Balkans. Constantinople’s contingent for Carthage, all taken from the Danube front, had to be recalled immediately, pulling the plug on any attempt to destroy Geiseric. Yet all through the 420s and 430s, as we have seen, the Huns had been a key ally, keeping Aetius in power and enabling him to crush the Burgundians and curb the Visigoths. Behind this change in attitude lay another central character in the story of Rome’s destruction. It’s time to meet Attila the Hun.

From 441 to 453, the history of Europe was dominated by military campaigns on an unprecedented scale, the work of Attila, ‘scourge of God’. Historians’ opinions about him have ranged from one end of the spectrum to the other. After Gibbon, he tended to be viewed as a military and diplomatic genius. Edward Thompson, writing in the 1940s, sought to set the record straight by portraying him as a bungler. To Christian contemporaries, Attila’s armies seemed like a whip wielded by the Almighty. His pagan forces ranged across Europe, sweeping those of God-chosen Roman emperors before them. Roman imperial ideology was good at explaining victory, but not so good at explaining defeat, especially at the hands of non-Christians. Why was God allowing the unbelievers to destroy His people? In the 440s, Attila the Hun, spreading devastation from Constantinople to the gates of Paris, prompted this question as it had never been prompted before. As one contemporary put it, ‘Attila ground almost the whole of Europe into dust.’

In their first campaign against the east Roman Empire, Attila and Bleda had shown that they had the military capacity to take fully defended front-rank Roman fortresses. They may have gained Margus by stratagem; but Viminacium and Naissus were both large and well-fortified, and yet they had been able to force their way in. This represented a huge change in the balance of military power between the Roman and non-Roman worlds in the European theatre of war. As we have seen, the last serious attack on the Balkans had been by Goths between 376 and 382; and then, although they had been able to take smaller fortified posts or force their evacuation, large walled cities had been beyond them. Consequently, even though at times hard-pressed for food, the cities of the Roman Balkans had survived the war more or less intact. The same was true of western Germania. When Roman forces were distracted by civil wars, Rhine frontier groups had on occasion overrun large tracts of imperial territory: witness the Alamanni in the aftermath of the civil war between Magnentius and Constantius II in the early 350s. All they had done then, though, was occupy the outskirts of the cities and destroy small watch-towers. They did not attempt to take on the major fortified centers such as Cologne, Strasbourg, Speyer, Worms or Mainz, all of which survived more or less intact. Now, the Huns were able to mount successful sieges of such strongholds.

By the 440s the Huns had certainly been in the employ of Aetius, and quite possibly of Constantius before him, so close observation of the Roman army could easily have been the source of their siege knowledge – in other eras, too, Roman techniques and weaponry had quickly been adopted by non-Romans.

As recently as 439, Hunnic auxiliaries had been part of the western Roman force that had besieged the Goths in Toulouse, and would have seen a siege at first hand.

Just as important for successful siege warfare was the availability of manpower. Men were needed to make and man machines, to dig trenches and to make the final assault. As we shall see later in this chapter, even if the designs for siege machinery did come from old knowledge, it was only recently that manpower on such a scale was available to the Huns.

Whatever its origins, the barbarians’ capacity to take key fortified centers was a huge strategic shock for the Roman Empire. Impregnable fortified cities were central to the Empire’s control of its territories. But, serious as the capture of Viminacium and Naissus was, what mattered most at this moment was that the Huns had picked their first fight with Constantinople at exactly the point when the joint east-west expedition force was gathering in Sicily to attempt to wrest Carthage from the Vandals. As we noted earlier, much of the eastern component for this expedition had been drawn from the field armies of the Balkans, and of this, no doubt, the Huns were well aware. Information passed too freely across the Roman frontier for it to be possible to hide the withdrawal of large numbers of troops from their normal stations. I suspect that by raising the annual tribute so readily at the start of the reign of Attila and Bleda to 700 pounds of gold, the authorities in Constantinople were trying to buy a big enough breathing space for the African expedition to be launched. If so, they spectacularly failed. Instead of being bought off, the Huns decided to exploit the Romans’ temporary weakness farther, and so, with havoc in mind, hurled their armies across the Danube. The authorities in Constantinople thus had no choice but to withdraw their troops from Sicily; and after the unprecedented loss of three major bases – Viminacium, Margus and Naissus (although the latter had probably not yet fallen when the orders were given) – it’s hard to blame them. The Hunnic army was now poised astride the great military road through the Balkans and pointing straight at Constantinople.

Many aspects of the reign of Attila are less certain. Illiterate when they first hit the fringes of Europe in the 370s, the Huns remained so 70 years later, and there are no Hunnic accounts of even the greatest of their leaders. Our Roman sources, as always, are much more concerned with the political and military impact of alien groups upon the Empire than with chronicling their deeds, so there are always points of huge interest, particularly in the internal history of such groups, that receive little or no coverage.

On 27 January 447, during the second hour after midnight, an earthquake had struck Constantinople. The whole district around the Golden Gate was in ruins, and, even worse, part of the city’s great landwalls had collapsed. Attila was on the point of invading anyway, but news of the earthquake may have altered his line of attack. By the time he got there, the crisis was over. The Praetorian Prefect of the east, Constantinus, had mobilized the circus factions to clear the moats of rubble and rebuild gates and towers. By the end of March the damage was repaired and, as a commemorative inscription put it, ‘even Athene could not have built it quicker and better’. Long before Attila’s forces got anywhere near the city the opportunity to take it had gone, and the Huns’ advance led not to a siege but to the second major confrontation of the year. Although the Thracian field army had been defeated and scattered, the east Romans still had central forces stationed around the capital on either side of the Bosporus. This second army was mobilized in the Chersonesus, where a second major battle, and a second huge defeat for the Romans, duly followed.

Attila had failed to force his way into Constantinople, but having reached the coast of both the Black Sea and the Dardanelles, at Sestus and Callipolis (modern-day Gallipoli) respectively, he had mastery of the Balkans in all other respects. And he proceeded to wield his domination to dire effect for the Roman provincial communities. In the aftermath of victory the Hunnic forces split up, raiding as far south as the pass of Thermopylae, site of Leonidas’ famous defense of Greece against the Persians nearly a thousand years before.

The dig at Nicopolis ad Istrum revealed that these houses, as well as the city center, terminated in a substantial destruction layer, which the end of a more or less continuous coin sequence dates firmly to the mid- to late 440s.

There is little doubt, therefore, that in the total destruction of the old city we are looking at the effects of its sack at the hands of Attila’s Huns in 447.

Roman urban development north of the Haemus Mountains, a phenomenon stretching back 300 years to the Romanization of the Balkans in the first and second centuries AD, was destroyed by the Huns, never to recover. This was evidently no cozy little sack, like that of Rome in 410 when the Goths were paid off, then went home. What we’re looking at in Nicopolis is large-scale destruction.

Whether it was like this everywhere the Huns descended is impossible to say. Of those places that managed to survive, the most famous is the town of Asemus, perched on an impregnable hilltop. Armed and organized, its citizens not only weathered Attila’s storm but emerged from the action with Hunnic prisoners. Their city would survive further storms in the centuries to come. But there can be no doubt that the campaigns of 447 were an unprecedented disaster for Roman life in the Balkans: two major field armies defeated, a host of defended strongholds captured and some destroyed. It’s hardly surprising, then, that in the aftermath of their second defeat in the Chersonesus the east Romans were forced to sue for peace. An extract from Priscus’ history gives us the terms: ‘[Any] fugitives should be handed over to the Huns, and 6,000 pounds of gold be paid to complete the outstanding instalments of tribute; the tribute henceforth be set at 2,100 pounds of gold per year; for each Roman prisoner of war amongst the Huns who escaped and reached his home territory without ransom, twelve solidi [one-sixth of a pound of gold] were to be paid . . . and . . . the Romans were to receive no barbarian who fled to them.’ As Priscus went on to comment wryly: ‘The Romans pretended that they had made these agreements voluntarily, but because of the overwhelming fear which gripped their commanders they were compelled to accept gladly every injunction, however harsh, in their eagerness for peace.’

Attila’s habits and self-presentation were not what you might expect. Priscus reports on dining with him that ‘for the other barbarians and for us there were lavishly prepared dishes served on silver platters, for Attila there was only meat on a wooden plate . . . Gold and silver goblets were handed to the men at the feast, whereas his cup was of wood. His clothing was plain and differed not at all from that of the rest, except that it was clean. Neither the sword that hung at his side nor the fastenings of his barbarian boots nor his horse’s bridle was adorned, like those of the other Scythians, with gold or precious stones.’ For the god-appointed conqueror, plain was good.

Good relations also demanded the regular sharing of the booty of war. None of this takes us far inside Attila’s head, but it gives us some insight into his recipe for success: total self-confidence and the charisma that often flows from this; ruthlessness when called for, but also a capacity for moderation, married to shrewdness; and a respect for his subordinates, whose loyalty was so vital.

Set against what we know about nomad anthropology, political centralization – the first of the two transformations that concern us here – must also have been associated with a broader transformation among the Huns. Devolved power structures occur very naturally among nomadic groups, because their herds cannot be concentrated in large groupings, for fear of overgrazing. In the nomad world, the main purpose of any larger political structure is simply to provide a temporary forum where grazing rights can be negotiated, and a force put together, if necessary, to protect those rights against outsiders. This being the case, the permanent centralization of political power among the Huns strongly implies that they were no longer so economically dependent upon the produce of their flocks.

Nomads always need to form economic relationships with settled agricultural producers. This was clearly the case with the Huns, and commercial exchanges were still taking place in the 440s. But by the time of Attila, the main form of exchange between Hunnic nomad and Roman agriculturalist was not grain in return for animal products, but cash in return for military aid of one kind or another. This form of exchange had its origins in previous generations, when Huns had performed mercenary service for the Roman state. Uldin and his followers were the first we know of to have fulfilled this role, in the early 400s, and larger Hunnic forces may have aided Constantius in the 410s, and certainly supported Aetius in the 420s and 430s. Shortly after, military service for pay evolved into demands for money with menaces. Precisely when the line was crossed is impossible to say, but Attila’s uncle Rua certainly launched one major assault on the east Roman Empire with cash in mind, even if he also provided mercenary service for the west. By the reign of Attila, targeted foreign aid had become tribute, and it clearly emerges from Priscus’ record of Romano-Hunnic diplomacy that the main thing the Huns wanted from these exchanges, and from their periodic assaults across the frontier, was cash and yet more cash. As we saw earlier, the first treaty between Attila and Bleda and the east Romans fixed the size of this annual tribute at seven hundred pounds of gold – and from there the demands could only escalate. Hunnic warfare against the Romans also brought other one-sided economic exchanges in its wake: booty, slaves and ransoms such as the one Priscus and Maximinus negotiated. By the 440s, then, military predation upon the Roman Empire had become the source of an ever-expanding flow of funds into the Hunnic world.

Cornering the market in the flow of funds from the Empire was the ideal means of putting sufficient powers of patronage into the hands of just one man, and rendering the old political structures redundant. Only by controlling the flow of new funds could one king outbid the others in the struggle for support. Already in the mid- to late-fourth century, Huns had presumably been raiding and intimidating both other nomads and Germanic agriculturalists north of the Black Sea, but real centralization only became possible once the main body of the Huns was operating close to the Roman world. Raid and intimidate the Goths and you might get some slaves, a bit of silver and some agricultural produce, but that was about it – not enough to fund full-scale political revolution. But do the same vis-à-vis the Roman Empire, and the gold would begin to roll in, first in hundreds of pounds annually, then thousands – enough to transform both economic and political systems.

We could understand these transformations as an adaptation away from nomadism, rather than a complete break with the past. As mentioned earlier, in normal circumstances nomads rear a range of animals to make full use of the varying qualities of available grazing. The horse figures primarily as an expensive, almost luxury animal, used for raiding, war, transport and trade; its meat and milk provide only a very inefficient return in terms of usable protein compared with the quality and quantity of grazing required. As a result, nomads generally keep relatively few horses. If, however, warfare becomes a financially attractive proposition, as it did when the Huns came within range of the Roman Empire, then nomads might well start to breed increasing numbers of horses for war – evolving, in the process, into a particular type of militarily predatory nomadic group. This could never have worked as a subsistence strategy out on the steppe, where the potential proceeds from warfare were so much less.

It is impossible to prove that this is what happened, but one relevant factor is the size of the fifth-century Hunnic homeland, the Hungarian Plain: while providing good-quality grazing, it was much smaller than the plains of the Great Eurasian Steppe the Huns had left behind. Its 42,400 square kilometers amount to less than 4% of the grazing available, for instance, in the republic of Mongolia alone. And because the grazing was now so limited, some historians have wondered whether the Huns were evolving towards a fully sedentary existence in the fifth century. This is a possible argument, but not a necessary one. The Hungarian Plain notionally provides grazing for 320,000 horses, but this figure must be reduced so as to accommodate other animals, forest and so on; so it would be reasonable to suppose that it could support, maybe, 150,000. Given that each nomad warrior requires a string of ten horses to be able to rotate and not overtire them, the Hungarian Plain would thus provide sufficient space to support horses for up to 15,000 warriors. I would doubt that there were ever more Huns than this in total, so that, as late as the reign of Attila, there is in fact no firm indication that the Huns did not retain part of their nomad character. Whatever the case, the real point is that, once they found themselves within hailing distance of the Roman Empire, the Huns perceived a new and better way to make a living, based on military predation upon the relatively rich economy of the Mediterranean world.
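The order-of-magnitude sums here, using only the figures given in the text, run as follows:

notional grazing for 320,000 horses, reduced to perhaps 150,000 once other livestock and forest are allowed for
150,000 horses ÷ 10 horses per warrior ≈ 15,000 mounted warriors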

Why had Germanic languages come to play a prominent role in the Hunnic Empire? The explanation lies in the broader evolution of Attila’s Empire. As far back as the 370s when they were attacking Goths beyond the Black Sea, Huns were forcing others they had already subdued to fight alongside them. When they first attacked the Greuthungi, starting the avalanche that ended at the battle of Hadrianople, they were operating in alliance with Iranian-speaking Alan nomads. And whenever we encounter them subsequently, we find that Hunnic forces always fought alongside non-Hunnic allies. Although Uldin was not a conqueror on the scale of Attila, once the east Romans had dismantled his following, most of the force they were left with to resettle turned out to be Germanic-speaking Sciri. Likewise, in the early 420s, east Roman forces intervening to curb Hunnic power west of the Carpathian Mountains found themselves left with a large number of Germanic Goths.

By the 440s, an unprecedented number of Germanic groups found themselves within the orbit defined by the formidable power of Attila the Hun. For example, his Empire contained at least three separate clusters of Goths.

We can’t put figures on this vast body of Germanic-speaking humanity, but the Amal-led Goths alone could muster 10,000-plus fighting men, and hence had maybe a total population of 50,000. And there is no reason to suppose that the other groups were much, if at all, smaller. Many tens of thousands, therefore, and probably several hundreds of thousands, of Germanic-speakers were caught up in the Hunnic Empire by the time of Attila. In fact, by the 440s there were probably many more Germanic-speakers than Huns, which explains why ‘Gothic’ should have become the Empire’s lingua franca. Nor do these Germani exhaust the list of Attila’s non-Hunnic subjects. Iranian-speaking Alanic and Sarmatian groups, as we saw earlier, had long been in alliance with the Huns, and Attila continued to grasp at opportunities to acquire new allies. 

The Hunnic Empire was all about incorporating people, not territory: hence Attila’s virtual lack of interest in annexing substantial chunks of the Roman Empire. It is clear that his armies, like those of his less powerful predecessors, were always composites, consisting both of Huns and of contingents from the numerous other peoples incorporated into his Empire.

Since 1945 a mass of material has been unearthed from cemetery excavations on the Great Hungarian Plain and its environs, dating to the period of Hunnic domination there. In this material, ‘proper’ Huns have proved extremely hard to find. In total – and this includes the Volga Steppe north of the Black Sea as well as the Hungarian Plain – archaeologists have identified no more than 200 burials as plausibly Hunnic.  The kinds of items found in the graves, the ways in which people were buried and, perhaps above all, the way women, in particular, wore their clothes – gathered with a safety-pin, or fibula, on each shoulder, with another closing the outer garment in front – all reflect the patterns observable in definitely Germanic remains of the fourth century. One possible answer to the question of the lack of Hunnic burials, then, is that, quite simply, they started to dress like their Germanic subject peoples, in just the same way that they learned the Gothic language. If so, it would be impossible to tell Hun from Goth. But even if our ‘real Huns’ are lying there in disguise, as it were, this doesn’t alter the fact that there were an awful lot of Germani buried in and around the Great Hungarian Plain in the Hunnic period.

Every time a new barbarian group was added to Attila’s Empire, that group’s manpower was mobilized for Hunnic campaigns. Hence the Huns’ military machine grew, and grew very quickly, by incorporating ever larger numbers of the Germani of central and eastern Europe. In the short term, this benefited the embattled Roman west. The reason, as many historians have remarked, that the rush of Germanic immigration into the Roman Empire ceased after the crisis of 405–8 was that those who had not crossed the frontier by about 410 found themselves incorporated instead into the Empire of the Huns; and there is an inverse relationship between the pace of migration into the Roman Empire and the rise of Hunnic power.

For the first time in imperial Roman history, the Huns managed to unite a large number of Rome’s European neighbors into something approaching a rival imperial superpower.

The full ferocity of this extraordinary new war machine was felt in the first instance by the east Roman Empire, whose Balkan communities suffered heavily in 441/2 and again in 447. After the two defeats of the 447 campaign, the east Romans had nothing left to throw in Attila’s direction. Hence, in 449, their resorting to the assassination attempt in which Maximinus and Priscus found themselves unwittingly embroiled. Still Attila didn’t let Constantinople off the hook. Having refused to settle the matter of the fugitives and repeated his demands for the establishment of a cordon sanitaire inside the Danube frontier, he now added another: that the east Romans should provide a nobly born wife (with an appropriate dowry) for his Roman-born secretary. These demands, if unsatisfied, were possible pretexts for war, and his constant agitating shows that Attila was still actively considering another major assault on the Balkans.

What quickly emerged, however, was that Attila had settled with Constantinople not because – as the stereotypical barbarian – he had been blown away by the wisdom of his east Roman interlocutors, but because he wanted a secure eastern front, having decided on a massive invasion of the Roman west.

Having followed the Upper Danube northwestwards out of the Great Hungarian Plain, the horde crossed the Rhine in the region of Coblenz and continued west. The city of Metz fell on 7 April, shortly followed by the old imperial capital of Trier. The army then thrust into the heart of Roman Gaul. By June, it was outside the city of Orleans, where a considerable force of Alans in Roman service had their headquarters. The city was placed under heavy siege; there are hints that Attila was hoping to lure Sangibanus, king of some of the Alans based in the city, over to his side. At the same time elements of the army had also reached the gates of Paris, where they were driven back by the miraculous intervention of the city’s patron Saint Genevieve. It looks as if the Hunnic army was swarming far and wide over Roman Gaul, looting and ransacking as it went.

Aetius was still generalissimo of the west, and he had been anticipating the possibility of a Hunnic assault on the west from at least 443. When it finally materialized, nearly a decade later, he sprang into action. Faced with this enormous threat, he strove to put together a coalition of forces that would stand some chance of success. Early summer 451 saw him advancing north through Gaul with contingents of the Roman armies of Italy and Gaul, plus forces from many allied groups, such as the Burgundians and the Aquitainian Visigoths under their king Theoderic. On 14 June, the approach of this motley force compelled Attila’s withdrawal from Orleans.

A stalemate followed, with the two armies facing each other, until the Huns began slowly to retreat. Aetius didn’t press them too hard, and disbanded his coalition of forces as quickly as possible – a task made much easier by the fact that the Visigoths were keen to return to Toulouse to sort out the succession to their king Theoderic, who had been killed in the fighting. Attila consented to his army’s continued withdrawal and, tails between their legs, the Huns returned to Hungary. Although the cost to the Roman communities in the Huns’ line of march was enormous, Attila’s first assault on the west had been repulsed. Yet again, Aetius had delivered at the moment of crisis. Despite the limited resources available, he had put together a coalition that had saved Gaul.

In the spring of 452, his force broke through the Alpine passes. The stork which, in the famous omen, had abandoned Aquileia with its young just before the final assault was, of course (not to mention Attila), right. The Huns’ precocious skill at taking fortified strongholds prevailed, and Aquileia fell to them in short order. Its capture opened up the main route into north-eastern Italy. The horde then followed the ancient Roman roads west across the Po Plain. One of the political heartlands of the western Empire and agriculturally rich, this region was endowed with many prosperous cities. Now, as in the Balkans, one after the other these cities fell to the Huns, and they took in swift succession Padua, Mantua, Vicentia, Verona, Brescia and Bergamo. Attila was at the gates of Milan, a long-time imperial capital. The siege was protracted, but again Attila triumphed, and another center of Empire was looted and sacked.

But, as in Gaul the previous year, Attila’s Italian campaign failed to go entirely to plan. It was essentially a series of sieges, and lacked substantial logistic support. In its often cramped conditions, the Hunnic army was vulnerable in more ways than one, succumbing to famine and disease.

By the time Milan was captured, disease was taking a heavy toll, and food running dangerously short. Also, Constantinople now had a new ruler, the emperor Marcian, and his forces, together with what Aetius could put together, were far from idle: ‘In addition, the Huns were slaughtered by auxiliaries sent by the Emperor Marcian and led by Aetius, and at the same time they were crushed in their settlements by both heaven-sent disasters and the army of Marcian.’ It looks as though, while the Hunnic army in Italy was being harassed by Aetius leading a joint east-west force, other eastern forces were launching a raid north of the Danube, into Attila’s heartland. The combination was deadly, and, as in the previous year, the Hun had no choice but to retreat.

Nothing suggests that the Huns had any equivalent, therefore, of the Romans’ capacity for planning and putting in place the necessary logistic support, in terms of food and fodder, for major campaigns. No doubt, when the word went out to assemble for war, each warrior was expected to bring a certain amount of food along with him, but as the campaign dragged on, the Hunnic army was bound to be living mainly off the land. Hence, in campaigns over longer distances, the difficulties involved in maintaining the army as an effective fighting force increased exponentially. Fatigue as well as the likelihood of food shortages and disease increased with distance. There was also every chance that the army would spread so widely over an unfamiliar landscape in search of supplies that it would be difficult to concentrate for battle.

In 447, during the widest-reaching of the Balkan campaigns, for their first major battle Attila’s armies had marched west along the northern line of the Haemus Mountains, crossed them, then moved south towards Constantinople, then southwest to the Chersonesus for their second: a total distance of something like 500 kilometers. In 451, the army had to cover the distance from Hungary to Orleans, about 1,200 kilometers; and in 452 from Hungary to Milan, perhaps 800, but this time they were laying siege as they went, which made them yet more susceptible to disease. As many historians have commented, in campaigns covering such vast distances into the western Empire, Attila and his forces were almost bound to experience serious setbacks.

The Huns and Rome

The full effect upon the Roman world of the rise of the Hunnic Empire can be broken down into three phases. The first generated two great moments of crisis on the frontier for the Roman Empire, during 376–80 and 405–8, forcing it to accept upon its soil the establishment of enclaves of unsubdued barbarians. The existence of these enclaves in turn created new and hugely damaging centrifugal forces within the Empire’s body politic. In the second phase, in the generation before Attila, the Huns evolved from invaders into empire-builders in central Europe, and the flow of refugees into Roman territory ceased. The Huns wanted subjects to exploit, and strove to bring potential candidates under control. In this era, too, Constantius and Aetius were able to make use of Hunnic power to control the immigrant groups who had previously crossed the Empire’s frontier to escape from the Huns. Since none of these groups was actually destroyed, however, the palliative effects of phase two of the Hunnic impact upon the Roman world by no means outweighed the damage done in phase one. Attila’s massive military campaigns of the 440s and early 450s mark the third phase in Hunnic-Roman relations. Their effects, as one might expect, were far-reaching. The east Roman Empire’s Balkan provinces were devastated, with thousands killed as one stronghold after another was taken. As the remains of Nicopolis ad Istrum so graphically show, Roman administration might be restored but not so the Latin- and Greek-speaking landowning class that had grown up over the preceding four centuries. The Gallic campaign of 451, and particularly the assault upon Italy in 452, inflicted enormous damage upon those unfortunate enough to find themselves in the Huns’ path.

But if we step back from the immediate drama and consider the Roman state in broader terms, Attila’s campaigns, though serious, were not life-threatening. The eastern half of the Roman Empire depended on the tax it collected from a rich arc of provinces stretching from Asia Minor to Egypt, territories out of reach of the Huns. For all the latter’s siege technology, the triple landwalls surrounding Constantinople made the eastern capital impregnable; and the Huns had no navy to take them across the narrow straits that separated the Balkans from the rich provinces of Asia.

A similar situation prevailed in the west. By the time of Attila, it was already feeling a heavy financial strain, as we have seen, but given the logistic limitations of the Hunnic military machine, Attila came nowhere near to conquering it. In fact, far more serious damage was indirectly inflicted upon the structures of Empire by the influx of armed immigrants between 376 and 408. Moreover, it was again the indirect effects of the age of Attila that posed the real threat to the integrity of the west Roman state. Because he had to concentrate on dealing with Attila, Aetius had less time and fewer resources for tackling other threats to the Roman west in the 440s. And these other threats cost the western Empire much more dearly than the Hunnic invasions of 451 and 452. The first and most serious loss was the enforced abandonment of the reconquest of North Africa from the Vandals.

The picture was bleak. The western Empire had by 452 lost a substantial percentage of its provinces: the whole of Britain, most of Spain, the richest provinces of North Africa, those parts of south-western Gaul ceded to the Visigoths, plus south-eastern Gaul ceded to the Burgundians. Furthermore, much of the rest had also seen serious fighting in the last decade or so, and the revenues from these areas too would have been substantially reduced. The problem of diminishing funds had become overwhelming. The Huns’ indirect role in this process of attrition, in having originally pushed many of the armed immigrants across the frontier, did far more harm than any damage directly inflicted by Attila.

The fall of Attila’s empire is an extraordinary story in its own right. Up to about AD 350, the Huns had figured not at all in European history. During 350–410, the only Huns most Romans had encountered were a few raiding parties. Ten years later, Huns in significant numbers had established themselves west of the Carpathian Mountains on the Great Hungarian Plain, but they still functioned mostly as useful allies to the Roman state. In 441, when Attila and Bleda launched their first attack across the Roman frontier, the ally revealed his new colors. In 40 years, the Huns had risen from nowhere to European superpower. By anyone’s standards, this was spectacular. But the collapse of Attila’s Empire was more spectacular still. By 469, just 16 years after his death, the last of the Huns were seeking asylum inside the eastern Roman Empire. Their extinction would cause deep reverberations in the Roman west.

Key stages in the process of Hunnic collapse

Over the years, many explanations have been offered for this extraordinary phenomenon. Historians of earlier eras tended to argue that it was testament to the extraordinary personal capacities of Attila: the Empire could only exist with him at the helm. Edward Thompson, by contrast, rooted the Huns’ demise in the divisive social effects of all the wealth they acquired from the Roman Empire. There is something in both of these theories. Attila the Hun, as we have seen, was an extraordinary operator, and no doubt the gold extracted from Rome was not distributed entirely evenly among his people. But a full understanding of the Hunnic Empire must turn on its relations with its largely Germanic subjects. As already suggested, it was the ability to suck in so many of these militarized groups that underlay the sudden explosion of Hunnic power in the 420s–40s. After Attila’s death, likewise, it was his successors’ increasing inability to maintain control over those same groups that spelled their own decline. The key starting-point is that membership of the Hunnic Empire was not generally voluntary. All the evidence we have suggests that non-Hunnic groups became caught up in it through a combination of conquest and intimidation.

While Attila was capable of deft political maneuvering when the occasion demanded, the basic tool of Hunnic imperial expansion was military conquest. It was, of course, to avoid Hunnic domination that the Tervingi and Greuthungi had come to the Danube in the summer of 376 in the first place. And it was after a savage mauling at the hands of the Huns in the 430s that the Burgundians also ended up in the Roman Empire. All this is consistent with the fact that there was, as we have seen, one way, and one way only, of quitting Attila’s Empire: warfare.

The Romans reminded the Gothic contingent of exactly how the Huns generally behaved towards them: ‘These men have no concern for agriculture, but, like wolves, attack and steal the Goths’ food supplies, with the result that the latter remain in the position of slaves and themselves suffer food shortages.’ Taking the subject peoples’ supplies was, of course, only part of the story. They were also used, as we have seen, to fight the Huns’ wars. Few civilian prisoners are likely to have been very good at fighting, and casualty numbers during Hunnic campaigns were probably enormous.

Clearly, then, the Hunnic Empire was an inherently unstable political entity, riven with tensions between rulers and ruled. Tensions of a different kind also existed between the subject peoples themselves, who had a long history of mutual aggression even before the Huns appeared. This particular instability tends to receive little coverage from historians because most of our source material comes from a Roman, Priscus, and dates to the time when Attila’s power was unchallengeable. Cast the net wider, though, and the evidence rapidly gathers itself. The greatest strength of the Hunnic Empire – the ability to increase its power by quickly consuming subject peoples – was also its greatest weakness. The Romans, for instance, were happy to exploit, whenever they could, the fact that these subject peoples were not there of their own free will. In the 420s, the east Roman counteraction against the rising Hunnic power in Pannonia was to remove from their control a large number of Goths whom they then settled in Thrace.

Unlike the Roman Empire, which spent centuries dissipating the tensions of conquest by turning its subjects – or, at least, the landowners among them – into full Romans, the Huns lacked the necessary stability and the bureaucratic capacity to run their subjects directly. Instead of revolutionizing the sociopolitical structures of the conquered peoples or imposing their own, they had to rely on an indigenous leadership to continue the daily management of the subject groups. As a result, the Huns could exert only a moderate degree of dominion and interference, and even that varied from one subject people to another. The Gepids, as we have seen, had their own overall leader at the time of Attila’s death, and so were quickly able to assert their independence. Other groups, like the Amal-led Goths, first had to produce a leader of their own before they could challenge Hunnic hegemony. Some, like the Goths in thrall to Dengizich when he invaded east Roman territory in the 460s, never managed to do so.

If the sources were more numerous and more informative, I suspect that the narrative would show the Hunnic Empire peeling apart like an onion after 453, with different subject layers asserting independence at different times, in inverse relation to the degree of domination the Huns had previously exercised over their lives. The two key variables were, first, the extent to which the subjects’ political structure had been left intact; and second their distance from the heartland of the Empire where Attila had his camps. Some groups, settled close to the Huns’ own territories, were kept on a very tight rein, with any propensity to unified leadership suppressed. Groups living further away preserved more of their own political structures and were less readily controlled.

Rich burials are not just quite rich: they are staggeringly so. They contain a huge array of gold fittings and ornamentation, the stars of the collections being the cloisonné gold and garnet jewelry in which the stones are mounted in their own gold cases to give an effect not unlike mosaic. This kind of work would later become the mark of elites everywhere in the late and post-Roman periods. For instance, the style of the cloisonné jewelry found in the Sutton Hoo ship burial of the early seventh century in East Anglia originally gained its hold on elite imaginations in Hunnic Europe. One burial at Apahida (modern Transylvania) produced over 60 gold items, including a solid gold eagle that fitted on to its owner’s saddle. Every other piece of this individual’s horse equipment was likewise made of gold, and he himself was decked from head to foot in golden jewelry. There are other similarly wealthy burials, as well as others containing smaller numbers of gold items. The presence of so much gold in Germanic central and eastern Europe is highly significant. Up to the birth of Christ, social differentiation in the Germanic world manifested itself funerarily, if at all, only by the presence in certain graves of larger than usual numbers of handmade pots, or of slightly more decorative bronze and iron safety-pins. By the third and fourth centuries AD, some families were burying their dead with silver safety-pins, lots of beads, and perhaps some wheel-turned pottery; but gold was not being used to distinguish even elite burials at this point – the best they could manage was a little silver. The Hunnic Empire changed this, and virtually overnight.

The gold-rich burials of the ‘Danubian style’ mark a sudden explosion of gold grave goods into this part of Europe. There is no doubt where the gold came from: what we’re looking at in the grave goods of fifth-century Hungary is the physical evidence of the transfer of wealth northwards from the Roman world.

The Huns were after gold and other moveable wealth from the Empire – whether in the form of mercenary payments, booty or, especially, annual tributes. Clearly, large amounts of gold were recycled into the jewelry and appliques found in their graves. The fact that many of these were the rich burials of Germans indicates that the Huns did not just hang on to the gold themselves, but distributed quantities of it to the leaders of their Germanic subjects as well. These leaders, consequently, became very rich indeed.  The reasoning behind this strategy was that, if Germanic leaders could be given a stake in the successes of the Hunnic Empire, then dissent would be minimized and things would run relatively smoothly.

Gifts of gold to the subject princes would help lubricate the politics of Empire and fend off thoughts of revolt. Since there are quite a few burials containing gold items, these princes must have passed on some of the gold to favored supporters. The gold thus reflects the politics of Attila’s court.  Equally important, the role of such gold distributions in countering the endemic internal instability, combined with what we know of the source of that gold, underlines the role of predatory warfare in keeping afloat the leaky bark that was the Hunnic ship of state.

First and foremost, success in warfare built up the reputation of the current leader as a figure of overwhelming power. Witness the case of Attila and the sword of Mars. But there is every reason to suppose that military success had been just as important for his predecessors. A reputation for power brought with it the capacity to intimidate subject peoples, and it was also military success, of course, that provided the gold and other booty that kept their leaders in line – although the speed with which subject groups opted out of the Empire after Attila’s death suggests that the payments did not compensate for the burden of exploitation. In contrast to the Roman Empire, which, as we have seen, attempted to keep population levels low in frontier areas so as to minimize the potential for trouble, the Hunnic Empire sucked in subject peoples in huge numbers. The concentration of such a great body of manpower generated a magnificent war machine, which had to be used – it contained far too many inner tensions to be allowed to lie idle. The Huns’ subject groups outnumbered the Huns proper, probably by several to one. It was essential to keep the subject peoples occupied, or restless elements would be looking for outlets for their energy and the Empire’s rickety structure might begin to crumble.

Attila was the greatest barbarian conqueror in European history, but he was riding a tiger of unparalleled ferocity. Should his grip falter, he would be mauled to death. To my mind, this in turn explains his otherwise mysterious turn to the west at the end of the 440s.

Between 441 and 447, Attila’s armies had ransacked the Balkans except for some small areas protected by two major obstacles: the Peloponnese because of its geographical isolation, and the city of Constantinople because of its stunning land defenses. The eastern Empire was on its knees: the annual tribute it was having to pay out exceeded anything previously expended by a factor of ten. The Huns had squeezed out of Constantinople just about everything they were likely to get; at the very least, further campaigning against it was bound to run into the law of diminishing returns. But there on the Hungarian Plain Attila sat, still surrounded by a huge military machine that could not be left idle. With nothing to attack in the Balkans, another target had to be found. Attila turned to the west, in other words, because he’d exhausted the decent targets available in the east.

This suggests a final judgement on the Hunnic Empire. Politically dependent upon military victory and the flow of gold, it was bound to make war to the point of its own defeat, then be pushed by that defeat into internal crisis. The setbacks in Gaul and Italy in 451 and 452 must anyway have begun to puncture Attila’s aura of invincibility. They certainly caused some diminution in the flow of gold, and some of the outlying subject peoples may already have been getting restive. Quite likely, Attila’s death and the civil war between his sons provided just the opportunity they were looking for. Overall, there can be no more vivid testament to the unresolved tensions between dominant Hunnic rulers and exploited non-Hunnic subjects than the astonishing demise of Attila’s Empire. The strange death of Hunnic Europe, however, was also integral to the collapse of the western Empire.

A New Balance of Power

Instead of one huge power centered on the Great Hungarian Plain, its tentacles reaching out towards the Rhine in one direction, the Black Sea in another, the Roman Empire both east and west now found itself facing a pack of successor states. Much of the time fighting amongst themselves, they also pressed periodically upon the Roman frontier. As the Empire became ever more deeply involved in the fallout from the Hunnic collapse, the nature of Roman foreign policy on the Danube frontier began to change. In confronting their new situation, the Roman authorities had two priorities. They needed to prevent the squabbling north of the Danube from spilling over into their own territory in the form of invasions or incursions, while ensuring that what emerged from the chaos was not another monolithic empire. The surviving sources refer to overflows of various kinds on to Roman territory, the result of the ferocious struggle for Lebensraum on the other side of the Danube. Into the western Empire large numbers of refugees now flooded, individuals and groups who had decided that life south of the river looked preferable to the continuing struggle north of it. By the early 470s, the Roman army of Italy was dominated by central European refugees: Sciri are specifically mentioned, along with Herules, Alans and Torcilingi, who had all been recruited into its ranks.

The historical significance of Petronius Maximus’ first move as emperor

Both Flavius Constantius and Aetius had strained every political sinew to prevent the Visigoths from increasing their influence within western imperial politics. Alaric and his brother-in-law Athaulf had both had visions, if fleeting, of the Goths as protectors of the western Empire. Alaric had offered Honorius a deal whereby he would become senior general at court, and his Goths be settled not far from Ravenna. Athaulf married Honorius’ sister and named his son Theodosius. But Constantius and Aetius, those guardians of the western Empire, had resisted such pretensions; they had been willing to employ the Goths as junior allies against the Vandals, Alans and Suevi, but that was as far as it went. Aetius had preferred to pay and deploy Huns to keep the Goths within this very real political boundary rather than grant them a broader role in the business of Empire. Avitus’ embassy, which, as Sidonius makes clear, sought from the Visigoths not just peaceful acquiescence but a military alliance, reversed at a stroke a policy that had kept the Empire afloat for forty years.

The immediate aftermath only reinforces the point. While Avitus was still with the Visigoths, the Vandals under the leadership of Geiseric launched a naval expedition from North Africa which brought their forces to the outskirts of Rome. In part, its aim was fun and profit, but it also had more substantial motives. As part of the diplomatic horse-trading that had followed the frustration of Aetius’ attempts to reconquer North Africa, Huneric, eldest son of the Vandal king Geiseric, had been betrothed to Eudocia, daughter of Valentinian III. On seizing power, however, in an attempt to add extra credibility to his usurping regime, Petronius Maximus married Eudocia to his own son Palladius. The Vandal attack on Rome was also made, then, in outrage at being cheated, as Geiseric saw it, of this chance to play the great game of imperial politics. Hearing of the Vandals’ arrival, Maximus panicked, mounted a horse and fled. The imperial bodyguard and those free persons around him whom he particularly trusted deserted him, and those who saw him leaving abused him and reviled him for his cowardice. As he was about to leave the city, someone threw a rock, hitting him on the temple and killing him. The crowd fell upon his body, tore it to pieces and with shouts of triumph paraded the limbs about on a pole.

So ended the reign of Petronius Maximus, on 31 May 455; he had been emperor for no more than two and a half months.

When the imperial capital was sacked for the second time, the damage sustained was more serious than in 410. Geiseric’s Vandals looted and ransacked, taking much treasure and many prisoners back with them to Carthage, including the widow of Valentinian III, her two daughters, and Gaudentius, the surviving son of Aetius. Upon hearing this news, Avitus immediately made his own bid for the throne, declaring himself emperor while still at the Visigothic court in Bordeaux. It was later, on 9 July that year, that his claim was ratified by a group of Gallic aristocrats at Arles, the regional capital. From Arles, not long afterwards, Avitus moved on triumphantly to Rome and began negotiations for recognition with Constantinople. The senior Roman army commanders in Italy – Majorian and Ricimer – were ready to accept him because they were afraid of the Visigothic military power at his disposal.

A new order was thus born. Instead of western imperial regimes looking to keep the Visigoths and other immigrants at arm’s length, the newcomers had established themselves as part of the western Empire’s body politic. For the first time, a Visigothic king had played a key role in deciding the imperial succession.

The full significance of this revolution needs to be underlined. Without the Huns to keep the Goths and other immigrants into the Roman west in check, there was no choice but to embrace them. The western Empire’s military reservoirs were no longer full enough for it to continue to exclude them from central politics. The ambition first shown by Alaric and Athaulf, and later by Geiseric in his desire to marry his son to an imperial princess, had come to fruition. Contemporaries were fully aware of the political turn-around represented by Avitus’ elevation. Since time immemorial, the traditional education had portrayed barbarians – including Visigoths – as the ‘other’, the irrational, the uneducated; the destructive force constantly threatening the Roman Empire. In a sense, with the Visigoths now having served for a generation as minor Roman allies in south-western France, the ground had been well prepared. Nonetheless, Avitus’ regime was only too well aware that its Visigothic alliance was bound to be controversial.

The revolution was gathering pace. Barbarians were being presented as Romans to justify the inescapable reality that, since they could no longer be excluded, they now had to be included in the construction of working political regimes in the west.

At first sight, this inclusion of the alien would not seem to be a mortal blow to the integrity of the Empire. Theoderic was Roman enough to be willing to play along; the new regime saw the need to portray him as a good Roman in order to satisfy landowning opinion. There were, however, a couple of very big catches which made a Romano-Visigothic military alliance not quite the asset you might initially suppose. First, political support always came at a price. Theoderic was entirely happy to support Avitus’ bid for power, but, not unreasonably, he expected something in return. In this instance, his desired reward was a free hand in Spain where, as we have seen, the Suevi had been running riot since Aetius’ attention had been turned towards the Danube in the early 440s. Theoderic’s request was granted, and he promptly sent a Visigothic army to Spain under the auspices of Avitus’ regime, notionally to curb Suevic depredations. Hitherto, of course, when the Visigoths had been deployed in Spain, it was always in conjunction with Roman forces. This time, Theoderic was left to operate essentially on his own initiative, and we have a first-hand – Spanish – description of what happened. The Visigothic army defeated the Suevi, we are told, capturing and executing their king. They also took every opportunity, both during the assault and in the cleaning-up operations that followed, to gather as much booty as they could, sacking and pillaging, amongst others, the towns of Braga, Asturica and Palentia. Not only did the Goths destroy the kingdom of the Suevi, they also helped themselves uninhibitedly to the wealth of Spain. Just like Attila, Theoderic had warriors to satisfy. His willingness to support Avitus was based on calculations of profit, and a lucrative Spanish spree was just the thing.

The inclusion of barbarians into the political game of regime-building in the Roman west meant that there were now many more groups maneuvering for position around the imperial court. Before 450, any functioning western regime had to incorporate and broadly satisfy three army groups – two main ones in Italy and Gaul, and a lesser one in Illyricum – plus the landed aristocracies of Italy and Gaul, who occupied the key posts in the imperial bureaucracy. The desires of Constantinople also had to be accommodated. As in the case of Valentinian III, should western forces be divided between different candidates, eastern emperors disposed of enough clout and brute force to impose their own candidate. Though too far away to rule the west directly, Constantinople could exercise a virtual veto over the choices of the other interested parties. Incorporating this many interests could make arriving at a stable outcome a long-drawn-out business.

After the collapse of the Hunnic Empire, the Burgundians and Vandals were the next to start jockeying for position and clamoring for rewards. The Burgundians had been settled by Aetius around Lake Geneva in the mid-430s. Twenty years later, they took advantage of the new balance of power in the west to acquire a number of other Roman cities and the revenues they brought with them from their territories in the Rhône valley: Besançon, le Valais, Grenoble, Autun, Chalon-sur-Saône and Lyon. The Vandal-Alan coalition’s sack of Rome in 455, as we have seen, betrayed a desire to participate in imperial politics. On the death of Valentinian, Victor of Vita tells us, Geiseric too, expanding his powerbase, seized control of Tripolitania, Numidia and Mauretania, together with Sicily, Corsica and the Balearics. Allowing just some of the barbarian powers to participate in the Empire massively complicated western politics; and the greater the number, the harder it was to find sufficient rewards to generate a long-term coalition.

We see here, then, in a nutshell the problem now facing the west. Avitus had the support of the Visigoths, the support of at least some Gallic senators, and of some of the Roman army of Gaul. But faced with the hostility of the Italian senators, and especially of the commanders of the Italian field army, the coalition didn’t stand a chance. By the early 460s, the extent of the crisis in the west generated by the collapse of Attila’s Empire was clear. There were too many interested parties and not enough rewards to go round.

END OF EMPIRE

Some historians have criticized Constantinople for not doing more in the fifth century to save the embattled west.

The eastern Empire’s mobile forces mustered between 65,000 and 100,000 men. Also, the east disposed of numerous units of frontier garrison troops. The archaeological field surveys of the last 20 years have confirmed, furthermore, that the fourth-century agricultural prosperity of the east’s key provinces – Asia Minor, the Middle East and Egypt – showed no sign of slackening during the fifth. Some believe that the eastern Empire thus had the wherewithal to intervene effectively in the west, but chose not to. In the most radical statement of the case, it has been argued that Constantinople was happy to see barbarians settle on western territory for the disabling effect this had on the west’s military establishment because it removed any possibility of an ambitious western pretender seeking to unseat his eastern counterpart and unite the Empire. This had happened periodically in the fourth century, when the emperors Constantine and Julian took over the entire Empire from an originally western power-base. But in fact, bearing in mind the problems it had to deal with on its own frontiers, Constantinople’s record for supplying aid to the west in the fifth century is perfectly respectable.

Constantinople and the West

The eastern Empire’s military establishment was very substantial, but large numbers of troops had always to be committed to the two key sectors of its eastern frontier in Armenia and Mesopotamia, where Rome confronted Persia. If you asked any fourth-century Roman where the main threat to imperial security lay, the answer would have been Persia under its new Sasanian rulers. And from the third century, when the Sasanian revolution worked its magic, Persia was indeed the second great superpower of the ancient world. As we saw earlier, the new military threat posed by the Sasanians plunged the Roman Empire into a military and fiscal crisis that lasted the best part of 50 years. By the time of Diocletian in the 280s, the Empire had mobilized the necessary funding and manpower, but the process of adjustment to the undisputed power of its eastern neighbor was long and painful. The rise of Persia also made it more or less unavoidable to have one emperor constantly in the east, and hence made power-sharing a feature of the imperial office in the late Roman period. As a result of these transformations, Rome began to hold its own again, and there were no fourth-century repeats of such third-century disasters as the Persian sack of Antioch.

When assessing the military contribution of the eastern Empire to the west in the fifth century, it is important to appreciate that, while broadly contained from about 300, the new Persian threat never disappeared.

The great Hunnic raid of 395 wreaked havoc not only in Rome’s provinces south of the Black Sea but also over a surprisingly large area of the Persian Empire. So, in this new era of compromise when both Empires had Huns on their minds, they came to an unprecedented agreement for mutual defense. The Persians would fortify and garrison the key Darial Pass through the Caucasus, and the Romans would help defray the costs.

None of this meant, however, that Constantinople could afford to lower its guard. Troop numbers were perhaps reduced in the fifth century, and less was spent on fortifications, but major forces still had to be kept on the eastern frontier. The Notitia Dignitatum – whose eastern sections date from about 395, after the Armenian accord – lists a field army of thirty-one regiments, roughly one-quarter of the whole, based in the east, together with 156 units of frontier garrison troops stationed in Armenia and the provinces comprising the Mesopotamian front, out of a total of 305 such units for the entire eastern Empire. And this in an era of relative stability. There were occasional quarrels with Persia, which sometimes came to blows, as in 421 and 441. The only reason the Persians didn’t capitalize more on Constantinople’s run-in with the Huns in the 440s seems to have been their own nomad problems.

Just as, for Rome, Persia was the great enemy, so Rome was for Persia, and each particularly prized victory over the other. As we noted earlier, the provinces from Egypt to western Asia Minor were the eastern Empire’s main source of revenue, and no emperor could afford to take chances with the region’s security. As a result, Constantinople had to keep upwards of 40% of its military committed to the Persian frontier, and another 92 units of garrison troops for the defense of Egypt and Libya. The only forces the eastern authorities could even think of using in the west were the one-sixth of its garrison troops stationed in the Balkans and the three-quarters of its field forces mustered in the Thracian and the two praesental armies.

Up until 450, Constantinople’s capacity to help the west was also deeply affected by the fact that it bore the brunt of Hunnic hostility. As early as 408, Uldin had briefly seized the east Roman fortress of Castra Martis in Dacia Ripensis, and by 413 the eastern authorities felt threatened enough to initiate a program for upgrading their riverine defenses on the Danube and to construct the triple land walls around Constantinople. Then, just a few years later, eastern forces engaged directly in attempts to limit the growth of Hunnic power. Probably in 421, they mounted a major expedition into Pannonia which was already, if temporarily, in Hunnic hands, extracted a large group of Goths from the Huns’ control and resettled them in east Roman territory, in Thrace. The next two decades were spent combating the ambitions of Attila and his uncle, and even after Attila’s death it again fell to the east Roman authorities to clean up most of the fall-out from the wreck of the Hunnic Empire. It was the eastern Empire that the remaining sons of Attila chose to invade in the later 460s. Slightly earlier in the decade, east Roman forces had also been in action against armed fragments of Attila’s disintegrating war machine, led by Hormidac and Bigelis. In 460, likewise, the Amal-led Goths in Pannonia had invaded the eastern Empire to extract their 300 pounds of gold.

Judged against this strategic background, where military commitments could not be reduced on the Persian front, and where, thanks to the Huns, the Danube frontier required a greater share of resources than ever before, Constantinople’s record in providing assistance to the west in the fifth century looks perfectly respectable. Although in the throes of fending off Uldin, Constantinople had sent troops to Honorius in 410, when Alaric had taken Rome and was threatening North Africa. Six units in all, numbering 4,000 men, arrived at a critical moment, putting new fight into Honorius when flight, or sharing power with usurpers, was on the cards. The force was enough to secure Ravenna, whose garrison was becoming mutinous, and bought enough time for the emperor to be rescued. In 425, likewise, Constantinople had committed its praesental troops in large numbers to the task of establishing Valentinian III on the throne, and in the 430s Aspar the general had done enough in North Africa to prompt Geiseric to negotiate the first treaty, of 435, which denied him the conquest of Carthage and the richest provinces of the region.

Troops – we are not told how many – were sent to Aetius to assist him in harassing the Hunnic armies sweeping through northern Italy in 452. This is not the record of an eastern state that had no interest in sustaining the west.

The most obvious problem facing the Roman west round about 460 was a crisis of succession; since the death of Attila in 453 there had been little continuity. Valentinian III had been cut down by Aetius’ bodyguards, egged on by Petronius Maximus, who seized the throne but in no time at all was himself killed by the Roman mob. Soon afterwards, Avitus had appointed himself emperor in collusion with the Visigoths and elements of the Gallo-Roman landowning and military establishments. Then came his ousting in 456 by Ricimer and Majorian, commanders of the Italian field forces.

As western regimes came and went, then, eastern emperors tried, it seems, to identify and support those with some real hope of generating stability.

The disappearance of the Huns as an effective force left western imperial regimes with no choice but to buy support from at least some of the immigrant powers now established on its soil. Avitus won over the Visigoths by offering them a free hand – to their great profit, as it turned out – in Spain. Majorian had been forced to recognize the Burgundians’ desire to expand, and had allowed them to take over more cities in the Rhône valley; and he continued to allow the Visigoths to do pretty much as they wanted in Spain. To buy support for Libius Severus, similarly, Ricimer had handed over to the Visigoths the major Roman city of Narbonne with all its revenues. But now, there were simply too many players in the field, and this, combined with rapid regime change, had created a situation in which even the already much reduced western tax revenues were being further expended in a desperate struggle for stability.

Three things needed to happen in the west to prevent its annihilation. Legitimate authority had to be restored; the number of players needing to be conciliated by any incoming regime had to be reduced; and the Empire’s revenues had to rise. Analysts in the eastern Empire came to precisely this conclusion, and in the mid-460s hatched a plan that had a very real chance of putting new life back into the ailing west.

Anthemius went to Italy with a plan for dealing with the more fundamental problems facing his new Empire. First, he quickly restored a modicum of order north of the Alps in Gaul. It is difficult to estimate how much of Gaul was still functioning as part of the western Empire in 467. In the south the Visigoths, and certainly the Burgundians, accepted Anthemius’ rule; both of their territories remained legally part of the Empire. We know that institutions like the cursus publicus were still functioning here. Further north, things are less clear. The Roman army of the Rhine, or what was left of it, had gone into revolt on the deposition of Majorian, and part of it still formed the core of a semi-independent command west of Paris. Refugees from battle-torn Roman Britain also seem to have contributed to the rise of a new power in Brittany, and for the first time Frankish warbands were flexing their muscles on Roman soil. In the fourth century, Franks had played the same kind of role on the northern Rhine frontier as the Alamanni played to their south. Semi-subdued clients, they both raided and traded with the Roman Empire, and contributed substantially to its military manpower; several leading recruits, such as Bauto and Arbogast, rose to senior Roman commands. Also like the Alamanni, the Franks were a coalition of smaller groups, each with their own leadership. By the 460s, as Roman control collapsed in the north, some of these warband leaders began for the first time to operate exclusively on the Roman side of the frontier, selling their services, it seems, to the highest bidder.

The arrival in their midst of the engaging Anthemius led to queues of Gallo-Roman landowners anxious to court and be courted by the new emperor. We know that the cursus publicus was still working because Sidonius used it on his way to see Anthemius at the head of a Gallic deputation. Anthemius responded in kind. Sidonius wormed his way into the good graces of the two most important Italian senatorial power-brokers of the time, Gennadius Avienus and Flavius Caecina Decius Basilius, and with their help got the chance to deliver a panegyric to the emperor, on 1 January 468. As a result, he was appointed by Anthemius to the high office of Urban Prefect of Rome. A time-honored process was in operation: with self-advancement in mind, likely-looking landowners would turn up at the imperial court at the start of a new reign to offer support and receive gifts in return. But fiddling with the balance of power in Gaul wasn’t going to contribute anything much towards a restoration of the western Empire.

There was only one plan that stood any real chance of putting life back into the Roman west: reconquering North Africa. The Vandal–Alan coalition had never been accepted into the country club of allied immigrant powers that began to emerge in the mid-fifth century.

The treaty of 442, which recognized its seizure of Carthage, was granted when Aetius was at the nadir of his fortunes; it was an exception to the Vandals’ usual relationship with the Roman state, which was one of great hostility. The western Empire, as we have seen, from the 410s onwards had consistently allied with the Visigoths against the Vandals and Alans, and the latter’s history after 450 was one of similar exclusion. Unlike the Visigoths or the Burgundians, the Vandals and Alans did not contribute to Aetius’ military coalition that fought against Attila in Gaul in 451; nor were they subsequently courted or rewarded by the regimes of Avitus, Majorian or Libius Severus. Their leader Geiseric was certainly after membership of the club, as his sack of Rome at the time of Petronius Maximus paradoxically showed. This was partly motivated by the fact that Maximus had upset the marriage arrangements between Geiseric’s son Huneric and the elder daughter of Valentinian III. After they sacked Rome in 455, the Vandals continued to raid the coast of Sicily and various Mediterranean islands. This was an enterprise undertaken in large measure for profit.

Conclusion

When the amalgamation of groups and subgroups that had been going on for so long beyond Rome’s borders interacted with the arrival of the Huns, supergroups came into being that would tear the western Roman Empire apart. The Roman Empire had sown the seeds of its own destruction not because of internal weaknesses, whether evolving over centuries or newly arisen, but as a consequence of its relationship with the Germanic world. Germanic society, along with the Huns, responded to Roman power in ways the Romans couldn’t have foreseen. By virtue of its unbounded aggression, Roman imperialism was ultimately responsible for its own destruction.

Posted in Collapse of Civilizations, Roman Empire | Tagged , | 4 Comments

Renewable costs don’t include transmission & energy storage backup from Nat Gas & Coal plants

Preface. Wind and solar advocates don’t include transmission and backup costs in their net energy and cost calculations. But without fossil backup, the electric grid will come down due to lack of storage.

There is almost nowhere left to put pumped hydro storage in the ten states that already have 80% of U.S. hydropower; there is only one Compressed Air Energy Storage plant in the U.S., sited in one of the few salt domes, which exist in just three states along the Gulf coast (and with half of the plant powered by natural gas turbines); and it would cost $41 trillion to make Sodium Sulfur (NaS) batteries lasting 15 years to back up just one day of U.S. electricity generation (Friedemann 2015).

Mexico is asking renewable companies to pay for transmission and for natural gas / coal backup for when the wind isn’t blowing or the sun isn’t shining. About time: this is all a gigantic waste of money if wind and solar can’t stand on their own, and a dumb investment when the money could instead have gone to converting to organic farming, planting more trees, beefing up infrastructure, and other efforts to prepare for peak oil, coal, and natural gas and for the lifestyle that will be forced upon us.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Garcia, D. A. 2020. Renewable firms in Mexico must contribute to grid backup – CFE chief. Reuters.

MEXICO CITY (Reuters) – Private renewable energy firms in Mexico should pay for part of the baseload power underpinning the flow of electricity on the grid, the head of the state power company said…. Renewable operators had not been pulling their weight in contributing to the infrastructure that sustains them.

“Wind and photovoltaic (plants) don’t pay the CFE for the backup,” said Bartlett, referring to the cost of power generation from fossil fuels, mostly natural gas, to guarantee uninterrupted flow. “Do you think it’s fair for the CFE to subsidize these companies that don’t produce power all day?” he asked.

The firms should also start helping to pay transmission costs, he said.

“That’s not a free market, it’s theft,” said Bartlett, a close ally of President Andres Manuel Lopez Obrador, who has pledged to hold down electricity rates.

CENACE, Mexico’s national grid operator, cited the coronavirus pandemic as a justification, arguing that intermittent wind and solar power is not consistent with ensuring constant electricity supply.

References

Friedemann, Alice. 2015. When Trucks Stop Running: Energy and the Future of Transportation. Springer.

Posted in Electric Grid & Fast Collapse, Energy Storage, Solar, Solar EROI, Wind, Wind EROI | Tagged , , , , | 2 Comments

The Invisible oiliness of everything

Preface. Even a simple object like a pencil takes hundreds of actions and objects requiring fossil energy to do and make, not electricity. This is one of many reasons why wind, solar, and other contraptions that make electricity can’t replace fossil fuels. Electricity is only about 15% of overall energy use, with fossils providing the rest: transportation, manufacturing, heating, and the half a million products made from fossils as feedstock, as well as the energy source to make them.

Alice Friedemann  www.energyskeptic.com  Author of Life After Fossil Fuels: A Reality Check on Alternative Energy; When Trucks Stop Running: Energy and the Future of Transportation”, Barriers to Making Algal Biofuels, & “Crunch! Whole Grain Artisan Chips and Crackers”.  Women in ecology  Podcasts: WGBH, Jore, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 &278, Peak Prosperity,  Index of best energyskeptic posts

***

Just as fish swim in water, we swim in oil.  You can’t understand the predicament we’re in until you can see the oil that saturates every single aspect of our life.

What follows is the life cycle of a simple object, the pencil. I’ve cut back and reworded Leonard Read’s 1958 essay “I, Pencil: My Family Tree” to show the fossil fuel energy inputs (OBJECTS made using energy, like the pencil, are in BOLD CAPITALS; ACTIONS are BOLD ITALICIZED).

“My family tree begins with … a Cedar tree from Oregon. Now contemplate the antecedents — all the people, numberless skills, and fabrication:

All the SAWS, TRUCKS, ROPE and OTHER GEAR to HARVEST and CART cedar logs to the RAILROAD siding. The MINING of ore, MAKING of STEEL, and its REFINEMENT into SAWS, AXES, and MOTORS.

The growing of HEMP, LUBRICATED with OIL, DIRT REMOVED, COMBED, COMPRESSED, SPUN into YARN, and BRAIDED into ROPE.

BUILDING of LOGGING CAMPS (BEDS, MESS HALLS). SHOP for, DELIVER, and COOK FOOD to feed the working men. Not to mention the untold thousands of persons who had a hand in every cup of COFFEE the loggers drank!

The LOGS are SHIPPED to a MILL in California. Can you imagine how many people were needed to MAKE FLAT CARS and RAILS and RAILROAD ENGINES?

At the mill, cedar logs are CUT into small, pencil-length slats less than a quarter inch thick, KILN-DRIED, TINTED, WAXED, and KILN-DRIED again. Think of all the effort and skills needed to make the TINT and the KILNS, SUPPLY the HEAT, LIGHT, and POWER, the BELTS, MOTORS, and all the OTHER THINGS a MILL requires. Plus the SWEEPERS and the MEN who POURED the CONCRETE for the DAM of a Pacific Gas & Electric Company HYDRO-ELECTRIC PLANT which supplies the mill’s POWER!

Don’t overlook the WORKERS and OIL BURNED by the RAILROAD LOCOMOTIVE to TRANSPORT SIXTY TRAIN-CARS of SLATS ACROSS the nation.

Once in the PENCIL FACTORY—worth millions of dollars in MACHINERY and BUILDING—each slat has 8 GROOVES CUT into it by a GROOVE-CUTTING MACHINE, after which the LEAD-LAYING MACHINE PLACES a piece of LEAD in every other slat, APPLIES GLUE and PLACES another SLAT on top—a lead sandwich. Seven brothers and I are mechanically CARVED from this “wood-clinched” sandwich.

My “lead” itself—it contains no lead at all—is complex. The GRAPHITE is MINED in Sri Lanka. Consider these MINERS and those who MAKE their many TOOLS and the makers of the PAPER SACKS in which the graphite is SHIPPED and those who make the STRING that ties the sacks and the MEN who LIFT them aboard SHIPS and the MEN who MAKE the SHIPS. Even the LIGHTHOUSE KEEPERS along the way assisted in my birth—and the HARBOR PILOTS.

The graphite is mixed with CLAY FROM Mississippi in which AMMONIUM HYDROXIDE is used in the REFINING process. Then WETTING AGENTS and animal fats are CHEMICALLY REACTED with sulfuric acid. After PASSING THROUGH NUMEROUS MACHINES, the mixture finally appears as endless extrusions—as from a sausage grinder—cut to size, dried, and baked for several hours at 1,850 DEGREES FAHRENHEIT. To increase their strength and smoothness the leads are then TREATED with a hot mixture which includes CANDELILLA WAX from Mexico, PARAFFIN WAX, and HYDROGENATED NATURAL FATS.

My cedar RECEIVES 6 coats of LACQUER. Do you know all the ingredients of lacquer? Who would think that the GROWERS of CASTOR BEANS and the REFINERS of CASTOR OIL are a part of it? They are. Why, even the processes by which the lacquer is made a beautiful yellow involve the skills of more persons than one can enumerate!

Observe the LABELING, a film FORMED by APPLYING HEAT to CARBON BLACK mixed with RESINS. How do you make resins and what is carbon black?

My bit of metal—the ferrule—is BRASS. Think of all the PERSONS who MINE ZINC and COPPER and those who have the skills to MAKE shiny SHEET BRASS from these products of nature. Those black rings on my ferrule are black NICKEL. What is black nickel and how is it applied? The complete story would take pages to explain.

Then there’s my crowning glory, the ERASER, a rubber-like product made by reacting RAPE-SEED OIL from Indonesia with SULFUR CHLORIDE, and numerous VULCANIZING and ACCELERATING AGENTS. The PUMICE comes from Italy; and the pigment which gives “the plug” its color is CADMIUM SULFIDE.

Does anyone wish to challenge my earlier assertion that no single person on the face of this earth knows how to make me?

Actually, millions of human beings have had a hand in my creation, no one of whom even knows more than a very few of the others. Now, you may say that I go too far in relating the picker of a coffee berry in far off Brazil and food growers elsewhere to my creation; that this is an extreme position. I shall stand by my claim. There isn’t a single person in all these millions, including the president of the pencil company, who contributes more than a tiny, infinitesimal bit of know-how. From the standpoint of know-how the only difference between the miner of graphite in Sri Lanka and the logger in Oregon is in the type of know-how. Neither the miner nor the logger can be dispensed with, any more than can the chemist at the factory or the worker in the oil field—paraffin being a by-product of petroleum.

I, Pencil, am a complex combination of miracles: a tree, zinc, copper, graphite, and so on.”

Energy use in Agriculture (USDA)

It is estimated that 10 kilocalories of fossil fuels are used to produce just 1 kilocalorie of food.  Not surprisingly, food-related energy use in the U.S. is quite large, growing from 14.4% of energy used in 2002 to 15.7% in 2007.
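
To put the 10:1 ratio in household terms, here is a minimal back-of-envelope sketch in Python; the 2,000 kcal daily diet and the roughly 30,000 kcal energy content of a gallon of gasoline are assumptions used only to illustrate the arithmetic, not figures from the USDA report.

    # Back-of-envelope sketch: fossil energy behind one person's daily food,
    # using the ~10:1 fossil-to-food kilocalorie ratio cited above.
    DIET_KCAL_PER_DAY = 2000          # assumed typical daily diet
    FOSSIL_TO_FOOD_RATIO = 10         # ~10 kcal of fossil fuel per kcal of food
    GASOLINE_KCAL_PER_GALLON = 30000  # rough energy content of a gallon of gasoline

    fossil_kcal = DIET_KCAL_PER_DAY * FOSSIL_TO_FOOD_RATIO
    gallons = fossil_kcal / GASOLINE_KCAL_PER_GALLON

    print(f"Fossil energy behind one day of food: {fossil_kcal:,} kcal")
    print(f"Roughly {gallons:.1f} gallons of gasoline equivalent per person per day")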

Energy is used throughout the U.S. food supply chain, from the manufacture and application of agricultural inputs, such as fertilizers and irrigation, through crop and livestock production, processing, and packaging; distribution services, such as shipping and cold storage; the running of refrigeration, preparation, and disposal equipment in food retailing and food service establishments; and in home kitchens. Dependence on energy throughout the food chain raises concerns about the impact of high or volatile energy prices on the price of food, as well as about domestic food security and the Nation’s reliance on imported energy.

Energy plays a large role in the life cycle of a food product. Consider energy’s contribution to a hypothetical purchase of a fresh-cut non-organic salad mix by a consumer living on the East Coast of the United States. After having read “I, Pencil,” it is obvious that this description leaves out a great many of the actions and objects, and the energy embedded in them, within the life cycle.

The farms’ fields are seeded months earlier with a precision seed planter operating as an attachment to a gasoline-powered farm tractor.

Fresh vegetable farms in California harvest the produce to be used in the salad mix a few weeks prior to its purchase.

Between planting and harvest, a diesel-powered broadcast spreader applies nitrogen-based fertilizers, pesticides, and herbicides, all manufactured using differing amounts of natural gas and electricity and shipped in diesel-powered trucks to a nearby farm supply wholesaler.

Local farmers drive to the wholesaler to purchase farm supplies.

The farms use electric-powered irrigation equipment throughout much of the growing period.

At harvest, field workers pack harvested vegetables in boxes produced at a paper mill and load them in trucks for shipment to a regional processing plant, where specialized machinery cleans, cuts, mixes, and packages the salad mixes.

Utility services at the paper mill, plastic packaging manufacturers, and salad mix plants use energy to produce the boxes used at harvest and the packaging used at the processing plant, and for processing and packaging the fresh produce. The packaged salad mix is shipped in refrigerated containers by a combination of rail and truck to an East Coast grocery store, where it is placed in market displays under constant refrigeration. To purchase this packaged salad mix, a consumer likely travels by car or public transportation to a nearby grocery store. For those traveling by car, a portion of the consumer’s automobile operational costs, and his or her associated energy-use requirements, help facilitate this food-related travel.

At home, the consumer refrigerates the salad mix for a time before eating it. Food-related household operations include energy use for storage, preparation, cleanup, and food-related travel, plus purchases of appliances, dishware, flatware, cookware, and tableware, as well as a small percentage of certain auto expenses to cover food-related travel.

Subsequently, dishes and utensils used to eat the salad may be placed in a dishwasher for cleaning and reuse—adding to the electricity use of the consumer’s household. Leftover salad may be partly ground up in a garbage disposal and washed away to a wastewater treatment facility, or disposed of, collected, and hauled to a landfill. The consumer in the example purchased one of many units of this specific salad mix product sold each day in supermarkets nationwide, and this mixed salad product is one item among 45,000 distinct items with unique energy use requirements available in a typical U.S. supermarket.

Aside from the roughly constant 140,000 retail food and beverage stores operating in 2002 and 2007, there were also over 537,000 food and beverage service establishments in the United States in 2007, a 12-percent increase from 2002 (BLS, Quarterly Census of Employment and Wages). Each establishment purchases, stores, prepares, cleans, and disposes of food items. Other establishments, such as movie theaters, sports arenas, and hospitals, also perform some of these food-related services. This salad mix example illustrates, but is not a comprehensive accounting of, all the energy services related to producing, distributing, serving, and disposing of this product.

Life Cycle Assessment (LCA) and Energy Returned on Energy Invested (EROEI)

When it comes to replacing fossil fuels with another kind of energy, you want to be sure you aren’t merely building a Rube Goldberg contraption that churns out less energy over its lifetime than the fossil fuel energy used to make the device.

There are decades-old scientific methods that try to do this. The best-known is the Life Cycle Assessment (LCA), which calculates the monetary costs and helps businesses shave them.

When it comes to evaluating a device that produces energy, a better measurement is the Energy Returned on Energy Invested (EROEI, EROI): the ratio of the energy the contraption delivers over its lifetime to the fossil fuel energy used in every step and component of making and running it.
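
Here is a minimal sketch of that arithmetic in Python; the device and every number below are hypothetical, chosen only to show how the ratio and the net energy are computed, and are not taken from any study.

    # EROEI = lifetime energy delivered / energy invested to build, run, and
    # dispose of the device. All figures below are hypothetical.
    def eroei(lifetime_energy_out, energy_invested):
        return lifetime_energy_out / energy_invested

    # Hypothetical device: 50,000 MWh delivered over its life, 10,000 MWh of
    # (mostly fossil) energy embodied in mining, manufacturing, transport,
    # installation, maintenance, and decommissioning.
    ratio = eroei(50_000, 10_000)
    net_energy = 50_000 - 10_000      # energy left over for society, in MWh

    print(f"EROEI = {ratio:.1f}")     # 5.0
    print(f"Net energy = {net_energy:,} MWh")
    # An EROEI at or below 1.0 means break-even or worse: nothing left over.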

At the start of the fossil fuel age, each barrel of oil discovered could be used to find 100 more, a huge EROI.  This enormous bounty of energy was used to build our fabulous civilization. Railroads finally ended famines, clean drinking water and sewage infrastructure raised the average lifespan from 40 to 80 years (Garrett), and oil made possible a million other things – cars, airplanes, movies, electronic goods, 100% comfort 100% of the time, just push a button to heat, air-condition, or cook.

Clearly a break-even or negative energy return is a big problem. For example, if the fossil fuel energy needed to make ethanol is greater than or equal to the energy in the ethanol produced, then there is no extra energy left over to do anything. Many systems ecologists have found the net energy of ethanol to be negative (Pimentel), or so slightly positive that the tiny amount of excess energy produced wouldn’t be able to run society.

The problems with LCA and EROEI

No wonder complete studies with wide boundaries are seldom done. There are infinite regressions, since every object has its own LCA and EROEI. A Toyota car has about 30,000 parts. A windmill turbine has 8,000 components (AWEA).  The supply chains (transportation fuel) for both involve thousands of companies and dozens of countries.

LCA & EROEI studies are bound to miss some steps. Read’s pencil story left out the design, marketing, packaging, sales, distribution, and energy to fuel the supply chains between California, Oregon, Mississippi, Brazil, Sri Lanka, Indonesia, etc., and the final ride the pencil takes to the garbage dump.

Every step in a process subtracts energy from the ultimate energy delivered. Oil is concentrated sunshine that was brewed for free by Mother Nature. Building alternative energy resources requires dozens of steps, thousands of components, and vast amounts of energy in the supply chains of providing the minerals and pieces of equipment to make an alternative energy contraption.
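
To see how quickly chained steps eat into the energy finally delivered, here is a minimal sketch; the ten steps and the 5% loss per step are arbitrary assumptions for illustration, not figures from any study.

    # Illustrative only: if each step in a supply chain consumes some fraction
    # of the energy passing through it, the losses compound multiplicatively.
    LOSS_PER_STEP = 0.05   # assume each step consumes 5% of the energy (hypothetical)
    STEPS = 10             # mining, refining, transport, fabrication, ... (hypothetical)

    remaining = (1 - LOSS_PER_STEP) ** STEPS
    print(f"Energy remaining after {STEPS} steps: {remaining:.0%}")  # about 60%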

Life Cycle Assessments (LCA) often use money rather than energy to calculate “costs”. Money is an artificial, abstract concept used to grease the wheels of commerce. Money varies in value over time because of politics and financial cycles, and it can’t be burned in combustion engines.

There are many different LCA tables to choose from, so scientists accuse each other of cherry-picking data or argue that the data are out of date.

EROEI studies often leave out LCA monetary costs because they’re difficult to quantify as energy costs. For example, when the EROEI of a windmill farm is calculated, many costs are left out, such as insurance, administrative expenses, taxes, the cost of the land to rent or own, indirect labor (consultants, notaries, civil servants, legal costs, etc.), security and surveillance costs, the fairs, exhibitions, promotions, and conferences attended by engineering staff, bonds, fees, and so on. Prieto and Hall, however, did manage to include such costs in their analysis of solar power in Spain (Part 1 & Part 2).

External (environmental damage) costs are rarely mentioned or considered. Making biofuels mines topsoil, depletes aquifers, creates immense eutrophication in the Gulf of Mexico and other waterways from fertilizer runoff, drives the clearing of rainforests for energy crops, and so on.

Weißbach, D., et al. also find these issues with LCA and EROI studies:

  • Don’t take into account the need to buffer intermittent power. The EROI for wind is 16, but once the energy invested in the storage needed to buffer wind power is counted, the overall EROI drops to about 4, well below the EMROI (energy money returned on invested) requirement of 7 for an energy technology to be viable (see the sketch after this list).
  • Focus too much on CO2 emissions rather than energy
  • Don’t take into account the much longer lifespans of fossil fuel power plants than wind (about 20 years) or solar (30 years or less; we don’t yet know). CCGT natural gas plants can last 35 years, coal and gas turbines 50 years, refurbished nuclear power plants over 60 years, and hydropower 100 years or more.
  • Assume some or all of the components are recycled and subtract that energy, even though new material is often cheaper than recycled material, and recycling itself takes energy
  • Don’t take into account that getting raw materials keeps getting more energy-expensive as ore concentrations go down, because the best reserves are extracted first
  • Don’t add in the energy costs to make devices safer and conform to environmental standards
  • Wind EROI can be ‘gamed’ by using very low amounts of copper and other materials, or leaving them out entirely, and by using data from only the best possible locations (i.e. offshore or the best onshore sites)
  • Ignore human labor costs
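
As flagged in the first bullet, here is a minimal sketch of how buffering drags an EROI figure down. It uses a deliberately simple model (add the storage system’s embodied energy to the generator’s, per unit of electricity finally delivered); the storage-investment number is an assumption chosen only so the result lands near the buffered value of about 4 quoted above, and this is not a reproduction of Weißbach et al.’s exact accounting.

    # Illustrative sketch only: buffered EROI under a simple add-the-investments model.
    def buffered_eroi(eroi_generation, storage_energy_per_unit_delivered):
        # Energy invested per unit of electricity finally delivered:
        invested = 1.0 / eroi_generation + storage_energy_per_unit_delivered
        return 1.0 / invested

    EROI_WIND = 16.0           # unbuffered figure quoted in the first bullet
    STORAGE_INVESTMENT = 0.19  # assumed embodied energy of storage per unit delivered

    print(f"Buffered EROI ~ {buffered_eroi(EROI_WIND, STORAGE_INVESTMENT):.1f}")
    # Prints roughly 4.0, well below the threshold of about 7 quoted above.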

A report that chased down the energy in the infinite regressions of thousands of parts would take a lifetime to write and run to over a hundred thousand pages. Therefore boundaries have to be set, which leads to never-ending fights between scientists. Just as tobacco-industry-funded scientific studies tended to find cigarettes did not cause cancer, energy-industry-sponsored scientists tend to use very narrow boundaries and cherry-pick LCA data to come up with positive EROEI results, usually published in non-peer-reviewed journals, which means the data and methods are often unavailable, making the results as trustworthy as science fiction. Systems ecologists, the experts and inventors of EROEI methodology, use wider boundaries, include more steps and components, use energy rather than financial data whenever possible, and publish in peer-reviewed journals. Peer-reviewed journals require review by scientists in the same field, and the data and methods are available to everyone so that the results can be verified and reproduced.

On average, the EROEI results of university systems ecologists in peer-reviewed, high-quality, respected journals are much lower than those of energy-industry-sponsored scientists in non-peer-reviewed industry publications.

Alternative energy resources must be sustainable and renewable

What’s the point of making biofuels if unsustainable amounts of fresh water, topsoil, natural gas fertilizers, oil-based pesticides, and phosphorus are used?

Or windmills and solar PV, since they both depend on scarce materials and on energy-intensive, extremely damaging mining to get the rare (earth) and platinum metals required, which could lead to even more wars than we have now over oil, since these minerals exist only in foreign countries.

Nevertheless, EROEI studies are valuable because they reveal at least some of the embedded fossil energy (the “oiliness”), even if it’s only a tiny fraction, given how long it would take to include all 30,000 parts of a car or 8,000 parts of a windmill (AWEA 2012). The more studies you read, the better you can judge whether the boundaries were too narrow and which scientists wrote the most complete and fair study.

Civilization needs energy resources with an EROEI of at least 12

Charles A. S. Hall, who founded EROEI methodology, initially thought an EROEI of at least 3 was needed to keep civilization as we know it operating. After three decades of research, he recently co-authored a paper making the case that an EROEI of at least 12-14 is needed (Lambert et al. 2014).
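
One way to see why a double-digit threshold matters is the “net energy cliff”: the fraction of gross energy left over for society is (EROEI - 1)/EROEI, which changes little between an EROEI of 50 and 15 but falls off steeply as EROEI approaches the low single digits. The sketch below just tabulates that ratio; the 12-14 threshold itself comes from Lambert and Hall’s broader analysis, not from this formula alone.

```python
# The "net energy cliff": the fraction of gross energy left for society
# after paying the energy cost of getting energy is (EROI - 1) / EROI.

def net_energy_fraction(eroi):
    return (eroi - 1) / eroi

for eroi in (50, 20, 14, 12, 8, 5, 3, 1.5):
    print(f"EROI {eroi:>4}: {net_energy_fraction(eroi):6.1%} of gross energy is surplus")
```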

Conclusion

An alternative energy resource built to replace oil had better have an EROEI over 12, or it’s just a Rube Goldberg contraption.

Rube Goldberg Pencil Sharpener

Open window (A) and fly kite (B). String (C) lifts small door (D) allowing moths (E) to escape and eat red flannel shirt (F). As weight of shirt becomes less, shoe (G) steps on switch (H) which heats electric iron (I) and burns hole in pants (J). Smoke (K) enters hole in tree (L), smoking out opossum (M) which jumps into basket (N), pulling rope (O) and lifting cage (P), allowing woodpecker (Q) to chew wood from pencil (R), exposing lead. Emergency knife (S) is always handy in case opossum or the woodpecker gets sick and can’t work.

References

AWEA. American Wind Energy Association. 2012. Anatomy of a Wind Turbine. There are over 8,000 components in each turbine assembly.

Farrell et al. January 27, 2006. Ethanol Can Contribute to Energy and Environmental Goals. Science 311: 506–508.

Garrett, L. 2003. Betrayal of Trust: The Collapse of Global Public Health. Oxford University Press.

Lambert, J. G., Hall, C. A. S., et al. 2014. Energy, EROI and quality of life. Energy Policy 64: 153–167.

Lambert, J., Hall, C., et al. November 2012. EROI of Global Energy Resources: Preliminary Status and Trends. State University of New York, College of Environmental Science and Forestry.

Pimentel, D and Patzek, T. March 2005. Ethanol Production Using Corn, Switchgrass, and Wood; Biodiesel Production Using Soybean and Sunflower. Natural Resources Research, Vol. 14 #1.

USDA. March 2010. Energy Use in the U.S. Food System. United States Department of Agriculture.


Climate Change dominates news coverage at expense of other existential planetary boundaries

Preface. In the half dozen science magazines and newspapers I get, almost the only environmental stories are about climate change. Yet there are 8 other ecological boundaries (Rockström 2009) we must not cross (marked with an asterisk below) and dozens of other existential threats as well.

Global peak oil production may have already happened in 2018. It is likely the decline rate will be 6%, increasing exponentially by +0.015% a year (see the post “Giant oil field decline rates and peak oil”), so after 16 years remaining oil production will be just 10% of what it was at the peak. In a 2022 paper, Laherrère, Hall, et al. estimated that there are 25 years of conventional oil left, but 80% of that is owned by OPEC.

If peak oil happened in 2018, then CO2 levels may be under 400 ppm by 2100 as existing CO2 and the much lower future emissions are absorbed by oceans and land. The IPCC never even modeled peak oil in their dozens of scenarios because they assumed we’d be exponentially increasing our use of fossil fuels until 2400. They never asked geologists what the oil, coal, and natural gas reserves were, assumed we’d use methane hydrates, and made many other wrong assumptions.

Meanwhile, all the ignored ecological disasters will become far more obvious. They’re papered over with fossils today. Out of fresh water? Just drill another 1,000 feet down. Eutrophied water? Build a $500 million water treatment plant. Fisheries collapsed? Go to the ends of the earth to capture the remaining schools of fish.

The real threat is declining fossil production, yet climate change gets nearly all the coverage. And I’ve left out quite a few other threats, such as “nuclear war” with 17,900 results since 2016 in scholar.google.com.

Alice Friedemann  www.energyskeptic.com  Author of Life After Fossil Fuels: A Reality Check on Alternative Energy; When Trucks Stop Running: Energy and the Future of Transportation”, Barriers to Making Algal Biofuels, & “Crunch! Whole Grain Artisan Chips and Crackers”.  Women in ecology  Podcasts: WGBH, Jore, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 &278, Peak Prosperity,  Index of best energyskeptic posts

***

Scholar       USA Today   WSJ       NYT      Search term
(2016+)       (2013+)     (1997+)   (2016+)

2,360,000     11,100      28,400    9,379    “climate change” *
74,800        260         255       47       “soil erosion” (50,900), “soil degradation” (23,900)
61,200        397         989       373      deforestation
43,000        7           10        3        eutrophication * (a result of too much nitrogen & phosphorus applied to farmland)
32,800        19          100       23       “biodiversity loss” *
24,000        342         215       94       overpopulation
22,800        63          38        44       “ocean acidification” *
11,800        23          17        15       “chemical pollution” *
8,743         58          5         2        “groundwater depletion” (7,100), “aquifer depletion” (1,320), “freshwater depletion” (323) *
8,030         17          736       9        “peak oil”
5,100         18          3         0        “stratospheric ozone depletion” *
4,400         0           0         0        bioinvasion
2,259         4           0         0        “phosphorus depletion” and “phosphate depletion”
2,210         34          207       25       “proven oil reserves”
1,320         0           0         0        “land system change” *
971           0           0         0        “atmospheric aerosol loading” *
900           2           1         1        “fishery collapse” (657), “fishery depletion” (89), “fishery decline” (154)
47            0           0         0        “net plant production” * (NPP encompasses 5 of Rockstrom’s 9 boundaries: land-use change, freshwater use, biodiversity loss, and the global nitrogen and phosphorus cycles; it is also affected by climate change and chemical pollution. Running, S. W. 2012. A Measurable Planetary Boundary for the Biosphere. Science.)
304,380       1,244       2,576     636      Total, excluding climate change
Table 1. Keyword counts in the scholarly literature (scholar.google.com, since 2016-1-1) and in the New York Times (since 2016-1-1), USA Today (since 2013-1-1), and the Wall Street Journal (since 1997-1-1)

* Rockström, J., et al. 2009. Planetary boundaries: exploring the safe operating space for humanity. Ecology and Society.

Table 1 shows that non-climate-change issues make up just 1.2% of scholarly publications, and 11% of coverage in USA Today, 9% in the WSJ, and 6.8% in the NYT.

The rant continues. The reason I am so annoyed with all the attention to climate change is that it became THE PROBLEM, and THE SOLUTION was to generate electricity with wind and solar power to lower emissions.  But as we all know, there have been no closures of fossil fuel plants (coal plants were replaced with natural gas plants double their size) because of the lack of energy storage for renewables, the inability of wind and solar to scale up, and the fact that fossil plants still supply two-thirds of generation and peak power.  Since rebuildables require fossil fuels at every single step of their life cycle, they never were a solution.  They were simply a distraction from reality.

If the actual problem is that finite fossil fuels power our civilization and their peak production is near at hand, then carrying capacity will be far less. Pimentel (1991) estimated the U.S. could support only 40 to 100 million people without fossil fuels. So we should have been reducing LEGAL immigration to far less than the one million a year it has been since the 1960s, made birth control and abortion free and easy to get, and imposed high taxes on having more than one child.

Most important by far: since peak fossil fuels, rather than climate change, is the problem, we need to return to organic farming and stop using pesticides, build up soils with compost and cover crops, plant windbreaks so that soil on thousands of square miles can’t wash and blow away so easily, stockpile phosphate, start growing multiple crops locally everywhere, and so on.  We need to train the youngest generation how to do this, since eventually 90% of Americans will be farmers.  And anyone who can grow a victory garden should be doing it, since less consumption will lower standards of living until a new economic system not dependent on endless growth develops.

There needs to be less consumption across the board, and very high taxes on the top 1% to redistribute wealth.  

There needs to be a year or two of mandatory service after high school to do infrastructure and other worthwhile projects in agriculture, irrigation, and more to prepare for a low energy world and to lessen the need to create private sector jobs in an economy that is shrinking.

We also need to plant hardwood trees and stop exporting forests to Europe to burn for their “renewable” energy, since we’ll need a lot of trees when we return to biomass as our main source of energy, of infrastructure for ships and buildings, and of charcoal to make bricks, metal, ceramics, glass, etc.

Just look at postcarbon.org and the Transition Towns movement for ideas; preparing for this is the reason they exist.

Climate change efforts have done nothing and have distracted us from what needs to be done.  CC activists didn’t even try to lower the speed limit, ration gasoline, restrict the days when people could drive, or mandate less consumption, and just about every single paper on anything to do with energy was about how to lower emissions rather than about energy efficiency.

I’ve collected reasons for why people deny a future energy crisis in “Telling others about peak oil and limits to growth”. Here’s an excerpt:

  1. It’s impossible because whad’ya mean energy crash, never heard of it.
  2. Because we’re doing fine. Just some hiccups in the supply.
  3. Because they know what they’re doing and would have told us by now.
  4. Because I haven’t got time for an energy crash right now.
  5. Even if I had time, I couldn’t afford one. Look at my credit card.
  6. The oil wells have never run dry before, so they never will.
  7. Rain refills water wells. For oil wells: acid rain or something.
  8. Because oil wells are big slot machines, put money in, get oil out.
  9. Because they’ll think of alternatives-ha-ha-silly-billy.
  10. The oil companies have things up their sleeve they’re going to bring in.
  11. Because God looks after me.
  12. I need a car for work so it’s impossible.
  13. Impossible because you’re just trying to scare us.
  14. It’s impossible because you’re crazy.
  15. It’s impossible because ya have to stay positive.

No wonder everyone preferred Climate Change. With windmills and solar panels we could continue our lifestyle and be squeaky clean and green.

Meanwhile, we’ve wasted decades of preparation on Climate Change instead of the energy crisis.

References

Pimentel, D. et al. 1991. Land, Energy, and Water. The Constraints Governing Ideal U.S. Population Size. Negative Population Growth.


What can California do about sea level rise?

Projected sea level rise from one meter (dark red) to six meters (light orange) in California’s Bay Area. (Weiss and Overpeck 2011)

Preface. Nearly all, if not all, possible solutions to rising sea levels along all the coasts in the world are listed below, along with their challenges.

Related: Which S.F. Bay areas will be affected here.

Alice Friedemann  www.energyskeptic.com  Author of Life After Fossil Fuels: A Reality Check on Alternative Energy; When Trucks Stop Running: Energy and the Future of Transportation”, Barriers to Making Algal Biofuels, & “Crunch! Whole Grain Artisan Chips and Crackers”.  Women in ecology  Podcasts: WGBH, Jore, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 &278, Peak Prosperity,  Index of best energyskeptic posts

***

Parsons T (2021) The Weight of Cities: Urbanization Effects on Earth’s Subsurface. AGU Advances.

Cities don’t just have sea level rise to worry about – they’re also slowly sinking. San Francisco may have sunk 3.1 inches and is at risk of 11.8 inches of sea level rise by 2050, which may exacerbate future flooding.  How much weight are we talking about? Adding up all the buildings and their contents, the study estimated the weight of the San Francisco Bay Area (population 7.75 million) at around 3.5 trillion pounds, the equivalent of 8.7 million Boeing 747s. This is probably an underestimate, since it didn’t include objects outside of buildings such as transportation infrastructure, vehicles, or people.
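
As a back-of-envelope check on that comparison (assuming an empty 747 weighs roughly 400,000 pounds, my figure, not the study’s):

```python
# Rough check of the 747 comparison. The ~400,000 lb empty weight of a
# Boeing 747 is my assumption, not a number from the Parsons study.

bay_area_weight_lb = 3.5e12   # estimated weight of Bay Area buildings and contents
empty_747_lb = 4.0e5          # assumed empty weight of one 747

# Prints about 8.8 million, in the same ballpark as the study's 8.7 million.
print(f"Equivalent 747s: {bay_area_weight_lb / empty_747_lb / 1e6:.1f} million")
```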

BCDC. 2018. Adapting to Rising tides. Findings by Sector. San Francisco Bay Conservation and Development commission.

Clearly the homes, hotels, and other visible infrastructure along the bay will be hammered, or should I say dunked.  But there are other components of infrastructure that will be affected that may not be as obvious:

  • Energy facilities: The U.S. has 101 oil, natural gas, and electric generation plants that would be affected by a 1-foot rise in sea level, and 206 at 10 feet, 41 of them in California (Strauss 2012). Add to that 15 substations, a critical component of the electric system with expensive and dangerous equipment such as transformers, capacitors, and voltage regulators.
  • Although the Diablo Canyon and San Onofre power reactors are being closed, their nuclear waste will remain and be a problem for a long time, now that the Yucca Mountain project has been shut down and no new disposal sites have been proposed.
  • There are 350 contaminated land sites in the Bay Area that would be affected by a 16-inch rise, and 460 by a 55-inch rise.  Contaminants include industrial solvents (such as acetone, benzene, and chlorinated solvents and their byproducts), acids, paint strippers, degreasers, caustic cleaners, pesticides, chromium and cyanide wastes, polychlorinated biphenyls (PCBs) and other chlorinated hydrocarbons, radium associated with dial painting and stripping, medical debris, unexploded ordnance, metals (e.g., lead, chromium, nickel), gasoline, diesel, and petroleum byproducts, and waste oils.
  • There are also many hazardous material sites with wastes that are toxic, ignitable, corrosive, or reactive, including pesticides, cleaning solvents, pharmaceutical waste, and so on.
  • As sea levels rise, storm water infrastructure will back up and cause inland flooding, and salt water could corrode and damage infrastructure designed to handle only fresh water. If pump stations are flooded, their sensitive electrical and computerized components would stop functioning.  The soils around the bay can liquefy in a seismic event, causing underground pipes to move, bend, or break, and excess storm water and heavy rainfall could make these soils even wetter and more vulnerable to liquefaction.
  • In addition, wastewater treatment systems, roads, railroads, airports, commuter rail, ferry terminals, and telecommunications infrastructure will be harmed.
  • Yet to be studied are the impacts on transmission lines, pipelines, and telecommunications infrastructure.

What can California do about rising sea levels?

Levees and Seawalls. Protecting California from a 1.4 meter rise in sea level would require 1,100 miles of levees and seawalls, which would cost roughly $14 billion to build (Table 1) and $1.4 billion a year to operate and maintain.  No one is going to spend $14 billion on this, because there’s no guarantee the levees and seawalls would work, and the sea is going to keep rising for millennia, constantly overtopping whatever is put in place. An unusually large storm can also cause a levee to rupture, as happened in New Orleans during Hurricane Katrina, even if it has been well maintained.

Paradoxically, hard shoreline protection can increase vulnerability. It is not as effective as natural shorelines at dissipating the energy from waves and tides. As a result, armored shorelines tend to be more vulnerable to erosion and to increase erosion of nearby beaches. Structural flood protection can also increase human vulnerability by giving people a false sense of security and encouraging development in areas that are vulnerable to flooding.

A huge dike under the Golden Gate Bridge won’t work for many reasons – it would cost four times as much as the Three Gorges Dam, and California gets huge floods (e.g., an ARkStorm).  With a dike in place to hold back the rising sea, we’d be flooded from the inland side by upstream flooding in the Bay’s freshwater tributaries.

Elevated development is a short-term strategy. Unless it’s on stilts directly over water, it alters the characteristics of shorelines and will need protection just like low-lying development; its only advantage is that it staves off the threat of sea level rise for longer.  We don’t know if higher land or structures will support high-density, transit-oriented new development. Much of our region’s high-density neighborhoods and transit are near the Bay’s shoreline. If low-density development is allowed along the shoreline, it could increase global warming emissions, and it may not warrant expensive protection measures in the future.

Floating development: structures that float on the surface of the water or that float during floods or tides.  Floating development works only in protected areas, not in areas subject to wind and wave action from storms, such as the ocean coastline. This type of development has not yet been demonstrated in high-density cities.  From an engineering perspective, many structures can be built to float, though they cannot be retrofitted to do so.

Barriers are ecologically damaging: they would alter the Bay’s salinity and sediment flows, harm wetlands, wildlife, and endangered species, increase sedimentation (making parts of the Bay shallower), and increase coastal erosion.

Floodable development: structures designed to handle flooding or retain storm water.  Floodable development could be hazardous. Storm water, particularly at the seaward end of a watershed, is usually polluted with heavy metals and organic chemicals, in addition to sediment and bacteria.

Large quantities of storm water sitting on the surface, or in underground storage facilities, could pose a public health hazard during a flood or leave contamination behind. This could be a particular problem in areas with combined sewer systems, such as San Francisco, where wastewater and street runoff go to the same treatment system. Also, wastewater treatment systems that commonly treat the hazards of combined sewer effluent before releasing it into the Bay do not work well with salt water mixed in. If floodable development strategies are designed to hold and release brackish water, new treatment methods will be needed for the released water to meet water quality standards.

Finally, emergency communication tools and extensive public outreach and management would be required to prevent people from misusing or getting trapped in flooding zones. Floodable development is untested. We don’t know if buildings and infrastructure can be designed or retrofitted to accommodate occasional flooding in a cost-effective way. It is not clear exactly how much volume new floodable development tools will hold. Some of the more heavily engineered solutions, such as a water-holding parking garage, may not turn out to be more beneficial than armoring or investments in upsizing an existing wastewater system.

Living shorelines. Wetlands are natural; they absorb floods, slow erosion, and provide habitat.  Living shorelines require space and time to work. Wetlands are generally “thicker” than linear armoring strategies such as levees, so they need more land. They also require management, monitoring, and time to become established.  Living shorelines are naturally adaptive to sea level rise as long as two conditions are met: the shoreline must have space to migrate landward, and it must be sufficiently supplied with sediment to “keep up” with sea level rise. Due to the many dams and the modified hydrology of the Delta and its major rivers, the latter is a concern for restoration success in San Francisco Bay.  Wetlands will never be restored to their historic extent along the Bay, in part because of the cost of moving development inland from urbanized areas at the water’s edge.  Important challenges for our region will be determining how much flooding new tidal marshes could attenuate, restoring them in appropriate places, and conducting restoration at a faster rate than we would without the looming threat of rising seas.

Managed Retreat. Abandon threatened areas near the shoreline. This strategy is a political quagmire. It involves tremendous legal and equity issues, because not all property owners are willing sellers. And in many places, shoreline communities are already disadvantaged and lack the adaptive capacity to relocate. In addition, retreat may carry costs beyond relocation or property costs if site cleanup (such as removing toxics) is needed following demolition.

Consequences for the ports and airports

The main problem for shipping is not the port.  It’s the roads and railroad tracks surrounding the port that are vulnerable, many of them less than 10 feet above sea level, and there’s nowhere to move them.  Raising them would make them vulnerable to erosion and liquefied soils from floods or earthquakes.

An even bigger deal would be any harm done to the Port of Los Angeles-Long Beach, which handles 45%–50% of the containers shipped into the United States. Of these containers, 77% leave California—half by train and half by truck (Christensen 2008).

The Port of Los Angeles estimates that $2.85 billion in container terminals would need to be replaced.  If the port is shut down for any reason, the cost is roughly $1 billion per day as economic impacts ripple through the economy and shipments are delayed or re-routed, according to the National Oceanic and Atmospheric Administration’s 2008-2017 Strategic Plan.  Replacing the nearby roads, rails, and grade separations would cost $1 billion. If the port’s electrical infrastructure were damaged, equipment such as cranes would be non-operational, causing delays and disruptions in cargo loading and offloading; this equipment would cost $350 million to replace.  The port also has an 8.5-mile breakwater, with two openings to allow ships to enter, that prevents waves from reaching the harbor. An impaired breakwater would render shipping terminals unusable and interrupt flows of cargo.  The breakwater has a $500 million replacement value and is managed by the Army Corps of Engineers.

Airports. Meanwhile, all of the airports in the SF Bay Area are vulnerable to sea level rise, especially San Francisco and Oakland.  In 2007, Oakland International Airport transported 15 million passengers and 647,000 metric tons of freight.  San Francisco International Airport is the nation’s 13th-busiest airport; it transported 36 million people in 2007 and handled 560,000 metric tons of freight worth $25 billion in exports and $32 billion in imports, more than double the $23.7 billion handled by vessels at the Port of Oakland.

County             Miles of levees & seawalls     Cost (2000 dollars)
Alameda            110                            $   950,000,000
Del Norte           39                            $   330,000,000
Contra Costa        63                            $   520,000,000
Humboldt            42                            $   460,000,000
Los Angeles         94                            $ 2,600,000,000
Marin              130                            $   930,000,000
Mendocino            1                            $    34,000,000
Monterey            53                            $   650,000,000
Napa                64                            $   490,000,000
Orange              77                            $ 1,900,000,000
San Diego           47                            $ 1,300,000,000
San Luis Obispo     13                            $   210,000,000
San Mateo           73                            $   580,000,000
Santa Barbara       13                            $   180,000,000
Santa Clara         51                            $   160,000,000
Santa Cruz          15                            $   280,000,000
Solano              73                            $   720,000,000
Sonoma              47                            $   240,000,000
Ventura             29                            $   790,000,000

Table 1. $14,000,000,000 cost to build 1,100 miles of defenses needed to guard against flooding from a 1.4 m sea-level rise, by county.
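
As a quick sanity check, the per-county figures in Table 1 sum to roughly 1,000 miles and $13 billion, close to the 1,100 miles and $14 billion cited above (the underlying report presumably rounds or includes items not broken out here). A minimal tally:

```python
# Quick tally of Table 1: miles of levees/seawalls and construction cost
# (year-2000 dollars) by county, as listed above.

counties = {
    "Alameda": (110, 950e6), "Del Norte": (39, 330e6), "Contra Costa": (63, 520e6),
    "Humboldt": (42, 460e6), "Los Angeles": (94, 2_600e6), "Marin": (130, 930e6),
    "Mendocino": (1, 34e6), "Monterey": (53, 650e6), "Napa": (64, 490e6),
    "Orange": (77, 1_900e6), "San Diego": (47, 1_300e6), "San Luis Obispo": (13, 210e6),
    "San Mateo": (73, 580e6), "Santa Barbara": (13, 180e6), "Santa Clara": (51, 160e6),
    "Santa Cruz": (15, 280e6), "Solano": (73, 720e6), "Sonoma": (47, 240e6),
    "Ventura": (29, 790e6),
}

total_miles = sum(miles for miles, _ in counties.values())
total_cost = sum(cost for _, cost in counties.values())

# Prints about 1,034 miles and $13.3 billion.
print(f"Total: {total_miles:,} miles, ${total_cost / 1e9:.1f} billion (2000 dollars)")
```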

References

BCDC. 2018. Adapting to Rising tides. Findings by Sector. San Francisco Bay Conservation and Development commission.

Copeland, B., et al. November 24, 2012. What Could Disappear: Maps of 24 USA cities flooded as sea level rises. New York Times.

Grifman, P., et al. 2013. Sea level Rise Vulnerability Study for the City of Los Angeles. University of Southern California.

Heberger, M., et al. May 2009. The Impacts of Sea-Level Rise on the California Coast. Pacific Institute.

Conti, K., et al. November 20, 2007. Analysis of a Tidal Barrage at the Golden Gate. BCDC.

Preliminary Study of the Effect of Sea Level Rise on the Resources of the Hayward Shoreline. March 2010.  Philip Williams & Associates, Ltd.

Sorensen, R. M., et al. Chapter 6: Control of Erosion, Inundation, and Salinity Intrusion Caused by Sea Level Rise. Risingsea.net.

Strauss, B., Ziemlinski, R., 2012. Sea Level Rise Threats to Energy Infrastructure. Climate Central, Washington, DC.


Book review of “The Death of Expertise: the campaign against established knowledge and why it matters”

Preface.  Those who attack experts are exactly the people who will not read this book review (well, mainly some Kindle notes) of Nichols’ “The Death of Expertise: The Campaign against Established Knowledge and Why it Matters”. They scare me, and they scare the author — that’s why he wrote it.  The Zombies are already among us on FOX News and hate talk radio, brains not dead, but not functioning very well, and proud of it.

Energyskeptic is about the death by a thousand cuts that leads to collapse, with fossil fuel decline the main one. Rejecting expertise in favor of “gut feelings”, superstitions, preferred notions, and a rejection of science is yet one more cut, one more factor leading to collapse.  A rejection of expertise is manifested by those who voted for Trump, whose ignorance and incompetence is literally killing people, quite a few of them his voters.  It’s happening in the covid-19 pandemic, in the attempts to get rid of Obamacare, and in the undoing of environmental rules and financial regulations that protected the poor and middle class from rapacious capitalists.

A few quotes from the book:

  • What I find so striking today is not that people dismiss expertise, but that they do so with such frequency, on so many issues, and with such anger.  
  • The death of expertise is not just a rejection of existing knowledge. It is fundamentally a rejection of science and dispassionate rationality, which are the foundations of modern civilization.
  • We have come full circle from a premodern age, in which folk wisdom filled unavoidable gaps in human knowledge, through a period of rapid development based heavily on specialization and expertise, and now to a postindustrial, information-oriented world where all citizens believe themselves to be experts on everything.
  • Some of us, as indelicate as it might be to say it, are not intelligent enough to know when we’re wrong, no matter how good our intentions.
  • There’s also the basic problem that some people just aren’t very bright. And as we’ll see, the people who are the most certain about being right tend to be the people with the least reason to have such self-confidence.  The reason unskilled or incompetent people overestimate their abilities far more than others is because they lack a key skill called “metacognition.”  
  • the root of an inability among laypeople to understand that experts being wrong on occasion about certain issues is not the same thing as experts being wrong consistently on everything. Experts are more often right than wrong, especially on essential matters of fact. And yet the public constantly searches for the loopholes in expert knowledge that will allow them to disregard all expert advice they don’t like.  
  • We all have an inherent tendency to search for evidence that already meshes with our beliefs. Our brains are actually wired to work this way, which is why we argue even when we shouldn’t.  
  • Colleges also mislead their students about their competence through grade inflation. When college is a business, you can’t flunk the customers. A study of 200 colleges and universities up through 2009 found that A was the most common grade, an increase of 30% since 1960.

Related links:

2020 Trumpers are resistant to experts — even their own

Alice Friedemann www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles,Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Nichols, T. 2017. The Death of Expertise. The Campaign Against Established Knowledge and Why it Matters. Oxford University Press.

The big problem is that we’re proud of not knowing things. Americans have reached a point where ignorance, especially of anything related to public policy, is an actual virtue. To reject the advice of experts is to assert autonomy, a way for Americans to insulate their increasingly fragile egos from ever being told they’re wrong about anything. It is a new Declaration of Independence: no longer do we hold these truths to be self-evident, we hold all truths to be self-evident, even the ones that aren’t true. All things are knowable and every opinion on any subject is as good as any other.

I wrote this because I’m worried. People don’t just believe dumb things; they actively resist further learning rather than let go of those beliefs. I was not alive in the Middle Ages, so I cannot say it is unprecedented, but within my living memory I’ve never seen anything like it.

Back in the late 1980s, when I was working in Washington, DC, I learned how quickly people in even casual conversation would immediately instruct me in what needed to be done in any number of areas, especially in my own areas of arms control and foreign policy. I was young and not yet a seasoned expert, but I was astonished at the way people who did not have the first clue about those subjects would confidently direct me on how best to make peace between Moscow and Washington. To some extent, this was understandable. Politics invites discussion. And especially during the Cold War, when the stakes were global annihilation, people wanted to be heard. I accepted that this was just part of the cost of doing business in the public policy world. Over time, I found that other specialists in various policy areas had the same experiences, with laypeople subjecting them to ill-informed disquisitions on taxes, budgets, immigration, the environment, and many other subjects. If you’re a policy expert, it goes with the job.

In later years, however, I started hearing the same stories from doctors. And from lawyers. And from teachers. And, as it turns out, from many other professionals whose advice is usually not contradicted easily. These stories astonished me: they were not about patients or clients asking sensible questions, but about those same patients and clients actively telling professionals why their advice was wrong. In every case, the idea that the expert knew what he or she was doing was dismissed almost out of hand.

Worse, what I find so striking today is not that people dismiss expertise, but that they do so with such frequency, on so many issues, and with such anger. Again, it may be that attacks on expertise are more obvious due to the ubiquity of the Internet, the undisciplined nature of conversation on social media, or the demands of the 24-hour news cycle. But there is a self-righteousness and fury to this new rejection of expertise that suggest that this isn’t just mistrust or questioning or the pursuit of alternatives: it is narcissism, coupled to a disdain for expertise as some sort of exercise in self-actualization.

This makes it all the harder for experts to push back and to insist that people come to their senses. No matter what the subject, the argument always goes down the drain of an enraged ego and ends with minds unchanged, sometimes with professional relationships or even friendships damaged. Instead of arguing, experts today are supposed to accept such disagreements as, at worst, an honest difference of opinion. We are supposed to “agree to disagree,” a phrase now used indiscriminately as little more than a conversational fire extinguisher. And if we insist that not everything is a matter of opinion, that some things are right and others are wrong … well, then we’re just being jerks, apparently.

There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that “my ignorance is just as good as your knowledge,” as Isaac Asimov once said.

In the early 1990s, a small group of “AIDS denialists,” including a University of California professor named Peter Duesberg, argued against virtually the entire medical establishment’s consensus that the human immunodeficiency virus (HIV) was the cause of Acquired Immune Deficiency Syndrome. There was no evidence for Duesberg’s beliefs, which turned out to be baseless. The Duesberg business might have ended as just another quirky theory defeated by research.

In this case, however, a discredited idea nonetheless managed to capture the attention of a national leader, with deadly results. Thabo Mbeki, then the president of South Africa, seized on the idea that AIDS was caused not by a virus but by other factors, such as malnourishment and poor health, and so he rejected offers of drugs and other forms of assistance to combat HIV infection in South Africa. By the mid-2000s, his government relented, but not before Mbeki’s fixation on AIDS denialism ended up costing, by the estimates of doctors at the Harvard School of Public Health, well over 300,000 lives and the births of some 35,000 HIV-positive children.

These are dangerous times. Never have so many people had so much access to so much knowledge and yet have been so resistant to learning anything. In the United States and other developed nations, otherwise intelligent people denigrate intellectual achievement and reject the advice of experts. Not only do increasing numbers of laypeople lack basic knowledge, they reject fundamental rules of evidence and refuse to learn how to make a logical argument. In doing so, they risk throwing away centuries of accumulated knowledge and undermining the practices and habits that allow us to develop new knowledge.

All of these choices, from a nutritious diet to national defense, require a conversation between citizens and experts. Increasingly, it seems, citizens don’t want to have that conversation. For their part, they’d rather believe they’ve gained enough information to make those decisions on their own, insofar as they care about making any of those decisions at all. On the other hand, many experts, and particularly those in the academy, have abandoned their duty to engage with the public. They have retreated into jargon and irrelevance, preferring to interact with each other only.

The death of expertise is not just a rejection of existing knowledge. It is fundamentally a rejection of science and dispassionate rationality, which are the foundations of modern civilization. It is a sign, as the art critic Robert Hughes once described late twentieth-century America, of “a polity obsessed with therapies and filled with distrust of formal politics,” chronically “skeptical of authority” and “prey to superstition.” We have come full circle from a premodern age, in which folk wisdom filled unavoidable gaps in human knowledge, through a period of rapid development based heavily on specialization and expertise, and now to a postindustrial, information-oriented world where all citizens believe themselves to be experts on everything.

Any assertion of expertise from an actual expert, meanwhile, produces an explosion of anger from certain quarters of the American public, who immediately complain that such claims are nothing more than fallacious “appeals to authority,” sure signs of dreadful “elitism,” and an obvious effort to use credentials to stifle the dialogue required by a “real” democracy. Americans now believe that having equal rights in a political system also means that each person’s opinion about anything must be accepted as equal to anyone else’s.

The immediate response from most people when confronted with the death of expertise is to blame the Internet. Professionals, especially, tend to point to the Internet as the culprit when faced with clients and customers who think they know better. As we’ll see, that’s not entirely wrong, but it is also too simple an explanation. Attacks on established knowledge have a long pedigree, and the Internet is only the most recent tool in a recurring problem that in the past misused television, radio, the printing press, and other innovations the same way.

The secrets of life are no longer hidden in giant marble mausoleums and the great libraries of the world. So in the past, there was less stress between experts and laypeople, but only because citizens were simply unable to challenge experts in any substantive way. Moreover, there were few public venues in which to mount such challenges in the era before mass communications.  We now live in a society where the acquisition of even a little learning is the endpoint, rather than the beginning, of education. And this is a dangerous thing.

Some of us, as indelicate as it might be to say it, are not intelligent enough to know when we’re wrong, no matter how good our intentions. Just as we are not all equally able to carry a tune or draw a straight line, many people simply cannot recognize the gaps in their own knowledge or understand their own inability to construct a logical argument.

Education is supposed to help us to recognize problems like “confirmation bias” and to overcome the gaps in our knowledge so that we can be better citizens.

In this hypercompetitive media environment, editors and producers no longer have the patience—or the financial luxury—to allow journalists to develop their own expertise or deep knowledge of a subject. Nor is there any evidence that most news consumers want such detail. Experts are often reduced to sound bites or “pull quotes,” if they are consulted at all. And everyone involved in the news industry knows that if the reports aren’t pretty or glossy or entertaining enough, the fickle viewing public can find other, less taxing alternatives with the click of a mouse or the press of a button on a television remote.

Maybe it’s not that people are any dumber or any less willing to listen to experts than they were a hundred years ago: it’s just that we can hear them all now.

A certain amount of conflict between people who know some things and people who know other things is inevitable. There were probably arguments between the first hunters and gatherers over what to have for dinner. As various areas of human achievement became the province of professionals, disagreements were bound to grow and to become sharper. And as the distance between experts and the rest of the citizenry grew, so did the social gulf and the mistrust between them. All societies, no matter how advanced, have an undercurrent of resentment against educated elites, as well as persistent cultural attachments to folk wisdom, urban legends, and other irrational but normal human reactions to the complexity and confusion of modern life.

Democracies, with their noisy public spaces, have always been especially prone to challenges to established knowledge. Actually, they’re more prone to challenges to established anything: it’s one of the characteristics that makes them “democratic”.

The United States, with its intense focus on the liberties of the individual, enshrines this resistance to intellectual authority even more than other democracies. The French observer Alexis de Tocqueville noted in 1835 that the denizens of the new United States were not exactly enamored of experts or their smarts. “In most of the operations of the mind, each American appeals only to the individual effort of his own understanding.” This distrust of intellectual authority was rooted, Tocqueville theorized, in the nature of American democracy. When “citizens, placed on an equal footing, are all closely seen by one another, they are constantly brought back to their own reason as the most obvious and proximate source of truth. It is not only confidence in this or that man which is destroyed, but the disposition to trust the authority of any man whatsoever.”

Such observations have not been limited to early America. Teachers, experts, and professional “knowers” have been venting about a lack of deference from their societies since Socrates was forced to drink his hemlock.

The Spanish philosopher José Ortega y Gasset, writing in 1930, decried the “revolt of the masses” and the unfounded intellectual arrogance that characterized it.

The historian Richard Hofstadter argued back in 1963 that overwhelming complexity produced feelings of helplessness and anger among a citizenry that knew itself increasingly to be at the mercy of smarter elites. “What used to be a jocular and usually benign ridicule of intellect and formal training has turned into a malign resentment of the intellectual in his capacity as expert,” Hofstadter warned. “Once the intellectual was gently ridiculed because he was not needed; now he is fiercely resented because he is needed too much.”

The law professor Ilya Somin wrote in 2015 that the “size and complexity of government” have made it “more difficult for voters with limited knowledge to monitor and evaluate the government’s many activities. The result is a polity in which the people often cannot exercise their sovereignty responsibly and effectively.” More disturbing is that Americans have done little in those intervening decades to remedy the gap between their own knowledge and the level of information required to participate in an advanced democracy. “The low level of political knowledge in the American electorate,” Somin correctly notes, “is still one of the best-established findings in social science.”

The death of expertise, however, is a different problem than the historical fact of low levels of information among laypeople. The issue is not indifference to established knowledge; it’s the emergence of a positive hostility to such knowledge. This is new in American culture, and it represents the aggressive replacement of expert views or established knowledge with the insistence that every opinion on any matter is as good as every other. This is a remarkable change in our public discourse.

The death of expertise actually threatens to reverse the gains of years of knowledge among people who now assume they know more than they actually do. This is a threat to the material and civic well-being of citizens in a democracy.

Some folks seized on the contradictory news stories about eggs (much as they did on a bogus story about chocolate being a healthy snack that made the rounds earlier) to rationalize never listening to doctors, who clearly have a better track record than the average overweight American at keeping people alive with healthier diets.

At the root of all this is an inability among laypeople to understand that experts being wrong on occasion about certain issues is not the same thing as experts being wrong consistently on everything. The fact of the matter is that experts are more often right than wrong, especially on essential matters of fact. And yet the public constantly searches for the loopholes in expert knowledge that will allow them to disregard all expert advice they don’t like.

No one is arguing that experts can’t be wrong, but they are less likely to be wrong than nonexperts. The same people who anxiously point back in history to the thalidomide disaster routinely pop dozens of drugs into their mouths, from aspirin to antihistamines, which are among the thousands and thousands of medications shown to be safe by decades of trials and tests conducted by experts. It rarely occurs to the skeptics that for every terrible mistake, there are countless successes that prolong their lives.

There are many examples of these brawls among what pundits and analysts gently refer to now as “low-information voters.” Whether about science or policy, however, they all share the same disturbing characteristic: a solipsistic and thin-skinned insistence that every opinion be treated as truth. Americans no longer distinguish the phrase “you’re wrong” from the phrase “you’re stupid.” To disagree is to disrespect. To correct another is to insult. And to refuse to acknowledge all views as worthy of consideration, no matter how fantastic or inane they are, is to be closed-minded.

The epidemic of ignorance in public policy debates has real consequences for the quality of life and well-being of every American. During the debate in 2009 over the Affordable Care Act, for example, at least half of all Americans believed claims by opponents like former Republican vice presidential nominee Sarah Palin that the legislation included “death panels” that would decide who gets health care based on a bureaucratic decision about a patient’s worthiness to live. (Four years later, almost a third of surgeons apparently continued to believe this.) Nearly half of Americans also thought the ACA established a uniform government health plan. Love it or hate it, the program does none of these things. And two years after the bill passed, at least 40% of Americans weren’t even sure the program was still in force as a law.

First, while our clumsy dentist might not be the best tooth puller in town, he or she is better at it than you.  Second, and related to this point about relative skill, experts will make mistakes, but they are far less likely to make mistakes than a layperson. This is a crucial distinction between experts and everyone else, in that experts know better than anyone the pitfalls of their own profession. Both of these points should help us to understand why the pernicious idea that “everyone can be an expert” is so dangerous.

Knowing things is not the same as understanding them. Comprehension is not the same thing as analysis.

We all have an inherent and natural tendency to search for evidence that already meshes with our beliefs. Our brains are actually wired to work this way, which is why we argue even when we shouldn’t. And if we feel socially or personally threatened, we will argue until we’re blue in the face.

There’s also the basic problem that some people just aren’t very bright. And as we’ll see, the people who are the most certain about being right tend to be the people with the least reason to have such self-confidence.  The reason unskilled or incompetent people overestimate their abilities far more than others is because they lack a key skill called “metacognition.” This is the ability to know when you’re not good at something by stepping back, looking at what you’re doing, and then realizing that you’re doing it wrong. Good singers know when they’ve hit a sour note; good directors know when a scene in a play isn’t working; good marketers know when an ad campaign is going to be a flop. Their less competent counterparts, by comparison, have no such ability. They think they’re doing a great job.

Pair such people with experts, and, predictably enough, misery results. The lack of metacognition sets up a vicious loop, in which people who don’t know much about a subject do not know when they’re in over their head talking with an expert on that subject. An argument ensues, but people who have no idea how to make a logical argument cannot realize when they’re failing to make a logical argument. In short order, the expert is frustrated and the layperson is insulted. Everyone walks away angry.

Dunning described the research done at Cornell as something like comedian Jimmy Kimmel’s point that when people have no idea what they’re talking about, it does not deter them from talking anyway: “In our work, we ask survey respondents if they are familiar with certain technical concepts from physics, biology, politics, and geography. A fair number claim familiarity with genuine terms like centripetal force and photon. But interestingly, they also claim some familiarity with concepts that are entirely made up, such as the plates of parallax, ultra-lipid, and cholarine. In one study, roughly 90% claimed some knowledge of at least one of the nine fictitious concepts we asked them about.”

In other words, the least-competent people were the least likely to know they were wrong or to know that others were right, the most likely to try to fake it, and the least able to learn anything. Dunning and Kruger have several explanations for this problem. In general, people don’t like to hurt each other’s feelings, and in some workplaces, people and even supervisors might be reluctant to correct incompetent friends or colleagues. Some activities, like writing or speaking, do not have any evident means of producing immediate feedback. You can only miss so many swings in baseball before you have to admit you might not be a good hitter, but you can mangle grammar and syntax every day without ever realizing how poorly you speak.

Confirmation Bias

Not everyone, however, is incompetent, and almost no one is incompetent at everything. What kinds of errors do more intelligent or agile-minded people make in trying to comprehend complicated issues? Not surprisingly, ordinary citizens encounter pitfalls and biases that befall experts as well. “Confirmation bias” is the most common—and easily the most irritating—obstacle to productive conversation, and not just between experts and laypeople. The term refers to the tendency to look for information that only confirms what we believe, to accept facts that only strengthen our preferred explanations, and to dismiss data that challenge what we already accept as truth. If we’ve heard Boston drivers are rude, the next time we’re visiting Beantown we’ll remember the ones who honked at us or cut us off. We will promptly ignore or forget the ones who let us into traffic or waved a thank you. For the record, in 2014 the roadside assistance company AutoVantage rated Houston the worst city for rude drivers. Boston was fifth.

For people who believe flying is dangerous, there will never be enough safe landings to outweigh the fear of the one crash. “Confronted with these large numbers and with the correspondingly small probabilities associated with them,” Paulos wrote in 2001, “the innumerate will inevitably respond with the non sequitur, ‘Yes, but what if you’re that one,’ and then nod knowingly, as if they’ve demolished your argument with their penetrating insight.”

We are gripped by irrational fear rather than irrational optimism because confirmation bias is, in a way, a kind of survival mechanism. Good things come and go, but dying is forever. Your brain doesn’t much care about all those other people who survived a plane ride.

Your intellect, operating on limited or erroneous information, is doing its job, trying to minimize any risk to your life, no matter how small. When we fight confirmation bias, we’re trying to correct for a basic function—a feature, not a bug—of the human mind.

Confirmation bias comes into play because people must rely on what they already know. They cannot approach every problem as though their minds are clean slates. This is not the way memory works, and more to the point, it would hardly be an effective strategy to begin every morning trying to figure everything out from scratch. Confirmation bias can lead even the most experienced experts astray. Doctors, for example, will sometimes get attached to a diagnosis and then look for evidence of the symptoms they suspect already exist in a patient while ignoring markers of another disease or injury.

In modern life outside of the academy, however, arguments and debates have no external review. Facts come and go as people find convenient at the moment. Thus, confirmation bias makes attempts at reasoned argument exhausting because it produces arguments and theories that are non-falsifiable. It is the nature of confirmation bias itself to dismiss all contradictory evidence as irrelevant, and so my evidence is always the rule, your evidence is always a mistake or an exception. It’s impossible to argue with this kind of explanation, because by definition it’s never wrong.

An additional problem is that most laypeople have never been taught, or have forgotten, the basics of the “scientific method.” This is the set of steps that lead from a general question to a hypothesis, testing, and analysis. Although people commonly use the word “evidence,” they use it too loosely; the tendency in conversation is to use “evidence” to mean “things which I perceive to be true,” rather than “things that have been subjected to a test of their factual nature by agreed-upon rules.”

Conspiracy Theories

The most extreme cases of confirmation bias are found not in the wives’ tales and superstitions of the ignorant, but in the conspiracy theories of more educated or intelligent people. Unlike superstitions, which are simple, conspiracy theories are horrendously complicated. Indeed, it takes a reasonably smart person to construct a really interesting conspiracy theory, because conspiracy theories are actually highly complex explanations.

Each rejoinder or contradiction only produces a more complicated theory. Conspiracy theorists manipulate all tangible evidence to fit their explanation, but worse, they will also point to the absence of evidence as even stronger confirmation. After all, what better sign of a really effective conspiracy is there than a complete lack of any trace that the conspiracy exists? Facts, the absence of facts, contradictory facts: everything is proof. Nothing can ever challenge the underlying belief.

One reason we all love a good conspiracy thriller is that it appeals to our sense of heroism. American culture in particular is attracted to the idea of the talented amateur (as opposed, say, to the experts and elites) who can take on entire governments—or even bigger organizations—and win.

More important and more relevant to the death of expertise, however, is that conspiracy theories are deeply attractive to people who have a hard time making sense of a complicated world and who have no patience for less dramatic explanations. Such theories also appeal to a strong streak of narcissism: there are people who would choose to believe in complicated nonsense rather than accept that their own circumstances are incomprehensible, the result of issues beyond their intellectual capacity to understand, or even their own fault.

Conspiracy theories are also a way for people to give context and meaning to events that frighten them. Without a coherent explanation for why terrible things happen to innocent people, they would have to accept such occurrences as nothing more than the random cruelty either of an uncaring universe or an incomprehensible deity.

The only way out of this dilemma is to imagine a world in which our troubles are the fault of powerful people who had it within their power to avert such misery. In such a world, a loved one’s incurable disease is not a natural event: it is the result of some larger malfeasance by industry or government.

Whatever it is, somebody is at fault, because otherwise we’re left blaming only God, pure chance, or ourselves.

Just as individuals facing grief and confusion look for reasons where none may exist, so, too, will entire societies gravitate toward outlandish theories when collectively subjected to a terrible national experience. Conspiracy theories and the flawed reasoning behind them, as the Canadian writer Jonathan Kay has noted, become especially seductive “in any society that has suffered an epic, collectively felt trauma. In the aftermath, millions of people find themselves casting about for an answer to the ancient question of why bad things happen to good people.” This is why conspiracy theories spiked in popularity after World War I, the Russian Revolution, the assassination of John F. Kennedy, and the terror attacks of September 2001, among other historical events.

Today, conspiracy theories are reactions mostly to the economic and social dislocations of globalization, just as they were to the aftermath of war and the advent of rapid industrialization in the 1920s and 1930s. This is not a trivial obstacle when it comes to the problems of expert engagement with the public: nearly 30% of Americans, for example, think “a secretive elite with a globalist agenda is conspiring to eventually rule the world.”

If trying to get around confirmation bias is difficult, trying to deal with a conspiracy theory is impossible. Someone who believes that the oil companies are suppressing a new car that can run on seaweed is unlikely to be impressed by your new Prius or Volt. The people who think alien bodies were housed at Area 51 won’t change their minds if they take a tour of the base; they’ll simply insist the alien research lab is underground.

Such theories are the ultimate bulwark against expertise, because of course every expert who contradicts the theory is ipso facto part of the conspiracy.

Stereotyping & Generalizations

Stereotyping is an ugly social habit, but generalization is at the root of every form of science. Generalizations are probabilistic statements, based in observable facts. They are not, however, explanations in themselves—another important difference from stereotypes. They’re measurable and verifiable. Sometimes generalizations can lead us to posit cause and effect, and in some cases, we might even observe enough to create a theory or a law that under constant circumstances is always true.

The hard work of explanation comes after generalization. Why are Americans taller than the Chinese? Is it genetic? Is it the result of a different diet? Are there environmental factors at work? There are answers to these questions somewhere, but whatever they are, it’s still not wrong to say that Americans tend to be taller than the Chinese, no matter how many slam-dunking exceptions we might find. To say that all Chinese people are short, however, is to stereotype. The key to a stereotype is that it is impervious to factual testing. A stereotype brooks no annoying interference with reality.  Stereotypes are not predictions, they’re conclusions. That’s why it’s called “prejudice”: it relies on pre-judging.

Dispassionate discussion helps

Conversations among laypeople, and between laypeople and experts, can get difficult because human emotions are involved, especially if they are about things that are true in general but might not apply to any one case or circumstance. That’s why one of the most important characteristics of an expert is the ability to remain dispassionate, even on the most controversial issues.

Experts must treat everything from cancer to nuclear war as problems to be solved with detachment and objectivity. Their distance from the subject enables open debate and consideration of alternatives, in ways meant to defeat emotional temptations, including fear, that lead to bias. This is a tall order, but otherwise conversation is not only arduous but sometimes explosive.

There are other social and psychological realities that hobble our ability to exchange information. No matter how much we might suffer from confirmation bias or the heavy hand of the Dunning-Kruger Effect, for example, we don’t like to tell people we know or care about that they’re wrong. Likewise, as much as we enjoy the natural feeling of being right about something, we’re sometimes reluctant to defend our actual expertise.

Not wanting to offend can lead to poor decisions, social insecurity, faking it

In one study, when two people were involved in repeated discussions and decision making—and establishing a bond between the participants was a key part of the experiment—researchers found that the less capable people advocated for their views more than might have been expected, and that the more competent member of the conversation deferred to those points of view even when they were demonstrably wrong.

This might make for a pleasant afternoon, but it’s a lousy way to make decisions. As Chris Mooney, a Washington Post science writer, noted, this kind of social dynamic might grease the wheels of human relationships, but it can do real harm where facts are at stake. The study, he wrote, underscored “that we need to recognize experts more, respect them, and listen to them. But it also shows how our evolution in social groups binds us powerfully together and enforces collective norms, but can go haywire when it comes to recognizing and accepting inconvenient truths.”

The reality is that social insecurity trips up both the smart and the dumb. We all want to be liked. In a similar vein, few of us want to admit to being lost in a conversation, especially when so much information is now so easily accessible. Social pressure has always tempted even intelligent, well-informed people to pretend to know more than they do, but this impulse is magnified in the Information Age.

People skim headlines and share articles on social media without actually reading them. Nonetheless, because people want to be perceived by others as intelligent and well informed, they fake it as best they can. As if all of this weren’t enough of a challenge, the addition of politics makes things even more complicated. Political beliefs among both laypeople and experts work in much the same way as confirmation bias. The difference is that beliefs about politics and other subjective matters are harder to shake, because our political views are deeply rooted in our self-image and our most cherished beliefs about who we are as people.

What we believe says something important about how we see ourselves as people. We can take being wrong about the kind of bird we just saw in our backyard, or who the first person was to circumnavigate the globe, but we cannot tolerate being wrong about the concepts and facts that we rely upon to govern how we live our lives. Take, for example, a fairly common American kitchen-table debate: the causes of unemployment. Bring up the problem of joblessness with almost any group of laypeople and every possible intellectual problem will rear its head. Stereotypes, confirmation bias, half-truths, and statistical incompetence all bedevil this discussion.

Consider a person who holds firmly, as many Americans do, to the idea that unemployed people are just lazy and that unemployment benefits might even encourage that laziness. Like so many examples of confirmation bias, this could spring from personal experience. Perhaps it proceeds from a lifetime of continuous employment, or it may be the result of knowing someone who’s genuinely averse to work. Every “help wanted” sign—which confirmation bias will note and file away—is further proof of the laziness of the unemployed. A page of job advertisements or a chronically irresponsible nephew constitutes irrefutable evidence that unemployment is a personal failing rather than a problem requiring government intervention.

Now imagine someone else at the table who believes that the nature of the American economy itself forces people into unemployment. This person might draw from experience as well: he or she may know someone who moved to follow a start-up company and ended up broke and far from home, or who was unjustly fired by a corrupt or incompetent supervisor. Every corporate downsizing, every racist or sexist boss, and every failed enterprise is proof that the system is stacked against innocent people who would never choose unemployment over work. Unemployment benefits, rather than subsidizing indolence, are a lifeline and perhaps the only thing standing between an honest person and complete ruin.

It’s unarguable that unemployment benefits suppress the urge to work in at least some people; it’s also undeniable that some corporations have a history of ruthlessness at the expense of their workers, whose reliance on benefits is reluctant and temporary. This conversation can go on forever, because both the Hard Worker on one side and the Kind Heart on the other can adduce anecdotes, carefully vetted by their own confirmation bias, that are always true.

There’s no way to win this argument, because in the end, there are no answers that will satisfy everyone. Laypeople want a definitive answer from the experts, but none can be had because there is not one answer but many, depending on circumstances. When do benefits encourage sloth? How often are people thrown out of work against their will, and for how long? These are nuances in a broad problem, and where our self-image is involved, nuance isn’t helpful. Unable to see their own biases, most people will simply drive each other crazy arguing rather than accept answers that contradict what they already think about the subject. The social psychologist Jonathan Haidt summed it up neatly when he observed that when facts conflict with our values, “almost everyone finds a way to stick with their values and reject the evidence.”

Dumbing down of education, lack of critical thinking taught

Many American institutions of higher education are failing to provide their students with the basic knowledge and skills that form expertise. More important, they are failing to provide the ability to recognize expertise and to engage productively with experts and other professionals in daily life. The most important of these intellectual capabilities, and the one most under attack in American universities, is critical thinking: the ability to examine new information and competing ideas dispassionately, logically, and without emotional or personal preconceptions. This is because attendance at a postsecondary institution no longer guarantees a “college education.” Instead, colleges and universities now provide a full-service experience of “going to college.” These are not remotely the same thing, and students now graduate believing they know a lot more than they actually do. Today, when an expert says, “Well, I went to college,” it’s hard to blame the public for answering, “Who hasn’t?” Americans with college degrees now broadly think of themselves as “educated” when in reality the best that many of them can say is that they’ve continued on in some kind of classroom setting after high school, with wildly varying results.

Students at most schools today are treated as clients, rather than as students. Younger people, barely out of high school, are pandered to both materially and intellectually, reinforcing some of the worst tendencies in students who have not yet learned the self-discipline that once was essential to the pursuit of higher education. Colleges now are marketed like multiyear vacation packages.

The new culture of education in the United States is that everyone should, and must, go to college. This cultural change is important to the death of expertise, because as programs proliferate to meet demand, schools become diploma mills whose actual degrees are indicative less of education than of training.

Young people who might have done better in a trade sign up for college without a lot of thought given to how to graduate, or what they’ll do when it all ends. Four years turns into five, and increasingly six or more. A limited course of study eventually turns into repeated visits to an expensive educational buffet laden mostly with intellectual junk food, with very little adult supervision to ensure that the students choose nutrition over nonsense.

Schools that are otherwise indistinguishable on the level of intellectual quality compete to offer better pizza in the food court, plushier dorms, and more activities besides the boring grind of actually going to class.  The cumulative result of too many “students,” too many “professors,” too many “universities,” and too many degrees is that college attendance is no longer a guarantee that people know what they’re talking about.

College is supposed to be an uncomfortable experience. It is where a person leaves behind the rote learning of childhood and accepts the anxiety, discomfort, and challenge of complexity that leads to the acquisition of deeper knowledge—hopefully, for a lifetime. A college degree, whether in physics or philosophy, is supposed to be the mark of a truly “educated” person who not only has command of a particular subject, but also has a wider understanding of his or her own culture and history. It’s not supposed to be easy.  

Over 75% of American undergraduates attend colleges that accept at least half their applicants. Only 4% attend schools that accept 25% or less, and fewer than 1% attend elite schools that accept fewer than 10% of their applicants. Students at these less competitive institutions then struggle to finish, with only half completing a bachelor’s degree within six years.

Many of these incoming students are not qualified to be in college and need significant remedial work. The colleges know it, but they accept students who are in over their heads, stick them in large (but cost-efficient) introductory courses, and hope for the best. Why would schools do this and obviously violate what few admissions standards they might still enforce? As James Piereson of the Manhattan Institute wrote in 2016, “Follow the money.”

Parenting obviously plays a major role here. Overprotective parents have become so intrusive that a former dean of first-year students at Stanford wrote an entire book in which she said that this “helicopter parenting” was ruining a generation of children.

More people than ever before are going to college, mostly by tapping a virtually inexhaustible supply of ruinous loans. Buoyed by this government-guaranteed money, and in response to aggressive marketing from tuition-driven institutions, teenagers from almost all of America’s social classes now shop for colleges the way the rest of us shop for cars. The idea that adolescents should first think about why they want to go to college at all, find schools that might best suit their abilities, apply only to those schools, and then visit the ones to which they’re accepted is now alien to many parents and their children.

This entire process means not only that children are in charge, but that they are already being taught to value schools for reasons other than the education those schools might provide. Schools know this, and they’re ready for it. In the same way the local car dealership knows exactly how to place a new model in the showroom, or a casino knows exactly how to perfume the air that hits patrons just as they walk in the door, colleges have all kinds of perks and programs at the ready as selling points, mostly to edge out their competitors over things that matter only to kids.

Driven to compete for teenagers and their loan dollars, educational institutions promise an experience rather than an education. I am leaving aside for-profit schools here, which are largely only factories that create debt and that in general I exclude from the definition of “higher education.” There’s nothing wrong with creating an attractive student center or offering a slew of activities, but at some point it’s like having a hospital entice heart patients to choose it for a coronary bypass because it has great food.

At many colleges, new students already have been introduced to their roommates on social media and live in luxurious apartment-like dorms. That ensures they basically never have to share a room or a bathroom, or even eat in the dining halls if they don’t want to. Those were the places where previous generations learned to get along with different people and manage conflicts when they were chosen at random to live with strangers in close and communal quarters.

In 2006, the New York Times asked college educators about their experiences with student email, and their frustration was evident. “These days,” the Times wrote, “students seem to view [faculty] as available around the clock, sending a steady stream of e-mail messages … that are too informal or downright inappropriate.” As a Georgetown theology professor told the Times, “The tone that they would take in e-mail was pretty astounding. ‘I need to know this and you need to tell me right now,’ with a familiarity that can sometimes border on imperative.”

Email, like social media, is a great equalizer, and it makes students comfortable with the idea of messages to teachers as being like any communication with a customer-service department. This has a direct impact on respect for expertise, because it erases any distinction between the students who ask questions and the teachers who answer them. As the Times noted, while once professors may have expected deference, their expertise seems to have become just another service that students, as consumers, are buying. So students may have no fear of giving offense, imposing on the professor’s time or even of asking a question that may reflect badly on their own judgment. Kathleen E. Jenkins, a sociology professor at the College of William and Mary in Virginia, said she had even received e-mail requests from students who missed class and wanted copies of her teaching notes.

Professors are not intellectual valets or on-call pen pals. They do not exist to resolve every student question instantly—including, as one UC Davis professor reported, advice about whether to use a binder or a subject notebook. One of the things students are supposed to learn in college is self-reliance, but why bother looking something up when the faculty member is only a few keystrokes away?

Small colleges do not have the resources—including the libraries, research facilities, and multiple programs—of large universities.

When rebranded universities offer courses and degree programs as though they are roughly equivalent to their better-known counterparts, they are not only misleading prospective students but also undermining later learning. The quality gap between programs risks producing a sense of resentment: if you and I both have university degrees in history, why is your view about the Russian Revolution any better than mine? Why should it matter that your degree is from a top-ranked department, but mine is from a program so small it has a single teacher? If I studied film at a local state college, and you went to the film program at the University of Southern California, who are you to think you know more than I? We have the same degree, don’t we?

We may not like any of these comparisons, but they matter in sorting out expertise and relative knowledge. It’s true that great universities can graduate complete dunderheads. Would-be universities, however, try to punch above their intellectual weight for all the wrong reasons, including marketing, money, and faculty ego. In the end, they are doing a disservice to both their students and society. Studying the same thing might give people a common language for further discussion of a subject, but it does not automatically make them peers.

Colleges also mislead their students about their competence through grade inflation. When college is a business, you can’t flunk the customers. A study of 200 colleges and universities up through 2009 found that A was the most common grade, an increase of 30% since 1960.  Grades of A or B account for over 80% of all grades in all subjects.  Even at Harvard the most common grade was an A.  Princeton tried to limit the faculty’s ability to give A grades in 2004, but the faculty fought it. When Wellesley tried to cap the average grade at a B+, those courses lost 20% of enrollments and participating departments lost a third of their majors.

In the end, grade inflation gives students unwarranted confidence in their abilities.  Almost all institutions collude on grades, driven by market pressures to make college fun, to make students attractive to employers, and to let professors escape the wrath of dissatisfied students.

Kindle notes end

Next chapter: the internet, books, radio, Rush Limbaugh, and above all Fox News as the death of expertise.  How people choose the news that suits them.  People don’t hate the media, just the news they don’t like or the views they don’t agree with.

Helpful hints

Be humble. Assume that the people who wrote a story know more about the subject than you do and spent a lot more time on that issue.

Vary your diet, consume mixed sources of media, including from other countries.

Be less cynical, or at least not so cynical. It’s rare that someone is setting out intentionally to lie to you.

Lots of good stuff.  Too much to enter notes on.

Trump won because he connected with voters who believe that knowing about things like America’s nuclear deterrent is pointy-headed claptrap. They didn’t know or care that Trump was ignorant or wrong, and most didn’t even recognize his errors.  Trump’s strongest supporters in 2016 were concentrated among people with low levels of education. “I love the poorly educated,” Trump exulted, and that love was clearly reciprocated.  In Trump, Americans who believe shadowy forces are ruining their lives and that intellectual ability is a suspicious characteristic in a national leader found their champion.  They believed that the political elite and their intellectual allies were conspiring against them.

Plummeting literacy and the growth of willful ignorance are part of a vicious circle of disengagement between citizens and public policy. People know little and care less about how they are governed, or how their economic, scientific, or political structures actually function. And as these processes become more complex and incomprehensible, citizens feel more alienated.  Overwhelmed, they turn away from education and civic involvement and withdraw into other pursuits. This in turn makes them less capable citizens, and the cycle continues and strengthens, especially when there are so many entertainments to escape into.  Many Americans have become almost childlike in their refusal to learn enough to govern themselves or guide the policies that affect their lives.

And quite a bit more about what’s resulted from Americans’ rejection of expertise.


Far Out #4: Power out of thin air, power out of freezing air, & Fruit power

[Figure: a thin film of protein nanowires generating electricity from atmospheric humidity. UMass Amherst researchers say the device can literally make electricity out of thin air. Credit: UMass Amherst/Yao and Lovley labs]

Preface. To get power out of thin air after oil, the 90% of people who have had to go back to farming are going to be making protein nanowires from microbes in the chicken coop in their spare time. Scaling up microbes to keep the lights on and trucks running is about as likely as powering the world with flea circuses. Now there’s an idea!

Fruit power: At such a small scale, this won’t solve the energy crisis, and no doubt takes more energy to construct than what it can store over its lifetime, but I’m delighted that durian and jackfruit waste is good for anything at all, in this case super-capacitors to charge phones and laptops.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Lee, K., et al. 2020. Aerogel from fruit biowaste produces ultracapacitors with high energy density and stability. Journal of Energy Storage 27.

Durian fruit is so famous for its awful smell that it is banned on several mass transit systems and in many hotels and airports. According to Wikipedia, animals can smell it from half a mile away, and its odor has been compared to raw sewage, rotten onions, turpentine, pig-shit, vomit, and skunk spray, garnished with gym socks.

Researchers have found a way to turn durian and jackfruit into electrochemical super-capacitors, which are like energy reservoirs that dole out energy smoothly. They can quickly store large amounts of energy within a small battery-sized device and then supply energy to charge electronic devices, such as mobile phones, tablets and laptops, within a few seconds.

“Using a non-toxic and non-hazardous green engineering method that used heating in water and freeze drying of the fruit’s biomass, the durian and jackfruit were transformed into stable carbon aerogels — an extremely light and porous synthetic material used for a range of applications.”

“Carbon aerogels make great super-capacitors because they are highly porous. We then used the fruit-derived aerogels to make electrodes which we tested for their energy storage properties, which we found to be exceptional,” Gomes says. “Compared to batteries, super-capacitors are not only able to charge devices very quickly but also in orders of magnitude greater charging cycles than conventional devices.”

The team found that the super-capacitors they prepared were significantly more efficient than current ones, which are made from activated carbon.
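[My comment: neither article gives numbers, but the standard capacitor energy formula E = ½CV² shows the scale involved. Below is a minimal back-of-envelope sketch in Python; the 3000 F and 2.7 V cell figures and the 12 Wh phone battery are my own illustrative assumptions, not values from the paper.]

# Rough scale check for supercapacitor energy storage
# (illustrative numbers only; not taken from the Lee et al. paper).
capacitance_F = 3000.0   # assumed cell capacitance, farads
voltage_V = 2.7          # assumed rated cell voltage, volts

energy_J = 0.5 * capacitance_F * voltage_V ** 2   # E = 1/2 * C * V^2
energy_Wh = energy_J / 3600.0

phone_battery_Wh = 12.0  # assumed typical smartphone battery capacity

print(f"Supercapacitor cell: {energy_J:.0f} J, about {energy_Wh:.1f} Wh")
print(f"Typical phone battery: about {phone_battery_Wh:.0f} Wh")
# Result: roughly 11,000 J, or 3 Wh per cell -- quick to charge and
# discharge, but it takes several such cells to hold the energy of
# one phone battery, which is why this is a power device, not a
# solution to bulk energy storage.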

Fialka, J., et al. 2020. To store renewable energy, try freezing air. Scientific American.

A British company called Highview Power proposes a novel solution: a storage system that uses renewable electricity from solar or wind to chill air into a liquid at -196 °C, where it can be kept in insulated high-pressure storage tanks for hours or even weeks. To discharge, the liquid air is allowed to warm and turn back into a gas. It expands so rapidly that it can spin a turbine driving an electric generator. The resulting electricity is fed into transmission lines when they are not congested.

[My comment: how much does this cost? How much energy does it take to keep the air chilled to -321 Fahrenheit? Is the energy return on energy invested positive?]
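A crude sanity check of those questions, in Python, using my own illustrative assumptions rather than any figures from Highview: roughly 0.4 kWh of electricity to liquefy a kilogram of air, roughly 0.2 kWh recovered per kilogram on expansion, and about 0.1% of the stored liquid boiling off per day from an insulated tank.

# Back-of-envelope liquid-air storage estimate
# (all inputs are assumptions, not Highview Power data).
liquefaction_kwh_per_kg = 0.4   # assumed electricity to liquefy 1 kg of air
recovered_kwh_per_kg = 0.2      # assumed electricity recovered per kg on expansion
boil_off_per_day = 0.001        # assumed fraction of stored liquid lost per day

round_trip = recovered_kwh_per_kg / liquefaction_kwh_per_kg
print(f"Round-trip efficiency before standing losses: {round_trip:.0%}")

days_stored = 14
remaining = (1 - boil_off_per_day) ** days_stored
print(f"Liquid remaining after {days_stored} days: {remaining:.1%}")

# With these assumptions you get back about half the electricity you put in.
# The scheme converts surplus power into dispatchable power; it is not an
# energy source, so its "return" depends entirely on how cheap the surplus
# electricity is and how much the hardware costs.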

UMA. 2020. New green technology generates electricity ‘out of thin air’. University of Massachusetts, Amherst.

Scientists have made a protein that creates electricity from moisture in the air, a device they call an “Air-gen,” or air-powered generator: a thin film of electrically conductive protein nanowires, less than 10 microns thick, produced by the microbe Geobacter, which can generate electric current from water vapor in the air.
