The fall of the Roman Empire from plagues and climate change

Preface. The book “The Fate of Rome: Climate, Disease, and the End of an Empire” by Kyle Harper shows the brutal effects of plagues, climate change, and their joint interaction on the Roman Empire. This doesn’t rule out all the other reasons for collapse, especially deforestation (see “A Forest Journey: The Story of Wood and Civilization” by John Perlin), topsoil erosion (see “Dirt: The Erosion of Civilizations” by David Montgomery), and barbarian invasions (see “The Fall of Rome: And the End of Civilization” and “Empires and Barbarians: The Fall of Rome and the Birth of Europe”).

I’ll leave it to you to ponder how these factors and outcomes might affect our own civilization after the net energy cliff begins.

Alice Friedemann, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report


How the Antonine Plague of AD 165 to 180 affected the pagan cults

To the ancient mind, plague was an instrument of divine anger. The Antonine Plague provoked spectacular acts of religious supplication at the civic level, fired by the great oracular temples of the god Apollo. The emperors started minting a new image on the currency, invoking “Apollo the Healer.” Religious solutions were desperately sought in Rome.

Pagan philosopher Porphyry blamed the insolence of the Christians for this health catastrophe. “And they marvel that the sickness has befallen the city for so many years, while Asclepius and the other gods are no longer dwellers among us. For no one has seen any succor for the people while Jesus is being honored.” Valerian implemented measures that were unequivocally aimed at hunting out Christians.

The rise of Christianity from the Cyprian Plague, AD 249–262 (Ebola or smallpox)

This plague was instrumental in making Christianity popular, because the pagan religions did nothing for pandemic victims. But because Christianity was able to forge kinship-like networks among perfect strangers based on an ethic of sacrificial love, Christian ethics turned the chaos of pestilence into a mission of aid. The vivid promise of the resurrection helped convince the faithful not to fear death. Priests pleaded for them to show love to the enemy, so they helped everyone, pagans and Christians alike. The compassion was conspicuous and consequential. Basic nursing of the sick can have massive effects on case fatality rates; with Ebola, for instance, the provision of water and food may drastically reduce the incidence of death. The Christian ethic was a blaring advertisement for the faith. The church was a safe harbor in the storm. The traditional civic cults lost favor.

So much death and the alternative of religious life made it hard to find soldiers

The empire’s fortunes reached a low tide in the AD 260s. The cities were never quite the same; even the healthiest late antique cities were smaller than they had formerly been, and in aggregate, even after the recovery, there were simply fewer major towns. The old days when army recruitment could be handled with a light touch were forever gone.

The fourth-century state had to contend with at least one truly novel alternative to military service: the allure of the religious life for men who might have heeded the call to arms. “The huge army of clergy and monks were for the most part idle mouths.” By the end of the fourth century, their total number was perhaps half the size of the actual army, a not inconsiderable drain on the manpower reserves of the empire. The civil service was also an attractive, and safe, career. The vexing issue of military recruitment in the fourth century was not directly a demographic problem.

Supply chains played a role in spreading disease

Supply chains and manufacturing were extensive. For example, consider the accoutrements of soldiers. The Roman soldier carried arms manufactured in over three dozen specialized imperial factories spaced across three continents. Officers wore bronze armor, embellished with silver and gold, made at five different plants. Roman archers would have used bows made in Pavia and arrows made in Mâcon. The foot soldier was dressed in a uniform (shirt, tunic, and cloak) made at imperial textile mills and finished at separate dye-works. He wore boots made at a specialized manufactory. When a Roman cavalryman of the later fourth century rode into battle, he was mounted on a mare or gelding that had been bred on imperial stud farms in Cappadocia, Thrace, or Spain. The troops were fed by a lumbering convoy system that carried provisions across continents in mind-boggling bulk. The emperor Constantius II ordered 3 million bushels of wheat to be stored in the depots of the Gallic frontier and another 3 million bushels in the Alps, before moving his field army to the west.

These extensive supply chains helped to spread the Antonine and Cyprian pandemics, followed in AD 542 by one of the worst pandemics of all: bubonic plague. The fusion of global trade and rodents led to the greatest disease event human civilization had ever experienced. The plague is an exceptional and promiscuous killer. Compared to smallpox, influenza, or a filovirus, Y. pestis is a huge microbe, lumbering along with an array of weapons. But it is in constant need of a ride.

The plague moved at two speeds: swiftly by sea and slowly by land. The mere sight of ships stirred terror.  Once infected rats made landfall, the diffusion of the disease was accelerated by Roman transportation networks. Carts and wagons carried rodent stowaways along Roman roads. It could spread anywhere that rats could travel.

Climate change and the Huns

The fourth century was a time of mega-drought. The two decades from ca. AD 350 to 370 were the worst multi-decadal drought event of the last two millennia. The nomads who called central Asia home suddenly faced a crisis as dramatic as the Dust Bowl. The Huns became armed climate refugees on horseback. Their mode of life enabled them to search out new pastures with amazing speed. In the middle of the fourth century, the center of gravity on the steppe shifted from the Altai region (on the borders of what is today Kazakhstan and Mongolia) to the west. By AD 370, Huns had started to cross the Volga River. The advent of these people on the western steppe was momentous, terrorizing the tribes north of Italy, who fled to the Roman Empire in great numbers to escape them (for a longer explanation of the effect of the Huns, see my book review of “The Fall of Rome: And the End of Civilization” and “Empires and Barbarians: The Fall of Rome and the Birth of Europe”).

They brought new cavalry tactics that terrorized the inhabitants of the trans-Danubian plains. Their horses were ferociously effective. In the words of a Roman veterinary text, “For war, the horses of the Huns are by far the most useful, by reason of their endurance of hard work, cold and hunger.” What made the Huns overwhelming was their basic weapon, the composite reflex bow.

The Justinian Plague (AD 541 to 749)

Justinian reigned as emperor from AD 527 to 565. Less than a decade into his reign, he had already accomplished more than most who had ever held the title. The first part of his reign was a flurry of action virtually unparalleled in Roman history. Between his accession in AD 527 and the advent of plague in AD 541, Justinian made peace with Persia, reattached vast stretches of the western territories to Roman rule, codified the entire body of Roman law, overhauled the fiscal administration, and executed the grandest building spree in the annals of Roman history. He survived a perilous urban revolt and tried to forge orthodox unity in a fractious church, through his own theological labors.

In the spring of AD 542 the plague (Yersinia pestis) appeared for the first time in the capital, Constantinople. For the next 23 years it became difficult to find and field armies. Taxes rose to unseen heights. There have been two major plague pandemics since then: the second began with the Black Death of AD 1346–53 and recurred for nearly 500 years, and the third began in Yunnan, China, and spread globally from 1894.

The dependence of the imperial system on the transport and storage of grain made the Roman Empire a haven for the black rat.

It required one last twist of fate for the bacterium to make its grand entrance into the Roman world. The Asian uplands had prepared a monster in the germ Y. pestis. The ecology of the empire had built an infrastructure awaiting a pandemic. The silk trade was ready to ferry the deadly package. But the final conjunction, what finally let the spark jump, was abrupt climate change. The year AD 536 is known as a “Year without Summer.” It was the terrifying first spasm in what is now known to be a cluster of volcanic explosions unmatched in the last three thousand years. Again in AD 540–41 there was a gripping volcanic winter. As we will see in the next chapter, the AD 530s and 540s were not just frosty. They were the coldest decades in the late Holocene. The reign of Justinian was beset by an epic, once-in-a-few-millennia cold snap, global in scale.

One thing is certain: the relation between climate and plague is not neat and linear. As with so many biological systems, it is marked by wild swings, narrow thresholds, and frenzied opportunism. Rainy years foster vegetation growth, which in turn sparks a trophic cascade in rodent populations. In excess, water can also flood the burrows of underground rodents and send them scurrying for new ground. Population explosions stir the emigration of rodents in search of new habitats.

Given that there is a strong correlation between volcanism and El Niño, the volcanic eruptions of the AD 530s may have stirred the Chinese marmots or gerbils carrying Y. pestis out of their familiar subterranean colonies, triggering an epizootic that reached the rodents of the seaborne trade routes heading west.

The first victims were the homeless. The toll started to rise. “…the mortality rose higher until the toll in deaths reached 5,000 a day, then 10,000, and then even more.” John’s daily counts are similar: he estimated from 5,000 rising to 7,000, 12,000, and 16,000 dead per day. At first, there remained a semblance of public order. “Men were standing by the harbors, at the crossroads and at the gates counting the dead.” According to John, the grisly tally continued until 230,000 had been numbered. “From then on the corpses were brought out without being counted.” John reckoned that over 300,000 were laid low. A tally of ca. 250,000–300,000 dead within a population of probably 500,000 would fall squarely within the most carefully derived estimates of death rates in places hit by the Black Death, 50–60%.

Ancient societies were always tilted toward the countryside. By now some 85–90% of the population lived outside of cities. What set the plague apart from earlier pandemics was its ability to infiltrate rural areas.

Plague had another, even more insidious stratagem in the long run. An obligate human parasite like smallpox lacked an animal reservoir where it could hide between outbreaks. Plague was more patient. As the wave of the first visitation pulled back from a ravaged landscape, small tidal pools were left behind. The plague lurked in any number of rodent species. These biological weapons of the plague—the fact that it does not confer strong immunity and that it has animal reservoirs—allowed the first pandemic to stretch across two centuries and cause repeated mass mortality events.

The social order wobbled and then collapsed. Work of all kinds stopped. The retail markets were shuttered, and a strange food shortage followed. The harvest rotted in the fields. Food was scarce.

Climate change effects of the Late Antique Little Ice Age (AD 536 to 660)

AD 536 was the coldest year of the last two millennia. Average summer temperatures in Europe fell instantly by up to 2.5°C, a truly staggering drop. In the aftermath of the eruption in AD 539–40, temperatures plunged worldwide. In Europe, average summer temperatures fell again by up to 2.7°C.

The decade of AD 536–545 was the coldest of the whole period.

Late in AD 589, torrential rains inundated Italy. The Adige flooded. The Tiber spilled its banks and crept higher than Rome’s walls. Whole regions of the city were under water. Churches collapsed, and the papal grain stores were ruined. No one remembered a flood so overwhelming. Then followed the plague again, in early AD 590.

The combination of plague and climate change sapped the strength of the empire.

The Justinian Plague effects on religion

For the first time in history, an apocalyptic mood came to permeate a large, complex society. Gregory’s sense of the approaching end was hardly his alone. The apocalyptic key transcended traditions, languages, and political boundaries in late antiquity. The plague was a last chance to turn from sin. And no sin weighed more heavily on the late antique heart than greed. Anxieties about wealth generated a perpetual moral crisis in late ancient Christianity. Earthly possessions were a trial of faith. Here the plague struck a tender nerve. The most memorable vignettes in John of Ephesus’ history of the plague linger over individuals singled out for punishment because of their greed. From one angle, the plague was God’s final, ghastly effort to pry loose our tight-gripped hold on material things.

Materially and imaginatively, the ascent of Islam would have been inconceivable without the upheavals of nature. The imminent judgment was a call to repentance.

Monotheism and eschatological warning were central to the prophet Muhammad’s religious message. “The coming judgment is in fact the second most common theme of the Quran, preceded only by the call to monotheism.” The Quran proclaims itself to be “a warning like those warnings of old: that Last Hour which is so near draws ever nearer.” “God’s is the knowledge of the hidden reality of the heavens and the earth. And so, the advent of the Last Hour will but manifest itself like the twinkling of an eye, or closer still.” The origins of Islam lie in an urgent eschatological movement, willing to spread its revelation by the sword, proclaiming the Hour to be at hand. Here, the eschatological energy of the seventh century found its most unrestrained development. It was electrifying. The message was the last element in the perfect storm. The southeastern frontier of the empire was erased almost overnight. Political lines of a thousand years were instantaneously and permanently redrawn.

Egypt and the Justinian Plague effects

The Nile valley was the most heavily engineered ecological district in the ancient world. Every year, at the inundation, its divine waters were diverted through an immense network of canals to irrigate the land. The intricate machinery of dikes, canals, pumps, and wheels was a huge symphony of human ingenuity and hard labor. The sudden disappearance of manpower in lands upriver threw the network of water control into disrepair. The controlled flow of water in the valley had been interrupted, and the downstream inhabitants in the fertile delta were overwhelmed. Remarkably, these events were replayed almost exactly in the aftermath of the medieval Black Death.

Famine effects

The twittering climate regime of late antiquity also had an intimate relationship with the pulses of epidemic mortality. Food shortage was a corollary of disease outbreak. Anomalous weather events might trigger explosive breeding of disease vectors. A devastating famine in Italy in AD 450–51 was coincident with a wave of malaria, for instance. Food crisis fanned desperate migrants in search of survival, overwhelming the normal environmental controls embedded in urban order. Food shortages forced the hungry to resort to consuming inedible or even poisonous food, all while depleting the power of their immune systems to resist infection.

A famine and pestilence swept Edessa and its hinterland. In March of AD 500, a plague of locusts destroyed the crops in the field. By April, the price of grain skyrocketed to about eight times the normal price. An alarmed populace quickly sowed a crop of millet, an insurance crop. It too faltered. People began to sell their possessions, but the bottom fell out of the market. Starving migrants poured into the city. Pestilence – very probably smallpox – followed. Imperial relief came too late. The poor “wandered through the streets, colonnades, and squares begging for a scrap of bread, but no one had any spare bread in his house.” In desperation, the poor started to boil and eat the remnants of flesh from dead carcasses. They turned to vetches and droppings from vines. “They slept in the colonnades and streets, howling night and day from the pangs of hunger.” When the December frosts arrived, the “sleep of death” laid low those exposed to the elements.

The migrants were worst affected, but by spring no one was spared. “Many of the rich died, who had not suffered from hunger.” The loss of environmental control collapsed even the buffers that subtly insulated the wealthy from the worst hazards of contagion.

During a famine that swept Syria in AD 384–85, Antioch found its streets filled with hungry refugees, who had been unable to find even grass to eat and suddenly massed in town to scavenge.

Rise of Slavery

After the dislocations of the third century, the slave system experienced a brutal resurgence.

Melania the Younger, from one of the most blue-blooded lines in Rome, owned over 8,000 slaves.

Slave-ownership on Melania’s scale was rare. More consequential were the elites, late antiquity’s 1 percent, who owned “multitudes,” “herds,” “swarms,” “armies,” or simply “innumerable” slaves, both in their households and in the fields. To own a slave was a standard of minimum respectability. In the fourth century, priests, doctors, painters, prostitutes, petty military officers, actors, inn-keepers, and fig-sellers are found owning slaves. Many slaves owned slaves. All over the empire we find working peasants with households that included slaves.


From wood to fossil-fueled civilizations — the greatest tragedy mankind will ever know

[ These are my notes from this book about how we went from an organic sustainable economy to a temporary fossil-fueled one.  It’s one of the few books I’ve found that explains what life was like before fossil fuels in a biophysical way that focuses on energy and population.  This book might even convince an economist that there are limits to growth, since it explains why a biomass-based society couldn’t exponentially grow, but that might be hoping for too much (since neoclassical economics is a religion but this book is based on science).

Wrigley also compares the Western European marriage system, where couples married much older because they had to wait until they could support themselves (which might require, say, the parents to die, since the land was usually not subdivided but went to the first male child), with Eastern European countries, where most women were married at a very young age, not long after puberty, and ended up having far more children.

The Western European marriage system prevented the outcome Malthus had predicted in his first writings — that inevitably the standard of living was bound to be depressed to bare subsistence level and misery for most of the population.  He later saw that in fact marriage systems could prevent this from happening and wrote about it in later books.

Wrigley closes his book with the following warning:

“The industrial revolution may come to be regarded not as a beneficial event which liberated mankind from the shackles which limited growth possibilities in all organic economies but as the precursor of an overwhelming tragedy – assuming that there are still survivors to tell the tale.”

P.S. I discovered this book in the excellent list at the BioPhysical Economics Policy Center.

Alice Friedemann, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report ]

Wrigley, E. A. 2016. The Path to Sustained Growth: England’s Transition from an Organic Economy to an Industrial Revolution. Cambridge University Press.

The three centuries between the reigns of Elizabeth I and Victoria are conventionally termed the industrial revolution. At the beginning of the period England was not one of the leading European economies. It was a deeply rural country where agricultural production was largely focused on local self-sufficiency. In part this was a function of the low level of urbanization at the time. England was one of the least urbanized of European countries: the only large town was London. The market for any agricultural surplus was limited other than close to the capital city.

Before the industrial revolution, prolonged economic growth was unachievable. All economies were organic, dependent on plant photosynthesis to provide food, raw materials, and energy. This was true both of heat energy, derived from burning wood, and mechanical energy, provided chiefly by human and animal muscle. The flow of energy from the sun captured by plant photosynthesis was the basis of all production and consumption. Britain began to escape the old restrictions by making increasing use of the vast stock of energy contained in coal measures, initially as a source of heat energy but eventually also of mechanical energy, thus making possible the industrial revolution.

In organic economies negative feedback between different factors of production was common. For example, if the population increased it would involve at some point taking into cultivation marginal land, or farming existing land more intensively, or increasing the arable acreage at the expense of pasture, changes which tended to reduce labor productivity, inhibiting further growth and reducing living standards. In early modern England the rising importance of a fossil fuel as an energy source meant that many of the relationships which involved negative feedback in organic economies changed: positive feedback became more common. The growth process tended to foster further advance, whereas in organic economies the reverse was the case.

If the woolen industry was flourishing and the demand for wool therefore rising, more land would be devoted to sheep pasture, but this must mean less land available to grow corn for human consumption, or less land under forest. Expanding the production of woolen cloth must at some point create difficulties for the supply of food, or of fuel for domestic heating, or for the production of charcoal iron. If the land was the source of virtually all the material products of value to man, expansion in one area of the economy was all too likely to be secured only by shrinkage elsewhere.

Most of the raw materials used by industry in organic economies were also vegetable, such as wood, wool, cotton, or leather. Even when the raw material was mineral, plant photosynthesis was essential to production, since converting ores into metals required a large expenditure of heat energy that came from burning wood or charcoal.

Coal is a stock, not a flow. Each ton of coal dug from a mine marginally reduces the size of the stock, and the same is true of all fossil fuels. Drawing upon a stock will ultimately lead to its exhaustion.

On this estimate of woodland productivity, therefore, it would be necessary to reserve 2 million acres of land for forest to produce the same quantity of heat energy each year as could be secured from burning 1 million tons of coal.

The advantage gained by employing draught animals was perhaps greatest in relation to overland transport. The output in terms of ton-miles performed during a working day by a man with a sack on his back or pushing a wheelbarrow is almost derisory compared with what is possible by a man with a horse and cart on a firm road surface. In many agricultural systems draught animals were essential. This was normally true of the cultivation of cereals such as wheat. If the yield per acre of a cereal is modest, it may be beyond the physical capacity of one man to cultivate a large enough area by his own efforts to support himself and his family. The land had to be ploughed by oxen or horses.

If the produce of 5 acres of land is needed to feed a working horse, the area available to feed people is reduced commensurately. As Cottrell remarked: ‘Where land is plentiful, population sparse and draught animals available, there may be an economy in substituting draught animals for manpower; but with increased population and competition for land for the production of food and feed, the situation may be reversed, the survival of man being more important than the feeding of work animals.’ It was an unfortunate feature of organic economies after the neolithic agricultural revolution that a period of growth and prosperity when the population was rising tended to restrict the area that could be devoted to growing fodder for draught animals unless productivity per acre was rising sufficiently to offset the population rise.

Domestic heating in towns. Bairoch estimated that each town dweller typically needed between 1.0 and 1.6 tons of firewood each year, which Van der Woude et al. estimated would represent the annual product of 0.5 to 0.8 hectares of woodland, or roughly 1.25 to 2 acres. For simplicity, I assume that 1.6 acres would cover the firewood needs of the average town dweller.

A town with 10,000 inhabitants, therefore, would need access to the annual growth of wood taking place in woodland covering 16,000 acres. For an urban population totalling, say, half a million people and therefore needing 650,000 tons of firewood a year, it would be necessary to devote the wood growth of roughly 800,000 acres to meeting their domestic heating needs. The same quantity of heat energy could be secured from burning approximately 325,000 tons of coal, since burning 1 ton of coal produced as much heat as 2 tons of dry firewood.

The switch from wood to coal therefore enabled approximately 800,000 acres of woodland to be used instead to produce food, or wool and hides, rather than fuel.
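The arithmetic in the last few paragraphs is easy to verify. A minimal sketch, using only the figures stated above (1.6 woodland acres and roughly 1.3 tons of firewood per town dweller per year, and 1 ton of coal yielding the heat of 2 tons of dry firewood):

```python
# Figures as stated in the text above.
acres_per_person = 1.6        # woodland acres per town dweller
tons_wood_per_person = 1.3    # midpoint of Bairoch's 1.0-1.6 tons/year
wood_per_ton_coal = 2.0       # tons of dry firewood matched by 1 ton of coal

# A town of 10,000 inhabitants.
print(10_000 * acres_per_person)              # 16,000 acres of woodland

# An urban population of half a million.
urban_pop = 500_000
wood_tons = urban_pop * tons_wood_per_person  # 650,000 tons of firewood
acres = urban_pop * acres_per_person          # 800,000 acres of woodland
coal_tons = wood_tons / wood_per_ton_coal     # 325,000 tons of coal
print(wood_tons, acres, coal_tons)
```

All three results match the numbers in the text, which is the point of the passage: a few hundred thousand tons of coal could release nearly a million acres of woodland for food, wool, and hides.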

The classical economists saw all activity giving rise to material production as involving three component elements: capital, labor, and land. The quantity of capital and labor available to allow production to take place might in principle be increased as necessary and without apparent limit, but the same was not true of land. The area of land was limited and could not be increased. Advances in technology might permit significant improvements in aggregate output. The output from any given area of land might be increased by the introduction of a new crop, as when the potato arrived from the Americas; or by innovations which reduced the proportion of arable land kept in fallow each year; or the area of land under cultivation might be increased by drainage of marshland, enclosure of heath, or reclamation from the sea, but the general problem was permanent and insoluble. If growth occurred it must at some point increase the pressure on the land since the land was the source of all food and the great bulk of the raw materials of industry. If either poorer land was taken into cultivation or existing land used more intensively, this must tend to involve declining returns both to capital and labor, and eventually growth would grind to a halt or be reversed.

Ricardo made it clear that his gloomy conclusion was due not to institutional shortcomings, the character of economic systems, or the failure of human judgement, but to the operation of the laws of nature. He summarized his analysis in a manner that left no grounds for optimism about the secular trends of real wages or profit levels. His reasoning excluded any possibility of the type of sustained growth that came to be termed an industrial revolution: Whilst the land yields abundantly, wages may temporarily rise, and the producers may consume more than their accustomed proportion; but the stimulus which will thus be given to population, will speedily reduce the laborers to their usual consumption. But when poor lands are taken into cultivation, or when more capital and labor are expended on the old land, with a less return of produce, the effect must be permanent. A greater proportion of that part of the produce which remains to be divided, after paying rent, between the owners of stock and the laborers will be apportioned to the latter. Each man may, and probably will, have a less absolute quantity; but as more laborers are employed in proportion to the whole produce retained by the farmer, the value of a greater proportion of the whole produce will be absorbed by wages, and consequently the value of a smaller proportion will be devoted to profits. This will necessarily be rendered permanent by the laws of nature, which have limited the productive powers of the land.

To someone sitting in a congregation today the sentence in the Lord’s Prayer, ‘Give us this day our daily bread’, may occasion mild surprise. It is seldom a grave concern in societies that have been transformed in the wake of the industrial revolution, but would have had pressing and immediate relevance from time to time for congregations in Tudor times. Poverty and the difficulty of securing an adequate supply of basic food were ever-present features of organic economies.

Adam Smith had previously expressed it bluntly: Every species of animals naturally multiplies in proportion to their means of subsistence, and no species can ever multiply beyond it. But in civilized society it is only among the inferior ranks of people that the scantiness of subsistence can set limits to the further multiplication of the human species; and it can do so in no other way than by destroying a great part of the children which their fruitful marriages produce.

In times of prosperity the population would rise quickly, outpacing production. Living standards would therefore fall and, as the bulk of the population became poorer, mortality would rise, eventually to the point where it matched the level of fertility. The population would therefore cease growing and the laboring poor would hover on the verge of destitution.

What was distinctive about the system when compared with other marriage systems was that decisions to marry were strongly affected by economic circumstances. This in turn was the result of the convention that on marriage a couple should create a new household. Instead of joining an existing household, a couple on marriage was expected to establish a new one. This involved accumulating the resources necessary to acquire and equip a household. For many couples it was necessary to save from income over a period of time to make the marriage possible. If incomes were depressed or irregular it took longer to do so than in more prosperous times. As a result the average age of marriage might rise or fall in sympathy. In western Europe societies, moreover, a significant fraction of each rising generation never married, and this proportion was also influenced by economic circumstances. In other societies the timing of marriage was governed by the prevailing conventional norms that meant that the vast majority of women married young.

It was frequently the case that celibacy was almost unknown and the average age of marriage for women was far lower than in western Europe, often close to the attainment of sexual maturity. The fact that in western Europe between a tenth and a fifth of each generation never married, combined with a relatively late average age at marriage for women, implied that fertility levels were normally lower than in other societies. This generalization is too sweeping. Fertility levels were influenced by many factors other than age at marriage and celibacy levels. Relatively modest levels of general fertility sometimes prevailed through the effect of social and personal conventions and practices very different from the west European system. And the west European marriage system itself took varying forms. Nevertheless, Malthus’ recognition that the ‘preventive checks of moral restraint’ implied the possibility of stationing a society at some distance from the Malthusian precipice is relevant to any consideration of the circumstances in which escape from the constraints of an organic economy might occur.

Jevons’ book The Coal Question was first published in 1865. His subject was the ‘Age of Coal’. He remarked: Coal in truth stands not beside, but entirely above all other commodities. It is the material source of the energy of this country – the universal aid – the factor in everything we do. With coal almost any feat is possible or easy; without it we are thrown back on the laborious poverty of early times.

He was deeply concerned about the depletion of coal reserves generally and the export of coal in particular: To part in commerce with the surplus yearly interest of the soil may be unquestioned gain; but to disperse so lavishly the cream of our mineral wealth is to be spendthrifts of our capital – to part with that which can never be reproduced.  In short, the export of corn was less hazardous than the export of coal because the former was the product of an energy flow, whereas the latter was an exhaustible stock.

If mechanical energy had continued to be provided almost exclusively by human and animal muscle, the constraints of an organic economy would have continued to limit growth. Because draught animals were the most important single source of mechanical energy in early modern England, increasing use of mechanical energy would only have been possible by devoting a larger and larger acreage to animal fodder.

By the mid-19th century, coal production was 270 times larger than it had been in the 1560s, and 20 times larger than in 1700.

The annual growth rate for coal production varied between 1.2 and 1.9 per cent per annum throughout the period from the 1560s to 1800, with only limited variation. In the final half-century 1800 to 1850/4, however, the annual rate of growth accelerated markedly to 3 per cent, in part a reflection of the fact that coal was an increasingly important source of mechanical as well as heat energy.

Total energy consumption rose massively between the mid 16th and mid 19th centuries. In 1560–9 the annual average figure was 65 petajoules, a quantity roughly equivalent to the energy contained in 2.2 million tons of coal. Three centuries later, in 1850–9, energy consumption had risen to 1,833 petajoules, a total more than 28 times as large as the earlier figure. The very large increase in energy consumption that took place was mainly due to the rapid expansion in coal production over the three centuries in question. Coal provided an annual average of 7 petajoules in 1560–9; in 1850–9 the equivalent figure was 1,689 petajoules.
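As a quick check on these figures, here is a minimal Python sketch (the petajoule values are the ones quoted above; the implied energy content of a ton of coal follows from the 65 PJ / 2.2 million ton equivalence):

```python
# Annual energy consumption figures quoted above, in petajoules
total_1560s, total_1850s = 65, 1833
coal_1560s, coal_1850s = 7, 1689

# 65 PJ is said to equal the energy in 2.2 million tons of coal
gj_per_ton = 65e6 / 2.2e6          # ~29.5 GJ/ton, a plausible value for coal
growth = total_1850s / total_1560s # total consumption rose ~28-fold
coal_share = (coal_1850s - coal_1560s) / (total_1850s - total_1560s)

print(f"{gj_per_ton:.1f} GJ/ton; {growth:.1f}-fold growth; "
      f"coal supplied {coal_share:.0%} of the increase")
```

The last figure, roughly 95%, underlines the point that the expansion was overwhelmingly a coal story.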

Coal supplied only 11% of total energy consumption in the 1560s, rising to 33% in the 1650s, 61% in the 1750s, and no less than 92% in the 1850s.

Peat represents an accumulation of the product of plant photosynthesis over thousands of years; coal a similar accumulation over millions of years. Sieferle estimated that in the 17th century 0.3–0.5 per cent of the stock then existent in the Netherlands was used annually, suggesting that over the century as a whole approaching half of it was consumed.

The use of peat as an energy source was feasible only where the cost of transport could be kept to a minimum, and this constraint is especially severe in the case of peat because of its greater bulk in relation to its energy potential. Van der Woude et al. made a calculation that brings home forcefully how strong this constraint was in an organic economy: Water transport was essential to the economical digging and transporting of this bulky commodity. Had road transport been used to bring the peat to its urban markets, 110,000 horses would have been required, and to feed these horses 230,000 hectares – one third of the nation’s arable land – would have been withdrawn from the production of crops destined for human consumption.

Coal transport

The great bulk of both coal and grain were consumed at a distance from their points of production and therefore in both cases their cost at the point of consumption included significant transport costs.

A country’s grain crop required millions of acres.  As a result, the transport network needed to take grain to market was dendritic, that is, resembling the structure of a tree. The route from the farm to a neighboring village represented a twig which linked first to a thin branch and then through thicker branches to boughs, before finally reaching the main trunk.

The coal pitheads, in contrast, were a scattering of points covering only a few acres rather than millions of acres, and the transport network needed to bring coal to the settlement or industrial plant where it was consumed was linear in character: large volumes moved from the mine head to a limited number of final destinations.

The fact that coal production was punctiform, that coal was bulky and heavy, and that its transport to market was often linear rather than dendritic in character, created a powerful incentive to invest in transport improvements. In particular, it transformed the economics of canal construction. A large proportion of canal construction was explicitly undertaken to reduce the cost of coal in centers that promised to become large-scale consumers if the price could be lowered.

Canals passing through predominantly rural areas brought many benefits to farms close to the canal route by reducing the cost of lime, marl, coal, and other bulky or heavy materials, but they seldom proved profitable investments if largely dependent on rural custom, since traffic volumes were modest compared to canals linking coalfields to industrial and commercial centers.

Ironically, although the nature of coal production and its rapidly increasing scale encouraged major improvements in transport facilities, until the early decades of the 19th century the transport improvements were all made subject to the limitations inherent in organic economies. The mechanical energy source used in moving raw materials and finished goods by road and canal remained animal muscle, and therefore the scope for increasing the scale, speed, and reliability of transport facilities remained limited. It was only with the construction of a national railway system in the middle decades of the 19th century, using coal rather than muscle as its source of mechanical energy, that transport could achieve advances to parallel those already long achieved in the branches of industry in which cheap and abundant heat energy was the key to rapid expansion.

In organic economies it was always the case that the size of the urban sector was strongly influenced by the productivity of agriculture. City dwellers needed food and drink no less than those living in the countryside and since they produced little food themselves, they depended upon the existence of a rural surplus. If, for example, the agricultural sector produced 25% more food than would cover the needs of the rural population, the food needs of an urban population that constituted a fifth of the total population could be satisfied. Agricultural productivity set limits to the urban growth that could take place, but agricultural productivity was itself strongly influenced by urban demand. In the absence of a substantial urban sector, in rural areas there was little incentive to produce an output greater than that needed to meet local needs. In other words, agricultural productivity and urban growth might be characterized by either negative or positive feedback. If the urban sector was trivially small and stagnant there would be minimal incentive for increased agricultural output since any surplus over local rural needs would be unable to find a market. If, however, the urban sector was significant and growing it created an incentive to increase agricultural output, thus ensuring that demand and supply remained in balance as urban growth progressed. Positive feedback between urban growth and improved agricultural productivity was always possible in organic economies. If it occurred, however, although the level of urbanization might increase for a time, matched by an increasing rural surplus, the positive feedback could not continue indefinitely, because of the implications of the fixed supply of land which the classical economists described so effectively.
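The surplus arithmetic in this passage generalizes neatly: if rural output exceeds rural needs by a fraction s, the urban share that can be fed is s / (1 + s). A minimal sketch:

```python
def urban_share(surplus):
    """Fraction of total population that a rural surplus can support in towns.

    rural_share * (1 + surplus) must feed the whole population, so
    rural_share = 1 / (1 + surplus), and towns can hold the remainder.
    """
    return surplus / (1 + surplus)

print(urban_share(0.25))  # 0.2: a 25% surplus feeds a fifth of the population
```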

The size of London’s population meant that the area needed to satisfy its food requirements was large even in 1600. Gras estimated London’s annual consumption of grain as 0.5 million quarters (4 million bushels) at the beginning of the 17th century when the population of the city was about 200,000. This suggests that each Londoner was consuming 20 bushels annually on average.  Chartres considered that food and drink, bread and beer, contributed roughly equally to the total of grain consumed. The gross yield per acre of a combination of grains at the time is not known with any certainty. I assume a figure of 12 bushels per acre for a mixture of wheat and barley, the two main food and drink cereals. When calculating the acreage of arable land needed to supply the food and drink needs of the population, however, the gross yields are misleading. Account must be taken of two factors that reduce it considerably. Net yield may be taken as 9 bushels after allowing for the reservation of 3 bushels as seed for the next harvest. Furthermore, about 30% of the arable acreage was fallowed each year. This means that the quantity of grain available for consumption from each arable acre should be taken as only 6.3 bushels (9 × 0.7 = 6.3).

To provide 20 bushels for each Londoner therefore meant securing the grain output from about 3.2 acres of arable land, implying that London’s ‘footprint’ in meeting the grain needs of its 200,000 inhabitants in 1600 extended to 640,000 acres, or 1,000 square miles. On the same assumptions in 1800 with a population of 960,000 London’s grain ‘footprint’ would have covered 3,100,000 acres or 4,800 square miles; and the national urban requirement in 1800, when the national urban population total was 2,380,000, would have been 11,900 square miles, an impressively large total, given that the total arable acreage in England and Wales is estimated to have been 11.5 million acres, or 18,000 square miles. Moreover, the urban ‘footprint’ resulting from the urban demand for food is considerably understated by this calculation since meeting the urban demand for meat, cheese, butter, fruit, and vegetables would have enlarged its size substantially; and providing fodder to feed the horses used to transport rural produce to the towns would have extended the ‘footprint’ still further.

It seems plain that if the circumstances of urban food provision, determined by cereal yields per acre, which prevailed throughout Europe in, say, 1500 had continued to hold good thereafter, urban growth in England would have come to a halt well short of the level it had actually reached in 1800. What, then, had changed?  A remarkable advance in net agricultural output per acre. For example, gross grain yields roughly doubled between the end of the 16th century and the beginning of the 19th, rising from 12 to 24 bushels per acre. Allowing again 3 bushels for seed, the net yield was 21 bushels at the end of the period. The proportion of arable land that was fallowed each year had declined substantially to c. 16 per cent. As a result, the net output secured from an acre of arable land used for grain production rose to 17.6 bushels per acre (21 × 0.84 = 17.6) from 6.3 bushels two centuries earlier. London’s claim on arable land in 1800, therefore, may be taken as 1,100,000 acres, or 1,700 square miles compared with a figure of 4,800 square miles if the yield per acre and fallowing percentage had remained at their levels two centuries earlier. The comparable figure for the English towns as a whole is 2,700,000 acres, or 4,200 square miles compared with 11,900 square miles if yields had not changed. In 1800 the national urban population total in towns with 5,000 or more inhabitants had risen 7-fold from 1600 but an area only two-and-a-half times as large as in 1600 could supply their grain requirements.
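The footprint calculations above can be reproduced with the stated assumptions (20 bushels per head, 640 acres per square mile, with the yield, seed, and fallow parameters as given in the text):

```python
ACRES_PER_SQ_MILE = 640
BUSHELS_PER_PERSON = 20   # annual consumption, food and drink combined

def net_yield(gross_bushels, seed_bushels, fallow_fraction):
    """Bushels available for consumption per arable acre."""
    return (gross_bushels - seed_bushels) * (1 - fallow_fraction)

def footprint_sq_miles(population, gross, seed, fallow):
    acres = population * BUSHELS_PER_PERSON / net_yield(gross, seed, fallow)
    return acres / ACRES_PER_SQ_MILE

# c.1600: 12 bushels gross, 3 kept as seed, 30% fallowed -> 6.3 bushels net
print(footprint_sq_miles(200_000, 12, 3, 0.30))    # ~1,000 sq miles, London 1600
# c.1800: 24 bushels gross, 3 seed, 16% fallowed -> ~17.6 bushels net
print(footprint_sq_miles(960_000, 24, 3, 0.16))    # ~1,700 sq miles, London 1800
print(footprint_sq_miles(2_380_000, 24, 3, 0.16))  # ~4,200 sq miles, all towns 1800
```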

The area of land involved in meeting urban grain and fuel needs rose in round numbers from 2,000 square miles in 1600 to 4,500 square miles in 1800, a rise of 125 per cent during a period when the urban population rose from 335,000 to 2,380,000, or by more than 600 per cent.

In organic economies it was normal for 70–80% of the workforce to be employed on the land, reflecting the fact that labor productivity in agriculture was low.

Ten peasants might produce enough food for their own families and perhaps two or three other families who were then able to engage in textile manufacture, handicrafts, building, retailing, transport, etc., but the surplus in question was limited and might prove fragile in hard times.

Equally, the absence of a large urban demand for food meant that there was little incentive for a peasant farmer to increase his output since there was no guarantee that it would find a market.

It has been estimated that to meet its firewood requirements, ‘A town of 10,000 inhabitants would need to witness the annual arrival of between 10,000 and 16,000 horse-drawn carts’ carrying the firewood in question.

There were almost 110,000 shoemakers in England in 1831. They were the largest occupation in the retail trade and handicraft category in the 1831 census. One man in thirty of all male workers in England at that date was a shoemaker. In the tertiary sector clerical work was largely sedentary, and in most other tertiary sectors the level of energy expended was modest by the standards of agricultural work. Given the scale of occupational change between the mid-17th and mid-19th centuries, an unchanging average level of calorie intake would imply an improvement in the average nutritional level. It also suggests that a fall in the level of calorie intake did not necessarily mean worsening nutrition.

An autumn peak in marriages was characteristic of a farming year predominantly concerned with the harvesting of corn. In pastoral parishes the peak was in the late spring or early summer. In both farming types, the peak of marriages followed the season of the year in which the demand for labor had been at its height. In arable areas this occurred when the grain had been harvested, in pastoral areas when lambing and calving had taken place.

It was increasingly the case that market-orientated farming was determining land use rather than a ‘peasant’ focus on local self-sufficiency.

This change may well have been greatly expedited by the very large acreage that passed from royal to private hands following the dissolution of the monasteries. Clay suggested that: ‘If estates granted away to courtiers and royal servants in the mid-16th century are also included, perhaps 25 per cent of the land of England had passed from royal into private hands by 1642.’ He considered that royal estates had been poorly managed.

The demographic characteristics of a society may have an important bearing on its prevailing standard of living and economic growth prospects. This was an issue explored by Hajnal in his remarkable essay on marriage in western and eastern Europe, published in 1965. He was intent on exploring the nature and significance of the west European marriage system.

The differences between the two marriage systems are striking. They are especially pronounced in the case of women. In the western pattern, approaching half of the women in the age group 25–29 are unmarried, and this remains true of roughly a sixth of women even in the 45–49 age group. In eastern Europe in both these age groups the proportion of women who had never married was negligible. Hajnal provided evidence that what was true of eastern Europe was true of almost all societies elsewhere in the world for which he had reliable data. The difference in proportions ever married in the two systems clearly implies wide differences in the average age at first marriage.

The mean age at first marriage for women was 19.7 years in Serbia. In the west European marriage system the average female age at first marriage, though it varied considerably, was 3-8 years later in life.

Even though exponential growth was physically impossible in organic economies, the prevailing standard of living was not foredoomed to be depressed close to bare subsistence for the mass of the population in societies in which the west European marriage system had become established. In drawing attention to this fact, exemplified in the economic history of countries in north-west Europe, Hajnal emphasized that he was essentially re-expressing views which Malthus had propounded as a mature thinker.

If the prevailing fertility level is somewhat lower, because marriage takes place later in life and a proportion of each generation remains single, and if marriage decisions are influenced by prevailing economic conditions – in short, if fertility as well as mortality is sensitive to the level and trend of living standards – a different outcome is readily possible.

Given the nature of organic economies, the potential disadvantages of a society in which fertility is high and invariant are clear. The poor will indeed always be with you. But this is only a limiting possibility. There were many circumstances that might cause fertility to fall well short of the highest level attainable. Clearly this will be true where, as in the west European marriage system, there is a high average age at marriage for women and conventions that lead to a proportion of each rising generation of women never marrying.

What is remarkable about the populations of pre-industrial western Europe is that they not only evolved a set of social rules, which effectively linked their rate of family formation with changes in their environment, but also managed to secure such low fertility that they achieved both a demographically efficient replacement of their population, and an age-structure which was economically more advantageous than the age-structures generally to be found among non-industrial societies today.

At one extreme there were societies in which every woman was married at or close to the age of arriving at sexual maturity unless she was seriously handicapped physically or mentally. The timing of marriage for women was determined by physiological change. At the other extreme in the west European marriage system, economic circumstances played a major role in influencing the timing and frequency of marriage. The social convention that brought this about lay in the expectation that on marriage the newly married couple would set up a new household rather than joining an existing household as was the norm in many other organic societies. This created an economic hurdle to be surmounted before a marriage could take place.

Rather than the timing of marriage being governed by reaching or approaching sexual maturity, it was strongly influenced by the time spent by the couple in securing an adequate sum in advance of marriage to enable them to create a new household. This meant that the average age at marriage for women was characteristically in the mid-20s rather than the mid to late teens. Family sizes were therefore significantly smaller. With a mean birth interval of 30 months, for example, marriage at 25 rather than 18 would reduce completed family size by 2.8 children on average. If the economic barrier to be surmounted was severe, or saving was difficult and parents were unable or unwilling to assist, it also meant that a proportion of both sexes would never marry because they had failed to assemble the wherewithal to do so.
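The 2.8-children figure is simple birth-interval arithmetic:

```python
mean_birth_interval_months = 30
childbearing_years_lost = 25 - 18  # marrying at 25 rather than 18
fewer_children = childbearing_years_lost * 12 / mean_birth_interval_months
print(fewer_children)  # 2.8 fewer children on average
```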

In peasant communities, for example, the ability to marry might depend upon gaining access to a holding. If holdings were not subdivided this would result in an unchanging number of married couples.

When living costs rose because a bad harvest caused grain prices to soar, marriages were delayed. Long-term economic trends that affected living standards might also influence the timing and extent of marriage. Worsening economic circumstances tended to produce a rise in the proportion of men and women remaining single; and those who did marry would do so later in life.

An implication of relatively high mortality is that fertility must also be high if the population is not to decline. This in turn implies that, ceteris paribus, age at marriage will be lower and celibacy less common than in countries where a ‘low-pressure’ rather than a ‘high-pressure’ demographic system exists.

Late marriage and the fact that a significant proportion of women remained single affected the composition of the labor force. In England unmarried women normally entered the labor force.

A single woman is usually regarded as contributing more to national output than a married woman.

To transfer a load of grain weighing 2,400 pounds 23 miles by a wagon drawn by four horses, the horses consumed about six per cent of their cargo, 150 pounds of grain, so only 2,250 pounds of grain was delivered.

The heavier and bulkier the product, the more severely the accessible market area was limited.
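The wagon figures above imply a hard limit on how far grain could travel by road at all. A toy extrapolation in Python (assuming, purely for illustration, that the team's consumption scales linearly with distance, which the text does not state):

```python
def delivered_lb(miles, load_lb=2400):
    """Grain remaining after a four-horse team eats 150 lb per 23 miles hauled."""
    eaten = 150 * miles / 23
    return max(load_lb - eaten, 0)

print(delivered_lb(23))   # 2250.0 lb delivered, matching the figures above
print(delivered_lb(368))  # 0.0: at roughly 370 miles the team eats the whole load
```

This is the sense in which bulky, heavy goods had sharply limited market areas by land.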

In The Wealth of Nations, Adam Smith stressed the significance of transport costs in relation to the size of an accessible market. In an assessment of the importance of good transport facilities, he asserted that: ‘Good roads, canals, and navigable rivers, by diminishing the expense of carriage, put the remote parts of the country more nearly upon a level with those in the neighborhood of the town. They are upon that account the greatest of all improvements. They encourage the cultivation of the remote, which must always be the most extensive circle of the country.’

In von Thünen’s model of agricultural land use around a town, the band closest to the town is devoted to market gardening, fruit-growing, and milk production (perishability rather than transport cost determines this usage). The next band illustrates vividly the restrictive nature of high transport cost. It is forest land from which the town meets its fuel needs both for domestic heating and for local industry. Access to timber is also vital for other purposes, notably for the construction industry. Because of its bulk and weight timber has to be grown close to the town. Its price rapidly becomes prohibitive as the length of the journey to market increases.

The outermost circle is devoted to pasture since, for example, beef cattle can provide their own transport by walking to market at relatively low cost, and sheep’s wool is light, durable, and of relatively high value per unit weight. In von Thünen’s model the outermost circle is the sixth band. The three bands between the timber and pastoral bands are devoted to cereal growing.

High transport costs operate rather like tariff barriers. Most local industries are, in effect, protected in much the same way that a tariff would provide protection. Competition is restricted, except in regard to products of high value per unit weight. In contrast, if transport costs are low an efficient producer will be able to sell at a profit over a larger area, and the consumer will benefit. Hence Adam Smith’s insistence that transport improvements are ‘the greatest of all improvements’.

A river that passed through a market town gave some farmers a huge advantage.  The strips of land on either side of the river distort the original simple pattern of concentric bands of land use. The bands are extended outwards on either side of the river because close to the river the cost of transporting a crop or other produce to the town might be no higher at, say, three times the distance from the town at which the same cost is incurred if the product is moved over land.

Until the advent of the railway, transport continued to be entirely an ‘organic economy’ activity. In contrast with other major branches of the economy, the energy used in transport was exclusively mechanical energy and until the middle decades of the 19th century this continued to be provided, as in the past, by animal muscle on land and by the wind at sea. Only with the development of an effective method of converting heat energy into mechanical energy did this change.

If production is areal the associated transport system will be dendritic. Much of the agricultural production takes place towards the periphery of the farmland surrounding a town and is therefore transported to the town from the outermost twigs of the system. In order to reach an urban market the grain must journey first along the twigs to reach the small branches and then the larger boughs before reaching a main trunk of the system. Similarly, for urban products to reach rural markets they must journey through the dendritic system in the opposite direction. The volume of traffic along any given stretch of road will be modest except on the roads close to the main market. In organic economies this meant that it was difficult to secure an adequate return on road improvement since the resulting saving in reduced transport cost could seldom justify the initial expenditure.

Greene, writing about horse usage in the United States, notes that the average density of horses in the forty-six largest cities in the country when urban horse usage peaked in 1900 was 426 horses per square mile. She estimates that in Philadelphia, where the density was about 400 per square mile, there were more than 50,000 horses in the city as a whole. The pressure on horse supply had long been apparent at the local level.

For example, it was noticed at 18th century coal mines some distance from the nearest navigable water. Langton, describing this problem in Lancashire, wrote: ‘At Haydock in 1756, just before the Sankey was opened, coal sales stopped when ploughing began and in 1769, when the canal was presumably the colliery’s main market, sales dipped during haying time as agriculture took its prime claim on the available horses.’  Horses had, of course, long been employed in large numbers in moving coal over short distances. It is said that 20,000 horses were employed in the Newcastle coal trade in 1696.  Musson noted that horses were still widely used as a source of power in the classic period of the industrial revolution: ‘They had long worked drainage pumps and winding whims for mines and were commonly employed to drive grinding wheels in potteries and glassworks (flint-mills), in tanneries (bark-mills), in lime-kilns for grinding chalk and in brickworks for mixing clay (pug-mills); they also came to be used frequently to drive carding, scribbling and spinning machinery in early textile horse-mills.’





Can the lights be kept on with distributed generation? 2015 U.S. House hearing on a reliable electric system

[ Corporate speakers testify – could they have any self-interest, such as hope for government grants? Since Congress often asks the National Academy of Sciences (NAS) to write unbiased papers on topics, why didn’t NAS and National Laboratory scientists speak? Corporations are selling a product, and are likely to exaggerate what their product can do.

The most interesting testimony is from Dean Kamen, who is “selling” his company’s Stirling engine generator to Congress as a way to decentralize the grid by using them for distributed power from natural gas or oil.  Well, that doesn’t solve the finite fossil fuel problem.  It’s spun as a way to balance renewables, but I doubt that this can be done yet – the technology to manage hundreds of thousands of Stirling engines, solar panels, and other distributed devices doesn’t exist, nor do the math, algorithms, or computers to attempt it. Nor can we revolutionize the grid quickly, because deregulation has forced every player into strictly and narrowly defined roles (i.e. just generation, just distribution, just selling electricity, etc.). And if we did decentralize, would there be enough fossil fuels left to power all of them, or just those in the 11 energy-producing states?

The advantage of a distributed system is that cyber-attacks, natural disasters, and power outages could be kept within a much smaller area. After a natural disaster, neighborhoods would still have power, because underground natural gas lines are less likely to be damaged than overhead power lines, and the generators can also run on gasoline, propane, and other fossil fuels.

A distributed system like this would waste a lot less fuel than our huge centralized natural gas and coal power plants, in which two-thirds of the energy generated is lost as heat, with up to 10% more lost over power lines.
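Taking the loss figures in this paragraph at face value, the fuel-to-socket efficiency of the centralized system works out as follows (a rough sketch, not a utility-grade calculation):

```python
plant_efficiency = 1 / 3  # two-thirds of the fuel energy lost as heat at the plant
line_loss = 0.10          # up to 10% more lost over the power lines
delivered = plant_efficiency * (1 - line_loss)
print(f"{delivered:.0%} of the fuel energy reaches the customer")  # 30%
```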

Meanwhile, electric vehicles are creating a new demand for power equal to an entire home.  That will require even more natural gas and coal power plants that waste most of their fuel as heat.  Since we are likely within 20 years of peak natural gas and peak coal, this means dozens of new liquefied natural gas import facilities along our coasts that are potential terrorist targets, continued dependency on other nations with natural gas, and wars over not just oil but natural gas as well.

Whether centralized or decentralized, the electric grid mainly runs on fossil fuels, which are finite.  My own guess is that as the electric grid becomes increasingly unreliable, whether from cyber-attack, natural disasters, lack of hydropower, coal, or natural gas, or breakdowns from lack of maintenance, the richest 10% of Americans will buy a generator, which is how the wealthy cope now in the third world, and invest in more insulation.  The bottom 90% of Americans won’t be able to afford to do this, and will turn to wood to heat and cook with.  Already 10% of American homes use wood for heat, so that will lead to unsustainable cutting of forests.

Alice Friedemann  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts:  KunstlerCast 253, KunstlerCast278, Peak Prosperity]

House 114-18. March 4, 2015. The 21st Century electricity challenge: ensuring a secure, reliable, and modern electricity system. U.S. House hearing.  116 pages.

Excerpts (with some alterations and cuts to make the testimony clearer):

ED WHITFIELD, KENTUCKY. Mr. Patel, you said: “Customer adoption of electric vehicles is creating new demand for power, each vehicle equivalent to an entire home while charging, requiring new utility demand control measures to avert overloading of existing infrastructure.” Please provide the study or data you are using as a basis for your statement that electric vehicles are creating new demand for power equivalent to an entire home.

Naimish Patel, CEO, Gridco Systems.  Various sources of data are available to support my statement that electric vehicles are creating new demand for power equivalent to an entire home.  One such source is a December 2014 ARRA report produced by the U.S. Department of Energy titled “Evaluating electric vehicle charging impacts and customer charging behaviors: Experiences from six smart grid investment grant projects.” On page iv of the report, in the Grid Impacts section of Table 1, Summary of Key Project Experiences, it is noted that “The average power demand to charge most vehicles was 3-6 kilowatts, which is roughly equivalent to powering a small, residential air conditioning unit.” It is also noted in the same section that “…depending on the model, the load from one electric vehicle model can be as much as 19 kilowatts, which is more than the load for most large, single-family homes.”


  • The most common type of charger is a portable 120-volt special charging cord, referred to as AC Level 1 charging, which typically provides 3-5 miles of range per hour of charge. Depending on the size of the battery, and the initial state of charge, this could take 8 to 20 hours to fully charge a depleted battery.
  • Some makes and models — particularly all-electric vehicles or those with larger battery packs — may take about 20 to 60 hours to charge a fully depleted battery at 120 volts. While 120-volt charging is relatively slow, it can often be accomplished with little to no additional cost or installation work if an outlet is already available at home.
  • Users can cut charging times significantly by installing AC Level 2, 240-volt charging stations. However, these systems can add $600-$3,600 to the cost of in-home charging, depending on the availability of power in the electric panel. Typically, installations require permits and licensed electricians.
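The charging-time ranges above follow from simple arithmetic: hours ≈ battery capacity ÷ usable charger power. The sketch below illustrates this; the battery sizes, charger power ratings, and 85% charging efficiency are my illustrative assumptions, not figures from the DOE report.

```python
# Back-of-envelope EV charging-time estimate.
# Battery sizes, charger ratings, and efficiency are illustrative
# assumptions, not figures from the hearing record.

def hours_to_charge(battery_kwh, charger_kw, efficiency=0.85):
    """Hours to refill a fully depleted battery at a given charger power."""
    return battery_kwh / (charger_kw * efficiency)

level1_kw = 1.4   # ~120 V x 12 A portable cord (AC Level 1)
level2_kw = 6.6   # a common 240 V home station (AC Level 2)

for battery_kwh in (24, 60, 85):   # small, mid-size, and large packs
    h1 = hours_to_charge(battery_kwh, level1_kw)
    h2 = hours_to_charge(battery_kwh, level2_kw)
    print(f"{battery_kwh} kWh pack: Level 1 ~{h1:.0f} h, Level 2 ~{h2:.0f} h")
```

With these assumed numbers, a small pack takes roughly 20 hours on Level 1 and a large pack over 70 hours, consistent with the 8-to-20-hour and 20-to-60-hour ranges quoted above.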

Alice Friedemann, author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: KunstlerCast 253, KunstlerCast 278, Peak Prosperity

USDOE. December 2014. Evaluating electric vehicle charging impacts and customer charging behaviors – experiences from six smart grid investment grant projects. United States Department of Energy, electricity delivery and energy reliability.

ED WHITFIELD, KENTUCKY. The U.S. was the first nation to electrify, and our system of generation, transmission, distribution and related communications remains the best in the world. Nonetheless, new challenges are emerging, as are opportunities to modernize and improve the electric grid. The challenges are significant:

  • much of our grid is outdated
  • coal-fired generation facilities are shutting down at an alarming rate
  • reserve margins are inadequate in several regions
  • intermittent and remote renewable capacity is coming online
  • cyber threats pose a growing concern

While encouraging technology and innovation in the electricity sector should be a priority, policies must ensure that new grid-related products do not leave the grid more exposed or compromise customer information and privacy.

DEAN KAMEN, Founder and President, DEKA Research and Development Corporation.

In the 1880s there were these guys Edison and Tesla, who gave us big centralized plants… now it is 150-year-old architecture. What do we know about it? Is it ready for disruption? Well, it is old, it is inefficient, it is unreliable, it is expensive, and it is dirty.

Quick facts about what the grid is today. We have about 1 terawatt, 1,000 gigawatts, of production capacity at an average of $1 a watt to produce. That is $1 trillion in generation assets. More than half of it is 30 years old, and if you only replace the stuff that is that old at $1 a watt, it will cost $500 billion. Once you make that energy, you have to move it. High-voltage transmission lines cost about $1 million a mile, and oops, sometimes they are not quite what we would like them to be. And 70% of those lines are 25 years old or more, and there are 280,000 miles of those high-voltage lines, another $200 billion to replace.

Then you have the low-voltage lines in all your neighborhoods, wires hanging on wooden poles. What could possibly go wrong? These are a real deal, only $140,000 a mile, and there are 2.2 million miles of them, and half are over 30 years old. If you replace just the old stuff, that’s another $150 billion. And then, of course, you have the annual capital cost of that infrastructure, which is $90 billion now, just to keep this architecture operating.
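Mr. Kamen’s replacement-cost figures can be checked with straightforward arithmetic. The sketch below simply multiplies out the unit costs, mileages, and age fractions from his testimony; it is a restatement of his numbers, not an independent estimate.

```python
# Back-of-envelope check of Kamen's grid replacement-cost figures.
# All unit costs, mileages, and age fractions come from his testimony.

generation_w     = 1e12        # ~1 TW of generating capacity
gen_cost_per_w   = 1.0         # ~$1 per watt
gen_old_fraction = 0.5         # more than half is 30+ years old

hv_miles         = 280_000     # high-voltage transmission
hv_cost_per_mile = 1e6         # ~$1 million per mile
hv_old_fraction  = 0.7         # 70% is 25+ years old

lv_miles         = 2_200_000   # low-voltage distribution
lv_cost_per_mile = 140_000     # ~$140,000 per mile
lv_old_fraction  = 0.5         # half is over 30 years old

gen_bill = generation_w * gen_cost_per_w * gen_old_fraction   # ~$500 billion
hv_bill  = hv_miles * hv_cost_per_mile * hv_old_fraction      # ~$196 billion
lv_bill  = lv_miles * lv_cost_per_mile * lv_old_fraction      # ~$154 billion

total = gen_bill + hv_bill + lv_bill
print(f"Replace only the aged stock: ~${total/1e9:.0f} billion")
```

The total, about $850 billion, matches the sum of his round numbers ($500B + $200B + $150B), before the $90 billion a year in ongoing capital costs.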

Everybody I know loves solar panels. So how do you catalyze more people to do this? Well, the more you put solar panels up without doing something else, you are actually hurting the grid because they add instability…

You have to do something that can catalyze this stuff to happen in a way that helps everybody, including the people supplying the power.  New technologies bring new opportunities, but they also sometimes bring problems, especially to stranded infrastructure.

From a practical point of view, the entire system that has run for 150 years was premised on the generating company only making money by selling more power, so there is no encouragement to save.

I don’t think regulators understand that those big power plants are very good at producing a constant amount of power, and it can take hours and hours to bring those big boilers up. When you start putting transient capability [like solar power] online, what happens when that cloud goes by and suddenly a couple of hundred megawatts that was there goes away, or when the wind stops? The big tired grid whose bill you were trying to avoid paying an hour ago is suddenly the one you are desperate to get more power from. Those plants have a tougher time reacting and keeping a stable grid with these other systems online than they had before, and they are making less money.

In the case of Germany, the instability from a pure technology point of view, not an economic or financial point of view, but the instability induced in their large systems by all these new transient systems is making life difficult, causing a reliability issue and a security issue. And I think we should avoid that in this country.

I think we could make generators that use the largest buried infrastructure in the country — natural gas — that we don’t need $100 million a mile for. Plus many buildings have buried tanks with oil or propane. If our device could be moved close to where it is needed, but still on the energy producer’s side of that equation, just outside the meter, then the energy producers could own and operate millions of these small devices. Grandma doesn’t want to become her own utility company just because she has a solar panel, but if the utility companies and energy providers could compete with each other to place small units this close to the loads, they would still get the full advantage of being a supplier of energy. Except now, with millions of little plants, they can avoid needing the transmission lines, distribution lines, substations, et cetera, that everybody is talking about being expensive, unreliable, and subject to issues.

DAVID B. MCKINLEY, WEST VIRGINIA. I thought that the hearing was about ensuring a secure, reliable and modern electric system, and that we were going to be talking a lot more about the grid, and I have gotten more confused as I have heard all this discussion. I am an engineer. I have heard very professorial comments, very in-depth, the white papers that you have all developed about this topic, but I wonder whether or not we have been able to reach America with the story, because we have been talking about source-agnostic architecture. We have even heard about balkanizing. We have heard about platforms, we have talked about polar vortexes. Mr. Kamen, you were about as close to talking to the American public as I have seen in this panel. One thing I have learned in my 4 years here in Congress is that we have trouble when we are confronted with more than one option, and I haven’t heard the option.

I have heard seven or eight different themes of where we should go, and I am really trying to get to a point of consensus on where we should go to develop grid reliability, because what we have not talked about is the public’s resistance. The public doesn’t want high-tension lines over their property, in their back yard. We haven’t talked about electromagnetic pulse, the threat to our grid reliability from that, because we know that is a serious challenge. We have talked about the fact that we can shut off someone else’s grid in another country, and they can shut off our grid. There was some mention of the EPA regulations shutting down some of our powerhouses, such that when we had this polar vortex, we came within 700 megawatts of having a brown-out last winter. That is really threatening. And then there is the age issue. I would like for you to explain, in terms that we don’t use here in the beltway, so that Mildred Schmidt can understand, what age has to do with it, because we have waterlines and sewer lines, and buildings and roads and bridges, that are far older than 25 years. Why should I be worried about electric grid power lines being 25 years old? I would like to hear: is there a consensus on where we should go, on where Congress should be putting its first priority in getting greater reliability or dependability, or are we just kind of talking abstract again? Is there a consensus?

Mr. KAMEN. We call it coopertition. We believe that if you apply technologies properly, everybody can win as they compete because the public gets the best that way. And I think what you have heard from everybody is the grid is getting older, it is getting, for various reasons — the environment, terrorism, cyber-attacks — more fragile.

You are hearing a lot of people adding a lot of new technologies.  Where there is a consensus should be that you have to get all the people that provide the net result to the public working together so that you don’t create an if-I-win-you-lose situation.

And the energy providers, the transmission or the generation—for instance, our partner for our little box is a major generator, NRG, yet they are now becoming one of the biggest suppliers of solar panels, and working with us on these small distributed boxes. In one perverse way, you could say they are undermining their core business, but, you know, like they always say, the railroads went away because they thought they were in the train business, not the transportation business.

And to your point, the public doesn’t care about CDMA and TDMA and Time Division—they care about a cell phone being more convenient than a landline. So if the public could have a simple appliance put into their home that already used infrastructure that we have great confidence in, because it is buried under the ground, like gas lines, like their oil, like their propane, and it could be made to work in parallel with solar and wind and the grid, because it sits at the intersection of all those things, somebody with an appliance like that would say, my costs went down because the waste heat from this thing is now my water system and my furnace, and I have more security and reliability because it is distributed, like getting a back-up generator free.  And the people that run the grid and all the other systems win as well because it deals with transient problems, is compatible with solar panels, is compatible with batteries, and is compatible with the big producers.

[ Mr. Kamen has made the case for distributed generation above, but as Mr. Ivy points out below, there is also a need for the opposite: states will need to run lines over a wider area to balance the instability of variable power. Texas is an island of power (ERCOT), but if wind penetration reaches 30% or more in the future, it is not a big enough island to cope with that much variability. ]

Mr. IVY. As renewable energy gets to be much more prolific in our industry, our ability to offload the variability is a way to help manage the system reliability. If any one of us believes that we are going to get up to 30, 40, 50 percent penetration and manage it all on our own, we are not drinking the right Kool-Aid. So I think it is very important that we start looking at [running lines from Texas to other states]. It is almost blasphemy to say that you are going to build transmission outside the State, but you may well get to the point where that needs to be the thing that you do just to be able to help manage the variability.

Thomas Siebel, Chairman and Chief Executive Officer, C3 Energy. You have an 800-pound gorilla in the room here: the cybersecurity problem. This is an opportunity where the Federal Government can play a role. The fact is any hostile government, or just 10 smart engineers from UC Berkeley, could bring down the grid from Boston to New York in 4 days. And if you bring in the leadership from Homeland Security, DHS, what they will say is that before we really do something about this, we are going to have the equivalent of 9/11. And then we will get serious and spend $100 billion a year on it.

DAVID  LOEBSACK, IOWA.   I am thinking in terms of a regulatory framework, to make sure that we integrate some of these things into the generation and provision of power to folks, because it was mentioned that we have to have the right regulatory framework, right policy, right regulatory approach. What is that approach?

Paul Nahi, Chief Executive Officer, Enphase Energy. I completely agree with Mr. Kamen that the right answer is distributed generation. In terms of the regulatory and policy changes that need to be adopted for that, we have to recognize that the potential for an adversarial relationship between the renewable energy companies and the utilities exists. It doesn’t have to. There are ways these companies can work together, and there are ways that we can help the utilities adapt to a business model that would provide for more distributed generation. Right now most of the distributed generation, not all but most, is done by third-party companies. There is no reason why the utilities themselves can’t take greater ownership and greater responsibility for putting on more of that distributed generation.

BILL JOHNSON, OHIO. Mr. Atkinson, your testimony suggests that the grid of the future will enable electrons to flow in two or even multiple directions. Why is having flexibility in power flows significant, and how can advanced grid technologies facilitate this?

Michael Atkinson, President, Alstom Grid, Inc., on Behalf of GridWise Alliance. In the traditional hub and spoke that was mentioned before, if you have an outage upstream, everybody downstream is out of power. When you have multi-directional flow, you get a chance to re-switch your system and reconfigure your grid on the fly, allowing all or some of the people to be brought back up immediately rather than suffer that outage. The technologies to do this exist today and continue to get better; the algorithms continue to improve.

Mr. JOHNSON.  As a chief information officer for a global manufacturing company, I had to be concerned we had steady power. A lot of folks don’t realize in today’s high-tech arena what a power outage, a power surge, and constantly changing power parameters do to solid state circuitry. It wreaks havoc.

Joel Ivy, General Manager, Lakeland Electric, on Behalf of American Public Power Association.  If there is an outage somewhere in the field in the original hub and spoke method, you are out if you are downstream of that. There are high-speed switches that are sensing where these short-circuits are in the system, and talking to each other to try to figure out how to isolate it. And then the goal is to have an outage isolated to the smallest area that you can possibly have it in. So then that allows us to dispatch somebody straight to where the problem is, because normally it is lightning, it is trees, it is an animal, something that can be cleared up very quickly, we can get the lights back on very, very quickly.
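The isolation logic Mr. Ivy describes can be sketched in the abstract. The radial-feeder model and segment names below are my illustration of the idea, not Lakeland Electric’s actual switching scheme.

```python
# Sketch of fault isolation on a radial feeder, per Ivy's description:
# open the switches on either side of the faulted segment so the outage
# is confined to the smallest possible area. Illustrative model only.

def isolate(segments, faulted):
    """Return the segments left energized after isolating a fault.

    segments: ordered list of segment names from substation outward.
    faulted:  name of the segment where the short-circuit was sensed.
    """
    i = segments.index(faulted)
    upstream = segments[:i]         # still fed from the substation
    downstream = segments[i + 1:]   # re-fed by closing a tie switch
                                    # (the multi-directional flow above)
    return upstream + downstream    # only the faulted segment stays dark

feeder = ["seg1", "seg2", "seg3", "seg4"]
print(isolate(feeder, "seg3"))      # ['seg1', 'seg2', 'seg4']
```

The downstream restoration step is what requires the multi-directional flow Mr. Atkinson describes: without a tie switch to a second source, everything past the fault would stay out.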

MARK WAYNE MULLIN, OKLAHOMA. Mr. Kamen, you made a point in your written testimony that more than 50% of the generating capacity in the U.S. is 30 years old, and that 70% of the 280,000 miles of transmission line is more than 25 years old. What do you feel your company, as well as other companies like yours bring to the table in addressing this issue?

Mr. KAMEN. I think that, like with a used car, you reach a point where it is cheaper to buy a new one than to keep fixing the old one. Suppose the proper incentives were put before the people that produce the energy, transmit the energy, distribute the energy, and supply it to the end user, so that they had a clean piece of paper and could invest their money in alternatives that fix the problems you’ve heard about. There are only a few big hub-and-spoke plants, they are easy to take down, and it is very hard to make them self-healing. But if you could have thousands and thousands of small, locally operated and controlled units close to where you need the electricity, units whose waste heat you would get as a bonus to replace furnaces and heat water, you would be much safer against anybody taking a [centralized electric] system down. It might require more sophisticated controls and interaction, but as we have heard, that is becoming easier and easier. So if, instead of keeping these very, very old systems up and operating, which they sort of have no other choice but to do, you allowed them to transition to a new alternative technology, they would do better.

Mr. MULLIN. What is keeping the companies from being able to do this?

Mr. KAMEN. From my understanding, when I have talked to people that do generation or do transmission — it boggles my mind — I have heard CEOs of major energy-related companies say, I am not allowed to do transmission, I [am only allowed to] generate, or I am not allowed to generate, I [can only] do transmission. I can’t put your box somewhere there.

MORGAN GRIFFITH, VIRGINIA. I live on a cul-de-sac with 13 houses, [so how would your generators work in my neighborhood?]

Mr. KAMEN. The average American home consumes less than 2 kilowatts. So I would put a cluster of four 10 kilowatt units on a pad to handle your neighborhood. If one of them went down, there is enough redundancy for the other three to keep everybody happy, and at their convenience, somebody would fix the one that went down.  [And after a big storm] there is another advantage — we run on any fuel, and typically your neighborhood has buried lines in it that are bringing natural gas. You probably have buried tanks with heating oil or propane. Those things are way less susceptible to problems than wires running through all the trees that get taken down by ice or wind or hurricanes, and these boxes then are so close to where you need them that the rest of the system going down hundreds of miles away isn’t going to affect you, and again, they are so close to your loads that you can also take their ‘‘waste heat’’ and turn it into your heat and hot water.
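The sizing logic behind Mr. Kamen’s answer works out numerically. The sketch below restates his “13 houses, four 10-kilowatt units” example; framing the redundancy as an N-1 check is my illustration, not his stated design method.

```python
# Sizing a cluster of small generators for a cul-de-sac, per Kamen's
# example. The N-1 redundancy check is an illustrative sketch only.

homes           = 13
avg_kw_per_home = 2.0    # "the average American home consumes less than 2 kW"
unit_kw         = 10.0
units           = 4

load_kw     = homes * avg_kw_per_home    # ~26 kW for the whole neighborhood
capacity_kw = units * unit_kw            # 40 kW installed

# N-1 criterion: can the cluster still carry the load with one unit down?
n_minus_1_kw = (units - 1) * unit_kw     # 30 kW remaining
print(f"load {load_kw} kW, N-1 capacity {n_minus_1_kw} kW:",
      "OK" if n_minus_1_kw >= load_kw else "short")
```

With one unit failed, the remaining 30 kW still exceeds the roughly 26 kW average load, which is the redundancy he describes, though peak loads above the 2 kW average would narrow that margin.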

We right now run on natural gas, propane, diesel fuel, gasoline. The device is actually running on something that looks like the burner in your hot water heater, which is why it doesn’t make a lot of noise. An engine — Diesel cycle, Rankine cycle, Otto cycle — needs a very specific kind of fuel, because the fuel touches every part of the inside of the engine: it gets atomized, a spark comes in, compression happens. An engine typically has a very, very selective appetite for fuel, but your hot water heater will keep water hot if there is a flame under it, and it doesn’t really care what the fuel is. We are running a system that looks much more similar to your hot water heater, but we turn some of that energy into electricity instead of heat.

Mr. GREEN. We have refineries and chemical plants, and they are always looking for ways to run those plants as cheaply as possible. Some of them have probably cut their fuel requirements over the years because of cogeneration and other things; in fact, I don’t think we have a chemical plant that doesn’t have a cogen facility. But do you expect industrial and consumer demand to increase over the next few years? We can’t save our way out of the power demand.



Posted in Congressional Record U.S., Distributed Generation, Grid instability

Why facts don’t change our mind

[ Below are excerpts from this article. Longish descriptions of various studies at Stanford and elsewhere lead to conclusions such as: once formed, impressions are remarkably perseverant; and even after the evidence for their beliefs has been totally refuted, people failed to make appropriate revisions in these beliefs, a failure that was quite impressive. ]

Elizabeth Kolbert. February 27, 2017. “Why Facts Don’t Change Our Minds: New discoveries about the human mind show the limitations of reason.” The New Yorker.

Any graduate student with a clipboard can demonstrate that reasonable-seeming people are often totally irrational. Rarely has this insight seemed more relevant than it does right now. Still, an essential puzzle remains: How did we come to be this way?

In a new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context.

Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to cooperate. Cooperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.

Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them.

If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hypersociability.”

Mercier and Sperber point out that humans aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.

This is because reason evolved to prevent us from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren’t the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments.

Among the many, many issues our forebears didn’t worry about were the deterrent effects of capital punishment and the ideal attributes of a firefighter. Nor did they have to contend with fabricated studies, or fake news, or Twitter. It’s no wonder, then, that today reason often seems to fail us. As Mercier and Sperber write, “This is one of many cases in which the environment changed too quickly for natural selection to catch up.”

People believe that they know way more than they actually do. What allows us to persist in this belief is other people. In the case of a toilet, someone else designed it so that I can operate it easily. This is something humans are very good at. We’ve been relying on one another’s expertise ever since we figured out how to hunt together, which was probably a key development in our evolutionary history. So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own understanding ends and others’ begins.

“One implication of the naturalness with which we divide cognitive labor,” they write, is that there’s “no sharp boundary between one person’s ideas and knowledge” and “those of other members” of the group.

This borderlessness, or, if you prefer, confusion, is also crucial to what we consider progress. As people invented new tools for new ways of living, they simultaneously created new realms of ignorance; if everyone had insisted on, say, mastering the principles of metalworking before picking up a knife, the Bronze Age wouldn’t have amounted to much. When it comes to new technologies, incomplete understanding is empowering.

Where it gets us into trouble, according to Sloman and Fernbach, is in the political domain. It’s one thing for me to flush a toilet without knowing how it operates, and another for me to favor (or oppose) an immigration ban without knowing what I’m talking about. Sloman and Fernbach cite a survey conducted in 2014, not long after Russia annexed the Ukrainian territory of Crimea. Respondents were asked how they thought the U.S. should react, and also whether they could identify Ukraine on a map. The farther off base they were about the geography, the more likely they were to favor military intervention. (Respondents were so unsure of Ukraine’s location that the median guess was wrong by eighteen hundred miles, roughly the distance from Kiev to Madrid.)

Surveys on many other issues have yielded similarly dismaying results. “As a rule, strong feelings about issues do not emerge from deep understanding,” Sloman and Fernbach write. And here our dependence on other minds reinforces the problem. If your position on, say, the Affordable Care Act is baseless and I rely on it, then my opinion is also baseless. When I talk to Tom and he decides he agrees with me, his opinion is also baseless, but now that the three of us concur we feel that much more smug about our views. If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the Trump Administration.

This is how a community of knowledge can become dangerous. In a study conducted in 2012, people were asked their position on questions like: Should there be a single-payer health-care system? Or merit-based pay for teachers? Participants were asked to rate their positions depending on how strongly they agreed or disagreed with the proposals. Next, they were instructed to explain, in as much detail as they could, the impacts of implementing each one. Most people at this point ran into trouble. Asked once again to rate their views, they ratcheted down the intensity, so that they either agreed or disagreed less vehemently.

If we—or our friends or the pundits on CNN—spent less time pontificating and more trying to work through the implications of policy proposals, we’d realize how clueless we are and moderate our views. This may be the only form of thinking that will shatter the illusion of explanatory depth and change people’s attitudes.

One way to look at science is as a system that corrects for people’s natural inclinations. In a well-run laboratory, there’s no room for bias; the results have to be reproducible in other laboratories, by researchers who have no motive to confirm them. And this, it could be argued, is why the system has proved so successful. At any given moment, a field may be dominated by squabbles, but, in the end, the methodology prevails. Science moves forward, even as we remain stuck in place.

In “Denying to the Grave: Why We Ignore the Facts That Will Save Us” (Oxford), Jack Gorman, a psychiatrist, and his daughter, Sara Gorman, a public-health specialist, probe the gap between what science tells us and what we tell ourselves. Their concern is with those persistent beliefs which are not just demonstrably false but also potentially deadly, like the conviction that vaccines are hazardous. Of course, what’s hazardous is not being vaccinated; that’s why vaccines were created in the first place. “Immunization is one of the triumphs of modern medicine,” the Gormans note. But no matter how many scientific studies conclude that vaccines are safe, and that there’s no link between immunizations and autism, anti-vaxxers remain unmoved. (They can now count on their side—sort of—Donald Trump, who has said that, although he and his wife had their son, Barron, vaccinated, they refused to do so on the timetable recommended by pediatricians.)

The Gormans, too, argue that ways of thinking that now seem self-destructive must at some point have been adaptive. And they, too, dedicate many pages to confirmation bias, which, they claim, has a physiological component. They cite research suggesting that people experience genuine pleasure—a rush of dopamine—when processing information that supports their beliefs. “It feels good to ‘stick to our guns’ even if we are wrong,” they observe.

The Gormans don’t just want to catalogue the ways we go wrong; they want to correct for them. There must be some way, they maintain, to convince people that vaccines are good for kids, and handguns are dangerous. (Another widespread but statistically insupportable belief they’d like to discredit is that owning a gun makes you safer.) But here they encounter the very problems they have enumerated. Providing people with accurate information doesn’t seem to help; they simply discount it. Appealing to their emotions may work better, but doing so is obviously antithetical to the goal of promoting sound science. “The challenge that remains,” they write toward the end of their book, “is to figure out how to address the tendencies that lead to false scientific belief.”

“The Enigma of Reason,” “The Knowledge Illusion,” and “Denying to the Grave” were all written before the November election. And yet they anticipate Kellyanne Conway and the rise of “alternative facts.” These days, it can feel as if the entire country has been given over to a vast psychological experiment being run either by no one or by Steve Bannon. Rational agents would be able to think their way to a solution. But, on this matter, the literature is not reassuring.

Posted in Critical Thinking

Peak Cobalt

Preface. I don’t think electric vehicles are going to happen, for reasons specified in these related articles. I’ve left out of this list the posts in Peak Everything dealing with other important minerals, rare earth elements, and lithium.

Alice Friedemann, author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: KunstlerCast 253, KunstlerCast 278, Peak Prosperity

Related Articles



A shortage of cobalt could create a bottleneck for electric vehicles. Most EVs rely on lithium batteries, prompting concern about lithium supplies (see the Peak Lithium posts here). Many say there are 400 years of lithium left, but much of it is in minerals that will be very expensive to extract. We’re getting the easy stuff now.

The battery industry currently uses 42% of cobalt production. The remaining 58% goes to diverse industrial and military applications, such as jet engines, that rely exclusively on the material. Those industries can afford to pay regardless of the price, which battery makers cannot.

But guess what? Lithium batteries also need cobalt. The best lithium battery cathodes (positive electrodes) all contain cobalt.

If 10 million EVs are sold in 2025, the cobalt required would be 330,000 metric tons, but the supply then is expected to be at most 290,000 metric tons. In 2015, 124,000 metric tons were produced, so production must somehow nearly triple.
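The shortfall arithmetic in that paragraph works out as follows; the sketch below is a simple restatement of the figures cited above, not new data.

```python
# Cobalt supply-gap arithmetic from the figures cited above.

demand_2025_t = 330_000   # metric tons needed if 10 million EVs sell in 2025
supply_2025_t = 290_000   # expected supply in 2025, at most
supply_2015_t = 124_000   # actual 2015 production

gap_t  = demand_2025_t - supply_2025_t   # shortfall even at best-case supply
growth = demand_2025_t / supply_2015_t   # required multiple of 2015 output

print(f"shortfall: {gap_t:,} t; production must grow ~{growth:.1f}x")
```

So even the best-case 2025 supply leaves a 40,000-ton gap, and “triple the current supply” is slightly generous: the implied multiple over 2015 output is about 2.7×.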

There are seven major obstacles to overcome:

  1. Cobalt is a by-product of copper and nickel mining, so its production depends on the demand for those metals. But copper and nickel prices are plunging, making many deposits uneconomic.
  2. Even if new primary cobalt mines come online, exploration, licensing and development take years and billions of dollars.
  3. Few countries produce cobalt: 60% comes from the politically unstable Democratic Republic of the Congo, which holds 49% of cobalt reserves. China has a controlling interest in the main mine producing cobalt, and in two others, so at some point it might stop selling cobalt to keep its own electric car industry going.
  4. If cobalt prices go up, the price of EVs will go up, because lithium batteries use a lot of cobalt — the Tesla 85 kWh battery needs 17.6 pounds (8 kg) of cobalt.
  5. Higher EV prices would in turn drive the number sold down.
  6. If there’s a global recession, depression, or crash, the demand for copper and nickel will dry up, as will the capital to build new cobalt mines.
  7. It’s not ethical: a large share of the Congo’s cobalt exports comes from “artisanal” mines — those dug by locals under the control of various strongmen. Child labor and harsh exploitation are rife, according to an Amnesty International report released last month.

Recycling lithium batteries is rarely done; it’s too complicated. Even if recycling rates were higher and cheaper processes (yet to be invented) were used, recycling wouldn’t make a dent until 10 or more years after the mass-market penetration of EVs.

Maybe the situation will be less dire if aluminum or something cheaper and more abundant can be found to substitute for cobalt, but we’ve been trying to improve batteries for 205 years and they are only 5 times more powerful, so this can’t be counted on.  Each element has very specific properties that can’t be substituted with something else.

This is why these applications don’t use common, cheap, abundant minerals, and instead use the following, most of which are scarcer than cobalt:

Rare Earth metals are used in many products:

  1. Magnets (Neodymium, Praseodymium, Terbium, Dysprosium): Motors, disc drives, MRI, power generation, microphones and speakers, magnetic refrigeration
  2. Metallurgical alloys (Lanthanum, Cerium, Praseodymium, Neodymium, Yttrium): NimH batteries, fuel cells, steel, lighter flints, super alloys, aluminum/magnesium
  3. Phosphors (Europium, Yttrium, Terbium, Neodymium, Erbium, Gadolinium, Cerium, Praseodymium): display phosphors CRT, LPD, LCD; fluorescent lighting, medical imaging, lasers, fiber optics
  4. Glass and Polishing (Cerium, Lanthanum, Praseodymium, Neodymium, Gadolinium, Erbium, Holmium): polishing compounds, decolorizers, UV resistant glass, X-ray imaging
  5. Catalysts (Lanthanum, Cerium, Praseodymium, Neodymium): petroleum refining, catalytic converter, diesel additives, chemical processing, industrial pollution scrubbing
  6. Other applications:
  • Nuclear (Europium, Gadolinium, Cerium, Yttrium, Samarium, Erbium)
  • Defense (Neodymium, Praseodymium, Dysprosium, Terbium, Europium, Yttrium, Lanthanum, Lutetium, Scandium, Samarium)
  • Water Treatment
  • Pigments
  • Fertilizers
  • Fuel cells (solid oxide fuel cells use lanthanum, cerium, praseodymium)

8 Rare Earth Metals are used in hybrid electric vehicles. Source: REE applications in a hybrid electric vehicle, Molycorp Inc., 2010.

  1. Cerium: UV cut glass, Glass and mirrors, polishing powder, LCD screen, catalytic converter, hybrid NiMH battery, Diesel fuel additive
  2. Dysprosium: Hybrid electric motor and generator
  3. Europium: LCD screen
  4. Lanthanum: Catalytic Converter, Hybrid NiMH battery, diesel fuel additive
  5. Neodymium: magnets in 25+ electric motors throughout vehicle, Headlight Glass, Hybrid electric motor and generator
  6. Praseodymium: Hybrid electric motor and generator
  7. Terbium: Hybrid electric motor and generator
  8. Yttrium: LCD screen, component sensors


Bershidsky, L. Oct 17, 2017. Electric Car Makers have an Africa problem. Automakers find it hard to lock in the price of cobalt for batteries. Bloomberg.

Friedemann, A. 2014. High-tech can’t last: Limited minerals and metals essential for wind, solar, microchips, cars, and other high-tech gadgets.

Gandon, S. Jan 1, 2017. No Cobalt, no Tesla?

Patel, P. January 2018. Could Cobalt choke our vehicle future? Demand for the metal, which is critical to EV batteries, could soon outstrip supply. Scientific American.



The Lights are going out in the Middle East (New Yorker)

Robin Wright. May 20, 2017. The lights are going out in the Middle East.  New Yorker.

Six months ago, I was in the National Museum in Beirut, marvelling at two Phoenician sarcophagi among the treasures from ancient Middle Eastern civilizations, when the lights suddenly went out. A few days later, I was in the Bekaa Valley, whose towns hadn’t had power for half the day, as on many days. More recently, I was in oil-rich Iraq, where electricity was intermittent, at best. “One day we’ll have twelve hours. The next day no power at all,” Aras Maman, a journalist, told me, after the power went off in the restaurant where we were waiting for lunch. In Egypt, the government has appealed to the public to cut back on the use of light bulbs and appliances and to turn off air-conditioning even in sweltering heat to prevent wider outages. Parts of Libya, which has the largest oil reserves in Africa, have gone weeks without power this year. In the Gaza Strip, two million Palestinians get only two to four hours of electricity a day, after yet another cutback in April.

The world’s most volatile region faces a challenge that doesn’t involve guns, militias, warlords, or bloodshed, yet is also destroying societies. The Middle East, though energy-rich, no longer has enough electricity. From Beirut to Baghdad, tens of millions of people now suffer daily outages, with a crippling impact on businesses, schools, health care, and other basic services, including running water and sewerage. Little works without electricity.

“The social, economic and political consequences of this impending energy crisis should not be underestimated,” the U.N. special coördinator for the Middle East peace process, Nickolay Mladenov, warned last month, about the Gaza crisis. The same applies across the region.

Public fury over rampant outages has sparked protests. In January, in one of the largest demonstrations since Hamas took control in Gaza a decade ago, ten thousand Palestinians, angered by the lack of power during a frigid winter, hurled stones and set tires ablaze outside the electricity company. Iraq has the world’s fifth-largest oil reserves, but, during the past two years, repeated anti-government demonstrations have erupted over blackouts that are rarely announced in advance and are of indefinite duration. It’s one issue that unites fractious Sunnis in the west, Shiites in the arid south, and Kurds in the mountainous north. In the midst of Yemen’s complex war, hundreds dared to take to the streets of Aden in February to protest prolonged outages. In Syria, supporters of President Bashar al-Assad in Latakia, the dynasty’s main stronghold, who had remained loyal for six years of civil war, drew the line over electricity. They staged a protest in January over a cutback to only one hour of power a day.

Over the past eight months, I’ve been struck by people talking less about the prospects of peace, the dangers of ISIS, or President Trump’s intentions in the Middle East than about their own exhaustion from the trials of daily life. Families recounted groggily getting up in the middle of the night when power abruptly comes on in order to do laundry, carry out business transactions on computers, charge phones, or just bathe and flush toilets, until electricity, just as unpredictably, goes off again. Some families have stopped taking elevators; their terrified children have been stuck too often between floors. Students complained of freezing classrooms in winter, trying to study or write papers without computers, and reading at night by candlelight. The challenges will soon increase as the demands for power—and air-conditioning—surge, as summer temperatures reach a hundred and twenty-five degrees.

The reasons for these outages vary. With the exception of the Gulf states, infrastructure is old or inadequate in many of the twenty-three Arab countries. The region’s disparate wars, past and present, have damaged or destroyed electrical grids. Some governments, even in Iraq, can’t afford the cost of fuelling plants around the clock. Epic corruption has compounded physical challenges. Politicians have delayed or prevented solutions if their cronies don’t get contracts to fuel, maintain, or build power plants.

The movement of refugees has further strained equipment. Lebanon, Jordan, Iraq, and Egypt, already struggling, have each taken in hundreds of thousands of Syrian refugees since 2011. The frazzled governor of Erbil, Nawzad Hadi Mawlood, told me that Iraq’s northern Kurdistan—home to four million Kurds—has taken in almost two million displaced Iraqis who fled the Islamic State since 2014, as well as more than a hundred thousand refugees fleeing the war in neighboring Syria since 2011. Kurdistan no longer has the facilities, fuel, or funds to provide power. It averages between nine and ten hours a day, a senior technician in Kurdistan’s power company told me, although it’s worse in other parts of Iraq.

In Erbil, as in cities across the Middle East and North Africa, the only alternatives are noisy and polluting generators that cost three to ten times state rates. “I have no generator,” the technician noted.


Why it is futile to think that Wind could ever make a significant contribution to energy supplies

Matt Ridley. May 15, 2017. Wind turbines are neither clean nor green and they provide zero global energy. Even after 30 years of huge subsidies, it provides about zero energy. The Spectator.

The Global Wind Energy Council recently released its latest report, excitedly boasting that ‘the proliferation of wind energy into the global power market continues at a furious pace, after it was revealed that more than 54 gigawatts of clean renewable wind power was installed across the global market last year’.

You may have got the impression from announcements like that, and from the obligatory pictures of wind turbines in any BBC story or airport advert about energy, that wind power is making a big contribution to world energy today. You would be wrong. Its contribution is still, after decades — nay centuries — of development, trivial to the point of irrelevance.

Even put together, wind and photovoltaic solar are supplying less than 1 per cent of global energy demand. From the International Energy Agency’s 2016 Key Renewables Trends, we can see that wind provided 0.46 per cent of global energy consumption in 2014, and solar and tide combined provided 0.35 per cent. Remember this is total energy, not just electricity, which is less than a fifth of all final energy, the rest being the solid, gaseous, and liquid fuels that do the heavy lifting for heat, transport and industry.

[One critic suggested I should have used the BP numbers instead, which show wind achieving 1.2% in 2014 rather than 0.46%. I chose not to do so mainly because that number is arrived at by falsely exaggerating the actual output of wind farms threefold in order to take into account that wind farms do not waste two-thirds of their energy as heat; also the source is an oil company, which would have given green blobbers an excuse to dismiss it, whereas the IEA is unimpeachable. But it’s still a very small number, so it makes little difference.]

Such numbers are not hard to find, but they don’t figure prominently in reports on energy derived from the unreliables lobby (solar and wind). Their trick is to hide behind the statement that close to 14 per cent of the world’s energy is renewable, with the implication that this is wind and solar. In fact the vast majority — three quarters — is biomass (mainly wood), and a very large part of that is ‘traditional biomass’; sticks and logs and dung burned by the poor in their homes to cook with. Those people need that energy, but they pay a big price in health problems caused by smoke inhalation.

Even in rich countries playing with subsidised wind and solar, a huge slug of their renewable energy comes from wood and hydro, the reliable renewables. Meanwhile, world energy demand has been growing at about 2 per cent a year for nearly 40 years. Between 2013 and 2014, again using International Energy Agency data, it grew by just under 2,000 terawatt-hours.

If wind turbines were to supply all of that growth but no more, how many would need to be built each year? The answer is nearly 350,000, since a two-megawatt turbine can produce about 0.005 terawatt-hours per annum. That’s one-and-a-half times as many as have been built in the world since governments started pouring consumer funds into this so-called industry in the early 2000s.

At a density of, very roughly, 50 acres per megawatt, typical for wind farms, that many turbines would require a land area [half the size of] the British Isles, including Ireland. Every year. If we kept this up for 50 years, we would have covered every square mile of a land area [half] the size of Russia with wind farms. Remember, this would be just to fulfil the new demand for energy, not to displace the vast existing supply of energy from fossil fuels, which currently supply 80 per cent of global energy needs. [para corrected from original.]
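Ridley’s turbine count and land-area figures can be checked with a quick sketch. All inputs are the figures quoted above; note that dividing them directly yields 400,000 turbines a year, somewhat above the article’s “nearly 350,000,” which presumably assumes a slightly higher per-turbine output.

```python
# Back-of-envelope check of the wind build-out arithmetic in the text.

annual_demand_growth_twh = 2_000  # IEA: world energy demand growth, 2013-2014
output_per_turbine_twh = 0.005    # ~2 MW turbine at a typical load factor

turbines_per_year = annual_demand_growth_twh / output_per_turbine_twh

# Land footprint at roughly 50 acres per megawatt of capacity:
acres = turbines_per_year * 2 * 50  # capacity in MW times acres per MW
square_miles = acres / 640          # 640 acres per square mile

print(f"{turbines_per_year:,.0f} two-megawatt turbines per year")
print(f"~{square_miles:,.0f} square miles of wind farm per year")
```

At roughly 62,500 square miles a year, this is indeed about half the land area of the British Isles (some 120,000 square miles).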

Do not take refuge in the idea that wind turbines could become more efficient. There is a limit to how much energy you can extract from a moving fluid, the Betz limit, and wind turbines are already close to it. Their effectiveness (the load factor, to use the engineering term) is determined by the wind that is available, and that varies at its own sweet will from second to second, day to day, year to year.

As machines, wind turbines are pretty good already; the problem is the wind resource itself, and we cannot change that. It’s a fluctuating stream of low–density energy. Mankind stopped using it for mission-critical transport and mechanical power long ago, for sound reasons. It’s just not very good.

As for resource consumption and environmental impacts, the direct effects of wind turbines — killing birds and bats, sinking concrete foundations deep into wild lands — is bad enough. But out of sight and out of mind is the dirty pollution generated in Inner Mongolia by the mining of rare-earth metals for the magnets in the turbines. This generates toxic and radioactive waste on an epic scale, which is why the phrase ‘clean energy’ is such a sick joke and ministers should be ashamed every time it passes their lips.

It gets worse. Wind turbines, apart from the fibreglass blades, are made mostly of steel, with concrete bases. They need about 200 times as much material per unit of capacity as a modern combined cycle gas turbine. Steel is made with coal, not just to provide the heat for smelting ore, but to supply the carbon in the alloy. Cement is also often made using coal. The machinery of ‘clean’ renewables is the output of the fossil fuel economy, and largely the coal economy.

A two-megawatt wind turbine weighs about 250 tonnes, including the tower, nacelle, rotor and blades. Globally, it takes about half a tonne of coal to make a tonne of steel. Add another 25 tonnes of coal for making the cement and you’re talking 150 tonnes of coal per turbine. Now if we are to build 350,000 wind turbines a year (or a smaller number of bigger ones), just to keep up with increasing energy demand, that will require 50 million tonnes of coal a year. That’s about half the EU’s hard coal–mining output.
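The coal arithmetic in this paragraph can be restated in a few lines (all inputs are the figures just given):

```python
# Coal required per turbine and per year, restated from the text.

turbine_mass_t = 250        # tonnes: tower, nacelle, rotor, and blades
coal_per_tonne_steel = 0.5  # tonnes of coal per tonne of steel, global average
cement_coal_t = 25          # additional coal for making the concrete base

coal_per_turbine_t = turbine_mass_t * coal_per_tonne_steel + cement_coal_t

turbines_per_year = 350_000
annual_coal_mt = coal_per_turbine_t * turbines_per_year / 1e6

print(f"{coal_per_turbine_t:.0f} tonnes of coal per turbine")    # 150
print(f"~{annual_coal_mt:.1f} million tonnes of coal per year")  # 52.5
```

The treatment of the full 250-tonne mass as steel follows the article’s own simplification; the blades are fibreglass, so this slightly overstates the steel share.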

The point of running through these numbers is to demonstrate that it is utterly futile, on a priori grounds, even to think that wind power can make any significant contribution to world energy supply, let alone to emissions reductions, without ruining the planet. As the late David MacKay pointed out years back, the arithmetic is against such unreliable renewables.

MacKay, former chief scientific adviser to the Department of Energy and Climate Change, said in the final interview before his tragic death last year that the idea that renewable energy could power the UK is an “appalling delusion,” for the simple reason that there is not enough land.


Richard Heinberg on why low oil prices do not mean there is plenty of oil, EROI, collapse

[ Yet another wise, thoughtful, and wide-ranging essay from my favorite writer of the many facets of a civilization about to decline as it is starved of the fossil fuels that feed it.  Although the topics are quite varied, Heinberg weaves them into a cloth that is more than the sum of the parts in explaining how the future may unfold.

Alice Friedemann  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report ]

Richard Heinberg. 2017-4-25. Juggling Live Hand Grenades. Post Carbon Institute.

Here are a few useful recent contributions to the global sustainability conversation, with relevant comments interspersed. Toward the end of this essay I offer some general thoughts about converging challenges to the civilizational system.

  1. “Oil Extraction, Economic Growth, and Oil Price Dynamics,” by Aude Illig and Ian Schiller. BioPhysical Economics and Resource Quality, March 2017, 2:1.

Once upon a time it was assumed that as world oil supplies were depleted and burned, prices would simply march upward until they either crashed the economy or incentivized both substitute fuels and changes to systems that use petroleum (mainly transportation). With a little hindsight—that is, in view of the past decade of extreme oil price volatility—it’s obvious that that assumption was simplistic and useless for planning purposes. Illig and Schiller’s paper is an effort to find a more realistic and rigorously supported (i.e., with lots of data and equations) explanation for the behavior of oil prices and the economy as the oil resource further depletes.

The authors find, in short, that before oil production begins to decline, high prices incentivize new production without affecting demand too much, while low prices incentivize rising demand without reducing production too much. The economy grows. It’s a self-balancing, self-regulating system that’s familiar territory to every trained economist.

However, because oil is a key factor of economic production, a depleting non-renewable resource, and is hard to replace, conventional economic theory does a lousy job describing the declining phase of extraction. It turns out that once depletion has proceeded to the point where extraction rates start to decline, the relationship between oil prices and the economy shifts significantly. Now high prices kill demand without doing much to incentivize new production that’s actually profitable, while low prices kill production without doing much to increase demand. The system becomes self-destabilizing, the economy stagnates or contracts, the oil industry invests less in future production capacity, and oil production rates begin to fall faster and faster.

The authors conclude:

Our analysis and empirical evidence are consistent with oil being a fundamental quantity in economic production. Our analysis indicates that once the contraction period for oil extraction begins, price dynamics will accelerate the decline in extraction rates: extraction rates decline because of a decrease in profitability of the extraction business. . . . We believe that the contraction period in oil extraction has begun and that policy makers should be making contingency plans.

As I was reading this paper, the following thoughts crossed my mind. Perhaps the real deficiency of the peak oil “movement” was not its inability to forecast the exact timing of the peak (at least one prominent contributor to the discussion, petroleum geologist Jean Laherrère, made in 2002 what could turn out to have been an astonishingly accurate estimate for the global conventional oil peak in 2010, and global unconventional oil peak in 2015). Rather, its shortcoming was twofold: 1) it didn’t appreciate the complexity of the likely (and, as noted above, poorly understood) price-economy dynamics that would accompany the peak, and 2) it lacked capacity to significantly influence policy makers. Of course, the purpose of the peak oil movement’s efforts was not to score points with forecasting precision but to change the trajectory of society so that the inevitable peak in world oil production, whenever it occurred, would not result in economic collapse. The Hirsch Report of 2005 showed that that change of trajectory would need to start at least a decade before the peak in order to achieve the goal of averting collapse. As it turned out, the peak oil movement did provide society with a decade of warning, but there was no trajectory change on the part of policy makers. Instead, many pundits clouded the issue by spending that crucial decade deriding the peak oil argument because of insufficient predictive accuracy on the part of some of its proponents. And now? See this article:

  2. “Saudi Aramco Chief Warns of Looming Oil Shortage,” by Anjli Raval and Ed Crooks. Financial Times, April 14, 2017.

The message itself should be no surprise. Everyone who’s been paying attention to the oil industry knows that investments in future production capacity have fallen dramatically in the past three years as prices have languished. It’s important to have some longer-term historical perspective, though: today’s price of $50 per barrel is actually a high price for the fuel in the post-WWII era, even taking inflation into account. The industry’s problem isn’t really that prices are too low; it’s that the costs of finding and producing the remaining oil are too high. In any case, with prices not high enough to generate profits, the industry has no choice but to cut back on investments, and that means production will soon start to lag. Again, anyone who’s paying attention knows this.

What’s remarkable is hearing the head of Saudi Arabia’s state energy company convey the news. Here’s an excerpt from the article:

Amin Nasser, chief executive of Saudi Aramco, the world’s largest oil producing company, said on Friday that 20 [million] barrels a day in future production capacity was required to meet demand growth and offset natural field declines in the coming years. “That is a lot of production capacity, and the investments we now see coming back—which are mostly smaller and shorter term—are not going to be enough to get us there,” he said at the Columbia University Energy Summit in New York. Mr. Nasser said that the oil market was getting closer to rebalancing supply and demand, but the short-term market still points to a surplus as U.S. drilling rig levels rise and growth in shale output returns. Even so, he said it was not enough to meet supplies required in the coming years, which were “falling behind substantially.” About $1 [trillion] in oil and gas investments had been deferred and cancelled since the oil downturn began in 2014.

Mr. Nasser went on to point out that conventional oil discoveries have more than halved during the past four years.

The Saudis have never promoted the notion of peak oil. Their mantra has always been, “supplies are sufficient.” Now their tune has changed—though Mr. Nasser’s statement does not mention peak oil by name. No doubt he would argue that resources are plentiful; the problem lies with prices and investment levels. Yes, of course. Never mention depletion; that would give away the game.

  3. “How Does Energy Resource Depletion Affect Prosperity? Mathematics of a Minimum Energy Return on Investment (EROI),” by Adam R. Brandt. BioPhysical Economics and Resource Quality, (2017) 2:2.

Adam Brandt’s latest paper follows on work by Charlie Hall and others, inquiring whether there is a minimum energy return on investment (EROI) required in order for industrial societies to function. Unfortunately EROI calculations tend to be slippery because they depend upon system boundaries. Draw a close boundary around an energy production system and you are likely to arrive at a higher EROI calculation; draw a wide boundary, and the EROI ratio will be lower. That’s why some EROI calculations for solar PV are in the range of 20:1 while others are closer to 2:1. That’s a very wide divergence, with enormous practical implications.

In his paper, Brandt largely avoids the boundary question and therefore doesn’t attempt to come up with a hard number for a minimum societal EROI. What he does is to validate the general notion of minimum EROI; he also notes that society’s overall EROI has been falling during the last decade. Brandt likewise offers support for the notion of an EROI “cliff”: that is, if EROI is greater than 10:1, the practical impact of an incremental rise or decline in the ratio is relatively small; however, if EROI is below 10:1, each increment becomes much more significant. This also supports Ugo Bardi’s idea of the “Seneca cliff,” according to which societal decline following a peak in energy production, consumption, and EROI may be far quicker than the build-up to the peak.
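As an illustration of the cliff (my sketch, not a calculation from Brandt’s paper): the fraction of gross energy left over for society is 1 − 1/EROI, which barely moves above about 10:1 and collapses below it.

```python
# Illustrative net-energy arithmetic behind the EROI "cliff" (a sketch,
# not taken from Brandt's paper): society keeps 1 - 1/EROI of gross output.

def net_energy_fraction(eroi):
    """Share of gross energy output not consumed by the energy sector itself."""
    return 1 - 1 / eroi

for eroi in (50, 20, 10, 5, 2, 1.5):
    print(f"EROI {eroi:>4}:1 -> {net_energy_fraction(eroi):.0%} net energy")
```

Dropping from 50:1 to 10:1 costs society only eight percentage points of net energy; dropping from 10:1 to 2:1 costs forty.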

  4. “Burden of Proof: A Comprehensive Review of the Feasibility of 100% Renewable-Electricity Systems,” by B.P. Heard, B.W. Brook, T.M.L. Wigley, and C.J.A. Bradshaw. Renewable and Sustainable Energy Reviews, Volume 76, September 2017, Pages 1122–1133.

This study largely underscores what David Fridley and I wrote in our recent book Our Renewable Future. None of the plans reviewed here (including those by Mark Jacobson and co-authors) passes muster. Clearly, it is possible to reduce fossil fuels while partly replacing them with wind and solar, using current fossil generation capacity as a fallback (this is already happening in many countries). But getting to 100 percent renewables will be very difficult and expensive. It will ultimately require a dramatic reduction in energy usage, and a redesign of entire systems (food, transport, buildings, and manufacturing), as we detail in our book.

  5. “Social Instability Lies Ahead, Researcher Says,” by Peter Turchin. January 4, 2017.

Over a decade ago, ecologist Peter Turchin began developing a science he calls cliodynamics, which treats history using empirical methods including statistical analysis and modeling. He has applied the same methods to his home country, the United States, and arrives at startling conclusions.

My research showed that about 40 seemingly disparate (but, according to cliodynamics, related) social indicators experienced turning points during the 1970s. Historically, such developments have served as leading indicators of political turmoil. My model indicated that social instability and political violence would peak in the 2020s.

Turchin sees the recent U.S. presidential election as confirming his forecast: “We seem to be well on track for the 2020s instability peak. . . . If anything, the negative trends seem to be accelerating.” He regards Donald Trump as more of a symptom, rather than a driver, of these trends.

The author’s model tracks factors including “growing income and wealth inequality, stagnating and even declining well-being of most Americans, growing political fragmentation and governmental dysfunction.” Often social scientists focus on just one of these issues; but in Turchin’s view, “these developments are all interconnected. Our society is a system in which different parts affect each other, often in unexpected ways.”

One issue to which he gives special weight is what he calls “elite overproduction,” in which a society generates more elites than can practically participate in shaping policy. The result is increasing competition among the elites that wastes resources needlessly and drives overall social decline and disintegration. He sees plenty of historical antecedents in which elite overproduction drove waves of political violence. In today’s America there are far more millionaires than there were only a couple of decades ago, and rich people tend to be more politically active than poor ones. This causes increasing political polarization (millionaires funding extreme candidates), erodes cooperation, and results in a political class that is incapable of solving real problems.

I think Turchin’s method of identifying and tracking social variables, using history as a guide, is relevant and useful. And it certainly offers a sober warning about where America is headed during the next few years. However, I would argue that in the current instance his method actually misses several layers of threat. Historical societies were not subject to the same extraordinary boom-bust cycle driven by the use of fossil fuels as our civilization saw during the past century. Nor did they experience such rapid population growth as we’ve experienced in recent decades (Syria and Egypt saw 4 percent per annum growth in the years after 1960), nor were they subject to global anthropogenic climate change. Thus the case for near-term societal and ecosystem collapse is actually stronger than the one he makes.

Some Concluding Thoughts

Maintaining a civilization is always a delicate balancing act that is sooner or later destined to fail. Some combination of population pressure, resource depletion, economic inequality, pollution, and climate change has undermined every complex society since the beginnings of recorded history roughly seven thousand years ago. Urban centers managed to flourish for a while by importing resources from their peripheries, exporting wastes and disorder beyond their borders, and using social stratification to generate temporary surpluses of wealth. But these processes are all subject to the law of diminishing returns: eventually, every boom turns to bust. In some respects the cycles of civilizational advance and decline mirror the adaptive cycle in ecological systems, where the crash of one cycle clears the way for the start of a new one. Maybe civilization will have yet another chance, and possibly the next iteration will be better, built on mutual aid and balance with nature. We should be planting the seeds now.

Yet while modern civilization is subject to cyclical constraints, in our case the boom has been fueled to an unprecedented extreme by a one-time-only energy subsidy from tens of millions of years’ worth of bio-energy transformed into fossil fuels by agonizingly slow geological processes. One way or another, our locomotive of industrial progress is destined to run off the rails, and because we’ve chugged to such perilous heights of population size and consumption rates, we have a long way to fall—much further than any previous civilization.

Perhaps a few million people globally know enough of history, anthropology, environmental science, and ecological economics to have arrived at general understandings and expectations along these lines. For those who are paying attention, only the specific details of the inevitable processes of societal simplification and economic/population shrinkage remain unknown.

There’s a small cottage industry of websites and commenters keeping track of signs of imminent collapse and hypothesizing various possible future collapse trajectories. Efforts to this end may have practical usefulness for those who hope to escape the worst of the mayhem in the process—which is likely to be prolonged and uneven—and perhaps even improve lives by building community resilience. However, many collapsitarians are quite admittedly just indulging a morbid fascination with history’s greatest train wreck. In many of my writings I try my best to avoid morbid fascination and focus on practical usefulness. But every so often it’s helpful to step back and take it all in. It’s quite a show.


Oil theft around the world: Cartels and exploding donkeys

Preface. Oil theft costs Nigerian oil producers at least $18 billion a year. In Mexico, cartels spend only $5,000–$8,000 to tap into pipelines and withdraw “unlimited” amounts of gasoline, and did so almost 7,000 times in 2016, costing PEMEX about $1 billion a year.

I expect that after oil production peaks and begins declining at 6% a year, desperation will grow everywhere. In oil-producing nations the temptation to tap into pipelines will become irresistible, and the ability of governments to prevent oil theft will dwindle as their revenues decline and the never-ending depression deepens.

It will be easy to steal oil in the U.S., where thousands of miles of pipelines run through remote areas that can’t all be patrolled, just as pipelines in Iraq were blown up hundreds of times.

Alice Friedemann  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

Calcuttawala, Z. May 31, 2017. Oil theft around the world: Cartels and exploding donkeys.

Thousands of miles of oil pipelines connect the world’s oil producing hubs to their key customers’ refineries and electrical grids. By replacing trucks, the advent of the pipeline has dramatically reduced the carbon emissions associated with transporting fuel from point A to point B, but their unattended nature makes them an attractive target for thieves who could refine and sell the products for rock-bottom prices on the black market or other illicit venues.

Oil theft has become incrementally more sophisticated over the past few decades as transnational gangs and separatist organizations steal fuel to fund their operations. Deepening geopolitical rivalries among states that share pipeline routes also encourage state-backed embezzlement of energy resources.


Nigeria

Nigeria notoriously suffers from oil theft perpetrated by pirates in the Gulf of Guinea, separatist groups in the Niger Delta, and private citizens looking to make a quick buck. In 2016, Michele Sison, the U.S. deputy ambassador for the United Nations at the time, said Africa’s largest producer lost $1.5 billion in revenues every month due to the extensiveness of the oil laundering game across the nation.

Illegal refineries near the Niger Delta allow stolen oil to be processed and used in the local economy, but a new government initiative would allow the facilities to become legal and buy oil from the state at a negotiated price. Nigeria’s oil minister Emmanuel Ibe Kachikwu advanced the plans last month in order to dissuade locals from attacking oil infrastructure.

A portion of the stolen crude leaves Nigerian borders for processing elsewhere. Evidence from 2014 suggests Nigerian crude was smuggled into Ghana and mixed with Ghanaian crude before being refined and exported to Morocco and to European markets for sale through legal channels.


Mexico

Cartels active in the northern half of this Latin American country are weaning themselves off their dependence on the drug trade. With a modest upfront capital investment of $5,000–$8,000, cartels have realized they can tap directly into state-owned gas pipelines and withdraw seemingly unlimited supplies of gasoline, which they then sell along the highway at a discount to official government prices. It’s a win-win situation whereby the drug cartels make 100 percent profit margins and citizens get “cheap” fuel.

Last year’s numbers from state-run PEMEX showed the nation’s pipelines had been tapped almost 7,000 times to supply these illegal markets, which grow every year thanks to heavy participation from ordinary citizens on the demand side. The more prolific the fuel thieves become, the more expensive gasoline becomes for legal customers, which only encourages them to become the cartels’ newest buyers. It’s a vicious cycle.

The thefts amount to about $1 billion in losses annually, says Luis Miguel Labardini, an energy consultant at Marcos y Asociados and senior adviser to Pemex’s chief financial officer in the 1990s. “If Pemex were a public company, they would be in financial trouble just because of the theft of fuel,” he said to Zero Hedge. “It’s that bad.”


Azerbaijan

Like Nigeria, groups aiming to steal fuel from this country target crude. The criminal organizations that specialize in the practice fill trucks with the illegally obtained crude and move it to neighboring countries by truck and train, which do not have to be searched by customs officers under the terms of trade agreements with those countries, according to a report by Forbes.

Global Risk Insights notes that almost 60% of the Azeri economy operates underground, with untaxed and unregulated oil activities representing a large chunk of the dark underbelly of the national GDP.


Morocco and Algeria

The border between regional rivals Morocco and Algeria has been closed for security reasons since 1994, following an attack on the Atlas Asni Hotel in Marrakesh. Still, thousands of barrels make it across the border on donkeys, according to a 2013 report by The Guardian. That summer, Algerian authorities resolved to end petrol trafficking across the desert border, resorting to shooting the animals from a distance. In two instances, donkeys were even blown up.

Morocco imports the vast majority of its fuel needs, since it has virtually no fuel reserves of its own. The cheap illegal fuel from its neighbor saves Rabat hundreds of thousands of dollars in import costs, so law enforcement turns a blind eye to the smuggling, which occurs far from official checkpoints.


Thailand and Malaysia

The price differential for oil and gas between Malaysia and Thailand spurs the unauthorized transfer of Malaysian fuel via land and sea routes through the Gulf of Thailand. Even after Malaysian authorities lowered gasoline subsidies, prices remained lower than in its northern neighbor.

Most of the fuel movement in this region occurs on a small scale. In February, for example, law enforcement near the Thai-Malaysian border caught three cars modified to hold 500 liters of fuel in their bodies.

“Their modus operandi was to fill up petrol at stations near the Bukit Bunga bridge, Tanah Merah, using local vehicles before smuggling them into Thailand,” said the local chief of police. “We believe their activities had been going on for some time. Stern action will be taken against petrol stations found to be in cahoots with smugglers.”

Ships carrying refined oil and gas cargo also have the option of selling their goods to other ships at sea. The carriers, most often disguised as fishing vessels, bring the remaining fuel to Thai shores, where traffickers find dozens of new customers.

Posted in Peak Oil, Threats to oil supply

How horses changed native cultures after 1492

[ This is a very brief overview of Peter Mitchell’s “Horse Nations”.  As oil and other fossil fuels decline, we will almost certainly return to using more horse “muscle power”, as we did in the past.

Alice Friedemann  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report ]

Mitchell, Peter. 2015. Horse Nations: The worldwide impact of the horse on indigenous societies post-1492. Oxford University Press.

To remind you of how important horses were to past tribes and civilizations, consider their use in cavalry battle, plowing, transporting goods, pulling carts and wagons, and so on.

The latest archeological evidence shows that horses were probably first domesticated in the mid-fourth millennium BC in northern Kazakhstan, for milk and meat.  Before that, horses were one of many hunted animals.  When people first began riding horses is less certain, but riding was certainly established by the mid- to late third millennium BC.

This book covers the impact of the horse on cultures all over the world, though I was mainly interested in the Native American impacts.  The horse era was a brief one, though: it wasn’t long before the near-extinction of the bison and European settlement ended the horse culture of the Plains and of other regions where abundant pasture could sustain thousands of horses.

Horses could radically change a culture.  They determined when and where people camped, since they had to have winter fodder and shelter.  For example, in the 1740s Europeans observed that the Comanches’ large herds forced them to live apart in order to find enough pasture and water for their horses; to accomplish this, groups split up seasonally.  Before horses, bison were the main factor deciding where to camp.  By the early 1800s there was an average of four horses per person, and horses were the highest priority, since bison and other essentials couldn’t be procured without them.

Natives had their horses and dogs transport goods on a travois, a frame dragged along the ground that lasted about a year and could also be used to dry meat or provide shade.  The heaviest cargo was the tipi.  A Blackfoot home was typically made of 12 to 14 bison hides and altogether (with pegs and lining) weighed about 70-85 kg; the 19 poles to support it weighed another 180 kg, for a total of 250-265 kg pulled by 3 horses. Wealthier natives hauling larger lodges needed even more horses.  An average family of 8 living in one tipi probably needed at least 12 horses: 3 to carry the tipi, 2 for packing personal possessions, 3 ridden by women and children, plus 2 more for men and 2 kept for hunting bison.  When families didn’t have enough horses, they overloaded those they had, borrowed horses and became beholden to the lenders, or used more dogs.
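The transport arithmetic above can be tallied in a short sketch (illustrative Python; the hide, pole, and total weights are the figures quoted from Mitchell, while the ~90 kg per-horse travois load is an assumption chosen so that 3 horses move one tipi, as the text states):

```python
# Back-of-the-envelope tally of Blackfoot tipi transport, using the
# figures quoted above: cover of 12-14 hides at 70-85 kg, 19 poles ~180 kg.
def tipi_weight_kg(cover_kg=(70, 85), poles_kg=180):
    """Return (low, high) total tipi weight in kg."""
    return (cover_kg[0] + poles_kg, cover_kg[1] + poles_kg)

def horses_needed(total_kg, per_horse_kg=90):
    """Horses needed if each drags ~90 kg on a travois (assumed figure)."""
    low, high = total_kg
    return -(-high // per_horse_kg)  # ceiling division on the high end

print(tipi_weight_kg())                  # (250, 265)
print(horses_needed(tipi_weight_kg()))   # 3
```

This matches the text’s total of 250-265 kg and 3 horses; the family total of 12 horses then follows from adding pack, riding, and hunting animals (3 + 2 + 3 + 2 + 2).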

It may appear that an unfair distribution of wealth, with the top 1% owning nearly as much as the bottom 99%, is new to modern civilization, but centuries ago some Native Americans amassed far more wealth than other tribal members.  For example, about one in twenty Blackfoot members owned 50 or more horses, while 25% of families had fewer than 6, half the ideal number.  Wealthy individuals could haul around more material possessions, trade horses for more wives, and process more bison hides to sell or trade for guns.   This made poorer men who wanted to marry keen to raid other tribes and steal their horses, escalating tribal warfare.  A man could also gain much prestige by giving away some of the horses obtained in a raid.  Poorer men became laborers for the wealthy, herding and processing hides, which enriched the rich further.  Since horses were lent out far more often than given away, some families grew increasingly rich, made their wealth hereditary, and became less likely to starve.

“Horse wars” became more common, with the fiercest tribes being those that lived in the northernmost areas, where harsh environments made horses much harder to keep.

There was an ecological price to pay for this, though: repeated use of good winter campsites caused overgrazing, degrading the ability of the land to sustain large numbers of horses in good health.

Pack animals

Experiments show that dogs can drag loads of 60 lbs (27 kg) up to 16.8 miles (27 km) a day, though less if temperatures rise above 68 F (20 C).  Historic observations recorded loads of 66-100 lbs (30-45 kg) carried up to 31 miles (50 km).

Horses can carry five times as much as dogs, and farther, as well as pull much harder, with the extra advantage of not competing with people or dogs for food.

You might expect that would be the end of dogs, but they have their advantages.  Horses can’t eat while working, yet to maintain its heavy weight a horse must eat about 2% of its body mass a day, which means grazing most of the time. Horses also need extra food to survive the harsh winters of the Plains.

And all that horse sweat means a horse needs two or three times as much water, and even more if a mare is lactating, since a foal drinks milk amounting to about 5% of body mass every day.  Water is especially problematic in the winter, because the dry grass horses eat provides them no water, and nearby water is often polluted by waterfowl, choked with algae, or otherwise unfit, sickening the horses.  So their superior ability to move a lot of weight long distances has a price.  Dogs, on the other hand, are less liable to be stolen, reproduce faster, grow quicker, withstand cold winters better, and can eat snow to get enough water.  They can share human shelters, need less training, spend little time eating, and usually don’t stray far.

Packhorses could carry up to 705 lbs (320 kg) at a time, hauling things like pemmican, made of bison meat, fat, and berries, which could last for years.  Mules could do the work of 2 horses and were valued accordingly.
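The horse-versus-dog trade-off described above can be roughed out numerically (illustrative Python; the 2% feed rule and the payload figures come from the text, while the 450 kg horse mass is my own assumption for a typical horse, not a figure from the book):

```python
# Rough payload-vs-upkeep comparison for pack animals, using the
# figures quoted above. Horse body mass is an illustrative assumption;
# payloads and the 2%-of-mass daily feed rule come from the text.
HORSE_MASS_KG = 450          # assumed typical horse mass

def horse_daily_feed_kg(mass_kg=HORSE_MASS_KG):
    """A horse must eat about 2% of its body mass per day."""
    return 0.02 * mass_kg

horse_payload_kg = 320       # packhorse maximum quoted above
dog_payload_kg = 45          # upper end of historic dog loads (30-45 kg)

print(horse_daily_feed_kg())              # 9.0 kg of forage per day
print(horse_payload_kg / dog_payload_kg)  # roughly 7x a dog's payload
```

The point of the sketch: a packhorse moves several dogs’ worth of cargo, but only by grazing most of the day, while a dog’s far smaller upkeep (and its ability to eat snow for water) kept it useful alongside horses.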

Horses vastly expanded trade.  First, horses themselves were very valuable: between the Nez Perces and Northern Shoshones, a horse was worth 2 bearskins, 10 sheepskins, or 4 bags of salmon. Second, more goods could be hauled longer distances to trade. Before horses, most trade routes followed rivers, with canoes hauling the goods.

Not all Indians adopted horses into their culture.  Great Basin bands rejected them because they depended on plants to survive, not animals.  The aridity of the environment meant few horses could be kept, and there was a risk that horses would eat some of the plants and grasses the bands depended on.  It was feared horses would harm the land, which turned out to be true: ecological studies show horses reduced grass and shrub cover, impoverished reptile, rodent, and ant populations, and diminished soil organic matter, shade, and precipitation interception, causing more erosion.  The destruction continues to this day, since Nevada has more feral horses than any other state, tens of thousands of them, minimally managed and impossible to hunt.


Posted in Agriculture, Muscle Power