Life After Fossil Fuels: manufacturing will be less precise

Preface. This is a book review of, and excerpts from, Winchester’s “The Perfectionists: How Precision Engineers Created the Modern World”. The book describes how the industrial revolution was made possible by ever greater precision. First came the steam engine, which became practical only when a way was invented to bore its cylinders to a tolerance of one-tenth of an inch, so that the steam didn’t escape. By World War II parts could be made precise to within a millionth of an inch, and today to some 35 decimal places of precision (0.00000000000000000000000000000000001), which is required for microchips, jet engines, and other high tech.

This amazing precision is achieved with machine tools, which make precise parts by shaping metal, glass, plastic, ceramics, and other rigid materials: cutting, boring, grinding, shearing, squeezing, rolling, stamping, and riveting. Most precision machine tools are powered by electricity today; in the past they were driven by steam engines.

Machine tools also revolutionized our ability to kill each other. Winchester writes: “When any part of a gun failed, another part had to be handmade by an army blacksmith, a process that, with an inevitable backlog caused by other failures, could take days. As a soldier, you then went into battle without an effective gun, or waited for someone to die and took his, or did your impotent best with your bayonet, or else you ran. Once a gun had been physically damaged in some way, the entire weapon had to be returned to its maker or to a competent gunsmith to be remade or else replaced. It was not possible, incredible though this might seem, simply to identify the broken part and replace it with another. No one had ever thought to make a gun from component parts that were each so precisely constructed that they were identical one with another.”

Machine tools cannot be used for wood because it is flexible; it swells and contracts in unpredictable ways. It can never hold a fixed dimension: whether planed or jointed, lapped or milled, or varnished to a brilliant luster, wood is fundamentally and inherently imprecise.

Since both my books, “When Trucks Stop Running” and “Life After Fossil Fuels”, make the case that we are returning to a world where the electric grid is down for good and wood is the main energy source and infrastructure material, the level of civilization we can achieve after fossil fuels become scarce will depend greatly on how precisely we can make objects. Because wood charcoal makes weaker, inferior iron, steel, and other metals than coal does, today’s precision will no longer be possible. Microchips, jet engines, and much more will be lost forever. Eventual deforestation will mean orders of magnitude less metal, brick, ceramics, glass, and other products, for lack of wood charcoal. And since peak coal is here, and the remaining reserves in the U.S. are mostly lignite, poorly suited to the high heat needed in manufacturing, civilization as we know it has a limited time span.

“The Great Simplification” will reduce precision. The good news is that hand-crafting of beautiful objects will return, a far more rewarding way of life than production lines at factories today.

Alice Friedemann, author of “Life After Fossil Fuels: A Reality Check on Alternative Energy” (2021, Springer), “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer), “Barriers to Making Algal Biofuels”, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report


Winchester, S. 2018. The Perfectionists: How Precision Engineers Created the Modern World. HarperCollins.

Two particular aspects of precision need to be addressed. First, its ubiquity in the contemporary conversation—the fact that precision is an integral, unchallenged, and seemingly essential component of our modern social, mercantile, scientific, mechanical, and intellectual landscapes. It pervades our lives entirely, comprehensively, wholly.

Because an ever-increasing desire for ever-higher precision seems to be a leitmotif of modern society, I have arranged the chapters that follow in ascending order of precision, with coarse tolerances of 0.1 and 0.01 starting the story and the absurdly, near-impossibly fine tolerances to which some scientists work today—claims of measurements of differences of as little as 0.000 000 000 000 000 000 000 000 000 1 grams (10⁻²⁸ grams) have recently been made, for example—toward the end.

Any piece of manufactured metal (or glass or ceramic) must have chemical and physical properties: it must have mass, density, a coefficient of expansion, a degree of hardness, specific heat, and so on. It must also have dimensions: length, height, and width. It must possess geometric characteristics: it must have measurable degrees of straightness, of flatness, of circularity, cylindricity, perpendicularity, symmetry, parallelism, and position—among a mesmerizing host of other qualities even more arcane and obscure.

The piece of machined metal must have a degree of what has come to be known as tolerance. It has to have a tolerance of some degree if it is to fit in some way in a machine, whether that machine is a clock, a ballpoint pen, a jet engine, a telescope, or a guidance system for a torpedo.

To fit with another equally finely machined piece of metal, the piece in question must have an agreed or stated amount of permissible variation in its dimensions or geometry that will allow it to fit. That allowable variation is the tolerance, and the more precise the manufactured piece, the smaller the tolerance that will be needed and specified.
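The idea of a tolerance band can be made concrete in a few lines of code. This is a minimal sketch; the nominal size and tolerance value are invented for illustration, not taken from Winchester.

```python
# Hypothetical example: checking a machined part against a stated tolerance.
# Nominal size and tolerance band are illustrative values only.

NOMINAL = 1.000    # target diameter, inches
TOLERANCE = 0.001  # permitted variation, +/- inches

def within_tolerance(measured: float) -> bool:
    """A part 'fits' if its deviation from nominal lies inside the tolerance band."""
    return abs(measured - NOMINAL) <= TOLERANCE

print(within_tolerance(1.0005))  # deviation 0.0005 -> True, part is usable
print(within_tolerance(1.0020))  # deviation 0.0020 -> False, part is scrap
```

Tightening TOLERANCE is what “more precise” means in practice: the band of acceptable parts shrinks.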

The tolerances of the machines at the LIGO site are almost unimaginably tight, and the consequent precision of their components is of a level and nature neither known nor achieved anywhere else on Earth. LIGO is the Laser Interferometer Gravitational-Wave Observatory. Its machines had to be constructed to standards of mechanical perfection that only a few years before were well-nigh inconceivable, neither imaginable nor even achievable.

Precision’s birth derives from the then-imagined possibility of holding, managing, and directing steam, this invisible gaseous form of boiling water, so as to create power from it.

The father of true precision was an eighteenth-century Englishman named John Wilkinson, who was denounced sardonically as lovably mad, and especially so because of his passion for and obsession with metallic iron. He made an iron boat, worked at an iron desk, built an iron pulpit, ordered that he be buried in an iron coffin, which he kept in his workshop (and out of which he would jump to amuse his comely female visitors), and is memorialized by an iron pillar he had erected in advance of his passing in a remote village in south Lancashire.

Though the eventual function of the mechanical clock, brought into being by a variety of claimants during the fourteenth century, was to display the hours and minutes of the passing days, it remains one of the eccentricities of the period (from our current viewpoint) that time itself first played in these mechanisms a subordinate role. In their earliest medieval incarnations, clockwork clocks, through their employment of complex Antikythera-style gear trains and florid and beautifully crafted decorations and dials, displayed astronomical information at least as an equal to the presentation of time.

The behavior of the heavenly bodies was ordained by gods, and therefore was a matter of spiritual significance. As such, it was far worthier of human consideration than our numerical constructions of hours and minutes, and was thus more amply deserving of flamboyant mechanical display.

John Harrison was the man who most famously gave mariners a sure means of determining a vessel’s longitude. This he did by painstakingly constructing a family of extraordinarily precise clocks and watches, each accurate to just a few seconds over years of use, no matter how sea-punished its travels in the wheelhouse of a ship.

An official Board of Longitude was set up in London in 1714, and a prize of 20,000 pounds was offered to anyone who could determine longitude with an accuracy of 30 miles. John Harrison eventually, after a lifetime of heroic work on five timekeeper designs, would claim the bulk of the prize.

The fact that the Harrison clocks were British-invented, and their successor clocks at first British-made, allowed Britain in the heyday of her empire to become for more than a century the undisputed ruler of all the world’s oceans and seas. Precise-running clockwork made for precise navigation; precise navigation made for maritime knowledge, control, and power.

In place of the oscillating beam balances that made the magic of his large clocks so spectacular to see, he substituted a temperature-controlled spiral mainspring, together with a fast-beating balance wheel that spun back and forth at the hitherto unprecedented rate of some 18,000 times an hour. He also had an automatic remontoir, which rewound the mainspring eight times a minute, keeping the tension constant, the beats unvarying. There was a downside, though: this watch needed oiling, and so, in an effort to reduce friction and keep the needed application of oil to a minimum, Harrison introduced, where possible, bearings made of diamond, one of the early instances of a jeweled escapement.

It remains a mystery just how, without the use of precision machine tools—the development of which will be central to the story that follows—Harrison was able to accomplish all this. Certainly, all those who have made watches since then have had to use machine tools to fashion the more delicate parts of the watches: the notion that such work could possibly be done by the hand of a 66-year-old John Harrison still beggars belief. But John Harrison’s clockworks enjoyed perhaps only three centuries’ worth of practical usefulness.

For precision to be a phenomenon that would entirely alter human society, it has to be expressed in a form that is duplicable; it has to be possible for the same precise artifact to be made again and again with comparative ease and at a reasonable frequency and cost.

It was only when precision was created for the many that precision as a concept began to have the profound impact on society as a whole that it does today. And the man who accomplished that single feat of creating something with great exactitude, and making it not by hand but with a machine (moreover, with a machine that was specifically created to create it: a machine that makes machines, known today as a “machine tool,” which was, is, and will long remain an essential part of the precision story), was the eighteenth-century Englishman denounced for his supposed lunacy because of his passion for iron, the then-uniquely suitable metal from which all his remarkable new devices could be made.

Wilkinson is today rather little remembered. He is overshadowed quite comprehensively by his much-better-known colleague and customer, the Scotsman James Watt, whose early steam engines came into being, essentially, by way of John Wilkinson’s exceptional technical skills.

On January 27, 1774, John Wilkinson, whose local furnaces, all fired by coal, were producing a healthy twenty tons of good-quality iron a week, invented a new technique for boring cannon. The technique had an immediate cascade effect far more profound than anything he ever imagined, and of greater long-term importance. Up until then, naval cannons were cast hollow, and the interior tube through which the powder and projectile were pushed and fired was then smoothed out with a cutting tool.

The problem with this technique was that the cutting tool would naturally follow the passage of the tube, which may well not have been cast perfectly straight in the first place. This would then cause the finished and polished tube to have eccentricities, and for the inner wall of the cannon to have thin spots where the tool wandered off track.  And thin spots were dangerous—they meant explosions and bursting tubes and destroyed cannon and injuries to the sailors who manned the notoriously dangerous gun decks.

Then came John Wilkinson and his new idea. He decided that he would cast the iron cannon not hollow but solid. This, for a start, had the effect of guaranteeing the integrity of the iron itself—there were fewer parts that cooled early and came out with bubbles and spongy sections (“honeycomb problems,” as they were called) for which hollow-cast cannon were then notorious.

The secret was in the boring of the cannon hole. Both ends of the operation, the part that did the boring and the part to be bored, had to be held in place, rigid and immovable, because to cut or polish something into dimensions that are fully precise, both tool and workpiece have to be clasped and clamped as tightly as possible to secure immobility.

Cannon after cannon tumbled from the mill, each accurate to the measurements the navy demanded, each one, once unbolted from the mill, identical to its predecessor, each one certain to be the same as the successor that would next be bolted onto it. The new system worked impeccably from the very start.

Yet what elevates Wilkinson’s new method to the status of a world-changing invention would come the following year, 1775, when he started to do serious business with James Watt.

The principle of a steam engine is familiar, and is based on the simple physical fact that when liquid water is heated to its boiling point it becomes a gas. Because the gas occupies some 1,700 times greater volume than the original water, it can be made to perform work.
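Winchester’s figure of “some 1,700 times” can be sanity-checked with the ideal gas law, treating steam at 100 °C and atmospheric pressure as an ideal gas. This is a rough model, and the constants are standard reference values, not numbers from the book.

```python
# Back-of-envelope check of the ~1,700x expansion of water into steam,
# modeling steam at its boiling point as an ideal gas.

R = 8.314          # molar gas constant, J/(mol*K)
T = 373.15         # boiling point of water at 1 atm, K
P = 101_325        # atmospheric pressure, Pa
M = 0.018          # molar mass of water, kg/mol
RHO_WATER = 958.0  # density of liquid water at 100 C, kg/m^3

v_steam = R * T / (P * M)  # specific volume of steam, m^3/kg
v_water = 1.0 / RHO_WATER  # specific volume of liquid water, m^3/kg

# The ratio comes out around 1,600 under this rough model, in the same
# range as the "some 1,700 times" quoted in the text.
print(round(v_steam / v_water))
```

Real steam tables give a ratio of roughly 1,600 at atmospheric pressure, so the ideal-gas shortcut lands close to the book’s number.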

Newcomen then realized he could increase the work by injecting cold water into the steam-filled cylinder, condensing the steam and bringing it back to 1/1,700 of its volume—creating, in essence, a vacuum, which enabled the pressure of the atmosphere to force the piston back down again. This downstroke could then lift the far end of the rocker beam and, in doing so, perform real work. The beam could lift floodwater, say, out of a waterlogged tin mine.  Thus was born a very rudimentary kind of steam engine, almost useless for any application beyond pumping water.  The Newcomen engine and its like remained in production for more than 70 years, its popularity beginning to lessen only in the mid-1760s, when James Watt showed that it could be markedly improved.

Watt realized that the central inefficiency of the engine he was examining was that the cooling water injected into the cylinder to condense the steam and produce the vacuum also managed to cool the cylinder itself. To keep the engine running efficiently, the cylinder needed to be kept as hot as possible at all times, so the cooling water should perhaps condense the steam not in the cylinder but in a separate vessel, keeping the vacuum in the main cylinder, which would thus retain the cylinder’s heat and allow it to take on steam once more. To make matters even more efficient, the fresh steam could be introduced at the top of the piston rather than the bottom, with stuffing of some sort placed and packed into the cylinder around the piston rod to prevent any steam from leaking out in the process.

These two improvements (the inclusion of a separate steam condenser and the changing of the inlet pipes to allow for the injection of new steam into the upper rather than the lower part of the main cylinder) changed Newcomen’s so-called fire-engine into a fully functioning steam-powered machine.

Once perfected, it was to be the central power source for almost all factories and foundries and transportation systems in Britain and around the world for the next century and more.

Yet perpetually enveloping his engine in a damp, hot, opaque gray fog were billowing clouds of steam, which incensed James Watt. Try as he might, do as he could, steam always seemed to be leaking in prodigious gushes from the engine’s enormous main cylinder. He tried blocking the leak with all kinds of devices and substances. The gap between the piston’s outer surface and the cylinder’s inner wall should, in theory, have been minimal, and more or less the same wherever it was measured. But because the cylinders were made of iron sheets hammered and forged into a circle, and their edges then sealed together, the gap actually varied enormously from place to place. In some places, piston and cylinder touched, causing friction and wear. In other places, as much as half an inch separated them, and each injection of steam was followed by an immediate eruption from the gap.

Watt tried tucking in pieces of linseed oil–soaked leather; stuffing the gap with a paste made from soaked paper and flour; hammering in corkboard shims, pieces of rubber, even dollops of half-dried horse dung.

By the purest accident, John Wilkinson asked for an engine to be built for him, to act as a bellows for one of his iron forges—and in an instant, he saw and recognized Watt’s steam-leaking problem, and in an equal instant, he knew he had the solution: he would apply his cannon-boring technique to the making of cylinders for steam engines.  Watt beamed with delight. Wilkinson had solved his problem, and the Industrial Revolution—we can say now what those two never imagined—could now formally begin.

And so came the number, the crucial number, the figure that is central to this story, that which appears at the head of this chapter and which will be refined in its exactitude in all the remaining parts of this story. This is the figure of 0.1—one-tenth of an inch. This was the tolerance to which John Wilkinson had ground out his first cylinder.  All of a sudden, there was an interest in tolerance, in the clearance by which one part was made to fit with or into another. This was something quite new, and it begins, essentially, with the delivery of that first machine on May 4, 1776.

The central functioning part of the steam engine was possessed of a mechanical tolerance never before either imagined or achieved, a tolerance of 0.1 inches.

Locks were a British obsession at the time. The social and legislative changes that were sweeping the country in the late eighteenth century were having the undesirable effect of dividing society quite brutally: while the landed aristocracy had for centuries protected itself in grand houses behind walls and parks and ha-has, and with resident staff to keep mischief at bay, the enriched beneficiaries of the new business climate were much more accessible to the persistent poor.

Envy was abroad. Robbery was frequent. Fear was in the air. Doors and windows needed to be bolted. Locks had to be made, and made well. A lock such as Mr. Marshall’s, pickable in 15 minutes by a skilled man, and by a desperate and hungry man maybe in 10, was clearly not good enough. Joseph Bramah decided he would design and make a better one. He did so in 1784, less than a year after picking the Marshall lock. His patent made it almost impossible for a burglar with a wax-covered key blank (the tool most favored by criminals, who used it to work out the position of the various levers and tumblers inside a lock) to divine what lay beyond the keyhole, inside the workings.

Maudslay solved Bramah’s supply problems in a twinkling by creating machines to make the locks. He built a whole family of machine tools, in fact, that would each make, or help to make, the various parts of the fantastically complicated locks Joseph Bramah had designed. They could make the parts fast and well and cheaply, without the errors that handcrafting and hand tools inevitably cause. The machines that Maudslay made would, in other words, make the necessary parts with precision.

Metal pieces can be machined into a range of shapes and sizes and configurations, and provided that the settings of the leadscrew and the slide rest are the same for every procedure, and the lathe operator can record these positions and make certain they are the same, time after time, then every machined piece will be the same—will look the same, measure the same, weigh the same (if of the same density of metal) as every other. The pieces are all replicable. They are, crucially, interchangeable. If the machined pieces are to be the parts of a further machine—if they are gearwheels, say, or triggers, or handgrips, or barrels—then they will be interchangeable parts, the ultimate cornerstone components of modern manufacturing. Of equally fundamental importance, a lathe so abundantly equipped as Maudslay’s was also able to make that most essential component of the industrialized world, the screw.

Screws were made to a tolerance of one ten-thousandth of an inch.

A slide rest allowed for the making of myriad items, from door hinges to jet engines to cylinder blocks, pistons, and the deadly plutonium cores of atomic bombs.

Maudslay next created, in truly massive numbers, a vital component for British sailing ships. He built the wondrously complicated machines that would, for the next 150 years, make ships’ pulley blocks, the essential parts of a sailing ship’s rigging that helped give the Royal Navy its ability to travel, police, and, for a while, rule the world’s oceans. At the time, sails were large pieces of canvas suspended, supported, and controlled by way of endless miles of rigging, of stays and yards and shrouds and footropes, most of which had to pass through systems of tough wooden pulleys known simply to navy men as blocks—pulley blocks, known beyond the maritime world as block and tackle.

A large ship might have as many as 1400 pulley blocks of varying types and sizes depending on the task required. The lifting of a very heavy object such as an anchor might need an arrangement of six blocks, each with three sheaves, or pulleys, and with a rope passing through all six such that a single sailor might exert a pull of only a few easy pounds in order to lift an anchor weighing half a ton.

Blocks for use on a ship are traditionally exceptionally strong, having to endure years of pounding water, freezing winds, tropical humidity, searing doldrums heat, salt spray, heavy duties, and careless handling by brutish seamen. Back in sailing ship days, they were made principally of elm, with iron plates bolted onto their sides, iron hooks securely attached to their upper and lower ends, and with their sheaves, or pulleys, around which ropes would be threaded, sandwiched between their cheeks. The sheaves themselves were often made of lignum vitae, a dense, naturally oily hardwood from trees of South America.

What principally concerned the admirals was not so much the building of enough ships but the supply of the vital blocks that would allow the sailing ships to sail. The Admiralty needed 130,000 of them every year. The complexity of their construction meant that they could be fashioned only by hand, by scores of artisanal woodworkers in and around southern England, who were notoriously unreliable.

The Block Mills still stand as testament to many things, most famously to the sheer perfection of each and every one of the hand-built iron machines housed inside. So well were they made—they were masterpieces, most modern engineers agree—that most were still working a century and a half later; the Royal Navy made its last pulley blocks in 1965.

The Block Mills were the first factory to run entirely on steam power.

The next invention that mattered depended on flatness: a surface without curvature, indentation, or protuberance. It involves the creation of a base from which all precise measurement and manufacture can be originated. For, as Maudslay realized, a machine tool can make an accurate machine only if the surface on which the tool is mounted is perfectly flat, perfectly plane, exactly level, its geometry entirely exact.

A bench micrometer could measure the actual dimensions of a physical object, to make sure that the components of the machines being constructed would all fit together within exact tolerances, each piece precise for its machine and accurate to the design standard.

The micrometer that performed all these measurements turned out to be extremely accurate and consistent: this invention of his could measure down to one one-thousandth of an inch and, according to some, maybe even one ten-thousandth of an inch: a tolerance of 0.0001 inches.

To any schoolchild today, Eli Whitney means just one thing: the cotton gin. To any informed engineer, he signifies something very different: confidence man, trickster, fraud, charlatan, a reputation earned almost entirely from his association with the gun trade, with precision manufacturing, and with the promise of being able to deliver weapons assembled from interchangeable parts. When Whitney won the commission and signed a government contract in 1798, he knew nothing about muskets and even less about their components: he won the order largely because of his Yale connections and the old alumni network that, even then, flourished in the corridors of power in Washington, DC.

It was John Hall who succeeded in making precision guns. At every stage of the work, from the forging of the barrel to the turning of the rifling and the shaping of the barrel, his 63 gauges were set to work, more than any engineer before him, to ensure as best he could that every part of every gun was exactly the same as every other—and that all were made to far stricter tolerances than hitherto: for a lock merely to work required a tolerance of maybe a fifth of a millimeter; to ensure that it not only worked but was infinitely interchangeable, he needed to have the pieces machined to a fiftieth of a millimeter.

Precision shoes were made by turning a shapeless block of wood into a foot-shaped form of specific dimensions, repeated time and time again. These shoemakers’ lasts came in exact sizes: seven inches long, nine, and so on. Before precise shoes were made, they were offered up in barrels, and customers pulled them out randomly, trying to find a shoe that more or less fit.

Oliver Evans was making flour-milling machinery; Isaac Singer introduced precision into the manufacturing of sewing machines; Cyrus McCormick was creating reapers, mowers, and, later, combine harvesters; and Albert Pope was making bicycles for the masses.

Joseph Whitworth was an absolute champion of accuracy, an uncompromising devotee of precision, and the creator of a device, unprecedented at the time, that could truly measure to an unimaginable one-millionth of an inch. Using his superb mechanical skills, in 1859 he created a micrometer in which one complete turn of the micrometer wheel advanced the screw not by 1/20 of an inch but by 1/4,000 of an inch, a truly tiny amount.

Whitworth then incised 250 divisions on the turning wheel’s circumference, which meant that by turning the wheel through just one division the operator could advance or retard the screw by a mere 1/1,000,000 of an inch. Provided the ends of the item being measured are as plane as the plates of the micrometer, opening the gap by that millionth of an inch would make the difference between the item being held firmly and falling under the influence of gravity.

Now metal pieces could be made and measured to a tolerance of one-millionth of an inch.
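Whitworth’s millionth falls straight out of the arithmetic given above: 1/4,000 of an inch per full turn, divided across the wheel’s 250 graduations. A quick check:

```python
# Whitworth's millionth-measuring arithmetic, using the figures from the text.

advance_per_turn = 1 / 4000  # inches per full turn of the micrometer wheel
divisions = 250              # graduations incised on the wheel's circumference

resolution = advance_per_turn / divisions
print(resolution)  # one-millionth of an inch per division
```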

Until Whitworth, each screw and nut and bolt was unique to itself, and the chance that any one-tenth-inch screw, say, might fit any randomly chosen one-tenth-inch nut was slender at best.

With the Model T, Henry Ford changed everything. From the start, he was insistent that no metal filing ever be done in his motor-making factories, because all the parts, components, and pieces he used for the machine would come to him already precisely finished, and to tolerances of cruelly exacting standards such that each would fit exactly without the need for even the most delicate of further adjustment. Once that aspect of his manufacturing system was firmly established, he created a whole new means of assembling the bits and pieces into cars.  He demanded a standard of precision for his components that had seldom been either known or achieved before, and he now married this standard to a new system of manufacture seldom tried before.

The Model T had fewer than 100 parts. A modern car has more than 30,000.

Within Rolls-Royce, it may seem as though the worship of the precise was entirely central to the making of these enormously comfortable, stylish, swift, and comprehensively memorable cars. In fact, it was far more crucial to the making of the less costly, less complex, less remembered machines that poured from the Ford plants around the world. And for a simple reason: the production lines required a limitless supply of parts that were exactly interchangeable.

If one happened not to be so exact, and if an assembly-line worker tried to fit this inexact and imprecise component into a passing workpiece and it refused to fit and the worker tried to make it fit, and wrestled with it—then, just like Charlie Chaplin’s assembly-line worker in Modern Times or, less amusingly, one in Fritz Lang’s Metropolis, the line would slow and falter and eventually stop, and workers for yards around would find their work disrupted, and parts being fed into the system would create unwieldy piles, and the supply chain would clog, and the entire production would slow and falter and maybe even grind, quite literally, to a painful halt. Precision, in other words, is an absolute essential for keeping the unforgiving tyranny of a production line going.

Henry Ford had been helped in his aim of making it so by using one component (and then buying the firm that made it), a component whose creation, by a Swedish man of great modesty, turned out to be of profoundly lasting importance to the world of precision. The Swede was Carl Edvard Johansson, popularly and proudly known by every knowledgeable Swede today as the world’s Master of Measurement. He was the inventor of the set of precise pieces of perfectly flat, hardened steel known to this day as gauge blocks, slip gauges, or, to his honor and in his memory, as Johansson gauges, or quite simply, Jo blocks.

His idea was to create a set of gauge blocks that, held together in combination, could in theory measure any needed dimension. He calculated that the minimum number of blocks required was 103, made in certain carefully specified sizes. Arranged in three series, they made it possible to take some 20,000 measurements in increments of one one-thousandth of a millimeter simply by laying two or more blocks together. His 103-piece combination gauge block set has since directly and indirectly taught engineers, foremen, and mechanics to treat tools with care, and at the same time given them familiarity with dimensions of thousandths and ten-thousandths of a millimeter.
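The trick of combining blocks can be sketched in code. The series assumed below are a simplified stand-in for a metric Johansson-style set (not the exact 103-piece inventory), and the greedy strategy of clearing the finest decimal places first mirrors how machinists actually select blocks for wringing.

```python
# A simplified metric, Johansson-style gauge-block set (illustrative only):
#   series 1: 1.001 .. 1.009 mm in 0.001 mm steps
#   series 2: 1.01  .. 1.49  mm in 0.01 mm steps
#   series 3: 0.5   .. 24.5  mm in 0.5 mm steps

def stack_for(target_mm: float) -> list[float]:
    """Greedily choose blocks to wring together, clearing the finest
    decimal places first, as a machinist would."""
    chosen: list[float] = []
    remaining = round(target_mm, 3)
    th = round(remaining * 1000) % 10       # clear the thousandths digit
    if th:
        chosen.append(round(1 + th / 1000, 3))
        remaining = round(remaining - chosen[-1], 3)
    off_grid = round(remaining * 100) % 50  # clear down to the 0.5 mm grid
    if off_grid:
        chosen.append(round(1 + off_grid / 100, 2))
        remaining = round(remaining - chosen[-1], 3)
    while remaining > 0:                    # make up the rest in 0.5 mm steps
        block = remaining if remaining <= 24.5 else 24.5
        chosen.append(block)
        remaining = round(remaining - block, 3)
    return chosen

print(stack_for(36.725))  # [1.005, 1.22, 24.5, 10.0], which sums to 36.725 mm
```

Four blocks wrung together reach a dimension specified to a thousandth of a millimeter, which is the whole point of the set.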

Gauge blocks first came to the United States in 1908. Ford’s cars were precise only to themselves: every manufactured piece fit impeccably because it was interchangeable within the Ford system. But once an absolutely impeccably manufactured, gauge-block-confirmed piece from another company (a ball bearing from SKF, say) was introduced into the Ford system, its perfection could trump Ford’s, and Ford would be wrong—ever so slightly, maybe, but wrong nonetheless.

After the Great War, gauge blocks achieved accuracies of up to one-millionth of an inch.

Piston engines have hundreds of parts jerking to and fro, and they cannot be made more powerful without becoming too complicated. Modern jet engines can produce more than 100,000 horsepower, yet essentially they have only a single moving part: a spindle, a rotor, which is induced to spin and, in doing so, causes many pieces of high-precision metal to spin with it.

All that ensures they work as well as they do are the rare and costly materials from which they are made, the protection of the integrity of the pieces machined from these materials, and the superfine tolerances to which every part is manufactured. Since any increase in engine power, and thus aircraft speed, would require heavier piston engines, perhaps too heavy for an aircraft to carry, a new kind of engine was invented: the gas turbine. A crucial element in any combustion engine is air: air is drawn into the engine, mixed with fuel, and then burns or explodes. The thermal energy from that event is turned into kinetic energy, which powers the engine’s moving parts. But the amount of air sucked into a piston engine is limited by the size of the cylinders. In a gas turbine, there is almost no limit: a gigantic fan at the opening of such an engine can swallow vastly more air than can be taken into a piston engine.

Gas turbines were already beginning to power ships, to generate electricity, to run factories. The simplicity of the basic idea was immensely attractive. Air was drawn in through a cavernous doorway at the front of the engine and immediately compressed, and made hot in the process, and was then mixed with fuel, and ignited. It was the resulting ferociously hot, tightly compressed, and controlled explosion that then drove the turbine, which spun its blades and then performed two functions. It used some of its power to drive the aforementioned compressor, which sucked in and squeezed the air, but it then had a very considerable fraction of its power left, and so was available to do other things, such as turn the propeller of a ship, or turn a generator of electricity, or turn the driving wheels of a railway locomotive (didn’t happen, too many problems), or provide the power for a thousand machines in a factory and keep them running, tirelessly.

The first British jet plane flew in 1941, and it was not until 1944 that the public learned about it. Inside a jet engine, everything is a diabolic labyrinth, a maze of fans and pipes and rotors and discs and tubes and sensors and a Turk's head of wires of such confusion that it doesn't seem possible that any metal thing inside it could possibly even move without striking and cutting and dismembering all the other metal things that are crammed together in such dangerously interfering proximity. Yet work and move a jet engine most certainly does, with every bit of it impressively engineered to do so, time and again, and under the harshest and fiercest of working conditions.

There are scores of blades of various sizes in a modern jet engine, whirling this way and that and performing various tasks that help push the hundreds of tons of airplane up and through the sky. But the blades of the high-pressure turbines represent the singularly truest marvel of engineering achievement—and this is primarily because the blades themselves, rotating at incredible speeds and each one of them generating during its maximum operation as much power as a Formula One racing car, operate in a stream of gases that are far hotter than the melting point of the metal from which the blades were made. What stopped these blades from melting?

It turns out to be possible to cool the blades by drilling hundreds of tiny holes in each one and by creating inside each blade a network of tiny cooling tunnels, all of them manufactured at a size, and to such minuscule tolerances, as were quite unthinkable only a few years ago.

The first blades that Whittle made were of steel, which somewhat limited the performance of his early prototypes, since steel loses its structural integrity at temperatures higher than about 500 degrees Celsius. But alloys were soon found that made matters much easier, after which blades were constructed from these new metal compounds. They did not run the risk of melting, because the temperatures at which they operated were on the order of a thousand degrees, and the special nickel-and-chromium alloy from which they were made, known as Nimonic, remained solid and secure and stiff up to 1,400 degrees Celsius (2550 F).

The next generation of engines required that the gas mixture roaring out from the combustion chamber be heated to around 1,600 degrees Celsius, yet even the finest of the alloys then used melted at around 1,455 degrees Celsius. The metals tended to lose their strength and become soft and vulnerable to all kinds of shape changes and expansions at even lower temperatures. In fact, extended thermal pummeling of the blades at anything above 1,300 degrees Celsius was regarded by early researchers as just too difficult and risky.

Most of that air bypasses the engine (for reasons that are beyond the scope of this chapter), but a substantial portion of it is sent through a witheringly complex maze of blades, some whirling, some bolted and static, that make up the front and relatively cool end of a jet engine and that compress the air, by as much as 50 times. The one ton of air taken in each second by the fan, which would in normal circumstances entirely fill the space equivalent of a squash court, is squeezed to a point where it could fit into a decent-size suitcase. It is dense, and it is hot, and it is ready for high drama. For very nearly all this compressed air is directed straight into the combustion chamber, where it mixes with sprayed kerosene, is ignited by an array of electronic matches, as it were, and explodes directly into the whirling wheel of turbine blades. These blades (more than ninety of them in a modern jet engine, and attached to the outer edge of a disc rotating at great speed) are the first port of call for the air before it passes through the rest of the turbine and, joining the bypassed cool air from the fan, gushes wildly out of the rear of the engine and pushes the plane forward. “Nearly all” is the key. Some of this cool air, the Rolls-Royce engineers realized, could actually be diverted before it reached the combustion chamber, and could be fed into tubes in the disc onto which the blades were bolted. From there it could be directed into a branching network of channels or tunnels that had been machined into the interior of the blade itself. The blade was now filled with cool air—cool only by comparison: the simple act of compressing it made it quite hot, about 650 degrees Celsius, but still a thousand degrees cooler than the post–combustion-chamber fuel-air mixture.
To make use of this cool air, scores of unimaginably tiny holes were then drilled into the blade surface, drilled with great precision and delicacy and in configurations that had been dictated by the computers, and drilled down through the blade alloy until each one of them reached just into the cool-air-filled tunnels—thus immediately allowing the cool air within to escape or seep or flow or thrust outward, and onto the gleaming hot surface of the blade.

It is here that the awesome computational power available since the late 1960s comes into its own, becomes so crucially useful. Aside from the complex geometry of the hundreds of tiny pinholes, there is the fact that the blades are grown, incredibly, from a single crystal of metallic nickel alloy. This makes them extremely strong—which they need to be, as in their high-temperature whirlings they are subjected to centrifugal forces equivalent to the weight of a double-decker London bus. Very basically, the molten metal (an alloy of nickel, aluminum, chromium, tantalum, titanium, and five other rare-earth elements that Rolls-Royce coyly refuses to discuss) is poured into a mold that has at its base a little, curiously twisted, three-turned tube, which resembles nothing so much as a pig's tail; the metal that solidifies up through it ends up with all its molecules lined up evenly.

It has become a single crystal of metal, and thus, its eventual resistance to all the physical problems that normally plague metal pieces like this is mightily enhanced. It is very much stronger—which it needs to be, considering the enormous centrifugal forces.

Electrical discharge machining, or EDM, as it is more generally known, employs just a wire and a spark, both of them tiny, the whole process directed by computer and inspected by humans, using powerful microscopes, as it is happening.  The more complex the engines, the more holes need to be drilled into the various surfaces of a single blade: in a Trent XWB engine, there are some 600, arranged in bewildering geometries to ensure that the blade remains stiff, solid, and as cool as possible. Their integrity owes much to the geometry of the cooling holes that are being drilled, which is measured and computed and checked by skilled human beings. No tolerance whatsoever can be accorded to any errors that might creep into the manufacturing process, for a failure in this part of a jet engine can turn into a swiftly accelerating disaster.

As the tolerances shrink still further and limits are reached that even the most well-honed human skills cannot match, automation has to take over. The Advanced Blade Casting Facility can perform all these tasks (from the injection of the losable wax to the growing of single-crystal alloys to the drilling of the cooling holes) with the employment of no more than a handful of skilled men and women. It can turn out 100,000 blades a year, all free of errors.

But failure was still possible. The fate of passengers depended on the performance of one tiny metal pipe no more than five centimeters long and three-quarters of a centimeter in diameter, into which someone at a factory in the northern English Midlands had bored a tiny hole, but had mistakenly bored it fractionally out of true. The engine part in question is called an oil feed stub pipe, and though there are many small steel tubes wandering snakelike through any engine, this particular one, a slightly wider stub at the end of a longer but narrower pipe, was positioned in the red-hot air chamber between the high- and intermediate-pressure turbine discs. It was designed to send oil down to the bearings on the rotor that carried the fast-spinning disc. It was machined improperly because the drill bit that did the work was misaligned, with the result that along one small portion of its circumference, the tube was about half a millimeter too thin.

Metal fatigue is what caused the engine to fail. The aircraft had spent 8,500 hours aloft, and had performed 1,800 takeoff and landing cycles. It is these last that punish the mechanical parts of a plane: the landing gear, the flaps, the brakes, and the internal components of the jet engines. For, every time there is a truly fast or steep takeoff, or every time there is a hard landing, these parts are put under stress that is momentarily greater than the running stresses of temperature and pressure for which the innards of a jet engine are notorious.

Heisenberg, in helping in the 1920s to father the concepts of quantum mechanics, made discoveries and presented calculations that first suggested this might be true: that in dealing with the tiniest of particles, the tiniest of tolerances, the normal rules of precise measurement simply cease to apply. At near-and subatomic levels, solidity becomes merely a chimera; matter comes packaged as either waves or particles that are by themselves both indistinguishable and immeasurable and, even to the greatest talents, only vaguely comprehensible.

In the making of the smallest parts for today's great jet engines, we are reaching down nowhere near the limits that so exercise the minds of quantum mechanicians. Yet we have reached a point in the story where we begin to notice our own possible limitations and, by extension and extrapolation, also the possible end point of our search for perfection.

An overlooked measurement error on the mirror amounting to one-fiftieth the thickness of a human hair managed to render most of the images beamed down from Hubble fuzzy and almost wholly useless.

Chapter 9. TOLERANCE: 0.000 000 000 000 000 000 000 000 000 000 000 01 (35 decimal places)

Here we come to the culmination of precision’s quarter-millennium evolutionary journey. Up until this moment, almost all the devices and creations that required a degree of precision in their making had been made of metal, and performed their various functions through physical movements of one kind or another. Pistons rose and fell; locks opened and closed; rifles fired; sewing machines secured pieces of fabric and created hems and selvedges; bicycles wobbled along lanes; cars ran along highways; ball bearings spun and whirled; trains snorted out of tunnels; aircraft flew through the skies; telescopes deployed; clocks ticked or hummed, and their hands moved ever forward, never back, one precise second at a time. Then came the computer, into an immobile and silent universe, one where electrons and protons and neutrons have replaced iron and oil and bearings and lubricants and trunnions and the paradigm-altering idea of interchangeable parts.

Precision had by now reached a degree of exactitude that would be of relevance and use only at the near-atomic level.

Intel's Fab 42 makes electronic microprocessor chips, the operating brains of almost all the world's computers. Enormous ASML devices allow the firm to manufacture these chips, to place transistors on them in huge numbers, and to reach an almost unreal level of precision and minuteness of scale that today's computer industry, pressing for ever-speedier and more powerful computers, endlessly demands.

Gordon Moore, one of the founders of Intel, is most probably the man to blame for this trend toward ultraprecision in the electronics world. He made an immense fortune by devising the means to make ever-smaller and smaller transistors and to cram millions, then billions of them onto a single microprocessing chip. There are now more transistors at work on this planet (some 15 quintillion, or 15,000,000,000,000,000,000) than there are leaves on all the trees in the world. In 2015, the four major chip-making firms were making 14 trillion transistors every single second. Also, the sizes of the individual transistors are well down into the atomic level.

When the Broadwell family of chips was created in 2016, node size was down to a previously inconceivably tiny fourteen-billionths of a meter (the size of the smallest of viruses), and each chip contained no fewer than seven billion transistors. The Skylake chips made by Intel at the time of this writing have transistors that are sixty times smaller than the wavelength of visible light, and so are literally invisible.

It takes three months to complete a microprocessing chip, starting with the growing of a 400-pound, very fragile, cylindrical boule of pure smelted silicon, which fine-wire saws will cut into dinner plate–size wafers, each an exact two-thirds of a millimeter thick. Chemicals and polishing machines will then smooth the upper surface of each wafer to a mirror finish, after which the polished discs are loaded into ASML machines for the long and tedious process toward becoming operational computer chips. Each wafer will eventually be cut along the lines of a grid that will extract a thousand chip dice from it—and each single die, an exactly cut fragment of the wafer, will eventually hold the billions of transistors that form the non-beating heart of every computer, cellphone, video game, navigation system, and calculator on modern Earth, and every satellite and space vehicle above and beyond it.

What happens to the wafers before the chips are cut out of them demands an almost unimaginable degree of miniaturization. Patterns of newly designed transistor arrays are drawn with immense care onto transparent fused silica masks, and then lasers are fired through these masks and the beams directed through arrays of lenses or bounced off long reaches of mirrors, eventually to imprint a highly shrunken version of the patterns onto an exact spot on the gridded wafer, so that the pattern is reproduced, in tiny exactitude, time and time again.

After the first pass by the laser light, the wafer is removed, is carefully washed and dried, and then is brought back to the machine, whence the process of having another submicroscopic pattern imprinted on it by a laser is repeated, and then again and again, until thirty, forty, as many as sixty infinitesimally thin layers of patterns (each layer and each tiny piece of each layer a complex array of electronic circuitry) are engraved, one on top of the other.

Rooms within the ASML facility in Holland are very much cleaner than even an ordinary semiconductor cleanroom. They are clean to the far more brutally restrictive demands of ISO class 1, which permits only 10 particles of just one-tenth of a micron per cubic meter, and no particles of any size larger than that. A human being existing in a normal environment swims in a miasma of air and vapor that is five million times less clean.

The test masses on the LIGO devices in Washington State and Louisiana are so exact in their making that the light reflected by them can be measured to one ten-thousandth of the diameter of a proton.

Consider Alpha Centauri A, which lies 4.3 light-years away. The distance of 4.3 light-years is some 26 trillion miles, or, in full, 26,000,000,000,000 miles. It is now known with absolute certainty that the cylindrical masses on LIGO can help to measure that vast distance to within the width of a single human hair.
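A back-of-the-envelope check of that figure, using standard values for the speed of light, the Julian year, and the mile (the book's 26 trillion is a round-up of roughly 25.3 trillion):

```python
# Rough check of the distance to Alpha Centauri A in miles.
SPEED_OF_LIGHT_M_PER_S = 299_792_458       # meters per second (exact)
SECONDS_PER_YEAR = 365.25 * 24 * 3600      # Julian year
METERS_PER_MILE = 1609.344

light_year_miles = SPEED_OF_LIGHT_M_PER_S * SECONDS_PER_YEAR / METERS_PER_MILE
distance_miles = 4.3 * light_year_miles

print(f"1 light-year   ~ {light_year_miles:.3e} miles")   # ~ 5.879e12
print(f"4.3 light-years ~ {distance_miles:.3e} miles")    # ~ 2.528e13
```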



Rationing. Book review of “Any way you slice it” by Stan Cox

Preface. I can't imagine that there's a better book on rationing out there, though of course I can't be sure; after reading this book I don't feel the need to find others on the topic. As usual, I had to leave quite a bit out of this review, skipping medical care rationing entirely, among many other topics. Nor did I capture the myriad ways rationing can go wrong, so if you ever find yourself trying to implement a rationing system, or advocating for one, you'll wish you'd bought this book. I can guarantee you the time is coming when rationing will be needed; in fact, it's already here with covid-19. I've seen food lines over a mile long.

As energy declines, food prices will go up, and at some point gasoline, food, electricity, and heating ought all to be rationed.

Though this might not happen in the U.S., where the most extreme and brutal form of capitalism exists. The U.S. is the richest nation that has ever existed, but its distribution of wealth is among the most unfair on the planet. When the need to ration strikes, economists will argue against it, I'm sure, saying there'll be too much cheating and it will be too hard to implement. Capitalism hates price controls. That's why “publicly raising the question of curbing growth or uttering the dread word ‘rationing’ in the midst of a profit-driven economy has been compared to shouting an obscenity in church.”

Republicans constantly want to cut back the affordable care act and the food stamp program SNAP.  Companies keep their workforces as small as possible and shift jobs and factories overseas to nations with lower wages and fewer regulations. They fight hard to restrict the rights of organized labor. All this has resulted in higher productivity, but the rewards go to shareholders and executives, not employees.

So, I wouldn't count on rationing when times get hard – hell, that's already apparent with covid-19 aid. The Trump administration and Republicans were happy to hand out a $2 trillion tax cut to the already rich, but when it came to covid-19 relief, so people wouldn't be evicted from their homes and could afford to buy food, they gave out money just once, and as I write this in mid-October 2020, Republicans won't compromise with the Democrats to give out any more relief money. Even if Biden is elected, the economy can't recover until a vaccine is invented and given to everyone. By then the economy may be so broken it will be hard to fix. And since peak oil has already happened, we can't recover; growth is at an end. Soon the “Long Emergency” Kunstler wrote about begins.

Let's hope I'm wrong and that Homeland Security or some other government agency already has emergency rationing plans in place. I've seen the emergency plans of Denver, Chicago, and other cities. They are usually very high level, covering who should call whom, lists of nursing homes to evacuate, and the like. But there's no actual stockpile of food or blankets, and no rationing plans. When I spoke to someone in California's emergency planning unit, I was told this won't happen because it would be too costly a bureaucracy to set up, and any perceived maldistribution would undo the political fortunes of the party in power.

So you'd better plan to grow as much of your own food as possible during energy decline; the level of inequality and selfishness in the United States is truly striking. There may be rationing in some localities. Try to find a good community (by reading the posts here), gain skills, and help others out whenever you have a chance, to create a bubble of mutual aid and kindness in this cold, cruel capitalistic world.

Stan Cox. 2013. Any Way You Slice It: The Past, Present, and Future of Rationing. The New Press.

When energy trading companies, led by Enron Corporation, created shortages in California's recently deregulated power industry, they caused wholesale electricity prices to jump by as much as 800%, with economic turmoil and suffering the result. The loss to the state was estimated at more than $40 billion. That same year, Brazil had a nationwide electricity shortfall of 10%, which was proportionally larger than the shortage in California. But the Brazilian government avoided inflation and blackouts simply by capping prices and limiting all customers, residential and commercial, to 10% lower consumption than that of the previous year, with severe penalties for exceeding the limit. No significant suffering resulted. The California crisis is viewed as one of America's worst energy disasters, but, says Farley, “No one even remembers a ‘crisis’ in Brazil in 2001.”

In Zanzibar, for example, resort hotels and guesthouses sited in three coastal villages consume 180 to 850 gallons of water per room per day (with the more luxurious hotels consuming the most), while local households have access to only 25 gallons per day for all purposes.

The mechanisms for non-price rationing are many and varied. The more familiar include rationing by queuing, as at the gas pump in the 1970s; by time, as with day-of-week lawn sprinkling during droughts; by lottery, as with immigration visas and some clinical trials of scarce drugs; by triage, as in battlefield or emergency medicine; by straight quantity, as governments did with gasoline, tires, and shoes during World War II; or by keeping score with a nonmonetary device such as carbon emissions or the points that were assigned to meats and canned goods in wartime.
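Mechanically, the wartime points scheme amounts to a second currency spent alongside money. Here is a minimal sketch in Python; the point values are invented for illustration and are not the OPA's actual tables, which changed month to month:

```python
# Hypothetical ration-point values per item (not historical OPA figures).
POINT_VALUES = {"beef_lb": 8, "butter_lb": 16, "canned_peas": 4}

def try_purchase(balance: int, basket: dict) -> tuple:
    """Deduct ration points for a basket; refuse the sale if points run short."""
    cost = sum(POINT_VALUES[item] * qty for item, qty in basket.items())
    if cost > balance:
        return False, balance          # purchase refused, balance unchanged
    return True, balance - cost        # purchase allowed, points deducted

ok, left = try_purchase(48, {"beef_lb": 2, "butter_lb": 1, "canned_peas": 2})
print(ok, left)  # prints: True 8
```

The key property is that a purchase needs both money and points, so even with prices fixed, the quantity each household can buy is capped.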

If we allow the future to be created by veiled corporate planning, the fairly predictable consequence will be resource conflicts between the haves and have-nots—or rather, among the haves, the hads, and the never-hads.

It’s quite possible (indeed very common, I would guess) to be simultaneously concerned about the fate of the Earth and worried that the necessary degree of restraint just isn’t achievable. We’ve been painted into a corner by an economy that has a bottomless supply of paint. Overproduction, the chronic ailment of any mature capitalist economy, creates the need for a culture whose consumption is geared accordingly.

Whenever there’s a ceiling on overall availability of goods, no one is happy. And when a consumer unlucky enough to be caught in such a situation is confronted with explicit rationing—a policy that she experiences as the day-to-day face of that scarcity—it’s no wonder that rationing becomes a dirty word. That has always been true, but an economy that is as deeply dependent on consumer spending as ours would view explicit rationing as a doubly dirty proposition. In America, freedom of consumption has become essential to realizing many of our more fundamental rights—freedom of movement, freedom of association, ability to communicate, satisfactory employment, good health care, even the ability to choose what to eat and drink—and no policy that compromises those rights by limiting access to resources is going to be at all welcome.

No patriotic American can or will ask men to risk their lives to preserve motoring-as-usual. —Secretary of the Interior Harold Ickes explaining the U.S. government’s gasoline rationing plan, April 23, 1942

Carter was neither the first nor the last leader to use martial language when urging conservation and sacrifice. According to the environmental scholar Maurie Cohen, “Experience suggests that the use of militaristic representations can be an effective device with which to convey seriousness of purpose, to marshal financial resources, to disable opponents, and to mobilize diverse constituencies behind a common banner. Martial language can also communicate a political message that success may take time and that public sacrifice may be required as part of the struggle.”

As World War I ground on into its third year in the summer of 1917, U.S. exports of wheat and other foods were all that stood between Europe’s battle-weary populations and mass hunger. America’s annual food exports rose from 7 to 19 million tons during the war. As a result, the farms of the time, which were far less productive than those of today, were hard-pressed to satisfy domestic demand. By August 1917, with the United States four months into its involvement in the war, Congress passed the Lever Act, creating the United States Food Administration and the Federal Fuel Administration and giving them broad control of production and prices. Commodity dealers were required to obtain licenses from Food Administrator Herbert Hoover, and he had the power to revoke licenses, shut down or take over firms, and seize commodities for use by the military. In September, the Toledo News-Bee announced that the “entire world may be put on rations soon” with Hoover acting as “food dictator of the world.”

But as it turned out, Hoover wasn’t much of a dictator. According to the historian Helen Zoe Veit, restrictions consisted mostly of jawboning, as “food administrators simultaneously exalted voluntarism while threatening to impose compulsory rations should these weak, ‘democratic’ means prove insufficient”; however, “many Americans wrote to the Food Administration to say that they believed that compulsion actually inspired cheerful willingness, whereas voluntarism got largely apathetic results.” There were in fact a few mandatory restrictions. Hoarding of all kinds of products was prohibited, and violators could be punished with fines or even imprisonment. “Fair-price lists” ran in local newspapers, and retailers were expected to adhere to them. But controls on prices of wheat and sugar were not backed up with regulation of demand. That led to scarcities of both commodities, as consumers who could afford to buy excessive quantities often did so.

Meanwhile, the Fuel Administration had to deal with shortages of coal, which at that time was the nation’s most important source of energy for heating, transportation, and manufacturing. Heavy heating demand in the frigid winter of 1917–18 converged with higher-than-normal use of the railway system (largely for troop movements) to precipitate what has been called the nation’s first energy crisis. The administration resorted to a wide range of stratagems to conserve coal, including factory shutdowns, division of the country into “coal zones” with no trade between zones, and a total cutoff of supplies to country clubs and yacht owners. The administration announced that Americans would be allowed to buy only as much coal as was needed to keep their houses at 68 degrees F. in winter.

The need to conserve petroleum led to a campaign to stop the pastime of Sunday driving.

The campaign against Sunday driving was carried out enthusiastically, perhaps overly so, by self-appointed volunteer citizens. Fuel Administrator Harry Garfield complained that the volunteers had become “tyrannous,” punishing violators in ways that “would have staggered the imagination of Americans twelve months earlier.”

Although food shortages persisted despite the drive for voluntary moderation, rationing remained off the table. Veit explained how the U.S. government’s insistence on voluntarism was an effort to draw a contrast between democratic America and autocratic, “overrationed” Germany. Rationing, the argument went, had undermined German morale while the United States was managing to rescue Europe and feed its own population “precisely because it never forced Americans to sacrifice, but instead inspired them to do so willingly.” (Hoover’s Home Conservation director Ray Wilbur asserted that, before the war, “we were a soft people,” and that voluntary sacrifice had strengthened the nation.) But in World War I America, price controls acting alone did not prevent shortages, unfair distribution, and deprivation. From that experience, the economic historian Hugh Rockoff concluded that “with prices fixed, the government must substitute some form of rationing or other means of reducing demand” because “appeals to voluntary cooperation, even when backed by patriotism, are of limited value in solving this problem.” The reluctance to use rationing was tied to views on democracy. According to Veit, the most powerful men in Washington, including Hoover and President Woodrow Wilson, viewed democracy as “synonymous with individual freedom,” while another view of democracy that was widely held at the time required “equality of burden.” Under the second definition, “rationing was inherently more democratic as it prevented one group (the patriotic) from bearing the double-burden of compensating for another (the shirkers).”

In practice, Hoover’s Food Administration valued free-market economics more highly than either personal freedom or fairness. Official language was always of voluntary sacrifice, but there’s more than one way to rope in a “volunteer.” Ad hoc committees in schools, workplaces, churches, and communities kept track of who did and didn’t sign Hoover’s food-conservation pledge or display government posters in their kitchen windows. In urging women to sign and comply with the Hoover pledge, door-to-door canvassers laid on the hard sell, often with the implication of an “or else” if the pledge was refused. Statements from government officials to the effect of “we know who you are” and explicit branding of nonsigners as “traitors” were highly effective recruiting techniques. But millions of poor and often hungry Americans had no excess consumption to give up. A Missouri woman told Hoover canvassers that yes, she would accept a pledge card so that she could “wipe her butt with it,” because she “wasn’t going to feed rich people.” As Veit put it, “the choice to live more ascetically was a luxury, and the notion of righteous food conservation struck those who couldn’t afford it as a cruel joke.”

With pressure building, the U.S. government probably would have resorted to rationing had World War I continued through 1919. The major European combatants, whose ordeal had been longer and tougher, did have civilian rationing, and the practice reappeared across Europe with the return of war in 1939.

When World War II broke out in Europe, the United States once again mounted a campaign to export food and war materials to its allies. Soon after America entered the war, the first items to require rationing were tires and gasoline. Those moves can be explained, in Rockoff’s words, by the “siege motive,” the result of absolute scarcity imposed by an external cutoff of supply. The rubber and tire industries were indeed under siege, with supplies from the Pacific having been suddenly cut off. The processes for making synthetic rubber were known, but there had not been time to build sufficient manufacturing capacity. The government’s first move was to buy time by calling a halt to all tire sales. With military rubber requirements approaching the level of the economy’s entire prewar output, Leon Henderson, head of the Office of Price Administration (OPA), urged Americans to reduce their driving voluntarily to save rubber. But, unwilling to rely solely on drivers’ cooperation, the government got creative and decided to ration gasoline as an indirect means of reducing tire wear. The need for gas rationing had already arisen independently in the eastern states. At the outbreak of the war, the United States was supplying most of its own oil needs. With much of the production located in the south-central states, tankers transported petroleum from ports on the Gulf Coast to the population centers of the East Coast. But in the summer of 1941, oil tankers began to be diverted from domestic to transatlantic trade in support of the war effort, and all shipping routes became highly vulnerable to attack by German submarines. With supplies strictly limited, authorities issued ration coupons that all drivers purchasing gasoline were required to submit, and also banned nonessential driving in many areas.

Police were asked to stop and question motorists if they suspected a violation and “check on motorists found at race tracks, amusement parks, beaches, and other places where their presence is prima facie evidence of a violation.” Drivers also were required to show that they had arranged to carry two or more passengers whenever possible. Energy consumption was further curtailed by restrictions on the manufacture of durable goods, including cars. At one point, passenger-car production was shut down altogether. That, according to Rockoff, was in a sense “the fairest form of rationing. Each consumer got an exactly equal share: nothing.”

It became clear early on that rationing of food and other goods would become necessary as well. The OPA announced that “sad experience has proven the inadequacy of voluntary rationing. . . . Although none would be happier than we if mere statements of intent and hortatory efforts were sufficient to check overbuying of scarce commodities, we are firmly convinced that voluntary programs will not work.”26 With some exceptions, such as coffee and bananas, the trigger for rationing foodstuffs was not the siege motive. The United States was producing ample harvests and continued to do so throughout the war, but the military buildup of 1942 included a commitment to supply each soldier and sailor in the rapidly expanding armed services with as much as four thousand calories per day. Those hefty war rations, along with exports of large tonnages of grain to Britain and other allies, pulled vast quantities of food out of the domestic economy. Without price controls, inflation would have ripped through America’s food system and the economy, and the price controls could not have held without rationing.

During the first year of America’s involvement in the war, there was only loose coordination among agencies responsible for production controls, price controls, and consumer rationing, and as a result the government was unable either to keep prices down or to meet demand for necessities. In late 1941 and early 1942, polls showed strong public demand for broader price controls. Across-the-board controls were imposed in April 1942. But over the next year, prices still rose at a 7.6 percent annual rate, so in early 1943 comprehensive rationing of foods and other goods was announced. In April, Roosevelt issued a strict “Hold-the-Line Order” that allowed no further price increases for most goods and services. Only that sweeping proclamation, backed up as it was by a comprehensive rationing system, was able to keep inflation in check and achieve fair distribution of civilian goods. In late 1943, the OPA was getting very low marks in polls—not because of opposition to rationing or price controls, but because people were complaining that they needed even broader and stricter enforcement.

It’s important to note that OPA actions were often motivated as much by wariness of political unrest as by a concern for fairness. Amy Bentley, a historian, explains that the experience of the Great Depression was fresh in the minds of government officials, and they felt that, with the war having re-imposed nationwide scarcity, ensuring equitable sharing of basic needs was essential if a new wave of upheaval and labor radicalization was to be avoided. In publicity materials, the OPA stressed the positive, buoyed by comments from citizen surveys, such as the view of one woman that “rationing is good democracy.”

Consumer rationing by quantity took two general forms: (1) straight rationing (also referred to at various times as “specific” or “unit” rationing), which specified quantities of certain goods (tires, gas, shoes, some durable goods) that could be bought during a specified time period at a designated price; and (2) points rationing, in which officials assigned point values to each individual product (say, green beans or T-bone steak) within each class of commodity (canned vegetables or meats). Each household was allocated a certain number of points that could be spent during the specified period. Price ceilings were eventually placed on 80 percent of foodstuffs, and ceilings were adjusted for cost of living city by city. Determining which goods to ration and what constituted a “fair share” required a major data-collection effort. The OPA drew information from a panel of 2,500 women who kept and submitted household food diaries.

The general rules and mechanics of wartime rationing, while cumbersome, were at least straightforward. Ration stamps were handled much like currency, except that they had fixed expiration dates. Businesses were required to collect the proper value in stamps with each purchase so that they could pass them up the line to wholesalers and replenish inventories. Many retailers had ration bank accounts from which they could write ration checks when purchasing inventory; that spared them the inconvenience of handling bulky quantities of stamps and avoided the risk of loss or theft. Although stamps expired at the end of the month for consumers, they were valid for exchange by retailers and wholesalers for some time afterward. Therefore, the OPA urged that households destroy all expired ration stamps, warning that pools of excess stamps could “breed black markets.” The link between the physical stamp and the consumer was tightly controlled. Only a member of the family owning a ration book could use the stamps, and stamps had to be torn from the book by the retailer, not the customer. Stamps for butter had to be given to the milkman in person at time of delivery; they were not to be left with a note.
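The points mechanism described above amounts to a second, parallel currency: each purchase must clear both a money price and a point price against a fixed household budget. A minimal sketch follows; the point values are hypothetical illustrations, not the OPA’s actual schedules.

```python
# Illustrative sketch of WWII points rationing. Point values below are
# hypothetical, chosen only to show the mechanism.
HYPOTHETICAL_POINTS = {
    "canned green beans": 14,   # points per can
    "t-bone steak": 12,         # points per pound
    "butter": 16,               # points per pound
}

def spend_points(budget: int, basket: dict) -> int:
    """Return the household's remaining point budget after a purchase,
    or raise ValueError if the basket exceeds the budget."""
    cost = sum(HYPOTHETICAL_POINTS[item] * qty for item, qty in basket.items())
    if cost > budget:
        raise ValueError(f"basket costs {cost} points, only {budget} available")
    return budget - cost

# A household with 48 points buying two cans of beans and a pound of steak:
remaining = spend_points(48, {"canned green beans": 2, "t-bone steak": 1})
```

Because points expired with the ration period, an unspent `remaining` balance could not be carried forward, which is why the OPA urged households to destroy leftover stamps.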

When consumption of some products is restricted by rationing, people spend the saved money on nonrationed products, driving up their prices. Therefore, Britain’s initial, limited program covering bacon and butter did little to protect the wider economy. Families were plagued by inflation, as well as by shortages and unfair distribution of still-uncontrolled goods; demand swelled for “all-around rationing.”32 Restrictions on sugar and meat began early in 1940, in order to keep prices down, ensure fairness, and reduce dependence on imports. Tea, margarine, and cooking fats were included at midyear. As food scarcity took hold, worsening in the winter of 1940–41, Britons demanded that rationing be extended to a wider range of products to remedy growing inequalities in distribution. They got what they asked for.

The quantities allowed per person varied during the course of the rationing period but were never especially generous: typical weekly amounts were four to eight ounces of bacon plus ham, eight to sixteen of sugar, two to four of tea, one to three of cheese, six of butter, four to eight of jam, and one to three of cooking oil. Allowances were made. Pregnant women and children received extra shares of milk and of foods high in vitamins and minerals, while farmworkers and others who did not have access to workplace canteens at lunchtime received extra cheese rations. Quantities were adjusted for vegetarians. In its mechanics, the system differed from America’s in that each household was required to register with one—and only one—neighborhood shop, which would supply the entire core group of rationed foods. As the war continued, it became clear that this exclusive consumer-retailer tie was unpopular, so the government introduced a point-rationing plan in December 1941, permitting consumers to use points at any shop they chose.33 In both the UK and America, most of the day-to-day management of the rationing systems was, necessarily, handled at the local level. Administration of the system was decentralized. According to Bentley, “The 5,500 local OPA boards scattered across the country, run by over 63,000 paid employees and 275,000 volunteers, possessed a significant amount of autonomy, enabling them to base decisions on local considerations. The real strength of the OPA, then, lay less in the federal office than in its local boards.” In large cities from Baltimore to San Francisco, a “block leader plan” was instituted to help families deal with scarcity.

The block leader, always a woman, would be responsible for discussing nutritional information and sometimes rationing procedures and scrap drives with all residents of her city block. The Home Front Pledge (“I will pay no more than the top legal prices—I will accept no rationed goods without giving up ration points”), administered to citizens by the millions, was backed by clear-cut rules and was legally enforceable, so it was taken much more seriously than the Hoover Pledge of 1917–18. In Britain’s system, the Ministry of Food oversaw up to nineteen Divisional Food Offices, and below them more than 1,500 Food Control Committees, each of which included ten to twelve consumers, five local retailers, and one shop worker, that dealt with the public through local Food Offices.

“FAIR SHARES FOR ALL” ARE ESSENTIAL

Other Allied nations, as well as Germany and the other Axis powers, also imposed strict rationing. In the countries they occupied, the Nazis enforced extremely harsh forms of rationing among local populations in order to provide more plentiful resources to German troops and civilians. A 1946 report by two Netherlands government officials, poignant in its matter-of-factness, shows in meticulous detail through numerous graphs and descriptions how the calorie consumption and health status of that country’s population suffered and how many lives were lost under such strict rationing. Average adult consumption dropped as low as 1,400 calories per day during 1944. Meager as it was, that was an average; because of restrictions on food distribution, many people, especially in the western part of the country, received much less food and starved. By that stage of the war, according to the authors, “People were forced more and more to leave the towns in search of food in the production areas. Many of them, however, did not live through these food expeditions.”

The OPA’s job was made easier, notes Bentley, by the fact that “most Americans understood that their wartime difficulties were minor compared with the hardships in war-torn countries.” Soon after the initiation of food rationing, the Office of War Information estimated that, conservatively, “civilians will have about 3 percent more food than in the pre-war years but about 6 percent less than in 1942. There will be little fancy food; but there will be enough if it is fairly shared and conserved. Food waste will be intolerable.” Total availability of coffee, canned vegetables, meat, cheese, and canned milk was often as high as before the war. Those items were rationed not because they were especially scarce but in order to hold down demand that otherwise would have ballooned under the price controls that were in effect.

There was, for instance, an explosion of demand for milk in the early 1940s, when prices were fixed, but the dairy industry blocked attempts to initiate rationing. Consumption shot up, and severe shortages developed in pockets all over the country. Everything but rationing was attempted: relaxing health-quality standards, prohibiting the sale of heavy whipping cream, and reducing butterfat requirements. But the problem of excess demand persisted.

Huge quantities of fruits and vegetables were exported in support of the war effort, leaving limited stocks for civilian use. The OPA kicked off 1943 with a plan under which households would be allowed to keep in their own homes no more than five cans of fruits or vegetables per occupant at any one time. A penalty of eight ration points would be assessed for each excess can. There is little evidence that the ban was actually enforced, and neither home-canned goods nor fresh produce was covered by the order.40 Home canners could get a pound of sugar for each four quarts of fruits they planned to can without surrendering ration coupons; however, sugar restrictions sidelined home brewers and amateur wine makers. Commercial distilling for civilian consumption ceased, but the industry reassured customers that it had a three-year supply of liquor stockpiled and ready to sell, so there was no need to ration.

Bread and potatoes were exempted from rationing, to provide a dietary backstop. With caloric requirements thus satisfied by starchy foods, protein became the chief preoccupation. Red meat had already held center stage in the American diet for decades; consumption at the beginning of World War II was more than 140 pounds per person per year, well above today’s average of about 115 pounds. During the war, the government aimed to provide a full pound of red meat per day to each soldier; therefore, according to officials, only 130 pounds per year would remain for each civilian. A voluntary “Share the Meat” program, introduced in 1942, managed to lower average annual consumption by a mere three pounds. When the necessity for stronger curbs became evident, rationing was introduced in 1943, and soon consumption dropped steeply, to 104 pounds per civilian. Farm families were permitted to consume as much as they wanted of any kind of meat they produced on the farm without surrendering ration coupons, but farm owners who did not cultivate their own land were not. Elsewhere, the feeling of scarcity was pervasive. For those who craved more meat, there was little consolation to be found in a chicken leg. At that time, poultry was not highly regarded as a substitute for red meat, so average consumption was only a little over twenty pounds per year—less than one-third of today’s level.42 The OPA tightened price ceilings on poultry but did not ration it.43

By April 1, 1943, even vegetable-protein sources such as dried beans, peas, and lentils had been added to the list of rationed items. To make a half pound of hamburger go further, the American Red Cross Nutrition Service suggested the use of “extenders,” including dried bread and bread crumbs, breakfast cereals, and “new commercial types” of filler. Cooks became accustomed to substituting jelly and preserves for butter; preparing sardine loaf, vegetable loaf, cheese, lard, and luncheon meat; and substituting “combination dishes such as stews, chop suey, chili, and the like for the old standby dishes such as choice roasts and steaks and chops.” Americans sought out protein- and calorie-heavy food wherever they could, partly because, in those days, thinness evoked memories of hard times. The OPA, for example, “served notice on Americans . . . that they will do well, if they want to preserve that well-fed appearance, to stop dreaming of steaks and focus their appetites and point purchasing power on hamburger, stew, and such delicacies as pig’s ears, pork kidneys, and beef brains.”

Starting the next morning, footwear would be subject to rationing, with each American entitled to three pairs of shoes per year.

The reaction to rationing was instantaneous and frantic. Most shoe and department stores were closed on Sundays in that era, but in the few hours that remained before shoe rationing began, there was a rush on footwear at the handful of open stores. During the following week, after the order went into effect, the stampede continued, partly because some shoppers had misunderstood the rationing order to mean that shoes were already in short supply.

The apparel industry succeeded in blocking rationing plans from being implemented for any articles other than footwear, and that made it very difficult to control demand for clothing.48 But efforts to reduce resource consumption at the manufacturing stage were ambitious. For most clothing, the War Production Board (WPB) established “square-inch limitations on the amount of material which may be used for all trimmings, collars, pockets, etc.,” while clothing was designed “to keep existing wardrobes in fashion” so that consumers would wear them longer. In discussing a WPB order regulating women’s clothing, the government publication Victory Bulletin observed, “The Basic Silhouette—termed the ‘body basic’ by the order—must conform to specified measurements of length, sweep, hip, hem, etc., listed in the order.” Such micromanagement even extended to swimwear, when a skimpier two-piece bathing suit was promoted for requiring less fabric.

Appliance manufacturing for civilian use was tightly restricted. From April 1942 to April 1943, no mechanical refrigerators were produced; that saved a quarter million tons of critical metals and other materials for use in war production. Starting in April 1943, sale of refrigerators, whether electric-run, gas-run, or a nonmechanical icebox type, was resumed; however, in order to be allowed to make a purchase, a household member had to attest on a federal form that “I have no other domestic mechanical refrigerator, nor do I have any other refrigerator equipment that I can use.” Stoves for heating and cooking were similarly rationed, requiring a declaration that the purchaser owned no functional stove. The OPA ruled that the 150,000 stovetop pressure cookers to be produced in 1943 would be allocated by County Farm Rationing Committees and that “community pools,” each comprising several families who agreed to the joint use of a pressure cooker, would receive preference. The WPB exerted its influence on production of radio tubes, light fixtures, lightbulbs, and even can openers. Bed and mattress production was maintained at three-fourths its normal level.

Reports noted, “Sacrificing metal-consuming inner springs, mattress manufacturers have reverted to the construction of an earlier period,” using materials such as cotton felt, hair, flax, and twine. The industry produced “women’s slips made from old summer dresses; buttons from tough pear-tree twigs; life-jacket padding from cattails; and household utensils from synthetic resins.”

In Britain, a series of “Limitations of Supplies” orders governed sales.

Soap was rationed because its production required fats, which had to be shared with the food and munitions industries.

The idea of clothes rationing was no more popular in Britain than it was in the United States. Prime Minister Winston Churchill didn’t like the idea at all, but neither he nor anyone else could come up with an alternative means to keep prices down and all Britons clothed. Apparel was given a page in the ration book originally reserved for margarine, which at the time was not being rationed. Annual allowances fluctuated between approximately one-third and two-thirds of average prewar consumption.

Price-controlled, tax-exempt “utility clothing” was made of fabric manufactured to precise specifications meant to ensure quality and long life. It was conceived in part by “top London fashion designers” and was not necessarily cheap. Yet it was generally well received because of its potential to delay one’s next clothing purchase. Utility plans eventually encompassed other textiles, shoes, hosiery, pottery, and furniture. Items had to be made to detailed specifications, and the number of styles was tightly limited.

Average people also got many opportunities to sit back and enjoy the public humiliation of well-heeled or politically powerful ration violators. In the summer of 1943, the OPA initiated proceedings against eight residents of Detroit’s posh Grosse Pointe suburb for buying meat, sugar, and other products without ration coupons. This gang of “socialites,” as they were characterized, included a prominent insurance executive, the wife of the president of the Detroit News, and the widow of one of the founders of the Packard Motor Car Company, who tried to buy four pounds of cheese under the table and got caught. In Maryland, the wife of the governor had to surrender her gas ration book after engaging in pleasure driving in a government vehicle.

In a column, Mathews demanded that “Washington start cracking down on the big fellows if you expect cooperation from the little fellows.” But it was Mathews himself who was arrested, on libel charges.

Wealthy Britons did not suffer much either under food rationing. Upscale restaurants could serve as much food as their customers could eat, and they were not subject to price controls. Such costly luxuries as wild game and shellfish were not rationed.

The rationing of goods at controlled prices provides a strong incentive for cheating, as the World War II example shows. For administering wartime rationing and price controls, the UK Ministry of Food had an enforcement staff of around a thousand, peaking in 1948 at over thirteen hundred. They pursued cases involving pricing violations; license violations; theft of food in transit; selling outside the ration system; forgery of ration documents; and, most prominently, illicit slaughter of livestock and sale of animal products. Illegal transactions accounted for 3 percent of all motor fuel sales and 10 percent of those involving passenger cars. Enforcement of rationing regulations and price controls by the ministry from 1939 to 1951 resulted in more than 230,000 convictions; the majority of offenders were retail merchants guilty of mostly minor offenses. An estimated 80 percent of the convictions resulted in fines of less than £5, and only 3 to 5 percent led to imprisonment of the offender. There were fewer problems involving quantity-rationed goods (for which consumers were paired up with a single retailer) than there were with rationing via points, which could be used anywhere. Zweiniger-Bargielowska writes that, although most people at one time or another made unauthorized purchases, the most corrosive effect of illicit markets was to subvert the ideal of “fair shares for all,” since it was only those better off who could afford to buy more costly contraband goods routinely.

In the United States, enforcement of price controls and rationing regulations made up 16 percent of the OPA’s budget. The agency identified more than 333,000 violations in 1944 alone but prosecuted just 64,000 people that year. Forty percent of prosecutions were for food violations, with the largest share for meat and dairy, and 17 percent were for gasoline. Along with flagrant overcharging, selling without ration-stamp exchange, and counterfeiting of stamps and certificates, businesses resorted to work-arounds: “tie-in sales” that required purchase of another product in addition to the rationed one, “upgrading” of low-quality merchandise to sell at the high-quality price, and short-weighting. As in Britain, the off-the-books meat trade got a large share of attention.

Illicit meat was sold for approximately double the legal price, and it tended to be the better cuts that ended up in illegal channels. Official numbers of hogs slaughtered under USDA inspection dropped 30 percent from February 1942 to February 1943, with the vanished swine presumably diverted into illegal trade. Off-the-books deals by middlemen were common, as was “the rustler, who rides the range at night, shooting animals where he finds them, dressing them on the spot, and driving away with the carcasses in the truck.” It wasn’t only meat that was squandered. Victory Bulletin warned, “Potential surgical sutures, adrenalin, insulin, gelatin for military films and bone meal for feeds are disregarded by the men who slaughter livestock illegally”; also lost was glycerin, needed for manufacturing explosives.

Retailers didn’t always play strictly by the ration book. A coalition of women’s organizations in Brooklyn urged Chester Bowles, director of the OPA, to prohibit shopkeepers from holding back goods for selected customers, demanding that all sales be first come, first served. But some OPA officials pointed out that such a policy would discriminate against working women who had time to shop only late in the day. Restaurants were free to serve any size portions they liked; however, if they decided to continue serving ample portions (for which they were allowed to charge a fittingly high price), they faced the prospect of having to close for several days each week when their meat ration ran out. Private banquets featuring full portions could be held with the permission of local rationing boards.

The ration stamps issued to a single household were not usually sufficient to purchase a large cut of meat such as a roast, and because stamps had expiration dates they could not be saved up from one ration period to the next in order to do so. Because consumers were required to present their own ration books in person when buying meat, announced the OPA, guests invited to a dinner party would have to buy their own meat and deliver it beforehand to the host cook—an awkward but workable solution if, say, pork chops were on the menu. However, if a single large cut such as a pot roast were to be served, the OPA noted, the host and invitees would have to “go to the butcher shop together, each buying a piece of the roast, and ask the butcher to leave it in one piece.”

The extension of rationing to bread in 1946–48, a move intended to ensure the flow of larger volumes of wheat to areas of continental Europe and North Africa that were threatened by famine, was highly controversial. People had come to depend on bread, along with potatoes, as a “buffer food” that helped feed manual workers and others for whom ration allowances did not provide sufficient calories. Rationing of the staff of life was unpopular from the start, even though allowances were adjusted to meet varying nutritional requirements and rations themselves were ample.

On November 15, Nixon asked all gasoline stations to close voluntarily each weekend, from Saturday evening to Sunday morning. As during World War II, a national allocation plan was put in place to ensure that each geographic region had access to adequate fuel supplies. In establishing allocation plans, the Federal Energy Office assigned low priority to the travel industry and, in an echo of World War II, explicitly discouraged pleasure driving. That same month, Nixon announced cuts in deliveries of heating oil—reductions of 15 percent for homes, 25 percent for commercial establishments, and 10 percent for manufacturers—under a “mandatory allocation program.” The homes of Americans who heated with oil were to be kept six to ten degrees cooler that winter. Locally appointed boards paired fuel dealers with customers and saw to it that the limits were observed. Supplies of aviation fuel were cut by 15 percent. The national speed limit was lowered to 55 miles per hour. With Christmas approaching, ornamental lighting was prohibited. Finally, Nixon took the dramatic step of ordering that almost 5 billion gasoline ration coupons be printed and stored at the Pueblo Army Depot in Colorado, in preparation for the day when gas rationing would become necessary.

Here is how Time magazine depicted the national struggle for fuel during the 1973–74 embargo: The full-tank syndrome is bringing out the worst in both buyers and sellers of that volatile fluid. When a motorist in Pittsburgh topped off his tank with only $1.10 worth and then tried to pay for it with a credit card, the pump attendant spat in his face. A driver in Bethel, Conn., and another in Neptune, N.J., last week escaped serious injury when their cars were demolished by passenger trains as they sat stubbornly in lines that stretched across railroad tracks. “These people are like animals foraging for food,” says Don Jacobson, who runs an Amoco station in Miami. “If you can’t sell them gas, they’ll threaten to beat you up, wreck your station, run over you with a car.” Laments Bob Graves, a Lexington, Mass., Texaco dealer: “They’ve broken my pump handles and smashed the glass on the pumps, and tried to start fights when we close. We’re all so busy at the pumps that somebody walked in and stole my adding machine and the leukemia-fund can.”

President Gerald Ford laid out a plan to reduce American dependence on imported oil by imposing tariffs and taxes on petroleum products. His plan was met with almost universal condemnation. A majority of Americans polled said they would prefer gasoline rationing to the tax scheme. Time agreed, arguing that rationing would have three crucial qualities going for it—directness, fairness, and familiarity—and adding that “support for rationing is probably strongest among lower-income citizens who worry most about the pocketbook impact of Ford’s plan.”

The federal government, he said, should challenge Americans to make sacrifices, and its policies must be fair, predictable, and unambiguous. But, he warned, “we can be sure that all the special-interest groups in the country will attack the part of this plan that affects them directly. They will say that sacrifice is fine as long as other people do it, but that their sacrifice is unreasonable or unfair or harmful to the country. If they succeed with this approach, then the burden on the ordinary citizen, who is not organized into an interest group, would be crushing.” He was right. Critics in both the private and public sectors rejected Carter’s characterization of the energy crisis as the “moral equivalent of war” and viewed any discussion of limits, conservation, or sacrifice as a threat to the economy. Opponents then mocked his call to arms by abbreviating it to “MEOW,” while Congress simply ignored Carter’s warnings and avoided taking any effective action on energy.

Ground zero for the gas shortages of 1979 was California. The state imposed rationing on May 6, allowing gas purchases only on alternate days: cars with odd license-plate numbers could be filled on odd days of the month and even numbers on even days. Several other states followed suit, but that move alone didn’t relieve the stress on gas stations. Many station attendants refused to fill tanks that were already half full or more. That first Monday morning, many drivers who woke up early to allow time to buy gas on the way to work instead found empty, locked cars already standing in long lines at the pumps. The cars had been left there the previous evening by drivers who then walked or hitchhiked back to the station in the morning. Two Beverly Hills attorneys tied their new rides—a pair of Arabian horses—to parking meters outside their office as they prepared to petition the city to suspend an ordinance against horse riding in the streets. The National Guard was called out to deliver gas to southern Florida stations. A commercial driver hauling a tankful to a Miami station found a line of 25 cars following him as if, he later said, he’d been “the Pied Piper.” In some cities, drivers were seen setting up tables alongside their cars in gas lines so the family could have breakfast together while waiting to fill the tank.
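California’s alternate-day scheme reduces to a parity check: the last digit of the license plate must match the parity of the calendar day. A minimal sketch, ignoring real-world wrinkles such as vanity plates and the two consecutive odd days at the turn of a 31-day month:

```python
def may_buy_gas(plate_last_digit: int, day_of_month: int) -> bool:
    """Under odd/even rationing, a car may fill up only when the parity of
    its plate's last digit matches the parity of the calendar day."""
    return plate_last_digit % 2 == day_of_month % 2
```

So a plate ending in 4 could buy gas on the 6th of the month but not on the 7th.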

One of the worst incidents occurred in Levittown, Pennsylvania, where a crowd of 1,500 gasoline rioters “torched cars, destroyed gas pumps, and pelted police with rocks and bottles.” A police officer responded to a question from a motorist by smashing his windshield, whacking the driver’s son with his club, and putting the man’s wife in a choke hold. In all, 82 people were injured, and almost 200 were arrested. Large numbers of long-haul truckers across the nation went on strike that summer, parking their rigs. Some blockaded refineries, and a few fired shots at non-striking truckers. The National Guard was called out in nine states, as “the psychology of scarcity took hold.” A White House staffer told Newsweek, “This country is getting ugly.”

As during World War II, gasoline scarcity was far worse in some regions than in others. But increasing desperation in the nation’s dry spots prompted talk of rationing even in conservative quarters. The columnist George F. Will observed, “There are, as yet, no gas lines nationwide. If there ever are, the nation may reasonably prefer rationing by coupon, with all its untidiness and irrationality, to the wear and tear involved in rationing by inconvenience.” A New York Times–CBS News poll in early June found 60 percent of respondents preferring rationing to shortages and high prices.

Carter wanted the government to have the ability to ration gas, thereby freeing up supplies that could then go to regions that were suffering shortages. Thanks largely to the oil companies’ fierce opposition, Congress refused to pass standby rationing in May, but support for the idea continued to grow.

Most of his policy recommendations were again focused on conservation. His most specific move was asking Congress once again for authority to order mandatory conservation and set up a standby gasoline rationing system. Of the five thousand or so telegrams and phone calls received by the White House in response to that speech, an astonishing 85% were positive. Carter’s approval jumped 11 points overnight. The next day, he spoke in Detroit and Kansas City, both times to standing ovations. But Carter was still being vague about what, specifically, Americans were supposed to do. Meanwhile, renewed political wrangling on other issues and a drop in gas prices drained away the nation’s sense of urgency over energy. The deeper problems had not gone away, but without the threat of societal breakdown that had so alarmed the public and stirred Carter to bold oratory, the incentive to take action vanished.

Despite a 28% improvement in vehicle fuel economy, America’s total annual gasoline consumption has increased 47% since 1980, with the consumption rate per person 10% higher today than in 1980. Had there been a 20% gasoline shortfall at the start of the 1980s, triggering Congress’s gas-rationing plan, and had we managed to hold per-capita consumption at the rationed level for the next 30 years (taking into account the rate of population increase that we actually experienced), we would have saved 800 billion gallons—equal to about six years of output from U.S. domestic gasoline refiners. That’s a lot in itself, but such long-term restraint would have caused a chain reaction of dramatic changes throughout the economy, changes so profound that America would probably be a very different place today had rationing been instituted and had it continued. That didn’t happen. Instead, the U.S. economy focused again on developing new energy-dependent goods and services.
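The arithmetic behind that 800-billion-gallon figure can be roughly reconstructed. The sketch below uses assumed round numbers, not the author’s data: 1980 U.S. gasoline use of about 100 billion gallons per year, population growing from roughly 227 to 309 million over the 30 years, and total consumption rising 47% as stated, with growth treated as linear for simplicity.

```python
# Rough reconstruction of the "800 billion gallons saved" estimate.
# Every input is an assumed round number for illustration.

def rationing_savings(years=30,
                      pop_start=227e6, pop_end=309e6,   # U.S. population, ~1980 -> ~2010
                      use_start=100e9, use_end=147e9,   # gasoline, gallons/yr (+47% as stated)
                      ration_cut=0.20):                 # 20% shortfall at the start
    """Cumulative gallons saved by freezing per-capita use at the rationed level."""
    per_capita_ration = (use_start / pop_start) * (1 - ration_cut)
    saved = 0.0
    for t in range(years):
        frac = t / (years - 1)                              # 0.0 at year one, 1.0 at year 30
        pop = pop_start + (pop_end - pop_start) * frac      # linear population growth
        actual = use_start + (use_end - use_start) * frac   # linear consumption growth
        rationed = per_capita_ration * pop                  # capped per-person consumption
        saved += actual - rationed
    return saved

print(f"{rationing_savings() / 1e9:.0f} billion gallons saved")
```

With these assumptions the model yields savings on the order of 870 billion gallons, consistent with the 800-billion ballpark quoted in the text.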

The clearest expression of the current goals of our foreign policy came in an address to the 1992 Earth Summit in Rio de Janeiro by President George H.W. Bush, a year after the first Persian Gulf war. There he announced to the world that “the American way of life is not negotiable,” signaling that the country had changed profoundly since the day almost exactly fifty years earlier when Harold Ickes had declared that patriotic citizens would never risk the lives of their soldiers to preserve “motoring as usual.”

According to calculations by Vaclav Smil of the University of Manitoba, the human economy has already reduced the total weight of plant biomass on Earth’s surface by 45%. About 25% of each year’s plant growth worldwide, and a similar proportion of all freshwater flowing on Earth’s surface, is already being taken for human use. If you could put all of our livestock and other domestic animals on one giant scale, they would weigh 25 times as much as Earth’s entire dwindling population of wild mammals. In 2009, a group of 29 scientists from seven countries published a paper identifying nine “planetary boundaries” that define a “safe operating space” for humanity. If we cross those boundaries and don’t pull back, they concluded, the result will be catastrophic ecological breakdown. Given the uncertainties involved in any such projections, they proposed to set the limits not at the edge of the precipice but at some point this side of it, prudently leaving a modest “zone of uncertainty” as a buffer. The boundaries were defined by limits on atmospheric carbon dioxide concentration; air pollutants other than carbon dioxide; stratospheric ozone damage; industrial production of nitrogen fertilizer; breakdown of aragonite, a calcium compound that’s an indicator of the health of coral and microscopic sea organisms; human use of freshwater; land area used for cultivation of crops; species extinction; and chemical pollution. The group noted that we have already transgressed three of the limits: carbon dioxide concentration, species extinction, and nitrogen output. Furthermore, they concluded, “humanity is approaching, at a rapid pace, the boundaries for freshwater use and land-system change,” while we’re dangerously degrading the land that is already sown to crops.

The International Energy Agency (IEA) concludes that extraction of conventional oil peaked in 2006, but that with increases in mining of oil from unconventional deposits like the tar sands of Canada, the plateau will bump along for decades.

Demand for gas may rise even faster than that. It seems that everyone these days is looking to natural gas to bail the world out of all kinds of crises: big environmental groups urge that it be substituted for coal to reduce carbon emissions; the transportation industry wants to substitute it for increasingly costly oil by burning it directly, converting it to liquid fuel, or by feeding power plants that in turn will feed the batteries in electric cars; enormous quantities will be consumed in the process of extracting oil from tar-sand deposits; and high-yield agriculture requires increasing quantities of nitrogen fertilizer manufactured with natural gas.

In the near term, the process of hauling enough rock phosphate, lime, livestock manure, or even human waste to restore phosphorus-deficient farm soils will be burdened by increasing transportation costs. Then there are tractors, 4.2 million of them on farms and ranches in the United States alone. Field operations on almost all farms in America, including organic farms, are heavily dependent on diesel fuel or gasoline. Finally, the farm economy supports a much larger off-farm food economy, one that is heavily dependent on fossil energy. Now we are asking the industrial mode of agriculture, with its own low energy efficiency, to supply not only food and on-farm power but also billions of gallons of ethanol and biodiesel for transportation.

If enough good soils and waters are to be maintained to support that life, the currently wasteful means of using water and growing food must be not just adjusted but transformed. Until that happens, the interactions among energy, water, and food will come to look even more like a game of rock-paper-scissors. Energy shortages or restrictions can keep irrigation pumps, tractors, and fertilizer plants idle or make food unaffordable.

Current methods for producing food are huge energy sinks and major contributors to greenhouse-gas warming, while the conversion of food-producing land to substitute for mineral resources in providing fuel, fabric, rubber, and other industrial crops will accelerate soil degradation while contributing to wasteful energy consumption.

The next best course is to make it later rather than sooner by leaving fossil fuels in the ground longer. But can economies resist burning fossil fuels that are easily within reach? Might even renewable energy sources be harnessed to the task of obtaining much more potent and versatile fossil energy? That is already happening in various parts of the world, including the poverty-plagued but coal-rich state of Jharkhand in India. Strip mining there is pushing indigenous people off their land, ruining their water supply, and driving them to desperate means of earning an income. Every day for the past decade, it has been possible to witness a remarkable spectacle along a main highway between the coal-mining district of Hazaribagh and the state capital, Ranchi: men hauling coal on bicycles. Each bike, with its reinforced frame, supports up to four hundred pounds of coal in large sacks. The men, often traveling in long convoys, push the bicycles up steep ridges and sometimes stand on one pedal to coast down. Their cargo has been scavenged from small, shallow, village-dug mines, from government-owned deposits that are no longer economically suitable for large-scale mining, or from roadsides where it has fallen, they say, “off the back of a truck.” Hauling coal the forty miles from Hazaribagh to Ranchi takes two days, and the men make the round-trip twice a week. These “cycle-wallahs” travel roads throughout the region, delivering an estimated 2.5 million metric tons of coal and coke annually to towns and cities for cooking, heating, and small industry.

If scarcity, either absolute or self-imposed, becomes a pervasive fact of life, will rationing no longer be left to the market? Will more of it be done through public deliberation? Ask ecologists and environmentalists that question today, and you frequently hear that quantity rationing is coming and that we should get ready for it. David Orr, a professor of environmental studies and politics at Oberlin College in Ohio and a leading environmental thinker, believes that “one way or another we’re going to have rationing. Rationing goes to the heart of the matter.” Although “we assume that growth is humanity’s destiny and that markets can manage scarcity,” Orr believes that “letting markets manage scarcity is simply a way of not grappling with the problem.” And because “there is no question that rationing will happen,” he says the key question is how. “Will it be through growth in governance, either top-down or local, or will we let it happen ‘naturally,’” through prices? The latter course, Orr believes, would lead to chaos. Likewise, Fred Magdoff, co-author (with John Bellamy Foster) of What Every Environmentalist Needs to Know About Capitalism, among other books, sees rationing as very likely necessary in any future economy that takes the global ecological crisis seriously.

He says there is no escaping the problem of distribution: “There is rationing today, but it’s never called that. Allocation in our economy is determined almost entirely in one of two ways: goods tend to go to whoever has the most money or wherever someone can make the most profit.” As an alternative, he says, rationing by quantity rather than ability to pay “makes sense if you want to allocate fairly. It’s something that will have to be faced down the line. I don’t see any way to achieve substantive equality without some form of rationing.” But, Magdoff adds, “there’s a problem with using that terminology. There are certain ‘naughty’ words you don’t use. ‘Rationing’ is not considered as naughty as ‘socialism,’ but it’s still equivalent to a four-letter word.”

Ask almost any economist today, however, and you will learn that non-price rationing simply doesn’t work and should be avoided. For example, Martin Weitzman, at Harvard University, who developed some of the basic theory of rationing decades ago, takes the view that “generally speaking, most economists, myself included, think that rationing is inferior to raising prices for ordinary goods. It can work for a limited time on patriotic appeal, say during wartime. But without this aspect, people find a way around the rationing.” He adds that rationing would also “require a large bureaucracy and encounter a lot of resistance. I am hard-pressed to think of when rationing and price controls would be justified for scarce materials.” Others see rationing as unworkable not only for technical reasons but simply because people in affluent societies today cannot even imagine life under consumption limits. Maurie Cohen has little confidence that residents of any industrialized society would accept comprehensive limits on consumption because, in his view, “following a half century of extraordinary material abundance, public commitments to consumerist lifestyles are now more powerfully resolute.” David Orr agrees that prospects for consumption restraints in America today are dim at best: “We have to reckon with the fact that from about 1947 to 2008 we had a collision with affluence, and it changed us as a people. It changed our political expectations, it changed us morally, and we lost a sense of discipline. Try to impose a carbon tax, let alone rationing, today and you’ll hear moaning and groaning from all over.”

In theory, shortages are always temporary. As the price of a scarce good rises, fewer and fewer people are able and willing to buy it, while at the same time producers are stimulated to increase their output. The price stops rising when demand has been driven low enough to meet the rising supply. If for whatever reason (often because of absolute scarcity, as with Yosemite campsites) the price is not allowed to rise to the heights required to bring demand and supply into alignment, and there is no substitute product that can draw away demand, the good is apportioned in some other way. At that point, nonprice rationing, often referred to simply as “rationing,” begins.

With basic necessities as much as with toys, rationing by queuing tends to create not buzz but belligerence. Dreadful experiences of rationing by queuing—like the lines that formed at gas stations across America and outside bakeries in the Soviet Union in the 1970s—are burned into the memories of those who lived through those times; few regard such methods of allocation as satisfactory when it comes to essential goods.

Weitzman then summarized the case in favor of rationing: “The rejoinder is that using rationing, not the price mechanism, is in fact the better way of ensuring that true needs are met. If a market clearing price is used, this guarantees only that it will get driven up until those with more money end up with more of the deficit commodity. How can it honestly be said of such a system that it selects out and fulfills real needs when awards are being made as much on the basis of income as anything else? One fair way to make sure that everyone has an equal chance to satisfy his wants would be to give more or less the same share to each consumer independent of his budget size.” Acknowledging that arguments both for and against rationing of basic needs “are right, or at least each contains a strong element of truth,” Weitzman went on to demonstrate mathematically how rationing by price performs better when people’s preferences for a commodity vary widely but there is relative equality of income. Rationing by quantity appeared superior in the reverse situation, when there is broad inequality of buying power and demand for the commodity is more uniform (as can be the case with food or fuel, for example).

In a follow-up to Weitzman’s analysis, Francisco Rivera-Batiz showed that rationing’s advantage increases further if the income distribution is skewed—that is, if the majority of households are “bunched up” below the average income while a small share of the population with very high incomes occupies the long upper “tail” of the distribution. Rivera-Batiz concluded that quantity rationing “would work more effectively (relative to the price system) in allocating a deficit commodity to those who need it most in those countries in which economic power and income are concentrated in the hands of the few.”

Writing back in the early days of World War II, the Dutch economist Jacques Polak had come to a similar conclusion: that rationing had become necessary because even a small rise in price can make it impossible for the person of modest income to meet basic needs, while in a society with high inequality there is a wealthy class that can “push up almost to infinity the prices of a few essential commodities.” Therefore, he stressed, it is not shortages alone that create the need for rationing with price controls; rather, it is a shortage that occurs in a society with “substantial inequalities of income.”

The burden of consumption taxes weighs most heavily on people in lower-income brackets. It has been suggested that governments can handle that problem by redistributing proceeds from consumption taxes in the form of cash payments to low-income households. But determining the size of those payments is no easier than finding the right tax rate; furthermore, means-tested redistribution programs often come to be seen by more affluent non-recipients as “handouts” to undeserving people and are therefore more politically vulnerable than universal programs or policies. Weitzman has also observed that problems always seem to arise when attempts are made to put compensation systems into practice. The argument that the subsidies can blunt the impact of the taxes, he says, “is true enough in principle, but not typically very useful for policy prescriptions because the necessary compensation is practically never paid.”

Eighteen years later, continuing his examination of the potential of taxes and income subsidies for addressing inequality, Tobin observed that redistributing enough income to the lower portion of the American economic scale through a mechanism like the “negative income tax” being contemplated by the Nixon administration at the time (which would have provided subsidies to low-income households much like today’s Earned Income Tax Credit) would require very high—and, by implication, politically impossible—tax rates on higher incomes.

With rationing by quantity, people or households use coupons, stamps, electronic credits, or other parallel currencies that entitle them to a given weight or measure of a specific good—no more, no less—over a given time period. Normally, as was the case in World War II–era America and Britain, rationed goods or the credits to obtain them may be shared among members of a household but may not be sold or traded outside the household. The plan may be accompanied by subsidies and/or price controls.

“Rationing in time” cannot ensure that savings of the resource will be proportional to the length of time for which supply is denied. For example, consumption doesn’t fall by half when alternate-day lawn-watering restrictions are in force, because people can water as much as they like on their assigned days.

Unlike straight rationing, quantity rationing by points cannot guarantee everyone access to every item in the group of rationed items, but it can ensure a fair share of consumption from a “menu” of similar items. Points, like all ration credits, are a currency. Every covered item requires payment in both a cash price and a point price. But points differ from money in that every recipient has the same point “income,” which does not have to be earned; points can be spent only on designated commodities; point prices are not necessarily determined by supply and demand in the market; and trading in points is usually not permitted.

The range of goods covered by a given point scheme could in theory be as narrow as it was with canned goods during World War II or as broad as desired—if, for example, there were a point scheme covering all forms of energy, with different point values for natural gas, gasoline, electricity, etc.

The values of items in terms of points can be set according to any of several criteria. In the case of wartime meat, items with higher dollar prices also tended to be the ones assigned higher point values (for a time in Britain, dollar and point values were identical), but for other types of products, an item’s point value might reflect the quantity of a scarce resource required to produce it—or, as we will see, the greenhouse-gas emissions created during its manufacture, transport, and use. The more closely point values are adjusted to reflect the level of consumer demand that would exist without rationing, the less they interfere with functioning of the market.
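A points scheme is effectively a second checkout currency alongside money. The minimal sketch below, with invented allotments and prices, shows the dual-payment rule described above: a purchase clears only if the buyer can cover both the cash price and the point price, and both balances are debited together.

```python
# Sketch of a points-rationing account: every covered purchase must clear
# both a cash price and a point price. Allotments and prices are invented.

class PointAccount:
    def __init__(self, points, cash):
        self.points = points   # equal point "income" for every consumer
        self.cash = cash       # ordinary money, which varies by household

    def buy(self, cash_price, point_price):
        """Debit both currencies and return True if the purchase clears."""
        if self.cash >= cash_price and self.points >= point_price:
            self.cash -= cash_price
            self.points -= point_price
            return True
        return False

shopper = PointAccount(points=48, cash=20.00)        # weekly allotment (assumed)
print(shopper.buy(cash_price=1.50, point_price=16))  # True: both prices covered
print(shopper.buy(cash_price=5.00, point_price=40))  # False: only 32 points remain
```

Because every consumer starts with the same point income, the point balance, not the cash balance, is usually the binding constraint, which is exactly how the scheme caps consumption independently of wealth.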

Among people with differing preferences, there will be winners and losers.

If only a few items are restricted, people take the extra money that they would otherwise have spent on additional rationed goods and spend it on non-rationed ones, driving up their prices. If price controls are then extended to other goods without rationing them, demand for those goods shoots up even higher, and stocks are further depleted. These goods are then brought into the rationing scheme, thereby extending it to larger and larger numbers of essential goods.

But what about nonessential goods, such as swimming pools or rare wines? If the main concern is fair access to necessities, there seems little reason to ration nonessentials. If wealthy people, prohibited from buying as much gasoline or food as they would like, use their increased disposable income to bid up the prices of luxuries, is the harm too small even to worry about? Maybe, but it would depend on the motive for rationing. If the goal is to reduce total resource consumption, the prices of vintage wines or rare books might be left to the market, while the construction of swimming pools would be restricted.

As an alternative to a vast, complex system of quantity-rationing schemes for many products, Kalecki proposed simply to ration total spending. Each person would be permitted expenditures only up to a fixed weekly limit in retail shops, with the transactions tracked through coupon exchange. Up to that monetary limit, families could buy any combination of goods and quantities, as long as their total per-person spending stayed under the limit. No such system of “general” or “expenditure” rationing has ever been adopted, but during and after the war, several British and American economists examined the possible consequences of employing it during some future crisis. Once again, they realized, income inequality would complicate things. If the spending ceiling was the same for everyone, as was proposed, then lower-income families could spend their entire paycheck and still have coupons left over. Such families might be tempted to sell their excess coupons to people who had more cash to spend than their coupon allotment would allow. Some economists worried that that would not only stimulate unwanted demand but violate the “fair shares for all” principle. It was Kalecki who finally proposed a workable solution: that the government offer to buy back any portion of a person’s expenditure allowance that the person could not afford to use. For example, if the expenditure ration were £30 but a family had only £10 worth of cash to spend, they could, under Kalecki’s proposal, sell one-third of their allowance back to the government and be paid £10 in cash. That could be added to the £10 they had on hand, and they could spend it all while staying within the limit imposed by their £20 worth of remaining ration coupons.
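Kalecki’s buyback rule generalizes neatly from the £30/£10 example. If coupons are bought back at face value, a cash-short household sells back half the gap between its ration and its cash, which leaves its remaining coupons and its spendable cash exactly equal. A small sketch (the function name and interface are mine):

```python
# Kalecki's buyback: a household may sell unaffordable expenditure
# allowance back to the government at face value (as in the passage).

def kalecki_buyback(ration, cash):
    """Return (allowance to sell back, total the household can then spend)."""
    if cash >= ration:
        return 0, ration                 # no buyback; the ration itself binds
    sellback = (ration - cash) / 2       # half the gap between ration and cash
    spendable = cash + sellback          # equals the remaining coupons, ration - sellback
    return sellback, spendable

print(kalecki_buyback(30, 10))  # (10.0, 20.0): sell a third of the £30 back, spend £20
```

The algebra: selling back s leaves coupons of R − s and cash of C + s, and setting these equal gives s = (R − C) / 2, which reproduces the book’s worked example.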

The government buyback system would be intended to prevent the exploitation of the worse off by the better off and ensure a firm limit on total consumption, creating a “fairly comprehensive, democratic, and elastic system of distributing commodities in short supply,” and it would provide an automatic benefit to low-income families without an artificial division of the population into “poor” and “non-poor” categories. Well-to-do families would tend to accumulate savings under expenditure rationing, and Kalecki urged that those savings be captured by the government through an increase in upper-bracket income tax rates. That would not only curb inflation, it could also help pay for the coupon buyback scheme. Kalecki’s idea of allowing people to return ration credits for a refund rather than sell them to others has since been suggested as a feature of future carbon-rationing schemes.

In a report written for the U.S. National Security Resources Board in 1952, the economist Gustav Papanek looked back at the wartime discussion of expenditure rationing and saw plenty of deficiencies when he compared the concept with that of straight rationing of individual goods. He noted that if the same spending ceiling were applied to everyone, it could mean a dramatic change in lifestyle for the wealthy, who would probably push back hard against such restrictions. As one of many examples, he cited people with larger houses, who would plead that they had much higher winter heating bills and that allowances would have to be made. Nevertheless, a uniform spending ceiling would be necessary, wrote Papanek, because allowing those with larger incomes to spend more money “not only would make inequality of sacrifice in wartime evident, but would also place upon it the stamp of government approval.”

In many countries, a large portion of the supply of food, water, cooking fuel, or other essentials is subsidized and rationed by quantity, while the remaining supply is traded on the open market. Such two-tier systems provide a floor to ensure access to the necessary minimum but have no ceiling to contain total consumption. Some also treat different people or households differently by, for example, steering subsidized, rationed goods toward lower income brackets. And in some, there is the option of allowing the barter or sale of unused ration credits among consumers and producers. Such markets were proposed as part of Carter’s standby gas rationing plan, and they have been included in more recent proposals for gas rationing and for limiting greenhouse-gas emissions.

The consequences of rationing can be difficult to predict, but one thing is certain: nobody wants to be told what they can and cannot buy. There will be cheating. There are always people—often many people—who want to buy more of a rationed product than they can obtain legally; otherwise, there would be no need to ration it.

Widespread circumvention of regulations poses a dilemma. On the one hand, Cohen wrote, attempts to enforce total compliance are “ineffectual in the short term and counterproductive in the long term,” while on the other, lax enforcement “will lead to the erosion of public support.” Cohen concluded that “there is no easy solution to this dilemma other than agile and adept management.”

Evasion of wartime price controls and rationing was less extensive in Britain than in the United States. Conventional wisdom long held that the difference could be explained by a greater respect for government authority among British citizens, reinforced by their more immediate sense of shared peril (the “Dunkirk spirit”). But an analysis of wage and price data before, during, and after the war shows that the most important factor was the British government’s tighter and more comprehensive control of supply and demand. Enforcement in both countries went through mid-war expansion in response to illicit activity, but key measures taken by the British—complete control of the food supply; standardization of manufacturing and design in clothing, furniture, and other products; concentration of manufacturing in a smaller number of plants; the consumer–retailer tie; and rationing of textiles and clothing—were not adopted in America, where industry opposition to such interference was stronger. The British also invested much more heavily in the system. In 1944, with rationing at its peak in both countries, British agencies were spending four and a half times as much (relative to GDP) as their American counterparts on enforcement of price controls. They were also employing far more enforcement personnel and filing eight times as many cases against ration violators, relative to population.

The differential impact of rationing and underground markets on economic classes should not be ignored. Theory says that the rich are better off in a pure market economy, while those with the lowest incomes are better off in an economy that incorporates rationing; moreover, the poor benefit even more when rationing (whether by coupons or queuing) is accompanied by an underground market, because secure access to necessities is then combined with some flexibility in satisfying family needs. This is thought to be one of the many reasons that there was such widespread dissatisfaction with the conversion from a controlled economy with illegal markets to an open, legal market economy in the former Soviet Union and Eastern Europe in the 1990s.

Ration cards, books, stamps, and coupons are not only clumsy and inconvenient; they invite mischief as well. Some of the biggest headaches for past and current systems have involved the theft, sale, and counterfeiting of ration currency. Some have suggested that in the future it would be easier to head off cheating by using technologies such as smart cards and automatic bank debits that were not available during previous rationing eras.

Several countries are currently pursuing electronic transfers for their food-ration systems. Of course, electronic media are far from immune to outlaw trading; consumers, businesses, and governments have long battled a multibillion-dollar criminal market that exploits credit and debit cards, ATMs, and online vulnerabilities. Were rationing mechanisms added to that list of targets, enforcers would be drawn into similar kinds of cat-and-mouse games with hackers and thieves. Cohen argues that while smart cards and similar technologies can reduce administrative costs and red tape, they cannot eliminate cheating and that it is unrealistic to expect “high-tech rationing” to wipe out evasion and fraud. It will still be necessary, he predicts, “to use customary enforcement tools to limit the corrosive effects of unlawful practices.”

No law is ever met with total compliance, but there are many examples of laws and regulations that appear to be accomplishing their goals despite routine violations. Compare limits imposed by rationing to speed limits. Around the world it is common for a large share of motorists to be exceeding posted speed limits at any one time. Like the majority of wartime rationing violators, who dipped only lightly into the underground market, most drivers fudge just a few miles per hour. A relatively small proportion of drivers break the limit by ten miles per hour or more, and a still smaller percentage of speeders are ticketed; nevertheless, speed limits succeed in preventing accidents and fatalities. Existence of a speed limit is cited by drivers as an important reason for driving more slowly than they otherwise would, whereas concern about pollution or fuel consumption is not. Also relevant to a discussion of rationing compliance is the example of tax laws, which are routinely violated in all countries. U.S. federal income tax evasion has been estimated to result in a loss of approximately 19% of the total due—almost $500 billion. The 2012 budget of the Internal Revenue Service (IRS) was about $13 billion, less than half of which was for enforcement. The IRS estimates that it can recover $4.5 million in lost revenue for each $1 million spent on enforcement. The federal government could reduce its budget deficits by spending heavily to eliminate much of the fraud and evasion that occurs, but the necessary expansion of government intervention and the ill will that would result are prices too high to pay.

Meanwhile, polls show that the percentage of Americans who choose not to believe in the existence of human-induced greenhouse gas warming has increased dramatically. Much of that change can be credited to a vigorous climate-denial industry that conjures up apocalyptic visions of deprivation, lost liberty, and stunted opportunity that, it claims, would result from any interference with the right of people and corporations to emit greenhouse gases without restraint. Denial industry spokespeople have developed a kind of shorthand language for talking about the nightmare world they say awaits us, and the key word in that language is “rationing.” Here, for example, is the conservative commentator Daniel Greenfield writing in 2011: “For environmentalists alternative energy was never really about independence, it was about austerity and rationing for the good of the earth. . . . [T]hey will use any conceivable argument to ram their agenda through, but they are not loyal to anything but their core austerity rationing manifesto. Their goal is expensive sustainable energy. If it isn’t sustainable, then it had damn well better be expensive.”

Opponents of recent proposals to install two-way thermostats in all California homes or use the Endangered Species Act to protect polar bears raise the specter of “energy rationing.” Companies that have resigned their memberships in the U.S. Chamber of Commerce over its opposition to climate legislation are “energy rationing profiteers.” Green urban design leads to “land rationing.” Even First Lady Michelle Obama’s anti-obesity campaign was, according to the website, aimed at “preparing us for food rationing.”

right-wing groups are implicitly urging that sub rosa rationing of necessary goods via individual ability to pay continue as the norm in society, however unfair the results.

The rare occasions when total carbon emissions in the United States have declined were almost always years of economic hardship: 1981–83, 1990–91, 2001, and 2008–12. Over that last five-year period, emissions from energy use fell a startling 13 percent.

economic crises and the rising unemployment, growing hunger, and general misery that they engender are not in themselves acceptable means to achieve ecological stability.

if, for example, numbers ending in 1 or 2 must stay off the street on Mondays, 3 and 4 on Tuesdays, etc., with weekends open to all cars, then weekday traffic, in theory, would be reduced by 20 percent. Extending such restrictions to an entire nation and to all times and days of the week has been proposed as a simple strategy to curb greenhouse-gas emissions from transportation. Experience, however, says it wouldn’t work. The longest-running such program, initiated by Mexico City in 1989, was almost instantly undermined by several factors. People drove cars more miles on those days when they were allowed to circulate, and there was increased traffic volume on weekends. Soon after the program was initiated, more well-off families began acquiring additional cars with contrasting final digits on the tags; often the extra car was a cheap, older used car with poor fuel economy and high pollutant emissions. Traffic volume was reduced by 7.6 percent, not 20, and gas consumption continued to increase. Research on license-plate rationing systems in Mexico City, São Paulo, Bogotá, Beijing, and Tianjin concluded that “there is no evidence that these restrictions have improved the overall air quality.”

 “Congestion isn’t an environmental problem; it’s a driving problem. If reducing it merely makes life easier for those who drive, then the improved traffic flow can actually increase the environmental damage done by cars, by raising overall traffic volume, encouraging sprawl and long car commutes.”

Rationing via rolling blackouts has been employed to deal with emergency shortages in the global North, but scheduled blackouts, emergency outages, and denial of connections have been used routinely in India, China, South Africa, Venezuela, and a host of other countries that suffer chronic shortfalls in generation capacity. Chiefly a device to hold down peak demand, the rolling blackout has little or no impact on total consumption and emissions, because people and businesses tend to increase their rate of use when the power is on. The way the burden of blackouts is shared among communities determines how fair the compromise is.

In India during the summer season, farmers with irrigation pumps are given priority over city dwellers, and poorer areas tend to have much longer blackouts than wealthy ones. The State of California, in an effort to avoid the necessity for rolling blackouts like the ones that struck in 2001, while at the same time curbing greenhouse-gas emissions, has been using “progressive pricing” of electricity, a rationing-by-price mechanism that seeks to ensure a basic supply to everyone while providing a heavy disincentive for overconsumption. Pacific Gas and Electric Company’s customers, for example, pay 12 cents per kilowatt hour for all monthly consumption up to the “baseline,” which is the average consumption for a household in a customer’s climatic region, adjusted by season. Customers who use more electricity pay higher rates for the amount above the baseline. For consumption ranging between 100 percent and 130 percent of the baseline, the rate is just 14 cents, but between 130 and 200 percent, it is 29 cents and rises to 40 cents for consumption exceeding 200 percent of the baseline. An analysis published in 2008 found that the California system provides a modest benefit to lower-income consumers; however, there is no statistical evidence that consumers consciously alter their consumption patterns to stay below any of the thresholds.16 Progressive pricing of electricity is also used in parts of India, China, and other countries, with similar results. However, neither rolling power cuts nor progressive pricing was sufficient to prevent the total electrical eclipse that struck India on July 30–31, 2012, leaving 684 million people—more than half the population of the world’s second-largest nation—without power.
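The tiered billing arithmetic described above can be sketched in a few lines. This is an illustrative model only: the rates (12, 14, 29, and 40 cents) and the tier boundaries (100%, 130%, and 200% of baseline) are the figures quoted in the text, not an actual PG&E tariff, and the baseline value is a hypothetical number chosen for the example.

```python
def tiered_bill(kwh_used: float, baseline_kwh: float) -> float:
    """Monthly bill under a four-tier 'progressive pricing' scheme.

    Tiers are expressed as (upper bound as a multiple of baseline, $/kWh).
    Each kilowatt-hour is billed at the rate of the tier it falls into.
    """
    tiers = [(1.0, 0.12), (1.3, 0.14), (2.0, 0.29), (float("inf"), 0.40)]
    bill = 0.0
    lower = 0.0  # lower edge of the current tier, in kWh
    for multiple, rate in tiers:
        upper = multiple * baseline_kwh
        if kwh_used > lower:
            # bill only the consumption that falls inside this tier
            bill += (min(kwh_used, upper) - lower) * rate
        lower = upper
    return round(bill, 2)

# A household exactly at baseline pays the lowest rate on every kWh;
# one at double the baseline pays steeply more for the excess.
print(tiered_bill(500, 500))   # 60.0   (500 kWh * $0.12)
print(tiered_bill(1000, 500))  # 182.5  (60.0 + 21.0 + 101.5)
```

The design mirrors the finding quoted above: the marginal price rises with consumption, but a customer near a threshold faces only a small jump (12 to 14 cents), which may help explain why the 2008 analysis found no evidence of consumers deliberately staying below the thresholds.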

Along with transportation, home energy consumption accounts for a large share of personal emissions.

The TEQ system can serve to illustrate how personal carbon trading might work, in its generalities if not all specifics. A 2011 report published by the organization Lean Economy Connection (founded by David Fleming), along with twenty members of Parliament, provides many of the details.19 The plan envisions the UK government’s Committee on Climate Change setting an overall annual carbon-emissions budget, one that starts somewhere below the nation’s current emissions total and is lowered further every year thereafter. About 40 percent of total UK emissions currently come from direct energy use by individuals and households, primarily for heating, electricity consumption, and driving. Under the TEQ scheme, one year’s worth of “carbon units” (each unit represents the release of one kilogram of carbon dioxide) toward such usage is issued. Forty percent of the total national stock of units is shared among individuals, with one equal portion going to each adult, while the other 60 percent of units are sold to primary energy users (utilities, industry, government, etc.) at the same weekly auction where treasury bills are sold. Individuals must surrender carbon units with each purchase of electricity or fuel, typically when paying the household utility bill or filling up the family car. Payments are debited directly from each person’s “carbon account.” To facilitate transactions, a card similar to a bank debit card is issued to every adult. According to the proposal, “The TEQ units received by the energy retailer for the sale of fuel or electricity are then surrendered when the retailer buys energy from the wholesaler who, in turn, surrenders them to the primary provider. Finally, the primary provider surrenders units back to the Registrar when it pumps, mines or imports the fuel. 
This closes the loop.” Low energy users who build up a surplus of TEQ units in their accounts can sell them on the national market, and those who require energy above their personal entitlement can buy them.
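The household side of the scheme amounts to a debit account denominated in kilograms of CO2, topped up from the market when it runs dry. The sketch below assumes only the mechanics described above; the class name, prices, and quantities are illustrative, not part of the Fleming proposal.

```python
class CarbonAccount:
    """A personal TEQ-style account: 1 unit = 1 kg of CO2 emissions."""

    def __init__(self, units: float):
        self.units = units  # current balance of carbon units

    def surrender(self, kg_co2: float, market_price: float = 0.0) -> float:
        """Debit units for an energy purchase.

        Any shortfall is bought on the national market at market_price
        ($ per unit). Returns the cash cost of the top-up (0.0 if the
        balance covered the purchase).
        """
        shortfall = max(0.0, kg_co2 - self.units)
        self.units = max(0.0, self.units - kg_co2)
        return shortfall * market_price

acct = CarbonAccount(units=100.0)
cost = acct.surrender(40.0)                      # utility bill: fully covered
print(acct.units, cost)                          # 60.0 0.0
cost = acct.surrender(80.0, market_price=0.05)   # fuel: 20 units short
print(acct.units, cost)                          # 0.0 1.0
```

A low energy user would accumulate a surplus in `units` and could sell it back through the same market where the shortfall above was bought, which is the redistributive feature the proposal emphasizes.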

Households buy and sell TEQ units in the same market where primary energy dealers trade. Most units in the market are bought through the weekly auction and sold by banks and other large-scale brokers; however, businesses, other organizations, and individuals can buy and sell TEQ units as well. Brokers sell to private businesses, public agencies, and other organizations, all of whom need units when buying energy. A farmer, for example, buys units on the TEQ market in order to buy tractor fuel, while a furniture manufacturer and the corner pub buy and use units to pay their electric bills. Any firm, institution, or individual can sell excess units back into the national market through brokers. Energy sellers like gas stations and utilities can buy units through the market and sell them to customers. In a typical situation, the customer who has run out of units but needs to buy gas or pay the electric bill will have to not only pay for the energy but also buy enough TEQ units (ideally, available for sale directly from the gas station or utility) to cover the transaction.

The mechanics of PCAs are similar to those of TEQs. The PCA idea has emerged in various forms, including in the 2004 book How We Can Save the Planet by Mayer Hillman, Tina Fawcett, and Sudhir Chella Rajan. PCAs would cover both home and transportation emissions but leave upstream emissions to be dealt with through other mechanisms. Like TEQs, PCAs feature an annual carbon budget that declines over time, equal distribution of personal allowances (with the exception that children receive a partial allowance), electronic accounting, and a market in allowances. PCAs cover only individual consumption, however; businesses and other organizations are not included in the plan. Hillman and his co-authors foresee smart cards and automatic bank debits playing key roles. Alongside a sketch of their proposed “Carbon Allowance Card,” they explain, “Each person would receive an electronic card containing the year’s credits. This smart card would be presented every time energy or travel services are purchased, and the correct number of units would be deducted. The technologies and logistics would be no different from debit card systems.” (Today, handheld wireless devices presumably could be used as well.) Carbon allowances for electricity or gas consumption could be surrendered as part of paying the monthly utility bill, most conveniently through automatic bank debit.

When access to energy is thereby limited, the plan’s authors anticipate that consumers will seek out more efficient technologies in order to maintain their accustomed lifestyle. They write, “It will be in the interests of manufacturers to supply low-energy goods because this is where the demand will lie.”21 The authors acknowledge that PCAs will not necessarily reduce emissions produced by industry and commerce—they might even increase them if more efficient, more durable consumer goods require more energy and resources to manufacture—so they suggest, “There may need to be a parallel system of rationing with a reducing allocation over time” applied to business and government as well. No national, mandatory carbon-rationing scheme has yet been implemented anywhere.

Hansen has also roundly denounced all forms of carbon trading, telling the U.S. House Ways and Means Committee in 2009, “Except for its stealth approach to taxing the public, and its attraction to special interests, ‘cap and trade’ seems to have little merit.” Hansen calls his alternative a “tax and 100% dividend” plan, arguing that “a tax on coal, oil and gas is simple. It can be collected easily and reliably at the first point of sale, at the mine or oil well, or at the port of entry.”

Some think Hansen is overselling the carbon tax. Because the biggest individual emitters tend to be more affluent and more willing to spend up to a very high level to maintain their lifestyle, analysts at the Lean Economy Connection argue that “if taxation were high enough to influence the behavior of the better-off, it would price the poor out of the market.” Even with redistribution of the tax receipts back to the public, they say there would be no assurance of fair access to energy in times of scarcity. They also contend that a carbon tax won’t work if the goal is a stable decline in emissions over the long term: “It is impossible for tax to give a long-term steady signal: if it remains constant, it will be inappropriate at certain periods of the economic cycle; if it fluctuates, it does not provide a steady signal.”

Carbon rations deal more explicitly with emissions reduction and require deeper engagement by the public; therefore, Fawcett and her colleagues have “generally proposed them in opposition to taxes, not as a complement.” There’s always that third, far more palatable alternative, voluntary restraint; however, she has written, such a policy “could not even begin to tackle the scale of the problem because few individuals could be expected to start taking action for the common good, with ‘free riders’ having so much to gain.”

The Congressional Budget Office, considering a broad range of possible scenarios under the bill were it passed, estimated that the pool of basic carbon allowances would reach an annual total of $50 billion to $300 billion by 2020—hefty sums in themselves—but that the carbon-derivatives market based on the value of those allowances could have been seven to forty times as large, reaching $2 trillion by 2017.52 The ostensible purpose of such devices would be to dampen the volatility of the carbon market, but what are the chances that the tail would end up wagging the dog, as happened in the pre-2008 U.S. mortgage market? Here is how the Minneapolis-based Institute for Agriculture and Trade Policy describes the risks of creating a carbon derivatives market—something that a member of the U.S. Commodity Futures Trading Commission has predicted could be “the most important commodity market ever”: “Once a carbon market, and its associated secondary market, is established, it is likely that carbon derivatives will be bundled into index funds. The sharp projected increase in the volume and value of carbon derivative contracts will induce extreme price volatility in commodity markets. To the extent that carbon derivatives are bundled into commodity index funds, it is likely that carbon prices will strongly influence both agricultural futures contract and cash prices.”

Speculation in carbon and carbon-derivatives markets could put the cost of extra carbon rations beyond the reach of many. That wouldn’t matter to those who manage to keep their energy consumption below their allowance. But many people live far from work with no transportation other than an old car, or reside in poorly insulated houses, and when they cannot afford to move closer to work or buy energy-efficient technology, they could go broke buying increasingly costly carbon credits. It has been proposed that the government take the money received in the auction of carbon credits and spend it on programs to improve insulation for low-income households or provide affordable means of commuting, but there are no estimates of how much could be accomplished with the funds available.

Mark Roodhouse, a historian at the University of York, has drawn lessons from Britain’s wartime experience that he feels help explain the low level of interest. Britons accepted rationing in good spirits because they knew the war would end within a few years and were bolstered by promises that a future of plenty lay ahead. Likewise, writes Roodhouse, people might accept carbon rationing if “the scheme is a temporary measure during the transition from a high-carbon economy to a low-carbon economy and will be removed when the carbon price and/or consumption levels drop below a certain level.” But PCT schemes, in view of the very deep reductions necessary to avoid climate disaster, all envision the ceiling on emissions lowering year by year well into the future at a rate that would outpace any conceivable improvements in efficiency or renewable energy. They are, in effect, permanent schemes that would utterly transform our lives and could not be sold to the public as anything else.

During World War II, there simply was not enough gasoline to go around, so people were not permitted to buy as much as they could afford to buy. Today, even after everyone’s demand for fuel is satisfied, there is some left over. (Even though that will not always be the case, the economy behaves as if it will.) It’s far harder to institute rationing in a world of such apparent abundance than it is in a world of obvious scarcity. So will carbon rationing have to wait until supplies of fossil fuels are depleted to a point that falls far short of demand? If so, it may be too late.

BIRTH RATIONING? One wing of the climate movement has argued for almost half a century that unless decisive action is taken to halt or reverse human population growth, all other efforts to prevent runaway climate change or other catastrophes will fail. For example, J. Kenneth Smail wrote in 2003, “Earth’s long-term sustainable carrying capacity, at what most would define as an ‘adequate’ to ‘moderately comfortable’ standard of living, is probably not much greater than 2–3 billion people.” Given that, he argued, “time is short, with a window for implementation that will last no more than the next 50–75 years, and perhaps considerably less. A deliberate program of population stabilization and reduction should have begun some two or more generations ago (say in 1950, when human numbers were ‘only’ 2.5 billion and demographic momentum more easily arrested) and certainly cannot be delayed much longer.” Prominent in the population-reduction campaign is the London-based Population Matters, whose leading figures have argued that the number of people on Earth should somehow be reduced by 60%. One of Population Matters’s initiatives is the selling of “pop offsets,” through which anyone can, on paper at least, cancel out the greenhouse impact of, say, a Caribbean vacation by contributing money that will go to fund birth-control programs. This, critics have said, can be interpreted as giving people the opportunity to say, “If I can stop them having babies, we won’t have to change our ways.”

Although the global emergency described by population activists would appear to be a problem far too formidable to be resolved by voluntary means, few have proposed mandatory curbs—and with good reason. In most countries, public reaction against laws governing reproduction would almost certainly be far more negative than reactions against rationing of, say, gasoline. It would not just be anti-contraception politicians, anti-environment libertarians, and pro-procreation religious leaders who would condemn any form of reproductive rationing; the resistance would be almost universal. In light of that, it is worth examining some of the very few examples of non-voluntary limits on reproduction.

WATER: Privatization and commodification of water are unsustainable and fragmenting forces ecologically, temporally, geographically, socially, ethically, politically, and even economically. The examples are legion: Atlanta’s water privatization debacle; failed privatization ventures in Laredo, Texas; Felton, California; and East Cleveland, Ohio; the severely stressed Colorado River; the conflict-ridden Upper Klamath Basin in Oregon and northeastern California; the unresolved and unsustainable demands on the Apalachicola–Chattahoochee–Flint River System in the Southeastern U.S.; the once-declining but now-recovering Mono Lake; excessive groundwater pumping in Tucson, Arizona, Tampa, Florida, San Antonio, Texas, and Massachusetts’s Ipswich River Basin; and even emerging water crises. Given such problems, many local governments have de-privatized, once again treating water as a public utility. In times of severe water shortage, water utilities, whether public or private, face no choice but to impose rationing by slapping restrictions on lawn watering and other outdoor uses in order to achieve an immediate reduction.

It isn’t a simple matter to enforce indoor water conservation (what with customers lingering longer under their low-flow showerheads), whereas lawn watering and car washing are highly visible to neighbors and local authorities. Therefore, by far the most common methods of water rationing in America aim at outdoor use.

Mandatory restrictions reduced water use, whereas voluntary restrictions were of little value;

When Los Angeles was hit with a rash of major water-main blowouts in the summer of 2009—some of which sent geysers several stories into the air and one of which opened a sinkhole that half-swallowed a responding fire truck—officials who tried to identify a cause were initially stumped. Then they realized that the incidence of line breaks had risen immediately following the initiation of a citywide water rationing mandate. Faced with severe drought, the city had, for the first time ever, limited lawn watering to only two days per week. But the schedule did not involve rotating days; everyone was supposed to use water for lawns only on Mondays and Thursdays. Experts suspected (and a report six months later confirmed) that sudden pressure drops in aging pipes, caused when sprinklers came on across the city on Mondays and Thursdays, followed by pressure spikes when the sprinklers were turned off, caused many of the blowouts.

raising the price of water to match the cost of providing it is more cost-effective than non-price approaches such as installation of low-flow fixtures or imposition of lawn-watering restrictions.

raising prices is always politically unpopular, while restricting outdoor water use during a drought creates a sense of common adversity and shared burden (and people are more likely to assign the blame to natural causes rather than public officials). Therefore, “water demand management through non-price techniques is the overwhelmingly dominant paradigm in the United States,” the report concludes.

the most effective conservation messages during droughts in Australia were ones alerting consumers to the fact that the water level was dropping in the reservoir that supplied the target area.

When information on reservoir level was provided by electronic roadside signs, people responded to alarming drops in the reservoir by reducing their consumption.

Around the world, 1.8 billion more people have safe drinking water today than had it in 1990.

almost a billion people lack adequate access to water; more than 60 percent of those people live in sub-Saharan Africa or South Asia.

In the city’s wealthier neighborhoods, which receive up to ten hours of water supply each day, there are few big domestic guzzlers. Thirty percent of Mumbai homes exceed the government’s goal of 26 gallons per person daily, but only 7 percent get more than 37 gallons. In comparison, per-capita domestic consumption of publicly supplied water in the United States is about 100 gallons daily.

Mumbai’s municipal government has made plans to more than double its water supply by 2021. That may bring some relief to people in Kadam Chawl, who get only three to seven gallons of water per person, but it also will require construction of new dams that will submerge tens of thousands more acres and dozens of villages east of the city, driving those villagers off their land. Many will end up in Mumbai, filling their hundis each day with water piped from those new reservoirs.33

the castellum’s purpose was to rotate the supply, providing most or all of the incoming water to only one-third of the city during any given part of the day. Today, such rotation of water services is the most common method of rationing in cities with a shortage of water. A survey of 22 cities and countries in Asia, Africa, and Latin America found that water services were provided for various portions of the day rather than continuously: from less than four hours in Karachi, Pakistan, to four hours in Delhi and Chennai, India; six hours in Haiti, Honduras, and Kathmandu; three to ten hours in Dar es Salaam, Tanzania; and 17 hours in Manila.  

Rationing water in time rather than quantity is a blunt instrument, providing anything from a deficient to an ample supply to each home or business. The more generous the ration, the lower the incentive to conserve. Higher-income areas often receive more hours per day of service, and more affluent residents also have the economic means to install and fill storage tanks that would allow relief from rationing and subvert the goal of reducing consumption. Few among the poor enjoy such a buffer.

Governments around the world, including Egypt’s, learned long ago about the dangers of exposing their citizens’ daily food needs to the whims of global markets. Recognizing the existence of a right to food—and anticipating the political and social upheaval that can happen if that right is not fulfilled—many countries routinely buy and store staple grains and other foods and then ration consumers’ access to those stores at subsidized prices. Like water- and energy-rationing policies, existing public food-distribution systems are designed to provide fair, affordable access to a limited pool of resources. As we will see, no food-ration program so far has been entirely successful; nevertheless, a ration card or food stamp booklet may be all that stands between a family and a week (or even a lifetime) of hunger. And with food rationing, unlike carbon rationing, we can be guided by experience. Public provision of subsidized food rations has been pursued in countries as diverse as Argentina, Bangladesh, Brazil, Chile, China, Colombia, Cuba, Egypt, India, Iran, Iraq, Israel, Mexico, Morocco, Pakistan, the Philippines, the Soviet Union, Sri Lanka, Sudan, Thailand, Venezuela, and Zambia.  In those and other nations, we can find examples ranging from excellent to terrible, sometimes within the same country.

The frequent failure of markets acting alone to direct food to where it is most needed can be seen not only in hungry nations but in well-fed ones as well. In the United States, the share of households that suffer from food insecurity has climbed to almost one in six, according to the Department of Agriculture,

In countries rich and poor, the publicly funded monthly food ration has two faces: first, staving off widespread hunger and the societal disruption that could well arise in its absence; and second, making it possible for the private sector to pay below-subsistence wages. In the latter role, the ration can provide a subsidy to business, allow society to tolerate high unemployment and underemployment, or help undemocratic governments keep a lid on political unrest.

Many food-ration plans take the form of a public distribution system (PDS) that provides specific food items to consumers on a regular, usually monthly, basis. In the typical PDS, consumer food rations are situated at the downstream end of a network that buys up and stockpiles grain at guaranteed prices, imposes price controls and provides subsidies, and rations the stocks it provides to retailers. Today’s typical PDS parallels World War II–era food systems in that ration entitlements are adjusted according to the available supply. But most contemporary PDSs differ from wartime floor-and-ceiling rationing in that they provide only for a floor, a minimum supply. The supply of, say, subsidized wheat controlled by the government constitutes only a portion (if sometimes a large portion) of total consumption; there are usually, but not always, other supplies legally available on the open market outside the system as well, for those who can afford them.

India, for example, normally maintains national stocks of between 60 and 70 million metric tons. To do that, the government buys up about one-third of the nation’s crop each year—enough to fill a five-thousand-mile-long train of hopper cars stretching from Delhi to Casablanca.

Widespread food insecurity is a risk few governments are willing to run, so few PDSs have been completely eliminated. The usual compromise has been to “target” food assistance to households in greatest need. Attempts to replace PDSs with cash payments have typically failed; for many, apparently, money is not an adequate substitute for an ensured ration of food that can be touched, smelled, and tasted.

India and Egypt have operated PDSs of staggering sizes for decades, ones that continue to evolve, while Iraq and Cuba have run comprehensive rationing systems to deal with absolute scarcity.

in 1997, with the liberalization of the Indian economy, the PDS was narrowed to target primarily the low end of the income scale. Two types of ration cards were created, one for “above poverty line” (APL) and one for “below poverty line” (BPL) households.

in and around Mumbai, as in most places, it is rationed kerosene for cooking that is in greatest demand: two liters can be bought on the ration for thirty rupees, whereas the market price is eighty. When kerosene is in stock, word gets around Diwa fast and endless queues form. A longtime customer named Vijaya says that once her mother stood in line at the Diwa shop from noon until four o’clock without reaching the head of the queue; at that point, Vijaya took over her spot and waited another four hours before finally getting the fuel.15

The government buys, at a fixed price, every kilogram of grain that Indian farmers offer to sell to it, but before hauling their harvest to the government market, farmers sell as much of their higher-quality grain as possible on the private market, where they can get a better price. The government gets what’s left, and PDS customers who cannot afford to buy most or all of their food from private markets often get stuck with inferior rice and wheat. Likewise, in some areas the vegetable oil is usually low-quality palm oil, which, customers say, they would rather use in ceremonial lamps than in their food. Offering low-quality foods saves the government some money, but it’s also a means of informal targeting; it discourages middle-class families, who choose to buy better food on the open market instead.

Targeting errors have become so serious that by 2010, 50% to 75% of all families living below the poverty line in the most poverty-stricken states had no subsidized ration card. And India is not alone. In virtually every country that has ended universal food rations in favor of targeting, the rate of errors of exclusion has increased.

Between 1998 and 2005, per-capita calorie consumption decreased for all income groups in India; for the poor, this was probably due to stagnant incomes and rising food prices. Over 36% of adult Indian women are now underweight, one of the highest rates in the world. About 70% of the grain that could be used to feed the poor is lost, some to spoilage but even more diverted illegally into the cash market for profit. Moneylenders often hold a client’s ration card until the debt is paid. In West Bengal, people grew so desperate that they looted shops, ran the owners out of their homes, set fire to barrels of kerosene, and fought police with rods, swords, and brickbats.

Studies have shown that in-kind food aid feeds people better than equivalent cash. In the U.S., for example, a dollar of benefits from the SNAP program (formerly known as Food Stamps) increases nutrient consumption by two to ten times as much as a dollar in cash.

Food rations are as old as civilization. In Mesopotamia five thousand years ago, the families of semi-free laborers received rations amounting to about 60 quarts of barley and 2 to 5 quarts of cooking oil a month, plus four pounds of wool a year, and occasionally wheat, flour, bread, fish, dates, peas, and cloth.

Ancient Egypt was also fueled by public distribution of food rations, and today the government manages one of the most comprehensive food subsidy and ration systems in the world. On almost any street corner in Cairo one can buy a falafel sandwich for a few cents; the fava beans in the sandwich are subsidized, as are the oil, the bread, and more. There is little room to maneuver: Egypt has just one-tenth of an acre of cropland per person, one of the smallest endowments in the world, so today it is the world’s largest importer of wheat and of much of its other food as well. The government stockpiles food, the key item being wheat in the form of flour or bread, of which Egyptians on average eat a pound daily. By 2012, 63 million Egyptians had access to the ration system. One of the reasons Mubarak was thrown out in a revolution was his plan to phase out universal food subsidies.

Iraq imports 70% of its food. Citizens depend on public handouts of food, yet despite that, 15% have problems getting food, and a third of Iraqis would be in trouble if the distribution system were terminated. There are 45,000 licensed “food and flour” agents who distribute the food.

In Cuba before the 1959 revolution, most poor people suffered from malnutrition. One of Castro’s goals was to ensure that no one went hungry, so the government set about boosting the earnings of the poor, pursuing fuller employment, and enacting other reforms. Basic social services were free, including schooling, medical care, medicine, and social security, as were water, sports facilities, and public telephones; the cost of electricity, gas, and public transport was subsidized. Finally able to afford more and better food, people drove up prices, so price controls were imposed and the government took charge of distribution. But demand continued to outstrip supply, and people hoarded. Rather than make some staples available to the poor at low prices, the government instituted a ration system for all Cubans in 1962, with each household registered at a specific shop. Despite the U.S. embargo, childhood malnutrition was almost completely eradicated and public health improved through the 1980s, all this despite no improvements in food production.

When the Soviet Union collapsed in 1990, oil, fertilizers, pesticides and other products stopped coming. The government still managed to supply two-thirds of the calories people needed. Today, Cuba imports 80% of its food, some of it from China, and 70% of Cubans depend on rationed food. Raul Castro has tried to increase agricultural production through land redistribution, higher prices paid to farmers, and the legalization of some private sales. Some rations have been cut as well, so urban farming continues. Without rationing, many Cubans would face starvation.

UNITED STATES: despite 4,000 calories of food being available per person per day, the U.S. has one of the largest food-assistance programs in the world: SNAP.

As grain production falls, soils erode, and fresh water vanishes, it is likely that more countries will adopt rationing or food subsidies.

Posted in Agriculture, Rationing

The end of fracked shale oil?

Preface. Conventional crude oil production may have already peaked in 2008 at 69.5 million barrels per day (mb/d), according to the International Energy Agency (IEA 2018 p45). The U.S. Energy Information Administration puts the global peak of crude oil production later, in 2018 at 82.9 mb/d (EIA 2020), because it includes tight oil, oil sands, and deep-sea oil. It will take several years of lower oil production to be sure the peak has occurred. Regardless, world production has been on a plateau since 2005.

What has saved the world from oil decline is unconventional tight “fracked” oil, which accounted for 63% of total U.S. crude oil production in 2019 and 83% of global oil growth from 2009 to 2019. So it is a big deal if we have reached the peak of fracked oil, because that would also mark the peak of conventional plus unconventional oil, and the decline of all oil from then on.

Some key points from this Financial Times article:

  • A fracking binge in the American shale industry has permanently damaged the country’s oil and gas reserves, threatening hopes for a production recovery and US energy independence.
  • The damage was done by operators who carried out such “massive fracks” that “artificial, permanent porosity” was inadvertently created, reducing the pressure in reservoirs and therefore the available oil.
  • Wil VanLoh, chief executive of Quantum Energy Partners, a private equity firm that through its portfolio companies is the biggest US driller after ExxonMobil, said too much fracking had “sterilized a lot of the reservoir in North America”.
  • Wells have often been drilled too closely to one another, so, in VanLoh’s words, “what we’ve done for the last five years is we’ve drilled the heart out of the watermelon.”
  • Mr VanLoh goes on to say that it is physically impossible to produce more than 13 mb/d because the reservoirs are so damaged; the predictions of three to four years ago will not come true.
  • Production from the Permian, the prolific shale field of west Texas and New Mexico, peaked even before the crash this year, Mr Waterous said. At current prices, only 25% of US shale was economical.

Alice Friedemann, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer), Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report


Brower D (2020) Shale binge has spoiled US reserves, top investor warns. Financial Times.

A fracking binge in the American shale industry has permanently damaged the country’s oil and gas reserves, threatening hopes for a production recovery and US energy independence, according to one of the sector’s top investors.

Wil VanLoh, chief executive of Quantum Energy Partners, a private equity firm that through its portfolio companies is the biggest US driller after ExxonMobil, said too much fracking had “sterilised a lot of the reservoir in North America”.

“That’s the dirty secret about shale,” Mr VanLoh told the Financial Times, noting wells had often been drilled too closely to one another. “What we’ve done for the last five years is we’ve drilled the heart out of the watermelon.”

Soaring shale production in recent years took US crude output to 13m barrels a day this year and brought a rise in oil exports, allowing President Donald Trump to proclaim an era of “American energy dominance”. 

Total US oil reserves have more than doubled since the start of the century as hydraulic fracturing, or fracking, and horizontal drilling unleashed reserves previously considered out of reach.

[Line chart, millions of barrels a day: US oil production has tumbled this year]

But the pandemic-induced crash, which sent US crude prices to less than zero in April, has devastated a shale patch that was already out of favor with Wall Street for its failure to generate profits, even while it made the country the world’s biggest oil and gas producer. 

The number of operating rigs has collapsed by more than 60% since the start of the year. US output is now about 11m barrels a day, according to the US Energy Information Administration, or 15% less than the peak.

[Line chart, number of rigs: US drilling activity has plummeted]

“Even if we wanted to, I don’t think we could get much above 13m” barrels a day, Mr VanLoh said. “I don’t think it’s physically possible, because we’ve messed up so much reservoir. I would argue that what the US was touting three or four years ago, in theoretical deliverability, is nowhere close to what we think it is now.”

He said operators had carried out “massive fracks” that created “artificial, permanent porosity”, inadvertently reducing the pressure in reservoirs and therefore the available oil. 

The comments will cause alarm in the shale patch, given the crucial role of investors such as QEP in financing the onshore American oil business.

The Houston-based investor has assets under management of about $11.2bn, according to data provider PitchBook, and is one of the few private equity groups still focused on shale.

Private companies account for about 30% of US oil production excluding Alaska and Hawaii, about 2.7m b/d, according to consultancy Rystad Energy.

Other private equity investors have warned that the shale growth story has ended, despite an oil-price recovery in recent months to about $40 a barrel.

“They were making lousy returns at $65 a barrel,” said Adam Waterous, head of Waterous Energy Fund. “You need at least north of $70 before you start achieving a cost-of-capital return in the US oil business.”

Production from the Permian, the prolific shale field of west Texas and New Mexico, peaked even before the crash this year, Mr Waterous said. At current prices, only 25 per cent of US shale was economical, he added.

Analysts also say US oil output will struggle to recover its previous heights. Artem Abramov, head of shale research at Rystad, said production would remain between 11.5m b/d and 12m b/d at $40 a barrel. S&P Global Platts forecasts a decline to 10m b/d by mid-2021. 

But the crash could create opportunities for QEP in the short term, Mr VanLoh said, especially if prices recovered.

While listed producers had mostly sworn off production growth, some QEP-backed companies, such as DoublePoint Energy — which played host to Mr Trump during the president’s July fundraising visit to Midland, Texas — were increasing drilling activity. It says its Permian acreage can still be profitable at current prices.

QEP’s portfolio companies would increase output this year by about 25 per cent, to 500,000 barrels of oil and gas a day, Mr VanLoh said. 

“The next five years may be the best five years we’ve ever had for hydrocarbon investing,” he said. 

But he is also adjusting his company’s strategy to reflect investors’ growing disquiet with fossil fuels. QEP’s new 10-year fund, VIII, would be launched in early November, he said, with $1bn of about $5.6bn of total capital commitment reserved for “energy transition” investments. 

The company would soon appoint someone from outside the oil industry to enforce better environment, social and governance performance at QEP’s companies, Mr VanLoh added. 

He said they would have to improve ESG “because ultimately you’re not going to get capital from us if you don’t . . . And we won’t be able to get capital from our limited partners if you don’t.”

A more efficient US shale sector would re-emerge from the crash, Mr VanLoh said, but it would be smaller and require a reduced workforce. He is now advising his friends’ children not to pursue a career in oil:

“I tell all of them — honestly, it’s a very risky bet and, if I were you, I would not go into it today.”

A comment from a reader:

“A technical point: shale has a lot of porosity as a function of its many tiny grain particles, but no permeability, as the pore necks are too small to allow flow. All reservoirs are under pressure due to the burial of the reservoir. Fracking creates instantaneous fissures into which hydrocarbons are spontaneously released, and the pressure keeps flow going. But that pressure flags quickly; hence the rapid decline of shale wells, typically to 50% of initial flow after six months. The bigger the frack, the higher the first release in the perimeter of the well. Refracking will not do anything, as the matrix is already destroyed. Poor practice does the reservoir in, but the frackers needed to keep it up to maintain production. The fag end of the oil business.”
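The decline figure in the comment can be made concrete with a toy decline curve. This is a minimal sketch, assuming simple exponential decline with the stated six-month half-life; real shale wells are usually fitted with hyperbolic decline curves, and the initial flow rate here is an invented number:

```python
import math

def production(q0, t_months, half_life=6.0):
    """Flow rate after t_months, assuming exponential decline
    with a fixed half-life (in months)."""
    decline = math.log(2) / half_life  # nominal decline rate, 1/month
    return q0 * math.exp(-decline * t_months)

q0 = 1000.0  # assumed initial flow, barrels per day
for t in (6, 12, 24):
    print(f"month {t:2d}: {production(q0, t):6.1f} b/d")
```

Under these assumptions the well is at half its initial flow at month 6 and under a sixteenth of it by month 24, which is why continuous drilling of new wells is needed just to hold output flat.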


EIA. 2020. International Energy Statistics. Petroleum and other liquids. Data Options. U.S. Energy Information Administration. Select crude oil including lease condensate to see data past 2017.

IEA. 2018. International Energy Agency World Energy Outlook 2018, figures 1.19 and 3.13. International Energy Agency.

Posted in Oil & Gas Fracked, Peak Oil

Forests make the wind that carries the rain across continents

Preface. This is a controversial theory that, if true, “could help explain why, despite their distance from the oceans, the remote interiors of forested continents receive as much rain as the coasts—and why the interiors of unforested continents tend to be arid. It also implies that forests from the Russian taiga to the Amazon rainforest don’t just grow where the weather is right. They also make the weather.

This biotic pump theory has faced a headwind of criticism, especially from climate modelers, some of whom say its effects are negligible and dismiss the idea completely. The dispute has made Makarieva an outsider: a theoretical physicist in a world of modelers, a Russian in a field led by Western scientists, and a woman in a field dominated by men.”

Keep in mind that the idea that forests could generate rain wasn’t accepted until 1979. The biotic pump theory was first proposed in 2007; it has been neither proven nor disproved, and it is hard to test.

Their theory may also explain why cyclones rarely form in the South Atlantic Ocean: The Amazon and Congo rainforests between them draw so much moisture away that there is too little left to fuel hurricanes.



Pearce F. 2020. Weather makers. Forests supply the world with rain. A controversial Russian theory claims they also make wind. Science 368: 1302-5.

For more than a decade, Makarieva has championed a theory, developed with Victor Gorshkov, her mentor and colleague at the Petersburg Nuclear Physics Institute (PNPI), on how Russia’s boreal forests, the largest expanse of trees on Earth, regulate the climate of northern Asia. It is simple physics with far-reaching consequences, describing how water vapor exhaled by trees drives winds: winds that cross the continent, taking moist air from Europe, through Siberia, and on into Mongolia and China; winds that deliver rains that keep the giant rivers of eastern Siberia flowing; winds that water China’s northern plain, the breadbasket of the most populous nation on Earth.

With their ability to soak up carbon dioxide and breathe out oxygen, the world’s great forests are often referred to as the planet’s lungs. But Makarieva and Gorshkov, who died last year, say they are its beating heart, too. “Forests are complex self-sustaining rainmaking systems, and the major driver of atmospheric circulation on Earth,” Makarieva says. They recycle vast amounts of moisture into the air and, in the process, also whip up winds that pump that water around the world. The first part of that idea—forests as rainmakers—originated with other scientists and is increasingly appreciated by water resource managers in a world of rampant deforestation. But the second part, a theory Makarieva calls the biotic pump, is far more controversial.

Many meteorology textbooks still teach a caricature of the water cycle, with ocean evaporation responsible for most of the atmospheric moisture that condenses in clouds and falls as rain. The picture ignores the role of vegetation and, in particular, trees, which act like giant water fountains. Their roots capture water from the soil for photosynthesis, and microscopic pores in leaves release unused water as vapor into the air. The process, the arboreal equivalent of sweating, is known as transpiration. In this way, a single mature tree can release hundreds of liters of water a day. With its foliage offering abundant surface area for the exchange, a forest can often deliver more moisture to the air than evaporation from a water body of the same size.

The importance of this recycled moisture for nourishing rains was largely disregarded until 1979, when Brazilian meteorologist Eneas Salati reported studies of the isotopic composition of rainwater sampled from the Amazon Basin. Water recycled by transpiration contains more molecules with the heavy oxygen-18 isotope than water evaporated from the ocean. Salati used this fact to show that half of the rainfall over the Amazon came from the transpiration of the forest itself.
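Salati’s inference rests on a two-endmember isotope mass balance: rainwater is treated as a mixture of ocean-evaporated and forest-transpired water, each with a characteristic oxygen-18 signature. The sketch below shows the form of that calculation; the delta-18O values are invented for illustration and are not Salati’s measurements:

```python
def transpired_fraction(d_rain, d_ocean, d_transp):
    """Solve the mixing equation d_rain = f*d_transp + (1 - f)*d_ocean
    for f, the fraction of rainfall recycled by transpiration."""
    return (d_rain - d_ocean) / (d_transp - d_ocean)

# Illustrative delta-18O values in per mil; assumed, not Salati's data.
f = transpired_fraction(d_rain=-7.0, d_ocean=-9.0, d_transp=-5.0)
print(f"fraction from transpiration: {f:.2f}")
```

With these made-up endmembers, rain falling exactly halfway between the two signatures yields a transpired fraction of 0.5, the kind of result Salati reported for the Amazon.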

Salati and others surmised that a low-level jet of air flowing above the forest carried much of the transpired moisture, and dubbed it a “flying river.” The Amazon flying river is now reckoned to carry as much water as the giant terrestrial river below it, says Antonio Nobre, a climate researcher at Brazil’s National Institute for Space Research.

For some years, flying rivers were thought to be limited to the Amazon. In the 1990s, Hubert Savenije, a hydrologist at the Delft University of Technology, began to study moisture recycling in West Africa. Using a hydrological model based on weather data, he found that, as one moved inland from the coast, the proportion of the rainfall that came from forests grew, reaching 90% in the interior. The finding helped explain why the interior Sahel region became dryer as coastal forests disappeared over the past half-century.

In 2010, van der Ent and his colleagues reported the model’s conclusion: Globally, 40% of all precipitation comes from the land rather than the ocean. Often it is more. The Amazon’s flying river provides 70% of the rain falling in the Río de la Plata Basin, which stretches across southeastern South America. Van der Ent was most surprised to find that China gets 80% of its water from the west, mostly Atlantic moisture recycled by the boreal forests of Scandinavia and Russia. The journey involves several stages—cycles of transpiration followed by downwind rain and subsequent transpiration—and takes 6 months or more. “It contradicted previous knowledge that you learn in high school,” he says. “China is next to an ocean, the Pacific, yet most of its rainfall is moisture recycled from land far to the west.”

If the biotic pump theory is correct, forests supply not just the moisture but also the winds that carry it. In 2007, in Hydrology and Earth System Sciences, Makarieva and Gorshkov first outlined their vision for the biotic pump. It was provocative from the outset because it contradicted a long-standing tenet of meteorology: that winds are driven largely by the differential heating of the atmosphere. When warm air rises, it lowers the air pressure below it, in effect creating space at the surface into which air moves. In summer, for example, land surfaces tend to heat faster and draw in moist breezes from the cooler ocean.

Makarieva and Gorshkov argued that a second process can sometimes dominate. When water vapor from forests condenses into clouds, a gas becomes a liquid that occupies less volume. That reduces air pressure, and draws in air horizontally from areas with less condensation. In practice, it means condensation above coastal forests turbocharges sea breezes, sucking moist air inland where it will eventually condense and fall as rain. If the forests continue inland, the cycle can continue, maintaining moist winds for thousands of kilometers.
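The condensation argument can be put in back-of-the-envelope form: at fixed volume and temperature, the ideal gas law makes pressure proportional to the number of gas molecules, so when a share of the water vapor condenses, those molecules leave the gas phase and the pressure drops. The numbers below (2% water vapor by moles, half of it condensing) are illustrative assumptions, not values from the paper:

```python
def pressure_after_condensation(p_total, vapor_mole_fraction, condensed_share):
    """Total pressure after condensation at fixed V and T.
    Pressure is proportional to moles of gas (ideal gas law), and
    condensation removes vapor_mole_fraction * condensed_share
    of the gas molecules."""
    removed = vapor_mole_fraction * condensed_share
    return p_total * (1.0 - removed)

p0 = 101325.0  # Pa, assumed surface pressure
p1 = pressure_after_condensation(p0, vapor_mole_fraction=0.02, condensed_share=0.5)
print(f"pressure drop from condensation: {p0 - p1:.0f} Pa")
```

Even a drop of this rough magnitude, about 1% of surface pressure, is comparable to the pressure differences that drive ordinary winds, which is why the effect could matter if sustained over a forested region.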

The theory inverts traditional thinking: It is not atmospheric circulation that drives the hydrological cycle, but the hydrological cycle that drives the mass circulation of air.

Sheil, who became a supporter of the theory more than a decade ago, thinks of it as an embellishment of the flying river idea. “They are not mutually exclusive,” he says. “The pump offers an explanation of the power of the rivers.” He says the biotic pump could explain the “cold Amazon paradox.” From January to June, when the Amazon Basin is colder than the ocean, strong winds blow from the Atlantic to the Amazon—the opposite of what would be expected if they resulted from differential heating.

Even those who doubt the theory agree that forest loss can have far-reaching climatic consequences. Many scientists have argued that deforestation thousands of years ago was to blame for desertification in the Australian Outback and West Africa. The fear is that future deforestation could dry up other regions, for example, tipping parts of the Amazon rainforest to savanna. Agricultural regions of China, the African Sahel, and the Argentine Pampas are also at risk.

In 2018, Keys and his colleagues used a model, similar to van der Ent’s, to track the sources of rainfall for 29 global megacities. He found that 19 were highly dependent on distant forests for much of their water supply, including Karachi, Pakistan; Wuhan and Shanghai, China; and New Delhi and Kolkata, India. “Even small changes in precipitation arising from upwind land-use change could have big impacts on the fragility of urban water supplies,” he says.

Some modeling even suggests that by removing a moisture source, deforestation could alter weather patterns beyond the paths of flying rivers. Just as El Niño, a shift in currents and winds in the tropical Pacific Ocean, is known to influence weather in faraway places through “teleconnections,” so, too, could Amazon deforestation diminish rainfall in the U.S. Midwest and snowpack in the Sierra Nevada.

Another example: a study showed that as much as 40% of the total rainfall in the Ethiopian highlands, the main source of the Nile, is provided by moisture recycled from the forests of the Congo Basin. Egypt, Sudan, and Ethiopia are negotiating a long-overdue deal on sharing the waters of the Nile. But such an agreement would be worthless if deforestation in the Congo Basin, far from those three nations, dries up the moisture source.

If the theory is true, water resource managers in the Midwest, the Sierra Nevada, and the Middle East need to care as much about deforestation in the faraway Amazon and Congo basins as about their local water.

The biotic pump would raise the stakes even further, with its suggestion that forest loss alters not just moisture sources, but also wind patterns. The theory, if correct, would have crucial implications for planetary air circulation patterns, especially those that take moist air inland to continental interiors.

Posted in Climate Change, Deforestation

How to make biomass last longer

Preface. Before fossil fuels, societies made their forests last longer than we do today. Felling tall trees and killing them was rare, reserved for special needs such as building bridges or ships. For firewood and other everyday needs, trees were cut in a way that encouraged new shoots to sprout, which could be harvested every few years. Coppiced forests were more biodiverse than today’s plantations, since many kinds of trees were planted, each suited to different purposes. Coppiced firewood had to be harvested within 9-18 miles (15-30 km) over land, since the wood was hauled on carts over bad roads; beyond 18 miles, the energy content of the wood was less than the energy of the pasture needed to feed the horse.



Kris De Decker. 2020. How to make biomass energy sustainable again. Low-tech magazine.

Nowadays, most wood is harvested by killing trees. Before the Industrial Revolution, a lot of wood was harvested from living trees, which were coppiced. The principle of coppicing is based on the natural ability of many broad-leaved species to regrow from damaged stems or roots – damage caused by fire, wind, snow, animals, pathogens, or (on slopes) falling rocks. Coppice management involves the cutting down of trees close to ground level, after which the base – called the “stool” – develops several new shoots, resulting in a multi-stemmed tree.


A coppice stool. Image: Geert Van der Linden.


A recently coppiced patch of oak forest. Image: Henk vD. (CC BY-SA 3.0)


Coppice stools in Surrey, England. Image: Martinvl (CC BY-SA 4.0)

When we think of a forest or a tree plantation, we imagine it as a landscape stacked with tall trees. However, until the beginning of the twentieth century, at least half of the forests in Europe were coppiced, giving them a more bush-like appearance. [1] The coppicing of trees can be dated back to the stone age, when people built pile dwellings and trackways crossing prehistoric fenlands using thousands of branches of equal size – a feat that can only be accomplished by coppicing. [2]


The approximate historical range of coppice forests in the Czech Republic (above, in red) and in Spain (below, in blue). Source: “Coppice forests in Europe”, see [1]

Ever since then, the technique formed the standard approach to wood production – not just in Europe but almost all over the world. Coppicing expanded greatly during the eighteenth and nineteenth centuries, when population growth and the rise of industrial activity (glass, iron, tile and lime manufacturing) put increasing pressure on wood reserves.

Short Rotation Cycles

Because the young shoots of a coppiced tree can exploit an already well-developed root system, a coppiced tree produces wood faster than a tall tree. Or, to be more precise: although its photosynthetic efficiency is the same, a tall tree provides more biomass below ground (in the roots) while a coppiced tree produces more biomass above ground (in the shoots) – which is clearly more practical for harvesting. [3] Partly because of this, coppicing was based on short rotation cycles, often of around two to four years, although both yearly rotations and rotations up to 12 years or longer also occurred.


Coppice stools with different rotation cycles. Images: Geert Van der Linden. 

Because of the short rotation cycles, a coppice forest was a very quick, regular and reliable supplier of firewood. Often, it was cut up into a number of equal compartments that corresponded to the number of years in the planned rotation. For example, if the shoots were harvested every three years, the forest was divided into three parts, and one of these was coppiced each year. Short rotation cycles also meant that it took only a few years before the carbon released by the burning of the wood was compensated by the carbon that was absorbed by new growth, making a coppice forest truly carbon neutral. In very short rotation cycles, new growth could even be ready for harvest by the time the old growth wood had dried enough to be burned.
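The compartment scheme described above amounts to a simple rotation schedule. A minimal sketch, using the three-year rotation from the text (the 0-indexed compartment numbering is just a convention for the example):

```python
def coppice_schedule(rotation_years, n_years):
    """Which compartment (0-indexed) is cut in each year, for a
    forest divided into rotation_years equal compartments, one
    compartment coppiced per year in rotation."""
    return [year % rotation_years for year in range(n_years)]

# Three-year rotation: the forest is split into three compartments,
# and each stool is harvested once every three years.
print(coppice_schedule(3, 6))
```

The point of the scheme is steady supply: every year exactly one compartment is ready to cut, and every stool gets the same regrowth interval.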

In some tree species, the stump sprouting ability decreases with age. After several rotations, these trees were either harvested in their entirety and replaced by new trees, or converted into a coppice with a longer rotation. Other tree species resprout well from stumps of all ages, and can provide shoots for centuries, especially on rich soils with a good water supply. Surviving coppice stools can be more than 1,000 years old.


A coppice can be called a “coppice forest” or a “coppice plantation”, but in reality it was neither a forest nor a plantation – perhaps something in between. Although managed by humans, coppice forests were not environmentally destructive; on the contrary, harvesting wood from living trees instead of killing them is beneficial for the life forms that depend on them. Coppice forests can have a richer biodiversity than unmanaged forests, because they always contain areas with different stages of light and growth. None of this is true of industrial wood plantations, which support little or no plant and animal life, and which have longer rotation cycles (of at least twenty years).


Coppice stools in the Netherlands. Image: K. Vliet (CC BY-SA 4.0)


Sweet chestnut coppice at Flexham Park, Sussex, England. Image: Charlesdrakew, public domain.

Our forebears also cut down tall, standing trees with large-diameter stems – just not for firewood. Large trees were only “killed” when large timber was required, for example for the construction of ships, buildings, bridges, and windmills. [4] Coppice forests could contain tall trees (a “coppice-with-standards”), which were left to grow for decades while the surrounding trees were regularly pruned. However, even these standing trees could be partly coppiced, for example by harvesting their side branches while they were alive (shredding).

Multipurpose Trees

The archetypical wood plantation promoted by the industrial world involves regularly spaced rows of trees in even-aged, monocultural stands, providing a single output – timber for construction, pulpwood for paper production, or fuelwood for power plants. In contrast, trees in pre-industrial coppice forests had multiple purposes. They provided firewood, but also construction materials and animal fodder.

The targeted wood dimensions, determined by the use of the shoots, set the rotation period of the coppice. Because not every type of wood was suited for every type of use, coppiced forests often consisted of a variety of tree species at different ages. Several age classes of stems could even be rotated on the same coppice stool (“selection coppice”), and the rotations could evolve over time according to the needs and priorities of the economic activities.


A small woodland with a diverse mix of coppiced, pollarded and standard trees. Image: Geert Van der Linden.  

Coppiced wood was used to build almost anything that was needed in a community. [5] For example, young willow shoots, which are very flexible, were braided into baskets and crates, while sweet chestnut prunings, which do not expand or shrink after drying, were used to make all kinds of barrels. Ash and goat willow, which yield straight and sturdy wood, provided the material for making the handles of brooms, axes, shovels, rakes and other tools.

Young hazel shoots were split along the entire length, braided between the wooden beams of buildings, and then sealed with loam and cow manure – the so-called wattle-and-daub construction. Hazel shoots also kept thatched roofs together. Alder and willow, which have almost limitless life expectancy under water, were used as foundation piles and river bank reinforcements. The construction wood that was taken out of a coppice forest did not diminish its energy supply: because the artefacts were often used locally, at the end of their lives they could still be burned as firewood.


Harvesting leaf fodder in Leikanger kommune, Norway. Image: Leif Hauge. Source: [19]

Coppice forests also supplied food. On the one hand, they provided people with fruits, berries, truffles, nuts, mushrooms, herbs, honey, and game. On the other hand, they were an important source of winter fodder for farm animals. Before the Industrial Revolution, many sheep and goats were fed with so-called “leaf fodder” or “leaf hay” – leaves with or without twigs. [6]

Elm and ash were among the most nutritious species, but sheep also got birch, hazel, linden, bird cherry and even oak, while goats were also fed with alder. In mountainous regions, horses, cattle, pigs and silk worms could be given leaf hay too. Leaf fodder was grown in rotations of three to six years, when the branches provided the highest ratio of leaves to wood. When the leaves were eaten by the animals, the wood could still be burned.

Pollards & Hedgerows

Coppice stools are vulnerable to grazing animals, especially when the shoots are young. Therefore, coppice forests were usually protected against animals by building a ditch, fence or hedge around them. In contrast, pollarding allowed animals and trees to be mixed on the same land. Pollarded trees were pruned like coppices, but to a height of at least two metres to keep the young shoots out of reach of grazing animals.


Pollarded trees in Segovia, Spain. Image: Ecologistas en Acción.

Wooded meadows and wood pastures – mosaics of pasture and forest – combined the grazing of animals with the production of fodder, firewood and/or construction wood from pollarded trees. “Pannage” or “mast feeding” was the method of sending pigs into pollarded oak forests during autumn, where they could feed on fallen acorns. The system formed the mainstay of pork production in Europe for centuries. [7] The “meadow orchard” or “grazed orchard” combined fruit cultivation and grazing — pollarded fruit trees offered shade to the animals, while the animals could not reach the fruit but fertilised the trees.


Forest or pasture? Something in between. A “dehesa” (pig forest farm) in Spain. Image by Basotxerri (CC BY-SA 4.0).


Cattle grazes among pollarded trees in Huelva, Spain. (CC BY-SA 2.5)


A meadow orchard surrounded by a living hedge in Rijkhoven, Belgium. Image: Geert Van der Linden.

While agriculture and forestry are now strictly separated activities, in earlier times the farm was the forest and vice versa. It would make a lot of sense to bring them back together, because agriculture and livestock production – not wood production – are the main drivers of deforestation. If trees provide animal fodder, meat and dairy production need not lead to deforestation; if crops can be grown in fields with trees, neither need agriculture. Forest farms would also improve animal welfare, soil fertility and erosion control.

Line Plantings

Extensive plantations could consist of coppiced or pollarded trees, and were often managed as a commons. However, coppicing and pollarding were not techniques seen only in large-scale forest management. Small woodlands in between fields or next to a rural house and managed by an individual household would be coppiced or pollarded. A lot of wood was also grown as line plantings around farmyards, fields and meadows, near buildings, and along paths, roads and waterways. Here, lopped trees and shrubs could also appear in the form of hedgerows, thickly planted hedges. [8]


Hedge landscape in Normandy, France, around 1940. Image: W Wolny, public domain.


Line plantings in Flanders, Belgium. Detail from the Ferraris map, 1771-78. 

Although line plantings are usually associated with the use of hedgerows in England, they were common in large parts of Europe. In 1804, English historian Abbé Mann expressed his surprise when he wrote about his trip to Flanders (today part of Belgium): “All fields are enclosed with hedges, and thick set with trees, insomuch that the whole face of the country, seen from a little height, seems one continued wood”. Typical for the region was the large number of pollarded trees. [8]

Like coppice forests, line plantings were diverse and provided people with firewood, construction materials and leaf fodder. However, unlike coppice forests, they had extra functions because of their specific location. [9] One of these was plot separation: keeping farm animals in, and keeping wild animals or cattle grazing on common lands out. Various techniques existed to make hedgerows impenetrable, even for small animals such as rabbits. Around meadows, hedgerows or rows of very closely planted pollarded trees (“pollarded tree hedges”) could stop large animals such as cows. If willow wicker was braided between them, such a line planting could also keep small animals out. [8]


Detail of a yew hedge. Image: Geert Van der Linden. 


Hedgerow. Image: Geert Van der Linden. 


Pollarded tree hedge in Nieuwekerken, Belgium. Image: Geert Van der Linden.


Coppice stools in a pasture. Image: Jan Bastiaens.

Trees and line plantings also offered protection against the weather. Line plantings protected fields, orchards and vegetable gardens against the wind, which could erode the soil and damage the crops. In warmer climates, trees could shield crops from the sun and fertilize the soil. Pollarded lime trees, which have very dense foliage, were often planted right next to wattle-and-daub buildings in order to protect them from wind, rain and sun. [10]

Dunghills were protected by one or more trees, preventing the valuable resource from drying out in the sun or wind. In the yard of a watermill, the wooden water wheel was shielded by a tree to prevent the wood from shrinking or expanding in times of drought or inactivity. [8]


A pollarded tree protects a water wheel. Image: Geert Van der Linden. 


Pollarded lime trees protect a farm building in Nederbrakel, Belgium. Image: Geert Van der Linden.

Location Matters

Along paths, roads and waterways, line plantings had many of the same location-specific functions as on farms. Cattle and pigs were herded along dedicated droveways lined with hedgerows, coppices and/or pollards. When the railroads appeared, line plantings prevented collisions with animals. They protected road travellers from the weather, and marked the route so that people and animals would not stray from the road in a snowy landscape. They prevented soil erosion along riverbanks and hollow roads.

Most functions of line plantings could also be fulfilled by dead wood fences, which can be moved more easily than hedgerows, take up less space, don't compete with crops for light and nutrients, and can be put up in a short time. [11] However, in times and places where wood was scarce, a living hedge was often preferred (and sometimes mandated) because it was a continuous wood producer, while a dead wood fence was a continuous wood consumer. A dead wood fence may save space and time on the spot, but it implies that the wood for its construction and maintenance is grown and harvested elsewhere in the surroundings.


Pollarded tree hedge in Belgium. Image: Geert Van der Linden.

Local use of wood resources was maximised. For example, the tree planted next to the waterwheel was not just any tree. It was red dogwood or elm, the wood best suited for constructing the interior gearwork of the mill. When a new part was needed for repairs, the wood could be harvested right next to the mill. Likewise, line plantings along dirt roads were used for the maintenance of those roads. The shoots were tied together in bundles and used as a foundation or to fill up holes. Because the trees were coppiced or pollarded and not cut down, no function was ever at the expense of another.

Nowadays, when people advocate for the planting of trees, targets are set in terms of forested area or the number of trees, and little attention is given to their location – which could even be on the other side of the world. However, as these examples show, planting trees nearby and in the right location makes far better use of their potential.

Shaped by Limits

Coppicing has largely disappeared in industrial societies, although pollarded trees can still be found along streets and in parks. Their prunings, which once sustained entire communities, are now considered waste products. If it worked so well, why was coppicing abandoned as a source of energy, materials and food? The answer is short: fossil fuels. Our forebears relied on coppice because they had no access to fossil fuels, and we don’t rely on coppice because we have.

Our forebears relied on coppice because they had no access to fossil fuels, and we don’t rely on coppice because we have

Most obviously, fossil fuels have replaced wood as a source of energy and materials. Coal, gas and oil took the place of firewood for cooking, space heating, water heating and industrial processes based on thermal energy. Metal, concrete and brick – materials that had been around for many centuries – only became widespread alternatives to wood after they could be made with fossil fuels, which also brought us plastics. Artificial fertilizers – products of fossil fuels – boosted the supply and the global trade of animal fodder, making leaf fodder obsolete. The mechanization of agriculture – driven by fossil fuels – led to farming on much larger plots along with the elimination of trees and line plantings on farms.

Less obvious, but at least as important, is that fossil fuels have transformed forestry itself. Nowadays, the harvesting, processing and transporting of wood is heavily supported by the use of fossil fuels, while in earlier times these activities relied entirely on human and animal power – which themselves get their fuel from biomass. It was the limitations of these power sources that created and shaped coppice management all over the world.


Harvesting wood from pollarded trees in Belgium, 1947. Credit: Zeylemaker, Co., Nationaal Archief (CC0)


Transporting firewood in the Basque Country. Source: Notes on pollards: best practices’ guide for pollarding. Gipuzkoako Foru Aldundia-Diputación Foral de Gipuzkoa, 2014.

Wood was harvested and processed by hand, using simple tools such as knives, machetes, billhooks, axes and (later) saws. Because the labor requirements of harvesting trees by hand increase with stem diameter, it was cheaper and more convenient to harvest many small branches instead of cutting down a few large trees. Furthermore, there was no need to split coppiced wood after it was harvested. Shoots were cut to a length of around one metre, and tied together in “faggots”, which were an easy size to handle manually.

It was the limitations of human and animal power that created and shaped coppice management all over the world

To transport firewood, our forebears relied on animal-drawn carts over often very bad roads. This meant that, unless it could be transported over water, firewood had to be harvested within a radius of at most 15-30 km from the place where it was used. [12] Beyond those distances, the animal power required for transporting the firewood was larger than its energy content, and it would have made more sense to grow firewood on the pasture that fed the draft animal. [13] There were some exceptions to this rule. Some industrial activities, like iron and potash production, could be moved to more distant forests – transporting iron or potash was more economical than transporting the firewood required for their production. In general, however, coppice forests (and of course also line plantings) were located in the immediate vicinity of the settlement where the wood was used.
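The land-based trade-off behind this transport radius (note [13]) can be sketched as a back-of-the-envelope calculation: a haul becomes pointless once the pasture needed to feed the draft animal for the trip exceeds the woodland needed to grow the load itself. Every parameter value below is an illustrative assumption, not historical data.

```python
# Illustrative sketch of the firewood transport radius argument.
# All numbers are hypothetical placeholders chosen to show the
# structure of the trade-off, not to reproduce historical figures.

HAY_YIELD_KG_PER_HA = 3000.0    # assumed annual hay yield of pasture
WOOD_YIELD_KG_PER_HA = 3000.0   # assumed annual yield of coppiced woodland
HAY_PER_DAY_KG = 10.0           # assumed daily ration of one draft animal
CART_LOAD_KG = 300.0            # assumed cart payload over bad roads
KM_PER_DAY = 15.0               # assumed daily travel distance
HAULING_DAYS_PER_YEAR = 50.0    # the animal hauls wood only part of the year

def breakeven_radius_km():
    # The animal eats all year round, so its annual ration is charged
    # against the days it actually spends hauling wood.
    feed_per_hauling_day = HAY_PER_DAY_KG * 365.0 / HAULING_DAYS_PER_YEAR
    # Pasture area-years consumed per day on the road:
    pasture_per_day = feed_per_hauling_day / HAY_YIELD_KG_PER_HA
    # Woodland area-years needed to grow one cart load:
    woodland_per_load = CART_LOAD_KG / WOOD_YIELD_KG_PER_HA
    # A one-way distance d costs 2*d / KM_PER_DAY days (out and back).
    # Breakeven: pasture_per_day * (2*d / KM_PER_DAY) == woodland_per_load
    return woodland_per_load * KM_PER_DAY / (2.0 * pasture_per_day)

print(f"breakeven radius ≈ {breakeven_radius_km():.0f} km")
```

With these toy numbers the radius lands near the upper end of the 15-30 km range quoted above, but the result swings with every assumption; the sketch only shows the shape of the argument.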

In short, coppicing appeared in a context of limits. Because of its faster growth and versatile use of space, it maximized the local wood supply of a given area. Because of its use of small branches, it made manual harvesting and transporting as economical and convenient as possible.

Can Coppicing be Mechanized?

From the twentieth century onwards, harvesting was done by motor saw, and since the 1980s, wood has increasingly been harvested by powerful vehicles that can fell entire trees and cut them up on the spot in a matter of minutes. Fossil fuels have also brought better transport infrastructure, which has unlocked wood reserves that were inaccessible in earlier times. Consequently, firewood can now be grown on one side of the planet and consumed on the other.

The use of fossil fuels adds carbon emissions to what used to be a completely carbon-neutral activity, but much more important is that it has pushed wood production to a larger – unsustainable – scale. [14] Fossil-fueled transportation has destroyed the connection between supply and demand that governed local forestry. If the wood supply is limited, a community has no choice but to make sure that the wood harvest rate and the wood renewal rate are in balance. Otherwise, it risks running out of fuelwood, craft wood and animal fodder, and being abandoned.


Mechanically harvested willow coppice plantation: shortly after coppicing (right), three-year-old growth (left). Image: Lignovis GmbH (CC BY-SA 4.0).

Likewise, fully mechanized harvesting has pushed forestry to a scale that is incompatible with sustainable forest management. Our forebears did not cut down large trees for firewood, because it was not economical. Today, the forest industry does exactly that because mechanization makes it the most profitable thing to do. Compared to industrial forestry, where one worker can harvest up to 60 m3 of wood per hour, coppicing is extremely labor-intensive. Consequently, it cannot compete in an economic system that fosters the replacement of human labor with machines powered by fossil fuels.

Some scientists and engineers have tried to solve this by demonstrating coppice harvesting machines. [15] However, mechanization is a slippery slope. The machines are only practical and economical on somewhat larger tracts of woodland (>1 ha) which contain coppiced trees of the same species and the same age, with only one purpose (often fuelwood for power generation). As we have seen, this excludes many older forms of coppice management, such as the use of multipurpose trees and line plantings. Add fossil fueled transportation to the mix, and the result is a type of industrial coppice management that brings few improvements.


Coppiced trees along a brook in ‘s Gravenvoeren, Belgium. Image: Geert Van der Linden. 

Sustainable forest management is essentially local and manual. This doesn’t mean that we need to copy the past to make biomass energy sustainable again. For example, the radius of the wood supply could be increased by low energy transport options, such as cargo bikes and aerial ropeways, which are much more efficient than horse or ox drawn carts over bad roads, and which could be operated without fossil fuels. Hand tools have also improved in terms of efficiency and ergonomics. We could even use motor saws that run on biofuels – a much more realistic application than their use in car engines. [16]

The Past Lives On

This article has compared industrial biomass production with historical forms of forest management in Europe, but in fact there was no need to look to the past for inspiration. The 40% of the global population in poor societies who still burn wood for cooking, water heating and/or space heating are not customers of industrial forestry. Instead, they obtain firewood in much the same ways that we did in earlier times, although the tree species and the environmental conditions can be very different. [17]

A 2017 study calculated that the wood consumption of people in “developing” societies – accounting for 55% of the global wood harvest and 9-15% of total global energy consumption – causes only 2-8% of anthropogenic climate impacts. [18] Why so little? Because around two-thirds of the wood that is harvested in developing societies is harvested sustainably, write the scientists. People collect mainly dead wood, they grow a lot of wood outside the forest, they coppice and pollard trees, and they prefer the use of multipurpose trees, which are too valuable to cut down. The motives are the same as those of our ancestors: people have no access to fossil fuels and are thus tied to a local wood supply, which needs to be harvested and transported manually.


African women carrying firewood. (CC BY-SA 4.0)

These numbers confirm that it is not biomass energy that’s unsustainable. If the whole of humanity lived like the 40% that still burns biomass regularly, climate change would not be an issue. What is really unsustainable is a high-energy lifestyle. We obviously cannot sustain a high-tech industrial society on coppice forests and line plantings alone. But the same is true for any other energy source, including uranium and fossil fuels.

Written by Kris De Decker. Proofread by Alice Essam. 


[1] Multiple references:

Unrau, Alicia, et al. Coppice forests in Europe. University of Freiburg, 2018. 

Notes on pollards: best practices’ guide for pollarding. Gipuzkoako Foru Aldundia-Diputación Foral de Gipuzkoa, 2014.

A study of practical pollarding techniques in Northern Europe. Report of a three month study tour August to November 2003, Helen J. Read.

Aarden wallen in Europa, in “Tot hier en niet verder: historische wallen in het Nederlandse landschap”, Henk Baas, Bert Groenewoudt, Pim Jungerius and Hans Renes, Rijksdienst voor het Cultureel Erfgoed, 2012.

[2] Logan, William Bryant. Sprout lands: tending the endless gift of trees. WW Norton & Company, 2019.

[3] Holišová, Petra, et al. “Comparison of assimilation parameters of coppiced and non-coppiced sessile oaks“. Forest-Biogeosciences and Forestry 9.4 (2016): 553. 

[4] Perlin, John. A forest journey: the story of wood and civilization. The Countryman Press, 2005.

[5] Most of this information comes from a Belgian publication (in Dutch language): Handleiding voor het inventariseren van houten beplantingen met erfgoedwaarde. Geert Van der Linden, Nele Vanmaele, Koen Smets en Annelies Schepens, Agentschap Onroerend Erfgoed, 2020. For a good (but concise) reference in English, see Rotherham, Ian. Ancient Woodland: history, industry and crafts. Bloomsbury Publishing, 2013.

[6] While leaf fodder was used all over Europe, it was especially widespread in mountainous regions, such as Scandinavia, the Alps and the Pyrenees. For example, in Sweden in 1850, 1.3 million sheep and goats consumed a total of 190 million sheaves annually, for which at least 1 million hectares of deciduous woodland was exploited, often in the form of pollards. The harvest of leaf fodder predates the use of hay as winter fodder: branches could be cut with stone tools, while cutting grass requires bronze or iron tools. While most coppicing and pollarding was done in winter, harvesting leaf fodder logically happened in summer. Bundles of leaf fodder were often put in the pollarded trees to dry. References: 

Logan, William Bryant. Sprout lands: tending the endless gift of trees. WW Norton & Company, 2019.

A study of practical pollarding techniques in Northern Europe. Report of a three month study tour August to November 2003, Helen J. Read.

Slotte H., “Harvesting of leaf hay shaped the Swedish landscape“, Landscape Ecology 16.8 (2001): 691-702. 

[7] Wealleans, Alexandra L. “Such as pigs eat: the rise and fall of the pannage pig in the UK“. Journal of the Science of Food and Agriculture 93.9 (2013): 2076-2083.

[8] This information is based on several Dutch language publications: 

Handleiding voor het inventariseren van houten beplantingen met erfgoedwaarde. Geert Van der Linden, Nele Vanmaele, Koen Smets en Annelies Schepens, Agentschap Onroerend Erfgoed, 2020.

Handleiding voor het beheer van hagen en houtkanten met erfgoedwaarde. Thomas Van Driessche, Agentschap Onroerend Erfgoed, 2019

Knotbomen, knoestige knapen: een praktische gids. Geert Van der Linden, Jos Schenk, Bert Geeraerts, Provincie Vlaams-Brabant, 2017.

Handleiding: Het beheer van historische dreven en wegbeplantingen. Thomas Van Driessche, Paul Van den Bremt and Koen Smets. Agentschap Onroerend Erfgoed, 2017.

Dirkmaat, Jaap. Nederland weer mooi: op weg naar een natuurlijk en idyllisch landschap. ANWB Media-Boeken & Gidsen, 2006.

For a good source in English, see: Müller, Georg. Europe’s Field Boundaries: Hedged banks, hedgerows, field walls (stone walls, dry stone walls), dead brushwood hedges, bent hedges, woven hedges, wattle fences and traditional wooden fences. Neuer Kunstverlag, 2013.

If line plantings were mainly used for wood production, they were planted at some distance from each other, allowing more light and thus a higher wood production. If they were mainly used as plot boundaries, they were planted more closely together. This diminished the wood harvest but allowed for a thicker growth.

[9] In fact, coppice forests could also have a location-specific function: they could be placed around a city or settlement to form an impenetrable obstacle for attackers, either by foot or by horse. They could not easily be destroyed by shooting, in contrast to a wall. Source: [5]

[10] Lime trees were even used for fire prevention. They were planted right next to the baking house in order to stop the spread of sparks to wood piles, haystacks and thatched roofs. Source: [5]

[11] The fact that living hedges and trees are harder to move than dead wood fences and posts also had practical advantages. In Europe until the French era, there was no land register, and boundaries were physically indicated in the landscape. The surveyor’s work was sealed with the planting of a tree, which is much harder to move on the sly than a pole or a fence. Source: [5]

[12] And, if it could be brought in over water from longer distances, the wood had to be harvested within 15-30 km of the river or coast. 

[13] Sieferle, Rolf Pieter. The Subterranean Forest: energy systems and the industrial revolution. White Horse Press, 2001.

[14] On different scales of wood production, see also: 

Jalas, Mikko, and Jenny, Rinkinen. “Stacking wood and staying warm: time, temporality and housework around domestic heating systems“, Journal of Consumer Culture 16.1 (2016): 43-60.

Rinkinen, Jenny. “Demanding energy in everyday life: insights from wood heating into theories of social practice.” (2015).

[15] Vanbeveren, S.P.P., et al. “Operational short rotation woody crop plantations: manual or mechanised harvesting?” Biomass and Bioenergy 72 (2015): 8-18.

[16] However, chainsaws can have adverse effects on some tree species, such as reduced regrowth or a greater risk of transferring disease. 

[17] Multiple sources that refer to traditional forestry practices in Africa:

Leach, Gerald, and Robin Mearns. Beyond the woodfuel crisis: people, land and trees in Africa. Earthscan, 1988. 

Leach, Melissa, and Robin Mearns. “The lie of the land: challenging received wisdom on the African environment.” (1998)

Cline-Cole, Reginald A. “Political economy, fuelwood relations, and vegetation conservation: Kasar Kano, Northern Nigeria, 1850-1915.” Forest & Conservation History 38.2 (1994): 67-78.

[18] Multiple references:

Bailis, Rob, et al. “Getting the number right: revisiting woodfuel sustainability in the developing world.” Environmental Research Letters 12.11 (2017): 115002

Masera, Omar R., et al. “Environmental burden of traditional bioenergy use.” Annual Review of Environment and Resources 40 (2015): 121-150.

Study downgrades climate impact of wood burning, John Upton, Climate Central, 2015.

[19] Haustingsskog. [revidert] Rettleiar for restaurering og skjøtsel, Garnås, Ingvill; Hauge, Leif ; Svalheim, Ellen, NIBIO RAPPORT | VOL. 4 | NR. 150 | 2018. 


Book Review: The Age of Wood: Our Most Useful Material & the Construction of Civilization

Preface. This is a book review, mainly with excerpts, of Ennos’s book “The Age of Wood. Our Most Useful Material and the Construction of Civilization”. If you know anything about woodworking, you will enjoy the detailed descriptions of how and why wood is so versatile and how various objects are made with wood, from wheels to cathedrals.

Wood was essential to our evolution into Homo sapiens, not just for fires but for all kinds of tools and weapons, which archaeologists ignore in favor of stone and metal because wooden objects decomposed long ago. Even today, wood is essential in our fossil-fueled world. And in the past, many wooden inventions transformed civilizations: wheels, ships for war and trade, musical instruments, myriad tools, furniture, and barrels, which were the equivalent of tin cans, plastic bottles, and shipping containers today.

My book, Life After Fossil Fuels, explains why we will return to the age of wood for our energy and infrastructure, which all civilizations before fossil fuels were based on. If I’d read Ennos’s book before publication, some of its material would have been cited in mine. And if you are trying to preserve knowledge for our postcarbon future, this would be a good one to have on the shelf.

Alice Friedemann, author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, 2021, Springer; “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer; “Barriers to Making Algal Biofuels”; and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report



Ennos R (2021) The Age of Wood: Our Most Useful Material and the Construction of Civilization.

Wood and Human Evolution

History and the story of humanity have long been defined by stone, bronze, and iron; it’s time to recognize the equally important role wood played, and still does.

Anthropologists wax lyrical about the developments of stone tools, and the intellectual and motor skills needed to shape them, while brushing aside the importance of the digging sticks, spears, and bows and arrows with which early humans actually obtained their food. Archaeologists downplay the role wood fires played in enabling modern humans to cook their food and smelt metals. Technologists ignore the way in which new metal tools facilitated better woodworking to develop the groundbreaking new technologies of wheels and plank ships. And architectural historians ignore the crucial role of wood in roofing medieval cathedrals, insulating country houses, and underpinning whole cities.

Wood is the one material that has provided continuity in our long evolutionary and cultural story, from apes moving about the forest, through spear-throwing hunter-gatherers and ax-wielding farmers to roof-building carpenters and paper-reading scholars. The foundations of our relationship with wood lie in its remarkable properties. As an all-round structural material, it is unmatched. It is lighter than water, yet weight for weight is as stiff, strong, and tough as steel and can resist both being stretched and compressed. It is easy to shape, as it readily splits along the grain, and is soft enough to carve, especially when green. It can be found in pieces large enough to hold up houses, yet can be cut up into tools as small as a toothpick. It can last for centuries if it is kept permanently dry or wet, yet it can also be burned to keep us warm, to cook our food, and drive a wide range of industrial processes. With all these advantages, the central role of wood in the human story was not just explicable, but inevitable.

The key to getting a better grip on a smooth surface is not to use a hard material such as a claw, but a soft one, such as skin.  We could cover our finger pads with a biological rubber such as elastin, but this would wear away too fast. The solution evolved by primates is more ingenious: we use a soft internal fluid within our finger pads and surround it by a stiffer lining—producing a structure rather like a partially deflated car tire. Beneath the tips of our fingers are pads of fat, which deform easily to allow a large surface area of the more rigid surrounding skin to make contact.

This arrangement gives us an excellent grip on hard surfaces such as glass, ten times as good as that of hard hooves or claws—explaining why we remain sure-footed on smooth concrete and tiles, whereas horses are prone to slip in their stables, and panicking dogs often scrabble about on the kitchen floor. Also we have ridges known as fingerprints. On smooth materials such as glass, this makes our grip worse, since it reduces the area of contact, just as grooved tires in racing cars have poorer grip in the dry than slicks. However, fingerprints do give some important advantages. They can improve our grip in the wet (just like grooved tires) since they can channel away the surface film of water, and also on rough surfaces, such as branches, since the ridges interlock with ones in the bark. And the skin ridges where our touch receptors are located can magnify strains and so improve the sensitivity of our fingers. Finally, the alternation of strong ridges with flexible troughs in the skin allows it to deform smoothly when we grip an object, preventing blistering.

Primatologists are learning that the reason monkeys increased in size as they evolved was related to changes in their diets. Bush babies and their relatives the lorises are insectivores; they eat insects and other invertebrates, which are hard to find, hard to catch, and rather small. Insects provide enough energy to support a bush baby. However, a larger creature would be no better at finding, catching, and eating insects, but the amount of energy it would have to expend moving about to do so would be much greater. Other dietary options for primates include leaves and fruit.

A leaf-eating primate has to eat huge quantities of young leaves and hold them for days in its stomach to detoxify and digest them; this limits its energy intake. Leaf-eating monkeys tend to be large, potbellied animals with a slow metabolism and limited intelligence—they cannot afford to develop a large brain, but then again, as leaves aren’t hard to find, they don’t need to!

Those primates that changed their diet to eat fruit rather than leaves also tended to get bigger because fruit is plentiful in rain forests and is full of energy. With so many different types of trees in tropical rain forests, each species is widely scattered through the forest. Moreover, because of the lack of seasonality, trees can fruit at any time. Trees that are in fruit are rare and hard to find.

So fruit-eating primates not only have to be able to spot when fruits are ripe, they also have to be able to remember where fruiting trees are located within the forest, and to predict when they are likely to fruit, so they can get to them before the fruit is eaten by other animals. Consequently fruit-eating animals have to hold a great deal of information in their heads, mapping the world in space and time. Field studies and experiments on captive fruit-eating primates have shown that they can remember the location of large numbers of fruiting trees and compute accurate routes to travel rapidly and economically to the next tree to ripen. So it is no surprise to find that fruit-eating primates such as macaques and spider monkeys have brains that are on average about 25% bigger than those of their leaf-eating cousins, the langurs and howler monkeys. This has enabled them to develop more sophisticated social behavior and live in more cohesive groups.

The intelligence of monkeys pales in comparison with that of our closest relatives, the great apes: orangutans, gorillas, chimpanzees, and bonobos, whose brains are twice as large relative to their body weight. Most primatologists believe the apes acquired their larger brains to help them communicate with and manipulate their peers.

An orangutan would probably be killed by a fall from the canopy that would scarcely harm a small monkey. It struck me then that the early apes might have also evolved larger brains to help them navigate safely around their perilous arboreal environment and allow them to plan and follow the best routes through the trees. To do this they would also have had to develop a self-image; they would have to realize that their body weight altered their mechanical world by bending down the branches that were supporting them. In other words, their intelligence had a physical basis, not a social one: a feeling for the mechanical properties of wood.   Many years later I was surprised and pleased to learn that my idea was now a bona fide theory of the evolution of intelligence in apes—the “clambering hypothesis” of Daniel Povinelli and John Cant. Since the publication of their hypothesis in 1995, other field-workers have built up evidence that orangutans, in particular, do have a high level of understanding of the mechanics of trees.

Understanding the mechanics of tree branches gives the great apes another advantage: they can use them to construct a nest in which they can safely sleep. All the great apes are capable of making themselves complex cup-shaped nests in the tree canopy, while monkeys sit on as thick a branch as they can find, resting their weight on pads of skin that develop on their buttocks, but even so they repeatedly wake up throughout the night. An ape, sleeping within a broad, cup-shaped nest, is far safer and can sleep for longer periods and more deeply.

This is reflected in the neural activity in sleeping monkeys and sleeping apes. The apes have more frequent bouts of both NREM (non–rapid eye movement) and REM (rapid eye movement) sleep. These types of sleep are important in reordering and fixing memories, which can in turn help improve cognitive ability. Building nests could have helped apes get even cleverer.

It might seem to be a simple task to construct a nest, and that certainly appears to be what primatologists have thought, as they have given them scant attention. But it is not just a matter of breaking a few branches off and weaving them together. It is nearly impossible to snap a living branch off a tree by bending it. And this is not because the branches are too strong, but because the structure of wood affects how it breaks.

Wood is eight to ten times stronger longitudinally than transversely, and most types of wood are also 20–50% stronger in the radial direction than in the tangential. This pattern matches the forces the wood has to withstand. The high strength and stiffness of wood along the grain enables it to withstand the bending forces to which tree trunks and branches are subjected by gravity and the wind. This structural arrangement also makes it almost impossible to detach a living branch. If you bend a branch of green wood, what you are doing is stretching the wood on the convex side, and compressing the wood on the concave side. In a typical branch the wood will fail first in tension, and the branch will start to break across, like a carrot or stick of celery. But it won’t break all the way. As the crack reaches the center of the branch, it gets diverted, traveling up and down the weak center line of the branch, leaving the two halves hinged together in a greenstick fracture.

An orangutan would find a good strong horizontal branch to rest on, then construct its nest around this support. First, it would lean out and with one hand draw thick branches in toward itself, breaking them in greenstick fracture and hinging them inward, before finally weaving the branches together. The result was a cup-shaped elliptical nest around four feet long and two and a half feet wide. Sitting in the completed structure, the ape would reach out to grab thinner branches and, holding them in two hands, first break them in greenstick fracture, then twist them to break the two ends apart. It then stuffed the broken branches, complete with twigs and leaves, into the nest, behind and around itself to produce a mattress and a pillow, and finally on its lap to produce a blanket. The whole process was remarkably rapid. In Julia’s film, the male ape took only five minutes to build his nest, and half of that time was spent resting between the two stages.  It takes young orangutans years of observing their mothers and practicing by themselves for them to perfect their constructions. And orangutans are the only other great ape that walks more or less upright and with straight legs like us.

Our ancestors gained their ability to walk bipedally when they still lived in the trees. Moreover, it is becoming clear that far from striding out immediately into the plains, our ancestors remained in well-wooded regions and stayed in the canopy long after they had become able to walk upright. We have already seen that orangutans frequently walk upright along narrow branches, and that when they do so, they also cling to higher branches with their hands. As an animal puts its foot down, the branch moves downward under its weight, storing energy, before springing up again and returning that energy. The orangutan could therefore bounce along the branch almost effortlessly, like a person walking on a trampoline. Holding on to branches could also help an animal overcome another major difficulty of evolving bipedalism: keeping its balance.

It seems that it was only with the emergence of Homo erectus, less than 2 million years ago, that humans became fully adapted to a terrestrial lifestyle. What had previously been continuous tropical and monsoon forest opened up, the trees unable to cope with the longer dry seasons except in the damper soils along river valleys. Clearly, this change in vegetation was bad news for forest-dwelling apes. They would have been forced to the forest floor, first of all to travel between the scattered trees, but also in search of other types of food to supplement their diet of fruit, such as eating the termites that abound in savannas, raiding honey from bees' nests, and hunting small mammals such as bush babies. Like the chimps, they probably fashioned wooden tools, such as probes, chisels, and spears to do this, and maybe used stone hammers to break open the hard nuts and seeds that the new types of drought-tolerant plants produced. But their main source of food in the dry season, like that of modern-day hunter-gatherers such as the Hadza people of Tanzania, who live in similar savanna woodland, would have been underground roots and bulbs.

Roots are strongly defended. First, plants protect them mechanically, by incorporating tough fibers within them. Both the early australopiths and Homo habilis developed their dentition to cope with these mechanical defenses. Later australopiths, such as Paranthropus boisei and Paranthropus robustus, also developed large sagittal crests on the top of their heads, rather like ones you can see on modern hyenas, which acted as the insertion points of huge jaw muscles. It is thought that this would have helped them grind up the tough roots and crack open hard nuts and seeds.

Plants also defend their underground storage organs chemically, by incorporating astringent chemicals to precipitate out digestive enzymes, and toxins to poison consumers. Australopiths developed large guts to help digest this difficult food. They must have been potbellied, just like proboscis monkeys. But the main difficulty in eating roots is accessing this subsoil resource in the first place. Baboons, the only primates that currently live on the African plains, use their hands to dig in the soil, but they can only reach shallow bulbs and corms. Warthogs use their impressive tusks to dig a bit deeper. The hominins would have had to develop a new technology to access even longer, deeper roots.

The digging sticks used by modern-day hunter-gatherers, such as the women of the Hadza tribe, are even larger and more sophisticated. They cut sticks that are over a yard in length, an inch and a half thick, and weigh anything from one to two pounds. Their favorite ekwa hasa roots are around four feet long and highly nutritious. The Hadza women dig them up by pounding the pointed end of their sticks into the soil to break it up and levering out the loosened soil with a digging motion; the process is so efficient that the women can collect enough roots in a few hours for the daily needs of their band.

There must have been strong selection pressure in early hominins to learn how to break off and prepare thicker, longer, and stronger sticks. This may have driven them to develop new stone tools with sharp edges that could saw through wooden branches and whittle the ends into points. To do this, and to handle the digging sticks effectively, they would also have had to evolve stronger gripping hands with fully opposable thumbs.

Wood's strength, stiffness, and toughness are down to the molecular structure of the cell walls themselves. The cell walls are stiffened by crystalline microfibrils of cellulose, which are embedded in a softer matrix of hemicellulose that is stabilized by a polymer called lignin. When the cell wall finally breaks, the fibrils uncoil like a stretched spring, creating a rough fracture surface with thousands of tiny hairlike fibrils projecting out of the wood. This process absorbs huge amounts of energy, making wood around a hundred times as tough as fiberglass and giving wood its resistance to fracture. It's the reason why trees stand up so well to hurricanes that can destroy more rigid man-made structures, and why wooden boats are far more resistant to bumps than fiberglass ones.

But the early hominins would also have been helped by the first of two incredibly fortuitous properties of wood, properties that are of no actual benefit to the trees that make it. If wood is broken off a tree and starts to dry out, its mechanical properties improve! This is most unusual for biological materials; bones, horn, and nails all get weaker and more brittle as they desiccate. At the 60% relative humidity of the savanna dry season, the water content of wood typically drops from 30% to 12% and its stiffness triples. Early hominins would have made use of this transformation – a fully dried stick would be able to dig a hole around 50% deeper than a green stick.
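The mechanical payoff of that tripled stiffness can be sketched with simple beam theory. As a toy model (my construction, not the book's), treat the digging stick as a cantilever being used to lever out soil: its tip deflection under a given prying force is inversely proportional to stiffness, so a dried stick wastes far less of each thrust in flexing. The dimensions are borrowed from the Hadza sticks described below (about a yard long, an inch and a half thick); the force and moduli are illustrative values.

```python
import math

def tip_deflection(force, length, E, diameter):
    """Cantilever tip deflection (m) under an end load: delta = F*L^3 / (3*E*I)."""
    I = math.pi * diameter**4 / 64        # second moment of area of a round section
    return force * length**3 / (3 * E * I)

FORCE = 300.0                             # N, rough prying force (assumed)
LENGTH, DIAMETER = 0.6, 0.038             # m: lever arm and ~1.5 in thickness
E_DRY = 10e9                              # Pa, illustrative stiffness of dry wood
E_GREEN = E_DRY / 3                       # the book: drying roughly triples stiffness

flex_green = tip_deflection(FORCE, LENGTH, E_GREEN, DIAMETER)
flex_dry = tip_deflection(FORCE, LENGTH, E_DRY, DIAMETER)
print(f"green stick flexes ~{flex_green*1000:.0f} mm at the tip, dry stick ~{flex_dry*1000:.0f} mm")
```

The dry stick flexes a third as much, so more of the digger's effort goes into breaking soil; how that maps onto the "50% deeper" figure depends on soil mechanics that this sketch ignores.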

It seems puzzling that they continued to return to the trees; there must have been a major problem that prevented them from coming down permanently. Looking at the present-day African plains, it is clear what that problem must have been: they would have been extremely vulnerable to being eaten by predators such as saber-toothed cats, scimitar-toothed cats, and the ancestors of present-day lions and hyenas.  Baboons are the only large primates that live on the plains of Africa, and they have real problems with predation. Compared to early hominins, they are physically far better able to defend themselves; they have huge canine teeth, and a fully grown male may weigh as much as ninety pounds, more than a match for many large cats. Even so, baboons have to live together in groups of 20 to 200 individuals to protect one another, and yet they still get a rotten night’s rest. Even when they are living in zoos, baboons wake up 18 times a night, only sleep for 60% of their rest period, and get into deep REM sleep only around 10% of the time. This contrasts with 18% of the time for chimpanzees, which sleep in nests, and 22% for modern humans.

The only plausible way that our ancestors could have protected themselves on the ground at night from predators was by using fire. This is where the second of wood’s fortuitous properties comes in: it is flammable, especially when dry, and when it is burned, it releases a large amount of heat and light. The flammability of wood is of no use to trees; it’s just another fortunate accident that it does burn, though most living trees, especially ones growing in rain forests, are extremely resistant to being set alight.

The cell walls of living wood contain a lot of water, around 30% of their dry weight, and the cell lumens in the sapwood around the outside of the trunk and branches are filled with water; a tree trunk can therefore contain three times its dry weight of loose water. Before wood can burn, all this water has to be heated up and evaporated off, which requires as much as a third of the energy that is released when the wood finally burns. Cell wall material is chemically stable, even at temperatures above 212°F; the lignin keeps the cellulose fibers rigidly bound together, which explains why we can’t cook wood and make it into a useful food by boiling it!
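The claim that evaporating the water can eat up a third of the wood's energy checks out with rough figures; the heating value and latent heat below are standard textbook values, not numbers from this passage.

```python
# Energy budget for burning thoroughly green wood, per kg of dry wood.
HHV_DRY_WOOD = 19e6        # J/kg, typical heating value of oven-dry wood (assumed)
LATENT_HEAT = 2.26e6       # J/kg, latent heat of vaporization of water
SPECIFIC_HEAT = 4186.0     # J/(kg*K), specific heat of liquid water
WARMING = 75.0             # K, heating the water from ~25 C up to the boil

water_per_kg_dry = 3.0     # the book: a trunk can hold 3x its dry weight of water
energy_to_dry = water_per_kg_dry * (SPECIFIC_HEAT * WARMING + LATENT_HEAT)
fraction = energy_to_dry / HHV_DRY_WOOD
print(f"drying the wood consumes ~{fraction:.0%} of its heat of combustion")
```

With these round numbers the fraction comes out around 40%, if anything above the book's "as much as a third", underscoring why fully waterlogged green wood barely burns at all.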

Starting fires without matches or firelighters is no easy business. The usual methods employed by modern hunter-gatherers are either to generate heat by rubbing sticks together, or to make sparks by striking flints against each other. It is unlikely that early hominins would have been able to do either.

Predators such as cheetahs and birds of prey are drawn to bush fires, feeding on the small mammals and birds that are flushed out in a panic by the flames. Savanna chimpanzees are also attracted to fires, gathering and eating the cooked seeds of bean trees. From following and using naturally occurring fires it is a small step to keeping those fires alight: modern hunter-gatherers simply carry smoldering logs with them as they travel about the bush, lighting fires when they need them. And from keeping fire alight in smoldering logs, it is only another small step to keeping a fire burning at a permanent camp and building it up at night to repel predators.

Setting up a permanent camp, and being able to sit together around the campfire, would have had other advantages. It would help to keep the hominins warmer during the cool nights typical of savanna regions. The light from the fire would also help lengthen the time when individuals could carry out tasks such as making and mending tools. There would also be opportunities for a greater variety of social interactions: sharing food and exchanging information. Having a permanent fire would help speed up the evolution of both practical and social skills.

The advantages of cooking are perhaps best shown by what happens to those human health fanatics who eat only raw food. Even if they grind up their food carefully before eating it, raw foodists have problems digesting what they eat and invariably lose weight and condition. Typical weight loss is around forty-four pounds for men and fifty-five pounds for women.

The Naked Ape & the sweating hypothesis

Hairlessness is an extremely unusual trait for terrestrial mammals: only the naked mole rat comes to mind. A newly hairless hominin would have had to produce more melanin to absorb the harmful ultraviolet rays, turning its skin black. It has been at least 1.2 million years since hominins lost their hair, in our ancestor Homo erectus.

But what drove hair loss? The generally accepted explanation among anthropologists, and still the leading hypothesis, is that losing hair allowed early humans to keep cool in the hot savanna regions into which they had moved. So effective is heat removal by sweating that anthropologists have gone on to suggest that losing our hair was crucial for another advance in the evolution of humans: the ability to hunt large animals.

But it is not certain that hairlessness is what gives endurance hunters such as the San Bushmen their advantage; two other mammalian predators also hunt this way in savannas, African hunting dogs and spotted hyenas, and both of them are covered all over with hair, like the prey they hunt. In fact, endurance hunting is rare in hunter-gatherer societies, maybe because of the disadvantage that though the hunter can keep cool by sweating, he loses large amounts of water by doing so.

The hunting hypothesis suggests that early men could run farther and for longer in the heat of the day than prey animals because sweating would keep them cooler for longer. If they tracked their prey for long enough, their prey would eventually overheat and become immobilized. But sweating creates a problem of its own: US Army recruits have been known to lose over four quarts of water per hour when exercising in the desert. The resulting dehydration impairs performance once losses exceed 2% of body weight and can ultimately be fatal. Nowadays hunters can carry water bottles with them to keep up their fluid levels, but there is no guarantee that early humans had invented vessels capable of carrying water.
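A quick calculation shows how fast this bites. Taking the text's four quarts per hour and the 2% threshold, and assuming a 70 kg hunter (my figure, not the text's):

```python
QUART_LITERS = 0.946                   # liters per US quart
sweat_rate = 4 * QUART_LITERS          # liters (~kg) of water lost per hour
body_mass = 70.0                       # kg, assumed mass of the hunter
threshold = 0.02 * body_mass           # 2% of body weight, in kg of water
minutes = threshold / sweat_rate * 60
print(f"losing 2% of body weight ({threshold:.1f} kg) takes ~{minutes:.0f} minutes")
```

At that extreme desert rate the 2% mark arrives in well under half an hour, which is why water logistics, rather than stamina, could be the real limit on endurance hunting.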

The sweating hypothesis has a more fundamental problem, one rarely mentioned by anthropologists. In the heat of the day a naked body would actually absorb more heat than one covered in hair, meaning it would need to be more actively cooled. You might think that this would only occur when the air temperature exceeds our body temperature of 98.6°F, when heat would enter our bodies via convection. This only rarely happens in savannas, where the mean daytime maximum temperature is usually around 84°F. However, this leaves out the most important mode of heat transfer between our bodies and the environment: radiation. On a hot sunny day a hairless human body will absorb long-wave radiation emitted from the hot ground, and even more important, the much larger amount of short-wave radiation (mostly light) that comes from the sun. On such a day the net radiation entering our bodies can amount to around 670 watts per square yard, much more than the amount of energy we ourselves generate. The layers of hair follicles on a hairy animal will shield it from practically all of this radiation, so while the surface of its pelt might be hot, its skin remains at body temperature. For this reason, most savanna mammals are hairier than their cousins that live in dense forest and tend to have particularly dense hair on their upper flanks to ward off the sun’s rays. Protected by their heavy fur coats, they have to use far less water to keep themselves cool than naked humans.
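A rough budget makes the radiation point concrete; the exposed skin area and resting metabolic rate below are my assumptions, not figures from the text.

```python
NET_RADIATION_YD2 = 670.0                    # W per square yard, from the text
net_radiation = NET_RADIATION_YD2 / 0.8361   # convert to W per square meter
exposed_area = 0.5                   # m^2, rough sunlit plus ground-facing skin (assumed)
radiant_gain = net_radiation * exposed_area
resting_metabolism = 100.0           # W, typical resting heat output (assumed)
print(f"radiant heat gain ~{radiant_gain:.0f} W vs ~{resting_metabolism:.0f} W metabolic")
```

Even with a conservative exposed area, radiant gain is several times the body's own heat production; a naked skin must dump all of it by sweating, while a pelt blocks most of it before it reaches the skin.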

In deserts the problems of keeping cool in the daytime are most acute. It is noteworthy, then, that those “ships of the desert,” the camels, have particularly heavy coats of hair on their upper flanks, while their human riders cover themselves with loose flowing robes. The shielding effect of hair also helps explain why humans have maintained a dense covering of hair on the tops of our heads; it helps us keep our most vital organ—our brains—cool.

The importance of hair in thermoregulating our brain was driven home to many English cricket supporters back in 1994, when the English all-rounder Chris Lewis shaved his head at the start of a tour of the West Indies. He promptly went down with sunstroke. So important is our head hair in keeping our brains cool that the human races that inhabit hotter parts of the world, such as Native Americans and Africans, have lower rates of male-pattern baldness than the Caucasian inhabitants of cooler regions.

And you may have spotted another problematical aspect of the hunter hypothesis: its inbuilt sexism. The researchers who have investigated this theory (almost all men) have concentrated on an activity, hunting, that they have assumed was also carried out entirely by men. They totally ignored the contribution of women, who, they assume, spent much of their time “gathering” or perhaps simply waiting for the men to bring home their catch. They do not explain how hairlessness would have helped the women dig up roots, make fires, or cook. Indeed, according to the theory, women should actually be hairier than men since they would not have had such great cooling demands on their metabolism, whereas the reverse is true.

Several scientists have championed an alternative hypothesis, one that was first put forward in 1874 by the naturalist Thomas Belt, and one that applies to both sexes: that humans lost their hair to reduce their ectoparasite load. The reasoning is that hair loss occurred because early humans were now living and sleeping together in semipermanent camps, rather than in solitary nests.

Ectoparasites would therefore be more likely to build up around the camp and become more of a problem. It is certainly true that before the advent of modern insecticides, we were highly troubled by such parasites. Our mattresses were infested by bedbugs, our head hair by lice, and pubic hair by crab lice. Moreover, humans are the only one of the 193 species of monkeys and apes to have its own species of flea, Pulex irritans, something that is only possible because we live in permanent settlements; the larvae fall to the floor and live on organic debris in our houses, guaranteed to find new humans to bite once they have emerged from their pupae as adults.

Ectoparasites not only irritate us and suck our blood; they also carry dangerous infectious diseases such as typhus, various forms of spotted fever, and bubonic plague. There would therefore have been strong selection pressure on any morphological feature in early humans that reduced the ectoparasites' numbers. The ectoparasite theory suggests that the best way to do this was to lose our body hair. In World War I it was found that cutting soldiers' hair shorter greatly reduced the buildup of head lice.

Reducing the length and thickness of our hair not only makes it easier to visually spot fleas and lice on our skin; recent research by Isabelle Dean and Michael Siva-Jothy of the University of Sheffield, England, has also shown that our fine body hairs act as excellent movement detectors, allowing us to feel where the parasites are. Finally, the theory also provides a satisfying explanation of why women are less hairy than men: staying longer at camp than the men, they may have been more prone to being loaded with parasites.

Whichever hypothesis you favor, the benefits would have had to be large enough to overcome a serious disadvantage of nakedness. Naked Homo erectus individuals would have suffered from a quite different thermoregulatory problem than overheating during the day: they would have been extremely prone to getting cold at night.

All warm-blooded animals have a range of air temperatures at which they are comfortable and at which they can keep their core body temperature constant without having to raise their resting metabolism. Within these temperatures they can regulate their body heat merely by changing their behavior—by curling up, for instance, or stretching out. As you might expect from what I have outlined above, our upper critical temperature is quite low, around 97°F, even in deep shade, and our lower critical temperature is high, around 77°F. We could be comfortable living naked in a rain forest, therefore, where air temperatures range around 82°F–90°F (and rain forest tribes consequently tend to wear few clothes), but not elsewhere.

At night in the Serengeti it can effectively feel more like 43°F–50°F; tourists to the region are advised to bring sweaters and jackets for the cool evenings. A naked Homo erectus living 1.2 million years ago on the open plains of East Africa would therefore have got cold at night and have had disturbed sleep. There are three possible ways out of this conundrum.

Early humans could have huddled around the fires that they built and maintained overnight for protection from predators. Most of us have sat around campfires sometime in our youth, and they certainly warm the side that faces the fire. However, the side of our bodies that faces away from the fire and the top of our shoulders, which face the sky, can get cold. Out in the open our bodies also lose heat rapidly to the cold ground.

Another way they could have kept warm is to have used animal skins as bedclothes. However, it is difficult to believe that physical evolution could have been moving one way—making people colder—while behavioral evolution at the same time had to try to make up for it. Besides, the first actual physical evidence for clothes, or the tools such as needles needed to make them, comes far later in the story of humans—scraped hide 300,000 years ago, and sewn clothes just 20,000 years ago.

It is far more likely that Homo erectus were already doing something that would help keep them warm at night before they lost their hair: at their campsites they were already building shelters that helped protect them from the rain, shelters that would also have helped keep them warm. They would certainly have had a good incentive to do this in the rainy season. None of the great apes like getting wet; Sumatran orangutans, for instance, often make second nests directly above their sleeping nests and use them as canopies to keep off the rain. For early humans, long used to building sleeping nests on which they rested, it would not have been a problem to construct simple huts to shelter themselves. Indeed, many tribes of hunter-gatherers still build small semipermanent huts from thin branches that they cut off savanna trees; they insert the thick ends of the branches into a ring of postholes in the ground and fasten them together at the top in the same way that apes weave their nests together. The frames are then covered with leaves or skins or even coated in mud. The huts of modern hunter-gatherers fall apart within a few weeks or months of being abandoned and leave no trace.

You might think that flimsy wooden huts would provide little warmth, since cold air could rapidly penetrate such a drafty structure, but they can be quite effective, and anything that shields us from the cold night sky helps.  Even sleeping under trees provides more warmth, one reason the hunter-gatherers of the Hadza tribe of Tanzania still sleep beneath trees during the dry season. Largely because the huts cut down air currents and shielded them from the cold night sky, sleeping inside a hut would feel 8°F–10°F warmer than outside, enough to allow for a comfortable night’s sleep.

Because early hominins were sleeping inside wooden huts they could afford to lose their body hair. And this would in turn have made us even more dependent on our practical woodworking skills, to make fires and build ever-more-elaborate shelters, and eventually to use other materials to make sheets and clothing. Paradoxically, as we got better at these activities, we would have started to be able to colonize cooler climates. Becoming hairless forced us to become more ingenious and to rely on our intelligence to help us manipulate our environment, rather than have to adapt to it as other animals do. It would have helped a fairly feeble primate conquer the world.

Stone & wood tools

The study of stone tools has dominated anthropology and archaeology ever since 1831, when the Danish antiquarian Christian Thomsen introduced the concept of classifying the “ages of man” according to their dominant materials—stone, bronze, and iron. Archaeologists have spent huge amounts of time and effort classifying stone tools, arranging them in chronological order, replicating their manufacture and use, and following their development. In doing so, they cemented in place a worldview in which the lives of our early ancestors, and in particular their material culture, were dominated by their relationship with stone. It was generally assumed that early “Stone Age” men were the first to produce tools; that the first tools were made of stone; that stone tools dominated their world; and that the sophistication of early stone tools demonstrated the mental superiority of early humans.

Stone tools were the only human artifacts that appeared to have survived from the time of early hominins; anything made of organic materials—skin, plant fibers, or wood—had long since vanished. However, in the last 50 years new discoveries by primatologists and anthropologists mean that we now know that none of the assumptions made by nineteenth-century archaeologists are valid.

Apes produce a wide range of tools, so humans cannot be exalted over other animals as the first toolmakers. Most ape tools—spears, chisels, digging sticks, and nests—are made of wood, not stone, and it is highly likely that early hominins would have inherited their woodworking skills from the apes. So the first tools used by early hominins would have been made of wood, not stone. Even the reconstructions of the lives of early hominins that have been made by devotees of stone make it obvious that they used mainly wooden tools—to hunt animals, to dig up plant roots, and to construct shelters—and that they burned wood to keep off predators, keep themselves warm, and cook their food. If we cast our minds back to those dioramas in local museums, for instance, most of the tools they depict were actually made of wood. The men had wooden spears to kill game and used wooden poles to hang it from, and back at camp the women were standing beside wooden huts and cooking their food over wood fires. Stone tools were only used to butcher the animals that had already been killed and to scrape their hides to make skins.

Finally, the first stone tools were hardly sophisticated objects, particularly if we compare them with the artfully constructed nests of apes. The earliest ones, the Oldowan tools, which date from 3.2–2.5 million years ago, often resemble random pebbles, and even the flakes produced by the Acheulean technology, which emerged 2.2 million years ago, are pretty rough and ready. After all they were produced rapidly, simply by hammering two lumps of stone together, or by hitting a piece of stone with a bone or a log of wood. Hand axes, which were first produced around 2 million years ago, certainly look more impressive and show evidence for the first time of clear design. However, even hand axes can be made in as little as 20 minutes and are essentially just tear-shaped flakes of rock with two edges. Their design remained largely unchanged for hundreds of thousands of years, so their manufacture demonstrates little evidence of intellectual progress.

Only much later, with the sophisticated retouching techniques that were developed in the Upper Paleolithic period, around one hundred thousand years ago, did stone tools become sophisticated enough to impress any small child that we might have brought into the museum. Only then did humans shape blades that actually look like modern daggers, harpoons, and barbed arrowheads. So stone tools were by no means as novel or central to the life of early humans as has been assumed.

In any branch of learning, once a culture is established, it seems to be hard for those initiated into it to break free. Anthropologists have continued to this day to overemphasize the importance of stone tools and ignore those made from other materials.

The properties of stones result from their composition; they are made up of crystals or amorphous blocks of inorganic chemicals. In typical igneous rocks, such as granite and dolerite, these have solidified from a molten state, while in flint they have precipitated out of solution. Sedimentary rocks, such as sandstones and shales, are composed of bits of igneous rock that have been pressed together, while chalk and limestone are made from the fossilized inorganic skeletons of dead organisms. The strong bonds between the atoms make stone extremely stiff and hard. This makes it ideal as an impact tool. If you use a stone to strike a nut or hit a piece of bone, it's the nut or the bone that will deform more, and all the kinetic energy in the stone will be used to break them. None will be absorbed by the stone. However, if two stones are hit together, the energy has nowhere else to go, and cracks will readily run through or between the crystals, breaking one or both stones. Stone is brittle and breaks easily, and if there are no predetermined lines of weakness, as in flint, hitting the stones together in the right way can create fractures in a predictable direction, creating sharp edges. The hardness of stone makes these edges ideal for cutting; they can withstand the large compressive stresses set up as the sharp point is pressed into or slid across a softer material such as flesh or even bone and slice through it. This is why sharp flint tools are ideally suited to butchering animals and scraping skins.

The brittleness of stone has a major downside, though. It makes the material weak in tension, since small surface cracks can readily run through the whole stone; rods of stone, just like sticks of blackboard chalk, are easily snapped. Stone knives therefore need to be short and thick to prevent their blades from being loaded in tension, and even if a stone spear could be fashioned, it would be far too delicate to use; it would fall apart at its first throw.

In contrast, we have already seen that wood has evolved in trees to be strong in both compression and tension, and extremely tough along the grain, which is why tree trunks and branches are so good at resisting bending. Dried wooden branches have even better properties, being just as strong and tough as green wood, and three times as stiff. They are, therefore, ideal for making digging sticks and spears: they are rigid and strong in bending, so they don’t flex or break when subjected to bending force; they are tough enough to withstand impacts; and they are still hard enough to pierce skin or soil. They are also relatively easy to make; they can be shaped when the wood is green, when it is still soft enough to be cut, carved, and finished.

It’s thoroughly predictable, therefore, that most of the large tools that the early hominins used were made of wood and only the small cutting tools were made of stone. Their huts would essentially have been inverted versions of the nests built by their ape cousins, and their spears and digging sticks would have been similar to those created and used by savanna chimps. And there was probably little difference in the planning involved in producing wooden and stone tools. The tools that modern apes create are made for the moment and used immediately, either to sleep, dig, or hunt, and they are hardly modified at all from branches and twigs.

The increasing success of humans is best explained by their development of their wooden tools, particularly their weapons. The first real intellectual advance that hominins made must have occurred when our ancestors started to use stone tools not just to process their kills but to construct wooden tools. This would have had to happen when early hominins moved onto the savannas. They would have needed to make thicker digging sticks to get at roots and tubers in the dry season, and larger spears to hunt game bigger than bush babies. And they would have had to use larger branches to construct the huts in which they sheltered.

The first fully terrestrial hominin, Homo erectus, would not have been able to do this without using tools. With their small incisors, they would not have been able to sharpen their spears and digging sticks, and having less powerful arms than their arboreal ancestors, they would not have been able to break off big enough branches to make their shelters. They would have needed to use stone scrapers to sharpen the points of their tools, and to use stone knives, axes, or saws to cut off branches. Homo erectus would have had to become the world’s first carpenters. In doing so they would be the first primates to make a tool not just for immediate use, but to make another tool.

The chimp fashions every aspect of its spear where and when it will be used. It strips off the leaves and side branches of the branch with its hands and sharpens the thin end with its teeth. When hominins made a spear using a hand ax, their actual actions are not necessarily more complex, but the process did involve two separate sets of actions that could take place at different times and places: making a hand ax, and then using it to make the spear. The whole process therefore involved not only integrating information from the past, using so-called working memory, but also imagining future actions, using what has been called constructive memory.

Making the chimp's spear involves 14 steps, which act on three “foci”: the chimp itself, the prey item, and the tool. In contrast, making the human spear took 29 steps, acting on eight foci. The complexity of the task had more than doubled. Early humans may also have carried around hand axes that were made by someone else. The process might, therefore, also show evidence of greater social organization within Homo erectus, not just better individual mental capacity.

We have found no actual wooden objects from the first million years following those first signs of woodworking, so we do not know what tools Homo erectus made. This has led many anthropologists to doubt the importance of wooden tools, and in particular to doubt the hunting capability of these early humans. Until recently, many thought that hominins might have been at best opportunistic scavengers, only able to rob the carcasses of large herbivores, perhaps acting together with small stabbing spears to drive other carnivores away from the prize.

Only when humans colonized the cooler, wetter parts of the globe did conditions allow wooden tools to be preserved. One of the main reasons we have such a fine archaeological record of early humans in Europe is that the wet, acidic peat soils that accumulate in the colder regions protect organic materials such as wood from rotting and preserve them surprisingly intact.  The earliest recorded wooden tool is the Clacton Spear, 450,000 years old.

The sheer number of spears and corpses that have been found suggests that some sites must have been ambush areas; the early humans, who would have belonged either to the species Homo heidelbergensis or to our even closer relative Homo neanderthalensis, must have acted together in a group to cut off the horses between dry land and water before slaughtering them, though the horses had probably not all been killed at the same time. Altogether, the finds speak volumes about the sophistication of these early humans. They were not only capable of fine carving, of imagining the shape of the spears within the trees and shaping them with stone tools, but were also able to organize themselves into efficient hunting parties, to exploit the behavior of their prey animals, and to kill them safely from a distance.

A major finding of the 1990s was that rather than relying on a sharp wooden point, Neanderthals and early Homo sapiens started to haft a sharp stone tool, rather like a hand ax, to the front of their spears, cutting a groove in the end to receive it and holding the blade in place using a combination of animal glue and sinew binding. The manufacture of composite spears was therefore extremely complex, with several separate tasks or “modules”—preparing the rope; boiling up the glue; shaping the stone point; and cutting the groove in the handle—even before the final assembly. This shows even greater organizational and technical ability and intelligence on the part of the Neanderthals. I find it hard to imagine that I would be able to carry out such a complex task without lengthy training.

Although the experimenters were clearly expecting the stone-tipped spears to be better at penetrating flesh, they found little evidence of this. Both wood and stone are harder than skin, so they both cut through it with ease. In some studies, the wooden tips even penetrated farther than the stone ones, though there was some evidence that the wider stone blades could cause damage over a greater volume of flesh.

Composite spears have the disadvantage that the brittle stone tip is more prone to snapping off, so they would need mending more often. The real advantage may have been due to the higher density of the stone. The heavy tip of the spear would bring its center of gravity forward, enabling it to be thrown effectively, while it could also be held and used as a stabbing spear. Composite spears could therefore act both at close quarters and at a distance and be used as both offensive and defensive weapons.

But both wooden javelins and composite spears have a limited killing range. The shortness of our arms means that we need to contract our arm muscles much faster than at their optimal speed to move the hand holding the spear forward and upward. Furthermore, of all the energy used to accelerate our arms and hands, around half is wasted. This limits the speed we can impart to a hand-thrown object, so few people can throw spears of any type more than thirty yards. Fortunately, though, our ancestors developed several ways to overcome this problem and make themselves into more efficient hunters; and most of them did so using techniques that worked by artificially extending the length of their arms.

Early humans threw their javelins with the aid of leather thongs called amenta, which they looped over two fingers. It is also becoming clear that from around twenty-three thousand years ago Upper Paleolithic Homo sapiens did much the same, but using a special tool to hold the string. From the beginning of the twentieth century, archaeologists had been unearthing decorated rods of wood or antler into which a hole had been drilled toward the wider end.

In the past, wood craftsmen had many jobs, still seen in last names derived from wood-based trades: the Carpenters, Wrights, Wainwrights (who made carts and wagons), Bodgers, Bowyers, Fletchers (who made arrows), Turners, Bowlers, Coopers, Sawyers, Foresters, Colliers (who made charcoal), Masons, Millers, Glaziers, Potters, and Smiths (who burned charcoal to heat their furnaces).

Spears, Bows & Arrows

A spear can be thrown even farther and with greater accuracy by using a spear thrower, a device also developed in the Upper Paleolithic and still used extensively in Central and South America (where it is called an atlatl).

The spear thrower is a simple stick 6 to 18 inches long with a cup or hook at the far end. To use it, the thrower is held horizontally under the spear, its hook overlapping the back of the shaft, while the hand holds the shaft farther forward. The thrower acts as a third joint to the arm, the spear or dart being propelled forward by rotating the thrower forward with the wrist at the same time as the arm. The mechanics are identical to those of modern dog-ball throwers.

Yet another technique to increase the killing range of wooden tools was to use the stick itself as an extension to the arm and rotate it forward as it was thrown, like a person throwing a stick for a dog. This technique is fairly effective at increasing the speed of the stick when it is released, but as the stick tumbles through the air, it slows down far faster than a spear because of the increased aerodynamic drag. These problems were overcome, though, by the people who perfected this method, the Aborigines of Australia. They invented a wide range of boomerangs, all of which have a streamlined cross section to reduce drag and help them fly through the air.

Some (the less crooked ones) are designed to fly straight and can be lethal at up to 200 yards. But of all the ways of improving the killing performance of wooden projectiles, the best is the bow and arrow. This combination was probably first invented some 65,000 years ago in Africa, though evidence from Europe only seems to go back some 20,000 years. Rather than relying on the fast-twitch performance of our arm and shoulder muscles, bows make use of the larger forces and greater energy we can produce when these muscles contract slowly. As we pull back on the string, elastic energy is stored in the bow, which is subsequently released when we let go of the string, propelling the arrow forward.

Bows have three major advantages over all the other techniques we have seen. First, since our muscles can produce more energy when contracting slowly, a bow can release more energy to a projectile, so that arrows can be shot over nine hundred feet.

Second, since a bow is drawn with a slow, smooth movement, it can be aimed far better and is a far more accurate weapon than a spear. Finally, since from the front the archer barely seems to move, she or he is far less conspicuous to prey than a javelin thrower, so the bow and arrow makes a much better stealth weapon.

It takes 102 tasks, spread across 10 subassemblies, to make a complete bow and arrow set. The development of wooden weapons had made us an apex predator, allowing us to inflict a mass extinction on the world around us. Even before we had learned to modify our environment by farming it, we had used wooden tools to kill off mammoths and other magnificent beasts in Asia, North & South America, Australia and Europe.

Clearing forests

Only in the last 60 years have we realized how effective Neolithic polished axes could be at cutting wood, and the vital role they played in our rise to civilization. They let us cut down forests, farm, and build towns in wetter areas where burning the forest down wasn’t possible. We developed polished stone axes when the climate changed 15,000 years ago, to open small clearings in forests where fresh regrowth would attract game and to set up camps. Sawing is only good for branches of an inch or less, while axes allowed us to cut down whole trees and build roomy houses out of beams and planks split from tree trunks. Now we could build boats to venture farther away and trade with other peoples. The earliest boats were canoes dug out of a single trunk, and log boats.


After forests could be cleared, crops could be grown. Farming depended on wooden tools as well: wooden digging sticks to plant seeds, spades to dig irrigation canals, and wooden buckets to pour water on crops. Polished stone tools also enabled people to build large homes, fence fields, and make furniture, houseware, boats, and tools.


After using logs and planks from tree trunks to build homes, people discovered coppicing woodlands, which could produce smaller, manageable pieces of wood to build homes faster and more easily. Many trees don’t die when cut down, but resprout shoots that can be harvested repeatedly (e.g. oak, ash, chestnut, hazel, willow). After a while, rods of consistent diameter and length will grow. These shoots grow rapidly because the stump already has a root system supplying water, and the water doesn’t have to be transported up a tall trunk. They grow faster than tree branches and produce more wood per area of ground. Coppiced wood is great for firewood and eventually for producing charcoal. And since the shoots grow so fast, the leaves are farther apart, making a straighter, stiffer, and stronger piece of wood than a branch.

Coppicing is incredibly energy efficient. In the 1650s, people in England and Wales obtained about 20 petajoules of heat energy a year from burning firewood, just over the energy people and farm animals expended on their own metabolism. Burning wood produces about 7.3 megajoules per pound, so that meant about 1.2 million tons of firewood burned a year. A coppiced woodland can produce 2 tons of wood per acre, so only 600,000 acres, or 950 square miles of coppiced woodland, 1.6% of the surface area of England and Wales, could produce this much wood.
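The figures above can be cross-checked with a quick back-of-the-envelope sketch in Python. The tonnage, yield per acre, and heat per pound come from the text; the England-and-Wales land area of roughly 58,000 square miles is my assumption. The total heat comes out in petajoules, consistent with the quoted tonnage and areas:

```python
# Back-of-the-envelope check of the coppicing figures above.
MJ_PER_LB = 7.3                 # heat from burning wood (from the text)
TONS_BURNED = 1.2e6             # firewood burned per year (from the text)
TONS_PER_ACRE = 2               # yield of coppiced woodland (from the text)
ACRES_PER_SQ_MILE = 640
ENGLAND_WALES_SQ_MILES = 58_000 # assumed approximate land area

heat_pj = TONS_BURNED * 2000 * MJ_PER_LB / 1e9  # MJ -> petajoules
acres = TONS_BURNED / TONS_PER_ACRE
sq_miles = acres / ACRES_PER_SQ_MILE
share = sq_miles / ENGLAND_WALES_SQ_MILES

print(f"{heat_pj:.1f} PJ/yr from {acres:,.0f} acres "
      f"({sq_miles:,.0f} sq mi, {share:.1%} of the land)")
```

This reproduces roughly 17.5 PJ a year from 600,000 acres, about 940 square miles, or 1.6% of the land, matching the numbers in the text.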

Peat is another possible fuel, but it was even more uneconomic to move and has only half the energy per kilogram of wood, at 20% of its density, or 10% of the energy per unit volume. Despite that, the Dutch got 25 petajoules of heat a year from peat, 3 times more energy per person than England, and removing peat exposed rich clay soils which were drained and converted to arable farmland. Peat also fired glassworks, potteries, brickworks, saltworks and more. But by 1700 the easy peat reserves were mostly exhausted.
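The volumetric comparison follows directly: a fuel with half the energy per kilogram and a fifth the density delivers a tenth of the energy per unit volume. A minimal sketch, where the peat-to-wood ratios come from the text and the absolute wood figures are my assumptions for illustration:

```python
# Energy per unit volume = (energy per kg) * (density in kg/m^3).
WOOD_MJ_PER_KG = 16.0   # roughly 7.3 MJ/lb converted (assumed round figure)
WOOD_DENSITY = 500      # kg/m^3 for typical air-dry wood (assumed)

peat_mj_per_kg = WOOD_MJ_PER_KG * 0.5  # half the energy per kg (text)
peat_density = WOOD_DENSITY * 0.2      # 20% of wood's density (text)

wood_per_m3 = WOOD_MJ_PER_KG * WOOD_DENSITY  # MJ per cubic meter
peat_per_m3 = peat_mj_per_kg * peat_density

ratio = peat_per_m3 / wood_per_m3
print(ratio)  # peat delivers 10% of wood's energy per unit volume
```

The 0.5 × 0.2 = 0.1 result holds whatever absolute wood figures are assumed, which is why moving peat any distance was such a losing proposition.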

But growing timber for all the non-fuel uses of wood takes a lot longer. Forests can produce only half as much wood a year as coppice, about one ton per acre. Still, that meant that in 1650, 1,400 square miles of land, 4% of the land area, could meet demand. And since forests covered about 10% of the land before the industrial revolution, it would appear not to have been a problem.

Wood Transport to cities & industries

But it was. The problem lay in how hard it was to cut the wood and transport it to where it was needed. It is very time-consuming to harvest, cut into usable pieces, and pack into a small space. Coppiced firewood was cut into relatively straight twigs and bound into faggots 3 feet long and 8 inches in diameter, then put on wagons for transport. Where no rivers were available, the poor state of the roads made wheeled transport slow and expensive, with exorbitant prices if wood was carried more than a few miles. So supplying villages and small towns was possible, but not the larger towns and cities as they grew.

So in medieval Europe, larger cities could only exist at ports or on large navigable rivers. To supply the largest city on the continent, Paris, the whole Seine River basin was adapted to allow the rafting of wood. The same was true on the Rhine, where enormous rafts were floated. By the end of the 18th century, Dutch rafts could be 400 yards long and 90 yards wide.

Providing still more wood for industry was impossible, so industries lay well outside the cities, where the forests were. Glassmakers used potash from burning beech trees; soap was made with potash and animal fats; gunpowder with charcoal from alder wood. And the largest industry, the ironworks, was located where there was both iron ore and forest, with the iron smelted using charcoal of oak, beech, and hazel.

Metal Smelting

Ironically, metals made people even more reliant on wood, and used far more of it for smelting. We were already heating clay to waterproof it: the first clay pots, found in East Asia, date back about 10,000 to 20,000 years. With waterproof pots, we could store food and liquids, and cook food on fires. We also learned how to make bricks. Making really good pots and bricks, smelting metal, and (from around 2300 BC) making glass required charcoal, which could create temperatures up to 1800°F.

Metal axes were far superior to the old polished stone axes, and allowed people to make precise joints such as the mortise and tenon, overlapping joints, and dovetails, allowing plank ships and wheels to be constructed. Finally boats could be built with watertight joints, and ships made much larger and more stable. The Roman Empire couldn’t have existed without the huge plank ships that transported wheat from Egypt to Rome.


Only at around 400°F does heat start to break the wood down. The huge polymer molecules—cellulose, hemicellulose, and lignin—start to split up and to form a wide range of smaller liquid molecules. This process, known to scientists as pyrolysis, releases energy, which for the first time starts to generate heat to drive the burning. As the temperature rises further from 400°F to 600°F, these small molecules evaporate, and some of them react with the oxygen in the air to produce a flame, generating further heat. Some of the gases escape, however, along with some carbon particles, and are released as smoke. Finally, when the breakdown of the cell wall has been completed, only carbon is left; the wood has been transformed into charcoal.

Unlike the volatile chemicals produced by pyrolysis, the carbon does not evaporate and only burns when the temperature reaches 900°F; it reacts with oxygen at its surface to produce carbon dioxide and energy. Since nothing evaporates from the charcoal, however, no flame is produced and there is no smoke, which is why the embers of a fire just glow red-hot.

Material life was much the same from the Iron Age until the industrial revolution

As you can see in museums and living history attractions, people depended heavily on wood. Homes were made of wood or had wood frames roofed with wooden shingles. The furniture was almost all wood: the beds, tables, chairs, and cupboards, and kitchenware such as barrels, jugs, cups, bowls, and spoons. Their fuel was piles of logs to heat homes and cook food. Farm carts and wagons were wood, as were tool handles, plows, hay rakes, mattocks, and scythes. The power plants of the day, water mills and windmills, were nearly entirely made of wood. Non-wood items such as iron cutting tools or iron pots and pans had been smelted with wood charcoal. Clothes were spun on wooden spinning wheels, and leather tanned with tree bark. And wood was burned to make salt, brew beer, and more.

So of course the rich favored glass, pottery, and metal objects, since commoners could not afford them. Though these finer items contained little wood themselves, the enormous amount of wood it took to make the charcoal to produce them would have left the poor colder and less well sheltered.

Wood has many disadvantages that metals, plastics, and other materials eventually overcame. Wood isn’t great for complex three-dimensional items, it can’t be molded into shape like clay or metal, and it’s hard to join pieces together. Because it’s weaker and more brittle across the grain than along it, wood is hard to carve and vulnerable to splitting.

Iron was better than copper or tin because it is far more common and could be mined and smelted locally. It also had better mechanical properties and could be made into finer and harder-wearing cutting tools, especially from bar iron.

What about stone homes?

The impermanence of wood led many cultures to attempt stone buildings. But in the end these weren’t watertight or large, and usually ended up housing the dead. Large stone buildings can be made, especially round ones, or by using wood but hiding it from view. That is why the Notre-Dame Cathedral in Paris burned so spectacularly: above the stone vault, the actual roof was held up by giant wooden trusses made of huge tree-trunk-size beams.

Stone buildings are perfect for Italy, where they stay cool on hot summer days. But in Northern Europe stone loses heat rapidly and, once cold, takes ages to heat up again. This was somewhat overcome by hanging tapestries on the walls, and later by wood paneling, since wood is a far better insulator than stone: its innumerable tiny air spaces restrict heat flow. In fact, wood is 10 times better at stopping heat loss than stone.

Types of trees and their wood qualities

To withstand high winds, large broad-leaved canopy trees produce wood with large water-conducting vessels and fairly hollow fibers, giving them a medium specific density of around 0.5 (oak, ash, beech, pine, spruce, fir). Understory trees are shorter, need less water, and so are slower growing and longer lived, producing denser and harder timber (holly, dogwood). Fast-growing pioneer trees that colonize open ground (birch, poplar, maple, aspen, willow) have wide vessels and thin-walled fibers to enable rapid growth, and a low specific density of around 0.35. Tropical rain forests have slow-growing trees with a density of up to 1.0 (ebony and ironwood), so heavy they sink in water.

Color: this varies a lot depending on the defensive chemicals used, such as tannins and phenolics to kill fungal diseases and prevent rot. The longer a tree lives, and the warmer the climate, the darker the wood from defensive chemicals.  So oaks and cedars have the darkest and most durable timber, and flimsier poplars and willows lighter wood.

Carpenters and green woodworkers use mostly medium-density wood from large canopy trees: oak and cedar for buildings, ships, and carts that might get wet, or ash and beech for tools and indoor furniture.

Wood for Ship’s Masts and American Independence

In Britain the problem of obtaining masts became acute. The country had tree cover below 10%, and its forests had long since been put under management. Few conifers grew there, and no trees tall and straight enough to be made into ships’ masts. Even by the sixteenth century, Britain had been forced to obtain almost all its masts from the countries adjoining the Baltic Sea. The problem was that the fleets of its northern rivals, Holland and Sweden, were always threatening to cut off this supply, and in any case tall trees were becoming scarcer and more expensive. And Australian gum trees were useless for masts.

The old-growth forests of New England contained huge, straight-trunked eastern white pine trees in seemingly limitless numbers. From the mid-seventeenth century onward these trees, which could grow up to 230 feet tall with a diameter of over five feet, became the tree of choice for the British navy.

Unfortunately, in seeking to secure their supply of masts, the British government made a series of policy blunders that were to have disastrous consequences. They had difficulty buying tree trunks on the open market because the colonists preferred to saw them up for timber; this was after all a much easier way of processing them, considering their huge size, rather than hauling the unwieldy trunks for miles down to navigable rivers. The British could have bought up areas of forest and managed them themselves, but instead, in 1691 they implemented what was known as the King’s Broad Arrow policy. White pine trees above 24 inches in trunk diameter were marked with three strokes of a hatchet in the shape of an upward-pointing arrow and were deemed to be crown property.

Unfortunately, this policy soon proved to be wildly unpopular and totally unenforceable. Colonists continued to fell the huge trees and cut them into boards 23 inches wide or less, to dispose of the evidence. Indeed, wide floorboards became highly fashionable, as a mark of an independent spirit. The British responded by rewriting the protection act to prohibit the felling of all white pine trees over 12 inches in diameter. However, because trees were protected only if they were not “growing within any township or the bounds, lines and limits thereof,” the people of New Hampshire and Massachusetts promptly realigned their borders so that the provinces were divided almost entirely into townships.

Many rural colonists just ignored the rules, pleaded ignorance of them, or deliberately targeted the marked trees because of their obvious value. The surveyors general of His Majesty’s Woods, employing few men and needing to cover tens of thousands of square miles, were almost powerless to stop the depredations of the colonists, and the local authorities were unwilling to enforce an unpopular law. The situation reached a crisis in 1772, exactly when the Chemin de la Mâture was being completed, with the event known as the Pine Tree Riot.

News of the riot spread around New England and became a major inspiration for the much more famous Boston Tea Party in December 1773. The Pine Tree Flag even became a symbol of colonial resistance, being one of those used by the revolutionaries in the ensuing War of Independence. Designed by George Washington’s secretary Colonel Joseph Reed, it was flown atop the masts of the colonial warships.

The British were forced to use smaller trees from the Baltic for their masts, and had to clamp together several trunks with iron hoops to construct “made masts.” This arrangement was at best unsatisfactory, and many British ships spent most of the ensuing war out of action in port with broken masts. To make matters worse, the colonists started to sell their pines to the French, who had opportunistically sided with the rebels.

Without Britain’s usual naval superiority, America prevailed and became independent in 1783.

Woodcraft before coal started the Industrial age

Even the simplest of wood items took a long time to make with the hand tools of a carpenter: time to cut the wood to size, to make the joints with careful measurements, markings, and finally cutting, and to make animal glues not nearly as strong as today’s. A door would take several days, on top of selecting and cutting down trees, having sawyers cut them into planks, and years of drying the wood. Even wealthy households had very little furniture; chairs, tables, and chests were expected to last for generations.

Wheels took several days, carts several weeks, and ships years to construct.

If a craftsman came up with an innovation, it wasn’t likely to spread to others. Crafts were handed down through the generations. Techniques were learned by watching over many years in apprenticeships, not from written instructions. In many ways, following past traditions can maintain high standards and avoid mistakes, but it limits innovation, especially since improvements were kept secret from outsiders or even within a guild.

A lack of scientific understanding of the properties of wood was especially a problem for ships, which for most of history let in quite a lot of water since the joints between planks weren’t waterproofed. Finally, in 1805, diagonal bracing was invented, allowing ships to become much larger, sturdier, and watertight, but soon afterward ships were mainly made from iron.

Coal begins the Industrial Revolution

Coal has 5 times more energy than wood and 50 times more than peat. Great Britain had huge reserves of coal near ocean and river transport. Its use went from 150,000 tons a year in 1600 to 500,000 tons in 1700, enabling London’s population to grow from 200,000 to 575,000. The iron, glass, salt, and other industries far away in the forests moved to London, burning huge amounts of coal in addition to that used in homes for heating and cooking.

The Royal Society began publications of DIY manuals on smithing, joinery, bricklaying and more, allowing innovations to spread quickly.

To transport all the new goods being made, new canals were built.

For a while ironmaking was held back by limited amounts of wood charcoal, but then the kinks of using coal were figured out (explained in more detail in the book).  Iron pots, pans, fireplaces, and of course cannons – 14,000 of them were made to win wars.

America had so much wood that it wasn’t until 1850 that coal finally overtook wood charcoal in making iron.  Even the steam engines burned wood rather than coal almost until the 20th century.

Chapter 12 Wood in the 19th century

As mentioned earlier, timber is prone to splitting, difficult to join, flammable, and vulnerable to warping & rotting in the open air. So you’d think that by the end of the 18th century iron bridges and other infrastructure would have replaced wood, but cast iron is brittle and breaks when stretched, starting from cracks. It couldn’t be used for chains, or for beams that might bend or have to withstand impacts. As a material, it couldn’t replace wood, and was only safely used to replace masonry, in pillars for instance.

It was wrought iron that changed everything: bridges of record length, buildings of unprecedented size, and gigantic ships that were finally watertight and far better protected from cannonballs, able to destroy any wooden ship. Wrought iron is 10 times stiffer than wood, up to 3 times as strong in tension, and 10 times as tough. It could also be made in large quantities and large pieces, and into chains for a new type of bridge: the suspension bridge. Railways hadn’t been able to rely on cast iron, but wrought iron was so successful that huge locomotives could be built, with wrought-iron boilers less likely to explode and wrought-iron rails that could handle heavy trains. Greenhouses were built too.

Wrought-iron precision machinery could manufacture goods once made by hand, and overcame the difficulty of joining wood with metal joints, rods, and eventually nails. Mines could go deeper, with wrought-iron rods linking latticeworks of timber.

By 1830, mass-produced nails were already revolutionizing how homes were built. Instead of logs cut into heavy beams, precision steel saws cut logs into thin planks and two-by-fours that were nailed together to make light framing, allowing walls to be fully assembled on the ground, lifted into place, and nailed. The whole structure was sheathed on the outside in planks and on the inside with boards, with insulation placed in between to keep the house warm in winter and cool in summer.

This enabled American settlers to be cheaply housed, and most Americans live in wood-framed homes today, further improved with the invention of the wrought-iron screw, which made houses, furniture, and fencing faster and cheaper to construct.

Books and newspapers benefited from cheap ways to turn wood pulp into paper. Today over 440 million tons of paper are made a year.

But coal and iron can’t hold a candle to the world that we know today, dependent on petroleum and its ability to make massive amounts of steel, plastic, and concrete (the making of which is described on pages 228-229).

Plastic was a miracle product because it could be poured into molds, while complex objects made of wood would take a considerable amount of time to carve. Some plastics are stronger than wood yet much lighter than iron or steel.

Plywood overcame the tendency of wood to split along the grain, and could be bent and molded into two- and even three-dimensional curves. In the 1920s, glued veneers solved the waterproofing problems in construction. Chipboard and fiberboard have their uses too. And wood-laminate architecture is leading to the construction of wooden high-rise buildings, such as an 18-story tower in Norway. It weighs just a fifth as much as a conventional concrete-and-steel structure, uses half as much energy to construct, and is more resistant to fire than steel frames.

Wood is not obsolete, quite the opposite: 1.9 billion cubic yards were used in 2018, more than the 1.7 billion cubic yards of cement.

Plantation forestry

Monocultural stands of trees are especially vulnerable to wind damage, fungal diseases, and pests, which can destroy whole forests. Not much can be done, since trees have too long a life cycle to be selectively bred for disease resistance. To work around this, exotic trees are often planted, like the Monterey pine, but they bring with them exotic pests and diseases that can kill native trees not adapted to cope with them. To name a few: ash trees killed by ash borer beetles and the fungus Chalara; American chestnuts killed by chestnut blight from Japan; and hundreds of tree species around the globe now threatened by Armillaria root rot, which kills everything from conifers to eucalyptus. Indeed, only larches and birches appear to have any resistance.

Forests don’t fit into the short time and scale of the modern world.  It takes decades to grow trees.

And if the goal is being carbon neutral, forget it. It takes fossil energy to harvest, transport, and machine trees. The most energy-intensive step of all is kiln drying. The energy to evaporate water is about 1 megajoule per pound (MJ/lb), and newly felled wood holds so much water that kiln drying makes up the lion’s share of the energy embodied in wood products, about 4.5 MJ/lb of dry wood.
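Those figures are roughly consistent if you assume that green wood carries about as much water as dry wood by weight, and that only about a quarter of the kiln’s heat actually goes into evaporating water. Both assumptions are mine for illustration, not figures from the text:

```python
# Rough plausibility check of the ~4.5 MJ per lb of dry wood figure.
LATENT_HEAT = 1.0       # MJ per lb of water evaporated (from the text)
MOISTURE_CONTENT = 1.0  # lb of water per lb of dry wood in green timber (assumed)
KILN_EFFICIENCY = 0.25  # fraction of kiln heat that evaporates water (assumed)

energy_per_lb_dry = MOISTURE_CONTENT * LATENT_HEAT / KILN_EFFICIENCY
print(f"~{energy_per_lb_dry:.1f} MJ per lb of dry wood")
```

Under these assumptions the estimate comes out around 4 MJ/lb, in the same ballpark as the 4.5 MJ/lb cited, suggesting that kiln inefficiency, not just the latent heat of water, drives the cost.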










Posted in Energy, Jobs and Skills, Life After Fossil Fuels, Wood

Tree planting is not a simple solution

Preface.  The article lists both negative and positive outcomes from where trees are planted. Here are a few of them.

Unintended negative effects: 1) reduced water supply, 2) destruction of native grasslands and spread of invasive tree species, 3) displacement of farmland

Potential beneficial outcomes: 1) greater carbon storage, 2) greater water storage, 3) reduced soil erosion, 4) increased biodiversity, 5) A source of food, wood, and shade, 6) income generation

Alice Friedemann, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer), “Barriers to Making Algal Biofuels”, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report


Holl KD, et al. 2020. Tree planting is not a simple solution. Science 368: 580-581

A plethora of articles suggest that tree planting can overcome a host of environmental problems, including climate change, water shortages, and the sixth mass extinction. Business leaders and politicians have jumped on the tree-planting bandwagon, and numerous nonprofit organizations and governments worldwide have started initiatives to plant billions or even trillions of trees for a host of social, ecological, and aesthetic reasons. Well-planned tree-planting projects are an important component of global efforts to improve ecological and human well-being. But tree planting becomes problematic when it is promoted as a simple, silver bullet solution and overshadows other actions that have greater potential for addressing the drivers of specific environmental problems, such as taking bold and rapid steps to reduce deforestation and greenhouse gas emissions.

These ambitious tree-planting efforts are well-intentioned and have numerous potential benefits, such as conserving biodiversity, improving water quality, providing shade in urban areas, and sequestering carbon. Nonetheless, the widespread obsession over planting trees can lead to negative consequences, which depend strongly on both how and where trees are planted. For example, whereas tree planting often enhances floral and faunal diversity, planting trees in historic grasslands and savannas can harm native ecosystems and species. Likewise, trees are often suggested as an important income source for small landholders but may increase social inequity and dispossess local people from land if tree-planting programs are imposed by governments and external investors without stakeholder engagement. Repeatedly, top-down reforestation projects have failed because the planted trees are not maintained, farmers use the land for livestock grazing, or the land is recleared.

The massive Chinese government Grain-for-Green tree-planting program, which cost an estimated $66 billion, illustrates a number of these trade-offs. The program is credited with increasing tree cover by 32% and reducing soil erosion by 45% in southwestern China over a 10- to 15-year period. But like many large-scale reforestation programs, most new tree cover is composed of one or a few non-native species that have much lower biodiversity than native forests. Moreover, large-scale tree planting in the semiarid Loess Plateau in central China has reduced river runoff and in turn the amount of water available for human activities, owing to the large amount of water transpired by rapidly growing trees. Most of the trees for this program were planted in former agricultural land, resulting in a 24% decrease in cropland. During the same time period, native forest cover decreased by 7%. This illustrates a major overarching concern about tree planting, which is the displacement of agriculture from the land being reforested to areas occupied by native forests, thus resulting in further deforestation.

Reforestation projects can be an important component of ensuring the well-being of the planet in coming decades, but only if they are tailored to the local socioecological context and consider potential trade-offs. To achieve the desired outcomes, tree-planting efforts must be integrated as one piece of a multifaceted approach to address complex environmental problems; be carefully planned to consider where and how to most effectively realize specific project goals; and include a long-term commitment to land protection, management, and funding.

The first priority to increase the overall number of trees on the planet must be to reduce the current rapid rate of forest clearing and degradation in many areas of the world. The immediate response of the G7 nations to the 2019 Amazon fires was to offer funding to reforest these areas, rather than to address the core issues of enforcing laws, protecting lands of indigenous people, and providing incentives to landowners to maintain forest cover. The simplistic assumption that tree planting can immediately compensate for clearing intact forest is not uncommon. Nonetheless, a large body of literature shows that even the best-planned restoration projects rarely fully recover the biodiversity of intact forest, owing to a lack of sources of forest-dependent flora and fauna in deforested landscapes, as well as degraded abiotic conditions resulting from anthropogenic activities.

Tree planting is not a substitute for taking rapid and drastic actions to reduce greenhouse gas emissions. Certainly, planting trees in formerly forested lands is one of the best options to offset a portion of anthropogenic carbon emissions, but increasing global tree cover will only constitute a fraction of the carbon reductions needed to keep temperature increases below 1.5° to 2°C. Potential carbon sequestration estimates of increasing tree cover range more than 10-fold, depending on assumptions about the rate of carbon uptake, the amount of land considered appropriate for reforestation, and how long those trees remain on the land. Moreover, much uncertainty remains about how much carbon trees will sequester in the future, given that increasing drought and temperatures from climate change can lead to substantial tree mortality either directly or indirectly through feedback loops involving fire and insect outbreaks. Conversely, some high-latitude areas that were unsuitable for trees may become favorable in the future.

Maximizing the benefits of tree planting requires balancing multiple ecological and social goals to prioritize where to increase tree cover regionally and globally. Some global maps estimate potential land area for reforestation without factoring in that people need places to live, produce food, and extract natural resources. Large-scale reforestation may be feasible in some areas, particularly those in public ownership, but reforestation will mostly occur in multiuse landscapes. Several recent studies suggest that prioritizing forest restoration on the basis of criteria, such as past land use, potential for natural regrowth of forest, conservation value, and opportunity cost from other land uses, can increase feasibility and improve reforestation success. For example, choosing appropriate locations for tree planting in the Brazilian Atlantic Forest biome can triple conservation gains and halve costs. Large-scale planning is more likely to result in successful reforestation projects over the long term and prevent deforestation elsewhere. But recognizing competing land uses means that the actual land area feasible for reforestation is much lower than the amount proposed by some ambitious global reforestation maps and national commitments.

Successful tree planting requires careful planning at the project level, which starts by working with all stakeholders to clearly identify project goals. People plant trees for many different reasons, such as restoring forest, sequestering carbon, providing income from timber harvesting, or improving water quality. A single tree-planting project may achieve multiple goals, but it is rarely possible to simultaneously maximize them all, because goals often conflict, and prioritizing one goal may result in other undesirable outcomes. Clear goals are key to being able to evaluate whether the project was successful and to selecting the most cost-effective way to increase the number of trees. For example, if a primary project goal is to restore historically forested habitat, simply allowing the forest to regrow naturally often results in the establishment of more trees at a much lower cost than actively planting trees, particularly in locations with nearby seed sources and less-intensive previous land use. By contrast, if the goal is to provide landowners with fruit trees or species with valuable timber, then plantations of non-native species may be the most suitable approach. Many additional questions must be addressed prior to project implementation, such as potential unintended consequences of tree planting, which species to plant, how landowners will be compensated for lost income, and who is responsible for maintaining trees over the long term.

Most projects set targets of how many trees to plant, rather than how many survive over time or, more importantly, whether the desired benefits are achieved. By contrast, most tree-planting goals, such as carbon sequestration and providing timber and non-timber forest products to landowners, require decades to achieve. This short-term view has resulted in large expenditures on tree-planting efforts that have failed. For example, approximately $13 million were spent to plant mangrove trees in Sri Lanka following the Indian Ocean tsunami in 2004, yet monitoring of 23 restoration planting sites five or more years later found that more than 75% of the sites had <10% tree survival because of poor project planning and lack of seedling maintenance.

Hence, successful tree-planting projects require a multiyear commitment to maintaining trees, monitoring whether project goals have been achieved, and providing funding for corrective actions if they are not. Using this adaptive management approach will certainly increase the price tag of tree planting, but it is money better spent than simply planting trees that mostly do not survive.

To realize the potential benefits of increasing tree cover, it is essential that tree-planting projects include thorough goal setting, community involvement, planning, and implementation, and that the time scale for maintenance and monitoring is sufficient. Otherwise the extensive human energy and financial resources invested in tree planting are likely to be wasted and have undesirable consequences, thus undermining the potential of this activity to deliver the expected environmental benefits that are critically needed for humans and nature in this time of rapid global change.


Gen IV SMR nuclear reactors

Preface. Conventional oil, which supplies over 95% of our oil, may have peaked in 2008 (IEA 2018) or 2018 (EIA 2020). We are running out of time. And is it really worth building small modular reactors (SMRs) given that peak uranium is coming soon? And until a solution for nuclear waste disposal exists, they should be on hold, as they are in California and 13 other states.

And since trucks can’t run on electricity (When Trucks Stop Running: Energy and the Future of Transportation 2015, Springer), what’s the point? Nor can manufacturing be run on electricity or blue hydrogen (Friedmann 2019). Once oil declines, the cost to get uranium will skyrocket since oil is likely to be rationed to transportation, especially agriculture.


Cho A. 2020. Critics question whether novel reactor is ‘walk-away safe’. Science 369: 888-889

Engineers at NuScale Power believe they can revive the moribund U.S. nuclear industry by thinking small. Spun out of Oregon State University in 2007, the company is striving to win approval from the U.S. Nuclear Regulatory Commission (NRC) for the design of a new factory-built, modular fission reactor meant to be smaller, safer, and cheaper than the gigawatt behemoths operating today (Science, 22 February 2019, p. 806). But even as that 4-year process culminates, reviewers have unearthed design problems, including one that critics say undermines NuScale’s claim that in an emergency, its small modular reactor (SMR) would shut itself down without operator intervention.

NuScale’s likely first customer, Utah Associated Municipal Power Systems (UAMPS), has delayed plans to build a NuScale plant, which would include a dozen of the reactors, at the Department of Energy’s (DOE’s) Idaho National Laboratory. The $6.1 billion plant would now be completed by 2030, 3 years later than previously planned, says UAMPS spokesperson LaVarr Webb. The deal depends on DOE contributing $1.4 billion to the cost of the plant, he adds.

In March, however, a panel of independent experts found a potential flaw in that scheme. To help control the chain reaction, the reactor’s cooling water contains boron, which, unlike water, absorbs neutrons. But the steam leaves the boron behind, so the element will be missing from the water condensing in the reactor and containment vessel, NRC’s Advisory Committee on Reactor Safeguards (ACRS) noted. When the boron-poor water re-enters the core, it could conceivably revive the chain reaction and possibly melt the core, ACRS concluded in a report on its 5–6 March meeting.

NuScale modified its design to ensure that more boron would spread to the returning water. The small changes eliminated any potential problem, says José Reyes, NuScale’s co-founder and chief technology officer. However, at a 21 July meeting, ACRS concluded that operators could still inadvertently drive deborated water into the core when trying to recover from an accident.

The issue pokes a hole in NuScale’s credibility, says Edwin Lyman, a physicist with the Union of Concerned Scientists. “This is a case of the public relations driving the science instead of the other way around,” he says. Sarah Fields, program director of the environmental group Uranium Watch, says the safety questions argue against NuScale’s request to operate without an emergency planning zone. “That’s a crazy thing to do for a reactor design that’s totally new and with which you have no operating experience.”

NRC plans to publish its safety evaluation report next month, and by year’s end it is expected to issue draft “rules” that would essentially approve the design. But that won’t end the regulatory odyssey. The current design specifies a reactor output of 50 megawatts of electricity, whereas the UAMPS plan calls for 60 megawatts. The change requires a separate NRC approval, Reyes says, during which NuScale will resolve the outstanding technical issues. That additional 2-year review should start in 2022.


  • EIA. 2020. International Energy Statistics. Petroleum and other liquids. Data Options. U.S. Energy Information Administration. Select crude oil including lease condensate to see data past 2017.
  • Friedmann J, et al. 2019. Low-carbon heat solutions for heavy industry: sources, options, and costs today. Columbia University.
  • IEA. 2018. International Energy Agency World Energy Outlook 2018, figures 1.19 and 3.13. International Energy Agency.

Australian Senate hearings on Peak Oil & Transportation 2006

Preface. This post summarizes two of the nine Senate hearings on peak oil held in Australia in 2006. Someday historians may want to know which politicians knew about the energy crisis and when they knew it, probably to blame them for doing nothing, even though there was not much they could do.

There is also a pdf here about peak oil from Feb 7, 2007: Australia’s future oil supply and alternative transport fuels, which Australians may find of interest, and a summary here as well:

Bakhtiari addresses the Australian Senate Committee


2006 Summary of two of nine Australian Senate hearings on Peak Oil


Official Committee Hansard, Senate Rural & Regional Affairs & Transport References Committee

Australia’s future oil supply and alternative transport fuels

Australia’s future oil supply and alternative transport fuels, with particular reference to:

  • projections of oil production and demand in Australia and globally and the implications for availability and pricing of transport fuels in Australia;
  • the potential of new sources of oil and alternative transport fuels to meet a significant share of Australia’s fuel demands, taking into account technological developments and environmental and economic costs;
  • flow-on economic and social impacts in Australia from continuing rises in the price of transport fuel and potential reductions in oil supply; and
  • options for reducing Australia’s transport fuel demands.

BENNETT, Dr David, Founder, Sustainable Transport Coalition

BEVERIDGE, Mr Andrew, Project Manager, Commercialisation, Office of Industry and Innovation, University of Western Australia

BOWRAN, Dr David, Grains Industry Development Director, Department of Agriculture and Food, Western Australia

FLEAY, Mr Brian Jesse, Private capacity.

HARRIES, Professor David, Director, Research Institute for Sustainable Energy.

HEAD, Mr Glen Michael, Director, Perth Fuel Cell Bus Trial and Transport Sustainability, Department for Planning and Infrastructure, Western Australia

IRESON, Mr Gary, Director, Gas and Power, Wesfarmers Energy, and President, LPG Australia

RICE, Mr David, Principal Network Planning Officer, Department for Planning and Infrastructure

ROBINSON, Mr Bruce, Convenor, ASPO Australia

ROSSER, Mr Matthew, Chair, Sustainable Energy Association, Western Australia

UPTON, Mr Michael Leslie, Manager, Vehicle Policy, Royal Automobile Club, Western Australia

WOOLERSON, Mr Tim, Bus Fleet Manager, Public Transport Authority

WORTH, Dr David John, Convenor, Sustainable Transport Coalition

Mr. Robinson – There are large numbers of solutions as to how we can do things better. The clearly sensible thing to do is to put up the fuel tax, which I hope we come to later. The clearly sensible thing to do as a politician is to avoid mentioning that. The only way we can do that is to engage the community. We had petrol rationing during the war. If people understand the situation then, firstly, they will think of a lot of ways that they can lower their own oil vulnerability. They can do their own risk assessment. There will be a whole growth industry of consultants who can go around and help people go through that—and ASPO Australia is hoping to be part of that, because no-one else is. Also if people understand they can look at things as people have done in wartime and other times to change the situation.

It starts at a sensible, professional level—not just saying that $3 a liter is unacceptable, which we heard a community service organization say. We have to accept the scenario that these things might happen and we have to have a plan B. We might have something like the hurricane that hit New Orleans, and at federal, state and local levels the US was shown worldwide to be completely bloody useless. They had rows of buses sitting in a lake when there was no transport to take people out from nursing homes. We are going down that road but if, from this Senate inquiry, we can engage the community there are all sorts of plan B’s from oil vulnerability assessments. That is crucial. We cannot just go back to talking about whether we do biodiesel or fish and chip shop oil.

Mr Head—I would like to respond to both senators’ questions about market and market failure and then lead into a potential government response to that. One of the concerns is that there is massive investment at the moment in the status quo. We have our transport companies investing in their production plant for 10 to 15 years out and airline companies investing 20 years out. We know that any societal change to a new technology has very long lead times. We have discussed natural gas for vehicles, LNG and CNG, and these lead times are significant and substantial. That means that the companies and the markets that the economists are relying on to take the lead are going to play out their existing hand of cards for as long as they possibly can. I respectfully suggest that they might not want to look at a different set of cards until retail prices have doubled or tripled.

As a taxpayer, I would like someone in the political sphere to stand up and say, ‘This is the future that is coming—whether it is today, whether it is tomorrow, it is coming.’ We can deal with that when it arises, at the point where we will be paying $1 billion per year every single year, or we can perhaps invest a few billion dollars up front and make sure that never happens. You will only get a rational analysis of that from the taxpayers and the voters if they are informed.

This brings it back to the points that have already been raised about oil vulnerability maps and the level of public engagement we need. We need this like we have never needed it before. We have to be innovative. We have to not go to people with information but engage them in an intellectual way and also at an emotional level. People have to understand the consequences. For all those reasons we cannot rely on the markets. The government does need to take a strong lead.

Prof. Harries—I liken our current situation with oil to the situation we were in with electricity going back many decades. We had monopoly providers and there was very little planning. We virtually said: ‘What did we do before? Let us build another coal-fired power station.’ Oil has been a far greater problem because we have relied on very large monopoly oil producers from overseas and we have felt (a) that we had very little capacity to do anything and (b) that there was very little need. One of the things that has exacerbated that problem is that at the state government policy level there is actually very little in terms of transport policy. No-one owns the transport policy agenda in this state the way they do for stationary energy. There is no office for sustainable transport policy.

We are facing massive uncertainty. I am personally very reluctant to look at crystal balls and guess what fuel prices are going to be. I think we have to accept that we have got huge uncertainty. We are behind the eight ball in that we have not had the planning systems in place to help us deal with that. The sensible strategy—and I have heard some of them around the table— is to start putting ourselves in a position where we can start planning. That means not just informing the public and working with groups. It also means—and this is very dear to my heart—understanding what we are going to need to have a very flexible approach to be able to deal with that uncertainty. That is going to go right across the board. How can we help companies develop the liquefied natural gas infrastructure they need? What training are we going to need? What skills are we going to need? What information are we going to need to be able to help us move when we need to move?

My plea is that we are going to have to look at our information needs and how we can address them. As politicians you have a very unenviable task because of that uncertainty and because of the limited planning capability we have. It is going to be very hard for you to engage the public—‘Hey, we have a problem; let’s do something.’ I think we are going to have to take it step by step. We need to look at what we are missing—what are the gaps. We need to do a real SWOT planning analysis of what we need to know.

Mr Rice—We are working on producing oil vulnerability maps for Sydney, Melbourne, and Perth. Are they vulnerable because they do not have access to public transport? In which case, we can start long-term strategic planning.

Mr Fleay—There is some real landmark guidance as to how to go. The first thing I want to deal with is how in all of the discussion here people have come up with problems and things that need to be done. The problem with all of those things is that they impact on all people and all businesses in different ways and on different time scales. There is an inherent complexity in them and all sorts of feedback loops and the like, such that you cannot use a top-down management approach. The very nature of such systems is that no one person can fully understand them. This is some of the modern thinking that arises from the so-called chaos theory. That means that you need a process whereby you can engage all the stakeholders and the public in this process to deal with those things. It was done successfully with the Network City plan, which was quite a significant effort. It has also been done on smaller scales. Basically it involves getting various stakeholders with their different viewpoints to state their case—people giving an overview of things and the like. With the Network City planning, there were 140 tables with eight people each, each with a computer. This enabled good, close dialogue between the people on a table which could then be fed into the whole group. All sorts of solutions emerge from that.

I was also involved in a similar thing on a smaller scale at Geraldton when there was a conflict between road trains and residents. At the end of that meeting, even though there were strong differences, we looked at what it would be like if we continued the way we were and what it would be like if we went down a different pathway. At various stages in the process each side had to argue the other’s case. It reached a point at the end where everybody, all of the 70 people involved, agreed that we could not carry on as we were and that we had to change, and there was a perspective on doing so. This is the way forward. What governments have to do is not manage and come up with solutions but give leadership of this kind in order to get an informed population and to unleash the creativity of people to find solutions. We cannot do it any other way. This is the way forward.

It has to be a continuing process. The essence of it is, I think, that it combines in one process people cooperating and competing at the same time. The two are not mutually exclusive; they are complementary. It is finding that mix that is absolutely essential to seeing the way forward here. We cannot move forward before that, particularly if we start bringing the peak oil into it. If we start to say, ‘It looks as though we’ve got to reduce our use of oil by this amount,’ it puts a perspective on it that enables us to make the change. You have the potential in that for everybody to see that everybody is making adjustments and to see it as just and equitable. That is critical.

In connection with the question of trade, coming back to the theories of economists and economics: the so-called law of comparative advantage, which dates back to the early 19th century, is based on the premise that rather than being self-sufficient, countries should specialize in what they can do best and trade, which of course means increased transport. That is the basis of all free trade and even interstate trade—if you think of each state in this country as being a bit like a country. Ever since the early 19th century, the cost of transport has been diminishing. Initially it was coal fired, then oil came in in the early 20th century. Oil was significantly superior as a transport fuel, especially with the very cheap oil from the giant oil fields which dominated in that period. My view is that that period has come to an end, and therefore we have to start thinking of a focus on being more self-sufficient as a strategy into the 21st century. The old view is losing its validity. However, that is a very complex question because of the great deal of interdependence that occurs around the world. Just to give you an indication, Japan was nearly self-sufficient in grain in 1950, but as a consequence of its industrial development it now imports about 70 per cent of its grain. A similar thing applies to South Korea and so on.

So a huge period of readjustment has to take place, but nobody is giving much thought to that at the moment. I mention in my submission that it is something we need to start thinking about, and we need to start a dialogue with the economists about the deficiencies in their theories regarding the way they handle energy. That must be a part of this. I also mention that it raises the question of the current dependence of food production and the whole food chain to households on fossil fuel energy—mainly petroleum. I also throw in that modern industrial agriculture has been described as a way of using land to convert petroleum into food. I deal with that in my submission. We need to know important information about where all the embodied energy in all these steps is so that we can have a clear picture of which things are the critical ones to tackle first and so we can create a long-term strategy. My view is that the most important use for the remaining oil—the first priority—is that food supply and food chain, for obvious reasons.

Dr. Bennett – I want to move on to alternative fuels. I take a particular interest in biofuels. We recently had a conference, at which Senator Milne spoke, on bioenergy and biofuels. At that conference, the speaker from BP Australia stated that BP would not touch palm oil. This is one of the moral hazards of biofuels. The fact is that the increasing demand for biofuels is now a significant hazard in the preservation of biodiversity and tropical rainforests around the world. Similar activities are taking place in relation to tropical rainforests and sugar cane plantations. It amounts to the fact that the more we make demands on the plant kingdom of this earth in terms of both food and fuel, the more we are going to do damage to it. The situation is that one rates human food first, animal food second and fuel third. It is disturbing to see the diversion of human and animal food into fuel. It seems to me that one of the actions that government can take is to make no more concessions and no more subsidies for the production of biofuels.

CHAIR—Stunned silence.

Mr Rice—We have a problem with obesity—why not turn that into transport, particularly walking and cycling. Again, do not overlook those for personal transport. About half of our trips in Perth—and Perth is a very spread out city—are less than five kilometers long. We could save a huge amount of fuel, we could have big health benefits and we could have social benefits with more eyes on the streets, for relatively little cost. We are talking about the ‘no regrets’ option. We are talking about how you as politicians are going to get some of these things in place. It is not going to be by just saying nasty things like, ‘You’ll have to cope with a big increase in fuel prices.’ It is going to be by saying some useful things as well, like how this is going to benefit people.

If we look at social changes, it is now commonplace to wear seatbelts and it is not commonplace anymore for thinking people to smoke. It may be not commonplace in the future for thinking people to drive V8s unless they absolutely have to. There is an encouragement thing. There is a health thing. There are a lot of pluses in this, particularly if you adopt that broader overall sustainability view of what government is all about. If you do not govern for sustainability, why are you governing at all?

Senator MILNE—I wanted to follow up on the issue of China, because it is very difficult to even contemplate the scale of the global impacts of China and India combined but China primarily. Lester Brown in his book Plan B basically says that at its rate of growth China will absorb virtually all the cereal and grain crops of the whole planet. What else is anyone else going to eat? Plus, they are in the black—America is in the red—and they can afford to buy up as much food-producing land as is necessary… His conclusion is that the current economic model does not work for China and that, as it does not work for China, it does not work for the rest of the world. It is pretty profound to try and take in the scale of the impact. On this question of liquid natural gas, I have seen all the stuff in recent times about Australia touting its liquid natural gas to the US, to China and to anyone who will buy it. I am interested in the collective view here on whether it is appropriate for Australia to be selling—and I understand the WTO rules; let us just put those aside—liquid natural gas?

Mr Ireson—In response to whether it is appropriate to be exporting the LNG that we have: I think the reality is that, without the export income, these kinds of investments would not be undertaken in the first place. The scale of investment that is required to develop these resources is very hard to get your head around. For a country like Australia, without the interest from the oil majors and their seeing this is a country that they want to develop because it has free trade and certainty in terms of taxation treatment and the like, we would not have the developments. Without the developments, we would not have the domestic access to the natural gas that I was talking about earlier that in fact now gives us a competitive advantage against imported diesel. So I think we have to be very careful that we do not isolate ourselves. At the end of the day, it is business on a global scale that we are talking about and it is very hard to be isolated from that.

Mr Head—I am going to be a bit controversial, and Gary may wish to come back on this one. According to data from our Department of Industry and Resources—don’t get hung up on the figures; it is the magnitudes we are looking at here—we have 80 to 130 years worth of natural gas supplies. That is at current levels of use, which we know is not going to happen: demand is going to increase. If we were to translate a significant proportion of our transport task to natural gas, that duration, that window of opportunity, would reduce right back down to between 20 and 50 years, notwithstanding that the peak is going to occur somewhere halfway along that period. With the time lags for introducing new technologies and getting societies to make that transition, it will take us 20 years to get to the point where we are all using natural gas. And then what do we do? We say: ‘Shit, natural gas is running out; we’ll have to do something. We’ll have to introduce the new technology now so it’ll be ready in 20 years time.’ So it kind of makes you think that it might be worthwhile leapfrogging some technologies which we have a pretty good idea are problematic from the point of view of a long-term solution to supply and which also have the greenhouse gas and climate implications. Gary’s point about developing export markets and local markets for it—I cannot see a justification for that.
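Head’s “window of opportunity” point is simple reserves-to-production arithmetic: if annual gas demand multiplies, the number of years the reserves last shrinks by the same factor. A back-of-envelope sketch, where the demand multiplier is an illustrative assumption of mine (only the 80-to-130-year range comes from his testimony):

```python
# Reserves-to-production arithmetic behind Head's "window of opportunity" point.
# The demand multiplier is an assumed, illustrative figure.
reserves_years_now = (80, 130)   # years of gas at current consumption (from the text)
demand_multiplier = 3.0          # assumed rise in use if much of transport shifts to gas

# Years remaining scale inversely with consumption.
window = tuple(round(years / demand_multiplier) for years in reserves_years_now)
print(window)  # falls inside the 20-to-50-year range Head quotes
```

Any multiplier between roughly 2.5 and 4 reproduces his quoted 20-to-50-year window, which is the point: the exact figure matters less than the direction of the arithmetic.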

Mr Fleay—I dealt with this question of alternative fuel in one part of my submission. I finished up with a chart showing the energy-profit ratio on a vertical axis and increasing economic effectiveness on a horizontal axis. The energy-profit ratio is the energy content of the fuel divided by the energy used to get it. The higher that figure, the more useful the fuel. There is a difference in effectiveness. We are never going to see a coal-fired airplane, for example. It is that sort of picture. This chart gives a picture on the basis of the information I have available.
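Mr Fleay’s energy-profit ratio is usually called EROI (energy return on investment) and is a simple quotient. A minimal sketch of the calculation, using purely illustrative numbers chosen by the editor rather than figures from the testimony:

```python
def energy_profit_ratio(energy_delivered_mj: float, energy_invested_mj: float) -> float:
    """Energy content of the fuel divided by the energy used to get it."""
    if energy_invested_mj <= 0:
        raise ValueError("energy invested must be positive")
    return energy_delivered_mj / energy_invested_mj

# Illustrative (assumed) figures: a giant conventional oil field might return
# ~30 units of energy per unit invested; a marginal biofuel might return ~1.3.
giant_field = energy_profit_ratio(30.0, 1.0)
marginal_biofuel = energy_profit_ratio(1.3, 1.0)
print(giant_field, marginal_biofuel)  # the higher the ratio, the more useful the fuel
```

This is why, in the chart Mr Fleay describes, petroleum from giant fields stands out: its ratio dwarfs that of most substitutes.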

What comes out in that picture is that the petroleum products that have been taken from giant oil fields stand out above everything else. Nothing else can match them. It would be useful to look at that. But we do need a lot more information in this country. A lot of work needs to be done to find out what the energy-profit ratios of our various fuels are to update that figure. This is an important task so that we are able to sort the wheat from the chaff, know what you can and cannot do and know what can be used for a transition to help us to get to one point. In this sense, everything that everybody says has some role somewhere in it, including using natural gas as a bridging fuel for transport while we make a lot of other changes. This is the essential point.

Hydrogen is not an energy source—it is an energy carrier. You have to manufacture it. When you look at that aspect of it—and you obviously cannot in the long term think of using a fossil fuel to manufacture it—you find that, with the problems of storing it and the energy needed to compress it and so forth, the prospect of hydrogen being a successful transport fuel is quite remote. You have to have the right approach to make the right sort of analysis. This is important to develop.

Mr Robinson—There is a book called The Hype About Hydrogen which echoes Brian’s point that we need a source for hydrogen, whether we make it from coal or gas or nuclear power. There is no foreseeable source of hydrogen. So we cannot talk about the transition to a hydrogen economy. The hardest thing is not storing the hydrogen but finding it or getting it or making it first. As to the biofuel thing that we were talking about, for instance, if we took all of Australia’s wheat crop, which is on average 20 million tonnes per annum, and turned all of that into ethanol, we would get some nine per cent or 10 per cent of Australia’s oil usage. There would be no bickies in the parliamentary tea rooms and no bread in Woolies. We would not be exporting any wheat around the world. So biofuels have very serious scale limitations. In terms of alternative fuels, I think it is quite clear that conservation is the best alternative fuel—that is, not using it rather than replacing it.
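The scale arithmetic behind Mr Robinson’s estimate can be reproduced as a back-of-envelope check. Apart from the 20-million-tonne wheat crop from the testimony, every figure below is an assumption of the editor (typical mid-2000s round numbers), not a number from the hearing:

```python
# Back-of-envelope check: all figures except the wheat crop are assumed.
WHEAT_CROP_TONNES = 20e6       # Australia's average wheat crop (from the testimony)
ETHANOL_L_PER_TONNE = 375.0    # assumed ethanol yield from wheat
ETHANOL_MJ_PER_L = 21.1        # assumed energy density of ethanol
PETROL_MJ_PER_L = 34.2         # assumed energy density of petrol
OIL_USE_L_PER_YEAR = 50e9      # assumed Australian oil use, roughly 50 GL/yr

ethanol_l = WHEAT_CROP_TONNES * ETHANOL_L_PER_TONNE
# Ethanol carries less energy per litre, so convert to petrol-equivalent litres.
petrol_equivalent_l = ethanol_l * (ETHANOL_MJ_PER_L / PETROL_MJ_PER_L)
share = petrol_equivalent_l / OIL_USE_L_PER_YEAR
print(f"{share:.1%}")  # lands near the nine to 10 per cent quoted above
```

Under these assumptions the entire crop displaces only about a tenth of oil use, which is the scale limitation being described.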

Dr Bennett—In my personal submission I make the point that, for defence reasons, Australia has to do something about its long-term oil resources. I am not quite sure that natural gas is the thing but, if you think back through the wars of the 20th century, they were all essentially about oil. Hitler was stopped on his way to the Caspian Sea at Stalingrad and on his way to the Middle East at El Alamein. China is already pumping oil into the ground as a strategic resource. As far as we could gather from the appropriate government committee, Australia is bound by some international regulation that we have to have 90 days of supply, and most of that has been in the Bass Strait oil pipelines rather than in a standard resource. It seems to me that we have to start thinking very quickly about having a resource. Whether we have that resource as an untapped oilfield, or as an oilfield that has been refilled with oil which has been purchased on the world market, is not up to me to say.

Mr Rosser—I just want to pick up a couple of points on the previous topic. I was at the Farmers Federation conference two weeks ago. The conference was entitled ‘Fuelling the future’. The farmers were very keen to stand up and say that they had no moral obligation to supply food, that they would sell to the highest bidder and if the highest bidder was going to be fuel then so be it, because, when grain hits record low levels, no-one feels a moral obligation to pay them a fair price for their product. They were as one; there was unanimous consensus in that room. I suppose it is something you could only understand by being a primary producer.

Mr Upton—I would like to issue a word of caution about imposing extra taxes and so on. They are obviously a way of changing responses but, no matter how much tax you put on something, you cannot make it happen if the research, the knowledge or the will is not there. I am thinking about what happened in California towards the end of the nineties. They tried to push the introduction of battery electric cars by a whole range of incentives, but the technology was not ready. While manufacturers made electric cars and some people used electric cars, it did not go any further than that because the technology was not at a point where it was usable. I am cautioning that, before taxation is used to change things, you have to do the research to make sure that what you are trying to make happen can happen.

Mr Fleay—To reinforce again what I said about the failure of top-down management processes in these circumstances: imposing taxes and a lot of things of that kind have that character about them because they impact on all sorts of people in different ways. In fact, I agree with Mike Upton. That is why this approach is the key to going forward: to learn something from the lessons that the Department of Planning and Infrastructure have applied here, not because they are perfect but because of their potential if we go down that path. I noticed Senator Sterle said he has a background with the Transport Workers Union. I cannot think of one area of workers who are going to be more seriously impacted in this area. The whole business about mass distance charging for trucks is a classic example of this. That is why you have to have this bottom-up approach where all the stakeholders are involved and you reach just outputs so that they see all the changes that everybody has to make and all the things they have to give up in order to gain something fair and equitable. With all the sorts of issues that we have here, that can only happen by bottom-up participation. Everything that everybody has said has a role to play in this. There is not anything that is totally wrong or totally right. Because of this complexity, you can only handle it in this way.

Mr Head—[In answer to what the barriers were to more fuel-efficient cars] It is our role to support local Australian industries and we have local car makers who have committed to a six-cylinder vehicle platform for the foreseeable future—in other words, eight to 20 years. They are committed to rolling out models based on that power plant and that drive train. That leaves us at a point where we politically have to stick our hands up and say: ‘We’re not going to support local manufacturers. We’re going to import what we think are the right vehicles. Tough for you guys.’ So that is one of the barriers.

Mr Fleay—I assume that demand management in transport is part of the agenda for the topic we are discussing. I would like to say something here about the TravelSmart program and its potential. As a preface, I spent my life working in the water industry here, where we have been battling for 30 years to deal with the question of resource limitations. It is my view, in the context of what we are talking about, that the Water Corporation here now sets an example for all corporations insofar as its commercial advertising pleads with its customers to buy less of its product. For those senators who are not familiar with the TravelSmart program, which has an international reputation and has been copied around Australia, it uses a direct-marketing approach. People are approached individually in their houses to review the way they use their cars as opposed to the alternatives of public transport, walking and cycling. It is a dialogue to change their pattern. It has produced, at a modest level, a very positive result for the people who have participated in terms of reduced car use and increased use of public transport, walking and cycling. Not only that, but the increased revenue from public transport alone has paid for the cost of the program in about 18 months or two years. It also has a very high cost-benefit ratio, which includes the health benefits of increased exercise.

This is the small beginning of demand management in the transport business, which needs to reach the stage that the water industry has reached. However, if the question of future oil supplies were introduced into this, insofar as people go out, talk to others about what can and cannot be done and say, ‘Here is what you can do,’ there is enormous potential for empowering people as a part of this general process of building understanding and creating the climate for the right sort of change. I do not think we should underestimate this. It is an area where the transport industry is a long way behind but where the water industry in this country, and particularly here, has created a change of culture due to the drastic impact of climate change.

Mr Robinson—Andrew mentioned location-specific fuel taxes. This was done in South Australia when the South Australian government had legislative control of what we call the fuel franchise levy. ASPO Australia is suggesting a smart card—a flexible, tradeable, allocation pricing system which can handle emergencies and the location-specific things. People who live near a train station in an urban city should get less of the low-tax petrol. We are taking a model from the water industry in Perth. Domestic water, the amount for basic household necessities, is quite cheap. As you use more and more in a household, you pay more and more incrementally for the units. Those sorts of things can be done. A lot of those things can be done, rather than just going on with business as usual with the fringe benefits tax, whereby everyone in Canberra is driving up and down freeways and lending their cars out so they get over the March rush, or whatever it is called, to get over 40,000 kilometres. Those things are just stupid and perverse, and they are no more market distorting than putting the price of petrol up, particularly in an incremental way whereby people can see where it is going. It is going into the health system, it is going into defence, it is going into all these sorts of things that we need. We need to be following Dr Samsam Bakhtiari’s suggestion. We need to be building Noah’s ark, where people said, ‘There is probably something coming; we need to have the ark well planned and under construction.’ It is bloody hard to build an ark under water. If we wait until peak oil hits us, then we are not going to have the time or the resources to do this.
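The Perth water-pricing model Mr Robinson describes is what economists call an increasing block tariff: a cheap basic allocation, with each further block priced incrementally higher. A minimal sketch of how the same structure could apply to fuel; the block sizes and prices below are invented by the editor purely for illustration:

```python
def block_tariff_cost(units: float, blocks: list[tuple[float, float]], top_price: float) -> float:
    """Total cost under an increasing block tariff.

    blocks: (block_size, unit_price) pairs, cheapest first;
    top_price applies to any use beyond the listed blocks.
    """
    cost = 0.0
    remaining = units
    for size, price in blocks:
        used = min(remaining, size)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            return cost
    return cost + remaining * top_price

# Invented example: first 50 L a week cheap, the next 50 L dearer,
# and everything above that at the highest rate.
fuel_blocks = [(50.0, 1.00), (50.0, 1.50)]
print(block_tariff_cost(40.0, fuel_blocks, 2.50))   # modest use stays cheap
print(block_tariff_cost(120.0, fuel_blocks, 2.50))  # heavy use pays steeply at the margin
```

The point of the design is that basic necessity stays affordable while the marginal price of heavy use rises, which is the "incremental" signal the testimony argues for.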

Mr Rice—Yes. Leach Highway is an issue to us—a huge one. First of all, within the time scale that we have been talking about, and in the time scale of political governments, 100 years or so is what we are really dealing in, so any guidance or direction we can get is really useful. For instance, if we are talking about a mixture of personal transport and freight transport, my logic says the trucks are going to get bigger because they will be more fuel efficient; the cars are going to get smaller because they will be more fuel efficient. There is a safety issue—does that impact on the way we design our roads, for instance? That is a fairly simple one. A more complex one is how we can save fuel in urban freight transport. The answer is not just to put more on rail. That is a part of the answer, and our government is trying to do that. We have a target of lifting the share of containers moving through the Fremantle inner harbour by rail from about three per cent to 30 per cent. But that is going to make a small difference.

What they are also doing is looking at using our roads more sensibly and, implicitly, using our fuel more sensibly by booking the trucks that come in and out of the Fremantle terminal relative to the containers, because surveys have found that a lot of trucks are going in empty to pick up a container and bring it out and they are passing trucks that are doing exactly the opposite. Obviously, there are some improvements that can be made. How do you make those improvements? You need data and you need some level of control. The problems that we are getting with data relate to some extent to the free market forces where competition is good and then the data becomes commercial-in-confidence and we cannot get it. So there is a bigger issue there.

I believe that in an intelligent future the government as a whole—call it Big Brother if you like—is going to need to have some influence on the availability of data, whether it is for personal trips so that we can group more trips together or whether it is for the clumping of bits of freight so that we move away from lots of small, just-in-time deliveries to some efficient, medium-sized deliveries. This is going to have an impact on warehousing, because the central distribution systems that are the current rage, and are logistically reasonably efficient because we have got very cheap fuel, are going to have to change. I believe people are going to have to do more warehousing in their businesses again, like they used to. There are a lot of things that we can do, but we have got to get the intelligence about it in order to be able to do so, and we have got to get some leadership.

There was a very interesting survey that I read some years ago about politicians and leadership and how far in front of the community they were. The thesis was that the politicians were in front of the community, therefore they modified their expectations in parliament and cut them back quite a lot. The survey found that, yes, that was true—but the bite was that the politicians were only a tiny little bit in front of the community and they thought they were a long way in front of the community. So I am saying: have courage, but also be realistic. We can all talk about these things and the greenhouse effect and so on, but if this inquiry is going to have any impact whatsoever you need to build upon some synergies to get through.

One of the synergies that you can build upon is COAG’s interest at the moment in urban congestion and congestion management. If we can better manage congestion, we can better manage fuel. We did a survey in Perth recently—it was a statistically valid survey—in which we asked people: what kind of problems do you see coming from traffic in your area? To our surprise the answer was, clearly, congestion. You might say, if you come from Sydney or Melbourne, that we do not have any congestion, but that was the current perception of the voters. So there is something in congestion management that can be combined with environmental improvement, better use of our roads, something that the community wants and fuel saving, all together. So look for those synergies and pick the low-hanging fruit first.

Dr Worth—I want to come back to my hobbyhorse about government involvement. A lot of what we have heard in the last period on this topic has been about what things government can do and the need for that. A lot of it comes back to market failure, that there is just not enough information for markets to operate efficiently. The point I want to make about why governments need to get involved is around the speed of change. Markets take a long time to move. It took us 17 years to move the car fleet in Australia from leaded fuel to unleaded. The price of oil has tripled in three or four years. I get a sense that people think that it will stop, but it could double in the next year or 18 months. That is a real reason for governments to get involved, to look at demand management as the simplest and cheapest way of cutting fuel use.

CHAIR—We will go around the room now with concluding statements. What is the key thing you would like us to go away from this hearing with today?

Prof. Harries—Underlying everything I have said is the need for us to get information and do research to be able to manage the uncertainty and, as David Worth has said, the problems. Markets do not happen overnight. You have got to actually help the system happen. What we are on about here is trying to make a smooth transition to alternative markets and alternative ways of doing things—and to do that we need information.

Mr Rice—Grab some inspiration. Govern for sustainability. Why else would you govern?

Mr Robinson—It is highly probable, as people have discussed, that there are lots of things we can do to adapt, particularly if we start thinking in advance. A lot of them are very positive for health and the economy. I would like to congratulate the Senate for starting the process. It is an enormous quantum leap in Australia. We should all be trying, particularly in the opportunity with the Senate, to engage the community and decision makers about peak oil.

Dr Bowran—I would like to see appropriate sectoral strategies so that you have actually got a framework to know which parts are going to go forward with particular types of innovations.

Mr Beveridge—First of all we need a national strategy—and that is where the government can play a really crucial part—but one that can be implemented locally, which is key. I see the government as a catalyst for change. It is clear today that we have got a lot of passion from the stakeholders, which is fantastic. We all ought to be congratulated for providing that passion, which is really good. That should be harnessed. We really need to take decisive action because the clock is ticking.

Mr Fleay—The central theme of your report should be issues I have been hammering about engagement of people, providing leadership and participation and avoiding top-down management approaches. That approach, which has shown some benefit here locally—but it is more a question of what it can potentially become than what it has been so far—is the key to pulling together all the points that people have made and being able to engage with people and to get change. If you can get it to a certain point, positive feedback will take place and it will gain its momentum.

Mr Upton—I would say, like others, that it is important to get the information and do the research, to determine what is practical—you have to be pragmatic about these things—and to convince the public. Work with the credible stakeholders that can help you to convince the public what the real issues are and how we can all work together to solve them.

Dr Bennett—I would like to go back to a point that Brian Fleay made: agriculture these days is a process of converting oil to food. Some of the modelling activity by the department of agriculture indicates that in the eastern wheat belt, where there is a significant energy input, it is very likely that, as oil prices rise and climate change proceeds, there will be a process of overshoot and collapse, and that might be the case with a number of other parts of the economy. If you think that, on a world basis, the fact that the use of oil in agriculture has probably allowed the increase of the world population to go from two billion to six billion, then the prospect for the world human population as a consequence of what we are facing is dire.


Official Committee Hansard

Senate Rural & Regional Affairs & Transport References Committee

Australia’s future oil supply and alternative transport fuels

Australia’s future oil supply and alternative transport fuels, with particular reference to:

  1. projections of oil production and demand in Australia and globally and the implications for availability and pricing of transport fuels in Australia;
  2. potential of new sources of oil and alternative transport fuels to meet a significant share of Australia’s fuel demands, taking into account technological developments and environmental and economic costs;
  3. flow-on economic and social impacts in Australia from continuing rises in the price of transport fuel and potential reductions in oil supply; and
  4. options for reducing Australia’s transport fuel demands.


BENNETT, Dr David, Founder, Sustainable Transport Coalition

BEVERIDGE, Mr Andrew, Project Manager, Commercialisation, Office of Industry and Innovation, University of Western Australia

CREEMERS, Mr Alexander Henricus Maria, Private capacity

DeLANDGRAFFT, Mr Trevor Frederick, President, Western Australian Farmers Federation

FLEAY, Mr Brian Jesse, Private capacity

GRIFFITHS, Dr Cedric Mills, Theme Leader, Maintaining Australian Oil Self Sufficiency, CSIRO Petroleum, Commonwealth Scientific and Industrial Research Organisation

HARDWICK, Mr Ross, Executive Officer, Western Australian Farmers Federation

HARRIES, Professor David, Director, Research Institute for Sustainable Energy, Murdoch University

HEAD, Mr Glen Michael, Director, Perth Fuel Cell Bus Trial and Transport Sustainability, Department for Planning and Infrastructure, Western Australia

NEWMAN, Professor Peter William Geoffrey, Director, Institute for Sustainability and Technology Policy, Murdoch University

PYTTE, Mr Anthony Mark, Australia Country Manager, Sasol Chevron Consulting Ltd

RICE, Mr David, Principal Network Planning Officer, Department for Planning and Infrastructure, Western Australia

ROBINSON, Mr Bruce, Convenor, Australian Association for the Study of Peak Oil and Gas

RONALDS, Dr Beverley Frances, Chief, CSIRO Petroleum, Commonwealth Scientific and Industrial Research Organisation

SAMNAKAY, Mr Iqbal, Policy Officer, Transport, Department for Planning and Infrastructure, Western Australia

SCHLAPFER, Dr August, Lecturer, Energy Studies, School of Science and Engineering, Murdoch University

SELWOOD, Mr Richard Neil, Chief Executive Officer, Natural Fuels Australia Ltd

WORTH, Dr David John, Convenor, Sustainable Transport Coalition

Mr Robinson—We will not be in the majority in saying this, but we feel that the fuel price should go up, that there should be a fuel tax escalator along the lines of Margaret Thatcher’s, and that a smartcard, a tradeable fuel allocation system, should be ready in the event of sudden oil shortages. Also, there should be a sensible, rational allocation. I got here today by catching the train. I walked 200 or 300 metres across one road, caught a train here and walked across one or two more roads. Not everyone in the Australian community can do this. People in the farming community cannot do this. So the requirement for fuel varies. I refer to people working on night shifts in hospitals, and people running farms and businesses. Not everyone can have all the fuel that they will need in the future, if there are fuel shortages—and, certainly, that is what we predict.

Mr Fleay—I want to make one comment about biofuels. I am very concerned about some of the propositions that came up about using some microbiological product to take all the waste—to virtually strip the land bare of all so-called wastes—and convert it to ethanol as a way of getting a resource. This has a disastrous impact on soil, because the organic content of the soil is extremely important in providing the environment for the great mass of invertebrate organisms and other things that are critical to soil fertility. This process is, in effect, mining the soil. I have put in a recommendation about having a rigorous approach to assessing these alternative fuels. This includes finding the energy input and energy output and, where you are doing it from crops, including the impact of the process on the soils. We cannot afford to diminish the properties of our soils.

One of the problems that wasn’t dealt with yesterday is the process of funding of transport, federal-state relationships and the whole tax system. The fact that roads are funded from taxes is, in effect, a sort of subsidy, whereas funding for rail is through borrowed funds on which there is interest. This is a very lopsided thing; it is very unbalanced.

Studies done throughout history have found that over the last 2,000 years cities in general are about an hour wide—that is to say, people are prepared to spend about an hour each day traveling to and from work. If people were walking, that determined the size of the city and so forth.
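The "one hour wide" observation (often attributed to Marchetti's work on constant travel-time budgets) turns a daily travel budget and a door-to-door speed into a city radius. A simple sketch, with speeds assumed by the editor for illustration:

```python
def city_radius_km(door_to_door_speed_kmh: float, daily_travel_budget_h: float = 1.0) -> float:
    """Radius reachable if the daily budget covers one round trip (half the budget each way)."""
    return door_to_door_speed_kmh * (daily_travel_budget_h / 2)

# Assumed door-to-door speeds: walking ~5 km/h, driving in urban traffic ~40 km/h.
print(city_radius_km(5.0))   # the compact walking city
print(city_radius_km(40.0))  # the sprawling car-based city
```

With the budget held constant, the city's extent scales directly with travel speed, which is why cheap fast transport produced sprawl and why dearer fuel pushes the other way.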

Mr Robinson—I am concerned that the climate change people do not mention oil depletion and they have scenarios that are unrealistic for the amount of oil. I think it would be really useful if climate change and oil depletion matters for Australia and internationally were looked at together, because a lot of the mitigation and adaptation measures are the same. There certainly should be energy taxes, but we should not tax just carbon, because carbon from oil and natural gas is more valuable than carbon from coal. It should not just be on an atom basis. In a climate change sense they are equally valuable but, in a resource depletion sense, carbon atoms in oil are much more valuable than carbon atoms in coal.

Mr Kilsby—My own background is in transport engineering and urban planning. I would like to highlight some submissions that the urban planning and transport group made to you. There are a couple of points on transport and a couple of points on urban planning that I would particularly like to draw to your attention. On transport, the key points that we wanted to make are that, while the oil position is a national issue, it is in the cities where there are more possibilities of limiting or moderating the demand for oil than in rural and regional areas. Urban transport planning is an issue in which the Commonwealth government ought to take rather more interest than it has to date, if only to make sure that as much oil as possible is available in rural and regional areas.

Another key point on transport, as you have just heard, is that the most vulnerable transport mode will be aviation because what alternatives to oil are there for fuel in planes? There is nothing on the horizon there and, by extension, the parts of the economy that rely on a thriving aviation sector—particularly the tourism industry—are also very vulnerable.

Road transport is quite vulnerable, although perhaps not to the same extent as aviation, because road vehicles require a portable, energy-dense fuel. That is why petrol and diesel are the fuels of choice. It would take decades to establish the infrastructure and the vehicle fleet to take advantage of any alternatives. And that is decades, as you have heard, that we have not got and alternatives that we have not really got either.

The other two main modes are rail transport and sea transport. They are possibly the least vulnerable because a railway locomotive is essentially a rolling power station on rails and a ship is a floating power station. In both cases there is a wider choice of energy sources available, mainly because the power plants are larger than for road vehicles or for aircraft.

On urban planning there are two points we want to highlight. One is that there are many people who have no option but to use their cars to get around. These people tend to live in the outer areas of our cities. The two gentlemen from Griffith University, who will follow me, I think, will make this abundantly clear. It seems to me that the provision of alternatives in such areas should be a priority for government. By that I mean the development of adequate public transport networks, of bicycle networks and of pedestrian networks. The second point on urban planning is that if we are faced with a physical decline of oil in the future—not just higher prices—then it is going to be necessary to establish clear priorities for the use of a more limited amount of oil. Put crudely, as you heard, this could involve a choice between feeding people and letting them drive to work. We will not have the energy resources to make drastic changes when it becomes evident that we have a problem. The sooner planning for a decline starts, the better.

We do not have time on our side, as I think Dr Bakhtiari amply showed.

On the committee’s specific terms of reference, going to oil availability, I would say that there will be less oil available in future and it will cost more. ASPO does not claim to have a crystal ball or that the future will unfold the way we expect it to, but we do say that this is a significant risk to urban transport and, hence, to the national economy. There are well-established risk management techniques which we think should be used. The risk of there being less oil is at least as significant as the risk of terrorist attack, for instance. There are no alternative fuels in sight that will completely replace oil for transport. There will be many flow-on economic and social impacts. I think the greatest community anger will arise from those places where alternatives to cars could have been provided but were not. Those are basically the outer areas of our cities.

Options for reducing fuel demand are mainly urban, possibly from technological development, but all the others—that is, the development of public transport and other policies that I would call business as usual, such as demand management techniques and economic measures—even though we would probably have to apply them in a different way to business-as-usual outcomes, would have effects in the cities rather than in the rural and regional areas. But, given that there is only a finite amount of oil to go around, applying them in the cities would ensure that there is more oil to go around in the areas where alternatives cannot be provided than there would otherwise be. I think that is as much as I wanted to say.

Mr Kilsby—I was living in the Netherlands when the first oil shock happened in 1973…the Netherlands scarcely missed a beat because they had an alternative in place. The alternative was mainly bicycle networks, which are very good in Holland. The Dutch enjoyed it so much that when the oil started flowing again they considered adopting the ‘carless Sunday’ as a feature of national life rather than an emergency measure, which was why it was introduced. That taught me that the more prepared you are and the more alternatives there are in place the better off you are likely to be when such a catastrophe occurs.

Senator MILNE—Thank you for your submission. It certainly flows on from a lot of other submissions we have had from various local governments on the whole issue of a rapid transition to public transport. One of the big issues for Australian cities is that the most vulnerable live the greatest distance from the centre of the city and that there has been a lack of planning for that. My next question relates particularly to the tourism industry and the agricultural sector, both of which are going to be severely adversely impacted upon by rising prices and oil depletion. What about the aviation sector? At the moment air fares do not reflect the real cost of flying anyone anywhere. Have you done any predictive modelling on the point at which that cannot continue?

Mr Kilsby—No, I have not.

Senator MILNE—Do you have any thoughts about impacts on tourism generally? Have you modelled that or looked at that around the country?

Mr Kilsby—I am currently doing some work in Cairns, for instance, in Far North Queensland. I think it would be hard to find an Australian town that is more dependent on the tourism economy and on people arriving by plane.

Senator MILNE—Can you spell that out a bit more? What we heard this morning was that the new generation of huge global aircraft, the A380s, is unlikely to ever be economic because of the fuel costs. When you say that people will not arrive in Australia by air, do you want to expand on your thinking about that?

Mr Kilsby—My thinking is very much governed by what I am currently doing in Cairns. Most fuel in Cairns—because it is a long way from the refinery, which is in Brisbane—has to be imported by ship, and they currently import more oil for the airport than they import petrol and diesel product for the whole of Far North Queensland. It struck me that the airport is really much like a coaling station, in the days when ships used to run on coal. There are no local fuel resources at all. It all has to be refined in Brisbane and brought up to Cairns by ship. If that becomes less possible in future, then a large part of the economy of that city is going to collapse, because it is geared around servicing tourists. The tourists either drive—and it is a long, long way from anywhere else to get up there—or they come in by plane from Asia, because that is one of the first stops that they make.

Senator MILNE—Do you know of any other work, apart from that which you are doing, where tourism hubs that are more remote and dependent on air travel for their viability are looking at these projections? It would be good to have some specific examples of regional economies that are going to be significantly affected in the short term because of aviation fuel prices and availability.

Mr Kilsby—I am not aware that the aviation industry is even contemplating a shortage of fuel at the moment.

Mr Kilsby—The growth of corn and so on that you need to produce the ethanol and biodiesel requires energy of its own, and it requires land as well. I suspect that the conflict between the land and the energy that you need to supply the additives to petrol and the need for alternative uses of those lands [i.e. food] and energy will be something that you have to consider.

Senator WEBBER—I want to pursue what Senator Joyce was talking about. All of our state economies are very different. I am from Western Australia, and we have the same issue of getting fuel from Perth into the north-west, only then the fuel is used to exploit our resource sector. I am not sure that biodiesels or anything else is an alternative for large haul packs in iron ore mines and what have you. And we do not have a large tourism sector there; it is purely a resource sector. I do not know of many tourists who go to Port Hedland. So that is an issue: all state economies are different, as is what confronts them.

You said in your opening remarks that you felt the need for more Commonwealth government interest in the development of urban transport. Has your organization given any thought to how you think that can be developed? I know that every time we talk about the Commonwealth government spending more money on any particular part of our state economies, there is usually a fight afterwards and then an ad hoc arrangement over the shared responsibilities of state and federal governments. Obviously we need an overall plan, so do you have any other views about how we can organize that?

Mr Kilsby—It seems to me that climate change presents quite a good model for that. The Australian Greenhouse Office is a national office that tried to collect expertise in one place, and the fuel crisis that we are heading for is probably of similar magnitude. So something like an Australian fuel office in central government would probably be the way to go as far as we can see.

Senator WEBBER—There is another issue that I want to pursue. We have had a discussion today about the fact that one of the issues we need to look at is increased use of public transport and the incentives we need to ensure people do that. There has been discussion about the free public transport network that we have in the CBD of Perth. There are other discussions about subsidising public transport. What do you think we need to do to make it more attractive? We have discussed this at previous hearings: overdevelopment, and maintaining modern infrastructure to make sure it is reliable, that sort of stuff. What do you think? And if it is about subsidising the use of public transport, then who should pay, as it is seen as a state government responsibility?

Mr Kilsby—In terms of making it more attractive, there are probably three transport sectors. There are private and public sectors, but they both require motors, and there is also the unmotorized sector, which, at the moment, would not make much of a dent in the oil requirement because it only affects the shorter spectrum of trip making. It seems to me that with good urban planning we could perhaps do things to shorten the trip length, and then the third element would become more attractive as well. It is in those outer areas that transport is most difficult to provide. Sydney is clearly the largest Australian city and it is a long way to the CBD from where we are putting people in new houses now. There are probably two million people living out in Western Sydney at the moment, and the only public transport that is being provided of any significance is trains to bring them into the CBD. I think that the Department of Planning in the New South Wales government has an excellent idea in the metropolitan strategy where they are trying to introduce regional cities within Sydney to reduce the amount of trip making that goes on in terms of person kilometers.

Senator STERLE—I refer to page 4 of your submission and the recommendation that states: ‘7. That taxation and fiscal policy instruments should encourage sustainable transport.’ Could you explain that?

Mr Kilsby—At the moment, I think the taxation instruments actually encourage the opposite to sustainable transport with the FBT arrangements and so on. I know that in Canada they have recently introduced a system whereby travel to work by public transport is allowable as a tax expense. It is really that sort of thing that we had in mind.

Senator STERLE—I have had a lot of conversation with the pro-rail lobby. I do not want to talk about freight on trains because I do not think we will ever get common ground on that; I want to talk about public transport on trains. I cannot speak for Sydney, but I can speak for where I come from. We are just putting in a brand new railway 70 kilometers down to Mandurah. It is going to be wonderful—it really will be—but we have had a wonderful train system in Western Australia for a number of years to the northern suburbs and out to the east and to the west. But I still cannot find anything that says we have it right. How can we attract patronage onto public transport? I hear the pro-rail lobby say, ‘Throw a heap of money at us and give us the infrastructure,’ and I have seen some great planning for future suburbs. But we have rail and people are not using it. Why do you think that is? I know you have mentioned costings and all that. Are you suggesting that if we offer free transport people would get on the trains?

Mr Kilsby—No, I am not suggesting that. What I am suggesting is that we concentrate more on local transport, especially in the outer areas because at the moment we are offering people the alternative of traveling quite long distances to central areas, which is where activity tends to be concentrated in our cities, and I think, certainly in Sydney, that we have grown beyond that point. The rail network that Sydney has is probably the most extensive in Australia, but it is very old and you cannot fight your way onto a train at peak times; they are completely crowded, and they are going quite a long way into the CBD. It strikes me that we have to think a little beyond the niche market of getting people traveling to the CBD and start thinking about the more dispersed travel that happens in outer areas of our cities.

Senator STERLE—This is where I get confused. Do you mean putting in extra railway lines to service other suburbs?

Mr Kilsby—That would certainly help, but it probably takes 10 years to get a new railway line implemented and I suspect that is time we do not have. There are other ways of providing alternatives to cars, and we already have some of these in Sydney. We have a busway that is about to open from the north-west growth area, which is about 40 kilometres from the CBD, to take people down to Parramatta, which is a lot closer than the CBD. We propose to build a railway line from there, starting in 2017, which is a long way away at the moment.

Senator STERLE—I am a bit confused: are you talking about integrating both forms of public transport—rail and bus?

Mr Kilsby—Yes.

Senator STERLE—I just had this vision that we were talking about railway lines and spurs and branching into the suburbs where the housing is already—that sort of stuff.

Mr Kilsby—No, I do not see that that would help very much.

Senator STERLE—But is it realistic?

Mr Kilsby—No.

Senator JOYCE—You talked about the development of railway lines. Do you have any comments on the fact that in some places in New South Wales they are actually ripping up railway lines and sealing the roads so that they can put all the heavy transport back on the road? Surely that is completely counterintuitive to where it is all heading at the moment—for instance, with the branch lines out in the regional areas that move such things as the wheat crop. I can quote you one example: the Baradine to Gwabegar line. They are closing that line down and transport of all the grain produce will go back on the roads. Surely this is completely against the whole inclination. Do you feel that the government—especially the state government—is lacking in capacity to effectively organize itself to make the moving of heavy goods on rail possible? Are people giving up on it? Do you have any views on that?

Mr Kilsby—That is mainly a freight problem. Australia’s rail infrastructure for freight probably falls into two classes. On the one hand, there are some world-class facilities for the bulk export lines and for interstate containerized traffic. On the other hand, things like the grain lines that you mentioned are in a pretty woeful state. I would like to see these developed further.

Senator JOYCE—Once people get something on a truck, they keep it on a truck, and that exacerbates the problem. It is the ability for rail to organize the collection of produce and things like that that are at the crux of the issue. Do you have any views on how rail could better organize itself to be an effective competitor in the transport industry rather than just being there?

Mr Kilsby—I think that would boil down to the economics of particular cases.

Senator JOYCE—Why is rail so ineffective in the transport market in New South Wales and Queensland?

Mr Kilsby—Because they concentrate on particular markets where they do have a competitive advantage. One of those is the long-distance containerized market. Certainly in urban areas there is virtually no freight that moves by rail. It goes from Melbourne to Sydney by rail, but there is very little that moves around within Sydney by rail.

Senator MILNE—We have a national obesity crisis and a national diabetes crisis and we have people paying huge amounts of money to go to gyms. We have the potential to move people by bicycle, but we have very little in the way of safe bicycle facilities. Everywhere we have been, people have said to us that safety is a big disincentive to their riding. The other thing is a bit like gas: you need a transitional fuel from cars to bikes. One of those is electricity. We have seen huge bureaucratic resistance to electric bikes and small electric cars, like the Riva and so on. Can you give any insight into why you think the bureaucracies are so reluctant to license electric bikes and small electric cars in Australia?

Mr Kilsby—I would support the introduction of a low-energy sector. I think that it is one thing that we in Australia are lacking. There is nothing between a bicycle and a car, effectively, whereas if you go overseas—certainly to Europe or developing countries—you see that most people move around on some sort of moped or light motorbike, which we do not have. I cannot really comment on why the bureaucracy are so hostile to that, other than to say that they are probably following their charters or their terms of reference, which say that they have to manage the road system in the interests of the people who are on it at the moment.

Senator MILNE—That is true to some extent, although there is an attempt to have the Riva car registered in Australia and that is being resisted furiously by the bureaucracy on safety grounds. Yet these vehicles are in the EU, in London and all over the place. Apparently they do not meet our safety standards, even though we have an MOU with the EU. As far as I can tell, what we are seeing everywhere is a huge bureaucratic resistance. Some would argue it is political; maybe it is. It is something I want to pursue. We have a chicken and egg situation. We do need safe bicycle lanes, but we also need to have some form of transition in terms of electric bikes. Anyway, I will leave it there.

DODSON, Dr Jago, Research Fellow, Urban Research Program, Griffith University

SIPE, Dr Neil Gavin, Head of School, School of Environmental Planning, Griffith University

Dr Dodson—We have made a written submission to the inquiry, which was effectively a covering letter describing some research that we at the Urban Research Program at Griffith University in Brisbane have been undertaking regarding the potential distribution of adverse impacts arising from the socioeconomic costs of rising fuel prices. This report was sent to the committee. I do not know whether you have all seen it; perhaps you have.

CHAIR—Yes, we have. I must say that a number of people also have been quoting your research to us.

Dr Dodson—Since that came out in December 2005, we have received quite a lot of media coverage of it, so we suspect that a few people have read it. We will run very quickly through that. Since you have all read it, we will not dwell too extensively on it. We have just recently completed another research paper which examines specifically the impact of rising fuel prices on households with mortgages, and we will also report to you today briefly some outcomes of that.

We believe our original paper Oil vulnerability in the Australian city was the first attempt in Australia to really comprehend on a very close spatial neighborhood scale the likely distribution of urban impacts of rising fuel prices. This research builds to some extent on research interests that both Dr Sipe and I have had over many years in terms of the distribution of socioeconomic opportunity in Australian cities and the connections between socioeconomic status and access to transport services. This is a continuation of research we have had a longstanding interest in.

The first study we undertook was an attempt to understand the distribution of the socioeconomic impacts of rising fuel costs. We became aware that there were very few data sets that were able to illuminate the issue at a very fine level of spatial detail. Therefore we decided to create an oil vulnerability index, as we term it, based on ABS census data. That is not ideal data to use for this kind of research; however, we feel that as a first cut piece of investigation by academics in Australia, it is worthy of some attention by the committee. Subsequently we have also submitted it to a refereed international urban research journal. The referees were unanimous in agreeing that it should be published and reported to the scholarly community, so we feel confident that our approach has some validity.

In our index, effectively we combined what we describe as an indexed indicator of car dependence, which is the variable within the census of the mode of travel used for the journey to work, with the proportion of households within a given locality that have two cars or more. We decided that together those two variables were a good indicator of the level of car dependence experienced by households. We then combined that with the ABS socioeconomic index for areas, which is the measure the ABS uses to describe socioeconomic status. So together we felt that car dependence and socioeconomic status were useful markers of the likely vulnerability experienced by localities to rising fuel costs on the basis that, if you have high levels of car dependence, your fuel costs are going up and you are of modest or low socioeconomic status, then your capacity to absorb that rising price relative to your income is probably far reduced.
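
[The index construction described above can be sketched in code. This is an illustrative assumption, not the authors' actual method: the field names, the quintile scoring, and the equal weighting of the three components are all hypothetical. The idea is simply that high car commuting, high multi-car ownership, and low socioeconomic status each push an area's vulnerability score up.]

```python
# Hypothetical sketch of an oil vulnerability index of the kind described
# in the testimony. Each component is ranked into quintiles (scored 1-5)
# and the scores are summed, so higher totals mean higher vulnerability
# to rising fuel prices. Field names are illustrative assumptions.
from statistics import quantiles

def quintile_scores(values, reverse=False):
    """Score each value 1-5 by quintile; reverse=True gives low values high scores."""
    cuts = quantiles(values, n=5)  # four cut points
    def score(v):
        s = 1 + sum(v > c for c in cuts)
        return 6 - s if reverse else s
    return [score(v) for v in values]

def oil_vulnerability_index(areas):
    """areas: list of dicts with pct_car_commute, pct_two_plus_cars, seifa."""
    car_commute = quintile_scores([a["pct_car_commute"] for a in areas])
    two_cars = quintile_scores([a["pct_two_plus_cars"] for a in areas])
    # Low socioeconomic status (a low SEIFA score) should raise
    # vulnerability, so that component is scored in reverse.
    seifa = quintile_scores([a["seifa"] for a in areas], reverse=True)
    return [c + t + s for c, t, s in zip(car_commute, two_cars, seifa)]
```

With this sketch, an area in the top quintile for car commuting and multi-car households and the bottom quintile for SEIFA would score the maximum of 15.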

Moving to the results, our initial study investigated Brisbane, Sydney and Melbourne. The choice of cities was largely due to time constraints in our own research schedules. We have focused solely on the major cities in Australia, using the definition of the urban areas for these cities provided by the ABS. I have just outlined the way the ratings are done. On these diagrams, the areas in red and yellow are the most vulnerable; those in green and dark green are the least vulnerable. On the image that you see before you, the inner city areas tend to be less vulnerable in our measure to rising fuel prices and it is the outer suburban areas, particularly those in the growth corridors of Brisbane, which are most vulnerable. If we look at Sydney next, a comparable effect is seen in Sydney, although there is some centralization within the western suburbs. But you can see high vulnerability areas extending along the north-west and south-west growth corridors with lower oil vulnerability concentrated within the CBD and, to some extent, the areas immediately around the CBD and on the North Shore.

In Melbourne there is a comparable effect, particularly with the growth corridors in former industrial areas or areas that have had a high concentration of industrial employment which has since been heavily restructured over recent decades. They have structural unemployment in some of those localities to the west, north and south-east of Melbourne but also with relatively poor provision of public transport in those localities. So combined, you have high car dependence and relatively low socioeconomic status, which contributes to the patterns of oil vulnerability we have presented. As with the other cities, the inner city and middle suburban areas appear to be exhibiting the lower levels of vulnerability to rising fuel costs.

In our first study, we attempted to chart the population numbers within these different categories by oil vulnerability rating: the higher on the scale, the more vulnerable they are. This slide shows Brisbane. If we go to Sydney, there is a similar distribution, and in Melbourne too. You can see there is some variation in the distribution of oil vulnerabilities between these cities.

We have just counted those in the highest vulnerability categories in numbers of population. These people are likely to be experiencing the worst socioeconomic impacts of rising fuel costs. There are, however, a large number in the moderate vulnerability areas who may also be highly impacted.

In our next study, which came out about a week ago, on mortgage and oil vulnerability in the Australian city, we used a similar method of indexing. But, in this study, we have combined ABS census data on car dependence with data on the proportion of households with mortgages and on income this time around. We decided that, for assessing the impact of rising fuel prices on these households, income was a better measure than socioeconomic status—largely because those at the very lowest end of the socioeconomic spectrum were less likely to be homeowners.

The reason we chose to specifically investigate mortgage vulnerability is that it is apparent that the Reserve Bank of Australia is now conceiving of the inflationary impacts of rising fuel costs as a key issue that it needs to address through its control of the interest rate settings. The recent rate rise that came through, I think, in early June was indicative of this perceived relationship that the Reserve Bank sees and is now seeking to address. We felt that there is potential for not only rising fuel costs to impact on households but also rising mortgage costs as interest rates go up. We see this as a twin vulnerability, particularly given that there may be some inflexibility in the labor market in terms of the ability of incomes to rise commensurate to the increases in transport and interest rate costs.

This is our index, called a VAMPIRE—vulnerability assessment for mortgages, petrol, interest rate expenditure. Again, compared to the patterns shown in the socioeconomic oil vulnerability index that we showed previously, this study shows a much more widespread distribution of vulnerability, with many more areas having higher vulnerability status. We have done five cities this time. It is primarily those in the outer growth corridors of Brisbane. It is the western suburbs of the Gold Coast, away from the coastline. In Sydney, again, it is in the outer western suburbs along the growth corridors. By comparison, the inner city, the North Shore and inner south-east are relatively less vulnerable. In Melbourne, it is far more distributed in a broad arc right around the outside of Melbourne, compared to the previous assessment of socioeconomic vulnerability, which was fairly tightly concentrated. This is far more general. In Perth, again, you see that phenomenon of a lower vulnerability in a city with a much higher vulnerability arc around the outer and middle suburbs.
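
[The VAMPIRE-style combination described above, which swaps socioeconomic status for mortgage share and income, can be sketched as follows. The function name, inputs, equal weighting, and the income ceiling are all illustrative assumptions, not the published index.]

```python
# Illustrative VAMPIRE-style score: high car dependence plus a high share
# of mortgaged households plus low income raises the score. Normalised
# to 0-1 for readability; the weighting and ceiling are assumptions.
def vampire_score(pct_car_commute, pct_mortgage, median_weekly_income,
                  income_ceiling=2000.0):
    car = pct_car_commute / 100.0
    mortgage = pct_mortgage / 100.0
    # Low income should increase vulnerability, so invert it against
    # an assumed ceiling (incomes at or above the ceiling contribute 0).
    income = 1.0 - min(median_weekly_income / income_ceiling, 1.0)
    return (car + mortgage + income) / 3.0
```

An outer growth-corridor area with 90 per cent car commuting, 60 per cent mortgaged households and a $500 median weekly income would score 0.75 under these assumptions, against roughly 0.17 for an affluent inner-city area.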

The reasons we see these patterns in Australian cities, we feel, are primarily related to the operation of housing markets which tend to provide the cheaper and newer housing in outer suburban and fringe localities. Households seeking to purchase a home for the first time are more likely to locate in those areas, and those on modest and lower incomes who are seeking home ownership are also more likely to locate in those areas because of the way that the housing market is structured.

However, this means that they run into the problem of the relatively poor provision of public transport services in fringe and outer suburban areas compared to the inner-city localities. This is a problem of historic government underinvestment in public transport infrastructure and services in the outer suburbs. This dates back to the shift in Australian transport planning practice that occurred after the Second World War, when planners began to move away from the previous Australian model of largely transit oriented development based around the existing rail and tramway lines to modes of urban development based on the private motor car and the provision of roads and major freeways.

The result is that public transport services have not kept up with growth. The highest quality public transport services are situated within the inner cities. Those on the fringe experience a far lower quality of service in terms of the frequency of services, the hours of operation, the days of operation and, importantly, the connectivity between not only individual modes but also between modes.

In the best public transport services in the world you find a high level of integration between modes, with central planning to ensure that, for example, buses connect to rail stations that give passengers time enough to transfer. The heavy rail system will convey them at high speed to another connection point and then transfer them to another local bus service to take them to where they want to go. In large part that type of public transport service does not exist in Australian cities. It does exist in some localities, but to a large extent the outer and fringe suburbs are poorly served by public transport. We see that as the key point of vulnerability in the context of the rising fuel prices in Australian cities.

In terms of our suggestions or recommendations regarding improvements to public transport, we think there needs to be dedicated public transport statutory type authorities within each state government that stand alone and are independent from the immediate departmental control of state bureaucracies. We also feel there should be strong federal government interest and involvement in public transport planning, coordination and funding. There is some opportunity for partnership arrangements between the federal government and the states. I will leave that to you to contemplate.

In particular, suburban and circumferential public transport routes are required. The majority of public transport heavy rail and bus services in Australian cities are radially focused—that is, they travel from the outer suburbs into the CBD. There is a paucity of public transport services that travel around the outer suburbs and provide the quality of service found within inner and radial areas. We see some scope for expansion of rail services to new fringe estates, particularly in the growth corridor areas of Brisbane, Sydney and Melbourne. For example, Rowville in Melbourne’s outer south-east was promised a train line in 1969. They have been waiting almost 40 years for that to materialize. They are still waiting. Now they are facing rising fuel prices. We see some scope for those rail lines that have been planned for many decades in a lot of instances but have not materialized to be introduced and completed.

There was some discussion in the earlier presentation about how one might finance public transport. If you look at the total transport budget that state governments currently expend, there are actually multiple billions of dollars available for transport. The trouble is that most of it is currently dedicated to providing major road infrastructure such as freeways and tunnels. If you add in tollways, the sums are in the multiple billions. If those projects were postponed—they do not need to be cancelled; they can just be postponed in the budgetary process—that money could be transferred to the funding of specifically local scale public transport services to make sure that the outer suburbs have as high a quality of service as those in the inner city.

We feel that there would then be a high level of amelioration of the oil vulnerability and the mortgage vulnerability that we have described. Should oil prices decline in the future, it would still be possible to revisit road construction and road projects. However, if a peak oil scenario did eventuate, Australian cities would be at least partly protected, in terms of the personal-private cost of transport, by the provision of improved public transport services.

Finally, we perceive a need to improve local-scale amenity in terms of walking and cycling and access to local shopping trips so that households, in responding to rising fuel prices, are able, even if they do not make all their trips by public transport, to start to cut out a few of those minor local trips that might save them money over time. Those primarily involve walking to the local shops and to employment and other services.

Senator WEBBER—That raises a lot of questions actually. Dr Dodson, you spoke about road expenditure versus provision of local public transport. I am from Perth, so I was very pleased to see that there was something about that.

Dr Dodson—Perth is somewhat of an exception to this general rule.

Senator WEBBER—Absolutely, and we will get to our train line in a minute. In fact, that is what I wanted to say. In Perth, we have got fast-developing suburban corridors. It is relatively cheap to build roads because of our sand base, as opposed to a lot of the other challenges around on this side of the country. What do you mean by the provision of local public transport in terms of that swap from developing roads to developing local public transport? It is much cheaper for me to build a major road or extend the freeway to allow people to get into the city to work than it is to build the train line. It is quicker. Surely, it is not necessarily an either/or, if I am going to allow the city to keep developing. It has to be both. I cannot leave them out there not being able to get anywhere.

Dr Dodson—That is certainly the case. However, given the concern that has been expressed to this committee about rising fuel prices, there is strong potential that there will be less demand for those radial roads that provide access to the CBD. In the future, people will be making fewer trips; therefore, the existing road space potentially would have less traffic on it and there would be greater demand for public transport if fuel prices continue to rise. The problem at the moment is that Australian cities do not have particularly good public transport services in those outer suburban areas, so there is a lack of good examples or models to expand upon.

However, there is enormous scope, we believe, for provision of local bus services within local suburban areas that would connect to higher frequency arterial bus services and to rail services, where they exist, with timed connections. They would be timed to arrive a few minutes before the train departs so passengers have time to transfer and get ready for the train and then passengers offloading from the train have time to get onto the bus that ferries them to their local area. We feel those kinds of services would be critical in a scenario where fuel prices were markedly higher than they currently are in order to provide metropolitan access to households, particularly in the outer suburbs.

Dr Sipe—I would just add that we are not really talking about not spending money on roads; we are talking about having more of a balance. In south-east Queensland with the latest regional plan, basically about 20 per cent of the transport funds are spent for public transport and 80 per cent is for roads. Some of those roads are not necessarily to service newly developing areas.

They are trying to move traffic faster through the city by spending $3 billion on a tunnel. We would really question whether, in 10 years, there is going to be anybody who can afford to pay the toll and the fuel to use the tunnel. It is really that issue of bringing things a little bit more into balance, because clearly at this point in time the roads lobby is in charge.

Dr Dodson—It is worth noting that, in Australian cities where public transport is provided at a high level of service quality and interconnectivity, people will use it. In our research report we mention the member for Wentworth, Malcolm Turnbull, who has recently achieved the ability to use his parliamentary vehicle allowance to purchase a yearly public transport ticket. We found it curious that, while Mr Turnbull is one of Australia’s richest citizens, he would deliberately choose to use public transport. The reason he is able to make that choice is that the high-quality services are there. He can get around inner city Sydney easily and efficiently. The newspaper quoted him saying that it is more efficient to use public transport in Sydney. He has that choice because he lives in an electorate where those services exist. Households in the outer areas of Sydney, where that level of quality does not exist, do not have that choice.

Senator WEBBER—That brings me to another point, which is the socioeconomic argument around that. We were having a discussion before about the incentives we need to give people to use public transport. Some people in Victoria and other places have talked about perhaps making it free. It seems to me that, if you accept what you say about the current infrastructure—and it is absolutely right—you are therefore subsidizing the rich.

If you are going to make it free—and most of the infrastructure is in the inner city, where people are fairly affluent—you are not really helping those in the northern suburbs in my home town or in the western suburbs here.

Dr Dodson—I might respond to that by suggesting that there is a subtlety to that observation in the sense that the processes of housing market restructuring in Australian cities over the last two or three decades have resulted in the gentrification of the inner city. Wealthier households have returned to the inner city, after a couple of decades in the 1950s, the 1960s and the early 1970s when they began to depart the inner city. If you look at it in the sense of a subsidy, it is based on a combination of existing infrastructure, housing market change and labor market change. As we point out in our paper, there is a serious inequity when you have your lowest and most modest income households in localities on the fringe, where now they are facing high transport costs. That is a serious social equity issue that we feel that governments should address through their transport policies.

Senator WEBBER—I notice one of your recommendations was to encourage more local access to employment services. Given the urban and suburban sprawl that we have, how do we do that? I do not know of many outer metropolitan areas that want an industrial estate next to them. To make this work, you need large-scale employment. The corner shop cannot employ that many people.

Dr Dodson—You can provide access to industrial areas through the provision of high-quality public transport. That is how industrial areas serviced their labor needs historically until the development of the private motor car. In terms of local services, the postwar period in Australian cities saw a shift away from high streets and local shopping strips towards regional, car based shopping malls. In conditions of rising fuel prices, we would suggest that there may be greater opportunities for providers of services and retailers on the local scale, where they previously would not have been particularly competitive relative to the regional shopping malls. Now that the costs of travel to those regional services are increasing, as fuel prices rise, the relative competitiveness of those local services may increase.

We see that there is an opportunity to support that kind of travel behavior through making local trips by walking and cycling far more pleasant than they typically are for those living in outer suburban estates—where there may not be cycle facilities, where the footpaths may be poorly developed or where there may be limited shading. All of those local amenities that encourage people or support walking and cycling need to be considered and provided in areas where they are insufficient.

Dr Sipe—With development over the past couple of decades, developers in new housing estates have not been providing local retail. There may be a shopping mall but local retail is missing. In Western Sydney in a lot of these areas governments have allowed people to set up shops out of their homes because this need is basically not being provided. In the US it has gone to the extreme where developers are now subsidizing corner shops and local retail rather than putting in a golf course, because they view it as something that is lacking. They support it even though the money is not there in the initial years of a new development to make it financially viable.

Senator WEBBER—I accept a great deal of what you have to say, but where does that leave people in regional Australia? There are lots of towns in my home state where there is not a lot of local employment and people basically live on some form of social security. There is no public transport and they are paying $1.75 a liter for petrol. What do we do to address those kinds of social problems?

Dr Dodson—That is a question we have not undertaken an enormous amount of research into. However, we have recently submitted a grant application to a federal government agency to examine that issue. I think that issue needs to be contemplated within the much larger issue of the impact of rising fuel prices on productive and socioeconomic structures within rural and regional Australia. I see the transition from relatively cheap motor fuel that can drive truck based freight haulage to a greater emphasis on rail as a likely outcome. Although we have not done the research to demonstrate it, we see that as a likely scenario where fuel prices continue to rise or stay at high levels. Therefore the socioeconomic impact on individuals and households needs to be understood within that broader context. There is a possibility that transport systems and settlement patterns in regional and rural areas may undergo significant restructuring in order to better align settlement patterns with the rail infrastructure. That is a potentially stark or extreme depiction, but I think in a forum like this there needs to be debate about what is going to happen with rising fuel prices. I cannot offer any specific solution in that regard, however.

Senator MILNE—Congratulations on this work. It is long overdue. It is great to have something of this kind in the public arena. It is terrific. I have a couple of issues. The first one is the spatial expansion of cities. The frustration I have in this argument is that we can talk about the need to provide public transport, we can talk about the need for transport around the circumference of suburbs but, the minute you put that in, developers and local government see the opportunity to expand another 10 kilometers or 15 kilometers beyond that. That is our problem. Every time we try and anticipate need, people then see it as potential to develop further. Where is there any emphasis in the country on containment of the physical size of cities so that we can start providing adequate transport and adequate services into the future, given the carbon constraints and the oil price and depletion issues we are facing?

Dr Dodson—The issue of urban expansion in terms of infrastructure has been of great concern to governments for the last 30 years—since the original oil shocks in the 1970s. Many state governments have put in place urban consolidation policies to encourage higher density development within existing urban areas, although those have been fairly uneven and partially applied. There has been extensive urbanization in greenfield sites since that period.

Dr Sipe—I guess the most recent example is in south-east Queensland, where, with the regional planning effort over the past couple of years, they have established an urban footprint. I guess we will have to see to what extent—

Senator MILNE—They adhere to it.

Dr Sipe—Right. There were a few areas that had not been decided on and some of those have flipped from nondevelopment into the development realm. We are hoping that this provides some containment on that issue of expansion.

Senator MILNE—The other big issue, and you mention it in your submission, is this. If we were to persuade the federal government to work in a cooperative way with the states and to start seriously investing in public transport provision as a way of dealing with this issue, with the productivity of cities, with congestion, with health issues, with climate change et cetera, financing would become the major issue. If people pick up the argument they are then going to ask, ‘How do you propose we pay for this?’ Have you looked at any financing models that would fit with the fact that we are a federation of states and that local government has the planning provisions and opportunities as well? How far advanced are you on that? That is the key question. If we can get to the persuasion, which I think we are going to have to get to because the circumstances are upon us, how do we pay for it?

Dr Dodson—Our suggestion, as we have outlined today, would be to shift the balance in existing state funding from roads towards public transport, walking and cycling. There is probably some scope for that to occur at the federal level as well. Around $7 billion to $8 billion is spent in federal road funding. A lot of that goes to rural and regional areas, so it would probably not be appropriate to transfer that to public transport provision—although perhaps some sort of regional public transport coach or train network assistance might be worth contemplating. However, I think there would be some significant scope for the use of some of those federal road funds in partnering arrangements or co-financing arrangements with states to identify areas of high public transport need within Australian cities and to plan and coordinate the rollout of new, high-quality services to those localities. As we suggested, it would probably require a dedicated federal government agency to undertake the research, analysis and planning to determine what measures would be the most appropriate in any given locality or circumstance.

Dr Sipe—The only thing I would add is this. As you can tell, I am not from these parts. I come from America. There seems to be a reluctance on the part of both the Commonwealth and the state and local governments to incur any debt in providing public facilities. I see that this is an untapped resource. A lot of these facilities should not be paid for by existing taxpayers. There is an intergenerational issue. They should be paid for over the 20 or 30 years of the life of the project. It seems that governments want to be debt free, and I am not sure that that is necessarily a good thing. Maybe the US is not the best example, having gone to the other extreme, but I think there is some middle ground there in financing projects over a period of time using revenues from public transport or toll roads. I think that is a much better way of doing things than these public-private partnerships that we have seen around Australia.

Senator JOYCE—I want to follow up on one question that Senator Milne put to you. Do you have any idea of the ideal size for a city? As an outsider, as someone who does not live in a city, I came down here the other day and I saw a bus driving around with nobody in it. I thought, ‘Well, that just goes to show that you can have cheap transport that nobody uses.’ What we see as investment in transport infrastructure might just exacerbate the problems that are already there. In your study, do you talk about an ideal size for a city or can cities just get as big as they like?

Dr Dodson—The question of an ideal size of a city is one that exercised the minds of a number of urban researchers in Australia in the 1960s and 1970s; I am not sure that it was ever resolved. The result was the decentralization program under the Whitlam government, which sought to shift population to regional areas such as, I believe, Bathurst-Orange in New South Wales, Albury-Wodonga and parts of Victoria. I am not sure whether they had a program in Queensland or other states. I would not wish to comment too much on the success of those programs. I do not think they are perceived as having had a dramatic impact on changing the rate of growth of Australian capital cities. There may be some scope in the future to revisit questions of decentralization of urban populations to rural and regional centers. We certainly have not done any analysis or investigation of that type of policy. The problems would be in providing employment and other services in such localities to make it feasible.

Senator JOYCE—I will put the question on its head, then. Do you feel that, with unplanned transport infrastructure in place, there is the potential to exacerbate transport problems for an area and create more red areas? I am thinking about the south-east corner of Queensland, obviously. Wouldn’t an ad hoc growth to an area basically exacerbate problems that are going to be almost impossible to fix because there would be houses where you wanted to put transport infrastructure?

Dr Dodson—That comes down to a question of good planning. Until the postwar period, housing development occurred effectively in unison with rail and tramways. It was after the postwar period that the private motor car gave households and individuals the capacity to travel almost anywhere at will within the city, and that enabled the extensive, often low-density, development you see in, for example, the North Beaudesert shire area of south-east Queensland. Our view would be that well-coordinated and well-planned development with a strong public transport component to it can ameliorate those problems, but it will not necessarily solve them universally and provide some utopian type of urbanization.

Senator JOYCE—What is the cost of fixing the problem that is already there? The houses are already there; the roads are already there. If you want to put in a rail infrastructure, you are going to have to start moving houses and roads and changing everything around. Have you done any costing of your potential loss because the planning process was not proper and in place at the start? A lot of this is a nirvana; it is never going to happen because the cost of putting in new rail networks will be prohibitive.

Dr Dodson—Perhaps yes and perhaps no. I note that the Queensland government is currently expending large sums of money in putting road tunnels through the centre of Brisbane. It is building a number of bus lanes that go through existing inner city localities, many of which have far higher real estate values than those out on the fringe. In terms of the cost of providing new fixed route infrastructure for public or even road transport, I am not sure that the cost of purchasing the corridors and lines for that is necessarily prohibitive. It does not seem to be at the moment.

CHAIR—There are also other forms of public transport, too, like light rail. I understand that that is much less disruptive and you can move a lot of people. Have those things been factored into your equation?

Dr Dodson—In some areas there are opportunities for upgrading underutilized rail infrastructures. There are a couple of train lines in south-east Queensland that are underutilized that could potentially be upgraded. But also simply providing bus services that operate in a coordinated way across outer suburban areas would, in many cases, provide a sufficient level of service that would match or be comparable to a rail service if it were planned, well coordinated and operated efficiently.

Senator JOYCE—What are you going to use as motivation? Once someone jumps in their car to drive to the train station, how are you going to encourage them to get out? It is the same issue that people have in regional areas where, once they put stuff on a truck to get it to a railhead, they say, ‘Don’t bother stopping; keep going.’ It is the same idea with the car: once they jump in the car to drive to the train station and they have the radio going, how are you going to encourage them to get out?

Dr Dodson—The way to do it is to provide the highest possible quality of service that you can so it makes it easy and efficient for them to do it. That level of service exists in many instances in the inner areas of Australian cities, and a high proportion of households and individuals use it. It is the lack of service and the poor quality of service in the outer-suburban areas that prevent people from using public transport, in my opinion. The rising price of motor vehicle travel will be a strong motivational element in encouraging people to use public transport. But the trouble is that it needs to be there and it needs to be of high quality so that they can use it.

Senator JOYCE—I was interested that you were looking at Brisbane. Brisbane is a unique town in that it is hilly and therefore you will need tunnels or bridges in order to get around the place. Because houses are parked on the sides of hills in places like Waterworks Road, there will be an immense capital cost in trying to set up the infrastructure—unless you move the roads, because the roads follow the accessible paths in the lower areas of the topography. Is there a sense that the cost of this is going to be astronomical, as opposed to better planning and getting people to live in areas where the cost of this infrastructure would not be so great?

Dr Sipe—That is what they are trying to do with the regional plan.

Senator JOYCE—Yes, they are moving them but they are just moving them down the street. They are moving them to Ipswich when they should be moving them over the hill and far away.

Dr Dodson—There does not seem to be an immense topographical constraint to the provision of existing public transport services. Buses could easily run along the large arterial roads and the major roads that already exist throughout south-east Queensland. The trouble is that existing government planning is focused on not impeding motor vehicle traffic. In the case of the eastern suburbs of Brisbane, we have Old Cleveland Road, which is a major arterial road, yet the government is now planning to tunnel a busway to provide public transport under that road for approximately 25 kilometers out to the eastern suburb of Capalaba. From my perspective, you can always use existing road space for buses. So there is a question about the opportunity cost of using tunneling, which is going to cost billions of dollars, to provide that service when you could use the existing road space and coordinate services with the regional rail network, and then have plenty of money left over to provide very high-quality local suburban bus services for those in the outer suburbs who are going to be most affected by rising fuel prices. I am not particularly concerned about topography being an impediment to improving public transport.

Dr Sipe—There have been a number of questions about getting people to use public transport. The evidence we have been able to put together over the last six to nine months suggests that that is not going to be a problem, that the price of fuel will take care of that. The real question is: are the public transport companies and authorities planning for this? For example, in Brisbane they basically now publish how many buses go past the bus stop because they are full. The problem is not getting people on; it is providing the capacity. That is what we see as the real problem. Who is building buses? What happens if every city in the world decides it needs 100 more buses?

CHAIR—We are not going to have enough carriages on the Perth trains. Come peak hour now, we are packed in like sardines because we do not have enough carriages on our trains.

Dr Sipe—So who is looking out for this? Somebody should be thinking, ‘If all the cities in Australia are facing this problem, what about all the cities in other parts of the world?’ I have not read that General Motors is going to give up building Hummers and begin to build train carriages and buses.



Australian government was Peak Oil Aware in 2006

Preface. This post is excerpts from Bakhtiari’s testimony about Peak Oil before the Australian Senate Committee in 2006. I’ve excerpted what I found of interest, so if it seems disjointed, that’s my fault. And it isn’t just the Australian Senate that’s “peak oil aware”. Perth’s government assembled 1200 people to brainstorm coping solutions for peak oil as Bakhtiari mentions below.

Some of Bakhtiari’s predictions are wrong. Although he knew about fracked oil, he didn’t realize it would be enough to delay peak oil until 2018, when world oil production peaked, and perhaps to soften the decline until as late as 2025. Even so, the fracking industry could collapse financially sooner, since it never made money even at $100 a barrel and investors are weary of backing it. He also may have overestimated how high the price of oil could go, if Gail Tverberg is right that low prices rather than high ones will signal peak oil: high prices crash the economy because people can’t afford expensive oil, yet when the price is low, oil companies can’t afford to explore or start new petroleum-producing projects.

At any rate, though the timeline is wrong, Bakhtiari’s idea that the initial T1 phase (the plateau we’ve been on since 2005) is rather benign appears to be right. But the decline rate gets worse and worse over T2, T3, and T4 (explained near the end of this post), and I suspect that with the peaking of oil in 2018 we are about to enter T2 and the more serious phases that follow. These in turn, I predict, will increase the trend towards autocracies (i.e. Trumpism), social unrest, mass migrations, electric grid outages, supply chain disruptions and more.

It’s also possible the Middle East has higher reserves than Bakhtiari thought, since Saudi Arabia had to invite in outside experts to estimate its reserves when Saudi Aramco went public. The Middle East is widely estimated to hold over two-thirds of the world’s conventional, easy, cheap oil reserves. The other two main sources, the U.S. and Russia, are going to shrink. America has less than 4% of remaining reserves, mostly from fracked oil wells that decline by 80% over 3 years, so once all fracked oil is in decline, production will fall quite dramatically. Nor will Russia fill the gap: Russia is a deeply corrupt, mafia-run totalitarian state that isn’t investing in future oil and gas infrastructure and is likely at or past peak oil already.

Australia has done a great deal to educate its public and leaders about Peak Oil, far more than the United States: a search of Australia’s parliamentary records turns up 2040 hits on peak oil. Whether that will make a difference, given the lack of alternative energy resources to replace oil and the possibility that China will invade Australia for resources once the U.S. military is immobilized by lack of fuel, remains to be seen.

I published this back in 2006 and am republishing it today because it is just as relevant now. I’m also interested in why leaders in government, economics, and even science deny peak oil and its repercussions. If we’d faced the problem squarely and, kicking and screaming all the way, prepared for the inevitable end of oil, we’d be in much better shape: more farms converted to organic already, more research done on pest control without pesticides, and stone roads and aqueducts built Roman style to last for millennia, instead of the 20 years typical today, when rusting rebar expands up to 7-fold and destroys roads, dams, and other infrastructure. What to do is a huge topic I address elsewhere at energyskeptic; many people have published books about it, and resilience.org, Transition Towns and other groups are dedicated to preparing for our inevitable future.

Alice Friedemann, author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, 2021, Springer; “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer; “Barriers to Making Algal Biofuels”; and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report


July 26, 2006. Bakhtiari on Australia’s future oil supply and alternative transport fuels. Parliament of Australia.

Dr Bakhtiari has recently retired as a senior advisor for the National Iranian Oil Company in Tehran and has written several books and more than 65 papers on the Iranian and international oil and gas industry.

Dr Samsam Bakhtiari—I will begin with a short opening statement for you to consider. Crude oil is a commodity unlike any other. It is simultaneously a strategic raw material, a unique industrial feedstock and the most essential of fuels. It is also the most conveniently and widely traded form of energy and therefore the swing element in the world’s energy mix. It is no wonder that the price of crude oil is the most important figure quoted daily worldwide. Its relevance could well rise significantly in the near future as the impact of peak oil or, in other words, the peaking of global crude oil production, becomes evident to all and sundry.

At present, worldwide crude oil output is stagnant at around 81 million barrels a day, give or take one million barrels. OPEC’s 11 member countries are now limited to a maximum of 31 million barrels per day, having produced only 29.35 million barrels in May 2006, and the so-called non-OPEC countries, which represent the rest of the world, are capped at 50 million barrels per day. Thus the world now produces and consumes some 30 billion barrels in each single year.
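[Editor’s note: the yearly figure Bakhtiari cites follows directly from the daily rate, which a few lines of arithmetic confirm:]

```python
# Convert the quoted daily world production rate into a yearly total.
daily_mbd = 81.0                          # million barrels per day
yearly_billion = daily_mbd * 365 / 1000   # billion barrels per year

print(f"{yearly_billion:.1f} billion barrels per year")  # ~29.6, i.e. "some 30 billion"
```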

Most of the world’s major producers are struggling to keep oil production on an even keel, especially both the OPEC and non-OPEC champions—that is, Saudi Arabia and Russia—which are both producing some nine million barrels a day at present while facing almost insurmountable problems to avoid declines in the near future. Moreover, most of the world’s super giant oilfields are now getting old and some of them have entered terminal decline. Suffice it to mention the three largest ones: Saudi Arabia’s Ghawar, Mexico’s Cantarell and Kuwait’s Greater Burgan oilfields, which are surely but steadily going downhill. The last super giant to be discovered was the Kashagan oilfield in the north Caspian Sea offshore from Kazakhstan back in 1999, and it is now scheduled to begin initial production in 2008-09.

Not only have discoveries of super giants dwindled to nil in the 21st century but yearly oil finds have plummeted to between four and six billion barrels a year. There is little hope that this trend will be reversed in the near future because most of the planet’s petroleum provinces have now been explored for petroleum and there is only one last frontier area remaining—that of Antarctica, with its pristine wilderness and its population of some 20 million penguins.

The decline of global oil production seems now irreversible. It is bound to occur over a number of transitions, the first of which I have called transition 1, which has just begun in 2006. Transition 1 has a very benign gradient of decline, and it will take months before one notices it at all. But transition 2 will be far steeper, and each successive transition will show more pronounced declining gradients. My WOCAP model has predicted that over the next 14 years present global production of 81 million barrels per day will decrease by roughly 32%, down to around 55 million barrels per day by the year 2020.
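[Editor’s note: the WOCAP projection implies only a modest average yearly decline rate. The 2006 and 2020 figures below are Bakhtiari’s; the annualization is my own rough check:]

```python
# Check the WOCAP projection: 81 down to 55 million barrels/day over 14 years.
start_mbd = 81.0   # world production in 2006, million barrels per day
end_mbd = 55.0     # WOCAP projection for 2020
years = 14

total_decline = 1 - end_mbd / start_mbd                    # overall fractional drop
annual_decline = 1 - (end_mbd / start_mbd) ** (1 / years)  # implied average per-year decline

print(f"total decline: {total_decline:.1%}")            # ~32%, matching the text
print(f"implied annual decline: {annual_decline:.1%}")  # ~2.7% per year
```

A steady 2.7% per year sounds manageable, which is why transition 1 feels benign; it is the steepening in later transitions that does the damage.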

Thus in the face of peak oil and its multiple consequences, which are bound to impact upon almost all aspects of our human standards of life, it seems imperative to get prepared to face all the inevitable shockwaves resulting from that. Preparation should be carried out on individual, familial, societal and national levels as soon as possible. Every preparative step taken today will prove far cheaper than any step taken tomorrow. I thank you for your attention during my opening statement, and I am ready now to try, to the best of my abilities, to reply to any questions that you have.

CHAIR—In the first set of questions, can we concentrate on the issue of peak oil itself and defining that, and then we will move on to the other issues.

Senator JOYCE—Thank you very much, Mr Samsam Bakhtiari. I have been a follower of you for a while; I have been one of your quiet fans. With regard to Hubbert’s peak, within the Ghawar oilfields and the Cantarell oilfields, can you explain to us some of the signs that these oilfields are running out of oil? I am talking about gas incursion or water incursion. What do you believe are the key indicators that these oilfields are past peak production?

Dr Samsam Bakhtiari—The super giant oilfields are all very great oilfields. Today you have 40% of world production in these super giants. Managing a super giant is a very difficult procedure. The larger the super giant, the more difficult it is. I will firstly state the case of Ghawar. Why? Because it is the largest oilfield in the world by far. At the beginning, in 1952—that is when it came on stream, some 54 years ago—it was estimated to hold some 70 billion barrels of recoverable oil. In the meantime, much of that has already been recovered. The situation for Ghawar today is that you have two major problems. It is still producing, we think, between 4 and 4.5 million barrels every single day, but in order to produce that much oil much needs to be done. I will show you two points, if you allow me.

What is happening today is that they are injecting eight million barrels of sea water into Ghawar every single day. What do they get out? This is very schematic. They get 12.5 million barrels of liquid out of the field and they split that into eight million barrels of water and 4.5 million barrels of oil. The water that they are injecting is increasing constantly.

The last information I have is that it has grown now to nine million barrels, but these figures are very approximate, because we do not know exactly what is going on. But it is roughly of that magnitude. So when they say that Ghawar crude is cheap, it is certainly not cheap any more, because you have to do all this enormous processing. You have these huge pipelines which come from the sea and an enormous compressor re-injecting that water under the oil column and pushing the column up. That is one point. There are problems. If you did not have problems you would not need to do all that.
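[Editor’s note: Bakhtiari’s figures imply a water cut, the fraction of produced liquid that is water, of about two-thirds. A short sketch of that arithmetic, using his rough 2006 estimates rather than audited field data:]

```python
# Ghawar water-cut arithmetic from the figures above.
total_liquid = 12.5   # million barrels of liquid lifted per day
oil = 4.5             # million barrels of oil per day
produced_water = total_liquid - oil   # water coming back up with the oil

water_cut = produced_water / total_liquid
print(f"water cut: {water_cut:.0%}")  # 64%: nearly two barrels of water per barrel of oil
```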

They have done something else. Usually in all these super giants you drill vertical wells and you take out the oil from the vertical wells by the pressure either of the gas or the water. That is how it is mostly in the four super giants in Iran. But in the 1990s there was a new technology called horizontal wells. In Ghawar they thought that instead of relying on the vertical wells they would drill horizontal wells. Horizontal wells are both a blessing and a curse. Why?

Let me show you roughly how this works. You have a cap here. Here you have the oil. On top you have the gas and below you have the water. Naturally this is very schematic. A vertical well comes here in the middle of the oil column and you get your oil by either the pressure of the water beneath or the pressure of the gas from the top. With the gas here you say that this field is gas driven. Most of the Iranian fields are gas driven. Ghawar is water driven. It is either/or, but sometimes, very rarely, both.

The horizontal well is different. It comes down like this and then it goes horizontally for a few kilometers. The horizontal well is a blessing because you can get to the exact middle of the oil structure and so take out your oil more easily. But there is a very great danger with horizontal wells. They tell us that in Ghawar today there are 220, roughly, horizontal wells. The great danger of the horizontal well is that when the water reaches the well it is dead. So one day in the future at Ghawar, the water level will eventually reach the horizontal well.

It is happening but not on a large scale. When it happens on a large scale then Ghawar is going to collapse and you will have a cliff in the production of Ghawar. When you have a cliff there, the whole Saudi production system is going to fall apart. If that happens, we will start hearing bells ringing all over the place, and the price of oil is going to go through the roof.

Senator JOYCE—I have heard you say before that China are prepared to pay any price for oil. Therefore, if they are prepared to pay any price for oil, they are prepared to go anywhere to get it. I got myself into a lot of trouble by suggesting that countries would exploit the Antarctica. If China were prepared to pay any price for oil, which means they would be prepared to go anywhere to get it, and if there were areas of territorial dispute, is there the possibility that oil would be found in the Antarctic continent?

Dr Samsam Bakhtiari—I have studied oil reserves for the past 40 years, from when it was a very new science. In the beginning, there were a few specialists who were not very good, and then came the greatest specialist of oil reserves. He began working for Petroconsultants in the 1990s and, in 1995-96, established what is in my opinion the best set of oil reserve estimates in the world.

These are the oil reserve estimates of Dr Colin Campbell. I think these estimates are the best. I have been able to prove not only that these reserves adapted very well to my model but also that they correlate with the production of the 11 OPEC countries in a satisfactory way. So I have adopted them.

Dr Campbell is of the opinion that the total endowment for conventional oil of the planet is around 1,900 billion barrels. I think this is the best number that we have at present. I have been working with that number for the past seven or eight years. Out of that number of 1,900 billion barrels, Dr Campbell is of the opinion that for the two polar sectors, the Arctic and Antarctica, you should have roughly 52 billion barrels. I think that Dr Campbell splits that number roughly half and half between the two poles.

As you know, exploration in the Arctic began in 1995-96—and this exploration is now growing faster and faster. A joint research project to explore the tectonics and oil sources of the Arctic has been given to a research team from the USGS and the Geological Survey of Denmark; their report should be out next year, 2007, which is the International Polar Year. Antarctica is today the last frontier for the petroleum industry. Whether the oil industry is going to go there, I certainly do not know. I know from the very early studies I have made that it is going to be very difficult—firstly, because of the conditions in Antarctica. For seven months of the year it is dark—and you are more aware of the temperatures than I am. Senator Joyce, I believe you have lately been down there on a four-week trip and have seen things first-hand. So it is certainly not something for tomorrow, because conditions are not ready yet. As you know, it is very difficult to drill in ice—and there is an icecap of at least 2,000 meters that you have to drill through before you get to the lower tectonics. But maybe one day, when the price of oil goes up to $200 or $300 a barrel, some oil companies will decide to try their hand there. That could be a possibility. I hope it will not happen. But some governments will have their backs to the wall and in suburbia there will be unrest over petrol. Many things could happen—among them, drilling in the Southern Ocean or Antarctica.

It is extremely difficult to forecast precisely the price of oil in the future. I can see a range of $100 to $150 not very far into the future.

Senator JOYCE—That is $100 to $150 a barrel?

Dr Samsam Bakhtiari—Yes, this we are certainly going to get to. In my opinion, we could get there very easily. We are a couple of hurricanes or some geopolitical problems or a war away from having a worse problem than we have today. There you could go very easily, but after that where can this price go? I am studying that right now, and I have not reached a conclusion yet. There must be some outer limit, and I am beginning to think that maybe the outer limit could be $300 per barrel. I am not so sure yet, because we are entering a brand new era in human history, an era we have not been prepared for at all. For the past six generations, we have been used to having cheap oil always available whenever we wanted it, more or less. Today, in 2006, all of this is beginning to change. We are entering an era in which we know nothing much, where we have a brand new set of rules. I am trying to find out what these new rules are. I have already reached two or three new rules. One of the new rules, in my opinion, is that there will be in the very near future nothing like business as usual. In my opinion, nothing is usual from now on for any of the countries involved. And the lower you are in the pile, the worse it is going to get.

Senator JOYCE—You also made the statement that steps made today are cheaper than steps made tomorrow. With regard to mitigating or alleviating the crisis that would be caused by an oil shortage or a price of oil that is completely prohibitive to the development of industry and the fundamental freedom of people to drive around, what steps do you envisage would be worthwhile taking today? And without loading your answer, can you refer to issues such as the production of a biorenewable fuel industry, the development of ethanol as a fuel alternative and biodiesels, and alternative forms of combustible material that can be used in internal combustion engines.

Dr Samsam Bakhtiari—Allow me to take your questions one by one. I said that steps needed to be taken, because now I am thinking that the price is going to go up. There is no other way. Now let me open a parenthesis: the price might go down tomorrow to $55, but it will come back up again. So you will have in this period a high level of volatility, but eventually it will go to very great heights—maybe to $200, maybe to $300. As long as oil is allocated by price, I think it is a very good thing whatever this price is, because one day you will have a question of availability. You will be ready to pay any price, but there will not be any oil.

I remind you that oil is a very special commodity, which is something that is very difficult to realize today. For example, you have no free market in oil. Naturally, you can go to the NYMEX stock exchange and buy as many barrels as you want at the price of $74 now, but these are paper barrels. If you try to buy 10,000 barrels a day of real oil, of genuine barrels, you will have enormous problems getting that much oil on a regular and sustainable basis. So that is one of the problems that we will encounter in the medium term.

Any step you take today is to your advantage. I will give you one example. The city of Perth in Western Australia has free buses. I have been on these free buses. It is a fantastic service. Maybe today it is still too early. It might not be very economical but it is a marvelous step for the future, because one day it will pay enormous dividends, in my opinion. Also, they have a very light rail service going around 140 kilometers of their coast, and this links all of the suburbs. One day this light rail service will save all these suburbs. I was asked about this yesterday. I think that Western Australia is at the forefront of the world in terms of steps being taken. And Australia is at the forefront today of the other countries, because the other countries do not know anything at all and are not willing to prepare. So the faster these new decisions are put in place, I think it will be of benefit to any society, especially societies with suburbs.

Senator JOYCE—You said it is not really a perfect market. Yes, you can go to the New York Stock Exchange and buy oil, but it is paper oil; you are not buying the actual product. You have also talked about how the price of oil will possibly go to a horizon of about $300 a barrel. Of course, that would mean we would be paying about $6 a liter or something like that for fuel for our car, which obviously means we could not afford to fill up. Do you feel the major oil companies have the intention to exploit an arrangement which has the world paying $200 to $300 a barrel for oil? Obviously it would be in their financial interests to get to that position, because it is maximizing the returns on their stock on hand. Their stock on hand is the oil in the ground, and obviously there is a great financial windfall for them to keep the predominant means of internal combustion a mineral based oil product. The question I am asking is: will the oil companies drive the intention for people to continually use oil and be quite prepared to profit from a market of $200 to $300 a barrel? Will they ride us out to the very end? Will their intentions be to ride this cash flow window to its completion?

Dr Samsam Bakhtiari—I do not think it is in the interests of the oil companies for the price to go very high. I think they are very well satisfied with the present price, but I think it will not be in their hands. It will not be in the hands of the companies, it will not be in the hands of the oil producers. I can see Saudi Arabia and others being very worried by prices that are too high, but I do not think any one of these players can do anything about it.

When there is not enough oil, first you will have to raise its price and then you will have the problem of its availability. There may be some kind of worldwide rationing—I do not know. I am trying to look at the future but the future I am talking about, as you mentioned, might be beyond 2020. Maybe beyond 2020 we will have some reasonable idea. What will happen after that is very difficult to predict. I do not think the oil companies would like such a scenario at all. They will be forced—

Senator JOYCE—Who can afford oil at $200 a barrel? Who would be using it?

Dr Samsam Bakhtiari—I think the Chinese are ready to pay anything for oil. I agree with you that it will be very difficult.

Senator MILNE—Recently we had the head of BP in Australia talking about their statistical review. They take at face value the claims, particularly of Middle Eastern countries, about the extent of their reserves. We are aware that a few years ago these countries readjusted their reserves, yet there were no new discoveries that would have justified that. This is a really critical question to ask because it goes to the heart of the argument. Could you give us your frank appraisal of the Saudi reserves, in particular, and the Middle Eastern reserves, generally, and the extent to which they have been inflated for political and economic purposes et cetera and do not reflect what is actually there?

Dr Samsam Bakhtiari—Most reviews of the reserves of the major Middle Eastern countries today, especially the BP Statistical Review of World Energy, mention reserves amounting to between 600 billion and 700 billion barrels. These are official reserve figures—in other words, the countries involved say that they have so much oil reserves available. The Oil and Gas Journal and BP take these reserves at face value. As you mentioned, in the 1980s these reserves were revised upwards. For example, in 1988 Saudi Arabia, which had reserves of 160 billion barrels, suddenly took these up to 260 billion barrels. Since 1989, it has kept this number of 260 billion barrels; there has been no change to it up to this day. So, for 17 years, it is as if they have not produced anything.

In Dr Campbell’s opinion—and it is also my personal opinion—the reserves of the Middle East are roughly one half of what is officially said and presented. In other words, there should only be between 300 billion and 350 billion barrels of oil. This is the best figure I have come up with. Dr Campbell and I, as a rule of thumb, divide the official reserves by two to get a number that we believe is the actual amount of the reserves in these countries. Does that answer your question?

Senator MILNE—It certainly does. Can you go on to tell us what your view is of the US Geological Survey and its accuracy in terms of the reserves?

Dr Samsam Bakhtiari—Every institution gives its own numbers, and we can only compare theirs to ours. You can see that the endowment given by the USGS, which is over 3,200 billion barrels for the world, is much, much higher than the number we are using, of only 1,900 billion barrels. Of course, we cannot accept such reserves as realistic, just as we cannot accept as realistic the projections of certain institutions like the International Energy Agency in Paris, which predicts that the world will be consuming 118 million barrels per day in the year 2030, because I cannot see how the world can get over 81 or, say, 82 million barrels per day right now, let alone in the future. I believe we are in decline. So you have an enormous discrepancy between what these institutions publish and what we believe in, whether it is in reserves or in production of crude oil per day.

Senator MILNE—Given what you have said about the fact that the Middle Eastern reserves are probably half of what they say they are, and given what you have just said about the US survey, how are we going to tell? Given that the Saudis and the other Middle Eastern countries keep on saying that their reserves are the same—and they have been saying they are the same for all these years whilst production has kept on going—how are we going to know? What indications are there going to be so that we can revise the estimates to be more accurate? If they are half of what they say they are, then the shock in the share markets et cetera everywhere around the world will be huge. You mentioned before that they may not be able to manipulate it forever because of the horizontal wells and the step change that will occur. Is that the main indication—when one of the wells goes kaput? Or what will happen, in your view?

Dr Samsam Bakhtiari—From an outsider’s point of view, you have two ways of following what will happen. One is the price. The second is the production. If production for the next couple of years remains stagnant, then it will mean the institutions that are predicting production of over 100 or 110 million barrels per day are wrong. By the way, the future is always predicted wrongly. So that is one basis. The other way of following this is by the price. If you see the price returning to $50 and staying there, it will mean that we were wrong. But, if you see the price continuing to increase, it will prove that we have been right.

So these are the two ways you can follow the story, but I will return to the French philosopher Pascal. He said the best way may be to take a bet and bet that we are right, because the ones who bet that way have not much to lose. If we are wrong, everything is going to be fine. But, if we are right, I think the ones who took precautions will be very much rewarded in the future.

Senator MILNE—What do you regard as the most authoritative estimate of world reserves? You have spoken about Colin Campbell. Is there anything that you would refer to or would you argue that that is the most accurate assessment?

Dr Samsam Bakhtiari—No, I certainly believe it is the most accurate. I have studied almost all, though not all, of the reserve sets that I have been given or that I have come by. I can assure you that my personal archive is a very complete one. I have met almost everybody in this industry—especially at the world petroleum congresses, which were the Olympics of oil and were held every four years, before the internet age at least—and I really think that the 1,900 billion barrels in Dr Campbell’s set of data are the very best that you could find in the world today. I cannot imagine that we will have any better set in the future, especially given that Dr Campbell, with Mr Jean Laherrere, a petroleum consultant, has done very impressive research on almost all the oil provinces on the planet.

Senator JOYCE—Is that 1,900 billion barrels of recoverable oil from now to the end?

Dr Samsam Bakhtiari—1,900 billion barrels is the total estimate for conventional oil. You have the non-conventional, which includes, among others—

Senator JOYCE—Shale oil.

Dr Samsam Bakhtiari—the tar sands, the shale oil and the heavy oil of Venezuela’s Orinoco belt and all these kinds of oils, which are classified by Dr Campbell as non-conventional.

Senator WEBBER—I want to continue to explore the impact of price. Obviously the higher the price, the greater the impact on consumer behavior. In my home state of Western Australia, the higher price is making fields that were seen to be unprofitable worth developing. For example, we have all known that the Browse field has been there for a long time and now Woodside are looking at developing it. Could you give us an understanding of how an increase in price may bring other oilfields onto the market? I am asking about the relationship between the increase in the price and the increase in the development of fields that were previously seen as unprofitable. Does the increased price mean that there will be an increase in exploration with the result that new fields may come on stream?

Dr Samsam Bakhtiari—Many people are of the opinion that, with the price increasing, fields that before were not very profitable will now be developed. We will certainly see some of these factors coming into play. For example, you have exactly what you mentioned in the North Sea: small fields with 50 million to 100 million barrels of recoverable reserves were left by the wayside in the 1980s and 1990s, when it was not at all profitable to develop these fields at prices of $9 or $10 per barrel. These fields might very well be developed now at prices above $70. This will certainly happen not only in the North Sea but maybe also in America, where there are very small fields that now are going to be profitable and will be developed.

In my opinion, however many of these are developed in the future, it will have very little impact on either peak oil or world production. It might make a change of, say, half a million barrels in total, not more, and half a million barrels will have very little impact. It will just shift the production curve upwards a bit. The reason is this: if you look at the US decline curve, which was correctly predicted by Dr King Hubbert in 1956 and which peaked in 1970, it has been steadily coming down—but for the addition of Alaska. Alaska just shifted it a bit but it made no difference to the peak. It has been declining continuously since, notwithstanding the developments in exploration, exploitation and all the new technologies and the new investment that were possible at prices of $36 in the early 1980s. So I think that neither investment nor new technology will have any significant impact on the process of transition that we have entered.

Senator STERLE—Can you explain the claimed inadequacies of optimistic official agency predictions of oil production? We have had submissions from oil agencies that have told us that it is very rosy out there because they are spending lots of shareholders’ money—that is how rosy it is. Your report and your figures and Dr Campbell’s figures are at completely the opposite end of the spectrum. Can you explain how the oil agencies could be so far removed from your studies and be so different?

Dr Samsam Bakhtiari—Maybe one explanation could be that they are interested parties and we are disinterested parties. You hear some people saying today that the price of oil is going to drop to $25 in the near future; I think it is almost impossible for such a thing to happen unless there is a major catastrophe on a global scale. Maybe they are saying this because they want to grow and buy smaller oil companies. They might say that they will buy at $30 because the price is going to fall to $25, so $30 is a very good price and would be a very good price to pay for a small company. And there are other problems. Nobody likes the idea of peak oil. Firstly, you have the politicians. Naturally, a politician will never say that there is such a thing as peak oil. It is suicide to give bad news, so a politician will never do that. He will always say, ‘The IEA says that we will be having 118 million barrels in 2030, so why worry?’

Secondly, you have the media. The media does not like peak oil. Why? There is no sponsorship for peak oil. The oil companies do not like peak oil because you should not say that your soup is cold; you should always say that it is very hot and very tasty, yes? So nobody wants to hear of this phenomenon of peak oil. I believe that some of the institutions—I will not name them; they are here and maybe you can guess which ones they are—are saying these things to act as a protection for some politicians who can say: ‘Because these institutions are saying these things, then we follow them. We do not follow Campbell and others.’

Senator JOYCE—It could also inhibit the development of a biorenewable fuel industry too. If they say there is a lot of alternative product around, then they do not need a biorenewable fuel industry.

Dr Samsam Bakhtiari—I do not believe that there are alternatives around. In my opinion there is no alternative to crude oil. There is nothing that can replace it, and this is the problem the world is facing today. There are no alternatives and I will try to explain very briefly why.

In general economics we are taught a very basic rule: when the price goes up, demand comes down—and you have the marvelous diagram of Professor Samuelson to explain exactly how this works. For crude oil this does not work at all. We were always taught that when the price doubles, demand will come down by something. In the past two years the price has tripled and demand has not come down at all. How far can we go? Nobody knows. I think that it will take three digits—at least over $110 or $120—for us to start seeing demand maybe coming down.
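
The textbook rule Dr Bakhtiari is describing is the price elasticity of demand. A minimal sketch of the arithmetic (the function and the specific price and demand figures below are illustrative, not taken from the testimony, except the 81 million barrels per day and the tripling he cites):

```python
def price_elasticity(q0: float, q1: float, p0: float, p1: float) -> float:
    """Price elasticity of demand: percentage change in quantity
    divided by percentage change in price."""
    return ((q1 - q0) / q0) / ((p1 - p0) / p0)

# Textbook behavior: price doubles, demand falls 10 percent
print(price_elasticity(100.0, 90.0, 25.0, 50.0))   # → -0.1

# Oil as described in the testimony: price triples (say $25 to $75),
# demand stays at roughly 81 million barrels per day
print(price_elasticity(81.0, 81.0, 25.0, 75.0))    # → 0.0
```

An elasticity near zero is exactly his point: over this price range, oil demand did not respond to price at all.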

Why? Firstly, you have no way of preserving oil products easily—no way at all. We are all used to the car and we want to drive that car as far as we can possibly pay for it. Even at prices of $1.40 per liter for petrol, parts of the population are already beginning to have economic problems, so what will it be like when prices are much higher than that? $1.40 per liter is one of the cheapest prices in the Western world. It is just a little above fuel prices in California today, so it is very cheap.

Not only do you not have preservation, you do not have any means of substitution, and I will come back to your previous question on alternatives. There is no alternative to crude oil. For the ones who believe that GTL is going to be an alternative, I am sorry to say that this is not a fact.

Today you have only 85,000 barrels per day of GTL capacity in the world. I do not think you will ever have much more than that, and 85,000 is nothing. It is a drop of water in an ocean. The latest GTL plant has just been started in Qatar and I do not know how it is going to fare. It makes 34,000 barrels per day. It is an enormous plant. I think it cost one and a half billion dollars at least. It has two enormous reactors. If anything goes wrong with these reactors—my God, I do not know what is going to happen! So that is for GTL.

You have coal to liquid. The only coal to liquid plant today in the world is in Secunda in South Africa. It makes 150,000 barrels per day of liquids. I can tell you that because I have visited it, half by helicopter and half by walking around the facilities. It is a very messy affair and it is very inefficient energy wise. Now the Chinese are trying to make CTL—coal to liquid—of one million barrels per day capacity. I think it is going to cost them $10 billion at least. I cannot imagine how this site is going to be. I am waiting for them to finish, but it will probably take them quite a long time to get that one million barrels per day off the ground.

You mentioned ethanol, biodiesel and all that. This is not the future. This is not sustainable because in the future, if our predictions are correct, the No. 1 priority will not be transport and all that. The No. 1 priority is going to be food. And for food you will have to have top priority for fertilizer and insecticides and whatever you need to produce food only. So ethanol is a very, very wasteful system.

And again, however much you want to make some ethanol, it will still be a drop of water in the ocean. Just let me tell you that for every liter of ethanol you will need between three and four liters of water to produce it. The best way to go for these types of fuel, and certainly the most efficient way, is sugarcane. That is what the Brazilians are doing today. With sugarcane you need one square kilometre of sugarcane to produce 3,800 barrels of ethanol per year. It is not very easy and it is very inefficient.
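
To see the scale behind his "drop of water in the ocean" remark, here is a back-of-envelope check using only the numbers quoted in the testimony (the 3,800 barrels per square kilometre per year sugarcane yield above, and the 81 million barrels per day of world demand he cites earlier); it ignores that a barrel of ethanol carries less energy than a barrel of crude, which would make the real figure even larger:

```python
# Figures quoted in the testimony
ETHANOL_BARRELS_PER_KM2_YEAR = 3_800   # sugarcane ethanol yield per km^2
WORLD_OIL_DEMAND_BPD = 81_000_000      # world demand, barrels per day (2006)

# Land needed for sugarcane ethanol to replace all crude oil
annual_demand_barrels = WORLD_OIL_DEMAND_BPD * 365
area_km2 = annual_demand_barrels / ETHANOL_BARRELS_PER_KM2_YEAR

print(f"{area_km2:,.0f} km^2")  # roughly 7.8 million km^2 of sugarcane
```

That is more land than the entire area of Australia (about 7.7 million km²), which is why he treats biofuels as a dent at best.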

So I cannot see any of these alternatives coming up in the future in a big way. Now, certainly solar power will have a small role to play. Today it is still very expensive at between roughly $US7,000 and $US10,000 per kilowatt. But it could certainly play a role, especially in Australia where you have quite a lot of sun and quite a lot of land to develop that. Wind also, in windy countries, could play a small role. But these roles will amount to two to three, or maybe four, per cent of oil consumption over the next 15 or 20 years, and not more. The orders of magnitude are not at all the same. You will make a small dent with each one of these but not much more than a dent. Replacing crude oil is not that easy.

CHAIR—I would like to follow up on this issue of price. The Australian Bureau of Agricultural and Resource Economics—ABARE—in their submission to us have done predictions based on future oil costs of $US 30 per barrel. How realistic do you think that is?

Dr Samsam Bakhtiari—I believe you will never, ever see $US30 per barrel again unless you have a bird flu epidemic that wipes out at least millions of people or, as Senator Joyce said, something hits the planet and disrupts all calculations.

Senator JOYCE—That takes out Europe.

Dr Samsam Bakhtiari—If oil falls below even $US50 per barrel, that in my opinion would be very bad news, because if it goes back to, say, $US50 per barrel for some reason and for a short period of time, people will think: ‘Ah! So $US75 was just a spike and now we are back to the good old days and we can begin consuming again. Let’s go and buy that big SUV that we were looking at.’ You then lose two or three years at least.

CHAIR—My next question relates to the industry. BP when they made a presentation to the committee said that the prices now are basically the same proportionally as the spike in the 1970s. What is your opinion of those comments?

Dr Samsam Bakhtiari—If you take into account inflation, it is roughly the same—it was $US75 to $US80 in those days. But those were spikes. Today it is a totally different problem.

Today it is a transition into the unknown; then it was known. I am now personally of the opinion that if they had continued with the spikes we would have been much better off today. But they did not. After the two oil price shocks of 1973 and 1979 you had two price counter shocks in 1987 and 1998, when it dropped below $US10 per barrel. That was very bad news, because then demand started going up again. If all these reserves had been better controlled, maybe the transition would have been much easier. Just to remind you, in 1950, which is not that long ago, global consumption was only 10 million barrels per day. That was very easily controllable with the reserves we had. What is not easily controllable is the 81 million barrels per day that we have today.

CHAIR—I want to go back to the price per barrel. What is your understanding of what IEA is saying is the standard price per barrel in the world?

Dr Samsam Bakhtiari—It is very difficult to reply to that question because you have many costs per barrel, depending on whether they are onshore or offshore and whether those offshore are in shallow waters, deep waters or ultra deep waters. To make an average over all that is very difficult. I could not answer you. I can tell you that it is not $75 per barrel; it is certainly lower than that.

Senator MILNE—In your opening presentation, you said that you thought that in 2006 we had begun transition 1, and that it would be a relatively gentle stage, and then we would go to extreme discomfort, presumably in transition 2. Can you outline to me the time frames you see for each of the transition stages, and how they will proceed? What will trigger moving from transition 1 to transition 2? When do you expect the real crisis to hit in that transitional phase?

Dr Samsam Bakhtiari—Certainly. From now on, from 2006 to 2020, making predictions is an extremely difficult process, because we do not know exactly what to expect of these transition periods. But I have decided for the time being to split the next 14 years into four transition periods, which I call transition 1, 2, 3 and 4. Every transition period has a steeper gradient and I do not know exactly how long each of these will take, because it depends on many factors. Nevertheless, I envisage now that transition 1 should take three, four or five years, but I would have to revise this every three to four months.

Now I will try to explain to you when I predict will be the end of transition 1 by drawing you a model on the whiteboard. We are here in 2006, which is, according to my model, the first year of transition 1. And we want to go all the way to the end of transition 1. Here, in the world of oil, we have the following: today, we have a demand for oil which comes from all of the countries and the regions on earth. The demand is about 81 million barrels per day. This demand triggers a supply, which comes from two entities. The first entity is non-OPEC and the second entity is the 11 OPEC countries. The OPEC countries are the marginal producer—that is, whatever non-OPEC produces is subtracted from the demand, and what is left is what the OPEC countries are required to produce to make up the rest of the demand.
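
The marginal-producer arrangement he describes is a simple subtraction, often called the "call on OPEC". A minimal sketch (the demand figure is from the testimony; the non-OPEC supply figure is a hypothetical placeholder for illustration):

```python
def call_on_opec(world_demand_mbpd: float, non_opec_supply_mbpd: float) -> float:
    """OPEC as marginal producer: whatever non-OPEC produces is
    subtracted from world demand; OPEC supplies the remainder."""
    return world_demand_mbpd - non_opec_supply_mbpd

# 81 mb/d of demand (from the testimony) with a hypothetical 50 mb/d
# of non-OPEC supply leaves OPEC to produce the remaining 31 mb/d.
print(call_on_opec(81.0, 50.0))  # → 31.0
```

The tilting he describes later in T1 is this equation running in reverse: supply becomes the fixed quantity and demand must adjust to it.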

This is the system today. It is a very simple system. It has been in place since 1960, when OPEC was created. In my opinion, the international oil industry created the entity of OPEC for this very simple reason: to have a marginal producer. So far it has worked very well. But today OPEC is not playing its role, because it is producing flat out, which is not a good thing.

I will open a parenthesis here about the oil industry and the oilfields. There is nothing worse for an oilfield than to be pushed. I believe that is what is happening to oilfields like Ghawar and Cantarell. They have been pushed. A better example is the Samotlor oilfield of Russia, which was a marvelous oilfield that the Soviets in the 1980s, when they badly needed money to have a system that would be a rival to the American Star Wars, destroyed, in my opinion. It was an extraordinary oilfield which could produce three million barrels a day. Today it is only producing 300,000 barrels a day. If they had managed that oilfield better, I think they would have had a much higher return. Pushing an oilfield is not very good for it. Letting an oilfield rest is the best thing you can do for it. The Iraqis’ oilfields had a marvelous time during the 1990s because they rested for a long time. I would be glad if such a thing could happen to the Iranian super giants—if they could rest for some time. I think it would not be bad.

Coming back after this parenthesis to this system, between the beginning and the end of T1, you will have the two major scales tilting. At the end of T1 you will have a supply, and this supply is going to dictate the demand. Here you will have entities which will have the marginal demand. So it will be a totally different system from what we had at the beginning. It is this tilting of the scale that will, in my opinion, determine the end of T1. We have just begun shifting from one to the other.

In the time frame of T1, you might have some volatility in that it will start shifting to one side and then shifting back again to the demand side and going back and forth. So one has to be very careful. But in the end it will be the total shift that will in my opinion make the end of T1 clearer. About T2, T3 and T4, it is still very early. I am working on the next transition, but first we have to get this transition right.

One thing I might add about T1 is that I see not only that business as usual is not in the new rules but also that mega projects are not to be begun, because mega projects are long-term projects that take 10, 20, maybe 25 years. Because we do not know exactly where we are going at this stage, it is very dangerous to begin mega projects. But people are still doing this.

The Europeans have begun a freight train line from Barcelona to Kiev, which is roughly 2,600 kilometers. The idea of having freight trains is a very good idea, but it is a bit late now. If you have rails you might make the service a bit better, but you should not construct it from scratch because it will take 20 years and will never be finished because the high oil prices will trigger rises in prices for all other commodities.

You already see that steel is way above the usual prices. Copper has hit between $7,000 and $8,000, and it will go much higher than that. Nickel is $22,000. I think $22,000 is very cheap today; it will go much higher. All these commodities and all these metals will go very much higher, because it is the crude oil price which dictates the prices. Sugar is going up, orange juice is going up—everything is going up—because the price of crude oil is going up. It is the price of crude oil which more or less dictates all the other price hikes. In my opinion, you will have a correlation between all the price hikes in the future, and you can already see the first signs now.

Senator HUTCHINS—What do you see in transition phases 2, 3 and 4? Do you see any specific dates?

Dr Samsam Bakhtiari—No, not now, not yet. The gradients will get steeper, so the effects and the impacts will be greater. T1 is very benign; the gradient is very slow and you almost do not notice it. We will go from, maybe, 81 to 79.5 over the next few years; it is not difficult. But T2 will be much more difficult—it is already—because it will start dropping considerably; then you will notice the drops every year, probably, and then it will get worse and worse. It is a process, fortunately, where the introduction is easier than the following phases. But it is still very early to start predicting what T2 will do. Firstly, we have to see what T1 is going to do, because already, in many aspects, T1 is difficult to predict, with all the events that could take place in the next three to four years.

Senator HUTCHINS—What should governments do if you say that supply will determine demand?

Dr Samsam Bakhtiari—I think that every society, every city and every government should do a certain number of things—many things; 1,001 things. There are not one or two solutions. There is no panacea. There is no silver bullet that you can just shoot to get rid of this. You have to start as early as possible and think about this type of future. I do not think the Europeans are ever going to make it.

I do not think that Airbus A380 is a valuable airplane. It is a marvelous airplane, but it is arriving at the wrong time. They should have built it 20 years ago—and it would have been marvelous—when we were in the ascending curve of petroleum, not in the descending one, and not now that we have entered T1. I told them five years ago but naturally they did not want to listen at all, so they carried on. Now they have the problems and they are paying the penalties to all these companies already. It is still not commercial, and I do not know when it will become commercial. I do not see a very bright future for that.

There is not too much innovation now; there is certainly a returning to commodities and exploration. I know of a company in Australia that invested very heavily and has just found a brand new copper mine. That is fabulous, because the copper they are going to extract in a few years is going to make enormous profits. If you put money into oil exploration—whether onshore or offshore—almost whatever you find is going to make money. These are types of investment. Or you could invest in agriculture but not ethanol or biodiesel.

Senator HUTCHINS—Yes, I was going to ask you about that—You seem to be dismissive of alternative fuels.

Dr Samsam Bakhtiari—Yes. I do not think it is a very good idea. You can always try it on a small scale, but I think that, energy-wise, it does not make much sense. Now that we are in transition 1, I try to look at things from an energy point of view, not from an economic point of view. We do not know these days exactly what economics are. You have to think energetically and about the things you really need. For example, Western Australia—sorry, I am always coming back—

Really, I think Western Australia is doing all the right things. They were kind enough to have been the very first to invite me, and I am very happy for them. Western Australia does not have enough water and the water table is falling. It is a very big problem. They are putting in two desalination plants. They are obliged to put in two desalination plants. The desalination plant will need fuel—it will need gas—to run. In my opinion, they have no alternative so they are obliged to do this. When you are forced then you have to do it. I see that one problem in the future in Australia, much more important than the oil problem, is going to be water.

Your precipitation is going lower and lower. I heard that in June you had an average of only 14 millimeters of rain instead of the normal 108 millimeters. When I crossed from Perth to Sydney in the plane, over 3½ hours, what I saw was very dry. I think one of the problems is water. When you consider that every liter of ethanol or biodiesel will take between three and four liters of water then you start having a problem on the water side and on the energy side. I think you have to reconsider the economics of all of that in the near future.

Senator WEBBER—On that optimistic note—being a Western Australian—what do you consider the prospects for the future of gas as an alternative?

Dr Samsam Bakhtiari—Gas is the big issue, because we are not only having peak oil but, according to my prediction, in 2008 or 2009 we are also going to have global peak gas. Peak gas and peak oil are two totally different things because oil is a very special commodity. Gas is not the same because you cannot just put it in a ship. You either have to consume it locally, pipe it to some other country or put it in an LNG tanker. You have only those three alternatives. Fortunately, Australia has an enormous amount of gas, and I believe this is going to become very handy because the peak for gas will be between 100 and 105 TCF global production in 2008-09.

Because of this peak in gas, you will have enormous problems all over the world but firstly in the US. The price of gas is going to go sky high. Today, it is incredibly cheap. Gas in the US has a threshold price today of between $7 and $8 per million BTU. This is going to go much higher. Every year you will have to add $2 to $3 to that price. The US price is going to affect all the other prices, and it has already begun in South-East Asia. All that will be linked through the LNG price that you will have, and the price of LNG is going to go very high.

I think that Russia does not have much gas anymore, although it is the largest producer in the world. I am very worried for the Europeans, and probably this winter you will see that the Europeans are going to have an enormous number of problems. If it is a harsh winter in Europe, you might have thousands of people dying. You had hundreds last year, but that was only the beginning. If this winter is harsh, you will have thousands dying because the Russians simply do not have enough gas to provide to Europe.

The Americans do not have enough gas. The Americans had the incredible chance to have the mildest winter last year in 100 years. If that had not happened, I do not know where the price of gas would be today. That was very lucky, and they now have enough reserves for the coming winter because all the storage depots are almost full.

That is a positive point, but the Europeans do not have that kind of chance, so you will have lots of problems. The price of LNG is going to go sky high because everybody will want LNG—in America, Mexico and Canada, which are in full decline; in all the South-East Asian countries and especially in China; and even in Europe. If the Europeans cannot get the Russian gas, their only solution will be to get LNG from wherever they can.

I can tell you that, with gas prices in the US being around $6 per barrel, you have LNG spot sales today of $12 per barrel—and we are in a normal situation. So, wait for the panic and you will have prices of $25 or $30 per barrel, and maybe much more than that. For one week in March this year the British did not have enough gas and the price of gas shot up to $258 per barrel oil equivalent. At first I thought I had made a mistake of one decimal place, but then I realized it was not $25.8—it was $258. For one week they were paying that price for their gas.

And we are in a very normal situation now; we are not at peak yet. So you can imagine how it is going to be when it is at peak, with the panic in all those countries because of the winter months. Just wait and see how it develops this winter in Europe.

Senator WEBBER—That is pretty dark.

Senator JOYCE—Going back to the biorenewable fuels issue, ethanol is being used in Brazil, and the terminal gate price of ethanol in Australia is around 80c a liter, so the reason that it is not being utilized is that the oil companies refuse to take it up. I have heard of a lot of what is going wrong but what we are really looking for is the solution; we are looking for the way out. Or is the world as we know it going to come to an end and this is just a prologue to the end? We need to find the solution.

I do not say ethanol is a panacea but it is certainly a mitigating circumstance. We need to take it up. It could run conjointly with a whole range of issues. I have two questions. Firstly, if ethanol is not the answer, can you explain why it is being used so prolifically in places like Brazil, and why the United States, Europe and Asia are all taking it on board as a component of trying to deal with the impending oil crisis—or the oil crisis that is already here, apparently? Secondly, what is your solution? What is the noble horizon we need to head towards in order to maintain our current standards of living and economies?

Dr Samsam Bakhtiari—Allow me to take those questions one by one. First I will address the alternatives. Brazil can use ethanol as a fuel because of its enormous amount of sugarcane. There is also the idea of self-sufficiency. People like the Brazilians and the South Africans always have a complex about self-sufficiency. If the South Africans have gone after GTL and have pursued coal to liquids, it is because they want to be self-sufficient. It was not an economic decision; it was a political decision. I think the Brazilians are in somewhat the same situation. For them, because of the enormous amount of sugarcane they have, it does make some sense, but I really doubt that it makes a lot of sense in terms of energy. And I believe that, come the day there is conflict between producing ethanol or biodiesel and producing food, food is going to win because, first of all, you have to eat.

There is another danger in Brazil. They are destroying the Amazon rainforest at the rate of some 20,000 square kilometers per year and on that land they are planting food crops—in enormous amounts. I think that this will also be part of the future: when the other countries do not have enough food, they will go back to the Brazilians. Brazil has become one of the largest exporters of food in the world, whether it be soy beans, sugar, coffee or beef. It is almost anything. They have the surpluses. The Americans are also trying to get the ethanol. It makes a small dent for the time being, but not a very big one. I think that it is only a question of a few million gallons. I do not know what percentage you have, but it is not very much.

All of the others are trying. I heard there are a few million in Australia, but it will not make a very big difference, so I am not very keen on these types of bio alternatives. As for your second question about what should be done, there are many things.

Everyone should study their own situation and see what can be done with the possibilities at hand, and not one thing, not two, but 10, 20 or 50. In my opinion, the first thing is to develop free public transportation, and that applies to everybody. Make it free from now. Even if it does not make very much economic sense now, it will in the future. Certainly, there is absolutely no doubt, as you go into transition 1, that free public transportation has to make sense. That is one of the things.

There are many other things that you can do. Plan; get new ideas from the grassroots. That is what Perth has been trying to do, to congregate 1,200 people from different walks of life in teams of eight, give them each a computer and have all of these ideas go back to the top for the selection of the ones they think are viable and useful. Have teams of elders. You have a fantastic man out there, Mr Brian Fleay. He predicted peak oil in 1995. It is extraordinary what he did. He was maybe the second person, after Dr Campbell, to have done that. And he did it almost from scratch. So people like this could have predicted that in 1995—in 1995 he wrote his book, so he must have predicted it in 1993 or 1994.

Senator JOYCE—Sorry, I have missed something. What is this team of elders?

CHAIR—What he is talking about is dialogue with the city.

Dr Samsam Bakhtiari—Yes, to have these people present their ideas and solutions, and then to build on that through a committee of elders. Or create steering committees through such people, and then get younger people to come in, very bright people, to start setting the priorities, because one day you will have to set priorities for the use of petrol. Have these in place soon, maybe in the next year or two. You will not need them in the next year or two, but have them in place already so that you are prepared. Get prepared for any eventuality. Have a special committee for that now. That is what I can see. I can advise that such things should be done this year or next year so that when or if the crisis really hits, then you have something to fall back on; you have a team that is already prepared and who has thought these problems through.

Thinking about these problems is very important, but there is something else. It is going to be very, very difficult to change the minds, to have the minds set on the new realities. For six generations we have been thinking one way—that is, that petrol is always there, petrol is not too expensive, oil products are not too expensive. We do not think about it. We do not think about fertilizers. We do not think about insecticides. Why? They are not that expensive, so it does not come into the day-to-day consideration. Petrol was always $1, not that much of a problem. We are used to that. The problem is going to be when it becomes $3 or $4 or $5. Then people will notice. Already at $1.40, some people are beginning to think about it, so when it becomes higher they have to change their minds, their way of thinking and their way of planning.

Senator JOYCE—But changing the way people think is a very hard task. That is not really a solution; it is nirvana. I want to go back to shale oil. They say there are three trillion barrels of shale oil equivalent in China and two trillion barrels in the United States, and I think we have 440 billion barrels of equivalent shale oil between Proserpine and Gladstone. Surely if the price of oil keeps heading north, this potential oil will begin to be exploited. Can you give me your impressions? You have gone through gas to liquid and coal to liquid. Do you have any opinions on the shale oil issue?

Dr Samsam Bakhtiari—Yes. There is a lot of shale—many thousands. There is an enormous amount of oil in there, but it is a very messy and difficult industry. In Canada, you have about 1.1 million barrels per day of synthetic crude oil produced, which is being exported mostly to the US, and which makes economic sense, especially at the prices of $74 to $75 per barrel. I think it costs them around $30 to $40 per barrel, so they are making some money. But I think it is limited, and I think the limits to that industry are, according to my prediction, roughly three million barrels per day. I cannot see Canada or the US together making more than three million barrels per day at the 2020 or 2025 horizon, investing enormous amounts of money. The shale oil industry is like the oil industry. You go to the best places first, naturally. And then, as you go along, it gets more difficult, it gets more expensive and it gets messier. I think you need roughly 2,000 tons of oil shale to make one barrel of synthetic crude oil. You can imagine, on an enormous scale, what that involves for the land and for everywhere else.

Already, at the level of 1.1 million barrels a day, the Canadian rivers are becoming so polluted as to have triggered alarm bells over Canada; the fish are dying and it will soon be impossible to clean up all the rivers. There are side problems for that as well. If one day we reach three million barrels per day I do not know what the situation will be there, but I do not think we can go further than three million; that is it.

There is also the heavy oil in Venezuela. Today there are 600,000 barrels of capacity. I do not think the Venezuelans can go beyond twice that amount, and with the government they have now they are stuck with their 600,000. I do not think anybody will be willing to invest in such expensive and difficult processes of exploitation. But even if the conditions were right I think they can go to 1.2. I really cannot see them going much further than that. So, yes, there is the potential but you have to transform the potential into production.

I forgot to tell you about the tar sands and the shale oil. All the heat you need for that comes from natural gas. You are spending 1.5 million BTUs for every barrel you are going to produce; that makes a lot of gas. What the Americans are beginning to tell the Canadians is, ‘We’d rather have this gas than anything else.’ So you have other problems that arise in this exploitation—at most, three million for tar sands and shale and one million for the Orinoco heavy oil. That makes a total of four million over the next 20 or 25 years. It will not change a thing for people—it is a drop of water—in the 81 we are facing now.
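The gas requirement quoted here is easy to check with back-of-the-envelope arithmetic. A minimal sketch in Python, assuming the quoted 1.5 million BTU of gas per barrel of synthetic crude, the standard approximation of about 1,000 BTU per cubic foot of natural gas, and the predicted three-million-barrel-per-day ceiling (the function name and structure are mine):

```python
# Back-of-the-envelope check of the natural-gas input quoted for tar sands
# and shale oil: 1.5 million BTU of gas per barrel of synthetic crude.
# The ~1,000 BTU per cubic foot heating value is a standard approximation.

BTU_PER_BARREL = 1.5e6       # gas input per barrel of synthetic crude (quoted)
BTU_PER_CUBIC_FOOT = 1_000   # approximate heating value of natural gas

def gas_bcf_per_day(barrels_per_day: float) -> float:
    """Gas consumed, in billion cubic feet per day, for a given output."""
    return barrels_per_day * BTU_PER_BARREL / BTU_PER_CUBIC_FOOT / 1e9

# At the predicted 3 million barrels/day ceiling:
print(gas_bcf_per_day(3_000_000))  # → 4.5
```

At that ceiling the industry would burn roughly 4.5 billion cubic feet of gas per day, which makes the reported American interest in the Canadian gas itself easier to understand.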

Senator JOYCE—Everyone knows about the price of fuel in Venezuela—I think you can buy a liter of petrol for 6c or 7c or something; it is still cheap—and we know what the price of petrol is on the streets in Australia. The organizations that control basically from the wellhead to the bowser are predominantly the same four major oil companies. We know that the price of Chevron has gone through the roof and that the price of Caltex domestically has gone through the roof, so they are making a far greater return on their asset. Can you say what you believe is their interest in the future—where oil prices are going? Can you also give some sort of indication about what sort of control the major oil companies have through the whole process of oil production as it stands today, from the oilwell to the bowsers? What form of control do they have over the total production of that product? What sorts of profits do you think they would intend to make in the future?

Dr Samsam Bakhtiari—I think that oil companies are like all corporations: they want to make profits, and they want to make the highest return for their shareholders. In 2005, they set new records in every country for profits. I think that in 2006 they will have far higher returns and record profits of, maybe, $50 billion for Exxon or something like that. It will be roughly the same, maybe $40 billion, for BP and a bit less, maybe, for Shell. Their shares will be reevaluated all the time as the price of oil goes up—and, as I told you, it can only go up.

But they control part of the system. You have many players. You have the national oil companies now, like Saudi Aramco, the National Iranian Oil Company and the national oil companies of Kuwait or Qatar. The oil companies control part of the system and it seems that their share of oil production is beginning to decline as well. It is still quite substantial, but it is also beginning to decline. Naturally, I think they are in it for the profits, and they control wherever they are from the wellhead all the way down to the retail. I think they get profit centers all along the way, and they are making enormous profits.

Senator JOYCE—The issue I am getting at is a transfer pricing issue. By the time the fuel gets to Australia, the same controlled entity has already made its profit offshore; Australia is only the final stage. The purpose of the Australian retail market is just to move the product, not to make the profit, because the profit has been made before the product actually arrives in Australia. That would be a fair statement, wouldn’t it? Everyone talks about the terminal gate price of fuel as if that were the true price. Therefore, it would be the intent of the oil industry to keep exclusively their product out there in the market and not encourage an alternative market apart from their product, which is oil.

Dr Samsam Bakhtiari—Yes. Certainly that is one of the goals of any corporation which makes a product: not to have rivals in the field and to try somehow to destroy them or not let them in. Certainly you have this factor. I do not think that any oil company would be very happy to see an enormous boom in biodiesels, unless they could control it, which they cannot. So it would certainly not be in their interest to see alternatives flourish. Some oil companies want to get into solar and into other types of alternatives, but I do not think it is their job or their way of doing things. Somebody else is going to do it much better than that.

Senator STERLE—I have two questions. If we were to take all the alternatives around the world—solar, hydro, gas, CTL, GTL and all those—how far would they go toward satisfying our thirst for oil? Could we supply the world’s demands? Nowhere near it?

Dr Samsam Bakhtiari—Very, very little. In any scenario and in any field for the next, say, 20 years: very, very little. It is a drop of water. If you make the calculation of increasing even by 100 per cent every single year, it is still a drop of water in solar, in biodiesel, in anything.

Senator STERLE—So there really is no alternative at this stage?

Dr Samsam Bakhtiari—No.

Senator STERLE—You spoke about Western Australia and the free public transport. I think it is going to send some ripples, but we really are faced in the world today—and I can only talk of Australia and my home state in particular—with some very hard decisions to be made.

Dr Samsam Bakhtiari—Yes.

Senator STERLE—It will bring in a lot of side issues of employment and revenue for governments—all sorts of things will pop up. If we are not fair dinkum in what we are leaving for the next generation—for our environment, our economies, our communities and our world— we really are in serious trouble. I pick up on that earlier comment you made about public transport and integrating public transport in trains and buses and whatever else there might be. It is not nirvana; it is a reality that we really are confronted with and we have to face.

Dr Samsam Bakhtiari—Yes. Provided that our models and our predictions are correct, this is exactly what you are going to face very soon. I do not want to be more negative, but I have started looking into T2, T3 and T4, and, my God, there are some things I started seeing down there that really send shudders up my spine. But I will spare you that today. Maybe that is for another time.

But I entirely agree with your statement. It should be done if only to get prepared so that if things go the wrong way you have something to fall back on—that you have some organization which you have already set up. As the crisis develops you develop this organization and make it ever bigger and more powerful to take care of the crisis.

There are companies which are employing 300,000 people in 140 countries who do not know a thing about peak oil. I do not know how they are going to react tomorrow.

The Europeans do not want to believe this reality. Next year they are going to start—they have already started—dying from the cold. According to my statistics, at least 900 people in eastern European countries froze to death last year. This year it is going to be double or triple that amount. This is the reality already. When there is a real crisis, how are they going to react?

The most important point is that governments must not cause people to panic. The worst reaction to this type of crisis would be panic. If governments are not prepared there will be panic. The more prepared governments and institutions are, the less panic you will have. Panics are very costly. I entirely agree with what you just said. There is still time to get prepared. We are not that far down the T1 slope. It will be a very slow development, so there is time.

Senator STERLE—Apart from what you saw in Perth with the free public transport around the CBD, are any other countries taking that lead?

Dr Samsam Bakhtiari—No, nobody. There might be a city or two, but I have not heard of any that have taken this drastic step already, and I have not seen such things at all. I can tell you that the future is to rails because rails are the most fuel-efficient system. Would you like to see some figures on that? I can illustrate this for you on the whiteboard. This will give you an order of magnitude. In ton-kilometers per liter of fuel, airplanes are between two and three, cars are between 10 and 22, trucks are between 65 and 85 and trains are around 320. So on these very simple figures, I think you can see that the future is to trains, but not trains that you build now; trains that you already have and that you are going to spend money on. I have heard that Sydney in 2006 is planning to spend half its budget on roads and other infrastructure and half on public transportation—it seems to be roughly fifty-fifty. I think that as soon as you change this percentage towards rail and public transport, fuel efficiency might begin to make some sense. I think you can see the future here.
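The whiteboard figures can be turned into a small worked example. A sketch assuming the quoted ton-kilometer-per-liter values (midpoints taken where a range is given) and an arbitrary illustrative task of moving 1,000 tons over 1,000 kilometers:

```python
# Illustrative comparison of freight fuel use by mode, using midpoints of
# the ton-kilometers-per-liter figures quoted in the testimony. The task
# (1,000 tons over 1,000 km) is an arbitrary choice for illustration.

EFFICIENCY_TKM_PER_LITER = {   # ton-km moved per liter of fuel
    "airplane": 2.5,           # quoted range: 2-3
    "car": 16.0,               # quoted range: 10-22
    "truck": 75.0,             # quoted range: 65-85
    "train": 320.0,            # quoted figure
}

def liters_required(tons: float, km: float, mode: str) -> float:
    """Fuel (liters) needed to move `tons` of freight over `km` by `mode`."""
    return tons * km / EFFICIENCY_TKM_PER_LITER[mode]

for mode in EFFICIENCY_TKM_PER_LITER:
    print(f"{mode:>8}: {liters_required(1000, 1000, mode):>10,.0f} L")
```

On these assumptions a train does the job on about 3,125 liters, a truck needs over four times that, and an airplane needs over a hundred times as much, which is the point being made on the whiteboard.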

CHAIR—It is not planes.

Dr Samsam Bakhtiari—Aeroplanes will be the first casualty in the system. They are already making losses. I do not know how they can carry on, because the price of jet fuel is directly proportional to the price of crude oil. It is not like petrol. Petrol is very much cheaper because you have hidden subsidies and you have the taxes, naturally.

Senator MILNE—I have a strategic question about Iran’s contribution to global oil supply as well as to gas. What percentage of global reserves does Iran hold? If Iran were to stop supplying overnight for a geopolitical reason, what impact would that have on 81 million barrels used per day? In other words, T1 is assuming everything goes along smoothly. Let us assume there is a geopolitical crisis and Iran decides to stop supplying into that 81 million barrels a day. What impact would that have?

Dr Samsam Bakhtiari—At present I think that Iran is supplying roughly two million barrels of oil for exports. In the case of some geopolitical problem, you would have to take the two million out of the 81 million. That in itself would not be very harsh. Why? Because major consuming countries have their strategic petroleum reserves. They could start taking it out of their reserves. The latest data on the US SPR is that they have 688 million barrels in their reserves. I believe that the Japanese must have something around 120 million barrels. The Europeans, all together, have roughly the same amount as the Japanese. The Chinese are trying to build up a strategic reserve of roughly 40 million barrels, but they have not started yet. Maybe they hope for the price of crude oil to come a bit lower before they start. They could do that. What would be impacting heavily on the price is the psychological impact of any geopolitical happening, whether in the Persian Gulf or in South-East Asia. Because the leeway in T1 is extremely small—as I have tried to mention to you—the slightest impact geopolitically will have enormous consequences. If you had in Saudi Arabia, for example, or anywhere else, some two million to three million barrels of spare capacity—that you usually had before—then people would not be so worried about this geopolitical impact. But you do not have spare capacity anymore. I do not believe the Saudis have any spare capacity today, although they say they have a million or 1½ million barrels. They have no spare capacity. Nobody, in my opinion—neither OPEC, nor non-OPEC, nor the Russians, nor the Saudis—has any spare capacity. It would have an enormous impact. The price could go anywhere.
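The reserve figures quoted in this answer allow a rough coverage calculation. A sketch assuming the quoted stock levels and ignoring the drawdown-rate limits of real reserve facilities (the Chinese reserve is excluded because, as stated, it has not been built yet):

```python
# Rough coverage arithmetic: how long the quoted strategic petroleum
# reserves could offset the loss of Iran's ~2 million barrels/day of
# exports. Drawdown-rate limits of real SPR facilities are ignored.

RESERVES_MB = {            # strategic reserves, million barrels (quoted)
    "US": 688,
    "Japan": 120,
    "Europe": 120,
}

def days_of_cover(shortfall_mb_per_day: float) -> float:
    """Days the combined reserves could replace the given shortfall."""
    return sum(RESERVES_MB.values()) / shortfall_mb_per_day

print(f"{days_of_cover(2):.0f} days")  # → 464 days
```

Physically the stocks could bridge such a disruption for over a year, which is why the testimony stresses that the real danger is the psychological impact in a market with no spare capacity, not the barrels themselves.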

I will give you just one example of what we in NIOC did in 1975 after the first price shock, when the price went from roughly $2 per barrel to $11 per barrel. To find out what the real price was, NIOC set up an auction, saying, ‘We have a few barrels and we are going to auction these barrels, so whoever is interested should give us a bid.’ Through the bids, we found out what the real price was. Some bids were up to $41. There were people who were willing, when the official price was $11 per barrel, to pay $41.

Then you have the problem that the national oil companies today in the Middle East and in OPEC are not what they were in the past. That is another problem. If there is a disruption, as long as the system is working, you have little problem. It just goes on and on. You see that in cases of earthquake or catastrophe. Once there is a catastrophe, it is very difficult to put it back to the way it was before. You see it taking 10, 12 or 15 years to bring it back. If you have geopolitical problems in the Middle East, it will be very difficult after the crisis has been fortunately somehow solved to put the system back to where it was before. For all these reasons—and because of the herd instinct and the panic that might follow—you could easily have prices doubling overnight. If somebody were smart enough to have an auction, you would see prices that even I could not imagine today.

Senator MILNE—You have just talked about the strategic ramifications of even two million barrels being taken out. Australia, as you know, has just signed up to long-term gas exports to China at a fixed price. Given what you have just said, that looks like an increasingly bad deal.

Dr Samsam Bakhtiari—At a fixed price?

Senator MILNE—That is what I said. Yes, I can see that you are not impressed by the brilliance of that and neither are we, but nevertheless the Prime Minister and Premier Wen both opened the terminal in China recently, celebrating Australia selling bulk gas at a fixed price—to the horror of much of our country. But some people are saying that, given what we are seeing with peak oil and approaching peak gas, and given Australia’s wealth in gas and the importance of gas as a transition fuel, Australia ought not be exporting gas; we should be keeping gas as a transition fuel as transition 1, if you like, gives way to the more difficult transitions 2, 3 and 4. What is your view about that?

Dr Samsam Bakhtiari—I cannot comment on political decision-taking by national politicians but I believe that gas is a very strategic commodity today and the more you have the better it will be. You will certainly see in the next few years, even during transition 1, cases of what they call in international law ‘force majeure’ and when you are confronted with force majeure then there are many decisions that you can take.

Natural gas is certainly a strategic commodity today and commodities are becoming very strategic. Commodities like coal and copper, which do not seem to be very strategic, are very strategic. Uranium, for example, is already costing $47 or $48, which is still very cheap. Uranium was $10 not so long ago when nobody was thinking about it, but I can see uranium going way over $100 a pound. All other commodities are important, but natural gas is a very strong commodity. You can always use it domestically in the long term and I can see that happening easily for gas.

CHAIR—What would you recommend that we invest in? As a committee we need to make recommendations against our terms of reference, so what would you suggest we recommend should be the focus of government to deal with this issue?

Dr Samsam Bakhtiari—It is a very difficult question, but I would have one major recommendation, and Senator Siewert touched upon it: to create some kind of national steering committee of experts in the field, reporting perhaps to this committee, to study all these questions as fast as possible; then, under the aegis of this steering committee, perhaps create a very small executive committee to study all that and set the priorities so that you have something that is working. That is the only thing that I could recommend now—to study.

CHAIR—Where do ships fit in your chart? You have airplanes, cars, trucks and trains. Where does sea transport fit in?

Dr Samsam Bakhtiari—Ships are way down. Shipping is marvelous, in terms of energy efficiency, whether it be cargo or container ships. That is marvelous. Shipping is very good.

CHAIR—One of the scenarios into the future is likely to be that there will be less air travel and more ship transport and cargo.

Dr Samsam Bakhtiari—Yes, certainly. Airplanes in transition 1 are at risk. They are already at risk today and they are going to be much more at risk than that. Air travel will have to be more and more reduced in the future and it is going to be more and more expensive. Shipping will come back because the factor of time is not going to be as important as the factor of energy efficiency.

CHAIR—If I understand you correctly, you are saying that we should be investing now as a matter of priority in public transport.

Dr Samsam Bakhtiari—Certainly, yes. Right now. As soon as possible. Start tomorrow on public transport. It is better than starting the day after tomorrow. You also have the problem that, at some stage, you will not be able to invest that easily. The further we go down the line, investment gets more difficult. People who think they will undertake projects in 10 years time do not realize the problems of making these projects. I will give you two examples. The Europeans have woken up to this lately. They now want to bring gas from the Persian Gulf to Europe, but that is a 20-year project and it will cost at least $25 billion. It is not feasible today. They are dreaming. And even if they think of putting a gas pipeline from Iran to Pakistan to India, they are also dreaming. You cannot do that today. It is too late. You could have done that as long as you were on the curve, but when you are on the top the projects have to be smaller and smaller and you have to start them as soon as possible, and not get caught up by the events. It is a different way to do things.

Byron King, who writes the “Whiskey & Gunpowder” column, wrote to Bakhtiari asking him to further explain his thinking on T1 through T4. Here’s what he said:

“The four Transition periods (T1, T2, T3, and T4) will roughly span the 2006-2020 era. Each Transition [will] cover, on average, three to four years.

The major palpable difference between the four Ts is their respective gradient of oil output decline – very small for T1, perceptible for T2, remarkable in T3, and rather steep for T4. In fact, this gradation in decline is a genuine blessing for those having to cope and adapt.

It should be borne in mind that these four Ts are only an overall theoretical structure for future global oil output. The structure is thus so orderly because [it is] predicted with ‘Pre-Peak’ methods, ‘Pre-Peak’ assumptions, and [a] ‘Pre-Peak’ set of rules.

The problem is that we now are in ‘Post-Peak’ mode, and that none of [the] above applies anymore.

The fact of being in ‘Post-Peak’ will bring about explosive disruptions we know little about, and which are extremely difficult to foresee. And the shock waves from these explosions rippling throughout the financial and industrial infrastructure could have myriad unintended consequences for which we have no precedent and little experience.

So the only Transition we can see rather clearly (or rather, we hope to be able to comprehend) is T1. It is clear that T1 will witness the tilting of the ‘Oil Demand’ and ‘Oil Supply’ scales — with the former dominant at the onset and the latter commanding toward the close (say, by 2009 or 2010).

But even during that rather benign T1, the unexpected might become the rule and the orderly ‘Pre-Peak’ rapidly give way to some chaotic ‘Post-Peak’.

In any instance, the overall structure of the ‘Four Transitions’ is a general guideline for the next 14 years or so — as far as global oil output is concerned. In practice, reality might prove to be worse than these theoretical Transitions; but certainly not better.

I also agree that at the junction of two Ts, there should be some kind of a milestone. For example, at the close of T1, Supply should totally dominate Demand…I am toying with [the] idea, very preliminary, that close of T2 could be OPEC [oil production] surpassing non-OPEC [oil production], although OPEC died in 2004”.

Posted in GOVERNMENT, Other Experts, Politics | Comments Off on Australian government was Peak Oil Aware in 2006