Life After Fossil Fuels: manufacturing will be less precise

Preface. This is a book review and excerpts of Winchester’s “The Perfectionists: How Precision Engineers Created the Modern World”. The book describes how the industrial revolution was made possible by ever greater precision. First came the steam engine, which became practical to build once a way was invented to machine its cylinders to a precision of one-tenth of an inch, so that the steam didn’t escape. By World War II parts could be made precise to within a millionth of an inch, and today to 35 decimal places of precision (a tolerance of 0.000 000 000 000 000 000 000 000 000 000 000 01), which is required for microchips, jet engines, and other high-tech.

This amazing precision is achieved with machine tools, which make precise parts by shaping metal, glass, plastic, ceramics, and other rigid materials: cutting, boring, grinding, shearing, squeezing, rolling, stamping, and riveting them. Most precision machine tools are powered by electricity today; in the past they were driven by steam engines.

Machine tools also revolutionized our ability to kill each other.  Winchester writes: “When any part of a gun failed, another part had to be handmade by an army blacksmith, a process that, with an inevitable backlog caused by other failures, could take days. As a soldier, you then went into battle without an effective gun, or waited for someone to die and took his, or did your impotent best with your bayonet, or else you ran. Once a gun had been physically damaged in some way, the entire weapon had to be returned to its maker or to a competent gunsmith to be remade or else replaced. It was not possible, incredible though this might seem, simply to identify the broken part and replace it with another. No one had ever thought to make a gun from component parts that were each so precisely constructed that they were identical one with another.”

Machine tools cannot be used for wood because it is flexible: it swells and contracts in unpredictable ways. It can never hold a fixed dimension, whether planed or jointed, lapped or milled, or varnished to a brilliant luster, because wood is fundamentally and inherently imprecise.

Since both my books, “When Trucks Stop Running” and “Life After Fossil Fuels”, make the case that we are returning to a world where the electric grid is down for good, and where wood is the main energy source and infrastructure material once fossil fuels become scarce, the level of civilization we can achieve will depend greatly on how precisely we can make objects in the future.  Because wood charcoal makes weaker, inferior iron, steel, and other metals than coal does, today’s precision will no longer be possible. Microchips, jet engines, and much more will be lost forever.  And because reliance on wood leads eventually to deforestation, the lack of charcoal will mean orders of magnitude less metal, brick, ceramic, and glass production. Since peak coal is here, and the remaining U.S. reserves are mostly lignite, poorly suited to the high heat needed in manufacturing, civilization as we know it has a limited time span.

“The Great Simplification” will reduce precision. The good news is that hand-crafting of beautiful objects will return, a far more rewarding way of life than production lines at factories today.

Alice Friedemann   www.energyskeptic.com  Author of “Life After Fossil Fuels: A Reality Check on Alternative Energy” (2021, Springer), “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer), “Barriers to Making Algal Biofuels”, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

***

Winchester, S. 2018. The Perfectionists: How Precision Engineers Created the Modern World. HarperCollins.

Two particular aspects of precision need to be addressed. First, its ubiquity in the contemporary conversation—the fact that precision is an integral, unchallenged, and seemingly essential component of our modern social, mercantile, scientific, mechanical, and intellectual landscapes. It pervades our lives entirely, comprehensively, wholly.

Because an ever-increasing desire for ever-higher precision seems to be a leitmotif of modern society, I have arranged the chapters that follow in ascending order of tolerance, with low tolerances of 0.1 and 0.01 starting the story and the absurdly, near-impossibly high tolerances to which some scientists work today—claims of measurements of differences of as little as 0.000 000 000 000 000 000 000 000 000 01 grams, 10 to the -28th grams, have recently been made, for example—toward the end.

Any piece of manufactured metal (or glass or ceramic) must have chemical and physical properties: it must have mass, density, a coefficient of expansion, a degree of hardness, specific heat, and so on. It must also have dimensions: length, height, and width. It must possess geometric characteristics: it must have measurable degrees of straightness, of flatness, of circularity, cylindricity, perpendicularity, symmetry, parallelism, and position—among a mesmerizing host of other qualities even more arcane and obscure.

The piece of machined metal must have a degree of what has come to be known as tolerance. It has to have a tolerance of some degree if it is to fit in some way in a machine, whether that machine is a clock, a ballpoint pen, a jet engine, a telescope, or a guidance system for a torpedo.

To fit with another equally finely machined piece of metal, the piece in question must have an agreed or stated amount of permissible variation in its dimensions or geometry that will still allow it to fit. That allowable variation is the tolerance, and the more precise the manufactured piece, the tighter (smaller) the tolerance that will be needed and specified.
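
To make the idea concrete, here is a minimal sketch (mine, not Winchester's, with invented dimensions) of how a stated tolerance decides whether two mating parts will always go together without hand-fitting, which is the whole point of interchangeable manufacture:

```python
# Minimal illustration (not from the book): a dimension with a stated tolerance,
# and a check that any allowed shaft will clear any allowed bore.
# All numbers are invented for the example.
from dataclasses import dataclass

@dataclass
class ToleratedDimension:
    nominal: float    # intended size, in inches
    tolerance: float  # permissible variation, plus or minus, in inches

    @property
    def largest(self) -> float:
        return self.nominal + self.tolerance

    @property
    def smallest(self) -> float:
        return self.nominal - self.tolerance

def always_fits(shaft: ToleratedDimension, bore: ToleratedDimension) -> bool:
    """True if even the largest allowed shaft clears the smallest allowed bore."""
    return shaft.largest < bore.smallest

# A 1-inch shaft in a 1.05-inch bore, made to Wilkinson's 0.1-inch tolerance,
# may or may not fit; the same pair made to 0.001 inch always will.
loose_shaft = ToleratedDimension(1.000, 0.100)
loose_bore  = ToleratedDimension(1.050, 0.100)
tight_shaft = ToleratedDimension(1.000, 0.001)
tight_bore  = ToleratedDimension(1.050, 0.001)

print(always_fits(loose_shaft, loose_bore))  # False: parts must be hand-fitted
print(always_fits(tight_shaft, tight_bore))  # True: any shaft fits any bore
```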

The tolerances of the machines at the LIGO site are almost unimaginably tight, and the consequent precision of its components is of a level and nature neither known nor achieved anywhere else on Earth. LIGO is an observatory, the Laser Interferometer Gravitational-Wave Observatory.  The LIGO machines had to be constructed to standards of mechanical perfection that only a few years before were well-nigh inconceivable and that, before then, were neither imaginable nor even achievable.

Precision’s birth derives from the then-imagined possibility of holding and managing and directing this steam, this invisible gaseous form of boiling water, so as to create power from it.

The father of true precision was an eighteenth-century Englishman named John Wilkinson, who was denounced sardonically as lovably mad, and especially so because of his passion for and obsession with metallic iron. He made an iron boat, worked at an iron desk, built an iron pulpit, ordered that he be buried in an iron coffin, which he kept in his workshop (and out of which he would jump to amuse his comely female visitors), and is memorialized by an iron pillar he had erected in advance of his passing in a remote village in south Lancashire.

Though the eventual function of the mechanical clock, brought into being by a variety of claimants during the fourteenth century, was to display the hours and minutes of the passing days, it remains one of the eccentricities of the period (from our current viewpoint) that time itself first played in these mechanisms a subordinate role. In their earliest medieval incarnations, clockwork clocks, through their employment of complex Antikythera-style gear trains and florid and beautifully crafted decorations and dials, displayed astronomical information at least as an equal to the presentation of time.

The behavior of the heavenly bodies was ordained by gods, and therefore was a matter of spiritual significance. As such, it was far worthier of human consideration than our numerical constructions of hours and minutes, and was thus more amply deserving of flamboyant mechanical display.

John Harrison was the man who most famously gave mariners a sure means of determining a vessel’s longitude. This he did by painstakingly constructing a family of extraordinarily precise clocks and watches, each accurate to within a few seconds over years, no matter how sea-punished its travels in the wheelhouse of a ship.

An official Board of Longitude was set up in London in 1714, and a prize of £20,000 was offered to anyone who could determine longitude to an accuracy of 30 miles. John Harrison, after a lifetime of heroic work on five timekeeper designs, would eventually claim the bulk of the prize.
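
The arithmetic behind the prize is worth a quick sketch (mine, not the book's). The Earth turns through 15 degrees of longitude every hour, so a clock error translates directly into a position error; taking the 30-mile limit as roughly 30 nautical miles at the equator, a chronometer on a six-week crossing could afford only about three seconds of drift per day:

```python
import math

EARTH_ROTATION_DEG_PER_SEC = 15.0 / 3600.0   # the Earth turns 15 degrees of longitude per hour
NAUTICAL_MILES_PER_DEG = 60.0                # one degree of longitude spans ~60 nm at the equator

def position_error_nm(clock_error_seconds: float, latitude_deg: float = 0.0) -> float:
    """Longitude error, in nautical miles, produced by a given clock error."""
    error_deg = clock_error_seconds * EARTH_ROTATION_DEG_PER_SEC
    return error_deg * NAUTICAL_MILES_PER_DEG * math.cos(math.radians(latitude_deg))

# A 2-minute clock error at the equator is already ~30 nautical miles:
print(round(position_error_nm(120), 1))   # -> 30.0

# Spread over a 42-day Atlantic crossing, that is under 3 seconds of drift per day:
print(round(120 / 42, 2))                 # -> 2.86 seconds per day
```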

The fact that the Harrison clocks were British-invented and their successor clocks firstly British-made allowed Britain in the heyday of her empire to become for more than a century the undisputed ruler of all the world’s oceans and seas. Precise-running clockwork made for precise navigation; precise navigation made for maritime knowledge, control, and power.

In place of the oscillating beam balances that made the magic of his large clocks so spectacular to see, he substituted a temperature-controlled spiral mainspring, together with a fast-beating balance wheel that spun back and forth at the hitherto unprecedented rate of some 18,000 times an hour. He also had an automatic remontoir, which rewound the mainspring eight times a minute, keeping the tension constant, the beats unvarying. There was a downside, though: this watch needed oiling, and so, in an effort to reduce friction and keep the needed application of oil to a minimum, Harrison introduced, where possible, bearings made of diamond, one of the early instances of a jeweled escapement.

It remains a mystery just how, without the use of precision machine tools—the development of which will be central to the story that follows—Harrison was able to accomplish all this. Certainly, all those who have made watches since then have had to use machine tools to fashion the more delicate parts of the watches: the notion that such work could possibly be done by the hand of a 66-year-old John Harrison still beggars belief. But John Harrison’s clockworks enjoyed perhaps only three centuries’ worth of practical usefulness.

For precision to be a phenomenon that would entirely alter human society, it has to be expressed in a form that is duplicable; it has to be possible for the same precise artifact to be made again and again with comparative ease and at a reasonable frequency and cost.

It was only when precision was created for the many that precision as a concept began to have the profound impact on society as a whole that it does today. And the man who accomplished that single feat of creating something with great exactitude and making it not by hand but with a machine, and, moreover, with a machine that was specifically created to create it (a machine that makes machines, known today as a “machine tool,” was, is, and will long remain an essential part of the precision story), was the eighteenth-century Englishman denounced for his supposed lunacy because of his passion for iron, the then-uniquely suitable metal from which all his remarkable new devices could be made.

Wilkinson is today rather little remembered. He is overshadowed quite comprehensively by his much-better-known colleague and customer, the Scotsman James Watt, whose early steam engines came into being, essentially, by way of John Wilkinson’s exceptional technical skills.

On January 27, 1774, John Wilkinson, whose local furnaces, all fired by coal, were producing a healthy twenty tons of good-quality iron a week, invented a new technique for the manufacture of guns. The technique had an immediate cascade effect, very much more profound and of greater long-term importance than anything he ever imagined.  Up until then, naval cannon were cast hollow, and the interior tube through which the powder and projectile were pushed and fired was then smoothed out by a cutting tool passed down the bore.

The problem with this technique was that the cutting tool would naturally follow the passage of the tube, which may well not have been cast perfectly straight in the first place. This would then cause the finished and polished tube to have eccentricities, and for the inner wall of the cannon to have thin spots where the tool wandered off track.  And thin spots were dangerous—they meant explosions and bursting tubes and destroyed cannon and injuries to the sailors who manned the notoriously dangerous gun decks.

Then came John Wilkinson and his new idea. He decided that he would cast the iron cannon not hollow but solid. This, for a start, had the effect of guaranteeing the integrity of the iron itself—there were fewer parts that cooled early and came out with bubbles and  spongy sections (“honeycomb problems,” as they were called) for which hollow-cast cannon were then notorious.

The secret was in the boring of the cannon hole. Both ends of the operation, the part that did the boring and the part to be bored, had to be held in place, rigid and immovable, because to cut or polish something into dimensions that are fully precise, both tool and workpiece have to be clasped and clamped as tightly as possible to secure immobility.

Cannon after cannon tumbled from the mill, each accurate to the measurements the navy demanded, each one, once unbolted from the mill, identical to its predecessor, each one certain to be the same as the successor that would next be bolted onto it. The new system worked impeccably from the very start.

Yet what elevated Wilkinson’s new method to the status of a world-changing invention came the following year, 1775, when he started to do serious business with James Watt.

The principle of a steam engine is familiar, and is based on the simple physical fact that when liquid water is heated to its boiling point it becomes a gas. Because the gas occupies some 1,700 times greater volume than the original water, it can be made to perform work.
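
That 1,700-fold figure can be roughly checked from the densities of liquid water and of saturated steam at the boiling point (approximate textbook values assumed here, not numbers from the book):

```python
# Rough check of the ~1,700x expansion claim from standard density values
# (approximate textbook figures, not numbers taken from Winchester).
DENSITY_WATER_100C = 958.0    # kg/m^3, liquid water at 100 C
DENSITY_STEAM_100C = 0.598    # kg/m^3, saturated steam at 100 C and 1 atm

expansion_ratio = DENSITY_WATER_100C / DENSITY_STEAM_100C
print(round(expansion_ratio))  # ~1600, the same order as the commonly quoted 1,600-1,700
```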

Newcomen then realized he could increase the work by injecting cold water into the steam-filled cylinder, condensing the steam and bringing it back to 1/1,700 of its volume—creating, in essence, a vacuum, which enabled the pressure of the atmosphere to force the piston back down again. This downstroke could then lift the far end of the rocker beam and, in doing so, perform real work. The beam could lift floodwater, say, out of a waterlogged tin mine.  Thus was born a very rudimentary kind of steam engine, almost useless for any application beyond pumping water.  The Newcomen engine and its like remained in production for more than 70 years, its popularity beginning to lessen only in the mid-1760s, when James Watt showed that it could be markedly improved.
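
The work the atmosphere can do once the steam has been condensed is easy to estimate: atmospheric pressure times piston area. A quick sketch with an assumed, illustrative cylinder bore (not Newcomen's actual figures):

```python
import math

ATMOSPHERIC_PRESSURE_PSI = 14.7   # pounds per square inch at sea level
PISTON_DIAMETER_IN = 21.0         # an assumed, illustrative cylinder bore

area_sq_in = math.pi * (PISTON_DIAMETER_IN / 2) ** 2
force_lbf = ATMOSPHERIC_PRESSURE_PSI * area_sq_in

print(round(area_sq_in))  # ~346 square inches of piston face
print(round(force_lbf))   # ~5,090 pounds of force pushing the piston down into the vacuum
```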

Watt realized that the central inefficiency of the engine he was examining was that the cooling water injected into the cylinder to condense the steam and produce the vacuum also managed to cool the cylinder itself. To keep the engine running efficiently, the cylinder needed to be kept as hot as possible at all times, so the cooling water should perhaps condense the steam not in the cylinder but in a separate vessel, keeping the vacuum in the main cylinder, which would thus retain the cylinder’s heat and allow it to take on steam once more. To make matters even more efficient, the fresh steam could be introduced at the top of the piston rather than the bottom, with stuffing of some sort placed and packed into the cylinder around the piston rod to prevent any steam from leaking out in the process.

These two improvements (the inclusion of a separate steam condenser and the changing of the inlet pipes to allow for the injection of new steam into the upper rather than the lower part of the main cylinder) changed Newcomen’s so-called fire-engine into a fully functioning steam-powered machine.

Once perfected, it was to be the central power source for almost all factories and foundries and transportation systems in Britain and around the world for the next century and more.

Yet perpetually enveloping his engine in a damp, hot, opaque gray fog, were billowing clouds of steam, which incensed James Watt. Try as he might, do as he could, steam always seemed to be leaking in prodigious gushes from the engine’s enormous main cylinder. He tried blocking the leak with all kinds of devices and substances. The gap between the piston’s outer surface and the cylinder’s inner wall should, in theory, have been minimal, and more or less the same wherever it was measured. But because the cylinders were made of iron sheets hammered and forged into a circle, and their edges then sealed together, the gap actually varied enormously from place to place. In some places, piston and cylinder touched, causing friction and wear. In other places, as much as half an inch separated them, and each injection of steam was followed by an immediate eruption from the gap.

Watt tried tucking in pieces of linseed oil–soaked leather; stuffing the gap with a paste made from soaked paper and flour; hammering in corkboard shims, pieces of rubber, even dollops of half-dried horse dung.

By the purest accident, John Wilkinson asked for an engine to be built for him, to act as a bellows for one of his iron forges—and in an instant, he saw and recognized Watt’s steam-leaking problem, and in an equal instant, he knew he had the solution: he would apply his cannon-boring technique to the making of cylinders for steam engines.  Watt beamed with delight. Wilkinson had solved his problem, and the Industrial Revolution—we can say now what those two never imagined—could now formally begin.

And so came the number, the crucial number, the figure that is central to this story, that which appears at the head of this chapter and which will be refined in its exactitude in all the remaining parts of this story. This is the figure of 0.1—one-tenth of an inch. This was the tolerance to which John Wilkinson had ground out his first cylinder.  All of a sudden, there was an interest in tolerance, in the clearance by which one part was made to fit with or into another. This was something quite new, and it begins, essentially, with the delivery of that first machine on May 4, 1776.

The central functioning part of the steam engine was possessed of a mechanical tolerance never before either imagined or achieved, a tolerance of 0.1 inches.

Locks were a British obsession at the time. The social and legislative changes that were sweeping the country in the late eighteenth century were having the undesirable effect of dividing society quite brutally: while the landed aristocracy had for centuries protected itself in grand houses behind walls and parks and ha-has, and with resident staff to keep mischief at bay, the enriched beneficiaries of the new business climate were much more accessible to the persistent poor.

Envy was abroad. Robbery was frequent. Fear was in the air. Doors and windows needed to be bolted. Locks had to be made, and made well. A lock such as Mr. Marshall’s, pickable in 15 minutes by a skilled man, and by a desperate and hungry man maybe in 10, was clearly not good enough. Joseph Bramah decided he would design and make a better one. He did so in 1784, less than a year after picking the Marshall lock. His patent made it almost impossible for a burglar with a wax-covered key blank, the tool most favored by the criminals who could use it to work out the position of the various levers and tumblers inside a lock, to divine what was beyond the keyhole, inside the workings.

Maudslay solved Bramah’s supply problems in an inkling by creating a machine to make them.  He built a whole family of machine tools, in fact, that would each make, or help to make, the various parts of the fantastically complicated locks Joseph Bramah had designed. They could make the parts fast and well and cheaply, without the errors that handcrafting and hand tools inevitably cause. The machines that Maudslay made would, in other words, make the necessary parts with precision.

Metal pieces can be machined into a range of shapes and sizes and configurations, and provided that the settings of the leadscrew and the slide rest are the same for every procedure, and the lathe operator can record these positions and make certain they are the same, time after time, then every machined piece will be the same—will look the same, measure the same, weigh the same (if of the same density of metal) as every other. The pieces are all replicable. They are, crucially, interchangeable. If the machined pieces are to be the parts of a further machine—if they are gearwheels, say, or triggers, or handgrips, or barrels—then they will be interchangeable parts, the ultimate cornerstone components of modern manufacturing. Of equally fundamental importance, a lathe so abundantly equipped as Maudslay’s was also able to make that most essential component of the industrialized world, the screw.

Screws were made to a standard of tolerance of one ten-thousandth of an inch.

A slide rest allowed for the making of myriad items, from door hinges to jet engines to cylinder blocks, pistons, and the deadly plutonium cores of atomic bombs.

Maudslay next created, in truly massive numbers, a vital component for British sailing ships. He built the wondrously complicated machines that would, for the next 150 years, make ships’ pulley blocks, the essential parts of a sailing ship’s rigging that helped give the Royal Navy its ability to travel, police, and, for a while, rule the world’s oceans.  At the time, sails were large pieces of canvas suspended, supported, and controlled by way of endless miles of rigging, of stays and yards and shrouds and footropes, most of which had to pass through systems of tough wooden pulleys known to navy men simply as blocks: pulley blocks, or, beyond the maritime world, block and tackle.

A large ship might have as many as 1400 pulley blocks of varying types and sizes depending on the task required. The lifting of a very heavy object such as an anchor might need an arrangement of six blocks, each with three sheaves, or pulleys, and with a rope passing through all six such that a single sailor might exert a pull of only a few easy pounds in order to lift an anchor weighing half a ton.
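
The arithmetic of such a tackle is simple: ignoring friction, the mechanical advantage equals the number of rope parts supporting the load, and the pull required is the load divided by that number. A sketch (idealized and frictionless; the rigging details below are illustrative, not from the book):

```python
# Idealized block-and-tackle arithmetic (frictionless; real tackles lose effort to friction).
def effort_required(load_lb: float, supporting_rope_parts: int) -> float:
    """Pull needed on the hauling line, ignoring friction."""
    return load_lb / supporting_rope_parts

anchor_lb = 1000.0   # "half a ton"
parts = 18           # one possible rigging of three-sheave blocks; illustrative only
print(round(effort_required(anchor_lb, parts), 1))   # ~55.6 lb of pull per hauler
```

Compound tackles, in which one tackle pulls on the fall of another, multiply rather than add their advantages, which is how the per-man effort could be driven down further still.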

Blocks for use on a ship are traditionally exceptionally strong, having to endure years of pounding water, freezing winds, tropical humidity, searing doldrums heat, salt spray, heavy duties, and careless handling by brutish seamen. Back in sailing ship days, they were made principally of elm, with iron plates bolted onto their sides, iron hooks securely attached to their upper and lower ends, and with their sheaves, or pulleys, sandwiched between their cheeks, and around which ropes would be threaded. The sheaves themselves were often made of lignum vitae, an extremely dense hardwood from trees native to the Caribbean and South America.

What principally concerned the admirals was not so much the building of enough ships but the supply of the vital blocks that would allow the sailing ships to sail. The Admiralty needed 130,000 of them every year. The complexity of their construction meant that they could be fashioned only by hand, by scores of artisanal woodworkers in and around southern England who were notoriously unreliable.

The Block Mills still stand as testament to many things, most famously to the sheer perfection of each and every one of the hand-built iron machines housed inside. So well were they made—they were masterpieces, most modern engineers agree—that most were still working a century and a half later; the Royal Navy made its last pulley blocks in 1965.

The Block Mills were the first factory to be run entirely by steam engine.  The next invention that mattered depended on flatness: a surface without curvature, indentation, or protuberance. It involved the creation of a base from which all precise measurement and manufacture could originate. For, as Maudslay realized, a machine tool can make an accurate machine only if the surface on which the tool is mounted is perfectly flat, perfectly plane, exactly level, its geometry entirely exact.

A bench micrometer would be able to measure the actual dimension of a physical object to make sure that the components of the machines they were constructing would all fit together, with exact tolerances, and be precise for each machine and accurate to the design standard.

The micrometer that performed all these measurements turned out to be extremely accurate and consistent: this invention of his could measure down to one one-thousandth of an inch and, according to some, maybe even one ten-thousandth of an inch: to a tolerance of 0.0001.

To any schoolchild today, Eli Whitney means just one thing: the cotton gin. To any informed engineer, he signifies something very different: confidence man, trickster, fraud, charlatan, a reputation earned almost entirely from his association with the gun trade, with precision manufacturing, and with the promise of being able to deliver weapons assembled from interchangeable parts.  When Whitney won the commission and signed a government contract to do so in 1798, he knew nothing about muskets and even less about their components: he won the order largely because of his Yale connections and the old alumni network that, even then, flourished in the corridors of power in Washington, DC.

It was John Hall who succeeded in making precision guns. At every stage of the work, from the forging of the barrel to the turning of the rifling and the shaping of the barrel, his 63 gauges were set to work, more than any engineer before him, to ensure as best he could that every part of every gun was exactly the same as every other—and that all were made to far stricter tolerances than hitherto: for a lock merely to work required a tolerance of maybe a fifth of a millimeter; to ensure that it not only worked but was infinitely interchangeable, he needed to have the pieces machined to a fiftieth of a millimeter.

Precision shoes were made by turning a shapeless block of wood into a foot-shaped form of specific dimensions, a process repeated time and time again. These shoemakers’ lasts were of exact sizes: seven inches long, nine, and so on. Before shoes were made precisely, they were offered up in barrels, and customers pulled them out at random, trying to find a shoe that more or less fit.

Oliver Evans was making flour-milling machinery; Isaac Singer introduced precision into the manufacturing of sewing machines; Cyrus McCormick was creating reapers, mowers, and, later, combine harvesters; and Albert Pope was making bicycles for the masses.

Joseph Whitworth was an absolute champion of accuracy, an uncompromising devotee of precision, and the creator of a device, unprecedented at the time, that could truly measure to an unimaginable one-millionth of an inch.  Using his superb mechanical skills, in 1859 he created a micrometer that allowed for one complete turn of the micrometer wheel to advance the screw not by 1/20 of an inch, but by 1/4,000 of an inch, a truly tiny amount.

Whitworth then incised 250 divisions on the turning wheel’s circumference, which meant that by turning the wheel through just one division the operator could advance or retard the screw by a mere one-millionth of an inch. Provided the ends of the item being measured are as plane as the plates on the micrometer, opening the gap by that 1/1,000,000 of an inch would make the difference between the item being held firmly or falling under the influence of gravity.
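
The arithmetic of the millionth is easily verified: one full turn advances the screw 1/4,000 of an inch, and one division is 1/250 of a turn:

```python
advance_per_turn_in = 1 / 4000     # screw advance for one full turn of the wheel
divisions_per_turn = 250           # marks incised on the wheel's circumference

advance_per_division_in = advance_per_turn_in / divisions_per_turn
print(advance_per_division_in)     # 1e-06 -> one millionth of an inch per division
```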

Now metal pieces could be made and measured to a tolerance of one-millionth of an inch.

Until Whitworth, each screw and nut and bolt was unique to itself, and the chance that any one-tenth-inch screw, say, might fit any randomly chosen one-tenth-inch nut was slender at best.

With the Model T, Henry Ford changed everything. From the start, he was insistent that no metal filing ever be done in his motor-making factories, because all the parts, components, and pieces he used for the machine would come to him already precisely finished, and to tolerances of cruelly exacting standards such that each would fit exactly without the need for even the most delicate of further adjustment. Once that aspect of his manufacturing system was firmly established, he created a whole new means of assembling the bits and pieces into cars.  He demanded a standard of precision for his components that had seldom been either known or achieved before, and he now married this standard to a new system of manufacture seldom tried before.

The Model T had fewer than 100 parts. A modern car has more than 30,000.

Within Rolls-Royce, it may seem as though the worship of the precise was entirely central to the making of these enormously comfortable, stylish, swift, and comprehensively memorable cars. In fact, it was far more crucial to the making of the less costly, less complex, less remembered machines that poured from the Ford plants around the world. And for a simple reason: the production lines required a limitless supply of parts that were exactly interchangeable.

If one happened not to be so exact, and if an assembly-line worker tried to fit this inexact and imprecise component into a passing workpiece and it refused to fit and the worker tried to make it fit, and wrestled with it—then, just like Charlie Chaplin’s assembly-line worker in Modern Times or, less amusingly, one in Fritz Lang’s Metropolis, the line would slow and falter and eventually stop, and workers for yards around would find their work disrupted, and parts being fed into the system would create unwieldy piles, and the supply chain would clog, and the entire production would slow and falter and maybe even grind, quite literally, to a painful halt. Precision, in other words, is an absolute essential for keeping the unforgiving tyranny of a production line going.

Henry Ford had been helped in his aim of making it so by using one component (and then buying the firm that made it), a component whose creation, by a Swedish man of great modesty, turned out to be of profoundly lasting importance to the world of precision. The Swede was Carl Edvard Johansson, popularly and proudly known by every knowledgeable Swede today as the world’s Master of Measurement. He was the inventor of the set of precise pieces of perfectly flat, hardened steel known to this day as gauge blocks, slip gauges, or, to his honor and in his memory, as Johansson gauges, or quite simply, Jo blocks.

His idea was to create a set of gauge blocks that, held together in combination, could in theory measure any needed dimension. He calculated that the minimum number of blocks needed was 103, made in certain carefully specified sizes. Arranged in three series, they made it possible to take some 20,000 measurements in increments of one-thousandth of a millimeter simply by laying two or more blocks together. His 103-piece combination gauge block set has since, directly and indirectly, taught engineers, foremen, and mechanics to treat tools with care, and at the same time given them familiarity with dimensions of thousandths and ten-thousandths of a millimeter.
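
As an illustration of how such a set is used, here is a simplified sketch (mine) of the machinist's method of stacking blocks to a target length, clearing the last decimal place first. The set composition assumed is the commonly published metric 103-piece one (a single 1.005 mm block; 1.01 to 1.49 mm in 0.01 mm steps; 0.5 to 24.5 mm in 0.5 mm steps; 25, 50, 75, and 100 mm), not a list taken from Winchester:

```python
# A simplified sketch of gauge-block selection for a common metric 103-piece set.
# Works for typical dimensions such a set can make; not an exhaustive solver.
def choose_blocks(target_mm: float) -> list[float]:
    stack = []
    rem = round(target_mm * 1000)          # work in micrometres to avoid float error

    if rem % 10 == 5:                      # thousandths place: only the 1.005 block supplies it
        stack.append(1.005)
        rem -= 1005

    if rem % 500 != 0:                     # hundredths/tenths: one block from the 1.01-1.49 series
        b = 1000 + rem % 500               # chosen so a multiple of 0.5 mm remains
        stack.append(b / 1000)
        rem -= b

    for big in (100000, 75000, 50000, 25000):   # the 100, 75, 50, 25 mm blocks
        if rem >= big:
            stack.append(big / 1000)
            rem -= big

    if rem:                                # finish with one block from the 0.5-24.5 mm series
        stack.append(rem / 1000)

    return stack

print(choose_blocks(36.715))   # [1.005, 1.21, 25.0, 9.5] -> sums to 36.715 mm
```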

Gauge blocks first came to the United States in 1908.  Ford’s cars were precise only to themselves: every manufactured piece fit impeccably because it was interchangeable within Ford’s own system, but once an absolutely impeccably manufactured, gauge-block-confirmed piece from another company (a ball bearing from SKF, say) was introduced into the Ford system, its absolute perfection might trump Ford’s, and Ford would be wrong—ever so slightly maybe, but wrong nonetheless.

Gauge blocks made after the Great War achieved accuracies of up to one-millionth of an inch.

Piston engines have hundreds of parts jerking to and fro, and they cannot be made much more powerful without becoming impossibly complicated.  Modern jet engines can produce more than 100,000 horsepower, yet essentially they have only a single moving part: a spindle, a rotor, which is induced to spin and, in doing so, causes many pieces of high-precision metal to spin with it.

All that ensures they work as well as they do are the rare and costly materials from which they are made, the protection of the integrity of the pieces machined from these materials, and the superfine tolerances to which every part of them is manufactured.  Since any increase in piston-engine power, and thus aircraft speed, would require heavier engines, perhaps too heavy for an aircraft to carry, a new kind of engine was invented: the gas turbine.  A crucial element in any combustion engine is air: air is drawn into the engine, mixed with fuel, and then burns or explodes. The thermal energy from that event is turned into kinetic energy, and the engine’s moving parts are powered. But the amount of air sucked into a piston engine is limited by the size of its cylinders. In a gas turbine, there is almost no such limit: a gigantic fan at the opening of such an engine can swallow vastly more air than can be taken into a piston engine.

Gas turbines were already beginning to power ships, to generate electricity, to run factories. The simplicity of the basic idea was immensely attractive. Air was drawn in through a cavernous doorway at the front of the engine and immediately compressed, and made hot in the process, and was then mixed with fuel, and ignited. It was the resulting ferociously hot, tightly compressed, and controlled explosion that then drove the turbine, which spun its blades and then performed two functions. It used some of its power to drive the aforementioned compressor, which sucked in and squeezed the air, but it then had a very considerable fraction of its power left, and so was available to do other things, such as turn the propeller of a ship, or turn a generator of electricity, or turn the driving wheels of a railway locomotive (didn’t happen, too many problems), or provide the power for a thousand machines in a factory and keep them running, tirelessly.

Britain’s first jet plane flew in 1941, and it was not until 1944 that the public learned about it.  Inside a jet engine, everything is a diabolic labyrinth, a maze of fans and pipes and rotors and discs and tubes and sensors and a Turk’s head of wires of such confusion that it doesn’t seem possible that any metal thing inside it could possibly even move without striking and cutting and dismembering all the other metal things that are crammed together in such dangerously interfering proximity. Yet work and move a jet engine most certainly does, with every bit of it impressively engineered to do so, time and again, and under the harshest and fiercest of working conditions.

There are scores of blades of various sizes in a modern jet engine, whirling this way and that and performing various tasks that help push the hundreds of tons of airplane up and through the sky. But the blades of the high-pressure turbines represent the singularly truest marvel of engineering achievement—and this is primarily because the blades themselves, rotating at incredible speeds and each one of them generating during its maximum operation as much power as a Formula One racing car, operate in a stream of gases that are far hotter than the melting point of the metal from which the blades were made. What stopped these blades from melting?

It turns out to be possible to cool the blades by drilling hundreds of tiny holes in each blade, and by making inside each blade a network of tiny cooling tunnels, all of them manufactured at a size and to such minuscule tolerances as were quite unthinkable only a few years ago.

The first blades that Whittle made were of steel, which somewhat limited the performance of his early prototypes, since steel loses its structural integrity at temperatures higher than about 500 degrees Celsius. But alloys were soon found that made matters much easier, after which blades were constructed from these new metal compounds. They did not run the risk of melting, because the temperatures at which they operated were on the order of a thousand degrees, and the special nickel-and-chromium alloy from which they were made, known as Nimonic, remained solid and secure and stiff up to 1,400 degrees Celsius (2550 F).

The next generation of engines required that the gas mixture roaring out from the combustion chamber be heated to around 1,600 degrees Celsius, yet even the finest of the alloys then used melted at around 1,455 degrees Celsius. The metals tended to lose their strength and become soft and vulnerable to all kinds of shape changes and expansions at even lower temperatures. In fact, extended thermal pummeling of the blades at anything above 1,300 degrees Celsius was regarded by early researchers as just too difficult and risky.

Most of that air bypasses the engine core (for reasons that are beyond the scope of this chapter), but a substantial portion of it is sent through a witheringly complex maze of blades, some whirling, some bolted and static, that make up the front and relatively cool end of a jet engine and that compress the air by as much as 50 times. The one ton of air taken in each second by the fan, which would in normal circumstances entirely fill the space of a squash court, is squeezed to a point where it could fit into a decent-size suitcase. It is dense, and it is hot, and it is ready for high drama.

For very nearly all this compressed air is directed straight into the combustion chamber, where it mixes with sprayed kerosene, is ignited by an array of electronic matches, as it were, and explodes directly into the whirling wheel of turbine blades. These blades (more than ninety of them in a modern jet engine, attached to the outer edge of a disc rotating at great speed) are the first port of call for the air before it passes through the rest of the turbine and, joining the bypassed cool air from the fan, gushes wildly out of the rear of the engine and pushes the plane forward.

“Nearly all” is the key. Some of this cool air, the Rolls-Royce engineers realized, could actually be diverted before it reached the combustion chamber and fed into tubes in the disc onto which the blades were bolted. From there it could be directed into a branching network of channels or tunnels machined into the interior of the blade itself. The blade was now filled with cool air: cool only by comparison, for the simple act of compressing it made it quite hot, about 650 degrees Celsius, but still cooler by a thousand degrees than the post-combustion-chamber fuel-air mixture. To make use of this cool air, scores of unimaginably tiny holes were then drilled into the blade surface, drilled with great precision and delicacy and in configurations dictated by the computers, and drilled down through the blade alloy until each one of them reached just into the cool-air-filled tunnels, thus immediately allowing the cool air within to escape or seep or flow or thrust outward, onto the gleaming hot surface of the blade.
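
The "quite hot, about 650 degrees Celsius" figure for the merely compressed air follows from the physics of squeezing air roughly fifty-fold. A rough check (mine), assuming an ideal adiabatic compression and ignoring real-compressor losses and stage details:

```python
# Rough check of the ~650 C compressor-exit figure quoted above, assuming an
# ideal adiabatic compression with an overall pressure ratio of about 50.
GAMMA = 1.4              # ratio of specific heats for air
T_INTAKE_K = 288.0       # ~15 C intake air
PRESSURE_RATIO = 50.0

t_exit_k = T_INTAKE_K * PRESSURE_RATIO ** ((GAMMA - 1) / GAMMA)
print(round(t_exit_k - 273.15))   # ~608 C, the same ballpark as the quoted ~650 C
```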

It is here that the awesome computational power available since the late 1960s comes into its own, becomes so crucially useful. Aside from the complex geometry of the hundreds of tiny pinholes, there is the fact that the blades are grown, incredibly, from a single crystal of metallic nickel alloy. This makes them extremely strong—which they need to be, as in their high-temperature whirlings they are subjected to centrifugal forces equivalent to the weight of a double-decker London bus. Very basically, the molten metal (an alloy of nickel, aluminum, chromium, tantalum, titanium, and five other rare elements that Rolls-Royce coyly refuses to discuss) is poured into a mold that has at its base a small, curiously twisted three-turn tube, which resembles nothing so much as a pig’s tail; as the metal solidifies up through this spiral, only one crystal orientation survives, so the casting ends up with all its molecules lined up evenly.

It has become a single crystal of metal, and thus, its eventual resistance to all the physical problems that normally plague metal pieces like this is mightily enhanced. It is very much stronger—which it needs to be, considering the enormous centrifugal forces.

Electrical discharge machining, or EDM, as it is more generally known, employs just a wire and a spark, both of them tiny, the whole process directed by computer and inspected by humans, using powerful microscopes, as it is happening.  The more complex the engines, the more holes need to be drilled into the various surfaces of a single blade: in a Trent XWB engine, there are some 600, arranged in bewildering geometries to ensure that the blade remains stiff, solid, and as cool as possible. Their integrity owes much to the geometry of the cooling holes that are being drilled, which is measured and computed and checked by skilled human beings. No tolerance whatsoever can be accorded to any errors that might creep into the manufacturing process, for a failure in this part of a jet engine can turn into a swiftly accelerating disaster.

As the tolerances shrink still further, and limits are reached that even the most well-honed human skills cannot match, automation has to take over. The Advanced Blade Casting Facility can perform all these tasks (from the injection of the losable wax to the growing of single-crystal alloys to the drilling of the cooling holes) with the employment of no more than a handful of skilled men and women. It can turn out 100,000 blades a year, all free of errors.

But failure was still possible. The fate of the passengers depended on the performance of one tiny metal pipe no more than five centimeters long and three-quarters of a centimeter in diameter, into which someone at a factory in the northern English Midlands had bored a tiny hole, but had mistakenly bored it fractionally out of true. The engine part in question is called an oil feed stub pipe, and though there are many small steel tubes wandering snakelike through any engine, this particular one, a slightly wider stub at the end of a longer but narrower pipe, was positioned in the red-hot air chamber between the high- and intermediate-pressure turbine discs. It was designed to send oil down to the bearings on the rotor that carried the fast-spinning disc. It was machined improperly because the drill bit that did the work was misaligned, with the result that along one small portion of its circumference the tube wall was about half a millimeter too thin.

Metal fatigue is what caused the engine to fail. The aircraft had spent 8,500 hours aloft, and had performed 1,800 takeoff and landing cycles. It is these last that punish the mechanical parts of a plane: the landing gear, the flaps, the brakes, and the internal components of the jet engines. For, every time there is a truly fast or steep takeoff, or every time there is a hard landing, these parts are put under stress that is momentarily greater than the running stresses of temperature and pressure for which the innards of a jet engine are notorious.

Heisenberg, in helping in the 1920s to father the concepts of quantum mechanics, made discoveries and presented calculations that first suggested this might be true: that in dealing with the tiniest of particles, the tiniest of tolerances, the normal rules of precise measurement simply cease to apply. At near- and subatomic levels, solidity becomes merely a chimera; matter comes packaged as either waves or particles that are by themselves both indistinguishable and immeasurable and, even to the greatest talents, only vaguely comprehensible.

In making the smallest parts for today’s great jet engines, we are reaching down nowhere near the limits that so exercise the minds of quantum mechanicians. Yet we have reached a point in the story where we begin to notice our own possible limitations and, by extension and extrapolation, also the possible end point of our search for perfection.

An overlooked measurement error on the mirror amounting to one-fiftieth the thickness of a human hair managed to render most of the images beamed down from Hubble fuzzy and almost wholly useless.

Chapter 9 (TOLERANCE: 0.000 000 000 000 000 000 000 000 000 000 000 01, i.e. 35 decimal places)

Here we come to the culmination of precision’s quarter-millennium evolutionary journey. Up until this moment, almost all the devices and creations that required a degree of precision in their making had been made of metal, and performed their various functions through physical movements of one kind or another. Pistons rose and fell; locks opened and closed; rifles fired; sewing machines secured pieces of fabric and created hems and selvedges; bicycles wobbled along lanes; cars ran along highways; ball bearings spun and whirled; trains snorted out of tunnels; aircraft flew through the skies; telescopes deployed; clocks ticked or hummed, and their hands moved ever forward, never back, one precise second at a time. Then came the computer, into an immobile and silent universe, one where electrons and protons and neutrons have replaced iron and oil and bearings and lubricants and trunnions and the paradigm-altering idea of interchangeable parts.

Precision had by now reached a degree of exactitude that would be of relevance and use only at the near-atomic level.

Fab 42 is Intel’s plant for the making of electronic microprocessor chips, the operating brains of almost all the world’s computers. The enormous ASML devices allow the firm to manufacture these chips, and to place transistors on them in huge numbers and to an almost unreal level of precision and minute scale that today’s computer industry, pressing for ever-speedier and more powerful computers, endlessly demands.

Gordon Moore, one of the founders of Intel, is most probably the man to blame for this trend toward ultraprecision in the electronics world. He made an immense fortune by devising the means to make ever-smaller and smaller transistors and to cram millions, then billions of them onto a single microprocessing chip. There are now more transistors at work on this planet (some 15 quintillion, or 15,000,000,000,000,000,000) than there are leaves on all the trees in the world. In 2015, the four major chip-making firms were making 14 trillion transistors every single second. Also, the sizes of the individual transistors are well down into the atomic level.
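
The trend Winchester describes is usually stated as Moore's law, a doubling of transistor count roughly every two years. A sketch of that rule of thumb, anchored to approximate public figures for Intel's first microprocessor rather than to anything in the book:

```python
# Moore's law as commonly stated (a doubling roughly every two years), used here
# only to illustrate the trend described above; anchor figures are approximate
# public numbers for the Intel 4004, not taken from the book.
INTEL_4004_YEAR, INTEL_4004_TRANSISTORS = 1971, 2300

def moores_law_estimate(year: int, doubling_years: float = 2.0) -> float:
    return INTEL_4004_TRANSISTORS * 2 ** ((year - INTEL_4004_YEAR) / doubling_years)

print(f"{moores_law_estimate(2016):.2e}")  # ~1.4e10: billions of transistors per chip,
                                           # the same order as the figures quoted below
```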

When the Broadwell family of chips was created in 2016, node size was down to a previously inconceivably tiny fourteen-billionths of a meter (the size of the smallest of viruses), and each chip contained no fewer than seven billion transistors. The Skylake chips made by Intel at the time of this writing have transistors that are sixty times smaller than the wavelength of visible light, and so are literally invisible.

It takes three months to complete a microprocessing chip, starting with the growing of a 400-pound, very fragile, cylindrical boule of pure smelted silicon, which fine-wire saws will cut into dinner plate–size wafers, each an exact two-thirds of a millimeter thick. Chemicals and polishing machines will then smooth the upper surface of each wafer to a mirror finish, after which the polished discs are loaded into ASML machines for the long and tedious process toward becoming operational computer chips. Each wafer will eventually be cut along the lines of a grid that will extract a thousand chip dice from it—and each single die, an exactly cut fragment of the wafer, will eventually hold the billions of transistors that form the non-beating heart of every computer, cellphone, video game, navigation system, and calculator on modern Earth, and every satellite and space vehicle above and beyond it.

What happens to the wafers before the chips are cut out of them demands an almost unimaginable degree of miniaturization. Patterns of newly designed transistor arrays are drawn with immense care onto transparent fused silica masks, and then lasers are fired through these masks and the beams directed through arrays of lenses or bounced off long reaches of mirrors, eventually to imprint a highly shrunken version of the patterns onto an exact spot on the gridded wafer, so that the pattern is reproduced, in tiny exactitude, time and time again.

After the first pass by the laser light, the wafer is removed, is carefully washed and dried, and then is brought back to the machine, whence the process of having another submicroscopic pattern imprinted on it by a laser is repeated, and then again and again, until thirty, forty, as many as sixty infinitesimally thin layers of patterns (each layer and each tiny piece of each layer a complex array of electronic circuitry) are engraved, one on top of the other.
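
A back-of-envelope estimate (mine, using a standard gross-die formula and an assumed die size) shows why a single dinner-plate wafer yields on the order of a thousand dice:

```python
import math

# A rough gross-die-per-wafer estimate using a standard back-of-envelope formula
# (not from the book); the die size is assumed, purely for illustration.
WAFER_DIAMETER_MM = 300.0   # a typical modern wafer, roughly dinner-plate size
DIE_AREA_MM2 = 60.0         # an assumed die size

def gross_dies(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    r = wafer_diameter_mm / 2
    by_area = math.pi * r ** 2 / die_area_mm2                       # area-only count
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)  # partial dice at the rim
    return int(by_area - edge_loss)

print(gross_dies(WAFER_DIAMETER_MM, DIE_AREA_MM2))   # ~1,090 dice per wafer
```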

Rooms within the ASML facility in Holland are clean to the far more brutally restrictive demands of ISO class 1, which permits only 10 particles of just one-tenth of a micron per cubic meter, and no particles of any size larger than that. A human being existing in a normal environment swims in a miasma of air and vapor that is five million times less clean.

The test masses on the LIGO devices in Washington State and Louisiana are so exact in their making that the light reflected by them can be measured to one ten-thousandth of the diameter of a proton.

Consider Alpha Centauri A, which lies 4.3 light-years away. The distance in miles of 4.3 light-years is 26 trillion miles, or, in full, 26,000,000,000,000 miles. It is now known with absolute certainty that the cylindrical masses on LIGO can help to measure that vast distance to within the width of a single human hair.
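
That analogy can be checked in a line or two of arithmetic: a human hair (assumed here to be about 50 microns wide) across 4.3 light-years corresponds to a fractional change of roughly one part in 10^21, which is the strain sensitivity LIGO was built to detect:

```python
# Checking the hair-across-Alpha-Centauri analogy (hair width assumed ~50 microns).
LIGHT_YEAR_M = 9.461e15
DISTANCE_M = 4.3 * LIGHT_YEAR_M     # distance to Alpha Centauri A
HAIR_WIDTH_M = 50e-6                # a typical human hair

strain = HAIR_WIDTH_M / DISTANCE_M
print(f"{strain:.1e}")              # ~1.2e-21, about LIGO's strain sensitivity
```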

 
