Book review of Vaclav Smil’s “Energy Transitions: History, Requirements, Prospects”

Preface.  In the extract below, drawn from this 178-page book, Smil explains why renewables can’t possibly replace fossil fuels, and he appears exasperated that people believe this can be done when he writes: “Common expectations of energy futures, shared not only by poorly informed enthusiasts and careless politicians but, inexplicably, by too many uncritical professionals, have been, for decades, resembling more science fiction than unbiased engineering, economic, and environmental appraisals.”

Yet Smil makes the same “leap of faith” as the “uncritical professionals” he criticizes.  He remains “hopeful in the long run because we can’t predict the future.” And because the past transitions “created more productive and richer economies and improved the overall quality of life—and this experience should be eventually replicated by the coming energy transition.”

Huh? After all the trouble he’s taken to explain why we can’t possibly transition from fossil fuels to anything else he ends on a note of happy optimism with no possible solution?

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (Springer, 2015) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

***

Smil, Vaclav. 2010. Energy Transitions: History, Requirements, Prospects.  Praeger.

Agriculture

Modern agriculture consumes directly only a few percent of the total energy supply as fuels and electricity to operate field machinery (tractors, combines, irrigation pumps) and mostly as electricity for heating, cooling, and machinery used in large-scale animal husbandry. But the indirect energy cost of agricultural production (to produce agricultural machinery, and to synthesize energy-intensive fertilizers, pesticides, and herbicides) and, even more so, energy costs of modern industrial food processing (including excessive packaging), food storage (the category dominated by refrigeration), retailing, cooking, and waste management raise the aggregate cost of the entire food production/distribution/preparation/disposal system to around 15% of total energy supply.

10% of all extracted oil and slightly more than 5% of all natural gas are used as chemical feedstocks, above all for syntheses of ammonia and various plastics.

Biomass

Photosynthesis uses only a small part of available wavelengths (principally blue and red light amounting to less than half of the energy in the incoming spectrum) and its overall conversion efficiency is no more than 0.3% when measured on the planetary scale and only about 1.5% for the most productive terrestrial (forest) ecosystems.

Large-scale biofuel cultivation and repeated removal of excessive shares of photosynthetic production could further undermine the health of many natural ecosystems and agro-ecosystems by extending monocultures and opening ways for greater soil erosion and pest infestation.

Terrestrial photosynthesis proceeds at a rate of nearly 60 TW, and even a tripling of biomass currently used for energy would not yield more than about 9 TW.

All preindustrial societies had a rather simple and persistent pattern of primary fuel use as they derived all of their limited heat requirements from burning biomass fuels. Fuelwood (firewood) was the dominant source of primary energy, but woody phytomass would be a better term: the earliest users did not have the requisite saws and axes to cut and split tree trunks, and those tools remained beyond the reach of the poorest peasants even during the early modern era. Any woody phytomass was used, including branches fallen to the ground or broken off small trees, twigs, and small shrubs. In large parts of sub-Saharan Africa and in many regions of Asia and Latin America this woody phytomass, collected mostly by women and children, continues to be the only accessible and affordable form of fuel for cooking and for water and house heating for the poorest rural families. Moreover, in some environments large shares of all woody matter were always gathered by families outside forests from small tree clumps and bushes, from the litter fall under plantation tree crops (rubber, coconut) or from roadside, backyard, or living fence trees and shrubs. This reliance on non-forest phytomass also continues today in many tropical and subtropical countries: Rural surveys conducted during the late 1990s in Bangladesh, Pakistan, and Sri Lanka found that this non-forest fuelwood accounted for more than 80% of all wood used by households (RWEDP, 1997). And in less hospitable, arid or deforested, environments, children and women collected any available non-woody cellulosic phytomass, fallen leaves (commonly raked in North China’s groves, leaving the ground barren), dry grasses, and plant roots. For hundreds of millions of people the grand energy transition traced in this chapter is yet to unfold: They continue to live in the wooden era, perpetuating the fuel usage that began in prehistory.

Another usage that has been around for millennia is the burning of crop residues (mostly cereal and leguminous straws, but also corn or cotton stalks and even some plant roots) and sundry food-processing wastes (ranging from almond shells to date kernels) in many desert, deforested, or heavily cultivated regions. And on the lowest rung of the reliance on biomass fuels was (and is) dry dung, gathered by those with no access to other fuels (be it the westward-moving settlers of the United States during the nineteenth century collecting buffalo dung or the poorest segments of rural population in today’s India) or whose environment (grasslands or high mountain regions) provides no suitable phytomass to collect (Tibetan and Andean plateaus and subtropical deserts of the Old World where, respectively, yak, llama, and camel dung can be collected).

Even if all of the world’s sugar cane crop were converted to ethanol, the annual ethanol yield would be less than 5% of the global gasoline demand in 2010. Even if the entire U.S. corn harvest were converted to ethanol, it would produce an equivalent of less than 15% of the country’s recent annual gasoline consumption. Biofuel enthusiasts envisage biorefineries using plant feedstocks that replace current crude oil refineries, but they forget that unlike the highly energy-dense oil that is produced with high power density, biomass is bulky, tricky to handle, and contains a fairly high share of water.

This makes its transport to a centralized processing facility uneconomical (and too energy intensive) beyond a restricted radius (maximum of about 50 miles / 80 km) and, in turn, this supply constraint limits the throughput of a biorefinery and the range of fuels to be produced, to say nothing of the yet-to-be-traversed path from laboratory benches to mass-scale production (Willems, 2009). A thoughtful review of biofuel prospects summed it up well: They can be an ingredient of the future energy supply but “realistic assessments of the production challenges and costs ahead impose major limits” (Sinclair, 2009, p. 407).

And finally, the proponents of massive biomass harvesting ignore a worrisome fact: modern civilization is already claiming (directly and indirectly) a very high share of the Earth’s net terrestrial primary productivity (NPP), the total of new phytomass that is photosynthesized in the course of a single year and that is dominated by the production of woody tissues (boles, branches, bark, roots) in tropical and temperate forests. Most of this photosynthate should always be left untouched in order to support all other nonhuman heterotrophs (from archaea and bacteria to primates) and to perform, directly or indirectly via the heterotrophs, numerous indispensable environmental services.

Given this fact it is astonishing, and obviously worrisome, that three independently conducted studies (Vitousek et al., 1986; Rojstaczer, Sterling, & Moore, 2001; Imhoff et al., 2004) agree that human actions are already appropriating perhaps as much as 40% of the Earth’s NPP as cultivated food, fiber, and feed, as the harvests of wood for pulp, timber, and fuel, as grass grazed by domesticated animals, and as fires deliberately set to maintain grassy habitats or to convert forests to other uses. This appropriation is also very unevenly distributed, ranging from minuscule rates in some thinly populated areas of tropical rain forests to shares in excess of 60% in East Asia and more than 70% in Western Europe (Imhoff et al., 2004). Local rates are even higher in the world’s most intensively cultivated agroecosystems of the most densely populated regions of Asia (China’s Jiangsu, Sichuan, and Guangdong, Indonesia’s Java, Bangladesh, the Nile Delta).

Any shift toward large-scale cultivation/harvesting of phytomass would push the global share of human NPP appropriation above 50% and would make many regional appropriation totals intolerably high. There is an utter disconnect between the proponents of transition to mass-scale biomass use and the ecologists whose Millennium Ecosystem Assessment (2005) demonstrated that essential ecosystemic services that underpin the functioning of all economies have already been modified, reduced, and compromised to a worrisome degree. Would any of the numerous environmental services provided by diverse ecosystems (ranging from protection against soil erosion to perpetuation of biodiversity) be enhanced by extensive cultivation of high-yielding monocultures for energy? I feel strongly that the recent proposals of massive biomass energy schemes are among the most regrettable examples of wishful thinking and ignorance of ecosystemic realities and necessities.

Phytomass would have a chance to become, once again, a major component of the global primary energy supply only if we were to design new photosynthetic pathways that did not emerge during hundreds of millions of years of autotrophic evolution or if we were able to produce fuels directly by genetically manipulated bacteria. The latter option is now under active investigation, with Exxon being its most important corporate sponsor and Venter’s Synthetic Genomics its leading scientific developer (Service, 2009). Overconfident gene manipulators may boast of soon-to-come feats of algally produced gasoline, but how soon would any promising yields achieved in controlled laboratory conditions be transferable to mass-scale cultivation?

Even if we assume (quite optimistically) that the cultivation of phytomass for energy could average 1 W/m2, then supplanting today’s 12.5 TW of fossil fuels would require 12,500,000 km2, roughly an equivalent of the entire territories of the United States and India, an area more than 400 times larger than the space taken up by all of modern energy’s infrastructures.
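As a quick sanity check on this land-area claim, a short Python sketch (the 12.5 TW and 1 W/m2 figures are from the text; the rest is unit conversion):

```python
# Area needed to supplant 12.5 TW of fossil fuels with phytomass grown
# at an (optimistic) average power density of 1 W/m^2.
fossil_power_w = 12.5e12   # 12.5 TW, from the text
power_density = 1.0        # W/m^2, Smil's optimistic assumption
area_m2 = fossil_power_w / power_density
area_km2 = area_m2 / 1e6   # 1 km^2 = 1e6 m^2
print(f"Required area: {area_km2:,.0f} km^2")  # 12,500,000 km^2
```

That 12.5 million km2 matches the combined territories of the United States and India cited above.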

Muscle Power

Basal metabolic rate (BMR) of all large mammals is a nonlinear function of their body mass M. When expressed in watts it equals 3.4M^0.75 (Smil, 2008). This yields 70-90 W for most adult males and 55-75 W for females. Energy costs of physical exertion are expressed as multiples of the BMR: Light work requires up to 2.5 BMR, moderate tasks up to 5 BMR, and heavy exertions need as much as 7 BMR, or in excess of 300 W for women and 500 W for men. Healthy adults can work at those rates for hours, and given the typical efficiency of converting the chemical energy into the mechanical energy of muscles (15-20%) this implies at most between 60 W (for a 50-kg female) and about 100 W (for an 85-kg man) of useful work, and equivalents of five to seven steadily working adults performing as much useful labor as one draft ox and about six to eight men equaling the useful exertion of a good, well-harnessed horse.
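The allometric relation and the BMR multiples above can be worked through in a few lines of Python (a sketch; the 70-kg body mass and 18% efficiency are illustrative values within the ranges the text gives):

```python
def bmr_watts(mass_kg):
    # Allometric relation cited from Smil (2008): BMR = 3.4 * M^0.75 (watts)
    return 3.4 * mass_kg ** 0.75

def exertion_watts(mass_kg, bmr_multiple):
    # Work rates are expressed as multiples of BMR (light ~2.5, heavy ~7)
    return bmr_multiple * bmr_watts(mass_kg)

print(round(bmr_watts(70)))                # ~82 W, within the 70-90 W male range
print(round(exertion_watts(70, 7)))        # ~576 W, heavy exertion ("in excess of 500 W")
print(round(exertion_watts(70, 7) * 0.18)) # ~104 W of useful mechanical output at 18% efficiency
```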

With the domestication of draft animals humans acquired more powerful prime movers, but because of the limits imposed by their body sizes and commonly inadequate feeding, the working bovines, equids, and camelids were used mostly for the most demanding tasks (plowing, harrowing, pulling heavy cart- or wagon-loads, pulling out stumps, lifting water from deep wells), and most of the labor in traditional societies still needed human exertion.

Working bovines (many cattle breeds and water buffaloes) weigh from just 250 kg to more than 500 kg. With the exception of donkeys and ponies, working equines are more powerful: Larger mules and horses can deliver 500-800 W compared to 250-500 W for oxen. Some desert societies also used draft camels, elephants performed hard forest work in the tropics, and yaks, reindeer, and llamas were important pack animals. At the bottom of the scale were harnessed dogs and goats. Comparison of plowing productivities conveys the relative power of animate prime movers. Even in the light soil it would take a steadily working peasant about 100 hours of hoeing to prepare a hectare of land for planting; in heavier soils it could be easily 150 hours. In contrast, a plowman guiding a medium-sized ox harnessed inefficiently by a simple wooden yoke and pulling a primitive wooden plow would do that work in less than 40 hours; a pair of good horses with collar harness and a steel plough would manage in just three hours.

No draft animal could make good progress on soft muddy or sandy roads, even less so when pulling heavy carts with massive wooden (initially full disk; spokes came around 2000 BCE in Egypt) wheels. When expressed in terms of daily mass-distance (t-km), a man pushing a wheelbarrow rated just around 0.5 t-km (less than 50-kg load transported 10-15 km), a pair of small oxen could reach 4-5 t-km (10 times the load at a similarly slow speed), and a pair of well-fed and well-harnessed nineteenth-century horses on a hard-top road could surpass 25 t-km.

My approximate calculations indicate that by 1850 draft animals supplied roughly half of all useful work, human labor provided as much as 40%, and inanimate prime movers delivered between 10% and 15%. By 1900 inanimate prime movers (dominated by steam engines, with water turbines in the second place) contributed 45%-50%, animal labor provided about a third, and human labor no more than a fifth of the total. By 1950 human labor, although in absolute terms more important than ever, was a marginal contributor (maximum of about 5%), animal work was down to about 10%, and inanimate prime movers (dominated by internal combustion engines and steam and water turbines) contributed at least 85%, and very likely 90%, of all useful work.

Wind

The power of water wheels rose from 10^2 W to 10^3 W for larger wheels after 1700, and to as much as a few hundred kW (10^5 W) by 1850.  Windmills showed up a thousand years later and culminated in machines capable of no more than 10^4 W by the late 19th century.  Although water wheel power rose 1,000-fold over 2,000 years, steam engine power grew exponentially in less than 50 years, from 10^5 W to 1 MW (10^6 W) by 1900.  Steam turbines rose 6 orders of magnitude, a million-fold jump, in less than 300 years.

Wind turbines are now seen as great harbingers of renewability, about to sever our dependence on fossil fuels. But their steel towers are made from the metal smelted with coal-derived coke or from recycled steel made in arc furnaces, and both processes are energized by electricity generated largely by turbo-generators powered by coal and natural gas combustion. And their giant blades are made from plastics synthesized from hydrocarbon feedstocks that are derived from crude oil whose extraction remains unthinkable without powerful diesel, or diesel-electric, engines.

The total power of winds generated by this differential heating is a meaningless aggregate when assessing resources that could be harnessed for commercial consumption because the Earth’s most powerful winds are in the jet stream at an altitude of around 11 km above the surface, and in the northern hemisphere their location shifts with seasons between 30° and 70° N. Even at altitudes reached by the hubs of modern large wind turbines (70-100 m above ground), less than 15% of winds have speeds suitable for large-scale commercial electricity generation. Moreover, their distribution is uneven, with Atlantic Europe and the Great Plains of North America being the premier wind-power regions and with large parts of Europe, Asia, and Africa having relatively unfavorable conditions.

Harnessing significant shares of wind energy could affect regional climates and conceivably even the global air circulation. 

The power density of a 3-MW Vestas machine (now a common choice for large wind farms) is roughly 400 W/m2 and for the world’s largest machine, the ENERCON E-126 rated at 6 MW, it is 481 W/m2.

But because the turbines must be spaced at least three, and better yet five, rotor diameters apart in the direction perpendicular to the prevailing wind and at least five, and with large installations up to ten, rotor diameters in the wind direction (in order to avoid excessive wake interference and allow for sufficient wind energy replenishment), power densities of wind generation are usually less than 10 W/m2. Altamont Pass wind farm averages 3.5 W/m2, while exceptionally windy sites may yield more than 10 W/m2 and less windy farms with greater spacing may rate just above 1 W/m2 (Figure 4.1).
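The spacing rules above explain why per-turbine power densities of hundreds of W/m2 collapse to single digits at the farm level. A rough Python sketch (the 90-m rotor diameter and 25% capacity factor are my illustrative assumptions, not figures from the text):

```python
# Land-area power density of a wind farm with wake-avoiding spacing.
rated_w = 3e6            # a 3-MW machine, as in the text
rotor_d = 90.0           # m, assumed rotor diameter (illustrative)
capacity_factor = 0.25   # assumed average output / rated output
# Each turbine claims 5 rotor diameters crosswind x 10 diameters downwind:
land_per_turbine_m2 = (5 * rotor_d) * (10 * rotor_d)
density = rated_w * capacity_factor / land_per_turbine_m2
print(f"{density:.1f} W/m^2")  # ~1.9 W/m^2, inside the 1-10 W/m^2 range Smil cites
```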

Commercialization of large wind turbines has shown notable capacity advances and engendered high expectations. In 1986 California’s Altamont Pass, the first large-scale modern wind farm, whose construction began in 1981, had an average turbine capacity of 94 kW and the largest units rated 330 kW (Smith, 1987). Nearly 20 years later the world’s largest turbine rated 6 MW and typical new installations were 1 MW. This means that the modal capacities of wind turbines have been doubling every 5.5 years (they grew roughly 10-fold in two decades) and that the largest capacities have doubled every 4.4 years (they increased by a factor of 18 in two decades). Even so, these highest unit capacities are two orders of magnitude smaller than the average capacities of steam turbo-generators, the best conversion efficiencies of wind turbines have remained largely unchanged since the late 1980s (at around 35%), and neither they nor the maximum capacities will see several consecutive doublings during the next 10-20 years. The EU’s UpWind research project has been considering designs of turbines with capacities between 10 and 20 MW whose rotor diameters would be 160-252 m, the latter dimension being twice the diameter of a 5-MW machine and more than three times the wing span of the jumbo A380 jetliner (UpWind, 2009; Figure 4.4).

Hendriks (2008) argues that building such structures is technically possible, because the Eiffel tower had surpassed 300 m already in 1889 and because we routinely build supertankers and giant container vessels whose length approaches 400 m, and assemble bridges whose individual elements have a mass of more than 5,000 t. That this comparison is guilty of a category mistake (as none of those structures is surmounted by massive moving rotors) is not actually so important: What matters are the economics of such giant turbines and, as Bulder (2009) concluded, those are not at all obvious. This is mainly because the weight stresses are proportional to the turbine radius (making longer blades more susceptible to buckling) and because the turbine’s energy yield goes up with the square of its radius while the mass (i.e., the turbine’s cost) goes up with the cube of the radius.

But even if we were to see a 20-MW machine as early as 2020 this would amount to just a tripling of the maximum capacities in a decade, hardly an unprecedented achievement: For example, average capacities of new steam turbo-generators installed in U.S. thermal stations rose from 175 MW in 1960 to 575 MW in 1970, more than a threefold gain. And it is obvious that no wind turbine can be nearly 100% efficient (as natural gas furnaces or large electric motors now routinely are), as that would virtually stop the wind flow, and a truly massive deployment of such super-efficient turbines would drastically change local and regional climate by altering the normal wind patterns. The maximum share of wind’s kinetic energy that can be converted into rotary motion occurs when the ratio of the wind speed after passage through the rotor plane to the wind speed impacting the turbine is 1/3, and it amounts to 16/27, or 59%, of the wind’s total kinetic energy (Betz, 1926). Consequently, it will be impossible even to double today’s prevailing wind turbine efficiencies in the future.
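The Betz figure can be verified numerically. Writing a for the ratio of downstream to upstream wind speed, the standard actuator-disk expression for the power coefficient (this formula is the textbook derivation behind the result the text cites, not something stated in the book excerpt) is C_p(a) = 0.5 * (1 - a^2) * (1 + a):

```python
# Numerical check of the Betz limit: C_p peaks at a = 1/3 with C_p = 16/27.
def power_coefficient(a):
    # Fraction of the wind's kinetic energy extracted, actuator-disk model
    return 0.5 * (1 - a**2) * (1 + a)

# Scan the speed ratio a over [0, 1) and find the maximum
cp, a = max((power_coefficient(i / 1000), i / 1000) for i in range(1000))
print(f"max C_p = {cp:.4f} at a = {a:.3f}")  # ~0.5926 at a ~ 0.333, i.e. 16/27 at 1/3
```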

Hydropower

Storing too much water for hydro generation could weaken many environmental services provided by flowing river water (including silt and nutrient transportation, channel cutting, and oxygen supply to aquatic biota).

The total potential energy of the Earth’s runoff (nearly 370 EJ, or roughly 80% of the global commercial energy use in 2010) is just a grand sum of theoretical interest:  Most of that power can never be tapped for generating hydroelectricity because of the limited number of sites suitable for large dams, seasonal fluctuations of water flows, and the necessity to leave free-flowing sections of streams and to store water for drinking, irrigation, fisheries, flood control, and recreation uses.

As a result, the aggregate of technically exploitable capacity is only about 15% of the theoretical power of river runoff (WEC, 2007), and the capacity that could be eventually economically exploited is obviously even lower.

I have calculated the maximum conceivable share of water power during the late Roman Empire by assuming high numbers of working water wheels (about 25,000 mills), very high average power per machine (1.5 kW), and a high load factor of 50% (Smil, 2010a). These assumptions result in some 300 TJ of useful work, while the labor of some 25 million adults (at 60 W for 300 eight-hour days) and 6 million animals (at just 300 W/head for 200 eight-hour days) added up to 30 PJ a year, or at least 100 times as much useful energy per year as the work done by water wheels. Consequently, even with very liberal assumptions, water power in the late Roman Empire supplied no more than 1% of all useful energy provided by animate exertion, and the real share was most likely just a fraction of 1%.

Hydrokinetic power

  • Wind-driven ocean waves have kinetic energy of some 60 TW of which only 3 TW (5%) are dissipated along the coasts.
  • Tidal energy amounts to about 3 TW, of which only some 60 GW are dissipated in coastal waters.

Geothermal ultimate maximum globally is 600 GW

The Earth’s geothermal flux amounts to about 42 TW, but nearly 80% of that large total is through the ocean floor and all but a small fraction of it is a low-temperature diffuse heat. Available production techniques using hot steam could tap up to about 140 GW for electricity generation by the year 2050 (Bertani, 2009), and even if three times as much could be used for low-temperature heating the total would be less than 600 GW.

Better efficiencies

What has changed, particularly rapidly during the past 150 years, are the typical efficiencies of the process. In open fires less than 5% of wood’s energy ended up as useful heat that cooked the food; simple household stoves with proper chimneys (a surprisingly late innovation) raised the performance to 15-20%, while today’s most efficient household furnaces used for space heating convert 94-97% of energy in natural gas to heat.

The earliest commercial steam engines (Newcomen’s machines at the beginning of the eighteenth century) transferred less than 1% of coal’s energy into useful reciprocating motion-while the best compound steam engines of the late nineteenth century had efficiencies on the order of 20% and steam locomotives never surpassed 10%. Even today’s best-performing gasoline-fueled engines do not usually surpass 25% efficiency in routine operation.

The world’s largest marine diesel engines are now the only internal combustion machines whose efficiency can reach, and even slightly surpass, 50%.

Gasoline engines

Today’s automotive engines have power ranging from only about 50 kW for urban mini cars to about 375 kW for the Hummer; their compression ratios are typically between 9:1 and 12:1 and their mass/power ratios mostly between 0.8 and 1.2 g/W.  But even the most powerful gasoline-fueled engines, rated in excess of 500 kW, are too small to propel massive ocean-going vessels, to power the largest road trucks and off-road vehicles, or to serve as electricity generators in emergencies or isolated locations.

Diesel engines

Ships, trucks, and generators use diesel engines, which, owing to their high compression ratios, are inherently more efficient.

Household energy use

The average U.S. wood and charcoal consumption was very high: about 100 GJ/capita in 1860, compared to about 350 GJ/capita for all fossil and biomass fuel at the beginning of the twenty-first century. But as the typical 1860 combustion efficiencies were only around 10%, the useful energy reached only about 10 GJ/capita. Weighted efficiency of modern household, industrial, and transportation conversions is about 40% and hence the useful energy serving an average American is now roughly 150 GJ/year, nearly 15-fold higher than during the height of the biomass era.
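The arithmetic behind the roughly 15-fold gain is straightforward (all four input figures are from the text):

```python
# Useful energy per capita: 1860 biomass era vs. early twenty-first century.
wood_1860_gj = 100    # GJ/capita of wood and charcoal, 1860
eff_1860 = 0.10       # typical combustion efficiency, 1860
fuel_2000_gj = 350    # GJ/capita of all fossil and biomass fuels, ca. 2000
eff_2000 = 0.40       # weighted modern conversion efficiency
useful_1860 = wood_1860_gj * eff_1860  # ~10 GJ/capita
useful_2000 = fuel_2000_gj * eff_2000  # ~140 GJ/capita ("roughly 150" in the text)
print(round(useful_2000 / useful_1860, 1))  # 14.0, i.e. nearly 15-fold
```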

Households claimed a relatively small share of overall energy use during the early phases of industrialization, first only as coal (or coal briquettes) for household stoves, later also as low-energy coal (town) gas, and (starting during the 1880s) as electricity for low-power light bulbs, and soon afterwards also for numerous household appliances. Subsequently, modern energy use has seen a steady decline of industrial and agricultural consumption and increasing claims of the transportation and household sectors. For example, in 1950 industries consumed more than half of the world’s primary commercial energy, at the time of the first oil crisis (1973) their share was about one-third, and by 2010 it declined to about 25%. Major appliances (refrigerators, electric stoves, washing machines) became common in the United States after World War I, and private car ownership followed the same trend. As a result, by the 1960s households became a leading energy-using sector in all affluent countries. There are substantial differences in sectoral energy use between the industrializing low-income nations and postindustrial high-income economies. Even after excluding all transportation energy, U.S. households claimed more than 20% of the country’s primary energy supply in 2006, while in China the share was only about 11%.

Most energy needs are for low-temperature heat, dominated by space heating (up to about 25°C), hot water for bathing and clothes washing (maxima of, respectively, about 40°C and 60°C), and cooking (obviously 100°C for boiling, up to about 250°C for baking). As already noted, ubiquitous heat waste is due to the fact that most of these needs are supplied by high-temperature combustion of fossil fuels. Steam and hot water produced by high-temperature combustion also account for 30-50% of energy needs in food processing, pulp and paper, chemical and petrochemical industries. High-temperature heat dominates metallurgy, production of glass and ceramics, steam-driven generation of electricity, and operation of all internal combustion engines.

Liquid Natural Gas (LNG)

By 2008 there were 250 LNG tankers with a total capacity of 183 Mt/year and the global LNG trade carried about 25% of all internationally traded natural gas (BP, 2009). LNG was imported by 17 countries on four continents, and before the economic downturn of 2008 plans envisaged more than 300 LNG vessels by 2010 with a total capacity of about 250 Mt/year as the global LNG trade moved toward a competitive market. LNG trade has finally been elevated from a marginal endeavor to an important component of global energy supply, and this has become true in terms of total exports (approaching 30% of all natural gas sold abroad) and the number of countries involved (now more than 30 exporters and importers).

This brief recounting of LNG history is an excellent illustration of the decades-long spans that are often required to convert theoretical concepts into technical possibilities and then to adapt these technical advances and diffuse them to create new energy industries (Figure 1.4). Theoretical foundations of the liquefaction of gases were laid down more than a century before the first commercial application; the key patent that turned the idea of liquefaction into a commonly used industrial process was granted in 1895, but at that time natural gas was a marginal fuel even in the United States (in 1900 it provided about 3.5% of the country’s fossil fuel energy), and in global terms it remained one until the 1960s, when its cleanliness and flexibility began to justify the high price of its shipborne imports.

If we take the years between 1999 (when worldwide LNG exports surpassed 5% of all natural gas sales) and 2007 (when the number of countries exporting and importing LNG surpassed 30, or more than 15% of all nations) as the onset of LNG’s global importance, then it had taken about four decades to reach that point from the time of the first commercial shipment (1964), about five decades from the time that natural gas began to provide more than 10% of all fossil energies (during the early 1950s), more than a century since we acquired the technical means to liquefy large volumes of gases (by the mid-1890s), and about 150 years since the discovery of the principle of gas liquefaction. By 2007 it appeared that nothing could stop the emergence of a very substantial global LNG market. But then a sudden supply overhang that was created in 2008, due to the combination of rapid capacity increases, lower demand caused by the global financial crisis, and the retreat of U.S. imports following increased domestic output of unconventional gas, has, once again, slowed down global LNG prospects, and it may take years before the future course becomes clear. In any case, the history of LNG remains a perfect example of the complexities and vagaries inherent in major energy transitions.

Coal

There have been some indications that the world’s coal resources may be significantly less abundant than the widespread impressions would indicate (Rutledge, 2008).

The genesis of the growing British reliance on coal offers some valuable generic lessons. Thanks to Nef’s (1932) influential work a national wood crisis has been commonly seen as the key reason for the expansion of coal mining between 1550 and 1680-but other historians could not support this claim, pointing to the persistence of large wooded areas in the country, seeing such shortages as largely local and criticizing unwarranted generalization based on the worst-case urban situations (Coleman, 1977). This was undoubtedly true, but not entirely relevant, as transportation constraints would not allow the emergence of a national fuelwood market, and local and regional wood scarcities were real.

In 1900 the worldwide extraction of bituminous coals and lignites added up to about 800 Mt; a century later it was about 4.5 Gt, a roughly 5.6-fold increase in mass terms and (because of the declining energy density of extracted coal) almost exactly four-fold increase in energy terms.

Meanwhile another major change took place, as the USSR, the world’s largest oil producer since 1975, dissolved, and the aggregate oil extraction of its former states declined by nearly a third between 1991 and 1996, making Saudi Arabia a new leader starting in 1993.

Natural gas is actually a mixture of light combustible hydrocarbons, with methane dominant but with up to a fifth of the volume made up of ethane, propane, and butane;

And, not to forget the recently fashionable talk of carbon sequestration and storage: retaining the industry’s coal base while hiding its CO2 emissions underground would require putting in place a massive new industry whose mass-handling capacity would have to rival that of the world’s oil industry even if the controls were limited to a fraction of the generated gas.

Coal’s declining relative importance was accompanied by a steady increase in its absolute production, from about 700 Mt of bituminous coals (including a small share of anthracite) and 70 Mt of lignites in 1900 to more than 3.6 Gt of bituminous coals and nearly 900 Mt of lignites in the year 2000, a nearly 6-fold increase in mass terms and a more than 4-fold multiple in energy terms. Coal thus ended up indisputably as the century’s most important fuel. Biofuels still supplied about 20% of the world’s fuel energy during the twentieth century, while coal accounted for about 37%, oil for 27%, and natural gas for about 15%. Looking just at the shares of the three fossil fuels, coal supplied about 43%, crude oil 34%, and natural gas 20%. This indubitable conclusion runs, once again, against a commonly held, but mistaken, belief that the twentieth century was the oil era that followed the coal era of the nineteenth century.

Coal, replacing biofuels, reached the 5% mark around 1840; it captured 10% of the global market by 1855, 15% by 1865, 20% by 1870, 25% by 1875, 33% by 1885, 40% by 1895, and 50% by 1900. The sequence of elapsed years for these milestones, counted from 1840, was thus 15-25-30-35-45-55-60.
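The elapsed-year sequence can be verified directly from the milestone dates quoted above; a minimal check:

```python
# Coal's global market-share milestones (dates as given in the text),
# with elapsed years counted from the 5% mark in 1840.
milestones = {5: 1840, 10: 1855, 15: 1865, 20: 1870,
              25: 1875, 33: 1885, 40: 1895, 50: 1900}
start = milestones[5]
elapsed = [year - start for share, year in milestones.items() if share > 5]
print(elapsed)  # [15, 25, 30, 35, 45, 55, 60]
```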

With China’s coal shares at nearly 73% in 1980 and at 70% in 2008 it is obvious that during the three decades of rapid modernization there was only the tardiest of transitions from solid fuel to hydrocarbons. China’s extraordinary dependence on coal means that the country now accounts for more than 40% of the world extraction, and that the mass it produces annually is larger than the aggregate output of the United States, India, Australia, Russia, Indonesia, and Germany, the world’s second- to seventh-largest coal producers. No other major economy, in fact no other country, is as dependent on coal as China: The fuel has also recently accounted for 95% of all fossil fuels used to produce electricity, and as thermal generation supplies nearly 80% of China’s total generation, coal is the source of more than 70% of the country’s electric power. China was self-sufficient…

Nuclear power

Besides France, the countries with the highest nuclear electricity share (setting aside Lithuania, which inherited a large Soviet nuclear plant at Ignalina that gave it a 70% nuclear share) are Belgium and the Slovak Republic (about 55%), Sweden (about 45%), and Switzerland (about 40%); Japan’s share was 29%, the United States’ 19%, Russia’s 16%, India’s 3%, and China’s 2% (IAEA, 2009).

Saudi Arabian oil and gas

The high mean of the Saudi per capita energy consumption is misleading because a large part of the overall energy demand is claimed by the oil and gas industry itself and because it also includes substantial amounts of bunker fuel for oil tankers exporting the Saudi oil and refined products. Average energy use by households remains considerably lower than in the richest EU countries.

Even more importantly, Saudi Arabia’s high energy consumption has not yet translated into a commensurately high quality of life: Infant mortality remains relatively high and the status of women is notoriously low. As a result, the country has one of the world’s largest differences in the ranking between per capita GDP and the Human Development Index (UNDP, 2009). In this it is a typical Muslim society: In recent years 20 out of 24 Muslim countries in North Africa and the Middle East ranked higher in their GDP per capita than in their HDI-and in 2007/2008 the index difference for Saudi Arabia was -19 while for Kuwait and Bahrain it was -8 and for Iran it was -23.

Renewable Energy

There are nine major kinds of renewable energies: solar radiation; its six transformations as running water (hydro energy), wind, wind-generated ocean waves, ocean currents, thermal differences between the ocean’s surface and deep waters, and photosynthesis (primary production); geothermal energy and tidal energy complete the list.

As with fossil fuels, it is imperative to distinguish between renewable resources (aggregates of available fluxes) and reserves, their smaller (or very small) portions that are economically recoverable with existing extraction or conversion techniques. This key distinction applies as much to wind or waste cellulosic biomass as it does to crude oil or uranium, and that is why the often-cited enormous flows of renewable resources give no obvious indication as to the shares that can be realistically exploited.

Reviewing the potentially usable maxima of renewable energy flows shows a sobering reality. First, direct solar radiation is the only form of renewable energy whose total terrestrial flux far surpasses not only today’s demand for fossil fuels but also any level of global energy demand realistically imaginable during the twenty-first century (and far beyond). Second, only an extraordinarily high rate of wind energy capture (that may be environmentally undesirable and technically problematic) could provide a significant share of overall future energy demand. Third, for all other renewable energies maxima available for commercial harnessing fall far short of today’s fossil fuel flux, one order of magnitude in the case of hydro energy, biomass energy, ocean waves, and geothermal energy, two orders of magnitude for tides, and four orders of magnitude for ocean currents and ocean thermal differences.

Many regions (including the Mediterranean, Eastern Europe, large parts of Russia, Central Asia, Latin America, and Central Africa) have relatively low wind-generation potential (Archer & Jacobson, 2005); high geothermal gradients are concentrated along the ridges of major tectonic plates, above all along the Pacific Rim; and tidal power is dissipated mainly along straight coasts (unsuitable for tidal dams) and in regions with minor (<1 m) tidal ranges (Smil, 2008).

As already explained (in chapter 1), even ordinary bituminous coal contains 30-50% more energy than air-dry wood, while the best hard coals are nearly twice as energy-dense as wood and liquid fuels refined from crude oil have nearly three times higher energy density than air-dry phytomass. A biomass-burning power plant would need a mass of fuel 30-50% larger than a coal-fired station of the same capacity. Similarly, ethanol fermented from crop carbohydrates has an energy density of 24 MJ/L, 30% less than gasoline (and biodiesel has an energy density about 12% lower than diesel fuel).
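The comparison can be made concrete with typical handbook energy densities (the wood and coal values below are assumed round figures, not from the book; the 24 MJ/L ethanol figure is from the text):

```python
# Rough energy-density comparison (MJ/kg for solids, MJ/L for liquids);
# wood and coal values are assumed typical handbook figures.
air_dry_wood = 15.0     # MJ/kg (assumed)
bituminous_coal = 21.0  # MJ/kg, ~40% above wood, inside Smil's 30-50% range
gasoline = 34.2         # MJ/L (assumed typical)
ethanol = 24.0          # MJ/L, figure given in the text
print(f"coal vs wood: +{bituminous_coal / air_dry_wood - 1:.0%}")
print(f"ethanol vs gasoline: {ethanol / gasoline - 1:.0%}")  # ~-30%
```

The ~30% shortfall of ethanol against gasoline falls straight out of the two volumetric densities.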

But the lower energy density of non-fossil fuels is a relatively small inconvenience compared to the inherently lower power densities of converting renewable energy flows into mass-produced commercial fuels or into electricity at GW scales. Power density is the rate of flow of energy per unit of land area. The measure is applicable to natural phenomena as well as to anthropogenic processes, and it can be used in revealing ways to compare the spatial requirements of energy harnessing (extraction, capture, conversion) with the levels of energy consumption. In order to maximize the measure’s utility and to make possible comparisons of diverse sources, conversions, and uses, my numerator is always in watts and the denominator is always a square meter of the Earth’s horizontal area (W/m²). Others have used power density to express the rate of energy flow across a vertical working surface of a converter, most often across the plane of a wind turbine’s rotation (the circle swept by the blades).

Power densities of hydro generation are thus broadly comparable to those of wind-driven generation, both being mostly of the order of 10⁰ W/m², with exceptional ratings in the lower range of 10¹ W/m².

Hydroelectricity will make important new contributions to the supply of renewable energy only in the modernizing countries of Asia, Africa, and Latin America. Because of their often relatively large reservoirs, smaller stations have power densities of less than 1 W/m²; for stations with installed capacities of 0.5-1 GW the densities go up to about 1.5 W/m²; the average power density for the world’s largest dams (>1 GW) is over 3 W/m²; the largest U.S. hydro station (Grand Coulee on the Columbia) rates nearly 20 W/m²; and the world’s largest project (Three Gorges station on the Chang Jiang) comes close to 30 W/m² (Smil, 2008).

Typical power densities of phytomass fuels (or fuels derived by conversion of phytomass, including charcoal or ethanol) are even lower. Fast-growing willows, poplars, eucalypti, leucaenas, or pines grown in intensively managed (fertilized and if need be irrigated) plantations yield as little as 0.1 W/m2 in arid and northern climates but up to 1 W/m2 in the best temperate stands, with typical good harvests (about 10 t/ha) prorating to around 0.5 W/m2 (Figure 4.1). Crops that are best at converting solar radiation into new biomass (C4 plants) can have, when grown under optimum natural conditions and supplied by adequate water and nutrients, very high yields: National averages are now above 9 t/ha for U.S. corn and nearly 77 t/ha for Brazilian sugar cane (FAO, 2009). But even when converted with high fermentation efficiency, ethanol production from Iowa corn yields only about 0.25 W/m2 and from Brazilian sugar cane about 0.45 W/m2 (Bresnan & Contini, 2007).
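The proration from annual harvest to power density works like this (the heating value of ~18 GJ/t of dry wood is an assumption, not a figure from the text):

```python
# Prorating a phytomass harvest to power density: a "good harvest" of
# ~10 t/ha of dry wood (assumed heating value ~18 GJ/t) over one year.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
yield_t_per_ha = 10
energy_GJ = yield_t_per_ha * 18       # ~180 GJ per hectare per year
area_m2 = 10_000                      # one hectare
power_density = energy_GJ * 1e9 / SECONDS_PER_YEAR / area_m2
print(f"~{power_density:.2f} W/m^2")  # close to the text's ~0.5 W/m^2
```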

The direct combustion of phytomass would yield the highest amount of useful energy.

Conversion of phytomass to electricity at large stations located near major plantations or the production of liquid or gaseous fuel: Such conversions would obviously lower the overall power density of the phytomass-based energy system (mostly to less than 0.3 W/m²), require even larger areas of woody plantations, and necessitate major extensions of high-voltage transmission lines, and hence further enlarge overall land claims. Moreover, as the greatest opportunities for large-scale cultivation of trees for energy are available only in parts of Latin America, Africa, and Asia, any massive phytomass cultivation would also require voluminous (and energy-intensive) long-distance exports to major consuming regions.

And even if future bioengineered trees could be grown with admirably higher power densities (say, 2 W/m2), their cultivation would run into obvious nutrient constraints. Non-leguminous trees producing dry phytomass at 15 t/ha would require annual nitrogen inputs on the order of 100 kg/ha during 10 years of their maturation. Extending such plantations to slightly more than half of today’s global cropland would require as much nitrogen as is now applied annually to all food and feed crops-but the wood harvest would supply only about half of the energy that we now extract in fossil fuels. Other major environmental concerns include accelerated soil erosion (particularly before the canopies of many row plantations of fast-growing trees would close) and availability of adequate water supplies (Berndes, 2002).
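The nitrogen arithmetic can be sketched with assumed round figures (global cropland of ~1.5 Gha and current fertilizer use of ~100 Mt N/yr are my assumptions, not the book’s):

```python
# Rough check of the nitrogen constraint.
# Assumed: global cropland ~1.5 Gha; current fertilizer use ~100 Mt N/yr.
plantation_Mha = 0.55 * 1500          # "slightly more than half" of cropland
n_rate_kg_per_ha = 100                # application rate given in the text
n_demand_Mt = plantation_Mha * 1e6 * n_rate_kg_per_ha / 1e9
print(f"~{n_demand_Mt:.0f} Mt N/yr")  # comparable to today's ~100 Mt N/yr
```

Under these assumptions the plantations would indeed claim roughly as much nitrogen as all food and feed crops combined.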

Average insolation densities of 10² W/m² mean that even with today’s relatively low-efficiency PV conversions (the best rates in everyday operation are still below 20%) we can produce electricity with power densities of around 30 W/m², and if today’s best experimental designs (multi-junction concentrators with efficiencies of about 40%) become commercial realities we could see PV generation power densities averaging more than 60 W/m² and surpassing 400 W/m² during the peak insolation hours. As impressive as that would be, fossil fuels are extracted in mines and hydrocarbon fields with power densities of 10³-10⁴ W/m² (i.e., 1-10 kW/m²), and the rates for thermal electricity generation are similar (see Figure 4.1). Even after including all other transportation, processing, conversion, transmission, and distribution needs, power densities for the typical provision of coals, hydrocarbons, and thermal electricity generated by their combustion are lowered to no less than 10² W/m², most commonly to the range of 250-500 W/m². These typical power densities of fossil fuel energy systems are two to three orders of magnitude higher than the power densities of wind- or water-driven electricity generation and biomass cultivation and conversion, and an order of magnitude higher than today’s best photovoltaic conversions.
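The “orders of magnitude” claims follow directly from the quoted densities; a quick check with representative mid-range values:

```python
import math

# Orders-of-magnitude gap between typical fossil-system power densities
# and renewable conversions (representative values from the text).
fossil = 350           # W/m^2, middle of the 250-500 range
pv_today = 30          # W/m^2, today's PV
wind_or_biomass = 0.5  # W/m^2, typical of the 10^-1 to 10^0 range
for name, pd in [("PV", pv_today), ("wind/biomass", wind_or_biomass)]:
    gap = math.log10(fossil / pd)
    print(f"{name}: ~{gap:.1f} orders of magnitude below fossil systems")
```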

I have calculated that in the early years of the twenty-first century no more than 30,000 km² were taken up by the extraction, processing, and transportation of fossil fuels and by generation and transmission of thermal electricity (Smil, 2008). The spatial claim of the world’s fossil fuel infrastructure is thus equal to the area of Belgium (or, even if the actual figure is up to 40% larger, to the area of Denmark). But if renewable energy sources were to satisfy significant shares (15-30%) of national demand for fuel and electricity, then their low power densities would translate into very large space requirements—and they would add up to unrealistically large land claims if they were to supply major shares of the global energy need.

At the same time, energy is consumed in modern urban and industrial areas at increasingly higher power densities, ranging from less than 10 W/m2 in sprawling cities in low-income countries (including their transportation networks) to 50-150 W/m2 in densely packed high-income metropolitan areas and to more than 500 W/m2 in downtowns of large northern cities during winter (Smil, 2008). Industrial facilities, above all steel mills and refineries, have power densities in excess of 500 W/m2 even prorated over their entire fence area-and high-rise buildings that will house an increasing share of humanity in the twenty-first century megacities go easily above 1,000 W/m2. This mismatch between the inherently low power densities of renewable energy flows and relatively high power densities of modern final energy uses (Figure 4.2) means that a solar-based system will require a profound spatial restructuring with major environmental and socioeconomic consequences.

In order to energize the existing residential, industrial, and transportation infrastructures inherited from the fossil-fuel era, a solar-based society would have to concentrate diffuse flows to bridge power density gaps of two to three orders of magnitude. Mass adoption of renewable energies would thus necessitate a fundamental reshaping of modern energy infrastructures, from a system dominated by global diffusion of concentrated energies from a relatively limited number of nodes extracting fuels with very high power densities to a system that would collect fuels of low energy density at low power densities over extensive areas and concentrate them in the increasingly more populous consumption centers.

Yang (2010) uses the history of solar hot water systems to argue that even at that point the diffusion of decentralized rooftop PV installations may be relatively slow. Solar hot water systems have been cost-effective (saving electricity at a cost well below grid parity) in sunny regions for decades, and with nearly 130 GW installed worldwide they are clearly also a mature innovation, and yet less than 1% of all U.S. households have chosen to install them (Davidson, 2005).

Even the best conversions in research laboratories have required 15-20 years to double their efficiency, and another doubling for multi-junction and monocrystalline cells is highly unlikely.

Silicon analogy of Moore’s law does not apply to renewable energy

Fundamental physical and biochemical limits restrict the performance of other renewable energy conversions, be it the maximum yield of crops grown for fuel or woody biomass or the power to be harnessed from waves or tides: These limits will assert themselves after only relatively modest improvements of today’s performance and hence no strings of successive performance doublings are ahead.

Production of microprocessors is a costly activity, with the fabrication facilities costing at least $2-3 (and future ones up to $10) billion. But given the entirely automated nature of the production process (with microprocessors used to design more advanced fabrication facilities) and a massive annual output of these factories, the entire world can be served by only a small number of chip-making facilities. Intel, whose share of the global microprocessor market remains close to 80%, has only 15 operating silicon wafer fabrication facilities in nine locations around the world, and two new units under construction (Intel, 2009), and worldwide there are only about 300 plants making high-grade silicon. Such an infrastructural sparsity is the very opposite of the situation prevailing in energy production, delivery, and consumption.

Could anybody expect that the Chinese will suddenly terminate this brand-new investment and turn to costlier methods of electricity generation that remain relatively unproven and that are not readily available at GW scale? In global terms, could we expect that the world will simply walk away from fossil and nuclear energy infrastructures whose replacement cost is worth at least $15-20 trillion before these investments have been paid for and have produced rewarding returns? Negative answers to these questions are obvious. But the infrastructural argument cuts forward as well, because new large-scale infrastructures must be put in place before any new modes of electricity generation or new methods of producing and distributing biofuels can begin to make a major difference in modern high-energy economies. Given the scale of national and global energy demand (for large countries 10¹¹ W; globally nearly 15 TW in 2010, likely around 20 TW by 2025) and the cost and complexity of the requisite new infrastructures, there can be no advances in the structure and function of energy systems that are even remotely analogous to Moore’s progression of transistor packing.

After an energy crisis, government leaders vow to do something. Substitution goals are made, but not usually adhered to. “Robust optimism, naïve expectations, and a remarkable unwillingness to err on the side of caution is a common theme for most of these goals.”

There have been many assumptions in the past of a rapid and smooth transition to renewable energy, especially after the first two energy crises of 1973-4 and 1979-81.  Here are just a few failed forecasts:

  • 1977 InterTechnology Corporation said by 2000 solar energy could provide 36% of U.S. industrial process heat
  • 1980 Sorensen thought by 2005 renewable energy would provide 49% of U.S. power
  • Amory Lovins forecast over 30% renewables by 2000; in reality it was 7%, with biogas supplying less than 0.001%, wind 0.04%, solar PV less than 0.1%, and no use of solar energy for industrial heat supply

Sweden

  • 1978: Sweden planned to get half its energy by 2015 from tree plantations that would cover 6 to 7% of the country. Reedlands would be converted to pelleted phytomass.
  • 1991: Sweden dreamed again of biomass energy, this time from massive willow plantations covering 400,000 hectares by 2020, harvested 4 to 6 years after planting and every 3.5 years thereafter for 20 years, to provide district heating and CHP power generation.
  • 1996: planting ended at about 10% of the goal, and 40% of farmers stopped growing willows.
  • 2008: all burnable renewable and waste biomass (mainly wood) provided less than 2% of primary energy.

Given this history of [failed] attempts at renewables, are today’s forecasts of anticipated, planned, or mandated shares of renewable energies as unrealistic as those of three decades ago? Jefferson (2008) thinks so because “targets are usually too short term and clearly unrealistic…subsidy systems often promote renewable energy schemes that are misdirected and buoyed up by grossly exaggerated claims. One or two mature energy technologies are pushed nationally with insufficient regard for the costs, contribution to electricity generation, or transportation fuels’ needs”.

Al Gore believes the three main challenges of the economy, environment, and national security are all due to our “over-reliance on carbon-based fuels,” which could easily be fixed in 10 years by switching to solar, wind, and geothermal. He was confident this was true because, as demand for renewable energy grew, its cost would fall, and he invoked the Silicon Valley fallacy of technology doubling.

On average, 15 GW/year of generating capacity were added during the 20 years between 1987 and 2007. To make a transition to renewables, 150 GW would need to be added each year, and the longer the wait, the more would need to be added later on, perhaps 200 to 250 GW a year, or more than 20 times the record rate of 2008 (8.5 GW of added wind capacity). This “should suffice to demonstrate the impossibility of” doing so. On top of that, this “impossible feat would also require writing off in a decade the entire fossil-fueled electricity generation industry and the associated production and transportation infrastructure, an enterprise whose replacement value is at least $2 trillion”.
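Using just the figures quoted in this paragraph, the scale mismatch is easy to reproduce:

```python
# Scale of the required buildout (figures as given in the review).
historical_rate = 15     # GW/yr, average addition 1987-2007
required_rate = 150      # GW/yr for a 10-year transition
record_wind_2008 = 8.5   # GW of wind capacity added in 2008
print(f"required rate is {required_rate / historical_rate:.0f}x the historical average")
print(f"and {required_rate / record_wind_2008:.0f}x the record 2008 wind addition")
```

Even the initial 150 GW/yr target is an order of magnitude above the historical build rate; deferring to 200-250 GW/yr widens the gap further.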

The wind would have to come from the Great Plains and the solar power from the Southwest, yet no major HV transmission lines link these regions to East and West coast load centers. So before you could build millions of wind turbines and solar PV panels, you’d need to rewire the United States with high-capacity, long-distance transmission links: at least another 65,000 km (40,000 miles) on top of the existing 265,000 km (165,000 miles) of HV lines, at a cost of at least $2 million/km.
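The implied price tag of the new lines alone follows from the quoted figures:

```python
# Cost of the new long-distance HV links implied by the review's figures.
new_km = 65_000
cost_per_km = 2_000_000  # USD, "at least $2 million/km"
total = new_km * cost_per_km
print(f"at least ${total / 1e9:.0f} billion for new HV transmission alone")
```

And that $130 billion floor covers only the wires, before a single turbine or panel is installed.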

“Installing in 10 years wind- and solar-generating capacity more than twice as large as that of all fossil-fueled stations operating today, while concurrently incurring write-off and building costs on the order of $4-5 trillion and reducing regulatory approval of generation and transmission megaprojects from many years to mere months, would be neither achievable nor affordable at the best of times: At a time when the nation has been adding to its massive national debt at a rate approaching $2 trillion a year, it is nothing but a grand delusion.”

Smil points out that promoters of grand plans greatly exaggerate the capacity factors of wind and solar. Google’s plan, Clean Energy 2030, assumed wind and solar capacity factors of 35% each. The reality in the European Union between 2003 and 2007 was an average load factor for wind power of just 20.8%. Even Arizona’s average solar PV capacity factor was less than 25%.
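The consequence of an optimistic capacity factor is easy to quantify for a nominal 1 GW of wind capacity (8,766 hours in an average year):

```python
# Annual energy from 1 GW of capacity at assumed vs. observed capacity factors.
HOURS_PER_YEAR = 8766
assumed_cf = 0.35    # Clean Energy 2030 assumption, as quoted in the review
observed_cf = 0.208  # EU wind average, 2003-2007
assumed_GWh = 1 * assumed_cf * HOURS_PER_YEAR
observed_GWh = 1 * observed_cf * HOURS_PER_YEAR
print(f"assumed: {assumed_GWh:.0f} GWh/yr, observed: {observed_GWh:.0f} GWh/yr")
print(f"actual output ~{observed_cf / assumed_cf:.0%} of the planned figure")
```

A plan built on a 35% capacity factor thus delivers barely three-fifths of the promised energy at real-world load factors.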

There’s no way even cheaper-than-oil electricity generation could displace fossil fuels in less sunny climates without visionary mega-transmission lines from the Algerian Sahara to Europe or from Arizona to the Atlantic coast.

It could take decades of cumulative experience to understand the risks and benefits of large-scale renewable systems and quantify the probability of catastrophic failures and the true lifetime costs.  We need decades of operating experience in a wide range of conditions.

As far as ethanol and biodiesel go, production has depended on very large and very questionable subsidies (Steenblik 2007). Cellulosic fuels have yet to reach large-scale commercial production (and still hadn’t as of 2016). Therefore “they should not be seen as imminent and reliable providers of alternative fuels”.

One of the biggest problems renewable energy enthusiasts fail to recognize is the challenge of converting the century-old existing system, with power produced centrally from extremely high power density fuels, into one delivering very low power density flows for use in high power density urban areas. Decentralized power is fine for a farm or small town, but impossible for large cities, which already house more than half of humanity, let alone megacities like Tokyo.

Renewable enthusiasts especially don’t understand the challenge of replacing fossil fuels required as key industrial feedstocks. Coke made from coal has unique properties that make it the best way to smelt iron from ore; charcoal made from wood is too fragile to use in the enormous blast furnaces we have today. If you tried to use wood charcoal to match today’s coke-fired pig iron smelting of 900 Mt/year, you’d need about 3.5 Gt of dry wood from 350 Mha of plantations, an area equal to two-thirds of Brazil’s forests. Nor do we have any plant-based substitutes for the hydrocarbon feedstocks used to make plastics or to synthesize ammonia (production of fertilizer ammonia requires over 100 Gm³ a year).
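The plantation arithmetic is consistent with the ~0.5 W/m² plantation yields discussed earlier (the 10 t/ha harvest below is my assumption, chosen to match that discussion):

```python
# Implied plantation area for charcoal-based iron smelting
# (figures as given: ~3.5 Gt of dry wood to match 900 Mt/yr of pig iron).
wood_Gt = 3.5
yield_t_per_ha = 10  # assumed good plantation harvest (see the power-density
                     # discussion of phytomass above)
area_Mha = wood_Gt * 1e9 / yield_t_per_ha / 1e6
print(f"~{area_Mha:.0f} Mha of plantations")  # matches the text's ~350 Mha
```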

Monetary cost.  All claims of price parity with oil and other fossil fuels depend on many assumptions whose true details are often impossible to ascertain, on uncertain choices of amortization periods and discount rates, and all of them are contaminated by past, present, and expected tax breaks, government subsidies, and simplistic, mechanistic assumptions about the future decline of unit costs. One might think that repeated cost overruns and chronically unmet forecasts of capital or operating costs should have had some effect, but they have done little to stop the recitals of new dubious numbers.

The fact that innovations require government support raises questions about the continuity of policies under different governments, or the continuation of expensive projects when the economy is bad.

Given how long past transitions took, a transition from fossil fuels will surely take generations. And since the inertia of existing massive and expensive energy infrastructures and the transportation system can’t be overcome overnight, a large component of supply will surely depend on fossil fuels for many decades. Indeed, this transition will likely take much longer than past ones: renewables require a much larger physical area than fossil fuels and produce power at far lower densities, while past transitions, which added increasingly dense, higher-power coal and oil to the energy mix, still took decades.

The list of seriously espoused energy “solutions” has run from nuclear fusion to an irrepressible (and always commencing in a decade or so) hydrogen economy, and its prominent entries have included everything from liquid metal fast breeder reactors to squeezing 5% of oil from the Rocky Mountain shales. And now the renewable list consists of “solutions” such as enormous numbers of bobbing wave converters, flexible PV films surrounding homes, enormous solar panels in orbit, algae disgorging high-octane gasoline, and harnessing jet stream winds with kites 12 km overhead.

“Ours is an overwhelmingly fossil-fueled society, our way of life has been largely created by the combustion of photosynthetically converted and fossilized sunlight—and there can be no doubt that the transition to fossil fuels…led to a world where more people enjoy a higher quality of life than at any time in previous history. This grand solar subsidy, this still-intensifying depletion of an energy stock whose beginnings go back hundreds of millions of years, cannot last.”

 
